- Palantir is developing big data analytics software whose main purpose is to assist human analysts in making sense of very large datasets.
> computers are not “creative,” do not “learn” and cannot “predict.”

This alone sounds like a very bold claim. I'm no expert in either machine learning or psychology/neurobiology, but it sounds to me like a problem of knowledge rather than of ability. We didn't use to understand how we think; then Freud and co. came along and expanded our understanding of the mind's inner workings - not completely, but it was a huge step towards self-knowledge. Once we figure out exactly how we think - which, as far as I'm aware, might just as plausibly happen tomorrow or decades from now - we can then implement the mechanism, or even improve upon it, in an artificial thinking creation, whatever form it takes.

Isn't that how prediction works? You take input, compare it to previous results and output a prognosis, don't you? I do see what the author might mean, though: that a computer can't "imagine" a new possible outcome given input it hasn't met before. Can it not? Such a comparison of inputs would require deep analysis, but I don't see that as a problem for a sufficiently powerful AI. Once it figures out exactly what factors made up a situation (responding to the new inputs as well, and thus gaining previously invisible perspectives on a given factor), it can tweak its picture of the situation to imagine how things might have turned out. Or am I making this too fantastical?

"...at the moment." This, from a person telling us that computers can't imagine the future. I find it mildly ironic. The whole piece is fixated on the present: "they can't do this, they aren't capable of that." Sure - they aren't, currently. What's to stop them from evolving - or, more precisely, what's to stop their creators from doing better? That is what humanity has been doing all along, and I don't see why it would stop here, when it can clearly go a lot further than even the Go championship.
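The "take input, compare it to previous results and output a prognosis" notion of prediction can be sketched as a nearest-neighbour lookup. This is only an illustrative toy - the function and the data are invented for this sketch, not taken from any system discussed in the thread:

```python
def predict(history, new_input):
    """Return the outcome of the past case whose input is closest
    to the new input - inductive prediction at its most basic."""
    closest_input, outcome = min(history, key=lambda pair: abs(pair[0] - new_input))
    return outcome

# Invented past experience: hours of practice -> test score.
history = [(1, 50), (3, 65), (5, 80), (8, 95)]
print(predict(history, 6))  # closest past case is (5, 80), so it outputs 80
```

Note that a lookup like this can only interpolate from what it has already seen - exactly the limitation the article's author leans on when saying such systems cannot "imagine" an outcome with no precedent in their history.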
> Computers can only be tasked with making inductive predictions based on past experiences.
The author of this article stands firmly within this camp, arguing that human capabilities far transcend anything computers can achieve.
> Once we figure out exactly how we think - which, as far as I'm aware, might just as plausibly happen tomorrow or decades from now - we can then implement the mechanism, or even improve upon it, in an artificial thinking creation, whatever form it takes.

This is the part that has me on the fence. Humans haven't figured out how we think yet, and in order to program a machine to do it, there has to be a much better understanding of how humans do it. A lot of prediction right now is intuitive, which just means that the people doing it don't themselves know how they did it. It's a combination of looking at various inputs from the environment, but most people can't describe exactly what those inputs are. For instance, people can read body language, and that can be taught. But sometimes when I don't trust someone, it's because of a look in their eye. I don't exactly know what that means, because I'm sensing various inputs I can't fully describe - a gut feeling. Could that feeling be decomposed into precise body movements and then captured in an algorithm? Maybe. But I think it's a ways off.

People also behave irrationally. That's hard to predict for humans, much less machines. If humans could predict their own behavior with greater accuracy, then it could be taught to machines, but I think it's a ways off before humans have a grasp on their own behavior. It's difficult to teach and program something that is not understood.
> A lot of prediction right now is intuitive.

You're saying "It's difficult, and it's going to take a while"; I'm saying "It's not impossible." "Intuitive" just means we don't know the mechanisms behind those decisions yet. Just because you don't understand something doesn't mean there's no reasoning behind it - a trap many seem to fall into unintentionally. While it may take time, effort and a lot of data, I'm sure we'll come to understand it someday, just as we came to understand that there is something beneath our conscious reasoning.

> People will behave irrationally. That's hard to predict for humans, much less machines.

That is you thinking like a human. Once we figure out what makes us act the way we do - which is, as I have already asserted, not as unreasonable as we tend to believe - a computer (provided it has better computational capabilities than a human being) will have much less trouble crunching the details. A computer, unless programmed to, lacks a lot of what takes up our brain's clock time: movement, breathing, computing hunger, processing input from multiple sources in completely different classes of data... Its sole dedication is to think and output data, which, it seems to me, will only make it more capable of dissecting humans' underlying motives, not less.
> You're saying "It's difficult, and it's going to take a while", I'm saying "It's not impossible".

Right. It's not that we're disagreeing; it's that everything here is relative. The category of "it's not impossible" also contains things like God appearing in the sky (especially if you don't believe in God), animals talking like humans, or humans flying on their own - things whose mechanisms, if any, are unknown.

Humans have lacked the ability to learn to live peaceably with each other since the beginning of the race. Considering that the lessons in the Bible apply today as much as they did when it was written, there hasn't been much learning about how to change people's greed, malice and jealousy. Because of that, among other things, I find it more likely that humans will program robots to destroy each other and mankind before they're able to program them to be creative enough to mimic human predictive ability. I think it will take less ability to program robots to be destructive than to program them to be intuitive.
> Humans have lacked the ability to learn to live peaceably with each other since the beginning of the race.

How does humanity's restlessness equate to less ability to develop an intuition engine?

> I think it will take less ability to program robots to be destructive than to program them to be intuitive.

Oh, they already are destructive. You didn't notice the advent of armed UAVs - the ones the media calls "killer drones"? How about bomb disposal robots? In their own way they are destructive, even if they create more prosperity by being destroyed in place of a human being (or several, if we also count potential bystanders an explosion might have claimed). Following that logic alone, we're already on our way to developing an ASI.

> Within the category of "it's not impossible" are things like God appearing in the sky

Invented beings don't appear in the sky.
> How does humanity's restlessness equate to less ability to develop an intuition engine?

I wouldn't call killing other people in war mere restlessness. I used it as an example of humanity's inability to understand itself well enough to work for everyone's common good - an inability to understand our own motivations that shows up often enough that we continually create conflict. I suppose that could turn into a debate about whether war is detrimental to humans or not. Another example would be people who are self-destructive. Many people are self-destructive in some way, doing things against their own self-interest, like becoming addicts or acting against their own stated wishes. I think the neuroscience of motivation is in its infancy and has a long way to go before a lot of this can be understood, and until it is better understood, I don't think humans can program robots to duplicate it very well.

> Following that logic alone, we're already on our way to developing an ASI.

What does ASI stand for in this context? I agree with you that there are people trying to develop AI (artificial intelligence). What I don't agree with is that people are able to develop AI that is intuitive enough to predict human behavior accurately.

> Invented beings don't appear in the sky.

How can you know that?

> Which just means we don't know the mechanisms behind those decisions yet. Just because you don't understand it doesn't mean there's no reasoning behind it, which is a trap many seem to fall in unintentionally. While it may take time, effort and a lot of data, I'm sure we'll get to understanding it someday, just like we understood that there is something beyond our reasoning.

As you said: that could apply to God as well.
> What does ASI stand for in this context?

I must admit to not having used the appropriate abbreviation. ASI stands for "artificial superintelligence", while I meant the "simpler" version, AGI - "artificial general intelligence" - which is supposed to be on par with human intelligence.

> What I don't agree with is that people are able to develop AI that is intuitive enough to predict human behavior accurately.

Alright. Why aren't they?
It's possible (maybe), but since no one has done it, or even come close, it's pretty unlikely. If we could make machines that think like we do, we'd sure as hell do it, because that would open up frontiers like you can't imagine. Free anti-gravity or eternal life are kind of similar: no reason they couldn't be discovered (maybe), but it's looking pretty unlikely, since no one's done it.
I fail to grasp this line of thinking: "because it hasn't been done yet, it will hardly be done next." It is as if a nineteenth-century Londoner told me, "Well, maybe we can get to the Moon, but since nobody's done it, it's pretty unlikely." And yet, a century or so later... It hasn't been done yet because we don't have the capability, much like nineteenth-century Britain didn't have the capability to launch a rocket into space. Time goes on, opportunities arise, and so far humanity has been pretty good at seizing them. We now have almost-instantaneous information exchange between any two parts of the world, a food production system capable of feeding billions, and we can travel at astonishing speeds all across the globe and build both deep underground and high above it. Keep your mind open about what hasn't been done yet.
We're in agreement - you saw how I said "It's possible (maybe)", right? Someone 100 or 200 years ago would have been 100% CORRECT to say "it's pretty unlikely" that they could go to the moon! I think it's also quite possible that we WILL develop a way to have immortality - and I think that possibility is on a par with creating human-level intelligence artificially. For either, there's no clear path for how to get there, so barring some unforeseen breakthrough - it's gonna be a while, at best. I try not to engage in wishful thinking.
> Prove to me with solid facts that humans will grasp their own behavior.

It's a prediction. There are no solid facts to prove what will happen in the future. It's the nature of human existence that we can't know the future with absolute certainty, to the level of solid facts; because of this, humans behave and act in the face of incomplete knowledge all the time. My prediction is based on my view of the trajectory of neuroscience and the level of self-knowledge that humans have gained over their existence. It's also based on my personal experience of seeing people struggle to understand their own behavior and fail to act in accordance with their own stated wishes. I'm sure there are many more experiences and pieces of knowledge that lead me to this conclusion, but asking me to line them all up would be like asking anyone to fully explain their belief system to someone else.

You've proven my point to a great degree, actually. Robots need solid facts in order to decide their actions; humans base their actions on beliefs, incomplete information and intuition. I suppose robots could act on incomplete information and random facts, but then their actions wouldn't add anything more than randomness, which would make them unnecessary. Adding more randomness to data mining doesn't improve the process in any way.
Let me get this straight: you're disagreeing with me but aren't willing to see my point. I suppose that's fair, given that I haven't been willing to see yours either. This conversation is counterproductive. We're arguing over something neither of us has a good grasp of; it's satisfying, but it has no purpose other than to scratch our egos a bit and feel clever. This will get us nowhere. If we're both here in a few years, let's talk about it again and see whether our views have changed.
> Once we figure out exactly how we think - which, as far as I'm aware, might just as plausibly happen tomorrow or decades from now - we can then implement the mechanism, or even improve upon it, in an artificial thinking creation, whatever form it takes.

This is essentially the issue, though I believe it goes a step further. It is not just about how we as humans think, but also about how we are able to be conscious of and self-evaluative about that thinking, and in so doing give our actions meaning. The gist is captured by this quote from Julian Pepperell: "If machines are to be anything more than depositories of mechanistic formulae, or humans by proxy, they will need to produce something original and valuable, and know why it is."

Can A.I. look as though it's being creative, learning, or otherwise thinking? Sure. Can it, to some degree, replicate the ways humans achieve these functions? Absolutely. Where it all breaks down is that a computer, as a separate entity, has absolutely no conscious idea why it's doing what it's doing. There is no self-driven intention, motive, reason, or meaning behind any of its actions; it is blindly carrying out instructions defined by a human. In all current-day systems, the human mind has played a crucial part at critical stages of thought. Whether that's curating content, making value judgements, or setting criteria, the part that the teams behind these systems play is still too great for the system to be classed as a completely separate and autonomous entity carrying out these functions of thought. We can reduce human behaviour to something a computer can carry out, but that doesn't make up for its lack of "attitudes, perspectives, moral judgements, imagination and intuition."

On the other hand, some argue that trying to make computers work towards the human method of thinking - whatever that may be - is half the problem. The brain and mechanistic computers work in vastly different ways: a computer that performs arithmetic doesn't work the way a human brain does when it adds two numbers together; rather, it conforms to an abstract definition of mathematics. As a result, they argue, a computer need not directly emulate the brain but only successfully satisfy an abstract definition. Coming up with these abstract definitions can be equally difficult, though; it's probable (though not necessary) that figuring out how we think and creating the abstract definitions of that process go hand in hand.

There are some systems pushing the envelope. One example is 'Eurisko', a sequel to a mathematical discovery system called 'Automated Mathematician'. Eurisko is heuristically based, i.e. it runs on imprecise, rule-of-thumb mental shortcuts rather than specific rules. Crucially, it also has heuristics that allow it to change its own heuristics: if a certain rule has produced no results, or has been useful only a minimal fraction of the time, it will be used less often in the future or restricted to specific circumstances. There are also genetic algorithms, which can randomly mutate a rule, or combine two different rules, and then judge the relative success of the output.

These approaches take a tiny step towards the hypothetical system the aforementioned Julian Pepperell imagines when considering what would truly allow such functions in a machine. Such a computer would need "sufficient complexity to produce emergent global behaviour, [have] access to rich data, and [have] the capacity for adaptation, learning and mutation." He proposes a system made of many millions of individual processors, each performing in a somewhat predictable fashion but all operating in reference to one another. The computer would also have to be susceptible to both regular and irregular fluctuations of its values, rules and evaluative criteria, as influenced by environmental stimuli. Furthermore, such a system would have to find meaning in unstable as well as stable sources, and so be able to 'think' in a discontinuous way and relate juxtaposed concepts. A system like this, working at near the speed of light, would perhaps be capable of producing the global, emergent, and organically unpredictable behaviour required.

Of course, that is all hypothetical and easier said than done, but interesting nonetheless. Still, it comes back to what you said: it's not that it's categorically impossible, it just hasn't been figured out yet. We don't have the knowledge at the moment, but we may well someday.
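The self-adjusting-heuristics idea attributed to Eurisko - rules that get used less often after repeatedly failing to pay off - can be sketched in a few lines. Everything here (class names, rule names, the weight factors) is invented for illustration and is not Eurisko's actual mechanism:

```python
import random

class Heuristic:
    """A rule-of-thumb whose weight tracks how useful it has been."""
    def __init__(self, name):
        self.name = name
        self.weight = 1.0

    def record_outcome(self, useful):
        # Useful applications raise the weight; failures lower it,
        # so a rarely successful rule is tried less often over time.
        self.weight *= 1.25 if useful else 0.8

def choose(heuristics, rng):
    # Pick a heuristic with probability proportional to its weight.
    return rng.choices(heuristics, weights=[h.weight for h in heuristics], k=1)[0]

rules = [Heuristic("try-symmetry"), Heuristic("try-small-cases")]
rules[1].record_outcome(False)  # failed twice, so its weight
rules[1].record_outcome(False)  # drops from 1.0 to about 0.64
print(choose(rules, random.Random(0)).name)
```

The genetic-algorithm variant mentioned above would add one more step: mutate a rule or recombine two rules, then score the offspring with the same kind of usefulness bookkeeping.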
Except it's not, really. The case that there's something special about humans, with their magic squishy hardware that can never be implemented on another substrate, has exactly as much support as the opposite ridiculous extreme of "zomg, not only will the singularity happen, it will happen next Tuesday - I know because I am a s00per-rational techno-prophet". Which is to say, pretty much none, beyond a desire to believe one way or the other.
Except what's not? I neither understand your point, nor am I sure what about my post made you leap to the conclusion that I think imagination and volition can only be realized by brains. I would assert that if one were able to replicate those things, it surely wouldn't be in silicon, because the qualitative differences between thought and logic gates are far too broad for one to be anything like the other.
> The author is talking about imagination, and while superficial given the brevity of the piece, the point is well made and understood elsewhere.

My point is that neither "humans can X, computers can't X" nor "someday computers will X" says anything interesting about computers or humans.