Second video from The Advanced Apes is up! If you missed episode one you can check it out here.
Excellent job! This is an interesting way of looking at it. It reminds me of a phenomenon I have been seeing on the internet lately. There are all these online groups of educated people who identify as atheist, but they become so headstrong in their atheism that it has almost turned into a religion itself. There are now conferences and meetups in which these groups of people are coming together in real life to share their views. More and more you see them trying to educate believers and impose these views on others as well. Sound familiar? In this case it is broadening the traditional definition of religion (a belief in a higher power or superhuman figure), but the group of people coming together to talk, listen, discuss, and share their beliefs remains, even if that belief is that they don't believe.

That said, the idea that the singularity can be viewed as a religion doesn't seem nearly as farfetched. There are certainly obvious differences, and I have yet to see a bunch of singularity preachers on street corners, but, technically, it is the belief in a superhuman figure. That figure is in the future rather than the past, and the believers are scientists who also hypothesize and research, but... yeah. Interesting. I know the neuroscientist Sam Harris has even gone so far as to call the singularity "science enabled religion". I don't know if I personally agree, but I think that point of view has some validity and is interesting to think about.

Anyways, great job on the video. I can't wait to see more!
The "New Atheists" have been compared to a religious movement quite frequently. I think some of the parallels are there, but in my mind the main criteria for a religion is belief in the supernatural so IMO it doesn't really qualify. I think the reason some see them as a religious group is because they are just as passionate about religion not existing as religious people are about their religious belief. I like this. I think its important for the areligious to meet and share their views and organize. Especially for people that just realized religion is wrong and are only surrounded with family and friends that believe. Ya, the function and structure share common parallels. System parallels. You can actually say the same about any major religion and the scientific enterprise. Just as major religions have their symbolic figure heads (e.g., Jesus, Mohammed, Joseph Smith, L. Ron Hubbard), we have our own symbolic figures heads (e.g., Isaac Newton, Charles Darwin, Albert Einstein, etc.). Just as major religions have places of worship, centers of power, and important socio-geographical landmarks (e.g., Vatican, Wailing Wall, Mecca, etc.) we have our own versions of those things (e.g., Sagan Walk, Down House, Galapagos Islands, Oxford, etc.). Even the pervasiveness of the belief structures in daily life shares commonalities. People who believe in a major religion believe that their god will save them from disease, death, and make sure they are well-fed and safe... whereas people in secular society believe that science can fill those functions. You may not see singularity preachers on street corners... but that's because we are too tech savvy for that. We have blogs. Thanks :-)There are all these online groups of educated people who identify as atheist but they become so headstrong in their atheism that it has almost turned into a religion itself.
There are now conferences and meetups in which these groups of people are coming together in real life to share their views.
In this case it is broadening the traditional definition of religion (a belief in a higher power or superhuman figure) but the group of people coming together to talk, listen, discuss and share their beliefs remains. Even if that belief is that they don't believe.
There are certainly obvious differences and I have yet to see a bunch of singularity preachers on the street corners
Anyways. Great job on the video. I can't wait to see more!
Your first video was really good, but it's remarkable how much better this one is. The pacing of your speech and message was spot on, and the humor in the animation was really spectacular. Very well done all the way around.

Regarding the question of "what do I think?": I hadn't been much exposed to these theories prior to you joining Hubski, but the more I think about them, the more they seem inevitable. You have joked that I might just be young enough to make it to the point where the "sands of time" are running back into the hourglass rather than out of it. I sure hope so. Still, as many of us have discussed here, the need for religion is born largely out of the fear of death. I suppose there is a part of me that suspects that the conjuring of such a positive view of the not-too-distant future is also aided by this fear.

Again, great work Cadell. I'm off to watch it again...
"the more I think about them, the more they seem inevitable."

I think what is inevitable is the general idea that human intelligence will one day be surpassed, and that the pathway will be largely computational. If you (or anyone else) are interested in reading about potential singularity scenarios, I recommend this paper by Ben Goertzel (also linked in the TAA article). At the moment singularity and global brain theorists are quite combative. However, I firmly take the stance that singularity theorists are observing the individual trend towards increased intelligence, while global brain theorists are observing the collective trend towards increased intelligence. The singularity pathway is computation, and the global brain pathway is the internet. So from my perspective both sets of theorists are trying to explain the same phenomenon on different levels (individual vs. system). I hope to write a paper in the future describing the unity of singularity/global brain theories. It is something that has been on my mind a lot lately. (I've also linked several talks in the TAA article by researchers working on the revolutions mentioned in the video.)

"Still, as many of us have discussed here, the need for religion is born largely out of the fear of death."

Yes, I agree with this. I think that is fundamental to the phenomenon of religion.

"I suppose there is a part of me that suspects that the conjuring of such a positive view of the not-too-distant future is also aided by this fear."

Yes, I am aware of that, most definitely. I'm currently in talks with a researcher at the Global Brain Institute to conduct a psychological survey of singularity/global brain theorists on their perspective on death, the idea of the singularity as religion, and whether they feel their own fear of death affects their extrapolations and predictions for a 2045 singularity. In a report I'm in the process of writing, one of the main questions I have to address is "Global Brain as Religion". I personally think a form of transhumanism (stemming from an earlier version of humanism) will become religion-like in the Global Brain. I would conceptualize it more as spiritual... but the link between these ideas and religion is obviously there. The main difference is that the Global Brain is an idea rooted in empirical study and scientific theory (or at least we are working on that). If all predictions and extrapolations prove erroneous, we will have to discard the theory.
An interesting question along that line is that of Emergent Intelligence. Some AI researchers believe we simply have to create a big enough neural network, and an intelligence will "emerge," similar to how intelligence "emerged" via evolution. Others believe emergence is fantasy, and we'll have to actually write a large portion of the intelligence. Minsky has some fantastic papers on intelligence, from the perspective of an AI researcher.
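To make the "big enough network and the behavior emerges" framing concrete, here is a minimal sketch (not from the video or from anyone in the thread, just an illustration): a tiny feedforward network is never told the XOR rule, yet it recovers the rule from four examples. The layer sizes, learning rate, and iteration count are arbitrary toy choices.

```python
# Toy illustration only: a 2-8-1 network trained on XOR with plain NumPy.
# Nobody writes the XOR rule anywhere below; the behavior comes out of training.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # gradients of squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # typically close to [[0], [1], [1], [0]]: XOR, learned rather than programmed
```

The contrast with the "write a large portion of the intelligence" camp would be hand-coding the XOR truth table instead of letting the weights find it; the disagreement in the thread is essentially about which of those two strategies scales up.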
"An interesting question along that line is that of Emergent Intelligence."

I think intelligence is an emergent property of the universe (like many other things, e.g., complex chemistry, life, multicellularity).

"Some AI researchers believe we simply have to create a big enough neural network, and an intelligence will 'emerge,' similar to how intelligence 'emerged' via evolution."

Yes, all dominant theories I'm aware of discuss super-intelligence as an emergent property. However, my supervisor, Francis Heylighen, believes that A.I. theorists are very mistaken to think that artificial general intelligence will emerge from robotics (he wrote about this in a paper called "A brain in a vat cannot break out"). He believes the only way superintelligence can arise is through the collective network we are creating on the Internet. I'm not sure where I stand in regard to his criticism of A.I. I think we should take the possibility of artificial general intelligence seriously... however, I think in any scenario the A.I. would be dependent on our system and our internet, and that it would almost certainly be "friendly" because of that. It would be in its best interest to be altruistic within a system as massive as ours. And in any scenario the Internet itself will be deeply rooted in humanity's biology (as long as an artificial general intelligence arises post-2035ish). So the emergence of artificial general intelligence would probably just cause us to accelerate the process of our own transformation into robots (via brain-interface technology). But I think that because I think there is an inherent unity in singularity and global brain theories, as I said to thenewgreen.

"Others believe emergence is fantasy, and we'll have to actually write a large portion of the intelligence."

Hm. I'm personally skeptical of anyone who thinks that it wouldn't be emergent. Intelligence requires evolution in an environment. I think if we get AGI it will be from evolutionary robotics.

"Minsky has some fantastic papers on intelligence, from the perspective of an AI researcher."

Which ones? I'd love to read them.
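The "evolution in an environment" point above can also be made concrete with a toy evolutionary loop (again just an illustration, not anything Heylighen or the evolutionary-robotics literature actually uses): candidate "controllers" are random parameter vectors, the "environment" is a made-up scoring function, and selection plus mutation does the rest.

```python
# Minimal evolutionary loop: mutate, evaluate in an "environment", keep the fittest.
# The environment here is just an invented scoring function; real evolutionary
# robotics would evaluate a controller in simulation or on hardware.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([0.3, -1.2, 2.0, 0.7])       # stand-in for "what the environment rewards"

def fitness(genome):
    return -np.sum((genome - target) ** 2)     # higher is better

population = [rng.normal(size=4) for _ in range(20)]
for generation in range(200):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]                                   # selection
    population = [p + rng.normal(scale=0.1, size=4)        # mutation
                  for p in parents for _ in range(4)]

best = max(population, key=fitness)
print(np.round(best, 2), round(fitness(best), 4))  # drifts toward the rewarded behavior
```

Scale the genome up from four numbers to a neural controller, and the environment up from a quadratic to a physics simulation or a real robot, and you have the basic shape of the evolutionary-robotics argument.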
mit.edu has a great collection of Minsky's papers. One of my favorites is "Communication with Alien Intelligence," wherein he attempts to demonstrate that it will be possible for us to communicate with any alien intelligence we meet. In doing so, he reveals some very interesting deductions about intelligence itself.

On the tangent of AI personalities, the comic Dresden Codak has a great AI story with an unusual twist. Rather than being benevolent or malevolent, the AI simply has its own interests and humanity becomes redundant. The defining line of the Mother AI is, "We can give you anything you want, save relevance." The beginning is here; the defining quote is here.
"the only way superintelligence can arise is through the collective network we are creating on the Internet"

I'm inclined to believe that intelligence as we understand it requires input, but I'm not convinced the massive network is necessary. If natural evolution produced intelligence with only physical sensory input, why couldn't an artificial intelligence be achieved with only software, some motors, and a camera?

"it would almost certainly be 'friendly'"

I think an artificial intelligence and the environment that produces it will be so complex as to make predictions implausible. I think predicting any aspect of an AI's personality, including hostility, would be as complex as predicting the stock market or hurricanes.
"One of my favorites is 'Communication with Alien Intelligence,' wherein he attempts to demonstrate that it will be possible for us to communicate with any alien intelligence we meet."

Oh, that interests me tremendously.

"If natural evolution produced intelligence with only physical sensory input, why couldn't an artificial intelligence be achieved with only software, some motors, and a camera?"

I think Heylighen's point is that you would need an entire system of artificial intelligences to evolve together in an environment - you can't just have intelligence "arise in a vat", so to speak. So he is arguing that the natural system-intelligence of the Internet is a better candidate for the emergence of superintelligence than robotics is. Again, I'm quite committed to the perspective that robotics and the internet are both going to produce higher intelligence levels.

"I think an artificial intelligence and the environment that produces it will be so complex as to make predictions implausible."

Fair. Of course.

"We can give you anything you want, save relevance."

Wow, one of my favourite quotes now. Thanks. I'd be scared of it if I didn't think that we will be that intelligence.
In your view, how will these non-biological changes take place from an economic standpoint? It seems to me that if we use smartphones as a jumping-off point, then it stands to reason that augmentations or replacements to parts or all of the human form might not be standardized either; that these augmentations will be offered by companies competing for customers' attention and perhaps money. How might humans transcend their biological bodies without being constrained by limitations imposed by technology producers?

Edit: As an aside, I'd like to post a link to a clip from the first science show I watched as a kid, which was called 3-2-1 Contact (and which had a pretty sweet theme song :D), just to contrast how far science shows for kids have come in a relatively short time.
"how will these non-biological changes take place from an economic standpoint?"

I think what we will see first is a transition to wearable computing (which is already starting to happen), followed in the late 2020s/2030s by a transition to "internal computing". So the "iPhone", "iPad", or "Google Glass" of the 2030s will be internal (whatever the product is that gets marketed to you).

"these augmentations will be offered by companies competing for customers' attention and perhaps money."

I think that is undoubtedly true. Several problems arise from this - they are transhuman philosophical problems - which is why transhumanist philosophy is so important to study. To be honest, some of the problems these technologies present are already with us. For example, does my smartphone and constant access to the Internet give me a massive intellectual and economic advantage over those who don't have access to either? I think the answer is yes. This will be magnified in the 2030s when the computational process is internal - essentially making me "superhuman" (or what we conceptualize today as superhuman). But hopefully the lag (or diffusion time) of these technologies will be short (on the scale of less than a decade). That is at least what the speed of information technology diffusion today would suggest.

"just to contrast how far science shows for kids have come in a relatively short time."

The Ratchet at work! (Magnified by exponential technological change, of course.)
"Several problems arise from this - they are transhuman philosophical problems - which is why transhumanist philosophy is so important to study."

If you don't mind, who do you think is writing important transhumanist philosophy at the moment, and whose work do you think is worth following?
Really well done. The pacing of this second episode is very good. It's long been my goal to make it to 125. I'm curious what a simple extrapolation of the current increase in life expectancy gives for the year 2100. Is there a Moore's Law for life expectancy? I look at my daughter and wonder if she will ever have to experience advanced aging.
"Really well done. The pacing of this second episode is very good." Thank you mk. I truly appreciate the kind words and accurate observation!
I feel quite confident saying that each episode makes steady progress with both visuals and content. It's the lovely RATCHET effect at work.
Keep watching!
Thanks humanodon. (Incredible name, btw.)

"Just out of curiosity, why no faces?"

I exclude faces, as well as a lot of detail in general, for a number of reasons. Simplicity is golden, consistency is important, and the faster the better. Designing faces for characters, along with appropriate facial expressions, would just take too much time. With everything this project required in the time frame we set, I had to be honest with myself that my level of design skill could not maintain a consistent overall look beyond geometric shapes. So Cadell and I both felt very comfortable keeping the same aesthetics as the "Are Chimps Cultural?" video and trusting that with each episode our skills would improve! As each episode wraps up, I can feel this happening! Small amounts of confidence and skill are accumulating, and bit by tiny bit the videos shall convey this! (I hope!)
Thanks! (By the way, to quote stuff, you can put these things: | around the text | but without any spaces. Also, at the top right corner of the comment field is a tiny link that says "markup" with all kinds of other stuff you can do.)

"Small amounts of confidence and skill are accumulating, and bit by tiny bit the videos shall convey this! (I hope!)"

The two episodes so far are pretty well put together in my opinion, so I'm interested to see how the videos develop further. That makes sense about the faces.
Thank you humanodon.
"By the way, to quote stuff, you can put these things: | around the text | but without any spaces. Also, at the top right corner of the comment field is a tiny link that says 'markup' with all kinds of other stuff you can do."

Look at me learning and applying interwebby tactics. Thank you.

"The two episodes so far are pretty well put together in my opinion, so I'm interested to see how the videos develop further."

I appreciate your interest very much.
"Really well done. The pacing of this second episode is very good."

Lish appreciates your observation! Yes, the animator of The Advanced Apes is finally a part of Hubski!

"I'm curious what a simple extrapolation of the current increase in life expectancy gives for the year 2100. Is there a Moore's Law for life expectancy?"

Great question. Aubrey de Grey has introduced an important concept that doesn't function quite like Moore's Law, but it helps us conceptualize what it will be like for us to age. He calls it "Longevity Escape Velocity" (LEV). It is a quantification of our aging that attempts to describe whether we are aging faster or slower than technological evolution. He hypothesizes that there will come a point where people who would normally be dying at a high rate (70-100 year olds) start to get gradually "younger" biologically, and their death rate actually starts to decrease (perhaps dramatically). When this starts to happen, we'll know that we are reaching Longevity Escape Velocity and that people younger than 70 are likely aging slower than technological evolution - and will therefore have a new life expectancy of around 1,000 years, because that is roughly how long you would live on average if aging didn't cause death.
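mk's extrapolation question and the 1,000-year figure above both come down to quick arithmetic, so here is a back-of-the-envelope sketch. The numbers are assumptions chosen only for illustration: a starting life expectancy of about 80 years in 2013, a gain of roughly 2.5 years per decade (the order of magnitude often cited for record life expectancy), and a flat 0.1% per year chance of dying from non-aging causes such as accidents.

```python
# Back-of-the-envelope only; the starting value, the 2.5-years-per-decade gain,
# and the 0.1%/year non-aging death rate are illustrative assumptions.

# 1) Naive linear extrapolation of life expectancy to 2100.
le_2013 = 80.0                 # assumed life expectancy in 2013 (years)
gain_per_decade = 2.5          # assumed historical trend (years per decade)
le_2100 = le_2013 + gain_per_decade * (2100 - 2013) / 10
print(f"Linear extrapolation for 2100: ~{le_2100:.0f} years")   # ~102 years

# 2) Where the "about 1,000 years" figure comes from: if aging never kills you,
#    expected lifespan is roughly 1 / (annual risk of death from other causes).
non_aging_death_rate = 0.001   # assumed 0.1% per year
print(f"Expected lifespan without aging: ~{1 / non_aging_death_rate:.0f} years")  # ~1000 years
```

The straight-line trend gives a number near 100 and says nothing about an upper limit, which is why the escape-velocity framing is the more interesting comparison: the question is whether the gain per decade itself keeps growing.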
Just because we can do something doesn't mean we should. I think that humanity has a serious problem with self-constraint. We feel we are destined -- that it is our right -- to control the natural order. What does eternal life mean for Earth's already diminishing resources? And besides that, would we even still be human? Sure, species evolve, but has that change ever been self-induced the way this would be? I'll be watching humanity's self-destruction from the safety of my cave.
"Just because we can do something doesn't mean we should."

I know it may be hard to believe, but the event itself is not a moral question. It would be like a hunter-gatherer saying that everyone else is immoral for practicing agriculture or industry. The next transition is going to occur (unless we go extinct).

"We feel we are destined -- that it is our right -- to control the natural order."

High intelligence seems to be quite good at ordering nature to our own purposes. All life tries to do this - we are just the best at it. We are not perfect at it - the next intelligence will be better.

"What does eternal life mean for Earth's already diminishing resources?"

The next system will be based on an effectively infinite resource (e.g., solar).

"And besides that, would we even still be human?"

It would be a post-human world.

"Sure, species evolve, but has that change ever been self-induced the way this would be?"

No. High intelligence has never existed before, and this seems to be what our intelligence is pushing towards: a complete eradication of the biochemical pathway and a complete embrace of the technocultural pathway. The technocultural pathway is everything we consider human; the biochemical pathway is everything we consider animal. From this perspective you can say that the singularity will usher in the first true humans.

"I'll be watching humanity's self-destruction from the safety of my cave."

I'm guessing I'll see you in the global brain.
"It would be like a hunter-gatherer saying that everyone else is immoral for practicing agriculture or industry."

One of the first posts I ever commented on was titled something along the lines of "What is your most controversial belief?" While I can't remember my comment verbatim, it was essentially that I believe any sort of control over nature is immoral, including agriculture and industry. Yes, I do use and benefit from them regularly, but only because I don't have a choice at the present moment.
Please read about the great Siddhas of India: Nandi Devar, Agastyar, the Yoga of Boganathar, Tirumular. The best book for beginners is Babaji and the 18 Siddha Tradition by Marshall Govindan. Just to get started.