theadvancedapes  ·  3782 days ago  ·  post: Can We Live Forever?

    the more that I think about them, the more they seem inevitable.

I think what is inevitable is the general idea that human intelligence will one day be surpassed, and that the pathway will be largely computational. If you (or anyone else) are interested in reading about potential singularity scenarios, I recommend this paper by Ben Goertzel (also linked in the TAA article).

At the moment, singularity and global brain theorists are quite combative. However, I firmly take the stance that singularity theorists are observing the individual trend towards increased intelligence, while global brain theorists are observing the collective trend towards increased intelligence. The singularity pathway is computation; the global brain pathway is the Internet. So from my perspective, both camps are trying to explain the same phenomenon at different levels (individual vs. system). I hope to write a paper in the future describing the unity of singularity/global brain theories. It is something that has been on my mind a lot lately.

(I've also linked several talks by researchers working on the revolutions mentioned in the video in the TAA article).

    Still, as many of us have discussed in the past the need for religion is born largely out of the fear of death.

Yes, I agree with this. I think that is fundamental to the phenomenon of religion.

    I suppose there is a part of me that suspects that the conjuring of such a positive view of the not too distant future is also aided by this fear.

Yes, I am aware of that, most definitely. I'm currently in talks with a researcher at the Global Brain Institute to conduct a psychological survey of singularity/global brain theorists: their perspectives on death, the idea of the singularity as religion, and whether they feel their own fear of death affects their extrapolations and predictions for a 2045 singularity. In a report I'm in the process of writing, one of the main questions I have to address is "Global Brain as Religion". I personally think a form of transhumanism (stemming from an earlier version of humanism) will become religion-like in the Global Brain. I would conceptualize it more as spiritual... but the link between these ideas and religion is obviously there. The main difference is that the Global Brain is an idea rooted in empirical study and scientific theory (or at least we are working on that). If all predictions and extrapolations prove erroneous, we will have to discard the theory.





rob05c  ·  3782 days ago

An interesting question along that line is that of Emergent Intelligence.

Some AI researchers believe we simply have to create a big enough neural network, and an intelligence will "emerge," similar to how intelligence "emerged" via evolution.

Others believe emergence is fantasy, and we'll have to actually write a large portion of the intelligence.

Minsky has some fantastic papers on intelligence, from the perspective of an AI researcher.

theadvancedapes  ·  3782 days ago

    An interesting question along that line is that of Emergent Intelligence.

I think intelligence is an emergent property of the universe (like many other things, e.g. complex chemistry, life, and multicellularity).

    Some AI researchers believe we simply have to create a big enough neural network, and an intelligence will "emerge," similar to how intelligence "emerged" via evolution.

Yes, all dominant theories I'm aware of discuss super-intelligence as an emergent property. However, my supervisor, Francis Heylighen, believes that A.I. theorists are very mistaken to think that artificial general intelligence will emerge from robotics (he wrote about this in a paper called "A brain in a vat cannot break out"). He believes the only way superintelligence can arise is through the collective network we are creating on the Internet. I'm not sure where I stand with regard to his criticism of A.I. I think we should take the possibility of artificial general intelligence seriously; however, I think in any scenario the A.I. would be dependent on our system and our Internet, and that it would almost certainly be "friendly" because of that. It would be in its best interest to be altruistic within a system as massive as ours. And in any scenario, the Internet itself will be deeply rooted in humanity's biology (as long as an artificial general intelligence arises post-2035ish). So the emergence of artificial general intelligence would probably just cause us to accelerate the process of our own transformation into robots (via brain-interface technology).

But I think that because, as I said to thenewgreen, I believe there is an inherent unity between singularity and global brain theories.

    Others believe emergence is fantasy, and we'll have to actually write a large portion of the intelligence.

Hm. I'm personally skeptical of anyone who thinks it wouldn't be emergent. Intelligence requires evolution in an environment; I think if we get AGI, it will come from evolutionary robotics.
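To make the "evolution in an environment" idea concrete, here is a toy sketch in Python: a tiny evolutionary loop that mutates and selects minimal neural-network controllers against a made-up fitness function. The network size, the task, and the parameters are all invented for illustration; this isn't anyone's actual research code.

    import random

    def make_genome(n_weights=6):
        # A genome is just a list of connection weights for a tiny network.
        return [random.uniform(-1, 1) for _ in range(n_weights)]

    def act(genome, sensor):
        # 1 input, 2 clamped hidden units, 1 output.
        h1 = max(-1.0, min(1.0, sensor * genome[0] + genome[1]))
        h2 = max(-1.0, min(1.0, sensor * genome[2] + genome[3]))
        return h1 * genome[4] + h2 * genome[5]

    def fitness(genome):
        # "Environment": reward controllers that output the negative of what
        # they sense (a stand-in for, say, steering away from an obstacle).
        error = sum(abs(act(genome, s) - (-s)) for s in (-1.0, -0.5, 0.0, 0.5, 1.0))
        return -error  # higher is better

    def evolve(pop_size=30, generations=50, mutation=0.1):
        population = [make_genome() for _ in range(pop_size)]
        for _ in range(generations):
            # Select the fitter half, then refill the population with
            # mutated copies of random survivors.
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            children = [[w + random.gauss(0, mutation) for w in random.choice(survivors)]
                        for _ in range(pop_size - len(survivors))]
            population = survivors + children
        return max(population, key=fitness)

    print("best fitness:", fitness(evolve()))

The point of the toy is only that selection pressure in an environment, rather than explicit programming, is what shapes the controller.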

    Minsky has some fantastic papers on intelligence, from the perspective of an AI researcher.

Which ones? I'd love to read them.

rob05c  ·  3782 days ago

mit.edu has a great collection of Minsky's papers.

One of my favorites is Communication with Alien Intelligence, wherein he attempts to demonstrate that it will be possible for us to communicate with any alien intelligence we meet. In doing so, he reveals some very interesting deductions about intelligence itself.

    the only way superintelligence can arise is through the collective network we are creating on the Internet

I'm inclined to believe that intelligence as we understand it requires input. But I'm not convinced the "massive network" is necessary. If natural evolution produced intelligence with only physical sensory input, why couldn't an artificial intelligence be achieved with only software, some motors, and a camera?
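Just to make "only software, some motors, and a camera" concrete, a minimal sense-think-act loop could look like the sketch below (Python, with the camera and motors simulated as placeholders; this doesn't refer to any real robotics library).

    import random

    def read_camera():
        # Pretend frame: brightness readings from a left and a right sensor.
        return {"left": random.random(), "right": random.random()}

    def decide(frame):
        # Trivial "policy": turn toward the brighter side.
        return "turn_left" if frame["left"] > frame["right"] else "turn_right"

    def drive_motors(command):
        # Stand-in for sending the command to real hardware.
        print("motor command:", command)

    for _ in range(5):  # five iterations of the perception-action cycle
        drive_motors(decide(read_camera()))

The open question in this thread is whether anything interesting could emerge from scaling a loop like that up, or whether it takes a massive networked system.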

    it would almost certainly be "friendly"

I think an artificial intelligence and the environment that produces it will be so complex as to make predictions implausible. I think predicting any aspect of an AI's personality, including hostility, would be as complex as predicting the stock market or hurricanes.

On the tangent of AI personalities, the comic Dresden Codak has a great AI story with an unusual twist. Rather than being benevolent or malevolent, the AI simply has its own interests, and humanity becomes redundant. The definitive line of the Mother AI is, "We can give you anything you want, save relevance." The beginning is here; the defining quote is here.

theadvancedapes  ·  3782 days ago

    One of my favorites is Communication with Alien Intelligence, wherein he attempts to demonstrate that it will be possible for us to communicate with any alien intelligence we meet

Oh, that interests me tremendously.

    If natural evolution produced intelligence with only physical sensory input, why couldn't an artificial intelligence be achieved with only software, some motors, and a camera?

I think Heylighen's point is that you would need an entire system of artificial intelligences evolving together in an environment - you can't just have intelligence "arise in a vat", so to speak. So he is arguing that the natural system-intelligence of the Internet is a better candidate for the emergence of superintelligence than robotics is. Again, I'm quite committed to the perspective that robotics and the Internet are both going to produce higher levels of intelligence.

    I think an artificial intelligence and the environment that produces it will be so complex as to make predictions implausible.

Fair. Of course.

    "We can give you anything you want, save relevance."

Wow, that's one of my favourite quotes now. Thanks. I'd be scared of it if I didn't think that we will be that intelligence.