comment by kleinbl00

I think you're mistaken. "Sensory input" is easy. "An ability to interact with the environment" is huge. "the ability to apply this type of processing to all these domains" doesn't get you there, though. It's been fifteen years since Kismet and we haven't moved appreciably beyond it.

If I haven't recommended this book to you yet, I've been remiss. Its main point is not that we're getting better at AI, but that we're grading on more and more of a sliding scale and requiring less and less in order to qualify.

mk  ·  3760 days ago  ·  link  ·  

Perhaps, but Kismet was way too ambitious. I'm thinking more like sewer-cleaning robots that try to get refuse out, but also need to watch their energy levels and must get to a charging station at times. Network a bunch of them, and each could have its own multiple competing directives, plus those of the hive: helping move larger objects, rescuing stuck workers, etc. When I say AI, I'm not talking about C3P0. I'm talking about robots that work more or less independently from humans, and with behavior (within a domain) that we cannot completely predict.

kleinbl00  ·  3760 days ago  ·  link  ·  

You're talking, essentially, about BEAM robotics, which, I agree, are not C3PO-grade AI. However, I'm not sure cockroach-grade intelligence really counts as AI.

mk  ·  3760 days ago  ·  link  ·  

BEAM robotics, but with an AI based on this emergent type of decision-making, which I do believe is the path to C3PO.

Many years ago, I was impressed by a program called Copycat, outlined by Douglas Hofstadter in his book Fluid Concepts and Creative Analogies. The goal of the program was to solve puzzles such as:

  I change 'egf' to 'egw', can you do the same for 'ghi'?
or

  I change 'aabc' to 'aabd', can you do the same for 'ijkk'?
The program was non-deterministic; successive runs could produce different answers. However, it also measured the computation required to arrive at an answer, and it could be written so that this 'effort' biased the output.

1000 runs of the second puzzle led to something like:

  ijll: 612, ijkl: 198, jjkl: 121, hjkk: 47, jkkk: 9, etc...
There is no right answer, but some are more satisfactory than others. This was a very early version of what Watson does.
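Copycat's real architecture (its slipnet, workspace, and codelets) is far richer than anything this short, but the flavor of a non-deterministic solver that produces a distribution of answers can be sketched in a few lines. Everything below is invented for illustration: the candidate rules and the weights are my guesses, standing in for Copycat's 'effort' bias.

```python
import random
from collections import Counter

def candidate_answers(target):
    """Candidate analogies for "'aabc' changes to 'aabd'; what does
    `target` become?" Each candidate is a different reading of the
    rule "replace the rightmost letter with its successor"."""
    def succ(c):
        return chr(ord(c) + 1)

    last = target[-1]
    return [
        target[:-1] + succ(last),      # ijkk -> ijkl: successor of the literal last letter
        target[:-2] + succ(last) * 2,  # ijkk -> ijll: successor of the doubled group
        succ(target[0]) + target[1:],  # ijkk -> jjkk: "rightmost" remapped to "leftmost"
        target[:-1] + 'd',             # ijkk -> ijkd: replace with a literal 'd'
    ]

def run_trials(target, weights, trials=1000, seed=0):
    """Sample answers non-deterministically; `weights` crudely plays
    the role of the 'effort' bias described above."""
    rng = random.Random(seed)
    answers = candidate_answers(target)
    picks = rng.choices(answers, weights=weights, k=trials)
    return Counter(picks)

print(run_trials('ijkk', weights=[5, 3, 1, 1]).most_common())
```

The point survives even in a toy: there is no single right answer, only a frequency distribution over more and less satisfying ones.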

It's fairly obvious that our own brains have competing processes, and that our decisions are the result of this competition. Check out what happens when you cut someone's corpus callosum. I imagine that some day, along this path, C3PO will appear to be of one mind, but will in fact be the emergent behavior of numerous processes running in parallel. He won't be programmed around a body of knowledge, but will absorb and create bodies of knowledge, and different ways to operate on them. Google's image processing software would be just one of hundreds, if not thousands, of similarly complex processes that could be drawn upon. Of course, some processes would probably work to coordinate them all.
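The "one mind from many competing processes" idea can be made concrete with a toy arbiter in the spirit of the sewer-robot example; all the process names and urgency numbers here are invented for illustration, not anyone's actual design.

```python
class Process:
    """One independent behavior competing for control of the robot."""
    def __init__(self, name, urgency_fn):
        self.name = name
        self.urgency_fn = urgency_fn  # maps world state -> urgency in [0, 1]

    def bid(self, state):
        return self.urgency_fn(state)

def decide(processes, state):
    """The 'decision' is just whichever process shouts loudest right now;
    no central planner holds the whole picture."""
    return max(processes, key=lambda p: p.bid(state)).name

processes = [
    Process('clean_sewer', lambda s: 0.6),
    Process('recharge',    lambda s: 1.0 - s['battery']),
    Process('help_hive',   lambda s: 0.8 if s['worker_stuck'] else 0.1),
]

print(decide(processes, {'battery': 0.9, 'worker_stuck': False}))  # clean_sewer
print(decide(processes, {'battery': 0.1, 'worker_stuck': False}))  # recharge
```

From the outside the robot appears to have a single coherent policy; internally it's only an arbitration among parallel directives.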

I give C3PO 15 years.

kleinbl00  ·  3760 days ago  ·  link  ·  

    BEAM robotics, but with an AI based on this emergent type of decision-making, which I do believe is the path to C3PO.

I think it'll get us closer to Solaris or V'GER than C3P0. This is admittedly not my expertise, but human cognition is a byproduct of millions of years of biological evolution. When you start a system in an environment utterly devoid of biology, the structures and means that will appear have no reason to resemble our own.

That was actually Alan Turing's point when he came up with the test: not "are you intelligent" but "can you imitate intelligence":

    "Are there imaginable digital computers which would do well in the imitation game?"

He saw "intelligence" as an impossible thing to judge; he argued that "imitating intelligence" was easy. I think the processes you're talking about are going to lead to "intelligence" - I'd argue that in many ways, they already have. I don't think they'll ever lead to human intelligence, though. Again, read the Sherry Turkle book.

mk  ·  3758 days ago  ·  link  ·  

    When you start a system in an environment utterly devoid of biology, the structures and means that will appear have no reason to resemble our own.

I agree with that. However, many animals express an intelligence we can at least relate to. I expect that the same might go for non-biological AI, at least to the degree that we operate in the same environment, but maybe not.

    He saw "intelligence" as an impossible thing to judge; he argued that "imitating intelligence" was easy. I think the processes you're talking about are going to lead to "intelligence" - I'd argue that in many ways, they already have.

For sure. I don't think there's a difference between intelligence and the perfect imitation of it. It's in the eye of the beholder. It's telling that we can't even agree upon the fundamentals of our own intelligence. We just know it when we see it.

I'll add the book. I am starting an actual doc now. I'm not sure if you've read Gödel, Escher, Bach, but it's fantastic. The rest of what I've read from Hofstadter is variations on themes outlined in GEB.

kleinbl00  ·  3758 days ago  ·  link  ·  

    I agree with that. However, many animals express an intelligence we can at least relate to.

Right: We grew up in the same environment. We breathe the same air, we drink the same water, we bask in the same sun, we experience the same weather, our predators and prey are drawn from the same grab bag. That was my point: we have a long legacy of parallel development with animals. Machine intelligence? We're going to have absolutely nothing in common with it.

    It's telling that we can't even agree upon the fundamentals of our own intelligence. We just know it when we see it.

And I'm not sure we will. When the inception parameters are so wildly different, what we see is not very likely to strike us as "intelligence."

My dad has been trying to get me to read Gödel, Escher, Bach for about 30 years now. Maybe one of these days.

thundara  ·  3758 days ago  ·  link  ·  

There's one dialogue in particular in it that revolves around the ideas you two are discussing: the Ant Fugue.

The fundamental idea being that from a physical point of view, there's nothing unique about neurons firing in a certain pattern.

A population of macroscopic organisms, or actors in a computer, could interact and produce the same patterns, given enough moving parts and the ability to reorganize themselves in response to input.

briandmyers  ·  3760 days ago  ·  link  ·