user-inactivated  ·  3388 days ago  ·  post: The Society of Mind – Marvin Minsky

As well respected as Minsky is (and deserves to be), and as widely read as this book is, no one really went anywhere with it. As far as I know, there is just AMBR and its descendants, and they're not as exciting as one might hope. AI isn't a humble discipline, but all of its successes have been with more humble problems. Grand theories sketched out in advance do more harm than good; they don't help with the nuts and bolts, they give the impression that we understand much more than we do, and then people rightly ask "so where are our minds-in-a-box?"

rob05c  ·  3388 days ago

    no one really went anywhere with it

Sure they did. The connections just aren’t obvious.

Minsky’s research on information classification and neural networks (among other things) is essential to modern machine learning and neural network training, which are used in handwriting recognition, facial recognition, and your phone telling you where to hide a body.

It didn’t turn into Strong AI (yet?) like he hoped, but it’s still been invaluable, and enabled some very powerful human-computer interfaces.

With regard to his classifications of the human brain, I’m more inclined to think they’re not wrong, just not practical to program manually. The processes being classified are much more complex than we thought, which is why we're turning to things like machine learning.

But his classifications still make sense for the most part, I think. We do know regions of the brain are associated with specific functions.

    Grand theories sketched out in advance do more harm than good

Darwin? Freud? Einstein? They were all wrong on countless points. But their grand theories created and advanced their fields. 'Grand theories' give science targets. Proving something wrong is just as good as proving something right.

user-inactivated  ·  3388 days ago

Funny you mention perceptrons, since it was Minsky who pointed out that they weren't as general as they seemed at first, making work in neural networks unfashionable for years. When neural networks came back, it was in particular applications. Yeah, they're great at pattern recognition, but no one is saying anymore that we just need to find the right neural network topology, train it, and boom, mind-in-a-box. All the successes you point out were relatively modest problems. Which is great; better face recognition and question-answering are within our reach, so let's do those. Let's not promise things we can't back up; that hasn't worked out well for us in the past.
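
To make that limitation concrete, here's a minimal sketch (plain Python, nothing beyond the standard library) of the classic Minsky-Papert counterexample: XOR is not linearly separable, so a single perceptron trained with the perceptron learning rule can never get all four cases right.

    # a single perceptron trying to learn XOR: it can't, because
    # XOR is not linearly separable (Minsky and Papert's point)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    w0, w1, bias = 0.0, 0.0, 0.0
    for epoch in range(1000):
        errors = 0
        for (x0, x1), target in data:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            err = target - out
            if err:
                errors += 1
                w0 += err * x0
                w1 += err * x1
                bias += err
        if errors == 0:
            break  # never happens for XOR

    correct = sum(
        (1 if w0 * x0 + w1 * x1 + bias > 0 else 0) == t
        for (x0, x1), t in data
    )
    print(f"gave up after epoch {epoch}: {correct}/4 correct")

Add a hidden layer and XOR becomes solvable, but training that extra layer is exactly what stayed out of reach until backpropagation became practical.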

    With regard to his classifications of the human brain, I’m more inclined to think they’re not wrong, just not practical to program manually.

A computational model that you can't implement, and can't prove that no one can implement, is not a very useful computational model.

    They’re much more complex than we thought. Which is why we're turning to things like machine learning.

We cannot now nor will we ever be able to devise a learning algorithm that performs well for every problem. We can devise learning algorithms that work well enough for particular problems.

    Darwin? Freud? Einstein? They were all wrong on countless points. But their grand theories created and advanced their fields. 'Grand theories' give science targets. Proving something wrong is just as good as proving something right.

Computing does not work like empirical science works. Mostly it works the way mathematics works; we proceed by proving things, if only existence proofs in the form of working artifacts. To the extent that it's empirical, it's empirical the way engineering is; we want to see how well our artifacts work.

user-inactivated  ·  3388 days ago

Doesn't the no free lunch theorem apply as much to brains as to the machine learning algorithms we program?

rob05c  ·  3388 days ago

    We cannot now nor will we ever be able to devise a learning algorithm that performs well for every problem.

Do you believe the human brain is strictly more powerful than a Turing machine? If so, in what way?

user-inactivated  ·  3388 days ago

I am not a neuroscientist. I know that there is no learning algorithm that is better than random guessing, because there is a theorem that tells me so. Math trumps speculation.

rob05c  ·  3388 days ago

    I know that there is no learning algorithm that is better than random guessing, because there is a theorem that tells me so.

Random guessing for all values. For one, random guessing might be fast enough. For another, it doesn’t have to learn for all values, just for most. Mathematically, the implications are significantly different.

    We cannot now nor will we ever be able to devise a learning algorithm that performs well for every problem.

The NFL doesn’t tell us that. The NFL tells us we can’t design a single algorithm which performs optimally for all inputs [1] [2]. 'Well' is relative. It doesn’t need to be optimal, it just needs to be fast enough. Just like O(n log n) is fast enough to sort several hundred million integers on your CPU in a second or so, there is a point of computing power for which O(q) is fast enough to generally learn, whatever q is, even if it’s no better than random. Now, if you can prove q = n! or some such, you can demonstrate it’s not achievable with the mass we have to work with in the universe. But nobody’s proved that, and the NFL certainly doesn’t.
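
(For a rough sense of the sorting claim, here's a quick benchmark sketch, assuming numpy is available; the exact wall-clock number obviously depends on your hardware.)

    # how fast is O(n log n) in practice? sort 100 million integers
    import time
    import numpy as np

    n = 100_000_000  # ~800 MB of 64-bit integers
    data = np.random.randint(0, 2**31, size=n, dtype=np.int64)

    start = time.perf_counter()
    data.sort()  # in-place introsort, O(n log n)
    print(f"sorted {n:,} integers in {time.perf_counter() - start:.1f}s")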

Furthermore, 'for all values' is misleading. Consider quicksort. Quicksort performs better than most other algorithms, at the price of a significantly worse worst case (there are analogous no-free-lunch results for sorting). Likewise, there may (you might even be able to prove there does) exist a general-purpose learning algorithm which performs better than random for 99.99% of problems, at huge cost on the other 0.01%.
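
The quicksort trade-off is easy to demonstrate. Here's a sketch that counts comparisons for a naive first-element-pivot quicksort (explicit stack rather than recursion, so the worst case doesn't blow Python's recursion limit):

    import random

    def quicksort_comparisons(seq):
        # naive quicksort, first element as pivot;
        # returns the number of element-vs-pivot comparisons made
        count = 0
        stack = [list(seq)]
        while stack:
            s = stack.pop()
            if len(s) <= 1:
                continue
            pivot, left, right = s[0], [], []
            for x in s[1:]:
                count += 1
                (left if x < pivot else right).append(x)
            stack.append(left)
            stack.append(right)
        return count

    n = 2000
    shuffled = random.sample(range(n), n)
    print("shuffled:", quicksort_comparisons(shuffled))          # ~30,000
    print("sorted:  ", quicksort_comparisons(sorted(shuffled)))  # ~2,000,000

Same algorithm, roughly 30,000 comparisons on a random permutation versus roughly 2,000,000 on already-sorted input: great on most inputs, terrible on a pathological region.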

Unless the human brain is doing something odd with quantum physics, or has some unknown metaphysical component – that is, unless it is strictly more powerful than a linear bounded automaton (LBA) – there is, by definition, some level of FLOPS beyond which a computer can learn faster than a human brain, even by random guessing.

I’m not making claims or predictions. I’m not saying we’ll have a sentient computer in the next ten, or hundred, or thousand years. I’m simply saying, it’s theoretically possible (1) given enough computing power and (2) assuming the human brain is an LBA.

user-inactivated  ·  3388 days ago

    Random guessing for all values. For one, random guessing might be fast enough. For another, it doesn’t have to learn for all values, just for most. Mathematically, the implications are significantly different.

Random guessing might be fast enough, but it is probably not good enough. Random guessing is as bad as it gets.

    The NFL doesn’t tell us that. The NFL tells us we can’t design a single algorithm which performs optimally for all inputs.

That is what NFLs tell us. From the paper you linked:

    In addition to governing both how a practitioner should design their search algorithm, and how well the actual algorithm they use performs, the inner product result can be used to make more general statements about search, results that hold for all P(f)’s. It does this by allowing us to compare the performance of a given search algorithm on different subsets of the set of all objective functions. The result is the no free lunch theorem for search (NFL). It tells us that if any search algorithm performs particularly well on one set of objective functions, it must perform correspondingly poorly on all other objective functions. This implication is the primary significance of the NFL theorem for search. To illustrate it, choose the first set to be the set of objective functions on which your favorite search algorithm performs better than the purely random search algorithm that chooses the next sample point randomly. Then the NFL for search theorem says that compared to random search, your favorite search algorithm “loses on as many” objective functions as it wins (if one weights wins/losses by the amount of the win/loss). This is true no matter what performance measure you use.

For a particular region of search space, you can do better than random guessing. You will do worse than random guessing on other regions. Thus, if you want a useful search algorithm, you tune your algorithm for the region it will be operating in; you make your TSP algorithm perform well for TSP problems, knowing that if you feed it something else, well, GIGO.
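
The quoted result can even be checked exactly at toy scale. The sketch below enumerates all 2^8 boolean objective functions on an 8-point search space and averages the best-value-so-far curve for two non-revisiting searchers, a dumb linear scan and a greedy "hill climber". Per the NFL theorem, the averaged curves come out identical:

    from itertools import product

    N, BITS = 8, 3  # search space: the 8 three-bit points 0..7

    def scan_order(f):
        # dumbest possible non-revisiting search: visit 0..7 in order
        return list(range(N))

    def greedy_order(f):
        # deterministic non-revisiting "hill climber": always sample an
        # unvisited Hamming neighbor of the best point seen so far
        visited, order = set(), []
        best_x, best_y, x = 0, -1, 0
        while len(order) < N:
            visited.add(x)
            order.append(x)
            if f[x] > best_y:
                best_x, best_y = x, f[x]
            options = [best_x ^ (1 << b) for b in range(BITS)]
            options = [p for p in options if p not in visited]
            if not options:
                options = [p for p in range(N) if p not in visited]
            x = min(options, default=0)
        return order

    for name, algorithm in [("scan  ", scan_order), ("greedy", greedy_order)]:
        totals = [0.0] * N
        for f in product((0, 1), repeat=N):  # all 256 objective functions
            best = -1
            for k, x in enumerate(algorithm(f)):
                best = max(best, f[x])
                totals[k] += best
        print(name, [t / 2**N for t in totals])

Both print the same curve (0.5, 0.75, 0.875, ...): whatever the hill climber gains on functions that suit it, it gives back on the rest.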

rob05c  ·  3388 days ago

    Random guessing is as bad as it gets.

Seems good enough for evolution. But there's a good argument that Strong AI isn't practical in a reasonable amount of time: human evolution took 14 galactic years (a galactic year is roughly 230 million years, so about 3 billion years). Of course, we don't know to what degree sentience is learned versus inherited.

It does kind of bother me that debates about Minsky always seem to degrade into Strong AI. Yeah, they failed to accomplish that. And relative to Strong AI, things like natural language processing are modest. But relative to everything else we've achieved in computer science, I think things like Watson, Siri, and Wolfram Alpha are rather significant. And those are only the consumer-visible ones. Machine learning is used by everything from search engines to bioinformatics, most of which in some way extends Minsky's work.

That extends even to his brain-oriented work like Society of Mind: the book addresses numerous subjects, like learning meaning, language processing, ambiguity, and spatial perception, which are equally applicable to various Weak AI systems.

user-inactivated  ·  3388 days ago

I thought certain regions of the brain are associated with specific functions only because of neuroplasticity?

rob05c  ·  3388 days ago

    I thought certain regions of the brain are associated with specific functions only because of neuroplasticity?

I'm not a neurologist, but as far as I know, neuroplasticity lets the brain use other areas when damaged. But there are some 'defaults.' For example, the temporal lobe is ordinarily responsible for hearing and smelling.

More importantly, this suggests the functions are discrete, as Minsky theorizes in the book, rather than some nebulous emergent amalgamation that can't be separated.

user-inactivated  ·  3388 days ago

Maybe there are only defaults because those are the pieces of the brain that are hooked up to hearing and smelling when we're born. They only seem like defaults because they've been wired for input from the ears and the nose for a long, long time; if you switched those "wires" to a different section of the brain at birth, it would work just the same. I'm not a neurologist either, but I feel like if you have a generic algorithm that allows for neuroplasticity, then it's probably generic everywhere in the brain.