reguile  ·  2916 days ago  ·  post: Moore's law is nearing its end

    in fact, it may be up to chance for us (or the self-improving AI) to do so right now, because neuroscience is nowhere near that far in explaining our thinking.

That's essentially what AIs are doing right now: massive amounts of trial and error, in the hope that the network stumbles on some solution that makes sense. The AI is an "optimizer" in that it continuously tries to find a better solution rather than reasoning through the problem the way a person with a big store of previous experience might. We just kind of hope it lands on the right answer through a whole lot of repetition and a bit of smart direction-picking.

    Do you think it's possible to create an AI that's whole purpose would be to crunch through data (with the reward being more data connections made)?

The problem is that two data sets can look very different, with very different inputs, and it's almost impossible to write a single program that can deal with more than one or two "types" of data. You also need a massive set of meaningful data to do something like this, and that is not easy to come by.

That's the problem with neural networks: a network has a set of inputs, then a set of nodes that represent the connections and ideas drawn from those inputs. Feed in information from a different situation and those nodes suddenly become meaningless.

I think the big thing here will have to be an AI that can manage other AIs that each deal with their own sets of data. One whose job is to classify incoming information and find the best program to handle that problem.
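A toy sketch of that "manager" idea (all names here are hypothetical, not from any real framework): a dispatcher that inspects the incoming data and hands it to whichever specialist model fits.

```python
# Hypothetical sketch: a "manager" AI that routes data to specialist models.
# The models are stand-in functions, not real trained networks.

def looks_like_image(data):
    # Crude type check: a nested list of numbers is treated as a pixel grid.
    return isinstance(data, list) and len(data) > 0 and isinstance(data[0], list)

def image_model(data):
    return "image-model output"

def text_model(data):
    return "text-model output"

def manager(data):
    """Pick the specialist network suited to this kind of input."""
    if looks_like_image(data):
        return image_model(data)
    return text_model(data)

print(manager([[0, 255], [128, 64]]))   # routed to the image specialist
print(manager("this is a great time"))  # routed to the text specialist
```

The interesting (and unsolved) part is making the routing itself learned rather than hand-written, which is exactly the job being described above.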

And all this training is expensive and hard to do as well. Neural-network and machine-learning rigs are built around banks of high-end graphics cards and eat up a huge amount of processing power, and the cost of running a network climbs steeply as the network gets bigger. You can see this problem as having been solved in the brain, which has billions of neurons but activates only a few percent of them for any individual input.

The brain also doesn't have to step a processor through each neuron in sequence to simulate it before it can produce an output; everything fires in parallel.





user-inactivated  ·  2916 days ago

    I think the big thing here will have to be an AI that can manage other AI that deal with sets of data.

So, compartmentalization, like in the brain. Have one subsystem deal with X, another with Y, give others Z, A, B and C, and so on, and you have a somewhat-similar neural structure. My thinking is that those parts would still need to be capable of type conversion of some sort, given that the X module might output, say, strings while Y gives away arrays. Such a capability defeats the purpose of compartmentalization, though: if any subsystem is capable of dealing with anything, there's no reason to separate them except for parallel computation, and that's a whole other story.

reguile  ·  2916 days ago

    My thinking is that those parts would still need to be capable of type conversion of some sort, given that the X module might output, say, strings while Y gives away arrays

Neural networks represent information as sets of numbers, weights, and functions. They don't really output "numbers" or "strings" or "arrays"; almost always it's numbers in, numbers out, and the numbers are then converted into whatever meaning they need to carry.

For example, a network that estimates age from an image takes in numbers as the R, G, B values across a massive array and uses a big stack of arrays and functions to narrow those down to a single number, which is slowly "optimized" until its outputs start matching the labeled examples reasonably well.
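That loop can be illustrated with a deliberately tiny sketch: a made-up 2x2 "image" flattened into twelve numbers, one weight per number, and plain gradient descent standing in for the optimizer. This is an assumption-laden toy, not a real age-estimation model.

```python
import numpy as np

# Toy sketch, not a real vision model: a 2x2 RGB "image" flattened to
# 12 input numbers, squeezed through one weight vector to a single output.
rng = np.random.default_rng(0)

pixels = rng.integers(0, 256, size=(2, 2, 3))  # height x width x RGB
x = pixels.reshape(-1) / 255.0                 # 12 numbers in, scaled to 0..1

W = rng.normal(size=12)   # the weights that get "optimized"
b = 0.0
target = 30.0             # pretend label: the age in the photo

# Trial and error: repeatedly nudge the weights until the output matches.
for _ in range(200):
    err = (W @ x + b) - target
    W -= 0.05 * err * x   # gradient step on squared error
    b -= 0.05 * err

print(round(W @ x + b, 1))  # close to 30.0 after optimization
```

A real network would stack many such layers with nonlinearities between them, but the "numbers in, one number out, nudged toward the labels" shape is the same.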

So a network that deals with strings may just break each string down into characters, with a node for each character, while a network that deals with images uses a node for each RGB value.
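One common way to do that string-to-numbers step is one-hot encoding per character. A minimal sketch, assuming an alphabet of just lowercase letters plus space:

```python
import string

# Sketch: turning a string into "numbers in" -- one slot per alphabet
# letter per character position, the text analogue of one node per RGB value.
ALPHABET = string.ascii_lowercase + " "  # 27 symbols

def encode(text):
    """One-hot encode each character: len(text) * len(ALPHABET) numbers."""
    vec = []
    for ch in text.lower():
        row = [0.0] * len(ALPHABET)
        if ch in ALPHABET:
            row[ALPHABET.index(ch)] = 1.0
        vec.extend(row)
    return vec

print(len(encode("hi")))  # 2 characters x 27 slots = 54 input numbers
```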

So if a network was trained on words in one type of sentence, it gets used to one region lighting up to mean one thing. Give it a sentence of a new type and that region will still light up, but it will no longer mean what it should.

That's the real issue with different data types: not so much arrays or ints or doubles or anything of that sort.

Say you have two sentences:

This is a great time.

This is a horrible time.

And you train a neural network on those.

Now you input a new sentence:

This time was a great one

The area that normally lights up in response to "great" now stays dim, because "was" is sitting in that slot, and the network will say the sentence is not positive. It was trained in an environment where "great" or "horrible" always appeared in the same position, and it baked that assumption in. Give it input arranged differently and it will be useless.
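The failure can be made concrete with position-tied features, which is roughly what a fixed-input-layer network ends up learning. This sketch hand-picks the "learned" feature rather than training anything, so it's an illustration of the mechanism only:

```python
# Sketch of the failure mode: features are (position, word) pairs, so a
# word the network learned at position 3 carries no weight at position 4.
VOCAB = ["this", "is", "a", "great", "horrible", "time", "was", "one"]

def features(sentence):
    """One feature per (position, word); only exact position matches light up."""
    words = sentence.lower().rstrip(".").split()
    return {(pos, w) for pos, w in enumerate(words) if w in VOCAB}

# In the training sentences, "great"/"horrible" always sit at position 3,
# so the positive weight effectively lives on the feature (3, "great").
positive_feature = (3, "great")

print(positive_feature in features("This is a great time."))      # True
print(positive_feature in features("This time was a great one"))  # False
```

In the new sentence, "great" has moved to position 4, so the feature the model relies on never fires, even though the word itself is right there.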