reguile · 2916 days ago · post: Moore's law is nearing its end

I'm partially talking out of my ass here; this is casual observation of things I've seen online rather than expert knowledge. Take my words with a grain of salt.

Normally people assume computers are big predictable machines that solve a problem 100% or 0%. A computer does its job perfectly, or it doesn't do the job at all.

People imagine an intelligent computer to be this super-machine that is far superior to human knowledge, super logical, and all that other stuff. But with things like neural networks and other machine learning systems, you can watch the machine slowly develop much like a kid might: it makes really funny decisions that make sense if you are the machine, acting from its own subjective set of knowledge, but that don't really make sense in the grand scheme of things.

For example, one of these programs being trained to navigate a maze might find out that you can hug the right wall to get to the end, and be satisfied with that solution. It makes no attempt to find the optimal route; the program lazily grabs the first strong solution and follows it every time, because trying a new path costs more than it's likely to gain.

Sound familiar?

This is the kind of thing the AI designer has to actively fight by tweaking variables: forcing the machine to experiment every once in a while, or making it so that following the same path over and over starts to earn a reduced or negative reward.
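The standard knob here is something like epsilon-greedy action selection: most of the time the agent exploits its current best guess, but some fraction of the time it's forced to try a random action. Here's a minimal sketch in Python; the action set, the EPSILON value, and the function names are my own illustration, not any particular library's API:

```python
# Minimal sketch of epsilon-greedy exploration on top of tabular Q-learning.
# Everything here (actions, hyperparameters) is invented for illustration.
import random

ACTIONS = ["up", "down", "left", "right"]
EPSILON = 0.1  # fraction of the time the agent is forced to experiment

def choose_action(q_table, state):
    # With probability EPSILON, ignore what the agent "knows" and explore.
    # This is the tweak that keeps it from settling on the first wall-hug.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    # Otherwise exploit: pick the action with the highest learned value.
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(q_table, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    # Standard one-step Q-learning update toward reward + discounted future.
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

The other fix mentioned above, punishing repetition, would just be a reward term that shrinks each time the same state-action pair gets used.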

Bored yet?

Neural networks often have to be kept below a certain size during training; otherwise the network literally sets itself up to memorize everything in the data set you train it on rather than learning to solve the problem. In official terms this is called overfitting.

http://www.mathworks.com/help/nnet/ug/improve-neural-network-generalization-and-avoid-overfitting.html?s_tid=gn_loc_drop

Memorizing everything is the best solution to the training problem, just not the one we want, because it falls apart on anything outside the training set.
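You can see the same capacity effect without a neural network at all. In the sketch below, polynomial degree stands in for network size; the data and degrees are made up, but the pattern you should see (tiny training error, much worse error away from the training points) is exactly the overfitting that MathWorks page is talking about:

```python
# Capacity toy: a high-degree polynomial "memorizes" 16 noisy points,
# nailing the training data while doing worse away from it.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 16)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.1, size=x_train.size)

x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)  # the clean function we'd like to recover

for degree in (3, 12):  # small "network" vs. big "network"
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.5f}, test MSE {test_mse:.5f}")
```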

One example of an algorithm acting weird is in those "teaching a bot to play NES games" videos. There are cases where it will jump off a big ledge because doing so lets it move right for a while, which the program considers a "good" thing. It then dies, but apparently that "good" action kept getting selected for a long time until the program figured out a new way of doing things.
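I can't speak for what that bot's internals actually do, but the failure mode is easy to reproduce in miniature. Hypothetical setup: a strip of positions where each step right pays +1 and the last position is a fatal pit. With a small learning rate, the agent keeps taking the "good" fatal step for several episodes before its estimate finally goes negative (in this toy version it just learns to stop short of the ledge):

```python
# Toy sketch of the "jump off the ledge because moving right feels good"
# failure mode. The level, rewards, and learning rate are all invented:
# positions 0..5, where moving right pays +1 and position 5 is a fatal pit.
PIT = 5
ALPHA = 0.1  # small learning rate, so the bad lesson sinks in slowly

def run_episode(step_value):
    pos, total = 0, 0.0
    while pos < PIT:
        if step_value[pos] <= 0:
            return total           # estimate finally says "not worth it": stop
        reward = -2.0 if pos + 1 == PIT else 1.0
        # Nudge the estimate for the step just taken toward what it paid out.
        step_value[pos] += ALPHA * (reward - step_value[pos])
        pos += 1
        total += reward
    return total

step_value = [1.0] * PIT  # starts out believing every rightward step is great
for episode in range(8):
    ret = run_episode(step_value)
    print(f"episode {episode}: return {ret:5.1f}, "
          f"values {[round(v, 2) for v in step_value]}")
```

Run it and the first few episodes all end in the pit; only once the value of that last step drifts below zero does the behavior change. That's the "kept getting selected for a long time" part in miniature.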