As we slumber, our brains filter the information collected during waking hours. Neurological processes spring into action. They discard what is irrelevant and consolidate what is important; they form and store memories. These mechanisms, found throughout the mammalian world, are so effective that a team of Italian researchers has mathematically modeled them for use in artificial intelligence.

    The result is an algorithm that expands the storage capacity of artificial networks by forcing them into an off-line sleep phase during which they reinforce relevant memories (pure states) and erase irrelevant ones (spurious states).

___________________________________________________

The article's intended audience isn't computer scientists. If you're a huge nerd like me, you're probably asking yourself, "What's so novel about a neural network that trains itself and then performs on test data? That's like every neural net I've ever written..."

Fear not, my geeky cohorts! Here's the paper, and a choice excerpt I pulled from the abstract:

    Inspired by sleeping and dreaming mechanisms in mammal brains, we propose an extension of this model displaying the standard on-line (awake) learning mechanism (that allows the storage of external information in terms of patterns) and an off-line (sleep) unlearning&consolidating mechanism (that allows spurious-pattern removal and pure-pattern reinforcement): this obtained daily prescription is able to saturate the theoretical bound α_c = 1, remaining also extremely robust against thermal noise.

    ...

    Beyond obtaining a phase diagram for neural dynamics, we focus on synaptic plasticity and we give explicit prescriptions on the temporal evolution of the synaptic matrix. We analytically prove that our algorithm makes the Hebbian kernel converge with high probability to the projection matrix built over the pure stored patterns. Furthermore, we obtain a sharp and explicit estimate for the “sleep rate” in order to ensure such a convergence. Finally, we run extensive numerical simulations (mainly Monte Carlo sampling) to check the approximations underlying the analytical investigations and possible finite-size effects, finding overall full agreement with the theory.

WOW. If you followed every single word of that, you're better at this than I am. Either way, I thought this was a fun article to enjoy with my morning coffee!
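
If you'd rather poke at the idea in code than in statistical-mechanics jargon, here's a toy NumPy sketch of the flavor of it. To be clear: this is not the authors' code, and instead of simulating the gradual unlearning dynamics it jumps straight to the limit the abstract describes, where the Hebbian kernel is replaced by the projection matrix built over the stored patterns. Network size, pattern count, and noise level are made-up toy values.

    # Toy sketch: "awake" Hebbian storage vs. the "slept" projection kernel
    # in a Hopfield network. Not the paper's algorithm verbatim: the sleep
    # phase here is collapsed to its stated end point (the projection matrix
    # over the stored patterns) rather than evolved gradually.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 200   # neurons
    P = 60    # stored patterns; load P/N = 0.3 is above the classic ~0.14 limit
    patterns = rng.choice([-1, 1], size=(P, N))

    # Awake phase: classic Hebbian kernel, J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu
    J_hebb = (patterns.T @ patterns) / N
    np.fill_diagonal(J_hebb, 0)

    # Sleep phase, taken to its limit: the projection (pseudo-inverse) kernel,
    # J = (1/N) * Xi^T C^{-1} Xi with C = (1/N) * Xi Xi^T the pattern overlaps.
    C = (patterns @ patterns.T) / N
    J_sleep = (patterns.T @ np.linalg.inv(C) @ patterns) / N
    np.fill_diagonal(J_sleep, 0)

    def retrieve(J, state, sweeps=20):
        """Zero-temperature asynchronous dynamics: s_i <- sign(sum_j J_ij s_j)."""
        s = state.copy()
        for _ in range(sweeps):
            for i in rng.permutation(N):
                s[i] = 1 if J[i] @ s >= 0 else -1
        return s

    def overlap(a, b):
        return float(a @ b) / N

    # Cue the network with a 15%-corrupted copy of pattern 0 and see which
    # kernel still pulls it back.
    noisy = patterns[0] * np.where(rng.random(N) < 0.15, -1, 1)
    print("Hebbian kernel overlap:", overlap(retrieve(J_hebb, noisy), patterns[0]))
    print("Slept kernel overlap:  ", overlap(retrieve(J_sleep, noisy), patterns[0]))

At a load of 0.3 the plain Hebbian kernel is past its capacity and tends to wander off toward spurious states, while the projection kernel should pull the noisy cue back to an overlap near 1, which, if I'm reading the abstract right, is exactly the before-and-after that sleeping is supposed to buy you.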

