Hey hubski!
I've been working on a really interesting topic lately, and since there seems to be some interest in this subject around hubski (and because I like talking about what I'm doing and getting people interested in cool stuff), I'm going to talk about Artificial Neural Networks. Shoutout to insomniasexx!!
Anyway, let's get down to business.
An artificial neural network is, as one would assume, an artificial replication of a neural network! Cool huh? But since that kind of argument doesn't really lead to new knowledge, let's break things down from the start:
- A neural network is a network of neurons.
- Neurons are the most important part of our nervous systems: they are what transfer information and, when linked in networks, allow for complex thoughts. This is a neuron:
- Neurons, in the brain (and in other places), work by reacting to a change in voltage transmitted to them by other neurons (the junction where this happens is a synapse). They are what we call electrically excitable.
- Each neuron in the brain has two kinds of connections to other neurons: input and output. Generally, a neuron will have several of each (thousands, in a human brain).
- How signals are transmitted through a bunch of neurons in the brain is the following: disregarding the signal's origin (which isn't very important to us in ANNs (that's short for artificial neural networks, remember that) because we feed those to our ANN ourselves), when a neuron gets a signal from another neuron connected to its "input part", it may or may not send that signal forward. If that signal isn't sent forward it, obviously, ends there; if it is, then it will be up to the next neuron (to which the first is outputting) what happens to it.
- Now the magic: in a very very very oversimplified way, what a neuron itself does is calculate whether to pass a signal forward or not. How it does this is, in a very oversimplified way, add up the inputs from the neurons that input into it and, if the sum of those inputs is over a certain threshold, pass the signal on. This is the model we use in ANNs (there's a tiny code sketch of this right after the list).
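If it helps to see that in code, here's a tiny Python sketch of that oversimplified neuron. This is just my own illustration (the function name and numbers are made up), not anything official:

```python
# A bare-bones "neuron": add up the incoming 0/1 signals and fire
# only if the sum reaches the threshold.
def neuron_fires(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

# Example: with a threshold of 2, this neuron fires only when at least
# two of its three inputs are active.
print(neuron_fires([1, 0, 1], 2))  # 1 (fires)
print(neuron_fires([1, 0, 0], 2))  # 0 (stays quiet)
```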
I hope I was clear with my wording there, but if you didn't get everything don't sweat it. That was just a basic introduction to the mechanical workings of a neuron which you will certainly understand better when I give some concrete examples.
Alright, that was very pretty but how does an ANN actually work?
The model we use for a neuron in ANNs, specifically, is the following:
(note that an artificial neuron will probably have more than one output)
That image's pretty good, but words also do a really good job in the explanation department. In that picture you can see an artificial neuron connected to n inputs and 1 output. What it's going to do is read all of the inputs (all of which are binary, 0 or 1), multiply each by its respective weight (to be explained further) and output a signal if the sum of those is higher than its threshold. Pretty basic, right?

The deal with the weights is this: to be able to fine-tune the result a neuron gives in response to a specific input, and to further its mimicry of a real neuron, instead of just adding up the inputs and seeing if they're over the threshold, we assign a weight to each inputting neuron. That makes it much easier to create and adjust, on the fly, the output that a certain set of inputs will yield, and allows much more complex behaviour with much simpler neural nets than if we took the inputs directly. Since we (usually) allow this value to be between -1 and 1, we can also have negative inputs, or inhibitory connections. This is also great because it allows us to have more complex calculations with fewer neurons.

Only one more thing about our model: each neuron (usually) has its own threshold value. We treat it as one of the weights, whose input is always -1. That makes it so instead of writing
x_1*w_1 + x_2*w_2 + ... + x_n*w_n >= threshold
we can write
x_1*w_1 + x_2*w_2 + ... + x_n*w_n - 1*threshold >= 0
which can make programming a model of a neural network and changing its values on the fly easier. (Don't sweat the equations, I don't know how to format them here and they're probably unreadable. I just wanted to provide a concrete example of how a neuron calculates whether it passes a signal forward or not).
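To make that threshold-as-a-weight trick concrete, here's a short Python sketch of the artificial neuron described above. Again, just my own illustration (function name and numbers are made up), not a definitive implementation:

```python
# Artificial neuron: binary inputs, one weight per input, and the threshold
# folded in as an extra weight whose input is always -1, so the firing rule
# becomes "weighted sum >= 0".
def neuron_output(inputs, weights, threshold):
    xs = inputs + [-1]          # constant -1 input for the threshold
    ws = weights + [threshold]  # the threshold acts as that input's weight
    total = sum(x * w for x, w in zip(xs, ws))
    return 1 if total >= 0 else 0

# Example: one excitatory connection (+0.8), one inhibitory connection (-0.5),
# and a threshold of 0.5.
print(neuron_output([1, 0], [0.8, -0.5], 0.5))  # 1: 0.8 - 0.5 >= 0
print(neuron_output([0, 1], [0.8, -0.5], 0.5))  # 0: -0.5 - 0.5 < 0
print(neuron_output([1, 1], [0.8, -0.5], 0.5))  # 0: 0.8 - 0.5 - 0.5 < 0
```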
I want to continue writing but it's 3 am here and I'm starting to get quite drunk, and I should probably go to bed (or at least stop writing about neural networks). I want to tell you guys more about neural networks though, so I'll post the remainder of this tomorrow!!
Definitely, and this is one of the most important things I want to discuss after I finish writing a reasonably comprehensive rundown of the rest of the inner workings of artificial neural networks. There are soooooo many possible applications for neural networks due to their intrinsic ability to adapt to a problem, as long as you give them a reasonable input and output.
what a neuron itself does is calculate whether to pass a signal forward or not. How it does this is, in a very oversimplified way, add up the inputs from the neurons that input into it and, if the sum of those inputs is over a certain threshold, pass the signal on.

As Hofstadter put it (in Gödel, Escher, Bach: An Eternal Golden Braid): "In any case, it is simple addition which rules the lowest level of the mind. To paraphrase Descartes' famous remark, 'I think, therefore I sum' (from the Latin Cogito, ergo am)."
You did a great job explaining the neuron (afaik), and I learned a lot. This is a seriously interesting topic. I would love to understand more about all those dogs in those creepy Google images, so please continue!
Thanks a lot man! I myself wondered whether MathJax integration would benefit hubski while writing my post, but then I wasn't sure because I feel like hubski tries to keep things light. MathJax integration wouldn't weigh things down a lot though, I feel, and would be pretty cool.
That's really cool! Can I ask what you're using as a framework for your neural networks, or whether you're using one at all? When I started my current project I played around with a few different frameworks so I could simplify most of my work, but I ended up writing my code from scratch (efficiency was a major concern). Machine learning is a really interesting topic (with soooo much stuff to talk about) and I'll definitely talk about it. Anyway, it seems like you've got a very cool idea and I want to know more about it!
Thanks, I've used Torch as a framework. I probably wouldn't be able to produce anything more efficient myself. I'm currently still very much at the layperson stage and taking lots of learning materials and code from examples online but I've been really pleased with the results so far.