I've been working on a really interesting topic lately, and since there seems to be some interest in this subject around hubski, and because I like talking about what I'm doing and getting people interested in cool stuff, I'm going to talk about Artificial Neural Networks. Shoutout to insomniasexx!!
Anyway, let's get down to business.
An artificial neural network is, as one would assume, an artificial replication of a neural network! Cool huh? But since that kind of argument doesn't really lead to new knowledge, let's break things down from the start:
- A neural network is a network of neurons.
- Neurons are the most important part of our nervous systems: they are what transfer information and, when linked in networks, allow for complex thought. This is a neuron:
- Neurons, in the brain (and in other places), work by reacting to a change in voltage transmitted to them by other neurons (the connection through which this happens is called a synapse). They are what we call electrically excitable.
- Each neuron in the brain has two kinds of connections to other neurons: input and output. Generally, a neuron will have several (thousands, in a human brain) of each.
- How signals are transmitted in the brain, through a bunch of neurons, is the following: disregarding the signal's origin (which isn't very important to us in ANNs (that's short for artificial neural networks, remember that) because we feed those signals to our ANN ourselves), when a neuron gets a signal from another neuron connected to its "input part", it may or may not send that signal forward. If the signal isn't sent forward it obviously ends there; if it is, then it's up to the next neuron (the one the first outputs to) what happens to it.
- Now the magic: in a very very very oversimplified way, what a neuron itself does is calculate whether to pass a signal forward or not. How it does this is, in a very oversimplified way, add up the inputs from the neurons that feed into it and, if the sum of those inputs is over a certain threshold, pass the signal on. This is the model we use in ANNs.
I hope I was clear with my wording there, but if you didn't get everything don't sweat it. That was just a basic introduction to the mechanical workings of a neuron which you will certainly understand better when I give some concrete examples.
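As a first concrete taste, here's that add-up-and-compare idea as a tiny Python sketch (the function name and numbers are just mine for illustration, not any standard):

```python
def neuron_fires(inputs, threshold):
    """Sum the incoming signals and fire (return 1) if they reach the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Three input neurons, each firing (1) or silent (0):
print(neuron_fires([1, 0, 1], threshold=2))  # fires: 1 + 0 + 1 = 2 >= 2, so 1
print(neuron_fires([1, 0, 0], threshold=2))  # stays silent: 1 < 2, so 0
```

That's really all a single model neuron does at this level: sum, compare, fire or don't.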
Alright, that was all very pretty, but how does an ANN actually work?
Specifically, the model we use for a neuron in ANNs is the following:
That image's pretty good, but words also do a really good job in the explanation department. In that picture you can see an artificial neuron connected to n inputs and 1 output. What it's going to do is read all of the inputs (all of which are binary, 0 or 1), multiply each by its respective weight (to be explained further) and output a signal if the sum of those products is higher than its threshold. Pretty basic, right?

The deal with the weights is this: to be able to fine-tune the result a neuron gives in response to a specific input, and to further its mimicry of a real neuron, instead of just adding up the inputs and seeing if they're over the threshold, we assign a weight to each inputting neuron. That makes it much easier to create and adjust, on the fly, the output that a certain set of inputs will yield, and it allows for much more complex behaviour with much simpler neural nets than if we took the inputs directly. Since we (usually) allow each weight to be between -1 and 1, we can also have negative inputs, or inhibitory connections, which lets us do more complex calculations with fewer neurons.

Only one more thing about our model: each neuron (usually) has its own threshold value. We treat it as one of the weights, whose input is always -1. That makes it so instead of writing
x_1*w_1 + x_2*w_2 + ... + x_n*w_n >= threshold
we can write
x_1*w_1 + x_2*w_2 + ... + x_n*w_n - 1*threshold >= 0
which can make programming a model of a neural network, and changing its values on the fly, easier. (Don't sweat the equations; I just wanted to provide a concrete example of how a neuron calculates whether it passes a signal forward or not.)
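In code, that threshold-as-a-weight trick looks something like this rough Python sketch (the names and example values are mine, just for illustration):

```python
def artificial_neuron(inputs, weights, threshold):
    """Weighted model neuron: fires if sum(x_i * w_i) - 1*threshold >= 0."""
    # Fold the threshold in as one more weight whose input is always -1:
    inputs = inputs + [-1]
    weights = weights + [threshold]
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= 0 else 0

# Two binary inputs; the second connection is inhibitory (negative weight).
print(artificial_neuron([1, 1], [0.8, -0.5], threshold=0.2))  # 0.8 - 0.5 - 0.2 >= 0, fires: 1
print(artificial_neuron([0, 1], [0.8, -0.5], threshold=0.2))  # -0.5 - 0.2 < 0, silent: 0
```

Notice how the comparison against a threshold just became a comparison against 0, which is exactly why the trick makes the code (and on-the-fly adjustment of values) simpler.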
I want to continue writing but it's 3 am here and I'm starting to get quite drunk, and I should probably go to bed (or at least stop writing about neural networks). I want to tell you guys more about neural networks though, so I'll post the remainder of this tomorrow!!