Last century, when I worked for Orbism, a co-worker, Olivier Ansaldi (now working for Google), showed me a Java applet he was working on that learned how to balance a stick using a neural net.
I decided to try it myself, and yesterday I wrote a neural net that does the same thing.
There are three neurons in the net in total – one bias, one input (the stick angle) and one output.
It usually takes about 20 iterations to train the net. Sometimes it gets trained in such a way that the platform waggles back and forth like a drunk, and sometimes it gets trained so perfectly that it’s damn boring to watch (basically, it’s a platform with a stationary stick on it).
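To give a feel for how little is involved, here’s a rough sketch of a three-neuron balancer – not the original code; the weight names, the tanh squashing, the starting weights and the nudge rule are all my own assumptions:

```javascript
// A minimal sketch (not the original applet's code): one bias, one input
// carrying the stick angle, and one output that drives the platform.
function respond(net, angle) {
  // Output in (-1, 1): how hard to push the platform (positive = right).
  return Math.tanh(net.wBias + net.wAngle * angle);
}

function train(net, angle, rate) {
  // Push against the lean: the target is the opposite sign of the angle.
  const target = -Math.sign(angle);
  const error = target - respond(net, angle);
  net.wAngle += rate * error * angle;
  net.wBias += rate * error;
}

// Arbitrary small starting weights, then roughly 20 training iterations,
// each one seeing a lean to either side.
const net = { wBias: 0.1, wAngle: 0.1 };
for (let i = 0; i < 20; i++) {
  train(net, 0.3, 0.5);
  train(net, -0.3, 0.5);
}
```

After training, the net pushes the platform against whichever way the stick is leaning – which is all stick balancing really asks of it.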
Anyway… for my next trick, I’ll try building a net which can recognise letters and numbers.
I’m getting interested in my robot gardener idea again, so am building up a net that I can use for it.
Some points about how this differs from “proper” ANNs:
- Training is done against a single neuron at a time (not important in this case, as there are only three neurons anyway).
- This net attaches all “normal” neurons to all other neurons. I don’t like the “hidden layer” model of ANNs – I think it’s limiting.
- No back-prop algorithms are used – I don’t trust “perfect” nets and prefer a bit of organic learning in my nets.
- The net code itself is object-oriented and self-sufficient. It would be possible to take the code and use it in another JS project.