17 Nov

KFM 1.2 released

That time of year again, kiddies! KFM 1.2 has been released. This is the last KFM version (apart from bug-fixes on the 1.2.x tree) which will have support for PHP4.


13 Nov

lesson learned – recurrent networks are sequential

Okay – I was watching my neural network learning today and something surprising happened about an hour into the sequence – suddenly, instead of getting the tests 90%+ correct, it was getting them almost 100% incorrect!

It took me a few moments to realise what had happened – for some reason, I had decided to teach the network the difference between two objects for a lot of sessions before adding a third ingredient. I thought that would teach the network to know the difference between them very finely.

It turns out, though, that the network was smarter than me. It learned that the answer to the current test is… the exact opposite of whatever the previous answer was. The damn thing learned to predict my tests!

So – what I’ve done to correct this is that at the beginning of any test, I now erase the neural memory in the network. In effect, I’m whacking the network on the side of the head so it can’t learn to predict my sequences and must learn the actual photos instead.
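
For illustration, here’s a minimal sketch of that reset, in the spirit of the demo’s JavaScript – the object shape (net.neurons, activation, memory) is hypothetical, not the actual code:

    // Hypothetical sketch: wipe any recurrent state before each test so the
    // network can only learn the images, not the order of the tests.
    function resetMemory(net) {
        for (var i = 0; i < net.neurons.length; i++) {
            net.neurons[i].activation = 0; // current output value
            net.neurons[i].memory = 0;     // value fed back on the next step
        }
    }

    function runTest(net, inputs) {
        resetMemory(net);        // forget the previous test entirely
        return net.evaluate(inputs);
    }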

12 Nov

ANN progress

Last week, I posted a list of goals for my neural network for that weekend. I’ve managed most of them except for the storing of the network.

Didn’t do it last weekend because I was just plain exhausted. After writing the post, I basically went to sleep for the rest of the weekend.

demo (Firefox only)

The demo is not straightforward. Usually, I create a demo which is fully automatic. In this case, you need to work at it: enter an image URL (JPG-only for now), a name (or names) for a neuron which the photo should answer “yes” to, and similarly for “no”, then click “add image”.

The network then trains against that image and will alert you when it’s done. The first one should complete pretty much instantly. Subsequent ones are slower to learn (in some cases, /very/ slow) as they need to fit into the network without breaking the existing learned responses.

I’ve put a collection of images here. The way I do it is to insert one image from Dandelion, enter “dandelion” for yes and “grass,foxtail” for no. Then, when that’s finished, take an image from Foxtail and adapt the neuron names, then take one from Grass, and so on. Over time, given enough inputs, I think the network would train well enough to learn new images almost as fast as the first one.

Onto the problems… I spent all weekend on this. The first problem is the sheer number of inputs – a 160×120 image has 19200 pixels, and with three RGB channels per pixel, that’s 57600 inputs!

So the first solution was to read in the RGB channels and convert them to greyscale – a simple calculation – which cut the inputs by two thirds.
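
As a rough sketch of that calculation – assuming the usual luminance weights; the demo’s exact numbers may differ:

    // Collapse an RGBA pixel array (as returned by getImageData) into one
    // greyscale input per pixel, cutting three channels down to one.
    function toGreyscale(rgba) {
        var inputs = [];
        for (var i = 0; i < rgba.length; i += 4) {
            var grey = 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
            inputs.push(grey / 255); // scale to 0..1 for the network
        }
        return inputs;
    }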

Then there was the speed – JavaScript is pretty fast these days, but it’s nowhere near as fast as C, Java or even Flash. If a loop runs for too long, an annoying pop-up appears saying “your browser appears to have stopped responding”. So, I needed to cut the training algorithms apart so they worked in a more “threaded” fashion. This involved a few setTimeout() replacements for some for(...) loops.
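
Roughly the shape of that change, as a hypothetical example rather than the demo’s actual code – each chunk of work schedules the next with setTimeout so the browser gets control back in between:

    // Run a long training job in small batches so the "unresponsive script"
    // warning never gets a chance to appear.
    function trainInChunks(net, totalCycles, chunkSize, onDone) {
        var done = 0;
        function step() {
            var end = Math.min(done + chunkSize, totalCycles);
            for (; done < end; done++) {
                net.trainOneCycle(); // hypothetical per-cycle training call
            }
            if (done < totalCycles) {
                setTimeout(step, 0); // yield to the browser, then continue
            } else if (onDone) {
                onDone();
            }
        }
        step();
    }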

Another JS problem had to do with the <canvas>. There is no “getPixel()” for canvas yet, so I have to rely on getImageData, which returns raw data. According to the WHATWG specs, you cannot read the pixels of a canvas which has ever held an image from an external server, so I needed to create a simple proxy script to handle that. (Just thought of a potential DoS for that – will fix in a few minutes.)
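
The reading side looks roughly like this – “proxy.php” is a stand-in name for the proxy script, and the rest is the standard canvas API:

    // Fetch the image through a same-origin proxy so the canvas isn't
    // "tainted", then read back its raw RGBA data.
    function getPixels(url, width, height, callback) {
        var img = new Image();
        img.onload = function () {
            var canvas = document.createElement('canvas');
            canvas.width = width;
            canvas.height = height;
            var ctx = canvas.getContext('2d');
            ctx.drawImage(img, 0, 0, width, height); // scale to the working size
            callback(ctx.getImageData(0, 0, width, height).data);
        };
        img.src = 'proxy.php?url=' + encodeURIComponent(url); // assumed proxy name
    }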

Another problem had to do with the network topology. Up until now, I was using networks where there were only inputs and outputs – no hidden neurons. The problem is that when you train a network to recognise something in a photo based on the actual image values, you are relying on the exact pixels, and possibly losing out on quicker tricks such as combining inputs from neurons which say “is there a circle” or “are there parallel lines”. Also, networks where all neurons can read from all inputs are slow. A related problem here is that it is fundamentally impossible to know exactly what shortcut neurons would give you the right answer (otherwise there would be no need for a neural network!).

So, I needed to separate the inputs from the outputs with a bunch of hidden neurons. The way I did this was to have a few rules – inputs connect to nothing, hidden neurons connect to everything, outputs connect to everything except inputs.
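
A sketch of those wiring rules – the net.inputs / net.hidden / net.outputs names are my own, not the demo’s:

    // Inputs read from nothing; hidden neurons read from everything (except
    // themselves); outputs read from everything except the raw inputs.
    function connect(net) {
        var all = net.inputs.concat(net.hidden, net.outputs);
        net.hidden.forEach(function (h) {
            h.sources = all.filter(function (n) { return n !== h; });
        });
        net.outputs.forEach(function (o) {
            o.sources = net.hidden.concat(net.outputs).filter(function (n) {
                return n !== o;
            });
        });
        // net.inputs get no sources at all – they just hold pixel values.
    }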

The hope with this is that the outputs would read from the hidden neurons, which would pick up on some subtle undefinable traits they spot in the inputs. So the outputs would be almost like a very simple network “overlaid” on the more complex hidden neurons.

Problem is – how do you train these hidden units in a meaningful way? I’m still working on this… Right now, it’s based on “expectation” – when an output neuron produces a result, it is either right or wrong. If the neuron is then to be trained, it tells the hidden units it relies on what output it expected from them in order to come to the correct conclusion. The hidden units wait until a number of these corrections have been pointed out to them, then adjust themselves accordingly. (Two thoughts just occurred to me: 1) try adjusting the hidden neurons after every ‘n’ corrections, instead of at arbitrary points, and 2) only correct a hidden unit if the output neuron’s result was incorrect.)
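
One possible reading of that scheme, purely as a sketch – the correction rule and all of the names here are my guesses at the description above, not the real code:

    // When an output neuron is corrected, it notes what it wishes a hidden
    // unit had produced; after enough notes, the hidden unit nudges itself.
    function noteExpectation(hidden, expectedOutput) {
        hidden.pending.push(expectedOutput);
        if (hidden.pending.length >= hidden.correctionThreshold) {
            var avg = hidden.pending.reduce(function (a, b) { return a + b; }, 0) /
                      hidden.pending.length;
            // small nudge toward the average expected output
            hidden.bias += 0.1 * (avg - hidden.activation);
            hidden.pending = [];
        }
    }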

Another thing was that there is no way of knowing how many hidden units are actually required for the network to produce the right results, so the way I get around this is to train the network and, every (hidden units * 10) cycles, if a solution has still not been found, add another hidden neuron. This means that at the beginning of training, hidden units are added pretty regularly, but as the network matures, new neurons are added more and more rarely.
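
The growth schedule, sketched with hypothetical names again:

    // Every (hidden units * 10) cycles without a solution, grow the hidden layer.
    function trainUntilSolved(net, maxCycles) {
        var sinceLastGrowth = 0;
        for (var cycle = 0; cycle < maxCycles; cycle++) {
            net.trainOneCycle();
            if (net.isSolved()) return true;
            sinceLastGrowth++;
            if (sinceLastGrowth >= net.hidden.length * 10) {
                net.addHiddenNeuron(); // frequent early on, increasingly rare later
                sinceLastGrowth = 0;
            }
        }
        return false;
    }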

I’ve caught the ANN bug. After spending the weekend on this, I still have more ideas to work on which I’ll probably do during the week’s evenings. Some ideas:

  • Store successful networks after training.
  • Have a number of tests installed from “boot” instead of needing them to be entered manually.
  • Disable the test entry form when in training mode.
  • Work some more on the hidden unit training methodology – it’s still way off.
  • Allow already-entered training sets to be edited (for example, if you want to add “rose” to the “no” list of “grass”).

I think that a good network will have a solid number of hidden neurons which do not change very often. The outputs will rely on feedback from each other as well as those hidden units. For example, “foxtail” could be gleaned from a situation where the “dandelion” is a definite no, and “grass” is a vague yes, along with some input from the hidden units approximating to “whitish blobs at the top” (over-simplification, yes, but I think that’s approximately it).

Update: it turns out the network learns very, very well when given just two output neurons to learn – I’ve managed to get it to learn almost every image in “dandelions” and “grass” with only 4 hidden units and 2 output neurons.

07 Nov

word->html thing

If you use a rich-text editor to allow your clients to post stuff to the net, then you’ve probably come across the annoying problem of people pasting content from Word into their sites, and then complaining that their Internet Explorer crashes when they view it.

This is because when content is pasted from Word, it includes crap such as <o:p></o:p>, which Internet Explorer does not like.

One very small regexp to clean up bits like that is:

$body=preg_replace('#</?([ovw]|st1):[^>]*>#','',$body);

That’s cleaned up one site I’m working on anyway. There may be more Word-specific elements out there still, but I haven’t noticed them yet.

03 Nov

today's ANN goals

I’ve been doing well over the last two weeks – I started with an ANN which can balance a pole, and upped that by then creating a net which could recognise letters.

The plan for today is a bit more ambitious. I’m writing it down here in case it takes longer than a day to write it. In general, I want to write a PHP application which will allow you to upload images, which the ANN will then try to recognise. If it gets it right, all well and good. If it gets it wrong, you can correct it.

Some milestones for the project:

  • readers can upload images and have them tested and/or added to the training sequence
  • extraction of image pixels using <canvas>
  • automatic creation of new neurons as they are required
  • best net is stored on server, so new readers always start with a working net

I think this may grow into a damned cool thing – I’m already thinking of other cool features like distributed nets, or background nets which can be placed in other pages of a site so the thing can continue training even though the user is not actively viewing the thing (that might be a bit cheeky though).

Anyway – now that I’ve written what I intend to do, I suppose I’d better actually do it.

01 Nov

bank of ireland 365online "upgrade"

The login procedure for BOI’s 365online changed recently. I am not sure why they did it, but they’ve made it less secure.

Previously, the login was something like this:

Please enter your six digit User ID
Please enter the last four digits of your contact number
Please enter the first, second and fourth numbers of your pin
***

They separated the form in the upgrade. The first two inputs are on the first page, and the PIN input is on the second page.

Notice that the User ID and contact number fields above are not password inputs – when something is typed into them, you can see what was typed. The only security in the above form is in the PIN inputs: each one is a password field, and only a random selection of the PIN’s digits is requested, so someone would need to watch your fingers to know what you typed.

Contrast that with the new PIN input system:

Please enter the first, second and fourth numbers of your pin
[three on-screen digit pickers, each showing the digits 0–9; digits already selected are masked as ‘*’]

In the actual BOI form, when you select a number from the above, it is masked immediately after selection by changing it to a ‘*’ symbol. But that makes no bloody difference at all because everyone in the room can see me choose the numbers in the first damn place.

And if they miss it the first time around, all they need to do is wait for me to transfer some money to an account, because the PIN has to be entered again – using the same stupid insecure form!

Please, BOI, when you decide to change a login form, make it more secure – not less.