23 Jan

this year so far

I’ve started teaching guitar, and I think I have a pretty good method. My student is off work for about six weeks due to some back surgery, so I’m assuming he’ll have time to learn some theory. I started off by explaining “power chords” (or “fifths” – made from the root note and the fifth note), and showing how major and minor chords are built from the 1st, 3rd and 5th notes of the major scale (Ionian mode) and minor scale (Aeolian mode) from whichever root you want.
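
The arithmetic behind that is simple enough to put into a few lines of C – a quick sketch, counting semitones rather than scale degrees (a major triad is the root plus 4 then 3 semitones, a minor triad the root plus 3 then 4, and a power chord is just the root plus 7):

    /* Sketch: build major and minor triads from any root by counting
     * semitones – major = root + 4 + 3, minor = root + 3 + 4.
     * (A power chord is just the root and the note 7 semitones up.) */
    #include <stdio.h>

    static const char *NOTES[12] = {
        "A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"
    };

    static void triad(int root, int third, const char *label)
    {
        printf("%s %s: %s %s %s\n", NOTES[root], label,
               NOTES[root],
               NOTES[(root + third) % 12],   /* major or minor third */
               NOTES[(root + 7) % 12]);      /* perfect fifth */
    }

    int main(void)
    {
        triad(0, 4, "major");   /* A C# E */
        triad(0, 3, "minor");   /* A C  E */
        triad(7, 4, "major");   /* E G# B */
        triad(7, 3, "minor");   /* E G  B */
        return 0;
    }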

Instead of showing twenty open-string versions of chords and expecting to have to remind him of them all again the next week, I demonstrated only four chord shapes – A, Am, E and Em – and showed how barre chords can be used with those shapes to make any major or minor chord at all.

Hopefully, he has not run screaming for the hills already… at least I only show him songs he actually names himself. Last week, I showed him The Jam’s That’s Entertainment, and this week, I’ll be showing him Pink Floyd’s Wish You Were Here (which will be a handy introduction to the open-string chords G, C and D, as well as hammer-ons and pull-offs).

On the work front, I’m currently working on some pretty significant improvements to our WebME engine’s AJAX shopping cart. In the admin area, I’m working on allowing products to be created and dragged around categories as if they were files in a file manager. To manage this, I’ve ported my KFM project into WebME and converted the appropriate areas.

Robots… my Mini-ITX bot has served its purpose – I built it to see if I could, and I could, so it’s time to strip it down and reuse its parts for something else (file server, maybe). I’ll be building a few robots this year, using the Gumstix platform for the brain. This should allow for much better battery life, as well as a much smaller body size.

We (Bronwyn and I) will be moving house this year. When we moved into our present house, we told the landlords that it would only be for a year or two while we looked for a permanent house to buy and live in. Bronwyn’s parents have given us an enormous boost in this – they found a two-storey building with a converted attic, about the same size as our current cottage stacked on top of itself three times, and bought it for us. Of course, we’ll still need to pay off the mortgage, etc, but the gesture certainly hasn’t gone unnoticed! Thanks, guys! More on that later, when details are more concrete.

My son is still a genius. Yesterday, at age 3.3, he wrote his name without prompting. He forgot the “T” in “JARETH”, but still – can your 3.3-year-old son write his name? Of course, there’s still not a word out of him yet, but he’s making progress there as well – every night, he mumbles himself to sleep, trying out various sounds.

Update: I got a phone call earlier to say that Bronwyn passed him while he was playing in the house, and noticed he had written his name in full, not forgetting the “T”.

Boann is calming down a little – instead of screaming all day and night, she now only screams for a while, and I only spend an hour or so every night (usually some time between 1am and 3am) walking up and down wondering why she’s so wide awake at that unholy hour.

Life is getting slowly better.

I have a doctor’s appointment on Friday – I’ve had a lump in a certain area since December, which may or may not be cancerous. We’ll see.

I emailed an old friend we’d had a falling-out with about seven years ago, over some stupid event that may or may not have happened when I was blind drunk after two bottles of vodka. He’s the piercer for a shop in Canada, and is apparently doing quite well for himself! Hi Andy! Bygones have become bygones, and hopefully we’ll be able to sit down over a few jars at some point and laugh at ourselves.

Anyway… gotta get to work. It’s just past 9am, and I am officially into work-time now, so I’d better stop nattering.

24 Sep

my robot is now an actual robot

I took the time today to connect up some wheels to the base of my bot. I can now place the bot on the ground on the far side of the room from me, and tell it to move forward or backward from my laptop.

The next step is left and right. I just need to scrounge up two transistors. I’m sure I can find them somewhere.

Robot, from the front: you can see the Mini-ITX ME6000 motherboard, powered by a 12V battery, using a PW200 DC-DC converter to control the power input. You can also see two cameras and a wireless card.

Robot, from the back: you can see my messy wiring. I took a parallel port cable and stripped it. The “ground” wire is connected to the bracket on the left. After a bit of testing, I found the wires that correspond to the 1 and 2 bits when accessed through 0x0378 (the parallel port address on my machine). Those wires are connected to the circuit board, and control the power going to the motors through some transistors I ripped from an old radio.
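
For anyone curious, driving those bits from a program looks something like the following – a minimal sketch assuming Linux’s ioperm/outb interface and root privileges (the bit assignments here are illustrative, not necessarily the exact ones on my board):

    /* Minimal sketch: drive two motor-control bits on the parallel port.
     * Assumes Linux x86, root privileges, and port 0x378; the bit
     * assignments below are illustrative. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/io.h>

    #define LPT_DATA 0x378  /* parallel port data register */
    #define MOTOR_A  0x01   /* data bit with value 1 */
    #define MOTOR_B  0x02   /* data bit with value 2 */

    int main(void)
    {
        if (ioperm(LPT_DATA, 1, 1) != 0) {  /* request I/O port access */
            perror("ioperm (are you root?)");
            return EXIT_FAILURE;
        }
        outb(MOTOR_A | MOTOR_B, LPT_DATA);  /* both motors on: forward */
        sleep(2);
        outb(0, LPT_DATA);                  /* all bits low: stop */
        return EXIT_SUCCESS;
    }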

Robot, from the bottom: this view shows the wheels and motor controller. A friend of Bronwyn’s dropped them in to me a few weeks ago. They’re from an iBotz robot – maybe this one. They’re powered by a 5V connection I found on the PW200.

18 Sep

30 in 1:30, arrr….

In one hour and thirty minutes, it’ll be September the 19th, I’ll be 30 years old, and it will also – probably most importantly – be Talk Like a Pirate Day.

What I am doing right this moment:

  • installing SQLite on my home server for the next batch of KFM upgrades.
  • upgrading my robot from FC4 to FC5 – it’s a Mini-ITX motherboard with a wireless card, sturdy wheels, two cameras and a 12V lead-acid battery, currently sitting on my kitchen table.
  • being harassed by my wife, who thinks that I’m old – ignoring that she will also be this age in about two years.

I was reading about foreign keys in MySQL earlier. The last time I’d looked at them was about five years ago, when they were just being introduced with the InnoDB storage engine, and I haven’t really had the time since then to mess around. Today, I took an hour out to read up on them and try out some code, and was pretty impressed to find that they work great – even automatic cascades triggered by deletes on the referenced (parent) rows. Maybe I shouldn’t really be surprised at that, though – they’ve been supported since about MySQL 3.23 or so.
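
What I tried was along these lines (a sketch – the table names are just illustrative):

    -- A sketch of the sort of thing I tried: deleting a parent row
    -- automatically cascades to its children.
    CREATE TABLE categories (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(100) NOT NULL
    ) ENGINE=InnoDB;

    CREATE TABLE products (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        category_id INT NOT NULL,
        name VARCHAR(100) NOT NULL,
        FOREIGN KEY (category_id) REFERENCES categories (id)
            ON DELETE CASCADE
    ) ENGINE=InnoDB;

    -- deleting a category now silently removes its products too:
    DELETE FROM categories WHERE id = 1;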

07 May

very modest bounty

In an effort to get some work done on my robot, I’d like to offer a small bounty for help with some parts. I’m going to start off very small, but hopefully as time advances (and my finances improve), I’ll be able to offer larger sums.

The first bounty will be a very modest €50 for a C console application which can do the following (a skeleton of the expected interface is sketched after this list):

  • Accept two filenames from the command line as input, corresponding to two photographs of one scene, taken parallel to each other, separated by about 6 inches. The cameras may not be exactly parallel, and may be slightly twisted or unfocused relative to each other, so the application should be reasonably intelligent about that.
  • From those files, extrapolate 3D shapes and textures, and echo them to STDOUT. The format of the output should be well known. I offer VRML and X3D as ideas, but if there is another more apt format, then that will be acceptable.
  • Any textures needed should be output as images to the current directory as 1.png, 2.png, etc.
  • The application should run in Linux, and must not use up a huge amount of RAM.
  • The application may use other libraries to achieve the goal (in fact, I’d recommend this, to make maintenance easier), but those libraries must be open source.
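
And here is that skeleton – not a solution, just the I/O contract as I imagine it (the filenames and the VRML header line are only illustrative):

    /* Skeleton of the expected interface – not a solution, just the
     * I/O contract: two image filenames in, a 3D scene on STDOUT,
     * textures written to the current directory as 1.png, 2.png, ... */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s left.jpg right.jpg\n", argv[0]);
            return EXIT_FAILURE;
        }

        /* ... load argv[1] and argv[2], match features between them,
           extrapolate 3D shapes and textures ... */

        /* scene geometry goes to STDOUT – VRML shown here, as one of
           the suggested formats */
        printf("#VRML V2.0 utf8\n");
        return EXIT_SUCCESS;
    }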

The closing date for this is July 1. All solutions offered will be compared for efficiency and accuracy, against a few different models created in POV-Ray, and also against real-life photographs.

If no solutions are submitted, I will announce the project again with a higher bounty (probably increasing by another €50 or €100).

All applications submitted should have the source-code readily available.

Once an application is accepted, it will immediately be released as open source. You will be credited as the author of the code, but ownership of the code will be public.

I know the bounty is kind of low, but it’s really just to drum up interest, and I’m not all that wealthy 😉

This project is really an experiment. If it is successful, I have many other projects that I want done, which I will also announce bounties for.

22 Nov

syncing and bots

Been a while. I took two weeks off work after I got married, to just relax and play some geetar (practicing my speed soloing with some classical riffs and a metronome).

So, I was just reading through my daily blog list today and noticed someone talking about kde.org and opensync.org collaborating to help reduce personal data redundancy.

I have a big problem with personal data redundancy. I use five computers on a regular basis. My laptop, my home server, my work machine (this one), my robot, and the office server. Forget about the bot, as I don’t do any personal computing on it, but on all of the others, I run programs such as Firefox, IE, Konqueror, Thunderbird and Kontact on a regular basis (as well as many others – vim, etc).

Sharing email is not a problem, as I use IMAP. However, there are a number of itches that I’m tired of scratching:

  • Only yesterday, I was wishing that Firefox and Konqueror could share the same bookmarks file, as I have a well-categorised list of personal bookmarks on my laptop which I would not like to have to rebuild every time I move to a different browser or machine. Okay, I can share the same bookmark file between various Firefoxes on separate machines using NFS or something (which is insecure, slow, and can cause locking problems), but it’s a shame that there does not seem to be a simple way of sharing bookmarks between two browsers on the same machine without having to manually export and import them every time something changes!
  • In work, I do my coding in vim, on the office server, in KDE, viewed via VNC. I used to use Synergy for it, when I was sitting right next to the office server, but we moved everything around here and removed the monitor from the server, so I need to use VNC. One crap thing about VNC is that I can’t seem to copy from outside it and paste inside it. This means that if, for example, I am debugging some JavaScript for IE, and need to copy something from the output and paste it into what I’m working on, I can’t do it! I need to either do a complex notepad-and-FTP dance, or write it out by hand. It would be nice to just select, copy, and paste. The same applies the other way around: if I come across an interesting blog post in Akregator in the VNC window, I can’t just copy the link and paste it into Firefox or IE on the main workstation.
  • The only other example I can think of is the preferences I have for vim – I have customised PHP and JavaScript folding commands, which I use on all four of my Linux boxes, and the three production servers that I manage. That’s seven copies each of two files. That’s a pain to keep up to date…
  • The only thing I can think of that would work for keeping the files up to date is to keep those specific files in an exported NFS directory on a trusted computer (trusted not to go bye-bye in the middle of an important job). Unfortunately, NFS is not secure, for a list of reasons, and I cannot think of a better way. Anyone?

Aaaanyway… I also spent some time working on my bot.

As people may know, my goal is to build a bot that can do my gardening for me. While many people just point and laugh (in the building here, I’m known as “Luke Skywalker” – actually, it was Anakin who built C-3PO, but correcting the misnomer would prove that I’m a little geeky), I am of the absolute and firm belief that this is possible, and inevitable.

So, what have I done already? Nothing much, I guess. I built a shell which holds an EPIA ME6000 Mini-ITX mainboard. This board has a Netgear WG311 wireless card attached, as well as two generic webcams. The whole lot is powered by a 12V lead-acid battery, connected through a PW200M power converter.

What that means is that I have a working computer which does not rely on any cable connections for power or networking. At the moment, I guess the coolest thing I can do with it is to plonk it down somewhere in the garden, then go inside and do some bird-watching via the cameras, on a different computer.

I guess the next thing to do is to attach some wheels to it so it will be an actual moving robot! For that, I will need some tank tracks (feel free to buy them for me 😉 ).

Once the tank tracks are attached and working, I will have a remote control robot that I can move with my laptop, and see exactly what it sees.

After that, things get difficult… but I’ll get to that!

16 Sep

great bargain in a cheap shop

I dropped into a charity shop on the way home today and came across a copy of Artificial Intelligence, by Elaine Rich and Kevin Knight, for €2. I couldn’t resist it. I spent the rest of my walk reading the connectionist chapter. It described everything very clearly, even though my eyes rolled back in my head and I started gibbering when I came across some maths in it.

It turns out that the model of neural network that I have chosen to build for the recognition engine in my gardening robot is actually closer to a Boltzmann Machine than a Hopfield Network. The difference appears to be that Hopfield Networks give binary outputs, and are therefore kind of jerky in response, while a Boltzmann Machine gives more of an analogue output, which allows fuzzy results (instead of “Yes, that is a cat” in the former, you get “That’s probably a cat” in the latter, which would be more accurate).
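
A toy sketch of that output difference, as I understand it – a hard-threshold unit answers yes/no, while a graded (sigmoid) unit answers “probably” (the weights and inputs below are made up):

    /* Toy illustration of the two output styles: both units take the
     * same weighted sum of inputs; one thresholds it to a hard yes/no,
     * the other squashes it to a graded confidence. */
    #include <math.h>
    #include <stdio.h>

    static double weighted_sum(const double *in, const double *w, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += in[i] * w[i];
        return s;
    }

    static int threshold_unit(double s)   /* binary: "yes, that is a cat" */
    {
        return s >= 0.0 ? 1 : 0;
    }

    static double sigmoid_unit(double s)  /* graded: "that's probably a cat" */
    {
        return 1.0 / (1.0 + exp(-s));
    }

    int main(void)
    {
        double in[] = {0.9, 0.2, 0.7};   /* made-up inputs */
        double w[]  = {0.5, -1.0, 0.4};  /* made-up weights */
        double s = weighted_sum(in, w, 3);
        printf("binary: %d, graded: %.2f\n",
               threshold_unit(s), sigmoid_unit(s));
        return 0;
    }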

Another interesting part of that chapter was its treatment of recurrent networks, which allow a neural net to do things like learn to speak or learn to walk – generally anything where a list of actions must be performed in sequence. This is something I have had an interest in since I started thinking about how to make my robot mobile. The first generation of my bot will run on tank treads, but once I am confident that the prototype works, I will be considering insect-like legs, which take up less room, and allow the robot to step over vegetation without damaging it too much.

Stay tuned – I hope to have the first release of my Rekog engine complete by next weekend – I’m getting the hang of KDE programming. That engine will be multi-purpose – a general recognition engine, usable by other people for other purposes (facial recognition, etc), not just what I planned it for.

22 Jul

why am I trying to build a robot anyway?

People look at me as if I’m insane when I tell them of my efforts to build a robot capable of automatically growing and caring for vegetables – or they laugh and say I will still be trying in twenty years’ time.

To the latter, I just smile, and say “probably”. To the former, though – I don’t understand why people think it is crazy.

This post should hopefully explain my rationale, and maybe even convince you that it’s insane to not at least try it.

Ever since they were invented, computers have been touted as the device which would make everything so much more efficient than before.

But then, the same is said of every other tool – the plough, for example, helps to till the ground faster; the printing press allows books to be shared faster. The insert-tool-here allows you to insert-job-here faster.

However – in all cases, creating faster and more efficient tools simply allows you to do more work – it doesn’t save you any labour – it just allows you to do more labour in the same time.

Are you working a five-hour week? If not, then you are still working around the same lengthy hours that people have worked for centuries – you’re just getting more done in the same time.

Efficiency is not the solution! The solution is to remove the need for the work altogether – not to make the work easier to do.

So – here is an alternative idea:

Sit back and relax. Now, think to yourself – why are you working? Is it to get money? What do you need money for?

When I think of it, the reasons for money can be broken into two essential categories and an “optional” category:

  • Essential: housing
  • Essential: food
  • Optional: everything else

The “Optional” category includes such crap as Entertainment, Clothing, Transport, etc. I am not advocating that you should completely ignore those things in your quest for an efficient life – just be aware that they are optional – they are not absolutely necessary for you to be able to live comfortably and without hassle.

When it comes down to it, food is really the most important thing you can spend money on. You can live without entertainment, clothes and a place to live, but you cannot live without food. Let’s ignore the homeless route, though – it’s not comfortable, and we want to be comfortable.

So, assuming we own our own house (work with me here…), the only thing left that you require that costs you money is food. You need to learn to provide food for yourself, without using money.

The easiest way to do this is through gardening (or “farming” – whatever you want to call it). To grow potatoes, for instance, you just put some potatoes onto some soil, dump some compost or dead weeds on top of them, and weed every now and then.

However, that’s just exchanging your office job for a job in the garden – I mean, you could just as well work the same number of hours and buy the potatoes without touching the garden (back to square one). The true way to escape from the drudgery is to have someone else do the job for you.

But – who? If you hire a gardener to do it, then you’ll need to get a job in order to pay the gardener (back to square one again), so you need to somehow get your gardening done by someone who does not require payment.

Let’s take a detour: imagine what your life would be like if you owned your own house, and your food was provided completely free…

You would never need to work, except if you wanted to purchase something. I cannot emphasise enough how important that is – you would never need to work, unless you wanted to purchase something.

In fact, you could, if you wanted, live out your entire life just chilling out in the back garden of your house, as your robot toiled in the fields. You could learn to enjoy watching the clouds instead of TV. You could learn to get over your need for neat clothes and just let it all hang out. Anything you ever did that could be called “work” would be done purely for the pleasure of it.

If you do feel the need for entertainment, then why not pop around and play a game of cards with someone, or join a band, etc? If you feel the need for clothes, then learn to make your own, or barter for them from someone else.

Once you have everything that you need, the pressure to provide the things that you want will disappear.

Sounds fantastic, right?

That’s where my robot comes in.

Robots do not require any form of pay, except in the form of electricity, which can be provided freely anyway, once you’ve installed a wind farm, Stirling engine, solar array, or any other renewable electricity source.

I really do not understand why people don’t see it as essential that this dream be brought to fruition – I consider it to be the ultimate goal of any civilisation to free its people from all drudgery and allow them to do basically what they want, whenever they want, without needing to spend 90% of their waking hours hunched over a keyboard.

23 Jun

neurons for memory

New Scientist has an article about a study which is homing in on particular neurons which fire when a person recognises an image of a person.

What I find surprising about this is that the concept is very simple to understand, but it seems to be taking researchers decades to come to the point – they seem surprised to find single neurons firing, as a single neuron is a very simple cell, so how could it hold an abstract concept?

I’ve been doing a lot of thinking about neural networks recently, as I’m working on a robotic gardening machine, which will eventually be put to good use in my own garden to help with my farming.

During my own thinking on this, I’ve also come to the realisation that one single neuron can hold an entire complex memory. When you think about it, a neuron includes not just itself, but its connections to the neurons around it. It is the connections that give a neuron its “intelligence”. A memory, then, is the sum of a neuron’s connections.

Now, it’s not quite as simple as that… the connections take input from other neurons, which in turn are calculated from further connections. In short, a simple yes/no question is actually quite complex when you try to work it out with neurons, but when you get the answer, you can trace back along the connections and get a very rich “reason” for the solution.

For instance, the article mentions Halle Berry. Now, for me, Halle Berry rings several bells – the strongest being a very nice golf swing in a certain film whose name I can’t remember. There is also an image of her face, and for some reason, a Michael Jackson video (did she play an Egyptian queen in a video?).

That’s at least four neurons, each of which, if I think about them, will throw up a load more connections.

I think that the various neurons help to keep the memory strong. In Artificial Neural Networks, changing a single neuron is discouraged if it has strong connections to many others, as that change will affect the results of those other neurons.

I think that this is why mnemonic memory works so well. In Mnemonics, in order to remember a single item, you try to link it with something you already know. For example, in the old Memory Palace method, you imagine a walk through your house, or another familiar place. Each room that you enter, you can associate with a certain thought. For more memories, you can associate individual points of interest in the room – shelves, windows, corners, etc.

For instance, let’s say you are to remember a shopping list of “bananas, lightbulbs, baby food, and clothes pegs”. You could associate it with my own house like this: “I walk into my house. Before I can enter, I need to push a huge inflated banana out of the way. On my left is a lavatory. In that room, the walls are covered in blinking lightbulbs. Further on, I reach the main hall. The floor is cobbled with jars of baby food. I walk over the jars into the sitting room, where my girlfriend is sitting, trying to stick as many clothes pegs to her face as possible.”

Now, by associating the front door with a banana, for instance, you are doing a few things – you strengthen connections between your front door and bananas, you also connect bananas with your front door, and the absurdity of the situation impresses the connections further. Later on, when you reach the supermarket, you don’t need to remember what was on your list – you just need to go through your memory palace a room at a time.

What is very important about this is that you have used only two items of memory (your front door, and bananas) to remember a third item – that bananas are on your list.

I wonder – is the sum of possible memories far greater than the sum of neurons available to you? It seems to me that it’s dependent more on the connections than the neurons.
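
Some quick arithmetic backs that hunch up: n neurons allow up to n(n-1)/2 pairwise connections, so a mere 100 neurons already allow 4,950 connections – and the number of distinct patterns those connections can form grows exponentially from there, so the connections should indeed vastly outnumber the neurons as a store of memories.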

Ramble finished…

18 May

thinking about thinking

As many of you may know, one great pastime of mine is thought-experiments about robotic gardening.

I’ve bought a Mini-ITX board for building my robot, so the obvious next step was to think about how the robot should think.

I’ve been interested in Artificial Neural Networks for a few years, and they seem like the right way to go about what I want.

The problem I decided to focus on was this:

Given a photo of what the robot is facing, make it figure out whether the photo is of something organic or inorganic.

A very simplistic diagram of how the machine might do this is shown below:

The above shows a very basic neural net. I think it’s called a “feed-forward” net, because each column of units is connected directly to just the adjacent columns (note that the rightmost column is not connected to the leftmost).

In the actual net, the “input” units would correspond to individual pixels of the image. The diagram is most definitely not to scale – hundreds of input units would be required, and many more than just two hidden units – possibly two or more layers as well – but you get the picture.
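
To make that concrete, here is a minimal sketch of such a feed-forward pass – the sizes and weights are hand-picked for illustration, where a real net would be far larger and trained rather than hand-wired:

    /* Minimal sketch of the feed-forward idea described above:
     * inputs -> hidden layer -> one output unit ("is organic?").
     * Sizes and weights are made up for illustration. */
    #include <math.h>
    #include <stdio.h>

    #define N_IN     4
    #define N_HIDDEN 2

    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    int main(void)
    {
        double input[N_IN] = {0.2, 0.9, 0.4, 0.7};   /* pixel values */
        double w_ih[N_HIDDEN][N_IN] = {              /* input->hidden */
            { 0.5, -0.3,  0.8,  0.1},
            {-0.6,  0.9,  0.2, -0.4},
        };
        double w_ho[N_HIDDEN] = {1.2, -0.7};         /* hidden->output */

        double hidden[N_HIDDEN];
        for (int h = 0; h < N_HIDDEN; h++) {
            double s = 0.0;
            for (int i = 0; i < N_IN; i++)
                s += w_ih[h][i] * input[i];
            hidden[h] = sigmoid(s);
        }

        double out = 0.0;
        for (int h = 0; h < N_HIDDEN; h++)
            out += w_ho[h] * hidden[h];
        printf("is organic: %.2f\n", sigmoid(out));
        return 0;
    }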

This net, when trained, would give an adequate answer. But then, the question arose – could the same net be used to provide more detail?

i.e., what if we want to know whether what we’re looking at is a nettle?

Logically, it would be possible to rebuild the network with just that question in mind, but it occurred to me that it may be possible to do both at the same time.

The two answers come from the same hidden data. This may end up with a little less accuracy, as the neurons are now providing answers tailored to two different end goals, instead of one.

Looking at the diagram, though, it becomes clear that the “is nettle” unit is not availing itself of all the available data. One major point about nettles is that they’re organic, so there really should be a link between the “is organic” and “is nettle” units. It would drastically aid accuracy, I believe.

There is a subtle effect which would appear in the above network…

Let’s say that the network is looking at a photo of a brick wall. That photo is then replaced by a photo of a nettle. The units are all updated one at a time, from left column to right column, top to bottom.

A point to note here is that the “is nettle” unit would be updated before the “is organic” unit.

I expect that “is organic” would be very tightly bound to the answer to “is nettle”, so its weightings would be pretty high. But, as the “is organic” unit in this case would still be holding the answer to the brick wall question by the time it is polled by “is nettle”, the “is nettle” unit would most likely not recognise the picture of a nettle for what it was.

Interestingly, it would get it right when the exact same image was put through immediately afterwards.
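
Here is a tiny sketch of that effect, using a toy rule where each unit simply copies its left neighbour – updated in place (as described above) versus double-buffered, so that every unit sees the same snapshot:

    /* Sketch of the update-order effect: if units are updated in place,
     * a unit polled late reads values already changed this pass.
     * Keeping a separate "next" array makes the update synchronous. */
    #include <stdio.h>
    #include <string.h>

    #define N_UNITS 4

    /* toy update rule: each unit copies its left neighbour (wrapping) */
    static double activate(const double *state, int i)
    {
        return state[(i + N_UNITS - 1) % N_UNITS];
    }

    int main(void)
    {
        const double start[N_UNITS] = {0, 0, 0, 1};

        /* in-place (asynchronous) update: later units read values that
           were already updated this pass, so the single "1" smears
           across every unit in one sweep */
        double a[N_UNITS];
        memcpy(a, start, sizeof a);
        for (int i = 0; i < N_UNITS; i++)
            a[i] = activate(a, i);

        /* double-buffered (synchronous) update: every unit reads the
           same old snapshot, so the "1" moves along exactly one place */
        double b[N_UNITS], next[N_UNITS];
        memcpy(b, start, sizeof b);
        for (int i = 0; i < N_UNITS; i++)
            next[i] = activate(b, i);
        memcpy(b, next, sizeof b);

        for (int i = 0; i < N_UNITS; i++)
            printf("unit %d: in-place %g, buffered %g\n", i, a[i], b[i]);
        return 0;
    }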

I think that is similar to how we ourselves take a moment to re-orient ourselves when suddenly changing focus from concentrating on one subject to another.

Expanding on that, I think it would be interesting to have every neuron connected directly to every other neuron. It would lead to some slower results, but I think that it would allow much more accurate results over time.

For example, in video, if every frame were considered one at a time, with absolutely no memory of what had been viewed the moment before, then it may be possible to get drastically different results from each frame. However, if, for example, the previous frame was of a man standing in a field, then with the new connection-based network, the network would be predisposed to expect a man standing in a field. I think this may be called “feedback”.
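
A sketch of how that feedback might look – the previous output is fed back in as an extra input, predisposing the unit towards what it saw a moment ago (the weights and “frames” are made up):

    /* Sketch of the "feedback" idea: the previous frame's output is
     * fed in alongside the new frame, so the net is predisposed
     * towards what it saw a moment ago. All values are illustrative. */
    #include <math.h>
    #include <stdio.h>

    #define N_IN 3

    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    /* one unit whose inputs include its own previous output */
    static double step(const double frame[N_IN], double prev_out)
    {
        static const double w[N_IN] = {0.8, -0.2, 0.5};
        double s = 1.5 * prev_out;       /* weight on the fed-back output */
        for (int i = 0; i < N_IN; i++)
            s += w[i] * frame[i];
        return sigmoid(s - 1.0);         /* bias */
    }

    int main(void)
    {
        double frames[4][N_IN] = {
            {0.9, 0.1, 0.8}, {0.9, 0.2, 0.7},  /* man in a field... */
            {0.8, 0.1, 0.8}, {0.1, 0.9, 0.1},  /* ...then a sudden change */
        };
        double out = 0.0;
        for (int f = 0; f < 4; f++) {
            out = step(frames[f], out);
            printf("frame %d: %.2f\n", f, out);
        }
        return 0;
    }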

This will be very useful for my robot, as it means I can track actual live video, and not have to rely on just still frames.

12 Oct

finding the X,Z coordinate of a point visible in two parallel cameras

Okay – forgive the messy maths. It’s been a decade since I wrote any serious trigonometry, so this may be a little inefficient. All in all, though, I believe it is accurate.

I thought I would need a third image in order to measure Z, but figured it out eventually.


fig.1

Consider fig.1. In it, the blue rectangle is an object seen by the cameras (the black rectangles).

The red lines are the various viewing “walls” of the camera – the diagonals are the viewing angle, and the horizontal lines are the projected images of the real-life objects. We assume that the photographs are taken by the same camera, which is held parallel in the Z axis.

The green triangle portrays the distance between the camera view points, and the distance to the point we are determining. We cannot determine whether the final measurements will be in centimetres, metres, inches, etc, but we can safely assume that no matter what unit the final measurement comes out in, a simple multiplier will correct all calculations arising from this.

The only numbers that we can be sure of are the camera angle C, the width in pixels of the view screen c, and the distances in pixels from the left of each view screen at which the projections cross it, h and k. As the final measurement unit does not matter, we can safely assign d any number we want.

What we are looking for are the distances, in the same units as d, of x and z. We can safely leave y out of this for now – it’s a simple matter to work that out afterwards.

Right… here goes… We’ll start with the obvious.

G = (180-C)/2

As there are two equal angles in each of the camera view triangles, the line f is one wall of a right-angled triangle. Therefore, using the trig. rule that Tan is the Opposite divided by the Adjacent,

Tan(G) = f/(c/2)

Which, when re-ordered and neatened, gives us

f=(Tan(G)*c)/2

We can then work out B using the same rule.

Tan(B) = f/(h-(c/2))

Which neatens to

B=ATan(2f/(2h-c))

Using similar logic,

A=ATan(2f/(2k-c))

So, now that we have A and B, we can work out D

D=180-A-B

And because of the rule that angle sines divided by their opposite sides are the same for any triangle,

d/Sin(D) = a/Sin(A)

Therefore

a = (d*Sin(A))/Sin(D)

And via more simple trig:

z = a*Sin(B)

And

x = a*Cos(B)

Tada!

So, to get z with one equation:

z=((d*Sin(ATan(2*((Tan((180-C)/2)*c)/2)/(2k-c))))/Sin(180-(ATan(2*((Tan((180-C)/2)*c)/2)/(2k-c)))-ATan(2*((Tan((180-C)/2)*c)/2)/(2h-c))))*Sin(ATan(2*((Tan((180-C)/2)*c)/2)/(2h-c)))

Obviously, I’ll optimise that a little before writing it into my program…
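
For what it’s worth, the step-by-step version translates to C something like this – a sketch, with the angles converted to radians for the maths library (the sample values in main are made up):

    /* Direct translation of the derivation above.
     * C: camera viewing angle in degrees; c: view-screen width in pixels;
     * h, k: pixel offsets of the projection in each image; d: distance
     * between the cameras, in whatever unit x and z should come out in. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    static double deg2rad(double a) { return a * M_PI / 180.0; }

    static void locate_point(double C, double c, double h, double k,
                             double d, double *x, double *z)
    {
        double G = deg2rad((180.0 - C) / 2.0);    /* base angle of view triangle */
        double f = tan(G) * c / 2.0;              /* depth of view triangle, in pixels */
        double B = atan(2.0 * f / (2.0 * h - c)); /* angle at camera 1 */
        double A = atan(2.0 * f / (2.0 * k - c)); /* angle at camera 2 */
        double D = M_PI - A - B;                  /* angle at the object */
        double a = d * sin(A) / sin(D);           /* sine rule */
        *z = a * sin(B);
        *x = a * cos(B);
    }

    int main(void)
    {
        double x, z;
        /* made-up sample values: a 60-degree lens, 640px-wide images,
           projections crossing at 420px and 360px, cameras 6 units apart */
        locate_point(60.0, 640.0, 420.0, 360.0, 6.0, &x, &z);
        printf("x = %f, z = %f\n", x, z);
        return 0;
    }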