robot localisation
I’m compiling KDevelop at the moment, in preparation for leaping into my simulation project.
While I’m at it, I’ve been looking into what other people have done to solve some of the problems I’m likely to face in this project.
For example, here is an innovative way to quickly distinguish one large area from another based on histograms.
The method I’ve been thinking of, though, is a little different…
The simulation daemon will need to simulate whatever sensors the real robot has. Since I’m planning to build the robot with cameras, the daemon will therefore need a ray tracer built in.
The ray tracer won’t need to be perfectly accurate, as this is, after all, just a training exercise designed to give the robot most of the instincts it needs.
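To get a feel for how simple the tracer can be, here’s a rough sketch of the sort of thing I mean: cast rays from a pose against a 2D grid of wall and floor cells and record how far each one travels before it hits something. The map, the brute-force stepping and the names (castRay, renderDepthRow) are all placeholders for illustration; a proper version would also return a colour per column.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

const double PI = 3.14159265358979;

// Illustrative 2D grid map: 1 = wall, 0 = open floor.
const int MAP_W = 8, MAP_H = 8;
const int grid[MAP_H][MAP_W] = {
    {1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,1,1,0,0,1},
    {1,0,0,1,1,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1},
};

struct Pose { double x, y, heading; };  // heading in radians

// Step along one ray in small increments until a wall cell (or the
// maximum range) is reached, and return the distance travelled.
double castRay(const Pose& p, double angle, double maxRange = 20.0)
{
    const double step = 0.02;
    for (double d = 0.0; d < maxRange; d += step) {
        int cx = int(p.x + d * std::cos(angle));
        int cy = int(p.y + d * std::sin(angle));
        if (cx < 0 || cy < 0 || cx >= MAP_W || cy >= MAP_H) return d;
        if (grid[cy][cx] == 1) return d;
    }
    return maxRange;
}

// Render one row of depths across the camera's field of view.  A fuller
// version would also look up wall colours and draw vertical columns, but
// depth alone shows the principle.
std::vector<double> renderDepthRow(const Pose& p, int width, double fov)
{
    std::vector<double> depths(width);
    for (int i = 0; i < width; ++i) {
        double angle = p.heading - fov / 2 + fov * i / (width - 1);
        depths[i] = castRay(p, angle);
    }
    return depths;
}

int main()
{
    Pose home = {1.5, 1.5, 0.0};  // the "home" position
    for (double d : renderDepthRow(home, 16, PI / 3))
        std::printf("%.2f ", d);
    std::printf("\n");
}
```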
So anyway – the robot will be initialised with a belief that it is in one particular area of the grid map. Imagine this as the machine being turned on at a “home” area. The robot will be familiar with this area and will be virtually certain of what position the camera is in, what angle it is at, and so on.
From that certainty, we can then make up a trainer designed to give the robot a sense of balance and location.
For instance, if the robot moves, then it will have a fair idea of how far it has moved, and in what direction. The problem that I’m trying to solve here, though, is that there is no certainty that the robot has actually moved that far – its sense of speed may be wrong, or its steering may be a little awry, or maybe it has slipped on a smooth patch of ground.
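To show the kind of drift I’m worried about, here’s a small sketch in which the dead-reckoned pose simply trusts the commanded motion, while the “actual” pose has some invented slip and steering noise applied by the simulation. The noise figures and the names (deadReckon, simulateMove) are made up purely for illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

struct Pose { double x, y, heading; };   // heading in radians

// The robot's own estimate: it assumes the commanded motion was executed exactly.
Pose deadReckon(Pose p, double distance, double turn)
{
    p.heading += turn;
    p.x += distance * std::cos(p.heading);
    p.y += distance * std::sin(p.heading);
    return p;
}

// What "really" happens in the simulation: the wheels slip a little and the
// steering is slightly off, so the executed motion differs from the command.
Pose simulateMove(Pose p, double distance, double turn, std::mt19937& rng)
{
    std::normal_distribution<double> slip(1.0, 0.1);   // ~10% distance error (made up)
    std::normal_distribution<double> veer(0.0, 0.05);  // small heading error (made up)
    return deadReckon(p, distance * slip(rng), turn + veer(rng));
}

int main()
{
    std::mt19937 rng(42);
    Pose believed = {1.5, 1.5, 0.0};
    Pose actual   = believed;

    // Command "forward one unit" five times; the two poses drift apart,
    // which is exactly why a second opinion from the camera is needed.
    for (int i = 0; i < 5; ++i) {
        believed = deadReckon(believed, 1.0, 0.0);
        actual   = simulateMove(actual, 1.0, 0.0, rng);
    }
    std::printf("believed: (%.2f, %.2f)  actual: (%.2f, %.2f)\n",
                believed.x, believed.y, actual.x, actual.y);
}
```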
What we need is a second opinion so the robot can be certain, and that is where the camera comes in.
The base unit will always be assumed to be moving in an environment that has been manually created specifically for that purpose. Because of this, we can be certain that a few focal points, or landmarks, will be present. For instance, the robot will assume that, based on where it thinks it is, the path will look a certain way, including any expected junctions.
So, we need a ray tracer for the environment, to provide an accurate-ish view of the world, and we also need the robot itself to be able to formulate what it expects to see, based on its believed position in the world.
What the robot will do is build up a simulated view of what it expects to see, with matching colours, as a perspective 3D image. This simulation will be repeated from a few different vantage points – a point to the left, a point to the right, forward, backward, etc. The robot will then compare the simulations to the “real” image received from the daemon, and correct its believed location based on which image is closest to reality.
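Something along these lines is what I have in mind. Here render() and difference() are dummy stand-ins for the real ray tracer and the real image comparison, and the candidate offsets are arbitrary; the point is just the shape of the correction loop.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Pose { double x, y, heading; };
using Image = std::vector<double>;   // stand-in for a rendered camera frame

// Dummy renderer: in the real thing this would be the ray tracer producing
// a perspective view of the environment from the given pose.
Image render(const Pose& p)
{
    Image img(16);
    for (std::size_t i = 0; i < img.size(); ++i)
        img[i] = std::sin(p.x + p.y + p.heading + i * 0.1);  // placeholder content
    return img;
}

// Dummy similarity score: sum of squared pixel differences, lower = closer.
double difference(const Image& a, const Image& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return sum;
}

// Try the believed pose plus a handful of offsets (left, right, forward,
// backward), and keep whichever simulated view is closest to the camera image.
Pose correctBelief(const Pose& believed, const Image& cameraImage)
{
    const double d = 0.2;   // spacing between candidate poses (arbitrary)
    const Pose offsets[] = {
        {0, 0, 0}, {-d, 0, 0}, {d, 0, 0}, {0, -d, 0}, {0, d, 0},
    };
    Pose best = believed;
    double bestScore = 1e300;
    for (const Pose& o : offsets) {
        Pose candidate = {believed.x + o.x, believed.y + o.y,
                          believed.heading + o.heading};
        double score = difference(render(candidate), cameraImage);
        if (score < bestScore) { bestScore = score; best = candidate; }
    }
    return best;
}

int main()
{
    Pose believed = {1.5, 1.5, 0.0};
    Pose actual   = {1.7, 1.5, 0.0};    // the robot has drifted a little
    Image camera  = render(actual);     // the "real" image from the daemon
    Pose corrected = correctBelief(believed, camera);
    std::printf("corrected belief: (%.2f, %.2f)\n", corrected.x, corrected.y);
}
```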
The simplest way to do this (I think) is a full-sized compare using the root-mean-squared deviation to find the closest fit. That’s potentially too CPU-intensive for real-time work, though, so it may be quicker to run a quick-fit test over the area around the expected answer to see which simulations are worth testing in full.
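The RMSD itself is straightforward. A sketch, assuming both images are the same size and stored as flat arrays of 8-bit channel values; the lower the result, the better the fit.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Root-mean-squared deviation between two images of identical dimensions,
// stored as flat arrays of 8-bit channel values (RGB or grayscale).
// 0 means identical; the candidate with the smallest value is the best fit.
double rmsd(const std::vector<std::uint8_t>& a, const std::vector<std::uint8_t>& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double d = double(a[i]) - double(b[i]);
        sum += d * d;
    }
    return std::sqrt(sum / a.size());
}

int main()
{
    std::vector<std::uint8_t> real = {10, 20, 30, 40};
    std::vector<std::uint8_t> simA = {12, 18, 33, 41};   // close candidate
    std::vector<std::uint8_t> simB = {90, 10, 70,  5};   // poor candidate
    std::printf("A: %.2f  B: %.2f\n", rmsd(real, simA), rmsd(real, simB));
}
```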
For example, let’s say you have a grid of 7×3 ([0,0] to [6,2]) possible locations to test (assuming a test of only two dimensions, and that the camera is not expected to shift much up or down). The centre location [3,1] is the one the robot expects to be the best fit. Let’s say the real camera returns a 320×200 image in 24-bit colour. We can convert this to 8-bit grayscale and shrink it to 80×50 for a quick test. The quick test then does a cheap comparison at each location in the grid, and the full tests are only done in the area that provides the best matches.
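Here’s a sketch of that quick-fit stage, assuming the daemon hands over the frame as 24-bit RGB in row-major order: shrink it to 80×50 grayscale by averaging 4×4 blocks, score every cell of the 7×3 grid with a cheap comparison, and only run the full-sized compare around whichever cell wins. shrinkToQuick, quickScore and the renderQuickView callback are all placeholder names.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

using Gray = std::vector<std::uint8_t>;   // 8-bit grayscale, row-major

// Convert a 320x200 24-bit RGB frame to 8-bit grayscale and shrink it to
// 80x50 by averaging each 4x4 block of pixels.
Gray shrinkToQuick(const std::vector<std::uint8_t>& rgb, int w = 320, int h = 200)
{
    const int scale = 4, qw = w / scale, qh = h / scale;
    Gray out(qw * qh);
    for (int qy = 0; qy < qh; ++qy)
        for (int qx = 0; qx < qw; ++qx) {
            int sum = 0;
            for (int dy = 0; dy < scale; ++dy)
                for (int dx = 0; dx < scale; ++dx) {
                    int idx = ((qy * scale + dy) * w + (qx * scale + dx)) * 3;
                    // A plain average of R, G and B is close enough for a quick test.
                    sum += (rgb[idx] + rgb[idx + 1] + rgb[idx + 2]) / 3;
                }
            out[qy * qw + qx] = std::uint8_t(sum / (scale * scale));
        }
    return out;
}

// Cheap score for the quick pass (mean absolute difference, lower = better);
// the full-sized RMSD is only run on the winners.
double quickScore(const Gray& a, const Gray& b)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += std::fabs(double(a[i]) - double(b[i]));
    return sum / a.size();
}

// Score every cell of the 7x3 candidate grid against the shrunken camera
// frame and return the [gx,gy] of the best quick match.  renderQuickView()
// stands in for the ray tracer producing an 80x50 grayscale view of the
// candidate location.
std::pair<int, int> quickFit(const Gray& cameraQuick,
                             Gray (*renderQuickView)(int gx, int gy))
{
    int bestX = 3, bestY = 1;            // the expected centre, [3,1]
    double bestScore = 1e300;
    for (int gy = 0; gy < 3; ++gy)
        for (int gx = 0; gx < 7; ++gx) {
            double score = quickScore(renderQuickView(gx, gy), cameraQuick);
            if (score < bestScore) { bestScore = score; bestX = gx; bestY = gy; }
        }
    return {bestX, bestY};
}

int main()
{
    // Dummy data just to exercise the code path.
    std::vector<std::uint8_t> frame(320 * 200 * 3, 128);
    Gray cameraQuick = shrinkToQuick(frame);
    auto dummyRender = [](int gx, int gy) {
        return Gray(80 * 50, std::uint8_t(100 + 10 * gx + 5 * gy));
    };
    std::pair<int, int> best = quickFit(cameraQuick, dummyRender);
    std::printf("best quick match at [%d,%d]\n", best.first, best.second);
}
```

At 80×50 grayscale each quick compare touches 4,000 values instead of the 192,000 in a full 320×200 24-bit frame, so scoring all 21 grid cells is still cheaper than a single full-sized compare.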
Of course, the algorithm needs work, as it’s just pure thought at the moment, but I’ll see how it turns out as the program progresses.