Take a look at this:
So? It’s two photographs that overlap. What’s so special about that?
What’s special is that the overlapping was done by my computer. It’s the first step towards computer vision for my robots.
See, in order for my bots to know where they are visually, they must be able to compare a real-life camera view to a rendered internal model. The above overlap thing is a first step towards that internal model.
To find the overlap coordinates, the two separate images were compared against each other at a range of candidate offsets. The closest match (according to the algorithm I created) is the overlap you see above.
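The post doesn't show the actual scoring algorithm, but here's a minimal sketch of one common way to do this kind of matching: slide one greyscale image horizontally across the other and score each candidate offset by the mean absolute pixel difference over the overlapping region. (The function name and the sum-of-absolute-differences scoring are my assumptions, not the author's code, which is in PHP.)

```python
def best_overlap(a, b):
    """a and b are greyscale images of the same height, given as
    lists of rows of pixel values. Returns the horizontal offset of
    b relative to a that minimises the mean absolute pixel difference
    over the overlapping columns."""
    height = len(a)
    width_a, width_b = len(a[0]), len(b[0])
    best_offset, best_score = 0, float("inf")
    # Try every offset that leaves at least one column of overlap.
    # (A real implementation would enforce a minimum overlap width,
    # since a tiny overlap can match well by pure chance.)
    for offset in range(1 - width_b, width_a):
        lo = max(0, offset)                   # overlap start, in a's coords
        hi = min(width_a, offset + width_b)   # overlap end, in a's coords
        total, count = 0, 0
        for y in range(height):
            for x in range(lo, hi):
                total += abs(a[y][x] - b[y][x - offset])
                count += 1
        score = total / count  # mean, so wide and narrow overlaps compare fairly
        if score < best_score:
            best_score, best_offset = score, offset
    return best_offset
```

The quadratic sliding search is fine for a proof of concept; an optimised version (like the planned C port) would typically coarsen the images first or score only a sample of pixels.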
Because nearby objects shift more between the two photographs than distant ones, this method makes it possible to determine how far away most things are, and so to build up a 3D model of the world through photographs.
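The post doesn't spell out the depth calculation, but for two photos taken a known distance apart, the standard stereo triangulation relation applies: depth is focal length times baseline divided by disparity (the pixel shift found by the matcher). A sketch, with all parameter names my own:

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Standard pinhole stereo relation: depth = f * B / d.

    disparity_px -- horizontal shift of a feature between the two photos
    baseline_m   -- distance between the two camera positions, in metres
    focal_px     -- camera focal length, expressed in pixels
    """
    if disparity_px <= 0:
        # A feature with no shift between views is effectively at infinity.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000-pixel focal length and cameras 10 cm apart, a feature that shifts 100 pixels between the two photos sits about 1 metre away.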
The script is written in PHP at the moment (available here). I want to do a bit more debugging and optimisation on it before porting it to C.
The next step will be to build up a simple 3D copy of a room based on photographs.