Last night I put in some work on my Mapper project (automatically creating a 3D mesh from photos). This morning I was thinking about the background image on my laptop, and realised that the project should make it possible to automatically emulate realistic transparency. (The linked photos are very painstaking to set up.)
How it would work is that a camera mounted on my lappie would keep track of where my eyes are (there are already programs out there that can track the head, and the eyes sit at predictable points relative to it). The laptop would also already have a mesh version of the present room in its memory.
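As a rough sketch of that "eyes from head" step: if a head tracker reports a head-centre position and a yaw angle, the eye positions can be approximated by fixed offsets from that. Everything here is a made-up illustration, not any particular tracker's output; the spacing and offset constants are just average human measurements.

```python
import math

EYE_SPACING = 0.063   # average interpupillary distance, metres (assumed)
EYE_FORWARD = 0.09    # eyes sit roughly 9 cm in front of the head centre (assumed)

def eye_positions(head_centre, yaw):
    """Return (left_eye, right_eye) as (x, y, z) tuples in camera space.

    head_centre: (x, y, z) of the tracked head centre.
    yaw: head rotation around the vertical axis, in radians (0 = facing camera).
    """
    hx, hy, hz = head_centre
    # Unit vector pointing to the subject's right, rotated by yaw.
    right = (math.cos(yaw), 0.0, math.sin(yaw))
    # Unit vector pointing forward out of the face.
    forward = (-math.sin(yaw), 0.0, math.cos(yaw))
    half = EYE_SPACING / 2

    def offset(sign):
        return (hx + sign * half * right[0] + EYE_FORWARD * forward[0],
                hy,
                hz + sign * half * right[2] + EYE_FORWARD * forward[2])

    return offset(-1), offset(+1)
```

With the head at the origin facing the camera, this puts the eyes about 3.15 cm either side of centre and 9 cm forward, which is all the later rendering step needs.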
By comparing the camera's view of the room, minus "volatile" elements such as me, against that stored mesh, the lappie could figure out where its own screen is located in the room.
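The "minus volatile elements" part could be as simple as this toy filter: any point the camera sees that has no close counterpart in the stored room mesh (i.e. me, the chair I just moved) gets discarded before matching. The function name, tolerance, and point format are all assumptions for illustration.

```python
def distance(a, b):
    """Euclidean distance between two (x, y, z) points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def stable_points(seen, room_mesh, tolerance=0.05):
    """Keep only the seen points that the stored room mesh also contains.

    seen:      points observed by the camera, as (x, y, z) tuples.
    room_mesh: points of the stored room mesh.
    tolerance: how far (metres) a seen point may be from its nearest
               mesh point and still count as part of the room.
    """
    return [p for p in seen
            if min(distance(p, q) for q in room_mesh) <= tolerance]
```

What survives the filter is the stable room geometry, which can then be aligned against the mesh to recover the laptop's position and orientation.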
Combining that information, the lappie could figure out what the left eye and the right eye should each see where the laptop screen is, if that screen were not there, and render exactly that.
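The per-eye rendering maths here is an "off-axis" (generalized) perspective projection: given one eye position and the screen's corners in room coordinates, you get the asymmetric view frustum that makes the screen behave like a window. A minimal sketch, with made-up coordinates and a standard near-plane formulation:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
def norm(a):
    m = dot(a, a) ** 0.5
    return tuple(x / m for x in a)

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near=0.01):
    """Return (left, right, bottom, top) frustum extents at the near plane.

    eye: the eye position; the three corner arguments locate the physical
    screen rectangle, all in the same room coordinates.
    """
    vr = norm(sub(lower_right, lower_left))   # screen's right axis
    vu = norm(sub(upper_left, lower_left))    # screen's up axis
    vn = norm(cross(vr, vu))                  # screen normal, towards the eye
    # Vectors from the eye to three screen corners.
    va = sub(lower_left, eye)
    vb = sub(lower_right, eye)
    vc = sub(upper_left, eye)
    d = -dot(va, vn)                          # eye-to-screen distance
    # Project the corner offsets onto the screen axes, scaled to the near plane.
    left = dot(vr, va) * near / d
    right = dot(vr, vb) * near / d
    bottom = dot(vu, va) * near / d
    top = dot(vu, vc) * near / d
    return left, right, bottom, top
```

Run it once per eye per frame: an eye centred in front of the screen gets a symmetric frustum, and as the eye moves sideways the frustum skews, which is exactly the window effect.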
What is very interesting about this is that the computer should be able to account both for the head moving and for the laptop moving: all that matters is where the eyes are relative to the screen, so the two motions collapse into one update per frame.