Japanese Team Makes 3D Scenes from Images in Real Time

3D mapping is cool. Unfortunately, it usually requires expensive hardware and lots of time. Some experimental techniques, however, are being developed to derive 3D scenes from regular pictures. A team of Japanese scientists has managed to develop a system that does this in real time.
The system works like this: a camera with an Eye-Fi card snaps a picture, which is transmitted wirelessly to a laptop. A program on the laptop processes the image and finds key features. When the camera snaps a second picture, the software uses those key features to figure out where the second picture was taken in relation to the first.
That, in turn, adds more detail to the scene, and more data to work with. As pictures are taken, you can see the model slowly being built up--from a few scant points in the air to a fully realized scene.
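Once the relative pose of two shots is known, each pair of matched features can be converted into a 3D point by triangulation. The article does not describe the team's exact algorithm, but a minimal sketch of the standard linear (DLT) triangulation step, using made-up camera matrices and a hypothetical scene point, looks like this:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections x1, x2 in two views with projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A,
    # i.e. the last right-singular vector from the SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two hypothetical cameras: one at the origin, one translated along x.
# (Intrinsics simplified to the identity for the sketch.)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A made-up scene point in front of both cameras, and its projections.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 6))  # recovers the original point (noiseless case)
```

In the real system this step would run for every matched feature after each new shot, which is what makes the point cloud grow picture by picture.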
Creating a 3D scene from pictures has been done before. It remains a difficult problem, and the reconstructed scenes are generally fairly rough, but that part is not what is novel here. What is novel is the immediate feedback on the shots you have taken: most techniques process the images in a single batch, after all of them have been taken.
Unfortunately, that means you won't know whether you took enough shots to build a good 3D scene until the processing finishes. With this system, you can see the scene evolve from snap to snap and quickly fill in any holes you missed. Watch the video embedded below--it is quite impressive.