A team of researchers at Google has come up with a really awesome use for all those photos of popular tourist spots. They’ve created an algorithm that turns those photos into realistic, highly detailed 3D renderings. And best of all, it even edits out interfering objects and evens out the changes in lighting.
The researchers used thousands of photos to train the algorithm to recognize each landmark. The photos show the object from many different angles, which is how the system can later reconstruct it in 3D. But they were also taken under varying lighting conditions and at different times of day, and they often contain obstructing objects like people, cars, or signs. And as you can probably imagine, people apply tons of different edits to their photos, too. It’s impressive that this tech is able to even all of that out and create a 3D rendering that smoothly takes you through the scene.
The system was named “NeRF in the Wild,” or simply NeRF-W, because it reconstructs landmarks in 3D from unconstrained, “in-the-wild” photo collections, as the team explains. The results are pretty impressive, especially considering all the challenges the system had to overcome to create smooth and uniform renderings.
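To get a feel for the core idea, here is a hypothetical toy sketch in Python. It is not the paper’s actual architecture, just an illustration of the principle the team describes: geometry (density) depends only on 3D position and is shared across all photos, while color is additionally conditioned on a small per-photo “appearance embedding” that absorbs differences in lighting, time of day, and editing. All names, layer sizes, and the random weights below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not from the paper.
D_POS, D_APP, D_HID = 3, 8, 16

# Random weights standing in for a trained network.
W_geo = rng.normal(size=(D_POS, D_HID))   # shared "geometry" layer
w_sigma = rng.normal(size=D_HID)          # maps features to density
W_col = rng.normal(size=(D_HID + D_APP, 3))  # maps features + embedding to RGB

# One learned embedding per training photo (hypothetical photo IDs).
appearance = {img_id: rng.normal(size=D_APP) for img_id in ["photo_a", "photo_b"]}

def density(x):
    """Geometry: depends only on position, identical for every photo."""
    h = np.tanh(x @ W_geo)
    return np.logaddexp(0.0, h @ w_sigma)  # softplus keeps density >= 0

def color(x, img_id):
    """Color: conditioned on the photo's appearance embedding."""
    h = np.tanh(x @ W_geo)
    feat = np.concatenate([h, appearance[img_id]])
    return 1.0 / (1.0 + np.exp(-(feat @ W_col)))  # sigmoid -> RGB in [0, 1]

x = np.array([0.1, -0.2, 0.3])
print(density(x))            # same value no matter which photo we ask about
print(color(x, "photo_a"))   # color as seen in one photo's lighting
print(color(x, "photo_b"))   # same point, different photo, different color
```

The point of the split is that the same 3D point always has one density, so the reconstructed shape is consistent, while the color can change with each photo’s embedding, which is what lets the system even out lighting differences instead of averaging them into a blur.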
According to the researchers, this system could have various AR and VR applications. I believe it could also be used in video games and for special effects in movies. Perhaps also for virtual tours, which would be especially useful now that we can’t travel anywhere.
If you’d like to read the full paper, you can find it here. You can also see a few examples in the video above, or via this link.