A simple programming tool can build a model of a scene in a two-dimensional photograph and insert a realistic-looking synthetic object into it. Unlike other augmented reality programs, it doesn't use any tags, props or laser scanners to model a scene's geometry; it needs only a small amount of user annotation, and it accounts for lighting and depth. The result is an augmented scene with proper perspective, one that looks so realistic that testers could not distinguish an original photo from a modified one.
With just a single image and some annotation by a user, the program creates a physical model of a scene, as demonstrated in the video below.
Kevin Karsch, Varsha Hedau, David Forsyth and Derek Hoiem at the University of Illinois at Urbana-Champaign developed a new image composition algorithm to generate an accurate lighting model. It uses geometry to build upon existing light-estimation methods, and it can work with any type of rendering software, the researchers explain. It works by breaking down the scene's geometry and depth, then determining how much of the scene's overall illumination is light reflected by surfaces (governed by their albedo, or reflectance) and how much emanates directly from light fixtures. This yields lighting parameters that can be transposed onto an inserted object. The team has developed algorithms for interior lights and for external light sources, typically shafts of sunlight.
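To give a rough sense of how estimated lighting parameters get transposed onto an inserted object, here is a minimal sketch of shading a single surface point with a point light under a Lambertian (diffuse) reflectance model. This is not the researchers' actual algorithm; the light position, intensity, and albedo values are hypothetical stand-ins for what their method would estimate from the photograph.

```python
import math

def normalize(v):
    # Return the unit-length version of a 3-vector.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambertian_shade(point, normal, light_pos, light_intensity, albedo):
    """Shade one surface point of an inserted object with a point light.

    In a real pipeline, light_pos/light_intensity would come from the
    scene's estimated illumination model and albedo from the object's
    material; here they are illustrative inputs.
    """
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    dist2 = sum(c * c for c in to_light)          # squared distance to light
    cos_theta = max(0.0, sum(a * b for a, b in
                             zip(normalize(normal), normalize(to_light))))
    # Diffuse shading: reflectance times incoming light, with
    # inverse-square falloff and the usual cosine foreshortening term.
    return albedo * light_intensity * cos_theta / dist2

# A surface point facing straight up, lit from directly above:
shade = lambertian_shade(
    point=(0.0, 0.0, 0.0),
    normal=(0.0, 1.0, 0.0),
    light_pos=(0.0, 2.0, 0.0),
    light_intensity=8.0,
    albedo=0.5,
)
print(round(shade, 3))  # 0.5 * 8 * 1 / 4 = 1.0
```

Evaluating such a model per pixel, with light parameters recovered from the photo rather than guessed, is what lets an inserted object pick up shading consistent with the rest of the scene.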
To test how well it worked, Karsch et al. showed study participants a series of images, some with no synthetic objects and some with synthetic objects inserted in one of three ways: with an existing light-estimation method, with their new algorithm using a simplified lighting model, or with their new algorithm in all its light-modeling glory. The subjects had computer science or graphics backgrounds.
"Surprisingly, subjects tended to do a worse job identifying the real picture as the study progressed," the authors explain in a paper describing their method. "These results indicate that people are not good at differentiating real from synthetic photographs, and that our method is state of the art."
The method could be used for video games, movies, home decorating or other uses. The work is slated to be presented at SIGGRAPH Asia 2011.
So... Photoshop?
Photography is now totally dead as an art form, RIP...
Moving photos? Maybe in a couple of years, we'll all have Harry Potter-style photos !! :D
That is cool. This definitely needs to be a Photoshop plugin! It also definitely needs to work with 3D animation tools like Blender and Maya.
Augmented reality just got a lot more advanced. E-hallucinations!
Oh, and that last part brought weeping angels to mind...
-Spouting a fountain of nonsense since 1995-
I thought the glowing globe ball thing in the beginning was cool and all the other pictures look real!
I've always said that one of the main issues with AR is that it doesn't take lighting into account (or if it does, not very well). But with this...well, I got the question wrong when they showed the 4 images and asked the viewer to pick the real one, so I think that says it all xD .
However, the other major AR problem--lag--won't be helped by this. In fact, without some advanced tracking (which would slow it down even more), it won't even work for video since it needs the references of the bounding box, close objects, and light sources. Tracking those would increase the lag, and I'd assume (given what I've seen of more simplistic AR) it would already lag from the insertion anyway, so...too slow.
But still, REALISTIC AR LIGHTING! Finally! :D
When science and art marry, it's a beautiful thing - reality is so monotone. My favorite part of the video was when the moving spherical light source did its magic. Ahhhh. My second favorite - the glass dragon under the tree branches. Bravo to these researchers and their fantabulous algorithms!
This seems a little scary. So now when some crime is being investigated and they have 'photo evidence' showing someone at a crime scene, were they really there, or was this technique used to set them up?