Over the past six months I’ve had the privilege of working with an outstanding group of folks from USC’s Interactive Media Division and the Institute for Creative Technologies on a new project called Viewfinder, directed by Michael Naimark. It’s a departure from my usual work in that it’s more of a pure research project: the goal is to make it easy for people to place their photographs into a 3D world model like Google Earth so that the image is perfectly aligned with the model. We launched the piece this week with a website, demo video, and coverage on the NY Times Bits technology blog.
As part of the project, we developed a browser-based 2D method for lining up a photograph with a Google Earth screen shot and then doing the necessary calculations to correctly “pose” the photo in Google Earth in 3D. This involved a Google Maps/Earth mashup developed by Will Carter that lets you pick a point on the earth in Google Maps and see the resulting location in Google Earth (a navigation method that turns out to be much easier than trying to move around at ground level in Google Earth itself).
The second part of the 2D method was a Flex application I developed that allows you to drop a photo on top of the Google Earth image and alter its scale and position until the two are aligned as closely as possible. Some trigonometry is then applied to generate the KML code that correctly places the photo in Google Earth. Once we got the workflow up and running, it was fascinating to try posing different kinds of images—my personal favorite was the high-angle matte painting of the United Nations building from Hitchcock’s North by Northwest (see below). It’s amazing to see how closely the painting matches the Google Earth image (especially considering the angle). I’ve posted a few more stills from the project as well.
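The core of that trigonometry step can be sketched in a few lines. The idea (this is my own illustrative reconstruction, not the actual Flex code—the function and parameter names are hypothetical) is that if the Google Earth screen shot was rendered with a known horizontal field of view, and the photo has been scaled to cover some fraction of that screen shot’s width, you can recover the photo’s own field of view for the KML camera with a tangent/arctangent pair, assuming a simple pinhole camera model:

```python
import math

def photo_fov_deg(screenshot_fov_deg, photo_width_px, screenshot_width_px):
    """Estimate a photo's horizontal field of view from its aligned width.

    Assumes a pinhole camera model: the screenshot spans a known angular
    width, and the photo covers a linear fraction of the image plane.
    """
    half_angle = math.radians(screenshot_fov_deg) / 2
    fraction = photo_width_px / screenshot_width_px
    # Linear fractions of the image plane map to angles via tan/atan,
    # not by simple proportion, hence the trigonometry.
    return math.degrees(2 * math.atan(fraction * math.tan(half_angle)))

# A photo scaled to half the screenshot's width sees a bit more than
# half the screenshot's 60-degree field of view (~32.2 degrees), because
# the tangent mapping is nonlinear near the edges of the frame.
print(photo_fov_deg(60.0, 400, 800))
```

A value like this would feed the camera or view-volume parameters in the generated KML; the actual Viewfinder application of course handles position and orientation as well.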
If you check the “Results” area of the website, you’ll see that we also developed a proof-of-concept for a 3D posing method in which the user drags 3D geometry around to match the photo while an algorithm interactively solves for the correct pose. This is hardcore computer science stuff and it was great to see the folks from ICT put this together. A fascinating experience overall.
Here’s a list of links to works cited in my recent talk “Storytelling in the Age of Divided Screens” at Gallaudet University.
I’m very happy to announce the launch of “Timeframing: The Art of Comics on Screens,” a new website that explores what comics have to teach us about creative communication in the age of screen media.
To celebrate the launch of Upgrade Soul, here’s a screen shot of an eleven-year-old prototype I made that sets artwork from Will Eisner’s “The Treasure of Avenue ‘C’” (a story from New York: The Big City) in two dynamically resizable panels.
The last couple of months have seen an uptick in published commentary on Strange Rain, much of it owing to notice the app received at this year’s Modern Language Association conference in Seattle.
Dialogue bubbles huddle together in the Unity authoring environment like backstage theatre performers awaiting their chance to shine in the forthcoming iOS and Android release Upgrade Soul, from Opertoon.