Photosynth is Zooming on Steroids



Check out Blaise Aguera y Arcas’ TED presentation on his Photosynth technology, which demonstrates a new way of manipulating our digital images.

Using photos of oft-snapped subjects (like Notre Dame) scraped from around the Web, Photosynth (based on Seadragon technology) creates breathtaking multidimensional spaces with zoom and navigation features that outstrip all expectation. Its architect, Blaise Aguera y Arcas, shows it off in this standing-ovation demo. Curious about that speck in the corner? Dive into a freefall and watch as the speck becomes a gargoyle. With an unpleasant grimace. And an ant-sized chip in its lower left molar. “Perhaps the most amazing demo I’ve seen this year,” wrote Ethan Zuckerman, after TED2007. Indeed, Photosynth might utterly transform the way we manipulate and experience digital images.

Thanks for the link, Drew!

8 Comments
  • csven

    June 18, 2007 at 12:56 am Reply

    Neat technology, but like most innovations it can be seen from a variety of perspectives. I pointed out a few of them and raised one issue on an earlier post of mine: http://blog.rebang.com/?p=1300 . However, I’d be more interested in hearing what you think about potential impact on camera designs. How might this feed back?

  • Design Translator

    June 18, 2007 at 9:27 am Reply

    Hey csven,
    As far as I can see, the big weakness of this technology is that the user still has to provide the metadata, unless the system is smart enough to do image recognition.
    If that is the case, cameras will start to get more intelligent. I can see GPS data being embedded into the photos, supplementing the current tags such as focal length, camera type, film speed, etc.
    Otherwise, I don't see this system radically changing cameras, as according to the video it seems to work pretty well off existing image collections from Flickr.
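    (As a rough illustration of the GPS-tag idea above: EXIF stores coordinates as degree/minute/second values plus a hemisphere reference, so a service stitching photos together would first convert them to decimal degrees for indexing. The helper below is a hypothetical sketch of that conversion, not part of any actual Photosynth or camera API.)

```python
# EXIF GPS tags hold latitude/longitude as degrees, minutes, seconds,
# plus a hemisphere reference ("N"/"S" or "E"/"W"). Converting to a
# single signed decimal-degree value makes photos easy to index by place.
def dms_to_decimal(degrees, minutes, seconds, ref):
    value = degrees + minutes / 60 + seconds / 3600
    # South and West hemispheres are conventionally negative.
    return -value if ref in ("S", "W") else value

# Notre Dame de Paris sits at roughly 48° 51' 11" N:
print(round(dms_to_decimal(48, 51, 11, "N"), 4))  # → 48.8531
```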

  • csven

    June 18, 2007 at 8:39 pm Reply

    The GPS data was the first thing that came to my mind as well. The other thing that occurred to me is using the timestamp that cameras already record. While watching the videos I kept thinking, “What if the current building is painted a new color?” or “What if they make an architectural change; an add-on?”
    I was also wondering if stereoscopic imaging might not get a boost. If the movie industry thinks it’s the future (?) then maybe prosumers will want a camera that allows them to take images that help construct 3D images. There aren’t many – any – photos on Flickr of my home, for example. A binocular camera might be an evolutionary step in the old standard simply because the flat 2D image may not be the way most people view “images” in the future. Think “Minority Report”.

  • Design Translator

    June 19, 2007 at 11:28 am Reply

    I think that is the end point eventually: a 3D photograph. At this time the example he presented must still use a 3D model of the church as a basis, with the photos stitched on top of it. I could be wrong.
    Eventually a 3D binocular photo could mean that this 3D model can truly be constructed from the data available.
    However, as far as I can see, Photosynth looks to me like a crowd-sourced program only, and with only crowd favourites it would be unlikely to have a representation of your house or mine, as we are not that famous. I don't think! I do think, though, that we are pushing this discussion to the next level, and they should really hire the two of us!

  • csven

    June 19, 2007 at 9:03 pm Reply

    A stereoscopic image isn’t a 3D model, though; and the film industry is working with a controlled situation in which the viewer’s perspective is angularly constrained. In other words, you only get a limited piece of the necessary data for full reconstruction. If the movie shows a 3D film of the front of an old church, but never shows the back or the interior, you get *only* the front (again, recall “Minority Report” and the 3D “film” displayed that fragments when viewed off-axis).
    So you’d still need additional points from where the stereoscopic image is taken and then something like Photosynth to stick a series of them together. And if someone took, say, a few hundred photos of their house, tagged it with the GPS, and let Photosynth go to work (effectively with a crowd-source of One), then a person could have a 3D model of their home. Imagine the impact on the real estate market.
    So, afaic, a stereoscopic camera doesn’t really do anything more than halve the number of times someone would press the shutter release. Or does it? Perhaps it might help make more accurate models by virtue of knowing that pairs of images are taken from identical positions.
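    (The “known baseline” point can be made concrete with the standard stereo triangulation relation z = f·B/d: a rigid binocular camera fixes the distance B between its two lenses, so depth follows directly from the pixel disparity between the pair of images. A toy sketch, with made-up numbers:)

```python
# Toy stereo triangulation. With a binocular camera the baseline B
# (lens separation) is fixed and known, so the depth of a matched
# feature is z = f * B / d, where f is the focal length in pixels
# and d is the horizontal disparity between the two images.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 10 cm baseline, 25 px disparity:
print(depth_from_disparity(1000, 0.10, 25))  # → 4.0 (metres)
```

    With single hand-held photos, by contrast, the relative camera positions have to be solved for; a fixed baseline removes one unknown per pair.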

  • csven

    June 19, 2007 at 9:09 pm Reply

    btw, for reference: http://blog.rebang.com/?p=1178

  • David Sanchez

    June 19, 2007 at 11:28 pm Reply

    All of a sudden, Photosynth makes the zombie Web 2.0 AJAX pedestrian.
    Envy polarizes. Amazing breakthrough with off-the-shelf “consumerism.”

  • Design Translator

    June 20, 2007 at 3:50 pm Reply

    Hi csven,
    I see where you are coming from. It would be nice if a photo allowed some kind of “limited panning,” i.e., Minority Report.
    Hi David,
    I noticed that too: all that amazing zooming seemed to be running off an IBM laptop. It must be some pretty lightweight code. I wonder if it was coded in Windows. Nah.
