Live Motion 3D Video Camera
Tim O'Reilly | Wed, Aug 16, 2006

The other day, Noel Gorelick of Arizona State University and Google Mars fame gave me an amazing demo of images taken with a very cool new 3D live motion video camera that uses LIDAR technology to capture a range measurement for every pixel. Advanced Scientific Concepts, the company that built the camera, is so young that they don't have a website up yet, but here's one of the images (moved to the second page because it's an animated GIF):
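Since every pixel carries a range, a single frame is already a point cloud. Here's a minimal sketch, not ASC's actual pipeline, of how a per-pixel range image back-projects into 3D with a simple pinhole model (the focal length and frame size are hypothetical, and range is treated as depth along the optical axis for simplicity):

    import numpy as np

    def range_image_to_points(depth, focal_px):
        """depth: (H, W) array of per-pixel range; returns (H*W, 3) points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - w / 2.0) * depth / focal_px  # right of the optical axis
        y = (v - h / 2.0) * depth / focal_px  # below the optical axis
        z = depth                             # distance along the axis
        return np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # e.g. one hypothetical 128x128 flash-LADAR frame, all pixels at 10 m
    points = range_image_to_points(np.full((128, 128), 10.0), focal_px=200.0)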

Imagine how a camera like this could be used to populate Second Life or Google Earth!

The creators of the technology did a Google Tech Talk about the 3D camera. I haven't watched it yet, but someone who was there said that the last 30-40 minutes are better than the beginning, so if it doesn't catch your attention, skip ahead.



Comments: 37

  Richard Dyce [08.16.06 01:21 PM]

Now, if they could just combine two or more shots...

  Paul [08.16.06 01:37 PM]

I wonder how it works when looking left or right.

  Mark Sylvester [08.16.06 01:38 PM]

Tim,

I think that it is ironic that the sample image above was taken of our hometown, Santa Barbara. That is the courthouse in all its Moorish beauty.

Prior to founding introNetworks, I was the co-founder of Wavefront, which became Alias Wavefront and then was bought by Autodesk. As a developer on the cutting edge of 3D, we had an opportunity to work with some amazing technologies, like Lidar.

In fact, the first production use of the technology was done by a crew at Panavision that did second unit work on the film Dinosaur by Disney. They used a beta version of Maya at the time and fed in point clouds of millions of points that were then mapped with texture (taken from Panavision cameras) and used as set pieces in the film. Amazing to see (at the time) what could be done in the creation of 'virtual sets'.

Now fast-forward these many years and imagine how cool it would/will be when you can point and shoot a venue and convert it to 3D data... ah...

One thing that is extra cool about the technology is that a color per vertex can be captured from the LIDAR and (with enough resolution in the point cloud) can substitute for texture mapping.

Now, knowing that Google is in the mix makes me think that someone, somewhere in the Googleplex is working on putting this on a button someplace and making it available for all of us.

Thanks for sharing and causing me to reminisce about how much fun it was being on the bleeding edge of computer animation.

Cheers
Mark
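On Mark's point about per-vertex color: when the point cloud is dense enough, each captured point can simply carry its own RGB sample instead of a texture map. A minimal sketch of storing such a colored cloud in the standard ASCII PLY format (the array names here are illustrative):

    import numpy as np

    def write_colored_ply(path, xyz, rgb):
        """xyz: (N, 3) float positions; rgb: (N, 3) uint8 colors per point."""
        with open(path, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write(f"element vertex {len(xyz)}\n")
            f.write("property float x\nproperty float y\nproperty float z\n")
            f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
            f.write("end_header\n")
            for (x, y, z), (r, g, b) in zip(xyz, rgb):
                f.write(f"{x} {y} {z} {r} {g} {b}\n")

    # e.g. 1000 random points, each carrying its own color sample
    write_colored_ply("scan.ply", np.random.rand(1000, 3),
                      np.random.randint(0, 256, (1000, 3), dtype=np.uint8))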

  Christian Cadeo [08.16.06 01:45 PM]

That is truly amazing.

  bob [08.16.06 03:08 PM]

What's the white stuff floating in midair?

  Peter Rothman [08.16.06 03:36 PM]

Hey, it's Mark Sylvester!

Very cool technology. I'm impressed, and I've seen probably a dozen different attempts at building something like this.

  Martin G. Smith [08.16.06 05:09 PM]

I was met with this image when I walked into the shop this afternoon. (The crew lets me know something is important by throwing it up on the 16' screen; it's an old-school analog Eidophor, http://www.spgv.com/columns/eidophor.html, hacked for HD.) My question is: can this system be adapted to QTVR? Acknowledging that the latter is a 'less live' version of the former, I have an application where it would be useful.

  Raj Bala [08.16.06 06:33 PM]


LIDAR was used extensively at the World Trade Center site after 9/11 to give the workers a map to show the extent of the actual damage. It wasn't very clear because of the smoke and fires.

  Search Engines WEB [08.16.06 09:22 PM]


www.mars.asu.edu/~gorelick/foo157.gif


...here's another image to go 'o-ooh' and 'a-aah' over.

  Tony Haddon [08.17.06 03:03 AM]

Bob,

I'm guessing that if the camera is shooting from a single point, there's going to be stuff that's not visible from the original position. Then when the scene rotates, those bits will be shown devoid of data...

My guess anyway.

  rob levy [08.17.06 03:28 AM]

I wonder if a network of cameras would be the answer, allowing multiple angles to be filmed at the same time and mixed into one video. Alternatively, filming stationary objects from all angles would provide a 3D version. The only problem would be tracking the cameras' locations over time. Maybe this is where Google's latest acquisition comes in! They have some nice image recognition tech which should be able to track the camera's position by recognising parts of the picture and their distances!

  Anonymous [08.17.06 04:18 AM]

Tony, that sort of artifacting is visible in the photo; it's a natural consequence of single-viewpoint capture. Lidar produces this kind of data, so multiple shots are usually combined and processed with a surfacing algorithm to generate complete geometry.
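To make "combining multiple shots" concrete, here is a minimal sketch of one step of point-to-point ICP (iterative closest point), a standard way to rigidly align two overlapping scans before surfacing. It's pure NumPy with brute-force matching; real pipelines use k-d trees and iterate to convergence, and all names here are illustrative:

    import numpy as np

    def icp_step(src, dst):
        """One rigid-alignment step: returns (R, t) moving src toward dst."""
        # Brute-force nearest neighbor in dst for every src point.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        # Best-fit rotation and translation (Kabsch algorithm).
        sc, mc = src.mean(axis=0), matched.mean(axis=0)
        H = (src - sc).T @ (matched - mc)
        U, _, Vt = np.linalg.svd(H)
        sign = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
        t = mc - R @ sc
        return R, t

    # Iterate src = src @ R.T + t until it converges, then run a surface
    # reconstruction (e.g. Poisson) over the merged cloud.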

  Peter Morse [08.17.06 05:45 AM]

There are a zillion applications for this in cultural heritage, archaeology and so forth. I hope they get away from the fairly nauseating military and surveillance imperatives and recognise the wonderful human and civilian opportunities there are. Great work that I hope a big company like Google would use to not 'be evil.'

  GAm [08.17.06 05:55 AM]

As much as the evil forces do exist, why are they the ones willing to put the money where the curiosity is in terms of technological research?

By the way, similar systems are currently being used for crime scene reconstructions. 3D cameras and visualization make up a busy field.

  gwinerreniwg [08.17.06 06:03 AM]

I saw a similar project from M$ last week. It likely uses different technology, but the result looks similar:

http://labs.live.com/photosynth/video.html

  Julien Marchand [08.17.06 06:23 AM]

This is clearly different from Photosynth, as we have a real 3D dataset here, captured in 3D. Photosynth uses many photographs to reconstruct a 3D approximation of the scene and then projects the photos onto it. Photosynth is not a tool to build 3D models per se; it's primarily a quick and easy way to view photographs belonging to the same set.

I'm not trying to dismiss their efforts (which are great, btw), but I think that this project has more potential than Photosynth, because you need a lot less computing to arrive at a 3D model.

  Shay [08.17.06 08:03 AM]

The Z-Cam has been doing similar things for a while (including things like 'bluescreening' using depth information). Based on Israeli military laser rangefinders, I believe.

http://www.3dvsystems.com/products/zcam.html

  Tim O'Reilly [08.17.06 09:06 AM]

Just to be clear, this is NOT a Google project. It was developed by a small company called Advanced Scientific Concepts. Noel Gorelick, the ASU scientist who worked on the Google Mars project, is involved as a consultant, and because of that connection, Noel and the company founders gave a tech talk at Google. So Google is obviously aware of this technology, but they have tech talks from outsiders all the time, so I wouldn't read too much into the fact that it was demoed there.

  Shrinkled [08.17.06 09:37 AM]

This is only sort of 3D. There is no left-to-right movement or real up-and-down. It does go up and down, but a blank is left because it has no clue what is behind the left and center palm trees' leaves or on the roof. It shows the depth of a picture. To do a full rotation around an object, several pictures are needed from different angles.

  James [08.17.06 09:47 AM]

Congratulations, they've ripped off Kai Krause's Canoma.

  anonymous [08.17.06 09:57 AM]

But if a woman was naked behind a towel, and I took a pic with this camera.....

  Jolyon [08.17.06 10:16 AM]

Blade Runner, anyone?!

  P.J. Onori [08.17.06 10:34 AM]

Very cool. It actually reminds me of an organization I worked for a while back that does something quite similar. They use LIDAR for archiving endangered architecture. It's quite a cool technology.

  Anonymous [08.17.06 01:28 PM]

The MS tech is more like Blade Runner. Deckard only pans around and zooms in on a pic; he doesn't recreate the scene in 3D.

The MS stuff is interesting because it can pull images from the web to match what you have. Throw in your vacation snaps of the Statue of Liberty and it searches for other pics from the web to add to your dataset.

  Joe Hunkins [08.17.06 10:14 PM]

Wow, cool. 2D is one of the great deficiencies of most of our media and this is helping to fix that.

  James [08.18.06 07:58 AM]

Sorry, I do not get the whole "Ooh and Aah" of this video clip. Please someone, what does the clip demonstrate?

I understand that the camera provides depth per pixel, but how is that related to the video clip? What is the attraction of "live motion"? Is the "3D" supposed to eventually provide stereoscopic/multiviews?

  Reed [08.18.06 11:00 AM]

You don't necessarily need LIDAR; you can extract 3D information from a multi-camera system that can be moved around in space. It will have certain errors, though. I guess using the laser to find range information is much more straightforward.
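Reed's multi-camera alternative is classic stereo triangulation: for two rectified cameras with focal length f (in pixels) and baseline B, a feature's pixel disparity d gives its depth Z = f * B / d. A minimal sketch with hypothetical numbers; the quadratic growth of depth error with distance is the "certain errors" he mentions:

    def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Depth of a point from its disparity between two rectified cameras."""
        return focal_px * baseline_m / disparity_px

    # e.g. f = 800 px, B = 0.12 m, d = 16 px  ->  Z = 6.0 m
    print(stereo_depth(800.0, 0.12, 16.0))

    # A one-pixel disparity error at that depth shifts Z by roughly
    # Z^2 / (f * B) = 36 / 96 = 0.375 m, and the error grows with Z^2.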

  Sophie Kahn [08.19.06 12:59 PM]

Now, I love Canoma, & still use it in my artwork, but a) it would have taken several days to model that scene, b) it would have been full of perspective errors, seams, and other artifacts and c) you could only use spheres, cubes and planes in your modelling; those palm leaves would have been pretty tricky... I'm not sure how labour-intensive this is, but I can't imagine it would approach Canoma, PhotoModeler et al.

  Yucky yuks [08.20.06 03:41 PM]

Where's the 3D Jessica Simpson video?

  Jadawin [08.28.06 03:09 PM]

Here's (most of) the blurb from the TechTalk video; perhaps that might explain why this is an 'ooh-ah' sorta thing. :)


Google TechTalks, August 9, 2006

ABSTRACT: Advanced Scientific Concepts has developed a 3D camera unlike any other in existence. At video frame rates (30 Hz) their solid-state flash LADAR system is able to simultaneously measure the distance to every point in the scene by recording the time-of-flight of a laser pulse. At full speed the camera collects 500,000 range points per second using a 1.57 µm eye-safe laser that has been successfully tested at distances greater than 5 km. The entire system is the size of a shoebox and weighs only 12 pounds. It uses less than 60 watts of power and can be controlled from a laptop.
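The abstract's numbers hang together, as a quick back-of-the-envelope check shows. Note that the 128x128 array size below is my assumption (it happens to fit 500,000 points/s at 30 Hz); the abstract doesn't state the sensor resolution:

    # Sanity-checking the TechTalk abstract's figures.
    C = 299_792_458.0                 # speed of light, m/s

    points_per_frame = 500_000 / 30   # ~16,667 range points per frame,
    print(points_per_frame)           # close to 128 * 128 = 16,384 pixels

    # Time-of-flight: a pulse returning from 5 km is in flight ~33 us.
    print(2 * 5_000 / C)              # ~3.3e-5 s round trip

    # Resolving range to ~15 cm means timing each return to ~1 ns.
    print(2 * 0.15 / C)               # ~1.0e-9 s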

  Anonymous [09.06.06 01:17 PM]

This is terrific stuff!

I saw a demo similar to this at ETech earlier this year - but that one was much more cartoonish than this one. Amazing detail here in comparison.

I want this in production - I suspect my neighbor is growing a weed garden but I need proof!

  Hawk [09.19.06 02:28 AM]

Good video, but it requires higher speed to be really useful. Still, it's good enough for Google Earth.

  lidar [09.21.06 09:17 PM]

I recently reviewed technology developed by a Utah-based company; they coupled lidar with CMOS sensors and GPS. The camera handled motion of both the object and the apparatus, thus allowing multiple perspectives of a given scene, or a complete model that could then be displayed. The crux is in the size of the data set generated, as the detail was in the sub-millimeter range. Very cool stuff coming soon from this company; they indicated they were in early-stage dialogue with game, film, and e-commerce companies, which led me to believe they have software to handle the file-size issue, especially if they can display such data via the internet. I am sure Google will be hunting this...

  anonymous [09.25.06 09:50 PM]

Found their website; it must be rather new:

http://www.advancedscientificconcepts.com/

  Ranny [11.13.06 05:42 AM]

Take a look at this out of Michigan. cplanet

  Fishcough [07.13.07 05:06 PM]

Gorelick was just hired by Google (he started work at Mountain View this week). Pairing this tech with Google's new Street View, who knows what's on the horizon next?

  Anonymous [10.06.07 04:17 AM]

Hi,
It's cool, yaar. It's really very nice; it looks like a flower.
