One of the things that distinguishes the researchers' new system from earlier high-speed imaging systems is that it can capture light scattering below the surfaces of solid objects, such as the tomato depicted here.
Image: Di Wu and Andreas Velten, MIT Media Lab

Trillion-frame-per-second video

By using optical equipment in a totally unexpected way, MIT researchers have created an imaging system that makes light look slow.


Press Contact

Kimberly Allen
Email: allenkc@mit.edu
Phone: 617-253-2702
MIT News Office


MIT researchers have created a new imaging system that can acquire visual data at a rate of one trillion exposures per second. That’s fast enough to produce a slow-motion video of a burst of light traveling the length of a one-liter bottle, bouncing off the cap and reflecting back to the bottle’s bottom.
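A quick back-of-the-envelope check, using only the stated frame rate and the speed of light (a sketch for illustration, not a figure from the researchers), shows why that rate suffices:

```python
# Rough check: how far does light travel between exposures at a trillion
# exposures per second? (Illustrative arithmetic only, not the group's code.)
c = 2.998e8           # speed of light in vacuum, meters per second
frame_rate = 1.0e12   # exposures per second

meters_per_frame = c / frame_rate
print(f"Light advances about {meters_per_frame * 1e3:.2f} mm per exposure")
# -> roughly 0.3 mm per exposure, so a pulse takes on the order of a
#    thousand frames to traverse a one-liter bottle (~25-30 cm tall)
```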

Media Lab postdoc Andreas Velten, one of the system’s developers, calls it the “ultimate” in slow motion: “There’s nothing in the universe that looks fast to this camera,” he says.

Video: Melanie Gonick

The system relies on a recent technology called a streak camera, deployed in a totally unexpected way. The aperture of the streak camera is a narrow slit. Particles of light — photons — enter the camera through the slit and are converted into electrons, which pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects the electrons corresponding to late-arriving photons more than it does those corresponding to early-arriving ones.

The image produced by the camera is thus two-dimensional, but only one of the dimensions — the one corresponding to the direction of the slit — is spatial. The other dimension, corresponding to the degree of deflection, is time. The image thus represents the time of arrival of photons passing through a one-dimensional slice of space.
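To make that space-versus-time readout concrete, here is a minimal toy model in Python. The bin counts, time window and function name are illustrative assumptions, not specifications of the actual camera:

```python
import numpy as np

# Toy model of a streak-camera readout (hypothetical parameters): each
# detected photon has a position x along the slit and an arrival time t;
# a rapidly varying field maps t to a deflection, so the 2D readout has
# one spatial axis (slit position) and one temporal axis (arrival time).

def streak_readout(photons, n_slit_bins=64, n_time_bins=512, window_ps=500.0):
    """photons: iterable of (x, t), x in [0, 1) along the slit, t in picoseconds."""
    image = np.zeros((n_slit_bins, n_time_bins))
    for x, t in photons:
        row = int(x * n_slit_bins)              # position along the slit
        col = int(t / window_ps * n_time_bins)  # deflection grows with arrival time
        if 0 <= row < n_slit_bins and 0 <= col < n_time_bins:
            image[row, col] += 1                # accumulate photon counts
    return image  # rows: slit position, columns: time of arrival

# Example: a short pulse arriving roughly 100 ps after the trigger
rng = np.random.default_rng(0)
pulse = zip(rng.uniform(0, 1, 1000), rng.normal(100.0, 2.0, 1000))
print(streak_readout(pulse).sum())  # all 1,000 photons land in the readout
```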

The camera was intended for use in experiments where light passes through or is emitted by a chemical sample. Since chemists are chiefly interested in the wavelengths of light that a sample absorbs, or in how the intensity of the emitted light changes over time, the fact that the camera registers only one spatial dimension is irrelevant.

But it’s a serious drawback in a video camera. To produce their super-slow-mo videos, Velten, Media Lab Associate Professor Ramesh Raskar and Moungi Bawendi, the Lester Wolfe Professor of Chemistry, must perform the same experiment — such as passing a light pulse through a bottle — over and over, continually repositioning the streak camera to gradually build up a two-dimensional image. Synchronizing the camera and the laser that generates the pulse, so that the timing of every exposure is the same, requires a battery of sophisticated optical equipment and exquisite mechanical control. It takes only a nanosecond — a billionth of a second — for light to scatter through a bottle, but it takes about an hour to collect all the data necessary for the final video. For that reason, Raskar calls the new system “the world’s slowest fastest camera.”
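In outline, the acquisition works something like the sketch below. The function names, scan-line count and image shapes are placeholders for illustration; the group's actual hardware control and synchronization are far more involved:

```python
import numpy as np

def acquire_video(fire_pulse_and_read_streak, n_scan_lines=600):
    """Illustrative scan loop (hypothetical API): the callable is assumed to
    trigger one synchronized laser pulse and return one streak image of shape
    (n_slit_bins, n_time_bins) for the requested scan line."""
    scan_lines = []
    for y in range(n_scan_lines):
        # Reposition the camera's view by one line, fire the pulse, and record
        # one 1D-space x time streak image for this row of the scene.
        scan_lines.append(fire_pulse_and_read_streak(y))
    cube = np.stack(scan_lines)       # shape: (y, x, t)
    # Each time slice of the cube is one frame of the slow-motion video.
    return np.moveaxis(cube, -1, 0)   # shape: (t, y, x), i.e. frames over time

# Usage with a dummy stand-in for the hardware:
frames = acquire_video(lambda y: np.zeros((64, 512)), n_scan_lines=4)
print(frames.shape)  # (512, 4, 64): 512 frames, each 4 x 64 pixels
```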

Doing the math

After an hour, the researchers accumulate hundreds of thousands of data sets, each of which plots the one-dimensional positions of photons against their times of arrival. Raskar, Velten and other members of Raskar’s Camera Culture group at the Media Lab developed algorithms that can stitch that raw data into a set of sequential two-dimensional images.
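One simple ingredient of such a reconstruction can be sketched as follows. This is an assumption-laden simplification that only averages repeated readouts of the same scan line to suppress noise; the group's published algorithms do considerably more:

```python
import numpy as np

def average_scan_line(readouts):
    """Average many streak images recorded at one scan position.

    Assumes the pulse and timing are identical from shot to shot, so the
    photon-count images can simply be averaged; a real reconstruction also
    has to handle calibration and scene geometry."""
    stack = np.stack(readouts)   # shape: (n_repeats, n_slit_bins, n_time_bins)
    return stack.mean(axis=0)    # one averaged slit-position x time image

# Usage: combine 100 noisy repeats of one scan line into a cleaner estimate
noisy = [np.random.poisson(5.0, size=(64, 512)) for _ in range(100)]
print(average_scan_line(noisy).shape)  # (64, 512)
```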

The streak camera and the laser that generates the light pulses — both cutting-edge devices with a cumulative price tag of $250,000 — were provided by Bawendi, a pioneer in research on quantum dots: tiny, light-emitting clusters of semiconductor particles that have potential applications in quantum computing, video-display technology, biological imaging, solar cells and a host of other areas.


Media Lab postdoc Andreas Velten, left, and Associate Professor Ramesh Raskar with the experimental setup they used to produce slow-motion video of light scattering through a plastic bottle.
Photo: M. Scott Brauer

The trillion-frame-per-second imaging system, which the researchers have presented both at the Optical Society's Computational Optical Sensing and Imaging conference and at Siggraph, is a spinoff of another Camera Culture project, a camera that can see around corners. That camera works by bouncing light off a reflective surface — say, the wall opposite a doorway — and measuring the time it takes different photons to return. But while both systems use ultrashort bursts of laser light and streak cameras, the arrangement of their other optical components and their reconstruction algorithms are tailored to their disparate tasks.
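The time-of-flight relation behind that around-the-corner camera is simple to state; the sketch below uses a made-up return time purely for illustration, and recovering hidden shapes from many such measurements is the hard part:

```python
C_MM_PER_PS = 0.2998   # speed of light, in millimeters per picosecond

def path_length_mm(return_time_ps):
    """Total path length implied by a photon's measured return time.
    (Simplified: the real reconstruction must untangle multi-bounce paths.)"""
    return C_MM_PER_PS * return_time_ps

print(path_length_mm(1000.0))  # a 1 ns return time corresponds to ~300 mm of travel
```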

Because the ultrafast-imaging system requires multiple passes to produce its videos, it can’t record events that aren’t exactly repeatable. Any practical applications will probably involve cases where the way in which light scatters — or bounces around as it strikes different surfaces — is itself a source of useful information. Those cases may, however, include analyses of the physical structure of both manufactured materials and biological tissues — “like ultrasound with light,” as Raskar puts it.

As a longtime camera researcher, Raskar also sees a potential application in the development of better camera flashes. “An ultimate dream is, how do you create studio-like lighting from a compact flash? How can I take a portable camera that has a tiny flash and create the illusion that I have all these umbrellas, and sport lights, and so on?” asks Raskar, the NEC Career Development Associate Professor of Media Arts and Sciences. “With our ultrafast imaging, we can actually analyze how the photons are traveling through the world. And then we can recreate a new photo by creating the illusion that the photons started somewhere else.”

“It’s very interesting work. I am very impressed,” says Nils Abramson, a professor of applied holography at Sweden’s Royal Institute of Technology. In the late 1970s, Abramson pioneered a technique called light-in-flight holography, which ultimately proved able to capture images of light waves at a rate of 100 billion frames per second.

But as Abramson points out, his technique requires so-called coherent light, meaning that the troughs and crests of the light waves that produce the image have to line up with each other. “If you happen to destroy the coherence when the light is passing through different objects, then it doesn’t work,” Abramson says. “So I think it’s much better if you can use ordinary light, which Ramesh does.”

Indeed, Velten says, “As photons bounce around in the scene or inside objects, they lose coherence. Only an incoherent detection method like ours can see those photons.” And those photons, Velten says, could let researchers “learn more about the material properties of the objects, about what is under their surface and about the layout of the scene. Because we can see those photons, we could use them to look inside objects — for example, for medical imaging, or to identify materials.”

“I’m surprised that the method I’ve been using has not been more popular,” Abramson adds. “I’ve felt rather alone. I’m very glad that someone else is doing something similar. Because I think there are many interesting things to find when you can do this sort of study of the light itself.”


Topics: Computational photography, Femtosecond lasers, Faculty, Frames per second (FPS), Graduate, postdoctoral, Media Lab, Research, Streak camera, Students, Ultrafast imaging

Comments

Cool! Now can they do this for a laser in the double slit experiment?
Same question here. Are we seeing the light pulse as a wave traveling through the medium, or as a group of particles traversing the medium?
"photons enter the camera through the slit and pass through an electric field that deflects them in a direction perpendicular to the slit." Can someone explain to me how photons are deflected by an electric field?
They are of course not. According to wiki, "optoelectronic streak cameras work by directing the light onto a photocathode, which when hit by photons produces electrons via the photoelectric effect". So they first convert light into electron stream, which is then easily deflected using electric or magnetic field.
There are several problems with using this technology for watching the double slit experiment. A laser beam traveling through space will not be captured by the camera because there are no photons being scattered back toward the camera. If you see more videos of this group, you will see that all videos of actual beams propagating are in milky water--a scattering medium--or just reflections off objects. You need some sort of scattering medium, which would then be interfering with your double slit experiment.
Starting at about 2:30 in this video: [ http://www.youtube.com/watch?v=-fSqFWcb4rE ] you can see the bottle's label. I imagine MIT is already in negotiation with Coca Cola, and will get a hefty sum before we get to watch it on a TV ad. Can't get much more practical than that. A bit more seriously, I'm still not sure what we're seeing. What photons are getting into the sensor(s)? Are photons from a source laser bouncing off the photons going through the bottle and then returning to the camera to generate each slit view?
At one trillion exposures per second, isn't that faster than the speed of light?
rdowling, nohous: you're right. Sorry for the oversight. We've corrected the text of the story accordingly. - Larry Hardesty
This seems to be a bit misleading. They are not tracking one photon but separate groups of photons at specifically different times. It is not slow-motion photography of a single photon, or of a single collection of photons in a pulsed group.
Speed of light: 3.0 x 10^8 meters/second Speed of camera: 1 x 10^12 exposures/second Do some dimensional analysis to find that: You are getting 3.3 x 10^3 exposures per meter. You get one exposure for each .3 millimeters (300 micrometers) the light travels.... That's pretty quick. Quick enough to produce a fluid video of light movement, actually.
I agree. I kind of get that we're seeing light bounce off of the apple, but I think watching a flashlight pierce through fog would be a better showcase of the technology, no?
Had the same thought here (that's why I searched this site in order to post). But what I'd like to see is an optical grating: how the laser beam is split into several beams.
Most interesting, indeed! Several decades ago, 'scope bandwidths were extended radically by recurrent sampling and holding; they could only observe events that repeated consistently. Each sample (many picoseconds) typically was taken a few nanoseconds or so later than the previous, to build up the displayed waveform. (When complete, sampling time went back to "earliest".)(I'll skip random sampling, for now.) This technique is analogous to how a waveform display was created in these 'scopes. Tektronix was the principal manufacturer.
Please forgive my ignorance, but this question popped into my head and I am in no way educated enough to answer it myself although I have the feeling I already know the answer is no. Could this technology be used in conjunction with a particle accelerator to film the exact moment that particles were collided together?
I have data on a single femtosecond "snapshot" of multi-spectral light, in which the light is distorted in both the spatial and temporal domains, then "stretched out" to about 10 minutes using an "inverse time-lapse" approach. What appears on the substrate is photon migration, along with different features that look like actual photon waves and electron clouds. I need quite the pipeline to process the images and do flybys to explore what look like superstructures resembling the Grand Canyon. I had the guys at Maplesoft help me out a couple of years ago, but I ran out of money and am still looking for sponsors. I decided to do an independent movie on it called QFLUX: The Movie. My next path is to get crowdfunding to complete it. I had myself set up to present a paper at SMPTE 2011, but it was essentially withdrawn because I had just started working for a 3D rig company and didn't want a conflict of interest. Anybody interested in funding? Write me. drdude1@verizon.net
…was that if we were recording a star through a telescope and went from zoomed in to zoomed out at the right speed, we would keep recording the same instant if desired, or one slightly closer, capturing each of the next as it gets nearer. We would then have something as close as possible to slow motion, aside from the possibility of getting a closer look and capturing even the changes in between the reflections.
Somebody told me that if you could send a pulse of light through a colloidal solution, when the exit beam hits a surface, the beam more or less remains sharp (minus the effects of index changes through media), but as the rest of the pulse arrives, it starts becoming increasingly fuzzy, as a result of the rest of the light having to take longer and longer paths (via reflection within the solution). Is this true? Could you at least create a video comparing the normal vs the pico world?
Since there is only one spatial dimension, isn't it possible to record an event in a single pass with multiple cameras situated at different positions considering the economic constraints are relaxed ?
I viewed the TED video, and the first images were correct... UNLESS you took into consideration the energy cancellation of the different reflective surfaces. What is being seen is the maximum representation of energy, from as many points as possible. We know it cannot exceed the original distribution; this distribution radiates outward from the bottle, then reflects off the table and hits the dissipated bottle reflection. This does not explain the wave rippling outward unless we add the reflected light energy from the light sensor, which takes that 2x and makes it 4x. This does not happen at once; it builds up to a maximum and then falls below the recorded input, or oscillates. This oscillation is the hidden factor of energy distribution, which may have many variables or just one. The fixed image is just a changed perspective of another viewer!
I have a doubt: if you are taking photos around every 1.7 ps, does this mean that electrons are running inside the conductors twice as fast as light? Is that correct? And what kind of supercomputer do you use that allows recording the amount of data generated in 10^-12 seconds? Thanks.
This device would have problems with the double-slit experiment, but with some minor modifications the double-slit experiment is easy and will answer an age-old question about time and the perception of time.
I'm wondering about the energy released and the clues to atoms in transition. Could this camera work on extending hydrogen's emission spectrum into the UV and IR? We only get fuzzy pictures now.