
Researchers amplify variations in video, making the invisible visible

New software amplifies changes in successive frames of video that are too subtle for the naked eye to detect.
Caption: In these frames of video, a new algorithm amplifies the almost imperceptible change in skin color caused by the pumping of the blood.
Credits: Photo: Michael Rubinstein

At this summer's Siggraph — the premier computer-graphics conference — researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) will present new software that amplifies variations in successive frames of video that are imperceptible to the naked eye. So, for instance, the software makes it possible to actually "see" someone's pulse, as the skin reddens and pales with the flow of blood, and it can exaggerate tiny motions, making visible the vibrations of individual guitar strings or the breathing of a swaddled infant in a neonatal intensive care unit.

The system is somewhat akin to the equalizer in a stereo sound system, which boosts some frequencies and cuts others, except that the pertinent frequency is the frequency of color changes in a sequence of video frames, not the frequency of an audio signal. The prototype of the software allows the user to specify the frequency range of interest and the degree of amplification. The software works in real time and displays both the original video and the altered version of the video, with changes magnified.
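The article does not include the researchers' code, but the idea it describes can be sketched in a few lines: treat each pixel's value over time as a signal, band-pass it in the user-chosen frequency range, amplify the result, and add it back to the frames. The sketch below, in Python with NumPy and SciPy, is an illustration under those assumptions; the function name, default frequency band, and amplification factor are hypothetical, and it omits the spatial decomposition a full implementation would use.

```python
# Minimal sketch of temporal "equalizer"-style amplification: band-pass each
# pixel's time series, scale the filtered variation, and add it back.
import numpy as np
from scipy.signal import butter, filtfilt

def amplify_color(frames, fps, low_hz=0.8, high_hz=2.0, alpha=50.0):
    """frames: float array of shape (time, height, width, channels), values in [0, 1]."""
    # Design a temporal band-pass filter for the chosen frequency range.
    nyquist = fps / 2.0
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")

    # Filter every pixel's time series (axis 0 is time).
    filtered = filtfilt(b, a, frames, axis=0)

    # Amplify the filtered variation and add it back to the original frames.
    return np.clip(frames + alpha * filtered, 0.0, 1.0)
```

The default band of roughly 0.8-2 Hz corresponds to a plausible resting heart rate; in practice the user would dial in whatever band matches the phenomenon of interest.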



Although the technique lends itself most naturally to phenomena that recur at regular intervals — such as the beating of a heart, the movement of a vibrating string or the inflation of the lungs — if the range of frequencies is wide enough, the system can amplify changes that occur only once. So, for instance, it could be used to compare different images of the same scene, allowing the user to easily pick out changes that might otherwise go unnoticed. In one set of experiments, the system was able to dramatically amplify the movement of shadows in a street scene photographed only twice, at an interval of about 15 seconds.
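As a rough illustration of the one-time-change case described above (and not the researchers' pipeline), the same amplify-and-add-back idea can be applied to the difference between two photographs of a scene, so that small changes such as drifting shadows become obvious. The function name and amplification factor below are hypothetical.

```python
import numpy as np

def exaggerate_change(image_a, image_b, alpha=10.0):
    """image_a, image_b: float arrays of the same shape, values in [0, 1]."""
    # Amplify whatever changed between the two photos and add it back.
    difference = image_b - image_a
    return np.clip(image_a + alpha * difference, 0.0, 1.0)
```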

Happy accident

The MIT researchers — graduate student Michael Rubinstein, recent alumni Hao-Yu Wu '12, MNG '12 and Eugene Shih SM '01, PhD '10, and professors William Freeman, Fredo Durand and John Guttag — intended the system to amplify color changes, but in their initial experiments, they found that it amplified motion as well. "We started from amplifying color, and we noticed that we'd get this nice effect, that motion is also amplified," Rubinstein says. "So we went back, figured out exactly why that happens, studied it well, and saw how we can incorporate that to do better motion amplification."

Using the system to amplify motion rather than color requires a different kind of filtration, and it works well only if the motions are relatively small. But of course, those are exactly the motions whose amplification would be of interest.
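Why amplifying intensity changes doubles as motion amplification, and why it only works for small motions, can be sketched with a first-order argument. The notation below is a simplification for a one-dimensional image signal undergoing a small translation δ(t), offered as an illustration rather than a restatement of the paper's full derivation.

```latex
% A frame at time t is the base signal f shifted by a small displacement delta(t):
\[
I(x, t) = f\big(x + \delta(t)\big)
        \approx f(x) + \delta(t)\,\frac{\partial f}{\partial x}.
\]
% Amplifying the temporal variation by a factor alpha and adding it back gives
\[
\tilde{I}(x, t) = I(x, t) + \alpha\big(I(x, t) - f(x)\big)
              \approx f(x) + (1 + \alpha)\,\delta(t)\,\frac{\partial f}{\partial x}
              \approx f\big(x + (1 + \alpha)\,\delta(t)\big),
\]
% i.e. the motion appears magnified by (1 + alpha). The last step holds only
% while (1 + alpha) * delta(t) stays small relative to the spatial scale of f,
% which is why the approach suits subtle motions.
```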

Rubinstein envisions that, among other applications, the system could be used for "contactless monitoring" of hospital patients' vital signs. Boosting one set of frequencies would allow measurement of pulse rates, via subtle changes in skin coloration; boosting another set of frequencies would allow monitoring of breathing. The approach could be particularly useful with infants who are born prematurely or otherwise require early medical attention. "Their bodies are so fragile, you want to attach as few sensors as possible," Rubinstein says.
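To illustrate how boosting one set of frequencies could yield a pulse estimate, the sketch below averages the green channel of each frame and picks the dominant frequency within a plausible resting-heart-rate band. The channel choice, region choice, and band limits are assumptions for illustration; a real monitoring system would track a specific skin region and validate the signal.

```python
# Hypothetical pulse estimate from a video: find the dominant frequency of the
# average skin color within a heart-rate band.
import numpy as np

def estimate_pulse_bpm(frames, fps, low_hz=0.8, high_hz=2.0):
    """frames: float RGB array of shape (time, height, width, channels)."""
    # Average the green channel over the whole frame (a real system would
    # track a skin region such as the forehead).
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    # Locate the strongest frequency in the heart-rate band.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0  # beats per minute
```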

Similarly, Rubinstein says, the system could be used to augment video baby monitors for the home, so that the respiration of sleeping infants would be clearly visible. A father himself, Rubinstein says that he and his wife equipped their daughter's crib with commercial pressure sensors intended to gauge motion and reassure anxious parents that their children are still breathing. "Those are kind of expensive," Rubinstein says, "and some people really complain about getting false positives with them. So I can really see how this type of technique will be able to work better."

In their paper, the researchers describe experiments in which they began investigating both of these applications. But since they've begun giving talks on the work, Rubinstein says, colleagues have proposed a range of other possible uses, from laparoscopic imaging of internal organs, to long-range surveillance systems that magnify subtle motions, to contactless lie detection based on pulse rate.

"It's a fantastic result," says Maneesh Agrawala, an associate professor in the electrical engineering and computer science department at the University of California at Berkeley, and director of the department's Visualization Lab. Agrawala points out that Freeman and Durand were part of a team of MIT researchers who made a splash at the 2005 Siggraph with a paper on motion magnification in video. "This approach is both simpler and allows you to see some things that you couldn't see with that old approach," Agrawala says. "The simplicity of the approach makes it something that has the possibility for application in a number of places. I think we'll see a lot of people implementing it because it's fairly straightforward."
