• Maurice Fallon, a research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory, demonstrates how a user would wear the sensor.

    Photo: Patrick Gillooly


Automatic building mapping could help emergency responders

A prototype sensor array that can be worn on the chest automatically maps the wearer’s environment, recognizing movement between floors.


Press Contact

Sarah McDonnell
Email: s_mcd@mit.edu
Phone: 617-253-8923
MIT News Office


MIT researchers have built a wearable sensor system that automatically creates a digital map of the environment through which the wearer is moving. The prototype system, described in a paper slated for the Intelligent Robots and Systems conference in Portugal next month, is envisioned as a tool to help emergency responders coordinate disaster response.

The prototype sensor included a stripped-down Microsoft Kinect camera (top) and a laser rangefinder (bottom), which looks something like a camera lens seen side-on.
Photo: Patrick Gillooly

In experiments conducted on the MIT campus, a graduate student wearing the sensor system wandered the halls, and the sensors wirelessly relayed data to a laptop in a distant conference room. Observers in the conference room were able to track the student’s progress on a map that sprang into being as he moved.

Connected to the array of sensors is a handheld pushbutton device that the wearer can use to annotate the map. In the prototype system, depressing the button simply designates a particular location as a point of interest. But the researchers envision that emergency responders could use a similar system to add voice or text tags to the map — indicating, say, structural damage or a toxic spill.
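
In software terms, each button press could amount to little more than stamping the current pose estimate with a time and an optional tag. Here is a minimal Python sketch of that idea; `MapAnnotation` and `on_button_press` are hypothetical names invented for illustration, not part of the MIT system:

```python
from dataclasses import dataclass, field
import time

@dataclass
class MapAnnotation:
    """One button press: a point of interest pinned to the wearer's pose."""
    x: float          # estimated position on the floor plan, in meters
    y: float
    floor: int        # which story the wearer is on
    timestamp: float = field(default_factory=time.time)
    note: str = ""    # envisioned extension: a voice or text tag,
                      # e.g. "structural damage" or "toxic spill"

annotations: list[MapAnnotation] = []

def on_button_press(x: float, y: float, floor: int) -> None:
    """Handler the handheld pushbutton device would trigger (hypothetical)."""
    annotations.append(MapAnnotation(x, y, floor))
```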

“The operational scenario that was envisioned for this was a hazmat situation where people are suited up with the full suit, and they go in and explore an environment,” says Maurice Fallon, a research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory, and lead author on the new paper. “The current approach would be to textually summarize what they had seen afterward — ‘I went into this room on the left, I saw this, I went into the next room,’ and so on. We want to try to automate that.”

Fallon is joined on the paper by professors John Leonard and Seth Teller, of, respectively, the departments of Mechanical Engineering and of Electrical Engineering and Computer Science (EECS), and EECS grad students Hordur Johannsson and Jonathan Brookshire.

Shaky aim

The new work builds on previous research on systems that enable robots to map their environments. But adapting the system so that a human could wear it required a number of modifications.

Maurice Fallon, a research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory, demonstrates how the sensor is worn.
Photo: Patrick Gillooly

One of the sensors that the system uses is a laser rangefinder, which sweeps a laser beam around a 270-degree arc and measures the time that it takes the light pulses to return. If the rangefinder is level, it can provide very accurate information about the distance of the nearest walls, but a walking human jostles it much more than a rolling robot does. Similarly, sensors in a robot’s wheels can provide accurate information about its physical orientation and the distances it covers, but that’s missing with humans. And as emergency workers responding to a disaster might have to move among several floors of a building, the system also has to recognize changes in altitude, so it doesn’t inadvertently overlay the map of one floor with information about a different one.
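
As a rough illustration of the tilt problem, the Python sketch below projects one laser scan into a gravity-aligned frame, assuming roll and pitch estimates are already available (the gyroscopes discussed below supply exactly this). It is a minimal sketch of the geometry, not the researchers' actual correction, and it ignores beams that a tilted scanner would bounce off the floor or ceiling:

```python
import numpy as np

def project_scan_to_plane(ranges, angles, roll, pitch):
    """Project a tilted 2-D laser scan onto the horizontal plane.

    ranges : (N,) distances from the rangefinder, in meters
    angles : (N,) beam angles across the 270-degree sweep, in radians
    roll, pitch : sensor tilt, in radians

    Returns (N, 2) x/y points in a level, gravity-aligned frame.
    """
    # Points expressed in the sensor's own (tilted) scan plane.
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)

    # Rotations undoing roll (about x) and pitch (about y); the
    # composition order is a convention choice in this sketch.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])

    level = pts @ (Ry @ Rx).T   # rotate row vectors into the level frame
    return level[:, :2]         # keep only the horizontal components
```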

So in addition to the rangefinder, the researchers also equipped their sensor platform with a cluster of accelerometers and gyroscopes, a camera, and, in one group of experiments, a barometer (changes in air pressure proved to be a surprisingly good indicator of floor transitions). The gyroscopes could infer when the rangefinder was tilted — information the mapping algorithms could use in interpreting its readings — and the accelerometers provided some information about the wearer’s velocity and very good information about changes in altitude.
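
The barometer's role can be made concrete with a small sketch. Near sea level, atmospheric pressure falls by roughly 0.12 hPa per meter of ascent, so a sustained pressure shift of about a story's height stands out clearly. The constants and thresholds below are illustrative assumptions, not values from the paper:

```python
def detect_floor_changes(pressures, dt=1.0, threshold_m=2.5):
    """Flag probable floor transitions in a barometric pressure trace.

    pressures   : pressure samples in hPa, taken every dt seconds
    threshold_m : altitude change treated as a floor transition
                  (2.5 m is an assumed, roughly one-story value)

    Returns a list of (time_in_seconds, altitude_change_in_meters).
    """
    HPA_PER_METER = 0.12          # approximate near-sea-level gradient
    changes = []
    baseline = pressures[0]
    for i, p in enumerate(pressures):
        # Rising altitude means falling pressure, so up is positive here.
        altitude_delta = (baseline - p) / HPA_PER_METER
        if abs(altitude_delta) > threshold_m:
            changes.append((i * dt, altitude_delta))
            baseline = p          # re-anchor the baseline on the new floor
    return changes
```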

Adjudicating the data from all the other sensors is the camera. Every few meters, the camera takes a snapshot of its surroundings, and software extracts a couple of hundred visual features from the image — particular patterns of color, or contours, or inferred three-dimensional shapes. Each batch of features is associated with a particular location on the map.
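
The article does not name the feature detector the system uses, so the sketch below substitutes ORB features from OpenCV to show the general shape of the step: extract a couple of hundred features from a snapshot and file their descriptors under the current map location:

```python
import cv2

# A couple of hundred features per snapshot, as described in the article.
orb = cv2.ORB_create(nfeatures=200)

def features_for_location(image_bgr, location_id, database):
    """Extract visual features and file them under a map location.

    database is a plain dict mapping location_id -> feature descriptors;
    it is queried later when checking for revisited places.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    database[location_id] = descriptors   # may be None in a featureless view
    return keypoints, descriptors
```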

Seeing is believing

If the person wearing the sensors returns to an area that he or she has previously visited, the system’s location estimate could be off: For instance, its compensation for the tilt of the rangefinder might not have been perfect, and a wall now looks several feet farther away than it did, or its inference of position from accelerometer data could be off. In such cases, a fresh snapshot and a comparison of the visual features with those already stored can help correct its location estimate.
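
Continuing with the ORB stand-in from the previous sketch, recognizing a revisited location can be approximated by brute-force descriptor matching against every stored snapshot; the distance cutoff and match-count threshold here are assumptions for illustration:

```python
import cv2

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def find_revisited_location(current_desc, database, min_matches=40):
    """Return the stored location whose snapshot best matches the current view.

    database maps location_id -> descriptors stored earlier. If the best
    candidate clears min_matches (an assumed threshold), the mapper can
    snap its drifting pose estimate back to that location.
    """
    best_id, best_count = None, 0
    for loc_id, stored_desc in database.items():
        if stored_desc is None:
            continue
        matches = matcher.match(current_desc, stored_desc)
        # Keep only tight descriptor matches; 50 is an assumed cutoff.
        good = [m for m in matches if m.distance < 50]
        if len(good) > best_count:
            best_id, best_count = loc_id, len(good)
    return best_id if best_count >= min_matches else None
```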

The prototype of the sensor platform consists of a handful of devices attached to a sheet of hard plastic about the size of an iPad, which is worn on the chest like a backward backpack. The only sensor whose volume can’t be reduced significantly is the rangefinder, so in principle, the whole system could be shrunk to about the size of a coffee mug.

Wolfram Burgard, a professor of computer science at the University of Freiburg in Germany, says that the MIT researchers’ work is on the general topic of SLAM, or simultaneous localization and mapping. “Originally, this came out as a problem of robotics,” Burgard says. “This idea of having a SLAM system that is attached to a human’s body, for figuring out where it is, is actually innovative and pretty useful. For first responders, a technology like this one might be highly relevant.”

“With a robot, we typically assume that the robot lives in a plane,” Burgard continues. “What they definitely tackled is the problem of height and dealing with staircases, as the human walks up and down. The sensors are not always straight, because the body shakes. These are problems that they tackle in their approach, and where it actually goes beyond the standard 2-D SLAM.”

Both the U.S. Air Force and the Office of Naval Research supported the work.


Topics: Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical engineering and electronics, Location detection, Mapping, Mechanical engineering, Faculty, Graduate, postdoctoral, Research, Simultaneous location and mapping (SLAM), Staff, Students

Comments

OMG, my dream came true. You guys there are amazing.
How does this work? Can someone please explain the concept and logic behind it?
I always like seeing, at the bottom of each news item, which organization supported or financed the research. I wish I could see this in my country as well.
Parrot AR Drone + this = Google Caves?
We at Palo Alto Research Center did similar work in the last year, published at ACM UbiComp 2012, titled: Walk&Sketch: Create Floor Plans with an RGB-D Camera.
(Translated from Russian:) I am an employee of the Russian Emergencies Ministry (official site: http://www.mchs.gov.ru/eng/). Your development is very interesting to me. I would like more detailed information about it. When can a finished working prototype be expected? How much does one unit cost? Please contact me by email: vetuev@gmail.com
(Translated from Chinese:) The technology is impressive and could be used in search applications, but it also carries a latent danger: for example, it would make it much easier for criminals to map out a building. Technology is always a double-edged sword.
Hi, the comment from Russia echoes my thoughts. After an innovation like this, it would be highly desirable to have a link to an FAQ: Can I buy this? Yes/no/it's under Creative Commons, click here. Is there a contact person for more information? Fill in this Google form, etc. That would save time and speed up the efficient distribution of progress and knowledge.
I have measured approximately 4,000 buildings with a laser in the past three years as a commercial/residential appraiser and insurance cost estimator. There is currently an app that measures the interior of buildings using the camera on an iPad or iPhone, but only the interior. If you could develop this mapping program to measure the exterior of buildings accurately to the fourth digit, then I for one would likely buy it today! Interior is great, but exterior is what appraisers and real estate agents must use to determine heated square footage, which is where the money is! Please email me and I can give you some more guidance on the standards required to meet industry norms. Thanks
Great start. Ultimately it will need infrared capability (i.e., a portable security camera?) and the ability to penetrate atmospheric interference (a radar/laser combination?) to be effective in dark, dusty, or smoky environments.