This is the first story ever published using a paper-based Audio Notebook developed by Media Lab graduate student Lisa Stifelman. But it won't be the last. The device for rapidly accessing recorded notes will be a powerful tool for anyone who has ever tried to reconstruct quotations from a tape recorder by repeatedly fast-forwarding and rewinding.
The Audio Notebook is made of wood, with an embedded speaker and microphone. On the top is a slot for a 5.5-by-9-inch pad of paper.
As the user takes notes during a class or at an interview, the Audio Notebook records sound while sensors under the pad synchronize note-taking with the recording. When the user flips through the notepad, the tool detects whether he or she has gone from page 1 to page 5, or from page 8 to page 3.
TWO WAYS TO FIND NOTES
When reconstructing notes, the user has two options. Touching the special digital pen to an audio scroll bar (actually a thin strip of light-emitting diodes) to the left of the pad activates the recording. If the user touches a point halfway down, for instance, playback begins from the midpoint of what was recorded while he or she was open to that page. Or, touching the pen to a quoted word written on the notepad starts playback at that very spot.
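The article does not describe how the device is implemented, but the indexing idea it sketches can be illustrated in pseudocode. The following is a hypothetical sketch, not the actual system: it assumes each pen stroke made while recording is logged with its page, position on the page, and the current audio timestamp, and that playback looks up either the stroke nearest a touched word or a proportional point in the time span during which a page was open. All names (`AudioIndex`, `log_stroke`, `seek_by_word`, `seek_by_scrollbar`) and the tolerance value are invented for illustration.

```python
from collections import defaultdict

class AudioIndex:
    """Hypothetical sketch of the Audio Notebook's note-to-audio indexing."""

    def __init__(self):
        # page -> list of (y_position, timestamp) stroke records
        self.strokes = defaultdict(list)
        # page -> (first, last) audio timestamps while the page was open
        self.page_times = {}

    def log_stroke(self, page, y, t):
        """Record a pen stroke at vertical position y (0..1) at audio time t."""
        self.strokes[page].append((y, t))
        start, _ = self.page_times.get(page, (t, t))
        self.page_times[page] = (start, t)

    def seek_by_word(self, page, y, tolerance=0.02):
        """Return the audio time of the stroke nearest y on this page,
        or None if no stroke is within the (assumed) tolerance."""
        candidates = self.strokes.get(page)
        if not candidates:
            return None
        y_near, t = min(candidates, key=lambda s: abs(s[0] - y))
        return t if abs(y_near - y) <= tolerance else None

    def seek_by_scrollbar(self, page, fraction):
        """Map a point partway down the scroll bar to a proportional
        time within the audio recorded while this page was open."""
        if page not in self.page_times:
            return None
        start, end = self.page_times[page]
        return start + fraction * (end - start)
```

Touching a word maps to `seek_by_word`; touching the diode strip maps to `seek_by_scrollbar` with the touch point expressed as a fraction of the bar's length.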
The Audio Notebook is a product of Ms. Stifelman's own experiences and frustrations. When her adviser, Christopher Schmandt, head of the Media Lab's Speech Interface group, gave her a microcassette recorder to test, she took it to a telephony class where, she said, "the professor spoke faster than the guy in the Federal Express commercial." It took hours for her to reconstruct the notes. Her listening notebook reduces the process to minutes.
For more information about the Audio Notebook, visit the Web site at http://www.media.mit.edu/~lisa/anb.html. The work is sponsored by the NSF, AT&T, and the News in the Future and Digital Life consortia at the Media Lab.
(This story was written by Editor-in-Residence Jack Driscoll of the Media Lab, the former editor of The Boston Globe. It originally appeared in the December 1996 issue of FRAMES.)
A version of this article appeared in MIT Tech Talk on March 5, 1997.