Today, the capabilities of modern technologies are constantly increasing, and instruments are becoming smaller, faster, cheaper,
more portable, and more easily interconnected. This is true for many analytical spectroscopy techniques as well as for a wide
range of other technologies that have the potential to intersect with the field of spectroscopy and expand its boundaries.
To explore these developments, Spectroscopy is launching an article series about new technologies and new applications of existing technologies that are based on or
related to light. We kick off the series with this interview with Andreas Velten about his work as a postdoctoral associate
at the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge, Massachusetts. (Velten has since taken a position
as associate scientist at the Morgridge Institute for Research at the University of Wisconsin–Madison).
Velten and his colleagues in Professor Ramesh Raskar's "Camera Culture" group at the MIT Media Lab, in collaboration with
the spectroscopy laboratory of MIT Professor Moungi Bawendi, developed a technique they called "femto photography." The technique
uses a titanium–sapphire laser that emits pulses every ~13 ns, picosecond-accurate detectors, and complex mathematical reconstruction
techniques. By combining hundreds of "streak" images (one-dimensional movies of a line), captured with this high-speed camera,
they have created moving pictures (perhaps never was there a more apt use of the term) that show the movement of light (groups
of photons). Examples of their use of the technique include combined images of light traveling through a soda bottle and,
in a separate application, over a piece of fruit.
Spectroscopy: How did the femto photography project get started?
About two years ago, I joined Ramesh Raskar's group at the MIT Media Lab to do a postdoc. Ramesh had been thinking for a
long time about combining ultrafast optics and computational photography to build an imaging system that can look around corners.
He and his group had taken some initial steps in implementing the idea. It's kind of an unusual match, because my background
is in ultrafast optics and this group is doing computer vision and computational photography. But it's very interesting to
combine the two fields. People in ultrafast optics are trying to push the envelope of the hardware — to see how short we can
make the pulses, to improve ranging. For example, with light detection and ranging (LIDAR) we send a laser pulse to a target
and wait until the light comes back, and from the time that has passed, we can measure the distance to the target. It's used
in traffic control these days. But I was thinking about imaging and what could be done with imaging data in signal processing.
On the other hand, with computational photography people basically take consumer cameras and make small modifications to them,
and do amazing things by processing the data and looking at the data in a new way. Our project is kind of a combination of
the two fields. We use nonstandard hardware — hardware that can resolve the time of flight of the light you are using
for imaging. We wanted to develop new capabilities for this method by further processing the data. The initial goal was to
build a camera that could image around a corner (a special application).
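The time-of-flight ranging principle Velten describes above reduces to a one-line formula: the one-way distance is the round-trip time multiplied by the speed of light, divided by two. A minimal sketch (the pulse time below is a made-up example, not a measurement from his work):

```python
# Time-of-flight (LIDAR-style) ranging: send a pulse, time its return,
# and convert the round-trip time to a one-way distance.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds: float) -> float:
    """One-way distance in meters for a given round-trip pulse time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~66.7 ns implies a target roughly 10 m away.
print(round(tof_distance(66.7e-9), 2))
```

The factor of two is the whole subtlety: the measured time covers the trip to the target and back, so halving it gives the range.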
Spectroscopy: So how did you end up photographing visible light photons — in other words, doing photography at the speed of light?
Once you have the time-of-flight imaging, you can get a lot more information from the light by post-processing the data.
Professor Raskar and our whole Camera Culture group are very interested in computational photography and were inspired by the
"bullet through an apple" strobe photos by Doc Edgerton. I had taken some of our streak camera images and created one-dimensional
movies. Professor Raskar challenged us to think about ways to convert the one-dimensional streak tube to create visually meaningful
ultrafast two-dimensional movies. I realized at some point, from playing with the camera, that you could actually compose
movies — that you could stitch the data together in a way that would allow you to reconstruct a complete movie out of the
data that you capture. Making these movies was really a side project. Our team, especially Everett Lawson and I, started to
put together a mirror-based system. Then a set of collaborators, Diego Gutierrez, Di Wu, and members of Diego's group, Adrian
Jarabo, Elisa Amoros Galindo, and Belen Masia, got excited and worked on visualizing the results better in the videos and
doing things like generating single pictures from them.
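The stitching Velten describes can be pictured as a simple array operation. This is a hedged sketch, not the group's actual reconstruction pipeline: assume each streak image records one horizontal scene line over time, with shape (time, x), and the mirror system scans one such image per vertical line y. Stacking the streak images along a new y axis then yields a full two-dimensional frame for every time bin (all dimensions below are invented for illustration):

```python
import numpy as np

# Hypothetical dimensions: 480 time bins, 672 horizontal pixels, 64 scan lines.
n_time, n_x, n_y = 480, 672, 64
rng = np.random.default_rng(0)

# Stand-in data: one streak image (time x x) per scanned line of the scene.
streak_images = [rng.random((n_time, n_x)) for _ in range(n_y)]

# Stack along a new y axis so movie[t] is the 2-D frame at time bin t.
movie = np.stack(streak_images, axis=1)   # shape (time, y, x)
frame = movie[100]                        # one snapshot of light in flight
print(movie.shape, frame.shape)
```

In other words, each streak image is already a "one-dimensional movie," and the movie of the full scene falls out of reordering the same data by time rather than by scan line.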