Next-Generation Infrared Spectroscopic Imaging

Article

Spectroscopy

01-01-2015
Volume 30
Issue 1


Infrared spectroscopic imaging has been advancing significantly in recent years. Key to that advance is improving the understanding of the underlying mechanisms that influence the ability to achieve greater resolution and speed. Rohit Bhargava, a professor in the Department of Bioengineering and at the Beckman Institute at the University of Illinois, Urbana-Champaign, has been elucidating those mechanisms, and won the 2014 Applied Spectroscopy William F. Meggers Award for his paper on this topic. He recently spoke to Spectroscopy about this work.

In your recent paper on high-definition infrared spectroscopic imaging (1), you describe a model for light propagation through an infrared spectroscopic imaging system based on scalar wave theory. How does this approach differ from traditional approaches?

Bhargava: That's a great question. Traditionally, there have been two approaches. One is the so-called ray model approach, in which the wavelength of light is much smaller than the features of interest in your sample. Clearly that doesn't hold for microscopically diverse samples in an infrared microscope, where you're trying to look at feature sizes of 10–15 μm or smaller with wavelengths of light that have approximately the same dimensions. In 2010, we published a series of papers describing the electromagnetic model, which is a fully detailed model that explicitly accounts for light propagation through structures and is accurate from first principles. This model is also computationally very, very intensive, so doing large-scale modeling with that type of approach is difficult. The new approach that we've proposed in this paper is actually a happy medium. It provides a very high level of detail so that you can capture the right physics, but it's also computationally very tractable, so you can actually start to simulate images of large objects now. In this paper, for example, what you saw was a set of images of structures that are maybe 500 μm in dimension, with feature sizes ranging from less than 1 μm to tens of micrometers. So the approach has a very nice range of applicability, captures all of the essential physics, and is computationally tractable. We think it will be very useful.
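As a rough illustration of the kind of scalar-wave calculation involved, the sketch below is a generic angular spectrum propagation in Python; it is not the model from the paper, and the wavelength, aperture size, and grid values are assumptions chosen purely for illustration.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a 2-D complex field over 'distance' with the angular spectrum method.
    wavelength, dx, and distance must share the same length unit (here micrometers)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                 # spatial frequencies (cycles per um)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    kz_sq = k**2 - (2.0 * np.pi * FX)**2 - (2.0 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))          # keep propagating components only
    transfer = np.exp(1j * kz * distance) * (kz_sq > 0)   # evanescent waves suppressed
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Illustrative scene: a 10-um-wide square aperture in an opaque screen,
# illuminated by a unit plane wave at 6 um and propagated 50 um downstream.
n, dx, lam = 512, 0.5, 6.0                        # grid points, grid spacing (um), wavelength (um)
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = ((np.abs(X) < 5.0) & (np.abs(Y) < 5.0)).astype(complex)
field_out = angular_spectrum_propagate(aperture, lam, dx, 50.0)
print("peak intensity after 50 um of propagation:", float(np.max(np.abs(field_out))**2))
```

The appeal of a scalar treatment like this is that a single pair of fast Fourier transforms replaces a full vectorial field calculation, which is what makes simulating large fields of view tractable.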



You state that the paper provides a complete theoretical understanding of image formation in an IR microscope. What was missing from the previous understanding?

Bhargava: Previously, in terms of ray optics, we never explicitly got to include the sample structure in our calculations. So that approach is completely out when you want to look at microscopic objects. In the electromagnetic model, we had never actually modeled the effect of the different optics from the source to the detector, partly because there would be a lot more detail than is perhaps relevant to understanding the data or designing instruments, and it would take a long time to model each and every component with full electromagnetic theory. In this particular paper, since the model is tractable, we were able to extend it back through all the optics in the system: the interferometer, the image formation optics, and the full optics to the detector. In that sense, it's a complete model of how light propagates from the source through the interferometer, through the microscope, through the sample, and then onto the detector. There are no adjustable parameters, no fitting or empirical parameters. It is a completely analytical expression of how light propagates. And in that sense, too, it's a complete theoretical model for a Fourier transform infrared (FT-IR) imaging system from the ground up. To my knowledge, this is the first time that's been done.
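To make the interferometer stage of that source-to-detector chain concrete, here is a toy Python sketch of how a spectrum becomes an interferogram at the detector and is recovered by Fourier transformation. The band positions, widths, and retardation range are arbitrary illustrative values, not parameters from the paper's model.

```python
import numpy as np

# Toy spectrum: a flat source with two Gaussian absorption bands
# (positions, widths, and depths are arbitrary illustrative values).
wavenumbers = np.linspace(900.0, 3900.0, 2048)              # cm^-1
spectrum = np.ones_like(wavenumbers)
for center, width, depth in [(1650.0, 40.0, 0.5), (2900.0, 60.0, 0.3)]:
    spectrum -= depth * np.exp(-((wavenumbers - center) / width) ** 2)

# Interferogram: detector signal versus optical path difference (retardation),
# i.e., the cosine transform of the spectrum produced by an idealized interferometer.
retardation = np.linspace(0.0, 0.2, 2048)                   # cm
interferogram = np.cos(2.0 * np.pi * np.outer(retardation, wavenumbers)) @ spectrum

# Fourier transforming the interferogram recovers the spectrum on a wavenumber
# axis set by the sampling of the retardation.
recovered = np.abs(np.fft.rfft(interferogram))
nu_axis = np.fft.rfftfreq(retardation.size, d=retardation[1] - retardation[0])
print("nominal spectral resolution ~ 1 / max retardation =", 1.0 / retardation[-1], "cm^-1")
print("recovered wavenumber axis spans 0 to", round(float(nu_axis[-1])), "cm^-1")
```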

You comment in the paper that mixing the concepts of resolution and pixel size for correct sampling has led to significant confusion in IR microscopy. Can you explain that?

Bhargava: Yes, that's a great point. On one hand, we have the emergence of so-called high-definition IR imaging, in which we have shown that pixel sizes smaller than the wavelength of light are actually optimal for getting the best image quality that you can get. This is an interesting concept, because in the past optical microscopy led us to believe that the wavelength of light and the numerical aperture of the lens are primarily the two things that determine what kind of image quality you might get. In the past, the understanding of this was centered on what we might think of as the spot size that a certain wavelength of light would form at the sample plane. That spot size is traditionally assumed to be roughly equal to the wavelength. However, the spot itself also has some structure to it. It's not a top-hat kind of structure. It's got a central maximum and it's got wings in which intensity is distributed, and so on. So the first idea is that if you take a high numerical aperture lens, of course you'll get a smaller spot. That is sort of classical physics; there is no confusion there. And if you think that the smallest spot size is what localizes the signal, then to resolve the signal at that spot from another similar spot right next to it is really the resolution criterion that we have known for hundreds of years. What follows from there is that you must have two features separated by a distance such that you can resolve both of them under identical illumination conditions and with identical spectral identity. So that part is well established.

Now the question is: If you were to have a pixel size that is exactly at that resolution criterion, are you losing some information? And indeed you are losing information, because within that spot, light is not distributed evenly. As I said previously, there is a central maximum and decreasing intensity of light as you move away from it. So it's got some sort of a curve, and to sample a curve within a resolution volume or a resolution area, you need to sample it more frequently than the size of the curve itself. It's like taking a picture: If you want to capture a picture of an object, then you need to have more than one pixel to sample that object rather than fit it all into one pixel. Similarly, if we want to capture the intensity distribution, we have to have more pixels than simply the total width of the spot. The next question is: How many more pixels do you need - how many pixels are really ideal? That's what this paper tells us from a theoretical perspective - that you need approximately five pixels per resolution element, or per wavelength-determined spot size, to accurately sample the intensity distribution. It's not to say that within those five pixels you'll be able to resolve features. It's only to say that to sample that spot correctly you need five pixels. Which means that our pixel requirement is actually a little bit higher than we had previously thought. The ideal pixel size to look at the intensity changes in an image is not the size of the wavelength, but rather maybe a factor of 4 or 5 smaller than that. So in that sense, the resolution concept and the concept of having a certain number of pixels to measure intensity changes in a field of view have traditionally been mixed. This paper separates those two concepts. It says that resolution is determined by classical ideas, while the optimal pixel size needed to sample image features correctly is given in the paper.
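A small numerical example helps separate the two concepts. The Python sketch below, with an assumed mid-IR wavelength of 6 μm and an assumed numerical aperture of 0.62 (values chosen only for illustration, not taken from the paper), computes the classical Airy spot and then compares how faithfully its intensity profile is recovered when sampled with pixels of size λ, λ/2, and λ/5.

```python
import numpy as np
from scipy.special import j1

lam, na = 6.0, 0.62                        # assumed wavelength (um) and numerical aperture
airy_radius = 0.61 * lam / na              # radius of the first Airy minimum (Rayleigh resolution)
print(f"Airy radius (Rayleigh two-point resolution): {airy_radius:.1f} um")

def airy_intensity(r_um):
    """Normalized Airy-pattern intensity at radial distance r (micrometers)."""
    x = 2.0 * np.pi * na * np.asarray(r_um, dtype=float) / lam
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

# Compare how well the spot's intensity profile is recovered when sampled with
# pixels of size lambda, lambda/2, and lambda/5 (reconstruction by interpolation).
r_fine = np.linspace(0.0, 12.0, 2401)
truth = airy_intensity(r_fine)
for pixel in (lam, lam / 2.0, lam / 5.0):
    r_pix = np.arange(0.0, 12.0 + pixel, pixel)
    recon = np.interp(r_fine, r_pix, airy_intensity(r_pix))
    err = np.max(np.abs(recon - truth))
    print(f"pixel = {pixel:4.1f} um -> worst-case error in the sampled profile: {err:.2f}")
```

The error shrinks markedly as the pixel size drops below the wavelength, which is the distinction being drawn here: the Airy radius fixes what can be resolved, while the pixel size fixes how faithfully the intensity distribution within that spot is recorded.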


How were computer simulations used in the study to analyze the performance of the imaging system?

Bhargava: Computer simulations were used for two reasons. One was fundamentally to validate the theoretical model, which is based on physics. This is basic science, and simulations are the easiest way to predict something from it. The simulations and experiment matched up perfectly in this case, so we can assure ourselves that we have a good model. The second thing that we are using the model for now is to design instruments. So if we want a particular kind of lens, or if we have particular wavelengths that we want to measure, then we can use the computer simulations to check whether we're indeed getting the performance that we're supposed to get for the samples of interest to us. The third thing, which has not been tackled so far but which we're working on in our lab, is how to take real-world samples like tissues and polymers and use the predictive power of this algorithm to try to understand what the spectra truly mean in terms of information. This topic has been well explored in the last five years, and the effect of morphology on the data we record is quite profound in some cases. But we don't yet fully understand the science of how different feature sizes, wavelengths, and scattering influence what we can record. These computer simulations would be a great way to systematically understand all these factors. The simulations are very carefully validated now, so we can be pretty confident in the results that they give us.

Was the choice of the standard USAF 1951 target as a sample, consisting of chrome on glass, an important factor in testing the system?

Bhargava: You need some sort of standard sample, so we used chrome on glass in this case because there is a very nice, clean difference between the chrome part and the glass part: Glass absorbs pretty much all light at the longer wavelengths, and chrome reflects some of the light back. So it's a very convenient target. Since then we've also developed other USAF targets. There is one in particular that we like: It's a barium fluoride substrate on which we have deposited a lithographically patterned USAF target, and there we actually have a polymer with a spectrum we can measure. With the chrome-on-glass targets, your wavelength range is typically limited, and you don't really have an absorption feature that can be measured. We also don't have a nice absorption spectrum if we want to correct for some effects and then go back, measure, and correlate. The new target has the same USAF features that are used conventionally in optical microscopy, and the resin is very stable because it is cross-linked, offering a great standard for spectroscopy. The use of this standard target not only enables us to correlate theory and experiments but is also useful for comparing the performance of different instruments and for other researchers to exchange information using a common basis.

What are some of the challenges you faced when demonstrating the ability to perform high-definition IR imaging in the laboratory by using minimally modified commercial instruments?

Bhargava: The biggest challenge was the low light throughput. When pixels become small, say going from 5 μm to 1 μm, we are only collecting one twenty-fifth of the light that passes through the system. So if you don't change anything else, the signal itself is just one twenty-fifth, and the signal-to-noise ratio of the spectrum that you acquire is one twenty-fifth of that obtained previously. So that's the biggest challenge. When you are doing imaging experiments, the signal is low and the data are limited in signal-to-noise ratio. The signal-to-noise ratio is not as high as in conventional spectrometers, simply because you are dispersing light over a large area and also because the array detectors are not as refined as the single-element detectors that have been developed over decades. These array detectors used for imaging cannot be particularly sensitive either, because of the need to miniaturize the electronics. In high-definition mode, the challenge of low light combined with detectors that are not very sensitive becomes magnified. We address that challenge in a number of ways. One, of course, is to use better optics, so now we sometimes use refractive optics; other companies have designed better optics for high definition that can provide more light throughput, longer working distances, and so on. Going from regular-definition microscopy to high-definition microscopy, the one overriding challenge is signal-to-noise ratio; other than that, everything else is pretty much the same.
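The arithmetic behind that throughput penalty is straightforward. The short sketch below works it through for the 5-μm and 1-μm pixel sizes mentioned above, under the simplifying assumption of detector-noise-limited measurements; it is a back-of-envelope illustration, not an analysis from the paper.

```python
# Shrinking the pixel edge from 5 um to 1 um reduces the light collected
# per pixel by the ratio of the pixel areas.
old_pixel_um, new_pixel_um = 5.0, 1.0
area_ratio = (old_pixel_um / new_pixel_um) ** 2
print(f"light per pixel drops by a factor of {area_ratio:.0f}")          # 25

# If the detector noise per measurement stays the same, the signal-to-noise
# ratio drops by the same factor; recovering it purely by co-adding scans
# (SNR ~ sqrt(number of scans)) would take roughly area_ratio**2 more scans.
print(f"scans needed to win the SNR back by averaging alone: ~{area_ratio**2:.0f}")
```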

In what fields where IR imaging is applied might this approach be a catalyst for improved applications?

Bhargava: I think this approach can be useful in almost every field you can think of. My own personal interests are presently in the biomedical sphere. Here, high definition allows you to see features in tissue and cells that you couldn't see before. It allows a quality of images that just wasn't available before. Now our images look much more like optical microscopy images than like the limited-resolution images that we've seen in the past. It also has very interesting applications for materials science, because sometimes domains are on the order of several micrometers, a size that was previously spanned by a single pixel, so the actual dimensions of a domain could not be seen and we did not know whether we were getting pure spectra. Now we can start to get a little better visualization than we could before. The same is true for forensics: If there is some small particulate matter, or a little bit of evidence that needs to be examined, we can now focus on that little part and see it much more clearly, determine whether it's heterogeneous, and look at the heterogeneity at a finer scale. So I think it has implications for all areas of application. Of course, it doesn't provide a solution for every single problem that we were unable to solve in the past with IR microscopy. But it is certainly a major step for everybody involved.

What are the next steps in your research with high-definition spectroscopic imaging?

Bhargava: In addition to the challenges presented by having less light throughput, we have many more pixels to scan in high definition. Again, going from 5-μm to 1-μm pixels, there are 25-fold more pixels to scan for the same area to be covered, so the speed of data acquisition becomes a major issue. As we continue with high-definition imaging, we are focusing on faster data acquisition by developing new equipment. Higher speeds can come from hardware improvements, in which newer designs, higher-throughput spectrometers, newer components like lasers, and more sensitive detectors all play a role. They can also come from software, in which approaches like those we have used in the past can be used to improve data quality. That will also drive some applications in our lab. I think the next generation of instruments and applications will come from a combination of hardware, software, and specific driver problems. All three are connected in many ways, because you typically need a certain quality of data for a specific application, and you also need hardware of a certain speed if you want to solve the problem in a reasonable period of time. Hardware, software, and the new applications that are enabled would be the three major directions for us.
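For a sense of how that pixel count translates into measurement time, the following sketch uses entirely hypothetical numbers (a 128 × 128 focal-plane array, 10 s per mosaic tile, and a 2 mm × 2 mm region to map) to show the roughly 25-fold increase in acquisition burden when moving from 5-μm to 1-μm pixels.

```python
# With a fixed-format focal-plane array, a smaller pixel size on the sample
# means each camera frame covers a smaller area, so more mosaic tiles are
# needed to map the same region. All numbers below are illustrative assumptions.
array_pixels = 128            # assumed detector format (128 x 128)
seconds_per_tile = 10.0       # assumed time per tile, including co-added scans
sample_mm = 2.0               # assumed width of the square region to map

for pixel_um in (5.0, 1.0):
    tile_um = array_pixels * pixel_um                    # field of view of one camera frame
    tiles = (sample_mm * 1000.0 / tile_um) ** 2          # mosaic tiles to cover the region
    minutes = tiles * seconds_per_tile / 60.0
    print(f"pixel = {pixel_um:3.1f} um -> {tiles:6.1f} tiles, ~{minutes:5.1f} min total")
```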

Reference

(1) R. Bhargava, P.S. Carney, R. Reddy, M. Schulmerich, and M. Walsh, Appl. Spectrosc. 67(1), 93–105 (2013).

This interview has been edited for length and clarity. For more interviews on spectroscopy-related techniques, please visit spectroscopyonline.com/spectroscopy-interviews
