Resolution in Mid-Infrared Imaging: The Theory

The field of mid-infrared (mid-IR) imaging has made significant developments in recent years, but the theory has not kept pace. Rohit Bhargava, an associate professor of engineering at the University of Illinois at Urbana-Champaign and the associate director of the University of Illinois Cancer Center, recently undertook studies to address that gap. Spectroscopy spoke to him recently about that work.

Spectroscopy: In your talk at the recent FACSS conference (now called the “SCIX conference”), you discussed the theory of resolution and image quality in mid-IR imaging. Why did you undertake this research into the theory?

Bhargava: Actually, the research started just as a curiosity, as a fundamental exploration of the theoretical basis for our field. I had been working in the area of infrared imaging for almost 10 years at that point, and we did not have a firm theoretical basis or a book where one could look up the theory of infrared microscopy and imaging. So this really started as an intellectual exercise, and it took off from there.

Spectroscopy: Do the issues you have explored only arise with certain types of samples such as very small samples, at the nano- to micrometer scale?

Bhargava: The issue is present with all kinds of samples, whether they have any microscopic structure or not. The minute you take a sample and put it into an infrared microscope, the spectrum that you record will be different from that in a conventional interferometer, and that's because you don't have the approximately plane-wave geometry in which the light is slowly converging; you actually have a very highly converging beam of light in the microscope. So even if you have a uniform sample, under the microscope you will start to see some changes in the recorded data compared to the conventional spectrometer. With small samples, you get additional effects. If the domain sizes are comparable to the wavelength, for example, you start to see distortions and more prominent scattering effects. If the particle size becomes really, really small compared to the wavelength, it appears homogeneous to the light, so you start getting back the sort of bulk effects that you would see with a homogeneous sample to begin with. In both cases, then, you see some differences between a microscope and a conventional interferometer.
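
To make the size regimes described here concrete, the short Python sketch below computes the dimensionless size parameter, 2πa/λ, that is commonly used to judge whether a domain is much smaller than, comparable to, or much larger than the wavelength. The wavelength, domain radii, and regime thresholds are illustrative assumptions, not values from the interview.

```python
import numpy as np

# Illustrative sketch: compare assumed domain radii to an assumed mid-IR wavelength
# using the size parameter x = 2*pi*a/lambda. The thresholds are rough rules of thumb.
wavelength_um = 10.0                      # assumed mid-IR wavelength (~1000 cm^-1)
radii_um = np.array([0.05, 2.0, 50.0])    # assumed "very small", comparable, and large domains

size_parameter = 2 * np.pi * radii_um / wavelength_um

for a, x in zip(radii_um, size_parameter):
    if x < 0.1:
        regime = "much smaller than the wavelength: appears homogeneous, bulk-like response"
    elif x < 10:
        regime = "comparable to the wavelength: scattering distortions become prominent"
    else:
        regime = "much larger than the wavelength: approaches bulk behavior"
    print(f"radius {a:6.2f} um -> size parameter x = {x:7.3f} ({regime})")
```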

Spectroscopy: Can you summarize your theory?

Bhargava: Our theory can be thought of as a two-part process. The first part is modeling the instrument and the optics themselves. The second is modeling the sample and light interactions. Modeling the instrument is relatively straightforward; it's quite similar in theory to an optical microscope, for example, so we used much of the same framework. We start from Maxwell's equations, from the very fundamentals, and then we build up, at each step, how light propagates from the source, to the beam splitter, to the mirrors, back again, through the microscope optics, and on to the detector. As far as the sample is concerned, we again break down how light interacts with the sample, completely from first principles. There are no assumptions, no empirical parameters, nothing of that sort. So it's a very rigorous theory.
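
One ingredient of such a model, the difference between the nearly plane-wave illumination of a conventional spectrometer and the cone of angles delivered by a microscope objective, can be sketched with a simple angular-spectrum picture. The Python snippet below is only a cartoon of that single step, with an assumed numerical aperture and wavelength; the full theory described above is far more rigorous.

```python
import numpy as np

# Cartoon of one step in such a model: a focused beam treated as a superposition of
# plane waves filling the objective's acceptance cone, versus the single near-normal
# plane wave of a conventional spectrometer. NA and wavelength are assumed values.
NA = 0.6                  # assumed objective numerical aperture
wavelength_um = 6.0       # assumed mid-IR wavelength
k = 2 * np.pi / wavelength_um

theta_max = np.arcsin(NA)                  # half-angle of the focusing cone
thetas = np.linspace(0.0, theta_max, 50)   # incidence angles present in the focused beam

kx = k * np.sin(thetas)   # transverse wavevector components
kz = k * np.cos(thetas)   # axial wavevector components

print("Conventional spectrometer: essentially one plane wave near normal incidence")
print(f"Microscope (NA = {NA}): incidence angles up to {np.degrees(theta_max):.1f} degrees")
print(f"Axial wavevector k_z spans {kz.min():.3f} to {kz.max():.3f} rad/um")
```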

Now, there are two issues to keep in mind for this development. The first is how this differs from conventional optical microscopy and optical imaging. If you examine optical theory, a simplification is often made. In visible microscopy, we consider the sample to be fairly uniform, with just a few isolated points that actually scatter light; it is those scattering points that give the image contrast. On the infrared spectroscopy side, we have traditionally considered the samples to be highly absorbing but not scattering. If you examine the literature for almost all imaging techniques and mechanisms, you will see that this problem (image formation, that is) is usually broken down into regions that have absorption but not a whole lot of scattering, or regions that have some amount of scattering and insignificant absorption. Hence, the theory is new, not only for infrared microscopy, but for the field of image formation in general.

Spectroscopy: Based on your theory, what should spectroscopists do to optimize image quality in mid-IR imaging?

Bhargava: That's a very interesting question. This is still up for some debate. For the last 15 years or so, we've held this dogmatic belief that the resolution can be no finer than approximately the wavelength of light. And that comes from optical microscopy considerations, mainly the Rayleigh criterion, for example. What we have shown in recent publications (one of them came out with Carol Hirschmugl's group in Nature Methods a few months ago, and we've submitted a few papers since then) is that one can rigorously look at how light is scattered through a microscope, how it distributes, and how we can capture the spatial and spectral distribution of light. We always capture an analog form of light using a lens. This analog form gives us the spatial distribution of light as it comes across from any point in the sample. When we go to digitize it and record it with a detector, we have to be very careful. We have to take the finest feature size that's there in the scattered light collected by the lens and record it appropriately. So sampling theory comes in, and the good old Nyquist criterion comes in. This is essentially what we have to do using our theory: we find the highest spatial frequency that we are recording with a particular lens, and then simply adjust the detector spacing so that we sample at twice that frequency. That's how we get the correct sampling. What was surprising to us, and certainly a surprising insight for the community, is that this correct recording limit, or correct digitization limit, was much smaller than the pixel sizes that were being used in commercial instruments, or even research instruments, up until that point. It turns out the required pixel size is about 100-fold smaller in area than what we've been used to. So once you implement that on an instrument, you see a stunning improvement in image quality, which is what the papers coming out now are showing us.
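
As a rough illustration of the sampling argument, the sketch below estimates the Nyquist-limited pixel size for an assumed objective and a few assumed mid-IR wavelengths, taking 2NA/λ as the highest spatial frequency passed by the lens. The numbers are not those of any particular instrument mentioned in the interview.

```python
import numpy as np

# Hedged sketch of the Nyquist sampling argument: take the highest spatial frequency
# passed by the lens (here assumed to be the incoherent cutoff, 2*NA/lambda) and
# sample at twice that frequency, i.e. pixel size <= lambda / (4*NA).
# The NA and wavelengths below are assumed, illustrative values.
NA = 0.62
wavelengths_um = np.array([3.3, 6.0, 10.0])

f_max = 2 * NA / wavelengths_um             # cutoff spatial frequency (cycles per um)
nyquist_pixel_um = 1.0 / (2.0 * f_max)      # equivalently lambda / (4 * NA)

for lam, px in zip(wavelengths_um, nyquist_pixel_um):
    print(f"lambda = {lam:5.1f} um -> pixel size should be <= {px:.2f} um")
```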

Spectroscopy: You have also suggested a different method for signal processing. Can you explain that?

Bhargava: This is actually tied to the issue of what pixel size we need. If we take a pixel size that is 100-fold smaller in area, that means 100-fold less light is being collected per detector element, which means our recorded signal is 100-fold smaller. And for equivalent measurement times, that means our signal-to-noise ratio is 100-fold smaller as well. If that is the case, then we would need to spend a lot of time signal averaging to get our signal-to-noise ratio back up, and that's sometimes not feasible. If you look at a real experiment, for example, comparing the commercial instrumentation with the high-definition imaging instrumentation we have set up in our lab, we actually record about a 10-fold smaller signal, due to adjusting the optics and the detector parameters. To recover the signal-to-noise ratio by signal averaging would still take us 100-fold longer time, and that's obviously not feasible. So we turned to an old technique that we published around the year 2000 as part of my graduate work. This approach is essentially a signal processing technique that takes spectra and transforms them into components. From those components, we choose the ones that have signal and discard the ones that have noise. Then we inverse-transform the same data back to obtain fairly noise-free spectra. So by combining signal processing and optical optimization, we can actually acquire data that is of much higher quality and also more appropriate for analysis.
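
The idea described above (transform the spectra into components, keep the signal-bearing components, discard the noisy ones, and inverse-transform) can be illustrated with a generic principal-component-style sketch on synthetic spectra. This is only an illustration of the general concept; it is not a reconstruction of the specific transform used in the published method, and all data and parameters below are made up.

```python
import numpy as np

# Generic illustration of component-based noise reduction on synthetic spectra:
# forward-transform to components, keep the ones that carry signal, discard the rest,
# and inverse-transform. This is NOT the specific published method; it is an assumed,
# PCA-style stand-in for the general idea described in the interview.
rng = np.random.default_rng(0)

n_spectra, n_points = 500, 400
axis = np.linspace(0.0, 1.0, n_points)

# Synthetic "spectra": two broad bands with random intensities, plus added noise
band1 = np.exp(-((axis - 0.3) / 0.05) ** 2)
band2 = np.exp(-((axis - 0.7) / 0.08) ** 2)
clean = rng.uniform(0, 1, (n_spectra, 1)) * band1 + rng.uniform(0, 1, (n_spectra, 1)) * band2
noisy = clean + rng.normal(0.0, 0.2, (n_spectra, n_points))

# Forward transform: singular value decomposition of the mean-centered spectra
mean_spectrum = noisy.mean(axis=0)
U, S, Vt = np.linalg.svd(noisy - mean_spectrum, full_matrices=False)

# Keep only the signal-bearing components (here 2, chosen because the synthetic data
# were built from two bands); discard the remaining, noise-dominated components
n_keep = 2
denoised = (U[:, :n_keep] * S[:n_keep]) @ Vt[:n_keep] + mean_spectrum

print("RMS error before denoising:", round(float(np.sqrt(np.mean((noisy - clean) ** 2))), 3))
print("RMS error after denoising: ", round(float(np.sqrt(np.mean((denoised - clean) ** 2))), 3))
```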
