The performance of a Raman spectrograph for a particular application will depend, among other things, on its sensitivity and
spectral resolution. The sensitivity will determine how long it takes to record a spectrum with a given signal-to-noise
ratio; the grating reflectivity, in turn, will determine the optical throughput of the instrument. The spectral resolution
will determine how easy it is to extract subtle information from a spectrum. It is set by the focal length of the spectrograph
and the groove density of the grating used to disperse the light, and it also affects the apparent sensitivity. Note that in
many cases, spectral resolution can be improved only at the expense of sensitivity. Because many new users of Raman equipment
are not familiar with these grating–spectrograph properties, we thought it would be useful to summarize, in simple terms,
the physics that determines how these instruments work.
Over the past 20 years, Raman spectroscopy has gained popularity because instrumental innovations have made it easier to use
for problem-solving, providing spectra in roughly 1% of the time required before the Raman revolution that followed the introduction
of holographic notch filters. In addition, new graduates in chemistry and materials science are taking jobs in industry
without any graduate education in spectroscopy or the operation of spectroscopic instrumentation. Consequently, the new user
is faced with myriad choices in configuring a new instrument or in optimizing an existing instrument for a given experiment.
Understanding how the spectrograph core of a Raman instrument works will aid the novice in producing quality, defensible results
for solving industrial problems or characterizing new materials.
A spectrograph is designed to accept light with many wavelengths, separate the wavelengths in space, and then "detect" each
wavelength on a multichannel detector, which today is synonymous with a charge-coupled device (CCD). Figure 1 is a schematic
of a spectrograph.
Figure 1: Schematic of a dispersive Raman spectrograph.
Figure 1 shows a typical Raman spectrograph. The collected Raman light is focused onto an entrance slit. After passing through
the slit, it diverges until it reaches a concave mirror whose focal length corresponds to the distance between the mirror and
the slit; after being reflected by the mirror, the light is "collimated." When the light hits the grating, which is an array of
finely spaced lines on a reflective surface, there is constructive and destructive interference that is wavelength and angle
dependent. Consequently, each wavelength is diffracted at a different angle (1). As each wavelength is then reflected from
the camera mirror onto the array detector (CCD), it is focused at a different position on the array. The wavelength of the
light on each array pixel can then be calculated from the known equations of grating physics, as shown in Figure 2 (2).
Figure 2: Schematic showing how a pixel position is converted to a wavelength.
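To make this conversion concrete, here is a minimal sketch, in Python, of how a pixel position maps to a wavelength through the standard grating equation, mλ = d(sin α + sin β). All parameter values (groove density, angles, focal length, pixel pitch) are illustrative assumptions, not those of any particular instrument; in a commercial system these constants come from the design and the wavelength calibration, but the geometry of the calculation is the same.

```python
import numpy as np

# Illustrative parameters only; real instruments store calibrated values.
GROOVES_PER_MM = 1200    # grating groove density
M = 1                    # diffraction order
ALPHA_DEG = 30.0         # angle of incidence on the grating
BETA0_DEG = 20.0         # diffraction angle toward the camera center
F_CAMERA_MM = 320.0      # focal length of the camera mirror
PIXEL_MM = 0.026         # 26-um CCD pixel pitch
N_PIXELS = 1024

def pixel_to_wavelength_nm(pixel):
    """Map a CCD column index to a wavelength via m*lambda = d*(sin a + sin b)."""
    d_nm = 1e6 / GROOVES_PER_MM                  # groove spacing in nm
    x_mm = (pixel - N_PIXELS / 2) * PIXEL_MM     # offset from detector center
    # Each pixel collects light leaving the grating at a slightly
    # different diffraction angle.
    beta = np.radians(BETA0_DEG) + np.arctan(x_mm / F_CAMERA_MM)
    return (d_nm / M) * (np.sin(np.radians(ALPHA_DEG)) + np.sin(beta))

wavelengths = pixel_to_wavelength_nm(np.arange(N_PIXELS))
print(wavelengths[0], wavelengths[-1])           # spectral range on the CCD
```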
It is not the purpose of this article to derive the equations that determine the conversion, but only to indicate to the new
user the origin of the "magic" inside the software that enables the spectral "image" on the camera to be converted to a spectrum,
that is, a plot of intensity (counts or counts/s) versus Raman shift (cm⁻¹). The physical quantities determining the separation
on the camera of two wavelengths are the incident angle of the light on the grating, the diffracted angles, as determined by
these equations, and the focal length of the focusing element (2). When a Raman instrument is designed, the spectral dispersion
at a given wavelength is selected, and then the angles and focal length are calculated to produce the desired result. In addition,
optical design software makes it possible to asymmetrize the geometry so that the images are kept as tight as possible on the
surface of the detector, which, of course, is flat.
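As a rough illustration of how those quantities interact, the sketch below estimates the separation on the camera of two nearby wavelengths from the reciprocal linear dispersion, dλ/dx = d·cos β/(mf), which follows from differentiating the grating equation; the numerical defaults are again assumptions chosen only for the example.

```python
import numpy as np

def separation_on_ccd_mm(lambda1_nm, lambda2_nm, grooves_per_mm=1200,
                         order=1, beta_deg=20.0, f_camera_mm=320.0):
    """Approximate separation on the detector of two nearby wavelengths."""
    d_nm = 1e6 / grooves_per_mm
    # Reciprocal linear dispersion in nm of wavelength per mm on the CCD.
    dlam_dx = d_nm * np.cos(np.radians(beta_deg)) / (order * f_camera_mm)
    return abs(lambda2_nm - lambda1_nm) / dlam_dx

# Two lines 1 nm apart near 700 nm land about 0.4 mm (~16 pixels) apart:
print(f"{separation_on_ccd_mm(700.0, 701.0):.2f} mm")
```

Doubling the focal length or the groove density doubles this separation, which is exactly the resolution lever discussed below.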
Of course, the Raman spectrum is only meaningful when the wavelength values are converted to Raman shift units, also in the
software, according to equation 1:

Δν = ν_laser − ν_scattered          [1]

where each wavenumber ν (in cm⁻¹) is derived from the corresponding wavelength λ according to equation 2:

ν = 1/λ(cm) = 10⁷/λ(nm)          [2]
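In code, equations 1 and 2 amount to two divisions and a subtraction; this minimal sketch assumes excitation at the 632.8-nm HeNe line used later for Figure 3.

```python
def raman_shift_cm1(lambda_laser_nm, lambda_scattered_nm):
    """Convert a scattered wavelength to a Raman shift (equations 1 and 2)."""
    nu_laser = 1e7 / lambda_laser_nm          # equation 2: nm -> cm-1
    nu_scattered = 1e7 / lambda_scattered_nm  # equation 2: nm -> cm-1
    return nu_laser - nu_scattered            # equation 1

# A 155 cm-1 line excited at 632.8 nm is scattered near 639 nm:
print(f"{raman_shift_cm1(632.8, 639.07):.0f} cm-1")
```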
Typical widths of lines in a Raman spectrum are between 1 and 10 cm⁻¹ full width at half maximum (FWHM). If parameters are selected that produce 1 cm⁻¹/pixel, then about 1000 cm⁻¹ can be covered on a CCD that has 1024 pixels in the long direction, which is the spectral dispersion direction. In principle,
the selection of the grating would be straightforward, but as we will see, there are important characteristics that have to
be taken into account in the choice.
Just to get oriented to these effects, examine the behavior, shown in Figure 3, of the 155 cm⁻¹ line of sulfur that was recorded with the 633-nm line of a HeNe laser on instruments whose focal lengths varied between 150
mm and 1920 mm. Depending on the goal of the measurement, it may or may not be important to resolve the components in the
spectrum; how much resolution is needed will determine the selection of gratings, as will be discussed in the following sections.
Figure 3: The 155 cm⁻¹ line of crystalline sulfur recorded with instruments whose focal lengths were 1920, 640, 460, and 250 mm (from top
to bottom). (Courtesy of Sergey Mamedov of Horiba Scientific.)
The important grating characteristics that have to be matched with the desired instrument characteristics are
- the dispersion, which results from the groove density (g/mm) and the focal length (a sketch of this calculation follows the list), and
- the reflectivity, which is a complicated function of the groove density, the groove profile (including the blaze angle), the
angle at which the grating is used, and the metallic coating (3,4).
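To put numbers on the first of these characteristics, the sketch below estimates the Raman-shift dispersion per pixel, and the resulting coverage of a 1024-pixel CCD, for a few common groove densities; the focal length, diffraction angle, wavelength, and pixel pitch are assumed values for illustration only.

```python
import numpy as np

def cm1_per_pixel(grooves_per_mm, f_camera_mm=320.0, lambda_nm=650.0,
                  beta_deg=20.0, order=1, pixel_mm=0.026):
    """Approximate Raman-shift dispersion per pixel (cm-1/pixel).

    Combines the reciprocal linear dispersion d*cos(beta)/(m*f) with the
    wavelength-to-wavenumber slope |d(nu)/d(lambda)| = 1e7 / lambda**2.
    """
    d_nm = 1e6 / grooves_per_mm
    nm_per_mm = d_nm * np.cos(np.radians(beta_deg)) / (order * f_camera_mm)
    nm_per_pixel = nm_per_mm * pixel_mm
    return nm_per_pixel * 1e7 / lambda_nm**2

for gmm in (600, 1200, 1800):
    step = cm1_per_pixel(gmm)
    print(f"{gmm} g/mm: {step:.2f} cm-1/pixel, "
          f"~{step * 1024:.0f} cm-1 across a 1024-pixel CCD")
```

With these assumed values, the 1800 g/mm grating comes out near the 1 cm⁻¹/pixel, ~1000 cm⁻¹ example cited earlier: the denser grating gives finer resolution but covers a smaller spectral range on the same detector.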