Calibration Transfer, Part V: The Mathematics of Wavelength Standards Used for Spectroscopy

Volume 29, Issue 6, pp. 18-27

How does one compute the mathematical uncertainty of wavelength standards used for calibrating instrumentation for molecular spectroscopy measurements? This question becomes of major importance when the measurement technique requires the collection of large databases for use in qualitative searches or quantitative multivariate analysis. Wavelength drift over time within a single instrument, or wavelength differences between instruments, creates errors and variation in the accuracy of measurements made using databases collected with different wavelength registrations. The loss of integrity of the wavelength axis in data collected over time is a significant problem in large database creation and usage. So, what techniques and mathematics are used to compute uncertainty, and what are the optimum methods for maintaining wavelength accuracy within instrumentation over time as measurement conditions change?

Wavelength precision and accuracy, more formally termed repeatability and reproducibility, within a single instrument and between multiple instruments over time are essential for optimum transfer of quantitative calibrations, and for the continuous use of databases in qualitative search techniques. To maintain the integrity of data, it is essential that the wavelength axis be stable within instruments and identical across different instruments over changing measurement conditions. Making spectrometers highly precise and accurate on the wavelength axis is a difficult task for commercial instrument designers and manufacturers. Questions arise as to the engineering tolerances required for high-precision spectrometers in terms of mechanical, optical, and electronic components. Also essential to precise wavelength alignment are the exact procedures required to properly calibrate the wavelength axis of the spectrometer based on measurements of known reference standards (1,2).

Different Approaches to Alignment of the Wavelength Axis

Because of the inherent uncertainties of standard reference materials (SRMs), one may ask if they are precise enough to be useful for aligning the wavelength axis of spectrometers over time (3). For example, National Institute of Standards and Technology (NIST) SRM 1920a has a published uncertainty of 1 nm (4,5). With such large uncertainties in published wavelength peak positions, referred to as measurands, is the most reasonable approach to ensure that wavelength accuracy falls within the inherent NIST uncertainties when using SRMs, while striving for far closer agreement for instrument-to-instrument matching? This column demonstrates the state of the art in commercial spectrometers today and poses the question of whether there might be improved methods for spectrometer wavelength alignment for enhanced repeatability and reproducibility.

A seminal paper on rare-earth oxides as wavelength standards in near-infrared (NIR) spectroscopy is familiar to many spectroscopists (4). One important wavelength standard made for the NIR region is SRM 1920a, consisting of a mixture of rare-earth oxides. This SRM has known stability problems related to variation in its wavelength scale and, as such, has been replaced with the SRM 2036 glass (6). SRM 1920a is not a particularly stable material; wavelength registration changes with temperature on the order of 0.15 nm have been published. In certifying SRM 1920a, NIST also experienced challenges with its secondary Cary model spectrometer and its manual method of peak selection, resulting in the large reported uncertainty (4). In calibrating this SRM, NIST reportedly used the precise NIST (formerly the National Bureau of Standards [NBS]) spectrometer to measure three wavelengths (that is, 1012.9, 1260.7, and 1535.5 nm). The Cary instrument was reported to exhibit a deviation of 1 nm during the test measurement period. For the NIST spectrometer, emission reference lamps included neon (Ne) at 703.24 nm, xenon (Xe) at 881.94 and 979.97 nm, mercury (Hg) at 1013.97 and 1529.58 nm, and krypton (Kr) at 1816.73 nm. The use of such atomic line spectra for wavelength registration produces the lowest uncertainty (7).
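Manual peak selection of the kind that contributed to the SRM 1920a uncertainty can be replaced by simple numerical interpolation. The sketch below (synthetic data, not from the certification work) estimates a line's center to sub-sample precision by fitting a parabola through the three points around the sampled maximum, using a Gaussian band standing in for the Ne line at 703.24 nm; the grid spacing and band shape are illustrative assumptions.

```python
import numpy as np

def peak_center(wavelengths, intensities):
    """Estimate a peak's center wavelength by three-point parabolic
    interpolation around the sampled maximum (assumes a uniform grid).
    Returns the interpolated peak position in nm."""
    i = int(np.argmax(intensities))
    if i == 0 or i == len(intensities) - 1:
        return float(wavelengths[i])  # maximum at an edge: cannot interpolate
    h = wavelengths[1] - wavelengths[0]  # grid spacing, nm
    y0, y1, y2 = intensities[i - 1], intensities[i], intensities[i + 1]
    # Vertex of the parabola through the three points around the maximum
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return float(wavelengths[i] + delta * h)

# Synthetic Gaussian band standing in for the Ne line at 703.24 nm,
# sampled on a coarse 0.5 nm grid
grid = np.arange(700.0, 707.0, 0.5)
band = np.exp(-((grid - 703.24) / 0.6) ** 2)
print("interpolated peak: %.3f nm" % peak_center(grid, band))
```

Even on a 0.5 nm grid, the interpolated center lands within a few hundredths of a nanometer of the true band position, which is why automated sub-pixel peak picking is preferred over reading the maximum sample directly.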

For the longer NIR wavelength region, 1,2,4-trichlorobenzene was used to mark the 2152.60 nm position (8). Toluene (used for infrared at 3290.8, 3422.0, 3484.0, 5147.2, 5381.8, and 5549.0 nm) and carbon disulfide (2224.0 nm) have also been recommended for wavelength calibration in early published work (9,10). The task of selecting appropriate standards for certified peak wavelength positions is critical for improved x-axis (wavelength) alignment and calibration of spectrometers. Wavelength alignment technology depends on selecting a suitable set of reference peak wavelengths and stable materials to align each spectrometer system using best available measurement practices. If wavelength precision must approach 0.05 nm, and accuracy across the entire spectral region must be better than 0.2 nm at the 95% confidence level, a series of technological improvements must be implemented in the future.
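Whether an instrument meets such a specification can be checked from repeated measurements of a certified band. The sketch below uses the 1260.7 nm SRM 1920a band cited earlier as the reference; the measured values are hypothetical, and the coverage factor k = 2 is the common approximation for a 95% confidence level.

```python
import math

# Hypothetical repeated measurements of the SRM 1920a band
# certified near 1260.7 nm (measured values are illustrative)
reference_nm = 1260.7
measured_nm = [1260.74, 1260.68, 1260.77, 1260.72, 1260.69, 1260.75]

n = len(measured_nm)
errors = [m - reference_nm for m in measured_nm]
mean_err = sum(errors) / n                       # systematic bias, nm
sd = math.sqrt(sum((e - mean_err) ** 2 for e in errors) / (n - 1))
# Expanded uncertainty of the mean with coverage factor k = 2 (~95%)
ci_half_width = 2.0 * sd / math.sqrt(n)

print(f"bias = {mean_err:+.3f} nm, 95% half-width = {ci_half_width:.3f} nm")
print("meets 0.2 nm accuracy spec:", abs(mean_err) + ci_half_width < 0.2)
```

The bias plus the expanded uncertainty of the mean gives a conservative bound on the wavelength error, which is the quantity to compare against the 0.2 nm accuracy target.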

So, what if a master instrument, with its own idiosyncratic scanning features, could be duplicated? One might analyze a paired reference standard with the master instrument and align each newly manufactured instrument to the set of wavelength peak position values found using the master instrument. This approach was used in the early development of NIR instruments in an attempt to make them more similar (11–13). The problem with this approach is that the rest of the world may have a different set of nominal wavelength values (measurands) than the master instrument. Spectrometers change over time and with conditions, and wavelength alignment drifts. A common spectrometer may exhibit inherent nonlinearity in its wavelength axis, which prevents accurate alignment throughout the complete spectral range. Consistently aligning the wavelength axis with other instrument types, and even aligning a single instrument over time, therefore requires not only repeatability, but also accuracy, indicated by agreement with reference standard peak positions (measurands). These reference peak positions are provided by a set of known physical standards (physical in the sense of primary lines that are universally reproducible). Such standards may be primary emission standards (that is, atomic lines), physical standards of mixed rare-earth glasses, or even organic liquids at standard pathlength and temperature. The measurands for calibrating wavelength scales would optimally be obtained from first principles whenever possible, because this approach is universal and based on sound physical science and metrology, which by definition is fixed to a specific known uncertainty. The national laboratories around the globe use this approach as much as possible (14,15). Other materials, such as etalons, may also be used as physical standards. The use of etalons for wavelength alignment was successfully implemented for a process diode-array spectrometer (16–20). The instrument was extremely precise, but too expensive for market acceptance because the spectrograph had originally been designed for the harsh conditions of an Earth-orbiting satellite.
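Alignment to reference measurands, including mild axis nonlinearity, can be sketched as a low-order polynomial mapping from observed to reference peak positions. The reference values below are the atomic lines quoted earlier; the observed positions are hypothetical readings from an instrument being aligned, and the quadratic order is an assumption suited to gentle nonlinearity.

```python
import numpy as np

# Reference measurands (nm) from the atomic lines cited in the text,
# paired with hypothetical positions reported by the instrument
reference = np.array([703.24, 881.94, 979.97, 1013.97, 1529.58, 1816.73])
observed = np.array([703.61, 882.35, 980.41, 1014.42, 1530.15, 1817.38])

# Fit a quadratic correction observed -> reference to absorb both a
# constant offset and mild nonlinearity in the wavelength axis
coeffs = np.polyfit(observed, reference, 2)
corrected = np.polyval(coeffs, observed)
residuals = corrected - reference

print("max error before alignment: %.3f nm" % np.max(np.abs(observed - reference)))
print("max residual after alignment: %.3f nm" % np.max(np.abs(residuals)))
```

The same fitted polynomial is then applied to every wavelength in the instrument's axis, not just the reference peaks, which is what carries the correction across the full spectral range.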

Determining a unique set of peak wavelength values based on an arbitrary master instrument does not serve our purposes adequately over time. One would ideally expect high precision with universal accuracy, so that databases now and in the future may be aligned with any instrument technology and be carried reproducibly into the future. In addition, a first-principles approach allows one to standardize the data based on known wavelength positioning at any time. By using first principles, wavelength alignment may be defined electronically and, thus, no physical instrument or object is required to transfer the technology needed for wavelength axis calibration. What is required to make all this work are repeatable and well-characterized peak wavelength standards, whether emission sources, pure compounds, or solid rare-earth oxide glasses, with known peak wavelength positions, stability, and well-defined uncertainty. The requirements are for glasses, crystalline polymers, and pure chemical solutions with an uncertainty of better than 0.1 nm at multiple wavelengths. For infrared and Raman systems, the uncertainty should be better than 0.1% and better than 0.05 cm-1 (wavenumber) (21). If a spectrometer is repeatable, but not reproducible (accurate), then database and calibration transfer becomes an issue of trying to align the wavelength axis to some unknown or poorly defined system.
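The nanometer and wavenumber requirements can be related by first-order error propagation: since the wavenumber is 1e7/λ (λ in nm), a wavelength uncertainty dλ maps to a wavenumber uncertainty of 1e7·dλ/λ². The sketch below evaluates this at the 1535.5 nm NIST line cited earlier; the 0.1 nm input uncertainty is the figure quoted in the text.

```python
def nm_to_wavenumber(lambda_nm):
    """Convert a wavelength in nm to a wavenumber in cm^-1."""
    return 1.0e7 / lambda_nm

def wavenumber_uncertainty(lambda_nm, u_lambda_nm):
    """Propagate a wavelength uncertainty (nm) into wavenumber (cm^-1)
    via first-order error propagation: u(nu) = 1e7 * u(lambda) / lambda^2."""
    return 1.0e7 * u_lambda_nm / lambda_nm ** 2

# A 0.1 nm wavelength uncertainty at the 1535.5 nm NIST line
u = wavenumber_uncertainty(1535.5, 0.1)
print(f"{nm_to_wavenumber(1535.5):.2f} cm^-1 +/- {u:.3f} cm^-1")
```

Note that 0.1 nm at 1535.5 nm corresponds to roughly 0.4 cm^-1, far looser than the 0.05 cm-1 target quoted for infrared and Raman systems, which illustrates how stringent the wavenumber requirement is relative to typical NIR wavelength uncertainties.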

Source: Spectroscopy