Units of Measure in Spectroscopy, Part I: It's the Volume, Folks! - Spectroscopy

Volume 29, Issue 2, pp. 24-37

The data show that different units of measurement have different relationships to the spectral values, for reasons having nothing to do with the spectroscopy. This finding disproves the unstated, but near-universal, assumption that different measures of concentration are equivalent except, perhaps, for a constant scaling factor.

Over the course of these "Chemometrics in Spectroscopy" columns we have introduced or explained various concepts related to multivariate calibration and assessment, mostly using near-infrared (NIR) spectroscopy, but also delineating concepts that are applicable to other molecular spectroscopy techniques. One of the elusive problems associated with quantitative analysis using these techniques is the unexplained error in the results; moreover, their dependence on reference techniques for calibration relegates them to the status of "secondary techniques." Recent installments of this column have introduced a subject that has mystified analysts over the past several decades (1–11).

For as long as we've been working with NIR (since 1976 for one of us), we've recognized that modern NIR analysis is subject to a near-universal but widely ignored problem. In the early days this was common knowledge. Since then, the application of chemometrics has papered over the problem and enabled calibration models that "worked." Valuable as these applications are, they represent game-playing, not science. Newcomers accept the prevailing paradigms without much thought about what is happening "under the hood," while experienced practitioners sense that something is wrong, that the methodology is somehow not part of the normal universe of science. Many characteristics of NIR appear to fly in the face of conventional science, constituting a set of symptoms:

  • the apparent need for, and use of, more variables in calibrations than can reasonably be justified (algebra dictates that no more equations should be needed than there are variables)
  • difficulty in reproducing calibrations for the same constituents in the same type of samples
  • inability to reproduce wavelength sets (for multiple linear regression [MLR] models)
  • difficulty or inability to relate the wavelengths chosen (for MLR) or the prominent bands (for principal components regression [PCR] or partial least squares [PLS]) to spectral features
  • the standard error of calibration (SEC) should drop precipitously to the noise level, but does not
  • unexpected and unexplained (or unexplainable) "outliers"
  • the SEC and standard error of prediction (SEP) should drop precipitously when the number of wavelengths or factors equals the number of variations in the samples, but in general that does not happen
  • spectroscopic measurements should be accurate over the entire range of concentrations, not only for dilute solutions or a small range of values, but they are not
  • calibrations should be extrapolatable, with a calculable reduced accuracy at the extremes of the range, but they are not
  • calibration transfer should be as easily and readily performed as comparing two mid-IR spectra, but it is not.

Taken together, these discrepancies from all previous knowledge of chemistry, spectroscopy, physics, and mathematics constitute a set of "mysteries" that nobody in the affected fields had been able to explain.
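Several of the symptoms above turn on the two standard figures of merit, SEC and SEP. As a point of reference, here is a minimal sketch of an MLR calibration and both statistics, using synthetic data; all the variable names and numbers are illustrative assumptions, not values from the column:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: absorbances at 3 wavelengths, linearly related
# to a concentration value plus noise (purely illustrative data).
n_cal, n_val = 30, 15
X_cal = rng.uniform(0.1, 1.0, (n_cal, 3))
X_val = rng.uniform(0.1, 1.0, (n_val, 3))
true_b = np.array([2.0, -1.0, 0.5])
y_cal = X_cal @ true_b + rng.normal(0, 0.05, n_cal)
y_val = X_val @ true_b + rng.normal(0, 0.05, n_val)

# MLR calibration: ordinary least-squares fit with an intercept term.
A = np.column_stack([np.ones(n_cal), X_cal])
coef, *_ = np.linalg.lstsq(A, y_cal, rcond=None)

# SEC: residual error on the calibration set, with the degrees of
# freedom reduced by the number of fitted parameters.
resid_cal = y_cal - A @ coef
sec = np.sqrt(np.sum(resid_cal**2) / (n_cal - len(coef)))

# SEP: residual error on an independent validation set.
resid_val = y_val - np.column_stack([np.ones(n_val), X_val]) @ coef
sep = np.sqrt(np.sum(resid_val**2) / n_val)
print(f"SEC = {sec:.3f}, SEP = {sep:.3f}")
```

With a correct linear model, both statistics converge on the noise level of the data, which is precisely the behavior the bullets above say real NIR calibrations fail to exhibit.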

These various symptoms of an unknown problem have been attributed to a variety of causes, such as optical scatter effects, reference laboratory error, instrument noise, instrumental nonlinearities, stray light, detector saturation, calibration issues, the "wrong" transforms of spectral data, the "wrong" calibration algorithm, incorrect calibration parameters, an unknown reflectance relation between spectrum and composition, and sample inhomogeneity.

To be sure, all of these effects exist and affect the spectral readings and the nature of the calibration models achieved. However, these explanations failed to satisfy. While attempts to mitigate these effects "worked" to a greater or lesser extent in individual situations, they failed to improve calibration performance as often as they succeeded, and a "shotgun" approach of trying different corrections was often needed. Here again, the reasons for that behavior were unknown, and it was impossible to predict, for a new calibration situation, whether any given "fix" would succeed. Therefore, while the various methods developed to address the symptoms allowed NIR to enjoy the widespread success it has achieved, the lack of understanding of the underlying problem prevented solving that problem in the scientific sense; the net result was simply to replace one set of mysteries with another. Something was missing.

The recent discovery about the effect of using different units for the reference values, which we've described over the course of a number of previous column installments (1–11), seemed to have the right properties to explain these mysteries. The approach required a different kind of data transform, one that differed in two key respects from previous data transforms used in conjunction with NIR data:

  • It was a transform of the concentration values.
  • It was based on known physical chemistry.

The resulting conclusion is the experimental finding that electromagnetic spectroscopy is sensitive to the volume percent (or, strictly speaking, the volume fraction) of the materials in a sample. A variety of measurement errors arise from variations in sample presentation and instrumentation, as described above, but fundamentally the spectroscopy responds to volume fraction: not to weight percent (despite weight percent being overwhelmingly the most common unit for expressing analyte "concentration" in analytical work), and not to volume irrespective of density, mixing, or solvation effects. Understanding the ramifications of this volume effect may lead to an improved understanding of quantitative measurement errors and, eventually, to elevating multivariate spectroscopic techniques into primary analytical methods.
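The concentration transform at issue is the standard physical-chemistry conversion from weight fraction to volume fraction via the component densities. A minimal sketch follows; the function name and the water-ethanol example are illustrative assumptions, not taken from the column, and the relation assumes additive volumes (no volume change on mixing):

```python
def weight_to_volume_fraction(weight_fractions, densities):
    """Convert weight fractions to volume fractions.

    Standard relation, assuming additive volumes:
        phi_i = (w_i / rho_i) / sum_j (w_j / rho_j)
    """
    specific_volumes = [w / rho for w, rho in zip(weight_fractions, densities)]
    total = sum(specific_volumes)
    return [v / total for v in specific_volumes]

# Illustrative 50/50-by-weight mixture of water (1.000 g/mL) and
# ethanol (0.789 g/mL): under the additive-volume assumption the
# less dense component occupies the larger share, ~0.56, of the volume.
phi = weight_to_volume_fraction([0.5, 0.5], [1.000, 0.789])
```

The example makes the central point concrete: two samples with identical weight-percent composition but different component densities present different volume fractions to the spectrometer, so reference values expressed in weight percent cannot be expected to track the spectral response exactly.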

Source: Spectroscopy