Now that we have shown the relationships between different units for concentration, we continue by demonstrating their effects on the data we collected and used for our examples. What are the ramifications and consequences of these findings?


This column continues our discussion of the effects of units on calibration, as described in part I of this series (1) and examined through the use of the classical least squares (CLS) approach to calibration (2–13). In this column, we continue the numbering of equations, figures, and tables from where we left off in part I (1).

In our previous columns (1,9) we confirmed that the volume percent is the physical quantity that agrees with the spectroscopic evaluation of the contribution of the components of a mixture to the spectrum of the mixture. We also demonstrated that the nature of the CLS algorithm allowed us to determine two important properties of the conversion of "concentration" between different units.

We determined in Table I from part X of the previous subseries (11), as well as in Table IV from part XI (12), that there is not a unique conversion between concentration values expressed in different units. We also showed, in Figures 1–3 from part I of this subseries (1), that there is not a linear relationship between concentration values when expressed in different units.

Thus, the data (Figures 1–3 from part I of this subseries and Figures 7–9 here) show that different units of measurement have different relationships to the spectral values, for reasons having nothing to do with the spectroscopy. One conclusion from this finding is that it disproves the usual, although inevitably unstated, assumption that different measures of concentration are equivalent except, perhaps, for a constant scaling factor. Furthermore, it is clear that if two measures of concentration are nonlinear with respect to each other, then a third measurement, such as a spectroscopic measurement, that is linear with respect to one measure must be equally nonlinear with respect to the other.

We also showed how a concentration measurement unit can be constructed so that it is indeed equivalent to the volume percent, except for a scaling factor. The key to this conversion is to multiply the volume percentage by a quantity that has volume in its denominator; examples include weight per unit volume and molarity (moles per unit volume).
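As a brief numerical sketch of this point (the densities are approximate literature values, the mixture composition is invented for illustration, and ideal mixing is assumed), a weight-per-unit-volume concentration is simply the volume fraction scaled by the pure component's density, a constant factor:

```python
# Hypothetical three-component mixture; densities are approximate
# literature values in g/mL. A concentration expressed as weight per unit
# volume of mixture equals (pure-component density) x (volume fraction),
# so it is volume percent times a constant scaling factor.
densities = {"toluene": 0.867, "dichloromethane": 1.325, "n-heptane": 0.684}

def weight_per_volume(vol_fractions):
    """Grams of each component per mL of mixture (ideal mixing assumed)."""
    return {c: densities[c] * phi for c, phi in vol_fractions.items()}

mix = {"toluene": 0.30, "dichloromethane": 0.50, "n-heptane": 0.20}
wpv = weight_per_volume(mix)
# Each entry is the component's volume fraction multiplied by a constant
# (its pure density), which is the equivalence described in the text.
```

The same reasoning applies to molarity: moles per liter of mixture is the volume fraction times the pure component's molar density.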

Figure 7: Plot of CLS values versus weight percent and mole percent, for toluene. (a) Weight percent versus CLS values; (b) mole percent versus CLS values.

One of the more common measures of concentration used in conjunction with spectroscopic analysis, however, is weight percent (that is, weight/unit weight), which is not among the measurement units that are equivalent to volume percent. Table I from part X (11), as we previously noted, shows that a given value of volume percent for a component can correspond to a wide range of weight percents, and vice versa: a given value of weight percent can correspond to a correspondingly wide range of volume percents. Here, Figures 7–9 show the same effect: many values of concentration calculated from the spectra using the CLS algorithm, which stands as a surrogate for the volume fraction, correspond to a given value of the concentration expressed in weight percent units. This variability does not depend on the concentration of the analyte, but on the composition of the rest of the mixture, which affects the volume fractions and the spectroscopic concentrations while the weight percents remain essentially constant at each level (they are not exactly constant, because of the "dispense approximately, then measure exactly" method used to make the samples).

This nonequivalency has nothing to do with the spectroscopy; it is purely a matter of elementary physical chemistry, and is the source (or at least one of the sources) of what we used to call the *matrix effect* in undergraduate analysis courses. In those courses, the matrix effect was typically considered small, but Table I from part X (11) and Figures 7–9 show that it can be, and in our experiments indeed is, very large; in our case it is a source of errors as large as 5–10%. This is larger than any of the usual laboratory error sources (a laboratory showing such poor performance would be rejected for consideration as a reference laboratory), larger than virtually any instrumental error, and larger than any other error source we normally encounter in spectroscopic analysis.
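The magnitude of this matrix effect can be sketched numerically. In this illustration (approximate literature densities, ideal mixing assumed, compositions invented), the analyte's weight percent is held fixed while only the ratio of the other two components changes:

```python
# Hold toluene at 30 wt% and vary only the dichloromethane/heptane ratio
# of the remaining 70 g. Densities in g/mL are approximate literature
# values; ideal (additive) mixing volumes are assumed.
rho = {"toluene": 0.867, "dcm": 1.325, "heptane": 0.684}

def volume_percent(weights):
    """Volume percent of each component from component weights (grams)."""
    vols = {c: w / rho[c] for c, w in weights.items()}
    total = sum(vols.values())
    return {c: 100.0 * v / total for c, v in vols.items()}

for dcm_g in (0.0, 35.0, 70.0):
    w = {"toluene": 30.0, "dcm": dcm_g, "heptane": 70.0 - dcm_g}
    phi = volume_percent(w)["toluene"]
    print(f"DCM {dcm_g:4.0f} g -> toluene {phi:5.2f} vol%")
```

Although the weight percent of toluene never changes, its volume percent (and hence the spectroscopically sensed concentration) sweeps through a range of many percentage points as the matrix composition varies, consistent with the 5–10% errors described above.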

Figure 8: Plot of CLS values versus weight percent and mole percent, for dichloromethane. (a) Weight percent versus CLS values; (b) mole percent versus CLS values.

Yet this error source has previously been hidden and undetected, despite being the largest error source in our chemometric calibration work. On reflection, one of the problems a calibration algorithm must solve, when confronted with data in which the "wrong" units are used for the analyte values, is how to determine the value of the analyte when two (or more) samples are described by the scientist performing the calibration as having different analyte values (that is, different reference values) while the spectroscopic data tell the algorithm that they are the same. And, of course, the opposite situation invariably occurs as well: the spectroscopy indicates that the analyte concentrations are the same, while the reference values indicate that they differ.

Figure 9: Plot of CLS values versus weight percent and mole percent, for n-heptane. (a) Weight percent versus CLS values; (b) mole percent versus CLS values.

It is a testimony to the power of the mathematics that despite the large magnitude of this error source, the algorithms can indeed unravel the effects and (most of the time) create models for describing the mixtures with reasonable, but not complete, accuracy.

Classically, the underlying quantity that chemical analysis attempts to determine has been the concentration of the analyte. Little mention was made of the units that the concentration was to be measured in. Underlying this lack of interest was a hidden and unstated assumption, namely that the units used were immaterial because different measures of concentration were expressing the same underlying quantity, and the only difference between different units was a scaling factor.

However, that this is not so is clearly seen in Figures 1–3 from part I (1). We noted at the time we examined these figures that because there are many lines representing the relationship between two different units, that relationship cannot be 1:1. Any line that is drawn either vertically or horizontally on the plot would represent a single value of the concentration as expressed in one unit, but that value of concentration would correspond to a number (indeed, an infinite number) of values of concentration when expressed in the other unit, corresponding to different compositions of the "matrix." It is therefore impossible for there to be a linear relationship, or indeed any type of one-to-one relationship, between the two different units of measure because of this many-to-one correspondence.

We also see in Figures 1–3 from part I (1) that not only are the lines representing each value of constant matrix composition curved, but also that these lines have variable spacing between them, despite the fact that, in those figures, the composition of the matrix varies by a constant amount between two adjacent lines. This variable spacing has significant consequences when considering the effects of these phenomena on the calibrations produced by chemometric algorithms.
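This variable spacing can be reproduced with a small calculation (approximate densities, ideal mixing assumed, hypothetical compositions): stepping the matrix composition in equal increments does not move the volume percent in equal increments:

```python
# Toluene analyte in a dichloromethane/heptane matrix; densities in g/mL
# are approximate literature values and ideal mixing is assumed.
rho_analyte, rho_a, rho_b = 0.867, 1.325, 0.684

def vol_pct(w_analyte, matrix_frac_a):
    """Volume percent of analyte. matrix_frac_a is the weight fraction of
    component A within the non-analyte remainder of a 100 g mixture."""
    w_rest = 100.0 - w_analyte
    v = w_analyte / rho_analyte
    v_rest = w_rest * matrix_frac_a / rho_a + w_rest * (1 - matrix_frac_a) / rho_b
    return 100.0 * v / (v + v_rest)

# Equal 0.25 steps in matrix composition...
vals = [vol_pct(30.0, f) for f in (0.0, 0.25, 0.5, 0.75, 1.0)]
# ...give unequal steps in volume percent: the "curves" are unevenly spaced.
gaps = [b - a for a, b in zip(vals, vals[1:])]
```

The successive gaps grow steadily rather than staying constant, which is exactly the uneven spacing between adjacent constant-composition lines seen in Figures 1–3.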

We have previously described the development of the multiple linear regression (MLR) algorithm (14–16). We also showed how a correction factor for the presence of an interfering absorbance band can be derived, and how measurements at a different wavelength can correct an absorbance reading at the analytical wavelength. Unstated, although implicit in the assumption of Beer's law, is that all absorbance readings are linear with respect to the concentration of the analyte, and also with respect to any materials in the sample that have interfering absorbance bands. Thus, the relationships between different units are critical in interpreting the results from an MLR calculation.

The main relationship that Figures 1–3 from part I (1) show is the nonlinear relation between weight percent and volume percent, or equivalently, between weight percent and absorbance. The surface meaning of those figures is that because of those nonlinear relationships, at best a different coefficient would be needed for samples with different analyte concentrations to convert the absorbance value to a concentration value.

In the presence of interfering absorbance bands, the situation becomes even more complicated. With linear data, ordinarily a single correction factor would be needed to calculate, from the absorbance reading at one wavelength, the correction to the measured absorbance at the appropriate analytical wavelength. As we described in the discussions of Figures 1–3 from part I (1), however, the curves describing the relationships between different units are not evenly spaced for constant differences between the corresponding sample matrixes. This means that the correction factor is different for different concentrations of the interferences as well as of the analyte. Therefore, even the correction for interferences is not a constant coefficient; the correction factor itself needs a correction. The conversion between spectral absorbance and concentration, therefore, requires a coefficient that depends not only on the amount of analyte, but also on the amount of each of the interferences.

In other words, if everything were linear, then the theoretical correction for interference would be all that was needed to get accurate results. Given that nearly all relationships between optical response and concentration are nonlinear, however, the correction also needs a correction, and very likely the correction to the correction needs a correction as well.
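A minimal two-wavelength sketch illustrates this (all absorptivities here are invented, the densities are approximate, and Beer's law is assumed to hold in volume-fraction terms): the classic single-coefficient interference correction recovers a signal linear in volume fraction, but the coefficient that would then convert it to weight percent is not constant across samples:

```python
# Binary mixture: analyte plus one interferent. Densities (g/mL) are
# approximate; K1A, K1B, K2B are hypothetical absorptivities at the
# analytical wavelength (1) and the interference wavelength (2).
rho_analyte, rho_interf = 0.867, 1.325
K1A, K1B, K2B = 2.0, 0.8, 1.5

def sample(w_analyte):
    """Return (interference-corrected absorbance, weight percent)."""
    w_interf = 100.0 - w_analyte
    va, vb = w_analyte / rho_analyte, w_interf / rho_interf
    phi_a, phi_b = va / (va + vb), vb / (va + vb)
    a1 = K1A * phi_a + K1B * phi_b      # analytical wavelength
    a2 = K2B * phi_b                    # interference-only wavelength
    corrected = a1 - (K1B / K2B) * a2   # classic single-coefficient fix
    return corrected, w_analyte

# If weight percent were linear in the corrected absorbance, these ratios
# would all be identical; instead they drift with composition.
coeffs = [w / c for c, w in (sample(w) for w in (20.0, 50.0, 80.0))]
```

The correction does its job perfectly in volume-fraction terms (the corrected signal equals K1A times the analyte's volume fraction), yet no single conversion coefficient maps that signal onto weight percent, so the "correction needs a correction," as described above.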

Looking at typical calibration results, we should have always been aware that something anomalous was happening. What sorts of calibration results would we expect from properly behaving data? Tomes dealing with the theory of calibration (17) tell us that as meaningful variables are added to a calibration equation, the calibration error should decrease, and when sufficiently many variables are included in the calibration model, the error should relatively abruptly reach the fundamental noise level. From that point on, any new variable added to the calibration model should have no effect, and the statistics for that new variable should show that it is not statistically significant.

How many variables (considering "variables" to be either absorbances at various wavelengths for an MLR model, or weights for factors for a principal component regression [PCR] or partial least squares [PLS] model) could be included in a near-infrared (NIR) model when this happens? Calibration theory, together with mathematical theory, tells us that as well. There should be no more variables in a calibration model than there are actual physical variables that affect the absorbance of the samples. Generally, that should be a fairly small number; few sample types include more than four or five components that absorb appreciable amounts of the incident radiation, so that would require four or five variables in the model, plus perhaps one more to account for the effects of optical scattering in powdered samples.

How many variables are typically found in NIR calibrations? While experience indicates that occasionally a model contains two to four variables, much more often models are found to contain 8–20 variables, with a broad maximum at around 12. And rather than decreasing rapidly to an abrupt transition to errors at the noise level, the calibration error corresponding to all those variables continues a long, slow decrease that often never terminates (at least, not within the maximum number of variables included).

Now we know why that happens. Rather than fitting discrete compositional or physical variations of the data, the calibration is fitting the higher factors of the spectral data not to the changes of the spectra caused by physical or chemical variations, but to the nonlinearities between the spectra and the analytical values the calibration is asked to fit. And to be sure, the residual errors caused by those nonlinearities do indeed continually decrease as each new spectral variable is included in the calibration model. Each new variable gives the calibration a better fit to the data and leaves smaller and smaller residual nonlinearities to be fit by further factors, but the underlying noise level is generally never reached.
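The error floor caused purely by unit nonlinearity can be shown with a synthetic calibration (a sketch, not the authors' procedure: three-component mixtures with approximate densities, ideal mixing, and idealized noise-free "spectra" taken to be the volume fractions themselves). Even with perfect spectral data, a plain least-squares calibration against weight percent cannot reach zero error:

```python
import numpy as np

# Synthetic mixtures of toluene, dichloromethane, and n-heptane;
# densities in g/mL are approximate literature values.
rng = np.random.default_rng(0)
rho = np.array([0.867, 1.325, 0.684])

w = rng.dirichlet(np.ones(3), size=500) * 100   # weight percents (rows sum to 100)
v = w / rho                                     # ideal component volumes
phi = v / v.sum(axis=1, keepdims=True)          # volume fractions = noise-free "spectra"

# Linear calibration (with intercept) of component 1's weight percent
# against the spectral variables.
X = np.column_stack([phi, np.ones(len(phi))])
y = w[:, 0]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse = np.sqrt(np.mean((X @ beta - y) ** 2))
# rmse stays well above zero even though the "spectra" are noiseless:
# the leftover error is entirely the weight/volume nonlinearity.
```

Since the residual is structured nonlinearity rather than noise, any additional variables correlated with composition will keep nibbling at it, producing exactly the long, slow error decrease described above instead of an abrupt drop to the noise level.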

When all is said and done, we also note that very early references clearly define the requirement for concentration in the Lambert–Beer equation to be gram-molecular weight per liter (18). The relationship between measured spectral signal and the concentration of a molecule is most often expressed as:

*A* = ε*cl*          [14]

where ε is the molar absorptivity (referred to as the molar extinction coefficient by earlier physicists) in units of L • mol^{-1} • cm^{-1}; *c* is the concentration of molecules in the spectrometer beam in units of mol • L^{-1} (note that this is a scaled volume-fraction unit); and the pathlength *l* is the thickness of the measured sample in centimeters. With this assignment of units, it can be seen that the units on the right-hand side of equation 14 cancel, as they must to match the dimensionless quantity (absorbance) on the left-hand side. Later authors (19) generalized this concept, noting that the fundamental requirement is for the units on both sides to match, and that various sets of units can be used as long as this restriction is met. It seems likely that the specification of the units by Harrison and colleagues (18) was intended to provide a single, universal set of absorptivities for common analytes that could be published in tables, so that other scientists could use them to perform analyses without having to redetermine them every time they wanted to measure those analytes.
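As a quick arithmetic check of the unit cancellation in equation 14 (the numbers here are purely illustrative, not measured values):

```python
# epsilon [L mol^-1 cm^-1] * c [mol L^-1] * l [cm] leaves a
# dimensionless absorbance, as equation 14 requires.
epsilon = 150.0   # molar absorptivity, hypothetical analyte
c = 0.004         # concentration, mol per liter
l = 1.0           # pathlength, cm
A = epsilon * c * l   # dimensionless absorbance
```

Multiplying the units out, L • mol^{-1} • cm^{-1} times mol • L^{-1} times cm cancels completely, leaving the pure number required on the left-hand side.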

(1) H. Mark and J. Workman, Jr., *Spectroscopy ***29**(2), 24–37 (2014).

(2) H. Mark and J. Workman, Jr., *Spectroscopy ***25**(5), 16–21 (2010).

(3) H. Mark and J. Workman, Jr., *Spectroscopy ***25**(6), 20–25 (2010).

(4) H. Mark and J. Workman, Jr., *Spectroscopy ***25**(10), 22–31 (2010).

(5) H. Mark and J. Workman, Jr., *Spectroscopy ***26**(2), 26–33 (2011).

(6) H. Mark and J. Workman, Jr., *Spectroscopy ***26**(5), 12–22 (2011).

(7) H. Mark and J. Workman, Jr., *Spectroscopy ***26**(6), 22–28 (2011).

(8) H. Mark and J. Workman, Jr., *Spectroscopy ***26**(10), 24–31 (2011).

(9) H. Mark and J. Workman, Jr., *Spectroscopy ***27**(2), 22–34 (2012).

(10) H. Mark and J. Workman, Jr., *Spectroscopy ***27**(5), 14–19 (2012).

(11) H. Mark and J. Workman, Jr., *Spectroscopy ***27**(6), 28–35 (2012).

(12) H. Mark and J. Workman, Jr., *Spectroscopy ***27**(10), 12–17 (2012).

(13) H. Mark and J. Workman, Jr., *Spectroscopy ***28**(2), 24–37 (2013).

(14) H. Mark, *Principles and Practice of Spectroscopic Calibration* (John Wiley & Sons, New York, 1991).

(15) J. Workman, Jr., and H. Mark, *Spectroscopy ***7**(1), 44–46 (1992).

(16) J. Workman, Jr., and H. Mark, *Spectroscopy ***7**(3), 20–23 (1992).

(17) N. Draper and H. Smith, *Applied Regression Analysis*, 3rd Ed. (John Wiley & Sons, New York, 1998).

(18) G.R. Harrison, R.C. Lord, and J.R. Loofbourow, *Practical Spectroscopy* (Prentice-Hall, New York, 1948).

(19) J.D. Ingle and S.R. Crouch, *Spectrochemical Analysis* (Prentice-Hall, Upper Saddle River, New Jersey, 1988).

**Jerome Workman, Jr.** serves on the Editorial Advisory Board of *Spectroscopy* and is the Executive Vice President of Engineering at Unity Scientific, LLC, (Brookfield, Connecticut). He is also an adjunct professor at U.S. National University (La Jolla, California), and Liberty University (Lynchburg, Virginia). His e-mail address is JWorkman04@gsb.columbia.edu


**Howard Mark** serves on the Editorial Advisory Board of *Spectroscopy* and runs a consulting service, Mark Electronics (Suffern, New York). He can be reached via e-mail: hlmark@nearinfrared.com


*Spectroscopy*, Vol. 29, No. 9 (September 2014)
