A Pragmatic Approach to Managing Interferences in ICP-MS

Article

Spectroscopy
May 1, 2008
Volume 23, Issue 5

While inductively coupled plasma-mass spectrometry (ICP-MS) is capable of part-per-quadrillion (ppq) detection limits under ideal conditions, most applications do not require this level of sensitivity and do not justify the cost associated with achieving it. Practical sensitivity in ICP-MS is determined not by instrument signal-to-noise ratio, but rather by controlling interferences and matrix effects in real samples. Understanding the sources of these effects and their management is critical in determining the most practical way to achieve specific data quality objectives.

Let's look at interferences in inductively coupled plasma mass spectrometry (ICP-MS) from a real-world standpoint. What are they, where do they come from, and to what extent do they impact data quality? We commonly think of interferences as anything that contributes to an analytical measurement that is not the analyte itself. This is correct, of course, but not everything that meets this criterion actually affects the final analytical result. Instrument background, or even matrix background, if it is constant and not too large, might not significantly impact analytical accuracy or sensitivity. On the other hand, even relatively small contributions from the sample that are unique to that sample or difficult to predict can result in large errors in quantification. A simple way to understand the real effect of interferences is this: if you could remove the analytes from your samples completely and then measure those samples against your calibration blank, the result would be the sum of your quantitative interferences. Depending upon the particular sample and analyte, this sum might be small or large, positive or negative, and for a given lot of samples, its range determines your reporting limits.

Of course, we can't do this with every sample, yet clearly we still need to manage these interferences. But how, and to what extent? The "how" part is fairly well understood, given the tools currently available, and it depends upon dividing the interferences roughly into two groups that we call spectroscopic interferences and nonspectroscopic interferences. The two groups have different causes and are handled differently, but the net effect is the same: if they are not reduced sufficiently, they can impact the analytical result. The "to what extent" is the topic of this column.
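To make the thought experiment above concrete, here is a minimal numerical sketch. All values and variable names are invented for illustration; real numbers depend entirely on the instrument, matrix, and analyte.

```python
# Hypothetical illustration of the "analyte-free sample" thought experiment.
# All numbers are invented; they are not from any real measurement.

calibration_blank_cps = 120.0          # background signal in the calibration blank
analyte_free_sample_cps = 950.0        # the same sample with all analyte removed
response_factor_cps_per_ppb = 8000.0   # sensitivity for this analyte isotope

# The difference between the analyte-free sample and the calibration blank,
# converted to concentration, is the sum of the quantitative interferences.
interference_ppb = (analyte_free_sample_cps - calibration_blank_cps) / response_factor_cps_per_ppb
print(f"Quantitative interference: {interference_ppb * 1000:.1f} ppt")
# ~103.8 ppt: any reporting limit for this sample-analyte pair must sit above this.
```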

Steve Wilbur

Spectroscopic and Nonspectroscopic Interferences

Spectroscopic interferences contribute directly to a specific analyte signal by possessing the same mass-to-charge ratio (m/z) as the analyte ion. There are three types, each of which must be managed differently. The first type is isobaric: an isotope of another element with the same nominal mass as the analyte isotope, for example, 100Mo and 100Ru. In nearly all cases, these can be avoided by selecting an alternate analyte isotope that does not suffer an isobaric overlap.

The second type is doubly charged. Because the mass spectrometer measures mass-to-charge ratio rather than mass alone, an ion carrying a charge other than 1 appears at an m/z different from its mass. A number of common elements possess second ionization potentials low enough to form doubly charged ions in an argon plasma, and these ions interfere with analytes at half their actual mass. For example, 136Ba2+ commonly interferes with 68Zn+. Again, isotope selection is the most common way to avoid doubly charged interferences.

The third and most problematic type of spectroscopic interference is polyatomic. In this case, a molecular ion, formed either in the plasma or from interactions in the interface region or collision cell, possesses the same m/z as the analyte ion. These are extremely common, and a single analyte isotope frequently suffers multiple polyatomic interferences. Table I shows some typical examples. Management strategies for polyatomic interferences include controlling the presence of the interfering components in the sample, analyte isotope selection, mathematical correction, and the use of collision or reaction cell technology. The specific details of these techniques are well understood and documented (1) and will not be covered here. The best approach depends upon a number of factors, but generally, collision or reaction cell techniques are superior to mathematical correction in all but the simplest matrices. Even here, there are two strategies, each with advantages and limitations, depending upon the specific analytical requirements. Reaction cell techniques (those depending upon specific chemical reactions within the cell) can offer very high efficiencies, but they generally are limited to removing one or a few specific interferences in well-defined matrices. Collision cell techniques use a nonreactive gas such as helium to preferentially reduce the kinetic energy of the larger polyatomic ions within the cell. A simple positive voltage gradient at the cell exit then discriminates against the low-energy polyatomics and transmits the higher-energy atomic ions, a process called kinetic energy discrimination (KED). While KED is somewhat less efficient than the best reactive techniques, it reduces all polyatomic interferences simultaneously, and for this reason it is the preferred technique for multielement determinations in complex or unknown matrices.
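As a rough illustration of how these same-m/z overlaps arise at unit mass resolution, the sketch below screens an analyte isotope against deliberately tiny, hypothetical lookup tables of isobars, doubly charged ions, and polyatomic ions. A real tool would use complete isotope and polyatomic tables; the entries here are only examples.

```python
# A minimal sketch of screening an analyte isotope for potential spectroscopic
# overlaps at unit mass resolution. The tables are tiny and illustrative only.

ISOBARS = {100: ["100Mo", "100Ru"], 68: ["68Zn"]}                 # nominal mass -> isotopes
DOUBLY_CHARGED = {136: "136Ba2+"}                                  # true mass -> M2+ ion
POLYATOMICS = {68: ["36Ar32S+", "35Cl33S+"], 75: ["40Ar35Cl+"]}    # m/z -> molecular ions

def screen(analyte_isotope: str, mass: int) -> list[str]:
    """Return potential same-m/z interferences on a singly charged analyte ion."""
    # Isobaric: another element's isotope at the same nominal mass.
    hits = [iso for iso in ISOBARS.get(mass, []) if iso != analyte_isotope]
    # Doubly charged: an ion of true mass 2*m appears at m/z = m.
    if mass * 2 in DOUBLY_CHARGED:
        hits.append(DOUBLY_CHARGED[mass * 2])
    # Polyatomic: molecular ions whose summed nominal mass equals m.
    hits.extend(POLYATOMICS.get(mass, []))
    return hits

print(screen("68Zn", 68))   # ['136Ba2+', '36Ar32S+', '35Cl33S+']
print(screen("75As", 75))   # ['40Ar35Cl+']
```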

Table I: Potential polyatomic interferences on first-row transition elements from a matrix containing common components (sulfur, chlorine, carbon, and calcium) in dilute nitric acid

Nonspectroscopic interference is the catch-all term for any interference that is, well, not spectroscopic. These are primarily enhancement and suppression effects that alter the response of groups of analytes. In general, nonspectroscopic interferences do not create a response where there was none; rather, they alter the response of a group of analytes in a sample compared with a reference sample. They are commonly termed matrix effects because they are caused by sample matrix components that differ from those in the reference or calibration samples. There are many different sources of matrix effects, but they fall into a few common categories. Sample transport and nebulization effects result from physical attributes of the sample (such as viscosity, volatility, or surface tension) that can alter the efficiency of sample transport and nebulization, resulting in either more or less sample reaching the plasma. Ionization suppression occurs when high concentrations of easily ionized elements in the plasma preferentially suppress the ionization of elements with higher ionization potentials. Space-charge effects preferentially suppress low-mass ions in the presence of high concentrations of high-mass ions by repelling the low-mass ions from the highly positive ion beam in the ion optic region of the mass spectrometer. A reliable general test of matrix tolerance is the rate of metal oxide formation in the plasma; cerium oxide is normally used. Figure 1 shows the signal suppression effects of undiluted seawater on a group of elements at CeO/Ce ratios ranging from 0.2% to 2%. Not only is sensitivity higher at lower CeO/Ce ratios, but internal standard correction is simpler and more reliable, and accuracy is much improved. Other effects can specifically enhance the response of groups of analytes. For example, the presence of organic carbon in the sample is known to enhance the response for arsenic, selenium, and possibly a few other similar elements.

Within limits, matrix effects are controlled through the use of internal standards. Internal standards are elements (or isotopes) other than the analytes that behave similarly to the analytes in the presence of matrix effects, so their behavior can be used to correct for presumably similar (though unknown) effects on the analytes. To be effective, an internal standard must respond to the variable matrix conditions in exactly the same way as the analyte elements, and it must not be present in the original sample. The perfect internal standard is another isotope of the analyte element spiked at an unnatural abundance, a technique called isotope dilution. These enriched isotopes are expensive when available, and for some elements they are not available at all. Furthermore, while isotope dilution is excellent at eliminating matrix effects, it cannot eliminate spectroscopic interferences. Because perfect internal standards are rarely available, matrix effects still must be minimized as much as possible. This is achieved in one of two ways. Matrix matching, that is, making up standards and blanks in the same matrix as the samples, effectively cancels the effects of the matrix and works well when the matrix is well characterized and the same for all samples. The method of standard addition is a special case of matrix matching in which the calibration standards are added to the sample itself, ensuring a perfect matrix match. It is time consuming, however, because it involves preparing and measuring multiple spikes in every sample. When the samples in a group are not identical in matrix, matrix matching is not practical. In this case, the matrix must be reduced or eliminated, either through various matrix elimination techniques or, more commonly, through sample dilution. Both approaches have disadvantages, including analyte loss or reduction of analyte concentration, contamination, the possibility of error, and increased time and effort.
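For readers who like to see the arithmetic, here is a minimal sketch of the two corrections just described: internal standard correction (scaling the analyte signal by the internal standard's recovery) and the method of standard addition (fitting signal versus added concentration and extrapolating to zero signal). All numbers are invented, and real methods apply these per isotope in the instrument software.

```python
# Minimal sketches of internal standard correction and standard addition.
# All values are invented for illustration.

def internal_standard_correct(analyte_cps: float,
                              is_cps_in_sample: float,
                              is_cps_in_standard: float) -> float:
    """Scale the analyte signal by the internal standard's recovery.

    If matrix suppression cut the internal standard to 80% of its response
    in the calibration standards, the analyte (assumed to behave identically)
    is scaled up by the same factor.
    """
    recovery = is_cps_in_sample / is_cps_in_standard
    return analyte_cps / recovery

def standard_addition(added_ppb: list[float], signals_cps: list[float]) -> float:
    """Return the native concentration from a standard-addition series.

    Fits signal = intercept + slope * added_concentration by least squares;
    the magnitude of the x-intercept (intercept / slope) is the native level.
    """
    n = len(added_ppb)
    mean_x = sum(added_ppb) / n
    mean_y = sum(signals_cps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(added_ppb, signals_cps))
             / sum((x - mean_x) ** 2 for x in added_ppb))
    intercept = mean_y - slope * mean_x
    return intercept / slope

print(internal_standard_correct(4000.0, 8000.0, 10000.0))                         # 5000.0 cps
print(standard_addition([0.0, 1.0, 2.0, 4.0], [500.0, 1500.0, 2500.0, 4500.0]))   # 0.5 ppb
```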

Figure 1: Signal suppression of a group of elements in undiluted seawater at CeO/Ce ratios ranging from 0.2% to 2%.

The Role of Interferences in Practical Limits of Detection

As ICP-MS instrument manufacturers and users, we tend to get caught up in the "How low can you go?" game of competitive detection limits. Because everyone knows that ICP-MS is very sensitive, the tendency is to see who can push that limit the furthest, and we speak routinely of parts-per-quadrillion (ppq) or even sub-ppq levels as though they were something we can easily grasp. A concentration of 1 ppq is 1 part in 10^15, or the first 0.15 mm of a trip from the Earth to the sun. And while a simple calculation of blank counts per second (cps) divided by maximum response (cps/concentration) for a standard can in fact give us blank equivalent concentrations (BECs) in the parts-per-quadrillion range, most of us need to ask ourselves: "Does my application really need that?" and "What are the real-world limitations and costs associated with the quest for parts-per-quadrillion detection limits?" This leads me to my point: how can we achieve sufficient sensitivity (limits of detection or quantification) to answer our question or solve our problem at an acceptable cost in equipment, supplies, time, and frustration?
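The BEC arithmetic mentioned above is simple enough to show in a few lines. The numbers below are invented, though of a plausible order for a very sensitive instrument with a very clean blank.

```python
# Blank equivalent concentration (BEC) from the relationship given above:
# blank signal divided by sensitivity. Invented, illustrative numbers.

blank_cps = 0.1                   # counts per second in an extremely clean blank
sensitivity_cps_per_ppt = 200.0   # response factor: cps per ppt of analyte

bec_ppt = blank_cps / sensitivity_cps_per_ppt   # 0.0005 ppt
print(f"BEC = {bec_ppt * 1000:.2f} ppq")        # 0.50 ppq
```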

Detection limits in ICP-MS are determined by multiplying the standard deviation of the blank signal by some confidence factor and then by the response factor (concentration/cps) for the particular analyte isotope. Any sample dilution required before analysis also must be factored in. The response factor is fairly easy to determine with good accuracy and precision, and the relative impact of interferences on it is small. It is the blank that is problematic. Beyond simple counting statistics, two things play havoc with the blank: contamination (the blank contains the analyte or analytes of interest) and interferences (the blank contains something that is not the analyte of interest but is indistinguishable to the instrument). The possible sources of both are numerous and multiply as we attempt to measure at lower and lower levels. For all practical purposes, reporting limits are almost always blank-limited; that is, you can't report a concentration that is not statistically different from the method blank or blanks. If we use a very expensive blank, treat it well, and measure it for a sufficient length of time in a very expensive clean room, we can achieve instrument detection limits for most elements in the low to sub-part-per-trillion range or even lower. For some applications, particularly in the semiconductor industry, this is necessary and worth the cost and effort. In many other applications, this kind of sensitivity and precision is neither necessary nor even achievable within reasonable limits of cost and effort. For example, what if the "blank" isn't very reproducible at all? Remember, the most representative "blank" is the actual sample with the analytes removed. A real blank should include all possible sources of error that might affect a sample, including contamination, human error, matrix effects, and long-term measurement imprecision. Typically, this is accounted for through the use of a preparation blank that is subjected to the same sampling and preparation steps as the sample. While this certainly accounts for potential contamination introduced during sample preparation, it does not account for imprecision in preparing the replicate preparation blanks or for matrix effects, which can vary significantly from sample type to sample type.
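As a worked example of this calculation, here is a minimal sketch using invented blank replicates, a 3-sigma confidence factor, and a tenfold sample dilution.

```python
# Detection limit as described above:
# DL = (confidence factor) x (std dev of blank replicates)
#      x (response factor) x (dilution factor). Invented numbers.

import statistics

blank_replicates_cps = [11.2, 12.8, 10.9, 13.1, 11.7, 12.3, 11.5]
confidence_factor = 3.0               # commonly 3 sigma
response_factor_ppt_per_cps = 0.02    # concentration per unit signal
dilution_factor = 10.0                # 10x dilution before analysis

sd_blank = statistics.stdev(blank_replicates_cps)
detection_limit_ppt = (confidence_factor * sd_blank
                       * response_factor_ppt_per_cps * dilution_factor)
print(f"Detection limit = {detection_limit_ppt:.2f} ppt")   # ~0.50 ppt
```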

Summary

At the end of the day, the first question we need to ask is "How low do we really need to measure reliably?" A good example is drinking water measurement: a fairly simple sample with important public health ramifications. Most regulated elements in drinking water have maximum contaminant levels in the single-digit parts-per-billion range, about a million times higher than the parts-per-quadrillion limits that are possible under perfect conditions with ultraclean samples. In this case, we don't have to be quite as careful with the blank as we would be for, say, a semiconductor-grade water application. If we can manage matrix effects (drinking waters can have quite high total dissolved solids) and common polyatomic interferences simply and effectively, we can easily achieve the necessary reporting limits (~0.1 ppb). More complex samples such as soil digests or geological samples require more careful management of matrix effects, which can vary significantly from sample to sample. While dilution is commonly used, techniques that can minimize matrix effects without diluting the sample are preferred because they eliminate the dilution factor as well as the potential for contamination and error associated with dilution. A newly developed technique called aerosol dilution (2) addresses the need to minimize matrix effects without conventional dilution. With matrix effects under control, the accuracy of internal standard correction also is enhanced significantly. Managing polyatomics is also more critical in variable and complex matrices, in which the interferences might be unknown and differ significantly from sample to sample. Because of this, neither mathematical correction nor the use of reactive cell gases is ideal, because neither is universal. Fortunately, helium collision mode coupled with kinetic energy discrimination can universally reduce polyatomic interferences in complex, variable, and unknown samples to low parts-per-trillion levels with good reliability.

Steve Wilbur is a senior applications scientist in the ICP-MS group at Agilent Technologies. He is part of a small but ardent group of fanatics worldwide who believe we will ultimately build the "perfect" mass spectrometer. In the meantime, he spends his time thinking about both living with and eliminating the current "imperfections." The views expressed in this column are his and do not represent those of Agilent Technologies.

References

(1) E. McCurdy and G. Woods, J. Anal. At. Spectrom. 19, 607–615 (2004).

(2) "Direct Analysis of Undiluted Soil Digests Using the Agilent High Matrix Introduction Accessory with the 7500cx ICP-MS," Agilent Technologies application note 5989-7929EN 2008.
