Mass Spectrometry in the Clinical Laboratory—Challenges for Quality Assurance

Article

Spectroscopy Supplements, Special Issues, 07-01-2015
Volume 13
Issue 3
Pages: 14–19

Beyond the long-established optical standard techniques of photometry and immunoassay, liquid chromatography-atmospheric pressure ionization-tandem mass spectrometry (LC-API-MS-MS) has opened new horizons for clinical pathology. This applies to biomedical research and standardization as well as to routine diagnostic testing. The latter field can be expected to see important growth, including the introduction of automated MS-based analyzer systems, which will shift the application of MS tests from a few specialized laboratories to many standard clinical laboratories with a lower level of analytical expertise. Although MS, with its high specificity of detection and the unique feature of isotope dilution internal standardization, has the potential to deliver highly reliable analyses, a variety of method-inherent potential sources of error and inaccuracy must be recognized; these have to be addressed not only in initial method validation but also in daily quality assurance.

Most physicians’ decisions on the care of individual patients are based on, or at least take into consideration, in vitro diagnostic tests (in internal medicine this is estimated to apply to up to 70% of decisions). About 500 tests are clinically established, with a set of approximately 100 standard tests that are available in most hospitals in industrialized nations within time frames of 30–120 min after sampling. However, in Germany, for example, only approximately 2% of total healthcare expenses are allocated to in vitro testing. Because of a very high level of automation and the industrial availability of reliable and convenient high-throughput assays, clinical pathology today makes a highly valuable and efficient contribution to patient care.

The main technologies used today in clinical diagnostics for the quantification of small-molecule analytes are based on optical detection in photometry and on ligand-binding affinity in immunoassays. Whereas photometry is used for basic tests such as liver and kidney function tests, immunoassays are the predominant technology in more specialized testing, such as for hormones and drugs. Given this high degree of clinical performance and efficacy, why is innovative mass spectrometric technology of interest as an additional, orthogonal technology for clinical pathology?

LC-MS-MS: Next Generation Clinical Chemistry?
The current standard technologies indeed have several shortcomings. They are prone to interference by individual matrix factors, for example, by heterophilic antibodies in immunoassays; the nature of most interference, however, remains unidentified. Analytical specificity is often poor, as in cortisol immunoassays cross-reacting with exogenous steroids. Classical tests address single analytes, and profiling by combining several individual tests is expensive. Furthermore, and very importantly, the development of novel tests is either technically very restricted (in photometry-based tests) or very demanding, time consuming, and expensive (in immunoassays). Liquid chromatography-tandem mass spectrometry (LC-MS-MS) has the potential to overcome all of these issues and outperform traditional techniques: Matrix effects can be compensated for by the unique principle of isotope dilution internal standardization, detection based on molecular mass and disintegration pattern is highly specific, hundreds of analytes can be quantified in parallel, and method development is straightforward without the need to raise antibodies in extremely complex biological systems. With these characteristics, LC-MS-MS offers the potential to efficiently address gaps in the diagnostic portfolio of laboratory medicine and to introduce novel diagnostic concepts into a field that has seen only a few innovations during the last two decades.
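
To illustrate the isotope dilution principle mentioned above, the following minimal sketch (with hypothetical peak areas and calibrator values, not taken from any particular clinical method) shows why the analyte-to-internal-standard response ratio, rather than the absolute signal, is used for quantification:

```python
# Minimal sketch of isotope dilution internal standardization (illustrative only):
# the stable-isotope-labeled internal standard (IS) is affected by matrix effects
# and losses in the same way as the analyte, so the analyte/IS peak-area ratio
# stays proportional to concentration. All numbers and the linear calibration
# model are assumptions for demonstration.

import numpy as np

# Calibrator concentrations (ng/mL) and measured analyte/IS area ratios
cal_conc  = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
cal_ratio = np.array([0.021, 0.102, 0.198, 1.010, 1.990])

# Least-squares calibration line: ratio = slope * concentration + intercept
slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

def quantify(analyte_area: float, is_area: float) -> float:
    """Back-calculate concentration from the analyte/IS response ratio."""
    ratio = analyte_area / is_area
    return (ratio - intercept) / slope

# Patient sample: even if ion suppression halves both absolute peak areas,
# the ratio, and therefore the reported concentration, is unchanged.
print(quantify(analyte_area=45_000, is_area=90_000))   # ~25 ng/mL
print(quantify(analyte_area=22_500, is_area=45_000))   # same result
```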

Highly Complex Technology
Compared to the current standard technologies in laboratory medicine, however, MS-based methods are far more complex. Mass spectrometry relies on an interplay of chemical and physical processes, on high-precision handling of fluids and vacuum, and on the evaluation of large data sets. The software interface of current MS systems is far from the user friendliness and convenience of standard clinical chemistry analyzers, which are more or less foolproof and can be operated continuously by personnel with mid-level training. Indeed, the complexity is higher by orders of magnitude compared to standard methods, and it seems at least questionable whether this technology is really compatible with the setting, workflow, and safety requirements of standard clinical laboratories, even if substantial achievements are made in terms of automation and streamlined software in the future.

Distinct Worlds: Biomedical Research and Diagnostic Testing
It is crucial to distinguish biomedical research applications of MS from applications in a routine clinical laboratory; the two settings are almost direct opposites. This difference concerns many aspects of the workflow (see the “Biomedical research versus clinical diagnostics” sidebar), but essentially also the general aim of analyses: In biomedical research, results should primarily be interesting and improve our understanding of biochemical processes in health and disease. In contrast, results of clinical testing have to be useful and cost efficient for actual decision making, not on a population level but for an individual patient. Clinical laboratory tests should therefore be the subject of a structured health technology assessment (HTA).

Today’s Typical MS Diagnostic Applications
Today, MS methods are applied in clinical laboratories (1-5); from a global perspective, however, their use is very much restricted. Gas chromatography-mass spectrometry (GC-MS) is used in toxicological laboratories mainly for confirmation of immunoassay screening results or for a general unknown screening in selected cases. LC-MS-MS has been used for approximately a decade now in most industrialized countries to screen newborns for some treatable inherited errors of metabolism during the first five days after birth, from heel-prick dried blood spots. This is the oldest application of LC-MS-MS in medicine; it is semiquantitative by concept, and its usefulness and efficacy are not questioned. In particular, because of the simultaneous assessment of phenylalanine and tyrosine, a far lower recall rate for suspected phenylketonuria has been realized compared to previously used technologies.

The quantification of pharmaceutical drugs in blood to personalize dosing according to individual pharmacokinetic variables and to verify compliance (therapeutic drug monitoring [TDM]) is done for a growing number of compounds using LC-MS-MS, particularly for psychiatric and infectious diseases. Quantification of endogenous compounds is far less widely addressed with LC-MS-MS at present (mainly 25-hydroxyvitamin D, methylmalonic acid, cortisol, testosterone, and an increasing number of other steroid hormones, as well as plasma metanephrines in a few specialized reference laboratories). Testing by LC-MS-MS is so far restricted mainly to some tertiary care hospitals and core laboratories of private laboratory trusts. It has to be emphasized that there are substantial regional disparities, with rather widespread application in the United States and few applications, for example, in Eastern Europe.

Quality in Clinical Pathology
ISO standard 9000 defines quality as “the degree to which a set of inherent characteristics fulfills requirements.” Most clinical pathologists would probably agree that the main determinant of quality in laboratory tests is analytical trueness; that is, the agreement between the reported molar concentration of an analyte in a diagnostic sample and the true and actual number of analyte molecules in that sample. Other determinants of quality include the appropriateness of turnaround times in relation to clinical needs. Trueness of a test determines if (or to what degree) established decision limits for an analyte can be applied (for example, radiological studies in the case of serum cortisol results >1.8 µg/dL after administration of dexamethasone in suspected Cushing’s disease) and how reliably results of an individual patient can be followed up over time and across different laboratories (such as serum methylmalonic acid levels under long-term administration of vitamin B12 in pernicious anemia). Trueness is affected both by systematic errors (such as calibration bias and cross-detection of structurally related nonanalyte molecules in a complex sample) and by random imprecision, summed up as the total error. For most clinical “end users” of clinical chemistry tests, the term mass spectrometry is preconceived as standing for the highest reliability and analytical quality.
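
As an illustration of how bias, imprecision, and total error relate to an allowable error goal, the following sketch applies the widely used model of total error as the sum of the absolute bias and a multiple of the imprecision; the z-factor of 1.65 and the example figures are assumptions, not values given in this article:

```python
# Illustrative sketch (not from the article): a Westgard-style total error model,
# TE = |bias| + z * CV, compared against a total allowable error (TEa) that the
# laboratory has defined for the intended clinical use of the test.

def total_error(bias_percent: float, cv_percent: float, z: float = 1.65) -> float:
    """Estimate total analytical error (%) from systematic bias and imprecision."""
    return abs(bias_percent) + z * cv_percent

def meets_goal(bias_percent: float, cv_percent: float, tea_percent: float) -> bool:
    """Check whether the estimated total error stays within the allowable total error."""
    return total_error(bias_percent, cv_percent) <= tea_percent

if __name__ == "__main__":
    # Hypothetical serum method: 3 % bias, 5 % CV, 20 % allowable total error.
    print(total_error(3.0, 5.0))        # 11.25 %
    print(meets_goal(3.0, 5.0, 20.0))   # True
```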

Good and Bad LC-MS-MS Tests
However, it is essential that medical doctors, as the “customers” of the clinical laboratory, realize that the quality of LC-MS-MS methods applied in a clinical laboratory can differ fundamentally. For example, quality may be excellent in a method with calibration materials directly traceable to an international reference preparation and prepared in authentic sample matrix; with calibration over the whole reported concentration range; with a highly efficient sample preparation procedure eliminating sample matrix effects; with a fourfold 13C-labeled internal standard compound added at a concentration close to the clinical decision levels of the test; and with verification of results by two characteristic mass transitions and ion-ratio assessment. Quality may instead be critical, and the risk of inaccuracy high, in a method with one-point calibration using “home-brewed” calibration materials only, prepared in a non-matching matrix (such as methanol instead of serum); with a twofold deuterated internal standard that may be prone to hydrogen-deuterium scrambling because of its labeling pattern; with mere deproteinization as sample preparation, resulting in a substantial degree of ion suppression in diagnostic samples (but not in calibrators); and with assessment of only a poorly specific loss-of-water mass transition and high imprecision because of a poor signal-to-noise ratio (see Table I). Notably, the latter hypothetical test may still be appropriate for its intended clinical use if it is applied only to assess the probable presence or absence of a compound of interest in a clinical sample.
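
The ion-ratio assessment mentioned above can be illustrated with a minimal sketch; the tolerance window and the example peak areas are hypothetical, since acceptance criteria are method- and guideline-specific:

```python
# Illustrative sketch of an ion-ratio check: the ratio of the qualifier to the
# quantifier mass transition in a patient sample is compared with the mean ratio
# observed in the calibrators. The +/-20 % window is an assumption of this example.

from statistics import mean

def ion_ratio_ok(sample_qual, sample_quant, cal_ratios, tolerance=0.20):
    """Return True if the sample qualifier/quantifier ratio lies within
    +/- tolerance of the mean calibrator ratio."""
    expected = mean(cal_ratios)
    observed = sample_qual / sample_quant
    return abs(observed - expected) <= tolerance * expected

# Calibrator qualifier/quantifier ratios and a suspicious patient sample
cal_ratios = [0.41, 0.43, 0.42, 0.40]
print(ion_ratio_ok(sample_qual=2100, sample_quant=3500, cal_ratios=cal_ratios))  # 0.60 -> False
```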

How Does Quality Assurance Work in Standard (“Non-MS”) Laboratory Tests?
Since clinical decisions are directly derived from laboratory test results, a highly reliable and efficient system of quality assurance has to be in place in every clinical laboratory. Today the mainstay of this system includes:
• Daily internal quality assurance using commercially available, lyophilized QC samples with assigned target values, ideally based on a reference method measurement. These samples have to be analyzed throughout each analytical series, with predefined acceptance criteria related to the intended clinical use of the test and the individual total allowable error (a minimal sketch of such a rule-based acceptance check follows this list).
• Periodic external quality assurance based on proficiency testing schemes (“round robins,” interlaboratory surveys) organized by accredited, typically nonprofit, organizations, with identical samples sent out to a substantial number of different laboratories 1-12 times per year.
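
As referenced in the first bullet point, a daily internal QC acceptance check could, for example, be implemented with classic Westgard-style control rules; the sketch below is illustrative only, and the specific rules and control limits remain a choice of the individual laboratory:

```python
# Illustrative sketch of daily internal QC evaluation (an assumption about how the
# "predefined acceptance criteria" mentioned above might be implemented): two of
# the classic Westgard rules (1-3s and 2-2s rejection) applied to the z-scores of
# consecutive QC results for one control level.

def z_scores(results, target, sd):
    """Convert QC results to z-scores against the assigned target and SD."""
    return [(x - target) / sd for x in results]

def violates_1_3s(z):
    """Reject the run if any single QC result exceeds +/- 3 SD."""
    return any(abs(v) > 3 for v in z)

def violates_2_2s(z):
    """Reject if two consecutive results exceed 2 SD on the same side of the target."""
    return any((z[i] > 2 and z[i + 1] > 2) or (z[i] < -2 and z[i + 1] < -2)
               for i in range(len(z) - 1))

# Hypothetical QC level: target 150 nmol/L, SD 6 nmol/L
qc = [153, 148, 164, 165]          # the last two results drift high
z = z_scores(qc, target=150, sd=6)
print(violates_1_3s(z), violates_2_2s(z))   # False True
```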

Based on these two essential tools, a rather high level of reliability and between-laboratory agreement has now been achieved for many (but not all) routine diagnostic tests quantified in decentralized clinical laboratories with standard analytical systems and industrially manufactured, fully standardized, prepackaged tests.

Key Question: Additional Tools Required for QA in Clinical MS Methods?

With the increasing use of LC-MS-MS in diagnostic testing, a key question is whether the traditional backbone of quality assurance (QA) used for today’s simple standard technologies is also sufficient to verify the reliability of individual test results in clinical LC-MS-MS methods, given the extremely high complexity of this technology. It is widely accepted that this technology has the potential to allow analyses at a very high level of reliability; however, it must be recognized in the clinical context that there are a number of potential inherent pitfalls and threats concerning the accuracy and reliability of such analyses. These issues can be grouped into two categories: general challenges of implementing this demanding technology in the setting of a clinical laboratory (see the “General issues concerning the quality of clinical MS methods” sidebar) and specific potential sources of error inherent to the different stages of this technology (see the “Specific issues concerning the quality of clinical MS methods” sidebar). Both categories of issues could be subject to systematic risk management, synoptically addressing the likelihood of occurrence of an error, its relevance and impact (analytical and clinical), and the likelihood of its detection (for example, by means of a quantitative assessment matrix).
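
One possible form of such a quantitative assessment matrix, borrowed here from FMEA-style risk scoring rather than prescribed by this article, is sketched below; the failure modes and scores are hypothetical:

```python
# Illustrative sketch of a risk assessment matrix: each potential error source is
# scored for likelihood of occurrence, impact, and difficulty of detection, and
# ranked by the product of the three scores (higher = higher priority).

failure_modes = [
    # (description,                              occurrence, impact, detectability)
    ("Ion suppression by co-eluting matrix",              3,      4,       4),
    ("In-source fragmentation of conjugated metabolites", 2,      4,       5),
    ("Internal standard pipetting error",                 2,      3,       2),
    ("Retention time shift after column change",          3,      2,       1),
]

def risk_priority(occurrence: int, impact: int, detectability: int) -> int:
    """Higher scores mean more likely, more harmful, and harder to detect."""
    return occurrence * impact * detectability

ranked = sorted(failure_modes, key=lambda m: risk_priority(*m[1:]), reverse=True)
for name, o, i, d in ranked:
    print(f"{risk_priority(o, i, d):3d}  {name}")
```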


It is widely accepted in the life sciences that analytical methods have to undergo structured method validation procedures with predefined performance goals to verify meaningful results and to describe the limits of an individual method. A sophisticated system of quality management tools for analytical procedures has been developed over decades, particularly in the context of pharmaceutical drug development and forensic testing (7). Validation protocols essentially aim to clear individual measuring campaigns in highly specialized research and development (R&D) laboratory settings. Such laboratories are characterized as follows: analyses are performed in clearly distinct series by highly trained staff members, analytical configurations are stable, financial resources are not a major restriction, and there is no great urgency to report results in terms of time to result. The setting of a clinical laboratory, in particular a hospital laboratory, differs profoundly: Here, the time to result is typically very critical, testing is integrated into a continuous workflow 52 weeks a year, and series tend to be less well defined, with the aim of even realizing random-access testing. Clinical laboratories are often under substantial pressure in the context of the structural challenges facing most healthcare systems worldwide; the training level of staff, who must handle a multitude of different technologies, is often moderate to low, and the time budget available to care for complex instruments is very limited. These limitations are a systematic threat to the reliability of results generated in the context of patient care and have to be followed up continuously by an efficient quality management policy, one that clearly has to differ substantially from those implemented in life-science laboratories. The challenge is daily monitoring of a continuously ongoing process rather than validation of a high-end method in a highly stabilized environment.

Evaluation and Validation
The term validation is often used with a certain degree of inconsistency: Sensu stricto, validation is the act of declaring a documented measurement procedure to be in conformity with regulations, based on a set of data obtained in experiments, within the limits of statistics (yes or no). It is essential to define the intended use of a method and to consider a lifecycle definition of procedures.

Exemplary frameworks for method validation are the United States Food and Drug Administration’s (FDA) Guidance for Industry: Bioanalytical Method Validation and the European Medicines Agency’s (EMA) Guideline on Bioanalytical Method Validation (8,9). Notably, the aim of these documents is not to describe or suggest quality management in a routine clinical laboratory (10). Essentially, the performance characteristics of a test procedure are examined in a single-instrument setting over a short period of time, which represents a rather “static” situation. Instruments and methods in a clinical laboratory, in contrast, represent something like a highly dynamic “living system.” This applies in particular to LC-MS-MS instruments, with their extremely complex interaction of physical, chemical, and electronic processes and their inconstant nature of signal generation. Shortcomings of the above-mentioned protocols are not only that they do not address the setting, workflow, and structure of clinical laboratories, but also that acceptance criteria are not related to the clinical aim of a test. For example, the generally accepted CV of 15% in the FDA guidance would be far too high for an HbA1c assay, for which scientific societies postulate that a long-term, multilot CV of <2% is necessary.

In most publications on MS methods suggested for clinical use, an individual protocol is configured to evaluate the performance characteristics and analytical limits. These protocols increasingly take into consideration the particular analytical challenges of LC-MS-MS, such as differential ion suppression or product ion cross talk. However, the acceptance criteria are mostly self-defined in a rather arbitrary manner (for example, concerning the degree of tolerable ion suppression) and are not traceable to official or consensus documents. For this practice of “pseudo-validation,” the term method evaluation seems more appropriate. Notably, publications typically do not define, recommend, or evaluate a test-specific scheme for quality assurance and monitoring during long-term application of a measurement procedure in diagnostic use in a “real-world,” multi-instrument setting.
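
For the degree of tolerable ion suppression mentioned above, one frequently used quantitative basis is the post-extraction spike comparison described by Matuszewski and colleagues; the following sketch illustrates that calculation with hypothetical peak areas (this is one common approach, not a criterion defined by the article or the cited guidelines):

```python
# Illustrative sketch of matrix-effect quantification by post-extraction spiking:
# matrix effect compares the peak area of analyte spiked into extracted blank
# matrix (B) with the same amount in neat solvent (A); recovery compares
# pre-extraction spikes (C) with B. All peak areas below are hypothetical.

def matrix_effect(area_neat: float, area_post_spike: float) -> float:
    """ME (%) = B / A * 100; values below 100 % indicate ion suppression."""
    return area_post_spike / area_neat * 100.0

def recovery(area_post_spike: float, area_pre_spike: float) -> float:
    """RE (%) = C / B * 100, the efficiency of the extraction step itself."""
    return area_pre_spike / area_post_spike * 100.0

# Hypothetical peak areas for a serum-based method
A, B, C = 100_000, 72_000, 65_000
print(f"Matrix effect: {matrix_effect(A, B):.0f} %")   # 72 %, i.e. ~28 % suppression
print(f"Recovery:      {recovery(B, C):.0f} %")        # ~90 %
```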

It should be noted that general documents on quality assurance in laboratory medicine (in particular ISO 15189, the checklists of the College of American Pathologists [CAP], and the Guideline of the German Medical Association on Quality Assurance in Medical Laboratory Examinations [11]) are generic in nature and do not take into consideration the specifics of LC-MS-MS (such as in-source fragmentation of conjugated metabolites of target analytes).

Indeed, until very recently there was no document available that specifically addressed quality assurance in the routine application of LC-MS-MS in clinical diagnostic testing. The Clinical and Laboratory Standards Institute (CLSI) guideline C62-A (Liquid Chromatography-Mass Spectrometry Methods; Approved Guideline), released in October 2014 (12), has substantial novelty in this respect, addressing:
• Assessment of matrix effects
• Assessment of product ion crosstalk
• Impurity of calibrators
• Internal standard peak area assessment
• Retention time monitoring
• System suitability assessment
• Ion ratio monitoring
• Correlation of multiple instruments
• Calibration slope monitoring
• Blanks, double blanks
• Assessment of mass calibration and tuning

It addresses the most important potential quality issues that are inherent to LC-MS-MS. Appropriately, it distinguishes between an initial method evaluation process and continued post-implementation monitoring. The guideline declares best practice, but it has to be translated into individual laboratory settings and specific test procedures with their respective intended use. Because of limited resources, many laboratories will not be able to fully adhere to this guideline. Furthermore, it should be considered whether some potential issues are not yet adequately addressed in the present edition of CLSI C62-A (for example, assessment of peak shape in addition to retention time monitoring). It can be expected that CLSI C62-A will have an essential role in the context of accreditation procedures in the near future.
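
As an example of such continued post-implementation monitoring, the internal standard peak-area assessment listed above could be automated along the following lines; the flagging window is an assumption of this sketch, not a limit stated in CLSI C62-A:

```python
# Illustrative sketch of internal standard (IS) peak-area monitoring within a batch:
# injections whose IS area deviates strongly from the batch median are flagged,
# since this may indicate ion suppression, pipetting errors, or injection problems.
# The 0.5x-2x window relative to the median is an assumption of this example.

from statistics import median

def flag_is_outliers(is_areas, lower=0.5, upper=2.0):
    """Return the indices of injections whose IS peak area falls outside the window."""
    ref = median(is_areas)
    return [i for i, area in enumerate(is_areas)
            if not (lower * ref <= area <= upper * ref)]

batch = [98_000, 102_000, 95_000, 41_000, 99_500, 101_000]   # sample 3 is suspicious
print(flag_is_outliers(batch))   # [3]
```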

Outlook
It can be forecast that the “landscape” of clinical MS applications will change profoundly during the next few years: Laboratory-developed tests (also referred to as home-brew assays) will probably see decreasing use, with kit solutions becoming more and more widely adopted. These kits (including columns, solvents, standards, and software tools) come with basic validation data and a CE-IVD label for the EU market, and are often implemented on site by manufacturers’ specialists. In this way, MS will become increasingly common in laboratories with rather little general expertise in instrumental analysis. It seems likely as well that major clinical diagnostics companies will address MS, aiming to develop fully automated, closed MS-based analyzer systems that can also be integrated into laboratory automation systems. This foreseeable development holds important challenges for quality management in a technology that is, to emphasize this again, some orders of magnitude more complex than the technologies currently used in the clinical laboratory.

Conclusions
Translating the enormous potential of MS into meaningful, actionable, and safe test results in the specific setting of a clinical laboratory is a very substantial challenge. In this context, it seems essential to realize that reliability is not inherent to this technology but must be addressed carefully and questioned systematically. These approaches to quality assurance have to cope with profound changes in the settings of application, from highly specialized laboratories to ordinary hospital laboratories. A generally accepted, specific “codex” for quality assurance of MS methods in the very particular environment of clinical laboratories is needed and is under development within the community of clinical MS users.

References
(1) M. Vogeser and C. Seger, Clin. Biochem. 41, 649-662 (2008).
(2) Clinical and Laboratory Standards Institute, C50-A, Mass Spectrometry in the Clinical Laboratory: General Principles and Guidance; Approved Guideline (CLSI, 2007). ISSN 0273-3099.
(3) J.W. Honour, Ann. Clin. Biochem. 48, 97-111 (2011).
(4) J.M.W. van den Ouweland and I.P. Kema, J. Chromatogr. B 883-884, 18-32 (2012).
(5) R.P. Grant, Clin. Chem. 59(6), 871-873 (2013).
(6) M. Vogeser and C. Seger, Clin. Chem. 56(8), 1234-1244 (2010).
(7) Guide to Achieving Reliable Quantitative LC-MS Measurements, M. Sargent, Ed. (RSC Analytical Methods Committee, 2013), www.rsc.org/images/.
(8) European Medicines Agency (EMA), Guideline on Bioanalytical Method Validation, 2011, www.ema.europa.eu.
(9) U.S. Department of Health and Human Services, Food and Drug Administration (FDA), Center for Drug Evaluation and Research, Guidance for Industry: Bioanalytical Method Validation, 2001, www.fda.gov.
(10) S. Baldelli, D. Cattaneo, S. Fucile, and E. Clementi, Ther. Drug Monit. 36(6), 739-745 (2014).
(11) German Medical Association, Revision of the “Guideline of the German Medical Association on Quality Assurance in Medical Laboratory Examinations - Rili-BAEK,” J. Lab. Med. 39, 26-69 (2015).
(12) Clinical and Laboratory Standards Institute, C62-A, Liquid Chromatography-Mass Spectrometry Methods; Approved Guideline (CLSI, 2014). ISSN 1558-6502.

Michael Vogeser is a professor of laboratory medicine and a senior physician at the Institute of Laboratory Medicine, Hospital of the University of Munich, Germany. 
Direct correspondence to: michael.vogeser@med.uni-muenchen.de