Where Are We Now with USP <1058>?

Spectroscopy, November 2010

It is just over two years since USP <1058> on Analytical Instrument Qualification (AIQ) became effective. To coincide with the American Association of Pharmaceutical Scientists (AAPS) Meeting in New Orleans this month, where a roundtable discussion on the subject will be held, here are my views.

United States Pharmacopoeia (USP) general chapter 1058 (<1058>) on Analytical Instrument Qualification (AIQ) became effective in August 2008 (1). The general chapter started life at an AAPS meeting on the validation of analytical instruments in 2003. The attendees soon decided that the title of the meeting was wrong and that qualification was a better term to use with analytical instruments than validation. In 2004, the AAPS published the meeting report on Analytical Instrument Qualification (2), which was used as the basis of the draft <1058> general chapter. The draft chapter was circulated for industry comment and, following incorporation of updates, became final and effective in August 2008. Over the past five years, I have written and commented on the content of this general chapter in various installments of this column for Spectroscopy (3–8) and also in my "Questions of Quality" column for LCGC Europe (9–11).

Like the curate's egg (12), USP <1058> is good in parts.

The aim of this column is to summarize my views on the content of <1058>, to highlight areas where more work is needed to strengthen the AIQ chapter, and to ensure that spectroscopists and analytical scientists working in GMP laboratories are aware of its strengths and weaknesses. So we will start from the top and work our way through the various sections of <1058> on Analytical Instrument Qualification (1). As I mentioned in my last column (3), the concepts of AIQ (modular and holistic approaches, and the phases of qualification) are not new. AIQ, or rather equipment qualification, started in the early 1990s (13–18).

Pharmacopoeias and Harmonization

In a time of harmonization of the rules and regulations in the pharmaceutical industry through the International Conference on Harmonisation (ICH), there are major efforts to have the same requirements globally for development and manufacture of drug products. The harmonization effort has extended to the pharmacopoeias. However, there is no equivalent chapter to USP <1058> in the European Pharmacopoeia (EP) or the Japanese Pharmacopoeia (JP). There have been many efforts to harmonize approaches for the same subject (for example, spectroscopy and chromatography) in the general chapters in the USP, EP, and JP, but this has not been applied to an overall approach to AIQ across the three regions.

The USP now has a unique topic that is contained in no other pharmacopoeia and this can be looked at from two perspectives:

  • The USP and AAPS have been farsighted and implemented a much needed general chapter on the subject as an umbrella over the other instrument-specific general chapters.

  • It is a pain for companies to implement, as compliance with harmonized regulations is always cheaper, more efficient, and more effective than compliance with a single pharmacopoeia; the impact depends on whether you are based in, or want to sell products in, the U.S. A non-U.S. company that has not previously sold products in the U.S. now needs to comply with a new standard if it wants to market products there. This is not a satisfactory approach.

Harmonization simplifies compliance through standardization of approach, and this principle should be applied to analytical instrument qualification as well as to the instrument-specific general chapters in the major pharmacopoeias.

Chapter Numbering in the USP

Now a potential problem comes with the numbering of the general chapter: <1058>. Is this significant? You bet, especially for the really sad people (like me) who read the General Notices and Requirements (19) section at the front of the USP (this is the bit that most people skip on the way to the monographs and general chapters). Contained in this section, under the heading of general chapters, is the significance of the numbering scheme of the general chapters in the USP. Chapters numbered from <1> to <999> are requirements (that is, do it or else) and those numbered <1000> to <1999> are informational (important, but not carrying as much weight as a requirement). So <1058> is an informational and not a requirement general chapter.
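To make the numbering rule concrete, here is a minimal sketch (my own illustration, not part of the USP text) of how the two ranges map to chapter status:

```python
def chapter_status(chapter_number: int) -> str:
    """Classify a USP general chapter number per the General Notices:
    <1> to <999> are requirements; <1000> to <1999> are informational."""
    if 1 <= chapter_number <= 999:
        return "requirement"
    if 1000 <= chapter_number <= 1999:
        return "informational"
    return "outside the ranges discussed here"

print(chapter_status(621))   # requirement (chromatography)
print(chapter_status(1058))  # informational (AIQ)
```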

However, <1058> is implicitly or indirectly linked to requirements general chapters that deal with analytical instruments and systems (Groups B and C, respectively), such as <41> weights and balances, <621> chromatography, or <851> spectrophotometry, to name just a few applicable instrument-specific general chapters. The problem is that when you read <1058> alone, there is no explicit statement of, or linkage to, the other general chapters to which it applies, and therefore you or your company has to make the connection.

FDA inspectors are used to inspecting analytical systems that might have been validated using the principles in GAMP 4 (20), especially in large pharmaceutical companies. Now, as <1058> is an informational general chapter, what do FDA inspectors think of a more efficient risk-based approach?

Definition of Qualification

The terminology used when we "qualify equipment" or "validate systems" must be developed further, because we use installation qualification (IQ), operational qualification (OQ), and performance qualification (PQ) for both but mean different things depending upon the context in which we use them (6). This is a major cause for confusion between vendors and users (7).

Although the individual terms design qualification (DQ), IQ, OQ, and PQ are defined adequately for instrument qualification, there is no definition of the term qualification in USP <1058> or, indeed, in any published FDA regulation or guidance document. This is an area in which <1058> needs to be improved: we use the term qualification, but no adequate definition is available.

Therefore, for the overall process of qualifying either an instrument or a system, we need a definition of qualification. The following is the one I have taken from ICH Q7 (GMP for active pharmaceutical ingredients) and modified accordingly (7).

Qualification: The action of proving and documenting that a laboratory instrument or system is properly installed, works correctly, and actually leads to the expected results (7). The process is subdivided into four phases (DQ, IQ, OQ, and PQ), which are described adequately in <1058>.

Data Quality Triangle Critique

The USP <1058> data quality triangle has four layers: AIQ, method validation, system suitability tests, and quality control samples. I have reviewed this in depth in my "Questions of Quality" column looking at why system suitability tests are not a substitute for analytical instrument qualification (9,10) and the triangle itself briefly in this column (5). There are two problems to consider for improvement in this area, and these are shown in Figure 1.

Figure 1: The USP data quality triangle highlighting two problems.

Problem 1: The true role of the vendor is missing. This is shown as the four boxes underneath the data quality triangle. The initial specification, detailed design, and manufacture of the instrument are not mentioned in <1058>. This is a crucial omission. The reason it matters is that design qualification states that a user can make use of the vendor's specification instead of writing their own, which leads into problem 2. However, problem 1 is that vendors are typically engineering-based and certified to ISO 9001, not GMP, so specifications written to meet ISO standards might not be appropriate for GMP regulations. Also, a vendor cannot anticipate all the requirements of every user at the time of instrument specification and design, which typically occurs some time before a user considers purchasing the instrument.

Problem 2: Users are responsible for DQ. USP <1058> places great emphasis on the design qualification stage being the responsibility of the vendor. To quote from <1058>: "Design qualification (DQ) is the documented collection of activities that define the functional and operational specifications of the instrument, based upon the intended purpose. Design qualification (DQ) is most suitably performed by the instrument developer or manufacturer." Absolute rubbish! Only users can define their instrument needs and how they will use an instrument (intended purpose), which might be different from that envisaged by the designer. It is also out of compliance with the GMP regulations (§211.63; 21), which require that equipment is fit for its intended purpose: leaving DQ to a vendor will not define YOUR intended purpose, merely the vendor's. In my view, this GMP regulation will take precedence over an informational general chapter of the USP (unlike the situation with a requirements general chapter, in which the position is reversed). Furthermore, as discussed in my last column, a vendor specification might be meaningless without information on how it was derived or tested, for example, the centrifuge rotor speed that was measured without a rotor (3).

In contrast, DQ from the computerized system validation perspective is an assessment of how closely the selected instrument or system matches your user requirements. Looking at it from this perspective determines how wrong the USP <1058> approach is.

Classification into Three Groups: Implicit Risk Assessment

The best part of USP <1058> is the classification of laboratory items into one of three groups: A, B, or C, depending upon intended use. In doing this, you have carried out an implicit risk assessment by classifying analytical instrumentation and systems into one of these three groups. This is a very good and pragmatic approach, and it is certainly far easier and far simpler than the GAMP good practice guide on Validation of Laboratory Computerised Systems (22) that I reviewed in an earlier column (8). However, the mechanism of the risk assessment, that is, how each company classifies a specific item of apparatus, instrument, or system into these groups, is not covered; it is left up to each organization to define its own classification criteria.

The key point in the classification process is to define the function or functions of the item, how it is used, and how much calibration is necessary once the item is in operational use. I have discussed the example of a sonic bath that could be in group A, B, or even C depending on how it was used (5). Therefore, any risk assessment based on USP <1058> needs to define criteria for the following (a rough classification sketch follows this list):

  • Any items that have a non-GMP use

  • Group A apparatus

  • Group B instrument only

  • Group B instruments with inbuilt calculations

  • Group B instruments with the ability to write user defined programs

  • Group C systems

  • Standalone software including LIMS, spectrometry systems software, and spreadsheets
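By way of illustration only, here is a minimal sketch of how a laboratory might encode its own classification criteria. The attribute names and the decision order are my assumptions, not part of <1058>, and each organization still has to define and justify its own criteria.

```python
from dataclasses import dataclass

@dataclass
class LabItem:
    name: str
    gmp_use: bool                    # used for regulated (GMP) work?
    needs_calibration: bool          # measurement function requiring calibration or qualification?
    has_firmware_calculations: bool  # in-built calculations in the firmware?
    supports_user_programs: bool     # user-defined programs or macros possible?
    has_configurable_software: bool  # configurable or customizable data system software?

def classify(item: LabItem) -> str:
    """Illustrative <1058>-style grouping; the criteria are assumptions, not USP text."""
    if not item.gmp_use:
        return "non-GMP: outside the scope of <1058>"
    if item.has_configurable_software:
        return "Group C: qualify the instrument and validate the software"
    if item.needs_calibration or item.has_firmware_calculations or item.supports_user_programs:
        return "Group B: qualify/calibrate; control calculations and user-defined programs"
    return "Group A: standard apparatus, no independent qualification required"

# The sonic bath discussed above could land in different groups depending on
# how these attributes are set for its intended use in a given laboratory.
bath = LabItem("sonic bath", gmp_use=True, needs_calibration=False,
               has_firmware_calculations=False, supports_user_programs=False,
               has_configurable_software=False)
print(classify(bath))  # Group A under these assumptions
```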

Figure 2 illustrates two elements of risk associated with <1058>. The first is increasing business and regulatory risk as one goes from group A to group C items.

Figure 2: Risk assessment adapted from USP showing missing compliance areas.

The first element is that the higher the category, the higher the risk. For the most part, <1058> works for group A and B items; it is with a small portion of group B instruments, and with group C, that we run into the most trouble, which leads to the second element.

The second element is what is not stated in <1058>: what is missing or inadequately covered in the overall approach. These areas are highlighted in red in Figure 2. They are:

  • Non-GMP items (used for development and never for regulated work)

  • Group B instruments with embedded firmware calculations and the capability for user-defined programs

  • Group C systems, where the software can be configured or customized as well as standalone software from spreadsheets to LIMS

The risk assessment shown in Figure 2 is an integrated approach that incorporates both analytical instrument qualification and computerized system validation (CSV). I would strongly argue that an integrated approach to laboratory instrumentation, systems, and software is essential, as it is rare these days for an instrument to come without software of some type. We will discuss the <1058> software issues next and the integrated AIQ–CSV approach at the end of this column.

Software Qualification and Validation

Problem 3: Poor software validation guidance. How USP <1058> describes software qualification and validation is the poorest part of this general chapter. Again, like the curate's egg, it is good in parts.

The good part is the approach to handling embedded software in Group B instruments, in which the firmware is implicitly or indirectly validated during the instrument qualification. This is a simple and practical approach that was consistent with the GAMP guide up until version 4 but differs now because category 2 (firmware) was dropped in version 5 of the guide (23). I commented on this when I reviewed the GAMP software categories in this column (24,25), and I think that the <1058> approach is far better than that advocated in GAMP 5.

However, for Group B instruments there are omissions. Users need to be aware of the FDA GMP requirements in 21 CFR 211.68(b), which require (21):

  • Input to and output from the computer or related system of formulas or other records or data shall be checked for accuracy. The degree and frequency of input/output verification shall be based upon the complexity and reliability of the computer or related system.

  • A backup file of data entered into the computer or related system will be maintained except where certain data, such as calculations performed in connection with laboratory analysis, are eliminated by computerization or other automated processes. In such instances a written record of the program shall be maintained along with the appropriate validation data.

This is not a quotation of the whole section of this regulation but a selection of the key points as they pertain to laboratory instruments and systems. So <1058> omits mentioning that users must validate calculations and document the formulae used for them. Documentation of the equations might be adequate within the user manuals for systems, provided that the vendor knows what they are and the software reflects this. From personal experience, there are a number of instances in which equations are defined correctly in the manual but the implementation of the equations, or the setting of the software controlling the instrument, is incorrect (26,27). This is why a user needs to validate any equations that he or she will use, confirm that they work within the operating ranges used by the laboratory, and qualify the instrument to ensure that both function as expected.
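As a minimal sketch of what validating a calculation could look like in practice (the dilution formula, tolerance, and test values below are hypothetical illustrations, not taken from <1058> or from any vendor's manual), a laboratory might compare the result reported by the software with an independently computed value across the operating range it actually uses:

```python
# Hypothetical check of a software-implemented dilution calculation against an
# independent reference calculation over the laboratory's operating range.

def reference_concentration(stock_conc, aliquot_vol, final_vol):
    """Independent calculation of the diluted concentration: C2 = C1 * V1 / V2."""
    return stock_conc * aliquot_vol / final_vol

def check_calculation(software_result, stock_conc, aliquot_vol, final_vol, tol=1e-6):
    """Return True if the software's result agrees with the reference within a relative tolerance."""
    expected = reference_concentration(stock_conc, aliquot_vol, final_vol)
    return abs(software_result - expected) <= tol * max(abs(expected), 1.0)

# Test points spanning the operating range used in the laboratory (hypothetical values).
test_points = [
    (100.0, 1.0, 10.0),   # stock 100 mg/L, 1 mL into 10 mL -> 10 mg/L
    (50.0, 2.5, 25.0),    # -> 5 mg/L
    (10.0, 0.5, 100.0),   # -> 0.05 mg/L
]

for stock, v1, v2 in test_points:
    # Stand-in for the value the instrument's software actually reports.
    software_value = reference_concentration(stock, v1, v2)
    assert check_calculation(software_value, stock, v1, v2), (stock, v1, v2)
print("All calculation checks passed")
```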

Some Group B instruments, such as dispenser–dilutors, have the ability for users to define routines and programs within the firmware and save them for later use. This is not mentioned in <1058>. One discussion point could be that I would classify these instruments in group B while perhaps others would classify them in group C. This is a fair point, but classifying them as group C systems would typically require more validation work. Looked at from my perspective (that of the lazy sloth), if the item were group B, then a dispenser–dilutor would be qualified for the operating ranges specified by the laboratory. When a user-defined program is written, it can be adequately controlled by an SOP covering the specification, testing, saving, and change control of such routines. I would contend that this is a much simpler and easier approach than a higher classification: it qualifies the instrument and adequately controls any user-defined programs written.

Software Validation for Group C Systems and Standalone Software

This is the weakest area of the whole of <1058>. The responsibility for software validation is dumped on the vendor: "The manufacturer should perform DQ, validate this software, and provide users with a summary of validation. At the user site, holistic qualification, which involves the entire instrument and software system, is more efficient than modular validation of the software alone. Thus, the user qualifies the instrument control, data acquisition, and processing software by qualifying the instrument according to the AIQ process" (1). This is great in theory but poor in practice. How does a vendor know how you will use the system in your laboratory? They don't, is the answer; just looking at Figure 1 illustrates the point. Will a vendor be present to help you defend your approach to an FDA inspector? I think you know the answer to this one!

Furthermore, <1058> ignores the fact that many software applications can be configured and even customized by the users (24,25). Wait, I hear you say: USP <1058> states that "An authoritative guide for validating stand-alone software, such as LIMS, is available" and cites an FDA Guidance for Industry called General Principles of Software Validation (28). Yes it does, but look at the main FDA division that wrote this document: the Center for Devices and Radiological Health (CDRH). Here, software is part of a medical device and, apart from run-time configuration (28), the business process automated by the software cannot be changed. This key fact was overlooked when <1058> was written. However, if you follow the advice above, then you will not have an adequate validation, especially if you have software that can be configured or customized with a vendor scripting language, or if you can write macros to manipulate acquired data. It is also interesting that section 3.1.2 of this FDA guidance mentions the terms IQ, OQ, and PQ but states categorically that they will not be used in the document (28).

USP <1058> puts the main emphasis for validation onto the software developer or vendor, who will be certified to ISO 9001 and developing the software according to those processes, not working to GMP. However, <1058> ignores the reality that it is the system owner, not the vendor, who is responsible for validation and who will be inspected accordingly.

The Future: Integrated AIQ and CSV

The way forward in this debate must be to integrate analytical instrument qualification (AIQ) and computerized system validation (CSV) effectively (6,7). <1058> has made a start along this road with its approach to firmware used on instruments: implicit validation of the functions while qualifying the instrument. However, there are the omissions regarding in-built calculations for instruments and user-defined programs discussed earlier. In my view, <1058> lost its way when it comes to data system software and standalone software, with a blind-faith reliance on the vendor for software validation instead of the system owner.

There is also a divergence between USP <1058>, which is looking for integration, and the GAMP good practice guide, which ignores the USP and anything else that is not in GAMP 4 or 5. So GAMP and the USP have gone in different directions while both using risk-based classification of systems (seven categories versus three groups, respectively). Interestingly, in a recent publication (29) Horatio Pappa of the USP is quoted as saying, "When we were developing this chapter, our committee was concerned with the GAMP guidance. We made it easy to perform but did not contradict what is in the GAMP guidance." So the USP considered a commercial publication when it was drafting the general chapter. In contrast, there is little evidence of GAMP looking outside of its own navel when writing guidance; this is the Henry Ford approach to qualification: you can have any guidance you want as long as it is written by GAMP.

To address this problem, Chris Burgess and I are writing a book on integrated analytical instrument qualification and computerized system validation. There are two drivers for both AIQ and CSV: compliance and business. The compliance driver is meeting regulatory requirements to be able to manufacture and sell pharmaceutical products. The business driver is to qualify and validate an instrument or system faster and cheaper, and it is heavily risk-based. The aim should be to do a single job rather than two separate ones and to satisfy both drivers in a single process. At the heart of the integrated approach is a flexible and scalable model that covers any item used in a laboratory and all possible life cycle phases. In essence, this is a two-dimensional accordion or squeeze box. The aim is to tailor the model to an individual item rather than force all items to fit an inflexible model.

Lastly, with the emphasis on pharmaceutical quality systems (PQS) outlined in ICH Q10 (30), we need to see where <1058> fits into the whole scope of a PQS — an explicit reference to Q10 would be a good start.

Conclusions

This column provides a more detailed discussion of a 10-min presentation that I will give at the AAPS meeting in New Orleans at an AIQ roundtable this month. This is the first time that AAPS has reviewed the topic since the general chapter was published. I want to highlight some of the issues that readers need to be aware of when reading USP <1058> on Analytical Instrument Qualification and where it needs to be improved to benefit analytical scientists working in GMP laboratories.

R.D. McDowall is principal of McDowall Consulting and director of R.D. McDowall Limited, and "Questions of Quality" column editor for LCGC Europe, Spectroscopy's sister magazine. Address correspondence to him at 73 Murray Avenue, Bromley, Kent, BR1 3DJ, UK.

References

(1) United States Pharmacopeia, <1058> Analytical Instrument Qualification.

(2) Analytical Instrument Qualification, AAPS white paper, 2004.

(3) R.D. McDowall, Spectroscopy 25(9), 22–31 (2010).

(4) R.D. McDowall, Spectroscopy 24(12), 67–72 (2009).

(5) R.D. McDowall, Spectroscopy 24(4), 20–27 (2009).

(6) R.D. McDowall, Spectroscopy 21(12), 90–110 (2006).

(7) R.D. McDowall, Spectroscopy 21(11), 18–23 (2006).

(8) R.D. McDowall, Spectroscopy 21(4), 14–31 (2006).

(9) R.D. McDowall, LCGC Eur. 23(7), 368–365 (2010).

(10) R.D. McDowall, LCGC Eur. in press.

(11) R.D. McDowall, LCGC Eur. 22(7), (2009).

(12) Curate's egg: http://www.en.wikipedia.org/wiki/Curate's_egg

(13) W.B. Furman, T.P. Layloff, and R. Tetzlaff, J. AOAC Int. 77, 1314–1317 (1994).

(14) M. Freeman, M. Lang, D. Morrison, and R.P. Munden, Pharm. Technol. Eur. 10(11), 45–48 (1995).

(15) C. Burgess, D.G. Jones, and R.D. McDowall, Analyst 123, 1879–1886 (1998).

(16) P. Bedson and M. Sargent, "The Development and Application of Guidance on Equipment Qualification of Analytical Instruments," Accreditation and Quality Assurance 1, 265–274 (1996).

(17) Guidance on Equipment Qualification of Analytical Instruments: High Performance Liquid Chromatography (HPLC), June 1998, LGC/VAM/1998/026.2.

(18) Guidance on Equipment Qualification of Analytical Instruments: UV-Visible Spectro(photo)meters (UV-Vis) Version 1.0 – September 2000, LGC/VAM/2000/079.

(19) United States Pharmacopeia, General Notices and Requirements, USP Convention, Inc., Rockville, Maryland.

(20) Good Automated Manufacturing Practice (GAMP) guidelines version 4, GAMP Forum, International Society of Pharmaceutical Engineers, Tampa, Florida, 2001.

(21) Current good manufacturing practice regulations for finished pharmaceutical products, 21 CFR 211.

(22) GAMP Good Practice Guide Validation of Laboratory Computerised Systems, GAMP Forum, International Society of Pharmaceutical Engineers, Tampa Florida, 2005.

(23) Good Automated Manufacturing Practice (GAMP) guidelines version 5, GAMP Forum, International Society of Pharmaceutical Engineers, Tampa, Florida, 2008.

(24) R.D. McDowall, Spectroscopy 25(4), 22–31 (2010).

(25) R.D. McDowall, Spectroscopy 24(6), 22 (2009).

(26) C. Burgess, D.G. Jones, and R.D. McDowall, Analyst 123, 1879–1886 (1998).

(27) J. Moore, J. Solanki, and R.D. McDowall, Laboratory Automation and Information Management 31, 43–46 (1995).

(28) FDA Guidance for Industry, General Principles of Software Validation, 2002.

(29) L. Valigra, Qualifying Analytical Instruments: General Chapter <1058> clarifies terminology, classifies instruments, Pharmaceutical Formulation and Quality June / July 2010 (available at www.pharmaquality.com).

(30) ICH Q10 Pharmaceutical Quality Systems, step 4, 2008 (www.ich.org).