Does CSA Mean “Complete Stupidity Assured?”

Publication
Article
Spectroscopy, September 2021
Volume 36
Issue 9
Pages: 15–22, 58

The U.S. Food and Drug Administration (FDA) has a new approach to computerized system validation (CSV) called computer system assurance (CSA). Without a draft guidance issued, are we entering an era of regulation by presentation and publication? As a result, does CSA risk becoming “complete stupidity assured?”

Computerized system validation (CSV) has an uninspiring reputation for being a slow, no-value-added activity that only wastes time and delays the implementation of new software. Is that an accurate portrayal?

As somebody who has been involved with CSV for over 35 years, I would say it depends. Here are two CSV examples: one in which using CSV is sublime and another in which using CSV is ridiculous:

  • The sublime example: Map and improve your processes from which your intended use requirements are written. Then, apply effective risk management and scientifically sound logic to leverage supplier development and application configuration to focus testing. The rationale for why you don’t test one function is just as important as why you should test another. This approach is an example of managing the cost of compliance versus the cost of noncompliance, as discussed in a recent “Focus on Quality” column (1).
  • The ridiculous example: An organization has a one-size-fits-all approach to CSV. You know that you are wasting effort when the corporate CSV standard operating procedure (SOP) states that you must write three specification documents (a user requirements specification, a functional specification, and a design specification), but the intended use for a UV spectrometer is measuring the absorbance of samples at one or two wavelengths. A detailed risk assessment adds fuel to the no-added-value fire, followed by a demand for detailed test instructions accompanied by copious screenshots to demonstrate that each step of each test has been executed.

A one-size-fits-all validation approach lacks the flexibility to tailor each validation to its intended use and condemns any regulated laboratory to mountains of paper. You can see why CSV gets a bad reputation: instead of common sense being applied to generate business benefit from the system, CSV becomes an inflexible exercise that, coupled with the ultraconservative nature of the pharmaceutical industry, is consigned to being an old-fashioned, outdated process. You should use an accurate assessment of a supplier's development and testing, the application software category, and the impact of the records created by the system to focus the CSV effort where it is most needed. Flexibility is the name of the game.

CDRH: The Case for Quality

Approximately 10 years ago, the FDA’s Center for Devices and Radiological Health (CDRH) started the “Case for Quality” initiative that was aimed at reviewing the problems medical device companies had with regulatory compliance. By 2015, one of the areas identified was CSV for the following reasons:

  • Instead of demonstrating a system's intended use, CSV was aimed at producing documentation to keep auditors and inspectors quiet.
  • CSV was synonymous with delay, and as a result, it ended up being a necessary evil instead of a value-added activity.
  • Risk assessments were complex, burdensome, and expensive.
  • 80% of test problems were due to errors by a tester or in the test instructions.
  • Industry used regulatory burden as an excuse for not using technological advances to improve CSV.

As a result, the FDA set up a joint CDRH and industry team to develop a new validation approach for computerized systems used in the medical device industry, called computer system assurance (CSA), with the aim of following the least burdensome approach.

Least Burdensome Approach

A key CDRH guidance for industry is the General Principles of Software Validation, issued in 2002. In section 2.3, it states:

We believe we should consider the least burdensome approach in all areas of medical device regulation. This guidance reflects our careful review of the relevant scientific and legal requirements and what we believe is the least burdensome way for you to comply with those requirements (2).

This section concludes with an invitation: if a company knows of a better way to validate software, talk to the Agency. The guidance goes further in section 4.8 on validation coverage, which is quoted verbatim:

Validation coverage should be based on the software’s complexity and safety risk, not on firm size or resource constraints. The selection of validation activities, tasks, and work items should be commensurate with the complexity of the software design and the risk associated with the use of the software for the specified intended use. For lower risk devices, only baseline validation activities may be conducted. As the risk increases additional validation activities should be added to cover the additional risk. Validation documentation should be sufficient to demonstrate that all software validation plans and procedures have been completed successfully (2).

The key takeaway here is to focus on intended use, risk, and the nature of the software used. The 20-year-old FDA guidance sends a clear message that, paraphrased, says it is important not to kill yourself with compliance. For too long, the pharmaceutical industry has failed to evolve from a risk-averse to a risk-managed industry.

FDA Centers: CDRH and CDER

The FDA is divided into several centers. There are two centers that are discussed at length in this column:

  • Center for Drug Evaluation and Research (CDER): This center is responsible for the pharmaceutical industry. The applicable Good Manufacturing Practice (GMP) regulations are 21 CFR 211, which were first issued in 1978 with a minor update in 2008 (3,4). Because of the age of these regulations, there is no explicit requirement to perform validation of computerized systems.
  • Center for Devices and Radiological Health (CDRH): This center is responsible for medical devices, including scanners. The GMP regulations CDRH uses are 21 CFR 820 (5), which are based on a 1990s version of ISO 13485 (6). These GMP regulations have specific requirements to validate software used in a medical device (21 CFR 820.30) and software used for process control and the quality management system (QMS) (21 CFR 820.70[i]) (5).

Software used to control or operate a medical device is already validated when you purchase it. This contrasts radically with software used in the pharmaceutical industry, which is not validated when you buy it (although many suppliers would like you to believe it is); the laboratory must undertake CSV to demonstrate that it is fit for intended use, based on business needs and the process being automated. Bear in mind that CSA is aimed primarily at medical devices and not the pharmaceutical industry.


CSA Principles and Pilot Projects

The joint industry–CDRH team developed the principles of CSA as:

  • Focus on the intended use of an application, usually a medical device
  • Shift from documentation to critical thinking
  • Allow undocumented testing
  • Use only trusted suppliers
  • Ensure the evidence adds value to the testing process, which is to reduce the number of software bugs as well as demonstrate intended use
  • Use automation to help the process, such as requirements management and testing

Pilot projects were used to verify and refine the CSA approach. Since 2017, there have been a number of presentations and publications from both FDA staff members and members of the various pilot projects. So far, so good.

However, despite a draft CSA guidance for industry having been on CDRH's list of documents to be issued since 2018, nothing has appeared from the Agency, which is a problem.

Waiting for Godot?

This is the regulatory equivalent of “Waiting for Godot,” in which the two main characters of the play are on stage for over two hours spouting rubbish and Godot never turns up.

It is interesting to contrast the differences in approach to regulations between the USA and Europe. Since 2011, European Union (EU) GMP has updated eight of the nine chapters of Part I, as well as several Annexes, such as Annexes 11 and 15. Indeed, Chapter 4 and Annex 11 are being revised again to reinforce data integrity principles. In contrast, the U.S. GMP (21 CFR 211), published in 1978, has only been updated once, in 2008, with the addition of one clause impacting manufacturing: 211.68(c) (4).

It is my opinion that the FDA, specifically CDRH, is inept and unprofessional in failing to issue a draft guidance on CSA.

Instead of updating regulations, the FDA issues advice either as a Level 1 guidance for industry document or as a Level 2 question-and-answer section published on the FDA website. Let us focus on the Level 1 guidances, which are usually issued as a draft for industry comment and, after prolonged reflection, as a final version. Relatively fast-track examples of guidance issuance are:

  • The Part 11 Scope and Application guidance was issued in February 2003 and finalized in August the same year (7).
  • The Data Integrity and Compliance with cGMP guidance moved more slowly, progressing from a draft in April 2016 to a final version in December 2018 (8).

However, the usual pace of guidance issuance within the FDA is less stellar and more glacial:

  • The Out of Specification guidance draft was completed in September 1998, but the final version did not arrive until October 2006 (9).
  • The GMP method validation draft guidance was issued in 2000; the final guidance was issued in July 2015, a mere 15 years after the draft (10).

It could be argued that guidance documents are regulation by the back door, but all have the phrase “contains nonbinding recommendations” emblazoned on each page, which means the content could be difficult to enforce.

The Genie’s Out of the Bottle

In the absence of a draft guidance for industry, there are presentations from CDRH officials and industry members from the pilot programs, as well as published articles, white papers, and industry guidances. The situation is that we are putting the industry interpretation cart before the regulatory horse. Typically, the reverse is true: the FDA issues a draft guidance for industry comment, presentations and publications follow, and industry implements after citations in Warning Letters. Not this time, as the genie is already out of the bottle. Houston, we have a bigger problem.

Regulation by Presentation and Publication

Regulations and Level 1 regulatory guidance documents must go through due process. This process involves issuing a draft for industry comment, revision where appropriate, followed by the final version. For regulations, the final version published in the Federal Register contains a precis of industry comments together with the review and response by the Agency, which either rejects or acts upon them. You can see this for 21 CFR 11 in the March 1997 issue of the Federal Register: the regulation is three pages, and the preamble comments are 35 pages (11).

However, with CSA, the situation is different. The FDA lists the guidance documents that it will issue each year, and one for CSA has been on the list for at least three years. Covid-19 is not an excuse for inaction by the Agency: the guidance was promised before the pandemic, and working from home should still enable a guidance to be issued. Instead, presentations and publications outlining how to undertake CSA are being thrown out like garbage. However, it is important to note that:

  • There is no due process being followed.
  • This is regulation by presentation and publication.
  • FDA has abrogated its regulatory role.

Without a draft guidance for industry, we cannot see whether the FDA’s aims for CSA are being filtered, enhanced, or subverted. In other words, there is a concern over the regulatory integrity of the perspectives being presented. As a regulated industry, we cannot change direction based solely on rumor: we need a draft guidance. But...


Do We Need CSA?

With all of the issues surrounding CSA, an important question emerges: Do we need it at all? Have we got regulations and guidance for the pharmaceutical industry in place now that give us the flexibility to do what is purported to be in the CSA guidance? In my view, the answer is yes, and I’ll explain in the following sections. Of course, this is my interpretation, but because there is no guidance for industry, there may be gaps in my discussion.


Regulatory Flexibility

Below are two quotes from “General Principles of Software Validation”. Section 2.1 simply states:

This document is based on generally recognized software validation principles and, therefore, can be applied to any software (2).


The guidance scope outlined in section 2 notes:

This guidance recommends an integration of software life cycle management and risk management activities. Based on the intended use and the safety risk associated with the software to be developed, the software developer should determine the specific approach, the combination of techniques to be used, and the level of effort to be applied (2).

A flexible risk-based CSV approach for all software is mirrored in EU GMP Annexes 11 and 15. Clause 1 of Annex 11 focuses on risk management:

Risk management should be applied throughout the lifecycle of the computerized system taking into account patient safety, data integrity and product quality. As part of a risk management system, decisions on the extent of validation and data integrity controls should be based on a justified and documented risk assessment of the computerized system (12).

The regulation explicitly states that the extent of validation and data integrity controls should be based on the risk posed by a system to a product and patient. A product for a laboratory can be interpreted as the data for a submission or the medicinal product for patients. Clause 1 implicitly means that a one-size-fits-all validation approach is inappropriate. It is important to fit the validation to the system, not the other way around.

A system-level risk assessment for analytical instruments and systems was published in this column in 2013 by Chris Burgess and myself (13). This approach can be used to classify each instrument or system into the updated USP <1058> Group B and C subtypes (14) to determine the extent of qualification and validation required. Next, we have Annex 15 on Qualification and Validation, clause 2.5:

Qualification documents may be combined together, where appropriate, e.g. installation qualification (IQ) and operational qualification (OQ) (15).


This makes sense: when an engineer installs and qualifies your next spectrometer, you could have a single document that combines the installation qualification (IQ) and operational qualification (OQ) activities. A single document for pre-execution review and post-execution approval is appealing. This approach is mirrored in USP <1058>, which allows, where appropriate, qualification activities and associated documentation to be combined (for example, IQ and OQ) (14). But why stop there?

An Integrated Validation Document

Remember the UV spectrometer discussed in the introduction, where all the system did was measure absorbance at a few wavelengths? Why not take Annex 15 clause 2.5 to its logical conclusion and combine all validation elements into one document? The document should include:

  • The intended use requirements only (that is, the operating range, accuracy, linearity, expected performance, and compliance)
  • Requirements traceability within the document, which is easy to establish with test references and links to SOPs for system use
  • Either a cross-reference to the supplier’s installation and commissioning documents or, if these are not available, the same elements written within the integrated document
  • Test preparation and any documented assumptions, exclusions, and limitations of the test approach
  • Tester identification
  • Test cases to demonstrate that the system meets its intended use requirements
  • Test execution notes to record and resolve any testing problems
  • A validation summary report and operational release statement

This sounds like a long list, but with the focus on intended use requirements only, this can be a relatively short document. Control of the process would be via an SOP or validation master plan. I have practiced and published such an approach for systems based on GAMP software category 3 and simple category 4 even if the data generated were used in batch release or submitted to regulatory agencies (16). The key is documented risk management as required by clause 1 of Annex 11 (12).
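
To illustrate how simple in-document requirements traceability can be, here is a minimal sketch that maps each intended use requirement to the test case or SOP that covers it and flags anything left untraced. The requirement and test identifiers are hypothetical and shown only as an example of the principle.

    # Hypothetical requirement and test identifiers; the mapping would sit inside the
    # single integrated validation document described above.

    intended_use_requirements = {
        "REQ-01": "Measure absorbance at 260 nm and 280 nm",
        "REQ-02": "Results within +/- 0.5% of a reference standard",
        "REQ-03": "Unique user accounts with individual passwords",
    }

    # Each requirement points to the test case(s) or SOP that demonstrates it.
    traceability = {
        "REQ-01": ["TC-01"],
        "REQ-02": ["TC-02"],
        "REQ-03": ["SOP-123 (user account management)"],
    }

    untraced = [req for req in intended_use_requirements if not traceability.get(req)]
    if untraced:
        print("Requirements with no test or SOP reference:", untraced)
    else:
        print("All intended use requirements are traced")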

Do I Need a Risk Assessment?

I know what you are thinking: General Principles of Software Validation (2) recommends risk assessments to focus work, and EU GMP Annex 11 says that risk management should be applied throughout the lifecycle of the computerized system (12). However, this does not mean you must always perform the qualitative failure mode effect analysis (FMEA) described in GAMP 5 (17).

Let me give you an example of the stupidity of a risk assessment for remediation of a data integrity audit finding: all users sharing the same user account. Sharp intake of breath: no attribution of action! What happened next? The laboratory assessed the risk and impact of a shared user account with an FMEA risk assessment, giving a numeric score to each element. At the end of the assessment, a single number is produced and compared against a scale to determine if the issue is critical, major, minor, or low. Unsurprisingly, the number indicates that this is a critical issue (the same as the auditor’s finding!), and only now is a remedial action triggered. Guess what the resolution is? Yep, give each user their own account. Least burdensome approach? I don’t think so.
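
To make the arithmetic concrete, here is a minimal sketch of that style of scoring. The three factors, the 1–10 scales, and the classification thresholds are hypothetical; they are shown only to illustrate why the exercise cannot produce any outcome other than “critical.”

    # Minimal sketch of a three-factor FMEA-style scoring; scales and thresholds are
    # hypothetical and chosen only to illustrate the arithmetic described above.

    def risk_priority_number(severity, occurrence, detection):
        # Classic FMEA arithmetic: multiply the three 1-10 scores together.
        return severity * occurrence * detection

    def classify(rpn):
        # Compare the single number against an (assumed) scale.
        if rpn >= 200:
            return "critical"
        if rpn >= 100:
            return "major"
        if rpn >= 40:
            return "minor"
        return "low"

    # Shared user account: no attribution of action, so every score is at or near maximum.
    rpn = risk_priority_number(severity=10, occurrence=10, detection=8)
    print(rpn, classify(rpn))  # 800 critical -- the outcome the auditor already gave you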

How stupid is this? It is death by compliance. Once the issue is identified in the audit, you know you have a critical vulnerability. You are out of compliance. Fix it. Don’t perform a risk assessment when you know what the only possible outcome will be. Just fix it. Annex 11 does not require a risk assessment; it requires that risk management be applied. The audit has identified the risk; the remediation is to give each user their own account, which is faster, easier, and compliant with GMP. As Audny Stenbråten, a retired Norwegian GMP inspector, stated, “Using risk assessment does not provide an excuse to be out of compliance.” There is an interesting article by O’Donnell and others entitled “The Concept of Formality in Quality Risk Management” that is recommended for further reading on this topic (18).

Rather than just applying the single risk assessment methodology described in the GAMP Guide (17), there are methodologies that can be used to implement a scalable risk assessment approach for both application software and IT infrastructure (19).

Leveraging the Supplier’s Development

Advocates of CSA mention trusted software suppliers. Let’s go back to 2008 and the publication of GAMP 5, often cited by the FDA, which discusses leveraging supplier involvement in sections 2.1.5, 7, and 8.3, plus Appendix M2 for supplier assessments (17). There are comments about leveraging supplier testing into your validation. To leverage supplier testing and reduce your validation effort, you must do more than just send out a questionnaire for the supplier to fill out and for QA to stick in a filing cabinet or document management system. This process requires a proactive assessment that reviews the procedures and practices of software development for software category 4 applications, such as:

  • What software development life cycle is used?
  • What software development tools are used?
  • How are requirements for the system specified and understood?
  • How extensive and accurate are software code reviews, and are they acted upon?
  • How are software builds and configuration managed?
  • Is regression testing systematically performed and accurately reported?
  • How extensive is the formal testing?
  • How are software errors identified, classified, and resolved?
  • What is the release process?

The greater the investment in understanding the supplier’s QMS and software development, the more you can rely on supplier decisions and processes. This type of assessment is not suitable for a questionnaire; it requires either an onsite or a remote audit and will take at least a day to perform. You are looking for a robust software development process. Identify two or three requirements and trace them through the supplier’s development process: how extensive is the work, and does this give you confidence in the supplier? As part of the evaluation, include questions that a supplier must answer about collaboration and about sharing information on instrument and software issues and updates. You want a supplier you can trust. This assessment must be documented in a report, as it is the foundation on which you leverage the supplier’s development into your validation project to reduce the amount of work.

This information can be used as follows:

  • The application assessed is a configurable software product (GAMP category 4).
  • Although classified as category 4 at the application level, many functions are category 3 and are simply parameterized, such as selecting a wavelength to measure a sample’s absorbance.
  • Assess the requirements in the URS, compare them to the application, and then classify each one as either category 4 (configured) or category 3 (parameterized).
  • Provided the assessment report is positive, all category 3 requirements are considered validated by the supplier and are implicitly tested in the user acceptance testing phase (20).

A small investment in time here can reduce the amount and extent of user acceptance testing of any system. This means there is no need for CSA as the regulations and industry guidance have been suggesting such an approach for over 10 years.
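
A minimal sketch of this classification step is shown below. The requirement identifiers, descriptions, and the simple decision rule are invented for illustration and assume the positive, documented supplier assessment described above.

    # Illustrative only: requirement IDs, descriptions, and the decision rule are
    # hypothetical, but they follow the category 3/4 classification described above.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        req_id: str
        description: str
        gamp_category: int  # 3 = parameterized, 4 = configured

    urs = [
        Requirement("URS-01", "Measure absorbance at a selected wavelength", 3),
        Requirement("URS-02", "Custom calculation configured for assay results", 4),
        Requirement("URS-03", "Configured user roles and access privileges", 4),
    ]

    supplier_assessment_positive = True  # outcome of the documented supplier audit

    for req in urs:
        if req.gamp_category == 3 and supplier_assessment_positive:
            plan = "leveraged from the supplier; covered implicitly in user acceptance testing"
        else:
            plan = "test explicitly in user acceptance testing"
        print(f"{req.req_id}: {plan}")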

Undocumented Testing

One of the purported CSA approaches is undocumented testing, but without the draft FDA guidance, care needs to be taken. I would caution any regulated pharmaceutical laboratory against saying that it did undocumented testing, especially in today’s data integrity environment. Remember that software controlling a medical device is validated under 21 CFR 820 and cannot be configured, so one interpretation is that undocumented testing can be conducted during beta testing, with the aim of finding errors, rather than in formal release testing.

How could this be applied to software used in pharmaceutical laboratories? One area is prototyping to learn how to configure and use an application. Provided this phase is described in the validation plan, undocumented prototyping is acceptable, with the deliverable being an application configuration specification containing the agreed software settings.
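
For illustration only, an extract of such a configuration specification might capture the agreed settings as simple name-value pairs; the setting names and values below are hypothetical, not taken from any particular application.

    # Hypothetical extract of an application configuration specification: the
    # deliverable of the prototyping phase, recording the agreed software settings.

    agreed_configuration = {
        "password_expiry_days": 90,
        "audit_trail_enabled": True,
        "electronic_signatures_enabled": True,
        "number_of_user_roles": 4,
        "data_storage_location": "network server",
    }

    for setting, value in agreed_configuration.items():
        print(f"{setting}: {value}")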

Critical Thinking

Rather than a test-everything-regardless approach, critical thinking should focus on demonstrating the intended use of the system and the associated compliance functions that support product development or release.

  • Consider a simple case of password expiry set at 90 days: apart from confirming that the expiry has been configured in the application, what are you going to do now?
  • One system that I audited had a test script that started in November and finished in February the next year. Why so long to run the script? The answer is that we were waiting for the password to expire!
  • An alternative approach is to reset the password expiry to one day to test that it expires and then reset it to 90 days. In the current data integrity climate, this is not the best way to demonstrate how easily configuration settings can be changed.
  • How about thinking this through: How is password expiry measured? It uses a system clock that is synchronized to a network time server, which itself is synchronized to a trusted time source. The clock is a vibrating quartz crystal of known frequency whose oscillations the computer counts and converts into time. All you are doing is checking that a computer can count. Why are you testing this?
  • Alternatively, if password expiry fails and passwords are still valid after 90 days, what is the risk? The answer is that the risk is minimal. If expiry fails in use, it is readily detectable. Furthermore, allowing passwords to go to 91 days before discovery does little to put the product at risk: Roles and passwords are still in place and work.


Testing Assumptions, Exclusions, and Limitations

In a 1970 report for the U.S. military, Barry Boehm explained that it is impossible to test software exhaustively (21). We see this in the everyday use of computers, with security updates, patches, minor versions, quick fixes, or whatever name is applied to them to fix bugs. However, our focus is on testing efficiency and how to concentrate on what is important to demonstrate the intended use of a system. The key to reducing the amount of effort in testing, in addition to leveraging supplier development, is to document the assumptions, exclusions, and limitations you are making in your test approach. Just because you have a requirement does not mean that you must test it blindly: you need to think objectively.

For example, if an application has 100 different access privileges and you want five different user roles, this results in 500 different combinations. Hopefully, you won’t test all of them, but how many will you test, and how will you justify your approach? This is the role of documented assumptions, exclusions, and limitations of your test approach, which record the rationale for what you test, how you test it, the extent of your testing, and how you leverage supplier development. If you are going to exclude specific user requirements from testing, state why you are doing so (20).
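
The sketch below illustrates how such a documented assumption reduces the arithmetic; the role names, privilege allocations, and sampling rule are hypothetical and serve only to show the contrast with exhaustive testing.

    # Hypothetical sketch: rather than testing all 100 privileges x 5 roles = 500
    # combinations, test each role's granted privileges once plus one explicit
    # denial check per restricted role, and document that rationale.

    privileges = [f"PRIV-{n:03d}" for n in range(1, 101)]  # 100 access privileges

    roles = {
        "administrator": privileges,        # full access
        "lab_manager": privileges[:40],
        "reviewer": privileges[:20],
        "analyst": privileges[:10],
        "read_only": privileges[:2],
    }

    exhaustive = len(privileges) * len(roles)
    print(f"Exhaustive testing would need {exhaustive} checks")  # 500

    # Documented assumption: positive test of each granted privilege, plus one
    # negative (denial) test for every role except the administrator.
    risk_based = sum(len(granted) for granted in roles.values()) + (len(roles) - 1)
    print(f"The risk-based approach tests {risk_based} cases with a documented rationale")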

The other side of the coin is including additional requirements in the system’s user requirements specification (URS) just in case they might be used in the future. If tested, these requirements can result in extra work for zero value if they are never used. For example, if a system is used for quantitative analysis, do you validate all calibration modes or just the ones that you use now? A better way is to focus on current requirements in the initial validation. If required later, other software features can be evaluated but not used for regulated work. If you want to use them, raise a change request, include the requirements in an updated URS, and verify that they work as expected. For standalone spectrometer systems, it is unlikely that you will have the luxury of a separate test instance in which to evaluate them, so this is the only practical way of adding new functionality to a validated system.

Test Instructions

The bane of CSV’s existence is test documentation: at what level of detail will you document? Will you be using trained users, or will you drag someone off the street to test your software? If it is the former, you can reduce the detail required compared to the latter. Don’t treat testers as though they are naïve people needing mind-numbingly detailed instructions; testers are educated and trained, so treat them like adults.

Table I compares test instructions for risk-averse and trained users. Generally, with risk-averse instructions shown in the left-hand column, each instruction needs to be documented with observed results, dated, and initialed. If you are really unlucky, you’ll have a screenshot to take at each step. In contrast, a better way is to give a trained user a simpler instruction shown in the right column of Table I. A trained user will know how to execute this instruction consistently. Note that the quality of test instructions is dependent on the knowledge of the software by both the test writer and the tester. The more training and experience with the system, the easier it will be to write simpler instructions and execute them.

Instead of dating and initialing each test step, allow the tester and reviewer to sign and date the bottom of each page, just as you would for a laboratory notebook. Furthermore, some test instructions may simply direct a tester to a different function of the application. If so, why does a test need expected and observed results for such instructions?

Can you go further in reducing test documentation? Absolutely, you could, but without the draft guidance available, why would you dare to?

Screenshot at Dawn

Screenshots are the bane of CSV: they are overused and, in most cases, have zero value. If used to document every step in a test, they are indicative of an overcautious and risk-averse approach to computer validation and an absolute waste of the resources required to execute, collate, review, and retain them. If used sparingly, a screenshot can add value by documenting a transient message on the screen where there is no other way of recording it. A GAMP Good Practice Guide on testing emphasizes the point that screenshots should only be taken when there is value added by doing so.

However, if a transient message on the screen also results in an audit trail entry, why take a screenshot? Use the audit trail entry to automatically document the activity. In this way, you can save time by not just testing analytical functions but simultaneously verifying audit trail functionality; this increases testing elegance as well as reducing the time to test.

An alternative approach to documenting your testing could be to use screen videos that record everything that is being done. If properly described in the test plan and outline test instructions, using screen videos is a perfect way to document the evidence. Reviewers can randomly select passages to review (22).

Automation of Testing

Test automation is excellent, but it comes with caveats. One of the best times for test automation is in software development, such as regression testing, to see if existing functions in new software builds still work as expected. As such, test automation is fast, and if a new build fails regression testing, then no manual testing is conducted until the error is fixed. Manual testing tends to be focused on new functions added in the release under development.

Using automated testing for validating laboratory systems is best focused on networked applications, because most automated test tools, such as HP Application Lifecycle Management (ALM), are networked. However, test tools come with the same problems as manual testing in that you need to know the application software you are testing and the level of test step detail. Fewster and Graham note that when using an automated test tool for the first time, writing the test suite takes longer than manual testing, and it takes over 10 executions to make a return on the investment in the test tool (23). The advantages of an automated test tool are automated attribution of action via log-on credentials, contemporaneous execution, and the ability to capture and integrate electronic documented evidence (including the dreaded screenshots) easily, quickly, and automatically. This makes review quicker and easier.
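
As a back-of-the-envelope illustration of that break-even point, the sketch below compares cumulative manual and automated effort. The effort figures are invented purely for illustration and will differ for every tool and system; only the shape of the calculation matters.

    # Invented effort figures, for illustration only: find the number of executions
    # after which an automated test suite costs less than repeated manual testing.

    manual_effort_per_run = 5.0      # days to execute the test suite manually
    automation_build_effort = 60.0   # days to script the suite the first time
    automated_effort_per_run = 0.5   # days to run and review the automated suite

    runs = 0
    while automation_build_effort + runs * automated_effort_per_run > runs * manual_effort_per_run:
        runs += 1

    print(f"With these figures, automation pays back after {runs} executions")  # 14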

Reality bites when you consider automated user acceptance testing on standalone systems, which is more problematic. How will you load the test tool onto a spectrometer system? How will you manage the documented evidence and store it safely after the validation? Don’t tell me you’ll use a USB stick (24)!

Summary

Although the FDA has developed CSA, the failure to issue a draft guidance on the subject has led to regulation by presentation and publication. This is not an appropriate regulatory process. However, is CSA needed? There is sufficient flexibility in existing regulations and industry guidance that the pharmaceutical industry has not taken advantage of. If the pharmaceutical industry read the regulations and understood CSV as a business benefit and investment protection rather than a regulatory overhead, the CSV process would be simpler and easier. Or is FDA incompetence meant to keep consultants gainfully employed?

Acknowledgments

I would like to acknowledge Chris Burgess, Mark Newton, Yves Samson, Siegfried Schmitt, and Paul Smith for helpful comments and advice during the writing of this column.


References

  1. R.D. McDowall, Spectroscopy 35(11), 13–22 (2020).
  2. US Food and Drug Administration, Guidance for Industry General Principles of Software Validation (FDA, Rockville, Maryland, 2002).
  3. Part 211 - Current Good Manufacturing Practice for Finished Pharmaceuticals. Federal Register 43(190), 45014–45089 (1978).
  4. 21 Code of Federal Regulations (CFR), 211 Current Good Manufacturing Practice for Finished Pharmaceutical Products (Food and Drug Administration, Silver Spring, Maryland, 2008).
  5. 21 Code of Federal Regulations (CFR), 820 Quality System Regulation for Medical Devices (Food and Drug Administration: Rockville, Maryland, 1996).
  6. ISO 13485: Medical devices−Quality management systems−Requirements for regulatory purposes (International Standards Organization: Geneva, Switzerland, 2016).
  7. US Food and Drug Administration, FDA Guidance for Industry, Part 11 Scope and Application (FDA, Rockville, Maryland, 2003).
  8. US Food and Drug Administration, FDA Guidance for Industry Data Integrity and Compliance With Drug CGMP Questions and Answers (FDA, Silver Spring, Maryland, 2018).
  9. US Food and Drug Administration, FDA Guidance for Industry Out of Specification Results (FDA, Rockville, Maryland, 2006).
  10. US Food and Drug Administration, FDA Guidance for Industry: Analytical Procedures and Methods Validation for Drugs and Biologics (FDA, Silver Spring, Maryland, 2015).
  11. 21 Code of Federal Regulations (CFR), Part 11, Electronic records; electronic signatures, final rule, in Title 21 (Food and Drug Administration: Washington, D.C., 1997).
  12. European Commission Health and Consumers Directorate-General, EudraLex: Volume 4 Good Manufacturing Practice (GMP) Guidelines, Annex 11: Computerized Systems (European Commission, Brussels, Belgium, 2011).
  13. C. Burgess and R.D. McDowall, Spectroscopy 28(11), 21–26 (2013).
  14. General Chapter <1058> “Analytical Instrument Qualification,” in United States Pharmacopoeia (United States Pharmacopoeia Convention, Rockville, Maryland).
  15. European Commission Health and Consumers Directorate-General, EudraLex: Volume 4, Good Manufacturing Practice (GMP) Guidelines. Annex 15: Qualification and Validation (European Commission, Brussels, Belgium, 2015).
  16. R.D. McDowall, Quality Assurance Journal 12, 64–78 (2009).
  17. ISPE, Good Automated Manufacturing Practice (GAMP) Guide, version 5 (International Society for Pharmaceutical Engineering, Tampa, Florida, 2008).
  18. K. O’Donnell, D. Tobin, S. Butler, G. Haddad, and D. Kelleher, Understanding the Concept of Formality in Quality Risk Management (IVT Network, 2020).
  19. R.D. McDowall, Quality Assurance Journal 9, 196–227 (2005).
  20. R.D. McDowall, Validation of Chromatography Data Systems: Ensuring Data Integrity, Meeting Business and Regulatory Requirements, 2nd Edition (Royal Society of Chemistry, Cambridge, United Kingdom, 2017).
  21. B. Boehm, Some Information processing Implications of Air Force Missions: 1970–1980 (RAND Corporation, Santa Monica, California, 1970).
  22. S. Schmitt, personal communication, 2021.
  23. M. Fewster and D. Graham, Software Test Automation: Effective Use of Test Execution Tools (Addison-Wesley, Harlow, United Kingdom, 1999).
  24. R.D. McDowall, Spectroscopy 36(4), 14–17 (2021).

R.D. McDowall is the director of R.D. McDowall Limited and the editor of the “Questions of Quality” column for LCGC Europe, Spectroscopy’s sister magazine. Direct correspondence to: SpectroscopyEdit@MMHGroup.com
