Validation of Spectrometry Software: The Proactive Use of a Traceability Matrix in Spectrometry Software Validation, Part I: Principles

Article

Spectroscopy

Volume 23
Issue 11

Tracing requirements from a user requirements specification throughout the life cycle is not only a very effective business tool to save time and effort in validation projects but also a regulatory expectation. This first installment of a two-part column series looks at the principles of a traceability matrix.

The various columns on validation of spectrometry software over the last few years have covered many topics ranging from user requirements specifications to validation master plans, but requirements traceability and a traceability matrix have not yet been discussed. Traceability of user requirements in the validation of computerized systems, and spectrometry software in particular, is a regulatory expectation of the FDA, as evidenced by its Guidance for Industry on the General Principles of Software Validation (1). In Europe, if the proposed update to the GMP Annex 11 regulations covering computerized systems is approved, traceability will become a regulatory requirement (2). This column will look at what a traceability matrix is, its role in a validation project, and when this document should be written. From my perspective, a traceability matrix is a very useful tool to help you control the detailed steps of any spectrometer computer validation project.

R.D. McDowall

Throughout this column the discussion assumes that the system to be validated is a commercially available spectrometer system. Macros or other custom elements developed to work with such applications are outside the scope of this column. The principles of requirements traceability still apply to them, but the number of documents required in a validation will increase because the work is more complex than implementing a commercial system.

This column will look at the principles of a traceability matrix, and the next will look at turning those principles into practice.

A Life Cycle Model Refresher

Implementation of a computerized system, such as a spectrometer with a data system, is typically visualized by a life cycle model. Traditionally, a V model is used, and one such model is shown in Figure 1. This has been modified from the GAMP version 5 model for a category 4 software system (3) by eliminating the stages for writing a functional specification and for system testing. On the left-hand side of the V, representing the specification of the system, the main system requirements are defined in a user requirements specification (URS), the configuration of the software (user types and their access privileges, functional configuration, and so forth) is described in a configuration specification, and the hardware platform and its integration into the IT infrastructure are defined in a technical specification.

Figure 1: Simplified life cycle model for GAMP category 4 software (adapted from reference 3).

On the right-hand side of the V, representing the implementation of the system, are the installation and integration of the computer workstation, spectrometer components, and software (installation qualification [IQ] and operational qualification [OQ]); configuration of the software against the configuration specification; and finally user acceptance testing (performance qualification [PQ]).

Nice and easy so far. However, the key question is: how do you know that a requirement documented in the URS has been further specified and also implemented in a later stage of the validation? In essence, you don't, unless you have developed a traceability matrix over the life cycle. In the absence of a traceability matrix, the whole of the life cycle documentation can be a validation black hole: stuff goes in but little comes out. So from a business perspective, a traceability matrix helps to ensure that requirements are managed and traced from the start to the delivery of the final system and, as such, it is a very useful tool in ensuring the overall quality of the finished system. The traceability matrix should cover the whole of the life cycle, although in discussing the principles I will sometimes focus on specific phases to illustrate particular points.

Figure 2: Principles of requirements traceability 1: From the URS to other specifications.

Regulatory Requirements and Industry Guidance

We also need to consider the regulatory perspective on this, as currently there are no specific regulations that require a traceability matrix. However, a traceability matrix is a regulatory expectation, as the FDA's guidance for industry, General Principles of Software Validation, notes specifically in section 5.2.2 on software requirements (1):

A software requirements traceability analysis should be conducted to trace software requirements to (and from) system requirements and to risk analysis results. In addition to any other analyses and documentation used to verify software requirements, a formal design review is recommended to confirm that requirements are fully specified and appropriate before extensive software design efforts begin.

Further, the same guidance states in section 5.2.3 on the subject of design (1):

A traceability analysis should be conducted to verify that the software design implements all of the software requirements. As a technique for identifying where requirements are not sufficient, the traceability analysis should also verify that all aspects of the design are traceable to software requirements.

Figure 3: Principles of requirements traceability 2: From the URS to later life cycle deliverables.

The proposed changes to EU GMP Annex 11 published in April 2008 (2), if implemented, will elevate the regulatory expectation of a traceability matrix to a requirement:

Section 3.3: User requirements should be traceable throughout the validation process/life cycle

One comment about requirements is made in the PIC/S Guidance for Computerized Systems in GXP Environments (4) in clause 9.2:

When properly documented, the URS should be complete, realistic, definitive, and testable. Establishment and agreement to the requirements for the software is of paramount importance. Requirements also must define nonsoftware (for example, standard operating procedures [SOPs]) and hardware.

Therefore, it is important to understand that some user requirements will trace to the installation phase of the life cycle (installation qualification of the spectrometer, workstation, and software), while others will trace to user acceptance testing (performance qualification) or to other elements of the life cycle. These other elements can include procedures for people to use the system, quality of software development, calibration of the spectrometer, and IT support (for example, backup and recovery). Because user requirements can be traced to any phase of the life cycle, this must be reflected in the traceability matrix.

Appendix M4 of the Good Automated Manufacturing Practice (GAMP) guidelines version 5 (3) covers design reviews and traceability. The aim of these two topics is to ensure that all requirements have been addressed and that the delivered system meets the needs of the business; if done proactively, this can reduce overall project cost. Traceability is intended to establish and monitor the relationship between the requirements and subsequent phases of implementation or development of the system. For example, a requirement can be traced to a functional specification or configuration specification and then further, either to an SOP that is written, to a check performed during the IQ, or to a test executed during user acceptance testing (PQ).

If you think these are onerous regulations and guidance, consider yourself lucky that you are not developing avionics software. The avionics software guidance DO-178B requires that "every line of code be directly traceable to a requirement and a test routine, and that no extraneous code outside of this process be included in the build" (5). This aims to ensure that all requirements are included in the final system and that there is no orphan or redundant code. That is the importance of requirements traceability, especially if you are flying at 40,000 feet.

Terms and Definitions

Before we start the traceability matrix discussion in more detail, we need some terms defined. I have selected the four most appropriate from the Institute of Electrical and Electronics Engineers (IEEE) standard glossary of software engineering terminology (6). Please note that I have edited the traceability definition slightly.

Trace: To establish a relationship between two or more products of the development process; for example, to establish the relationship between a given requirement and the design element that implements that requirement.

Traceability: The degree to which a relationship can be established between two or more products of the development process; for example, the degree to which the requirements and design of a given software component match.

Traceability analysis: The tracing of (1) software requirements specification (that is, user) requirements to system requirements in concept documentation, (2) software design descriptions to software requirements specifications and software requirements specifications to software design descriptions, and (3) source code to corresponding design specifications and design specifications to source code.

Traceability matrix: A matrix that records the relationship between two or more products of the development process; for example, a matrix that records the relationship between the requirements and the design of a given software component.

From these definitions, traceability is, quite simply, a mechanism for connecting two or more parts of the software development process together, and a traceability matrix is the means for achieving this. I will not discuss traceability analysis per se, but the important part of this definition is that traceability is required forward from the URS to the remainder of the documentation and backward from the later documents to the URS. Therefore, requirements traceability is the ability to describe and follow the life of a requirement, in both a forward and backward direction throughout the life cycle.
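To make the two directions concrete, here is a small Python sketch. The requirement IDs and document names are hypothetical examples, not taken from the article; the point is simply that trace links can be checked forward (every requirement traces to at least one later deliverable) and backward (every deliverable traces back to a requirement in the URS).

# Minimal sketch of forward and backward traceability checks.
# All identifiers and document names below are hypothetical.

urs_requirements = {"URS-01", "URS-02", "URS-03"}

# Each link connects a URS requirement to a later life cycle deliverable.
trace_links = [
    ("URS-01", "Configuration specification, section 4.2"),
    ("URS-02", "PQ test script 7"),
    ("URS-99", "OQ test script 3"),  # refers to a requirement that is not in the URS
]

def forward_gaps(requirements, links):
    """Requirements with no forward trace to any later deliverable."""
    traced = {req for req, _ in links}
    return requirements - traced

def backward_orphans(requirements, links):
    """Deliverables that do not trace back to any URS requirement."""
    return [item for req, item in links if req not in requirements]

print("Untraced requirements:", forward_gaps(urs_requirements, trace_links))
print("Orphan deliverables:", backward_orphans(urs_requirements, trace_links))

Run against these example data, the first check flags URS-03 as having no forward trace, and the second flags the test script linked to the nonexistent URS-99 as an orphan; both are exactly the gaps a traceability matrix is meant to expose.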

You may ask why I have not quoted a regulatory source for these definitions. The simple answer is that when the FDA published a glossary of computer validation terms in 1995 (7), it used the same source for the definitions above, and I have simply referenced the original. Note the use of the term product in the definitions above. You should interpret this as a deliverable of the life cycle, such as a document or a function in the final system.

Why Bother to Trace Requirements?

It is all very well for regulators to jump up and down demanding that we have a traceability matrix for each validation, but the real reason for writing one, as I have said before, is that it makes good business sense. My advice is to forget the regulations and look at the business rationale for writing this document.

Traceability of requirements has two main objectives:

A requirement should be traceable forward from the URS to further specification, implementation, and test documents. Figure 2 shows that the software and hardware requirements in the URS can be traced forward to other specification documents such as a configuration specification or a technical specification. This ensures that your user requirements have been implemented correctly and that no required elements have been omitted from the following phases of work or from the final delivered system.

A requirement should be traceable backward to the URS (for example, from a configuration specification or qualification phase back to the individual user requirements). This is also shown in Figure 2, where the software configuration of a spectrometer application can be traced back to the items in the URS. Similarly, the technical specification or architecture for the system can be traced back to the hardware elements defined in the URS.

As Figure 1 indicated, requirements traceability should extend throughout the life cycle. This is shown in more detail in Figure 3, which expands the right-hand part of Figure 1 and shows the full impact of traceability: requirements can be traced through either verification or testing. Testable and verifiable are defined as follows:

Testable: The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met [IEEE 610] (6). This is typically undertaken in the operational qualification or performance qualification phases of the life cycle.

Verifiable: The degree to which a requirement can be fulfilled by implementation (IQ), software configuration, writing SOPs, user training, vendor audit, calibration, or documentation.
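The split between testable and verifiable requirements can also be illustrated with a short, hypothetical Python sketch. The requirement IDs, phase names, and assignments below are illustrative assumptions, not taken from the article; each requirement is tagged with the life cycle element expected to show that it has been met.

# Hypothetical sketch of classifying requirements as testable or verifiable,
# following the definitions above. Phase assignments are illustrative only.

TEST_PHASES = {"OQ", "PQ"}
VERIFY_PHASES = {"IQ", "SOP", "Training", "Vendor audit", "Calibration", "Documentation"}

requirements = {
    "URS-01": "PQ",           # tested during user acceptance testing
    "URS-02": "Calibration",  # verified through spectrometer calibration
    "URS-03": "SOP",          # verified by writing a procedure
}

for req_id, phase in requirements.items():
    if phase in TEST_PHASES:
        kind = "testable"
    elif phase in VERIFY_PHASES:
        kind = "verifiable"
    else:
        kind = "unclassified (review this requirement)"
    print(f"{req_id}: {kind} via {phase}")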

In essence, you get a better quality system with improved documentation of how to use it. Using a traceability matrix, you will know that all requirements have been captured, and you will know through the matrix how each has been implemented, tested, or verified. This implies that traceability is undertaken proactively throughout the life cycle and not just conducted as an afterthought at the end. In fact, writing a traceability matrix at the end of the life cycle is an exercise in producing a document for a document's sake: a futile, non-value-added activity.

Why a Matrix?

The best way to trace requirements across different documents is in a series of tables; hence, the matrix. Note that for highly configured systems, there can often be a one-to-many relationship between a user requirement and the underlying configuration of the spectrometer software. The traceability matrix will usually be developed using Word or Excel tables that record the relationship between each requirement and where it is implemented, verified, or tested in the life cycle.
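Even a simple script can produce such a table. The Python sketch below uses an assumed column layout and illustrative entries (it is not the author's template) to write a minimal traceability matrix to a CSV file that can be opened in Excel; the "Traced to" column shows the one-to-many relationship mentioned above by listing more than one target per requirement.

# Minimal sketch: writing a traceability matrix as a CSV file for use in Excel.
# Requirement IDs, document references, and wording are illustrative only.
import csv

rows = [
    ("URS-01", "Record an audit trail of data changes",
     "Configuration specification 3.1; PQ test PQ-12", "Test"),
    ("URS-02", "Wavelength accuracy within vendor specification",
     "OQ protocol 5.4; calibration SOP", "Test"),
    ("URS-03", "Nightly backup of spectral data",
     "IT backup and recovery SOP", "Verify"),
]

with open("traceability_matrix.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(["Requirement ID", "Requirement", "Traced to", "Test or verify"])
    writer.writerows(rows)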

Conclusions

We have discussed the principles of a traceability matrix and the business and regulatory drivers for using one to produce a better system. In the next part of this series, we will look at turning principles into practice and ways to develop and maintain a traceability matrix.

R.D. McDowall is principal of McDowall Consulting and director of R.D. McDowall Limited, and "Questions of Quality" column editor for LCGC Europe, Spectroscopy's sister magazine. Address correspondence to him at 73 Murray Avenue, Bromley, Kent, BR1 3DJ, UK.

References

(1) General Principles of Software Validation, FDA Guidance for Industry, 2002.

(2) Proposed Update for EU GMP Annex 11, April 2008.

(3) Good Automated Manufacturing Practice (GAMP) guidelines, version 5, International Society for Pharmaceutical Engineering, Tampa, Florida, 2008.

(4) PIC/S Guidance PI 011, Computerized Systems in GXP Environments, Pharmaceutical Inspection Convention/Pharmaceutical Inspection Co-operation Scheme, 2003.

(5) Software Considerations in Airborne Systems and Equipment Certification, DO-178B, RTCA, Inc., Washington, D.C., 1992.

(6) IEEE Standard Glossary of Software Engineering Terminology, IEEE Std 610.12-1990, Institute of Electrical and Electronics Engineers, 1990.

(7) FDA Glossary of Computerized System and Software Development Terminology, 1995.
