For more than 25 years, regulated life sciences companies have been performing Computer System Validation (CSV) to ensure that systems used in the manufacture of regulated drug products and medical devices meet their intended use and satisfy data integrity requirements. Following the FDA guidance first released in 1997, this process applies to any software or system used to automate any part of the device or drug production process or any part of the quality management system (QMS). This includes software that automates production, testing, labeling, and packaging. This also applies to any system that maintains electronic records or manages electronic signatures. CSV is a critical means of confirming a system’s data integrity. Originally developed as a result of medical device recalls due to software failures, CSV plays a key role in ensuring a higher level of product quality and patient safety. 

The Difficulty with Computer System Validation

While well-intentioned, over the years CSV came to be seen by many companies as rigid, arduous, time-consuming, and costly, because the guidance treated all system requirements as equal in terms of impact and the level of testing required. Meanwhile, software providers were releasing new applications and upgrades to existing ones faster and faster, and the advent of cloud services reduced deployment time even further. As developers released software more often, their clients could not keep up with the latest releases; instead, they had to wait until the most recent CSV process was completed.

Over the years, the FDA realized that the prevailing level of testing was neither valuable nor sustainable. It found that companies were beginning to innovate and find ways to decrease the testing and financial burden while still providing a high level of assurance that their computer systems were maintaining product quality and patient safety. Organizations developed risk assessments to identify software-related risks and rank them using a scoring system, then focused their testing on the higher-risk items. They also relied more on testing done by vendors, who capitalized on this large customer documentation burden by making their pre-release testing packages available to customers for a fee.

New Release of FDA Computer Software Assurance

In 2021, the FDA released new draft guidance on Computer Software Assurance (CSA) with the same general intention as CSV, to “establish confidence in the automation used for production or quality systems,” while remaining compliant with GAMP 5. The hallmarks of the FDA Computer Software Assurance concept are:

  • Intended Use of System – clearly defining and documenting which parts or features of a system are part of the production process or the quality system
  • Risk-Based Approach 
  • Streamlined Testing - including leveraging vendor testing

Intended Use of FDA Computer Software Assurance 

When getting started with Computer Software Assurance, a company should first determine the intended use of the software. Software falls into one of two categories: 

  • Software used directly as part of production or the quality system
  • Software used to support production or the quality system

This can be done for the system as a whole or, to further decrease scope, can be refined down to certain parts or even certain features of the system as long as the process is well-documented. The net effect is to identify those systems, components, or features that will have an impact on the product or quality system (directly or indirectly) or threaten the data integrity of an automated process and ignore the rest (for CSA purposes, that is).
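As a simple illustration, the outcome of this intended-use screen could be recorded as a feature inventory like the sketch below. The feature names and category labels here are hypothetical, chosen only to show the shape of the exercise, not FDA terminology:

```python
# Hypothetical inventory of system features and their intended-use category.
# "direct"  = used directly as part of production or the quality system
# "support" = used to support production or the quality system
# "none"    = no impact on product, quality system, or data integrity,
#             so outside CSA scope
inventory = [
    {"feature": "batch record e-signature", "category": "direct"},
    {"feature": "label printing",           "category": "direct"},
    {"feature": "trend report formatting",  "category": "support"},
    {"feature": "UI color theme",           "category": "none"},
]

# Only in-scope features proceed to the risk-based assessment step.
in_scope = [f["feature"] for f in inventory
            if f["category"] in ("direct", "support")]
print(in_scope)
```

The same screen can be run at the level of whole systems, components, or individual features, provided the decision for each entry is documented.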

Risk-Based Approach

Once it has been determined that the software, components, or features are intended for use within the production process or quality system, the next step is to apply a risk-based approach to determine the appropriate level of assurance activities required for each. The analysis should identify “reasonably foreseeable” software failures and determine whether that failure would pose a high process risk.

As defined in the FDA guidance, a software component or feature would pose a high process risk if its failure to perform as intended could result in a compromise to product quality and patient safety. Examples of this include:

  • Maintaining process parameters that affect the physical properties of the product or process, which are identified as essential to safety or quality.
  • System activities with limited or no additional human awareness or review such as:
    • Measurement, inspection, analysis, and/or determining the acceptability of a product
    • Performance of process corrections or adjustments based on data monitoring or automated feedback

A feature whose failure would not result in a compromise in product quality or patient safety would be considered a low process risk.
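One simplified way to capture this high/low determination is the toy classifier below. The two boolean inputs are assumptions made for illustration and do not replace the guidance’s full criteria:

```python
def process_risk(failure_compromises_quality_or_safety: bool,
                 additional_human_review: bool) -> str:
    """Toy classifier for CSA process risk.

    High process risk: a reasonably foreseeable failure could compromise
    product quality or patient safety, and there is limited or no
    additional human awareness or review to catch it.
    """
    if failure_compromises_quality_or_safety and not additional_human_review:
        return "high"
    return "low"

# e.g. automated in-line inspection with no operator review:
print(process_risk(True, False))   # high
# e.g. a convenience feature whose failure cannot affect the product:
print(process_risk(False, True))   # low
```

In practice the assessment would weigh each “reasonably foreseeable” failure mode separately rather than reduce a feature to two booleans, but the decision logic follows this shape.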

Streamlined Testing

Once the risk assessment has been completed, the rankings determine the appropriate level of assurance activities to be carried out. Provided the vendor has been vetted and is reliable, their pre-release testing can be relied upon to reduce the internal assurance efforts and avoid duplication of testing. For remaining items that warrant assurance testing, the FDA describes several testing methods, some of which are a significant change from the original CSV guidance.

Scripted Testing

For higher-risk items, FDA suggests the use of more traditional scripted testing. Robust scripted testing calls for evidence of repeatability, traceability to requirements, and auditability. This is most similar to the CSV approach. FDA also introduces limited scripted testing, a hybrid approach in which scripted testing is used for high-risk items and unscripted testing is used for low- and medium-risk items.

Unscripted Testing

The largest divergence from the CSV approach is in the introduction of unscripted/ad-hoc testing. Unscripted does not mean undocumented. The tester still records what tests they performed and the results of those tests. But the tests are not necessarily pre-defined and written in a stepwise fashion. Types of unscripted testing include:

  • Error-Guessing – testers utilize their own knowledge of past failures to create test cases
    • Exploratory Testing – this type is also based on the tester’s experience. He or she spontaneously designs and executes tests based on existing knowledge, previous test results, common software behaviors/failures, user behaviors, or accidental use situations.
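Putting the pieces together, the risk-to-method mapping described above could be expressed as a simple lookup. The three-level ranking and the method labels are assumptions for this sketch; an actual program would define its own levels in its risk assessment procedure:

```python
# Illustrative mapping from assessed process risk to an assurance approach,
# based on the testing methods described in the draft CSA guidance.
TEST_METHOD_BY_RISK = {
    "high":   "scripted testing (repeatable, traceable, auditable)",
    "medium": "unscripted exploratory testing (documented, not pre-defined)",
    "low":    "unscripted ad-hoc / error-guessing testing",
}

def select_test_method(risk: str) -> str:
    """Return the assurance approach for an assessed risk level."""
    return TEST_METHOD_BY_RISK[risk]

print(select_test_method("high"))
```

The point of the lookup is that assurance effort scales with risk: only the high-risk items carry the full scripted-testing documentation burden.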

While protocol authors and testers may already be familiar with using a risk assessment to dial down the amount of testing required, some of these new testing methods may be harder to grasp. The focus of the new guidance is on reducing paperwork (the burden of authoring, recording, and signing test steps) and on making better use of the tester’s knowledge to provide more meaningful tests.

In the end, the goal of the new guidance is lower-volume but higher-quality testing, which translates to a more cost-effective process and a safer, higher-quality product that continues to meet regulatory and data integrity requirements.

Considering changing your systems? Contact Dayspring Technology at consulting@dayspringtechnology.com or visit here to learn more.