NRL News

Opinion Piece

TIME TO RETHINK QC FOR INFECTIOUS DISEASE SEROLOGY

We validate new assays – why not QC procedures?

In a recent publication(1), NRL demonstrated that quality control (QC) guidelines(2-5) developed for clinical chemistry are not appropriate for infectious disease serology. There are no published validations of these methods applied to infectious disease serology. If you are following these guidelines and using Westgard rules to monitor the performance of serological assays, you are most likely experiencing numerous false rejections. Here is why.

As laboratory scientists, we know that all test systems have inherent (normal) variation. The sources of variation are many, including reagent lot changes; instrument and equipment performance, components and calibrations; operator procedures; environmental conditions; and shipping and storage of consumables and reagents. Good laboratory practice dictates that we monitor the extent of variation by testing a sample (run control) with reactivity close to levels at which medically important decisions are made, plotting the results on a Levey-Jennings chart, and monitoring shifts, drift and random changes in QC sample reactivity.

The challenge for us is to differentiate normal from abnormal variation. The guidelines dictate that the first 15-20 QC test results are used to calculate the mean and standard deviation (SD), although the most recent CLSI guideline(3) suggests recalculating after an undefined period. The calculated mean and SD are applied to the Levey-Jennings chart and the QC test results assessed against predefined criteria, including Westgard rules. This approach relies on the principles of the Gaussian (Normal) distribution, under which ~95% of all normally distributed data fall within 2SD of the mean. That is, if a result is outside 2SD, there is only a 5% chance that it belongs to that dataset, so we expect about 5% false rejections. However, this approach assumes that the results are normally distributed and that the data used to calculate the mean and SD are representative of all subsequent data.
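To make the conventional procedure concrete, here is a minimal sketch in Python. It is illustrative only: the function names and example values are ours, not taken from the guidelines. It derives limits from the first 20 QC results and applies the single-result 2SD check (Westgard 1-2s):

```python
import statistics

def establish_limits(baseline_results):
    """Derive conventional control limits from an initial set of
    QC results (the guidelines suggest the first 15-20)."""
    mean = statistics.mean(baseline_results)
    sd = statistics.stdev(baseline_results)
    # ~95% of normally distributed, in-control data falls inside mean +/- 2SD
    return mean - 2 * sd, mean + 2 * sd

def flag_1_2s(result, lower, upper):
    """Westgard 1-2s: flag any single result outside mean +/- 2SD.
    For in-control Gaussian data this fires ~5% of the time."""
    return not (lower <= result <= upper)

# Hypothetical baseline: the first 20 results for a run control
baseline = [1.02, 0.98, 1.05, 0.97, 1.01, 0.99, 1.03, 1.00, 0.96, 1.04,
            1.01, 0.98, 1.02, 0.99, 1.03, 0.97, 1.00, 1.02, 0.98, 1.01]
lower, upper = establish_limits(baseline)
print(flag_1_2s(1.25, lower, upper))  # True: outside the 2SD range
```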

In infectious disease serology testing, this is not the case.

NRL has been collecting, analysing and investigating QC results for infectious disease serology for over 15 years. We have examined many millions of results. The following example is representative of what we commonly find; have a look at some of your own data to confirm it. Figure 1 shows results from an HIV Ag/Ab combination assay over a one-year period. We can clearly see shifts due to reagent lot changes, indicated at the top of the chart. If we calculate the mean ± 2SD using all the results, all but a few results fall within the range, as would be expected if the results were normally distributed. Next, we calculated the mean ± 2SD using only the first 20 QC results (Figure 2). Again, all but a few of those results are within the range. However, the guidelines say that the range calculated from the first 20 results should be used to monitor all subsequent results. If we do that, rejections are frequently flagged (Figure 3).
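The effect is easy to reproduce in a simulation. The sketch below uses hypothetical lot means and SD chosen only to illustrate the pattern (it is not NRL's data): it generates a year of QC results across four reagent lots, then compares the rejection rate under limits derived from the full year against limits derived from the first 20 results.

```python
import random
import statistics

random.seed(1)

# Simulate one year of QC results: four reagent lots, each shifting the
# mean reactivity (hypothetical values, chosen only to illustrate).
lot_means = [1.00, 1.12, 0.93, 1.08]
results = [random.gauss(m, 0.04) for m in lot_means for _ in range(60)]

def limits(data):
    m, s = statistics.mean(data), statistics.stdev(data)
    return m - 2 * s, m + 2 * s

lo_all, hi_all = limits(results)     # limits from the full year (cf. Figure 1)
lo_20, hi_20 = limits(results[:20])  # limits from the first 20 (cf. Figure 2)

reject = lambda lo, hi: sum(r < lo or r > hi for r in results) / len(results)
print(f"rejected, all-data limits:  {reject(lo_all, hi_all):.1%}")
print(f"rejected, first-20 limits:  {reject(lo_20, hi_20):.1%}")
```

With these hypothetical values, the first-20 limits reject a large share of results from the later lots even though every run was valid, while the full-year limits reject only a few per cent.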

Our publication reviewed 103 datasets for 14 different infectious disease serology assays obtained from five different countries; a total of 21,510 results. Each dataset comprised QC results submitted for the same QC sample on the same assay over a 12-month period. Given that all the QC results came from valid test runs (i.e. runs that passed the manufacturer's validation rules provided in the instructions for use), we deemed any QC rule that rejected more than 20% of a dataset's results to be not fit for purpose. Of the 103 datasets, the number with > 20% failures ranged, across the Westgard rules, from 3 (2.9%) for the R4s rule to 66 (64.1%) for the 10x rule when the first 20 results were used to establish the mean ± SD. The German RiliBÄK control limit procedure had > 20% failures in 25 (24.3%) datasets, whereas only 2 (1.9%) datasets failed NRL's QConnect Limits more than 20% of the time. Further investigation showed that both of these datasets reflected significant testing issues within the participating laboratory.
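The fit-for-purpose criterion reduces to a simple per-dataset calculation; here is a minimal sketch (illustrative names, not the code used in the study):

```python
def fit_for_purpose(flags, threshold=0.20):
    """A rule is deemed not fit for purpose for a dataset when it rejects
    more than 20% of QC results that all came from valid (passed) runs.
    `flags` is a list of booleans: True where the rule rejected a result."""
    failure_rate = sum(flags) / len(flags)
    return failure_rate <= threshold

# e.g. a rule that rejected 9 of 30 results from valid runs (30% > 20%)
print(fit_for_purpose([True] * 9 + [False] * 21))  # False: not fit for purpose
```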

Calculating the mean ± SD on the first 20, or even 100, data points and applying Westgard rules is an inappropriate method for establishing control limits, because those data are not representative of subsequent data: reagent lots change frequently, and reactivity changes with them. The change in reactivity of QC samples due to a reagent lot change is "normal" variation. It does not signal that something adverse has occurred. Every infectious disease serology assay will experience changes associated with different reagent lots, and in some jurisdictions each lot is validated by regulatory bodies prior to release. When establishing control limits, it is important that all "normal" variation is included.

QConnect Limits, developed and published by NRL, use historical data to establish the acceptance criteria(6). As NRL has been collecting QC results from QC/assay combinations for more than 15 years, we include variation from all normal sources, sometimes using more than 100,000 data points to establish the QConnect Limits. Since implementing QConnect Limits in EDCNet in 2015, NRL has detected many testing abnormalities, which we routinely investigate and publish.
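The QConnect algorithm itself is described in our publication(6) and is not reproduced here. Purely as an illustration of the general idea, acceptance limits can be derived from a large historical pool spanning many reagent lots, for example from extreme percentiles, so that normal lot-to-lot variation sits inside the limits:

```python
import random

def historical_limits(pool, low_pct=0.5, high_pct=99.5):
    """Illustrative sketch only (not the published QConnect algorithm):
    take acceptance limits as extreme percentiles of a large historical
    pool of QC results spanning many reagent lots, so that normal
    lot-to-lot variation falls inside the acceptance range."""
    data = sorted(pool)
    n = len(data)
    idx = lambda p: min(n - 1, max(0, round(p / 100 * (n - 1))))
    return data[idx(low_pct)], data[idx(high_pct)]

# Hypothetical pool: many results across several reagent lots
random.seed(2)
pool = [random.gauss(m, 0.04)
        for m in (1.00, 1.12, 0.93, 1.08) for _ in range(5000)]
print(historical_limits(pool))  # limits that already contain the lot shifts
```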

So, what is the alternative to Westgard rules? Anecdotally, some laboratories recalculate the mean and SD for each new reagent lot. In our paper(6), we found that the mean number of QC tests performed per reagent lot was less than 40. Therefore, by the time 20 QC results had been collected for a reagent lot, it would almost be time to change lots and start over. Any process for establishing acceptance criteria for QC test results must include all "normal" variation, and we believe that this must include lot-to-lot variation.

It is time to re-think the way we monitor QC results for infectious disease serology testing.  You wouldn’t introduce a new assay without validating its fitness for purpose; why would you use a non-validated QC process?

Figure 1: One year of QC results from an HIV Ag/Ab combination assay, with reagent lot changes indicated at the top and the mean ± 2SD calculated from all results.
Figure 2: The same results, with the mean ± 2SD calculated from the first 20 QC results.
Figure 3: Subsequent results monitored against the range derived from the first 20 results, showing frequent rejections.
References

1. Dimech W, Karakaltsas M, Vincini GA. Comparison of four methods of establishing control limits for monitoring quality controls in infectious disease serology testing. Clin Chem Lab Med 2018 (ahead of print).

2. CLSI. Statistical quality control for quantitative measurement procedures: Principles and definitions; approved guideline - third edition. Vol. C24-A3. Wayne PA: CLSI, 2006.

3. CLSI. Statistical quality control for quantitative measurement procedures: Principles and definitions. CLSI Guideline C24, 4th ed. Wayne PA: Clinical and Laboratory Standards Institute, 2016.

4. German Medical Association. Revision of the "Guideline of the German Medical Association on Quality Assurance in Medical Laboratory Examinations - Rili-BÄK" (unauthorized translation). Journal of Laboratory Medicine 2015;39:26-69.

5. Public Health England. Quality assurance in the diagnostic virology and serology laboratory. Standards Unit, Microbiology Services. Colindale, UK: Public Health England, 2015.

6. Dimech W, Vincini G, Karakaltsas M. Determination of quality control limits for serological infectious disease testing using historical data. Clin Chem Lab Med 2015;53:329-36.

Contact Us


NRL

4th Floor, Healy Building

41 Victoria Parade

Fitzroy VIC 3065

Australia

Fax: +61 3 9418 1155

Phone: +61 3 9418 1111

Email: info@nrlquality.org.au


NRL is designated a WHO Collaborating Centre for Diagnostics and Laboratory Support for HIV and AIDS and Other Blood-borne Infections