In this section of the course, we will look more closely at how Biological Variation (BV)
data can be used to set Total Allowable Error (TEa) targets.
Upon completion of this section, you will be able to:
Define Total Allowable Error (TEa), Total Error (TE) and z score.
List reasons why it is beneficial to use a secondary feedback mechanism like TEa.
Explain how Total Error (TE) and Total Allowable Error (TEa) are used to determine if patient
test results are reliable.
Calculate Total Error (TE), Total Allowable Error (TEa), Imprecision (actual and performance
goal) and Bias (actual and performance goal).
List the methods used to obtain the imprecision for a test.
Describe the difference between the three performance goals of minimum, desirable and
optimum.
Determine if published Biological Variation data exists for a test.
Compare actual values for imprecision and bias to selected goals to help identify where
reasoned troubleshooting is needed.
Total Allowable Error (TEa) is an analytical quality specification that sets limits for
imprecision and bias that are acceptable in a single test result. It is a process control
tool based on human Biological Variation data and can be used as a secondary feedback mechanism
for Statistical Process Control (SPC) analysis.
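The relationship between these quantities can be sketched in code. This is a minimal illustration, assuming the common convention TE = |bias%| + z × CV% with z = 1.65 (one-tailed, 95% confidence); the numeric values used are made up for demonstration and are not from this course.

```python
def total_error(bias_pct: float, cv_pct: float, z: float = 1.65) -> float:
    """Total Error (%) estimated as observed |bias| plus z times imprecision (CV)."""
    return abs(bias_pct) + z * cv_pct

def within_tea(te_pct: float, tea_pct: float) -> bool:
    """A test's error budget is acceptable when TE does not exceed the TEa goal."""
    return te_pct <= tea_pct

# Illustrative values: 1.2% bias, 2.0% CV, against an assumed 6.0% TEa goal.
te = total_error(bias_pct=1.2, cv_pct=2.0)
print(f"TE = {te:.1f}%, within TEa of 6.0%: {within_tea(te, 6.0)}")
```

With these assumed inputs, TE = 1.2 + 1.65 × 2.0 = 4.5%, which sits inside the 6.0% allowance, so imprecision and bias together are acceptable for a single result.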
Why use a secondary feedback mechanism? For several reasons:
Lack of Quality Planning
Lack of working knowledge of SPC rules
Misapplication of SPC rules
Desensitization to error flags
As stated in this CLSI standard, the first step in planning an effective quality control
procedure is to define the intended quality for a test.
While laboratories will devote time to ensuring a new instrument, kit or method meets manufacturer
claims, many do not plan how much imprecision and bias is acceptable for a test. Consequently,
such laboratories often set process control rules that may seem intuitively appropriate
or reflect past experience but discount the technical aspects of the test
that directly relate to analytical quality.
Technologists are often not aware of the statistical power of each SPC rule when applied singly
or in combination (multirule). Often they are also not aware that some rules identify
error due to imprecision and others identify error due to bias.
Lack of a working knowledge of the SPC rules can lead to misapplication and an increase
in false error flags. This results in rejection of credible patient test results that are
appropriate for clinical decision making.
Regardless of the analytical capabilities of a test, some laboratories will continue
to use only a 1-2s rule as a rejection limit.
According to Dr. Westgard, failure to allow for valid points between 2SD and 3SD may result
in falsely rejecting:
5% of analytical runs when using one level of control
10% of analytical runs when using two levels of control
14% of analytical runs when using three levels of control
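The figures above can be reproduced from the normal distribution: roughly 4.55% of points fall outside ±2 SD, so with n independent control levels the chance that at least one level falsely flags a run is 1 − (1 − 0.0455)ⁿ. A short sketch of that calculation:

```python
# Probability that a point from a normal distribution falls outside +/-2 SD
# (two-tailed tail area), the false-rejection rate of a strict 1-2s rule.
P_OUTSIDE_2SD = 0.0455

def false_rejection_rate(n_levels: int) -> float:
    """Chance that at least one of n control levels falsely rejects a run."""
    return 1 - (1 - P_OUTSIDE_2SD) ** n_levels

for n in (1, 2, 3):
    print(f"{n} level(s) of control: {false_rejection_rate(n):.3f}")
```

The computed rates come out close to the ~5%, 10% and 14% figures quoted above for one, two and three levels of control.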
Another effect of using the 1-2s rule indiscriminately is the narrowing of the standard deviation,
and therefore the range on the Levey-Jennings chart, over time. This is because rejected data
are not used to calculate the cumulative mean and standard deviation, skewing both statistics.
The consequence of a narrowing standard deviation on the Levey-Jennings chart is an
increase in the frequency of false error detections and unnecessary rejections.
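The narrowing effect can be demonstrated with a small simulation: when points outside ±2 SD are rejected and excluded from the cumulative statistics, the recalculated SD shrinks below the true process SD. The distribution and parameters here are assumed purely for illustration.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible
true_mean, true_sd = 100.0, 4.0  # assumed "true" control performance
data = [random.gauss(true_mean, true_sd) for _ in range(5000)]

# Keep only points within the 2 SD limits, as an indiscriminate 1-2s rule
# would, then recompute the SD from only the "accepted" points.
kept = [x for x in data if abs(x - true_mean) <= 2 * true_sd]
truncated_sd = statistics.stdev(kept)

print(f"true SD: {true_sd:.2f}, SD recalculated after rejections: {truncated_sd:.2f}")
```

The recalculated SD is noticeably smaller than the true SD, so control limits derived from it tighten, flagging ever more valid points and compounding the problem.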
Another practice is the application of the same single rule or multirule to all tests
regardless of instrument or method. This is done because it makes “QC easier to manage”.
However, because the rules do not reflect test performance in relation to stability, sensitivity
and specificity along the method curve, this practice causes unnecessary error flags, run
rejections, recalibrations and repeat testing.