
Calculating the SEM

I’ve had a number of enquiries recently about how to calculate the standard error of measurement (SEM) for a range of different repeatability studies. This has struck me as odd because, in my mind, the SEM is a simple and clearly defined measure, and given this it seems quite obvious to me how to calculate it.


On looking at a range of textbooks, though, I think I can see what the problem is. As I’ve pointed out in a previous post, the SEM is almost always presented as a derivative of the intra-class correlation coefficient (ICC). Portney and Watkins, for example, introduce it through the formula SEM = SD√(1-ICC). For those not used to maths this looks bad enough on its own. When they probe a little further, however, they will find that the ICC itself is an esoteric output from a specifically structured ANOVA. No wonder so many give up and assume that the SEM is the rather abstract product of some largely incomprehensible calculations.

But nothing could be further from the truth. The SEM is simply the standard deviation of a number of measurements made on the same person. Bland and Altman actually recommend that it should be referred to as the within-subject standard deviation to make this clear (although I think SEM is so well established now that this is a battle not worth fighting). If you understand what a standard deviation is and how it represents variability in measurements from different people (and everyone with even the most basic interest in clinical measurement really should) then you should also understand what the SEM is and how it represents variability within measurements taken on the same person. In a very real sense it is the SEM that is the primary measure of repeatability and the ICC should be seen as a derivative of it rather than vice versa.

Most importantly, if you know how to calculate a standard deviation (whether with pencil and paper, a calculator, or a spreadsheet) then you already know how to calculate the SEM. You just use the same equation to calculate the SD of a number of measurements made on the same person rather than those made on a number of different people. If the measurements have been made by a number of different assessors working in a particular gait lab then the SEM can be taken as representative of the lab as a whole. If they have all been made by the same assessor then they are only really valid when that individual is making the measurements.
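As a minimal sketch of that first step (the numbers here are invented for illustration and aren’t taken from any of the material linked in this post): for a single person, the SEM is just the ordinary sample standard deviation of their repeat measurements.

```python
from statistics import stdev

# Five repeat measurements of the same variable on the same person
# (hypothetical values, in degrees, purely for illustration).
repeats = [54.0, 57.5, 52.0, 56.0, 55.5]

# The within-subject standard deviation is calculated exactly like any
# other sample standard deviation.
within_subject_sd = stdev(repeats)
print(f"Within-subject SD (SEM) for this person: {within_subject_sd:.1f} degrees")
```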

If you make measurements on more than one person (and you should in any well designed repeatability study) then you can calculate the within-subject standard deviation for each person, and you will find that this varies a little from person to person. This is where the only mildly complicated step in the calculations comes in: the overall SEM is the root mean square average of these within-subject standard deviations (rather than their simple arithmetic mean).
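Continuing the sketch with made-up data for three people (again, these are not the examples from the spreadsheets), the pooling step looks like this:

```python
from math import sqrt
from statistics import stdev

# Repeat measurements for three different people (hypothetical data, degrees).
subjects = {
    "A": [54.0, 57.5, 52.0, 56.0],
    "B": [61.0, 63.5, 60.0, 64.5],
    "C": [48.5, 47.0, 50.5, 49.0],
}

# Within-subject standard deviation for each person.
within_sds = [stdev(values) for values in subjects.values()]

# The overall SEM is the root mean square average of these SDs,
# not their simple arithmetic mean.
sem = sqrt(sum(sd**2 for sd in within_sds) / len(within_sds))
print(f"Overall SEM: {sem:.1f} degrees")
```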

Just to show how straightforward the calculations are I’ve prepared a document outlining how to do the sums, which you can download at this link. All the data, figures and calculations for the examples are also available in these two Excel spreadsheets (here and here). If you want to listen to a more general talk about repeatability studies then there is one on my YouTube channel which uses the same examples. This is a recording of an open virtual classroom giving publicity to our MSc in Clinical Gait Analysis by distance learning so you’ll have to listen to a couple of minutes of sales pitch before you get to the interesting bit!

PS Apologies to some of my recent students who probably wish they had had access to these resources a long time ago!

Can U C thru the ICC?

This post is really a follow-up to the rant I had about psychometrics about a month ago. Again it’s prompted by preparing some material on Measurement Theory for our Masters programme. It focusses on the use of reliability indices for assessing the variability associated with measurements. Needless to say, reliability indices are a central feature of the psychometric approach.

The more I think about these the more worked up I get. How can something so useless be so widely implemented? The main problem I have with the indices is that they are almost impossible to make any sense of. Fosang et al. (2003) reported an interrater intra-class correlation coefficient (ICC) for the popliteal angle of 0.78. What on earth does this mean? According to Portney and Watkins (Portney & Watkins, 2009) this rates as “good”. How good? If I measure a popliteal angle of 55° for a particular patient how do I use the information that the ICC is 0.78? Perhaps even more important, if another assessor measures it to be 60° a few weeks later, how do we interpret that?

What is even more frustrating is that there is a far superior alternative, the standard error of measurement (SEM; don’t confuse it with the standard error of the mean, which sounds similar but is something entirely different). This expresses the variability in the same units as the original measure. It is essentially a form of the standard deviation, so we know that 68% of repeat measures are likely to fall within ± one SEM of the true value. Fosang et al. also report that the SEM for the popliteal angle is 6.8°. Now if we measure a popliteal angle of 55° for a particular patient we have a clear idea of how accurate our measurement is. We can also see that the difference of 5° in the two measurements mentioned above is less than the SEM and there is thus quite a reasonable possibility that the difference is simply a consequence of measurement variability rather than of any deterioration in the patient’s condition. (Rather depressingly we need a difference of nearly 3 times the SEM to have 95% confidence that the difference in two such measurements is real.)
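To put numbers on that (a rough sketch; the only figure taken from Fosang et al. is the SEM of 6.8°): the difference between two measurements has a standard deviation of √2 × SEM, so 95% confidence requires a difference of about 1.96 × √2 ≈ 2.77 times the SEM.

```python
from math import sqrt

sem = 6.8  # SEM for the popliteal angle (degrees), as reported by Fosang et al. (2003)

# About 68% of repeat measurements fall within one SEM of the true value.
print(f"68% of repeats fall within +/- {sem:.1f} degrees")

# A difference between two measurements has SD = sqrt(2) * SEM, so the
# threshold for 95% confidence that a difference is real is 1.96 * sqrt(2) * SEM.
threshold_95 = 1.96 * sqrt(2) * sem
print(f"Difference needed for 95% confidence: {threshold_95:.1f} degrees")
```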

Quite often the formula for the SEM is given as

SEM = SD√(1-ICC).

This suggests that the SEM is a derivative of the ICC, which is quite misleading. The SEM is quite easy to calculate directly from the data and should really be seen as the primary measure of reproducibility, with the ICC as the derivative measure:

ICC = 1 - (SEM/SD)²
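As a quick check that the two formulas really are inverses of each other (the SD here is hypothetical; the ICC of 0.78 is simply of the order reported by Fosang et al.):

```python
from math import sqrt

sd = 12.0   # between-subject SD of the sample (hypothetical)
icc = 0.78  # intra-class correlation coefficient

sem = sd * sqrt(1 - icc)        # SEM = SD * sqrt(1 - ICC)
icc_back = 1 - (sem / sd) ** 2  # ICC = 1 - (SEM / SD)^2

print(f"SEM = {sem:.2f}, ICC recovered from it = {icc_back:.2f}")
```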

There are at least six different varieties of the ICC, representing different models for exactly how reliability is defined. Although the differences in the models appear quite subtle, the ICCs calculated on the basis of the different models vary considerably (see pages 592-4 of Portney & Watkins, 2009 for a good illustration of this). It is quite common to find publications which don’t even tell you which model has been used.

Simplifying a little, the ICC is defined as the ratio of the variability arising from true differences in the measured variable between individuals in the sample (variance = σT²) to the total variability, which is the sum of the true variability and the measurement error (variance = σT² + σE²), thus

ICC = σT² / (σT² + σE²)

Unfortunately this means that the ICC doesn’t just reflect the measurement error but also the characteristics of the sample chosen. If the sample you choose has a large range of true variability then you will get a higher ICC even if the measurement error is exactly the same. This means that, even if you can work out how to interpret the ICC clinically, you can only do so sensibly with an ICC calculated from a sample that is typical of your patient population. It is nonsensical, for example, to assess ICC from measurements on a group of healthy individuals (which is common in the literature because it is generally easier) and then apply the results for a particular patient group.
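A minimal illustration of that point with made-up variances: keep the measurement error fixed and change only how spread out the sample is.

```python
# Same measurement error (sigma_E) applied to a narrow and a wide sample (sigma_T).
# The values are arbitrary and chosen only to show the effect.
sigma_e = 5.0
for sigma_t in (5.0, 15.0):
    icc = sigma_t**2 / (sigma_t**2 + sigma_e**2)
    print(f"sigma_T = {sigma_t:4.1f}  ->  ICC = {icc:.2f}")
```

The error is identical in both cases, yet the ICC jumps from 0.50 to 0.90 purely because the second sample is more heterogeneous.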

Luckily there is a safeguard here in that, for most measures we are interested in, the true variability in a group of healthy individuals is lower than that in the patients, so the ICC calculated from the healthy individuals is likely to be a conservative estimate of the ICC for the patient group.

Interpretation of the ICC is generally based on descriptors. Fleiss (1986) suggested that an ICC in the range 0.4 – 0.75 was good and over 0.75 was excellent. Portney and Watkins (2009) are a little more conservative, regarding values below 0.75 as poor to moderate and those above 0.75 as good. In their latest edition, however, they do suggest that “for many clinical measurements, reliability should exceed 0.90 to ensure reasonable validity [sic]”.

It is possible to do a little maths to explore these ratings. Using the formula above we can calculate the ICC for different values of the measurement error (σE), expressed as a percentage of the standard deviation of the true variability within the sample (σT).

[Table: ICC corresponding to the measurement error (σE) expressed as a percentage of the true between-subject variability (σT)]
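A sketch of that calculation (the particular ratios listed here are my own choice, not necessarily those in the original table):

```python
# ICC = sigma_T^2 / (sigma_T^2 + sigma_E^2).  Writing r = sigma_E / sigma_T
# gives ICC = 1 / (1 + r^2), so only the error as a fraction of the true
# between-subject SD matters.
for percent in (10, 20, 33, 50, 58, 75, 100):
    r = percent / 100
    icc = 1 / (1 + r**2)
    print(f"error = {percent:3d}% of true SD  ->  ICC = {icc:.2f}")
```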

You can see that even if the measurement error is the same size as the true variability in the sample studied the ICC is still 0.5, so Fleiss’ early suggestion that an ICC as low as 0.4 represents good reliability is a little suspicious. On his scale the excellent rating starts at an ICC of 0.75, which corresponds to the measurement error still being well over half (nearly 60%) of the standard deviation of the true variability. That doesn’t sound particularly good to me, let alone excellent! Even Portney and Watkins’ cut-off of 0.90 for clinical measurements still allows the measurement error to be a third of the true variability. All in all I’d suggest that either set of descriptors is extremely flattering.

So the ICC is ambiguously defined, difficult to interpret, compounds reproducibility with sample heterogeneity, and has a clearly superior alternative in the SEM. Why on earth is it so popular? I’d suggest the reason lies in the table above: if your reproducibility statistics aren’t very good then put them through the ICC calculator and you’ll feel a great deal better about them. Award yourself an ICC of just over 0.75 and feel that nice warm glow inside as you allow Fleiss to label you as excellent!

PS

You might ask how we ever got into this situation and I suspect the answer may lie in the original paper of Shrout and Fleiss (1979) and the example they use to discuss the use of the ICC:

“For example Bayes (1972) desired to relate ratings of interpersonal warmth to nonverbal communication variables …”

Does it surprise us that measures developed to quantify the reliability of variables such as interpersonal warmth and nonverbal communication may not be directly applicable to clinical biomechanics? Perhaps interdisciplinary collaboration can be taken a little too far.


Fleiss, J. L. (1986). The Design and Analysis of Clinical Experiments. New York: John Wiley & Sons.

Fosang, A. L., Galea, M. P., McCoy, A. T., Reddihough, D. S., & Story, I. (2003). Measures of muscle and joint performance in the lower limb of children with cerebral palsy. Dev Med Child Neurol, 45(10), 664-670.

Portney, L. G., & Watkins, M. P. (2009). Foundations of clinical research: applications to practice. (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin, 86, 420-428.