Measurements made by different assessors

Many measurements are quite objective and do not depend much on who makes them (the person making the measurement is generally referred to as the assessor). Taking someone’s weight on an electronic scale with a digital readout, for example, is likely to give the same result whoever makes the measurement. Many clinical measurements, however, depend on how the measurement is made, and the results may therefore depend on the assessor. Assessing the passive range of motion of the hip or knee joint, for instance, might depend, amongst other things, on just how hard the assessor pushes.

Such measurements may thus vary from assessor to assessor in both their average value and their variability. One assessor might consistently apply more force than another in assessing passive joint range, resulting in measurements that are consistently higher. Alternatively, one assessor may apply the force more consistently than another, leading to less variable measurements. In interpreting measurements of this nature it can thus be useful to understand how different assessors make the same measurements.

Essentially the same approach as we have developed so far can be used, except that rather than one person making a series of repeat measurements, we arrange for a number of different assessors each to make a number of repeat measurements. Again we will limit the discussion to balanced designs, those in which a given number of assessors each make the same number of measurements.

We can then generate a number of different indicators of measurement variability:

  • The overall SEM is simply the SEM as we have calculated it above, without regard to which assessor made the measurements. This reflects the variability that would be expected if an assessor were chosen at random every time a measurement was made.
  • An individual SEM is the SEM calculated as above but using only the measurements made by an individual assessor (we will thus end up with one individual SEM for each assessor). This reflects the variability that would be expected if all measurements were made by that particular assessor.
  • The averaged individual SEM, generally calculated as an RMS (root-mean-square) average of the individual SEMs, will be a better estimate of how assessors perform in general than the value for any individual assessor.
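As a concrete illustration, the three indicators above might be computed as follows for a small balanced design. The data are invented for illustration, and the SEM is taken here to be the standard deviation of repeat measurements, as assumed from the earlier part of the text:

```python
import numpy as np

# Hypothetical balanced design: each row is one assessor, each column one
# repeat measurement of the same quantity (illustrative numbers only).
measurements = np.array([
    [101.0, 103.0, 102.0, 104.0],  # assessor A: pushes harder on average
    [ 98.0,  99.0,  97.0,  98.0],  # assessor B: lower, but very consistent
    [100.0, 105.0,  99.0, 103.0],  # assessor C: more variable
])

# Overall SEM: pooled SD of all measurements, ignoring who made them.
overall_sem = measurements.std(ddof=1)

# Individual SEMs: one SD per assessor, from that assessor's measurements only.
individual_sems = measurements.std(axis=1, ddof=1)

# Averaged individual SEM: RMS average of the individual SEMs.
averaged_individual_sem = np.sqrt(np.mean(individual_sems ** 2))
```

For these numbers the overall SEM (about 2.63) exceeds the averaged individual SEM (about 1.82), because the overall SEM also absorbs the systematic differences between assessors.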

Note: This is a different (but related) issue to that of quantifying how much of the overall SEM is attributable to within-assessor variability (which is given by the averaged individual SEM as described above) and how much to between-assessor variability (which has not been calculated as part of this analysis).
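The relationship hinted at in this note can be sketched using the law of total variance: for a balanced design, and ignoring small-sample corrections, the squared overall SEM is approximately the squared within-assessor (averaged individual) SEM plus a squared between-assessor component. The data below are illustrative, not part of the analysis above:

```python
import numpy as np

# Hypothetical balanced design (rows = assessors, columns = repeat
# measurements; illustrative numbers only).
measurements = np.array([
    [101.0, 103.0, 102.0, 104.0],
    [ 98.0,  99.0,  97.0,  98.0],
    [100.0, 105.0,  99.0, 103.0],
])

overall_var = measurements.var(ddof=1)                # squared overall SEM
within_var = measurements.var(axis=1, ddof=1).mean()  # squared averaged individual SEM

# Approximate between-assessor component (small-sample corrections ignored),
# clipped at zero in case sampling noise makes the difference negative.
between_sem = np.sqrt(max(overall_var - within_var, 0.0))
```

This is only a sketch of the decomposition; a proper analysis of between-assessor variability would use variance-components methods (e.g. ANOVA) rather than this simple subtraction.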

Next page: A note for gait analysts