I’ve just been invited to write a “Commentary” for Developmental Medicine and Child Neurology. This is one of those journals that publishes brief articles (I’ve been offered a 650-word and 5-reference limit) commenting on a recently accepted paper. For this particular journal the practice would appear to be to ask one of the reviewers to write the commentary. My first reaction was to feel flattered and start writing immediately … but then I remembered that I often read similar articles written by other people with a sense of annoyance and even indignation.
It takes a lot of hard work to conduct research and write it up to the point of acceptance for a major journal. I remember reading somewhere (haven’t a clue where I’m afraid) that the average clinical paper costs more than £100,000 to produce once you factor in full direct and indirect costs. Teams of authors spend a considerable time analysing the data and preparing and discussing successive drafts of their manuscripts. The final draft is then subject to careful scrutiny through peer review to ensure that the resulting product is a fair and considered report of the study and its implications. Given the effort that goes into the article itself, I’m not particularly comfortable with the idea that a single individual, who has probably only spent a couple of hours reviewing the submitted version, is given a platform to expound their views alongside those of all the individuals who have done the real work. That individual is often granted considerably more freedom than the original authors to comment on what he or she likes, whereas the original authors are constrained to comment only on the evidence provided within the paper. All in all it just doesn’t feel right.
I’ve held this opinion for some time but was particularly stung last year when we published a paper reporting a substantial clinical trial into the effects of progressive resistive strength training (PRST) on walking and mobility-related function in children with cerebral palsy (Taylor et al., 2013). This was the culmination of 7 years’ work on a project that recruited 44 participants to one of the largest randomised clinical trials on record of any physical intervention for cerebral palsy. Obtaining the quarter of a million dollar grant from the Australian National Health and Medical Research Council before we started was a substantial achievement in itself. The results were clear (if disappointing for clinicians): whilst PRST results in stronger muscles, it doesn’t appear to lead to any improvements in gait or other measures of mobility. Not only were the findings of that study unambiguous, but they substantiated the findings of the three previous (smaller) randomised clinical trials that had been published investigating similar interventions.
The published commentary, however, chose to damn the article with faint praise of its strengths and a focus on minor limitations. It went on to conjecture that the problem was that, although we had put in place a rigorous PRST programme, we hadn’t specifically “trained gait”. Whilst I can’t argue with this, I feel incensed that conjecture about what might work has been used to trump evidence of what clearly doesn’t. The commentary’s conclusion, that “therapists who look to evidence in the literature to design their interventions will not find this article useful”, is particularly galling. Even if the author of the commentary is correct that a combination of gait training and strength training is required for functional improvement, it must still be useful to know that strength training alone will not achieve this.
The situation is often exacerbated when the evidence in a paper goes against what the clinical community wants to hear. In the example I’ve cited we would all love to believe that adolescents with cerebral palsy can make substantial functional gains by going to the gym. As optimistic human beings (rather than dispassionate scientists) that’s what our team hoped we were going to provide evidence of when we embarked on the research. For similarly enthusiastic clinicians reading the two articles side by side it will be all too easy to allow the optimism of the conjecture to displace the reality of the evidence.
A particular problem of having commentary published alongside research articles in this fashion is that it can be cited in exactly the same manner. The informed reader will be suspicious of papers that are just one or two pages in length, but not everyone is an informed reader, and considerable caution is required to prevent “expert” conjecture entering the evidence base on the same terms as genuine research results. There is also the issue that the publishing model is one of review by peers. Inviting one of those peers (equals) to write a commentary implicitly elevates them to quite a different status. The invitation I received referred to an “authoritative background piece by an expert”.
Of course I can be fairly criticised for writing similar opinion pieces in this blog. My only defence is that this is very clearly a blog – it doesn’t make any pretence to be anything else. The banner picture across the top shouts out that these are “personal reflections”. I don’t suppose anyone has ever tried to cite what I’ve written for a journal article but if they do it will be very clear from the citation that these are no more than the ramblings of a bigoted biomechanist.
Another great post Richard. Sharing your personal thoughts and beliefs like you do on your blog is refreshing, informative and thought provoking; this blog is a rare pearl in my Feedly app and I hope you keep it up. I know you’re not a fan of publisher-pays open access journals, but it’s always interesting to read “expert” reviewers’ opinions on papers in the pre-publication history, especially when a paper gets sledged for minor omissions.
Well said Richard! Speculation on what could be found should never outweigh what is actually reported in the published article.