Editorial: How Does CORR® Evaluate Survey Studies?

President Ronald Reagan once opined that “the nine most terrifying words in the English language are ‘I'm from the government and I'm here to help’” [7]. Of course, he said this more than 30 years ago, long before nightly dinnertime interruptions by telemarketers and email spam. If the Gipper were still around, his nine most feared words today might be: “We are conducting a brief survey to better understand …”
You'll find most of us hiding under our desks when these requests come our way, whether by phone or email.
But as editors at Clinical Orthopaedics and Related Research®, there is no hiding from the fact that we receive many research studies based on email surveys, postal surveys, surveys of large single- or multispecialty collaborative groups, and surveys of society members. While some of them may be interesting, only a much smaller number are important and robust enough to justify the attention of CORR’s readers.
We assess studies of this design with the needs of those readers in mind. The studies we publish will, in general, share three traits:
First, these studies should tell readers something important that they did not know before. Simply summarizing what some group of experts (or community practitioners) prefers is, generally speaking, not of sufficient interest to publish here. Most of the time, practitioners are aware of the available options, and they usually also know when multiple options are in common use. The goal of a high-quality general-interest journal like CORR® should be to determine which option the best evidence supports; practice-pattern surveys and reports of provider preferences are at best Level-V evidence, and as such, represent a poor basis for choosing a therapeutic approach. But we can imagine—and have published—exceptions to this. We recently published a practice-pattern survey demonstrating that an important element of fracture care in practice deviates from solid clinical evidence [5]; in the future, we might also consider practice-pattern surveys that present unexpected or counterintuitive findings, but by definition such studies are likely to be rare. By contrast, we especially enjoy survey studies that cause us to second-guess what we thought we knew, and we have published a number of these lately; a few recent examples are “Do Surgeons Treat Their Patients Like They Would Treat Themselves?” [3], “High Rates of Interest in Sex in Patients with Hip Arthritis” [4], “New Total Knee Arthroplasty Designs: Do Young Patients Notice?” [6], and “Do Orthopaedic Surgeons Acknowledge Uncertainty?” [8].
Second, the group surveyed must represent some well-defined larger group of interest. While the availability of free online survey tools like SurveyMonkey (www.surveymonkey.com) has made it easier to conduct anything from an intradepartmental questionnaire to an international assessment of expert opinion, these tools do not change the fact that high-quality social-science research is generally conducted by qualified social scientists. We would be surprised if a sociologist could develop and evaluate a surgical approach to the shoulder; it is no more reasonable to assume that a shoulder surgeon can conduct a valid survey study without expert guidance. A key element of survey-study design is defining the group of interest and finding a representative cohort within that group to query; to do this, it often helps to have at least one member of the research team with particular expertise in survey design. CORR is an international journal, and so we assess whether the surveys we publish address a need of a large enough subset of our readers.