Editorial Views | May 2016
Reporting of Observational Research in Anesthesiology: The Importance of the Analysis Plan
Author Notes
  • From the Department of Anesthesiology, Wake Forest University School of Medicine, Winston-Salem, North Carolina (J.C.E.); Department of Anesthesiology, University of Michigan School of Medicine, Ann Arbor, Michigan (S.K.); and Department of Anesthesiology and Critical Care, Massachusetts General Hospital, Harvard University Medical Center, Boston, Massachusetts (T.T.H.).
  • Daniel I. Sessler, M.D., served as Handling Editor for this manuscript.
  • Accepted for publication February 2, 2016.
  • Address correspondence to Dr. Eisenach:
Anesthesiology, May 2016, Vol. 124, 998–1000. doi:10.1097/ALN.0000000000001072

“... observational studies play a critical role in medical research. ... The purpose of this change in [reporting] requirements ... is to enhance the trust, validity, consistency, and confidence in their key findings.”

Image: A. Johnson.
We have recently published changes in Journal policy regarding reporting of randomized controlled clinical trials1  and preclinical studies.2  This editorial describes a change in Journal policy regarding observational studies. Specifically, we strongly encourage authors of observational studies to develop a statistical analysis plan before accessing data, and we strongly encourage reviewers and readers to consider these plans when evaluating the reliability of a study's conclusions. Going forward, we require that authors transparently report in the article whether an analysis plan was developed before accessing the data.
Observational studies complement randomized trials by examining real-world clinical practice, often across multiple sites, regions, or countries, and frequently generate novel hypotheses that, when formally tested, contribute to a better understanding of pathophysiology, therapy, and clinical care. In many situations, an observational study is the only practical design because certain interventions cannot ethically be prospectively randomized and controlled. Observational studies take many forms, including analyses of existing databases, post hoc analyses of data collected prospectively during interventional or observational trials, cross-sectional studies, case-control designs, case series, cohort designs, and any design in which measurements are collected on a process of interest that is left free to vary.
In comparison to randomized controlled clinical trials, observational studies have many strengths, but also some weaknesses. There are many well-known threats to the interpretation of such designs that each stem from potential bias in any observed association that arises from a process unrelated to the hypothesis of interest (e.g., confounding, selection bias). Additionally, there is typically uncertainty surrounding the role of any unmeasured variables/processes that might influence the results in such designs. Because of the many threats to the interpretations of observational research, a key element of such studies is the development of a carefully crafted plan of analysis that prospectively defines variables of interest (primary outcome, exposure variables, and confounder variables), subgroup analyses, sensitivity analyses, and selection and adjustment methods to reduce the impact of biases on the interpretation of the results. The development of this plan of analysis is best done before examination of the data, when the study is being designed. When such designs are rigorous and prospectively specified, observational studies have in fact been found not to systematically overestimate treatment effects as compared to randomized trials.3  However, failure to prespecify a plan of analysis can lead to several unwanted consequences, regardless of the type of study.
When the major elements of the plan of analysis are not predesigned, a serendipitous discovery of a particularly surprising finding is more likely. Searching the data without a predefined outcome and predefined methods can trick even seasoned investigators into thinking that any uncovered results are plausible or robust. Surprising findings have at their very basis a low prior probability of being true (i.e., that is why they are surprising), yet the very act of finding them increases the perceived probability of their “true” existence until one fully considers the context of how they were observed. For example, the confidence in finding a single association for a prespecified hypothesis (i.e., high prior probability) is much greater than finding an association when it was the most surprising finding of several hundred evaluated (i.e., low prior probability). The medical literature has only recently demonstrated an increasing self-awareness about the dire consequences that may result from the promotion of surprising findings that were derived through poor observational or interventional research methods.4  In most cases, these spurious findings fail to replicate in future studies.5 
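The multiplicity problem described above can be made concrete with a short simulation (a sketch using only the Python standard library; the sample size, the 300-hypothesis count, and the 0.05 threshold are illustrative assumptions, not figures from this editorial). Every hypothesis tested below is null by construction, yet a search across hundreds of them reliably turns up "significant" associations:

```python
import random
from statistics import NormalDist

random.seed(1)
norm = NormalDist()

def null_pvalue(n=100):
    """Two-sided z-test comparing two groups drawn from the SAME
    distribution, so the null hypothesis is true by construction."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Difference in means has standard error sqrt(2/n) for unit-variance data.
    z = (sum(a) / n - sum(b) / n) / (2 / n) ** 0.5
    return 2 * (1 - norm.cdf(abs(z)))

# One prespecified hypothesis: a spurious "significant" result is unlikely.
single = null_pvalue()

# Several hundred exploratory hypotheses: at a 0.05 threshold, roughly
# 0.05 * 300 = 15 spurious "discoveries" are expected even with no real effects.
dredged = [null_pvalue() for _ in range(300)]
hits = sum(p < 0.05 for p in dredged)
print(f"false positives among 300 null tests: {hits}")
```

The most "surprising" of these 300 associations is exactly the kind of low-prior-probability finding the editorial warns about: impressive in isolation, and unlikely to replicate.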
Although failure to prespecify a plan of analysis may represent only one cause of the failure of many findings to replicate, there is growing concern that failure to report the pursuit of multiple post hoc hypotheses, or data dredging, represents a remediable cause.6,7  In this age of numerous, large, easily searchable data sets—when multiple highly technical analyses can be done at the click of a single graphic user interface within a highly regarded statistical platform—it behooves our community to encourage practices that are not always easy to do, particularly for an inexperienced investigative team standing in front of an existing data set with an exciting and well-intentioned idea. It is for this reason that Anesthesiology, in consultation with leaders in observational studies in the specialty and with its Editorial Board, is requiring transparent reporting of whether a predefined statistical analysis plan had been established.
Anesthesiology, like most journals in the specialty, encourages authors to consult the reporting guideline for observational studies, Strengthening the Reporting of Observational Studies in Epidemiology (STROBE),8  and its extensions for various types of observational studies, and enforces, using automated software, reporting of key aspects recommended in this guideline. The quality of reporting, as judged in part by adherence to STROBE, has been suggested to be reasonably good in publications in the specialty,9  so one may wonder at the need for yet more reporting requirements. The problem is this: STROBE only tangentially addresses the statistical analysis plan, requiring reporting of any prespecified hypotheses (STROBE Item 3), how the sample size was calculated (STROBE Item 10), and the statistical methods used (STROBE Item 12). We, like other leading journals,6,7  are hereby requiring authors to provide additional details on these essential elements of study design and asking reviewers to pay attention to them.
We require that randomized clinical trials be registered in a publicly available site before patient enrollment. Many investigators also register observational studies, although we do not require this at this time. However, we strongly encourage the development of a robust analytical plan and the reporting of the date on which that plan was submitted to a peer-review or registration forum. Such a forum may be a departmental, institutional, or multicenter research group that reviews and critiques proposals; submission of an analytic plan to an ethics committee or institutional review board is also accepted as prospective documentation of an analytical plan. The Journal's reviewers and Editors understand that secondary analyses of existing data sets may require adjustment of an analytical plan as the data are reviewed. However, a prospective plan that establishes fixed analytical elements, such as the primary outcome and patient inclusion/exclusion criteria, is valued by the Journal as an indicator of a mature and hypothesis-driven analytical process.
The essential elements of the statistical analysis plan should be reported, beginning with the definition of the primary outcome. Indicate whether the primary outcome definition was established a priori at initiation of the analysis or post hoc during data cleaning and statistical analysis. Indicate whether subgroup or sensitivity analyses were established a priori or post hoc. For studies evaluating a treatment effect, indicate whether a clinically meaningful effect size was defined, either a priori or post hoc. If an a priori effect size cannot be defined because of a lack of empirical data or previous literature to guide the choice, state this clearly. Finally, indicate whether the same or largely overlapping data sets have previously been examined for similar outcome measures by the current study's authors.
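One way to see how these disclosure items fit together is as a simple structured record (a hypothetical sketch for illustration only; the field names and example values are assumptions, not a form prescribed by the Journal or by STROBE):

```python
from dataclasses import dataclass

@dataclass
class AnalysisPlanDisclosure:
    """Illustrative record of the disclosure items described above;
    not an official Journal or STROBE form."""
    primary_outcome: str
    primary_outcome_prespecified: bool       # a priori vs. post hoc
    subgroups_prespecified: bool             # subgroup analyses
    sensitivity_prespecified: bool           # sensitivity analyses
    effect_size_timing: str                  # "a priori", "post hoc", or "not defined"
    prior_overlapping_analyses: bool         # same/overlapping data, similar outcomes

# Hypothetical example values:
disclosure = AnalysisPlanDisclosure(
    primary_outcome="30-day postoperative mortality",
    primary_outcome_prespecified=True,
    subgroups_prespecified=False,
    sensitivity_prespecified=True,
    effect_size_timing="a priori",
    prior_overlapping_analyses=False,
)
```

A checklist of this kind, completed before the data are accessed, is one practical way for an investigative team to document that the fixed elements of the plan preceded the analysis.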
The purpose of this clarification of STROBE recommendations is not to generate unnecessary regulatory hurdles for investigators. As noted, observational studies play a critical role in medical research. They complement randomized trials in several important ways and generate novel hypotheses based on surprising findings. In some instances, they are the only feasible study design. The purpose of this change in requirements for this important class of research is to enhance the trust, validity, consistency, and confidence in their key findings.
Support was provided solely from institutional and/or departmental sources.
Competing Interests
Dr. Eisenach is the Editor-in-Chief of Anesthesiology, and his institution receives salary support from the American Society of Anesthesiologists (ASA) for this position. Dr. Houle is the statistical Editor of Anesthesiology, and his institution receives salary support from the ASA for this position. Dr. Kheterpal declares no competing interests.
References
1. Eisenach JC, Houle TT: Replication to advance science: Changes in Anesthesiology. Anesthesiology 2014; 121:209–11 [Article] [PubMed]
2. Eisenach JC, Warner DS, Houle TT: Reporting of preclinical research in Anesthesiology: Transparency and enforcement. Anesthesiology 2016; 124:763–5 [Article] [PubMed]
3. Concato J, Shah N, Horwitz RI: Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000; 342:1887–92 [Article] [PubMed]
4. Ioannidis JP: Why most published research findings are false. Chance 2005; 18:40–7 [Article]
5. Schoenfeld JD, Ioannidis JP: Is everything we eat associated with cancer? A systematic cookbook review. Am J Clin Nutr 2013; 97:127–34 [Article] [PubMed]
6. Thomas L, Peterson ED: The value of statistical analysis plans in observational research: Defining high-quality research from the start. JAMA 2012; 308:773–4 [Article] [PubMed]
7. The PLOS Medicine Editors: Observational studies: Getting clear about transparency. PLoS Med 2014; 11:e1001711 [Article] [PubMed]
8. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. Lancet 2007; 370:1453–7 [Article] [PubMed]
9. Guglielminotti J, Dechartres A, Mentré F, Montravers P, Longrois D, Laouénan C: Reporting and methodology of multivariable analyses in prognostic observational studies published in 4 anesthesiology journals: A methodological descriptive review. Anesth Analg 2015; 121:1011–29 [Article] [PubMed]