Correspondence | March 2019
When Large Administrative Databases Provide Less Relevant Information than Randomized Studies
Author Notes
  • Department of Emergency Medicine and Surgery, Hôpital Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris and Sorbonne Université, UMR Inserm 1166, IHU ICAN, Paris, France (B.R.). bruno.riou@aphp.fr
  • (Accepted for publication November 16, 2018.)
Article Information
Anesthesiology, March 2019, Vol. 130, 514-515. doi:10.1097/ALN.0000000000002569
We read with interest the retrospective study by Wasserman et al.,1 which used a national administrative database to assess the impact of intravenous acetaminophen on perioperative opioid utilization and outcomes in patients undergoing open colectomy.
Research based on administrative data sets can provide information of major importance for clinical practice, but the interpretation of results is difficult, and causal inference is constrained by intrinsic methodologic limitations. In this study,1 we noted three main limitations with a potential impact on the interpretation of the results: (1) the validity of the main outcome data (morphine consumption) is questionable compared with that of monitored clinical studies, (2) the doses of acetaminophen administered in the treated group were heterogeneous, and (3) the estimate of the treatment effect is likely biased by uncontrolled confounding factors.

The sensitivity analysis provided by the authors is not sufficient to yield an unbiased estimate of the treatment effect. Because the patients who received acetaminophen differed markedly from those who did not, a propensity score analysis2 or another sophisticated multivariable matching procedure3 should have been performed to minimize bias. Despite the large sample size (n = 181,640),1 we believe that the estimate of the average treatment effect is not robust enough to support any practice recommendation based on this study. The amount of new information is therefore relatively limited.
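As an illustration only, the minimal sketch below shows the general form of the propensity score approach we refer to, written in Python on simulated data. The covariates, effect sizes, and variable names (age, asa_class, laparoscopic, iv_acetaminophen, opioid_mme) are entirely hypothetical and are not drawn from the study in question; the sketch only demonstrates the technique of modeling treatment assignment from baseline covariates, matching treated patients to controls with similar scores, and estimating the treatment effect within the matched sample.

```python
# Hypothetical sketch of a propensity score matching analysis.
# All covariates, names, and numbers are simulated, not the authors' data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000

# Simulated cohort: treatment assignment depends on baseline covariates,
# mimicking the confounding structure of an administrative database.
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "asa_class": rng.integers(1, 5, n),
    "laparoscopic": rng.integers(0, 2, n),
})
logit = -3 + 0.03 * df["age"] + 0.4 * df["asa_class"] - 0.5 * df["laparoscopic"]
df["iv_acetaminophen"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df["opioid_mme"] = (
    200 - 15 * df["iv_acetaminophen"] + 2 * df["asa_class"] + rng.normal(0, 30, n)
)

# 1. Estimate the propensity score from baseline covariates.
covariates = ["age", "asa_class", "laparoscopic"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["iv_acetaminophen"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. 1:1 nearest-neighbor matching on the propensity score (with replacement).
treated = df[df["iv_acetaminophen"] == 1]
control = df[df["iv_acetaminophen"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_control = control.iloc[idx.ravel()]

# 3. Average treatment effect on the treated: difference in outcome means
#    between treated patients and their matched controls.
att = treated["opioid_mme"].mean() - matched_control["opioid_mme"].mean()
print(f"Estimated ATT (matched): {att:.1f} morphine milligram equivalents")
```

In a real analysis, covariate balance after matching would of course need to be checked (e.g., with standardized mean differences) before interpreting the estimate.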