Correspondence  |   August 2016
Current Quality Registries Lack the Accurate Data Needed to Perform Adequate Reliability Adjustments
Author Notes
  • David Geffen School of Medicine at UCLA, Los Angeles, California (I.S.H.).
  • (Accepted for publication March 30, 2016.)
Article Information
Anesthesiology, August 2016, Vol. 125, 422–423.
To the Editor:
We would like to thank Drs. Wakeam and Hyder1 for their excellent discussion and description of reliability adjustment in the recent issue of Anesthesiology. The authors correctly highlight the important role that the statistical analysis of data submitted to the various registries can play in the ranking of institutions. This is particularly important now that the Centers for Medicare and Medicaid Services requires providers to participate in a Physician Quality Reporting System2 using a Qualified Clinical Data Registry. These requirements are a precursor to altering physician payments based upon measures of care quality.
We would like to raise the issue of another area of “reliability”: the reproducibility of the underlying data themselves. While some registries, such as the National Surgical Quality Improvement Program, perform periodic data audits and have well-described accuracy thresholds,3 many do not. In fact, some registries, including the Anesthesia Quality Institute and the American Society of Anesthesiologists Perioperative Surgical Home initiative, allow for widely divergent methods of data collection, yet lump these data together on the assumption that they are comparable. For example, one group might define postoperative nausea and vomiting based on postanesthesia care unit antiemetic administration, while another bases it on direct patient interviews. Other registries, such as some maintained by the National Quality Forum, utilize administrative claims data, which have been shown to be discordant with data collected by other methods.4–8 Despite these very different methods of data collection, all of these examples are considered equally valid national quality registries.
We find the prospect that the underlying data used in these registries may be inconsistent worrisome. Ideally, the data on patients in various registries should be identical regardless of the method by which they were collected. At the very least, even if the data are not identical between registries, it is critical that within a registry the data from various sites be of equal quality and share the same definitions, something the major registries in our own specialty lack. If the data inputs are not consistent, we are left asking which data to believe, and we must conclude that the risk adjustment models may be unable to control for patient-specific risk factors the way they are intended.
It seems inevitable that in the near future, providers will be compared to each other and paid partially based on these comparisons. This concept rests on the unverified supposition that we can effectively compare patients across institutions. On the basis of the current landscape, we find this supposition unlikely, and we are concerned that using these inadequate tools may lead to incorrect conclusions. Drs. Wakeam and Hyder are absolutely correct that “big data” require more than assembling a large sample size and assuming that the “N” will solve the problem; they also require a thorough understanding of statistics and attention to detail. Unfortunately, it seems that the goals of some of the quality registries are outpacing the science behind them.
Competing Interests
The authors declare no competing interests.
Ira S. Hofer, M.D., Yannick Le Manach, M.D., Ph.D., Maxime Cannesson, M.D., Ph.D. David Geffen School of Medicine at UCLA, Los Angeles, California (I.S.H.).
References
1. Wakeam E, Hyder JA: Reliability of reliability adjustment for quality improvement and value-based payment. Anesthesiology 2016; 124:16–8
2. Manchikanti L, Hirsch JA: Regulatory burdens of the Affordable Care Act. Harvard Health Policy Rev 2012; 13:9–12
3. Shiloach M, Frencher SK Jr, Steeger JE, Rowell KS, Bartzokis K, Tomeh MG, Richards KE, Ko CY, Hall BL: Toward robust information: Data quality and inter-rater reliability in the American College of Surgeons National Surgical Quality Improvement Program. J Am Coll Surg 2010; 210:6–16
4. McIsaac DI, Gershon A, Wijeysundera D, Bryson GL, Badner N, van Walraven C: Identifying obstructive sleep apnea in administrative data: A study of diagnostic accuracy. Anesthesiology 2015; 123:253–63
5. Quach S, Blais C, Quan H: Administrative data have high variation in validity for recording heart failure. Can J Cardiol 2010; 26:306–12
6. Quan H, Parsons GA, Ghali WA: Validity of information on comorbidity derived from ICD-9-CM administrative data. Med Care 2002; 40:675–85
7. Pasquali SK, He X, Jacobs JP, Jacobs ML, Gaies MG, Shah SS, Hall M, Gaynor JW, Peterson ED, Mayer JE, Hirsch-Romano JC: Measuring hospital performance in congenital heart surgery: Administrative versus clinical registry data. Ann Thorac Surg 2015; 99:932–8
8. Lawson EH, Louie R, Zingmond DS, Brook RH, Hall BL, Han L, Rapp M, Ko CY: A comparison of clinical registry versus administrative claims data for reporting of 30-day surgical complications. Ann Surg 2012; 256:973–81