Correspondence  |   December 2018
In Reply
Author Notes
  • The American Board of Anesthesiology, Raleigh, North Carolina (H.S.). huaping.sun@theABA.org
  • (Accepted for publication August 31, 2018.)
Anesthesiology December 2018, Vol. 129, 1191-1192. doi:10.1097/ALN.0000000000002463
We appreciate the interest in our publication1 and the opportunity to respond to these two Letters to the Editor.
Dr. Pivalizza and colleagues raise questions about our methodology and inclusion criteria, which we would like to clarify. Their first question concerned residents who did not take the in-training examination in their clinical base year and were therefore not accounted for in the analysis. In fact, two different models were employed: one for the change in in-training examination scores from the clinical base year to clinical anesthesia year 1, and another for the change from clinical anesthesia year 1 to year 2. The latter analysis (and our main conclusion) did not depend on whether residents had taken the in-training examination during their clinical base year.

Second, given the study question of in-training examination score increment, residents who did not take the in-training examination in both clinical anesthesia years 1 and 2 could not be analyzed. The concern raised was that residents who had failed the BASIC examination might leave training before taking the in-training examination in clinical anesthesia year 2, thereby biasing the composition of the cohort. We note that three failures of the BASIC examination are required for mandatory extension of training, and that in the 2013 cohort only 0.2% of residents failed twice. We therefore think it unlikely that this factor significantly affected the analysis.

Dr. Pivalizza and colleagues also question whether preparing for the BASIC examination may have distracted residents from preparing for the preceding in-training examination, lowering in-training examination performance at clinical anesthesia year 1 and biasing toward an increase in performance from clinical anesthesia year 1 to year 2. As shown in table 1 and figure 2 of our article,1 there is no evidence that the introduction of the staged examination system in the 2013 cohort was associated with lower in-training examination scores at clinical anesthesia year 1; indeed, the 2014 cohort had higher in-training examination scores at clinical anesthesia year 1.

Finally, it is our perspective that what constitutes a "small" effect size is a matter of interpretation. The in-training examination performance of clinical anesthesia year 2 residents after the introduction of the staged examination system was similar to that of clinical anesthesia year 3 residents in the traditional examination system; we leave it to readers to judge the significance of this finding.