Correspondence  |   January 2008
The Use of Simulation Education in Competency Assessment: More Questions than Answers
  • Pamela J. Morgan, M.D., C.C.F.P., F.R.C.P.C.*
  • *Women’s College Hospital, University of Toronto, Toronto, Ontario, Canada.
Anesthesiology January 2008, Vol. 108, 168. doi:10.1097/01.anes.0000296642.84818.f3
In Reply:—
We would like to thank Dr. Edler for her interest in our article.1 First and foremost, in no way are we suggesting that simulation education be discarded, but, as Dr. Edler comments, there is a pressing need to ensure that we are assessing its impact using valid, reliable tools.
We also agree with Dr. Edler that generalizability is the way of the future and in fact is the analytic basis of our current ongoing study, designed to examine the psychometric properties of a newly developed obstetric team performance tool.
Our study used a sample size and design that would have made multifaceted generalizability analyses complicated at best. Indeed, a generalizability analysis would likely have been unstable: individuals were partially nested within teams, teams were nested within scenarios, and the number of actual performances was small for such an analysis. This is why we ultimately adopted the analytic techniques we did for the study in question.
However, rater was fully crossed with all of the dimensions that Dr. Edler described as other potential sources of variance, and it was the interrater reliability of the Human Factors Rating Scale that was the problem. Although we agree that other facets might add further error, there is no reason to believe that the interrater reliability would be better understood through the partitioning of other variances. With regard to the Human Factors Rating Scale, as Dr. Edler suggests, undoubtedly “something else is going on here.” However, based on our findings in the study, we have become substantially less interested in identifying that “something else” than we might otherwise have been.
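As an aside for readers less familiar with variance partitioning: the simplest generalizability design, performances fully crossed with raters, illustrates how rater variance and residual error are separated from true performance variance, and how a generalizability coefficient follows from the estimated components. The sketch below is purely illustrative; the scores are invented and are not drawn from our data.

```python
# Illustrative one-facet G-study (performances x raters, fully crossed).
# Scores are hypothetical; rows = performances, columns = raters.
scores = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 2, 3],
]

n_p = len(scores)      # number of performances
n_r = len(scores[0])   # number of raters

grand = sum(sum(row) for row in scores) / (n_p * n_r)
person_means = [sum(row) / n_r for row in scores]
rater_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]

# Sums of squares from the two-way ANOVA decomposition (no replication)
ss_p = n_r * sum((m - grand) ** 2 for m in person_means)
ss_r = n_p * sum((m - grand) ** 2 for m in rater_means)
ss_tot = sum((scores[p][r] - grand) ** 2
             for p in range(n_p) for r in range(n_r))
ss_res = ss_tot - ss_p - ss_r

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_res = ss_res / ((n_p - 1) * (n_r - 1))

# Estimated variance components (negative estimates set to zero by convention)
var_p = (ms_p - ms_res) / n_r            # true performance variance
var_r = max(0.0, (ms_r - ms_res) / n_p)  # rater leniency/severity variance
var_res = ms_res                         # person-by-rater interaction + error

# Relative generalizability coefficient for the mean of n_r raters
g_rel = var_p / (var_p + var_res / n_r)
print(f"var_p={var_p:.3f}, var_r={var_r:.3f}, "
      f"var_res={var_res:.3f}, G={g_rel:.3f}")
```

With more facets (team, scenario, and their nesting), the decomposition gains terms but follows the same logic, which is why an unbalanced, partially nested design with few performances yields unstable component estimates.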
Instead, we decided that it would ultimately be more profitable to start from scratch and develop a tool generated out of the practice and experience of health care teams, rather than spend our resources pursuing a better understanding of what we believe will ultimately prove to be the wrong tool.
Although we have independently come to the same opinions as Dr. Edler regarding the types of studies needed, based on our preliminary data, we decided that we will be looking to tools other than the Human Factors Rating Scale as the subject of these more elaborate generalizability analyses.
Reference
1. Morgan PJ, Pittini R, Regehr G, Marrs C, Haley MF: Evaluating teamwork in a simulated obstetric environment. Anesthesiology 2007; 106:907–15