Editorial Views  |   November 2007
Experience ≠ Expertise: Can Simulation Be Used to Tell the Difference?
Author Notes
  • Center for Perioperative Research in Quality, Vanderbilt University School of Medicine, and Geriatric Research, Education and Clinical Center, Veterans Affairs Tennessee Valley Healthcare System, Nashville, Tennessee.
Anesthesiology, November 2007, Vol. 107, 691–694.
An article in this issue of Anesthesiology raises a thorny issue for our specialty, and for organized medicine as a whole: the evaluation and maintenance of clinical competence of experienced clinicians. Murray et al.1 report that board-certified anesthesiologists (BCAs) and anesthesia residents performed similarly during a simulation-based assessment of acute intraoperative clinical skills. In a well-controlled experiment, 35 practicing anesthesiologists, of whom 91% were board certified, and 33 experienced anesthesia residents managed a series of critical events presented within brief (5-min) clinical scenarios in a high-fidelity simulated operating room. A separate group of 31 first-month CA1 residents each managed similar simulated events. Raters blinded to group viewed the videotaped performances in random order and ascertained whether the subject successfully performed predefined key diagnostic or therapeutic actions. As expected, the first-month residents performed more poorly than the more experienced groups. However, there were no significant differences in overall performance between the experienced residents and the BCAs (26 from community and 9 from academic practices). All groups managed some critical events well (e.g., bronchospasm, tension pneumothorax) but had appreciable difficulty with others (e.g., malignant hyperthermia, hyperkalemia). There was large variability in individual performance at all experience levels. Moreover, an individual's skill in managing one event did not necessarily predict his or her performance on other types of events.
Before interpreting these findings, we must consider the study's limitations. The BCAs were self-selected from a single community and thus may not be representative of all experienced anesthesiologists. The BCAs were also less familiar with the simulation environment than were the residents; however, before the study began, they practiced a simulated induction and were given time and resources to "set up" the anesthesia equipment and other resources within their simulated operating room as they wished. There did not seem to be any training (or order) effects on the BCAs' performance. In contrast, residents may have had more recent training in the management of similar critical events. Further, the scenarios were only 5 min in duration and thus may have lacked some key contextual factors associated with the ongoing care of actual patients. Yet, in an analogous real-world situation, BCAs supervising certified registered nurse anesthetists or residents may be called urgently to an operating room to manage an evolving event. Each experienced clinician performed a different random combination of the available scenarios, which varied in clinical incidence and degree of difficulty; however, performance did not seem to be related to the frequency with which clinicians might encounter such events in actual practice. On balance, the scenarios and associated scoring criteria seem to have evaluated the kinds of acute management skills one might expect of experienced anesthesiologists.
Despite this study’s limitations, I believe its findings are fundamentally valid and consistent with other research on the relation between clinical experience and quality of care.2,3 Experienced anesthesiologists, as a group, may not be nearly as expert at managing intraoperative critical events as some may have thought or would like. Perhaps more importantly, there is tremendous variability in performance, not only among experienced practitioners but also in the same practitioner’s management of different types of events.
Is Realistic Patient Simulation a Valid and Reliable Method of Assessing Clinical Competence?
In any performance evaluation, one must establish the validity of the scoring criteria (i.e., measuring the correct things) and the reliability of the scoring (measuring those things correctly). Some will legitimately argue that the simulated care environment is artificial and that performance during simulations may not reflect performance during actual clinical practice. However, few of our current alternatives (e.g., number of continuing medical education credits, hospital quality assurance reports, malpractice claims history) reliably measure the clinical competence of practicing physicians. Simulation-based assessment is still in its infancy. Before we use simulation for high-stakes assessments (e.g., board certification), substantially more research and development will be required to refine and standardize scenario content, scoring methods, evaluator training, and assessment protocols (e.g., number of scenarios per assessment). A randomized controlled trial of simulation-based assessment versus actual patient outcomes would be highly desirable but may not be feasible.
Experience Is Not Synonymous with Expertise
The evidence across many disciplines, from sports to medicine, suggests that experience is not synonymous with expertise. One might speculate that the BCAs in the study of Murray et al.1 either never attained the evaluated competencies or lost these skills during their years of clinical practice. Ericsson argues that most professionals attain only a mediocre level of performance during initial training and that this level is, at best, merely maintained during the rest of their careers.3
By definition, an expert consistently provides superior domain-specific performance.3,4 Expertise is more than simply having extensive factual knowledge or competent skills. Experts have specific psychological attributes such as self-confidence, excellent communication skills, adaptability, and risk tolerance. They also have specific skills, including highly developed attention to what is relevant, ability to identify exceptions to the rules, flexibility to changing situations, effective performance under stress, and ability to make decisions and initiate actions quickly based on incomplete data.
For clinical procedures, one can demonstrate a correlation between experience (number of procedures performed) and performance (number of errors).5–7 This relation is less clear for complex cognitive skills. Previous simulation studies provided only weak evidence that clinicians with more (or more recent) training manage untoward events more effectively.8,9 
Expertise can be very context sensitive. Expert management of one specific type of event (e.g., myocardial ischemia) will not necessarily generalize to other types of events (e.g., anaphylaxis), even if many of the same cognitive and psychomotor skills are applicable. Moreover, clinical expertise is dynamic: it varies with the situational demands and the individual’s cognitive and emotional resources.10 A well-trained clinician, subjected to an event in a novel environment (e.g., a new operating room or anesthesia workstation), may not perform at the same level as he or she would in a more familiar setting.
Training for, and Maintenance of, Expertise
Expertise seems to be most reliably formed and maintained through (1) motivation to improve, (2) focus on clearly defined tasks, (3) immediate useful feedback, and (4) repetitive deliberate practice strategically guided by an expert instructor.3 Notably, in other domains (e.g., sports, gaming), expertise primarily derives from countless hours of solitary and systematic practice, typically characterized by careful study of one’s own and other experts’ performance, mindful emphasis on subtask performance, and increasing challenges. Evidence suggests that, at least for the acquisition of procedural expertise, simulation may be an ideal approach.11
From healthcare quality improvement, we know that outcomes are determined by process. Is physician training currently designed to generate clinical expertise? It is not: the current system does not support lifelong learning and improvement. Repeating the same actions and behaviors daily during routine clinical care, without deliberate practice, will not improve clinical performance.
Although still speculative, what may generalize across the management of different acute clinical events is training in “nontechnical skills”: the behavioral attributes of teamwork, communication, and resource management embodied in the simulation-based Anesthesia Crisis Resource Management curricula originated by Gaba et al.12 Nontechnical skills can be reliably assessed.13
What’s Next for Simulation-based Assessment?
Anesthesiology has been at the forefront of the use of high-fidelity simulation for clinical training, and increasingly, this strategy is supported by evidence.14 The recent meta-analysis by McGaghie et al.14 showed a strong positive correlation between hours of simulator practice and standardized learning outcomes. In the more controversial area of simulation-based assessment, other clinical specialties, especially nursing and family practice, are taking the initiative. American anesthesiologists could particularly learn from the experience of our Israeli colleagues, who have implemented mandatory simulation-based assessment in their national board certification examination.15
Before one confidently undertakes simulation-based high-stakes assessment, it is essential to establish and agree on its content (what should be evaluated) as well as the examination’s scoring validity, reliability, generalizability, and “passing” criteria. Because of the complexity and dynamism of high-fidelity simulation, creating evaluation systems with acceptable psychometric properties is more difficult than for static evaluations (e.g., written examinations). How should one score a scenario in which the correct actions are performed in the wrong order? In which everything is done correctly except for one critical item? In which performance varies substantially over time? Also, the dynamic interactions between trainee and simulation affect reproducibility; it may be impossible to predict and script a consistent response to every possible trainee action. Murray et al.1 finesse some of these issues by using a small number of brief, circumscribed scenarios and limiting participant–simulation interactivity. Such decisions enhance feasibility and reliability, but at the potential expense of generalizability and validity.
An American Society of Anesthesiologists (ASA) Task Force, under the direction of Michael Olympio, M.D. (Chair, Workgroup on Simulation Education), struggled with these and other issues (e.g., performance anxiety, expense) as it created a program to provide standardized simulation-based courses to ASA members. In 2006, the ASA created a formal Committee on Simulation Education, chaired by Randolph H. Steadman, M.D. This committee recently issued a call for applications for simulation centers to become “ASA Approved.” The American Board of Anesthesiology has been working closely with the ASA to foster the creation of simulation-based training experiences for Maintenance of Certification in Anesthesiology.
It is probably inevitable that anesthesiologists, as well as other medical professionals, will be required to participate in simulation-based training and competency assessment (including credentialing). There will be external pressures to do so—from the public, the Joint Commission on Accreditation of Healthcare Organizations, government regulators, and even other specialties.16 The road ahead may be long and will be rocky. Thanks to two decades of leadership in medical simulation, our specialty is in a unique position to affect both the pace and direction of this journey. Murray et al.1 are to be congratulated for taking an important step in the right direction. The next steps must include (1) a substantial investment in extramurally funded simulation research to establish valid content, structure, and scoring metrics; (2) collaboration with educators, psychometricians, and other specialties; (3) training of a larger cadre of skilled instructors/evaluators; and (4) a commitment to large-scale standardized training and assessment of medical students, residents, and experienced anesthesiologists.
The author explicitly acknowledges the inspiration and ideas of David M. Gaba, M.D. (Professor of Anesthesia, Associate Dean for Immersive and Simulation-based Learning, Stanford University, Stanford, California), John Shatzer, Ph.D. (Director, Center for Experiential Learning and Assessment, Associate Professor of Medical Education and Administration, Vanderbilt University School of Medicine, Nashville, Tennessee), and K. Anders Ericsson, Ph.D. (Conradi Eminent Scholar and Professor of Psychology at Florida State University, Tallahassee, Florida).
References
1. Murray DJ, Boulet JR, Avidan M, Kras JF, Henrichs B, Woodhouse J: Similar performance of residents and anesthesiologists in a simulation-based skill assessment. Anesthesiology 2007; 107:705–13
2. Choudhry NK, Fletcher RH, Soumerai SB: Systematic review: The relationship between clinical experience and quality of health care. Ann Intern Med 2005; 142:260–73
3. Ericsson KA: Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004; 79:S70–81
4. Ericsson KA, Lehmann AC: Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annu Rev Psychol 1996; 47:273–305
5. Konrad C, Schupfer G, Wietlisbach M, Gerber H: Learning manual skills in anesthesiology: Is there a recommended number of cases for anesthetic procedures? Anesth Analg 1998; 86:635–9
6. Rosser JC, Rosser LE, Savalgi RS: Skill acquisition and assessment for laparoscopic surgery. Arch Surg 1997; 132:200–4
7. See WA, Cooper CS, Fisher RJ: Predictors of laparoscopic complications after formal training in laparoscopic surgery. JAMA 1993; 270:2689–92
8. Gaba D, DeAnda A: The response of anesthesia trainees to simulated critical incidents. Anesth Analg 1989; 68:444–51
9. Kurrek MM, Devitt JH, Cohen MM: Cardiac arrest in the OR: How are our ACLS skills? Can J Anaesth 1998; 45:130–2
10. Dreyfus HL, Dreyfus SE, Athanasiou T: Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York, The Free Press, 1986
11. Reznick RK, MacRae H: Teaching surgical skills: Changes in the wind. N Engl J Med 2006; 355:2664–9
12. Gaba DM, Howard SK, Fish KJ, Smith BE, Sowb YA: Simulation-based training in anesthesia crisis resource management (ACRM): A decade of experience. Simul Gaming 2001; 32:175–93
13. Flin R, Maran N: Identifying and training non-technical skills for teams in acute medicine. Qual Saf Health Care 2004; 13:i80–4
14. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ: Effect of practice on standardised learning outcomes in simulation-based medical education. Med Educ 2006; 40:792–7
15. Berkenstadt H, Ziv A, Gafni N, Sidi A: Incorporating simulation-based objective structured clinical examination into the Israeli National Board Examination in Anesthesiology. Anesth Analg 2006; 102:853–8
16. Scott DJ: Patient safety, competency, and the future of surgical simulation. Simul Healthcare 2006; 1:164–70