Education | November 2004
Introduction of Anesthesia Resident Trainees to the Operating Room Does Not Lead to Changes in Anesthesia-controlled Times for Efficiency Measures
Author Affiliations & Notes
Sunil Eappen, M.D.,* Hugh Flanagan, M.D.,* Neil Bhattacharyya, M.D.†
* Assistant Professor, Department of Anesthesiology, Perioperative and Pain Medicine; † Associate Professor, Department of Otology and Laryngology, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts.
Article Information
Anesthesiology, November 2004, Vol. 101, 1210–1214.
IN the current era of financial responsibility and budgetary constraints in the delivery of medical care, industrial management techniques are increasingly being used to improve efficiency in health care.1 As a result of this greater scrutiny of hospital efficiency, there is increasing pressure to optimize efficiency in the operating room. Operating room efficiency has become a high priority for many hospitals,2–4 and a number of studies have explored various measures of operating room utilization and methods to improve efficiency.5 One method borrowed from industry to improve the quality of clinical care is to establish performance standards for the workplace. Performance standards describe the activities for which a group of people hold themselves specifically accountable. Managers in both the healthcare and business communities believe that performance standards improve the quality of care delivered by promoting efficiency and teamwork. Such standards have been applied in the practice of anesthesiology, e.g., to operating room turnover time.6,7
This type of critical evaluation creates a unique dichotomy for academic anesthesia residency training programs: there is increased pressure to be efficient, yet we must educate trainees while optimizing clinical care. The Anesthesia Patient Safety Foundation has recognized that anesthesia-related efficiency is a greater challenge in academic programs than in private practice at community hospitals.8 It has been suggested that trainees adversely affect anesthesiology-controlled operating room measures.9,10 However, no studies have specifically evaluated the effect on operating room efficiency of the differing models of anesthesia delivery created by an anesthesia residency training program. We designed this study to determine whether the introduction of anesthesia trainees changes anesthesia-controlled times in the operating room.
Although different residency programs may have slightly different methods of introducing anesthesia residents to clinical work, we believe ours to be representative of the experience at a large academic teaching institution. At our institution, the first 2 weeks of the residency program are devoted largely to didactic lectures, with little or no time spent in the operating room. Therefore, during the first 2 weeks of training, staff anesthesiologists working alone run the majority of the operating rooms. This period is followed by a brief apprenticeship of 6–8 weeks during which one anesthesia resident is paired with one attending physician caring for one patient in the operating room. If the staff assess the anesthesia resident as competent to advance to the next stage of training, concurrent patient care is allowed. Therefore, by the end of 2 months, the residents have characteristically advanced to the typical two-to-one coverage, in which one attending physician supervises two anesthesia residents covering two operating rooms. This 2-month, yearly suspension of the routine staff assignment practice allows for a unique evaluation of the impact of anesthesia trainees on operating room efficiencies typically believed to be controlled by the anesthesiologist.
The goal of our study was to measure the impact of the initiation of new residents to the operating room on anesthesia-related time measures of operating room efficiency.
Materials and Methods
All data were collected at Brigham and Women’s Hospital (Boston, Massachusetts), a 745-bed academic hospital and tertiary care center. The operating room suite has 38 rooms that are scheduled to begin surgery simultaneously at 7:30 am, Monday through Friday. Our institution prospectively maintains an automated, computerized operating room information system that tabulates information for every procedure. For each case, several data fields are entered, allowing the start and end times of specific portions of operating room procedures to be tracked. These data include the following endpoints, as defined by the American Association of Clinical Directors1 (an illustrative sketch of such a case record follows the list):
  • The room is deemed ready for the patient by the nurses.
  • The patient is in the room (PIR).
  • Anesthesia induction is complete and surgical preparation may begin (AR).
  • The incision is made.
  • Wound closure is initiated.
  • The surgery or procedure ends (PF).
  • The patient is ready for transfer to the stretcher (PRT).
  • The patient is out of the room (POR).

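For concreteness, the sketch below shows one way such a per-case time record could be represented. It is purely illustrative: the dataclass and field names are our assumptions, not the schema of the hospital’s operating room information system.

```python
# Hypothetical representation of a single case's recorded time points,
# following the events listed above. Field names are illustrative
# assumptions, not the hospital system's actual schema.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class CaseTimes:
    case_id: str
    room_ready: datetime           # room deemed ready by the nurses
    patient_in_room: datetime      # PIR
    anesthesia_ready: datetime     # AR: induction complete, prep may begin
    incision: datetime
    closure_start: datetime        # initiation of wound closure
    procedure_finish: datetime     # PF: end of surgery or procedure
    ready_to_transfer: datetime    # PRT
    patient_out_of_room: datetime  # POR
    attending: str
    resident: Optional[str] = None  # None when the attending worked alone
```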
Time data are entered in real time during each case by the circulating nurse; each time is recorded automatically when the corresponding key is pressed on a computer within the operating room itself. Thus, substantial data are available regarding the individual time factors involved in the completion of each surgical case. In addition, the names of the anesthesia attending physicians and resident physicians are prospectively logged, along with standard case data for the surgical procedure. All cardiac and thoracic operating rooms were excluded from our analysis because they are staffed with senior residents or fellows at all times. In addition, there are no pediatric or obstetric case data because these cases occur in different locations. The nurses entering the data were unaware that this information was being used for the purposes of this study, and anesthesia and surgical personnel have no direct role in assessing or recording these times. The small number of cases managed by nurse anesthetists was excluded.
We used anesthesia-controlled time, as previously defined,11 comprising induction time and emergence time. Induction time was defined as the time between when the patient entered the room and when induction was complete (AR − PIR). Similarly, the time from the end of surgery until the patient was ready to transfer (PRT − PF) was considered emergence time. These two time periods were thought to be most representative of anesthesia-controlled time intervals during the natural flow of a surgical case. In addition, turnover time was calculated as the time from when a given patient left the room until the subsequent patient entered the room (subsequent-case PIR − POR). Turnover time data were restricted to cases in which the same surgeon was following himself/herself in the same room with a scheduled case.
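To make these interval definitions concrete, the following sketch computes them from raw time stamps; it illustrates the definitions above and is not the analysis code actually used.

```python
# Illustrative computation of the intervals defined above (all in minutes).
from datetime import datetime


def _minutes(later: datetime, earlier: datetime) -> float:
    """Elapsed minutes between two recorded events."""
    return (later - earlier).total_seconds() / 60.0


def induction_time(pir: datetime, ar: datetime) -> float:
    # Induction time = AR - PIR (patient enters room until induction complete).
    return _minutes(ar, pir)


def emergence_time(pf: datetime, prt: datetime) -> float:
    # Emergence time = PRT - PF (end of surgery until ready to transfer).
    return _minutes(prt, pf)


def turnover_time(prior_por: datetime, next_pir: datetime) -> float:
    # Turnover time = subsequent-case PIR minus prior-case POR, restricted in
    # this study to the same surgeon following himself/herself in the same room.
    return _minutes(next_pir, prior_por)
```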
During the first 2 weeks of July, corresponding to the beginning of the academic year, the operating room is staffed during scheduled block time hours almost completely by anesthesia attending physicians working alone. During all other weeks of the academic year, the majority of the cases in the operating room are cared for by the anesthesia team, consisting of one or more anesthesia residents teamed with an anesthesia attending physician. Therefore, the first 2 weeks of July offer efficiency data corresponding to anesthesia provided only by attending anesthesiologists, whereas other months of the year offer efficiency data for anesthesia provided in a standard teaching hospital model.
From the case log database, time data were extracted for scheduled cases beginning between the hours of 7:00 am and 5:00 pm, Monday through Friday inclusive, for the first 2 weeks of July 2003. Cases occurring outside these hours were considered “add-on” cases and were excluded. These cases were logged as solo staff. Similarly, case log data for scheduled cases for 2 randomly selected weeks in September 2002 (new CA-1) and 2 randomly selected weeks in May 2003 (experienced CA-1) of the same academic year were also extracted. September is the first month in which the new anesthesia residents are allowed to work in the two-to-one model. By May, we considered these same residents to be substantially more experienced.
Operating room case data were downloaded, with all patient identifiers simultaneously stripped from the data in compliance with the Health Insurance Portability and Accountability Act. Data were tabulated and imported into SPSS version 10.0 (Chicago, IL). Data were verified to ensure that only scheduled cases were included in the analysis. Specifically, for the data extracted for July, manual verification was conducted to ensure that cases with any resident participation were deleted from subsequent analysis. Similarly, for cases performed in May or September, data were manually verified against physician logs, and cases with no resident participation during the data samples from these 2 months were deleted. From the case log data, induction time, emergence time, and turnover time in minutes were computed.
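A minimal sketch of the case-selection logic described in the preceding two paragraphs is shown below. The record fields and group labels are assumptions for illustration only; the actual extraction was performed against the hospital’s case log database and verified manually.

```python
# Hypothetical case-selection sketch mirroring the inclusion criteria above:
# scheduled cases starting 7:00 am-5:00 pm, Monday-Friday, grouped by the
# three sampling periods. Field names are illustrative assumptions.
from datetime import datetime, time

SAMPLING_PERIODS = {
    "solo staff": "first 2 weeks of July 2003",
    "new CA-1": "2 randomly selected weeks in September 2002",
    "experienced CA-1": "2 randomly selected weeks in May 2003",
}


def eligible(start: datetime, scheduled: bool) -> bool:
    in_hours = time(7, 0) <= start.time() <= time(17, 0)  # 7:00 am - 5:00 pm
    weekday = start.weekday() < 5                          # Monday - Friday
    return scheduled and in_hours and weekday              # excludes add-ons
```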
In addition, at our request, the hospital administration provided their monthly data on case mix severity, reported as an index. The case mix index (CMI) is a standardized system for comparing patient severity that is used in many institutions throughout the country. The algorithm assigns each patient to one of more than 500 diagnosis-related groups based on the primary and secondary diagnoses, the age of the patient, and the presence of comorbidity or complications. Each diagnosis-related group has a numerical weight associated with it, and the CMI is calculated by averaging the diagnosis-related group weights. It is commonly used to compare the severity of patient illness or the hospital resource consumption expected of patients. The individuals involved in preparing these data were unaware that a study was in progress during this time period. Importantly, the CMI reported in our study includes all surgical patients during the study period, not just the ones we evaluated.
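As a worked illustration of the CMI calculation described above, the snippet below averages diagnosis-related group weights across a small, entirely made-up cohort; the DRG labels and weights are invented and carry no clinical meaning.

```python
# Toy illustration of the case mix index (CMI): each patient maps to a
# diagnosis-related group (DRG), each DRG carries a relative weight, and the
# CMI is the average weight. All labels and weights below are invented.
drg_weights = {"DRG-A": 2.1, "DRG-B": 0.9, "DRG-C": 0.7}

cohort_drgs = ["DRG-A", "DRG-B", "DRG-C", "DRG-A", "DRG-B"]

cmi = sum(drg_weights[d] for d in cohort_drgs) / len(cohort_drgs)
print(f"CMI = {cmi:.2f}")  # mean DRG weight across the cohort
```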
Standard descriptive statistics were computed. Analysis of variance was then conducted to compare each of these time durations across the three study periods. Significance was set at P < 0.05. When appropriate based on the analysis of variance, post hoc testing was conducted with the Dunnett T3 procedure to identify statistically significant differences between individual groups.
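The sketch below illustrates the overall shape of this analysis on synthetic data. Note that the post hoc step uses pairwise Welch t tests purely as a simple stand-in; the study itself used the Dunnett T3 procedure, which is not implemented here.

```python
# Illustration of the statistical comparison on synthetic induction times.
# One-way ANOVA across the three groups, then (only if significant) pairwise
# comparisons against the solo-staff group. The study used Dunnett T3 post
# hoc testing; Welch t tests appear here only as a rough stand-in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
solo = rng.normal(17.3, 6.0, 481)          # synthetic, not the study data
experienced = rng.normal(19.0, 6.0, 727)
new = rng.normal(20.8, 6.0, 788)

f_stat, p_anova = stats.f_oneway(solo, experienced, new)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

if p_anova < 0.05:
    for label, group in (("new CA-1", new), ("experienced CA-1", experienced)):
        t_stat, p = stats.ttest_ind(solo, group, equal_var=False)  # Welch t test
        print(f"solo vs {label}: P = {p:.4f}")
```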
Results
A total of 3,004 surgical procedures were performed during the 2-week study periods in July, September, and May. Of these cases, 1,008 were excluded as described above, leaving 481, 727, and 788 scheduled cases for analysis in July, May, and September of the same academic year, respectively.
The corresponding mean anesthesia induction times were 17.3, 19.0, and 20.8 min/case for these months, respectively (P = 0.047, analysis of variance). The corresponding mean anesthesia emergence times were 8.7, 9.7, and 10.0 min, respectively (P = 0.024). The corresponding mean room turnover times were 47.6, 48.5, and 48.6 min, respectively (P = 0.907). Operating time by the surgeons (i.e., incision until surgery end) actually decreased in July to 150 min/case from an average of 160 min/case during the other periods of the study (table 1).
Table 1. Anesthesia and Surgery-related Times [table image not available]
Post hoc testing revealed that the difference in mean induction times between July and September was statistically significant (P = 0.015, Dunnett T3), whereas the differences in induction times between July and May and between May and September were not (P > 0.05). Similarly, the difference in mean emergence times between July and September was statistically significant (P = 0.009), whereas the other differences were not. The means and 95% confidence intervals for induction time, emergence time, and surgical time for the three study periods are displayed in figure 1.
Fig. 1. (A–C) Mean and 95% confidence intervals for induction time, emergence time, and surgical time for the three study periods. Solo refers to the period in July when the staff worked alone in the room, New refers to the period in September when the staff worked with first-year anesthesia residents in a 1:2 ratio early in their training, and Experienced refers to the period in May when the staff worked with more experienced first-year residents in a 1:2 ratio. See table 1 for details.
The CMI in table 2 is reported for the months containing the weeks of the study as well as the surrounding months.
Table 2. Case Mix Severity Index [table image not available]
Discussion
Anesthesia training programs in the United States consist of a combination of clinical apprenticeship and didactic training spanning a minimum 4-yr period.2 Unlike other residency training programs, the typical medical school exposure to anesthesia provides little initial preparation for providing direct anesthesia care to patients independently in the operating room. As a result, the level of attending supervision required during the initial period of anesthesia training differs dramatically from that required only a short time later. There is some evidence that lower anesthesia resident training levels may be associated with a higher relative risk of critical incidents and escalation of care.9 The same study suggested that inexperienced trainees might contribute to operational inefficiencies.9 If this supposition is true, it could have an enormous impact on economic and other costs for the anesthesia department, the operating room, and the hospital.
Our study reveals that there were statistically significant differences in induction times and emergence times among the three groups evaluated but no difference in room turnover time. In addition, although there was a difference in the anesthesia-controlled time periods, the difference between the staff working alone and the staff working with inexperienced residents was approximately 3.5 min for induction times and 1.3 min for emergence times. Dexter et al.11 evaluated how much time would need to be saved to schedule one additional case at the end of a typical 8-h day. They concluded that even reductions of greater than 50% in anesthesia-controlled times would not allow one additional case to be scheduled into the room during regular hours. In addition, the reduction in staffing cost due to changes in nonsurgical times of 3–9 min has been estimated to be approximately 1%; these savings would be achieved predominantly by reducing allocated operating room times and not by reducing the hours that staff work late.4 Therefore, although statistically significant, these differences would be considered clinically meaningless in the context of both expense and efficiency.
Our numbers are consistent with an abstract presented by St. Jacques et al.10 in 2003, in which they noted differences of a little more than a minute in both induction and emergence times between cases staffed by junior and senior residents. The implication of these findings is that although the initiation of new residents may produce slight differences in anesthesia-controlled operating room times, the practical significance of these differences is limited. Therefore, there is no need, for example, to schedule more operating room time for surgical cases during periods of trainee induction, nor should managers be concerned about increased expenses due to overtime expenditures during these training phases. Simply put, our data indicate that integration of anesthesia residents into the operating room care team does not substantially impact operating suite cost. It seemed that the initiation of the new residents added approximately 9 min to an 8-h day and that this amount was gradually attenuated as the residents gained proficiency.
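A back-of-envelope check of the roughly 9-min figure, assuming approximately two cases per room per 8-h day (our assumption, not a number reported in the study) and using the mean per-case differences between the solo and new-resident periods from the Results:

```python
# Rough arithmetic behind the "approximately 9 min per 8-h day" estimate.
# The two-cases-per-room-per-day figure is an assumption for illustration.
induction_delta = 20.8 - 17.3     # min/case, solo vs. new CA-1 period
emergence_delta = 10.0 - 8.7      # min/case, solo vs. new CA-1 period
cases_per_room_per_day = 2        # assumed
extra = (induction_delta + emergence_delta) * cases_per_room_per_day
print(f"Added anesthesia-controlled time: {extra:.1f} min per room per day")
```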
In addition, healthcare payers, institutions, and patients want physicians to create and meet standards of performance, including productivity and cost efficiency.5 Hospitals often supplement anesthesiology group revenues12 and may require efficiency reports. It is important for these groups to know that anesthesia trainees do not adversely affect efficiency and costs.
One limitation of our methodology is that we did not evaluate the specific types of surgical cases scheduled for the summer versus other times of the year. Although the CMI was unchanged during the study periods, these data include all surgical patients and not just the study subset. It is possible that the surgeons evaluated for this study schedule less complex cases in the summer, so that anesthesia-controlled times cannot realistically be compared with those during the rest of the year. Certainly, the fact that surgical procedures were, on average, 10 min shorter during the summer sampling period may indicate simpler cases in less complicated patients. Alternatively, the decrease in surgical time may reflect that, in the early phase of the academic year, attending surgeons perform a greater fraction of each case themselves, with the resident staff assisting rather than acting as primary surgeon. Our study did not evaluate the surgical cases or the American Society of Anesthesiologists physical status of the patients, so we cannot confirm that we were comparing comparable groups of patients. This is a flaw in our study. We believe that the surgical case durations were shorter in the summer because the staff surgeons were doing more of the operating themselves, giving less time to their new residents, and not because the cases were any different. Also, although the CMI reflects the status of all of our surgical patients and not only the study subset, we believe it accurately indicates that the complexity of our patients did not change during the study periods.
Another limitation of this study is that the data may reflect an institutional bias because they are abstracted from a single training program at a single academic teaching institution. Therefore, it is difficult to extrapolate these results to other anesthesia teaching programs. In addition, we only evaluated data from our general operating rooms and specifically excluded obstetric, pediatric, thoracic, and cardiac anesthesia care locations. We believe our anesthesia-controlled times are comparable to those reported by others11 and thus are generally applicable. It could be that our staff are either more or less efficient than other attending anesthesiologists, so that trainees may have variable effects on these times at other institutions. Moreover, the current efficiency data may not be properly extrapolated to smaller teaching institutions or other training venues, such as community hospitals, because staff supervision models and types of residents (clinical anesthesia year 1 vs. other years) may be substantially different.
It is also important to note that the anesthesia-controlled times reported here are but one possible measure of anesthesia-related efficiency. Our study did not evaluate other important contributors to overall efficiency, such as drug utilization, postanesthesia care unit time, and discharge time, or patient-related factors, such as postoperative pain, nausea, and vomiting. All of these factors can contribute significantly to the financial and time-related efficiencies of the operating room. Our findings may or may not be related to these other factors.
Overall, the current study provides the initial evidence for assessing the effect of anesthesia trainees on anesthesia-controlled times in the operating room. These data strongly suggest that the initiation of anesthesia trainees to the operating room has no clinically or economically meaningful adverse effect on operating room efficiency.
References
1. Jeon AA: A hospital administrator’s view of the operating room. J Clin Anesth 1995; 7:585–8
2. Overdyk FJ, Harve SC, Fishman RL, Shippey F: Successful strategies for improving operating room efficiency at academic institutions. Anesth Analg 1998; 86:896–906
3. Sokolovic E, Biro P, Wyss P, Werthemann C, Haller U, Spahn D, Szucs T: Impact of the reduction of anaesthesia turnover time on operating room efficiency. Eur J Anaesthesiol 2002; 19:560–3
4. Dexter F, Abouleish AE, Epstein RH, Whitten CW, Lubarsky DA: Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg 2003; 97:1119–26
5. Vitez TS, Macario A: Setting performance standards for an anesthesia department. J Clin Anesth 1998; 10:166–75
6. Strum DP, Vargas LG, May JH: Surgical subspecialty block utilization and capacity planning: A minimal cost analysis model. Anesthesiology 1999; 90:1176–85
7. Strum DP, Vargas LG, May JH, Bashein G: Surgical suite utilization and capacity planning: A minimal cost analysis model. J Med Syst 1997; 21:309–22
8. Eichorn JH: Patient safety and production measures: Academic practice. Anesth Patient Safety Foundation Newsletter 2001; 16:7–9
9. Posner KL, Freund PR: Resident training level and quality of anesthesia care in a university hospital. Anesth Analg 2004; 98:437–42
10. St Jacques P, James H, Higgins M: Level of training of anesthesiology resident or nurse anesthetist affects anesthesiology controlled intraoperative time periods (abstract). J Clin Anesth 2003; 15:77
11. Dexter F, Coffin S, Tinker JH: Decreases in anesthesia-controlled time cannot permit one additional surgical operation to be reliably scheduled during the workday. Anesth Analg 1995; 81:1263–8
12. Tremper KK, Barker SJ, Gelman S, Reeves J: A demographic service and financial survey of anesthesia training programs in the United States. Anesth Analg 2003; 96:1432–6