Education  |   August 2014
Simulator-based Transesophageal Echocardiographic Training with Motion Analysis: A Curriculum-based Approach
Author Affiliations & Notes
  • Robina Matyal, M.D.
  • John D. Mitchell, M.D.
  • Philip E. Hess, M.D.
  • Bilal Chaudary, M.D.
  • Ruma Bose, M.D.
  • Jayant S. Jainandunsing, M.D.
  • Vanessa Wong, B.S.
  • Feroze Mahmood, M.D.
    From the Department of Anesthesia, Critical Care, and Pain Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts (R.M., J.M., P.E.H., B.C., R.B., V.W., F.M.); and Department of Anesthesia and Pain Medicine, University Medical Center Groningen, Groningen, The Netherlands (J.J.).
  • Supplemental Digital Content is available for this article. Direct URL citations appear in the printed text and are available in both the HTML and PDF versions of this article. Links to the digital files are provided in the HTML text of this article on the Journal’s Web site (www.anesthesiology.org).
  • Submitted for publication May 20, 2013. Accepted for publication February 20, 2014.
  • Address correspondence to Dr. Mahmood: Department of Anesthesia, Critical Care, and Pain Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, YA-204, Boston, Massachusetts 02215. fmahmood@bidmc.harvard.edu. Information on purchasing reprints may be found at www.anesthesiology.org or on the masthead page at the beginning of this issue. Anesthesiology’s articles are made freely accessible to all readers, for personal use only, 6 months from the cover date of the issue.
Article Information
Anesthesiology, August 2014, Vol. 121, 389–399. doi:10.1097/ALN.0000000000000234
Abstract

Background: Transesophageal echocardiography (TEE) is a complex endeavor involving both motor and cognitive skills. Current training requires extended time in the clinical setting. Application of an integrated approach to TEE training, including simulation, could facilitate acquisition of skills and knowledge.

Methods: Echo-naive nonattending anesthesia physicians were offered Web-based echo didactics and biweekly hands-on sessions with a TEE simulator for 4 weeks. Manual skills were assessed weekly with kinematic analysis of TEE probe motion and compared with those of experts. Simulator-acquired skills were assessed clinically by the performance of intraoperative TEE examinations after training. Data are presented as median (interquartile range).

Results: The manual skills of 18 trainees were evaluated with kinematic analysis. Multiple regression analysis identified peak movements and path length as independent predictors of proficiency (P < 0.01). Week 1 trainees had a longer path length (637 mm [312 to 1,210]) than experts (349 mm [179 to 516]; P < 0.01) and more peak movements (17 [9 to 29] vs. 8 [2 to 12]; P < 0.01). Skills acquired from simulator training were assessed clinically in eight additional trainees during intraoperative TEE examinations. Compared with the experts, novice trainees required more time (199 s [193 to 208] vs. 87 s [83 to 16]; P = 0.002) and performed more transitions throughout the examination (43 [36 to 53] vs. 21 [20 to 23]; P = 0.004).

Conclusions: A simulation-based TEE curriculum can teach knowledge and technical skills to echo-naive learners. Kinematic measures can objectively evaluate the progression of manual TEE skills.



What We Already Know about This Topic
  • Proficiency in transesophageal echocardiography (TEE) requires both cognitive and manual skills.

  • This study determined whether (1) TEE simulation technology can be used to develop kinematic measures, (2) these kinematic measures can be used to assess learning of manual skills in a simulation-based TEE curriculum, and (3) skills learned by echo-naive trainees in a simulation-based TEE curriculum could be used in the clinical setting.

What This Article Tells Us That Is New
  • A simulation-based TEE curriculum can teach knowledge and technical skills to echo-naive learners. Kinematic measures can objectively evaluate the progression of manual TEE skills.

PROFICIENCY in transesophageal echocardiography (TEE) requires both cognitive and manual skills.1,2 During standard TEE training, a trainee gains knowledge primarily through reading and lectures, supplemented by bedside teaching. However, image acquisition skills can only be gained at the bedside and require repetitive exposure and practice. One challenge of this method of education is that pressures of the clinical environment, such as limitations on work hours and variations in patient volume, may not provide enough time for each trainee to acquire sufficient skills.3 Another challenge is that the patient population and presentation are unpredictable and do not provide graduated lessons from normal to abnormal or from simple to complex presentations.1,4,5 Finally, ensuring that the trainee has gained proficiency is difficult: cognitive understanding can be tested with standardized questionnaires, but manual skills can only be inferred subjectively.
Medical simulation has evolved to include haptic devices for the acquisition of motor skills.6–17 Because no patient needs to be present, simulator-based training might teach basic skills in invasive procedures while minimizing patient risk.1 Simulation-based educational programs have demonstrated improvement in the performance of some clinical tasks, including laparoscopic and suturing skills, during surgical residency training.14,18,19 As a result, the American College of Surgeons has mandated that accredited surgical residency programs participate in the Fundamentals of Laparoscopic Surgery initiative.11,20,21 Similarly, these principles have been extended to training in other invasive procedures (e.g., the Fundamentals of Endoscopic Surgery program), a curriculum-based program that incorporates Web-based didactics and hands-on simulator training.11,22 Such training and evaluation initiatives are a paradigm shift from the traditional Halsted apprenticeship model of clinical training.23,24
In the context of this changing training paradigm, the recent development of realistic echocardiography simulators might also allow for a safer, more integrated training program in TEE.2,15,25,26 A program consisting of a coordinated didactic component with focused hands-on simulator training might facilitate proficiency in manual skills.11,27 Furthermore, because simulators can export data describing the movement of the TEE probe, defined kinematic assessment measures can now be developed and might be used to evaluate the acquisition of manual skills. In addition to assessing the skill of a trainee, these kinematic measures might also identify whether individual aspects of training require additional attention, allowing for specific feedback and remedial training of an individual. We hypothesized that (1) TEE simulation technology can be used to develop kinematic measures, (2) these kinematic measures can be used to assess learning of manual skills in a simulation-based TEE curriculum, and (3) skills learned by echo-naive trainees in a simulation-based TEE curriculum could be used in the clinical setting.
Materials and Methods
After Institutional Review Board (Beth Israel Deaconess Medical Center Committee on Clinical Investigations, Boston, MA) approval with waiver of informed consent, this prospective study was conducted in five 4-week training sessions between August 2011 and December 2011.
Participants
Nonattending anesthesiology physicians in training were invited to participate in the study. Trainees were selected based on rotation and call schedules, without infringing on core lecture requirements and with Accreditation Council for Graduate Medical Education requirements in mind. Exclusion criteria were (1) individuals who had participated in a previous or current dedicated TEE rotation during training, (2) individuals who had structured training in echocardiography, (3) individuals who had already sat for and/or passed the National Board of Echocardiography (NBE) Basic or Advanced PTEeXAM, and (4) individuals who had personally performed more than five examinations or reviewed more than 20 examinations before the start of the course.
Materials
The material support was provided by departmental funding and consisted of a customized educational Web site and a TEE Simulation Center. The simulation center was equipped with two Vimedix TEE simulators (CAE Healthcare, Montreal, Canada), which are capable of capturing the motion data of the TEE probe in real time.
Description of Program
The training program was organized into nine modules divided over 4 weeks (for details of the training program, see appendix). Course faculty developed this approach and have no affiliation or financial relation with any commercial entity selling or developing technologies used in this study or in the creation of ultrasound equipment, echo simulators, or online teaching materials.
The Web-based teaching materials were developed independently by the authors and can be used with any type of echo-training program, whether classroom, simulator, operating room based, or hybrid. Each week of the training program consisted of two to three online modules, with coordinated live educational and hands-on training. The educational program located on the Web site consisted of presentations from faculty on specific echocardiography topics. The subjects were required to complete the specific didactic modules on the Web site before participation in the hands-on session for that day. In addition, a focused discussion of the topic of the session was included in the first 20 to 30 min of each session.
An active clinician certified by the National Board of Echocardiography supervised all simulator sessions. The first hands-on training session focused on the features of the simulator and TEE probe, and it was designed to provide familiarity with the equipment and the haptic simulator. Subsequent training sessions focused on a specific set of images for acquisition, which were coordinated with the didactic program. After practice of these specific images, all standard views were also obtained from each trainee at the end of each hands-on session. On the second day of each week, in addition to the focused discussion and hands-on practice, each subject took a skills test, which tested the subject on all the components of a comprehensive intraoperative examination and not on just the focused-on views for the topic of the session.
Acquisition of Cut Planes
We defined a target cut plane (TCP) as the simulator reference image corresponding to a standard TEE image. Each TCP was acquired by a single investigator (F.M.) and agreed upon by all investigators as the reference image. On the basis of the image quality and resemblance to the standardized TEE images, a total of 13 TCPs were selected as the images for acquisition of data. The 13 TCPs were the (1) mid-esophageal two-chamber, (2) mid-esophageal four-chamber (ME-4C), (3) mid-esophageal aortic valve long axis, (4) mid-esophageal aortic valve short axis, (5) mid-esophageal two-chamber with a focus on the left atrial appendage, (6) mid-esophageal four-chamber view with a focus on the left upper pulmonary vein, (7) mid-esophageal four-chamber view with a focus on the right upper pulmonary vein, (8) mid-esophageal right ventricular inflow–outflow, (9) mid-esophageal mitral commissural, (10) mid-esophageal bi-caval views, (11) upper esophageal aortic arch long axis, (12) upper esophageal aortic arch short-axis views, and the (13) transgastric right ventricle inflow view.
For acquisition of images during the study, the probe was inserted in the mannequin, and the examination was initiated in the upper esophagus. The subject was first asked to acquire an image as close to the ME-4C reference TCP as possible. From this position (i.e., the ME-4C view), the subject was asked to acquire subsequent views as close to the respective reference TCPs as possible. The TCP image was displayed next to the active scan plane and served as a reference for the subject (fig. 1; see video, Supplemental Digital Content 1, http://links.lww.com/ALN/B43, which is a video loop demonstrating the sequence of image acquisition by a trainee as he or she seeks to acquire the TCP). The subject was asked to return to the ME-4C view after acquiring each image so that the starting point of all image acquisition sequences except for the ME-4C view itself was the ME-4C view. The simulator recorded real-time data regarding probe tip position, rotation, and omniplane angle during probe movement. This sequence was continued until all 13 images were captured during each testing session.
Fig. 1.
Simulator interface during trainee evaluation. Current probe position is seen at the top left (A). The image obtained by the trainee appears in the right half of the screen (B), whereas the augmented reality image (C) and the idealized target cut plane (TCP) (D) are shown in the bottom left panel.
Kinematic Analysis
The simulator recorded data continuously, documenting the starting point; the change in the x, y, and z positions; the angular rotation of the scan plane; and the roll, pitch, and yaw of the TEE probe over time. The z-axis refers to the height of the probe, and the x-axis and y-axis refer to the horizontal and vertical (cephalad–caudad) motion, respectively, of the probe in the chest. Roll, pitch, and yaw refer to angular motion of the probe about the x-axis, y-axis, and z-axis, respectively (fig. 2). The data were exported from the simulator as a Microsoft Excel (Microsoft Corporation, Redmond, WA) spreadsheet, and the following measures were derived:
Fig. 2.
Probe motion. Roll is motion in the horizontal (x) plane, pitch is motion in the vertical (y) plane, and yaw is motion in relation to the height of the probe in the thorax, or the (z) plane. Image of the simulator was reproduced, with permission, from CAE Healthcare, Montreal, Quebec, Canada.
  • Time: The total time, in seconds, from the start of the task until the learner felt that the target image had been obtained.
  • Path length: The sum of all linear and angular movements of the probe, measured as total distance moved in centimeters (see examples in fig. 3).
  • Lead time: The time before initiation of the first motion of the TEE probe after the subject was told to acquire a specific image (see examples in fig. 4).
  • Peak movements: The number of times the subject performed a rapid motion or change in direction of the TEE probe (see examples in fig. 5).
  • Time–distance multiple: The area under the curve of path length over the time of the study.
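As a minimal sketch, the measures defined above could be computed from the exported probe data as follows. The sample format here (timed x, y, z positions) is an assumption, the angular component of path length is omitted for brevity, and the 0.2 cm/s peak-movement threshold is the value given in the figure 5 legend:

```python
import math

# Hypothetical sample format: each row is (t_seconds, x_cm, y_cm, z_cm),
# standing in for the position columns exported by the simulator.
def kinematic_measures(samples, peak_threshold=0.2):
    """Compute the five kinematic measures from timed probe positions.

    Only the linear component of path length is summed here; the article
    also folds in angular motion. peak_threshold is the 0.2 cm/s velocity
    cutoff used to count peak movements.
    """
    total_time = samples[-1][0] - samples[0][0]
    path_length = 0.0
    lead_time = None
    peaks = 0
    moving_fast = False
    time_distance = 0.0  # area under the path-length-vs-time curve

    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        step = math.dist(p0, p1)           # straight-line distance moved
        dt = t1 - t0
        path_length += step
        time_distance += path_length * dt  # rectangle-rule integration
        speed = step / dt if dt > 0 else 0.0
        # Lead time: elapsed time before the first detectable motion.
        if lead_time is None and step > 0:
            lead_time = t0 - samples[0][0]
        # Peak movements: count entries into the high-velocity regime.
        if speed >= peak_threshold and not moving_fast:
            peaks += 1
        moving_fast = speed >= peak_threshold

    return {"time": total_time, "path_length": path_length,
            "lead_time": lead_time, "peaks": peaks,
            "time_distance": time_distance}
```

A probe that sits still for 1 s and then moves 1 cm/s for 2 s would, for example, yield a lead time of 1 s and a single peak movement.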
Fig. 3.
Path length. Representative three-dimensional path length of a trainee at week 1 (A) and week 4 (B) and of an expert (C) for movement of the probe from the mid-esophageal four-chamber view to the mid-esophageal long-axis view. The trainee’s first session demonstrates a high number of misdirected probe movements and a long path length to the final image position. The trainee’s fourth session shows a more efficient probe movement with fewer misdirections and a shorter path length that approximates the movement patterns of the expert. The “Start” point is the mid-esophageal four-chamber view; the “End” point is where the subject believed the final image (the mid-esophageal long-axis view) was.
Fig. 4.
Lead time. Representative comparison of lead times for a single trainee at week 1 (A), week 2 (B), week 3 (C), and week 4 (D). Lead times decreased with each successive week of training. For each graph in this figure, the run was truncated at 120 s to allow for clear visualization of the lead time and does not represent the total time for image acquisition.
Fig. 5.
Peak motion. Representative comparison of peak motion of trainees at week 1 (A), week 2 (B), week 3 (C), and week 4 (D). The number of high-velocity movements required to obtain an image decreased for trainees over time. The velocity threshold to count the probe movement was 0.2 cm/s.
Data Collection
We enrolled expert examiners who were certified by the National Board of Echocardiography and were active clinicians at the time. These experts completed a simulated examination (i.e., acquisition of the TCPs), which was used to develop the objective measures and to provide a comparison for the progress of the echo-naive subjects.
Manual skill development was assessed at the end of the second session (day 2) of each week. The subject was asked to perform a TEE examination, which was recorded for analysis. Any subject who missed a training session was provided a remedial session held before their next scheduled exposure. Subjects who missed two sessions in a row were removed from the study.
Clinical Performance
To assess the simulator-acquired skills in a clinical arena, eight trainees who had completed the simulation-training program and had not been exposed to clinical TEE during their training were individually taken to the operating room to perform perioperative TEE. Both a single expert (F.M.) and the trainee conducted the examination on the same patient. The expert acquired the images in a standard sequence before the arrival of each trainee to allow for a standardized comparison. Each trainee was then asked to acquire the images on the same patient in the same standard sequence without any assistance, prompted only with the name of the view to obtain in standard American Society of Echocardiography/Society of Cardiovascular Anesthesiologists nomenclature.
The sequence of the entire examination and the final images obtained by the expert and trainee were recorded using an Epiphan DVI2 USB3 screen capture system (Epiphan Systems Inc., Ottawa, Canada) and a Mac Mini (Apple Inc., Cupertino, CA) computer. The analog video output from the TEE machine (iE33; Philips Healthcare, Andover, MA) underwent digital conversion for display on the Mac Mini computer. The entire sequence of the TEE examination was recorded in real time from start to finish. The expert and novice examinations were not labeled as such on the screen and were coded and stored digitally and anonymously.
Another expert (R.M.) evaluated the examinations in a blinded manner. The expert reviewed the video captures and graded the examinations for the time required to acquire an image, the number of probe transitions used to obtain the image, and the quality of the final image compared with an ideal view. The time was counted from start to final image. The number of probe transitions was determined by counting the number of screen transitions from the reference image (i.e., the ME-4C) to the desired image. The image quality was graded on six parameters, each scored as 1 (yes) or 0 (no):
  1. Anatomic detail: assessed by considering the ease of identifying the outline of cardiac and vascular tissues and cavities;
  2. Absence of artifacts: artifacts, including dropout, acoustic shadowing, and reverberation, were noted if present in the target image;28
  3. Omniplane angle used: the omniplane angle used to achieve each target image was compared with those presented in the American Society of Echocardiography/Society of Cardiovascular Anesthesiologists guidelines and considered appropriate if within 10 degrees in either direction of the published standard;29
  4. Stability of target image: defined as the ability to hold the target image, once achieved, in a frame for at least 3 s;
  5. Complete inclusion of structures of interest in the frame; and
  6. Composition of images similar to reference standards: to determine complete inclusion of structures of interest and the image composition criteria, the reviewer compared each target image with the corresponding image in the American Society of Echocardiography/Society of Cardiovascular Anesthesiologists guidelines.29
The total of the scores for the six parameters determined the overall image quality score; of a maximum of 6 points, 0 to 2 points was considered poor, 3 to 4 points moderate, and 5 to 6 points high quality.
Grading for the time required to acquire the image, the number of probe transitions used to obtain the image, and the quality of the final image was done for each target image and averaged for each examination.
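For illustration, the six-parameter quality score and its banding can be tallied as follows. The parameter names are paraphrases of the article's criteria, not its exact labels:

```python
# Paraphrased names for the six binary quality criteria described above;
# each is scored 1 (yes) or 0 (no) by the blinded reviewer.
QUALITY_PARAMETERS = (
    "anatomic_detail", "absence_of_artifacts", "omniplane_angle_appropriate",
    "image_stability", "structures_fully_included", "composition_matches_reference",
)

def image_quality(scores):
    """Sum the six binary scores and map the total to the article's bands:
    0-2 poor, 3-4 moderate, 5-6 high quality."""
    total = sum(scores[p] for p in QUALITY_PARAMETERS)
    if total <= 2:
        band = "poor"
    elif total <= 4:
        band = "moderate"
    else:
        band = "high"
    return total, band
```

An image meeting, say, three of the six criteria would score 3 and be rated moderate quality, matching the median rating the novices received in the clinical assessment.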
Statistical Analysis
We first assessed the kinematic measures to determine whether they could be used to assess the skill acquisition of the novices. The change in each of the five kinematic measures over the course of the 4 weeks was assessed using the Friedman test. Kinematic measures were averaged for all trainees for each week and divided by the number of cut planes to derive the mean value. Comparison between the first- and final-week evaluations was performed using the Wilcoxon rank sum test, and comparison between the novices and the experts using the Mann–Whitney U test. We then sought to identify which kinematic measures could independently discriminate between expert and novice. Because the assumptions for discriminant analysis were violated, we used ordinal regression to assess this. The categorization of novice (first week or fourth week) and expert was entered as the dependent variable, and the kinematic variables as covariates. A logit function was used for the final model. The internal consistency of the training among novices was assessed using the Intra-Class Correlation Coefficient.
Data are presented as mean ± SD, median (interquartile range), or proportion of the group, as appropriate. Statistical significance was determined using two-tailed testing at the 0.05 level. Data analysis was performed using PASW Statistics 18.0 (International Business Machines Corporation, Armonk, NY).
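To illustrate the primary longitudinal comparison, the Friedman chi-square statistic can be computed from each trainee's weekly values of a kinematic measure. This is a bare-bones sketch: ties receive average ranks, but the tie correction applied by standard statistical packages is omitted:

```python
def friedman_statistic(data):
    """Friedman chi-square for n subjects (rows) x k conditions (columns).

    Here a row would hold one trainee's four weekly values of a kinematic
    measure. Ties within a row get average ranks; the tie correction of
    the statistic itself is omitted for brevity.
    """
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # Rank the row's values (1 = smallest), averaging tied runs.
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # Chi-square approximation: 12 / (n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
```

With 4 weekly conditions the statistic is referred to a chi-square distribution with 3 degrees of freedom; identical values every week give a statistic of 0, and a strictly monotone weekly trend in every trainee gives the maximum value n(k − 1).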
Results
A total of 34 subjects participated in the study. Twenty-six subjects completed the study (13 Post-Graduate Year 3 and 13 Post-Graduate Year 4). Eight subjects were excluded from the study due to missed training sessions, leaving 18 echo-naive subjects for kinematic analysis. In addition to these 18 subjects, eight trainees who also completed the course were enrolled in the study for assessment of clinical performance. Five expert examiners participated as the comparison group.
Over the course of the 4 weeks, the novices had a shorter delay before starting image acquisition (lead time), took less time to complete each image (total time), had fewer high-velocity movements (peak movements), traveled a more direct path to an image (path length), and had a smaller area under the curve (time–distance multiple) (P < 0.01 for all; fig. 6). We also compared kinematic values between trainees and experts, specifically comparing the trainees’ starting (week 1) and final (week 4) measures with the values of the experts. All five measures of image acquisition were significantly greater among trainees at the end of their first week compared with the measures of the experts (P < 0.01 for all) but had approached the values of the experts by the end of the 4-week program (P > 0.05 for all) (table 1). Ordinal logistic regression was performed to identify which kinematic measures best discriminate between the expert and the novice; only peak movements and path length were statistically significant (table 2).
Table 1.
Comparisons of Kinematic Measures for Trainees at Week 1, Trainees at Week 4, and Experts
Table 2.
Results of Ordinal Regression of Kinematic Measures
Fig. 6.
Comparison of kinematic measures between experts and trainees. The kinematic measures (A, total time; B, lead time; C, path length; D, peak movements; and E, time–distance multiple) represent values calculated from data exported from the simulator during an evaluation (see Kinematic Analysis under Materials and Methods for description). Data are a composite of all 13 images acquired during the simulated examination. The measurement periods are the week of evaluation, performed at the end of the second hands-on session. Comparison among subjects was performed by Friedman test. All five measures showed a statistically significant reduction over the course of 4 weeks (P < 0.01 for all). The expert cohort is included for reference and is not statistically compared here.
Clinical Performance
In the intraoperative examinations, all the novices were able to obtain the requested views without assistance; however, the images were rated as moderate quality (median, 3 of 6; interquartile range, 3 to 4). Compared with the expert examinations, the novices required more time to complete the examination (199 s [193 to 208] vs. 87 s [83 to 16]; P = 0.002) and made more transitions during the examination (43 [36 to 53] vs. 21 [20 to 23]; P = 0.004). We found significant consistency among novices in the number of transitions required to obtain a view (intraclass correlation coefficient, 0.77 [95% CI, 0.29 to 0.97]; P = 0.005), but not in the time required for image acquisition (intraclass correlation coefficient, 0.17; P = 0.55).
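The consistency finding rests on an intraclass correlation coefficient. As an illustration only, the sketch below computes a one-way random-effects ICC(1,1) with NumPy; the matrix of transition counts is invented, and the study may well have used a different ICC model.

```python
import numpy as np

# Hypothetical data: rows = views (targets), columns = novices;
# each entry = number of transitions that novice needed for that view.
scores = np.array([[ 5.,  6.,  5.,  7.],
                   [ 9.,  8., 10.,  9.],
                   [ 3.,  4.,  3.,  3.],
                   [12., 11., 13., 12.]])
n, k = scores.shape

grand = scores.mean()
row_means = scores.mean(axis=1)

# One-way random-effects ICC(1,1): between-target variance relative
# to within-target (between-rater) variance.
ss_between = k * np.sum((row_means - grand) ** 2)
ss_within = np.sum((scores - row_means[:, None]) ** 2)
ms_between = ss_between / (n - 1)
ms_within = ss_within / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

With the toy data above, the novices agree closely on which views need many transitions, so the ICC is near 1; uncorrelated timing data, by contrast, would drive it toward 0, matching the pattern reported in the text.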
Discussion
The results of our study demonstrate that an integrated simulation-based educational curriculum for TEE training produced a significant improvement in the manual skills of echo-naive trainees. Using the data exported from the simulator, we developed several kinematic measures, suggesting that it may be possible to objectively quantify the acquisition of manual skills on the TEE simulator. We found that these kinematic measures improved over the course of the 4 weeks of training and that, by the final week, the novices were performing the TEE examination in a manner similar to the experts. Finally, we found that two of these measures were most important in distinguishing between expert and novice: the number of rapid movements or transitions and the total distance travelled by the TEE probe. This suggests that among the traits differentiating experts from novice learners are the abilities to manipulate the TEE probe smoothly and efficiently to acquire an image.
The value of this simulator-based training is demonstrated by the fact that the eight novices, who had no previous experience, were able to perform an actual perioperative TEE examination. Although the images were of moderate quality compared with those of an expert, the novices acquired all the images in less than 3.5 min. Furthermore, we demonstrated that the simulator-based training resulted in the novices performing the TEE examination in a consistent pattern, as judged by the number of transitions made to acquire each image. We found little consistency in the time required to acquire each image; it would appear that speed is a trait gained with more practice. Our impression is that this training shortens the clinical learning curve, but further studies are needed to demonstrate transferability of skills from the simulator to live patients.
The inclusion of a virtual reality TEE simulator can potentially improve upon the existing paradigm for teaching basic echocardiography. With its focus on the trainee rather than the patient, and with no consequences of failure, simulator-based training can shorten the early learning curve for trainees. Faster acquisition of basic proficiency would allow more opportunities to teach advanced material within an identical timeframe. Furthermore, kinematic analysis of motion during simulated examinations might also help with assessing manual proficiency, which is currently a complex task.6,7,9,25,27,30–36  We believe that the kinematic measures we developed may be examples of novel information that can be acquired with simulator-based training. Although our study is exploratory, we hope that it is a step toward improving future training programs. Similarly, this analysis might be useful in identifying trainees who require more instruction and in improving the performance of trainers.30
Traditionally, success in task completion during manual skills training has been evaluated by measuring time or the number of errors.14,34  These end point–based parameters may be inadequate to capture the subtle differences in quality of motion between experts and novices.30,33,37–39  Two individuals can be indistinguishable in time to task completion and number of errors yet have completely different kinematics.38  Motion analysis might also assess automaticity (i.e., task performance without significant concentration), which is a strong predictor of transferability of simulator skills to actual tasks.14,37–41  For example, the improvements in lead time and peak movements in our study imply that with repetition, probe manipulation by trainees becomes smoother and more intuitive. The differences identified between novices and experts in peak movements and path length suggest that experts have greater economy of movement than trained novices. However, because of the small size of our study, we could not draw definitive conclusions from this finding. We postulate that the simulator might be used both to objectively monitor the progression of manual dexterity and, possibly, as a testing method.1  Furthermore, when working with experts, simulation has the potential to improve skills by presenting very rare pathologies for examination; we plan to explore the impact of simulation on experts now that simulators can also provide training in pathologic states.
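The peak-movements measure counts high-velocity probe excursions; the legend of figure 5 quotes a velocity threshold of 0.2 cm/s. A minimal sketch of such a count follows, assuming each upward crossing of the threshold marks one peak movement (the speed trace is hypothetical, and the exact counting rule used in the study is an assumption):

```python
import numpy as np

# Hypothetical probe speeds (cm/s) sampled during one image acquisition.
speed = np.array([0.0, 0.35, 0.1, 0.05, 0.6, 0.25, 0.1, 0.0])

THRESHOLD = 0.2  # cm/s, the velocity cutoff quoted in the figure 5 legend

# Count distinct excursions above the threshold rather than individual
# samples: a "peak movement" begins when speed crosses the cutoff upward.
above = speed > THRESHOLD
peak_movements = int(np.sum(above[1:] & ~above[:-1]) + int(above[0]))
```

Counting threshold crossings rather than above-threshold samples keeps the measure insensitive to sampling rate, which matters when comparing acquisitions of different durations.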
One criticism is that our results may not generalize to other trainees: the volunteers (i.e., residents) in our study might be a self-selected, enthusiastic cohort. In our study, the trainees were expected to complete the online modules on their own time. Furthermore, we faced a logistic challenge with the biweekly hands-on sessions, as evidenced by the loss of eight subjects from the analysis. Voluntary participation has been identified as a major factor in the success of similar surgical programs.42–45  Complex motor skills are learned in several stages of development: cognition, association, and finally autonomous motor behavior.24,46  Once behavior becomes automatic, the likelihood of transferability is greater, as long as the simulator is of high quality.14,37–41  Future investigations must aim to determine whether simulator-based training facilitates the development of clinical skills.
Our study was based on acquisition of images of “normal” cardiac anatomy. Whether this training transfers to identification of pathology is not known. Clinical echocardiographic imaging involves significant changes in ultrasound machine settings for image optimization, which cannot be simulated due to technological limitations.
During grading of the intraoperative examinations, bias was minimized by having the grader score the images offline in a blinded manner. However, based on elements of probe manipulation that we were unable to quantify, the grader might have unconsciously differentiated the expert from the novices. Therefore, there may have been bias in her scoring for which we could not control.
A final limitation of our study concerns its statistical power, which may have been inadequate to evaluate some points of interest. For example, we could not determine whether some individual cut planes were more challenging for novices to learn than others. Furthermore, because of our small sample size, we are at risk of type II (β) error: although we did not identify differences between experts and trainees at 4 weeks in some measures, it is still possible that these groups are not equivalent. Our ability to enroll trainees was limited by our access to echo-naive trainees in our training program; enrolling more subjects would require multicenter enrollment. More work is required to corroborate these findings.
In conclusion, it is feasible to approach TEE training with a curriculum-based program and hands-on training with a TEE simulator. Kinematic analysis using time and our empirically derived metrics may be used to track the progress of motor skill acquisition with repetitive experience over a specified time. This information might be used to improve the quality of instruction by identifying the areas of deficiency in manual training. Echo-naive trainees who went through this training were able to perform an unassisted intraoperative TEE examination.
Acknowledgments
Support was provided solely from institutional and/or departmental sources.
Competing Interests
The authors declare no competing interests.
References
Shakil, O, Mahmood, F, Matyal, R Simulation in echocardiography: An ever-expanding frontier.. J Cardiothorac Vasc Anesth. (2012). 26 476–85 [Article] [PubMed]
Bose, R, Matyal, R, Panzica, P, Karthik, S, Subramaniam, B, Pawlowski, J, Mitchell, J, Mahmood, F Transesophageal echocardiography simulator: A new learning tool.. J Cardiothorac Vasc Anesth. (2009). 23 544–8 [Article] [PubMed]
Antiel, RM, Van Arendonk, KJ, Reed, DA, Terhune, KP, Tarpley, JL, Porterfield, JR, Hall, DE, Joyce, DL, Wightman, SC, Horvath, KD, Heller, SF, Farley, DR Surgical training, duty-hour restrictions, and implications for meeting the accreditation council for graduate medical education core competencies.. Arch Surg. (2012). 147 536–41 [Article] [PubMed]
Nadel, FM, Lavelle, JM, Fein, JA, Giardino, AP, Decker, JM, Durbin, DR Teaching resuscitation to pediatric residents: The effects of an intervention.. Arch Pediatr Adolesc Med. (2000). 154 1049–54 [Article] [PubMed]
Dumont, TM, Tranmer, BI, Horgan, MA, Rughani, AI Trends in neurosurgical complication rates at teaching vs nonteaching hospitals following duty-hour restrictions.. Neurosurgery. (2012). 71 1041–6; discussion 1046 [Article] [PubMed]
Woo, HS, Kim, WS, Ahn, W, Lee, DY, Yi, SY Haptic interface of the KAIST-Ewha colonoscopy simulator II.. IEEE Trans Inf Technol Biomed. (2008). 12 746–53 [Article] [PubMed]
Woo, HS, Kim, WS, Ahn, W, Lee, DY, Yi, SY Improved haptic interface for colonoscopy simulation.. Conf Proc IEEE Eng Med Biol Soc. (2007). 2007 1253–6 [PubMed]
Yi, SY, Ryu, KH, Na, YJ, Woo, HS, Ahn, W, Kim, WS, Lee, DY Improvement of colonoscopy skills through simulation-based training.. Stud Health Technol Inform. (2008). 132 565–7 [PubMed]
Walsh, CM, Sherlock, ME, Ling, SC, Carnahan, H Virtual reality simulation training for health professions trainees in gastrointestinal endoscopy.. Cochrane Database Syst Rev. (2012). 6 CD008237 [PubMed]
Elvevi, A, Cantù, P, Maconi, G, Conte, D, Penagini, R Evaluation of hands-on training in colonoscopy: Is a computer-based simulator useful?. Dig Liver Dis. (2012). 44 580–4 [Article] [PubMed]
Vassiliou, MC, Dunkin, BJ, Marks, JM, Fried, GM FLS and FES: Comprehensive models of training and assessment.. Surg Clin North Am. (2010). 90 535–58 [Article] [PubMed]
Hung, AJ, Zehnder, P, Patil, MB, Cai, J, Ng, CK, Aron, M, Gill, IS, Desai, MM Face, content and construct validity of a novel robotic surgery simulator.. J Urol. (2011). 186 1019–24 [Article] [PubMed]
McCaslin, AF, Aoun, SG, Batjer, HH, Bendok, BR Enhancing the utility of surgical simulation: From proficiency to automaticity.. World Neurosurg. (2011). 76 482–4 [Article] [PubMed]
Stefanidis, D, Scerbo, MW, Montero, PN, Acker, CE, Smith, WD Simulator training to automaticity leads to improved skill transfer compared with traditional proficiency-based training.. Ann Surg. (2012). 255 30–7 [Article] [PubMed]
Neelankavil, J, Howard-Quijano, K, Hsieh, TC, Ramsingh, D, Scovotti, JC, Chua, JH, Ho, JK, Mahajan, A Transthoracic echocardiography simulation is an efficient method to train anesthesiologists in basic transthoracic echocardiography skills.. Anesth Analg. (2012). 115 1042–51 [Article] [PubMed]
Våpenstad, C, Buzink, SN Procedural virtual reality simulation in minimally invasive surgery.. Surg Endosc. (2013). 27 364–77 [Article] [PubMed]
Gallagher, AG, Jordan-Black, JA, O’Sullivan, GC Prospective, randomized assessment of the acquisition, maintenance, and loss of laparoscopic skills.. Ann Surg. (2012). 256 387–93 [Article] [PubMed]
Rosenthal, ME, Ritter, EM, Goova, MT, Castellvi, AO, Tesfay, ST, Pimentel, EA, Hartzler, R, Scott, DJ Proficiency-based fundamentals of laparoscopic surgery skills training results in durable performance improvement and a uniform certification pass rate.. Surg Endosc. (2010). 24 2453–7 [Article] [PubMed]
Scott, DJ, Cendan, JC, Pugh, CM, Minter, RM, Dunnington, GL, Kozar, RA The changing face of surgical education: Simulation as the new paradigm.. J Surg Res. (2008). 147 189–93 [Article] [PubMed]
Derossis, AM, Fried, GM, Abrahamowicz, M, Sigman, HH, Barkun, JS, Meakins, JL Development of a model for training and evaluation of laparoscopic skills.. Am J Surg. (1998). 175 482–7 [Article] [PubMed]
Fried, GM, Derossis, AM, Bothwell, J, Sigman, HH Comparison of laparoscopic performance in vivo with performance measured in a laparoscopic simulator.. Surg Endosc. (1999). 13 1077–81; discussion 1082 [Article] [PubMed]
Fried, GM, Feldman, LS, Vassiliou, MC, Fraser, SA, Stanbridge, D, Ghitulescu, G, Andrew, CG Proving the value of simulation in laparoscopic surgery.. Ann Surg. (2004). 240 518–25; discussion 525–8 [Article] [PubMed]
Scott, DJ, Dunnington, GL The new ACS/APDS Skills Curriculum: Moving the learning curve out of the operating room.. J Gastrointest Surg. (2008). 12 213–21 [Article] [PubMed]
Stefanidis, D Optimal acquisition and assessment of proficiency on simulators in surgery.. Surg Clin North Am. (2010). 90 475–89 [Article] [PubMed]
Platts, DG, Humphries, J, Burstow, DJ, Anderson, B, Forshaw, T, Scalia, GM The use of computerised simulators for training of transthoracic and transoesophageal echocardiography. The future of echocardiographic training?. Heart Lung Circ. (2012). 21 267–74 [Article] [PubMed]
Matyal, R, Bose, R, Warraich, H, Shahul, S, Ratcliff, S, Panzica, P, Mahmood, F Transthoracic echocardiographic simulator: Normal and the abnormal.. J Cardiothorac Vasc Anesth. (2011). 25 177–81 [Article] [PubMed]
Fried, GM FLS assessment of competency using simulated laparoscopic tasks.. J Gastrointest Surg. (2008). 12 210–2 [Article] [PubMed]
Miller, JP, Perrino, ACJr, Hillel, Z Perrino, ACJr, Reeves, ST Chapter 20: Common artifacts and pitfalls of clinical echocardiography. A Practical Approach to Transesophageal Echocardiography. (2008). 2nd edition Philadelphia Lippincott Williams & Wilkins 417–34
Shanewise, JS, Cheung, AT, Aronson, S, Stewart, WJ, Weiss, RL, Mark, JB, Savage, RM, Sears-Rogan, P, Mathew, JP, Quiñones, MA, Cahalan, MK, Savino, JS ASE/SCA guidelines for performing a comprehensive intraoperative multiplane transesophageal echocardiography examination: Recommendations of the American Society of Echocardiography Council for Intraoperative Echocardiography and the Society of Cardiovascular Anesthesiologists Task Force for Certification in Perioperative Transesophageal Echocardiography.. Anesth Analg. (1999). 89 870–84 [PubMed]
Stefanidis, D, Scott, DJ, Korndorffer, JRJr Do metrics matter? Time versus motion tracking for performance assessment of proficiency-based laparoscopic skills training.. Simul Healthc. (2009). 4 104–8 [Article] [PubMed]
Stefanidis, D, Korndorffer, JRJr, Black, FW, Dunne, JB, Sierra, R, Touchard, CL, Rice, DA, Markert, RJ, Kastl, PR, Scott, DJ Psychomotor testing predicts rate of skill acquisition for proficiency-based laparoscopic skills training.. Surgery. (2006). 140 252–62 [Article] [PubMed]
Salkini, MW, Doarn, CR, Kiehl, N, Broderick, TJ, Donovan, JF, Gaitonde, K The role of haptic feedback in laparoscopic training using the LapMentor II.. J Endourol. (2010). 24 99–102 [Article] [PubMed]
Pfau, PR Colonoscopy and kinematics: What is your path length and tip angulation?. Gastrointest Endosc. (2011). 73 322–4 [Article] [PubMed]
Obstein, KL, Patil, VD, Jayender, J, San José Estépar, R, Spofford, IS, Lengyel, BI, Vosburgh, KG, Thompson, CC Evaluation of colonoscopy technical skill levels by use of an objective kinematic-based system.. Gastrointest Endosc. (2011). 73 315–21, 321.e1 [Article] [PubMed]
Bose, RR, Matyal, R, Warraich, HJ, Summers, J, Subramaniam, B, Mitchell, J, Panzica, PJ, Shahul, S, Mahmood, F Utility of a transesophageal echocardiographic simulator as a teaching tool.. J Cardiothorac Vasc Anesth. (2011). 25 212–5 [Article] [PubMed]
Panait, L, Akkary, E, Bell, RL, Roberts, KE, Dudrick, SJ, Duffy, AJ The role of haptic feedback in laparoscopic simulation training.. J Surg Res. (2009). 156 312–6 [Article] [PubMed]
Stefanidis, D, Scerbo, MW, Sechrist, C, Mostafavi, A, Heniford, BT Do novices display automaticity during simulator training?. Am J Surg. (2008). 195 210–3 [Article] [PubMed]
Stefanidis, D, Scerbo, MW, Korndorffer, JRJr, Scott, DJ Redefining simulator proficiency using automaticity theory.. Am J Surg. (2007). 193 502–6 [Article] [PubMed]
Cristancho, SM, Hodgson, AJ, Panton, N, Meneghetti, A, Qayumi, K Feasibility of using intraoperatively-acquired quantitative kinematic measures to monitor development of laparoscopic skill.. Stud Health Technol Inform. (2007). 125 85–90 [PubMed]
Shiffrin, RM, Schneider, W Automatic and controlled processing revisited.. Psychol Rev. (1984). 91 269–76 [Article] [PubMed]
Stylopoulos, N, Vosburgh, KG Assessing technical skill in surgery and endoscopy: A set of metrics and an algorithm (C-PASS) to assess skills in surgical and endoscopic procedures.. Surg Innov. (2007). 14 113–21 [Article] [PubMed]
Stefanidis, D, Heniford, BT The formula for a successful laparoscopic skills curriculum.. Arch Surg. (2009). 144 77–82; discussion 82 [Article] [PubMed]
Stefanidis, D, Acker, CE, Swiderski, D, Heniford, BT, Greene, FL Challenges during the implementation of a laparoscopic skills curriculum in a busy general surgery residency program.. J Surg Educ. (2008). 65 4–7 [Article] [PubMed]
Chang, L, Petros, J, Hess, DT, Rotondi, C, Babineau, TJ Integrating simulation into a surgical residency program.. Surg Endosc. (2007). 21 418–21 [Article] [PubMed]
Twijnstra, AR, Kolkman, W, Trimbos-Kemper, GC, Jansen, FW Implementation of advanced laparoscopic surgery in gynecology: National overview of trends.. J Minim Invasive Gynecol. (2010). 17 487–92 [Article] [PubMed]
Cox, M, Irby, DM, Reznick, RK, MacRae, H Teaching surgical skills—Changes in the wind.. N Engl J Med. (2006). 355 2664–9 [Article] [PubMed]
Appendix.
Description of the Training Program
Fig. 1.
Simulator interface during trainee evaluation. Current probe position is seen at the top left (A). The image obtained by the trainee appears in the right half of the screen (B), whereas the augmented reality image (C) and the idealized target cut plane (TCP) (D) are shown in the bottom left panel.
Fig. 2.
Probe motion. Roll is motion in the horizontal (x) plane, pitch is motion in the vertical (y) plane, and yaw is motion in relation to the height of the probe in the thorax, or the (z) plane. Image of the simulator was reproduced, with permission, from CAE Healthcare, Montreal, Quebec, Canada.
Fig. 3.
Path length. Representative three-dimensional path length of a trainee at week 1 (A) and week 4 (B) and of an expert (C) for movement of the probe from the mid-esophageal four-chamber view to the mid-esophageal long-axis view. The trainee’s first session demonstrates a high number of misdirected probe movements and a long path length to the final image position. The trainee’s fourth session shows more efficient probe movement, with fewer misdirections and a shorter path length that approximates the movement pattern of the expert. The “Start” point is the mid-esophageal four-chamber view; the “End” point is where the subject believed the final image (the mid-esophageal long-axis view) was.
Fig. 4.
Lead time. Representative comparison of lead times for a single trainee at week 1 (A), week 2 (B), week 3 (C), and week 4 (D). Lead times decreased with each successive week of training. For each graph in this figure, the run was truncated at 120 s to allow clear visualization of the lead time and does not represent the total time for image acquisition.
Fig. 5.
Peak motion. Representative comparison of peak motion of trainees at week 1 (A), week 2 (B), week 3 (C), and week 4 (D). The number of high-velocity movements required to obtain an image decreased for trainees over time. The velocity threshold to count the probe movement was 0.2 cm/s.