Original Article

Developing and Testing Competency Levels for Laparoscopic Skills Training

James R. Korndorffer Jr, MD; Daniel J. Scott, MD; Rafael Sierra, MD; William C. Brunner, MD; J. Bruce Dunne, PhD; Douglas P. Slakey, MD; Michael C. Townsend, MD; Robert L. Hewitt, MD

Author Affiliations: Department of Surgery, Tulane University Health Sciences Center, New Orleans, La.


Arch Surg. 2005;140(1):80-84. doi:10.1001/archsurg.140.1.80.

Hypothesis  Expert levels can be developed for use as training end points for a basic video-trainer skills curriculum, and the levels developed will be suitable for training.

Design  Fifty subjects with minimal prior simulator exposure were enrolled using an institutional review board–approved protocol. As a measure of baseline performance, medical students (n = 11) and surgery residents (n = 39) completed 3 trials on each of 5 validated video-trainer tasks. Four board-certified surgeons established as laparoscopic experts (with more than 250 basic and more than 50 advanced cases) performed 11 trials on each of the 5 tasks. The mean score was determined and outliers (>2 SDs) were trimmed; the trimmed mean was used as the competency level. Baseline performance of each subject was compared with the competency level for each task.

Setting  All research was performed in a laparoscopic skills training and simulation laboratory.

Participants  Medical students, surgical residents, and board-certified surgeons.

Main Outcome Measures  Expert scores based on completion time and the number of subjects achieving these scores at baseline testing.

Results  For all tasks combined, the competency level was reached by 6% of subjects by the third trial; 73% of these subjects were chief residents, and none were medical students.

Conclusions  These data suggest that the competency level is suitably challenging for novices but is achievable for subjects with more experience. Implementation of this performance criterion may allow trainees to reliably achieve maximal benefit while minimizing unnecessary training.


Simulator-based training holds great promise in enhancing surgical education and providing a safe, cost-effective means for practicing techniques prior to their use in the operating room. Because of the inherent difficulties associated with laparoscopy, laparoscopic simulators have become quite popular.1-5 The need for additional training outside of the operating room is underscored by the fact that many surgeons feel inadequately prepared to perform advanced laparoscopic procedures upon completion of residency training.6-9

Skills laboratory training has proven effective in providing skills that are transferable to real operations.10-12 However, in the era of the 80-hour work week, curricula must be not only effective but also efficient; training should maximize benefit while expending the least amount of time. Current methods are highly variable, and few standardized training protocols exist. Most curricula use an arbitrary duration13,14 or a predetermined number of repetitions3,15-17 as training end points. Since the rate of learning is variable and may be highly dependent on innate visuospatial abilities,18-20 arbitrary end points may be neither efficient nor maximally beneficial: some participants may train longer than necessary, and others may not train long enough. An ideal training curriculum would account for variations in ability and tailor the time needed for each individual to acquire a given skill. The goal should be for all individuals to be competent upon completion of training while not spending wasteful hours performing repetitive motions without further benefit.

To date, very little work has been done using performance-based standards. Seymour et al12 reported good results using a competency standard derived from expert levels for a single virtual reality task. Fraser et al21 reported pass/fail scores for the McGill Inanimate System for Training and Evaluation of Laparoscopic Skills (MISTELS) video-trainer tasks for assessment purposes. No standards are available for the Southwestern video-trainer stations, with which our laboratory is currently equipped, and no large studies have been performed to establish a comprehensive set of standards for use as training end points. The purpose of this study was to develop and test the suitability of expert levels for a basic video-trainer skills curriculum.

All research was performed in the Tulane Center for Minimally Invasive Surgery simulator and training laboratory using a 6-station video-trainer simulator (Karl Storz Endoscopy, Culver City, Calif) (Figure 1). Each module includes a 21 × 27.5-cm video monitor (Sony Corporation of America, New York, NY), a Xenon-nova light source, a Telecam SL camera system, a Hopkins II laparoscope, and a Plexiglas box trainer equipped with 1 of 5 previously validated tasks (Karl Storz Endoscopy). The exercises included (in order of increasing difficulty) bean drop, running string, checkerboard, block move, and suture foam (Figure 1), as previously described.10 Scoring is based on completion time (seconds) using a stopwatch (Fisher Scientific, Hampton, NH). Two research assistants provided all subjects with a brief tutorial concerning the rules of each task and supervised testing but did not give active instruction. Prior to testing, no practice was allowed, except for a single unscored trial on suture foam, so that each subject could become familiar with the Endostitch device (US Surgical Corp, Norwalk, Conn).

Figure 1. Five-station video-trainer simulators with 5 Southwestern tasks (from top to bottom): bean drop, running string, checkerboard, block move, and suture foam.

Board-certified surgeons (n = 4), established as laparoscopic experts (with more than 250 basic and more than 50 advanced cases based on personal case logs), volunteered to develop competency levels. One expert had extensive previous exposure to the simulator while the other 3 had no previous exposure. All 4 experts performed 11 consecutive repetitions on each of the 5 tasks. The mean and SD for all scores were determined using Microsoft Excel (Microsoft Inc, Redmond, Wash). A trimmed mean was obtained by removing outliers (scores more than 2 SDs above or below the mean) and recalculating. The trimmed mean was rounded to the nearest second and defined as the competency level.
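The trimming procedure described above can be sketched in a few lines. The completion times below are hypothetical illustration values, not data from the study; only the procedure (trim beyond 2 SDs, recompute the mean, round to the nearest second) follows the text.

```python
import statistics

def competency_level(scores, sd_cutoff=2):
    """Drop scores more than `sd_cutoff` SDs from the mean, then return
    the recalculated (trimmed) mean rounded to the nearest second."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    kept = [s for s in scores if abs(s - mean) <= sd_cutoff * sd]
    return round(statistics.mean(kept))

# Hypothetical completion times (seconds) for one task; the 95-second
# trial is an outlier and gets trimmed before the mean is recomputed.
scores = [52, 48, 55, 50, 47, 53, 49, 51, 95, 50, 46]
print(competency_level(scores))  # 50
```

In the study this calculation pooled 11 trials from each of 4 experts per task; only about 6% of raw scores fell outside the 2-SD window.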

Medical students (n = 11, third year) and surgery residents (n = 39, postgraduate year 1 [PGY-1] to postgraduate year 5 [PGY-5]) were enrolled in a protocol approved by the Tulane University institutional review board, and informed consent and demographic data were obtained. No student or resident had previous experience with the video-trainer tasks; all residents had previously been tested on the Minimally Invasive Surgical Trainer, Virtual Reality (MIST VR) simulator (Mentice Inc, San Diego, Calif), with less than 2 hours of previous simulator exposure. Each subject completed 3 consecutive repetitions on each of the 5 tasks as a measure of baseline performance. The best score for each subject on each task was compared graphically with the competency levels using Sigma Plot software (SPSS Inc, Chicago, Ill).

Medical students (n = 2, second year) volunteered as pilot subjects to ensure that the competency levels were attainable in a reasonable amount of time. Both subjects had extensive prior MIST VR experience but no video-trainer exposure and no operative experience. Both subjects practiced all 5 video-trainer tasks until the competency levels were achieved on 2 consecutive repetitions.

Four expert surgeons completed 11 repetitions of each task over a 2-hour period. All were male and right-hand dominant. Expert scores and competency levels are shown in Table 1. Only 6% of the raw scores were considered outliers and trimmed. The expert with extensive exposure to video-trainer tasks (expert 1) achieved the best mean score in only 2 of the 5 tasks.

Eleven students and 39 residents completed 3 repetitions of each task over a 1-hour period. Seventy-five percent were male, and 90% were right-hand dominant. Graphs comparing each subject’s best score with the competency levels are shown in Figure 2. For all tasks combined, 6% of subjects reached the competency level. As the task difficulty increased, the number of subjects achieving the levels decreased (Table 2). Seventy-three percent of the successful attempts were made by PGY-5 residents, 13% by PGY-4 residents, 7% by PGY-3 residents, 0% by PGY-2 residents, and 7% by PGY-1 residents; no medical student achieved the competency level.

Figure 2. Comparison of each subject’s best baseline score with the competency level for all tasks. MS-3 indicates third-year medical students; PGY, postgraduate-year residents.

Table 2. Number of Subjects Reaching Competency* During Baseline Testing†

Two pilot-subject medical students achieved the competency levels on 2 consecutive repetitions for all 5 video-trainer tasks. The mean time to reach competency was 163 minutes, and the mean number of attempts was 90.

Despite the recent surge in popularity of simulator-based training, there is no consensus as to the optimal simulator type or curriculum design. While virtual-reality technology holds great promise, trainees may prefer the video-trainer platform because of issues concerning tactile feedback and imaging fidelity.22 The Southwestern stations include 3 tasks modified from Rosser et al3,15 so that an assistant is unnecessary, as well as 2 additional tasks that teach depth perception and Endostitch suturing. Improvement in operative performance has been demonstrated after 5 hours of practice using these stations.23 A subsequent study calculated plateaus in performance and concluded that 30 to 35 repetitions may be a more suitable end point than 5 hours of training.17 However, calculating performance plateaus for each trainee is cumbersome and not practical for real-time analyses. Moreover, using plateaus may not fully motivate trainees to reach their full potential. A goal-oriented approach may enhance efficiency and maximize learning, although this has not yet been studied.

Performance standards have not yet been developed for most available simulators. The only study reported for a video-trainer platform used the MISTELS tasks, divided trainees into competent and noncompetent groups based on their level of clinical experience, and established a pass/fail criterion based on score distribution.21 After examining several cutoff scores, the authors identified the best pass/fail standard as having a sensitivity of 82% and a specificity of 82%; thus, 18% of the noncompetent surgeons in the study would have passed, and 18% of competent surgeons would have failed. While this represents one of the first attempts to methodically develop a performance standard, the study’s ultimate goal seems to have been to assess surgeons in practice (to discriminate competent surgeons from noncompetent ones), which is a very high-stakes area that will require extensive validation before being used for credentialing purposes.
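The trade-off in the pass/fail approach above can be made concrete: for any candidate cutoff score, sensitivity is the fraction of competent surgeons who pass and specificity is the fraction of noncompetent surgeons who fail. The sketch below uses made-up scores, not MISTELS data, to show how scanning cutoffs exposes that trade-off.

```python
def cutoff_performance(competent, noncompetent, cutoff):
    """For a score where higher is better, return (sensitivity, specificity):
    the fraction of competent subjects passing (score >= cutoff) and the
    fraction of noncompetent subjects failing (score < cutoff)."""
    sensitivity = sum(s >= cutoff for s in competent) / len(competent)
    specificity = sum(s < cutoff for s in noncompetent) / len(noncompetent)
    return sensitivity, specificity

# Hypothetical total scores for two experience-defined groups
competent = [410, 385, 440, 395, 370, 455, 420, 330, 405, 390]
noncompetent = [250, 310, 270, 390, 220, 280, 300, 240, 260, 335]

# Raising the cutoff trades sensitivity for specificity
for cutoff in (300, 350, 400):
    print(cutoff, cutoff_performance(competent, noncompetent, cutoff))
```

With these illustrative numbers, a cutoff of 350 gives sensitivity 0.9 and specificity 0.9; Fraser et al arrived at 82%/82% by the same kind of search over candidate cutoffs.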

Our goal was slightly different; we wanted to develop a performance standard that could be used for training purposes. Our criterion was that the standard would represent a level of mastery achieved by experts for any given skill set but still be attainable by students and residents with practice. With little precedent to rely on, we tried a number of methods and chose those described in this article as optimal. Rather than recruiting a large number of practicing surgeons (who may have had wide variability in their abilities), we identified 4 members of our faculty who not only met our criterion for a minimum number of basic and advanced cases but were also known to be excellent laparoscopists and to have suitable skill sets. To accumulate sufficient data, each expert performed 11 repetitions of 5 tasks. Even though each surgeon was handpicked, variability between individual surgeon scores for various tasks occurred (Table 1). Such variability was not surprising since an expert often makes up for a relative deficit in one skill set by having a relative surplus of another; for almost every task, a different expert achieved the best score. Despite some individual variability, very few scores needed to be trimmed to produce an overall homogeneous data set. With 206 expert data points, we felt confident that the competency levels obtained were a reasonably accurate representation of mastery for each skill.

As a gauge of suitability, we compared the competency levels with performance data of students and residents (Figure 2). They had no prior video-trainer training and minimal to no prior simulator exposure; testing represented baseline ability. If our competency levels were suitable, very few of the subjects would reach competency at baseline, and this was the case (Table 2). Moreover, the vast majority (86%) who achieved competency were senior residents (PGY-4 and PGY-5), and no medical students did so, supporting the conclusion that scores were reflective of clinical experience. The only task for which no subject demonstrated baseline competency was the suture foam drill; this was not surprising since none had previous training using the Endostitch device. Although using an Endostitch is much simpler than suturing with conventional laparoscopic needle drivers, the device still requires familiarity and practice; suture foam was thus the most difficult task.

When group scores are examined, a trend of improved performance with increased training year is noted. The performance differences seen are likely due to traditional operative experience, thus suggesting construct validity. Unfortunately, these data are an incidental finding and not robust enough to support this conclusion. Specifically, a power analysis revealed that a much larger sample size (n = 103 for 80% power at P<.05) would be needed before the issue of construct validity could be addressed. With continued accrual of subjects, this question will be addressed, but it is not the focus of this article.

The major advantage of using goal-oriented training is uniformity of the final product; all trainees should reach competency. Moreover, subjects can learn at their own speed. This method may efficiently train those with a higher innate ability and avoid unnecessary repetition while ensuring adequate training of those with slower skill development or less innate ability. The ultimate amount of improvement may also be better than the improvement after training using arbitrary duration end points, since students and residents function as competitive adult learners and thrive from having a goal to achieve. With a basic skill set already acquired, subjects may then safely enter the clinical arena and care for patients, which undoubtedly will enhance educational opportunities. If proficiency in basic skills is present, time in the operating room can be spent on learning anatomy, pathology, and operative technique instead of skill acquisition.

While this study provides much needed standards for training, additional studies are warranted. Although data from previous studies10,17,22,24,25 and preliminary work using pilot subjects in our laboratory suggest that most subjects can achieve the competency levels with practice, we have yet to determine whether these levels will be attainable for all trainees. The question then arises whether trainees who cannot reach competency should be allowed to perform procedures that require these skills. High-stakes validation will be required before this question can be answered.

An interesting corollary also exists: will the use of competency levels discourage exceptionally talented individuals from reaching their full potential? In other words, will using a competency level prematurely terminate a practice session for someone capable of surpassing an expert level with continued practice? Fortunately, because the goal is to ensure a minimum level of mastery, exceptional individuals can safely refine their skills further in either the laboratory or the operating room.

In conclusion, the competency standards developed in this study seem suitable for the purpose of performance-based training. The use of such standards should enhance efficiency and more reliably ensure competency. As we continue to enhance surgical education using simulator-based training, careful scrutiny and scientific rigor are needed to optimize training methods and to develop methods for assessment.

Correspondence: Daniel J. Scott, MD, Department of Surgery, SL-22, Tulane Center for Minimally Invasive Surgery, 1430 Tulane Ave, New Orleans, LA 70112-2699 (dscott@tulane.edu).

Accepted for Publication: August 18, 2004.

Funding/Support: Equipment was provided in part by unrestricted educational grants from Karl Storz Endoscopy, Culver City, Calif, and United States Surgical Corporation, Norwalk, Conn.

1. Ali MR, Mowery Y, Kaplan B, DeMaria EJ. Training the novice in laparoscopy: more challenge is better. Surg Endosc. 2002;16:1732-1736.
2. Derossis AM, Fried GM, Abrahamowicz M, Sigman HH, Barkun JS, Meakins JL. Development of a model for training and evaluation of laparoscopic skills. Am J Surg. 1998;175:482-487.
3. Rosser JC, Rosser LE, Savalgi RS. Objective evaluation of a laparoscopic surgical skill program for residents and senior surgeons. Arch Surg. 1998;133:657-661.
4. Wanzel K, Ward M, Reznick R. Teaching the surgical craft from selection to certification. Curr Probl Surg. 2002;39:573-659.
5. Smith CD, Farrell TM, McNatt SS, Metreveli RE. Assessing laparoscopic manipulative skills. Am J Surg. 2001;181:547-550.
6. Rattner D, Apelgren K, Eubanks W. The need for training opportunities in advanced laparoscopic surgery. Surg Endosc. 2001;15:1066-1070.
7. Park A, Witzke D, Donnelly M. Ongoing deficits in resident training for minimally invasive surgery. J Gastrointest Surg. 2002;6:501-509.
8. Chung R, Pham Q, Wojtasik L, Chari V, Chen P. The laparoscopic experience of surgical graduates in the United States. Surg Endosc. 2003;17:1792-1795.
9. Hunter J. The case for fellowships in gastrointestinal and laparoendoscopic surgery. Surgery. 2002;132:523-525.
10. Scott DJ, Bergen PC, Rege RV, et al. Laparoscopic training on bench models. J Am Coll Surg. 2000;191:272-283.
11. Hamilton EC, Scott DJ, Kapoor A, et al. Improving operative performance using a laparoscopic hernia simulator. Am J Surg. 2001;182:725-728.
12. Seymour NE, Gallagher AG, Roman SA, et al. Virtual reality training improves operating room performance. Ann Surg. 2002;236:458-463.
13. Melvin WS, Johnson JA, Ellison EC. Laparoscopic skills enhancement. Am J Surg. 1996;172:377-379.
14. Shapiro SJ, Paz-Partlow M, Daykhovsky L, Gordon LA. The use of a modular skills center for the maintenance of laparoscopic skills. Surg Endosc. 1996;10:816-819.
15. Rosser JC, Rosser LE, Salvalgi RS. Skill acquisition and assessment for laparoscopic surgery. Arch Surg. 1997;132:200-204.
16. Derossis AM, Bothwell J, Sigman HH, Fried GM. The effect of practice on performance in a laparoscopic simulator. Surg Endosc. 1998;12:1117-1120.
17. Scott DJ, Young WN, Tesfay ST, Frawley WH, Rege RV, Jones DB. Laparoscopic skills training. Am J Surg. 2001;182:137-142.
18. Gettman M, Kondraske G, Traxer O, et al. Assessment of basic human performance resources predicts operative performance of laparoscopic surgery. J Am Coll Surg. 2003;197:489-496.
19. Gallagher A, Cowie R, Crothers I, Jordan-Black J, Satava R. PicSOr: an objective test of perceptual skill that predicts laparoscopic technical skill in three initial studies of laparoscopic performance. Surg Endosc. 2003;17:1468-1471.
20. Wanzel K, Hamstra S, Caminiti M, Anastakis D, Grober E, Reznick R. Visual-spatial ability correlates with efficiency of hand motion and successful surgical performance. Surgery. 2003;134:750-757.
21. Fraser S, Klassen D, Feldman L, Ghitulescu G, Stanbridge D, Fried G. Evaluating laparoscopic skills. Surg Endosc. 2003;17:964-967.
22. Hamilton EC, Scott DJ, Fleming JB, et al. Comparison of video trainer and virtual reality training systems on acquisition of laparoscopic skills. Surg Endosc. 2002;16:406-411.
23. Scott DJ, Rege RV, Bergen PC, et al. Measuring operative performance after laparoscopic skills training. J Laparoendosc Adv Surg Tech A. 2000;10:183-190.
24. Scott DJ, Valentine RJ, Bergen PC, et al. Evaluating surgical competency using ABSITE, skill testing, and intra-operative assessment. Surgery. 2000;128:613-622.
25. Sierra R, Brunner WC, Korndorffer JR, Dunne JB, Scott DJ. Residency-wide competency using laparoscopic simulators [abstract]. Surg Endosc. In press.

