A head-to-head comparison between virtual reality and physical reality simulation training for basic skills acquisition

Constantinos Loukas · Nikolaos Nikiteas · Dimitrios Schizas · Vasileios Lahanas · Evangelos Georgiou

Received: 25 August 2011 / Accepted: 24 February 2012 / Published online: 5 April 2012
© Springer Science+Business Media, LLC 2012
Abstract

Background  This study aimed to investigate whether basic laparoscopic skills acquired with a virtual reality simulator (LapVR™) are transferable to a standard video trainer (VT) and vice versa.

Methods  Three basic tasks were considered: peg transfer, cutting, and knot-tying. The physical models were custom-built as identical copies of the virtual models. Forty-four novices were randomized into two equal groups to be trained on the LapVR™ or the VT. Each task was practiced separately 12 times. Transferability of skills from one modality to the other was assessed by performing the same task on the alternative modality before and after training (crossover assessment). Performance metrics included path length, time, and penalty score.

Results  Both groups demonstrated significant performance curves for all tasks and metrics (p < 0.05). Plateaus were statistically equivalent between the groups for each task in terms of path length and time, and across all tasks in terms of the penalty score (p > 0.05). When each group was tested on the alternative modality there was a significant improvement for all tasks and metrics (p < 0.05). Comparing the plateau performance of one group with the performance achieved on the same simulator by the other group, we found (a) no statistical difference in the penalty score (p > 0.05), (b) a statistical difference in time and path length for cutting and knot-tying (p < 0.05), and (c) an equal time performance for peg transfer (p > 0.05) but not for path length (p < 0.05).

Conclusions  Both modalities provided significant enhancement of the novices' performance. The skills learned on the LapVR™ are transferable to the VT and vice versa. However, training with one modality does not necessarily mean a performance equivalent to that achieved with the other modality.

Keywords  Surgical training · Virtual reality · Simulation · Laparoscopic surgery · Video trainer · Education

C. Loukas (✉) · N. Nikiteas · D. Schizas · V. Lahanas · E. Georgiou
Medical Physics Lab-Simulation Center, Medical School, University of Athens, 75 Mikras Asias str., 11527 Athens, Greece
e-mail: [email protected]
Over the past decades, laparoscopic surgery (LS) has become the standard technique for an increasing number of operations. Minimized risk of infection, reduced pain, shortened rehabilitation time, and better cosmetic results are some of the major benefits compared with open surgery. However, LS requires the acquisition of psychomotor skills that are different from those needed for open surgery for a number of reasons, such as the confined operating field, the fulcrum effect, the limited force feedback, and the absence of 3D vision [1]. The traditional method of learning technical skills in surgery, which is based on the Halstedian principle of "see one, do one, teach one," is not able to address these issues [2]. Moreover, a laparoscopic curriculum should be proficiency-based, rather than time-based, to ensure safe, adequate, and fast skills training and assessment [3].

Simulation-based surgical education has received significant attention almost since the invention of LS. High-fidelity models with life-like patient anatomy are employed for the development of special psychomotor and visuospatial skills outside the operating theater [1, 3, 4].
Simulation-based surgical education allows trainees to achieve an adequate level of proficiency before the operation is performed on a real patient [3, 5]. There are many surgical simulators, which can be categorized into two broad classes: video trainers (or box trainers) and virtual reality (VR) simulators. The video trainer (VT) is a physical reality system in which surgeons learn on inanimate models or animal parts [6]. Most of these simulators employ conventional laparoscopic equipment (instruments and endoscopic camera) and a pelvic trainer (or box). Physical reality simulators offer the advantage of realistic force feedback, since trainees interact with real objects. However, such a training system requires tedious and costly maintenance (e.g., replacement of the models) and the presence of an expert, or examination of recorded videos, to provide constructive feedback to the trainee. On the other hand, VR simulators allow trainees to gradually develop and increase their psychomotor and depth-perception skills across a range of difficulty levels, from elementary tasks to whole procedures (e.g., laparoscopic cholecystectomy and Nissen fundoplication). Training is performed in a purely virtual environment and the virtual instruments are controlled with a mechanical interface integrated with appropriate sensors. VR simulators reproduce multiple scenarios on the same platform and generate objective feedback and skills assessment in an automated manner [1, 7]. However, VR simulators have been criticized for their high cost and low return on investment, for being unable to adequately reproduce the full content of important tasks such as suturing, and also for providing nonrealistic force feedback and oversimplified visualization of real anatomies [8–10].

To date, a few studies have attempted to clarify which training modality is the most efficient by comparing the transferability of skills between VR simulators and VTs [11–15]. The reported results are contradictory, mainly because of the dissimilarity of the tasks performed on each device.

In this study we aimed to compare the training efficacy of two laparoscopic surgery simulators and also the transferability of skills from one modality to the other. To meet this challenge we employed equivalent models of three fundamental laparoscopic tasks that were performed separately in a VR and a physical simulator by two different groups. Four different issues were addressed based on the same experimental design. The first, and easiest to prove, hypothesis stated that repeated training with either modality improves surgical performance on the same modality. Second, we examined whether absolute levels of training performance are comparable between the two modalities (comparison of plateaus between the groups). Third, for each group we investigated whether training on one modality leads to improved performance on the other modality (pre-/post-training performance comparison).
The final, and most important, question concerned the comparison between the performance achieved after training with one modality and that achieved on the same modality after training was completed on the other modality. Although the devices are conceptually different, we considered tasks of varying complexity, each based on models with equivalent properties in the two modalities. Our hope was that this head-to-head comparison would offer a better understanding of the contribution of VR-based simulation training to performance in a physical reality setting and vice versa.
Methods

Study design

Forty-four undergraduate medical students were selected at random from the medical school to voluntarily participate in the study. Written informed consent was obtained prior to participation, and our institutional review board granted exemption for this study. None of the subjects had prior experience with training on a laparoscopic simulator (either a physical or a virtual reality system). The participants were randomized into two equal groups to undertake training on either a VR simulator or a VT (hereafter referred to as the VR group and the VT group, respectively). Training included three basic laparoscopic tasks: peg transfer, cutting, and knot-tying. Each task was repeated 12 times before proceeding to the next one. Two sessions on the same day were the maximum performed, and the intersession interval was set to at least 1 h to avoid repetition patterns in the data collected. The training was scheduled during normal working hours, whenever the subjects were available. Throughout the course of the study none of the participants had knowledge of their performance scores. The only instructions provided were about the usage of the equipment and the objectives of the simulation tasks. Before and after training, each participant was asked to perform the same tasks on the alternative simulator (pre-/post-training phases; see Fig. 1). This design allowed for a straightforward crossover assessment of the two groups. A similar design was also used in the recent work of Debes et al. [13] (although we were not aware of their protocol at the time of our study); nevertheless, our protocol differs for a number of reasons that are described in the Discussion section.

Task description

The training tasks were chosen from the Fundamentals of Laparoscopic Surgery (FLS) program [16], based on their gradual complexity and the type of psychomotor skill involved. Before training, all participants received an identical instructional tutorial to familiarize themselves
with the equipment. All tasks were performed in the same manner on either the VR simulator or the VT. The participants initially performed a trial session for each task to enhance familiarization.

Peg transfer was performed first during the training phase; it required the trainee to pick up a series of four cylindrical pegs (6 mm wide, 1.7 cm long) from the floor of the cavity and place them into the correct holes of a pegboard (surface size ~50 cm², see Fig. 2). Each time, a peg appeared on either the left or the right side of the pegboard. For the first two pegs the user had to use the grasper on that side to place the peg into a hole on the same side of the pegboard. For the next two pegs the user had to place them into a hole on the other side of the pegboard, which required peg transfer between the graspers (i.e., a peg lying initially on the left side had to be picked up with the left grasper, transferred into the right grasper, and finally placed in a hole on the right side of the pegboard).

The cutting task required the user to accurately cut a section of gauze from a larger piece. Trainees had to cut along the perimeter of a circle (~18 cm) within a boundary area that indicated the maximum allowable deviation (~3 cm wide). It was important to maintain tension with the grasper and cut half of the cloth with the scissors, and then switch hands (Fig. 2).

The last task involved intracorporeal knot-tying based on the square-knot technique. Trainees were required to tie a length of suture, predriven through a tissue-like material, into a square/slip knot consisting of two opposite throws. Standard needle drivers were used for this task. With the LapVR™, when a throw was clinched the suture turned red, indicating that the user could proceed with the second throw (Fig. 2).

Fig. 1  Schematic diagram of the study design. VTpl and VRpl denote the plateau values during training of the VT group and the VR group, respectively; k denotes the number of sessions (repetitions) for each task performed by either group and n is the number of subjects in each training group

Simulation systems
The VR simulator used for our study was the LapVR™ laparoscopic simulator (CAE Healthcare, Montreal, QC, Canada; see Fig. 3). The mechanical interface includes two dummy laparoscopic tools, each with five degrees of freedom (three Cartesian coordinates, tip rotation, and handle opening/closing). The extension and tip of each instrument appear on the monitor as a virtual analog of a real laparoscopic tool and move according to the manipulation of the dummy instruments. A tracking device ensures accurate behavior of the virtual instrument in relation to the maneuvers. The simulator is also equipped with a haptic device providing force feedback during interaction with objects present in the virtual world. The embedded software consists of three main modules, each with a series of training exercises (basic skills, procedural skills, and entire laparoscopic operations). The three training tasks employed in this study were selected from the first module (essential skills). The LapVR™ has been validated in our prior work [17].

The VT comprises a pelvic trainer [size: 50 cm (length) × 30 cm (width) × 20 cm (height), weight: 3 kg; Annex Art, Anglesey, UK] and a rigid endoscope (3CCD with optical head from Olympus, Hamburg, Germany) connected to a laparoscopic tower (19-in. flat screen, 3CCD digital processor, xenon light source, and DVD recorder, from Olympus; see Fig. 3). A special tripod was used to hold the laparoscope at a fixed position throughout the study. Commercially available laparoscopic graspers and scissors were used (Karl Storz, Tuttlingen, Germany). Each of the three tasks was performed using a different training model placed inside the pelvic trainer. Care was taken so that the VT physical models were custom-built as identical copies of the virtual models (see Fig. 2, bottom). Thus, the sizes and properties of the pegboard, pegs, gauze, cutting pattern, and suture were equivalent to those of their virtual counterparts. This ensured that the training environment was identical between the VR and VT groups. Performances were video-recorded and reviewed by two experts in a blinded fashion (interrater reliability > 0.80, p < 0.01).
Fig. 2 The three training tasks in a virtual (top) and real (bottom) simulation setting
Fig. 3 The VR simulator (left) and the VT (right)
Performance metrics included total path length, time, and penalty score (medians). Table 1 provides a description of how the penalty score was measured. For the LapVR™ these metrics were generated automatically. For the VT, penalty score and time were inferred from the video, and quantitative evaluation of the task output (e.g., knot security) was performed by an expert. The path length of each instrument was obtained via tip tracking using custom-made video analysis software (see Fig. 4). A color-coded adhesive strip was placed around the tip of each instrument to facilitate tracking. The path length generated by image analysis was, by construction, underestimated because it is computed in the image plane (2D) rather than in the 3D Cartesian space used by the LapVR™. Thus, in two of the statistical comparisons (b and d in Table 2), this metric was omitted because its physical meaning differed between the two simulators.
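For illustration, the sketch below shows how such color-marker tip tracking and 2D path-length accumulation could be implemented in Python with OpenCV. It is not the authors' software; the HSV thresholds, the pixel-to-centimeter scale, and the video filename are placeholder assumptions that would need calibration for a real setup.

```python
import cv2
import numpy as np

# Assumed HSV range for the colored adhesive strip (placeholder values).
LOWER_HSV = np.array([40, 80, 80])
UPPER_HSV = np.array([80, 255, 255])
PIXELS_PER_CM = 10.0  # placeholder image-plane scale factor

def track_tip_path_length(video_path: str) -> float:
    """Return the 2D path length (cm) of one color-marked instrument tip."""
    cap = cv2.VideoCapture(video_path)
    path_length_px = 0.0
    prev_center = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
        moments = cv2.moments(mask)
        if moments["m00"] == 0:  # marker not visible in this frame
            continue
        center = np.array([moments["m10"] / moments["m00"],
                           moments["m01"] / moments["m00"]])
        if prev_center is not None:
            path_length_px += float(np.linalg.norm(center - prev_center))
        prev_center = center
    cap.release()
    return path_length_px / PIXELS_PER_CM

# Example usage (hypothetical file name):
# print(track_tip_path_length("peg_transfer_trial01.avi"))
```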
Table 1  Description of the penalty score calculated for each simulation task

Peg transfer: total number of pegs dropped
Cutting: percentage area of deviation from the circle + total number of unsuccessful cutting attempts
Knot-tying: total number of times the needle came in contact with the wall or the suture was overstretched + knot insecurity score (0 points for secure knots, 10 points for slipping knots, and 20 points for knots that came apart) + throws not alternating (1 point) + tail length > 5 cm (1 point)
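As a worked illustration of the scoring scheme in Table 1, the short sketch below computes the knot-tying penalty score from per-trial counts; the function and its input fields are hypothetical and not part of either simulator's software.

```python
def knot_tying_penalty(wall_contacts: int,
                       overstretch_events: int,
                       knot_outcome: str,
                       throws_alternate: bool,
                       tail_length_cm: float) -> int:
    """Penalty score for the knot-tying task, following Table 1."""
    # Insecurity score: 0 = secure, 10 = slipping, 20 = came apart.
    insecurity = {"secure": 0, "slipping": 10, "apart": 20}[knot_outcome]
    score = wall_contacts + overstretch_events + insecurity
    if not throws_alternate:
        score += 1
    if tail_length_cm > 5:
        score += 1
    return score

# Example: one wall contact, slipping knot, alternating throws, 6-cm tail -> 12
print(knot_tying_penalty(1, 0, "slipping", True, 6.0))
```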
Statistics

The choice of at least ten subjects per group was based on a two-tailed test, with α = 0.05 and power (1 − β) = 0.80, and the fact that approximately 10 % of participants drop out, as reported in similar VR simulation studies. All data were statistically analyzed using the MATLAB® Statistics toolbox ver. R2009a (MathWorks, Natick, MA, USA). Our statistical analysis included (1) training plateau computation (i.e., VTpl and VRpl; see Fig. 1 for notation), (2) plateau comparison (VTpl vs. VRpl), (3) comparison of pre- and post-training performance achieved on the "nontraining" modality (VTpre vs. VTpost and VRpre vs. VRpost), and (4) the impact of training on the performance of the same task on the other modality (i.e., VRpl vs. VTpost and VTpl vs. VRpost). The latter was of central importance in the present study. The type and number of statistical comparisons performed are summarized in Table 2 for clarity.

The plateau of each performance curve was derived using Friedman's test (nonparametric repeated-measures analysis of variance). When a statistically significant difference was found (p < 0.05), multiple-comparison tests with a Bonferroni adjustment were performed to identify the plateau (i.e., the session beyond which no further significant improvement is observed). Within- and between-group comparisons of performance estimates were undertaken with the Mann–Whitney U test (5 % level of significance).
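A minimal sketch of this analysis pipeline is given below in Python/SciPy (the study itself used the MATLAB Statistics toolbox). The plateau rule shown, pairwise comparison of each session against the final session under a Bonferroni correction, is one reasonable reading of the procedure rather than the authors' exact implementation, and the array names are placeholders.

```python
import numpy as np
from scipy import stats

def plateau_session(scores: np.ndarray, alpha: float = 0.05) -> int:
    """scores: (n_subjects, n_sessions) array of one metric for one task/group.

    Returns the 1-based session at which performance plateaus, i.e., the first
    session whose scores no longer differ significantly from the final session
    (Friedman test first, then Bonferroni-adjusted pairwise Wilcoxon tests).
    """
    _, n_sessions = scores.shape
    # Friedman test across all sessions (nonparametric repeated measures).
    _, p_overall = stats.friedmanchisquare(*[scores[:, k] for k in range(n_sessions)])
    if p_overall >= alpha:
        return 1  # no significant learning effect detected at all
    adjusted_alpha = alpha / (n_sessions - 1)  # Bonferroni over pairwise tests
    for k in range(n_sessions - 1):
        _, p = stats.wilcoxon(scores[:, k], scores[:, -1])
        if p >= adjusted_alpha:  # no further significant improvement
            return k + 1
    return n_sessions

def compare_groups(group_a: np.ndarray, group_b: np.ndarray, alpha: float = 0.05):
    """Mann-Whitney U test between two groups' performance estimates."""
    u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    return u, p, p < alpha
```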
Results

Subjects who trained on the LapVR™ demonstrated significant performance curves for all tasks (Table 3). Time was the most difficult metric to reach a stable proficiency level (i.e., a plateau) and penalty score was the easiest. Penalty score was the metric with the greatest improvement (67–75 %, depending on the task). Path length and time also improved (45–60 %).

Fig. 4  The custom-made video-processing software used for instrument tracking
Table 2  Statistical comparisons performed per simulation task

a. Computing the metric plateaus of each training group: path length, time, penalty score (6 comparisons)
b. Comparing the two training groups in terms of the plateaus of each metric: time, penalty score (2 comparisons)
c. Comparing pre- vs. post-training performance for each separate group: path length, time, penalty score (6 comparisons)
d. Comparing plateau vs. post-training performance on each training modality: time, penalty score (4 comparisons)
For the VT group there was also a significant plateau in each task's performance curve, and the plateau sessions were similar to those of the VR group (Table 3). Compared with the other two metrics, time was the last metric for which proficiency was achieved during training (~7th session) and penalty score the first (~3rd session). The improvement in the penalty score was again the highest (70–75 %), while the other two metrics showed a slightly lower improvement (40–60 %). It is also important to note that each parameter had similar improvement across the tasks.

No significant difference was found for either time or penalty score when comparing the plateau values between the two groups for each task (p > 0.05). The session at which the plateau was achieved was also statistically equivalent between the two groups. Moreover, there was a significant difference among the three tasks, with cutting and peg transfer having the highest and lowest plateau values, respectively, across all metrics; this was valid for both modalities.

Table 3  Performance curve results

                     Peg transfer                   Cutting                        Knot-tying
Group  Session       T (s)     P (cm)    PS         T (s)     P (cm)    PS         T (s)     P (cm)    PS
VR     First         204       850       3          342       1,112     8          318       1,126     7
VR     Plateau       113 (6th) 442 (5th) 1 (4th)    158 (7th) 501 (6th) 2 (3rd)    127 (8th) 443 (6th) 2 (4th)
VT     First         212       460       4          346       770       7          349       768       8
VT     Plateau       95 (7th)  201 (5th) 1 (3rd)    162 (7th) 480 (6th) 2 (2nd)    112 (8th) 346 (5th) 2 (4th)

Value in parentheses is the plateau session and value outside the parentheses is the median of the plateau. T time, P path length, PS penalty score

Table 4 gives the performance data in the pre- and post-training phases (transfer of skills). Note that each group was assessed on the alternative modality before and after training on its own modality (i.e., the VR group on the VT and vice versa). All parameters showed a significant improvement in all tasks. For the VR group, time, path length, and penalty score decreased by ~40, 15–30, and 50–75 %, respectively, whereas for the VT group they decreased by 32–47, 14–24, and 60–70 % (depending on the task), respectively. Improvement in time was slightly higher for the VT group across the tasks (p = 0.04), whereas for path length and penalty score there was no statistical difference between the groups.

Table 4  Assessment results

                     Peg transfer                   Cutting                        Knot-tying
Group  Phase         T (s)     P (cm)    PS         T (s)     P (cm)    PS         T (s)     P (cm)    PS
VR     Pre           197       310       4          374       700       4          262       650       9
VR     Post          108       260       1          216       492       2          165       504       3
VT     Pre           188       574       3          447       1,027     5          298       869       7
VT     Post          103       491       1          236       786       2          202       664       2

All values are medians. T time, P path length, PS penalty score
Fig. 5 Bar charts of plateau and post-training performance for each metric: (A) penalty score, (B) path length, (C) time. On each bar, the top edge is the median and the error bar extends to the 75th percentile
We also investigated whether the plateau performance achieved with one modality is approached by the other group (trained on the alternative device). This essentially involved a comparison between the plateau values of one group and the post-training values of the other group (i.e., plateaus of the VR group vs. post-training values of the VT group and vice versa). Figure 5 shows bar charts of the plateau and post-training values for each metric separately, based on the same data used to generate the results in Tables 3 and 4. For each task there are four bars: the left two correspond to the VR simulator and the right two to the VT. Our comparison was performed separately for each device, metric, and task.

Starting with the penalty score (Fig. 5A), no statistical difference was found between the two groups on either simulation device for any task. For path length (Fig. 5B), the (plateau) performance of the VR group was significantly superior to that achieved by the VT group for all three tasks (p < 0.05; for path length, p = 0.048). On the VT, the (plateau) performance was approached by the VR group in peg transfer and cutting but not in knot-tying (p < 0.05). Finally, the time plateaus (Fig. 5C) on either simulator were approached by the other group only for peg transfer (p > 0.05). For cutting and knot-tying, the performance achieved during training on one simulator was significantly superior to that achieved on the same simulator by the other group (p < 0.05).
Discussion

We aimed to highlight the relative efficiency and transferability of skills between two popular modalities used in laparoscopic surgical training. Three different tasks were selected, each requiring the demonstration of specific technical skills. Peg transfer required precise positioning of small objects and accurate depth perception. Cutting involved accurate manipulation of endoscopic scissors, while knot-tying was the most challenging and involved the coordination of several skills (visual, perceptual, and psychomotor) [18]. The key idea of the study was to examine how the training environment affects laparoscopic performance on the alternative modality (crossover assessment). We purposely chose identical tasks for both simulators and constructed the physical models as replicas of their virtual counterparts.
First, our results suggest that both simulators enhance the performance of laparoscopic tasks. This was expected, and several studies have reported the educational value of both training modalities (see [19] for a recent review). For both modalities, time was the most difficult metric on which to reach a stable proficiency level, and penalty score was the easiest. This finding supports previous studies showing that time is superior to motion-tracking metrics for performance assessment [20]. Interestingly, for both groups the penalty score improved equally across all tasks (~70 %). There are two possible explanations for this: the limited range of errors defining this metric, and the fact that trainees learn more easily to correct their performance from session to session than to avoid time-consuming and unnecessary movements. Focusing on each separate task, time and path length improved equally between the two groups. Moreover, the plateau session for each parameter was equivalent between the two groups, indicating a consistent efficiency of the two modalities. The higher plateau values of time and path length for cutting may be due to the greater time that is objectively required to complete this task.

The crossover assessment performed before and after training allowed us to study whether the skills learned with one modality are transferable to the other. The tasks and training models of the two simulators were designed to be identical, allowing for a direct comparison. Crossover assessment between similar types of simulators has also been applied recently by Debes et al. [13]. However, the video trainer they used was a low-cost version of the laparoscopy system used in our study (image processor, 3CCD videoscope, and xenon light source). A critical issue in laparoscopic performance is depth perception, which is favorably affected by the endoscopic camera; modern cameras are designed so that the sense of depth during visual appraisal of the operating field is enhanced. More importantly, the training tasks in the study by Debes et al. [13] differed considerably between the two devices, and the crossover assessment tasks also differed from the training tasks. Their training tasks focused more on transferring and positioning objects, whereas our study considered additional skills such as cutting and knot-tying.
In Lehmann et al. [11], training and crossover assessment were based on identically constructed tasks performed on a VR trainer and a box trainer, but only camera navigation and depth perception skills were considered in that study.

Transferability of skills from a VR simulator to physical reality [11, 12, 21, 22] or the operating room [12, 23] is essential in surgical education and demonstrates the real educational value of simulation training. Our results provide substantial evidence that the skills learned in one modality are transferred to the other, and that the magnitude of improvement is similar between the two training groups. It is important to note that the transferability of skills proved consistent for all three tasks, even for knot-tying, which poses several challenges for adequate reproduction in a VR environment. This is notable in view of the considerable discussion on whether a VR simulator can adequately represent the suturing context. Knot-tying, along with needle-driving, is a fundamental element of intracorporeal suturing, and finding the most efficient way to practice it is of great importance for the many professionals in surgical education who make wide use of simulators in their training programs.

Our experience with the LapVR™ system showed that the incorporated force feedback provides a sense of "touch" during performance [23, 24]. Previous studies on the difference between physical and VR simulation have stated that feedback is an important feature in laparoscopic suturing simulation [25–27]. Currently, this is also one of the most controversial issues in VR laparoscopic simulator design. For tasks such as peg transfer, tactility is not crucially important since the main focus is on the positioning of the objects. For cutting and knot-tying, features such as force feedback and realistic deformation of the virtual objects are essential. Given that force feedback is minimal during laparoscopic procedures, the little feedback that remains is valuable [25]. A limitation of many VR simulators is the absence of haptic feedback to the surgeon [28]. Some reports conclude that for tasks requiring the application of force, training should be done on systems that incorporate haptics [29], whereas others do not directly support this view [27]. We believe that the adequate representation of a virtual task such as knot-tying depends on a combination of several factors, including a good sense of tactility, realistic deformation of the suture, and realistic presentation of shadows in the virtual cavity. Our results showed that these features helped the VR group adequately apply the skills learned on the LapVR™ to the physical reality setting. Similarly, the skills learned with the VT were transferable to the LapVR™, although this is of less importance since the gold standard is the physical, rather than the virtual, world.
In addition to the transferability of skills, we examined whether, in absolute terms, the skills learned as a result of training with one modality are transferable to the other modality. In other words, we raised the question: "Is training with the LapVR™ enough to reach the proficiency level of a group trained on the same task with a VT, and vice versa?" Our results show that this is not always the case. For example, for all tasks the penalty score performance was equivalent between the two groups. However, for the other two metrics, training with one modality did not seem to be enough to reach the proficiency achieved by the other group. For knot-tying in particular, this conclusion was valid for both metrics and simulation devices. This implies that the conventional box trainer cannot always be replicated by a VR simulator. Thus, although the VR simulator provided sufficient training in suturing, it was not enough to reach the technical performance of the VT group. For the other two tasks this conclusion was less clear-cut: for cutting, the VR group approached the VT group in path length, whereas for peg transfer, the performance of both groups was equivalent on either simulator. A possible explanation may be that these tasks involve fewer technical challenges than knot-tying and their virtual representation is much closer to physical reality.

A possible limitation of the present study is the absence of an expert group. Inclusion of experienced surgeons would have allowed evaluation of their performance on either simulator and in relation to the novices. Another limitation is that we included only fundamental skills and not procedural skills such as diathermy. Although electrocautery is necessary in laparoscopic surgery, designing a physical task that replicates its virtual counterpart posed some serious challenges. We plan to address these issues in a future study.

In conclusion, we provide substantial evidence that both types of simulator enhance the performance of novices in a consistent and efficient manner. The skills learned on the LapVR™ are transferable to the VT and vice versa. However, absolute values of performance are not always comparable, especially for complex tasks such as knot-tying. Further research that includes additional tasks is needed to determine whether training in a VR environment leads to a performance that is equivalent to that achieved with conventional box trainers.

Disclosures

Dr. Constantinos Loukas, Dr. Nikolaos Nikiteas, Mr. Dimitrios Schizas, Mr. Vasileios Lahanas, and Dr. Evangelos Georgiou have no conflicts of interest or financial ties to disclose.
References

1. Gallagher AG, Satava RM (2002) Virtual reality as a metric for the assessment of laparoscopic psychomotor skills. Learning curves and reliability measures. Surg Endosc 16(12):1746–1752
2. Dawson S (2006) Procedural simulation: a primer. Radiology 241(1):17–25
3. Bridges M, Diamond DL (1999) The financial impact of teaching surgical residents in the operating room. Am J Surg 177(1):28–32
4. Scott DJ, Bergen PC, Rege RV, Laycock R, Tesfay ST, Valentine RJ, Euhus DM, Jeyarajah DR, Thompson WM, Jones DB (2000) Laparoscopic training on bench models: better and more cost effective than operating room experience? J Am Coll Surg 191(3):272–283
5. McCloy R, Stone R (2001) Science, medicine, and the future. Virtual reality in surgery. Br Med J 323(7318):912–915
6. Madan AK, Frantzides CT, Tebbit C, Quiros RM (2005) Participants' opinions of laparoscopic training devices after a basic laparoscopic training course. Am J Surg 189(6):758–761
7. Satava RM, Cuschieri A, Hamdorf J (2003) Metrics for objective assessment. Surg Endosc 17(2):220–226
8. Haluck RS, Satava RM, Fried G, Lake C, Ritter EM, Sachdeva AK, Seymour NE, Terry ML, Wilks D (2007) Establishing a simulation center for surgical skills: what to do and how to do it. Surg Endosc 21(7):1223–1232
9. Maura A, Galata G, Rulli F (2008) Establishing a simulation center for surgical skills. Surg Endosc 22(2):564
10. McClusky DA III, Smith CD (2008) Design and development of a surgical skills simulation curriculum. World J Surg 32(2):171–181
11. Lehmann KS, Ritz JP, Maass H, Cakmak HK, Kuehnapfel UG, Germer CT, Bretthauer G, Buhr HJ (2005) A prospective randomized study to test the transfer of basic psychomotor skills from virtual reality to physical reality in a comparable training setting. Ann Surg 241(3):442–449
12. Hamilton EC, Scott DJ, Fleming JB, Rege RV, Laycock R, Bergen PC, Tesfay ST, Jones DB (2002) Comparison of video trainer and virtual reality training systems on acquisition of laparoscopic skills. Surg Endosc 16(3):406–411
13. Debes AJ, Aggarwal R, Balasundaram I, Jacobsen MB (2010) A tale of two trainers: virtual reality versus a video trainer for acquisition of basic laparoscopic skills. Am J Surg 199(6):840–845
14. Youngblood PL, Srivastava S, Curet M, Heinrichs WL, Dev P, Wren SM (2005) Comparison of training on two laparoscopic simulators and assessment of skills transfer to surgical performance. J Am Coll Surg 200(4):546–551
15. Munz Y, Kumar BD, Moorthy K, Bann S, Darzi A (2004) Laparoscopic virtual reality and box trainers: is one superior to the other? Surg Endosc 18(3):485–494
16. Derossis AM, Fried GM, Abrahamowicz M, Sigman HH, Barkun JS, Meakins JL (1998) Development of a model for training and evaluation of laparoscopic skills. Am J Surg 175(6):482–487
17. Loukas C, Nikiteas N, Kanakis M, Georgiou E (2011) Deconstructing laparoscopic competence in a virtual reality simulation environment. Surgery 149(6):750–760
18. Satava RM, Gallagher AG, Pellegrini CA (2003) Surgical competence and surgical proficiency: definitions, taxonomy, and metrics. J Am Coll Surg 196(6):933–937
19. Palter VN, Grantcharov TP (2010) Simulation in surgical education. Can Med Assoc J 182(11):1191–1196
20. Stefanidis D, Scott DJ, Korndorffer JR Jr (2009) Do metrics matter? Time versus motion tracking for performance assessment of proficiency-based laparoscopic skills training. Simul Healthc 4(2):104–108
21. Mohammadi Y, Lerner MA, Sethi AS, Sundaram CP (2010) Comparison of laparoscopy training using the box trainer versus the virtual trainer. J Soc Laparoendosc Surg 14(2):205–212
22. Torkington J, Smith SG, Rees BI, Darzi A (2001) Skill transfer from virtual reality to a real laparoscopic task. Surg Endosc 15(10):1076–1079
23. Munz Y, Almoudaris AM, Moorthy K, Dosis A, Liddle AD, Darzi AW (2007) Curriculum-based solo virtual reality training for laparoscopic intracorporeal knot tying: objective assessment of the transfer of skill from virtual reality to reality. Am J Surg 193(6):774–783
24. Korndorffer JR Jr, Dunne JB, Sierra R, Stefanidis D, Touchard CL, Scott DJ (2005) Simulator training for laparoscopic suturing using performance goals translates to the operating room. J Am Coll Surg 201(1):23–29
25. Botden SM, Torab F, Buzink SN, Jakimowicz JJ (2008) The importance of haptic feedback in laparoscopic suturing training and the additive value of virtual reality simulation. Surg Endosc 22(5):1214–1222
26. Kim HK, Rattner DW, Srinivasan MA (2004) Virtual-reality-based laparoscopic surgical training: the role of simulation fidelity in haptic feedback. Comput Aided Surg 9(5):227–234
27. Strom P, Hedman L, Sarna L, Kjellin A, Wredmark T, Fellander-Tsai L (2006) Early exposure to haptic feedback enhances performance in surgical simulator training: a prospective randomized crossover study in surgical residents. Surg Endosc 20(9):1383–1388
28. Pearson AM, Gallagher AG, Rosser JC, Satava RM (2002) Evaluation of structured and quantitative training methods for teaching intracorporeal knot tying. Surg Endosc 16(1):130–137
29. Chmarra MK, Dankelman J, van den Dobbelsteen JJ, Jansen FW (2008) Force feedback and basic laparoscopic skills. Surg Endosc 22(10):2140–2148