Structure of Simulators

Surgical simulators can be split into three parts:

  • anatomic model and physics
  • graphics and visualization
  • haptics

However, the requirements differ depending on the purpose of the simulator, such as training or rehearsal [1].

Model and Physics

Every simulator needs a computational model of the (virtual) patient. The geometry, visual appearance, and biomechanical behavior of anatomical structures and tissue have to be simulated. The anatomical data can be obtained by segmenting images from different imaging modalities. Biomechanical and visual data can also be determined, for example, with magnetic resonance elastography, a new and quite promising non-invasive technology [1].

The volume-rendered models should replicate the biomechanical response (e.g. bleeding or irrigation) to user manipulation in real time. Unfortunately, the standard physics implementations for animation and computer graphics, although very advanced, cannot be applied directly. There are, however, two alternative algorithms. On the one hand, there is the mass-spring method, which models the anatomy as discrete sets of point masses connected by springs. It is easy to implement, but cannot accurately depict a surgical cut; moreover, since the springs are one-dimensional elements, it cannot capture 3D tissue behavior directly. On the other hand, there is the finite-element method, which simulates the deformation of solids based on continuum mechanics and can reflect tissue biomechanics more accurately. However, it needs high computational power to solve its systems of equations with mathematical approximation techniques [1,2].
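As a concrete illustration, a minimal one-dimensional mass-spring chain can be sketched in a few lines of Python. All masses, stiffnesses, and time steps below are made-up illustration values, not taken from any simulator described here:

```python
import numpy as np

# Toy mass-spring chain: point masses connected by springs along one axis,
# integrated with semi-implicit Euler. Real simulators use full 3D meshes.
n = 5                 # number of point masses
mass = 0.1            # kg per node (illustration value)
k = 50.0              # spring stiffness in N/m (illustration value)
rest_len = 0.02       # rest length between neighbours in m
damping = 0.5         # velocity damping coefficient
dt = 1e-3             # time step in s

pos = np.arange(n) * rest_len          # initial positions along one axis
vel = np.zeros(n)

def step(pos, vel):
    force = np.zeros(n)
    # Hooke's-law spring forces between neighbouring masses
    for i in range(n - 1):
        stretch = (pos[i + 1] - pos[i]) - rest_len
        force[i] += k * stretch
        force[i + 1] -= k * stretch
    force -= damping * vel
    force[0] = 0.0                     # first node is fixed (boundary condition)
    vel = vel + dt * force / mass      # update velocities first ...
    vel[0] = 0.0
    pos = pos + dt * vel               # ... then positions (semi-implicit Euler)
    return pos, vel

# pull the last mass outward and let the damped chain relax back
pos[-1] += 0.01
for _ in range(5000):
    pos, vel = step(pos, vel)
```

After a few thousand damped steps the chain settles back to uniform spacing at the rest length, which is exactly the kind of relaxation such a model computes at every frame.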

Graphics and Visualization

This is the best-developed part of surgical simulators [1,2]. The goal of visualization is to render an appearance like that seen in a real OR [1]. Several rendering approaches are available for this task. First, there is stereoscopic rendering. It is heavily researched by the entertainment industry, which tries to make it cheap and runnable on home computers, for example for computer games [1]. Then, there is direct rendering: the visualization of anatomical structures as volumetric models consisting of voxels, using image density values. This rendering type comes with high processing costs, resulting in computational burdens and limited fluidity of the simulator [2]. Finally, there is indirect or surface rendering, which renders a model describing only the surface of anatomical structures. Because there is less data to display, the computational burden is reduced and processing speed increased [2].
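The cost of direct rendering comes from visiting every voxel along every viewing ray. A toy front-to-back emission-absorption compositing pass (a standard volume-rendering formulation; the volume data and transfer function here are invented for illustration) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32))   # toy density volume standing in for CT data

def composite(vol, step_opacity=0.05):
    # Front-to-back emission-absorption compositing along axis 0.
    # Every voxel of every ray is touched, which is why direct rendering
    # carries a high processing cost compared with surface rendering.
    color = np.zeros(vol.shape[1:])
    alpha = np.zeros(vol.shape[1:])
    for voxel_slice in vol:                    # march all rays one voxel deeper
        a = step_opacity * voxel_slice         # toy transfer function: density -> opacity
        color += (1.0 - alpha) * a * voxel_slice
        alpha += (1.0 - alpha) * a
    return color

image = composite(volume)                      # one output pixel per ray
```

Surface rendering, by contrast, only has to rasterize a polygon mesh of the anatomy's boundary, which is far less data to process per frame.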

Haptics

The haptics system is the (force) feedback interface that connects the surgeon with the simulator [1,2]. It renders the interaction between tools and objects (tissue). Such a device has several properties that are decisive [1]:

  • range of forces
  • shape of workspace
  • degrees of freedom (3-6 in most surgical simulators)

Generating accurate feedback is quite difficult and therefore comes with high computational costs. Hence, minimally invasive neurosurgery is a favorable target because its haptic parameters are limited [2]. Even the best haptic devices are not worth their money if the computed feedback is inaccurate or takes too long to compute [1].
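One common simple scheme for computing such feedback, sketched here purely as an illustration and not tied to any device mentioned in this text, is penalty-based force rendering: when the virtual tool tip penetrates a surface, the device is commanded to push back with a spring force proportional to the penetration depth. Stiffness and surface height are made-up values:

```python
STIFFNESS = 300.0   # N/m, virtual wall stiffness (illustration value)
SURFACE_Y = 0.0     # height of the virtual tissue surface

def feedback_force(tool_y):
    """Return the upward force (N) to command on the haptic device."""
    penetration = SURFACE_Y - tool_y
    if penetration <= 0.0:
        return 0.0                    # tool is above the surface: no contact
    return STIFFNESS * penetration    # penalty force grows with penetration

print(feedback_force(0.01))    # above the surface: no force
print(feedback_force(-0.002))  # 2 mm penetration: ~0.6 N
```

In a real system this computation has to finish within a millisecond, since haptic loops are typically run near 1000 Hz; a late or wrong force is exactly the failure mode criticized above.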

Examples of Simulators

NeuroTouch

The NeuroTouch simulator is used for training and rehearsing neurosurgeries. It consists of a stereoscope (a), two haptic systems (b) with power supplies and an amplifier (c), as well as one or two computers (d). The stereoscope is built from a binocular eyepiece without lenses. Looking through it, the perspective of each eye is redirected by two first-surface mirrors to the two 17-inch LCD screens on the left and the right of the head. The haptic system is used to track the tools as well as to render the resistance of tissue. It has six joints to allow free handle motion in all six degrees of freedom (DOF). Depending on which haptic system is used, the NeuroTouch needs one or two computers. Each computer has two quad-core Intel processors and an NVIDIA GeForce graphics card. These computers run the dedicated simulation software, which uses three threads, one each for graphics, haptics, and tissue mechanics. These three parts are sampled at different frequencies: graphics at 60 Hz, haptics at 1000 Hz, and tissue mechanics at 100 Hz. The software runs on 64-bit Linux and Windows systems [3].
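The three sampling rates can be illustrated with a toy multi-rate scheduler. The real software runs one thread per subsystem; this single-loop version (with a made-up tick rate chosen as the least common multiple of the three frequencies) only shows how often each part would update per second:

```python
# Graphics, haptics, and tissue mechanics update at different rates.
RATES_HZ = {"graphics": 60, "haptics": 1000, "tissue": 100}
TICK_HZ = 3000  # least common multiple of 60, 1000, and 100 Hz

def schedule(duration_s, rates=RATES_HZ):
    # Each subsystem fires every (TICK_HZ // rate) ticks; count the updates.
    periods = {name: TICK_HZ // hz for name, hz in rates.items()}
    counts = {name: 0 for name in rates}
    for tick in range(int(duration_s * TICK_HZ)):
        for name, period in periods.items():
            if tick % period == 0:
                counts[name] += 1      # this subsystem updates on this tick
    return counts

print(schedule(1.0))   # {'graphics': 60, 'haptics': 1000, 'tissue': 100}
```

Decoupling the rates like this lets the expensive parts (graphics, tissue mechanics) run slower without starving the haptic loop, which must stay fast to feel smooth.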

Currently, there are two tasks available. The first is the Tumor-Debulking Task. Its goal is to remove the tumor completely while removing as little healthy tissue as possible. The simulator calculates several metrics such as time, tumor/healthy volume percentage, or total removed volume. The second is the Tumor Cauterization Task. Its goal is the removal of as much tumor as possible while minimizing the amount of blood loss. The metrics for this task are the time and the volume of blood loss [3].
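On a labelled voxel volume, volume metrics of this kind could be computed roughly as follows. The label scheme (1 = tumor, 2 = healthy tissue) and the voxel size are assumptions for illustration, not the NeuroTouch implementation:

```python
import numpy as np

VOXEL_MM3 = 1.0  # assumed volume per voxel, mm^3

def debulking_metrics(before, after):
    """Compare labelled volumes before/after resection (0 = empty)."""
    removed = (before != 0) & (after == 0)   # voxels resected by the user
    removed_tumor = int(np.count_nonzero(removed & (before == 1)))
    removed_healthy = int(np.count_nonzero(removed & (before == 2)))
    total = removed_tumor + removed_healthy
    return {
        "total_removed_mm3": total * VOXEL_MM3,
        "tumor_fraction": removed_tumor / total if total else 0.0,
        "healthy_fraction": removed_healthy / total if total else 0.0,
    }

before = np.array([1, 1, 1, 2, 2, 0])
after  = np.array([0, 0, 1, 0, 2, 0])  # two tumor voxels and one healthy removed
metrics = debulking_metrics(before, after)
print(metrics)
```

A scoring system like this rewards removing the whole tumor while penalizing every healthy voxel taken with it, matching the stated goal of the task.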

The developers had several neurosurgeons test the NeuroTouch. Almost all praised the visual category, while the touch category was criticized most [3].

NeuroTouch components:
(a) stereoscope
(b) haptic systems
(c) power supplies and amplifier for haptic systems
(d) computer [3]

Haptic Operative Realistic Ultrasonographic Simulator

The Haptic Operative Realistic Ultrasonographic Simulator (HORUS) is used to train ultrasound-guided percutaneous obstetric and digestive interventions.

HORUS usage on the left, screenshots on the right [4]

Unlimited Laparoscopic Immersive Simulator

The Unlimited Laparoscopic Immersive Simulator (ULIS) provides two or three entry points that allow the insertion of real instruments. These real tools have a larger and thus more realistic operating radius than purely virtual tools. With access to the clinic's database, the ULIS can use CT data of a patient to create a 3D model. This model is given photorealistic textures for a very realistic experience [5].

usage of real instruments on virtual patient makes it more realistic [5]

exercise example of the ULIS [5]

Dextroscope

The Dextroscope was developed by Bracco AMT of Princeton. It is a workstation for surgical evaluation and decision making [1]. It uses CT and MR data to build patient-specific, information-fused 3D models generated by automatic coregistration and segmentation of critical structures. The surgeon can manipulate the model with controllers in 3D space [1,2]. There are also extensions to the simulator for planning surgeries or adding simulations of additional types of interventions [1].

Dextroscope usage and visualization [1]

ROBO-SIM

ROBO-SIM is a software package for simulating minimally invasive neurosurgery. It computes the deformation of (virtual) tissue in real time using the mass-spring method together with direct as well as indirect rendering. Originally, it was developed for the NEUROBOT, a robotic arm designed for use in real surgery. The software supports certain preoperative planning steps, such as choosing the skull entry point, the depth to target, and the surgical track. It can also create a path through the brain using go and no-go areas defined by the surgeon. One limitation is the invisibility of anatomical structures smaller than 1 mm, which results from the limits of MR imaging. Another limitation is the absence of membranes and of blood-flow representation in the virtual model [2].
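Path creation with go and no-go areas can be illustrated, purely as a sketch and not as the actual ROBO-SIM algorithm, with a breadth-first search on a 2D grid where the surgeon has marked forbidden cells:

```python
from collections import deque

def plan_path(grid, entry, target):
    """grid: list of strings, '.' = go, '#' = no-go. Returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {entry: None}              # also serves as the visited set
    queue = deque([entry])
    while queue:
        r, c = queue.popleft()
        if (r, c) == target:          # reached the target: walk back to entry
            path = []
            node = target
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None                       # target unreachable through go areas

grid = [".#..",
        ".#..",
        "...."]
path = plan_path(grid, (0, 0), (0, 3))
print(path)
```

The '#' column forces the path to detour through the bottom row, the same way a no-go area defined by the surgeon forces the planner around critical structures.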

ROBO-SIM minimally invasive neurosurgery simulation software shown modeling the resection of an intraventricular tumor [2]

Education and Rehearsal

Simulators can be used for acquiring medical knowledge and surgical skills in a risk-free environment through harmless repetition [2,4,6]. Still, cadavers and assisting in the OR are the standard ways of acquiring skills, which leads to an increased risk for patients. In the US, resident training is based on the apprenticeship model of Dr. William Halsted: "see one, do one, teach one" [6]. But learning by assisting is inefficient and increases procedure time. It has been shown that residents with simulator training need less time and are less likely to injure patients [3]. Halsted's model will be modernized by virtual and augmented reality [6]. These techniques are already used successfully in other industries such as aviation, oil, nuclear, and the military [4]. VR offers a flexible training experience for novices, practicing surgeons, and experts. Simulators are called part-task trainers if they simulate just one subtask of a complex procedure, e.g. ventriculostomy catheter placement; procedure simulators replicate a series of steps from the OR [1]. Simulation can also be used to establish new techniques and procedures, even for experienced surgeons [4].

VR can be used not only for educating and training surgeons but also for rehearsing procedures. The surgeon can, for example, train on patient-specific data gained from different imaging modalities [1,4,6]. This will improve patient safety.

A survey among residents (junior, years 1-3, and senior, years 4-6) delivered the following results. The residents found cadaver simulations most beneficial, with a 71.5% benefit, followed by physical simulators with 63.8%. Haptic/computerized simulators gave the residents only a 59.1% benefit [7]. This shows that there is great potential in computerized simulation that has yet to be realized.

This video is part of the survey by Jaime Gasco et al. [7]

Future Development

As the survey by Jaime Gasco et al. [7] showed, there is a lot to do before surgical (VR/AR) simulators can take over the main educational task. Research on VR is also pushed by needs in other fields, such as the entertainment industry [1]. But VR and AR have to meet reliability and validity expectations so that people trust the technology [6]. With this technology, it may be possible to define proper evaluation criteria for surgical skills (of novice and experienced surgeons), tools, approaches, and strategies, which can result in better use of equipment, shorter procedure times, and fewer complications. This will again increase patient safety [4,6]. As VR is continuously evolving, it can be used to create prototypes of new procedures. Another improvement could be combining rehearsal with surgical robots: the surgeon practices until perfection and records the steps so that a robot can perform the real procedure. The robot will be faster, ergonomically and technically superior, and will produce better patient outcomes [4]. The future of surgical education and rehearsal will be simulation-based and in virtual environments [1,4].

Bibliography

  1. Chan, Sonny, et al. "Virtual reality simulation in neurosurgery: technologies and evolution." Neurosurgery 72 (2013): A154-A164.
  2. Malone, Hani R., et al. "Simulation in neurosurgery: a review of computer-based simulation environments and their surgical applications." Neurosurgery 67.4 (2010): 1105-1116.
  3. Delorme, Sébastien, et al. "NeuroTouch: a physics-based virtual simulator for cranial microneurosurgery training." Neurosurgery 71 (2012): ons32-ons42.
  4. Willaert, Willem IM, et al. "Recent advancements in medical simulation: patient-specific virtual reality simulation." World journal of surgery 36.7 (2012): 1703-1712.
  5. http://academlib.com/16253/education/ulis_laparoscopic_surgery_simulation
  6. Pelargos, Panayiotis E., et al. "Utilizing virtual and augmented reality for educational and clinical enhancements in neurosurgery." Journal of Clinical Neuroscience 35 (2017): 1-4.
  7. Gasco, Jaime, et al. "Neurosurgery simulation in residency training: feasibility, cost, and educational benefit." Neurosurgery 73 (2013): S39-S45.