Surgical simulators enable surgeons to develop and improve their skills in a virtual environment without any risk. They are especially useful for neurosurgeons because of the vulnerability of brain tissue and the consequences of even small errors. In the past, cadavers were mainly used for surgical training, but there is only a limited supply and they are difficult to maintain. Surgical simulators allow repeatable procedures in unlimited quantity and are therefore a better training option. In recent years, surgical simulators and virtual reality have evolved from preoperative planning and education to the simulation of important neurosurgical operations. [1]
Important elements
Volume rendering
The creation and display of 3D models for surgical simulators can use the same approaches as image-guided surgery (cp. Imaging, Image-guided surgery) and is therefore the most developed component of today's simulators. The display of volumetric models (which may be constructed from CT or MRI images) is called volume rendering.
There are two main methods: direct rendering and indirect rendering, with direct rendering being much more processing-intensive. Direct rendering creates voxels with image density values, where the voxels separate different types of tissue. Indirect rendering renders only the surface of tissue, which reduces the processing load, but leaves the segmentation process with less information.
With both direct and indirect rendering, segmentation is a difficult task: direct rendering enables the calculation of intensity regions, while indirect rendering only enables the calculation of image edges. Both techniques require manual segmentation to address the problem of tissue misinterpretation. After segmentation, the structures can be assigned their visual and biomechanical properties.
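The intensity-region approach used with direct rendering can be sketched as a simple thresholding of voxel values. The thresholds below are illustrative stand-ins (loosely inspired by CT Hounsfield units) and the tissue classes are hypothetical, not values from any of the cited simulators:

```python
import numpy as np

# Illustrative intensity ranges; real simulators refine such
# automatic labels by manual segmentation afterwards.
THRESHOLDS = {
    "air": (-1100, -400),
    "soft_tissue": (-400, 300),
    "bone": (300, 3000),
}

def segment_by_intensity(volume):
    """Assign each voxel a tissue label based on its intensity region."""
    labels = np.full(volume.shape, "unknown", dtype=object)
    for tissue, (lo, hi) in THRESHOLDS.items():
        mask = (volume >= lo) & (volume < hi)
        labels[mask] = tissue
    return labels

# Tiny synthetic 2x2x2 volume standing in for a CT scan.
volume = np.array([[[-1000, 50], [400, -50]],
                   [[1200, -700], [0, 250]]])
labels = segment_by_intensity(volume)
```

After this automatic pass, voxels near tissue boundaries or with overlapping intensity ranges would still be misclassified, which is exactly where the manual segmentation described above comes in.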
Manual segmentation is especially needed for patient-specific simulators, which use imaging data from each new patient for their simulations.
Currently there are many advances in the quality of volume rendering and graphics, for example new image reconstruction techniques. New semiautomated approaches also use prior training sets and fit the new data to them for segmentation, but manual segmentation is still needed to achieve the final results (cp. fig. 1). [1]
Model response
After the volume rendering, the structures must be able to react to user interaction via deformation. Mass-spring-systems for each voxel and the finite-element method (FEM) are the two main techniques for modeling deformations. Neither mass-spring-systems nor FEM are able to model disruptive deformations (i.e. topology changes), though.
Mass-spring systems are simpler and therefore faster to compute than the FEM equivalent, but not as accurate. FEM means the numerical solution of differential equations on a discrete grid of elements (cubes or higher-order polyhedrons), using geometric and temporal boundary conditions. FEM has a higher complexity and more parameters than mass-spring systems, and therefore a higher accuracy when modeling plastic deformations, but also a longer computation time. While modeling plastic deformations is quite common in engineering, disruptive deformations are needed more rarely. ChainMail is one algorithm which addresses this need for surgical simulators, using elements linked like chain mail armor. [1]
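The deformation response of a mass-spring system can be sketched as follows. This is a minimal 1D chain with explicit time integration; the stiffness, damping and time-step values are illustrative only, not taken from any simulator described here:

```python
import numpy as np

def step(pos, vel, rest_len, k=50.0, m=1.0, damping=0.9, dt=0.01):
    """Advance a 1D mass-spring chain one explicit time step."""
    forces = np.zeros_like(pos)
    for i in range(len(pos) - 1):
        # Hooke's law: each spring resists stretching beyond rest length.
        stretch = (pos[i + 1] - pos[i]) - rest_len
        forces[i] += k * stretch
        forces[i + 1] -= k * stretch
    vel = damping * (vel + dt * forces / m)  # damping keeps it stable
    return pos + dt * vel, vel

# Five nodes at rest spacing 1.0; the last node is pulled outward
# (as a tool might do), then the chain is left to relax.
pos = np.array([0.0, 1.0, 2.0, 3.0, 5.0])
vel = np.zeros(5)
for _ in range(500):
    pos, vel = step(pos, vel, rest_len=1.0)
# The chain settles back toward uniform spacing of 1.0.
```

A full simulator runs the same kind of update per voxel in 3D and under real-time constraints, which is why the simplicity of the per-step computation matters so much compared to FEM.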
Haptics
Haptics means feeding the calculated model response back to the user, so that the model response can be felt. To produce haptic feedback, simulators need haptic interfaces that create tactile sensations (cp. fig. 2). As haptic interfaces are complex and require a lot of computational power, early simulators did not possess them and relied solely on the visual model response.
Neurosurgery is comparatively well suited for haptic feedback, because the mechanical properties of brain tissue are quite homogeneous. Moreover, minimally invasive neurosurgery involves only a limited set of haptic parameters. Both factors combined make haptic feedback computationally easier for neurosurgery. [1]
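A common, simple way to turn a model response into a device force is penalty-based rendering: when the virtual tool tip penetrates a tissue surface, a spring-damper force pushes it back out. The sketch below assumes this scheme with illustrative stiffness and damping values; it is not the force model of any specific cited simulator:

```python
def haptic_force(tip_depth, tip_velocity, stiffness=400.0, damping=5.0):
    """Return the 1D feedback force for a virtual tool tip.

    tip_depth: penetration depth into the tissue (m); <= 0 means no contact.
    tip_velocity: penetration rate (m/s); positive when pushing deeper.
    """
    if tip_depth <= 0.0:
        return 0.0                 # tool outside the tissue: no force
    # Spring term resists penetration depth, damper term resists motion,
    # mimicking the viscous feel of soft tissue.
    force = stiffness * tip_depth + damping * tip_velocity
    return max(force, 0.0)         # haptic devices can push, not pull
```

This computation has to run at a much higher rate than the graphics loop (haptic devices typically update around 1 kHz) which, together with the hardware cost, explains why early simulators omitted haptics entirely.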
Physical models
Not only digital surgical simulators, but also 3D printing allows patient-specific customization of models. Today's 3D printers can print materials with varying consistency and density. Using these printers allows the creation of models with distinct skin, bone, dura mater, regular brain tissue and tumor tissue; the 3D-printed element is currently the head. After the initial programming, these head models are easily reproducible and can be created from patient data. In the future, it may be possible to print the brain tissue too, allowing the creation of an even more realistic physical model. [2]
Figure 1: cervical spine (manual segmentation after semiautomated segmentation using training data) [1]
Figure 3: 3D-printed physical model [2]
Surgical planning simulators
Surgical simulators are used not only for training, but for planning, too. Compared to image-guided surgery (Imaging, Image-guided surgery), surgical simulators are used in the earlier phase, the planning of the procedure.
One example of a surgical planning tool is the Dextroscope. It is a virtual reality neurosurgical planning tool developed in the 1990s, enabling intuitive preoperative interaction with patient-specific 3D renderings. The Dextroscope combines preoperative patient-specific MRI and CT data and lets the surgeon interact via a virtual surgical field. The virtual surgical field is a stereoscopic image, created by a monitor above and reflected by a mirror into the user's line of sight. The user wears stereoscopic shutter glasses and perceives the displayed structures as a three-dimensional hologram. The hologram can be manipulated with positional controllers that have to be held underneath the mirror (cp. fig. 4). [1] As a pure planning tool, the Dextroscope possesses neither model response nor haptic feedback.
Examples for surgical training simulators
ImmersiveTouch
ImmersiveTouch is a simulator with haptic feedback for ventriculostomy simulation (ventriculostomy means creating an opening into a ventricle for drainage), created in 2005. Like the Dextroscope, ImmersiveTouch uses a virtual surgical field. Electromagnetic head-tracking goggles register the user's position and orientation, recentering the stereoscopic view on the shown structures. Both preoperative MRI and CT images can be used to create the rendered models, but they have to be manually segmented. Mechanical properties can be assigned to each surface (stiffness, viscosity, static friction, dynamic friction), resulting in different tactile signals which the user experiences. Studies showed that surgeons who trained with ImmersiveTouch were more likely to execute a ventriculostomy successfully. [1]
NeuroTouch
NeuroTouch is a simulator with haptic feedback for craniotomy-based tumor removal procedures, created in 2012. It possesses a stereoscopic view mimicking a neurosurgical microscope, with a three-dimensional workspace created by two LCD screens which are mirrored into a mock microscope (cp. fig. 6). A six-degree-of-freedom haptic system allows free tool movement (cp. fig. 5), and FEM simulation is used for the model response. As with ImmersiveTouch, MRI images have to be manually segmented and mechanical properties have to be assigned to specific structures.
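At its core, an FEM model response reduces to assembling a stiffness matrix and solving a linear system Ku = f for the displacements. The toy example below uses 1D linear bar elements with an illustrative element stiffness and tool load; real simulators like NeuroTouch work with 3D elements and far larger systems:

```python
import numpy as np

def assemble_stiffness(n_elements, k_element=1000.0):
    """Global stiffness matrix for a chain of identical 1D bar elements."""
    n_nodes = n_elements + 1
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):
        # Each element couples its two end nodes.
        K[e:e + 2, e:e + 2] += k_element * np.array([[1.0, -1.0],
                                                     [-1.0, 1.0]])
    return K

n_el = 4
K = assemble_stiffness(n_el)
f = np.zeros(n_el + 1)
f[-1] = 10.0                       # virtual tool pushes on the last node
# Geometric boundary condition: node 0 is fixed, so drop its row/column.
u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
# Displacements grow linearly along the chain toward the loaded end.
```

Solving such systems for a fine 3D mesh at interactive rates is what drives the long computation times mentioned above, and why mass-spring systems remain attractive despite their lower accuracy.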
When the paper was published, the image data needed 60 h of human postprocessing before it could be used in the simulator, but this time should decrease with better algorithms. At that point, NeuroTouch supported two tumor-removal procedures and three tools; a bigger variety was under development, and the simulator still has to be validated in clinical trials. [4]
Current usefulness of training simulators and future developments
The processing power needed for tissue deformation, the need for manual segmentation, and the high costs of haptic interfaces limit the use of real-time interactive simulators. Because of the diversification of simulation procedures, more different approaches are going to be investigated and synergies might arise. It is very important that researchers work together to combine their advances; therefore, common simulator architectures need to be developed and used. Common standards and validation techniques should also be part of future collaboration. [1]
The performance of training with different surgical simulators was studied, with favorable feedback from participating medical students. An improved performance was difficult to show, though, for several reasons: there is a lack of evidence that simulator training improves clinical performance, and the studies had a nonrandomized design because the simulators only included regular-anatomy procedures (e.g. ventriculostomy). [3]
A study compared proficiency improvements between cadaver simulations, physical simulations and virtual reality simulations with haptic feedback. It showed that students had the highest proficiency improvement when training with cadavers (improvement in 71.5% of simulations performed), followed by physical simulations (63.8%), with the least improvement from virtual reality simulations (59.1%). Because of this, the study recommends using all three types of simulation. [5]
The usefulness of simulators for real clinical performance improvements still has to be shown. But as virtual reality simulations yield results comparable to training with physical simulations, and both are only around 10% worse than cadaver simulations, they can already be considered useful in surgical education. With further advances in tissue deformation algorithms and automatic segmentation, virtual reality simulations could outperform cadaver simulations in the coming decades.