Paul Milgram's Reality-Virtuality Continuum
Motivation for Using AR in Neuronavigation
Neuronavigation has become an essential neurosurgical tool in the pursuit of minimal invasiveness and maximal safety, even though it has several technical limitations. Augmented reality (AR) neuronavigation is a significant advance, providing real-time, continuously updated virtual information about anatomical details, overlaid on the real surgical field.
While searching for relevant information, we also came across an interesting study on the usefulness of AR with respect to brain tumors [3].
Participants
Three expert neurosurgeons with more than 5 years of practice, four intermediates (residents and fellows) with more than 2 years of training, and four novices (graduate students) with no neurosurgical experience participated in this study. [3]
Idea
Methodology
Participants were asked to identify the longest axis of the tumor as well as the shortest path from the surface of the cortex to the tumor. For the longest axis task (LA), subjects were asked to align a stylus with the longest axis of the tumor. For the shortest distance task (SD), they were asked to place the tip of the stylus on the appropriate location on the head phantom. Each experiment involved 64 trials (32 per task; one of the experts completed 48), in which a patient MR volume was randomly selected from the database and displayed in one of the four modalities described earlier. The order of these modalities, as well as of the tasks performed, was counterbalanced between and within subjects to correct for the effects of training and fatigue (e.g. Fig. 1). The response time (RT), the location of the stylus (in SD), and its orientation (in LA) were recorded for every trial. The RT was the time elapsed between the moment the subject started scrolling through the images and the moment they indicated the desired point or angle. [3]
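To make the protocol concrete, here is a minimal Python sketch of such a counterbalanced session: 64 trials (32 per task) spread evenly over four modalities, with RT measured from the first scroll to the response. All names (MODALITIES, Trial, wait_for_response) are illustrative assumptions, not the authors' actual experiment code.

```python
# Hypothetical sketch of the trial scheduling described above: 32 trials per
# task (LA, SD), each shown in one of four visualization modalities, with the
# order shuffled to offset training and fatigue effects.
import random
import time
from dataclasses import dataclass

MODALITIES = ["2D", "XP", "3D", "AR"]   # XP = orthogonal planes; others assumed
TASKS = ["LA", "SD"]                    # longest axis, shortest distance

@dataclass
class Trial:
    task: str
    modality: str
    mr_volume: str      # randomly drawn patient MR volume

def build_session(volumes, trials_per_task=32, seed=0):
    """Return a counterbalanced list of 64 trials for one subject."""
    rng = random.Random(seed)
    # Each (task, modality) cell appears equally often: 32 / 4 = 8 repetitions.
    cells = [(t, m) for t in TASKS for m in MODALITIES] * (trials_per_task // len(MODALITIES))
    rng.shuffle(cells)  # randomized order stands in for full counterbalancing
    return [Trial(task=t, modality=m, mr_volume=rng.choice(volumes)) for t, m in cells]

def run_trial(trial, wait_for_response):
    """Measure RT: time from first scroll through the images to the response."""
    start = time.perf_counter()         # subject starts scrolling here
    response = wait_for_response(trial) # stylus position (SD) or orientation (LA)
    return time.perf_counter() - start, response
```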
Fig. 1
Results
Figure: Index of Performance for the longest axis task [3]
Figure: Index of Performance for the shortest distance task [3]
Perceiving the spatial relationships between relevant structures such as tumors and eloquent areas is necessary for successful neurosurgical pre-operative planning. The usability of such percepts is heavily influenced by the mode of visualization and interaction within the neurosurgical planning environment.

Most experts (>5 years of experience) were trained at a time when 3D reconstructions of brain images were scarce and are therefore accustomed to interpreting 2D images: CT imaging was classically presented as axial slices, and MRI as orthogonal slices (axial, coronal, and sagittal canonical views). From 2D images alone, experts are able to interpret 3D structures, identify anatomical landmarks in order to plan the approach, and measure those landmarks on the skin before starting surgery. The new generation of trainees, however, who started their residency training in neurosurgery over the past six years, has been trained using 3D models of the brain, and has also been trained to use neuronavigation systems for most neurosurgical procedures. While they are still used to looking at 2D images to prepare their approaches, they tend to rely more on new technology to help them in their planning.

In any case, the preliminary results show that all subjects with a neurosurgery background performed better than novices in all visualization modalities at identifying the shortest distance and the longest axis. Nevertheless, AR was shown to be superior (or at least equal) to the other modalities in terms of performance. XP (orthogonal planes), on the other hand, was significantly slower for everyone, and most subjects reported anecdotally that they did not feel comfortable working in that modality. [3]
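For illustration, here is a small sketch of how per-modality performance could be aggregated from the recorded trials. The index-of-performance formula used here (accuracy divided by response time) is an assumption for demonstration purposes, not necessarily the metric defined in [3].

```python
# Minimal sketch: aggregate recorded trials into a per-modality score so that
# statements like "XP was significantly slower" can be checked numerically.
# The index-of-performance definition below (accuracy / RT) is an assumed
# illustration, not necessarily the exact metric used in [3].
from collections import defaultdict
from statistics import mean

def index_of_performance(trials):
    """trials: iterable of (modality, rt_seconds, accuracy) tuples,
    with accuracy in [0, 1] (e.g. 1 - normalized angular/positional error)."""
    by_modality = defaultdict(list)
    for modality, rt, accuracy in trials:
        by_modality[modality].append(accuracy / rt)  # higher is better
    return {m: mean(scores) for m, scores in by_modality.items()}

# Example with made-up numbers: AR pairs fast responses with high accuracy,
# while XP (orthogonal planes) is slower across the board.
demo = [("AR", 4.1, 0.95), ("AR", 3.8, 0.90),
        ("XP", 9.5, 0.85), ("XP", 8.9, 0.80)]
print(index_of_performance(demo))   # e.g. {'AR': ~0.234, 'XP': ~0.090}
```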
Solutions available on the market and in research
Philips has announced a new augmented-reality surgical navigation technology designed for image-guided spine, cranial, and trauma surgery. [1]
The technology uses high-resolution optical cameras mounted on the flat-panel X-ray detector to image the surface of the patient. It then combines the external view captured by the cameras with the internal 3D view of the patient acquired by the X-ray system to construct a 3D augmented-reality view of the patient's external and internal anatomy. This real-time 3D view of the patient's spine in relation to the incision sites in the skin aims to improve procedure planning, surgical tool navigation and implant accuracy, as well as to reduce procedure times. [1]
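The core of such an overlay is geometric: points from the internal 3D reconstruction must be projected into the optical camera images so the two views line up. The following sketch shows a standard pinhole-camera projection; the matrices and numbers are assumed placeholders, since Philips' actual calibration pipeline is not public.

```python
# Illustrative sketch of the core overlay step: project 3D points from the
# X-ray-derived patient volume into an optical camera image using a pinhole
# model, so internal anatomy can be drawn over the external view.
import numpy as np

def project_points(points_3d, K, R, t):
    """points_3d: (N, 3) array in patient/world coordinates.
    K: (3, 3) camera intrinsics; R, t: world-to-camera rotation/translation.
    Returns (N, 2) pixel coordinates."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    proj = cam @ K.T                   # apply intrinsics
    return proj[:, :2] / proj[:, 2:3]  # perspective divide

# Toy example: identity pose, a simple intrinsic matrix, one vertebra landmark.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
landmark = np.array([[0.01, -0.02, 0.5]])   # 3D point half a metre away
print(project_points(landmark, K, R, t))    # -> [[336. 208.]] in pixel space
```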
Around the same time, a very interesting in-situ visualization study was proposed by the CAMP Chair at TUM.
They present a method that uses Augmented Reality (AR) to improve the perception of 3D medical imaging data (mimesis) in the context of the patient's own anatomy (in-situ), incorporating the physician's intuitive multi-sensory interaction and integrating direct manipulation with endoscopic instruments. The transparency of the video images recorded by the color cameras of a video see-through, stereoscopic Head-Mounted Display (HMD) is adjusted according to the position and line of sight of the observer, the shape of the patient's skin, and the location of the instrument. The modified video image of the real scene is then blended with the previously rendered virtual anatomy. [2]
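A rough sketch of the blending idea: the opacity of the real video skin can be driven by the angle between the observer's line of sight and the skin surface normal, so that the virtual anatomy shows through where the surgeon looks directly at the patient. The weighting function below is an assumed simplification; the actual method in [2] combines several such factors (curvature, distance to the focus region, instrument position).

```python
# Hedged sketch of per-pixel blending: video transparency driven by how
# directly the observer looks at the skin surface. Only the angle-dependent
# term is illustrated; the paper's full weighting differs.
import numpy as np

def skin_alpha(view_dir, skin_normal, sharpness=2.0):
    """Opacity of the real (video) skin: near-perpendicular viewing makes the
    skin more transparent, revealing the rendered anatomy underneath."""
    v = view_dir / np.linalg.norm(view_dir)
    n = skin_normal / np.linalg.norm(skin_normal)
    facing = abs(float(np.dot(v, n)))          # 1 = looking straight at skin
    return 1.0 - facing ** sharpness           # transparent where facing ~ 1

def blend(video_rgb, anatomy_rgb, alpha):
    """Composite one pixel: alpha weights the real video over virtual anatomy."""
    return alpha * video_rgb + (1.0 - alpha) * anatomy_rgb

# Looking straight onto the skin -> low alpha -> virtual anatomy dominates.
a = skin_alpha(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]))
print(a, blend(np.array([200.0, 150, 130]), np.array([255.0, 0, 0]), a))
```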
CaptiView from Leica Microsystems
The system links image-guided surgery (IGS) software to the microscope hardware itself, overlaying critical visual information (such as images from a brain scan) directly on top of a patient's brain. Both two- and three-dimensional images can be injected. While IGS software isn't new, the ability to see all pertinent information in a heads-up display directly through the microscope could be a game changer for neurosurgery. [4] The software also tracks both the position of the microscope and the focal point of the eyepieces, updating in real time as the surgeon works.
Figure: Joshua Bederson, M.D., utilizes the latest simulation and virtual reality advances during neurosurgery. (Credit: Mount Sinai Health System)
The CaptiView image injection system utilizes Brainlab® Cranial 3.1 Navigation Software in conjunction with a Leica M530 OH6 microscope. The heads-up display provides neurovascular and fiber-tract information in 2D or 3D, as well as on-screen video overlays visible through the ocular. The microscope integration also allows the surgeon to switch views in the eyepiece, toggling between live and pre-operative anatomical images using handle control buttons or a footswitch, for ease of use and an uninterrupted workflow. Markers attached to the microscope enable positional tracking and autofocus. This new technology will be used alongside the Surgical Navigation Advanced Platform (SNAP), developed by Surgical Theater, LLC, which is a standard feature in the operating room. SNAP provides advanced 3D visualization technology that gives surgeons an intraoperative, patient-specific 3D environment in which to plan and understand surgical approaches.
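Conceptually, the eyepiece overlay amounts to a tracking-driven render loop, sketched below with entirely hypothetical interfaces (tracker, renderer, footswitch); this is not Leica or Brainlab API code, only an illustration of the behaviour described above.

```python
# Conceptual sketch (assumed names throughout): read the tracked microscope
# pose and focal depth, re-render the matching view of pre-operative data
# into the ocular heads-up display, and let a footswitch toggle between
# live and pre-operative views.
from dataclasses import dataclass

@dataclass
class MicroscopePose:
    position: tuple      # from markers attached to the microscope
    orientation: tuple
    focal_depth: float   # from the autofocus / eyepiece focal point

def overlay_loop(tracker, renderer, footswitch, running):
    show_preop = True
    while running():
        pose = tracker.read_pose()       # optical tracking of the markers
        if footswitch.pressed():         # toggle without leaving the oculars
            show_preop = not show_preop
        if show_preop:
            # Re-slice pre-op imaging at the current focal plane and inject
            # it into the eyepiece heads-up display.
            frame = renderer.render_preop(pose.position,
                                          pose.orientation,
                                          pose.focal_depth)
        else:
            frame = renderer.live_view()
        renderer.inject_into_ocular(frame)
```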
Bibliography:
[2] Christoph Bichlmeier, Felix Wimmer, Sandro Michael Heining, and Nassir Navab, "Contextual Anatomic Mimesis: Hybrid In-Situ Visualization Method for Improving Multi-Sensory Depth Perception in Medical Augmented Reality," 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 129-138, November 2007.
[3] Abhari K. et al. (2013) The Role of Augmented Reality in Training the Planning of Brain Tumor Resection. In: Liao H., Linte C.A., Masamune K., Peters T.M., Zheng G. (eds) Augmented Reality Environments for Medical Imaging and Computer-Assisted Interventions. Lecture Notes in Computer Science, vol 8090. Springer, Berlin, Heidelberg
[4] https://www.digitaltrends.com/cool-tech/leica-captiview-ar-brain-surgery/ [06.06.2017]