Minimally invasive surgery is a highly complex medical discipline and can be regarded as a major breakthrough in surgical technique. A minimally invasive intervention demands enhanced motor skills to cope with difficulties such as complex hand-eye coordination and restricted mobility. To alleviate these constraints, we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality techniques. To recognize and analyze the current situation for context-aware assistance, we need intraoperative sensor data and a model of the intervention. A situation is characterized by the performed activity, the instruments in use, the surgical objects and the anatomical structures involved. Important information about the surgical activity can be acquired by recognizing the surgical gesture being performed. Surgical gestures in minimally invasive surgery, such as cutting, knot-tying or suturing, are referred to here as surgical skills. We use the motion data of the endoscopic instruments to classify and analyze the performed skill, and also to evaluate skill in a training scenario. The system uses Hidden Markov Models (HMMs) to model and recognize a specific surgical skill such as knot-tying or suturing, with an average recognition rate of 92%.
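As a concrete illustration of this approach, the sketch below trains one Gaussian HMM per skill on instrument motion sequences and labels an unseen sequence by maximum log-likelihood. It is a minimal sketch using the hmmlearn package; the skill names, feature layout and number of hidden states are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of per-skill HMM classification, assuming the motion data are
# fixed-rate sequences of instrument pose features (e.g. positions and
# velocities). Skill names and hyperparameters are illustrative only.
import numpy as np
from hmmlearn import hmm

def train_skill_models(training_data, n_states=5):
    """Fit one Gaussian HMM per skill from lists of (T_i, D) feature arrays."""
    models = {}
    for skill, sequences in training_data.items():
        X = np.concatenate(sequences)            # stack all sequences row-wise
        lengths = [len(s) for s in sequences]    # per-sequence frame counts
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        model.fit(X, lengths)
        models[skill] = model
    return models

def classify_skill(models, sequence):
    """Label an unseen motion sequence by maximum log-likelihood."""
    return max(models, key=lambda skill: models[skill].score(sequence))
```

Training one generative model per class and deciding by likelihood is the standard way HMMs are used for gesture classification; the 92% average recognition rate reported above refers to the paper's own models and data, not to this sketch.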
Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operating techniques and cope with difficulties such as complex hand-eye coordination and restricted mobility. To alleviate these constraints, we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively acquired sensor data and a model of the intervention. A situation defines the state of the intervention at a given moment in time and comprises information about the performed activity, the instruments in use, the surgical objects and the anatomical structures. The endoscopic images provide a rich source of information that can be exploited by image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective of gaining as much information as possible about the current situation. An important step is the automatic recognition of the instruments that appear in the scene. In this paper we present the classification of minimally invasive instruments from the endoscopic images. The instruments are not modified with markers. The system segments the instruments in the current image and recognizes the instrument type on the basis of three-dimensional instrument models.
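The sketch below illustrates the two stages just described, segmentation of the unmarked instrument and model-based type recognition, under simple assumptions: instruments are segmented by a low-saturation colour threshold, and the type is chosen by matching the segmented silhouette against silhouettes pre-rendered from the 3D instrument models. The thresholds, silhouette set and matching score are illustrative, not the paper's actual method.

```python
# Hedged sketch: segment the marker-free instrument in the endoscopic frame,
# then choose the instrument type whose pre-rendered 3D-model silhouette best
# matches the segmented shape. All parameters are illustrative assumptions.
import cv2
import numpy as np

def segment_instrument(frame_bgr):
    """Segment grey, low-saturation instrument pixels from reddish tissue."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 0, 60), (180, 60, 255))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def classify_instrument(mask, model_silhouettes):
    """model_silhouettes: dict mapping instrument type to binary silhouette
    images pre-rendered from the 3D instrument models under sampled poses."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)    # largest segmented region
    best_type, best_score = None, float("inf")
    for itype, views in model_silhouettes.items():
        for view in views:
            # Hu-moment shape distance between observed blob and model view
            score = cv2.matchShapes(blob, view, cv2.CONTOURS_MATCH_I1, 0)
            if score < best_score:
                best_type, best_score = itype, score
    return best_type
```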
Minimally invasive surgery has gained significantly in importance over the last decade due to its numerous advantages on the patient side. The surgeon has to adopt special operating techniques and cope with difficulties such as complex hand-eye coordination, a limited field of view and restricted mobility. To alleviate these constraints, we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality (AR) techniques. To generate context-aware assistance, it is necessary to recognize the current state of the intervention from intraoperatively acquired sensor data and a model of the surgical intervention. In this paper we present the recognition of risk situations: the system warns the surgeon if an instrument gets too close to a risk structure. The context-aware assistance system starts with an image-based analysis that retrieves information from the endoscopic images. This information is classified and a semantic description is generated. The description is used to recognize the current state and to launch an appropriate AR visualization. In detail, we present automatic vision-based instrument tracking to obtain the positions of the instruments. Situation recognition is performed using a knowledge representation based on a description logic system. Two augmented reality visualization programs were realized to warn the surgeon when a risk situation occurs.
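As a minimal sketch of the warning rule described above, the code below checks the distance between a tracked instrument tip and a risk structure represented as a point set, and raises a warning below a threshold. The 5 mm threshold and the data layout are assumptions for illustration only, not values from the paper.

```python
# Hedged sketch of the proximity-warning rule: given the tracked
# instrument-tip position and a risk structure as a point set (e.g. points
# sampled from a segmented vessel surface), warn when the minimum distance
# drops below a threshold. Threshold and layout are illustrative.
import numpy as np

def min_distance(tip_xyz, risk_points):
    """Minimum Euclidean distance from the tip to the risk structure (mm)."""
    return float(np.min(np.linalg.norm(risk_points - tip_xyz, axis=1)))

def check_risk(tip_xyz, risk_points, threshold_mm=5.0):
    """Return the distance and whether a warning should be raised."""
    d = min_distance(np.asarray(tip_xyz, float), np.asarray(risk_points, float))
    return d, d < threshold_mm
```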
Minimally invasive surgery is a highly complex medical discipline with various risks for surgeon and patient, but it also offers numerous advantages on the patient side. The surgeon has to adopt special operating techniques and cope with difficulties such as complex hand-eye coordination, a limited field of view and restricted mobility. To alleviate these problems, we propose to support the surgeon's spatial cognition by using augmented reality (AR) techniques to visualize virtual objects directly in the surgical site. In order to generate intelligent support, an intraoperative assistance system is needed that recognizes the surgical skills during the intervention and provides context-aware assistance to the surgeon using AR techniques. With MEDIASSIST we bundle our research activities in the field of intraoperative intelligent support and visualization. Our experimental setup consists of a stereo endoscope, an optical tracking system and a head-mounted display for 3D visualization. The framework will be used as a platform for the development and evaluation of our research in the fields of skill recognition and context-aware assistance generation. This includes methods for surgical skill analysis, skill classification and context interpretation, as well as assistive visualization and interaction techniques. In this paper we present the objectives of MEDIASSIST and first results in the fields of skill analysis, visualization and multi-modal interaction. In detail, we present markerless instrument tracking for surgical skill analysis as well as visualization techniques and the recognition of interaction gestures in an AR environment.
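To make the tracked AR setup more concrete, the sketch below shows the coordinate-transform chain such a configuration implies: the optical tracking system reports 4x4 rigid poses of the head-mounted display (and of the instruments) in its own frame, while virtual objects planned in patient coordinates must be re-expressed in the display frame for overlay. All frame names and matrix conventions are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of composing optical-tracker measurements into the
# patient-to-HMD transform used for AR overlay. Frame names are illustrative.
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def patient_to_hmd(T_tracker_hmd, T_tracker_patient):
    """Compose tracker measurements into the patient-to-HMD transform."""
    return invert_rigid(T_tracker_hmd) @ T_tracker_patient

def overlay_point(T_tracker_hmd, T_tracker_patient, p_patient_xyz):
    """Map one patient-frame point into the HMD frame for visualization."""
    p = np.append(np.asarray(p_patient_xyz, float), 1.0)  # homogeneous coords
    return (patient_to_hmd(T_tracker_hmd, T_tracker_patient) @ p)[:3]
```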
INPRES, a system for augmented reality, has been developed in the collaborative research center "Information Technology in Medicine - Computer- and Sensor-Aided Surgery". The system is based on see-through glasses. In extensive preclinical testing the system proved its functionality, and tests with volunteers based on MRI imaging were performed successfully. We report the surgeon's view of the first use of the system for AR-guided biopsy of a tumour near the skull base. Preoperative planning was performed on the basis of CT image data. The information to be projected was the tumour volume, which was segmented from the image data. Using infrared cameras, the positions of patient and surgeon were tracked intraoperatively, and the information on the glasses' displays was updated accordingly. The system proved its functionality under OR conditions in patient care: augmented reality information could be visualized with sufficient accuracy for the surgical task. After intraoperative calibration by the surgeon, the biopsy was acquired successfully. The advantage of see-through glasses is their flexibility: a virtual stereoscopic image can be set up wherever and whenever desired. A biopsy at a delicate location could be performed without the need for wide exposure. This means additional safety and lower operation-related morbidity for the patient. The integration of the calibration procedure of the glasses into the intraoperative workflow is of importance to the surgeon.
This paper presents a summary of our technical experience with the INPRES system, an augmented reality system based on a tracked see-through head-mounted display. With INPRES, a complete augmented reality solution has been developed that has crucial advantages over previous navigation systems: the surgeon no longer needs to turn his head back and forth between the patient and the computer monitor. The system's purpose is to display virtual objects, e.g. cutting trajectories, tumours and risk areas from computer-based surgical planning systems, directly in the surgical site. The INPRES system was evaluated in several patient experiments in craniofacial surgery at the Department of Oral and Maxillofacial Surgery, University of Heidelberg. We discuss the technical advantages as well as the limitations of INPRES and, as a result, present two strategies. On the one hand, we will improve the existing and successful INPRES system with new hardware and a new calibration method to compensate for the stated disadvantages. On the other hand, we will focus on miniaturized augmented reality systems and present a new concept based on fibre optics. This new system should be easily attachable to surgical instruments and capable of projecting small structures. It consists of a light source, a miniature TFT display, a fibre-optic cable and a tool grip. Compared with established projection systems, it can project into areas that are accessible only by a narrow path; no wide surgical exposure of the region is necessary for the use of augmented reality.