The analysis of binary data remains a challenge, especially for large or potentially inconsistent files. Traditionally, hex editors make only limited use of the semantic information available to the user. We present an editor that supports user-supplied semantic data definitions. This semantic information is used throughout the program, in combination with visualization and human-computer interaction techniques, to realize semantic data visualization and data exploration capabilities not present in similar systems. We show that this makes recognizing the structure of unknown or inconsistent data much more effective. Our approach demonstrates concepts that can be applied to the visual analysis of raw data in general.
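To make the idea of user-supplied semantic data definitions concrete, the following minimal Python sketch annotates a raw byte buffer with field names and meanings taken from a small, hand-written definition. The field table, format strings, and synthetic BMP-like header are illustrative assumptions and do not reflect the editor's actual definition language.

```python
import struct

# Hypothetical user-supplied definition: field name, struct format string,
# and a human-readable meaning. The editor described above uses its own
# definition language; this table only illustrates the underlying idea.
BMP_HEADER = [
    ("magic",       "2s", "file signature, expected b'BM'"),
    ("file_size",   "<I", "total file size in bytes"),
    ("reserved",    "<I", "reserved, expected 0"),
    ("data_offset", "<I", "offset of the pixel data"),
]

def annotate(buffer: bytes, definition):
    """Walk the buffer and attach semantic labels to raw byte ranges."""
    offset = 0
    for name, fmt, meaning in definition:
        size = struct.calcsize(fmt)
        (value,) = struct.unpack_from(fmt, buffer, offset)
        yield {"field": name, "offset": offset, "size": size,
               "value": value, "meaning": meaning}
        offset += size

# Synthetic 14-byte buffer standing in for the start of a BMP file.
data = struct.pack("<2sIII", b"BM", 1234, 0, 54)
for field in annotate(data, BMP_HEADER):
    print(f"{field['offset']:6d}  {field['field']:<12} {field['value']!r:<10}  {field['meaning']}")
```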
KEYWORDS: Video, Visualization, Video acceleration, Video processing, Video coding, Web services, Medical imaging, Visual analytics, Volume rendering, Image analysis
Modern 3D visualization environments for medical image data provide high interactivity and flexibility but depend on the expert knowledge and experience of the user with respect to the software application. Defining the visualization parameters is a manual, time-consuming process, and as a result inter-patient or inter-study comparisons are extremely difficult. To overcome these drawbacks in the analysis and diagnosis of pathologies, standardization of 3D visualization is an important issue. For this purpose, automatically generated digital video sequences can be used to convey the most important information contained in the data. In this paper, we present an improvement of our existing web-based service, which is now able to compute the video sequences in a much shorter time by exploiting the power of a GPU cluster. The system requires a medical volume dataset to be transferred from an arbitrary computer connected via the Internet and sends back a number of video files automatically generated with direct volume rendering. To achieve an optimal load balancing of the available resources, the tasks of automatic adjustment of transfer functions, volume rendering, and video encoding are divided into small sub-requests, which are distributed to the different cluster nodes to be performed in parallel. An additional preview mode, which renders a number of dedicated frames, provides direct feedback and a quick overview. For the evaluation, we focused on the analysis of intracranial aneurysms and were able to show that the system can be applied successfully. Furthermore, the system was designed to allow easy integration of other analysis tasks.
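The sub-request scheme can be illustrated with a minimal sketch: the video is split into fixed-size frame ranges that are handed out to workers as they become free. Here local worker processes stand in for the GPU-cluster nodes, and the frame counts, chunk size, and function names are illustrative assumptions rather than the service's actual interface.

```python
from concurrent.futures import ProcessPoolExecutor

TOTAL_FRAMES = 600      # e.g. 24 s of video at 25 fps (illustrative)
CHUNK = 50              # frames per sub-request (illustrative)

def render_chunk(first_frame: int, last_frame: int) -> str:
    """Placeholder for per-node work: apply the adjusted transfer function,
    volume-render the assigned frames, and encode them into a segment."""
    # ... GPU rendering and encoding would happen here on a cluster node ...
    return f"segment_{first_frame:04d}_{last_frame:04d}.h264"

def render_video() -> list[str]:
    ranges = [(start, min(start + CHUNK, TOTAL_FRAMES) - 1)
              for start in range(0, TOTAL_FRAMES, CHUNK)]
    with ProcessPoolExecutor() as pool:
        # Each sub-request is small, so free workers pick up new work as soon
        # as they finish, which is the load-balancing idea described above.
        segments = list(pool.map(render_chunk, *zip(*ranges)))
    return segments  # the segments would finally be concatenated into one file

if __name__ == "__main__":
    print(render_video())
```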
KEYWORDS: Visualization, Brain activation, Brain, Functional magnetic resonance imaging, Volume rendering, Digital video recorders, Opacity, Particles, Convolution, 3D metrology
Modern medical imaging provides a variety of techniques for the acquisition of multi-modality data. A typical example is the combination of functional and anatomical data from functional Magnetic Resonance Imaging (fMRI) and anatomical MRI measurements. Usually, the data resulting from each of these two methods is transformed into a 3D scalar-field representation to facilitate visualization. A common method for the visualization of anatomical/functional multi-modalities combines semi-transparent isosurfaces (SSD, surface shaded display) with other scalar visualization techniques such as direct volume rendering (DVR). However, the partial occlusion and visual clutter that typically result from overlaying these traditional 3D scalar-field visualization techniques make it difficult for the user to perceive and recognize visual structures. This paper addresses these perceptual issues with a new visualization approach for anatomical/functional multi-modalities. The idea is to reduce the occlusion caused by an isosurface by replacing its surface representation with a sparser line representation. These lines are chosen along the principal curvature directions of the isosurface and rendered with a flow visualization method called line integral convolution (LIC). Applying the LIC algorithm results in fine line structures that improve the perception of the isosurface's shape to the point that it can be rendered with low opacity values. Interactive visualization is achieved by executing the algorithm entirely on the graphics processing unit (GPU) of modern graphics hardware. Furthermore, several illumination techniques and image compositing strategies are discussed for emphasizing the isosurface structure. We demonstrate our method using the example of fMRI/MRI measurements, visualizing the spatial relationship between brain activation and brain tissue.
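As a rough illustration of line integral convolution itself, the following CPU sketch smears white noise along the streamlines of a planar vector field. The paper's method instead evaluates LIC along principal curvature directions on the isosurface and executes it on the GPU, so the field, step count, and resolution below are purely illustrative.

```python
import numpy as np

def lic_2d(vx, vy, noise, steps=15, h=0.8):
    """Box-filter LIC: average noise values along short streamlines."""
    height, width = noise.shape
    out = np.zeros_like(noise)
    for y in range(height):
        for x in range(width):
            total, count = 0.0, 0
            for direction in (+1.0, -1.0):            # integrate both ways
                px, py = float(x), float(y)
                for _ in range(steps):
                    ix, iy = int(px), int(py)
                    if not (0 <= ix < width and 0 <= iy < height):
                        break
                    total += noise[iy, ix]
                    count += 1
                    norm = np.hypot(vx[iy, ix], vy[iy, ix]) + 1e-9
                    px += direction * h * vx[iy, ix] / norm
                    py += direction * h * vy[iy, ix] / norm
            out[y, x] = total / max(count, 1)
    return out

# Circular test field; the output shows the field's streamline structure.
ys, xs = np.mgrid[0:128, 0:128]
vx, vy = -(ys - 64.0), (xs - 64.0)
image = lic_2d(vx, vy, np.random.rand(128, 128))
print(image.shape, float(image.min()), float(image.max()))
```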
Visualizing diffusion tensor imaging data has recently gained increasing importance. The data is of particular interest to neurosurgeons since it allows analyzing the location and topology of major white matter tracts such as the pyramidal tract. Various approaches such as fractional anisotropy maps, fiber tracking, and glyphs have been introduced, but many of them suffer from ambiguous representations of important tract systems and the related anatomy. Furthermore, they provide no information about the reliability of the presented visualization, although this information is essential for neurosurgery. This work proposes a new glyph visualization approach, accelerated with consumer graphics hardware, that shows a maximum of the information contained in the data. In particular, the probability of major white matter tracts can be assessed from the shape and color of the glyphs. Integrating direct volume rendering of the underlying anatomy based on 3D texture mapping and a special hardware-accelerated clipping strategy allows a more comprehensive evaluation of important tract systems in the vicinity of a tumor and provides further valuable insights. Focusing on hardware acceleration wherever possible ensures high image quality and interactivity, which is essential for clinical application. Overall, the presented approach makes diagnosis and therapy planning based on diffusion tensor data more comprehensive and allows a better assessment of major white matter tracts.
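The mapping from tensor data to glyph shape and color can be sketched with standard eigenvalue-based measures. The fractional anisotropy and Westin shape measures below are textbook quantities, while the particular scale and color mapping is an illustrative assumption and does not reproduce the paper's probability measure or its hardware-accelerated clipping.

```python
import numpy as np

def glyph_parameters(tensor: np.ndarray):
    """Per-glyph scale, color, and anisotropy for one 3x3 diffusion tensor."""
    evals, evecs = np.linalg.eigh(tensor)       # eigenvalues in ascending order
    l1, l2, l3 = evals[::-1]                    # sort descending
    e1 = evecs[:, 2]                            # principal diffusion direction

    # Westin shape measures: linear, planar, spherical (they sum to 1).
    s = l1 + l2 + l3
    cl, cp, cs = (l1 - l2) / s, 2 * (l2 - l3) / s, 3 * l3 / s

    # Fractional anisotropy, commonly used to judge how reliable a tract is.
    mean = s / 3.0
    fa = np.sqrt(1.5 * ((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))

    scale = np.array([l1, l2, l3]) / l1         # ellipsoid semi-axes (assumed mapping)
    color = np.abs(e1) * fa                     # direction-coded RGB, dimmed by FA (assumed mapping)
    return {"scale": scale, "color": color, "cl": cl, "cp": cp, "cs": cs, "fa": fa}

# Example: a strongly linear tensor aligned with the x axis.
print(glyph_parameters(np.diag([1.7e-3, 0.3e-3, 0.2e-3])))
```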
Conference Committee Involvement (7)
Lasers in Dentistry XXVIII
23 January 2022 | San Francisco, California, United States
Lasers in Dentistry XXVII
6 March 2021 | Online Only, California, United States
Lasers in Dentistry XXVI
2 February 2020 | San Francisco, California, United States
Lasers in Dentistry XXV
3 February 2019 | San Francisco, California, United States
Lasers in Dentistry XXIV
28 January 2018 | San Francisco, California, United States
Lasers in Dentistry XXIII
29 January 2017 | San Francisco, California, United States
Lasers in Dentistry XXII
14 February 2016 | San Francisco, California, United States