KEYWORDS: Image segmentation, 3D modeling, Magnetic resonance imaging, 3D image processing, Medical imaging, Image processing, Optical coherence tomography, Image processing algorithms and systems, Brain, Neuroimaging
Real-time segmentation of 3D medical structures is an important yet intractable problem for clinical applications due to its high computation and memory cost. In this paper we propose a novel fast-evolving active contour model that reduces both requirements. The basic idea is to evolve the compactly represented dynamic contour interface as far as possible per iteration. Our method encodes the zero level set as a single unordered list and evolves the list recursively by appending activated adjacent neighbors to its end, so that active parts of the zero level set move far enough per iteration as the list is scanned. To keep this process robust, a new curvature approximation for integer-valued level sets is proposed as the internal force, enforcing smoothness of the list and restraining its continual growth. In addition, the number of list scans is used as a hard upper constraint on list growth. Together with the internal force, efficient regional and constrained external forces, computed only along the unordered list, attract the list toward object boundaries. In particular, our model calculates the regional force only in a narrow band outside the zero level set, can efficiently segment multiple regions simultaneously, and handles backgrounds with multiple components. Compared with state-of-the-art algorithms, ours is an order of magnitude faster with similar segmentation accuracy and achieves real-time performance for the segmentation of 3D medical structures on a standard PC.
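The core list-scanning idea can be sketched in a few lines. This is a heavily simplified 2D illustration with an intensity threshold standing in for the paper's regional force; the curvature-based internal force and the scan-count constraint are omitted, and all names are illustrative:

```python
import numpy as np

def grow_zero_set(img, seed, thresh):
    """Sketch of unordered-list evolution (external force only).

    The zero level set is kept as a single unordered list; scanning the
    list while appending newly activated neighbors to its end lets active
    parts of the front advance arbitrarily far within one scan. `thresh`
    is a stand-in for the regional force; curvature smoothing and the
    scan-count limit from the paper are not modeled here.
    """
    h, w = img.shape
    inside = np.zeros((h, w), dtype=bool)
    inside[seed] = True
    zero_list = [seed]
    i = 0
    while i < len(zero_list):            # the list grows as we scan it
        y, x = zero_list[i]
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not inside[ny, nx] \
                    and img[ny, nx] >= thresh:
                inside[ny, nx] = True    # activate neighbor and keep scanning
                zero_list.append((ny, nx))
        i += 1
    return inside
```

In the full model the list would also shrink under the internal force rather than only grow.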
This paper presents a learning-based vessel detection and segmentation method for real-patient ultrasound (US) liver images. We aim to detect vessels of multiple shapes robustly and automatically, including vessels with weak and ambiguous boundaries. First, vessel candidate regions are detected by a data-driven approach: multi-channel vessel enhancement maps with complementary strengths are generated and aggregated under a Conditional Random Field (CRF) framework, and vessel candidates are obtained by thresholding the resulting saliency map. Second, regional features are extracted and the probability of each region being a vessel is modeled by random forest regression. Finally, a fast level-set method refines the vessel boundaries. Experiments were carried out on a US liver dataset of 98 patients containing both normal and abnormal liver images. Compared with a traditional Hessian-based method, the proposed method improves average precision by 56% and 7.8% for vessel detection and classification, respectively. This improvement shows that our method is more robust to noise and therefore better suited than the Hessian-based method for detecting vessels with weak and ambiguous boundaries.
Adaptive thresholding is a useful technique for document analysis. In medical image processing, it is also helpful for segmenting structures such as diaphragms or blood vessels. The technique sets a threshold using local information around a pixel and then binarizes the pixel according to that value. Although robust to changes in illumination, it takes a significant amount of time to compute thresholds because all of the neighboring pixels must be summed. Integral images can alleviate this overhead; however, medical images such as ultrasound often come with image masks, and ordinary algorithms then produce artifacts. The main problem is that the summing area is not rectangular near the boundaries of the image mask: the threshold at the mask boundary is incorrect because masked-out pixels are also counted. Our key idea is to compute an additional integral image of the mask itself, which counts the number of valid pixels in each window. Our method is implemented on a GPU using CUDA, and experimental results show that our algorithm is 164 times faster than a naïve CPU averaging algorithm.
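The masked integral-image idea can be sketched as follows. This is an illustrative CPU version in Python, not the authors' CUDA implementation, and the window size and threshold ratio are made-up parameters:

```python
import numpy as np

def masked_adaptive_threshold(img, mask, win=15, ratio=0.9):
    """Adaptive mean thresholding restricted to a binary image mask.

    A second integral image, computed over the mask, counts the valid
    pixels in each window, so windows that overlap the mask boundary use
    the correct local mean instead of counting masked-out pixels.
    """
    img = img.astype(np.float64) * mask          # zero out invalid pixels
    # Integral images with a zero row/column prepended for windowed sums.
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    iim = np.pad(mask.astype(np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        for x in range(w):
            if not mask[y, x]:
                continue                         # skip invalid pixels
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
            n = iim[y1, x1] - iim[y0, x1] - iim[y1, x0] + iim[y0, x0]
            # Bradley-style test: pixel vs. a fraction of the valid-pixel mean.
            if n > 0 and img[y, x] * n > s * ratio:
                out[y, x] = 1
    return out
```

On a GPU, the two per-pixel window sums map naturally onto one thread per pixel, which is where the reported speedup comes from.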
Tumor tracking is very important for treating cancers in moving organs in clinical applications such as radiotherapy and HIFU. Respiratory monitoring systems are widely used to locate cancers in such organs because the respiratory signal is highly correlated with the movement of organs such as the lungs and liver. However, conventional respiratory systems are not accurate enough to track the location of a tumor, and they require additional effort or devices. In this paper, we propose a novel method to track a liver tumor in real time by extracting respiratory signals directly from B-mode images and using a deformed liver model generated from CT images of the patient. Our method has several advantages. 1) Using an ultrasound device, it adds no radiation dose and is cost-effective. 2) A high-quality respiratory signal can be extracted directly from 2D images of the diaphragm. 3) Using a deformed liver model to track the tumor's 3D position, our method achieves a tracking error of 3.79 mm.
We present a new method for patient-specific liver deformation modeling for tumor tracking. Our method focuses on deforming the two main blood vessels of the liver, the hepatic and portal veins, to use them as features. A novel centerline editing algorithm based on ellipse fitting is introduced for vessel deformation. Centerline-based blood vessel models with various interpolation methods are often used to generate a deformed model at a specific time t, but this may introduce artifacts when the models used in the interpolation are inconsistent. One main reason for this inconsistency is that the location of the bifurcation points differs between images. To solve this problem, our method generates a base model from one of the patient's CT images and then applies a rigid iterative closest point (ICP) method to register the base model to the centerlines of the other images. Because the transformation is rigid, the length of each vessel's centerline is preserved, although parts of the centerline deviate slightly from the centerlines of the other images. We resolve this mismatch with our centerline editing algorithm. Finally, we interpolate the three deformed models of the liver, blood vessels, and tumor using quadratic Bézier curves. We demonstrate the effectiveness of the proposed approach on real patient data.
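The final interpolation step can be illustrated directly. Assuming three deformed models with corresponding vertices, the quadratic Bézier blend at normalized respiratory phase t is (an illustrative sketch, not the authors' exact pipeline):

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    """Quadratic Bézier interpolation of corresponding vertex arrays.

    p0, p1, p2 are (N, 3) vertex arrays from three deformed models
    (e.g. captured at three respiratory phases); t in [0, 1] is the
    normalized phase. The middle model acts as the Bézier control point.
    """
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
```

At t = 0 and t = 1 the curve reproduces the end models exactly, while intermediate phases follow a smooth quadratic path through the influence of the middle model.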
KEYWORDS: 3D modeling, 3D image processing, Tumors, Liver, Motion models, Image processing, Computed tomography, Magnetic resonance imaging, Data modeling, Veins
This paper presents a novel method that uses 2D ultrasound (US) cine images during image-guided therapy to accurately track the 3D position of a tumor even when the organ of interest moves with patient respiration. Tracking is possible thanks to a 3D deformable organ model we have developed. The method consists of three processes in succession. The first is organ modeling, in which we generate a personalized 3D organ model from high-quality 3D CT or MR data sets captured during three different respiratory phases. The model includes the organ surface, vessels, and tumor, all of which deform and move in accord with patient respiration. The second is registration of the organ model to 3D US images: from 133 respiratory phase candidates generated from the deformable organ model, we select the candidate that best matches the 3D US images according to vessel centerlines and the surface, thereby determining the position of the US probe. The final process is real-time tracking using 2D US cine images captured by the probe. We determine the respiratory phase by tracking the diaphragm in the image; the 3D model is then deformed according to the phase and fitted to the image using the positions of the vessels, and the tumor's 3D position is inferred from the respiratory phase. Testing our method on real patient data, we found the 3D position accuracy to be within 3.79 mm with a processing time of 5.4 ms during tracking.
Automatic segmentation of anatomical structures is crucial for computer-aided diagnosis and image-guided online treatment. In this paper, we present a novel approach for fully automatic segmentation of all anatomical structures of a target liver organ in a coherent framework. First, all regional anatomical structures, such as vessels, tumors, the diaphragm, and liver parenchyma, are detected simultaneously using random forest classifiers that share the same feature set and classification procedure. Second, an efficient region segmentation algorithm obtains the precise shapes of these regional structures; it is based on a level set with the proposed active-set evolution and multiple-feature handling, achieving a 10x speedup over existing algorithms. Third, the liver boundary curve is extracted via a graph-based model, with the segmentation results of the regional structures incorporated into the graph as constraints to improve robustness and accuracy. Experiments were carried out on an ultrasound dataset of 942 images captured under liver motion and deformation from a number of different views. Quantitative results demonstrate the efficiency and effectiveness of the proposed algorithm.
Respiratory motion tracking has been an issue for MR/CT imaging and for noninvasive surgery such as HIFU and radiotherapy when these imaging or therapy technologies are applied to moving organs such as the liver, kidney, or pancreas. Currently, bulky and burdensome devices placed externally on the skin are used to estimate the respiratory motion of an organ; they estimate organ motion indirectly from skin motion rather than directly from the organ itself. In this paper, we propose a system that directly measures the motion of the organ itself using only ultrasound images. Our system automatically selects a window in the image sequence, called the feature window, from which respiratory motion can be measured robustly even in noisy ultrasound images. The organ's displacement in each ultrasound image is calculated directly through the feature window. The method is very convenient to use since it requires only a conventional ultrasound probe. We show that our method robustly extracts the respiratory motion signal regardless of the reference frame, which makes it superior to other image-based methods such as Mutual Information (MI) or Correlation Coefficient (CC), both of which are sensitive to the choice of reference frame. Furthermore, our method gives clear information about the phase of the respiratory cycle, such as whether the patient is inspiring or expiring, because it calculates the organ's actual displacement rather than a similarity measure like MI or CC.
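The displacement-from-a-window idea can be sketched simply. Assuming the feature window straddles a bright horizontal structure such as the diaphragm, its vertical motion can be read off the window's row-intensity profile (an illustrative sketch only; the paper's feature-window selection is more elaborate):

```python
import numpy as np

def window_displacement(frame, window, ref_profile=None):
    """Estimate vertical organ displacement inside a feature window.

    The window's rows are summed into a 1D intensity profile and the
    brightest row (e.g. the diaphragm edge) is located. Displacement is
    the peak shift relative to a reference profile, so the result is an
    actual displacement in pixels rather than a similarity score.
    """
    y0, y1, x0, x1 = window
    profile = frame[y0:y1, x0:x1].sum(axis=1)
    peak = int(np.argmax(profile))
    if ref_profile is None:
        return peak, profile             # first call: establish reference
    ref_peak = int(np.argmax(ref_profile))
    return peak - ref_peak, profile
```

Because the output is a signed pixel displacement, its sign directly distinguishes inspiration from expiration, unlike MI or CC scores.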
KEYWORDS: 3D modeling, Laser induced plasma spectroscopy, Head, 3D image processing, Motion models, Data modeling, Statistical modeling, Control systems, Visual process modeling, Nose
We propose a novel markerless 3D facial motion capture system using only one common camera. The system is simple and makes it easy to transfer a user's facial expressions into a virtual world. It robustly tracks facial feature points under head movements and estimates 3D point locations with high accuracy. We designed novel approaches for the following. First, for precise 3D head motion tracking, we apply 3D constraints from a 3D face model to a conventional 2D feature point tracking approach, the Active Appearance Model (AAM). Second, to handle a user's various expressions, we built generic 2D face models from around 5,000 images and 3D shape data including symmetric and asymmetric facial expressions. Lastly, for accurate facial expression cloning, we designed a manifold space that transfers low-dimensional 2D feature points to high-dimensional 3D points. The manifold space is defined by eleven facial expression bases.
KEYWORDS: Cameras, Photography, Control systems, Digital cameras, Scene classification, Camera shutters, Digital imaging, Machine vision, Pattern recognition, Computer vision technology
In this work we propose a method to build digital still cameras that can photograph a given scene with the knowledge of photographic experts, i.e., professional photographers. Photographic experts' knowledge here means their camera controls, namely shutter speed, aperture size, and ISO value, for a given scene. To implement this knowledge, we redefine the Scene Mode of currently available digital cameras. For example, instead of the single Night Scene Mode of conventional digital cameras, we break it into 76 scene modes via a Night Scene Representative Image Set, an image set intended to cover all cases of night scenes with respect to camera controls. To photograph all the complex night-scene cases appropriately, each member of the representative image set comes with the corresponding experts' camera controls (shutter speed, aperture size, and ISO value). Our system first pairs a given scene with one of the redefined scene modes automatically, realizing the experts' knowledge. Using the representative set, we apply likelihood analysis to decide whether the given scene lies within the boundary of the set. If it does, we compute its similarity to each representative image via the correlation coefficient. Finally, the camera controls of the most similar representative image are used to photograph the given scene, with finer tuning according to the degree of similarity.
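The matching step can be sketched as follows. This assumes scenes are summarized as feature vectors (e.g. downsampled luminance histograms); the function names, the feature choice, and the rejection threshold are illustrative assumptions, not details from the paper:

```python
import numpy as np

def pick_scene_mode(query, reps, controls, min_corr=0.5):
    """Match a scene to a representative image set by correlation.

    `query` is the scene's feature vector, each row of `reps` is a
    representative image's feature vector, and `controls` maps the
    representative index to its expert camera controls
    (shutter, aperture, ISO). Returns None when the best correlation is
    too low, i.e. the scene falls outside the representative set.
    """
    corrs = [np.corrcoef(query, r)[0, 1] for r in reps]
    best = int(np.argmax(corrs))
    if corrs[best] < min_corr:
        return None                      # outside the representative set
    return controls[best]
```

A production system would additionally blend the controls of the top matches according to the similarity scores, as the abstract's "finer tuning" suggests.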
Despite the rapid spread of digital cameras, many people cannot take the high-quality pictures they want due to a lack of photographic expertise. To help users in unfavorable capture environments, e.g. 'Night', 'Backlighting', 'Indoor', or 'Portrait', the automatic mode of cameras provides manufacturer-defined parameter sets. Unfortunately, this automatic functionality does not give pleasing image quality in general. The length of exposure (shutter speed) is an especially critical factor in taking high-quality pictures at night, and one key cause of bad night shots is image blur, which mainly comes from hand shake during long exposures. In this study, to circumvent this problem and enhance the image quality of automatic cameras, we propose an intelligent camera processing core featuring SABE (Scene Adaptive Blur Estimation) and VisBLE (Visual Blur Limitation Estimation). SABE analyzes the high-frequency components in the DCT (Discrete Cosine Transform) domain. VisBLE determines the acceptable blur level on the basis of human visual tolerance and a Gaussian model; this visual tolerance model is developed from the physiological mechanism of human perception. In experiments, the proposed method outperforms existing imaging systems as judged by general users and photographers alike.
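DCT-domain blur analysis of the kind SABE performs can be sketched generically: blurry images lose high-frequency energy, so the fraction of DCT energy above a frequency cutoff serves as a sharpness score. This is a generic sketch, not the exact SABE measure, and `cutoff` is an illustrative parameter:

```python
import numpy as np

def dct2(x):
    """Unnormalized 2D DCT-II built from cosine basis matrices (numpy only)."""
    def basis(n):
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        return np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    n0, n1 = x.shape
    return basis(n0) @ x @ basis(n1).T

def blur_score(gray, cutoff=8):
    """Sharpness score: fraction of non-DC DCT energy above `cutoff`.

    Blurrier images concentrate their energy in low frequencies and
    therefore score lower.
    """
    c = np.abs(dct2(gray.astype(np.float64)))
    c[0, 0] = 0.0                                # ignore the DC term
    total = c.sum()
    if total == 0.0:
        return 0.0
    high = c[cutoff:, :].sum() + c[:cutoff, cutoff:].sum()
    return high / total
```

A system like VisBLE would then compare such a score against a perceptual tolerance threshold instead of using it raw.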
To capture human pictures with good quality, auto focus, exposure, and white balance on face areas are very important. This paper presents a novel method to detect multi-view faces quickly and accurately. It combines an accurate 3-level all-chain structure algorithm with a fast skin color algorithm. The 3-level all-chain structure algorithm has three levels, all linked from top to bottom. Level 1 rejects non-face samples for all views with an improved real-boosting method. Level 2 is a specially designed cascade with two sub-levels that estimates and verifies the view class of a face sample from coarse to fine. Level 3 is an independent view verifier for each view. Between neighboring levels (or sub-levels), the sample classification confidence of the previous level is passed to the next level; within each level (or sub-level), the classification confidence of the previous stage becomes the first weak classifier of the next stage, because the previous classification result contains very useful information for the current stage. The fast skin color algorithm removes non-skin areas with little computation, which makes the system work much faster. Experimental results show that the method is very efficient: it correctly detects multi-view human faces in real time and estimates the face view class at the same time.
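The confidence-passing chain can be sketched abstractly. Each level consumes the previous level's confidence as if it were its first weak classifier; the callable interface and the rejection threshold here are illustrative assumptions, not the paper's exact formulation:

```python
def cascade_detect(sample, levels, reject_below=0.0):
    """Sketch of confidence passing between cascade levels.

    Each element of `levels` is a callable taking (sample,
    prev_confidence) and returning an updated confidence, so the
    previous level's output acts like the first weak classifier of the
    next. A sample is rejected as non-face as soon as its confidence
    drops below the threshold, which keeps the cascade fast.
    """
    conf = 0.0
    for level in levels:
        conf = level(sample, conf)
        if conf < reject_below:          # early rejection
            return False, conf
    return True, conf
```

In the real detector each level would be a boosted classifier over image features rather than a simple callable, but the chaining of confidences is the same.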
'Fast and robust' are the most sought-after keywords in computer vision. Unfortunately, they are in a trade-off relationship. We present a method to have one's cake and eat it using adaptive feature selection. Our chief insight is to compare reference patterns to query patterns so as to smartly select the features most important and useful for finding the target. The probability of each query pixel belonging to the target is calculated from the importance of the features. Our framework has three distinct advantages: 1) it dramatically reduces computational cost compared with the conventional approach, making it possible to locate an object in real time; 2) it can smartly select robust features of a reference pattern by adapting to the query pattern; 3) it has high flexibility with respect to features: it does not matter which feature you use, and many color-space, texture, motion, and other features fit perfectly as long as they meet the histogram criterion.