Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods, and low-level information derived from the image of interest alone is insufficient for detecting bones and distinguishing the boundaries of different bones that lie in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model with a local exemplar-based appearance model for automatically segmenting hand bones in CT. Our approach performs a hierarchical articulated shape deformation driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to the new position whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of pre-determined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and achieved a segmentation success rate of approximately 89.70%. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.
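As an illustration of the exemplar-profile matching step described above, the following Python sketch samples an intensity profile along a shape point's normal and moves the point to the offset whose profile best matches the exemplar set. The function names, nearest-neighbour sampling, and search range are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_profile(volume, point, normal, half_len=5, spacing=1.0):
    """Sample a 1-D intensity profile along the surface normal at a shape point.

    `volume` is a 3-D CT array indexed as volume[z, y, x]; nearest-neighbour
    sampling keeps the sketch short (trilinear interpolation would be used in practice).
    `normal` is assumed to be a unit vector.
    """
    offsets = np.arange(-half_len, half_len + 1) * spacing
    coords = point[None, :] + offsets[:, None] * normal[None, :]
    idx = np.clip(np.round(coords).astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]]

def best_displacement(volume, point, normal, exemplar_profiles, search_range=4):
    """Move the point along its normal to the offset whose local profile is closest
    (in Euclidean distance) to any exemplar profile of the appearance model."""
    best_off, best_cost = 0.0, np.inf
    for off in np.arange(-search_range, search_range + 1):
        candidate = point + off * normal
        prof = sample_profile(volume, candidate, normal)
        cost = min(np.linalg.norm(prof - ex) for ex in exemplar_profiles)
        if cost < best_cost:
            best_off, best_cost = off, cost
    return point + best_off * normal
```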
Vertebral segmentation is a critical first step in any quantitative evaluation of vertebral pathology using CT images. This is especially challenging because bone marrow tissue has the same intensity profile as the muscle surrounding the bone, so simple methods such as thresholding or adaptive k-means fail to accurately segment vertebrae. While several other algorithms, such as level sets, may be used for segmentation, any algorithm that is clinically deployable has to run in under a few seconds. To address these dual challenges, we present a new algorithm based on the geodesic distance transform that is capable of segmenting the spinal vertebrae in under one second. To achieve this, we extend the theory of the geodesic distance transform proposed in [1] to incorporate high-level anatomical knowledge through adaptive weighting of image gradients. Such knowledge may be provided by the user directly or may be generated automatically by another algorithm. We incorporate information 'learnt' using a previously published machine learning algorithm [2] to segment the L1 to L5 vertebrae. While we present a particular application here, the adaptive geodesic transform is a generic concept that can be applied to the segmentation of other organs as well.
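A minimal sketch of a gradient-weighted geodesic distance transform of the kind described above, computed by Dijkstra-style propagation from seed pixels. The per-pixel `weight_map` stands in for the adaptive, anatomy-driven weighting; all names and the cost term are illustrative rather than the paper's formulation.

```python
import heapq
import numpy as np

def adaptive_geodesic_distance(image, seeds, weight_map):
    """Gradient-weighted geodesic distance on a 2-D slice (Dijkstra propagation).

    Edge cost between neighbouring pixels is the spatial step plus a per-pixel
    weight times the intensity difference; `weight_map` lets anatomical knowledge
    down- or up-weight image gradients locally.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (r, c) in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                step = 1.0 + weight_map[rr, cc] * abs(float(image[rr, cc]) - float(image[r, c]))
                nd = d + step
                if nd < dist[rr, cc]:
                    dist[rr, cc] = nd
                    heapq.heappush(heap, (nd, rr, cc))
    return dist
```

Thresholding the resulting distance map (or comparing distances from foreground and background seeds) yields the segmentation.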
Non-interventional diagnostics (CT or MR) enables early identification of diseases such as cancer. Lesion growth assessment performed during follow-up is often used to distinguish benign from malignant lesions, so correspondences need to be found between the lesions localized at each time point. Manually matching the radiological findings can be time consuming and tedious due to possible differences in orientation and position between scans. Moreover, the complicated nature of the disease leads physicians to rely on multiple modalities (PET-CT, PET-MR), where matching is even more challenging. Here, we propose an automatic feature-based matching that is robust to changes in organ volume and to subpar or absent registration, and that requires very little computation. Traditional matching methods rely mostly on accurate image registration and on applying the resulting deformation map to the finding coordinates. This is a disadvantage when accurate registration is time-consuming or not possible due to large organ volume differences between scans. Our novel matching approach uses supervised learning, taking advantage of the underlying CAD features that are already present and treating matching as a classification problem. In addition, the matching can be done extremely fast and at reasonable accuracy even when image registration fails. Experimental results on real-world multi-time-point thoracic CT data showed an accuracy above 90% with negligible false positives across a variety of registration scenarios.
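A hedged sketch of matching-as-classification in the spirit described above: candidate finding pairs across time points are scored by a supervised classifier trained on pairwise CAD-feature differences. The feature construction, the RandomForestClassifier choice, and the greedy assignment are assumptions for illustration, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(f_a, f_b):
    """Feature vector for a candidate (baseline finding, follow-up finding) pair:
    absolute differences of per-finding CAD features plus the distance between
    positions. The actual CAD features used in the paper are not specified;
    here the first 3 dimensions are assumed to be (x, y, z) coordinates."""
    diff = np.abs(f_a - f_b)
    return np.concatenate([diff, [np.linalg.norm(f_a[:3] - f_b[:3])]])

def match_findings(baseline, followup, clf, threshold=0.5):
    """Greedy matching: score every cross-time-point pair with a trained classifier
    and keep the highest-probability match per baseline finding above a threshold."""
    matches = []
    for i, fa in enumerate(baseline):
        probs = [clf.predict_proba(pair_features(fa, fb)[None, :])[0, 1] for fb in followup]
        j = int(np.argmax(probs))
        if probs[j] > threshold:
            matches.append((i, j, probs[j]))
    return matches

# Training sketch: rows of X_train are pair features, y_train marks true matches.
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
```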
Purpose: By incorporating high-level shape priors, atlas-based segmentation has achieved tremendous success
in the area of medical image analysis. However, the effect of various kinds of atlases, e.g., average shape model,
example-based multi-atlas, has not been fully explored. In this study, we aim to generate different atlases and
compare their performance in segmentation.
Methods: We compare segmentation performance using a parametric deformable model (PDM) with four different atlases: 1) a single atlas, i.e., an average shape model (SAS); 2) an example-based multi-atlas (EMA); 3) cluster-based average shape models (CAS); and 4) cluster-based statistical shape models (average shape + principal shape variation modes) (CSS). CAS and CSS are novel atlases constructed by shape clustering. For comparison purposes, we also use the PDM without an atlas (NOA) as a benchmark method.
Experiments: The experiments are carried out on liver segmentation from whole-body CT images. Atlases are constructed from 39 manually delineated liver surfaces. 11 CT scans with ground truth are used as the testing data set. Segmentation accuracy using the different atlases is compared.
Conclusion: Compared with segmentation without an atlas, all four atlas-based image segmentation methods achieve better results. Multi-atlas-based segmentation performs better than single-atlas-based segmentation. CAS exhibits superior performance to all other methods.
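The cluster-based atlases (CAS, CSS) could be constructed roughly as in the sketch below, which clusters aligned training shapes and keeps a per-cluster mean shape plus principal variation modes. The use of k-means, the number of clusters, and the PCA-based modes are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def build_cluster_atlases(shapes, n_clusters=3, n_modes=5):
    """Construct cluster-based atlases from aligned training shapes.

    `shapes` is an (N, 3*P) array of N liver surfaces, each flattened from P
    corresponding landmark points (point correspondence and alignment are assumed
    to have been established beforehand). Returns, per cluster, the average shape
    (CAS) and the principal variation modes (for CSS).
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(shapes)
    atlases = []
    for k in range(n_clusters):
        members = shapes[labels == k]            # assumes each cluster has several members
        mean_shape = members.mean(axis=0)        # CAS: cluster average shape
        pca = PCA(n_components=min(n_modes, max(1, len(members) - 1)))
        pca.fit(members)                         # CSS: mean + principal shape variation modes
        atlases.append({"mean": mean_shape,
                        "modes": pca.components_,
                        "variances": pca.explained_variance_})
    return atlases
```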
We propose a new model for pharmacokinetic analysis based on the one proposed by Tofts. Our model
both eliminates the need for estimating the Arterial Input Function (AIF) and normalizes analysis so
that comparisons across patients can be performed. Previous methods have attempted to circumvent
the AIF estimation by using the pharmacokinetic parameters of multiple reference regions (RR). If we view anatomical structures as filters, pharmacokinetic analysis tells us that 'similar' structures act as similar filters. By cascading the inverse filter at an RR with the filter at the voxel being analyzed, we obtain a
transfer function relating the concentration of a voxel to that of the RR. We show that this transfer function
simplifies into a five-parameter nonlinear model with no reference to the AIF. These five parameters are
combinations of the three parameters of the original model at the RR and the region of interest. Contrary
to existing methods, ours does not require explicit estimation of the pharmacokinetic parameters of the
RR. Also, cascading filters in the frequency domain allows us to manipulate more complex models, such as
accounting for the vascular tracer component. We believe that our model can improve analysis across MR
parameters because the analyzed and reference enhancement series are from the same image. Initial results
are promising with the proposed model parameters exhibiting values that are more consistent across lesions
in multiple patients. Additionally, our model can be applied to multiple voxels to estimate the original
pharmacokinetic parameters as well as the AIF.
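For context, the standard Tofts kinetics underlying the abstract, written in filter form; the cascade of a reference-region inverse filter with the voxel filter shows how the AIF drops out. This is the published Tofts formulation plus a generic derivation of the cascade idea, not the paper's specific five-parameter model.

```latex
% Standard Tofts kinetics (the published model this abstract builds on); the paper's
% own five-parameter frequency-domain model is not reproduced here.
\begin{align*}
  % Tissue concentration = AIF convolved with a single-compartment kernel
  C_t(t) &= K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau \\
  % In the Laplace domain every voxel acts as a linear filter on the AIF C_p
  C_t(s) &= H(s)\, C_p(s), \qquad H(s) = \frac{K^{\mathrm{trans}}}{s + k_{ep}} \\
  % Cascading the inverse filter of a reference region (RR) with the filter of the
  % analyzed voxel (TOI) cancels C_p, giving a transfer function with no AIF
  C_{\mathrm{TOI}}(s) &= \frac{H_{\mathrm{TOI}}(s)}{H_{\mathrm{RR}}(s)}\, C_{\mathrm{RR}}(s)
    = \frac{K^{\mathrm{trans}}_{\mathrm{TOI}}\,\bigl(s + k_{ep,\mathrm{RR}}\bigr)}
           {K^{\mathrm{trans}}_{\mathrm{RR}}\,\bigl(s + k_{ep,\mathrm{TOI}}\bigr)}\, C_{\mathrm{RR}}(s)
\end{align*}
```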
We present a new algorithm for automatic detection of bright tubular structures and its performance for automatic
segmentation of vessels in breast MR sequences. This problem is interesting because vessels are the main
type of false positive structures when automatically detecting lesions as regions that enhance after injection of
the contrast agent. Our algorithm is based on the eigenvalues of what we call the shape tensor. It is new in
that it does not rely on image derivatives of either first order, like methods based on the eigenvalues of the mean
structure tensor, or second order, like methods based on the eigenvalues of the Hessian. It is therefore more
precise and less sensitive to noise than those methods. In addition, it avoids the smoothing of the output that is inherent to approaches based on the Hessian or the structure tensor: the output of our filter does not show the typical over-smoothed look of the two differential filters, which affects both their precision and sensitivity. The scale selection problem also appears less difficult in our approach than in the differential techniques. Our algorithm is fast, needing only a few seconds per sequence. We present results of testing our
method on a large number of motion-corrected breast MR sequences. These results show that our algorithm
reliably segments vessels while leaving lesions intact. We also compare our method to the differential techniques
and show that it significantly outperforms them in both sensitivity and localization precision and that it is less
sensitive to scale selection parameters.
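For reference, a sketch of the Hessian-eigenvalue (Frangi-style) vesselness filter that the abstract uses as its differential baseline; the proposed shape-tensor filter itself is not specified in the abstract and is not reproduced here. Parameter values and helper names are illustrative.

```python
import numpy as np
from scipy import ndimage

def hessian_vesselness(volume, sigma=1.0, alpha=0.5, beta=0.5, c=15.0):
    """Frangi-style vesselness from Hessian eigenvalues at a single scale
    (the differential baseline, not the proposed shape-tensor filter)."""
    # Second-order Gaussian derivatives (Hessian components), scale-normalized by sigma^2.
    H = {}
    for (i, j) in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        deriv = [0, 0, 0]
        deriv[i] += 1
        deriv[j] += 1
        H[(i, j)] = sigma**2 * ndimage.gaussian_filter(volume.astype(float), sigma, order=deriv)

    # Assemble the symmetric Hessian per voxel and sort eigenvalues by magnitude |l1|<=|l2|<=|l3|.
    hess = np.stack([np.stack([H[(min(i, j), max(i, j))] for j in range(3)], axis=-1)
                     for i in range(3)], axis=-2)
    eig = np.linalg.eigvalsh(hess)
    ord_idx = np.argsort(np.abs(eig), axis=-1)
    l1, l2, l3 = np.take_along_axis(eig, ord_idx, axis=-1).transpose(3, 0, 1, 2)

    # Frangi ratios: plate-vs-line (Ra), blob (Rb), and overall structure strength (S).
    eps = 1e-10
    Ra = np.abs(l2) / (np.abs(l3) + eps)
    Rb = np.abs(l1) / (np.sqrt(np.abs(l2 * l3)) + eps)
    S = np.sqrt(l1**2 + l2**2 + l3**2)
    v = (1 - np.exp(-Ra**2 / (2 * alpha**2))) * np.exp(-Rb**2 / (2 * beta**2)) \
        * (1 - np.exp(-S**2 / (2 * c**2)))
    # Bright tubular structures require strongly negative l2 and l3.
    v[(l2 > 0) | (l3 > 0)] = 0
    return v
```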