Advances in computer hardware and software have enabled the automated extraction of biomarkers from large-scale imaging studies by means of image processing pipelines. For large cohort studies, ample storage and computing resources are required: pipelines are typically executed in parallel on one or more high-performance computing (HPC) clusters. As processing is distributed, obtaining detailed progress and status information for large-scale experiments becomes cumbersome. Especially in a research-oriented environment, where image processing pipelines are often in an experimental stage, debugging is a crucial part of the development process and relies heavily on tight collaboration between pipeline developers and clinical researchers. Debugging a running pipeline is a challenging and time-consuming process for seasoned pipeline developers, and nearly impossible for clinical researchers: it often involves parsing complex logging systems and text files and requires specialized knowledge of the HPC environment. In this paper, we present the Pipeline Inspection and Monitoring web application (PIM). The goal of PIM is to make inspecting complex, long-running image processing pipelines more straightforward and less time-consuming, irrespective of the user's level of technical expertise and the workflow engine used. PIM provides an interactive, visualization-based web application to intuitively track progress, view pipeline structure, and debug running image processing pipelines. The level of detail is fully customizable, supporting a wide variety of tasks (e.g. quick inspection and thorough debugging) and thereby assisting both clinical researchers and pipeline developers in monitoring and debugging.
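To make the kind of progress tracking described above concrete, the sketch below aggregates per-node job states into the sort of summary a monitoring front-end could visualize. It is a minimal illustration only: the job records, node names, and status values are hypothetical and do not reflect PIM's actual data model or API.

```python
from collections import Counter

# Hypothetical job-status records, as a workflow engine might report them
# for a distributed pipeline run (illustrative only, not PIM's interface).
jobs = [
    {"node": "brain_extraction", "status": "finished"},
    {"node": "tissue_segmentation", "status": "running"},
    {"node": "hippocampus_segmentation", "status": "queued"},
    {"node": "hippocampus_segmentation", "status": "failed"},
]

def progress_summary(jobs):
    """Aggregate per-node job states into a progress overview."""
    per_node = {}
    for job in jobs:
        per_node.setdefault(job["node"], Counter())[job["status"]] += 1
    return per_node

for node, counts in progress_summary(jobs).items():
    print(node, dict(counts))
```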
Segmentation of brain structures in magnetic resonance images is an important task in neuroimage analysis. Several papers on this topic have shown the benefit of supervised classification based on local appearance features, often combined with atlas-based approaches. These methods require a representative annotated training set and therefore often do not perform well if the target image is acquired on a different scanner or with a different acquisition protocol than the training images. Assuming that the appearance of the brain is determined by the underlying brain tissue distribution and that brain tissue classification can be performed robustly for images obtained with different protocols, we propose to derive appearance features from brain-tissue density maps instead of directly from the MR images. We evaluated this approach on hippocampus segmentation in two sets of images acquired with substantially different imaging protocols and on different scanners. While a combination of conventional appearance features trained on data from a different scanner with multi-atlas segmentation performed poorly, with an average Dice overlap of 0.698, the local appearance model based on the new acquisition-independent features improved significantly (0.783) over atlas-based segmentation alone (0.728).
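As a rough illustration of the evaluation metric and the feature idea, the sketch below (assuming NumPy arrays for the binary segmentations and for the tissue-density maps; the patch-based feature layout and the GM/WM/CSF map names are our assumptions, not necessarily the paper's exact design) computes the Dice overlap and samples a local patch of tissue-density values as an appearance feature vector instead of raw MR intensities.

```python
import numpy as np

def dice_overlap(seg_a, seg_b):
    """Standard Dice coefficient between two binary segmentations."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both segmentations empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

def tissue_density_patch(tissue_maps, center, radius=2):
    """Illustrative acquisition-independent feature vector: a local patch of
    tissue-density values (e.g. GM/WM/CSF probability maps) around a voxel,
    concatenated across maps. Patch size and layout are assumptions."""
    sl = tuple(slice(c - radius, c + radius + 1) for c in center)
    return np.concatenate([m[sl].ravel() for m in tissue_maps])
```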