KEYWORDS: Video acceleration, Video, Detection and tracking algorithms, Video processing, Sensors, Analog electronics, Image processing, Visualization, Inspection, Digital imaging
An important task in film and video preservation is the quality assessment of the content to be archived or reused from the archive. Done manually, this is a tiring and time-consuming process, so it is highly desirable to automate it as far as possible. In this paper, we show how to port a previously proposed algorithm for the detection of severe analog and digital video distortions (termed "video breakup") efficiently to NVIDIA GPUs of the Fermi architecture using CUDA. By massively parallelizing the algorithm to exploit the hundreds of cores on a typical GPU, and through careful use of GPU features such as atomic functions, texture memory and shared memory, we achieve a speedup of roughly 10-15 when comparing the GPU implementation with a highly optimized, multi-threaded CPU implementation. Our GPU algorithm is thus able to analyze nine Full HD (1920 × 1080) video streams or 40 standard definition (720 × 576) video streams in real time on a single inexpensive NVIDIA GeForce GTX 480 GPU. Additionally, we present the AV-Inspector application for video quality analysis, into which the video breakup algorithm has been integrated.
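The two real-time throughput claims can be sanity-checked with a short back-of-envelope calculation; the sketch below assumes a 25 fps (PAL) frame rate, which the abstract does not state, and shows that both workloads land in the same range of roughly 0.4-0.5 gigapixels per second.

```python
# Back-of-envelope check of the claimed real-time throughput.
# The 25 fps frame rate is an assumption (PAL), not given in the abstract.
FPS = 25

def pixels_per_second(width, height, streams, fps=FPS):
    """Total pixel throughput for `streams` parallel video streams."""
    return width * height * streams * fps

full_hd = pixels_per_second(1920, 1080, 9)   # nine Full HD streams
sd = pixels_per_second(720, 576, 40)         # forty SD streams

print(f"Full HD workload: {full_hd / 1e6:.1f} Mpixel/s")  # 466.6 Mpixel/s
print(f"SD workload:      {sd / 1e6:.1f} Mpixel/s")       # 414.7 Mpixel/s
```

Since both figures are within about 10% of each other, the two claims are mutually consistent for a throughput-bound algorithm.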
KEYWORDS: Sensors, Data modeling, Surveillance, Video surveillance, Systems modeling, Video, Visualization, Cameras, Imaging systems, Surveillance systems
Metadata interoperability is crucial for various kinds of surveillance applications and systems, e.g. metadata mining in
multi-sensor environments, metadata exchange in networked camera systems or information fusion in multi-sensor and
multi-detector environments. Different metadata formats have been proposed to foster metadata interoperability, but they
show significant limitations. ViPER, CVML and MPEG Visual Surveillance MAF support only the visual modality,
CVML's frame based approach leads to inefficient representation, and MPEG-7's comprehensiveness handicaps its
efficient usage for a specific application. To overcome these limitations we propose the Surveillance Application
Metadata (SAM) model, capable of describing online and offline analysis results as a set of time lines containing events.
A set of sensors, detectors, recorded media items and object instances is described centrally and linked from the event
descriptions. The time lines can be related to a subset of sensors and detectors for any modality and different levels of
abstraction. Hierarchical classification schemes are used for many purposes, such as types of properties and their values, event types, object classes and coordinate systems, in order to allow application-specific adaptations without modifying the data model while ensuring the controlled use of terms. The model supports efficient representation of
dense spatio-temporal information such as object trajectories. SAM is not bound to a specific serialization but can be mapped to different existing formats, within the limitations imposed by the target format. SAM specifications and examples have been made available.
Digital film restoration and special-effects compositing increasingly require automatic procedures for movie re-graining: missing or inhomogeneous grain decreases perceived quality. For the purpose of grain synthesis, an existing texture synthesis algorithm has been evaluated and optimized. We show that this algorithm can produce synthetic grain that is perceptually similar to a given grain template, has high spatial and temporal variation, and can be applied to multi-spectral images. Furthermore, a re-graining application framework is proposed, which synthesizes artificial grain based on an input grain template and composites it with the original image content. Due to its modular approach, this framework supports manual as well as automatic re-graining applications. Two example applications are presented: one for re-graining an entire movie and one for fully automatic re-graining of image regions produced by restoration algorithms. The low computational cost of the proposed algorithms allows application in industrial-grade software.
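The compositing step described above can be sketched as adding a zero-mean grain field onto the image content. The following minimal Python example is an illustrative assumption, not the paper's actual operators: `synth_grain` stands in for the template-driven texture synthesis with plain Gaussian noise, and `composite_grain` uses a simple clamped additive model.

```python
import random

def synth_grain(width, height, sigma=8.0, seed=0):
    """Toy grain synthesis: i.i.d. zero-mean Gaussian noise.
    The paper instead uses a texture-synthesis algorithm driven
    by a real grain template; this is only a placeholder."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, sigma) for _ in range(width)]
            for _ in range(height)]

def composite_grain(image, grain, strength=1.0):
    """Composite a grain field onto an image (2-D lists of grey
    values), assuming a simple additive model clamped to [0, 255]."""
    return [[min(255, max(0, round(p + strength * g)))
             for p, g in zip(img_row, grain_row)]
            for img_row, grain_row in zip(image, grain)]

frame = [[128] * 8 for _ in range(6)]            # flat grey test frame
regrained = composite_grain(frame, synth_grain(8, 6))
```

The modular split into a synthesis function and a compositing function mirrors the framework's separation of grain generation from application to the original content.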
The application of the mean shift algorithm to color image segmentation was proposed in 1997 by Comaniciu and Meer. We apply mean shift color segmentation to image sequences as the first step of a moving object segmentation algorithm. Previous work has shown that it is well suited for this task because it provides better temporal stability of the segmentation result than other approaches; the drawback is higher computational cost. To speed up processing on image sequences, we exploit the fact that subsequent frames are similar and use the cluster centers of previous frames as initial estimates, which also enhances spatial segmentation continuity. In contrast to other implementations, we use the originally proposed CIE LUV color space to ensure high-quality segmentation results. We show that moderate quantization of the input data before conversion to CIE LUV has little influence on segmentation quality but results in a significant speedup. We also propose changes in the post-processing step to increase the temporal stability of border pixels. We perform an objective evaluation of the segmentation results to compare the original algorithm with our modified version, and show that our optimized algorithm reduces processing time and increases the temporal stability of the segmentation.
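The temporal-seeding idea can be illustrated with a minimal 1-D mean shift sketch: the modes found in frame t serve as starting points for frame t+1, so the procedure converges in a few iterations instead of starting from scratch. This is a simplified flat-kernel, grey-value sketch, not the paper's CIE LUV implementation; all names and values are illustrative.

```python
def mean_shift_mode(data, start, bandwidth=2.0, iters=30):
    """Shift `start` toward the nearest local density mode of 1-D
    `data`, using a flat kernel of radius `bandwidth`."""
    x = start
    for _ in range(iters):
        window = [v for v in data if abs(v - x) <= bandwidth]
        if not window:
            break
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-6:     # converged
            break
        x = new_x
    return x

# Frame t: two clusters of grey values, found from scratch.
frame_t = [10, 11, 12, 30, 31, 32]
centers_t = [mean_shift_mode(frame_t, s) for s in (10, 30)]

# Frame t+1 is similar; seeding with frame t's centers lets each
# mode search start already close to its target.
frame_t1 = [11, 12, 13, 31, 32, 33]
centers_t1 = [mean_shift_mode(frame_t1, c) for c in centers_t]
```

Because consecutive frames differ only slightly, the seeded searches track the slowly drifting cluster centers, which is exactly what also yields the improved temporal stability noted in the abstract.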
We present a case study of establishing a description infrastructure for an audiovisual content-analysis and retrieval system. The description infrastructure consists of an internal metadata model and an access tool for using it. Based on an analysis of requirements, we have selected, out of a set of candidates, MPEG-7 as the basis of our metadata model.
The openness and generality of MPEG-7 allow using it in a broad range of applications, but increase complexity and hinder interoperability. Profiling has been proposed as a solution, with the focus on selecting and constraining description tools. Semantic constraints are currently only described in textual form; conformance in terms of semantics can thus not be evaluated automatically, and mappings between different profiles can only be defined manually. As a solution, we propose an approach to formalize the semantic constraints of an MPEG-7 profile using a formal vocabulary expressed in OWL, which allows automated processing of semantic constraints.
We have defined the Detailed Audiovisual Profile as the profile to be used in our metadata model, and we show how some of the semantic constraints of this profile can be formulated using ontologies. To work practically with the metadata model, we have implemented an MPEG-7 library and a client/server document access infrastructure.
We propose a search & retrieval (S&R) tool that supports combining text search with content-based search for video and image content. This S&R system allows the formulation of complex queries in which content-based and text-based query elements are combined arbitrarily with logical operators. The system will be implemented as a client/server system, designed so that the client can be either a web application accessing the server over the Internet or a native client with local access to the server. The S&R tool is embedded into a system called MECiTV - Media Collaboration for iTV, within which a complete authoring environment for iTV content will be developed. The proposed S&R tool will enable iTV authors and content producers to search efficiently for already existing material in order to reduce the costs of iTV productions.
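The arbitrary combination of text-based and content-based query elements with logical operators can be sketched as evaluating a nested query tree over per-element result sets. The tree shape, operator names, and item ids below are illustrative assumptions about how such a combination could work, not the system's actual query language.

```python
def evaluate(query, universe):
    """Evaluate a nested ("AND"/"OR"/"NOT", operands...) query tree.
    Leaves are sets of matching item ids, i.e. the results of one
    text-based or content-based sub-query."""
    if isinstance(query, set):            # leaf: a sub-query result
        return query
    op, *args = query
    results = [evaluate(a, universe) for a in args]
    if op == "AND":
        out = results[0]
        for r in results[1:]:
            out &= r
        return out
    if op == "OR":
        out = set()
        for r in results:
            out |= r
        return out
    if op == "NOT":
        return universe - results[0]
    raise ValueError(f"unknown operator {op!r}")

universe = {1, 2, 3, 4, 5}                # all indexed media items
text_hits = {1, 2, 3}                     # items matching a text query
visual_hits = {2, 3, 4}                   # items matching a visual query
combined = evaluate(("AND", text_hits, ("NOT", visual_hits)), universe)
```

Representing each sub-query result as a set makes the logical combination independent of whether a leaf came from the text index or the content-based index, which matches the tool's goal of mixing both freely.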