This paper focuses on the design, implementation, and validation of asynchronous multimedia annotations for Web-based collaboration in educational and research settings. We explore two key questions: how useful are such annotations, and what purpose do they serve? How easy to use is our specific implementation of annotations? The context of our project is multimedia information usage and collaboration in the biological sciences. We have developed asynchronous annotations for HTML and image data. Our annotations can be used from any browser and require no downloads. They are stored in a central database, allowing search and asynchronous access by all registered users. An easy-to-use interface allows users to add, view, and search annotations. We also performed a usability study of text annotations to validate our implementation.
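As a rough illustration (not the actual Biomedia implementation), the following Python sketch shows the kind of central annotation store described above, with hypothetical table and function names: registered users add annotations against a target URL or image, and any user can search the shared store asynchronously.

# Minimal sketch of a central annotation store (hypothetical schema and
# function names; the actual Biomedia implementation is not shown here).
import sqlite3

def init_store(path="annotations.db"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS annotations (
                        id INTEGER PRIMARY KEY,
                        user TEXT,          -- registered user who added the note
                        target_url TEXT,    -- HTML page or image being annotated
                        region TEXT,        -- optional region within an image
                        body TEXT,          -- the annotation text itself
                        created TEXT DEFAULT CURRENT_TIMESTAMP)""")
    return conn

def add_annotation(conn, user, target_url, body, region=None):
    conn.execute("INSERT INTO annotations (user, target_url, region, body) "
                 "VALUES (?, ?, ?, ?)", (user, target_url, region, body))
    conn.commit()

def search_annotations(conn, keyword):
    # All registered users can search the shared store asynchronously.
    cur = conn.execute("SELECT user, target_url, body FROM annotations "
                       "WHERE body LIKE ?", (f"%{keyword}%",))
    return cur.fetchall()

if __name__ == "__main__":
    conn = init_store(":memory:")
    add_annotation(conn, "student1", "http://example.org/hh_pathway.html",
                   "Is this expression pattern consistent with figure 2?")
    print(search_annotations(conn, "expression"))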
The long-term goals of the recently started BioMedia project at SFSU are to provide multimedia information systems and applications for the research and education needs of several projects in the SFSU Biology Department. These applications involve a considerable amount of image and image-sequence data, in addition to traditional text, genomic, and experimental measurement data. Our systems will allow biology researchers and students to store, index, annotate, search, visualize, analyze, collaborate on, and share a large amount of heterogeneous biomedical data. Our initial focus in BioMedia is the creation of a collaborative WWW site for the Hedgehog gene pathway. The Hedgehog (Hh) protein superfamily constitutes a group of closely related secreted proteins that control many crucial processes during the embryogenesis of tissues. The overall goals of the Hh WWW site project are as follows: a) to provide a WWW site to be used by researchers and students studying the Hedgehog gene pathway and made available to the broader community, and b) to provide advanced and innovative functionality enabling easy usage and management, community-based content submission and updates, and asynchronous collaboration between researchers and students. In this paper we present the status and first results of building and researching the technologies necessary for this WWW site.
QBIC™ (Query By Image Content) is a set of technologies and associated software that allows a user to search, browse, and retrieve image, graphic, and video data from large on-line collections. This paper discusses current research directions of the QBIC project such as indexing for high-dimensional multimedia data, retrieval of gray level images, and storyboard generation suitable for video. It describes aspects of QBIC software including scripting tools, application interfaces, and available GUIs, and gives examples of applications and demonstration systems using it.
Advances in technologies for scanning, networking, and CD-ROM, lower prices for large disk storage, and acceptance of common image compression and file formats have contributed to an increase in the number, size, and uses of on-line image collections. New tools are needed to help users create, manage, and retrieve images from these collections. We are developing QBIC (query by image content), a prototype system that allows a user to create and query image databases in which the image content -- the colors, textures, shapes, and layout of images and the objects they contain -- is used as the basis of queries. This paper describes two sets of algorithms in QBIC. The first is a set of methods that allow 'query by color drawing,' a form of query in which a user draws an approximate color version of an image, and similar images are retrieved. These are automatic algorithms in the sense that no user action is necessary during database population. The second is a set of algorithms for semi-automatic identification of image objects during database population, improving the speed and usability of this manually intensive step. Once objects are outlined, detailed queries on their content properties can be made at query time.
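To illustrate the general idea behind query by color drawing (this is a generic sketch, not QBIC's published algorithm), the following Python code ranks stored images by the L1 distance between coarse RGB histograms of the user's drawing and of each database image; the bin count, working image size, and distance measure are arbitrary assumptions.

# Sketch of color-histogram matching for a drawn query image.
# Generic illustration only, not QBIC's actual algorithm.
import numpy as np
from PIL import Image

def color_histogram(path, bins_per_channel=4):
    """Coarse, normalized joint RGB histogram of an image file."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    pixels = np.asarray(img, dtype=int).reshape(-1, 3)
    # Quantize each channel into a few bins and count joint occurrences.
    q = (pixels // (256 // bins_per_channel)).clip(0, bins_per_channel - 1)
    index = (q[:, 0] * bins_per_channel + q[:, 1]) * bins_per_channel + q[:, 2]
    hist = np.bincount(index, minlength=bins_per_channel ** 3).astype(float)
    return hist / hist.sum()

def rank_by_color(query_drawing, database_paths):
    """Return database image paths sorted by L1 distance to the drawn query."""
    q = color_histogram(query_drawing)
    scored = [(np.abs(q - color_histogram(p)).sum(), p) for p in database_paths]
    return [p for _, p in sorted(scored)]

# Example: rank_by_color("sketch.png", ["img1.jpg", "img2.jpg"])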
IBM's Ultimedia Manager is a software product for management and retrieval of image data. The product includes both traditional database search and content-based search. Traditional database search allows images to be retrieved by text descriptors or business data such as price, date, and catalog number. Content-based search allows retrieval by similarity to a specified color, texture, shape, position, or any combination of these. The two can be combined, as in 'retrieve all images with the text "beach" in their description, and sort them in order of how much blue they contain.' Functions are also available for fast browsing and for database navigation. The two main components of Ultimedia Manager are a database population tool, which prepares images for query by identifying areas of interest and computing their features, and a query tool for doing retrievals. Application areas include stock photography, electronic libraries, retail, cataloging, and business graphics.
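The 'beach' example above amounts to a two-stage query: a conventional text filter followed by a content-based sort. The sketch below illustrates that combination with a hypothetical record layout and a crude "how much blue" measure; it is not Ultimedia Manager's actual API.

# Sketch of combining a text filter with a content-based sort,
# in the spirit of the 'beach' example above (hypothetical data model).
import numpy as np
from PIL import Image

def blue_fraction(path):
    """Fraction of pixels whose blue channel dominates red and green."""
    rgb = np.asarray(Image.open(path).convert("RGB").resize((64, 64)), dtype=int)
    blue_dominant = (rgb[..., 2] > rgb[..., 0]) & (rgb[..., 2] > rgb[..., 1])
    return float(blue_dominant.mean())

def query(records, keyword="beach"):
    """records: list of dicts with 'description' and 'path' keys (assumed layout)."""
    hits = [r for r in records if keyword in r["description"].lower()]
    return sorted(hits, key=lambda r: blue_fraction(r["path"]), reverse=True)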
Byron Dom, David Steele, Richard Krebs, David Kiehl, Patrick Saldanha, Eric Wong, John Moffitt, Dragutin Petkovic, John Herber, Lionel Kuhlmann, Scott Dunbar
This paper describes a system known as 'The Disaster Detector' for automatic inspection of the air-bearing surface of disk sliders (disk read/write heads). It inspects for certain types of defects that are global or systematic in the sense that, when they occur, they occur on every slider in a row or, in some cases, on every slider in the entire carrier. The inspection system and the associated image-analysis algorithms are described in detail. The system uses standard microscope optics, a color CCD camera, a computer-controlled stage, laser autofocus, a video digitizer, and a PC/AT (486-based).
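The row/carrier notion of a systematic defect can be made concrete as a small bookkeeping rule applied to per-slider detections: a defect type is row-systematic if it appears on every slider in a row, and carrier-systematic if it appears on every slider in the carrier. The sketch below (hypothetical input format; the image-analysis step that produces the detections is not shown) implements only that classification.

# Sketch of classifying detected defects as row- or carrier-systematic.
# Input format is hypothetical: detections[(row, col)] is the set of
# defect types found on the slider at that carrier position.
def classify_systematic(detections, n_rows, n_cols):
    all_types = set().union(*detections.values()) if detections else set()
    row_systematic, carrier_systematic = {}, set()
    for defect in all_types:
        rows_with_all = [
            r for r in range(n_rows)
            if all(defect in detections.get((r, c), set()) for c in range(n_cols))
        ]
        if len(rows_with_all) == n_rows:
            carrier_systematic.add(defect)   # on every slider in the carrier
        for r in rows_with_all:
            row_systematic.setdefault(defect, set()).add(r)  # on every slider in row r
    return row_systematic, carrier_systematic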
In the query by image content (QBIC) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, and shape of image objects and regions. Potential applications include medicine ('Give me other images that contain a tumor with a texture like this one'), photo-journalism ('Give me images that have blue at the top and red at the bottom'), and many others in art, fashion, cataloging, retailing, and industry. Key issues include derivation and computation of attributes of images and objects that provide useful query functionality, retrieval methods based on similarity as opposed to exact match, query by image example or user-drawn image, the user interfaces, query refinement and navigation, high-dimensional database indexing, and automatic and semi-automatic database population. We currently have a prototype system written in X/Motif and C running on an RS/6000 that allows a variety of queries, and a test database of over 1000 images and 1000 objects populated from commercially available photo clip-art images. In this paper we present the main algorithms for color, texture, shape, and sketch queries that we use, show example query results, and discuss future directions.
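Retrieval by similarity rather than exact match can be sketched generically as ranking precomputed feature vectors (color, texture, shape, and so on) by a weighted distance to the query's features; the distance and weighting below are placeholder choices, not QBIC's actual metrics, and feature extraction is assumed to have happened at database-population time.

# Generic similarity-ranking sketch over precomputed feature vectors.
# Feature extraction and the real QBIC distance functions are not shown.
import numpy as np

def rank_by_similarity(query_features, database, weights=None, k=10):
    """
    query_features: 1-D feature vector for the query image or object.
    database: list of (image_id, feature_vector) pairs, precomputed
              at database-population time.
    Returns the k most similar image ids (smallest weighted distance).
    """
    q = np.asarray(query_features, dtype=float)
    w = np.ones_like(q) if weights is None else np.asarray(weights, dtype=float)
    scored = []
    for image_id, feats in database:
        d = np.sqrt(np.sum(w * (q - np.asarray(feats, dtype=float)) ** 2))
        scored.append((d, image_id))
    return [image_id for _, image_id in sorted(scored)[:k]]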