In this paper, we investigate the reliability of a petabyte-scale storage system built from thousands of Object-Based
Storage Devices (OBSDs) and study mechanisms to protect against data loss when disk failures occur. We examine two
underlying redundancy mechanisms: 2-way mirroring and 3-way mirroring. To accelerate data reconstruction, Fast
Mirroring Copy is employed, in which the reconstructed objects are stored on different OBSDs throughout the system. A
SMART Reliability Mechanism for enhancing reliability in very large-scale storage systems is proposed. Results show
that our SMART Reliability Mechanism can utilize spare resources (including processing, network, and storage
resources) to improve the reliability of very large storage systems.
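The placement idea behind Fast Mirroring Copy can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the round-robin target selection, and the data layout are assumptions made for the example; the key point it shows is that each lost object's new replica goes to a different surviving OBSD, so reconstruction is parallelized across the system.

```python
def fast_mirroring_copy(failed: str, objects: dict, osds: list) -> dict:
    """Sketch of Fast Mirroring Copy: after OBSD `failed` dies, pick a
    distinct surviving OBSD for each lost object's new replica so that
    reconstruction traffic spreads across the whole system."""
    survivors = [o for o in osds if o != failed]
    plan = {}
    for i, (obj, replicas) in enumerate(sorted(objects.items())):
        if failed not in replicas:
            continue  # this object lost no replica
        # never place the new copy where a replica already lives
        candidates = [o for o in survivors if o not in replicas]
        plan[obj] = candidates[i % len(candidates)]  # round-robin spread
    return plan
```

With 2-way mirroring the same scheme applies; only the replica lists are shorter, so each failure puts more objects at risk until reconstruction completes.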
With the rapid development of mass storage, traditional single-protocol RAID is increasingly unable to satisfy the
varied demands of users. To keep down the investment in storage, we propose a multi-protocol RAID that utilizes
existing storage devices. The multi-protocol RAID achieves storage integration by managing disks with different
interfaces. This paper presents a framework for multi-protocol RAID and a prototype implementation of it: the
proposed multi-protocol approach can not only unify storage devices of different types, but also provide different
access channels (e.g., iSCSI, FC) to manage the heterogeneous RAID system, thus achieving the goal of centralized
management. Our functional tests validate the feasibility and flexibility of the proposed RAID system. The
comparison tests indicate that the multi-protocol RAID can attain even higher performance than single-protocol
RAID, especially in aggregate bandwidth.
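The unification idea can be sketched as a thin dispatch layer: one striped address space whose member disks are reached through different protocol drivers. Everything below is illustrative, not the paper's implementation: the class names, the RAID-0-style striping, and the string-tagged reads are assumptions made so the example stays self-contained.

```python
class Backend:
    """Hypothetical per-protocol driver; a real one would issue iSCSI or
    FC commands, here we just tag each request with its protocol."""
    def __init__(self, proto: str):
        self.proto = proto

    def read(self, stripe: int) -> str:
        return f"{self.proto}:read:{stripe}"

class MultiProtocolRAID:
    """Stripe one unified address space over disks reached via different
    protocols (RAID-0-style sketch of centralized management)."""
    def __init__(self, backends):
        self.backends = backends

    def read(self, stripe: int) -> str:
        # route each stripe to its backend regardless of the disk's interface
        return self.backends[stripe % len(self.backends)].read(stripe)
```

Because stripes alternate across protocol backends, reads that span several stripes proceed on the iSCSI and FC paths concurrently, which is one plausible reason the aggregate bandwidth can exceed that of a single-protocol array.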
The distribution of metadata is very important in mass storage systems. Many storage systems use subtree
partitioning or hash algorithms to distribute metadata among a metadata server (MDS) cluster. Although system
access performance is improved, scalability remains a notable problem in most of these algorithms. This paper
proposes a new directory hash (DH) algorithm. It treats the directory as the hash key, implements concentrated
storage of metadata, and adopts a dynamic load-balancing strategy. It improves the efficiency of metadata
distribution and access in mass storage systems by hashing on the directory and placing metadata together at
directory granularity. The DH algorithm solves the scalability problems of file hash algorithms, such as handling
changes to a directory's name or permissions and adding or removing an MDS from the cluster. It reduces the number
of additional requests and the scale of each data migration in scaling operations, enhancing the scalability of
mass storage systems remarkably.
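The core placement rule of directory hashing can be sketched in a few lines. This is a simplification of the DH idea, not the paper's algorithm: the plain modulo mapping stands in for the dynamic load-balancing strategy, and the function name is invented for the example. The point it shows is that hashing the parent directory (rather than the full file path) colocates all of a directory's entries on one MDS, so renaming a file or listing a directory touches a single server.

```python
import hashlib

def mds_for(path: str, mds_count: int) -> int:
    """Directory-hash sketch: metadata placement is keyed on the parent
    directory, so every entry of one directory lands on the same MDS."""
    directory = path.rsplit("/", 1)[0] or "/"
    digest = hashlib.sha1(directory.encode()).hexdigest()
    return int(digest, 16) % mds_count
```

Under a file-level hash, renaming a directory would remap every file beneath it; under this directory-level key, only the affected directory's bucket moves, which is the reduced migration scale the abstract describes.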
Structured overlay networks based on DHTs provide a decentralized, self-organizing substrate for building large
distributed systems such as file sharing and data storage. However, in most of these systems, many problems remain
to be solved regarding system scalability, network proximity, and so on. In this paper, we present a novel routing
protocol called PBHC. By combining a hierarchical DHT algorithm with a data proximity mechanism, PBHC minimizes
inter-cluster access traffic and boosts the local access ratio in heterogeneous network environments. Simulation
results show that PBHC can significantly improve the routing performance and scalability of P2P storage systems.
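The two-level lookup common to hierarchical DHTs can be sketched as follows. This is a generic illustration of the idea, not PBHC itself: the function signature, the cluster/gateway names, and the flat routing table are all assumptions made for the example. It shows why keys homed in the local cluster resolve without crossing cluster boundaries, which is what keeps the local access ratio high.

```python
def route(key_cluster: str, key_id: str, local_cluster: str,
          local_table: dict, gateways: dict) -> tuple:
    """Two-level DHT lookup sketch: resolve inside the local cluster when
    the key belongs to it; otherwise hand off to that cluster's gateway."""
    if key_cluster == local_cluster:
        # intra-cluster hop: the local routing table resolves the key directly
        return ("intra", local_table[key_id])
    # inter-cluster hop: only gateway nodes carry cross-cluster state
    return ("inter", gateways[key_cluster])
```

Concentrating cross-cluster state on gateway nodes is the usual trade-off here: ordinary nodes keep small routing tables, at the price of gateways seeing all inter-cluster traffic.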
The conventional Finite-Difference Beam Propagation Method (FD-BPM) is modified according to the more accurate
Helmholtz equation, and a new algorithm is derived. By applying both the new and the old algorithms to a slab
waveguide and computing the parameters that measure the method's precision and computation time, we show that the
new algorithm improves accuracy without degrading time performance. Finally, we compute the transmission modes of
an AWG with the new method to demonstrate the practical value of the modified algorithm.
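For context, the standard derivation that the abstract's modification revisits can be stated briefly; the equations below are the textbook 2-D scalar forms (with a time convention of $e^{i\omega t}$ assumed here), not the paper's specific scheme. Conventional FD-BPM starts from the scalar Helmholtz equation

$$\frac{\partial^2 E}{\partial x^2} + \frac{\partial^2 E}{\partial z^2} + k_0^2\, n^2(x,z)\, E = 0,$$

substitutes the slowly varying envelope $E(x,z) = \varphi(x,z)\, e^{-i k_0 n_0 z}$ with reference index $n_0$, and drops the $\partial^2 \varphi / \partial z^2$ term, giving the paraxial propagation equation

$$2 i k_0 n_0 \frac{\partial \varphi}{\partial z} = \frac{\partial^2 \varphi}{\partial x^2} + k_0^2 \left( n^2 - n_0^2 \right) \varphi.$$

Schemes that retain the second $z$-derivative stay closer to the full Helmholtz equation and remain accurate for wider propagation angles, which is the general direction such modifications take.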
In this paper, we present a new method by using 2-D discrete multiwavelet transform in image denoising. The developments in wavelet theory have given rise to the wavelet thresholding method, for extracting a signal from noisy data. The method of signal denoising via wavelet thresholding was popularized. Multiwavelets have recently been introduced and they offer simultaneous orthogonality, symmetry and short support. This property makes multiwavelets more suitable for various image processing applications, especially denoising. It is based on thresholding of multiwavelet coefficients arising from the standard scalar orthogonal wavelet transform. It takes into account the covariance structure of the transform. Denoising is images via thresholding of the multiwavelet coefficients result from preprocessing and the discrete multiwavelet transform can be carried out by threating the output in this paper. The form of the threshold is carefully formulated and is the key to the excellent results obtained in the extensive numerical simulations of image denoising. The performances of multiwavelets are compared with those of scalar wavelets. Simulations reveal that multiwavelet based image denoising schemes outperform wavelet based method both subjectively and objectively.
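The thresholding step at the heart of such schemes can be illustrated with the classical soft-threshold rule. This is the generic operator, not the paper's carefully formulated threshold (which additionally accounts for the transform's covariance structure): coefficients with magnitude below the threshold are set to zero and the rest are shrunk toward zero.

```python
def soft_threshold(coeffs, t):
    """Soft-threshold a list of transform coefficients: zero out values
    with magnitude below t, shrink the rest toward zero by t."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]
```

In a full pipeline this operator is applied to each detail subband of the (multi)wavelet decomposition, after which the inverse transform reconstructs the denoised image.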