Silicon-On-Insulator (SOI) CMOS is one of the most advanced and promising technologies for the design of monolithic pixel detectors. The insulator layer buried inside the silicon crystal allows the sensor matrix and the readout electronics to be integrated on a single wafer. Moreover, the separation of the electronics from the substrate also improves the performance of SOI circuits: the parasitic capacitances to the substrate are significantly reduced, so the electronic systems are faster and consume much less power. The authors of this presentation are members of the international SOIPIX collaboration, which is developing SOI pixel detectors in the 200 nm Lapis fully-depleted, low-leakage SOI CMOS process. This work describes the advantages of SOI technology and presents possibilities for pixel detector design in SOI CMOS. In particular, preliminary results of the Cracow chip are presented.
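As a rough illustration of why a smaller parasitic capacitance yields both speed and power gains, the first-order dynamic-power relation P = C·V^2·f can be evaluated for two assumed load values (a minimal Python sketch; the capacitances, supply voltage, and frequency below are illustrative assumptions, not parameters of the Lapis process):

    # First-order CMOS dynamic power: P = C * Vdd^2 * f.
    # All numbers are illustrative assumptions, not measured values
    # for the Lapis 200 nm SOI process.
    V_dd = 1.8                                   # assumed supply voltage [V]
    f = 100e6                                    # assumed switching frequency [Hz]
    loads = {"bulk CMOS": 10e-15, "SOI": 3e-15}  # assumed load capacitances [F]

    for name, C in loads.items():
        power_uW = C * V_dd**2 * f * 1e6
        print(f"{name}: P = {power_uW:.2f} uW")

    # With a ~3x smaller capacitance, dynamic power drops by the same
    # factor, and the RC gate delay shrinks proportionally as well.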
KEYWORDS: Physics, Data modeling, Sensors, Monte Carlo methods, Computer simulations, Particle physics, Polishing, Particles, Data storage, Distributed computing
Several new experiments in particle physics are being prepared by large international consortia. During their lifetimes they will generate an unprecedented amount of data, measured in petabytes (10^15 B), which has to be made accessible to large communities of researchers distributed all over the world; this is one of the biggest challenges facing modern experimental physics. A possible solution is provided by the concept of a distributed computing Grid, made feasible by recent significant improvements in networking. Based on the results of several pilot Grid projects on both sides of the Atlantic, the LHC Computing Grid (LCG) project was launched in 2001, with the goal of creating a large prototype of a worldwide computing Grid for physics. Thousands of processors and a number of mass storage devices, belonging to different institutions in many countries, have been connected effectively into one computing system, controlled by the "virtual organizations" of the experiments. This paper presents estimates of the computing requirements of future experiments, an overview of Grid technology, and progress on the construction of a computing Grid for particle physics and its use by the experiments.
Several new experiments in particle physics are being prepared by large international consortia. They will generate data at a rate of 100-200 MB/s over a number of years, which will result in many petabytes (10^15 B) of data. This data will have to be made accessible to a large, international community of researchers, and as such it calls for a new approach to data analysis. Estimates of the computing needs of future experiments, as well as scenarios for overcoming potential difficulties, are presented, based on studies conducted by the LHC consortia and Grid computing projects.
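As a quick sanity check on these figures, the quoted acquisition rates can be converted into a yearly data volume (a minimal sketch; only the rates come from the abstract, and the one-year window is an assumed unit of running time):

    # Convert the quoted acquisition rates into a yearly data volume.
    SECONDS_PER_YEAR = 3.15e7          # approx. seconds in one year
    for rate_mb_s in (100, 200):       # rates quoted above [MB/s]
        bytes_per_year = rate_mb_s * 1e6 * SECONDS_PER_YEAR
        print(f"{rate_mb_s} MB/s -> {bytes_per_year / 1e15:.1f} PB/year")

At 100-200 MB/s this gives roughly 3-6 PB per year, so a multi-year run does indeed accumulate many petabytes.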
Conference Committee Involvement (1)
Photonics Applications in Industry and Research IV