Access Restriction Open
Author Barberis, D.
Source CERN Document Server
Content type Text
File Format PDF
Date Created 2011-11-11
Language English
Subject Domain (in DDC) Natural sciences & mathematics ♦ Physics ♦ Modern physics ♦ Technology ♦ Engineering & allied operations ♦ Applied physics
Subject Keyword Detectors and Experimental Techniques ♦ Distributed Computing
Abstract The ATLAS experiment has been taking data steadily since Autumn 2009 and has so far collected over 5 fb⁻¹ of data (several petabytes of raw and reconstructed data per year of data-taking). Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the World-wide LHC Computing Grid and the tools produced by the ATLAS Distributed Computing project. In addition to event data, ATLAS produces a wealth of information on detector status, luminosity, calibrations, alignments, and data-processing conditions. This information is stored in relational databases, online and offline, and made transparently available to analysers of ATLAS data world-wide through an infrastructure consisting of distributed database replicas and web servers that exploit caching technologies. This paper reports on the experience of using this distributed computing infrastructure with real data and in real time, on the evolution of the computing model driven by this experience, and on the system performance during the first two years of operation.
Description Presented at: 2011 Europhysics Conference on High Energy Physics, Grenoble, France, 21 - 27 Jul 2011
Learning Resource Type Article
Publisher Date 2011-01-01
Rights License Preprint: (License: CC-BY-4.0)
Organization The ATLAS collaboration
Page Count 4