CERN Accelerating science

Published Articles

Latest additions:
2017-12-13 06:06
Federated data storage system prototype for LHC experiments and data intensive science / Kiryanov, A (St. Petersburg, INP ; Kurchatov Inst., Moscow) ; Klimentov, A (Kurchatov Inst., Moscow ; Brookhaven) ; Krasnopevtsev, D (Kurchatov Inst., Moscow ; Moscow Phys. Eng. Inst.) ; Ryabinkin, E (Kurchatov Inst., Moscow) ; Zarochentsev, A (Kurchatov Inst., Moscow ; St. Petersburg State U.)
The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim to unite their resources for future productive work, while also providing an opportunity to support large physics collaborations. [...]
2017 - 8 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 062016 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.062016

An efficient, modular and simple tape archiving solution for LHC Run-3 / Murray, S (CERN) ; Bahyl, V (CERN) ; Cancio, G (CERN) ; Cano, E (CERN) ; Kotlyar, V (Serpukhov, IHEP) ; Kruse, D F (CERN) ; Leduc, J (CERN)
The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. Physics Run 3 will start in 2021 and will introduce two major challenges for which the tape archive software must be evolved. [...]
2017 - 8 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 062013 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.062013

DPM evolution: a disk operations management engine for DPM / Manzi, A (CERN) ; Furano, F (CERN) ; Keeble, O (CERN) ; Bitzes, G (CERN)
The DPM (Disk Pool Manager) project is the most widely deployed solution for storage of large data repositories on Grid sites, and is completing the most important upgrade in its history, with the aim of bringing important new features, improved performance, and easier long-term maintainability. Work has been done to make the so-called “legacy stack” optional and substitute it with an advanced implementation based on FastCGI and RESTful technologies. [...]
2017 - 7 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 062011 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.062011

Tape SCSI monitoring and encryption at CERN / Laskaridis, Stefanos (CERN) ; Bahyl, V (CERN) ; Cano, E (CERN) ; Leduc, J (CERN) ; Murray, S (CERN) ; Cancio, G (CERN) ; Kruse, D (CERN)
CERN currently manages the largest data archive in the HEP domain: over 180 PB of custodial data is archived across 7 enterprise tape libraries containing more than 25,000 tapes and using over 100 tape drives. Archival storage at this scale requires a leading-edge monitoring infrastructure that acquires live and lifelong metrics from the hardware in order to assess and proactively identify potential drive- and media-level issues. [...]
2017 - 8 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 062005 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.062005

HNSciCloud - Overview and technical challenges / Gasthuber, Martin (DESY) ; Meinhard, Helge (CERN) ; Jones, Robert (CERN)
HEP is only one of many sciences with sharply increasing compute requirements that cannot be met by profiting from Moore’s law alone. Commercial clouds potentially allow for realising larger economies of scale. [...]
2017 - 5 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 052040 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.052040

Use of DAGMan in CRAB3 to improve the splitting of CMS user jobs / Wolf, M (Notre Dame U.) ; Mascheroni, M (Fermilab) ; Woodard, A (Notre Dame U.) ; Belforte, S (INFN, Trieste) ; Bockelman, B (Nebraska U.) ; Hernandez, J M (Madrid, CIEMAT) ; Vaandering, E (Fermilab)
CRAB3 is a workload management tool used by CMS physicists to analyze data acquired by the Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC). Research in high energy physics often requires the analysis of large collections of files, referred to as datasets. [...]
2017 - 7 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 052035 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.052035
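The job splitting summarized in this abstract can be illustrated with a small sketch. This is a hypothetical model, not the actual CRAB3/DAGMan code: it greedily packs a dataset's files into jobs so that each job processes roughly a target number of events.

```python
def split_dataset(files, target_events_per_job):
    """Greedily pack (filename, n_events) pairs into jobs so that each
    job processes roughly target_events_per_job events.

    Hypothetical sketch for illustration; the real CRAB3 splitting
    logic runs inside DAGMan and differs in detail.
    """
    jobs, current, current_events = [], [], 0
    for name, n_events in files:
        # Start a new job when adding this file would exceed the target.
        if current and current_events + n_events > target_events_per_job:
            jobs.append(current)
            current, current_events = [], 0
        current.append(name)
        current_events += n_events
    if current:
        jobs.append(current)
    return jobs
```

For example, four files with 500, 700, 300, and 900 events split at a target of 1000 events per job into three jobs: the first file alone, the next two together, and the last alone.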

Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits / Balcas, J (Caltech) ; Bockelman, B (Nebraska U.) ; Hufnagel, D (Fermilab) ; Hurtado Anampa, K (Notre Dame U.) ; Aftab Khan, F (NCP, Islamabad) ; Larson, K (Fermilab) ; Letts, J (UC, San Diego) ; Marra da Silva, J (Sao Paulo, IFT) ; Mascheroni, M (Fermilab) ; Mason, D (Fermilab) et al.
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. [...]
2017 - 7 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 052031 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.052031

CMS readiness for multi-core workload scheduling / Perez-Calero Yzquierdo, A (PIC, Bellaterra ; Madrid, CIEMAT) ; Balcas, J (Caltech) ; Hernandez, J (Madrid, CIEMAT) ; Aftab Khan, F (NCP, Islamabad) ; Letts, J (UC, San Diego) ; Mason, D (Fermilab) ; Verguilov, V (CLMI, Sofia)
In the present run of the LHC, CMS data reconstruction and simulation algorithms benefit greatly from being executed as multiple threads running on several processor cores. The complexity of Run 2 events requires parallelization of the code to reduce the memory-per-core footprint that constrains serial execution programs, thus optimizing the exploitation of present multi-core processor architectures. [...]
2017 - 8 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 052030 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.052030
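The memory-per-core argument in this abstract can be sketched with a toy model. The figures below (node size, per-job base memory, per-thread memory) are illustrative assumptions, not CMS numbers: a multi-threaded job pays the base memory cost once and shares it across its threads, so fewer gigabytes are needed per occupied core.

```python
def jobs_per_node(cores, ram_gb, base_gb, per_thread_gb, threads):
    """How many jobs of a given thread count fit on a node, assuming
    each job needs base_gb of shared memory plus per_thread_gb per
    thread.  Illustrative model only; not a CMS scheduling formula.
    """
    by_cores = cores // threads                            # CPU constraint
    by_ram = int(ram_gb // (base_gb + per_thread_gb * threads))  # RAM constraint
    return min(by_cores, by_ram)
```

On a hypothetical 16-core, 32 GB node with 1 GB of shared state and 1 GB per thread, single-threaded jobs fill all 16 cores only by consuming all 32 GB, while two 8-thread jobs occupy the same 16 cores using 18 GB, leaving memory headroom.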

LHCb Dockerized build environment / Clemencic, M (CERN) ; Belin, M (ISIMA, Aubiere) ; Closier, J (CERN) ; Couturier, B (CERN)
Used as lightweight virtual machines or as enhanced chroot environments, Linux containers, and in particular the Docker abstraction over them, are increasingly popular in the virtualization community. The LHCb Core Software team decided to investigate how to use Docker containers to provide stable and reliable build environments for the different supported platforms, including obsolete ones that cannot be installed on modern hardware, to be used in integration builds, releases, and by any developer. [...]
2017 - 3 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 052029 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.052029
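The containerized-build idea can be sketched as follows. The image name, mount point, and build script below are placeholders, and this is not the actual LHCb setup: the point is only that a build for an obsolete platform runs inside a container image of that platform, with the source tree mounted from the host.

```python
import shlex

def docker_build_command(image, src_dir, build_script):
    """Assemble a `docker run` invocation that executes a build script
    inside a container, with the host source tree mounted as the
    working directory.  Image and script names are hypothetical.
    """
    cmd = [
        "docker", "run", "--rm",          # throwaway container per build
        "-v", f"{src_dir}:/workspace",    # mount the source tree
        "-w", "/workspace",               # run the build from inside it
        image,
        "bash", build_script,
    ]
    return shlex.join(cmd)
```

For example, an image of a legacy platform could be invoked as `docker_build_command("slc5-build:latest", "/home/user/proj", "build.sh")`, yielding a single shell-safe command string; the same host can thus build for platforms whose toolchains no longer install natively.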

Elastic extension of a local analysis facility on external clouds for the LHC experiments / Ciaschini, V (INFN, CNAF) ; Codispoti, G (CERN) ; Rinaldi, L (INFN, Bologna ; U. Bologna, DIFA) ; Aiftimiei, D C (INFN, CNAF) ; Bonacorsi, D (INFN, Bologna ; U. Bologna, DIFA) ; Calligola, P (INFN, Bologna) ; Dal Pra, S (INFN, CNAF) ; De Girolamo, D (INFN, CNAF) ; Di Maria, R (Imperial Coll., London) ; Grandi, C (INFN, Bologna) et al.
The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. Usage peaks, as already observed in Run 1, may however create large backlogs, delaying the completion of data reconstruction and ultimately the availability of the data for physics analysis. [...]
2017 - 5 p. - Published in: J. Phys.: Conf. Ser. 898 (2017) 052024 Fulltext: PDF
In: 22nd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2016, San Francisco, USA, 10-14 Oct 2016, pp.052024
