
IT Posters

Latest entries:
2013-11-01
16:05
CORAL and COOL during the LHC long shutdown
Reference: Poster-2013-399
Created: 2013. -1 p
Creator(s): Valassi, A; Clemencic, M; Dykstra, D; Goyal, N; Salnikov, A [...]

CORAL and COOL are two software packages used by the LHC experiments for managing detector conditions and other types of data using relational database technologies. They have been developed and maintained within the LCG Persistency Framework, a common project of the CERN IT department with ATLAS, CMS and LHCb. This presentation reports on the status of CORAL and COOL at the time of CHEP2013, covering the new features and enhancements in both packages, as well as the changes and improvements in the software process infrastructure. It also reviews the usage of the software in the experiments and the outlook for ongoing and future activities during the LHC long shutdown (LS1) and beyond.
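
The data model behind COOL is conditions data versioned by interval of validity (IOV): each stored object carries the validity range for which its payload applies. As a purely conceptual sketch of that idea, not the actual CORAL/COOL C++ API (all class and function names below are hypothetical), an IOV lookup might look like this:

```python
from dataclasses import dataclass
from bisect import bisect_right

@dataclass
class ConditionsObject:
    since: int      # start of validity (e.g. run number or timestamp)
    until: int      # end of validity (exclusive)
    payload: dict   # conditions payload, e.g. calibration constants

class ConditionsFolder:
    """Toy stand-in for a conditions folder: one channel, non-overlapping IOVs."""
    def __init__(self, objects):
        self.objects = sorted(objects, key=lambda o: o.since)
        self._starts = [o.since for o in self.objects]

    def find_object(self, validity_point):
        """Return the object whose IOV contains the given point, or None."""
        i = bisect_right(self._starts, validity_point) - 1
        if i >= 0 and validity_point < self.objects[i].until:
            return self.objects[i]
        return None

folder = ConditionsFolder([
    ConditionsObject(0, 100, {"hv": 1500.0}),
    ConditionsObject(100, 250, {"hv": 1520.0}),
])
print(folder.find_object(120).payload)   # {'hv': 1520.0}
```

In CORAL and COOL such folders are mapped onto relational tables and accessed through a technology-independent C++ layer; the sketch only illustrates the lookup semantics.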

Presented at the 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2013), Amsterdam, Netherlands, 14 - 18 Oct 2013
© CERN Geneva

Access to file

2013-10-30
20:12
Helix Nebula and CERN: A Symbiotic approach to exploiting commercial clouds
Reference: Poster-2013-392
Created: 2013. -1 p
Creator(s): Barreiro Megino, Fernando Harald; Jones, Robert; Kucharczyk, Katarzyna; Medrano Llamas, Ramón; van der Ster, Daniel

The recent paradigm shift toward cloud computing in IT, and general interest in "Big Data" in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R&D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of interesting commercial offerings from global IT enterprises have been performed by various physics labs, further technical, security, sociological, and legal issues need to be addressed before their large-scale adoption by the research community can be envisaged. Helix Nebula - the Science Cloud is an initiative that explores these questions by joining the forces of three European research institutes (CERN, ESA and EMBL) with leading European commercial IT enterprises. The goals of Helix Nebula are to establish a cloud platform federating multiple commercial cloud providers, along with new business models, which can sustain the cloud marketplace for years to come. This contribution will summarize the participation of CERN in Helix Nebula. We will explain CERN's flagship use-case and the model used to integrate several cloud providers with an LHC experiment's workload management system. During the first proof of concept, this project contributed over 40,000 CPU-days of Monte Carlo production throughput to the ATLAS experiment with minimal manpower required. CERN's experience, together with that of ESA and EMBL, is providing great insight into the cloud computing industry and has highlighted several challenges that are being tackled in order to ease the export of scientific workloads to cloud environments.
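
The federation model sketched above, several commercial providers serving workers to a single experiment workload management system, can be caricatured as a thin adapter layer per provider. The following is a hypothetical illustration only; the provider names and the interface are invented and do not reflect the actual ATLAS integration:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Minimal adapter that each federated commercial provider would implement."""
    @abstractmethod
    def start_worker(self, image: str) -> str: ...
    @abstractmethod
    def free_slots(self) -> int: ...

class ToyProvider(CloudProvider):
    def __init__(self, name, capacity):
        self.name, self.capacity, self.running = name, capacity, 0
    def start_worker(self, image):
        self.running += 1
        return f"{self.name}-vm-{self.running}"
    def free_slots(self):
        return self.capacity - self.running

def dispatch_workers(n_workers, providers, image="mc-production-worker"):
    """Spread Monte Carlo production workers over providers with spare capacity."""
    started = []
    for provider in sorted(providers, key=lambda p: p.free_slots(), reverse=True):
        while n_workers > 0 and provider.free_slots() > 0:
            started.append(provider.start_worker(image))
            n_workers -= 1
    return started

print(dispatch_workers(4, [ToyProvider("providerA", 3), ToyProvider("providerB", 2)]))
```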

Presented at the 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2013), Amsterdam, Netherlands, 14 - 18 Oct 2013
© CERN Geneva

Access to file

2013-10-30
14:56
Handling Worldwide LHC Computing Grid Critical Service Incidents : The infrastructure and experience behind nearly 5 years of GGUS ALARMs
Reference: Poster-2013-391
Keywords: WLCG, GGUS, ALARM, ticket, Storage, CASTOR, Batch, LSF, CERN, KIT, fail-safe, Tier0, Tier1, workflow, incident
Created: 2013. -1 p
Creator(s): Dimou, M; Dres, H; Dulov, O; Grein, G

In the Worldwide LHC Computing Grid (WLCG) project the Tier centres are of paramount importance for storing and accessing experiment data and for running the batch jobs necessary for experiment production activities. Although Tier2 sites provide a significant fraction of the resources, unavailability of resources at the Tier0 or the Tier1s can seriously harm not only WLCG Operations but also the experiments' workflows and the storage of LHC data, which are very expensive to reproduce. This is why availability requirements for these sites are high and are committed to in the WLCG Memorandum of Understanding (MoU). In this talk we describe the workflow of GGUS ALARMs, the only 24/7 mechanism available to LHC experiment experts for reporting problems with their Critical Services to the Tier0 or the Tier1s. Conclusions and experience gained from the detailed drills performed for each such ALARM over the last four years are presented, together with the shift over time in the types of problems encountered. The physical infrastructure put in place to achieve GGUS 24/7 availability is summarised.
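
At its core the ALARM workflow routes a ticket from an experiment expert to the on-call contact of the affected Tier0/Tier1 site and records every notification step, which is what the periodic drills later review. A minimal, entirely hypothetical sketch of such routing (site names, services and fields are invented for illustration and are not the GGUS implementation):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical mapping of (site, critical service) to an on-call contact.
ONCALL = {
    ("CERN", "storage"): "cern-storage-oncall@example.org",
    ("KIT", "batch"): "kit-batch-oncall@example.org",
}

@dataclass
class AlarmTicket:
    site: str
    service: str
    description: str
    history: list = field(default_factory=list)

    def log(self, event):
        self.history.append((datetime.now(timezone.utc).isoformat(), event))

def raise_alarm(ticket):
    """Notify the site's on-call team, or escalate if no contact is registered."""
    contact = ONCALL.get((ticket.site, ticket.service))
    if contact is None:
        ticket.log("no on-call contact found, escalated to central operators")
        return "escalated"
    ticket.log(f"notified {contact}")
    return "notified"

t = AlarmTicket("CERN", "storage", "transfers failing for an LHC experiment")
print(raise_alarm(t), t.history)
```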

Presented at the 20th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2013), Amsterdam, Netherlands, 14 - 18 Oct 2013
© CERN Geneva

Access to files

2011-01-27
14:30
Open up your mind! - CERN openlab Student Programme Poster 2011
Reference: Poster-2011-166
Keywords: openlab, student programme, poster, 2011
Created: 2011. -1 p
Creator(s): Gaillard, Melissa


Who should apply?

How to apply?

Stipend

Students projects

Other activities

© CERN Geneva

Fulltext

2011-01-27
14:21
Open up your mind! - CERN openlab Student Programme Leaflet 2011
Reference: Poster-2011-165
Keywords: openlab, student programme, leaflet, 2011
Created: 2011. -2 p
Creator(s): Gaillard, Melissa


Who should apply?

How to apply?

Stipend

Students projects

Other activities


© CERN Geneva

Fulltext

2010-02-18
09:52
CERN GSM monitoring system
Reference: Poster-2010-141
Keywords: GSM, monitoring system, GPRS, SMS, leaky feeder cable
Created: 2009. -1 p
Creator(s): Ghabrous Larrea, C

As a result of the tremendous development of GSM services over the last years, the number of related services used by organizations has drastically increased. Therefore, monitoring GSM services is becoming a business-critical issue in order to be able to react appropriately in case of incident. In order to provide GSM coverage in all the CERN underground facilities, more than 50 km of leaky feeder cable have been deployed. This infrastructure is also used to propagate VHF radio signals for the CERN fire brigade. Even though CERN’s mobile operator monitors the network, it cannot guarantee the availability of GSM services, and certainly not of the VHF services, whose signals are carried by the leaky feeder cable. A global monitoring system has therefore become critical to CERN. In addition, monitoring this infrastructure allows its behaviour over time to be characterized, especially with LHC operation. Given that commercial solutions were not yet mature, CERN developed a system based on GSM probes and an application server which collects data from them via the CERN GPRS network. By placing probes in strategic locations and comparing measurements between probes, it is now possible to determine whether there is a GSM or VHF problem on one leaky feeder cable segment. This system has been successfully working for several months in underground facilities, allowing CERN to inform GSM users and the fire brigade in case of incidents.
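
The localisation idea, comparing readings from probes placed along the leaky feeder cable to flag the segment where coverage dropped, can be illustrated with a small sketch. The thresholds and probe names below are hypothetical and not those of the actual CERN system:

```python
# Hypothetical probe readings (probe id, GSM signal level in dBm) along one cable.
readings = [("probe-1", -65), ("probe-2", -67), ("probe-3", -95), ("probe-4", -96)]

SIGNAL_FLOOR_DBM = -90     # below this, a probe is considered to have lost coverage
MAX_NEIGHBOUR_GAP_DB = 15  # a larger drop between neighbours hints at a cable fault

def faulty_segments(readings):
    """Return cable segments whose downstream probe lost signal abruptly."""
    segments = []
    for (up_id, up_lvl), (down_id, down_lvl) in zip(readings, readings[1:]):
        if down_lvl < SIGNAL_FLOOR_DBM and (up_lvl - down_lvl) > MAX_NEIGHBOUR_GAP_DB:
            segments.append((up_id, down_id))
    return segments

print(faulty_segments(readings))   # [('probe-2', 'probe-3')]
```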

© CERN Geneva

Fulltext

2009-07-02
10:52
Experience with Server Self Service Center (S3C)
Reference: Poster-2009-120
Keywords: CERN, CHEP, virtualization, windows, hyper-v, vm, server, virtual
Created: 2009. -1 p
Creator(s): Sucik, J

CERN has successful experience with running the Server Self Service Center (S3C) for virtual server provisioning, which is based on Microsoft® Virtual Server 2005. With the introduction of Windows Server 2008 and its built-in hypervisor-based virtualization (Hyper-V), there are new possibilities for the expansion of the current service. This paper describes the architecture of the redesigned virtual Server Self Service based on Hyper-V, which provides dynamically scalable virtualized resources on demand, and outlines the possible implications for the future use of virtual machines at CERN.
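
The self-service flow amounts to: a user submits a request, the service checks quotas and policies, and the hypervisor back-end is asked to create the machine. The sketch below is a hypothetical illustration of that flow; the real S3C drives Hyper-V through its management interfaces, which are not modelled here:

```python
from dataclasses import dataclass

@dataclass
class VmRequest:
    owner: str
    cpus: int
    memory_gb: int

class SelfServiceCenter:
    """Toy model of a self-service VM provisioning front-end."""
    def __init__(self, cpu_quota_per_user=8):
        self.cpu_quota = cpu_quota_per_user
        self.allocated = {}  # owner -> CPUs already granted

    def provision(self, req):
        used = self.allocated.get(req.owner, 0)
        if used + req.cpus > self.cpu_quota:
            raise RuntimeError(f"quota exceeded for {req.owner}")
        self.allocated[req.owner] = used + req.cpus
        # In the real service this step would call the hypervisor back-end.
        return f"vm-{req.owner}-{self.allocated[req.owner]}"

s3c = SelfServiceCenter()
print(s3c.provision(VmRequest("someuser", cpus=4, memory_gb=8)))
```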

Related links:
CHEP2009
© CERN Geneva

Fulltext

2009-05-15
16:08
CERN automatic audio-conference service
Reference: Poster-2009-113
Keywords: Telephony, Audioconference, SIP
Created: 2009. -1 p
Creator(s): Sierra Moral, R

Scientists from all over the world need to collaborate with CERN on a daily basis. They must be able to communicate effectively on their joint projects at any time; as a result, telephone conferences have become indispensable and widely used. Managed by six operators, CERN already handles more than 20,000 hours and 5,700 audio-conferences per year. However, the traditional telephone-based audio-conference system needed to be modernized in three ways: firstly, to provide the participants with more autonomy in the organization of their conferences; secondly, to eliminate the constraints of manual intervention by operators; and thirdly, to integrate the audio-conferences into a collaborative working framework. The large number, and hence cost, of the conferences prohibited externalization, and so the CERN telecommunications team drew up a specification to implement a new system. It was decided to use a new commercial collaborative audio-conference solution based on the SIP protocol. The system was tested as the first European pilot and several improvements (such as billing, security, redundancy...) were implemented based on CERN’s recommendations. The new automatic conference system has been operational since the second half of 2006. It is very popular with users, and the number of conferences has doubled in the past two years.

Related links:
17th International Conference on Computing in High Energy and Nuclear Physics
© CERN Geneva

Fulltext

2009-04-22
15:01
INSPIRE: a new scientific information system for HEP
Reference: Poster-2009-112
Keywords: INSPIRE, CDS-Invenio, SPIRES
Created: 2009. -1 p
Creator(s): Ivanov, R; Raae, L

The status of high-energy physics (HEP) information systems has been jointly analyzed by the libraries of CERN, DESY, Fermilab and SLAC. As a result, the four laboratories have started the INSPIRE project – a new platform built by moving the successful SPIRES features and content, curated at DESY, Fermilab and SLAC, into the open-source CDS Invenio digital library software that was developed at CERN. INSPIRE will integrate present acquisition workflows and databases to host the entire body of the HEP literature (about one million records), aiming to become the reference HEP scientific information platform worldwide. It will provide users with fast access to full-text journal articles and preprints, as well as material such as conference slides and multimedia. INSPIRE will empower scientists with new tools to discover and access the results most relevant to their research, enable novel text- and data-mining applications, and deploy new metrics to assess the impact of articles and authors. In addition, it will introduce the "Web 2.0" paradigm of user-enriched content in the domain of sciences, with community-based approaches to scientific publishing. INSPIRE represents a natural evolution of scholarly communication built on successful community-based information systems, and it provides a vision for information management in other fields of science. Inspired by the needs of HEP, we hope that the INSPIRE project will be inspiring for other communities.

Related links:
17th International Conference on Computing in High Energy and Nuclear Physics (CHEP)
© CERN Geneva

Fulltext

2008-05-13
14:14
Software management of the LHC Detector Control Systems
Reference: Poster-2008-014
Keywords: Control Systems, SCADA, Software management
Created: 2007. -1 p
Creator(s): Varela, F

The control systems of each of the four Large Hadron Collider (LHC) experiments will contain of the order of 150 computers running the back-end applications. These applications will have to be maintained and eventually upgraded during the lifetime of the experiments, ~20 years. This paper presents the centralized software management strategy adopted by the Joint COntrols Project (JCOP) [1], which is based on a central database that holds the overall system configuration. The approach facilitates the integration of different parts of a control system and provides versioning of its various software components. The information stored in the configuration database can eventually be used to restore a computer in the event of failure.
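
The recovery use case mentioned at the end, restoring a control PC from the centrally held configuration, boils down to looking up which software components and versions are registered for that node. The table layout and names below are hypothetical, not the actual JCOP schema:

```python
import sqlite3

# Hypothetical miniature of the central configuration database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE components (node TEXT, component TEXT, version TEXT)")
db.executemany("INSERT INTO components VALUES (?, ?, ?)", [
    ("pc-dcs-01", "scada-project", "3.8"),
    ("pc-dcs-01", "fw-component-x", "1.2"),
    ("pc-dcs-02", "scada-project", "3.8"),
])

def restore_plan(node):
    """List the component versions to reinstall on a node after a failure."""
    rows = db.execute(
        "SELECT component, version FROM components WHERE node = ?", (node,))
    return list(rows)

print(restore_plan("pc-dcs-01"))   # [('scada-project', '3.8'), ('fw-component-x', '1.2')]
```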

Related links:
ICALEPCS 2007
© CERN Geneva

Fulltext
