# Posters

Latest additions:
2016-11-04
16:15
Automatised Data Quality Monitoring of the LHCb Vertex Locator

2016-11-04
15:58
LHCbDIRAC as Apache Mesos microservices
 Reference: Poster-2016-565 Created: 2016. 1 p. Creator(s): Couturier, Ben. The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VMs) or bare-metal hardware. Due to the increased workload, high availability is becoming more and more important for the LHCbDIRAC services, and the current installation model is showing its limitations. Apache Mesos is a cluster manager which abstracts heterogeneous physical resources, over which various tasks can be distributed thanks to so-called "frameworks". The Marathon framework is suitable for long-running tasks such as the DIRAC services, while the Chronos framework meets the needs of cron-like tasks such as the DIRAC agents. The service discovery tool Consul, combined with HAProxy, exposes the running containers to the outside world while hiding their dynamic placement. Such an architecture would bring greater flexibility to the deployment of LHCbDIRAC services, allowing for easier deployment, maintenance, and on-demand scaling (LHCbDIRAC relies on 138 services and 116 agents). Higher reliability would also be easier to achieve, as clustering is part of the toolset, which allows constraints on the location of the services. This paper describes the investigations carried out to package the LHCbDIRAC and DIRAC components into Docker containers and orchestrate them using the previously described set of tools. Related links: CHEP 2016. © CERN Geneva.
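As a rough illustration of the orchestration model described above, a long-running DIRAC service could be described to Marathon as a JSON application definition. The sketch below builds one in Python; the service name, image, resource figures, and Consul label are invented for illustration and do not reflect the actual LHCbDIRAC deployment:

```python
import json

def marathon_app(name, image, port, instances=2):
    """Build a minimal Marathon (1.x-era) application definition for a
    long-running containerised service. Field names follow the Marathon
    app JSON schema; all concrete values here are hypothetical."""
    return {
        "id": f"/lhcbdirac/{name}",
        "instances": instances,   # Marathon restarts failed tasks to keep this count
        "cpus": 0.5,
        "mem": 512,
        "container": {
            "type": "DOCKER",
            "docker": {
                "image": image,
                "network": "BRIDGE",
                # hostPort 0 lets Mesos pick a free port; Consul + HAProxy
                # then hide this dynamic placement from clients
                "portMappings": [{"containerPort": port, "hostPort": 0}],
            },
        },
        "labels": {"consul.service": name},  # hypothetical discovery label
    }

app = marathon_app("configuration-service", "lhcbdirac/service:latest", 9135)
print(json.dumps(app, indent=2))
```

In this scheme, scaling a service on demand reduces to changing `instances`, and HAProxy routes to whichever hosts Mesos happens to place the containers on.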

2016-11-04
15:16
The LHCb Grid Simulation
 Reference: Poster-2016-564 Created: 2016. 1 p. Creator(s): Baranov, Alexander. Access to the LHCb Grid is based on the LHCbDirac system. It provides access to data and computational resources to researchers in different geographical locations. The Grid has a hierarchical topology with multiple sites distributed over the world. The sites differ from each other in their number of CPUs, amount of disk storage, and connection bandwidth. These parameters are essential to the Grid's operation. Moreover, the job-scheduling and data-distribution strategies have a great impact on Grid performance. However, it is hard to choose appropriate algorithms and strategies, as they need a lot of time to be tested on the real Grid. In this study, we describe the LHCb Grid simulator. The simulator reproduces the LHCb Grid structure with its sites, including their number of CPUs, amount of disk storage, and connection bandwidth. We demonstrate how well the simulator reproduces the Grid's operation, and show its advantages and limitations. We show how well the simulator reproduces job scheduling and network anomalies, and consider methods for their detection and resolution. In addition, we compare different algorithms for job scheduling and different data-distribution strategies. Related links: CHEP 2016.
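The kind of simulation described above can be reduced to a toy sketch: sites with a fixed number of CPUs, a stream of jobs, and a scheduling strategy whose choice affects the overall completion time. The Python below is a minimal greedy-scheduling illustration only (site names and capacities are invented), not the actual LHCb Grid simulator:

```python
import heapq

def simulate(sites, jobs):
    """Toy grid-scheduling sketch: `sites` is a list of (name, n_cpus),
    `jobs` a list of CPU-time durations. Each job goes to the CPU that
    becomes free earliest (greedy strategy). Returns the makespan."""
    # One heap entry per CPU: (time this CPU becomes free, site name)
    cpus = [(0.0, name) for name, n in sites for _ in range(n)]
    heapq.heapify(cpus)
    for duration in jobs:
        free_at, site = heapq.heappop(cpus)      # earliest-free CPU
        heapq.heappush(cpus, (free_at + duration, site))
    return max(t for t, _ in cpus)

# Hypothetical topology: 4 CPUs at one site, 2 at another
sites = [("CERN", 4), ("GRIDKA", 2)]
print(simulate(sites, [3.0] * 6))   # 6 equal jobs on 6 CPUs
```

Comparing strategies then amounts to swapping the dispatch rule (earliest-free CPU, round-robin, bandwidth-aware, ...) and comparing makespans; a realistic simulator adds disk storage and network transfer times on top.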

2016-11-04
14:09
Real time analysis with the upgraded LHCb trigger in Run-III

2016-11-04
13:47
MCBooster: a tool for MC generation for massively parallel platforms
 Reference: Poster-2016-562 Created: 2016. 1 p. Creator(s): Alves Junior, Antonio Augusto. MCBooster is a header-only, C++11-compliant library for the generation of large samples of phase-space Monte Carlo events on massively parallel platforms. It was released on GitHub in the spring of 2016. The library's core algorithms implement the Raubold-Lynch method; they are able to generate the full kinematics of decays with up to nine particles in the final state. The library supports the generation of sequential decays as well as the parallel evaluation of arbitrary functions over the generated events. The output of MCBooster agrees completely with popular and well-tested software packages such as GENBOD (W515 from CERNLIB) and TGenPhaseSpace from the ROOT framework. MCBooster is developed on top of the Thrust library and runs on Linux systems. It deploys transparently on NVIDIA CUDA-enabled GPUs as well as multicore CPUs. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments. Related links: CHEP 2016.
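To give a flavour of the kinematics such generators produce (this is not MCBooster's C++ API, just a minimal Python sketch of the two-body building block that n-body methods like Raubold-Lynch chain together), the daughter momentum in the parent rest frame follows from the triangle (Källén) function, and isotropic angles complete the event:

```python
import math, random

def two_body_p(M, m1, m2):
    """Daughter momentum magnitude in the rest frame of a particle of
    mass M decaying to masses m1 and m2 (Kallen triangle function)."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
    return math.sqrt(lam) / (2.0 * M)

def decay_event(M, m1, m2, rng=random):
    """One isotropic two-body phase-space event in the parent rest frame;
    returns the two daughter four-momenta as (E, px, py, pz)."""
    p = two_body_p(M, m1, m2)
    cos_t = rng.uniform(-1.0, 1.0)          # flat in cos(theta) -> isotropic
    phi = rng.uniform(0.0, 2.0 * math.pi)
    sin_t = math.sqrt(1.0 - cos_t**2)
    px = p * sin_t * math.cos(phi)
    py = p * sin_t * math.sin(phi)
    pz = p * cos_t
    e1 = math.sqrt(p**2 + m1**2)
    e2 = math.sqrt(p**2 + m2**2)
    return (e1, px, py, pz), (e2, -px, -py, -pz)   # back to back

# Example: D0 -> K- pi+ with PDG masses in GeV
print(round(two_body_p(1.8648, 0.4937, 0.1396), 3))  # ~0.861 GeV
```

For decays with more particles, the Raubold-Lynch method draws intermediate invariant masses and applies this two-body step recursively, weighting each event accordingly; MCBooster parallelises that loop over events on the GPU.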

2016-11-04
10:26
Monitoring the LHCb data quality system
 Reference: Poster-2016-561 Created: 2016. 1 p. Creator(s): Baranov, Alexander. Monitoring the quality of the data (DQM) is crucial in a high-energy physics experiment to ensure the correct functioning of the apparatus during data taking. DQM at LHCb is carried out in two phases. The first is performed on-site, in real time, using unprocessed data directly from the LHCb detector, while the second, also performed on-site, requires the reconstruction of the data selected by the LHCb trigger system and occurs with some delay. For the Run II data taking the LHCb collaboration has re-engineered the DQM protocols and the DQM graphical interface, moving the latter to a web-based monitoring system called Monet, thus allowing researchers to perform the second phase off-site. In order to support the operator's task, Monet is also equipped with an automated, fully configurable alarm system, allowing its use not only for DQM purposes, but also to track and assess the quality of LHCb software and simulation. Related links: CHEP 2016.
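A "fully configurable alarm system" of the kind mentioned above can be pictured as per-histogram checks with per-histogram thresholds. The sketch below is purely illustrative (the comparison rule, histogram name, and threshold values are invented, not Monet's actual configuration):

```python
def check_histogram(name, entries, reference, thresholds):
    """Toy DQM alarm check: compare a histogram's entry count to a
    reference run and grade the relative deviation. Thresholds come
    from a per-histogram configuration dict."""
    deviation = abs(entries - reference) / reference
    if deviation > thresholds["alarm"]:
        return "ALARM"
    if deviation > thresholds["warning"]:
        return "WARNING"
    return "OK"

# Hypothetical configuration: warn above 5% deviation, alarm above 20%
cfg = {"warning": 0.05, "alarm": 0.20}
print(check_histogram("velo_clusters", 9200, 10000, cfg))  # 8% off -> WARNING
```

Because the thresholds and reference data are configuration rather than code, the same machinery can be pointed at simulation output or software-release validation histograms, which is what makes the system reusable beyond live data taking.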

2016-10-28
14:14
Silicon telescope for prototype sensor characterisation using particle beam and cosmic rays
 Reference: Poster-2016-560 Created: 2016. 1 p. Creator(s): Fu, Jinlin. We present the design and the performance of a silicon strip telescope that we have built and recently used as a reference tracking system for prototype sensor characterisation. The telescope was operated on beam at the CERN SPS and also using cosmic rays in the laboratory. We will describe the data acquisition system, based on a custom electronic board that we have developed, and the online monitoring system used to control the quality of the data in real time. Related links: TWEPP 2016.

2016-10-28
08:18
A New Readout Electronics for the LHCb Muon Detector Upgrade
 Reference: Poster-2016-559 Created: 2016. 1 p. Creator(s): Cadeddu, Sandro. The 2018/2019 upgrade of the LHCb Muon System foresees a 40 MHz readout scheme and requires the development of a new Off Detector Electronics (nODE) board that will be based on the nSYNC, a radiation-tolerant custom ASIC developed in UMC 130 nm technology. Each nODE board has 192 input channels processed by 4 nSYNCs. The nSYNC is equipped with fully digital TDCs and implements all the functionalities required for the readout: bunch-crossing alignment, data zero suppression, and time measurements. Optical interfaces, based on GBT and Versatile Link components, are used to communicate with the DAQ, TFC, and ECS systems. Related links: TWEPP 2016.
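Of the readout functions listed above, zero suppression is the easiest to picture: instead of shipping the full 192-channel hit map every bunch crossing, only the addresses of fired channels are transmitted. The Python below is a conceptual sketch only; the real nSYNC data format and channel encoding are not reproduced here:

```python
def zero_suppress(channels):
    """Sketch of readout zero suppression: reduce a per-channel hit map
    to the list of fired-channel addresses. The 192-channel width matches
    one nODE board; the encoding itself is illustrative."""
    return [i for i, hit in enumerate(channels) if hit]

hits = [0] * 192          # one bunch crossing, mostly empty
hits[7] = hits[45] = 1    # two fired channels
print(zero_suppress(hits))  # [7, 45]
```

At typical occupancies this reduces the event size by orders of magnitude, which is what makes a triggerless 40 MHz readout bandwidth-feasible.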

2016-10-25
10:20
Development and test of the CO2 evaporative cooling system for the LHCb UT Detector
 Reference: Poster-2016-557 Created: 2016. 1 p. Creator(s): Nieswand, Simon. At the Large Hadron Collider (LHC) at CERN, scientists from around the world are using complex detector systems to test the predictions of the Standard Model and to search for indications of new physics. One of these detectors is the LHCb experiment, which was specifically designed for the study of heavy hadrons containing bottom and charm quarks (heavy-flavour physics). To deal with the increased beam energy and instantaneous luminosity of the LHC after Long Shutdown 2 in 2018/19, several subsystems of the LHCb detector have to be exchanged and upgraded. For this purpose, a new tracking system that will replace the so-called Inner and Outer Tracker of the current detector is currently being developed and built. The basis of this new tracker is 2.5 m long scintillating fibres (250 $\mu$m diameter) in which light is induced by passing charged particles. The fibres are arranged in six-layered fibre mats which are read out with the help of silicon photomultipliers at the edge of the tracker's acceptance. The finished tracker will have a spatial resolution below 100 $\mu$m and will cover an area larger than 360 m$^2$. To produce the required total of 1100 fibre mats before the beginning of the second Long Shutdown, mass production must be set up at several locations. To assure the quality of the fibre mats, they are subjected to various tests during production. One of these is a check of the integrity of the fibre matrix inside the mats, which is used to look for irregularities and defects. Furthermore, different properties such as light yield, attenuation length, spatial resolution, and detection efficiency are measured with the help of another setup as well as beam tests carried out at CERN. In this talk, the various studies of the fibre mats are explained in detail and first results are presented. 
Related links: 14th Topical Seminar on Innovative Particle and Radiation Detectors.
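One of the quality measurements mentioned above, the attenuation length, is commonly extracted by exciting the fibre at several positions and fitting the light yield to an exponential. The sketch below illustrates that fit with invented data points (the actual SciFi QA procedure and numbers are not reproduced here):

```python
import math

def fit_attenuation(positions, signals):
    """Estimate attenuation length L from light-yield measurements along
    a fibre, assuming I(x) = I0 * exp(-x / L). Straight-line least
    squares in log space: log I = log I0 - x / L."""
    xs, ys = positions, [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope  # attenuation length, same units as x

# Synthetic data generated with L = 3.5 m, I0 = 100 (arbitrary units)
xs = [0.5, 1.0, 1.5, 2.0]
ys = [100 * math.exp(-x / 3.5) for x in xs]
print(round(fit_attenuation(xs, ys), 2))  # recovers 3.5
```

Light yield follows from the fitted intercept in the same way, so one scan along the mat yields both QA figures at once.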