Posters

Recent additions:
2016-11-04
16:15
Automatised Data Quality Monitoring of the LHCb Vertex Locator
Reference: Poster-2016-566
Created: 2016. - 1 p.
Creator(s): Szumlak, Tomasz

The LHCb Vertex Locator (VELO) is a silicon strip detector operating at just 8 mm from the LHC beams. Its 172,000 strips are read out at a frequency of 1 MHz and processed by off-detector FPGAs, followed by a PC cluster that reduces the event rate to about 10 kHz. During the second run of the LHC, which lasts from 2015 until 2018, the detector performance will undergo continued change due to radiation damage effects. This necessitates detailed monitoring of the data quality to avoid adverse effects on the physics analysis performance. The VELO monitoring infrastructure has been redesigned compared to the first run of the LHC, when it was based on manual checks. The new system is built around an automatic analysis framework, which monitors the performance of new data as well as long-term trends and flags issues whenever they arise. An unbiased subset of the detector data is processed about once per hour by monitoring algorithms. The analysis framework then analyses the plots produced by these algorithms. One of its tasks is to perform custom comparisons between the newly processed data and that from reference runs. A single figure of merit for the current VELO data quality is computed from a tree-like structure, where the value of each node is computed from the values of its child branches. The comparisons and the combination of their outputs are configurable through steering files and are applied dynamically. Configurable thresholds determine when the data quality is considered insufficient and an alarm is raised. The most likely scenario in which this analysis would identify an issue is that the parameters of the readout electronics are no longer optimal and require retuning. The data of the plots are reduced further, e.g. by evaluating averages, and these quantities are input to long-term trending. Trending is used to detect slow variations of quantities that are not detectable by comparing two nearby runs; such gradual change is what is expected from radiation damage effects. It is essential to detect these changes early so that measures, e.g. adjustments of the operating voltage, can be taken to prevent any impact on the quality of high-level quantities and thus on physics analyses. The plots as well as the analysis results and trends are made available through graphical user interfaces (GUIs). One runs locally on the LHCb computing cluster; the other provides a web interface for remote data quality assessment. The latter operates a server-side queuing system for worker nodes that retrieve the data and pass it on to the client for display. Both GUIs are driven by a single configuration that determines the choice and arrangement of plots and trends and ensures a common look and feel. The infrastructure underpinning the web GUI is also used for other monitoring applications of the LHCb experiment.
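The tree-like figure of merit described above lends itself to a compact illustration. The sketch below is not the LHCb framework; node names, scores, and combination rules are invented, and the leaf scores stand in for the histogram comparisons the abstract mentions.

```python
# Minimal sketch of a tree-like data-quality figure of merit: each node's
# value is combined from its children, and an alarm is raised when the root
# score falls below a configurable threshold. All names/values are invented.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class DQNode:
    name: str
    score: Optional[float] = None          # leaf: comparison result in [0, 1]
    children: List["DQNode"] = field(default_factory=list)
    combine: Callable[[List[float]], float] = lambda s: sum(s) / len(s)

    def evaluate(self) -> float:
        if not self.children:              # leaf node
            return self.score if self.score is not None else 1.0
        return self.combine([c.evaluate() for c in self.children])

# Hypothetical steering-file-like configuration:
root = DQNode("VELO", children=[
    DQNode("pedestals", score=0.98),
    DQNode("noise", combine=min, children=[   # worst child dominates
        DQNode("noise_A_side", score=0.95),
        DQNode("noise_C_side", score=0.70),   # e.g. a degraded sensor
    ]),
])

THRESHOLD = 0.9                               # configurable alarm threshold
quality = root.evaluate()
print(f"figure of merit: {quality:.3f}")      # 0.840 for these inputs
if quality < THRESHOLD:
    print("ALARM: data quality considered insufficient")
```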

Related links:
CHEP 2016
© CERN Geneva

2016-11-04
15:58
LHCbDIRAC as Apache Mesos microservices
Reference: Poster-2016-565
Created: 2016. - 1 p.
Creator(s): Couturier, Ben

The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VMs) or bare-metal hardware. Due to the increasing workload, high availability is becoming more and more important for the LHCbDIRAC services, and the current installation model is showing its limitations. Apache Mesos is a cluster manager that abstracts heterogeneous physical resources, over which various tasks can be distributed by means of so-called "frameworks". The Marathon framework is suitable for long-running tasks such as the DIRAC services, while the Chronos framework meets the needs of cron-like tasks such as the DIRAC agents. A combination of the service discovery tool Consul with HAProxy makes it possible to expose the running containers to the outside world while hiding their dynamic placement. Such an architecture would bring greater flexibility to the deployment of LHCbDIRAC services, allowing easier deployment, maintenance, and on-demand scaling of services (LHCbDIRAC relies on 138 services and 116 agents). Higher reliability would also be easier to achieve, as clustering is part of the toolset, which allows constraints on the location of the services. This paper describes the investigations carried out to package the LHCbDIRAC and DIRAC components into Docker containers and orchestrate them using the set of tools described above.
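To make the Marathon part concrete, here is a minimal sketch of how a long-running, Dockerised service could be submitted to Marathon's REST API. The endpoint, image name, service id, and resource figures are hypothetical placeholders, not values from the paper.

```python
# Sketch: register a containerised long-running service with Marathon.
# Marathon keeps the requested number of instances running, which is what
# makes it a fit for DIRAC-like services. All concrete values are invented.
import json
import urllib.request

app_definition = {
    "id": "/lhcbdirac/example-service",           # hypothetical service id
    "instances": 2,                               # two replicas for availability
    "cpus": 1.0,
    "mem": 2048,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "example/lhcbdirac-service:latest",  # hypothetical image
            "network": "BRIDGE",
            # hostPort 0 lets Mesos pick a free port; Consul + HAProxy would
            # then hide this dynamic placement from clients.
            "portMappings": [{"containerPort": 9135, "hostPort": 0}],
        },
    },
}

req = urllib.request.Request(
    "http://marathon.example.org:8080/v2/apps",   # placeholder Marathon master
    data=json.dumps(app_definition).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

A cron-like DIRAC agent would instead be described by a Chronos job with a schedule field, but the container and resource sections look much the same.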

Related links:
CHEP 2016
© CERN Geneva

2016-11-04
15:16
The LHCb Grid Simulation
Reference: Poster-2016-564
Created: 2016. - 1 p.
Creator(s): Baranov, Alexander

Access to the LHCb Grid is based on the LHCbDIRAC system. It provides access to data and computational resources for researchers at different geographical locations. The Grid has a hierarchical topology with multiple sites distributed around the world. The sites differ from each other in their number of CPUs, amount of disk storage, and connection bandwidth. These parameters are essential to the operation of the Grid. Moreover, the job scheduling and data distribution strategies have a great impact on Grid performance. However, it is hard to choose appropriate algorithms and strategies, as testing them on the real Grid takes a lot of time. In this study, we describe the LHCb Grid simulator. The simulator reproduces the structure of the LHCb Grid, with its sites and their numbers of CPUs, amounts of disk storage, and connection bandwidths. We demonstrate how well the simulator reproduces the operation of the Grid, and show its advantages and limitations. We also show how well the simulator reproduces job scheduling and network anomalies, and consider methods for their detection and resolution. In addition, we compare different algorithms for job scheduling and different data distribution strategies.
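The core idea of such a simulator, comparing scheduling policies on a model of sites with finite CPU slots, can be sketched as a small discrete-event simulation. This is not the LHCb simulator itself; the site names, slot counts, and job durations below are invented for illustration.

```python
# Sketch of a discrete-event grid simulation: sites with a fixed number of
# CPU slots process a stream of jobs, so scheduling policies can be compared
# by total makespan. All concrete numbers are invented.
import heapq
import random

random.seed(42)
sites = {"SITE_A": 8, "SITE_B": 4, "SITE_C": 2}   # site -> free CPU slots
waiting = [random.expovariate(1 / 3600) for _ in range(50)]  # durations [s]
events = []                                        # (finish_time, site) heap
clock = 0.0

def pick_site():
    """Scheduling policy under test: send the job where most slots are free."""
    free = {s: n for s, n in sites.items() if n > 0}
    return max(free, key=free.get) if free else None

while waiting or events:
    site = pick_site()
    if waiting and site is not None:               # dispatch one job now
        sites[site] -= 1
        heapq.heappush(events, (clock + waiting.pop(), site))
    else:                                          # advance to next completion
        clock, site = heapq.heappop(events)
        sites[site] += 1

print(f"all jobs finished after {clock / 3600:.1f} hours")
```

Swapping in a different pick_site function (random choice, bandwidth-aware, and so on) is then enough to compare strategies under identical workloads.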

Related links:
CHEP 2016
© CERN Geneva

2016-11-04
14:09
Real time analysis with the upgraded LHCb trigger in Run-III
Reference: Poster-2016-563
Created: 2016. - 1 p.
Creator(s): Szumlak, Tomasz

The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1 MHz, the rate at which the entire detector is read out. In a second level, implemented in a farm of around 20k parallel-processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC Long Shutdown II (2018-2019). In this upgrade, a purely software-based trigger system is being developed, and it will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also see a factor of 5 increase in instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1 MHz readout bottleneck, combined with the higher instantaneous luminosity. Many charm hadron signals can be recorded at up to 50 times higher rates. LHCb is implementing a new paradigm in the form of real-time data analysis, in which abundant signals are recorded in a reduced event format that can be fed directly to physics analyses. These data do not need any further offline event reconstruction, which allows a larger fraction of the grid computing resources to be devoted to Monte Carlo productions. We discuss how this real-time analysis model is absolutely critical to the LHCb upgrade, and how it will evolve during Run-II.
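The scale of the challenge can be made concrete with the two numbers quoted above, 30 MHz of input and a farm of order 20k cores. The per-event time budget that follows is back-of-envelope arithmetic, not a figure from the abstract, and assumes the farm size stays of the same order.

```python
# Back-of-envelope: per-core time budget for a purely software trigger.
# 30 MHz input is from the abstract; reusing the ~20k-core farm size for the
# upgrade era is an assumption made purely for illustration.
input_rate_hz = 30e6      # bunch crossings with inelastic collisions
cores = 20_000            # order of the Run-II HLT farm

budget_ms = cores / input_rate_hz * 1e3
print(f"average time budget: {budget_ms:.2f} ms per event per core")
# -> ~0.67 ms: full reconstruction and selection must fit, on average, in
# well under a millisecond of CPU time, which is why throughput dominates
# the design of the upgraded trigger.
```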

Related links:
CHEP 2016
© CERN Geneva

2016-11-04
13:47
MCBooster: a tool for MC generation for massively parallel platforms
Reference: Poster-2016-562
Created: 2016. - 1 p.
Creator(s): Alves Junior, Antonio Augusto

MCBooster is a header-only, C++11-compliant library for the generation of large samples of phase-space Monte Carlo events on massively parallel platforms. It was released on GitHub in the spring of 2016. The library's core algorithms implement the Raubold-Lynch method; they are able to generate the full kinematics of decays with up to nine particles in the final state. The library supports the generation of sequential decays as well as the parallel evaluation of arbitrary functions over the generated events. The output of MCBooster agrees fully with that of popular and well-tested software packages such as GENBOD (W515 from CERNLIB) and TGenPhaseSpace from the ROOT framework. MCBooster is developed on top of the Thrust library and runs on Linux systems. It deploys transparently on NVIDIA CUDA-enabled GPUs as well as multicore CPUs. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
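For readers unfamiliar with the Raubold-Lynch method, the sketch below illustrates its core step in plain Python (MCBooster itself is C++/Thrust; this is not its API). The method draws ordered intermediate invariant masses and computes the event weight as a product of two-body breakup momenta, the scheme behind GENBOD and TGenPhaseSpace.

```python
# Illustrative Raubold-Lynch phase-space weight for an n-body decay.
# The example masses at the bottom are invented.
import math
import random

def two_body_momentum(m, m1, m2):
    """Breakup momentum of m -> m1 m2 in the rest frame of m."""
    return math.sqrt((m**2 - (m1 + m2)**2) * (m**2 - (m1 - m2)**2)) / (2 * m)

def phase_space_weight(parent_mass, masses):
    """Weight of one phase-space event for parent -> n daughters."""
    n = len(masses)
    kinetic = parent_mass - sum(masses)      # energy available to the decay
    # n-2 ordered uniforms define the intermediate masses M_2 .. M_{n-1};
    # M_1 is the first daughter mass and M_n the parent mass.
    r = sorted(random.random() for _ in range(n - 2))
    inv_masses = [masses[0]]
    for i in range(1, n):
        fraction = 1.0 if i == n - 1 else r[i - 1]
        inv_masses.append(sum(masses[: i + 1]) + fraction * kinetic)
    # The weight is the product of successive two-body breakup momenta.
    weight = 1.0
    for i in range(1, n):
        weight *= two_body_momentum(inv_masses[i], inv_masses[i - 1], masses[i])
    return weight

random.seed(7)
# e.g. a five-body decay, masses in GeV (values invented):
print(phase_space_weight(5.279, [0.494, 0.140, 0.140, 0.140, 0.140]))
```

Generating the full four-momenta then amounts to performing the successive two-body decays with random angles and boosting the daughters, which is the part MCBooster parallelises over events.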

Related links:
CHEP 2016
© CERN Geneva

2016-11-04
10:26
Monitoring the LHCb data quality system
Reference: Poster-2016-561
Created: 2016. - 1 p.
Creator(s): Baranov, Alexander

Monitoring the quality of the data (DQM) is crucial in a high-energy physics experiment to ensure the correct functioning of the apparatus during data taking. DQM at LHCb is carried out in two phases. The first is performed on-site, in real time, using unprocessed data directly from the LHCb detector, while the second, also performed on-site, requires the reconstruction of the data selected by the LHCb trigger system and occurs with some delay. For the Run II data taking, the LHCb collaboration has re-engineered the DQM protocols and the DQM graphical interface, moving the latter to a web-based monitoring system called Monet, thus allowing researchers to perform the second phase off-site. In order to support the operator's task, Monet is also equipped with an automated, fully configurable alarm system, allowing its use not only for DQM purposes but also to track and assess the quality of LHCb software and simulation.
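The abstract does not spell out Monet's alarm checks, but a typical building block of such a system is a configurable reference comparison per plot. The sketch below shows one plausible form, a chi-square comparison against a reference histogram with a per-plot threshold; the bin contents and the threshold are invented.

```python
# Sketch of a configurable DQM alarm: compare a monitored histogram with a
# reference and raise an alarm when the agreement is poor. Invented numbers.
def chi2_per_bin(observed, reference):
    """Chi-square per bin, with (o + r) as a rough Poisson variance."""
    chi2, used = 0.0, 0
    for o, r in zip(observed, reference):
        if o + r > 0:
            chi2 += (o - r) ** 2 / (o + r)
            used += 1
    return chi2 / used if used else 0.0

ALARM_THRESHOLD = 2.0                     # would be configurable per plot

reference = [100, 220, 400, 210, 95]
observed  = [ 90, 230, 150, 205, 100]     # third bin badly off

score = chi2_per_bin(observed, reference)
print(f"chi2/bin = {score:.1f} ->", "ALARM" if score > ALARM_THRESHOLD else "OK")
```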

Related links:
CHEP 2016
© CERN Geneva

2016-10-28
14:14
Silicon telescope for prototype sensor characterisation using particle beam and cosmic rays
Reference: Poster-2016-560
Created: 2016. - 1 p.
Creator(s): Fu, Jinlin

We present the design and the performance of a silicon strip telescope that we have built and recently used as a reference tracking system for prototype sensor characterisation. The telescope was operated in a beam at the CERN SPS and also with cosmic rays in the laboratory. We describe the data acquisition system, based on a custom electronic board that we have developed, and the online monitoring system used to control the quality of the data in real time.

Related links:
TWEPP 2016
© CERN Geneva

2016-10-28
08:18
A New Readout Electronics for the LHCb Muon Detector Upgrade
Reference: Poster-2016-559
Created: 2016. - 1 p.
Creator(s): Cadeddu, Sandro

The 2018/2019 upgrade of the LHCb Muon System foresees a 40 MHz readout scheme and requires the development of a new Off-Detector Electronics (nODE) board, which will be based on the nSYNC, a radiation-tolerant custom ASIC developed in UMC 130 nm technology. Each nODE board has 192 input channels processed by 4 nSYNCs. The nSYNC is equipped with fully digital TDCs and implements all the functionalities required for the readout: bunch-crossing alignment, data zero suppression, and time measurements. Optical interfaces, based on GBT and Versatile Link components, are used to communicate with the DAQ, TFC and ECS systems.
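Of the readout functions listed, zero suppression is the easiest to picture: only fired channels are shipped off the board, each tagged with its bunch-crossing ID and TDC time. The sketch below is purely illustrative; the real nSYNC data format is not described in the abstract.

```python
# Illustrative zero suppression for a 192-channel board: keep only channels
# with a hit, tagged with bunch-crossing ID and TDC time. Format invented.
from typing import List, Tuple

def zero_suppress(bxid: int, tdc_times: List[int]) -> List[Tuple[int, int, int]]:
    """Return (bxid, channel, tdc_time) for fired channels; -1 means no hit."""
    return [(bxid, ch, t) for ch, t in enumerate(tdc_times) if t >= 0]

frame = [-1] * 192                 # one bunch crossing, two hits
frame[17], frame[130] = 5, 12      # TDC counts within the crossing
print(zero_suppress(bxid=1203, tdc_times=frame))
# -> [(1203, 17, 5), (1203, 130, 12)]
```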

Related links:
TWEPP 2016
© CERN Geneva

2016-10-25
10:20
Development and test of the CO2 evaporative cooling system for the LHCb UT Detector
Reference: Poster-2016-558
Created: 2016. - 1 p.
Creator(s): Coelli, Simone

The upgrade of the LHCb detector, which will take place during Long Shutdown 2 from mid-2018 to the end of 2019, will significantly extend the physics reach of the experiment by allowing it to run at higher instantaneous luminosity with increased trigger efficiency for a wide range of decay channels. The LHCb upgrade relies on two major changes. Firstly, the read-out of the front-end electronics, currently limited to 1 MHz by the Level-0 trigger, will be replaced with a 40 MHz trigger system. Secondly, the upgraded LHCb detector will be designed to cope with an increase of the nominal operational luminosity by a factor of five compared to the current detector. Several subsystems of the current experiment need to be partially rebuilt. Among these, the 4 TT planes will be replaced by new high-granularity silicon micro-strip planes with improved coverage of the LHCb acceptance. The new system is called the Upstream Tracker (UT). The radiation length of each UT plane should not exceed 1% X0. The cooling system has to maintain the temperature of the sensors at -5 °C by removing the heat generated in the ASICs, in the silicon sensors due to self-heating, and in the cables that provide power to the front-end electronics. The acceptable temperature excursion over a sensor is in the range of 5 °C. The temperature of the ASICs should be kept under 40 °C for optimal functioning. The cooling power of the UT detector is rated at 5 kW. An efficient cooling system is necessary to keep the temperature of the sensors below -5 °C in order to reduce the leakage current and prevent thermal runaway in the presence of radiation damage. CO2 bi-phase cooling systems have been successfully built and operated for the LHCb VELO detector, which pioneered the use of evaporative CO2 cooling in high-energy physics, for the AMS tracker, and recently for the ATLAS Pixel Inner B-Layer (IBL). They have proved to be very efficient and reliable, providing effective cooling with reduced impact on the material budget. In the UT detector the heat load is dominated by the power dissipation of the read-out ASICs, which are bonded directly to the sensor and positioned close to it in the active tracking volume. Simulation studies based on finite element analysis (FEA) have shown that evaporative CO2 cooling is the optimal choice in terms of cooling efficiency and material budget. The CO2 evaporation at around -30 °C takes place in cooling pipes embedded in the local support structures: 68 vertical staves, 1.8 m long. Highly thermally conductive carbon foam in an optimized sandwich structure provides good heat transfer from the sensor and front-end electronics to the cooling pipe. A "snake pipe" design with bent tubes passing underneath the ASICs is currently considered the baseline solution, providing maximal heat transfer and the lowest and most uniform detector temperatures. The pipe material is titanium, with a 2 mm inner diameter and 0.1 mm wall thickness. The use of a vertical 3 m long "snake pipe" gives the best thermal performance for the detector, but R&D for a system with this geometry was mandatory. R&D activities and real-scale tests on prototypes have been carried out, and are in progress, to prove and finalize the design concept.
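Two of the quoted figures already fix the rough operating point of the system. The arithmetic below is a back-of-envelope check, not a result from the contribution; the CO2 latent heat value is an assumption (roughly 300 kJ/kg near -30 °C), and a real system runs at partial vapour quality, so the actual mass flow would be correspondingly higher.

```python
# Back-of-envelope cooling arithmetic from the abstract's figures.
total_load_w = 5_000          # UT cooling power (from the abstract)
n_staves = 68                 # vertical staves (from the abstract)
latent_heat = 300e3           # assumed CO2 latent heat near -30 degC [J/kg]

per_stave_w = total_load_w / n_staves
mass_flow_g_s = total_load_w / latent_heat * 1e3   # full-evaporation limit

print(f"heat load per stave: {per_stave_w:.0f} W")              # ~74 W
print(f"minimum CO2 mass flow: {mass_flow_g_s:.0f} g/s total")  # ~17 g/s
```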

Related links:
14th Topical Seminar on Innovative Particle and Radiation Detectors
© CERN Geneva

2016-10-25
09:57
Production and Quality Assurance of a Scintillating Fibre Detector for the LHCb Experiment
Reference: Poster-2016-557
Created: 2016. - 1 p.
Creator(s): Nieswand, Simon

At the Large Hadron Collider (LHC) at CERN, scientists from around the world use complex detector systems to test the predictions of the Standard Model and to search for indications of new physics. One of those detectors is the LHCb experiment, which was specifically designed for the study of heavy hadrons containing bottom and charm quarks (heavy-flavour physics). To deal with the increased beam energy and instantaneous luminosity of the LHC after Long Shutdown 2 in 2018/19, several subsystems of the LHCb detector have to be exchanged and upgraded. For this purpose, a new tracking system that will replace the so-called Inner and Outer Tracker of the current detector is currently being developed and built. The new tracker is based on 2.5 m long scintillating fibres (250 µm diameter) in which light is produced by traversing charged particles. The fibres are arranged in six-layered fibre mats which are read out with the help of silicon photomultipliers at the edge of the tracker's acceptance. The finished tracker will have a spatial resolution below 100 µm and will cover an area larger than 360 m². To produce the required total of 1100 fibre mats by the beginning of the second Long Shutdown, mass production must be set up at several locations. To assure the quality of the fibre mats, they are subjected to various tests during production. One of those is a check of the integrity of the fibre matrix inside the mats, which is used to look for irregularities and defects. Furthermore, different properties such as light yield, attenuation length, spatial resolution, and detection efficiency are measured with the help of another setup as well as in beam tests carried out at CERN. In this talk, the various studies of the fibre mats are explained in detail and first results are presented.
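Among the quality measurements listed, the attenuation length extraction has a simple form: the light yield falls roughly exponentially with the distance of the excitation point from the photodetector. The sketch below fits that exponential on invented data points; the actual QA setups and numbers are not given in the abstract.

```python
# Illustrative attenuation-length fit: linearise y = y0 * exp(-x / L) as
# ln(y) = ln(y0) - x / L and do a least-squares line fit. Data invented.
import math

data = [(0.2, 19.0), (0.8, 16.5), (1.4, 14.6), (2.0, 12.8)]  # (m, photoelectrons)

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(math.log(y) for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * math.log(y) for x, y in data)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)

print(f"attenuation length: {-1 / slope:.1f} m")   # ~4.6 m for these points
```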

Related links:
14th Topical Seminar on Innovative Particle and Radiation Detectors
© CERN Geneva
