CERN Accelerating science

Posters

Recent additions:
2017-02-24
11:44
Space Charge Modules for PyHEADTAIL
Reference: Poster-2017-569
Keywords:  Beam dynamics, Numerical simulation, Collective effects, PyHEADTAIL
Created: 2016. -6 p
Creator(s): Oeftiger, Adrian; Hegglin, Stefan Eduard

PyHEADTAIL is a 6D tracking tool developed at CERN to simulate collective effects. We present recent developments of the direct space charge (SC) suite, which is available on both CPU and GPU. A new 3D particle-in-cell solver with open boundary conditions has been implemented. For the transverse plane, there is a semi-analytical Bassetti-Erskine model as well as 2D self-consistent particle-in-cell solvers with both open and closed boundary conditions. For the longitudinal plane, PyHEADTAIL offers line density derivative models. Simulations with these models are benchmarked against experiments at the injection plateau of CERN’s SPS.
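The first step of any particle-in-cell space charge solver is depositing the macro-particle charge onto a grid. The sketch below shows a minimal 2D cloud-in-cell (CIC) deposition in pure Python; it is an illustration of the general technique, not PyHEADTAIL's actual implementation, and the grid parameters are arbitrary.

```python
# Minimal 2D cloud-in-cell (CIC) charge deposition: each unit-charge
# particle is shared among the four surrounding grid nodes with bilinear
# weights.  Illustrative sketch, not PyHEADTAIL's implementation.

def deposit_cic(x, y, nx, ny, dx, dy):
    """Deposit unit-charge particles onto an (nx, ny) grid with CIC weights."""
    rho = [[0.0] * ny for _ in range(nx)]
    for xp, yp in zip(x, y):
        ix, iy = int(xp / dx), int(yp / dy)   # lower-left grid node
        fx, fy = xp / dx - ix, yp / dy - iy   # fractional offsets in the cell
        rho[ix][iy]         += (1 - fx) * (1 - fy)
        rho[ix + 1][iy]     += fx * (1 - fy)
        rho[ix][iy + 1]     += (1 - fx) * fy
        rho[ix + 1][iy + 1] += fx * fy
    return rho

rho = deposit_cic([0.25], [0.75], nx=4, ny=4, dx=1.0, dy=1.0)
# The four weights around the particle sum to the particle's charge:
total = sum(sum(row) for row in rho)
```

The charge density obtained this way is then passed to a Poisson solver (with open or closed boundary conditions, as described above) to obtain the space charge fields.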

Related links:
HB2016
© CERN Geneva

Access to files

2017-02-24
11:39
Space Charge Mitigation With Longitudinally Hollow Bunches
Reference: Poster-2017-568
Keywords:  Beam dynamics, radio-frequency, longitudinal emittance blow-up, tune shift, PS
Created: 2016. -6 p
Creator(s): Oeftiger, Adrian; Hancock, Steven; Rumolo, Giovanni

Hollow longitudinal phase space distributions have a flat profile and hence reduce the impact of transverse space charge. Dipolar parametric excitation with the phase loop feedback systems provides such hollow distributions under reproducible conditions. We present a procedure to create hollow bunches during the acceleration ramp of CERN’s PS Booster machine with minimal changes to the operational cycle. The improvements during the injection plateau of the downstream Proton Synchrotron are assessed in comparison to standard parabolic bunches.
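The benefit of a flat profile can be quantified with a one-line normalisation argument: the direct space charge tune shift scales with the local line density, and a parabolic bunch of the same intensity and length has a peak line density 1.5 times that of a flat one. The numbers below are illustrative placeholders, not measured PS Booster values.

```python
# Peak line density of a parabolic bunch vs. a flat ("hollow") bunch with
# the same total intensity N and full length 2*zm.  Since the space charge
# tune shift scales with local line density, flattening the profile reduces
# the peak tune shift.  N and zm are assumed example values.

N = 1.0e12   # particles per bunch (assumed)
zm = 10.0    # half bunch length in metres (assumed)

# Normalising lambda(z) = lambda0 * (1 - (z/zm)**2) over [-zm, zm] to N
# gives lambda0 = 3N/(4*zm); a uniform profile gives N/(2*zm).
lambda_parabolic_peak = 3 * N / (4 * zm)
lambda_flat = N / (2 * zm)

reduction = lambda_parabolic_peak / lambda_flat   # peak density ratio
```

The 3/2 ratio is independent of N and zm, which is why flattening the longitudinal profile is attractive regardless of the operating intensity.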

Related links:
HB2016
© CERN Geneva

Access to file

2017-01-25
10:50
ALICE detector schematic after LS2
Reference: Poster-2017-567
Keywords:  ALICE, LS2
Original source: ALICE2017
Created: 2017. -1 p
Creator(s): Tauro, Arturo

ALICE detector schematic after LS2

Related links:
ALICE DETECTOR
© CERN Geneva

Access to files
Fulltext

2016-11-04
16:15
Automatised Data Quality Monitoring of the LHCb Vertex Locator
Reference: Poster-2016-566
Created: 2016. -1 p
Creator(s): Szumlak, Tomasz

The LHCb Vertex Locator (VELO) is a silicon strip semiconductor detector operating just 8 mm from the LHC beams. Its 172,000 strips are read out at a frequency of 1 MHz and processed by off-detector FPGAs followed by a PC cluster that reduces the event rate to about 10 kHz. During the second run of the LHC, which lasts from 2015 until 2018, the detector performance will undergo continued change due to radiation damage effects. This necessitates detailed monitoring of the data quality to avoid adverse effects on physics analysis performance. The VELO monitoring infrastructure has been re-designed compared to the first run of the LHC, when it was based on manual checks. The new system is based around an automatic analysis framework, which monitors the performance of new data as well as long-term trends and flags issues whenever they arise. An unbiased subset of the detector data is processed about once per hour by monitoring algorithms. The new analysis framework then analyses the plots produced by these algorithms. One of its tasks is to perform custom comparisons between the newly processed data and that from reference runs. A single figure of merit for the current VELO data quality is computed from a tree-like structure, in which the value of each node is computed from the values of its child branches. The comparisons and the combination of their output are configurable through steering files and applied dynamically. Configurable thresholds determine when the data quality is considered insufficient and an alarm is raised. The most likely scenario in which this analysis would identify an issue is that the parameters of the readout electronics are no longer optimal and require retuning. The data in the plots are reduced further, e.g. by evaluating averages, and these quantities are input to long-term trending. This is used to detect slow variations of quantities that are not detectable by comparing two nearby runs. Such gradual change is what is expected from radiation damage effects. It is essential to detect these changes early so that measures can be taken, e.g. adjustments of the operating voltage, to prevent any impact on the quality of high-level quantities and thus on physics analyses. The plots as well as the analysis results and trends are made available through graphical user interfaces (GUIs). One is available to run locally on the LHCb computing cluster; the other provides a web interface for remote data quality assessment. The latter operates a server-side queuing system for worker nodes that retrieve the data and pass it on to the client for display. Both GUIs are dynamically configured by a single configuration that determines the choice and arrangement of plots and trends and ensures a common look and feel. The infrastructure underpinning the web GUI is also used for other monitoring applications of the LHCb experiment.
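The tree-like figure of merit described above can be sketched as a small recursive computation. The node names, scores, and the combination rule (taking the minimum of the children) are hypothetical choices for illustration, not the actual VELO configuration.

```python
# Sketch of a tree-like data quality figure of merit: each node's value is
# combined from its child branches, and an alarm fires when the overall
# score drops below a configurable threshold.  Names, scores, and the
# "minimum of children" rule are illustrative assumptions.

def figure_of_merit(node):
    """Recursively combine child scores; leaf nodes carry their own score."""
    if "children" in node:
        return min(figure_of_merit(c) for c in node["children"])
    return node["score"]

velo_dq = {
    "name": "VELO",
    "children": [
        {"name": "pedestals", "score": 0.98},
        {"name": "noise", "children": [
            {"name": "sensor_A", "score": 0.95},
            {"name": "sensor_B", "score": 0.70},  # degraded node drives the FOM
        ]},
    ],
}

THRESHOLD = 0.8                 # configurable alarm threshold
fom = figure_of_merit(velo_dq)  # 0.70, propagated up from sensor_B
alarm = fom < THRESHOLD         # True: data quality flagged as insufficient
```

In the real system the combination rule and thresholds come from steering files rather than being hard-coded, which is what makes the analysis dynamically configurable.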

Related links:
CHEP 2016
© CERN Geneva

Access to file

2016-11-04
15:58
LHCbDIRAC as Apache Mesos microservices
Reference: Poster-2016-565
Created: 2016. -1 p
Creator(s): Couturier, Ben

The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VMs) or bare-metal hardware. Due to the increased workload, high availability is becoming more and more important for the LHCbDIRAC services, and the current installation model is showing its limitations. Apache Mesos is a cluster manager that abstracts heterogeneous physical resources, over which various tasks can be distributed by means of so-called "frameworks". The Marathon framework is suitable for long-running tasks such as the DIRAC services, while the Chronos framework meets the needs of cron-like tasks such as the DIRAC agents. Combining the service discovery tool Consul with HAProxy makes it possible to expose the running containers to the outside world while hiding their dynamic placement. Such an architecture would bring greater flexibility to the deployment of LHCbDIRAC services, allowing easier deployment, maintenance and on-demand scaling of services (LHCbDIRAC relies on 138 services and 116 agents). Higher reliability would also be easier to achieve, as clustering is part of the toolset, which allows constraints on the location of the services. This paper describes the investigations carried out to package the LHCbDIRAC and DIRAC components into Docker containers and orchestrate them using the previously described set of tools.
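To make the Marathon deployment model concrete, the sketch below builds a minimal application definition of the kind one would POST to Marathon's `/v2/apps` REST endpoint. The application id, Docker image name, and resource figures are hypothetical placeholders, not the actual LHCbDIRAC packaging.

```python
# A minimal Marathon application definition for a Docker-packaged,
# DIRAC-style long-running service.  The id, image, and resource values
# are illustrative placeholders; only the field names follow Marathon's
# app-definition schema.
import json

app = {
    "id": "/lhcbdirac/example-service",  # hypothetical application id
    "instances": 2,                      # two replicas for availability
    "cpus": 0.5,                         # fractional CPU share per instance
    "mem": 1024,                         # memory in MB per instance
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/dirac-service:latest"},  # placeholder
    },
}

payload = json.dumps(app)  # body of a POST to Marathon's /v2/apps endpoint
```

Marathon then keeps the requested number of instances running, while Consul and HAProxy hide where on the cluster those containers actually land.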

Related links:
CHEP 2016
© CERN Geneva

Access to files

2016-11-04
15:16
The LHCb Grid Simulation
Reference: Poster-2016-564
Created: 2016. -1 p
Creator(s): Baranov, Alexander

LHCb Grid access is based on the LHCbDIRAC system. It provides access to data and computational resources for researchers in different geographical locations. The Grid has a hierarchical topology with multiple sites distributed over the world. The sites differ from each other in their number of CPUs, amount of disk storage and connection bandwidth. These parameters are essential to the Grid's operation. Moreover, the job scheduling and data distribution strategies have a great impact on Grid performance. However, it is hard to choose appropriate algorithms and strategies, as they need a lot of time to be tested on the real Grid. In this study, we describe the LHCb Grid simulator. The simulator reproduces the structure of the LHCb Grid with its sites and their number of CPUs, amount of disk storage and connection bandwidth. We demonstrate how well the simulator reproduces the Grid's operation, and show its advantages and limitations. We show how well the simulator reproduces job scheduling and network anomalies, and consider methods for their detection and resolution. In addition, we compare different algorithms for job scheduling and different data distribution strategies.
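A toy version of the scheduling problem the simulator studies can be written in a few lines: sites with different CPU capacities, and a greedy policy that sends each job to the site with the most free CPUs. The site names and sizes are invented for illustration; the real simulator also models disk storage and bandwidth.

```python
# Toy model of grid job scheduling: each site has a CPU capacity and a
# greedy scheduler assigns every job to the site with the most free CPUs.
# Site names and capacities are made-up illustration values.

def schedule(jobs, sites):
    """Assign each job to the site with the most free CPUs (greedy)."""
    free = dict(sites)          # site name -> free CPU count
    assignment = {}
    for job in jobs:
        best = max(free, key=free.get)  # site with most free CPUs
        assignment[job] = best
        free[best] -= 1                 # each job occupies one CPU
    return assignment

sites = {"CERN": 3, "GRIDKA": 2, "CNAF": 1}
assignment = schedule(["job1", "job2", "job3"], sites)
# job1 and job2 go to CERN, then GRIDKA has the most free CPUs for job3
```

Comparing such policies against alternatives (e.g. favouring sites that already hold the input data) is exactly the kind of experiment that is cheap in a simulator and expensive on the real Grid.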

Related links:
CHEP 2016
© CERN Geneva

Access to files

2016-11-04
14:09
Real time analysis with the upgraded LHCb trigger in Run-III
Reference: Poster-2016-563
Created: 2016. -1 p
Creator(s): Szumlak, Tomasz

The current LHCb trigger system consists of a hardware level, which reduces the LHC bunch-crossing rate of 40 MHz to 1 MHz, the rate at which the entire detector is read out. In a second level, implemented in a farm of around 20,000 parallel-processing CPUs, the event rate is reduced to around 12.5 kHz. The LHCb experiment plans a major upgrade of the detector and DAQ system in the LHC long shutdown II (2018-2019). In this upgrade, a purely software-based trigger system is being developed, which will have to process the full 30 MHz of bunch crossings with inelastic collisions. LHCb will also see a factor of 5 increase in instantaneous luminosity, which further contributes to the challenge of reconstructing and selecting events in real time with the CPU farm. We discuss the plans and progress towards achieving efficient reconstruction and selection with a 30 MHz throughput. Another challenge is to exploit the increased signal rate that results from removing the 1 MHz readout bottleneck, combined with the higher instantaneous luminosity. Many charm hadron signals can be recorded at up to 50 times higher rates. LHCb is implementing a new paradigm in the form of real-time data analysis, in which abundant signals are recorded in a reduced event format that can be fed directly to physics analyses. These data do not need any further offline event reconstruction, which allows a larger fraction of the grid computing resources to be devoted to Monte Carlo production. We discuss how this real-time analysis model is critical to the LHCb upgrade, and how it will evolve during Run-II.
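The rate numbers quoted above imply the following reduction factors, worked out as simple arithmetic; the input rates come from the text, and the factors are derived from them.

```python
# Trigger rate reductions implied by the numbers in the abstract.
# Input rates are taken from the text; the factors are derived.

hardware_in, hardware_out = 40e6, 1e6   # Hz: hardware level, 40 MHz -> 1 MHz
software_out = 12.5e3                   # Hz: software farm output, 12.5 kHz
upgrade_in = 30e6                       # Hz: inelastic crossings post-upgrade

hardware_factor = hardware_in / hardware_out    # hardware level rejection
software_factor = hardware_out / software_out   # software level rejection
upgrade_vs_readout = upgrade_in / hardware_out  # input growth for software
```

The last factor is the core of the challenge: after the upgrade the software trigger must absorb 30 times the event rate the current readout delivers, on top of the fivefold luminosity increase.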

Related links:
CHEP 2016
© CERN Geneva

Access to files

2016-11-04
13:47
MCBooster: a tool for MC generation for massively parallel platforms
Reference: Poster-2016-562
Created: 2016. -1 p
Creator(s): Alves Junior, Antonio Augusto

MCBooster is a header-only, C++11-compliant library for the generation of large samples of phase-space Monte Carlo events on massively parallel platforms. It was released on GitHub in the spring of 2016. The library core algorithms implement the Raubold-Lynch method; they are able to generate the full kinematics of decays with up to nine particles in the final state. The library supports the generation of sequential decays as well as the parallel evaluation of arbitrary functions over the generated events. The output of MCBooster completely accords with popular and well-tested software packages such as GENBOD (W515 from CERNLIB) and TGenPhaseSpace from the ROOT framework. MCBooster is developed on top of the Thrust library and runs on Linux systems. It deploys transparently on NVidia CUDA-enabled GPUs as well as multicore CPUs. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
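The elementary building block that phase-space generators like the Raubold-Lynch method apply recursively is the two-body decay: the daughter momentum in the parent rest frame follows from the Källén triangle function. The sketch below shows this generic kinematics formula in plain Python; it is not MCBooster's API, and the D0 → K- pi+ example masses are rounded PDG values.

```python
# Breakup momentum of a two-body decay M -> m1 + m2 in the parent rest
# frame, the building block applied recursively by Raubold-Lynch-style
# phase space generators.  Generic kinematics, not MCBooster's API.
import math

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2 * (a * b + b * c + c * a)

def breakup_momentum(M, m1, m2):
    """Daughter momentum in the rest frame of a parent of mass M (GeV)."""
    return math.sqrt(kallen(M * M, m1 * m1, m2 * m2)) / (2 * M)

# Example: D0 -> K- pi+ (masses in GeV, rounded PDG values)
p = breakup_momentum(1.865, 0.494, 0.140)   # about 0.861 GeV
```

An n-body generator repeats this step along a chain of intermediate masses drawn between the kinematic limits, then weights each event accordingly, which is what makes the method embarrassingly parallel and hence a good fit for GPUs.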

Related links:
CHEP 2016
© CERN Geneva

Access to files

2016-11-04
10:26
Monitoring the LHCb data quality system
Reference: Poster-2016-561
Created: 2016. -1 p
Creator(s): Baranov, Alexander

Monitoring the quality of the data (DQM) is crucial in a high-energy physics experiment to ensure the correct functioning of the apparatus during data taking. DQM at LHCb is carried out in two phases. The first is performed on-site, in real time, using unprocessed data directly from the LHCb detector, while the second, also performed on-site, requires the reconstruction of the data selected by the LHCb trigger system and occurs with some delay. For the Run II data taking, the LHCb collaboration has re-engineered the DQM protocols and the DQM graphical interface, moving the latter to a web-based monitoring system called Monet, thus allowing researchers to perform the second phase off-site. To support the operator's task, Monet is also equipped with an automated, fully configurable alarm system, allowing its use not only for DQM purposes but also to track and assess the quality of LHCb software and simulation.

Related links:
CHEP 2016
© CERN Geneva

Access to files

2016-10-28
14:14
Silicon telescope for prototype sensor characterisation using particle beam and cosmic rays
Reference: Poster-2016-560
Created: 2016. -1 p
Creator(s): Fu, Jinlin

We present the design and performance of a silicon strip telescope that we have built and recently used as a reference tracking system for prototype sensor characterisation. The telescope was operated on beam at the CERN SPS and with cosmic rays in the laboratory. We describe the data acquisition system, based on a custom electronic board that we have developed, and the online monitoring system used to control the quality of the data in real time.

Related links:
TWEPP 2016
© CERN Geneva

Access to files
