
Posters

Recent additions:
2019-11-08
14:58
Using ML to Speed Up New and Upgrade Detector Studies
Reference: Poster-2019-1019
Created: 2019
Creator(s): Ratnikov, Fedor

Designing new experiments, as well as upgrading ongoing ones, is a continuous process in experimental high-energy physics. Frontier R&D is used to squeeze the maximum physics performance out of cutting-edge detector technologies. Evaluating the physics performance of a particular configuration involves sketching that configuration in Geant, simulating typical signals and backgrounds, applying reasonable reconstruction procedures, and combining the results into final quality metrics. Since the best solution is always a trade-off between different kinds of limitations, a quick turnaround is needed to evaluate the physics performance of different technical solutions in different configurations. Two typical problems that slow down this evaluation are describing the Geant geometry, together with the signal-processing chain, well enough to give an adequate description of the detector response, and developing an adequate reconstruction algorithm for the detector response under study. Both problems may be addressed using modern ML approaches. In addition, the whole procedure can be viewed as a black-box optimisation, which gives access to numerous available methods. In our presentation, we discuss how advanced machine-learning techniques can speed up the detector development and optimisation cycle, with an emphasis on the calorimeter upgrade project for the LHCb detector.
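
As a sketch of the black-box-optimisation view mentioned above, the toy Python snippet below treats the whole simulate-reconstruct-evaluate chain as an opaque objective over a few illustrative calorimeter parameters and optimises it by plain random search; the parameter names, bounds and objective are assumptions for illustration only, not the actual pipeline or method used in the poster.

```python
import random

# Hypothetical detector configuration: each key is an illustrative
# calorimeter parameter with (min, max) bounds in arbitrary units.
PARAM_BOUNDS = {
    "absorber_thickness_mm": (1.0, 10.0),
    "cell_size_mm": (20.0, 120.0),
    "n_layers": (40, 70),
}

def physics_quality(config):
    """Placeholder for the expensive black box:
    Geant simulation + reconstruction + quality metric.
    Here it is just a smooth toy function so the sketch runs."""
    return -((config["absorber_thickness_mm"] - 4.0) ** 2
             + (config["cell_size_mm"] - 60.0) ** 2 / 100.0
             + (config["n_layers"] - 55) ** 2 / 10.0)

def random_search(n_trials=200, seed=0):
    """Simplest possible black-box optimiser: sample configurations
    uniformly within bounds and keep the best one. In practice one
    would plug in Bayesian optimisation or a surrogate model instead."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_BOUNDS.items()}
        score = physics_quality(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

if __name__ == "__main__":
    cfg, score = random_search()
    print("best configuration:", cfg, "score:", round(score, 3))
```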

© CERN Geneva

Access to file

Detailed record - Similar records
2019-11-08
14:56
Compass SPMD: a SPMD vectorized tracking algorithm
Reference: Poster-2019-1018
Created: 2019
Creator(s): Fernandez Declara, Placido

The LHCb detector will be upgraded in 2021: the hardware-level trigger will be replaced by a software High Level Trigger 1 that needs to process the full 30 MHz collision rate. As part of the efforts to create a GPU High Level Trigger 1, tracking algorithms need to be optimized for SIMD architectures in order to achieve high throughput. We present an SPMD (Single Program, Multiple Data) version of Compass, a tracking algorithm optimized for SIMD architectures, vectorized using the Intel SPMD Program Compiler. This compiler and programming model executes program instances in parallel and exploits the SIMD lanes of CPUs using GPU-like source code, without requiring knowledge of low-level details. It can target different vector widths and vector instruction sets, and combine different levels of parallelism. We design the algorithm with highly parallel architectures in mind, minimizing divergence and memory footprint while creating a data-oriented algorithm that is efficient on SIMD architectures. We vectorize the algorithm using the SPMD programming model, preserving the algorithm design and delivering the same physics efficiency as its GPU counterpart. We study the physics performance and throughput of the algorithm, discuss the impact of different vector widths and instruction sets, and compare it with the GPU implementation.
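
As a loose illustration of the SPMD idea (here with NumPy rather than the Intel SPMD Program Compiler actually used), the sketch below writes a per-track kernel once and then expresses the same computation over all track "instances" at once, letting the array library map the work onto SIMD lanes; the hit/track quantities and search window are invented for the example and are not Compass internals.

```python
import numpy as np

# Illustrative data: x-positions of detector hits and of extrapolated
# track candidates at the same detector plane (arbitrary units).
rng = np.random.default_rng(1)
hit_x = rng.uniform(-100.0, 100.0, size=1024)
track_x = rng.uniform(-100.0, 100.0, size=256)
WINDOW = 2.5  # illustrative hit-search window

def best_hit_scalar(tx):
    """Scalar 'program instance': find the hit closest to one track,
    returning its index, or -1 if nothing lies inside the window."""
    d = np.abs(hit_x - tx)
    i = int(np.argmin(d))
    return i if d[i] < WINDOW else -1

# SPMD-style formulation: the same kernel expressed once for all track
# 'instances' at the same time; the array library maps the work onto
# the available SIMD lanes instead of a hand-written per-track loop.
diff = np.abs(hit_x[None, :] - track_x[:, None])          # shape (tracks, hits)
closest = diff.argmin(axis=1)
closest = np.where(diff[np.arange(len(track_x)), closest] < WINDOW, closest, -1)

# The vectorized result matches the scalar kernel run track by track.
assert all(closest[i] == best_hit_scalar(tx) for i, tx in enumerate(track_x))
print("tracks with a matched hit:", int((closest >= 0).sum()), "of", len(track_x))
```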

© CERN Geneva

Access to files

Detailed record - Similar records
2019-11-08
14:53
Modularization of the LHCb software environment and preparation for heterogeneous resources
Reference: Poster-2019-1017
Created: 2019
Creator(s): Clemencic, Marco

The LHCb software stack has to run in very different computing environments: the trigger farm at CERN, the grid, shared clusters, software developers' desktops, and more. The old model assumes the availability of CVMFS and relies on custom scripts (a.k.a. LbScripts) to configure the environment to build and run the software. It lacks flexibility: it does not allow, for example, running in a container, and it is very difficult to configure and run in non-standard environments. This paper describes the steps taken to modularize this environment, allowing easier development and deployment (as standard Python packages), as well as the integration added with container technology to better support non-standard environments.
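
The snippet below is a minimal sketch of the kind of environment wrapper such a modularized, pip-installable setup enables; the tool name, environment variables and the use of `apptainer exec` for the container case are illustrative assumptions, not the actual LbScripts replacement.

```python
#!/usr/bin/env python3
"""Hypothetical 'run in a configured LHCb-like environment' wrapper,
installable as a console_scripts entry point of a standard Python package."""
import os
import subprocess
import sys

# Illustrative defaults; a real tool would read these from configuration.
SOFTWARE_ROOT = "/cvmfs/lhcb.cern.ch/lib"
CONTAINER_IMAGE = os.environ.get("LB_CONTAINER_IMAGE")  # assumed variable name

def build_environment():
    """Return a copy of the current environment with the software stack
    paths prepended, instead of sourcing ad-hoc shell scripts."""
    env = dict(os.environ)
    env["PATH"] = f"{SOFTWARE_ROOT}/bin:" + env.get("PATH", "")
    env["LD_LIBRARY_PATH"] = f"{SOFTWARE_ROOT}/lib:" + env.get("LD_LIBRARY_PATH", "")
    return env

def main(argv=None):
    cmd = list(argv if argv is not None else sys.argv[1:])
    if not cmd:
        print("usage: lb-wrap <command> [args...]", file=sys.stderr)
        return 2
    if CONTAINER_IMAGE:
        # Non-standard host: run inside a container with CVMFS bind-mounted.
        cmd = ["apptainer", "exec", "--bind", "/cvmfs", CONTAINER_IMAGE] + cmd
    return subprocess.call(cmd, env=build_environment())

if __name__ == "__main__":
    sys.exit(main())
```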

© CERN Geneva

Access to files

Detailed record - Similar records
2019-11-08
14:52
A gateway between Gitlab CI and DIRAC
Reference: Poster-2019-1016
Created: 2019
Creator(s): Couturier, Ben

The Gitlab continuous integration system (http://gitlab.com) is an invaluable tool for software developers to test and validate their software. LHCb analysts have also been using it to validate physics software tools and data analysis scripts, but this usage faces issues that differ from standard software testing, as it requires significant CPU resources and credentials to access physics data. This paper presents the Gitlab CI to DIRAC gateway, a tool that runs Gitlab CI jobs within the LHCb grid system (LHCbDirac), thereby bridging the gap between Gitlab jobs and the CPU and disk resources provided to the experiment.
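
A minimal sketch of how such a gateway could hand a CI payload to the grid is shown below, using the generic DIRAC Python API (Job and Dirac classes); the payload script name, sandbox contents and CPU-time limit are illustrative assumptions, and any LHCbDirac-specific configuration is omitted.

```python
#!/usr/bin/env python
"""Sketch: submit a Gitlab CI payload as a DIRAC grid job."""
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)  # initialise DIRAC before using the API

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

def submit_ci_job(payload_script="run_ci_job.sh", job_name="gitlab-ci-job"):
    """Wrap the CI payload script into a DIRAC job and submit it.
    The payload script (exported from the Gitlab CI job definition)
    is shipped in the input sandbox and executed on a grid worker node."""
    job = Job()
    job.setName(job_name)
    job.setInputSandbox([payload_script])
    job.setExecutable(payload_script, logFile="ci_job.log")
    job.setCPUTime(3600)  # illustrative limit, in seconds

    result = Dirac().submitJob(job)
    if not result["OK"]:
        raise RuntimeError("submission failed: " + result["Message"])
    return result["Value"]  # the DIRAC job ID

if __name__ == "__main__":
    print("submitted DIRAC job", submit_ci_job())
```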

© CERN Geneva

Access to files

Detailed record - Similar records
2019-11-08
14:50
Feasibility tests of RoCE for the cluster-based event building in LHCb
Reference: Poster-2019-1015
Created: 2019
Creator(s): Krawczyk, Rafal Dominik

This paper evaluates the use of RDMA over Converged Ethernet (RoCE) for the Run 3 LHCb event building at CERN. The data acquisition system of the detector will collect partial data from approximately 1000 separate detector streams, with a total estimated throughput of 40 terabits per second. Full events will be assembled for subsequent processing and data selection in the filtering farm of the online trigger. As a result, high-throughput inter-node transmissions over a combination of 100 and 25 Gigabit-per-second links will be an essential feature of the system, and the data exchange mechanism of the cluster must use memory-lightweight data transmission protocols. In this work, RoCE, a high-throughput kernel-bypass Ethernet-based protocol, is benchmarked as a candidate technology for the event-building network. CPU and memory bandwidth utilization for RoCE-based data transmissions is investigated and discussed, and a comparison of RoCE with the InfiniBand protocol is presented. Preliminary performance results are discussed for the selected network hardware supporting the protocol. Relevant utilization and interoperability issues are detailed, along with lessons learned along the way.
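
As a back-of-the-envelope check of the figures quoted above (about 1000 streams, 40 Tb/s aggregate, links of 100 and 25 Gb/s), the short script below derives the average per-stream rate and how much of each link type it would occupy; it only restates the abstract's numbers and adds nothing else.

```python
# Back-of-the-envelope check of the event-building bandwidth figures.
TOTAL_THROUGHPUT_TBPS = 40        # aggregate throughput, terabits per second
N_STREAMS = 1000                  # approximate number of detector streams
LINK_SPEEDS_GBPS = (100, 25)      # link speeds available in the cluster

avg_per_stream_gbps = TOTAL_THROUGHPUT_TBPS * 1000 / N_STREAMS
print(f"average per-stream rate: {avg_per_stream_gbps:.0f} Gb/s")

for link in LINK_SPEEDS_GBPS:
    # Fraction of a single link that one average stream would occupy.
    print(f"  occupancy of a {link} Gb/s link: {100 * avg_per_stream_gbps / link:.0f}%")
```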

© CERN Geneva

Access to files

Detailed record - Similar records
2019-11-07
16:08
Quarkonia production in pPb collisions at LHCb
Reference: Poster-2019-1014
Created: 2019
Creator(s): Crkovska, Jana

We present LHCb results on charmonia production in proton-lead collisions, using the data collected in 2016 at a nucleon-nucleon centre-of-mass energy of sqrt(s_NN) = 8.16 TeV, in the forward region (pseudorapidity between 2 and 5), covering both forward (pPb configuration) and backward (Pbp configuration) rapidities. The measurements disentangle the prompt and from-b-decay components. The large increase in the size of the data sample compared to the 5 TeV sample collected in 2013 allows a remarkable improvement in the accuracy of studies of nuclear-matter effects.

© CERN Geneva

Access to files

Detailed record - Similar records
2019-11-07
16:06
Prompt open charm production in 𝑝Pb with LHCb
Reference: Poster-2019-1013
Created: 2019
Creator(s): Wang, Jianqiao

In proton-lead collisions collected by the LHCb detector at nucleon-nucleon center-of-mass energies of 5 and 8.16 TeV, a rich set of open-charm hadrons is observed with abundant statistics. Thanks to the LHCb forward acceptance, which is complementary to that of the general-purpose detectors, and to its excellent particle reconstruction and identification performance, these charm states are studied down to zero pT with very high precision in heavy-ion data. We present measurements of the production of charm mesons and baryons reconstructed in exclusive hadronic final states. Nuclear effects are studied and quantified through nuclear modification factors, forward-to-backward production ratios and baryon-to-meson ratios. The impact of the results, in particular on the improvement of nuclear PDFs and on parton saturation, is discussed.

© CERN Geneva

Access to files

Detailed record - Similar records
2019-11-07
15:57
Heavy-flavor production in fixed-target mode with LHCb
Reference: Poster-2019-1012
Created: 2019
Creator(s): Garcia Rosales, Felipe Andres

LHCb has the unique capability to study collisions of the LHC beams on fixed targets. Internal gas targets of helium, neon and argon have been used so far to collect samples of proton-gas and Pb-gas collisions corresponding to integrated luminosities of up to 0.1 pb−1. Results on open and hidden charm production will be presented, which can provide crucial constraints on cold-nuclear-matter effects and nPDFs at large x.

© CERN Geneva

Access to files

Detailed record - Similar records
2019-11-07
15:54
Open Heavy-Flavour and J/psi production in peripheral PbPb collisions at LHCb
Reference: Poster-2019-1011
Created: 2019
Creator(s): Belin, Samuel

In 2018, LHCb recorded an integrated luminosity of about 210 microbarn−1 of PbPb collisions at sqrt(s_NN) = 5.02 TeV. We present the first LHCb measurements of open heavy-flavour and J/ψ production from this new sample. The momentum resolution of the detector makes it possible to probe the boundary between peripheral and ultra-peripheral collisions by comparing hadro-produced and photo-produced J/ψ, and also to measure mesonic and baryonic open-charm production in peripheral collisions, down to pT = 0, where the performance of the detector is optimal.

© CERN Geneva

Access to files

Detailed record - Similar records
2019-11-07
15:50
Probing small-x gluons with 𝛄+hadron correlations in the forward rapidity with the LHCb detector
Reference: Poster-2019-1010
Created: 2019
Creator(s): Mukherjee, Maitreyee

Gluon nuclear PDFs still have large uncertainties in the region of small x (x < 10^{-3}) and small virtuality (Q2 < 50 (GeV/c)2). Yields of particles produced from these gluons in nuclear collisions are suppressed relative to p+p collisions because of initial-state effects such as shadowing, energy loss and gluon saturation. Precise measurements of yields coming from small-x, small-Q2 gluons are essential to understand these effects, which contribute significantly to the suppression observed in A+A collisions at RHIC and the LHC. The inverse Compton process g+q→γ+q→γ+h is one of the few that can access and provide information on the gluon x and Q2 in the region where nPDFs are not well constrained. The LHCb detector can measure photons through its electromagnetic calorimeter or via photon conversion to dielectrons in the pseudorapidity range 2 < η < 5, covering x > 5×10−6 and Q2 > 2 GeV2 for this process. This unique coverage allows us to search for the gluon saturation scale, the transition between dilute and saturated gluons predicted by the Color-Glass Condensate effective theory. This presentation will show the status of the isolated γ+hadron correlation analysis using data collected in p+Pb and Pb+p collisions at 8.16 TeV and p+p collisions at 8 TeV. New techniques will also be presented to identify isolated photons and subtract the large background from neutral meson decays.

© CERN Geneva

Access to files

Detailed record - Similar records
Focus on:
Open Days 2013 Posters (58)
Open Days 2019 Posters (293)