CERN Accelerating science

Posters

Latest additions:
2018-08-09
14:07
Honeypot Resurrection - Redesign of CERN's Security Honeypots
Reference: Poster-2018-653
Keywords:  computer security, honeypot, SOC
Created: 2018
Creator(s): Buschendorf, Fabiola

A honeypot is a fake system residing in a company's or organization's network that attracts attackers by emulating old and vulnerable software. If a honeypot is accessed, all actions are logged and any submitted files are stored on the host machine. The current honeypot at CERN is deprecated and does not provide useful notifications. The task of this summer student project is to identify well-maintained and up-to-date open-source honeypots, test and configure them, and finally deploy them to convincingly resemble a CERN host, in order to collect information about potentially malicious activity inside the GPN (CERN's General Purpose Network).
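The core mechanism described above — accept any connection, pretend to be a vulnerable service, and record everything the attacker does — can be sketched in a few lines with Python's `socketserver`. This is a minimal illustration, not the project's actual deployment: the SSH banner and log format are invented for the example.

```python
import socket
import socketserver
import threading
import time

class HoneypotHandler(socketserver.BaseRequestHandler):
    # Illustrative banner imitating an outdated SSH server; a real honeypot
    # would emulate whichever old, vulnerable services it wants to advertise.
    BANNER = b"SSH-2.0-OpenSSH_5.3\r\n"

    def handle(self):
        self.request.sendall(self.BANNER)
        payload = self.request.recv(4096)
        # Every interaction is recorded: source address, time, raw bytes.
        self.server.log.append({
            "time": time.time(),
            "src": self.client_address[0],
            "payload": payload.decode("latin-1", "replace"),
        })

class Honeypot(socketserver.ThreadingTCPServer):
    allow_reuse_address = True
    daemon_threads = True

    def __init__(self, address=("127.0.0.1", 0)):
        super().__init__(address, HoneypotHandler)
        self.log = []  # in a real setup this would feed the SOC's log store

def probe(server):
    """Connect once, the way an attacker's scanner would, and return the banner."""
    with socket.create_connection(server.server_address) as conn:
        banner = conn.recv(64)
        conn.sendall(b"root\n")
        time.sleep(0.2)
    return banner
```

Anything the "attacker" types ends up in `server.log`, which is exactly the data the project wants to collect and turn into useful notifications.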

© CERN Geneva

Access to files

Detailed record - Similar records
2018-08-03
16:36
Software packaging and distribution for LHCb using Nix
Reference: Poster-2018-652
Created: 2018
Creator(s): Burr, Chris

Software is an essential and rapidly evolving component of modern high energy physics research. The ability to be agile and take advantage of new and updated packages from the wider data science community allows physicists to efficiently utilise the data available to them. However, these packages often introduce complex dependency chains and evolve rapidly, introducing specific, and sometimes conflicting, version requirements which can make managing environments challenging. Additionally, there is a need to replicate old environments when generating simulated data and to utilise pre-existing datasets. Nix is a "purely functional package manager" which allows software to be built and distributed with fully specified dependencies, making packages independent from those available on the host. Builds are reproducible, and multiple versions/configurations of each package can coexist, with the build configuration of each perfectly preserved. Here we give an overview of Nix, followed by the work that has been done to use Nix in LHCb and the advantages and challenges that this brings.
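The "fully specified dependencies" idea can be illustrated by how Nix names its store paths: the path is derived from a hash of the complete build specification, so two builds that differ in any input (dependency versions, flags) land in distinct, coexisting locations. The sketch below mimics that naming scheme in Python; the helper and its inputs are illustrative, not Nix's actual derivation format.

```python
import hashlib
import json

def store_path(name, version, deps, flags):
    """Derive a store path from the *complete* build specification,
    in the spirit of Nix's /nix/store/<hash>-<name> layout.
    Different dependency sets can never collide or overwrite each other."""
    spec = json.dumps(
        {"name": name, "version": version,
         "deps": sorted(deps), "flags": sorted(flags)},
        sort_keys=True,
    )
    digest = hashlib.sha256(spec.encode()).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}-{version}"
```

Because the hash covers every input, rebuilding with identical inputs reproduces the same path (reproducibility), while changing even one dependency yields a new path (coexisting versions) — the two properties the abstract highlights.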

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-27
13:59
KairosDB and Chronix as long-term storage for Prometheus - For those who don’t want to deal with HBase.
Reference: Poster-2018-651
Created: 2018
Creator(s): Mohamed, Hristo Umaru

Prometheus is a leading open source monitoring and alerting tool. Prometheus's local storage is limited in its scalability and durability, but it integrates very well with other solutions that provide robust long-term storage. The intended audience is anyone looking to evaluate a long-term storage solution for their Prometheus data. This talk will cover the experience of LHCb Online at CERN in choosing a monitoring solution for our data processing cluster. It will address two technologies on the market, KairosDB and Chronix, which interface excellently with Prometheus and do not require us to maintain an HBase cluster.
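For context, KairosDB ingests data points through a simple REST endpoint (`POST /api/v1/datapoints`), which is part of what makes it an attractive long-term backend. A sketch of building such a request body follows; the field names match the public KairosDB API, while the metric name and tags are made up for the example.

```python
import time

def kairosdb_payload(metric, value, tags, ts_ms=None):
    """Build the JSON body for KairosDB's POST /api/v1/datapoints endpoint.
    Timestamps are milliseconds since the epoch; KairosDB requires at
    least one tag per metric."""
    return [{
        "name": metric,
        "datapoints": [[ts_ms if ts_ms is not None else int(time.time() * 1000),
                        value]],
        "tags": tags,
    }]
```

A remote-write adapter between Prometheus and KairosDB essentially translates incoming samples into batches of exactly this shape before posting them.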

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-26
15:22
Strategy and Automation of the Quality Assurance Testing of MaPMTs for the LHCb RICH Upgrade
Reference: Poster-2018-650
Created: 2018
Creator(s): Gizdov, Konstantin

The LHCb RICH system will undergo major modifications for the LHCb Upgrade during the Long Shutdown 2 of the LHC, and the current photon detectors will be replaced by Multi-Anode PMTs (MaPMTs). The operating conditions of the upgraded experiment place significant requirements on the MaPMTs in terms of their performance, durability and reliability. Presented is an overview of the testing facilities designed and used to vet 3100 units of the Hamamatsu 1-inch R13742 and 450 units of the Hamamatsu 2-inch R13743 during the short 2-year testing period. Furthermore, the hardware architecture, the different read-out, power and control components, and the novel extensible software framework used to steer the procedure are discussed. Finally, the operation of four automated stations, deployed in two separate labs, is reported, with each station capable of fully characterising 16 MaPMTs per day.
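The quoted throughput makes the campaign arithmetic easy to check: four stations, each characterising 16 tubes per day, against 3550 tubes in total. A one-line estimate (ignoring delivery batches, retests and downtime):

```python
import math

def campaign_days(n_units, n_stations=4, per_station_per_day=16):
    """Working days needed to characterise all units at full throughput."""
    return math.ceil(n_units / (n_stations * per_station_per_day))
```

At full capacity the 3100 + 450 tubes take roughly 56 working days of continuous running, which shows why parallel automated stations are needed to fit comfortably inside a 2-year testing window.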

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-26
13:27
First results from production testing of 64-channel MaPMT R13742 (1 in) and R13743 (2 in) for the LHCb RICH Upgrade
Reference: Poster-2018-649
Created: 2018
Creator(s): Gizdov, Konstantin

During the 2019/20 LHCb Upgrade of the Ring Imaging Cherenkov (RICH) system, the current Hybrid Photon Detectors (HPDs), with embedded 1 MHz readout electronics, will be replaced with Multi-anode Photomultiplier Tubes (MaPMTs) with new external 40 MHz readout electronics. Two sizes of Hamamatsu 64-channel MaPMT have been selected as the photon detectors: the 1-inch R13742 and the 2-inch R13743, custom modifications of the models R11625 and R12699. Including spares, 3100 R13742 and 450 R13743 units have been purchased. The campaign to characterise all units, to ensure compliance with minimum specifications and to allow the selection of units with similar operational parameters, is ongoing. The key characteristics comprise the average gain, the spread of the gain (uniformity), the peak-to-valley ratio, the dark count rate, and the dependency of the gain on the high voltage (k-factor). So far 474 and 45 units have been tested, respectively. The test results will be presented. Additional measurements and studies, made with subsets of MaPMTs, round out the picture: the quantum efficiency, the loss of photon detection efficiency in magnetic fields, and minimal mu-metal shield configurations to effectively shield them up to 3 mT.
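One of the listed characteristics, the peak-to-valley ratio, is read off a single-photoelectron charge spectrum: the height of the single-photoelectron peak divided by the depth of the dip that separates it from the pedestal. A schematic version (the bin windows and the spectrum in the usage example are invented for illustration, not MaPMT data):

```python
def peak_to_valley(spectrum, valley_bins, peak_bins):
    """spectrum: counts per charge bin of a single-photoelectron spectrum.
    valley_bins: (start, stop) bin range bracketing the dip between the
    pedestal and the single-photoelectron peak.
    peak_bins: (start, stop) bin range bracketing the peak itself."""
    valley = min(spectrum[valley_bins[0]:valley_bins[1]])
    peak = max(spectrum[peak_bins[0]:peak_bins[1]])
    return peak / valley
```

A high ratio means the single-photoelectron signal stands out cleanly above the pedestal noise, which is why it serves as an acceptance criterion alongside gain and dark count rate.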

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:45
Improvements to the LHCb software performance testing infrastructure using message queues and big data technologies
Reference: Poster-2018-648
Created: 2018
Creator(s): Szymanski, Maciej Pawel

Software is an essential component of the experiments in High Energy Physics. Because it is upgraded on relatively short timescales, software provides flexibility, but at the same time is susceptible to issues introduced during the development process, which necessitates systematic testing. We present recent improvements to LHCbPR, the framework implemented at LHCb to measure the physics and computational performance of complete applications. Such infrastructure is essential for keeping track of the optimisation activities related to the upgrade of computing systems, which is crucial to meet the requirements of the LHCb detector upgrade for the next stage of data taking at the LHC. The latest developments in LHCbPR include the use of a messaging system to trigger tests right after the corresponding software version is built within the LHCb Nightly Builds infrastructure. We will also report on the investigation of using big data technologies in LHCbPR. We have found that tools such as Apache Spark and the Hadoop Distributed File System may significantly improve the functionality of the framework, providing interactive exploration of the test results with efficient data filtering and flexible development of reports.
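The messaging idea amounts to reacting to a "build finished" message instead of polling the nightlies: a consumer receives the message and launches whatever performance tests are registered for that project. A toy dispatcher follows; the message fields and test registry are invented for illustration and are not LHCbPR's actual schema.

```python
def dispatch_tests(message, registry):
    """Run every performance test registered for the project named in a
    'build finished' message and collect the results.
    message: dict with (hypothetical) keys 'project' and 'slot'.
    registry: maps project name -> list of zero-argument test callables."""
    results = []
    for test in registry.get(message["project"], []):
        results.append({
            "project": message["project"],
            "slot": message["slot"],
            "test": test.__name__,
            "result": test(),
        })
    return results
```

Driving tests from messages rather than a schedule means results appear as soon as a nightly build completes, with no wasted runs for projects that did not change.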

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:41
Machine Learning based Global Particle Identification Algorithms at the LHCb Experiment
Reference: Poster-2018-647
Created: 2018
Creator(s): Hushchyn, Mikhail

One of the most important aspects of data processing at LHC experiments is the particle identification (PID) algorithm. In LHCb, several different sub-detector systems provide PID information: the Ring Imaging Cherenkov detectors, the hadronic and electromagnetic calorimeters, and the muon chambers. Charged PID based on the sub-detector responses is treated as a machine learning problem solved in different modes: one-vs-rest, one-vs-one and multi-classification, which affect the models' training and prediction. To improve charged particle identification for pions, kaons, protons, muons and electrons, several neural networks and gradient boosting models have been tested. These approaches provide a larger area under the receiver operating characteristic curve than existing implementations in most cases. To reduce the systematic uncertainty arising from the use of PID efficiencies in certain physics measurements, it is also beneficial to achieve a flat dependency between efficiencies and spectator variables such as particle momentum. For this purpose, "flat" algorithms based on boosted decision trees that guarantee the flatness property for efficiencies have also been developed. This talk presents approaches based on state-of-the-art machine learning techniques and their performance evaluated on Run 2 data and simulation samples. A discussion of the performances is also presented.
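The quoted figure of merit, the area under the ROC curve, has a simple probabilistic reading: it is the chance that a randomly chosen signal candidate is ranked above a randomly chosen background one, with ties counting half. A direct (O(n²)) implementation of that definition, useful for checking faster library versions:

```python
def roc_auc(labels, scores):
    """labels: 1 for signal, 0 for background; scores: classifier outputs.
    Returns the probability that a random signal candidate outscores a
    random background candidate (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means perfect separation, 0.5 means the classifier is no better than random — which is why "larger area under the curve" is the yardstick for comparing the new PID models against the existing implementations.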

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:32
Addressing Scalability with Message Queues: Architecture and Use Cases for DIRAC Interware
Reference: Poster-2018-646
Created: 2018
Creator(s): Krzemien, Wojciech Jan

The Message Queue (MQ) architecture is an asynchronous communication scheme that provides an attractive solution for certain scenarios in the distributed computing model. The introduction of an intermediate component (the queue) between the interacting processes decouples the end-points, making the system more flexible and providing high scalability and redundancy. Message queue brokers such as RabbitMQ, ActiveMQ or Kafka are proven technologies in wide use today. DIRAC is a general-purpose Interware software for distributed computing systems, which offers a common interface to a number of heterogeneous providers and guarantees transparent and reliable usage of the resources. The DIRAC platform has been adopted by several scientific projects, including High Energy Physics communities like LHCb, the Linear Collider and Belle2. A generic Message Queue interface has been incorporated into the DIRAC framework to help solve the scalability challenges that must be addressed during LHC Run 3, starting in 2021. It allows the MQ scheme to be used for message exchange among the DIRAC components, or to communicate with third-party services. In this contribution we describe the integration of MQ systems with DIRAC and present several use cases, with the focus on the incorporation of MQ into the pilot logging system. Message Queues are also foreseen as a backbone of the DIRAC component logging system and monitoring. The results of the first performance tests will be presented.
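The decoupling described above is the classic producer/queue/consumer pattern. In-process it can be sketched with Python's standard `queue` module standing in for a broker such as RabbitMQ: the producer only ever talks to the queue, never to the consumer, so either side can be scaled or restarted independently.

```python
import queue
import threading

def consumer(q, processed):
    """Drain messages until a None sentinel arrives. The producer never
    needs to know who (or how many workers) will handle its messages."""
    while True:
        msg = q.get()
        if msg is None:
            q.task_done()
            break
        # Stand-in for real processing, e.g. storing a pilot log line.
        processed.append(msg.upper())
        q.task_done()
```

With a real broker, the queue additionally buffers bursts and survives consumer outages, which is what gives the scheme its scalability and redundancy.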

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:30
New approaches for track reconstruction in LHCb’s Vertex Locator
Reference: Poster-2018-645
Created: 2018
Creator(s): Hasse, Christoph

Starting with Upgrade 1 in 2021, LHCb will move to a purely software-based trigger system. The new trigger strategy is therefore to process events at the full rate of 30 MHz. Given that the growth of CPU performance has slowed in recent years, the predicted performance of the software trigger currently falls short of the necessary 30 MHz throughput. To cope with this shortfall, LHCb's real-time reconstruction will have to be sped up significantly. We aim to help close this gap by speeding up the track reconstruction of the Vertex Locator, which currently takes up roughly a third of the time spent in the first phase of the High Level Trigger. To obtain the needed speedup, profiling and technical optimisations are explored, as well as new algorithmic approaches. For instance, a clustering-based algorithm can reduce the event rate prior to the track reconstruction by separating hits into two sets - hits from particles originating from the proton-proton interaction point, and those from secondary particles - allowing the reconstruction to treat them separately. We present an overview of our latest efforts in solving this problem, which is crucial to the success of the LHCb upgrade.
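At its simplest, the two-set separation amounts to partitioning hits by the estimated z position of their origin along the beamline: origins compatible with the luminous region are treated as prompt, the rest as secondary. The sketch below is deliberately simplified for illustration — the window and the per-hit origin estimates are invented inputs, not the actual VELO algorithm.

```python
def split_by_origin(z_origins, window=(-100.0, 100.0)):
    """z_origins: estimated origin z (mm, illustrative) for each hit.
    window: z range taken to represent the luminous region.
    Returns (prompt, secondary) lists, so that the track reconstruction
    can treat the two populations separately."""
    prompt, secondary = [], []
    for z in z_origins:
        (prompt if window[0] <= z <= window[1] else secondary).append(z)
    return prompt, secondary
```

Running the expensive primary-vertex-oriented reconstruction only on the prompt set is what reduces the effective rate the downstream algorithms must sustain.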

© CERN Geneva

Access to files

Detailed record - Similar records
2018-07-20
14:25
LHCb’s Puppet 3.5 to Puppet 4.9 migration
Reference: Poster-2018-644
Created: 2018
Creator(s): Mohamed, Hristo Umaru

Until September 2017, LHCb Online was running a non-redundant Puppet 3.5 master/server architecture. As a result, we had problems with outages, both planned and unplanned, as well as scalability issues (how do you run 3000 nodes at the same time? How do you even run 100 without bringing down the Puppet master?). On top of that, Puppet 5.0 had been released, so we were now running two versions behind! As Puppet 4.9 was the de facto standard, something had to be done right away, so a quick, self-inflicted, three-week-long non-stop hackathon had to happen. This talk will cover the pitfalls, mistakes and architecture decisions we made when migrating our entire Puppet codebase, nearly from scratch, to a more modular one, addressing existing exceptions and anticipating ones arising in the future - all while our entire infrastructure kept running in physics production, and causing zero outages. We will cover the mistakes we had made in our Puppet 3 installation and how we eventually fixed them, lowering catalogue compile time and reducing our overall codebase by around 50%. We will also cover how we set up a quickly scalable Puppet core (masters, CAs, Foreman, etc.) infrastructure.

© CERN Geneva

Access to files

Detailed record - Similar records
Focus on:
Open Days 2013 Posters (58)