
Showing new listings for Monday, 10 November 2025

New submissions (showing 2 of 2 entries)

Muon scattering tomography is a well-established, non-invasive imaging technique using cosmic-ray muons. Simple algorithms, such as PoCA (Point of Closest Approach), are often used to reconstruct the volume of interest from the observed muon tracks. However, it is preferable to apply more advanced reconstruction algorithms to make efficient use of the sparse muon statistics that are available. One approach is to formulate the reconstruction task as a likelihood-based problem, where the material properties of the reconstruction volume are treated as optimization parameters. In this contribution, we present a reconstruction method based on directly maximizing the underlying likelihood using automatic differentiation within the PyTorch framework. We introduce the general idea of this approach and evaluate its advantages over conventional reconstruction methods. Furthermore, first reconstruction results for different scenarios are presented, and the potential that this approach inherently provides is discussed.
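The core idea above can be sketched in a few lines of PyTorch. The following toy is not the authors' code: it assumes a deliberately simplified Gaussian scattering model in which the variance of the observed scattering angles is the sum of voxel scattering densities along the track, and fits those densities by gradient descent on the negative log-likelihood, with autograd supplying the gradients.

```python
import torch

# Toy likelihood-based reconstruction sketch (illustrative only).
# Voxel densities are the optimization parameters; autodiff provides
# the gradient of the Gaussian negative log-likelihood.
torch.manual_seed(0)

n_voxels = 4
true_lambda = torch.tensor([1.0, 5.0, 1.0, 1.0])  # one "dense" voxel

# Simulate observed scattering angles: variance = path-summed density
n_muons = 2000
path = torch.ones(n_voxels)  # toy geometry: each muon crosses all voxels
sigma2_true = (path * true_lambda).sum()
theta = torch.randn(n_muons) * sigma2_true.sqrt()

# Optimize log-densities so the densities stay positive
log_lam = torch.zeros(n_voxels, requires_grad=True)
opt = torch.optim.Adam([log_lam], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    sigma2 = (path * log_lam.exp()).sum()
    # Negative Gaussian log-likelihood of the observed angles
    nll = 0.5 * (theta**2 / sigma2 + torch.log(sigma2)).sum()
    nll.backward()  # automatic differentiation
    opt.step()

lam_hat = log_lam.exp().detach()
```

With this degenerate toy geometry only the summed density is identifiable; a realistic setup would use per-muon path-length matrices, which is exactly where a generic autodiff optimizer pays off over hand-derived update rules.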

A tau lepton identification algorithm, DeepTau, based on convolutional neural network techniques, has been developed in the CMS experiment to discriminate reconstructed hadronic decays of tau leptons (\tau_\mathrm{h}) from quark or gluon jets and from electrons and muons that are misreconstructed as \tau_\mathrm{h} candidates. The latest version of this algorithm, v2.5, includes domain adaptation by backpropagation, a technique that reduces discrepancies between collision data and simulation in the region with the highest purity of genuine \tau_\mathrm{h} candidates. Additionally, a refined training workflow improves classification performance with respect to the previous version of the algorithm, with a reduction of 30-50% in the probability for quark and gluon jets to be misidentified as \tau_\mathrm{h} candidates at given reconstruction and identification efficiencies. This paper presents the novel improvements introduced in the DeepTau algorithm and evaluates its performance in LHC proton-proton collision data at \sqrt{s} = 13 and 13.6 TeV collected in 2018 and 2022, with integrated luminosities of 60 and 35 fb^{-1}, respectively. Techniques to calibrate the performance of the \tau_\mathrm{h} identification algorithm in simulation with respect to its measured performance in real data are presented, together with a subset of the results measured for use in CMS physics analyses.
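"Domain adaptation by backpropagation" is conventionally implemented with a gradient-reversal layer: a domain (data-vs-simulation) classifier head is trained on shared features, but the gradient flowing back into the feature extractor is sign-flipped, pushing the features toward domain invariance. The sketch below shows the generic technique in PyTorch; the layer sizes and head names are illustrative, not DeepTau's actual architecture.

```python
import torch

# Gradient-reversal layer: identity in the forward pass, flips and
# scales the gradient in the backward pass.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.clone()

    @staticmethod
    def backward(ctx, grad_out):
        # Reversed gradient for the feature extractor; no grad for alpha
        return -ctx.alpha * grad_out, None

features = torch.nn.Linear(10, 8)    # shared feature extractor (toy size)
classifier = torch.nn.Linear(8, 2)   # main classification head
domain_head = torch.nn.Linear(8, 2)  # data-vs-simulation discriminator

x = torch.randn(16, 10)
f = torch.relu(features(x))
class_logits = classifier(f)
# The domain head sees f unchanged, but its loss gradient reaches the
# feature extractor with reversed sign:
domain_logits = domain_head(GradReverse.apply(f, 1.0))
```

Training then minimizes the classification loss plus the domain loss; because of the reversal, the feature extractor effectively maximizes the domain loss, reducing data/simulation discrepancies in the learned features.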

Cross submissions (showing 8 of 8 entries)

We present a hardware-accelerated hit filtering system employing Graph Neural Networks (GNNs) on Field-Programmable Gate Arrays (FPGAs) for the Belle II Level-1 Trigger. The GNN exploits spatial and temporal relationships among sense wire hits and is optimized for high-throughput hardware operation via quantization, pruning, and static graph-building. Sector-wise spatial parallelization permits scaling to full-detector coverage, satisfying stringent latency and throughput requirements. At a sustained throughput of 31.804 MHz, the system processes sense wire data in real time and achieves detector-level background suppression with a measured latency of 632.4 ns while utilizing 35.65% of Look-Up Tables (LUTs) and 29.75% of Flip-Flops, with zero Digital Signal Processing (DSP) usage, as demonstrated in a prototype implementation for a single sector on an AMD UltraScale XCVU190. Offline validation using Belle II data yields a background hit rejection of 83% while maintaining 95% signal hit efficiency. This work establishes hit-level GNN-based filtering on FPGAs as a scalable, low-latency solution for real-time data reduction in high-luminosity collider conditions.

Algorithms for computing neutrino oscillation probabilities in sharply varying matter potentials, such as the Earth, are becoming increasingly important. As the next generation of experiments (DUNE and HyperK, as well as the IceCube Upgrade and KM3NeT) comes online, the computational cost for atmospheric and solar neutrinos will continue to increase. To address these issues, we expand upon our previous algorithm for long-baseline calculations to efficiently handle probabilities through the Earth for atmospheric, nighttime solar, and supernova neutrinos. The algorithm is fast, flexible, and accurate. It can handle arbitrary Earth models with two different schemes for varying density profiles. We also provide a C++ implementation of the code, called NuFast-Earth, along with a detailed user manual. The code intelligently keeps track of repeated calculations and only recalculates what is needed on each successive call, which can provide significant speed-ups.
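The "only recalculate what is needed" strategy is ordinary parameter-keyed caching. The sketch below illustrates the pattern in Python (it is not the NuFast-Earth API, and the two-flavor vacuum formula is a deliberately simple stand-in for the real matter-effect computation): expensive mixing-parameter-dependent pieces are recomputed only when those parameters change between successive calls.

```python
import math

# Illustrative caching sketch: recompute the parameter-dependent piece
# only when the mixing angle changes between calls.
class OscCalculator:
    def __init__(self):
        self.n_recomputes = 0      # counts expensive recomputations
        self._cached_theta = None
        self._cached_amp = None

    def _amplitude(self, theta):
        # Stand-in for an expensive parameter-dependent computation
        self.n_recomputes += 1
        return math.sin(2.0 * theta) ** 2

    def probability(self, theta, dm2_ev2, L_km, E_gev):
        if theta != self._cached_theta:
            self._cached_amp = self._amplitude(theta)
            self._cached_theta = theta
        # Two-flavor vacuum formula: P = sin^2(2θ) sin^2(1.267 Δm² L/E)
        phase = 1.267 * dm2_ev2 * L_km / E_gev
        return self._cached_amp * math.sin(phase) ** 2

calc = OscCalculator()
p1 = calc.probability(0.59, 2.5e-3, 1300.0, 2.0)  # first call: recomputes
p2 = calc.probability(0.59, 2.5e-3, 1300.0, 3.0)  # same theta: cached
```

Scanning over energy or baseline at fixed oscillation parameters, as atmospheric and solar analyses do constantly, then pays the expensive parameter-dependent cost only once.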

Low-mass particles with small electric charges can be produced abundantly in large electric fields via the Schwinger effect. We study the production rate of such particles inside the polar gap of nearby pulsars. After production, they are accelerated above MeV energies by the local electric fields. These pulsar-produced millicharged particles can be detected at Earth in low-threshold dark matter direct detection experiments. We find that the current XENONnT data constrain millicharged particles produced in the Crab pulsar to have charges less than \mathcal{O}(10^{-6}) for sub-eV masses.

We analyze the excesses at 95 GeV in the light Higgs-boson searches in the di-photon decay channel reported by CMS and ATLAS, which combined are at the level of three standard deviations and are compatible with the excess in the b\bar{b} final state observed at LEP, together with an excess in the di-photon channel at around 152 GeV reported on the basis of a sideband analysis. We demonstrate that these excesses can be well described in a minimally extended Georgi-Machacek (meGM) model. This is enabled by four key features of the meGM model: (1) a natural prediction for scalar boson masses of \lesssim 200 GeV arising from the condition to describe both the Higgs boson signal at 125 GeV and the excesses at 95 GeV, (2) the prediction of a doubly charged Higgs boson that can potentially enhance the di-photon decay rates, (3) asymmetric WW and ZZ couplings to neutral scalar bosons that are induced by mild custodial symmetry breaking, and (4) the approximate preservation of the tree-level value of 1 for the electroweak \rho parameter. We show in our numerical analysis that the meGM model naturally improves the fit to the LHC data around 152 GeV when describing the excesses at 95 GeV. At the same time, the model also predicts additional light CP-odd and charged scalar bosons that can potentially be probed in future experiments, which motivates dedicated searches in the upcoming LHC runs. We also present the results of sensitivity studies for the 95 and 125 GeV Higgs-boson couplings at the HL-LHC and future e^+e^- colliders, which demonstrate very interesting prospects for probing the meGM model at future colliders.

We calculate inclusive B-meson decay rates in the Mesogenesis framework, a model explaining baryogenesis and the existence of dark matter, using the Heavy Quark Expansion (HQE) up to the dimension-six two-quark Darwin term. By systematically studying the power-suppressed contributions, we identify regions of parameter space where subleading terms exceed the leading contribution, i.e., the free b-quark decay, highlighting the limits of the HQE in this BSM scenario. This behavior is reminiscent of the Standard Model only under artificially heavy charm masses, and can be used to study the HQE close to its breakdown. We further update the lower bounds on the exclusive decay mode B^+ \to p^+ \psi by incorporating the fully HQE-corrected inclusive width in the ratio \Gamma_{\mathrm{excl}}/\Gamma_{\mathrm{incl}}. Extending the analysis from total decay rates to the lifetime ratio \tau(B_s)/\tau(B_d), we find no additional constraints on the couplings beyond existing collider bounds, consistent with analogous results for \tau(B^+)/\tau(B_d). We further compare the sensitivity of both lifetime ratios.

In most particle acceleration mechanisms, the maximum energy that cosmic rays can achieve is charge dependent. However, observational verification of such a fundamental relation is still lacking, due to the difficulty of measuring the spectra of individual particles from one (kind of) source(s) up to very high energies. This work reports direct measurements of the carbon, oxygen, and iron spectra from ~ 20 gigavolts to ~ 100 teravolts (~ 60 teravolts for iron) with 9 years of on-orbit data collected by the Dark Matter Particle Explorer (DAMPE). Distinct spectral softenings have been directly detected in these spectra for the first time. Combined with the updated proton and helium spectra, the spectral softening appears universally at a rigidity of ~ 15 teravolts. A nuclear-mass-dependent softening is rejected at a confidence level of > 99.999%. Taking into account the correlated structures at similar energies in the large-scale anisotropies of cosmic rays, one of the most natural interpretations of the spectral structures is the presence of a nearby cosmic ray source. In this case, the softening energies correspond to the acceleration upper limits of such a source, forming the so-called Peters cycle of the spectra. The results thus offer observational verification of the long-standing prediction of the charge-dependent energy limit of cosmic-ray acceleration.
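The charge-dependent limit described above follows from rigidity R = E/(Ze): if every species softens at a common rigidity, the softening energy of each species scales with its charge as E ≈ Z · R. A minimal numerical illustration, taking the ~15 TV common rigidity quoted above (the species list and the fully relativistic approximation are illustrative assumptions):

```python
# Toy illustration of the charge-dependent ("Peters cycle") energy limit:
# a common softening rigidity R implies softening energies E ~ Z * R.
R_MAX_TV = 15.0  # common softening rigidity in teravolts (from the abstract)

SPECIES_Z = {"proton": 1, "helium": 2, "carbon": 6, "oxygen": 8, "iron": 26}

# Softening energy per species in TeV, assuming fully relativistic nuclei
softening_tev = {name: z * R_MAX_TV for name, z in SPECIES_Z.items()}
```

Under these assumptions protons soften near 15 TeV, helium near 30 TeV, and heavier nuclei at correspondingly higher energies, which is the ladder-like "Peters cycle" pattern the measurement tests.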

The weak interactions of neutrinos with other Standard Model particles are well described within the Standard Model of particle physics. However, modern accelerator-based neutrino experiments employ nuclei as targets, where neutrinos interact with bound nucleons, turning a seemingly simple electroweak process into a complex many-body problem in nuclear physics. At the time of writing this Encyclopedia of Particle Physics chapter, neutrino-nucleus interactions remain one of the leading sources of systematic uncertainty in accelerator-based neutrino oscillation measurements. This chapter provides a pedagogical overview of neutrino interactions with nuclei in the medium-energy regime, spanning a few hundred MeV to several GeV. It introduces the fundamental electroweak formalism, outlines the dominant interaction mechanisms - including quasielastic scattering, resonance production, and deep inelastic scattering - and discusses how nuclear effects such as Fermi motion, nucleon-nucleon correlations, meson-exchange currents, and final-state interactions modify observable cross sections. The chapter also presents a brief survey of the foundational and most widely used theoretical models for neutrino-nucleus cross sections, together with an overview of current and upcoming accelerator-based neutrino oscillation experiments that are shaping the field. Rather than targeting experts, this chapter serves as a primer for advanced undergraduates, graduate students, and early-career researchers entering the field. It provides a concise foundation for understanding neutrino-nucleus scattering, its relevance to oscillation experiments, and its broader connections to both particle and nuclear physics.

We compute \mathcal{O}(\alpha^2 Z) radiative corrections to superallowed beta decays with a heavy-particle effective field theory that systematically describes the interactions of low-energy ultrasoft photons with nuclei. We calculate two-loop virtual and one-loop real-virtual amplitudes by reducing the Feynman integrals to a set of master integrals, which we solve analytically using a variety of techniques. These techniques can be applied to other phenomenologically interesting observables. The ultrasoft corrections can then be combined with contributions arising from the exchange of potential photons to obtain the complete \mathcal{O}(\alpha^2 Z) correction to the decay rate, with resummation of large logarithms of the electron energy times the nuclear radius. We find that \mathcal{O}(\alpha^2 Z) ultrasoft loops induce a relative correction to the decay rate that ranges from 0.7 \cdot 10^{-3} in the decay of ^{10}C to 3.6 \cdot 10^{-3} in the decay of ^{54}Co, and will thus impact the extraction of V_{ud} at the permille level. We show that the inclusion of these corrections reduces the residual renormalization scale dependence of the decay rate to a negligible level, making missing ultrasoft perturbative corrections a subdominant source of theoretical error.

Replacement submissions (showing 9 of 9 entries)

Fast and accurate muon reconstruction is crucial for neutrino telescopes to improve experimental sensitivity and enable online triggering. This paper introduces a hybrid graph neural network (GNN) method tailored for efficient muon track reconstruction, leveraging the robustness of GNNs alongside traditional physics-based approaches. The "light GNN model" achieves a run-time of 0.19-0.29 ms per event on GPUs, offering a three-orders-of-magnitude speedup compared to traditional likelihood-based methods while maintaining high reconstruction accuracy. For high-energy muons (10-100 TeV), the median angular error is approximately 0.1°, with errors in the reconstructed Cherenkov photon emission positions below 3-5 m, depending on the GNN model used. Furthermore, the semi-GNN method offers a mechanism to assess the quality of event reconstruction, enabling the identification and exclusion of poorly reconstructed events. These results establish the GNN-based approach as a promising solution for next-generation neutrino telescope data reconstruction.

In this paper we present updated constraints on the top-quark sector of the Standard Model Effective Field Theory using data available from Tevatron, LEP and the LHC. Bounds are obtained for the Wilson coefficients from a global fit including the relevant two-fermion operators, four-quark operators and two-quark two-lepton operators. We compare the current bounds with the prospects for the high luminosity phase of the Large Hadron Collider and future lepton colliders.

The Circular Electron-Positron Collider (CEPC), a proposed next-generation e^+e^- collider intended to enable high-precision studies of the Higgs boson and potential new physics, imposes rigorous demands on detector technologies, particularly the vertex detector. JadePix-3 is a prototype Monolithic Active Pixel Sensor (MAPS) designed for the CEPC vertex detector. This paper presents a detailed laboratory-based characterization of the JadePix-3 sensor, focusing on the previously under-explored effects of substrate reverse bias voltage on key performance metrics: charge collection efficiency, average cluster size, and laser hit efficiency. Systematic testing demonstrated that JadePix-3 operates reliably under reverse bias, exhibiting a reduced input capacitance, an expanded depletion region, enhanced charge collection efficiency, and a lower fake-hit rate. These findings confirm the sensor's potential for high-precision particle tracking and vertexing at the CEPC, while offering valuable references for future iterative R&D on the JadePix series.

New heavy resonances with sizeable couplings to top quarks can be probed through searches for beyond-the-Standard-Model effects in four-top production at the LHC. In this work, we present the first next-to-leading-order QCD predictions for the full on-shell and off-shell production of four-top events via new electroweak singlet states, along with dedicated analysis strategies based on the reconstruction and tagging of all final-state top quarks. We develop a detector-level simulation incorporating recent advances in top-tagging and boosted object reconstruction. Moreover, we demonstrate that searches during LHC Run 3 and the high-luminosity phase in the zero-lepton, one-lepton, and same-sign di-lepton channels can improve the sensitivity to the new physics cross sections by up to two orders of magnitude. In particular, colour-octet resonances with masses up to 2-2.5 TeV and colour-singlet states with masses up to 1-1.5 TeV are within reach for coupling values in the 0.1-1 range.

Collider processes at the highest available partonic center-of-mass energies - 10 TeV and above - exhibit a new regime of electroweak interactions where electroweak gauge bosons mostly act as quasi-massless partons in vector boson fusion processes. We scrutinize these processes using the Equivalent Vector boson Approximation (EVA) based on its implementation in the Monte Carlo generator framework Whizard. Using a variety of important physics processes, including top pairs, Higgs pairs, neutrino pairs, and vector boson pairs, we study the behavior of processes initiated by transverse and longitudinal vector bosons, both W and Z induced. By considering several distributions for each process, we conclude that there is no universal, process-independent prescription which minimizes the discrepancies between EVA- and matrix-element-based predictions; that even by resorting to process-by-process prescriptions, we typically observe significant observable-dependent effects; and that the uncertainties associated with parameter dependencies in the EVA can be as large as \mathcal{O}(100\%), and can only possibly be reduced by careful process-dependent kinematical selections.

In this article, we summarise the recent experimental measurements and theoretical work on Higgs boson production via vector-boson fusion at the LHC. Along with this, we provide state-of-the-art predictions at fixed order as well as with parton-shower corrections within the Standard Model at 13.6 TeV. The results are presented in the form of multi-differential distributions as well as in the Simplified Template Cross Section bins. All materials and outputs of this study are available on public repositories. Finally, following findings in the literature, recommendations are made to estimate theoretical uncertainties related to parton-shower corrections.

Production of the High Granularity Timing Detector for the ATLAS experiment at the High Luminosity LHC requires over 21000 silicon sensors based on Low Gain Avalanche Diode (LGAD) technology. Their radiation hardness is monitored as a part of the production quality control. Dedicated test structures from each wafer are irradiated with neutrons, and a fast and comprehensive characterization is required. We introduce a new test method based on the Transient Current Technique (TCT) performed in the interface region of two LGAD devices. The measurement enables extraction of numerous sensor performance parameters, such as the LGAD gain layer depletion voltage, the dependence of the LGAD gain on bias voltage, the sensor leakage current, and the effective interpad distance. Complementary capacitance-voltage measurements and charge collection measurements with ^{90}Sr on the same samples have been performed to calibrate the TCT results in terms of charge collection and to define acceptance criteria for wafer radiation hardness in the ATLAS-HGTD project.

We study axion-like particles (ALPs) in beam dump experiments, focusing on the Search for Hidden Particles (SHiP, at CERN) experiment and the Beam Dump eXperiment (BDX, at JLab). Many existing projections for sensitivity to ALPs in beam dump experiments have focused on production from either the primary proton/electron beam, or - in the case of SHiP - the secondary (high-energy) photons produced by neutral meson decays (e.g., \pi^0 \rightarrow \gamma\gamma). In this work, we study the subsequent production of axions from the full electromagnetic shower in the target, finding order-of-magnitude enhancements in the visible decay yields across a wide range of axion masses. We update SHiP's sensitivity curve and provide new projections for BDX. Both experiments will be able to reach currently unexplored regions of ALP parameter space.

Unfolding, for example of distortions imparted by detectors, provides suitable and publishable representations of LHC data. Many methods for unbinned and high-dimensional unfolding using machine learning have been proposed, but no generative method scales to the several hundred dimensions necessary to fully characterize LHC collisions. This paper proposes a 3-stage generative unfolding framework that is capable of unfolding several hundred dimensions. It effectively unfolds the jet-level kinematics as well as the full substructure of light-flavor jets and of top jets, and it is the first generative unfolding study to achieve high precision on high-dimensional jet substructure.
