Conveners
Afternoon Talks
- Huilin Qu (CERN)
- Cheng-Wei Chiang (National Taiwan University)
- Manqi Ruan (Institute of High Energy Physics (IHEP), CAS)
Advanced boosted-jet taggers, such as those for H->bb/cc or t/W/Z tagging, have significantly enhanced the sensitivity of many analyses at the LHC. Recently, an inclusive pre-trained large-R jet model has been successfully deployed in the CMS experiment. In this talk, we discuss two novel opportunities enabled by this technique. Firstly, the novel X->cb tagging technique can be utilized to...
Searches for SM and BSM physics via Lorentz-boosted jets are a key focus at the LHC, yet much of the potential phase space remains underexplored. In this talk, we first present the recent transformer-based tagger developed within CMS for SM H→WW* decays and demonstrate its superior performance. Building upon this, we introduce the next-generation Global Particle Transformer 3 (GloParT-3), designed to...
This talk presents long-lived particle (LLP) searches in Higgs decays at future lepton colliders (e+e−→ZH) using deep learning techniques. Scanning LLP lifetimes from 0.001 to 100 ns and masses from 1 to 50 GeV, we find that the best sensitivity is achieved at 50 GeV and 1 ns, where deep neural networks, including CNNs and GNNs, reach up to 99% signal efficiency with zero Standard Model...
In the Two-Higgs-Doublet Model (2HDM) Type-I, setting the coupling between a light CP-even Higgs boson and fermions to zero introduces a fermiophobic Higgs h_f, which dominantly decays as h_f → γγ. Searching for h_f with a mass below 10 GeV presents a challenge, as conventional isolated-diphoton methods become ineffective. This is due to the h_f → γγ decay producing highly collimated diphotons...
We propose a novel approach to more advanced analysis of jets, which are crucial objects of study in high-energy particle physics experiments. While conventional methods often treat jets as point clouds, our work focuses on the binary tree structure obtained during clustering and explores ways to handle its constituents (such as tracks) using language models from natural language processing....
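The abstract does not specify how the clustering tree is fed to a language model; one natural possibility is to serialize the binary tree into a token sequence, in the way a sentence is tokenized. The bracket encoding and pT-binned leaf tokens below are illustrative assumptions, not the scheme used in the talk:

```python
# Minimal sketch: serialize a binary jet-clustering tree into a token
# sequence that a sequence model (e.g. a transformer LM) could consume.
# Bracket encoding and pT-binned tokens are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    pt: float                      # transverse momentum of the (pseudo)jet
    left: Optional["Node"] = None  # children from the clustering history
    right: Optional["Node"] = None

def tokenize(node: Node, n_bins: int = 8, pt_max: float = 100.0) -> list:
    """Pre-order traversal with brackets; leaves become discrete pT tokens."""
    if node.left is None and node.right is None:
        b = min(int(node.pt / pt_max * n_bins), n_bins - 1)
        return [f"pt{b}"]
    return ["("] + tokenize(node.left) + tokenize(node.right) + [")"]

# Toy tree: two tracks clustered first, then merged with a third
tree = Node(90.0,
            Node(60.0, Node(40.0), Node(20.0)),
            Node(30.0))
print(tokenize(tree))  # ['(', '(', 'pt3', 'pt1', ')', 'pt2', ')']
```

The resulting token sequence preserves the full clustering history, so standard sequence-model machinery applies unchanged.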
Data processing and analysis are among the main challenges at HEP experiments; a single physics result typically takes more than three years to produce. To accelerate physics analysis and drive new physics discoveries, the rapidly developing Large Language Model (LLM) is among the most promising approaches: it has demonstrated astonishing capabilities in the recognition and generation of text while...
As the Large Hadron Collider (LHC) generates hundreds of petabytes of data and even more with its high-luminosity upgrade, particle physics is entering a new era of data-driven discovery where machine learning (ML) techniques play a pivotal role. Alongside numerous task-specific ML algorithms, recent works have introduced foundation models excelling across diverse applications. At the heart of...
We develop diffusion models for simulating lattice gauge theories, where stochastic quantization is explicitly incorporated as a physical condition for sampling. We demonstrate the application of this novel sampler to U(1) gauge theory in two spacetime dimensions and find that a model trained at a small inverse coupling constant can be extrapolated to larger inverse coupling regions without...
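As background to the sampler, stochastic quantization generates field configurations by a Langevin equation in fictitious time, dθ/dτ = -∂S/∂θ + √2 η. The sketch below applies plain Euler-Maruyama Langevin sampling (not the diffusion model of the talk) to 2D compact U(1) with the Wilson action; the lattice size, β, and step size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
L, beta, eps = 16, 1.0, 0.01

# Link angles theta[mu, x, y] for compact U(1) on a periodic 2D lattice,
# Wilson action S = -beta * sum_plaquettes cos(theta_P).
theta = np.zeros((2, L, L))

def plaq(theta):
    t0, t1 = theta
    # theta_P(x,y) = t0(x,y) + t1(x+1,y) - t0(x,y+1) - t1(x,y)
    return t0 + np.roll(t1, -1, axis=0) - np.roll(t0, -1, axis=1) - t1

def drift(theta):
    s = np.sin(plaq(theta))
    # dS/dtheta per link: each link enters two plaquettes with opposite signs
    d0 = beta * (s - np.roll(s, 1, axis=1))
    d1 = beta * (np.roll(s, 1, axis=0) - s)
    return np.stack([d0, d1])

# Stochastic quantization: Langevin evolution in fictitious time tau,
# theta -> theta - eps * dS/dtheta + sqrt(2 eps) * Gaussian noise
meas = []
for step in range(4000):
    noise = rng.normal(size=theta.shape)
    theta = theta - eps * drift(theta) + np.sqrt(2 * eps) * noise
    if step >= 2000:  # discard the first half as equilibration
        meas.append(np.mean(np.cos(plaq(theta))))

avg_plaq = np.mean(meas)
# Exact 2D result: <cos theta_P> = I_1(beta)/I_0(beta) ≈ 0.446 at beta = 1
print(round(avg_plaq, 3))
```

The known exact average plaquette in 2D provides a check; a diffusion model trained on such Langevin dynamics plays the role of a learned, accelerated version of this update.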
How do AI and Hamiltonian Mechanics drive each other’s advancement, enabling stronger predictions and offering deeper insights? In this presentation, we explore how AI can not only predict but also understand Hamiltonian dynamics. First, we introduce a robust long-term prediction framework that combines an improved Hamiltonian Neural Network with Bayesian data assimilation. This method...
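The abstract does not detail the improved HNN architecture, but the core Hamiltonian Neural Network idea — learn a Hamiltonian whose symplectic gradients reproduce observed time derivatives — can be sketched with a linear-in-parameters model. The pendulum data, basis functions, and distractor term here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: pendulum phase-space samples, true H(q, p) = p^2/2 + (1 - cos q)
q = rng.uniform(-2.0, 2.0, size=500)
p = rng.uniform(-2.0, 2.0, size=500)
dq_dt = p            # Hamilton's equations:  dq/dt =  dH/dp
dp_dt = -np.sin(q)   #                        dp/dt = -dH/dq

# Linear-in-parameters "Hamiltonian network": H_theta = theta . phi(q, p),
# with basis phi = [p^2/2, 1 - cos q, q^2/2] (the last is a distractor term).
def dphi_dp(q, p):
    return np.stack([p, np.zeros_like(q), np.zeros_like(q)], axis=1)

def dphi_dq(q, p):
    return np.stack([np.zeros_like(q), np.sin(q), q], axis=1)

# HNN training objective: the symplectic gradients of H_theta must match the
# observed time derivatives. For a linear model this is a least-squares fit.
A = np.vstack([dphi_dp(q, p), -dphi_dq(q, p)])
b = np.concatenate([dq_dt, dp_dt])
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(theta, 3))  # ~[1, 1, 0]: kinetic + potential terms recovered
```

Because the learned model is a Hamiltonian rather than a generic vector field, long-term rollouts conserve energy by construction, which is what makes the framework attractive for prediction.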
In this talk we describe LeStrat-Net, a new algorithm for Monte Carlo integration using Lebesgue-style stratified sampling and machine learning. We divide the domain of integration based on the height of the integrand, similar to Lebesgue integration. The isocontours of the integrand can in principle create regions of any shape and with many disconnected subregions. We take advantage of...
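As a rough illustration of the Lebesgue-style idea (not LeStrat-Net itself, whose machine-learned region boundaries the abstract only alludes to), one can stratify a toy integrand by height thresholds, estimate each stratum's volume with a pilot run, and then spend a fixed sampling budget inside every stratum. The integrand, thresholds, and sample sizes below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
THRESHOLDS = np.array([0.05, 0.3, 0.7])  # Lebesgue-style height levels

def f(x):
    # sharply peaked test integrand on [0, 1]^2; exact integral ≈ 0.0628
    return np.exp(-50.0 * np.sum((x - 0.5) ** 2, axis=-1))

def lestrat_estimate(n_pilot=20000, n_per_stratum=5000, d=2):
    n_strata = len(THRESHOLDS) + 1
    # Pilot phase: estimate the volume fraction p_k of each height stratum
    fx = f(rng.random((n_pilot, d)))
    p = np.bincount(np.searchsorted(THRESHOLDS, fx), minlength=n_strata) / n_pilot

    # Stratified phase: fixed budget in every stratum (simple rejection
    # sampling stands in for a learned region classifier)
    total = 0.0
    for k in range(n_strata):
        vals = np.empty(0)
        while vals.size < n_per_stratum:
            fx = f(rng.random((20000, d)))
            vals = np.concatenate([vals, fx[np.searchsorted(THRESHOLDS, fx) == k]])
        total += p[k] * vals[:n_per_stratum].mean()  # vol_k x in-stratum mean
    return total

estimate = lestrat_estimate()
print(round(estimate, 4))  # close to the exact value 0.0628
```

Since the integrand varies little within each height stratum, the in-stratum means have low variance; the hard part, which the talk addresses with machine learning, is characterizing the arbitrarily shaped, possibly disconnected regions between isocontours.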
In high-energy physics, calculating Feynman integrals efficiently remains challenging due to the computational demands of traditional methods. We are developing a new approach relying on code generation capabilities of large language models to optimize integration-by-parts (IBP) reduction by combining advanced techniques with classical algorithms. This method shows significant improvements in...