The Standard Model (SM) of particle physics describes the fundamental constituents and interactions of Nature. Matter consists of quarks and leptons (fermions) which interact via the exchange of force particles (gauge bosons). The SM has been tested to very good accuracy, both in high-energy searches at the Large Hadron Collider (LHC) at CERN and in low-energy precision experiments. However, it is well known that it cannot be the ultimate theory of Nature, since it fails to explain observations like Dark Matter, Dark Energy, neutrino masses or the presence of more matter than antimatter in the Universe. The goal of our research is to construct and study models of physics beyond the SM.
Direct and Indirect Hints for Physics Beyond the Standard Model
The Large Hadron Collider (LHC) at CERN recently found several hints of so-far-unknown particles.
These include new strongly interacting particles that decay into quarks, resulting in signatures called "jets" at the LHC [1]. Indirect tests of the SM also show signs of new physics, including the measurement of the W boson mass, which is predicted very precisely within the SM. Here, we pointed out that this measurement could be explained by new heavy quarks, leading to exotic decays of the top quark [2]. Furthermore, there might be a connection between the W mass and the tensions observed in decays of quark bound states containing a heavy bottom quark [3].
Figure: Global fit to EW precision observables, neutrino trident production, LEP bounds on 4-lepton contact interactions and τ → μνν data (orange) and b → sll data (blue) in the g′ − sin ξ plane for mZ' = 3 TeV. One can see that both regions overlap nicely and that a non-zero value of the mixing angle is preferred (from [3]).
Highlighted Publications:
1. Consistency and Interpretation of the LHC (Di-)Di-Jet Excess,
A. Crivellin, C. A. Manzari, B. Mellado, S. E. Dahbi and A. K. Swain, arXiv:2208.12254 [hep-ph]
2. Large t→cZ as a sign of vectorlike quarks in light of the W mass,
A. Crivellin, M. Kirk, T. Kitahara and F. Mescia, Phys. Rev. D 106 (2022) no.3, L031704 doi:10.1103/PhysRevD.106.L031704
3. Unified explanation of the anomalies in semileptonic B decays and the W mass,
M. Algueró, J. Matias, A. Crivellin and C. A. Manzari, Phys. Rev. D 106 (2022) no.3, 033005 doi:10.1103/PhysRevD.106.033005
Our research group focuses on precision calculations for collider observables within the Standard Model and their application in the interpretation of experimental data. We develop novel techniques and computer algebra tools that enable analytical calculations in perturbative quantum field theory and help to unravel the underlying mathematical structures. We implement our results into numerical parton-level event generator programs, which are flexible tools that take proper account of the details of experimental measurements, enabling precision theory to be confronted directly with the data.
Fiducial cross sections in Drell-Yan type processes
The production of lepton pairs (Drell–Yan process) at hadron colliders is mediated through on-shell and off-shell electroweak vector boson production. It is a fundamental benchmark for the study of strong interactions and the extraction of electroweak parameters. The outstanding precision of the LHC demands very accurate theoretical predictions with a full account of fiducial experimental cuts.
Predictions for fiducial cross sections, which include realistic selection cuts on the leptons and on the hadronic activity in the event, can be compared directly to experimental observations. Our group is currently developing methods and tools to perform fully differential calculations of fiducial cross sections with QCD corrections expanded up to third order (N3LO) in perturbation theory.
Fully differential predictions at higher orders in perturbation theory require special treatment for the cancellation of infrared singularities that appear at intermediate stages of the calculation. The qT-subtraction method exploits the fact that the most singular phase space configurations are associated with the small transverse momentum (qT) region of the lepton pair. They can be isolated numerically by an artificial qT cut. The below-cut region is well understood from resummation and can be obtained in analytical form. We extended the qT-subtraction method to N3LO in QCD by computing the below-cut contribution to this order and matching it to a numerical calculation of the above-cut contribution, which corresponds to Drell-Yan-plus-jet production at second order (next-to-next-to-leading order, NNLO). A particular strength of the method is that its predictions can be combined with the resummation of large logarithmic corrections to all orders in perturbation theory in a straightforward manner.
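Schematically, the structure described above can be summarised as follows (a simplified sketch; the precise expressions are given in the publications below):

\[
  \sigma^{\mathrm{N^3LO}}
  \;=\; \int_{q_T < q_T^{\mathrm{cut}}} \mathrm{d}\sigma
  \;+\; \int_{q_T > q_T^{\mathrm{cut}}} \mathrm{d}\sigma^{\mathrm{NNLO}}_{\mathrm{DY+jet}}
  \;+\; \mathcal{O}\!\left((q_T^{\mathrm{cut}})^{n}\right),
\]

where the below-cut contribution is obtained analytically from the expansion of the qT resummation to third order, the above-cut contribution is computed numerically as Drell-Yan-plus-jet production at NNLO, and the residual power corrections vanish as the technical cut is lowered.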
Figure: Fiducial pT(ll) distribution at N3LO+N3LL (blue, solid) and NNLO+NNLL (red, dotted) compared to LHC data.
An example application is the transverse momentum distribution of the lepton pair (figure), which we computed to N3LO matched to resummation at the third logarithmic order (N3LL). Our results provide an excellent description of the data across the spectrum and are accurate to the one-percent level, as required for precision phenomenology. The implementation of our results into a parton-level event generator makes it possible to compute any distribution in the lepton kinematics, such as the transverse mass distribution relevant to the W boson mass determination.
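For reference, the transverse mass used in W boson mass determinations is conventionally built from the charged-lepton and missing transverse momenta,

\[
  m_T \;=\; \sqrt{\,2\, p_T^{\ell}\, p_T^{\mathrm{miss}}\left(1 - \cos\Delta\phi\right)\,},
\]

where Δφ is the azimuthal angle between the lepton and the missing transverse momentum.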
Computing more complex final states to N3LO in QCD will however require substantial advances in concepts, algorithms and techniques for perturbative calculations. The major challenges associated with this endeavour are being addressed by our group in the framework of the ERC Advanced Grant "Theory of Particle Collider Processes at Ultimate Precision" (TOPUP).
Highlighted Publications:
1. Dilepton Rapidity Distribution in Drell-Yan Production to Third Order in QCD,
X. Chen et al., Phys. Rev. Lett. 128 (2022) 052001.
2. Third-Order Fiducial Predictions for Drell-Yan Production at the LHC,
X. Chen et al., Phys. Rev. Lett. 128 (2022) 252001.
Our research activity is focused on the phenomenology of particle physics at high-energy colliders. We perform accurate theoretical calculations for benchmark processes at the Large Hadron Collider and make their results fully available to the community. We strive to develop flexible numerical tools that can be used to perform these calculations with the specific selection cuts used in the experimental analyses. Our projects span a wide range of processes, from vector-boson pair production to heavy-quark and jet production, to Higgs boson studies within and beyond the Standard Model.
Hadroproduction of a heavy-quark pair in association with a massive boson
Processes characterised by a 2 → 3 reaction at Born level are the current frontier for perturbative computations at next-to-next-to-leading order (NNLO) in QCD. Recently our group has presented two (almost) exact computations of the NNLO QCD corrections for the associated production of a Higgs boson with a top-antitop quark pair [1] and for the hadroproduction of a W boson in association with a massive bottom quark-antiquark pair [2]. The associated production of a Higgs boson with a tt̄ pair is a crucial process at the LHC since it allows for a direct measurement of the top-quark Yukawa coupling. At present ATLAS and CMS measure the signal strength in this channel to an accuracy of O(20%), but by the end of the High-Luminosity phase the uncertainties are expected to go down to the O(2%) level. Current theory predictions are characterised by O(10%) uncertainties. Therefore, in order to match the expected experimental accuracy, the inclusion of NNLO QCD corrections is mandatory. In our work [1] the calculation of the NNLO QCD corrections is complete except for the finite part of the two-loop virtual amplitude, which is estimated with a soft Higgs boson approximation. Our approximation allows us to control the NNLO tt̄H cross section to better than 1%, reducing the QCD perturbative uncertainties to the few-percent level over a wide range of collider energies (see figure).
Figure: LO, NLO and NNLO cross sections with their perturbative uncertainties as functions of the centre-of-mass energy. The experimental results from ATLAS and CMS at √s = 13 TeV are also shown.
The hadroproduction of a W boson in association with a bb̄ pair constitutes one of the main backgrounds for WH and single-top production. The current predictions include NLO QCD corrections, which were found to be very large. In order to avoid the ambiguities related to the use of flavoured jet algorithms, massive bottom quarks have to be considered. In our work we compute the NNLO corrections by retaining the exact dependence on the b-quark mass in all contributions but the two-loop virtual amplitude. The finite part of the two-loop virtual amplitude is reconstructed starting from the massless result and includes mass effects up to power-suppressed terms. The NNLO corrections to inclusive and fiducial cross sections are sizeable, and the perturbative series starts to converge only once these corrections are included.
Highlighted Publications:
1. tt̄H production in NNLO QCD, S. Catani et al.,
Phys. Rev. Lett. 130 (2023) no.11, 111902 doi:10.1103/PhysRevLett.130.111902
2. Associated production of a W boson and massive bottom quarks at next-to-next-to-leading order in QCD, L. Buonocore et al., arXiv:2212.04954
The Standard Model of fundamental interactions describes the nature of the basic constituents of matter, the so-called quarks and leptons, and the forces through which they interact. This theory is very successful in laboratory experiments over a wide range of energies. However, it fails to explain cosmological phenomena such as dark matter and dark energy. It also leaves basic questions unanswered, such as why we observe three almost identical replicas of quarks and leptons, which differ only in their mass. Finally, it gives rise to conceptual problems when extrapolated to very high energies, where quantum effects in gravitational interactions become relevant. The goal of our research activity is to formulate extensions of this theory that can solve its open problems, identifying ways to test the new hypotheses about fundamental interactions in future experiments.
Probing new interactions via flavour-changing transitions
One of the key predictions of the Standard Model (SM) is that quarks and leptons appear in three replicas (denoted generations, or flavours) that behave in exactly the same manner under the known microscopic forces and differ only in their mass (or, better, in their interaction with the Higgs field). Why we have three almost identical replicas of quarks and leptons, and what the origin of their different interactions with the Higgs field is, are among the big open questions in particle physics. The peculiar structure of quark and lepton masses, which exhibits a strongly hierarchical pattern, is very suggestive of some underlying new dynamics that we have not yet identified. The main goal of our research activity in the last few years has been to understand the nature of this dynamics. To achieve this goal, we proceed along three complementary research directions:
1) we build explicit extensions of the SM that can explain the observed pattern of quark and lepton masses, possibly addressing also other shortcomings of the SM (in particular the instability of the Higgs sector);
2) we investigate the consistency of the new hypothesized interactions with current data, particularly on rare flavour-changing transitions;
3) we perform detailed predictions, according to the new hypotheses, in view of future experiments.
Figure: Preferred region for the mass and couplings (in orange) of a new particle (the leptoquark) predicted assuming quarks and leptons are different manifestations of the same underlying field. The gray areas denote the current exclusion bounds from high-energy experiments; the green lines indicate their future sensitivity.
Over the past year, we have worked mainly along the second and third directions. First, we improved the theoretical description of radiative corrections in rare B-meson decays. Applying these results to the analysis of current data from the LHCb experiment, we were able to place stringent constraints on the parameter space of a motivated SM extension. The latter is based on the interesting hypothesis that quarks and leptons are two manifestations of the same underlying field. We also made accurate predictions for processes occurring at high energies that will be measured in the coming years by ATLAS and CMS, showing that these future measurements could verify or disprove the validity of this hypothesis.
Highlighted Publications:
1. Flavor hierarchies, flavor anomalies, and Higgs mass from a warped extra dimension, J. Fuentes-Martin et al.,
Phys. Lett. B 834 (2022) 137382, arXiv:2203.01952
2. Semi-inclusive Lepton Flavor Universality ratio in b → s ll transitions, M. Ardu, G. Isidori, and M. Pesut,
Phys. Rev. D 106 (2022) 093008, arXiv:2207.12420
3. QED in B → K ll LFU ratios: theory versus experiment, a Monte Carlo study, G. Isidori, D. Lancierini, S. Nabeebaccus, R. Zwicky,
JHEP 10 (2022), 146, arXiv:2205.08635
Our research deals with the development of automated methods for the simulation of scattering processes in quantum field theory. The OpenLoops algorithm, developed in our group, is one of the most widely used programs for the calculation of scattering amplitudes at the LHC. This tool is applicable to arbitrary collider processes up to high particle multiplicity and can account for the full spectrum of first-order quantum effects induced by strong and electroweak interactions.
Currently, new automated methods for second-order quantum effects are under development. Our phenomenological interests include topics like the strong and electroweak interactions of heavy particles at the TeV scale, or theoretical challenges related to the extraction of rare Higgs-boson and dark-matter signals in background-dominated environments.
Rational terms of scattering amplitudes at two loops
Recently we took an important step towards the extension of the OpenLoops algorithm from first-order to second-order quantum corrections. Such corrections involve the exchange of virtual quanta with one or two unconstrained momenta, which gives rise to so-called one- and two-loop integrals. Due to the presence of ultraviolet singularities, loop integrals are typically evaluated in D = 4 − ε space-time dimensions, where ε is an infinitesimally small parameter. In this way the singularities assume the form of 1/ε poles and can be cancelled through the so-called renormalisation procedure. Finally, physical predictions are obtained by setting ε → 0. In this limit, the interplay of 1/ε poles with infinitesimally small terms of order ε gives rise to subtle contributions, which are known as rational terms and play an important role for the automation of loop calculations.
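A minimal illustration of this mechanism (purely schematic, not taken from a specific amplitude): if a divergent integral produces a pole multiplying an ε-suppressed piece of the numerator algebra,

\[
  \frac{1}{\epsilon}\left(A + \epsilon\, B\right) \;=\; \frac{A}{\epsilon} + B ,
\]

then, after the pole is removed by renormalisation, the finite remainder B survives in the limit ε → 0. Such contributions would be missed by a purely four-dimensional numerical evaluation and must be restored as rational terms.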
So far, automated algorithms exist only at one loop. In this case the most powerful approach has turned out to be the combination of numerical algorithms in D = 4 dimensions with special techniques for the reconstruction of the missing rational terms. In recent years, as a basis for automated two-loop algorithms, we have developed a fully-fledged theoretical framework to control rational terms at two loops.
Figure: Example of Feynman diagrams describing second-order quantum corrections to the interaction of photons (wavy lines) with electrons and positrons (solid lines). Two-loop diagrams (left) can be computed using numerical tools in D = 4 dimensions, while missing contributions in D = 4 − ε dimensions are reconstructed by means of one-loop (orange) and two-loop (yellow) rational counterterms. This approach is applicable to any scattering process.
In this approach, the standard procedure for the subtraction of ultraviolet singularities is supplemented by rational counterterms, which represent universal corrections to the Feynman rules that control the fundamental interactions of elementary particles and their propagation in the vacuum. Such rational counterterms can be determined for any theoretical model and, once available, they can be used to reconstruct the missing rational parts for any scattering process at two loops.
In [1] we have addressed the non-trivial problem of deriving the two-loop rational counterterms for theories that feature spontaneous symmetry breaking. To this end, we have presented a new method that makes it possible to carry out all calculations in the symmetric phase of the theory at hand. The high efficiency of this approach opens the door to the determination of two-loop rational counterterms for the full Standard Model of particle physics. As a first application we have derived all rational terms of the Standard Model at second order in the strong coupling constant.
These results provide an important building block for a new generation of automated algorithms for precision calculations at high-energy colliders.
Highlighted Publications:
1. Two-loop rational terms for spontaneously broken theories, J.-N. Lang, S. Pozzorini, H. Zhang, M. Zoller, JHEP 01 (2022) 105, arXiv:2107.10288
Particle physics at low energy but high intensity provides an alternative road towards a better understanding of the fundamental constituents of matter and their interactions. Using the world's most intense muon beam at PSI makes it possible to look for tiny deviations from the Standard Model or for extremely rare decays. Our group provides theory support for such experiments by computing higher-order corrections in Quantum Electrodynamics (QED) to scattering and decay processes and by systematically analysing the impact of experimental bounds on scenarios of physics beyond the Standard Model. These calculations are also adapted to experiments performed at other facilities with lepton beams.
Muon-electron scattering at NNLO
Our group has set up McMule (Monte Carlo for Muons and other LEptons), a generic framework for higher-order QED calculations of scattering and decay processes involving leptons. This framework properly treats infrared singularities when combining loop amplitudes and makes it possible to obtain fully differential cross sections at any order in QED perturbation theory with massive fermions. The long-term goal is to provide a library of relevant processes with sufficient precision, typically at next-to-next-to-leading order (NNLO) in the perturbative expansion. The code is public and the current version is available at https://gitlab.com/mule-tools/mcmule.
After the implementation of several processes at next-to-leading order (NLO), recently we have calculated the complete NNLO corrections to Møller and muon-electron scattering. The latter process is important in connection with the planned MUonE experiment that aims at an independent determination of the leading hadronic contributions to the anomalous magnetic moment of the muon.
In QED it is important to keep the fermion masses at their physical values rather than setting them to zero. This makes it possible to compute contributions with large mass logarithms, which often produce the dominant part of the corrections in QED. This is in contrast to similar calculations in the context of Quantum Chromodynamics, where observables are typically more inclusive, such that these logarithms cancel.
A reliable numerical evaluation, in particular of the real-virtual contribution, relies on OpenLoops, in combination with a sufficiently precise approximation of the matrix element in the delicate soft and collinear regions. To this end, we extended the Low-Burnett-Kroll theorem and the collinear approximation to one-loop amplitudes where a photon is emitted from a massive fermion line. We have derived a universal structure for these limiting behaviours and use it to ensure a numerically stable evaluation of the real-virtual corrections.
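For illustration, the leading term of the soft expansion that underlies this construction factorises the radiative amplitude into an eikonal factor times the non-radiative amplitude (a schematic sketch; signs and conventions depend on whether particles are incoming or outgoing, and the Low-Burnett-Kroll theorem adds the subleading term in the photon energy):

\[
  \mathcal{A}_{n+1}(k) \;\simeq\; e \sum_i \eta_i\, Q_i\, \frac{\varepsilon \cdot p_i}{k \cdot p_i}\; \mathcal{A}_n \;+\; \mathcal{O}(k^0),
\]

where k and ε are the photon momentum and polarisation, p_i and Q_i the fermion momenta and charges, and η_i = ±1 distinguishes incoming from outgoing particles.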
Figure: Representative contributions to the squared amplitude for muon-electron scattering, resulting in double-real (rr), real-virtual (rv), and double-virtual (vv) contributions.
Highlighted Publications:
1. Møller scattering at NNLO, P. Banerjee et al., Phys. Rev. D 105 (2022) L031904, doi:10.1103/PhysRevD.105.L031904
2. Muon-electron scattering at NNLO, A. Broggio et al., JHEP 01 (2023), 112 doi:10.1007/JHEP01(2023)112
3. Universal structure of radiative QED amplitudes at one loop, T. Engel, A. Signer and Y. Ulrich, JHEP 04 (2022), 097 doi:10.1007/JHEP04(2022)097
The research of our group is focused on indirect searches for physics beyond the Standard Model and the theoretical challenges at the precision frontier: these concern the model-independent description of non-perturbative effects due to the strong interaction at low energies as well as higher-order perturbative effects that can be described within effective field theories.
Our current research activity is mainly motivated by experimental progress at the low-energy precision frontier, such as searches for CP- or lepton-flavor-violating observables and the improved measurement of the muon anomalous magnetic moment.
Despite its success, the Standard Model (SM) of particle physics fails to explain certain observations, such as the baryon asymmetry in the universe, dark matter, or neutrino masses. Our group is interested in indirect searches for physics beyond the SM, conducted in low-energy experiments at very high precision. These observables pose interesting theoretical challenges concerning the model-independent description of effects beyond the SM, as well as non-perturbative effects due to the strong nuclear force.
CP and lepton-flavor violation
Beyond-the-SM sources of CP or lepton-flavor violation are probed up to very high scales by searches for electric dipole moments (EDMs) or lepton-flavor-violating decay processes, e.g., in the upcoming n2EDM and Mu3e experiments at PSI. We are interested in non-perturbative effects that affect these observables at low energies. Their description is based on effective field theories (EFTs) and usually requires input from lattice QCD.
Our group is working on the one-loop matching between the MS-bar scheme used in EFTs and a gradient-flow scheme that can be implemented in lattice QCD. We recently obtained results for all dimension-five operators and are extending this work to the CP-odd dimension-six operators. The results will enable the use of future lattice-QCD input for an accurate determination of the EFT contributions to the neutron EDM, which encode effects beyond the SM.
Anomalous magnetic moment of the muon
The 4.2σ discrepancy between the SM prediction of the anomalous magnetic moment of the muon and the experimental value is challenged by a conflict between data-driven and recent lattice-QCD evaluations of hadronic vacuum polarization. In order to scrutinize these different discrepancies, we have provided data-driven evaluations of Euclidean window quantities, which can be compared to lattice-QCD computations, and we have investigated isospin-breaking effects in the two-pion contribution, revealing systematic differences between different e+e− experiments.
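For orientation, the window quantities are conventionally defined in the time-momentum representation through smooth step functions in Euclidean time (we quote the standard form used for lattice comparisons; conventions and overall factors as in Ref. [2]),

\[
  a_\mu^{\mathrm{win}} \;=\; \int_0^\infty \mathrm{d}t\; \tilde f(t)\, G(t)
  \left[\Theta(t;t_0,\Delta) - \Theta(t;t_1,\Delta)\right],
  \qquad
  \Theta(t;t',\Delta) \;=\; \frac{1}{2}\left[1 + \tanh\!\left(\frac{t-t'}{\Delta}\right)\right],
\]

where G(t) is the Euclidean correlator of the electromagnetic current, f̃(t) a known QED kernel, and (t0, t1, Δ) specify the window; in the data-driven approach these weights translate into the centre-of-mass-energy weight functions shown in the figure.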
In order to further reduce non-perturbative uncertainties, we are extending the dispersive framework for hadronic light-by-light scattering, enabling the inclusion of higher-spin resonances.
Figure: Weight functions in center-of-mass energy for different Euclidean-time windows for the hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon (from Ref.[2]).
Highlighted Publications:
1. Isospin-breaking effects in the two-pion contribution to hadronic vacuum polarization, G. Colangelo, M. Hoferichter, B. Kubis, P. Stoffer, JHEP 10 (2022) 032, arXiv:2208.08993 [hep-ph]
2. Data-driven evaluations of Euclidean windows to scrutinize hadronic vacuum polarization, G. Colangelo et al., Phys. Lett. B 833 (2022) 137313, arXiv:2205.12963 [hep-ph]
3. One-loop matching for quark dipole operators in a gradient-flow scheme, E. Mereghetti, C. J. Monahan, M. D. Rizik, A. Shindler, P. Stoffer, JHEP 04 (2022) 050, arXiv:2111.11449 [hep-lat]
The CMS (Compact Muon Solenoid) experiment at CERN measures properties of the fundamental particles and their interactions, and can uncover new forces and particles. CMS surrounds one of the interaction points of the Large Hadron Collider (LHC), where colliding protons produce an energy density comparable to that of the universe one ten-billionth of a second after it started. The CMS detector is used to determine the energies and directions of the particles emerging from the LHC collisions of protons and heavy ions. In 2012, with 10 fb−1, CMS discovered the Higgs boson, confirming the mechanism by which particles acquire mass. CMS is also focused on detector refurbishment for the data-taking period of 2022 to 2025, and on the upgrades needed for the high-luminosity run of the LHC from 2029 to 2038.
The CMS group at UZH is strong in data analysis, focusing on the fundamental mysteries remaining in particle physics. We are studying the Higgs boson, and also using it as a probe to look for new forces and particles. We are searching for dark matter in unexplored phase space, and we are measuring standard model processes that can elucidate rare phenomena.
2022 marks the 10th anniversary of the discovery of the Higgs boson. UZH researchers on the CMS experiment were heavily involved in a 10-year anniversary paper published in Nature, in which CMS measurements over the previous 10 years were combined into a single review paper, demonstrating our improved understanding of this fascinating particle that explains how particles acquire mass (see Fig. 1) [1].
Figure 1: This plot shows that the Higgs boson interacts with both matter and force particles, with masses ranging over three orders of magnitude, according to predictions of the standard model [1].
UZH members have also been involved in several of the analyses that were combined in this review, including the observation of the Higgs boson interacting with third-generation particles: b quarks, top quarks, and tau leptons. The current dataset of 150 fb−1 allows CMS to make precise measurements and searches for new physics.
In 2022, the UZH CMS group observed the production of tau leptons for the first time in PbPb collisions at the LHC. The tau leptons are produced by photons from the intense electromagnetic fields surrounding the Pb ions (see Fig. 2). Since these events are produced through the electromagnetic interaction, they are extremely clean, allowing the group to measure the lowest-energy tau leptons ever reconstructed by the CMS experiment. Using these events, a measurement of the anomalous magnetic moment of the tau lepton, (g-2)τ, could be performed. So far, this result agrees with the SM; however, large deviations have been observed in the measurement of (g-2)μ, and such deviations could be even larger for the τ lepton. This result was accepted by PRL with the editor's choice distinction [2].
Figure 2: In this sketch, two photons from the Pb ions interact, producing two tau leptons. The only observable particles in this interaction are the three pions from one tau lepton decay and the muon from the other tau decay [2].
In 2022, the UZH CMS group found an unexpected excess when searching for leptoquarks that interact strongly with third-generation particles (see Fig. 3). The excess was observed in events in which two tau leptons are produced. Such an excess is predicted by models from the Isidori group. This was the first LHC analysis to search for non-resonant leptoquark contributions, in which the leptoquarks are exchanged rather than produced directly. Given the size of the deviation, 3.4 standard deviations, investigations are ongoing in CMS to determine whether new physics is responsible [3].
Figure 3: The excess in data compared to the prediction is apparent at high values of the signal discriminant. Also shown is the expectation for a leptoquark signal in red.
Meanwhile, another UZH analysis, searching for a new type of particle called a vector-like lepton, found a 2.8 standard deviation excess that is also consistent with the same models from the Isidori group [5]. Follow-up studies in other final states where a signal would also be expected are ongoing.
The UZH CMS group developed a new type of analysis to search for the effects of new physics interactions by studying their off-shell effects with the effective field theory (EFT) approach. The analysis focuses on new interactions that could couple top quarks to leptons, bosons, or other quarks. Twenty-six extensions of the standard model are tested for how they would modify events with top quarks. No significant modifications are currently observed; however, with future HL-LHC datasets that are ten times larger, deviations could become apparent. The search is described in a manuscript that is currently being reviewed by the collaboration [4].
Many of the UZH measurements use tau leptons to probe for new physics. To do this, it is important to identify tau leptons with high efficiency and low fake rate, and to reconstruct them with excellent energy resolution. UZH members helped develop and validate a new tau algorithm, known as DeepTau, that makes use of deep neural networks to identify tau leptons with up to a 30% reduction in fakes (other particles being misidentified as tau leptons) compared to previous approaches for tau identification. A description of the new algorithm can be found in JINST 17 (2022) P07023.
Highlighted Publications:
1. A portrait of the Higgs boson by the CMS experiment ten years after the discovery, CMS Collab., Nature 607, 60–68 (2022), https://doi.org/10.1038/s41586-022-04892-x
2. Observation of τ lepton pair production in ultraperipheral lead-lead collisions at √sNN = 5.02 TeV, CMS Collab., https://arxiv.org/abs/2206.05192, accepted by PRL
3. The search for a third-generation leptoquark coupling to a τ lepton and a b quark through single, pair and nonresonant production at √s = 13 TeV, CMS Collab., https://inspirehep.net/literature/2110188
4. Search for new physics in top quark production with additional leptons in the context of effective field theory using 138 fb−1 of proton-proton collisions at √s = 13 TeV, CMS Collab., https://inspirehep.net/literature/2642353
5. Search for pair-produced vector-like leptons in final states with third-generation leptons and at least three b quark jets in proton-proton collisions at √s = 13 TeV, CMS Collab., https://inspirehep.net/literature/2139823
More publications at: https://www.physik.uzh.ch/r/cms
LHCb is an experiment for precision measurements of observables in the decays of B mesons at the Large Hadron Collider (LHC) at CERN. We play a leading role in measurements with B meson decays and in measurements of electroweak gauge boson production, and have made important contributions to the LHCb detector. We contribute to an ongoing major upgrade of the detector for 2023 and are involved in studies for future upgrades of the experiment.
New LHCb measurements test behaviour of the third generation
Of the constraints on new physics, those from quark flavour physics, such as CP violation and mixing measurements, are particularly stringent, with mass-scale sensitivity up to 1000 TeV. The fact that the flavour sector has been consistent with the Standard Model (SM) therefore suggests that new physics is too heavy to solve the hierarchy problem. However, this mass-scale sensitivity assumes that new physics couples equally to the three generations of fermions. If the flavour structure of new physics is non-trivial, such as a hierarchical coupling to the three generations, then the largest sensitivity is in decays involving third-generation fermions such as the beauty quark and the τ lepton.
The decays b → cτντ are particularly interesting in this regard as they contain three third-generation fermions and the decay rate can be reliably predicted in the SM due to their semileptonic nature. The LHCb collaboration has recently presented a new test of these decays by comparing the decay rates of transitions involving the τ lepton and the muon via so-called lepton universality ratios, R(D) and R(D*).
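These ratios are defined as ratios of branching fractions, e.g.

\[
  R(D^{(*)}) \;=\; \frac{\mathcal{B}(B \to D^{(*)} \tau \nu_\tau)}{\mathcal{B}(B \to D^{(*)} \mu \nu_\mu)},
\]

in which many theoretical and experimental uncertainties cancel.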
The latest result from the LHCb collaboration measures the R(D0) ratio for the first time, which compares the B → Dτντ and B → Dμνμ decay rates. This particular observable is more challenging than previous LHCb measurements, as the D meson is a ground-state particle, meaning that there is much more background from excited states such as D* mesons.
The τ lepton is reconstructed via the decay τ → μνμντ, meaning that the muonic and tauonic decays have the same final state, which cancels systematic uncertainties associated with the efficiency. The main challenge is therefore to control the background, as it cannot be easily distinguished from the signal due to the presence of multiple neutrinos in the final state. The analysis is performed by fitting a three-dimensional distribution of kinematic variables which discriminate the signal from the background. Such variables include the energy of the final-state lepton and the mass of the system of missing particles. Isolation criteria against additional particles are crucial to reduce the background. These criteria are also used extensively to create control samples, which are vital to verify that the background is under control.
Figure: Measurements of R(D) and R(D*) along with the SM prediction shown in black. The single measurements of R(D*) are shown as the coloured data points, whereas the joint measurements including R(D) are shown as ellipses.
The result of the new measurement is shown in the figure, along with previous results. The measurement is shown as a purple ellipse and lies around two standard deviations from the Standard Model prediction shown in black. The combination of results, shown in red, is around three standard deviations from the SM. The discrepancy from the SM hints towards new particles at the TeV scale which preferentially couple to third-generation fermions. Clarifying this situation is one of the main goals of the UZH group in the future.
Highlighted Publications:
1. All LHCb publications: lhcb.web.cern.ch/lhcb/
2. Measurement of the ratios of branching fractions R(D*) and R(D0), LHCb Collab., arXiv:2302.02886
Our group is one of the main proponents of the SHiP (Search for Hidden Particles) experiment, a beam-dump experiment at the CERN SPS aiming to search for Feebly Interacting Particles (FIPs). These particles are predicted by several extensions of the Standard Model explaining Dark Matter.
SHiP implementation at ECN3
While there are some anomalies in flavour physics with respect to the Standard Model, it is not yet clear whether they are a genuine sign of physics beyond the Standard Model, and no new particles have been observed in direct searches at the LHC (ATLAS and CMS). For these reasons, models predicting FIPs to explain Dark Matter are now attracting considerable attention in the particle physics community. CERN is therefore considering a possible implementation of SHiP at the ECN3 experimental area, shown in the Figure, in competition with an alternative proposal. A dedicated task force has been created at CERN to evaluate the feasibility and physics case of the different proposals. The result of the process will be known by the end of 2023. Prof. Serra is coordinating the physics studies of the SHiP experiment and the group is playing a crucial role in this area.
Figure: Possible implementation of SHiP at the ECN3 experimental area.
SND@LHC is a recently approved and already running experiment at the Large Hadron Collider (LHC) performing neutrino physics and searches for feebly interacting particles. It collects man-made neutrinos at the uncharted TeV energy scale produced in pp collisions at the ATLAS interaction point.
The detector is located in the tunnel, ∼480 m away from the interaction region of the ATLAS experiment (see Figure), and covers the forward pseudo-rapidity region 7.2 < η < 8.4, where neutrinos are mostly produced from charmed hadrons. Measuring neutrinos at these unprecedented energies and rapidities provides crucial constraints on the production of heavy flavours, and in turn on the parton distribution function of the gluon at very small x values (x < 10−6), which is currently limited by large theoretical uncertainties. These improved constraints are an essential benchmark for the realisation of future colliders and the understanding of astrophysical neutrino sources.
Our group has been deeply involved in the experiment since its early days, initially contributing to the preparation of the experiment proposals and coordinating calibration test beams. It has also designed and built the detector's hadronic calorimeter and muon system.
With the start of Run 3, the group has taken a leading role in analysing the first collected data, aiming to perform the first observation of collider neutrinos at the LHC. The analysis results were recently presented at the 57th Rencontres de Moriond in 2023, and a dedicated publication will soon appear.
Figure: Location of the SND@LHC experiment in the LHC complex.
The goal of the Future Circular Collider (FCC) is to greatly push the intensity and energy frontiers of particle colliders and lay the foundations for a new research infrastructure that can succeed the LHC and serve the world-wide physics community for the rest of the 21st century. In the first stage, the FCC-ee would collide e+e− pairs in unprecedented numbers at energies between 90 and 365 GeV. Since 2021, our group has been developing tracking detectors and algorithms for the FCC-ee in collaboration with the Vrije Universiteit Brussel (Belgium), informing the FCC feasibility study ending in 2025.
The successful identification of hadronic final states is an essential ingredient in exploiting the physics potential of collider experiments. At the FCC-ee, the clean environment, free of effects such as QCD initial-state radiation and parton distribution functions, means that the identification of jet flavour (flavour tagging) is much more straightforward than at the (HL-)LHC and is thus expected to improve considerably. Strange-quark jet discrimination, for instance, would enable the first-ever study of Z → ss̄ decays, rare Higgs boson decays and the strange Yukawa coupling, CKM matrix elements via W decays, and BSM physics scenarios such as FCNCs. We have implemented a multiclassifier neural network using a transformer-based architecture [1] for flavour tagging at the FCC-ee [2]. Using a transformer architecture means that the network is highly parallelizable during training, which is of utmost importance when evaluating different potential detector configurations. The network has achieved state-of-the-art strange-quark discrimination (see figure) and we will use it as a springboard to evaluate the feasibility of novel physics measurements and the required detector performance.
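As an illustration of the idea, the following minimal sketch shows a transformer-based multiclassifier acting on per-particle jet constituents. It is not the tagger of Ref. [2]: the feature count, network dimensions and the five flavour classes are illustrative assumptions, chosen only to show how an attention-based architecture processes a variable-length set of constituents in parallel.

```python
# Minimal sketch of a transformer-based jet-flavour multiclassifier.
# Illustrative only: the feature count, model size and the five flavour
# classes (b, c, s, light, gluon) are assumptions, not the tagger of Ref. [2].
import torch
import torch.nn as nn

class JetFlavourTagger(nn.Module):
    def __init__(self, n_features=16, d_model=64, n_heads=8,
                 n_layers=4, n_classes=5):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)        # per-constituent embedding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)    # flavour logits

    def forward(self, constituents, padding_mask):
        # constituents: (batch, n_particles, n_features)
        # padding_mask: (batch, n_particles), True where a slot is padding
        x = self.embed(constituents)
        x = self.encoder(x, src_key_padding_mask=padding_mask)
        keep = (~padding_mask).unsqueeze(-1).float()       # zero out padded slots
        jet = (x * keep).sum(dim=1) / keep.sum(dim=1)      # average over real constituents
        return self.classifier(jet)

# Toy usage: a batch of 8 jets with up to 30 constituents each.
model = JetFlavourTagger()
jets = torch.randn(8, 30, 16)                              # dummy constituent features
mask = torch.zeros(8, 30, dtype=torch.bool)                # no padding in this toy batch
probs = torch.softmax(model(jets, mask), dim=-1)           # per-jet flavour probabilities
```

Because the encoder treats the constituents as an unordered set, jets with different multiplicities can be processed in the same batch via the padding mask, which is what makes training on large simulated samples efficient.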
The most crucial element of FCC-ee experiments for flavour tagging is the innermost vertex detector. To provide the best possible position and momentum resolution, the amount of material in the vertex detector has to be limited to the absolute minimum. We aim to achieve this using ultrathin Monolithic Active Pixel Sensors (MAPS) developed in a 65 nm process. We are working together with the ALICE ITS3 collaboration, IPHC Strasbourg and other partners to characterise two kinds of test structures of such pixel sensors (APTS and CE-65). This work includes calibrating the sensors in the UZH lab and studying them at test-beam facilities to discern which pixel structure best suits the FCC-ee requirements. Our group also works on implementing one of the proposed vertex detector designs in full simulation using the common key4hep/DD4hep software frameworks to study its performance. We will also compare the performance of a detector designed in the above-mentioned 65 nm technology.
Highlighted Publications:
1. Attention is all you need, A. Vaswani et al., https://arxiv.org/abs/1706.03762 (2017)
2. Jet-Flavour Tagging at FCC-ee, K. Gautam, https://arxiv.org/pdf/2210.10322.pdf
3. Future Circular Collider - European Strategy Update Documents, M. Benedikt et al., CERN-ACC-2019-0003 (2019)
4. Main deliverables and timeline of the FCC feasibility study, CERN, CERN/3588 (2021)