
Argonne's Decades-Long Contribution to ATLAS: Unlocking the Universe's Secrets at CERN

The Argonne team continues to make important contributions to the ATLAS experiment, investigating the universe’s smallest building blocks and the rules behind their interactions.

More than 300 feet underground near Geneva, Switzerland, scientists are using the world’s largest and most powerful particle accelerator - the Large Hadron Collider (LHC) - to study the architecture of our universe at the smallest scales.

Located at CERN, the European Laboratory for Particle Physics, the LHC accelerates subatomic particles around a 17-mile ring at near the speed of light. At collision points around the ring, the particles are smashed together, producing debris in the form of new particles.

Surrounding the collision points are several experiments, each using different detector systems to study the particles’ interactions from different perspectives.

Thousands of researchers from hundreds of institutions have contributed to these experiments, pursuing answers to some of the most pressing questions in physics: What are the basic constituents of matter? What is dark matter made of? Are there hidden particles or forces waiting to be discovered?

For over 30 years, scientists and engineers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have contributed significantly to one LHC experiment in particular, the ATLAS experiment.

The ATLAS detector weighs almost 8,000 tons - around as much as the Eiffel Tower’s wrought-iron frame. The experiment started recording data in 2009. Supported by more than 6,000 scientists, engineers, technicians, students and other staff, ATLAS is one of the two largest experiments at CERN, and one of the largest collaborative efforts ever attempted in science.

Argonne researchers have helped shape the ATLAS detector and experiment since the early 1990s, when the project was still just an idea. Through the years, they have designed, built, tested and updated multiple critical ATLAS detector components.

They have also developed advanced technologies to process the experiment’s massive influx of data and distribute it to collaborators around the world for analysis. In addition, Argonne researchers have rigorously analyzed ATLAS data, contributing to the experiment’s key discoveries and breakthroughs.

Today, Argonne researchers are working to upgrade crucial ATLAS detector components, streamline data processing systems and prepare the collaboration’s software to be run on powerful exascale supercomputers.

“We’ve made significant contributions to ATLAS since its early days, from construction to operation, and now through our roles in the upgrades,” said Jinlong Zhang, an experimental particle physicist who leads the Argonne team working on the ATLAS experiment.

How to Detect (Almost) Anything

ATLAS is a general-purpose particle detector designed to investigate a wide range of physics phenomena. When particles collide at the center of the detector, the collisions set off cascades that add up to more than a billion particle interactions per second. At the record-setting energies made possible by the LHC, there’s a chance that almost anything could come out of these collisions.

To detect all possible signals from all possible particles created in a collision, ATLAS uses more than 100 million sensors arranged in layers around the collision point. Each layer uses different sensors dedicated to measuring distinct particles and properties. Together, the detector layers provide a full account of each particle collision and its aftermath.

Although ATLAS can’t directly detect the universe’s most elusive known particles - neutrinos - the detailed information it does capture enables physicists to reconstruct entire cascades from beginning to end, leaving no particle unaccounted for.

Argonne scientists have made critical contributions to the design and construction of multiple detector layers and other ATLAS components. The most visible of these is the ATLAS hadronic calorimeter. This detector layer spans more than 25 feet in diameter and, at 3,200 tons, is the heaviest part of ATLAS.

The hadronic calorimeter is the larger of the two calorimeters within the ATLAS detector, and it was designed to detect a class of particles called hadrons. Hadrons are composite particles built from quarks, and most are relatively heavy, allowing them to punch through the preceding detector layers.

Inside the hadronic calorimeter are dense steel plates, which stop the hadrons in their tracks. Layers of the steel plates are alternated with layers of special plastic tiles, called scintillators, which record the energy the hadrons leave behind. For this reason, the hadronic calorimeter is also known as the “TileCal.”

The hadronic calorimeter also provides structural support to other parts of the ATLAS detector. For example, Victor “Vic” Guarino, manager of the Engineering Services group in Argonne’s Experimental Operations and Facilities division, was one of three engineers who designed the hadronic calorimeter support structures, known as saddles. The saddles were then fabricated as in-kind contributions from other members of the ATLAS collaboration. These engineers also performed extensive structural analysis, not only of the completed structure but also of each step of the assembly, to ensure it could be constructed safely.

Guarino’s group of specialized engineers has collaborated with Argonne physicists, as well as researchers from Michigan State University, the University of Chicago, the University of Texas at Arlington and CERN, to develop several custom hardware systems for ATLAS over the years.

The ATLAS team at Argonne led U.S. efforts to design, prototype and fabricate the hadronic calorimeter, collaborating closely with various universities and labs. Key contributions from Argonne scientists include minimizing cracks - gaps in detector coverage - in the design. They also developed a novel use of the die-stamping fabrication technique, which allowed the team to manufacture the calorimeter’s steel plates in a factory in the Czech Republic with a high degree of precision. The team was also responsible for quality checks on the plates and for distributing them to collaborators worldwide.

Since then, Argonne scientists have played a significant role in maintaining and upgrading the calorimeter. For example, in 2019, the team designed a new power-supply system within the detector layer to replace one that was vulnerable to radiation. Tim Cundiff - an engineering specialist who supports some of Argonne’s high energy physics projects - installed the new system inside the calorimeter at ATLAS. He performed this work in challenging locations within the ATLAS cavern, sometimes requiring a full-body harness.

The Argonne team also assembled one ?“end barrel” for the detector, including instrumentation, and shipped it to CERN. The end barrel is a steel structure placed at the end of the detector that helps provide complete coverage in detecting particles.

Jim Grudzinski, another member of the Argonne team, designed and installed a unique air pad system that was used to move the detector underground along a set of rails to precisely position it.

An engineer with over 25 years of experience, Grudzinski is now the operations officer for Argonne’s Physical Sciences and Engineering directorate.

“Our contributions to the calorimeter over the years have been essential for accurate physics measurements and cost efficiency,” said retired Argonne Distinguished Fellow James “Jimmy” Proudfoot, who helped shape the calorimeter system from its earliest days. Proudfoot joined the ATLAS collaboration in 1992 and led Argonne’s ATLAS team from 2006 to 2019. In 2020, he received a U.S. ATLAS Lifetime Achievement Award for his work on the experiment.

Filtering a Firehose

When running, the ATLAS detector’s vast array of sensors generates 60 terabytes of data each second, representing over a billion particle interactions. That’s roughly equivalent to 15 million smartphone photos every second - an unruly amount of data to manage, and only some of the interactions have the potential to reveal something new.

ATLAS’s trigger and data acquisition (TDAQ) system combines sophisticated hardware and software to quickly filter this firehose of information.

As signals from particle interactions reach ATLAS’s sensors, a subset of that information is sent to the “trigger.” This system, composed of hardware and software, rapidly scans the incoming data and picks out the interactions with the most interesting characteristics, such as hints of rare particles or unexpected decays.

“The trigger is crucial,” said Proudfoot. “If it doesn’t work, none of it works.”

The trigger sends its selections to a data acquisition system, which retrieves more complete information from the detector to more deeply analyze each interaction. Based on this analysis, the system further filters the data, sending only the most promising interactions to computational storage.

The TDAQ system does all of this within a fraction of a second. Ultimately, only around one in a million interactions detected by ATLAS are recorded for further analysis.
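To make that two-stage filtering concrete, here is a minimal sketch in Python. It only illustrates the kind of rate reduction described above - it is not ATLAS’s actual trigger code, and the Interaction fields and selection thresholds are invented for the example.

```python
import random
from dataclasses import dataclass

@dataclass
class Interaction:
    """A stand-in for one detected particle interaction (hypothetical fields)."""
    energy: float    # summed transverse energy, arbitrary units
    n_leptons: int   # number of candidate leptons reconstructed

def level1_trigger(event: Interaction) -> bool:
    # Stage 1: a fast, hardware-style cut on coarse information.
    # Keeps only high-energy events -- illustrative threshold.
    return event.energy > 8.0

def high_level_filter(event: Interaction) -> bool:
    # Stage 2: a software-style decision using fuller detector readout.
    # Keeps events with an energetic lepton signature -- illustrative cut.
    return event.n_leptons >= 1 and event.energy > 9.5

# Simulate a stream of interactions; the real detector sees over a billion per second.
stream = (Interaction(energy=random.expovariate(1.0) * 2.0,
                      n_leptons=random.choices([0, 1, 2], [0.98, 0.019, 0.001])[0])
          for _ in range(1_000_000))

kept = sum(1 for ev in stream if level1_trigger(ev) and high_level_filter(ev))
print(f"recorded {kept} of 1,000,000 interactions")  # only a tiny fraction survives
```

The design point the sketch captures is that the cheap first stage runs on every interaction, while the expensive second stage only sees the events the first stage passed.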

Argonne was one of only a few U.S. groups to produce custom hardware for the TDAQ system and has played a major role in its development and operation. In particular, Zhang - who currently leads the 20-member ATLAS team at Argonne - has held multiple roles related to the TDAQ system throughout his career.

Zhang started working on the ATLAS detector in 2006 and was involved in the early commissioning of the experiment. Over the years, he and other Argonne researchers have served in key leadership positions at ATLAS, such as readout coordinator and run coordinator. These roles involve managing international teams to ensure successful detector operations and data collection.

“Our group used to have people in the control room commissioning the detectors and taking shifts. When ATLAS saw its first beam on the LHC start-up day, I was the shift leader,” he said. “We were also on call for the calorimeter and TDAQ. I carried a phone for support for almost three years while in Switzerland.”

In 2011, Zhang received an Early Career Research Program award from DOE’s Office of Science to improve the TDAQ system. With this award, he built custom electronics for the trigger, allowing it to process data at a much higher rate than before. That means the trigger can construct more advanced information about an interaction before choosing to keep or discard it.

Zhang’s improvements to the TDAQ system have had a profound impact on ATLAS’s performance and the potential for new physics discoveries. His work is also informing the design of an even newer TDAQ system being developed for the High-Luminosity LHC (HL-LHC), an upgrade that will enable many more particle interactions for detection at ATLAS.

Managing ATLAS’s Large and Growing Volumes of Data

ATLAS detects and records over 10,000 terabytes of data per year - about as much as 320,000 hours of 4K video streaming. To prepare this amount of data for analysis and make it available to members around the globe, the ATLAS collaboration leverages cutting-edge technologies in networking, software and computing.
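As a quick sanity check on that analogy - assuming, for illustration, that high-quality 4K streaming consumes about 30 gigabytes per hour - the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the 4K-streaming analogy.
# Assumption: high-bitrate 4K streaming uses roughly 30 GB per hour.
data_per_year_tb = 10_000   # ATLAS recorded data, terabytes per year
gb_per_hour_4k = 30         # assumed 4K streaming rate, GB per hour

hours_of_4k = data_per_year_tb * 1_000 / gb_per_hour_4k
print(f"{hours_of_4k:,.0f} hours of 4K video")  # ~333,000 hours, close to 320,000
```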

Efficient access to ATLAS data is crucial when searching for those few interactions that could, for example, indicate the decay of an undiscovered particle. Since 2002, ATLAS and other LHC researchers have been able to remotely access and analyze massive amounts of data through the Worldwide LHC Computing Grid.

This network relies on a concept called grid computing, which was pioneered by Argonne Distinguished Fellow and Data Science and Learning Division Director Ian Foster and his colleagues in the late 1990s. This novel way of distributing computational resources across a network paved the way for ATLAS’s greatest breakthroughs, such as the discovery of the Higgs boson in 2012. It also marked the dawn of cloud computing, a technology that has since become central to modern society.

Argonne scientists also helped develop the software that retrieves the data from the ATLAS detector and organizes it to be shared with the collaboration. The software reduced disk usage by 30% compared with the previous storage model, providing significant cost savings, according to Proudfoot.

Today, computational scientists and engineers at Argonne continue to work at the boundaries of what’s technologically possible to prepare ATLAS for its next chapters. For example, physicist Walter Hopkins is leading an effort at Argonne to adapt ATLAS-related code to run on exascale computing systems - especially Aurora, a supercomputer housed at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility.

One of the world’s first exascale computers, Aurora is capable of performing more than a quintillion - that is, a billion billion - calculations per second. Such impressive computational power will enable the ATLAS collaboration to process and analyze data more efficiently than ever before. This will prove especially useful as ATLAS prepares for the HL-LHC upgrade, which will allow for a 20-fold increase in ATLAS data output compared with the data recorded until 2018.

“Machines like Aurora will allow us to manage this increased flow of data,” said Hopkins. “We could also make use of exascale systems for generating high volumes of simulated ATLAS data, which we use to inform and validate our experimental results.”

Comparing simulated and real interaction data helps scientists calibrate their computer models, for both particle interactions and the detector itself. These models are crucial for predicting and identifying particle behavior within ATLAS.

Hopkins and his team are reworking widely used particle simulation software to be compatible with the graphics processing units (GPUs) that make up Aurora. They are also teaching artificial intelligence (AI) systems to account for sources of uncertainty in the simulations. By building this awareness directly into the training process, the collaboration improves the reliability of AI predictions and increases confidence in correctly identifying particles.
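As a hedged illustration of that general idea - not the collaboration’s actual training code - the sketch below augments a toy training set with systematically varied copies of each simulated event (an invented 5% “energy scale” uncertainty), so a simple classifier learns decisions that stay stable under that variation. Every dataset, model and number here is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "simulated events": signal clustered at (1, 1), background at (0, 0).
n = 2000
signal = rng.normal(loc=1.0, scale=0.6, size=(n, 2))
background = rng.normal(loc=0.0, scale=0.6, size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Uncertainty-aware augmentation: add copies of each event with the features
# scaled up and down by a systematic variation (an invented 5% "energy scale"
# uncertainty), so the classifier cannot rely on information that shifts
# under that uncertainty.
scale = 0.05
X_aug = np.vstack([X, X * (1 + scale), X * (1 - scale)])
y_aug = np.concatenate([y, y, y])

# Minimal logistic-regression classifier trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_aug @ w + b)))        # predicted signal probability
    w -= 0.5 * (X_aug.T @ (p - y_aug)) / len(y_aug)
    b -= 0.5 * np.mean(p - y_aug)

# The classifier's accuracy should degrade little when the "measured"
# features shift by the systematic variation.
for shift in (1.0, 1 + scale, 1 - scale):
    acc = np.mean(((X * shift) @ w + b > 0).astype(float) == y)
    print(f"scale {shift:.2f}: accuracy {acc:.3f}")
```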

“The statistical sensitivity of the simulations can be a limiting factor in how certain we can be in our analyses,” said Proudfoot, who initiated the project several years ago. “Exascale computing can help with that challenge.”

Hopkins is working with researchers from Argonne and DOE’s Lawrence Berkeley National Laboratory on the project as part of ALCF’s Aurora Early Science Program.

It Takes a Global Village to Make a Breakthrough

For decades, Argonne scientists have actively participated in international efforts to analyze ATLAS data in search of new physics insights.

In 2012, ATLAS and its counterpart detector at the LHC - the Compact Muon Solenoid - confirmed the existence of a fundamental particle called the Higgs boson, marking one of the biggest scientific breakthroughs of the 21st century.

The discovery of the Higgs particle confirmed the existence of a corresponding Higgs field. This field permeates all of space and serves as the source of other particles’ masses, imposing a kind of stability on the universe that gives order to its structure. If it weren’t for the Higgs field, subatomic particles would zip around at the speed of light, and atoms - and therefore matter - wouldn’t exist as we know them.

The Higgs boson was also the last particle predicted by the Standard Model (SM) - scientists’ current best understanding of the universe’s makeup - to be detected experimentally. Although the discovery bolstered confidence in the SM, many scientists still suspect it is incomplete.

There appears to be more to nature than the SM captures, such as the existence of dark matter. Over the years, Argonne researchers have performed detailed simulations and analyses - often using computers at the ALCF - to explore physics beyond the SM and identify rare and interesting particle interactions within ATLAS data.

Argonne is one of more than 260 institutions representing 46 countries that make up the ATLAS collaboration. The success of an experiment of this scale depends on effective teamwork and coordination between partners across countries and continents. To facilitate collaboration with U.S.-based universities, the Argonne group hosts one of the nation’s four ATLAS centers.

The Argonne center serves as a hub for ATLAS scientists in the region and provides early career researchers with crucial training and development opportunities. It provides office space, computing resources and software consultation to visiting university faculty and students. The center also offers physicists opportunities to collaborate on detector development, readout electronics, core software, various AI and machine learning projects and analysis of ATLAS data.

“The goal of the center is to strengthen each other through collaboration and education,” said Alexander “Sasha” Paramonov, an Argonne physicist who served as the support center’s point of contact from 2014 to 2018. Paramonov joined Argonne and the ATLAS experiment in 2009 and has worked on several ATLAS hardware and analysis projects since then.

“The center also hosts workshops, conferences and regional meetings to keep ATLAS collaborators informed and engaged,” he added.

The ATLAS collaboration is accumulating and analyzing one of the largest collections of experimental particle physics data in history. For years to come, physicists will refer to this goldmine of information to validate their particle physics theories and investigate the nature of our universe’s smallest components.

Upgrading ATLAS: A Long History of Forward Thinking

The ATLAS experiment is expected to continue for another 15 years or more. To maintain the experiment’s leading technological edge and maximize the opportunity for discovery, ATLAS researchers have been constantly looking decades into the future.

Since the experiment’s inception, the collaboration has tackled ambitious, forward-thinking projects in detector research and development, which have enabled crucial improvements to ATLAS.

Planning for an upgrade often begins long before the researchers know exactly how they will pull it off. If a new technology is needed to make an upgrade work, it’s up to them to invent it. Designs of the upgraded systems are then prototyped, tested, built and integrated into the detector. This process can take decades, but an experiment of this scale isn’t a sprint. It’s a relay carried out across several generations.

The ATLAS team at Argonne has been part of this relay since the beginning. Currently, they are deeply involved in planning and executing ATLAS’s Phase-II upgrades, which will prepare the detector for the HL-LHC.

Zhang serves on the ATLAS leadership team as its upgrade deputy coordinator, guiding the lab’s contributions to key systems, including the upgraded TDAQ. The Argonne team is also helping build the all-silicon inner tracker (ITk) for the ATLAS experiment, a new system that will replace the existing inner detector. This system of silicon sensors and electronics is the first point of detection at ATLAS.

“I joined the Argonne team 10 years ago to figure out how to build the pixel layers for the new ITk, and now we are starting to enter the production phase,” said physicist Jessica Metcalfe, who leads Argonne’s contributions to the upgraded system.

Metcalfe currently manages the Argonne Micro Assembly Facility, a cleanroom where her team is assembling more than 1,000 of the 10,000 sensor modules that make up the ITk’s pixel layers. Each module is barely larger than a postage stamp and packed with precision electronics, which read out and decipher signals from the sensors.

The new ITk pixel layers are designed to keep pace with the increased volumes of collision data from the HL-LHC. A special coating on the new detector modules helps them cope with some of the effects of exposure to high doses of radiation resulting from the upgrade.

The electronics within these ITk pixel modules generate an intense amount of heat inside the detector. Argonne engineers Allen Zhao and Jerin Pappachan developed a system to keep the inner detector cool despite this heat. In particular, they developed a titanium cooling structure, the welding of which took more than two years to perfect.

“This is delicate work, but the welding - everything - is very reliable and will cool the upgrade tracker for its expected lifetime of 15 years,” Zhao said. “I can sleep well at night.”

ATLAS researchers are hoping to see the first collision data from the HL-LHC by 2030. Already, the Argonne team is engaged in planning for even more advanced detectors and colliders.

“People in this field think over long timescales. I’ve been working on ATLAS since 2004, and I’ll spend the majority of my career here,” said Metcalfe, who has served many roles at ATLAS in both analysis and detector development.

“ATLAS allowed me to do many things in my career, from technical work to management and leadership. I’ve also mentored physicists from every continent,” said Proudfoot. “It really is a special environment.”
