Physics World
Quiz of the week: how many galaxies and quasars are in the biggest high-res 3D map of our universe?
Have you been keeping up to date with physics news? Try our short quiz to find out
Fancy some more? Check out our puzzles page.
Dark energy survey unveils the largest 3D map of the universe
The work involved mapping more than 47 million galaxies and quasars over a five-year period
The Dark Energy Spectroscopic Instrument (DESI) has created the largest high-resolution 3D map of the universe. The work involved observing more than 47 million galaxies and quasars as well as 20 million stars over a five-year period. Researchers will now use the vast dataset to probe the nature of dark energy.
DESI, which began collecting data in 2021, is mounted on the Nicholas U Mayall 4-m Telescope at the Kitt Peak National Observatory in Arizona. It comprises 5000 robot-controlled optical fibres that send light to an array of spectrographs.
This allows DESI to build an extensive map of galaxies and quasars, with the spectroscopic data revealing how fast each galaxy is moving away from us, as determined from its redshift.
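To illustrate the basic idea – this is a minimal sketch, not DESI’s analysis pipeline, and the observed wavelength and Hubble constant below are invented example values – a redshift follows directly from the fractional shift of a spectral line, and at low redshift Hubble’s law turns it into an approximate distance:

```python
# Minimal illustrative sketch (not the DESI pipeline): converting an observed
# spectral-line wavelength into a redshift, recession velocity and rough distance.
# The observed wavelength and the round Hubble constant are assumed example values.
H0 = 70.0             # Hubble constant in km/s/Mpc (assumed round value)
c = 299_792.458       # speed of light in km/s

lambda_rest = 656.28  # rest wavelength of the H-alpha line in nm
lambda_obs = 721.9    # hypothetical observed wavelength in nm

z = (lambda_obs - lambda_rest) / lambda_rest  # redshift from the line shift
v = c * z                                     # recession velocity (low-z approximation)
d = v / H0                                    # distance in Mpc via Hubble's law

print(f"z = {z:.3f}, v ≈ {v:.0f} km/s, d ≈ {d:.0f} Mpc")
```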
By comparing how galaxies clustered in the past with their distribution today, researchers can trace dark energy’s influence. Work published in 2024 found hints that the acceleration of the expansion of the universe has not been constant.
DESI will now use the expanded dataset to further test whether the “cosmological constant” could be evolving over time, with the results expected to be published next year.
DESI director Michael Levi, who is based at the Lawrence Berkeley National Laboratory, says the survey has been “spectacularly successful” and is “incredibly exciting”.
“The instrument performed better than anticipated,” he says. “We’re going to celebrate completion of the original survey and then get started on the work of churning through the data, because we’re all curious about what new surprises are waiting for us.”
DESI will now continue observations into 2028 and further expand the map by about 20% to include parts of the sky that are more challenging to observe.
STEM stock rising in quantitative finance
Find out what it’s like working for options trading firm Susquehanna
Quantitative trading plays an ever-increasing role in the global financial markets. Automated algorithms analyse millions of financial instruments simultaneously, while mathematical models anticipate price movements on nanosecond timescales.
Susquehanna is a proprietary trading firm, meaning it invests its own capital in the markets. Susquehanna’s quantitative researchers – or “quants” – collaborate with traders and technologists to drive the company’s success. Quants design and implement the complex models and algorithms the firm needs to make rapid, well-informed pricing and trading decisions.
The quant advantage

Lyubo Panchev, a quant at Susquehanna with seven years at the firm, describes how quants collaborate across a wide range of instruments and problem types. “Our quants are all trying to mathematically understand the world and the financial markets,” he says, “and then we use that information to determine whether we want to make a trade or not.” While the challenges vary considerably across the firm’s different trading desks, that shared mathematical mission is what unites them.
The details of this work can differ from quant to quant, from devising new pricing approaches for financial instruments, to finding patterns in data to turn into trading signals, to developing specialized software to implement new trading strategies.
However, specialist knowledge in specific fields is not what Susquehanna is primarily interested in when hiring a new quant. Instead, the firm is looking for the types of transferable skills that PhD students in STEM fields often possess. “We want to hire people who can reason through first principles and feel comfortable working in an uncertain environment with open-ended questions to which answers sometimes might not even exist,” says Panchev. “So that’s why we like to hire PhDs.”
A physicist, for instance, brings the skills and intuition for modelling systems with incomplete information – whether that’s modelling interactions in a complex system or inferring signal from noise in a vast dataset. The mental frameworks used by a theorist studying quantum field theory or an experimentalist analyzing data translate surprisingly well to pricing derivatives or spotting anomalies in market behavior.
Life outside academia
Panchev – a three-time International Mathematical Olympiad medallist with a PhD in pure mathematics from MIT – says that the most satisfying part of working at Susquehanna for him is that it preserves what he loved about academia, while at the same time addressing some of the shortcomings.
“The freedom to work on what you want is a unique advantage in academia, over pretty much any industry,” says Panchev. “But what quant researchers do at Susquehanna is close to that spirit.”
Though he enjoyed focusing on challenging questions surrounded by like-minded people, he found working on hyper-specialized academic problems during his PhD a slow, lonely slog. At Susquehanna, quants work on challenging problems, but never in isolation. Quantitative trading problems are invariably interconnected, requiring close collaboration between researchers, traders, technologists and many other experts to connect all the pieces.
What’s more, the environment is highly dynamic. “The impact is much more immediate, sometimes instantaneous,” he adds. “You can be looking at the data and then decide to make a change to your algorithm, tweak a few things, and five minutes later, you’re already getting data that’s from the change you just made – it’s a very fast feedback loop.”
When you add a highly desirable salary and benefits package, career development opportunities, and a company culture that values strategy games such as poker for honing decision-making skills that carry over to complex financial markets, it is easy to see why a STEM PhD student might choose Susquehanna over a career in academia.
From toy problems to market mastery
To earn a seat at this table, applicants are put through their paces. The first and perhaps greatest challenge they face is getting through the interview process. Quant skills – like original thinking, intuition and problem-solving – are not easily described in a CV or interview; they need to be demonstrated. But how can an applicant demonstrate those skills in an interview?
“We build interesting toy problems that are representative of what we do,” explains Panchev. “And then we give them time to think and work on it on their own, before reconvening to see how they approached the problem, and what they found out.”
Successful applicants immediately participate in a comprehensive 10-week internship – the first step in an intensive, front-loaded education program at the company. This internship builds solid foundations in finance domain knowledge, machine learning, programming and data analysis, as well as covering what Susquehanna’s different quant groups do and how their work fits together.
Panchev says that a typical direct full-time hire requires five months or more of very structured education. Over time, however, the quant will be faced with more open-ended problems and will need to chart their own way, free to explore their own ideas and methods.
“There’s a long, steep learning curve but at the end you become an expert,” he adds. “In a way, it’s very similar to how a PhD is structured.” This means that, while the barrier to entry is fairly high, the support system is robust, with a well-organized education program that ensures that everyone is equipped with the tools that they need to succeed.
For the successful STEM PhD student assessing their career options, Susquehanna offers a compelling proposition – the chance to remain a scientist, but on a stage where the stakes are higher, the collaborations deeper and more dynamic, and the results play out in real-time and have real-world impact.
Memristive synapses could reduce AI energy consumption
Hafnium-oxide based nanomaterial mimics the mechanisms of the human brain
A new highly stable and energy-efficient memristor based on a hafnium oxide material can emulate the behaviour of synapses in the brain. The neuromorphic device could help dramatically cut the energy consumed by artificial intelligence (AI) hardware, say its developers at the University of Cambridge in the UK.
Today’s AI systems rely on conventional digital computers. These have separate processing and storage units and consume huge amounts of energy when performing data-intensive tasks. As global AI use is exploding, this energy consumption has already become unsustainable, says materials scientist Babak Bakhit, who led this new study.
An alternative way to process information
Neuromorphic computers could provide an alternative way to process information. As their name suggests, they are inspired by the architecture of the human brain. The circuits in these computers are made up of highly connected artificial neurons and artificial synapses that simulate the brain’s structure and functions. These machines have combined processing and memory units that allow them to process information at the same time as they store it, in the same way as a multi-tasking human brain. This means they could reduce energy consumption by as much as 70% compared with their digital counterparts.
Memory-resistors, or memristors, have become a fundamental building block of such neuromorphic architectures. This is because they can be engineered to behave very much like neurons in the human brain, which learn by reconfiguring the strengths of the connections (synapses) between neurons. Memristors excel in this respect as they can bring this learning functionality to the connections in electronic circuits.
Although memristors were first described theoretically in 1971, it was not until 2008 that researchers made the first practical version. These devices are special in that their resistance can be programmed and subsequently stored. This is because, unlike a standard resistor, the resistance of a memristor changes depending on the current previously applied to it – hence the “memory” in its name. What is more, the device “remembers” this resistive state even when the power is switched off.
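To make the idea concrete, here is a minimal numerical sketch of the idealized linear-drift memristor model often used in textbooks – an illustrative toy with assumed parameter values, not the hafnium-oxide device described in this article – showing how the resistance tracks the charge that has flowed and is then retained:

```python
import numpy as np

# Toy linear-drift memristor model (illustrative parameters, not the Cambridge device).
# The internal state x (doped fraction of the film, 0..1) integrates the applied current,
# so the resistance "remembers" the drive history and persists when the bias is removed.
R_on, R_off = 100.0, 16_000.0   # limiting resistances in ohms
k = 1e5                         # drift coefficient mu*R_on/D**2 (assumed illustrative value)

def resistance(x):
    return R_on * x + R_off * (1.0 - x)   # memristance set by the current state

def apply_pulses(x, v_pulse, n_pulses, width=1e-3, dt=1e-5):
    """Apply identical voltage pulses and return the updated internal state."""
    for _ in range(n_pulses):
        for _ in range(int(round(width / dt))):
            i = v_pulse / resistance(x)
            x = float(np.clip(x + k * i * dt, 0.0, 1.0))  # state follows the charge that has flowed
    return x

x = 0.05                                           # start in a high-resistance state
print(f"initial      R ≈ {resistance(x):.0f} ohm")
x = apply_pulses(x, v_pulse=1.0, n_pulses=100)     # positive pulses lower the resistance
print(f"after +1 V   R ≈ {resistance(x):.0f} ohm")
x = apply_pulses(x, v_pulse=0.0, n_pulses=100)     # no bias: the programmed state is retained
print(f"after rest   R ≈ {resistance(x):.0f} ohm")
```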
Randomness in switching behaviour is a problem
All well and good, but most of today’s memristors unfortunately suffer from randomness in their switching behaviour because they rely on the formation of tiny conductive filaments in the materials making them up. These filamentary devices also typically require high forming and operating voltages and extra devices to avoid uncontrolled current changes that lead to permanent device failure. These challenges make such devices difficult to scale up for real-world applications, says Bakhit.
The researchers, who report their work in Science Advances, claim to have overcome the intrinsic stochasticity of memristive switching by exploiting a completely different switching mechanism – based on carefully engineered heterointerface physics rather than random filament switching. They achieved this by adding strontium and titanium to a hafnium-oxide thin film, which results in the formation of a p-n heterointerface. This junction allows the device to change its resistance smoothly by shifting the height of an energy barrier at the bottom interface through the migration of electro-ionic charges, explains Bakhit.
The new interfacial device has an ultralow switching current of less than or equal to 10⁻⁸ A, which is around 10⁶ times lower than those of conventional oxide-based memristors. It also produces hundreds of distinct and stable conductance levels that can be easily modulated, a key prerequisite for analogue “in-memory” computing. And that’s not all: the device can also undergo tens of thousands of switching cycles without losing its programmed states for around a day.
Looking ahead, the researchers say they will now be focusing on translating their material and device breakthrough into a functional computing system. “In particular, we are working on reducing the thin-film growth temperature (which currently stands at around 700 °C) so that it is compatible with standard semiconductor manufacturing (CMOS) tolerances,” says Bakhit. “We will then scale up device arrays to demonstrate large-scale integration.”
Ultimately, the goal is to move from individual devices to fully integrated neuromorphic chips that can compete with, or surpass, conventional AI hardware in both performance and energy efficiency, he tells Physics World.
Word flower puzzle no. 3
How many words can you find in this puzzle?
How did you get on?
- 10 words: Warming up nicely
- 16 words: Getting hot, hot, hot
- 22 words: Top dog!
Fancy some more? Check out our puzzles page.
Collisional quantum gates created using fermionic atoms
Architectures could support quantum-chemistry simulations
Collisional quantum gates based on fermionic atoms have been realized independently by researchers in Germany and Switzerland. The gates are a long-proposed building block for quantum processors, but had been very challenging to create.
Both teams’ gates achieve entangling operations with a fidelity above the theoretical threshold for quantum error correction – and could potentially be particularly useful for simulations of quantum chemistry.
The potential of collisional quantum gates was proposed in the late 1990s by researchers such as Peter Zoller of the University of Innsbruck in Austria and Ivan Deutsch of the University of New Mexico in the US. The underlying principle is that the states of qubits are encoded into the spin states of atoms in an optical lattice. Then, gate operations between qubits are performed by manipulating interactions between the atoms’ wavefunctions. Experimental attempts followed shortly after, but the technology of the time was insufficient to create practical gates.
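A canonical example of the resulting entangling operation is the √SWAP gate, obtained by letting two spin qubits exchange for half a swap period. The sketch below – a generic illustration in Python, not either group’s specific protocol – builds the gate and checks that it turns a two-atom product state into a maximally entangled one:

```python
import numpy as np

# Basis ordering |00>, |01>, |10>, |11>. Letting two spin qubits interact (exchange)
# for half a swap period implements the entangling sqrt(SWAP) gate.
p, m = (1 + 1j) / 2, (1 - 1j) / 2
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, p, m, 0],
                      [0, m, p, 0],
                      [0, 0, 0, 1]], dtype=complex)

# Two atoms prepared in the product state |0>|1>
psi_in = np.kron([1.0, 0.0], [0.0, 1.0]).astype(complex)
psi_out = SQRT_SWAP @ psi_in

# Entanglement check: purity of one atom's reduced state
rho = np.outer(psi_out, psi_out.conj()).reshape(2, 2, 2, 2)
rho_A = np.trace(rho, axis1=1, axis2=3)            # trace out the second atom
purity = float(np.real(np.trace(rho_A @ rho_A)))
print(f"single-atom purity after sqrt(SWAP): {purity:.2f} (1.0 = product state, 0.5 = maximally entangled)")
```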
Early schemes
“Schemes were developed to move the atoms using state dependent potentials, but the laser light was too near resonant, so it worked in principle, but in practice there was too much heating involved,” explains Konrad Viebahn of ETH Zurich and a member of the Swiss team.
German-team member Petar Bojović of the Max Planck Institute for Quantum Optics in Garching adds that imaging the resulting gates was another problem: “They got some first collisional gates showing proof of principle that this could possibly be done at around the same time as they did [trapped] ions, but they couldn’t move further and scale this up or do many more things with it because there was no way to really see the individual qubits and individual gates”.
Since those early days, much progress has been made in quantum-computing schemes that use neutral atoms held in optical tweezer arrays. During a gate operation, one atom is laser-excited to a high-energy, large Rydberg state in which its wavefunction easily overlaps with those of other atoms – allowing atomic qubits to interact.
There are, however, challenges associated with this architecture. Rydberg states are loosely bound, so the qubits are prone to disruption by classical noise. Furthermore, ensembles of Rydberg atoms tend to be large and this is a barrier to scaling-up the architecture.
Robust collisional quantum gates
Bojović and colleagues at the Max Planck Institute led by Titus Franz and Viebahn’s group at ETH Zurich now unveil independent work on new, more robust collisional quantum gates using fermionic lithium-6 atoms. Lithium has the advantage of being lighter, which allows for faster gates.
Most prior work on collisional quantum gates has used bosonic atoms, explains Viebahn, but using fermions makes the gates more robust because the exclusion principle guards against gate errors: “For our [collisional] implementation, the wavefunctions are allowed to overlap completely, and this amplifies the effects of quantum statistics,” he says.
Both groups produced two-qubit gates, including those able to perform entangling operations, with fidelities of over 99%. The Max Planck researchers controlled the interactions between the qubits by manipulating the potential barriers between them. They utilized an optical lattice that is among the most stable in the world, together with a quantum gas microscope that allowed single-site resolution.
“There’s been some criticism from other communities,” says Bojović. “Once you get to a regime of ‘ninety-nine point something’ fidelity, you really need to be able to see it precisely in order to characterize it.” The researchers would like to go on to demonstrate all the other gates in a universal quantum gate set, but Bojović says that researchers in quantum chemistry are already intrigued by the potential of the platform to simulate molecular behaviour.
Different protocol
The ETH Zurich researchers used a different protocol involving control of the bias voltage to couple the quantum states of their fermionic atoms rather than manipulation of the barrier height. The researchers have not achieved single site resolution – they are currently working to do so – but Viebahn believes his group’s protocol should prove more robust to noise.
“I would say the key novelty here is that we came up with this more robust way of doing this interaction, which was not part of the original proposals from the 90s,” says Viebahn. “We’re the first to implement this gate where these qubits form this fully overlapped quantum state.”
Both groups’ two-qubit gate fidelities are well above the theoretical minimum required for quantum error correction (QEC) to be possible. However, implementing QEC will be difficult because creating the required universal gate set involves a complete set of single qubit gates as well as at least one two-qubit gate that can generate entanglement in the system. Nevertheless, Viebahn concludes, “The two-qubit gate is limiting many other quantum computing platforms, and that’s the thing that we’re very good at.”
The collisional quantum gates are described in two papers in Nature: one from the Max Planck team and one from the ETH Zurich team.
Quantum-computing expert Barry Sanders of the University of Calgary in Canada says the papers “have two different purposes and both purposes are significant”. The Max Planck paper, he says, is especially impressive because it opens up the potential to simulate the Fermi–Hubbard dynamics of strongly-correlated electronic systems directly in a quantum simulator. The ETH Zurich paper, meanwhile, uses Fermi dynamics to protect gate operations against time-dependent sources of error. “There’s a lot of rich physics available with two fermions at a site,” he says.
Atomic-scale devices and quantum platforms
Join the audience for a live webinar on 13 May 2026 sponsored by IOP Publishing's journal, Nano Futures

We are pleased to announce a forthcoming webinar presenting the very latest developments in atomic-scale devices and quantum platforms, following on from two roadmap publications in Nano Futures that map out the potential pathways of these technologies. The webinar will feature four speakers who will present the status of four distinct research disciplines, together with the key challenges and the methodologies by which these may be overcome as quantum platforms and single-atom devices are translated into scalable quantum technologies.
Meet the esteemed panel of experts:

Chair and moderator
Vincenzo Pecunia, Simon Fraser University, Canada
Vincenzo is an associate professor and the head of the Sustainable Optoelectronics Research Group at Simon Fraser University, Canada. His research focuses on printable semiconductors and their applications in photovoltaics and sensing. He earned his PhD in physics and conducted postdoctoral research at the Cavendish Laboratory, University of Cambridge, UK, from 2009 to 2016. Before that, he earned his BSc and MSc in electronic engineering at Politecnico di Milano, Italy. His research breakthroughs include pioneering lead-free-perovskite-based indoor photovoltaics, ultra-low-power printed-thin-film-transistor electronics, and advanced spectrally selective printable light sensors. In recognition of his contributions, Vincenzo has received many awards and honours, including the Fellowship of the Institute of Materials, Minerals & Mining (FIMMM), the Fellowship of the Institution of Engineering and Technology (FIET), and the Fellowship of the Institute of Physics (FInstP).
Speakers
Steven Schofield, University College London, UK
Steven studied physics in Australia at the University of Newcastle (BSc) and the University of New South Wales, Australia (PhD). Following his PhD, he was awarded an Australian Postdoctoral Fellowship, which launched his independent research career. In 2008, he moved to the UK and in 2009 was awarded a five-year EPSRC Career Acceleration Fellowship. He joined UCL as a lecturer in 2012 and has since progressed to professor of physics, with a joint appointment at the London Centre for Nanotechnology and the Department of Physics and Astronomy. His research focuses on understanding and controlling the quantum properties of materials at the atomic scale, combining scanning tunnelling microscopy, synchrotron-based experiments, and theoretical modelling, with a particular interest in how these properties can be harnessed for future electronic and quantum technologies.
Joris Keizer, University of New South Wales, Australia
Joris is a tenured associate professor at the School of Physics at the University of New South Wales, Sydney, Australia. Joris is widely respected as an expert in atomic-scale quantum device fabrication. He is currently the team lead for developing deterministic, atomically precise dopant placement and 3D fabrication techniques for error correction at Silicon Quantum Computing (SQC). His work to date (six years in academia, seven years in industry) has focused on the fabrication of atomic-scale devices with the goal of realizing a surface code architecture in silicon.
Soo-hyon Phark, Center for Quantum Nanoscience, Institute for Basic Science, Republic of Korea
Soo-hyon is a principal investigator at the Center for Quantum Nanoscience (QNS) of the Institute for Basic Science (IBS), where he leads the research group “Atomic spin qubits on surfaces”. He received his PhD in solid-state physics from Seoul National University (SNU), South Korea, in 2006, for experimental research on single-molecule magnets on surfaces using scanning probes. He joined QNS in October 2016 and has led the project “Electron Spin Qubits on Surfaces” since 2019, using STM equipped with electron spin resonance. He developed the first qubit platform based on atomic spins on a solid surface and demonstrated quantum-coherent manipulation of multi-qubit systems in 2023. In recognition of these pioneering contributions to quantum-coherent nanoscience, he received the Minister’s Commendation for Outstanding Scientists of the Year 2024, the Best Award in Sciences and Infrastructures of the 100 National R&D Achievements from the Korean Ministry of Science and ICT in 2025, and the first ACS Nano Impact Award from the American Chemical Society in 2025. He is now continuing and extending these projects, using various atomic and molecular single spins for quantum information science and technology via a bottom-up approach.
Franz Giessibl, University of Regensburg, Germany
Franz holds the chair of Quantum Nanoscience at the University of Regensburg in Germany. He obtained his diploma in physics after studies at the Technical University of Munich and ETH Zürich. He was a PhD student of Nobel laureate Prof. Gerd Binnig in the IBM Physics Group Munich at the Ludwig-Maximilians University, where he built the first atomic-force microscope (AFM) for ultrahigh vacuum and low temperatures. He continued his work on AFM at Park Scientific Instruments, a Stanford spin-off, where he established AFM as a surface-science tool by obtaining, for the first time, the atomically resolved Si(111)-(7×7) reconstruction, published in Science 267, 68 in 1995. During a two-year break from science as a management consultant with McKinsey & Company, he invented the qPlus sensor, a new core for AFM, in his home laboratory and then returned to academia. The qPlus sensor has since enabled transformative work in science, and Giessibl has so far been awarded 10 international science prizes for his work on AFM, including the Keithley Award of the APS, the Feynman Prize of Nanotechnology, the Heinrich Rohrer Grand Medal and the NIMS Award of Japan.
Nano Futures is a multidisciplinary, high-impact journal publishing fundamental and applied research at the forefront of nanoscience and technological innovation.
Editor-in-chief: Vincenzo Pecunia is an associate professor and the head of the Sustainable Optoelectronics Research Group at Simon Fraser University, Canada.
Proteins on manuscript reveal how Renaissance medicines were made
Our podcast guest develops technologies for proteomics
Gleb Zilberstein is my guest in this episode of the Physics World Weekly podcast. A physicist by training, Zilberstein applies the principles of proteomics to the study of historical objects including Renaissance manuscripts.
He is also a director of Israel-based SpringStyle Tech Design, which has created a special film that lifts proteins from the surfaces of historical objects. Analysis of these proteins provides important information about how those objects were used.
In a recent paper, Zilberstein and colleagues studied protein residues on a well-thumbed book of medical recipes that was published in Germany in 1531. He explains how their analysis provides a new view into how medical practitioners used the book and what sorts of concoctions they were making. Astonishingly, the team found evidence that European readers had access to ingredients derived from hippopotamuses.
Some papers about the application of proteomics to historical research:
- The Scientific Analysis of Renaissance Recipes
- Count Dracula Resurrected
- EVA Technology and Proteomics: A Two-Pronged Attack on Cultural Heritage
Daily QA 4 Pro redefines machine quality assurance for next-generation radiotherapy
All-in-one QA platform provides an independent, interpretable alternative to vendor black-box QA
For radiotherapy centres, daily quality assurance (QA) provides the final safety check before each day of patient treatments – ensuring that all linear accelerators (linacs) deliver radiation safely, accurately and as expected.
But as radiotherapy technologies evolve, the required QA procedures become increasingly complex, with verification tests often performed in isolation using multiple phantom set-ups. New treatment techniques – such as surface-guided radiotherapy (SGRT), which is more widely used now than ever – also introduce new QA requirements. And the ongoing adoption of adaptive radiotherapy, where measurement-based pre-treatment QA is not possible, increases the emphasis on machine QA, in which daily QA plays a key role.
What’s needed is a comprehensive QA approach that incorporates the dosimetry, imaging and positioning checks required for all radiotherapy modalities. Addressing this challenge, US manufacturer Sun Nuclear has launched Daily QA 4 Pro, a new device that simplifies daily machine QA by combining dosimetry and positioning verification via imaging into a single indexed, imageable platform.
“The main motivation for launching the Daily QA 4 Pro was to create a product that not only met the current needs of clinicians, but also future needs, based on our vision of the radiotherapy QA field,” explains Rajiv Lotey, technical product manager for the Daily QA 4 Pro.
The next-generation platform builds on the company’s Daily QA 3 beam quality analysis product, which was introduced more than a decade ago and is now standard in many radiotherapy departments. “The biggest difference between the Daily QA 4 Pro and the Daily QA 3 is the end-to-end QA functionality – representing the patient workflow – achieved by integrating a 3D high-resolution array, fiducials, an SGRT-compatible surface, an imageable architecture, and the ability to correlate all imaging and mechanical isocentres together onto one device,” says Lotey.
Enabling new modalities, expanding clinical applications
David Barbee, Director of Technology and Innovation in Radiation Oncology at NYU Langone Health, was one of the first to adopt this technology. Speaking at the recent QA & Dosimetry Symposium (QADS) hosted by Sun Nuclear, he described his early experiences of using the next-generation Daily QA 4 Pro.
“The first thing I wanted to do was evaluate surface-guided radiation therapy, because we don’t currently do this during daily QA,” Barbee explained.
To perform this test, the team defined a region-of-interest in the hospital’s VisionRT SGRT system that covered the entire surface and edges of the Daily QA 4 Pro and tested it over the full range of couch motion. The maximum translation range that it could detect was about ±4.5 cm in the lateral (side to side) and longitudinal (along the couch length) directions, and +13 to –17 cm vertically.
“For pitch and roll, we tested the 3°/3 mm limits and 90° couch rotations, and it observed them perfectly,” he added. “This is the first time we’ve ever run this test and compared our SGRT system to our image guidance system,” he noted. “This is very, very helpful.”

For dosimetry, Barbee noted that many parameters are carried over from the Daily QA 3 – including the output profile constancy, the field size and shift, and the flatness and symmetry – but added that the Daily QA 4 Pro can measure at a much wider range, anywhere from 2 to 20 cm square fields. “There are also new metrics, such as the penumbra, beam shape constancy for FFF [flattening filter-free] fields, the beam centre and the dose-per-pulse,” he explained. “And there’s a new dose output correction factor for when you need to move this device to a different unit.”
Barbee and colleagues performed a range of dosimetry assessments using the Daily QA 4 Pro, measuring 30 sessions on six linacs using both jaw- and multi-leaf collimator (MLC)-defined field sizes. They found that the output factors were consistent down to about 7 mm, after which the MLC gave slightly higher output factors, while the largest beam profile differences were seen in flatness and symmetry for very small fields.
Integrating Winston–Lutz
The Daily QA 4 Pro incorporates active measurement Winston-Lutz tests – a standard procedure for evaluating isocentre accuracy – using the system’s onboard 3D detector array to directly measure the radiation isocentre. The NYU Langone team used the Daily QA 4 Pro to quantitatively assess the mechanical isocentres and their response to gantry, collimator and couch motion for six linacs, again using both jaw- and MLC-defined fields.
Barbee noted that the system runs the gantry and collimator checks automatically. “You can basically hit play on SunCHECK and then you don’t touch anything again until you get to the couch, which you have to move from the console,” he explained.
To test the accuracy of the results, Barbee compared them with two years’ worth of Machine Performance Check (MPC) and traditional Winston-Lutz measurements of all of the centre’s linacs. Daily QA 4 Pro measurements agreed well with previous isocentre results across all machines tested. “It’s a little bit early to say, but it looks commensurate, there are no concerns,” he noted.
A look inside the device
The Daily QA 4 Pro measures 30 x 50 x 6 cm, weighs 6.2 kg and sits on a 4.1 kg six degrees-of-freedom base. It incorporates four ion chambers that measure field sizes down to 5 x 5 cm, as well as 249 diodes spaced at high resolution in the x- and y-directions, the diagonals and along both sides. There are also eight 3 mm tungsten carbide BBs positioned off-axis, factory-calibrated to enable micron-level corrections.
Externally, the device incorporates scribed laser alignment marks with 2 mm tolerance on its sides and surfaces, plus a crosshair for collimator alignment. There are also field size markings for 5 x 5, 10 x 10 and 20 x 20 cm fields, as well as eight symmetric reliefs designed specifically for SGRT.
The Daily QA 4 software is designed to integrate into the SunCHECK environment and can be controlled using either SunCHECK Local via a standalone laptop or (starting in version 6.0) the SunCHECK Server.
The team also ran active imaging Winston-Lutz tests, which evaluate system geometry by analysing the position of a known target in images acquired using the linac’s imaging panels. The Daily QA 4 Pro device detects the image fiducials (tungsten carbide BBs) and compares their positions to expected values for each gantry angle. These tests allow users to assess factors such as device positioning, gantry angle accuracy and overall alignment.
“This is all summarized into a report showing the maximum error in any one of those parameters across all gantry angles,” explained Barbee. “It will tell you which gantry angle was the worst and what the value there was.”
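The summary itself is simple book-keeping. Below is a small, hypothetical sketch – not Sun Nuclear’s software, and the fiducial positions are invented – of how detected fiducial offsets might be reduced to a worst-gantry-angle error report:

```python
import numpy as np

# Hypothetical sketch (not Sun Nuclear's algorithm): reduce imaging-based Winston-Lutz
# fiducial detections to a worst-case error per gantry angle. Positions are invented, in mm.
expected = {0: (0.0, 0.0), 90: (0.0, 0.0), 180: (0.0, 0.0), 270: (0.0, 0.0)}
measured = {0: (0.2, -0.1), 90: (-0.4, 0.3), 180: (0.1, 0.1), 270: (0.6, -0.2)}

errors = {}
for angle, (ex, ey) in expected.items():
    mx, my = measured[angle]
    errors[angle] = float(np.hypot(mx - ex, my - ey))  # radial offset of the detected BB

worst = max(errors, key=errors.get)
print(f"maximum error {errors[worst]:.2f} mm at gantry angle {worst} deg")
```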
Used together, the two Winston-Lutz methods combine direct radiation measurement with imaging-based verification to provide a more complete understanding of system health and to help identify, quantify and correct any errors.
Efficiency analysis
Barbee notes that while the Daily QA 4 Pro generates a comprehensive set of dosimetry and positioning verification data, at first glance, it looks like a lot more work. An efficiency analysis, however, proved the opposite – demonstrating significant gains in workflow efficiency.
Currently, Daily QA 3 and IGRT tasks take about 16 min to perform. “Daily QA 4 Pro cuts about five minutes off that time, because you’re not going in and out of the room and doing multiple setups,” he explained. “Adding Winston-Lutz currently doubles the time to over half an hour. But with Daily QA 4 Pro, you only add five minutes. And it’s a simple setup that your therapist can run as part of their morning QA.”
“The Daily QA 4 Pro integrates image-guided radiotherapy, SGRT, beam dosimetry and Winston-Lutz verification into a single device, enabling comprehensive daily QA in a single setup and session,” Barbee concluded. “This provides an independent, interpretable alternative to vendor black-box QA systems, with comparable isocentre and imager tests, and superior beam quality constancy tests. It really can consolidate a lot of phantoms that you might not need anymore.”
India’s first fast-breeder nuclear reactor achieves criticality
Milestone marks the start of the second-stage of India’s nuclear programme
India’s first prototype fast-breeder reactor (PFBR) has achieved criticality, marking a significant boost for the country’s nuclear programme. The 500 MW reactor, which is based at Kalpakkam, about 70 km south of Chennai, is intended to be a forerunner for a fleet of six similar fast-breeder reactors.
India currently has almost 9 GW of nuclear capacity from 24 plants, which are mainly pressurised heavy water reactors (PHWRs) that use domestic and imported natural uranium. Long term, the Indian government wants to expand nuclear capacity to 100 GW by mid-century, quadrupling its share of electricity generation from 3% to 12%.
An Indian parliamentary panel examining the country’s nuclear programme warned earlier this year, however, that current capacity expansion is falling “significantly short” of the 100 GW target. The panel called for a “ring-fenced” funding mechanism and a clear roadmap and timelines to scale up fast-breeder reactors.
The PFBR uses uranium–plutonium mixed oxide (MOX) fuel and is designed to generate more fuel than it consumes. It does this by using a blanket of uranium-238 that surrounds the reactor’s core, absorbs neutrons and is transmuted into fissile plutonium-239. Work started on the PFBR in 2004 and it was originally supposed to open in 2010.
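Written out as a transmutation chain, the breeding reaction that the uranium-238 blanket exploits is:

```latex
% Neutron capture on U-238, followed by two beta decays, breeds fissile Pu-239
^{238}_{92}\mathrm{U} + n \;\longrightarrow\; ^{239}_{92}\mathrm{U}
\;\xrightarrow{\beta^-}\; ^{239}_{93}\mathrm{Np}
\;\xrightarrow{\beta^-}\; ^{239}_{94}\mathrm{Pu}
```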
Despite delays and technical issues, the PFBR successfully achieved its first criticality on 6 April. “This is a historic moment,” Anil Kakodkar, former secretary of India’s Department of Atomic Energy (DAE) and now chancellor of the Homi Bhabha National Institute, told Physics World.
Three-stage solution
India has a three-stage nuclear strategy, in which PHWRs form the first stage and the second stage involves reprocessing spent fuel from the PHWRs into MOX fuel for fast breeders.
The third stage seeks to exploit India’s abundant thorium reserves – estimated at over a million tonnes of thorium compared to 433 000 tonnes of uranium – to produce uranium-233, potentially supporting energy demand for centuries.
Other countries, such as France, Japan and the US, have scaled back or deprioritised fast-breeder programmes due to technical and economic challenges.
Kakodkar cautions that the pace of future expansion will hinge on a shift from MOX to metallic fuel fast reactors, which use metal alloys and fast neutrons to breed new fuel. This could reduce the fuel doubling time in fast breeders from roughly 30 years to about a decade.
In parallel to the PFBR programme, the Bhabha Atomic Research Centre in Mumbai has designed an Advanced Heavy Water Reactor (AHWR) to use thorium-based fuels. Kakodkar says that advancing the AHWR would “expedite transition” to the thorium fuel cycle by building institutional and industrial capability.
Epitaxial Si/SiGe multilayers for novel logic and memory devices
Join the audience for a live webinar at 3 p.m. BST/10 a.m. EDT on 6 May 2026

A description of the evolution of metal-oxide-semiconductor device architectures and the corresponding requirements on epitaxial growth schemes will be followed by a discussion of the obtained material properties of Si/SiGe multilayer stacks used for logic and 3D DRAM devices, grown on 300 mm Si (001) wafers.
The process used to deposit Si/SiGe multilayers for nanosheet devices has been extended to 120 pairs (241 sub-layers) of {65 nm Si/10 nm strained Si₀.₈Ge₀.₂} for 3D DRAM concepts [1]. A more complicated layer stack with two different Ge concentrations is required for the monolithic fabrication of complementary field effect transistor (CFET) devices, where gate-all-around nFETs and pFETs are stacked on top of each other [2]. A relatively high growth temperature provides acceptable Si and SiGe growth rates while still suppressing 3D island growth for SiGe with up to 40% Ge. Excellent structural and optical material properties of the epi stack will be reported, with up to 3 + 3 Si channels in the top and bottom parts of the stack, respectively. For all layer designs, the absence or presence of lattice defects has been verified by several techniques, including photoluminescence (PL) measurements at both room temperature and low temperature.
[1] R. Loo et al., JAP 138, 055702 (2025), https://doi.org/10.1063/5.0260979
[2] R. Loo et al., ECS SST 14, 015003 (2025), https://iopscience.iop.org/article/10.1149/2162-8777/ada79f

Roger Loo joined imec in January 1997. Since October 2013 he has been a principal scientist (principal member of technical staff) in the group IV epi team. Since September 2023, he has also been a visiting professor (5%) at Ghent University. He has authored or co-authored more than 240 articles in peer-reviewed journals. He has been co-editor of eight journal special issues, has (co-)authored more than 250 articles in proceedings listed in Web of Science, and has given more than 30 invited talks at international conferences. Loo regularly gives invited research seminars and tutorials at universities, institutes and companies. Loo has co-authored more than 90 patent filings (including provisional filings), among which more than 50 patents have been granted and are maintained. He has also (co-)organized about 24 international conferences.
Ferroelectric devices push reservoir computing forward
By pairing a ferroelectric capacitor with a linear capacitor, researchers create a power‑efficient device with tuneable memory and strong nonlinear responses
Reservoir computing is a computational approach well suited to time‑dependent tasks such as speech recognition, because it relies on internal dynamics, nonlinear responses, and short‑term memory of recent inputs. However, most hardware implementations consume too much power and lack the rich dynamics needed for complex problems. In this study, the researchers introduce a new reservoir‑computing device made by connecting a ferroelectric capacitor (FC) in series with a linear capacitor (LC). This FC-LC device naturally provides the two essential ingredients of a reservoir: nonlinearity, through polarization switching and back‑switching in the ferroelectric layer, and fading memory, through slow charge accumulation and relaxation.
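To see how those two ingredients are used, here is a generic echo-state-network sketch in Python – a software stand-in for a reservoir, not a model of the FC-LC device, with all parameter values assumed – in which a fixed nonlinear recurrent network with fading memory is driven by an input and only a linear readout is trained:

```python
import numpy as np

# Generic echo-state-network sketch of reservoir computing (software stand-in,
# not a model of the ferroelectric FC-LC device). A fixed random reservoir with
# nonlinearity and fading memory is driven by an input; only the linear readout is trained.
rng = np.random.default_rng(0)
n_res, leak = 200, 0.3                            # reservoir size and leak rate (fading memory)

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))         # fixed input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        pre = W_in[:, 0] * u_t + W @ x
        x = (1 - leak) * x + leak * np.tanh(pre)   # nonlinearity plus fading memory
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a sine wave one step ahead
t = np.arange(0.0, 60.0, 0.1)
u = np.sin(t)
X = run_reservoir(u[:-1])
y = u[1:]                                          # target: next value of the signal

W_out, *_ = np.linalg.lstsq(X, y, rcond=None)      # train only the linear readout
pred = X @ W_out
print("readout RMS error:", np.sqrt(np.mean((pred - y) ** 2)))
```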
The device offers several advantages over existing reservoir hardware. It operates at extremely low power, produces a direct voltage output without extra circuitry, and has widely tuneable time constants, allowing it to respond quickly or slowly depending on the task. It also supports bidirectional operation, which increases the richness of its internal states and improves performance on classification tasks. By combining FC-LC devices with different time constants, the researchers create a hybrid reservoir with even greater computational capacity.
The system performs exceptionally well on a range of benchmarks, including heartbeat anomaly detection, waveform classification, multimodal digit recognition, and prediction of chaotic time‑series data. Because the device can be fabricated using established semiconductor processes and can be extended to widely used ferroelectric materials such as hafnium oxide, it is well positioned for large‑scale integration and future commercial reservoir‑computing hardware. This work lays the foundation for scalable, energy‑efficient reservoir systems that could enable fast, on‑chip processing in next‑generation electronics.
Read the full article
Linyuan Mo et al 2026 Rep. Prog. Phys. 89 028001
Do you want to learn more about this topic?
Many-body localization in the age of classical computing by Piotr Sierant, Maciej Lewenstein, Antonello Scardicchio, Lev Vidmar and Jakub Zakrzewski (2025)
What happens when a Bose–Einstein condensate becomes turbulent?
New research from the Université Côte d’Azur, CNRS, Institut de Physique de Nice, shows how Bose–Einstein condensates (BECs) become turbulent when driven out-of-equilibrium at small scales
The concept of turbulence is one of physics’ most persistent challenges, defying a simple description despite decades of research. Adding quantum mechanics into the mix only makes things more complicated.
BECs are formed when atoms are cooled down to close to absolute zero. In this state they behave as a single coherent quantum fluid. They enable the observation of quantum behaviour on a macroscopic scale, enabling breakthroughs in fundamental physics and ultra‑precise technologies.
Waves can form within a BEC when it’s disturbed, just like in any other fluid. These can travel through the material, interacting, cascading and ultimately forming turbulent patterns.
When the turbulence is weak, and the chaotic interactions are small, perturbative wave‑interaction theories work well. A complete, simple theory of strong turbulence, however, remains elusive. Nonlinearities dominate and approximations break down.
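The usual starting point for describing such condensate dynamics is the Gross–Pitaevskii equation for the condensate wavefunction ψ, whose cubic nonlinear term is what couples the waves (here m is the atomic mass, V the trapping potential and a_s the s-wave scattering length):

```latex
% Gross–Pitaevskii equation: mean-field description of a dilute BEC
i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})
          + g\,\lvert\psi(\mathbf{r},t)\rvert^{2}\right]\psi(\mathbf{r},t),
\qquad g = \frac{4\pi\hbar^{2} a_{s}}{m}
```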
The new paper sets out the conditions for a BEC to shift from weak to strong turbulence, offering a clearer way to interpret experiments and simulations. The work explains how nonlinear interactions, external driving, and dissipation help to shape the turbulent cascade. This process is analogous to classical turbulence but is fundamentally altered by quantum mechanics.
The authors emphasise that distinguishing the two turbulent regimes is essential for interpreting modern ultracold-atom experiments, where turbulence can be intentionally engineered using a shaking potential trap.
As BECs continue to serve as pristine platforms for simulating complex fluid behaviour, understanding their turbulent states is becoming increasingly important. The results of this paper will be invaluable for future investigations into quantum turbulence, non-equilibrium statistical physics, and the boundary where order gives way to chaos in quantum matter.
Read the full article
Strong and weak wave turbulence regimes in Bose–Einstein condensates
Ying Zhu et al 2026 Rep. Prog. Phys. 89 030501
Science and standards: a comprehensive guide to radiological protection
New book provides an essential guide to the science, regulatory landscape and real world practice of radiological protection

The discovery of X-rays and radioactivity in the late 19th century gave rise to a surge of interest from the scientific community, shortly followed by the realization of the adverse effects of ionizing radiations on health. By about 1910 the dangers were widely recognised and some basic protection measures were being adopted. It was not until 1934, however, that the first quantitative standards of radiological protection were published.
Of course, protection against the adverse effects of ionizing radiation is as important today as ever, particularly for those working within nuclear and defence-related industries, medicine and R&D, as well as hospital patients undergoing radiation-based procedures and members of the general public. As such, the last century has seen the development of a complex international regulatory system, with recommendations on occupational and public exposures to radiation – from organizations such as the International Commission on Radiological Protection (ICRP) and others – continually revised and updated.
A new book, Principles and Techniques of Radiological Protection, provides a comprehensive overview of the current regulatory context for radiological protection. The text also provides an overview of the scientific issues relating to radiological protection and the current state-of-the-art tools used to comply with the relevant legislation and guidance.
Targeted at postgraduate students and new entrants to the field, the textbook is designed to cover a wide range of topics that an early-career radiation protection professional might need, or want, to know about. It also serves as a day-to-day reference work for specialists such as radiation protection advisors (RPAs) to identify appropriate techniques to address radiological protection issues as they arise.
“I aimed to produce a book that I would have liked to have had available when I started work in radiological protection just over 50 years ago,” explains the book’s editor Michael Thorne. “As I come towards the end of my career in the field, I aimed to include information, tools and techniques that I would have liked to have had readily accessible in a single volume.”
History, theory and practical applications
Thorne begins the book with a brief history of radiological protection and how historical developments continue to influence the discipline today. The next chapters examine the physical aspects of radiological protection, including an overview of basic nuclear physics and the sources of radiation, radiation transport through and interactions with matter, and the instruments used to detect and monitor radiation. Later chapters cover the principles of internal dosimetry, phantoms and biokinetic models, and mathematical modelling of radionuclide transport.

“I have also given a detailed account of natural background radiation and modelling the transport of radionuclides in the environment; and I have included a chapter on the effects of radiation on the environment, with specific emphasis on non-human biota,” says Thorne. “Throughout, I have recruited co-authors with decades of relevant experience to capture their expertise in each of the specialized areas.”
The book also provides examples of how this information is employed practically within various fields, including the nuclear industry and industries handling naturally occurring radioactive materials. Several chapters and themes are of particular relevance to those working within medical physics.
“There are two chapters specifically on radiology and nuclear medicine, written by Colin Martin, who is well known internationally for his work in this area,” Thorne tells Physics World. “There are also specialized chapters on biokinetic modelling, the nature and use of both mathematical and physical phantoms in radiation dosimetry, and on the use and abuse of instruments for radiation monitoring.”
The book rounds off with a look at some of the major and minor accidents that have led to the exposure of members of the public and of workers using radioactive sources. The final chapter addresses emergency planning and response for such incidents, including suggested protective actions and the roles and responsibilities of various organizations.
“Throughout, the emphasis is on broad principles and widely applicable techniques,” says Thorne. “It is considered that an individual who gains a clear understanding of these principles and techniques will be readily able to apply that understanding to the diverse and changing set of challenges that arise.”
- Individual copies of Principles and Techniques of Radiological Protection can be purchased at the IOP Publishing Bookstore.
Lure of the black hole: from science to art
An excerpt from art historian and author Lynn Gamwell’s book Conjuring the Void: the Art of Black Holes

Black holes, as their name suggests, are veiled in darkness and mystery. These brooding celestial behemoths are regions of space–time that consume not just stellar dust and light but the attention of astronomers, artists and non-scientists too. Often depicted as shadowy maws ringed by fire, these inescapable pits intrigue us all.
“Science has produced a wealth of information about black holes that has been popularized worldwide,” says author, curator and art historian Lynn Gamwell. “This has prompted artists to delve deep into their creative imaginations to find the significance of black holes within a broad cultural context.”
Unable to escape from the lure of black holes herself, Gamwell – who teaches the history of art, science and mathematics at the School of Visual Arts in New York – has written and compiled Conjuring the Void: the Art of Black Holes. The stunning coffee-table book is a definitive – and near-exhaustive – collection of black-hole art, including 155 colour illustrations, perfectly mixed with information about the science and history of these objects.
Readers will undoubtedly fall into the pull of the book’s gravity, in which Gamwell skilfully weaves together our scientific understanding of black holes along with interpretations of these regions of space–time by artists around the world. Indeed, the book uses every medium available to decipher these objects.
With a background in the arts and humanities, Gamwell’s interest in science came while studying modern art. “The explanations of abstract, non-objective art that were taught to me never made sense,” she says. “While it seems so obvious now, I finally figured out that artists express their worldview and the modern worldview is shaped by science, which discovered invisible forces – such as electromagnetism – that can’t be pictured.”
Gamwell’s previous books – Mathematics and Art (2015) and Exploring the Invisible (2020) – both focused on the more abstract aspects of maths and science that are often complex and difficult to visualize. A few years ago, she was invited by physicist Peter Galison, director of Harvard University’s Black Hole Initiative (BHI), to give a talk at its annual conference.
“In researching for the talk, I was amazed to learn how many artists had done art about black holes,” Gamwell recalls. “So I decided to write a book about the artistic phenomenon and why black holes have captured the public imagination.” Gamwell is now an affiliate of the BHI, which brings together scientists, mathematicians and philosophers of science to deepen our understanding of black holes.
Given the interdisciplinary nature of her work, Gamwell regularly meets artists interested in science as well as scientists interested in art, including the Event Horizon Telescope’s Shep Doeleman, whom this book is dedicated to. “Artists and scientists arrive at similar ideas by different paths,” she says. “Both benefit from looking at each other’s work.”
The art – and, by extension, the artists depicted in Conjuring the Void – shows how the human conceit of “nothingness” links us to black holes. “On the one hand, the black hole provides artists with a symbol to express the devastations and anxieties of the modern world,” Gamwell writes. “On the other hand, a black hole’s extreme gravity is the source of stupendous energy, and artists such as Yambe Tam invite viewers to embrace darkness as a path to transformation, awe, and wonder.”
Below is an edited extract from chapter three of Conjuring the Void, illustrated by a selection of images of art from the book. They depict everything from colliding black holes and their gravitational waves to a black hole’s accretion disc and even a sonic wormhole. We hope they also take you on a journey of awe and wonder.
Artistic and scientific images of invisible objects
In the early 1970s the existence of black holes was reported in scientific papers and newspapers around the world, starting with the discovery of Cygnus X-1, introducing the phenomenon to the culture’s imagination. Scientists symbolized data in charts, graphs and mathematical formulae and attempted to make images of black holes. But seeing an object requires light, so rather than depicting a black hole itself, scientists imagined what matter surrounding it would look like. Artists, in turn, subjected scientific data to the transformation of the imaginative process and created something completely new: artworks.
Seeing an object requires light, so rather than depicting a black hole itself, scientists imagined what matter surrounding it would look like. Artists, in turn, subjected scientific data to the transformation of the imaginative process and created something new
Lynn Gamwell
In the decades before scientists showed that black holes exist, several artists in the West – including the American Barnett Newman, the Argentine-Italian Lucio Fontana, the American Lee Bontecou, and the Englishman John Latham – made abstract art about dark voids.
As scientists were confirming the existence of black holes, Frederick Eversley was imagining sculptures of them. He graduated in 1963 from the Carnegie Institute of Technology (now Carnegie Mellon University) in Pittsburgh with a degree in engineering and worked in the aerospace industry building acoustic laboratories for NASA. Around 1970 he transitioned to being an artist, creating abstract sculptures in cast polyester. With his background in science, Eversley understood the significance of the discovery of Cygnus X-1 in 1971.
That same year, the Brazilian artist Anna Maria Maiolino began a series of artworks about her life under Brazil’s military dictatorship. Whereas most artists in the early 1970s didn’t pay much attention to black holes because there were no visualizations of them to fire their imaginations, Maiolino became fascinated with holes filled with darkness.
Black holes were a metaphor for resistance to political repression in the work of Rudolf Sikora – in his case, from the Communist government of Czechoslovakia. In the early 1970s he began a series called Concentration of Energy featuring black holes.
Early scientific images of black holes
While Eversley, Maiolino and Sikora were in their studios making artworks about black holes, the US physicists C T Cunningham and James Bardeen were in their laboratory creating an illustration of the deformations in space–time around a black hole. They imagined a distant observer seeing a star orbiting a black hole at a uniform distance. They knew that the rapidly rotating black hole’s gravity affects light passing through its gravitational field in a manner similar to a powerful lens, hence the observer would see light that is distorted by what astronomers call gravitational lensing. Cunningham and Bardeen calculated these optical deformations and in 1973 produced the first scientific visualization of space–time around a black hole.

What would gravitational lensing do to the cloud of dust and gas, known as the accretion disc, that orbits a black hole? The French astrophysicist Jean-Pierre Luminet wanted to make a realistic picture of an accretion disc. Associating realism with photography, he imagined the black hole “as seen by a distant observer” taking a “photograph” from a stationary, authoritative viewpoint. In Luminet’s diagram (see above), the accretion disc forms a flat, circular disc of dust and gas. Friction and magnetic forces heat the accretion disc to hundreds of billions of degrees until it becomes an incandescent plasma emitting radiation. The observer looks down on the disc from a slightly elevated position (at a 10-degree angle, labelled “observer’s direction”). While the accretion disc and stars emit light in all directions, for simplicity’s sake Luminet imagined parallel light rays coming from the observer’s direction.
Luminet made his drawing with tiny dots of black ink on white paper and then photographically reversed the image so that it reads white against a black background to create a “simulated photograph” of a luminous object in the darkness of space. His drawing shows one additional optical deformation lacking in Cunningham and Bardeen’s line drawing. The accretion disc displays a dramatic Doppler effect since it’s rotating close to the speed of light. Light appears closer to the blue or red end of the spectrum depending on whether the source is moving toward or away from the observer. In Luminet’s drawing, the disc’s left side appears to be moving toward the observer, so the observed frequency (hence the energy) of the electromagnetic waves is very high. Since Luminet’s image is black and white, he shows all radiation in the electromagnetic spectrum in what photographers call a bolometric photograph.
In Luminet’s image, the innermost stable circular orbit is the smallest circular orbit in which matter can stably orbit the black hole; it’s the inner edge of the accretion disc. If matter goes inside that orbit, it quickly falls past the black hole’s event horizon. Since light has no mass, it can orbit within the innermost stable circular orbit. If light crosses the event horizon it will not escape, but some photons circle on a narrow path between the innermost stable circular orbit and the event horizon. Scientists call this structure a photon ring (some call it a photon sphere because it’s three-dimensional).
Luminet published his work in 1979 and concluded with these prophetic words: “Thus our picture could represent many relatively weak sources, such as for instance the supermassive black hole whose existence in the nucleus of M87 has been suggested recently.” Forty years later, the black hole in the centre of galaxy M87 was imaged by the Event Horizon Telescope.
Added colour
Jean-Alain Marck – Luminet’s colleague at the Paris-Meudon Observatory – was an expert in general relativity, computer programming and calculating geodesics around a black hole. A geodesic is the shortest path between two points on a curved surface. In 1989 Marck calculated the geodesics describing the accretion disc in Luminet’s drawing from various angles and, for dramatic effect, added colour. An image of a black hole from 1997 shows the far side of the accretion disc’s top side and underside. Marck and Luminet’s image had shown this view earlier, but it remained unpublished.
In the early 1990s Marck and Luminet collaborated on a sequence about black holes for a television documentary that was broadcast across Europe. Luminet had drawn his image by hand in the late 1970s because computer graphics programs were not available, but by the 1990s the technology had advanced and Marck was able to write the animation program himself. Marck’s calculation is unusual because it shows what a moving observer – riding a magic carpet and wearing a bow in her hair – would see flying past a Schwarzschild black hole on an elliptical trajectory.
While Luminet’s monochrome picture depicted the total radiation in all wavelengths, astronomers Jun Fukue and Takushi Yokoyama imagined a visible-light photograph of an accretion disc. Luminet, Fukue and Yokoyama visualized thin accretion discs around Schwarzschild (non-rotating) black holes and a thick accretion disc around a Kerr (rotating) black hole from an almost edge-on viewpoint. Artist Fabian Oefner created an artwork that is a metaphor for a multicoloured accretion disc, representing the visible light from a rotating black hole (see artwork at the top of this article).

If a black hole is rotating, the speed at which it spins affects the diameter of the innermost stable circular orbit; the faster it spins, the smaller its diameter. If a Kerr black hole spins extremely fast, it will distort space–time at the inner edge of the accretion disc. A thin accretion disc around a maximally rotating Kerr black hole from an elevated viewpoint shows asymmetry of the disc’s inner edge as the result of frame-dragging; the rotating black hole “drags” space–time along.
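To put rough numbers on that relationship, here is a minimal Python sketch (an editorial illustration, not part of Gamwell’s book) that evaluates the standard Bardeen–Press–Teukolsky formula for the radius of the innermost stable circular orbit of a prograde orbit, in units of the gravitational radius GM/c².

```python
import math

def isco_radius(a):
    """Innermost stable circular orbit (prograde) around a Kerr black hole.

    a  -- dimensionless spin parameter, 0 (Schwarzschild) to 1 (maximal spin).
    Returns the ISCO radius in units of the gravitational radius GM/c^2,
    using the standard Bardeen-Press-Teukolsky expression.
    """
    z1 = 1 + (1 - a**2) ** (1/3) * ((1 + a) ** (1/3) + (1 - a) ** (1/3))
    z2 = math.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

# The faster the black hole spins, the smaller the innermost stable orbit:
for spin in (0.0, 0.5, 0.9, 0.998):
    print(f"a = {spin:5.3f}  ->  r_isco = {isco_radius(spin):.2f} GM/c^2")
```

For zero spin the familiar Schwarzschild value of six gravitational radii comes out; as the spin approaches its maximum, the prograde orbit shrinks towards a single gravitational radius.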
Melissa Walter created a sculpture that is a metaphor for gravitational lensing. Light passes through cut paper that sways and curves, distorting the light like a gravitational lens. Walter, unlike many artists, understands the crucial distinction between a science illustration and an artwork. Under her maiden name, Melissa Weiss, she works for NASA, executing science illustrations of how a black hole might actually appear, such as the widely used image of Cygnus X-1 and its companion star. Under her married name, Melissa Walter, she creates artworks. Speaking about the development of her oeuvre, she said: “Abstraction has been the common thread throughout that evolution as it relates to humanity’s place in the cosmos.”
Eric Heller is a physicist who studies wave phenomena in quantum mechanics, acoustics and oceanography. He’s also a practising artist who creates digital images about scientific subjects. In Black Holes Merging he imagined the pattern two black holes might make when they spiral into each other (see above left).
The popularization of black holes
In the late 1970s popular-science books about black holes began appearing, including Isaac Asimov’s The Collapsing Universe: the Story of Black Holes (1977). Having earned a PhD in chemistry, Asimov drew on a deep knowledge of science and was a skilled storyteller. Another title that contributed to the popular fascination with black holes was Stephen Hawking’s A Brief History of Time: From the Big Bang to Black Holes (1988) and the 1991 film based on it. Inspired by the words of Hawking, the Italian art collective Opiemme painted letterforms surrounding a long shape that symbolizes an event horizon.
Carl Sagan’s book Cosmos (1980) sold five million copies internationally. The related TV series, Cosmos: a Personal Voyage (1980), was hosted by Sagan and shown in 60 countries to 400 million viewers. A sequel, Cosmos: a Space–time Odyssey (2014), hosted by Neil deGrasse Tyson, was shown in 125 countries to 135 million viewers. Sagan and Tyson described many scientific topics, including black holes, which were brought to life by animators.

The impact of these popularizations was felt around the world, and artists in Asia mixed Western science with Eastern philosophy and history. Cai Guo-Qiang was in his 20s when he began experimenting with gunpowder as an artistic medium. When you explode a small amount of gunpowder on paper, it leaves a mark. Cai called these works “gunpowder drawings”. In 1986, at age 29, he moved from his native China to Japan and became enthralled by popular books about astrophysics, especially A Brief History of Time and Cosmos, which he read in translation.
Cai said: “When I came to Japan, my encounters with the theories of 20th-century astrophysics were very significant to me. The concepts of the Big Bang, black holes, the birth of stars, what is beyond the universe, time tunnels, how to leap over great distances of time and space and dialogue with something infinitely far away – these ideas were still not commonly in circulation in China at the time. They were an eye-opener for me. At the same time, many of these ideas have similarities with traditional Chinese views, with which I was familiar, of metaphysics and the universe.”
In 1991 Cai created large gunpowder drawings on paper mounted on wood panels, such as The Vague Border at the Edge of Time/Space Project. Then he joined the wooden panels together, transforming them into traditional Chinese folding screens. He called the series Primeval Fireball: the Project for Projects because his drawings, like the cosmos, exploded into existence.
Lucas J Rougeux was inspired when, in 2014, astronomers watched what appeared to be a cloud of dust (G2) approach Sagittarius A*. They expected the space cloud to be sucked into the black hole, but it survived the encounter. (Astronomers now believe that G2 was a binary star system that orbited the black hole in tandem, eventually merging into an extremely large star.) After learning about G2, Rougeux created a series of artworks about black holes (see above) that were shown in a 2022 exhibition titled The Soul Gravity—Guided to Black. The artist said, “The delicacy and amorphous nature of a space cloud is directly connected to my own sense of queer identity…I am a cloud of space dust. I am a collection of particles dealing with depression. I am weaving through waves of space–time and isolation. My work is the product of this existentialism, loneliness and search for a connection to the sublime.”

In 2022 NASA released a new sonification of the black hole at the centre of the Perseus galaxy cluster, which inspired the photographer John White. He painted the bottom of a petri dish black, filled it with water, and set it on top of a speaker. As he played the sound of the black hole through the speaker, the water began to vibrate. Shooting directly down at the petri dish with a macro lens and a halo light in a darkened room, he captured the vibration in a photograph titled Black Echo (see above).
Immersive art about black holes
Artists create immersive art – artworks the viewer can walk into – to enhance the immediacy of the experience. In 2016 the choreographer Wen-chi Su was an artist-in-residence at CERN, where she met the theoretical physicist Diego Blas, and they discussed the meaning of gravity in dance and astronomy. Su imagined what happens when a body falls into a black hole. Together with her production team, she directed a film in which the sets were animations and the movements of the dancer were captured by motion sensors. Additionally, a surround-sound system immersed the audience in a three-dimensional sound field.

Cao Yuxi (James Cao) is a computer artist who created an artwork about a black hole that he titled Oriens (Latin for “Orient”), giving it the subtitle Immersive Black Hole because the viewer is able to walk around in the space of the artwork (see above). His projection of a sphere on the wall suggests a black hole. A circle symbolizing the event horizon is projected on the floor, and flashing, curving lights communicate distortions in space–time near the black hole.

The American artist Yambe Tam, who merges Western science with Chinese philosophy, has said: “Black holes are a reoccurring theme in my practice. Beyond my interest in theoretical physics, I see connections to the Buddhist philosophical concept of the void/emptiness/nothingness, which is shared more widely with other Eastern spiritual traditions. Rather than signifying a negative space or absence of something, void/emptiness/nothingness is a space of infinite potentiality. It is during the practice of zazen [silent meditation] that I most feel an embodied sense of this – the emptying of oneself, or dissolution of form and ego into pure being.”
Tam’s Cosmic Garden was created to resemble a Buddhist dry garden. From the ceiling hang several of the artist’s sculptures that take the form of bells. One of these sculptures, Wormhole Bell (see above left) has feedback microphones that turn the object into a self-resonating instrument, which helps induce a deep state of meditation. In astronomy, a wormhole is a hypothetical tunnel that connects separate regions of space–time. Tam says: “To me, black holes and the speculative, double-ended form of the wormhole are symbols of transformation – whether the breakdown of classical Newtonian physics to general relativity or the spiritual transcendence one feels in contemplative practices like zazen. Physically, travelling into a black hole is obliteration – a return to pure atomic matter. However, in more philosophical and spiritual terms, a wormhole is an unknowable space of no return, a portal to another side of reality.”
- This is an edited excerpt from Lynn Gamwell’s book Conjuring the Void: the Art of Black Holes (2025 MIT Press 208pp £41 hb). Reproduced with permission, copyright MIT Press. All rights reserved
The post Lure of the black hole: from science to art appeared first on Physics World.
Joint Institute for Nuclear Research ‘deeply embedded’ in Russia’s military efforts, states report
JINR maintains links with almost 700 research centres and universities in 60 countries
The post Joint Institute for Nuclear Research ‘deeply embedded’ in Russia’s military efforts, states report appeared first on Physics World.
A major research institution in Russia is “deeply embedded” in the Russian military and the country’s military-industrial complex. That is the claim of a report by particle physicist Tetiana Berger-Hrynova, who argues that the participation of Joint Institute for Nuclear Research (JINR) scientists in international collaborations should be limited because the institute poses a security threat to Europe (arXiv:2603.21896).
The JINR is an international research centre for nuclear science with 5500 staff members. Prior to 2022, over 1000 scientists from JINR-collaborating organizations visited Dubna each year.
Berger-Hrynova, who is based at the CNRS’s Annecy Particle Physics Laboratory in France, told Physics World that her research was triggered by an article in December 2022 in the New York Times, which found that Kh-101 missiles – a Russian air-launched cruise missile – were produced in Dubna.
“I found that JINR scientists have played a critical role in developing Dubna into a major centre for Russia’s military-industrial complex — through dual-use research, knowledge-transfer programmes and personnel training,” says Berger-Hrynova.
According to Berger-Hrynova, who was born in Ukraine and educated at Liverpool University before doing a PhD at Stanford University, the lack of awareness of the issue has enabled scientists from Dubna to maintain their participation in international collaborations.
“JINR personnel can travel freely to scientific institutions in the EU and the UK, retain access to advanced technologies that can then be transferred to military and security actors through Dubna’s tightly connected research-industrial ecosystem,” claims Berger-Hrynova.
In 2022, following Russia’s invasion of Ukraine, CERN suspended JINR’s observer status at the lab. However, Berger-Hrynova points out that the cooperation never stopped: in 2024 CERN’s council decided to maintain its international cooperation agreement with JINR for five more years, which currently allows 321 JINR scientists to be associated with CERN.
Indeed, JINR is also participating in the Russian Regional Center for Processing Experimental Data from the Large Hadron Collider (LHC) — a critical component of the Worldwide LHC Computing Grid.
JINR also still maintains links with nearly 700 research centres and universities in 60 countries and provides scholarships to physicists from developing countries.
In addition, JINR is actively involved in international and national scientific conferences, hosting up to 10 major conferences and over 30 international meetings, as well as international schools for young scientists. Scientists from France, Italy, Germany, Latvia and other EU countries are also still on the JINR governing committees.
Ukraine applied sanctions against JINR in August 2025 given its alleged connections to military research and Berger-Hrynova now calls on other countries to do likewise.
“The JINR case illustrates how Russian scientific research institutions are used to circumvent sanctions, underscoring the need for coordinated enforcement among Ukraine, the EU, and the G7, as well as greater awareness within the international scientific community,” writes Berger-Hrynova.
Evgeniy Bragin, an official spokesperson for JINR, told Physics World that “nobody from the administration of the Institute can provide comment”.
The post Joint Institute for Nuclear Research ‘deeply embedded’ in Russia’s military efforts, states report appeared first on Physics World.
Michael Frayn on Copenhagen: ‘When I wrote it, I didn’t think it would even be staged’
Chris Sinclair talks to Michael Frayn about a new revival of his classic science play Copenhagen
The post Michael Frayn on <em>Copenhagen</em>: ‘When I wrote it, I didn’t think it would even be staged’ appeared first on Physics World.

When Werner Heisenberg retreated at daybreak to an isolated rock on the island of Helgoland in June 1925 to contemplate his development of quantum physics, he might well have been surprised to know that this moment would be recreated by an actor perched on the back of a chair in a pool of water on a stage over 100 years later.
However, this is exactly what happens in a revival of Michael Frayn’s play Copenhagen, currently at Hampstead Theatre in London.
The play explores Heisenberg’s visit to see Niels Bohr in Nazi-occupied Copenhagen in 1941 and features just three characters, Heisenberg, Bohr and Bohr’s wife Margrethe. The intentions surrounding Heisenberg’s visit have always been unclear, with this uncertainty being central to the play, which was first staged to critical and popular acclaim at the National Theatre, London, in 1998.
The initial success of Copenhagen came as a surprise even to its writer Michael Frayn. “When I wrote it, I didn’t think it would even be staged,” he admitted in an interview with Physics World. Eventually, Copenhagen went on to receive many accolades, including a Tony Award for Best Play, and enjoyed over 300 performances in London and New York.
The new production at the Hampstead Theatre is directed by Michael Longhurst, who told me how struck he was by the level of detail in the play.
“While Frayn is super conscious of this as an act of fiction and theoretical imaging, I don’t think I’ve ever worked on a play that feels like it’s been as rigorously researched,” he says.
“I think there’s a real pleasure and opportunity as a director, when you’re staging plays that are tapping into scientific principles. There is a beautiful probing parallel between the uncertainty of intention and Heisenberg’s uncertainty principle.”

Heisenberg’s involvement in what became the German nuclear-bomb programme is likely to have been a significant factor in his seeking to meet with Bohr, but the beauty of the play is the uncertainty behind the real motivation for the meeting.
As Frayn told Physics World: “The play is about the elusiveness of human intention, so I don’t claim to have a settled view of Heisenberg’s.”
However, Frayn hints that he is most persuaded by Heisenberg’s own account, which he gave many years later, that he wanted to warn the Allies about Germany’s plan to build a bomb, rather than trying to get information from Bohr to help the Nazi programme.
“Bohr’s confirmation in his unsent letter [in 1957],” says Frayn, “that Heisenberg had in fact overridden all normal obligations of wartime secrecy to tell him that Germany was doing research on a nuclear weapon – and that he now believed it was in theory possible to build one – seems to me to go some way to reinforcing the account that Heisenberg himself gave later of his intentions in seeking the meeting in 1941.”
As for the new revival at Hampstead, Longhurst says it is a chance “to engage with an incredible play that hasn’t been seen in London since that original production”.
“I’m very proud of the cast that we’ve assembled in Damien Molony, Richard Schiff and Alex Kingston, who I think are individually and collectively brilliant. I guess what is thrilling about the play when you see it live, and it is three bodies in a contained space, is watching them shift between prosecutor, witness and judge. That triangle of relationships is constantly shifting. I like to imagine them as three entangled souls with an unanswered question.”
- Copenhagen runs at Hampstead Theatre, London, UK until 2 May.
The post Michael Frayn on <em>Copenhagen</em>: ‘When I wrote it, I didn’t think it would even be staged’ appeared first on Physics World.
Gauge theory could give quantum error correction a boost
Concept from theoretical physics could reduce qubit requirements
The post Gauge theory could give quantum error correction a boost appeared first on Physics World.
Concepts from gauge theory could lead to a more efficient way to perform fault-tolerant quantum computation by reducing the number of qubits required for key operations – according to work done by Dominic Williamson and Theodore Yoder at IBM Quantum in the US.
By adapting ideas from gauge theory, the researchers show how quantum information spread out across a machine can be measured using only local checks, significantly lowering computing overhead. Their approach works for a wide class of quantum error-correction codes and could help accelerate the development of practical quantum computers.
One important difference between quantum computers and ordinary computers is how information is stored. Instead of bits, which can be either 0 or 1, quantum computers use qubits, which can exist in a combination of both states at once. Qubits can also be entangled, and it is these and other quantum effects that can be harnessed to solve some problems much faster than conventional computers.
However, this power comes with a major drawback. Qubits are extremely sensitive to disturbances from their environment, which can easily introduce errors. This fragility is one of the main reasons why building large-scale quantum computers is so difficult.
To overcome this, researchers are developing fault-tolerant strategies that allow a quantum computer to continue working correctly even when some of its components fail. Williamson, who is now at Australia’s University of Sydney, describes this as using “carefully designed methods with built-in checks so that, when those checks pass, the final result has not been corrupted”.
Such methods typically store information held in one “logical qubit” across many “physical qubits” so that errors can be detected and corrected. But this protection comes at a cost, often requiring a large number of qubits to perform even simple operations.
Measuring quantum information
In their new work, Williamson and Yoder tackle one of the central challenges in fault-tolerant quantum computing: how to measure information that is spread across many qubits without introducing too many extra resources.
The researchers draw on gauge theory, a concept from mathematical physics. “Gauge theories describe how local interactions can connect distant parts of a system,” Williamson explains. “In our work, we use this idea to measure information that is spread out across many qubits by adding extra helper qubits and performing only local checks.”
In practice, this means breaking down a complicated, global measurement into many small, local ones. By combining the outcomes of these local checks, the overall result can be reconstructed. This avoids the need for large, complex operations that would otherwise require many additional qubits.
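As a toy illustration of that reconstruction step (a deliberately simplified sketch, not the gauge-theoretic construction that Williamson and Yoder describe), consider measuring the parity of four qubits in the computational basis: the outcome of the “global” check can be recovered by multiplying the outcomes of two smaller, local checks.

```python
# Toy sketch (not the IBM construction): reconstructing a global parity
# from local checks. Each qubit value is 0 or 1; a Z-type parity check
# returns +1 for even parity and -1 for odd parity.

def parity(bits):
    """Outcome (+1/-1) of a Z-parity measurement on the given bits."""
    return 1 - 2 * (sum(bits) % 2)

qubits = [1, 0, 1, 1]              # example computational-basis state

# Global check Z1 Z2 Z3 Z4 measured directly...
global_outcome = parity(qubits)

# ...or reconstructed from two local checks, Z1 Z2 and Z3 Z4.
local_outcomes = [parity(qubits[0:2]), parity(qubits[2:4])]
reconstructed = local_outcomes[0] * local_outcomes[1]

print(global_outcome, reconstructed)   # both -1 here: the parities agree
```

In a real error-correcting code the local checks also reveal extra information that must not disturb the encoded data, and handling that extra information carefully is precisely where the gauge-theory ideas and helper qubits come in.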
According to the study, the number of extra qubits required grows only slightly faster than the size of the measurement itself. This is a substantial improvement over earlier methods, where the overhead could increase much more rapidly.
The approach is also flexible and can be applied to a wide range of quantum error-correcting codes. Barbara Terhal at the Technical University of Delft in the Netherlands highlights this point, noting that “the advance in this [work] is that it shows how to do this measurement in a reliable way for any of these codes, and also makes clear how many extra qubits are needed.”
She adds that such measurements are essential because they enable the key steps of quantum computation. “By measuring these operators, you can perform all the key steps needed for a full quantum computation.”
The method is particularly effective when implemented on highly connected structures that allow information to spread efficiently. Williamson notes that, “using this kind of highly connected structure reduces the number of extra qubits needed for fault-tolerant computation.”
Future directions
Despite its advantages, the new method does not remove all obstacles. One important trade-off involves time. Reducing the number of qubits can make computations take longer.
Terhal explains, “There is an inevitable extra time cost when you try to reduce the number of qubits”. In some cases, a system with fewer qubits may need more time to complete a calculation, while one with more qubits could run faster. Finding the right balance remains an open problem.
Another limitation is that the current study is largely theoretical. As Terhal points out, “[This work] focuses on the mathematical side and does not yet study how well the method performs in realistic simulations, which are very important for practice”. Further work will be needed to understand how the approach performs in real devices.
Williamson says, “We are working on ways to reduce the cost even more,” including lowering both the number of qubits required and the time needed to perform computations. He also notes that the method “has already been used in several follow-up studies” and is expected to appear in early fault-tolerant quantum computers in the coming years.
As quantum computing continues to advance, reducing the resources required for error correction will be crucial. By showing how to perform key operations with fewer qubits, the new work offers a promising step toward scalable and practical quantum machines.
The research is described in Nature Physics.
The post Gauge theory could give quantum error correction a boost appeared first on Physics World.
How pictures can help school students learn quantum physics
Muhammad Sabieh Anwar describes a new way to engage students in quantum physics
The post How pictures can help school students learn quantum physics appeared first on Physics World.

Humans perceive knowledge, make decisions and build the consciousness of knowing through vision and speech. This interplay between visual and nonvisual patterns collectively shapes how we learn complex concepts such as quantum physics. That is despite the subject’s reputation as being incomprehensible and difficult to reconcile with our everyday conceptions.
The issue when teaching quantum mechanics also lies in the shortcomings of literary constructs for accurately describing what quantum mechanics really means. As the Hungarian-British philosopher Michael Polanyi once noted: “We always know more than we can tell.” It is hard to accurately capture in language the full meaning of quantum phenomena such as nonlocality, superposition, no-cloning, teleportation, counterfactual quantum computation, delayed choice or the many other uniquely quantum phenomena.
This also means that terms such as wave, particle, superposition and entanglement are not truly complete until followed by detailed calculations or elaboration of their consequences. The result is that introductory quantum mechanics courses often require prerequisite mathematical grounding in complex numbers, matrices, linear algebra and differential equations.
Yet I believe this tortuous preparation can be bypassed – in an accurate, comprehensive and consistent way – simply through “pictures”. With that in mind, we conducted an experiment last year at Government College University in Lahore, Pakistan – alma mater of the physics Nobel laureate Abdus Salam. The four-week-long summer school – Quantum in Pictures – was organized by the Khwarizmi Science Society, a not-for-profit grassroots science association that aims to make scientific education accessible especially for resource-deprived communities.
Some 50 school students attended lectures and demonstrations led by Muhammad Hamza Waseem from the UK firm Quantinuum, who works with Bob Coecke, one of the founders of a pictorial approach towards quantum physics and education.
Most of the students, who had no prior knowledge of quantum mechanics, came from Lahore, while the remainder were from nearby towns and villages where opportunities, especially in advanced fields, are generally minimal. On top of that, classroom engagement is largely discouraged and an outdated model of examination fosters rote learning. Almost half of the participants who attended the school were girls, with 75% of participants aged between 14 and 18 – the youngest being a 13-year-old girl from a village called Syedanwala in Kasur.

To capture ideas about quantum mechanics, we used “string diagrams” as our basis. Such diagrams, simply put, are made using boxes that represent processes. Wires coming in at the top and at the bottom represent the input and output systems being processed by the box. Simulating quantum processes translates to connecting boxes with wires, chopping and straightening wires or sliding boxes along wires like beads on a string.
Even though this formalism is rigorous and derived from category theory, the manner in which it is presented is unhindered by burdensome abstractions. In terms of quantum mechanics, such diagrams are able to capture ideas about how quantum states transform, how quantum operations work as well as counterintuitive notions about measurement.
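For readers who already know some linear algebra, the following minimal sketch (an editorial illustration, not material from the summer school, which deliberately avoids matrices) shows what the pictures correspond to underneath: plugging boxes together along a wire is matrix composition, while placing boxes side by side is a tensor product.

```python
import numpy as np

# A minimal sketch: in the diagrammatic picture, boxes are processes and
# wires carry systems. Connecting boxes along a wire corresponds to matrix
# composition; placing them side by side corresponds to a tensor product.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard box, one wire in/out
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                   # two-wire box

# "H on the first wire, nothing on the second", then feed both wires into CNOT:
circuit = CNOT @ np.kron(H, np.eye(2))

# Applying the composite box to the state |00> yields a Bell (entangled) state.
ket_00 = np.array([1, 0, 0, 0])
print(circuit @ ket_00)   # ~ [0.707, 0, 0, 0.707]
```

The point of the pictorial approach is that students can reason about such circuits by sliding and re-wiring boxes, without ever writing down these matrices.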
A new confidence
When I teach quantum mechanics to undergraduates, colleagues often discourage me from “spilling the beans” on quantum mechanics too early before we have covered the mathematical acrobatics of Hilbert spaces, unitary transforms, eigenvalues and Dirac’s bra-ket notation. Yet I believe school students should relish the counterintuitive repercussions of quantum mechanics much earlier than they currently do. I believe that introducing such aesthetic visuals – an overlooked concept for learning – can make the discipline more comprehensible and attractive to students.
A diagrammatic technique helps to avoid all this and democratizes the knowledge of our quantum world. After all, the future quantum workforce must be trained earlier than ever, given we do not want students missing out on the quantum revolution. In addition, quantum computing is not the purview of physicists alone. Many computer scientists and programmers, who will never be formally trained in physics, will need an initiation in quantum mechanics.
When it comes to making education accessible and within the direct grasp of millions of eager learners, demystifying traditional modes of learning and introducing new approaches helps students and teachers. Learners gain the confidence to ask questions, synthesize connections between bodies of knowledge and prepare themselves for a workforce that may require competency instead of a paper degree.
According to a survey of students who completed the course, 60% engaged in interactive discussions or used the chalkboard to solve problems while 80% asked or responded to questions. For most of these students, this level of engagement with the instructor was a first in their lives. This is the confidence that our liberated students walked away with as they completed their final exams in the Quantum in Pictures summer school.
The post How pictures can help school students learn quantum physics appeared first on Physics World.
Laser-driven free electron laser runs for more than eight hours
Laser stabilization system boosts quality of electron bunches
The post Laser-driven free electron laser runs for more than eight hours appeared first on Physics World.
A laser plasma accelerator (LPA) has been used to power a free electron laser (FEL) for more than eight hours, delivering stable pulses of coherent light. The system was created in the US by researchers at the company Tau Systems and Lawrence Berkeley National Laboratory. The team says that its achievement represents a major breakthrough in stability for LPA-driven FELs, which could someday make coherent UV and X-ray pulses more accessible to academia and industry.
An FEL creates bright pulses of coherent light – usually in the ultraviolet-to-X-ray portion of the electromagnetic spectrum. These pulses are used in a wide range of research including physics, chemistry, biology and materials science.
The pulses are created by sending bunches of high-energy electrons through a device called an undulator, which applies a transverse magnetic field that alternates in direction as the bunch propagates. As the electrons are accelerated back and forth by the field they emit light. Under the right conditions the emitted light interacts with the electron bunch in such a way that the coherence and brightness of the light increases as the electron bunch travels through the undulator.
FELs require a bright and stable source of high-energy electron bunches, so today’s facilities are driven by large and expensive electron accelerators. The European X-ray Free Electron Laser, for example, is located at the end of a 3.4 km linear accelerator.
Surfing a plasma wave
High-energy electron bunches can also be created by firing high-intensity laser pulses at a plasma target. Electrons in the plasma are much lighter than the ions, so they are accelerated more by the intense electric field of the laser pulse. The result is a region of separated positive and negative charge that contains a large electric field. This region trails the laser pulse like the wake of a ship – and is called a wakefield. If electrons are injected into this wakefield, they are captured and accelerated to near the speed of light. The process is similar to how a surfer is propelled by an ocean wave.
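A rough back-of-envelope calculation shows why this matters for size. The numbers below are illustrative assumptions based on typical published wakefield gradients, not measurements from this experiment.

```python
# Back-of-envelope scale of laser-wakefield acceleration (illustrative
# numbers only, not figures from the experiment described here).

gradient_GV_per_m = 50.0        # assumed accelerating gradient, ~tens of GV/m
distance_m = 0.002              # assumed plasma length, a few millimetres

energy_gain_MeV = gradient_GV_per_m * 1e3 * distance_m  # GV/m -> MV/m, times metres
print(f"~{energy_gain_MeV:.0f} MeV gained over {distance_m * 1000:.0f} mm")

# A conventional RF accelerator, limited to gradients of tens of MV/m,
# would need metres of structure to reach the same electron energy.
```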
While LPA-driven FELs would require expensive lasers, their size and cost would be dwarfed by those of accelerator-driven facilities. Today, however, the electron pulses delivered by LPAs are not good enough to drive a FEL. Some shortcomings are related to fluctuations in the focal point of the laser, as well as changes in the pulse energy and duration. These fluctuations can be caused by mechanical vibrations, temperature fluctuations and other environmental disturbances.
Founded in 2021, the Texas-based company Tau Systems is developing practical LPAs for a range of applications including FELs. Now, the company has joined forces with researchers at Berkeley Lab’s BELLA Center to implement a set of laser-stabilization technologies on BELLA’s Hundred Terawatt Undulator beamline.
The team implemented five active systems that worked together to stabilize the focal point of the powerful laser. Some of this was done using a “ghost” beam – a low-power copy of the driving beam – to observe subtle fluctuations that would not be apparent by monitoring the main beam.
High-quality bunches
As a result the system delivered bunches of 100 MeV electrons at a frequency of 1 Hz and at high stability for over 10 h. These bunches were then used to drive a self-amplified spontaneous emission (SASE) FEL based on a 4 m-long undulator that is embedded within a vacuum chamber.
The LPA–FEL delivered violet (420 nm wavelength) pulses for more than 8 h without any human intervention. The FEL gain of the system was about 1000, which is the ratio of the brightness of the emitted coherent FEL pulse to the brightness of the light emitted by the undulator without amplification.
This run is a significant improvement on the team’s 2025 achievement of using a LPA–FEL setup to deliver pulses of similar quality for an hour.
“This is the moment the community has been working toward,” says Stephen Milton of Tau Systems. “We have shown that an LPA-driven FEL is not just a proof-of-concept experiment. It is a platform capable of delivering the stability that real scientific and industrial users demand.”
Finn Kohrell of the BELLA Center adds, “Maintaining FEL stability for a record eight hours represents a significant advancement in LPA-driven FELs and provides deeper insights both into achieving optimal FEL performance and into validating LPAs as high-brightness injectors, which is crucial for LPA application in future light source facilities”.
During operation, the team gathered data about the stabilization process and mapped correlations between the parameters of the drive laser; the plasma source; the electron bunches; and the FEL’s output pulses. The researchers are now using this information to improve their control systems and they say that these data indicate that further gains in stability and brightness are possible.
The next experimental step will involve increasing the FEL energy to their system’s maximum value of 500 MeV.
“At this level, we can lower the undulator radiation wavelength to the 20–30 nm range, placing it in the hard ultraviolet or soft X-ray regime,” explains Kohrell. “[This would be] a crucial step toward making the technology viable for real-world applications.”
The new system is described in Physical Review Accelerators and Beams.
The post Laser-driven free electron laser runs for more than eight hours appeared first on Physics World.
Stanford Medicine unveils world’s first ultracompact proton therapy facility
Combining the industry’s most compact cyclotron with an upright positioning system creates a proton therapy system small enough to fit into a linac vault
The post Stanford Medicine unveils world’s first ultracompact proton therapy facility appeared first on Physics World.
Stanford Medicine has opened a new proton therapy facility – featuring an ultracompact treatment system that’s small enough to fit in a room the size of a conventional linear accelerator vault.
Proton therapy is an advanced cancer treatment that offers precise tumour targeting while minimizing dose to healthy tissues. The technique is particularly beneficial for treating tumours located near critical structures and for treating cancers in children. Currently, however, access to proton therapy is limited by its high costs and substantial space requirements.
The new treatment facility – opened earlier this week at Stanford Medicine Cancer Center in Palo Alto, CA – incorporates the S250-FIT proton therapy system from Mevion Medical Systems, the most compact cyclotron in the industry. But even with a much smaller accelerator, proton therapy delivery usually requires a bulky gantry that rotates around the patient to aim the proton beams at the optimal treatment angles. As such, most proton facilities need a whole new multi-storey building to be built just to fit everything in.
To eliminate this obstacle, the Stanford facility is using a positioning system from Leo Cancer Care to deliver protons via a novel approach known as upright radiotherapy. Here, the patient is treated in an upright position (rather than lying down) and rotated in front of a static treatment beam, removing the need for a gantry and slashing space requirements and installation costs.

By combining these advanced technologies, the new equipment fits into a standard 1200 sq. ft linear accelerator vault (as used for standard X-ray-based radiotherapy) and was installed without having to construct a new building.
The advanced system also incorporates built-in CT scanning, enabling extremely precise targeting of tumours within patients with minimal collateral damage to the rest of the body.
“Developing this novel approach to proton therapy at Stanford Medicine, in collaboration with our industrial partners Mevion and Leo Cancer Care, gives us an important additional tool to treat our patients in a personalized, case-by-case way,” says Billy Loo, professor of radiation oncology and co-director of particle therapy at Stanford Medicine. “We are excited to pioneer this world’s first ultracompact and efficient technology that will benefit not only patients at Stanford but expand access to proton therapy worldwide and improve patient outcomes.”
“This milestone really marks the transition from concept and theory to clinical reality,” adds Leo Cancer Care’s CEO Stephen Towe. “Proton therapy installed inside a linac vault always felt like an impossible goal – our partnership with Stanford and Mevion has made that vision possible.”
Loo tells Physics World that patient treatments on the new proton therapy system are likely to start this summer. “As with any first-of-its-kind system in medicine, introducing this complex technology requires a rigorous process of testing and optimization to ensure it meets our high standards for patient safety and treatment quality,” he explains. “We are moving through these steps now.”
The Stanford Medicine team emphasize the particular advantages of proton therapy for children, not least that it can really decrease the radiation dose delivered to normal tissues. Minimizing irradiation of sensitive developing tissue can dramatically reduce the risk of long-term side effects. In addition, treating children while they are sitting up and actively engaged may be far less intimidating for them than having to lie down and have the treatment “happen to them”.

The first proton treatments will likely be “cranial and head-and-neck sites, for both adults and selected paediatric patients, for which we already have established patient positioning solutions,” says Loo. In parallel, the radiation oncology team will develop the workflows and immobilization solutions for all other anatomic sites.
The team also plans to investigate new ways to advance the technology and explore the clinical advantages of delivering upright radiotherapy. For example, evidence suggests that for some diseases, such as lung cancer, upright treatment puts the targeted organ in a more favourable position to irradiate safely. Upright positioning also provides greater flexibility to deliver radiation from many different angles. The team will also study the impact of upright positioning on FLASH treatments, in which radiation is delivered at ultrahigh dose rates.
Looking ahead, nine other medical centres are installing this new ultracompact proton therapy system, ultimately making proton therapy increasingly accessible to patients around the world.
“The clinical data to support the use of protons is stronger than ever before,” says Towe. “The strength of this data, combined with the cost reductions delivered by Leo’s technology, has sparked a new wave of growth for protons globally.”
The post Stanford Medicine unveils world’s first ultracompact proton therapy facility appeared first on Physics World.
Have you published a disruptive paper? New machine-learning tool helps you check
The tool could be used to spur transformative breakthroughs
The post Have you published a disruptive paper? New machine-learning tool helps you check appeared first on Physics World.
Scientists in the US have unveiled a new machine-learning tool that, they claim, can identify disruptive scientific breakthroughs. They say their method, which assesses how much a paper reshapes its field, is better than other techniques at spotting such disruptions even if they are simultaneously discovered by independent research groups (Sci. Adv. 12 eadx3420).
The work examined 55 million papers listed by Web of Science and the American Physical Society (APS) published between 1893 and 2019. The papers were mapped using a machine-learning technique known as neural embedding, with each publication represented by two vectors. The first vector characterizes the body of work the paper builds on, while the second represents the research it inspires.
Papers that disrupt tend to cause future research to depart significantly from previous work in the field, making these “past” and “future” vectors diverge sharply. The greater the divergence, the higher the paper’s so-called Embedding Disruptiveness Measure (EDM) score.
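As a hedged illustration of the general idea only, the sketch below scores a paper by the cosine distance between a “past” vector averaged over the works it cites and a “future” vector averaged over the works that cite it. The averaging and the choice of distance here are assumptions for illustration; the published EDM has its own precise definitions.

```python
import numpy as np

def disruptiveness(past_vectors, future_vectors):
    """Toy version of an embedding-based disruptiveness score.

    past_vectors   -- embeddings of the works a paper builds on
    future_vectors -- embeddings of the works it inspires
    Returns the cosine distance between the averaged 'past' and 'future'
    vectors: the more the later literature departs from the earlier one,
    the larger the score.  (Illustrative only; the published EDM uses its
    own embedding and divergence definitions.)
    """
    past = np.mean(past_vectors, axis=0)
    future = np.mean(future_vectors, axis=0)
    cos_sim = past @ future / (np.linalg.norm(past) * np.linalg.norm(future))
    return 1.0 - cos_sim

rng = np.random.default_rng(0)
cited = rng.normal(size=(20, 64))                          # papers it builds on
citing_similar = cited + 0.1 * rng.normal(size=(20, 64))   # incremental follow-up work
citing_new = rng.normal(size=(20, 64))                     # the field moves somewhere new

print(disruptiveness(cited, citing_similar))   # small score
print(disruptiveness(cited, citing_new))       # larger score
```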
The team, based at Indiana and Binghamton universities, tested their EDM technique against Nobel-prize-winning papers and milestone publications as selected by APS editors. The EDM identified these landmark contributions as being highly disruptive.
The researchers discovered that the EDM was more consistent at spotting such papers than similar metrics, such as the “disruption index”, which focuses more on a publication’s closest citations. While this makes it sensitive to individual citations, it can miss the bigger picture, the researchers found.
The team discovered that the 10 papers with the biggest difference between the EDM and the disruption index were all examples of “simultaneous disruption”. This is where multiple papers have independently reached the same conclusion, or scientists have split their work across multiple publications. Citations that linked these simultaneously disruptive papers weakened their disruption index.
One notable example is the two 1974 papers announcing the discovery of the J/ψ meson. As both groups cited each other, the disruption index ranked these publications in the bottom 1% of disruptive papers while the EDM placed them both in the top 10%. A similar pattern was seen for the two 1964 papers – one by Peter Higgs and the other by François Englert and Robert Brout – on the Higgs mechanism.
The team claims that the EDM also provides a new way to detect simultaneous discoveries, finding that papers that report the same breakthrough tend to be cited in similar contexts by later work, meaning their “future” vectors cluster together.
“By having more accurate metrics, we can actually investigate where the disruption is happening in the map of science,” says data scientist Sadamori Kojaku from Binghamton University.
The researchers say their tool could help science funding and policy to drive transformative breakthroughs. “It can have significant implications for science policy and it’s also helpful for prioritizing funding,” adds Kojaku. “We now have the quantitative metrics to investigate at which stage of research the disruptive work occurs and matters most.”
The post Have you published a disruptive paper? New machine-learning tool helps you check appeared first on Physics World.
Backing winners in deep tech: physicist and venture capitalist Alexandra Vidyuk
Our podcast guest began her career with a BSc in applied mathematics and physics
The post Backing winners in deep tech: physicist and venture capitalist Alexandra Vidyuk appeared first on Physics World.
The physicist and venture capitalist Alexandra Vidyuk is our guest in this episode of the Physics World Weekly podcast. She is the chief executive and founding partner of Beyond Earth Ventures, which provides funding and support to early-stage companies in deep-tech sectors including space, robotics and energy.
In conversation with Physics World’s Margaret Harris, Vidyuk explains how her BSc in applied mathematics and physics and her early career in banking and fintech set her on a path to deep-tech venture capital.
Vidyuk talks about the specific challenges facing deep-tech entrepreneurs and reveals what she looks for when deciding which companies to fund. She also emphasizes the importance of building an organization that understands its customers and can communicate effectively with them.
The post Backing winners in deep tech: physicist and venture capitalist Alexandra Vidyuk appeared first on Physics World.
Word wave puzzle no.2
Can you guess the physics-related word in this puzzle?
The post Word wave puzzle no.2 appeared first on Physics World.
Here’s how the game works:
- Enter a word guess – in this game the word has six letters.
- After submitting your guess, each letter in the guessed word is coloured to provide feedback:
- Green: The letter is correct and is in the correct position in the target word.
- Yellow: The letter is correct but is in the wrong position in the target word.
- Grey: The letter is not in the target word at all.
- Using this colour feedback, refine your next guess.
- Continue guessing until you correctly identify the hidden word(s) or run out of attempts.
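For readers who like to see the rules spelled out as code, here is a minimal sketch of the colouring logic described above (a generic guessing-game scorer, not the actual code behind this puzzle).

```python
def colour_feedback(guess, target):
    """Colour each guessed letter: 'green' (right letter, right place),
    'yellow' (in the word but in the wrong place) or 'grey' (not in the word).
    Repeated letters are only marked yellow while unmatched copies remain."""
    colours = ["grey"] * len(guess)
    remaining = []

    # First pass: greens, collecting the target letters not yet matched.
    for i, (g, t) in enumerate(zip(guess, target)):
        if g == t:
            colours[i] = "green"
        else:
            remaining.append(t)

    # Second pass: yellows, consuming one unmatched target letter per hit.
    for i, g in enumerate(guess):
        if colours[i] == "grey" and g in remaining:
            colours[i] = "yellow"
            remaining.remove(g)

    return colours

print(colour_feedback("planck", "photon"))
# ['green', 'grey', 'grey', 'yellow', 'grey', 'grey']
```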
If you need any hints, read this recent feature article.
Fancy some more? Check out our puzzles page.
The post Word wave puzzle no.2 appeared first on Physics World.
Advent Research Materials wordsearch
Try to find all the materials in this fiendish word search
The post Advent Research Materials wordsearch appeared first on Physics World.
Advent Research Materials is an Oxford-based specialist supplier of high-purity metals, alloys and polymers to the global scientific research community.
With a catalogue of over 10,000 items, ISO 9001:2015 accreditation, and more than 35 years of experience supplying researchers, universities and industry, Advent is a precision materials partner trusted worldwide.
All products are held in stock and available for rapid dispatch.
The post Advent Research Materials wordsearch appeared first on Physics World.
Want to make a peptide material go from soft to stiff? Just add water
Discovery could facilitate the large-scale fabrication of materials that adapt to changing conditions
The post Want to make a peptide material go from soft to stiff? Just add water appeared first on Physics World.
Protein molecules are highly dynamic, continually changing shape in response to changes in external conditions. Scientists have long sought to mimic this behaviour in artificial materials, and now a team at the City University of New York (CUNY) in the US has done just that, constructing a crystalline solid that switches between several distinct architectures as the ambient humidity changes. Their work could make it easier to fabricate adaptive materials on a large scale for applications such as humidity-responsive coatings.
Proteins owe their shape-shifting character to a series of complex interactions that take place between two or more molecules. These supramolecular interactions, as they are known, allow proteins to adapt their properties – and therefore their functions – as needed. Water plays an important role in such interactions because it stabilizes certain structures while weakening others.
“Stripped-down” versions of protein behaviour
In the new work, researchers led by CUNY chemist Rein Ulijn and chemical engineer Xi Chen studied peptides, which are the molecular building blocks that make up proteins. In particular, they focused on leucine (L) and isoleucine (I), which are isomers, meaning they have the same chemical formula but different structures. “Such short peptides give us access to ‘stripped-down’ versions of protein behaviour,” explains Ulijn, who is also the founding director of CUNY ASRC Nanoscience Initiative. “They’re simple enough to design systematically, but still rich enough to encode sometimes surprisingly complex and dynamic behaviour.”
They found that when the chemical potential of water in the system – effectively, the humidity – changed, the solid-state porous architecture of LI crystals reorganized, reversibly switching between rigid perpendicular/parallel honeycomb structures and layered soft van der Waals structures. Importantly, Ulijn explains, this transformation occurs without compromising the peptides’ overall structural integrity.
“What makes this particularly significant is that most dynamic supramolecular systems are limited to relatively minor changes in organization,” he says. “In contrast, the peptide side chains in our system undergo very dramatic conformational reorganization, which translates into the topological changes observed.”
Ulijn adds that this process offers a completely new way to design materials that can switch between distinct structural states. “This opens the door to solid materials that are both robust and highly adaptable, a combination that is difficult to achieve with existing approaches,” he tells Physics World.
A new toolbox for designing dynamic solid-state materials
The researchers say they undertook their study to address a “fundamental gap between biological systems and synthetic solid-state materials”. Although proteins routinely undergo sequence-encoded conformational changes to access multiple functional states in solution, replicating this kind of dynamic behaviour in solid materials has been a major challenge. “Our goal was to create a minimalist, peptide-based system that could mimic this adaptability without relying on large, complex structures and that could be triggered by low energy inputs,” they explain.
The team says the work provides a new toolbox for designing dynamic solid-state materials with tuneable topology and function, which could potentially impact a wide range of fields. One potential application is the development of adaptive materials with switchable mechanical properties, where stiffness and softness can be controlled through environmental humidity or temperature. “This could be useful in soft robotics, responsive coatings, or smart structural materials,” Chen notes.
The researchers are now studying other peptide structures in hopes of better understanding the fundamental rules for conformational control of short peptides. Ultimately, they say this programme should lead to specific design rules for porous peptide materials, making it possible to explore a broader range of sequences and side-chain chemistries. “We are also interested in scaling these materials to enable practical demonstrations in hydration-responsive coatings,” Chen adds.
The team reports its work in Matter.
The post Want to make a peptide material go from soft to stiff? Just add water appeared first on Physics World.
The dark heart of the lithium-ion battery revolution
James Dacey reviews The Elements of Power: a Story of War, Technology and the Dirtiest Supply Chain on Earth by Nicolas Niarchos
The post The dark heart of the lithium-ion battery revolution appeared first on Physics World.
In a book about batteries, you might not expect the author to be detained by Congolese secret police because he attempted to meet a rebel warlord whose militia has been linked with cannibalism. But that’s exactly what happened when journalist Nicolas Niarchos was doing research for The Elements of Power: a Story of War, Technology and the Dirtiest Supply Chain on Earth.
In his debut book, Niarchos dives into the global supply chain of critical metals for lithium-ion (Li-ion) batteries. Nowadays Li-ion technology powers electric vehicles, laptops and smartphones, and provides backup for renewable energy when the Sun stops shining and the wind stops blowing. The critical metals in these batteries come from all corners of the Earth. In 2024 Australia, Chile and China were the top three producers of lithium; Indonesia produced over half of the world’s nickel; and the Democratic Republic of Congo (DRC) dominated the cobalt mining industry.
Building on his earlier reporting for The New Yorker and other outlets, Niarchos shines a light on the dark underbelly of green tech. He takes the reader from the underprivileged mining communities extracting the raw materials, to the global superpowers profiting from Li-ion technology.
This is a story of geopolitics, deep-rooted inequality, and history repeating itself. In recent decades, governments, corporations and opportunistic intermediaries have jostled for the lion’s share of resources in mineral-rich countries. As in colonial times, wealth has again concentrated in the hands of a few, while communities near the resources bear the costs of greed and corruption.
“The world is facing the biggest supply–demand dislocation in living memory with critical metals,” writes Niarchos.
The race to develop and commercialize
In The Elements of Power, Niarchos includes the history of Li-ion batteries and their commercialization. Key scientific figures include British chemist Stanley Whittingham, US solid-state physicist John Goodenough, and Japanese chemist Akira Yoshino, who all shared the 2019 Nobel Prize in Chemistry for their breakthroughs that led to commercial Li-ion batteries.
Whittingham laid the foundations in the 1970s when his work on fast ionic transport in solids led to a cathode made from titanium disulphide that could house (or “intercalate”) lithium ions. Goodenough then introduced a lithium cobalt oxide cathode – raising the battery voltage and making it less explosive – before Yoshino took the final step to a commercially viable battery by adding a carbon-based anode in 1985.
Niarchos highlights how Japan failed to capitalize on this early lead. Although Japanese firm Sony released the first Li-ion battery in 1991, production and commercial impetus soon switched to China and South Korea. In fact, at the turn of the millennium, Japan controlled 90% of the Li-ion market, but by 2012 Sony’s value had dropped to one-ninth of Samsung’s in South Korea.
The electrification of transport has been a key application of China’s push for Li-ion batteries – it drives economic growth and tackles air pollution. The speed of progress is striking. In 2018 China produced 1.26 million electric cars over the course of the whole year. By 2024 it was producing a million in a month.
To fuel battery demand, Beijing has steadily strengthened its foothold in places like the DRC and Indonesia. Niarchos highlights the 2007/2008 Sicomines “minerals-for-infrastructure” deal, which was a major, yet controversial, partnership made between the DRC government and a group of Chinese investors. It swapped massive copper/cobalt mining rights in the DRC for $6bn in Chinese-financed infrastructure, which has been slow to materialize.
Niarchos shows how China’s economic miracle has been fuelled by ruthless geopolitical pragmatism in strengthening its mining deals over decades, but also how the US administration’s manoeuvrings in places like Greenland are an unsubtle sign that it intends to catch up.
Inevitably, Elon Musk and Tesla make several appearances in the book. For example, Niarchos includes how a futuristic Tesla Gigafactory near Berlin, Germany, was attacked by protestors. The episode reveals the conundrum facing progressives in the West who want to go green but in the right way, led by the right people.
The people behind the metal
While Niarchos looks at how global superpowers profit from Li-ion technology, it’s his reporting on the sources of critical metals that reveals the truly dark side of the supply chain.
Cobalt, often used in Li-ion battery cathodes, is perhaps the starkest example of the problem, and the book gives particular attention to its production and the mining practices in the DRC. More than 70% of global supply comes from the DRC, with most mined in the mineral-rich Katanga region, which comprises the provinces of Tanganyika, Haut-Lomami, Lualaba and Haut-Katanga. Extreme poverty is rife, cholera outbreaks are common, and conflict has displaced hundreds of thousands.
One of the book’s strengths is how Niarchos weaves the story of Li-ion batteries with the social history of the DRC. In works like this, the human sections often provide light relief from dense scientific explanations. Here, the opposite is true, as the cycles of violence and exploitation against the Congolese people – which go back centuries – make for grim reading.
What is now the DRC was colonized in the late 19th century by Belgium’s King Leopold II, and forced labour, starvation, violence and mass death were inflicted on the Congolese people in relation to the ivory and rubber trade. While the country gained its independence from Belgium in 1960, the turbulence of power struggles and civil war has led to deeper corruption, opaque webs of international finance, and foreign magnates whose dealings raise eyebrows among global watchdogs. Today the country seems haunted by its past, trapped by the cruelty of power dynamics and the corrupting influence of promised wealth.
The most resonant pages of The Elements of Power describe modern daily life in the Katanga region. Most people see barely a trickle of the vast mineral wealth they help dig up. In 2020 some 74 million Congolese lived below the poverty line of $2.15 a day, and 43% of children in the country were malnourished.
Many adults and children resort to digging for mineral seams using rudimentary tools and minimal safety gear. Referred to as “artisanal” miners by multinational corporations but known as creuseurs (French for digger or burrower) in the DRC, they often come from the very poorest stratum of society and do not have the education or the contacts to get jobs with the mining corporations that have official permits to extract the cobalt. Just in Kolwezi – the capital city of the Lualaba Province with a population of nearly 600,000 – an estimated 170,000 of these unofficial miners dig for the black ores, which they then sell to unscrupulous intermediaries.
One of the saddest passages is when Kolwezi resident Françoise Ilunga describes how her husband was crushed and suffocated, along with at least 150 other creuseurs, after a tunnel collapsed in the city. Unable to get official jobs, the miners had entered a secluded part of a cobalt mining site without permits or safety gear to find ore to sell so they could support their families. The mine was run by the Anglo-Swiss multinational Glencore (which incidentally had to pay $700m in 2022 relating to bribery offences in several African nations). Françoise and her family spent two days digging up her husband’s body.
It is easy to see how cycles of poverty have been sustained in the DRC. Niarchos interviews children who say they mined out of necessity for food and clothes. In their villages and towns, conflict still bubbles under. When Niarchos is detained by the DRC’s secret police, he had planned to meet a man called Gédéon, whose militia group, Bakata Katanga, has agitated for a separate Katanga state. Niarchos had heard a rumour that Gédéon was funding himself through artisanal mines. You’ll need to read the book for the full story, but it’s fair to say Niarchos won’t be returning to the DRC anytime soon.
Save solutions for another day
While The Elements of Power touches upon some solutions – such as recycling batteries, and sodium- and sulphur-based alternatives to Li-ion batteries – no fully scalable solution is presented. And at times, I found the web of organizations and individuals hard to follow. I’m also a bit of a geology geek, so I wish there were a bit more on why the DRC is blessed with so many critical minerals in the first place.
That said, the book feels incredibly timely given the current state of geopolitics. It is essential reading for anyone who cares about the origins of materials powering their phones, cars and many other aspects of daily life in wealthy nations. It shines a light on how difficult it is to know what percentage of critical minerals in your devices has come from ethical sources, despite what tech companies might say.
If there is a key takeaway, it’s that any system-wide solution for greener, ethical mining must consider the entire supply chain. Above all, we should listen to people on the ground sourcing the raw materials that make our shiny new technology possible. A supply chain is only as clean as its grubbiest link.
- 2026 William Collins 480pp £25 hb
The post The dark heart of the lithium-ion battery revolution appeared first on Physics World.
A new explanation for negative thermal expansion
Revealing how copper atoms shift under heat offers a blueprint for engineering materials with precisely controlled expansion
The post A new explanation for negative thermal expansion appeared first on Physics World.
Most materials expand when heated because increased atomic vibrations push atoms slightly farther apart. However, some unusual materials, such as α‑Cu₂V₂O₇, instead shrink when heated, a phenomenon known as negative thermal expansion. Although this behaviour had been observed before, its underlying mechanism was not well understood. In this study, the researchers examined α‑Cu₂V₂O₇ from 5 K to 800 K using neutron diffraction, synchrotron X‑ray diffraction, Raman spectroscopy, and first‑principles calculations. They found that the material exhibits three distinct thermal‑expansion regimes: almost no expansion below 35 K, strong negative thermal expansion between 35 K and 550 K, and normal positive expansion above 550 K.
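For readers who want the sign convention spelled out – this is just the textbook definition, not something specific to the new study – the volumetric thermal-expansion coefficient is

$$\alpha_V(T) = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P,$$

so “negative thermal expansion” simply means α_V < 0; in α-Cu₂V₂O₇ that holds over the unusually wide window between roughly 35 K and 550 K.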
The origin of this behaviour lies in how copper atoms move within distorted CuO₆-like octahedra. At the lowest temperatures, a quantum effect called the second-order Jahn–Teller effect pushes the copper atoms off-centre, but this motion is partly suppressed by the onset of antiferromagnetic ordering, which stabilises the structure and produces near-zero thermal expansion. As the temperature increases, the second-order Jahn–Teller effect weakens, allowing the copper atoms to shift back toward the centre of their octahedra, but in opposite directions along different structural chains. This anti-off-centering motion compresses the Cu–Cu zigzag chains and also reduces the spacing between neighbouring chains, pulling the structure inward and producing the observed negative thermal expansion.

The researchers also found that the copper atoms have unusually large vibrational freedom along one axis, which helps enable this motion. Raman spectroscopy revealed an anomalous broadening of a low-frequency vibrational mode, providing evidence for electron–phonon coupling that further supports the proposed mechanism. Together, these effects explain the unusual thermal behaviour of α-Cu₂V₂O₇ and offer valuable insight for designing materials with controlled thermal expansion, which is important for precision engineering, electronics, and composite materials that must remain dimensionally stable across temperature changes. This mechanism, centred on the Jahn–Teller effect, can also be extended to a wide range of transition metal oxide systems, providing a universal theoretical foundation for systematically explaining the anomalous thermal expansion behaviour of such materials.
Read the full article
Jahn–Teller distortions induced strong negative thermal expansion in α-Cu2V2O7
Xiangkai Hao et al 2026 Rep. Prog. Phys. 89 018005
Do you want to learn more about this topic?
Negative thermal expansion and associated anomalous physical properties: review of the lattice dynamics theoretical foundation by Martin T Dove and Hong Fang (2016)
The post A new explanation for negative thermal expansion appeared first on Physics World.
How topological surfaces strengthen magnetism
Topological surface states are found to mediate a strong, non‑oscillatory interaction that aligns magnetic moments and enhances ferromagnetic order
The post How topological surfaces strengthen magnetism appeared first on Physics World.
In this work the researchers explore what happens when a topological insulator is placed next to a two‑dimensional ferromagnetic insulator. Experiments have shown that this arrangement dramatically increases the ordering temperature of the ferromagnet. The theoretical study demonstrates that the surface electrons of the topological insulator mediate interactions between the magnetic moments in the neighbouring ferromagnetic material, strengthening its overall magnetism.
There are two main ways electrons in a nearby material can act as messengers between magnetic moments. The first is the well-known Ruderman-Kittel-Kasuya-Yosida interaction, which arises in a metal from electrons at the Fermi level that produce long-range, oscillatory coupling, typically in a regime where magnetic moments are sparse. The second is the often-overlooked Bloembergen-Rowland interaction, which in fact turns out to dominate in this system. This mechanism comes from virtual transitions between the valence and conduction bands of the topological insulator surface states and leads to strong, short-ranged ferromagnetic interactions between the dense magnetic moments.

Identifying the Bloembergen-Rowland interaction is significant because it naturally enhances ferromagnetism: it is strong, it does not oscillate, and it keeps the magnetic moments aligned. Due to the spin-momentum locking of the topological insulator’s surface states, this interaction also has a built‑in anisotropy that favours out‑of‑plane magnetic alignment. The researchers show that the increase in the magnetic ordering temperature is directly proportional to the Van Vleck susceptibility of the topological insulator’s surface electrons.
The study also examines how hybridisation between the top and bottom surfaces of a thin topological‑insulator film modifies the mediated interaction and affects the magnetic ordering temperature. This analysis helps explain recent experimental results in heterostructures made from chromium telluride and bismuth-antimony telluride. Overall, the work clarifies how topological surface states influence magnetism in these layered systems and provides a foundation for designing improved devices in spintronics, magnonics, and quantum technologies.
Read the full article
Enhancement of Curie temperature in ferromagnetic insulator-topological insulator heterostructures
Murod Mirzhalilov et al 2026 Rep. Prog. Phys. 89 018004
Do you want to learn more about this topic?
Characteristics and controllability of vortices in ferromagnetics, ferroelectrics, and multiferroics by Yue Zheng and W J Chen (2017)
The post How topological surfaces strengthen magnetism appeared first on Physics World.
‘Nano-aquariums’ deliver atomic-resolution imaging
Transmission electron microscopy probes solid–liquid interfaces
The post ‘Nano-aquariums’ deliver atomic-resolution imaging appeared first on Physics World.
Graphene liquid cells have been used to study atoms dissolved in organic solvents at atomic-scale resolution. Through a combination of smarter material choices and machine learning techniques, a team led by Sarah Haigh at the University of Manchester showed how these graphene “nano-aquariums” can work with virtually any type of solvent – offering deeper insights into the atomic-scale properties of solids left behind when solvents dry out.
To understand the atomic interactions taking place at solid–liquid interfaces, researchers will often start by sandwiching liquid samples between pairs of transparent films. In most cases, they will then use transmission electron microscopy (TEM) to create atomic-scale images of these interactions. This involves irradiating the sample and films with a tightly focused electron beam.
“These windows need to be as thin as possible to get the best resolution,” explains Manchester’s Nick Clark. “Graphene is just about the thinnest window possible, and over the past decade or so it’s enabled atomic-resolution imaging of solid nanoparticles inside liquids.”
Uncontrollable evaporation
So far, however, these graphene liquid cells have proven difficult to work with. As liquid samples are sealed inside these cells, the solution often evaporates uncontrollably, creating significant variability in the sample’s concentration. In addition, most organic solvents are incompatible with the soft polymer membranes used to support the graphene films during the sealing process, limiting previous studies to mild aqueous solutions.
To address these challenges, Haigh’s team replaced the polymeric supports with stiff ceramic cantilevers. These offer similar levels of mechanical stability while being far more chemically inert. As a result, the cells can be sealed mechanically while fully immersed in liquid. This prevents the sample from drying out during sealing, while also making the process compatible with virtually any solvent.
The resulting graphene cells are remarkably stable, which allows the team to collect large numbers of images via repeated irradiation by the TEM electron beam.
“We combined this with neural-network-based denoising to minimize the signal-to-noise ratio required to extract atomic coordinates, and a fully automated analysis workflow,” Clark adds. “This enabled us to collect enough atomic coordinates to draw representative conclusions.”
Individual gold atoms
With this combination of techniques, the team could resolve individual gold atoms and the graphene lattice beneath them, and examine how the behaviour of gold atoms at the graphene-liquid interface varied with their choice of organic solvent.
With their rapid TEM imaging, they could track over one million gold adatoms – single atoms which adsorb to a solid surface – and account for the dynamic, interconnected behaviours of structures formed from pairs, triplets, and larger clusters of adatoms.
Chemists have long known that these behaviours are strongly connected to the catalytic properties of the solid material left behind when the solvent dries out. For the first time, however, this approach allowed Haigh’s team to explore in detail how these properties depend on the choice of solvent.
“We were able to decouple the actual liquid phase dispersion from the drying process, and showed how both must be controlled to generate isolated atoms on the final dried support – which we know gives the most active catalytic materials,” Clark explains.
Through further improvements to their technique, Haigh, Clark and their colleagues are confident it could drive advances across a range of real-world technologies. “We hope that our new characterisation approach will allow us to help those working on catalysis, or batteries, or liquid filtration to understand what’s happening at the solid-liquid interfaces in their devices at atomic scale,” Clark says.
The research is described in Science.
The post ‘Nano-aquariums’ deliver atomic-resolution imaging appeared first on Physics World.
Pollen dispersion study offers hope for hay fever sufferers
Studying how pollen is dispersed from trees when the wind blows could help urban planners mitigate future exposure to airborne pollen grains
The post Pollen dispersion study offers hope for hay fever sufferers appeared first on Physics World.

Researchers in France have developed a novel method to investigate how pollen is dispersed from trees when the wind blows – paving the way for new approaches to urban planning that could help alleviate the symptoms of seasonal hay fever.
A project team headed up at the University of Rouen Normandy has discovered for the first time that different trees can exhibit different local dynamics for the transport of pollen grains – for example, when pollen is dispersed by wind – and that this behaviour depends on the local detachment force of pollen grains occurring at the scale of each flower inside the tree.
As part of the project, outlined in the paper Flow and plants: On the dispersion of wind-induced tree pollen, published in Physics of Fluids, the researchers developed an innovative direct-forcing porous immersed boundary method (DF-PIBM) to explore wind-driven pollen dispersion and transport from trees.
“The research investigates, through advanced physics-based modelling and simulations, the impact of tree types and their interaction with wind on the local dispersion of pollen grains in the surrounding environment,” says lead author Talib Dbouk, a researcher in the CORIA Lab, CNRS, at the University of Rouen Normandy.
As Dbouk explains, the team’s approach involved the use of a range of advanced computational fluid dynamics (CFD) modelling and simulation techniques to solve the local air flow around and within the trees, taking into account the interaction between the air flow and the pollen grains in and/or on the tree flowers.
“The DF-PIBM is an advanced numerical technique developed to accurately solve the local resistance of a tree to wind by assuming that, thanks to its leaves, a tree can [act] as a porous medium, where the local porosity inside the tree will depend on its leaf area density,” he adds.
According to Dbouk, this method was “derived, implemented and validated in an in-house CFD code”, first by testing different flow configurations around and within porous spherical particles – and then by extending and applying it to different types and structures of trees.
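The paper’s exact drag closure is not reproduced here, but a common way to represent vegetation in CFD codes of this kind is to add a Darcy–Forchheimer momentum sink inside the canopy region – the form below is sketched purely to illustrate the porous-medium idea and is not necessarily the one used by the Rouen team:

$$ S_i = -\left(\frac{\mu}{K}\,u_i + C_2\,\frac{\rho}{2}\,\lvert\mathbf{u}\rvert\,u_i\right), $$

where u_i is the local air velocity, μ and ρ are the viscosity and density of air, K is the permeability and C₂ is an inertial-loss coefficient. In a tree model, K and C₂ would vary locally with the leaf area density rather than being uniform.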
A digital twin
In Dbouk’s view, the key advantage of using DF-PIBM compared with other approaches is that it allows researchers to accurately solve the local air flow velocity and the local pressure inside the tree.
“DF-PIBM has a number of current and potential applications – including prediction of the behaviour of airborne pollen grains and support for future applications involving vegetation–flow interactions in urban settings,” he says. “The currently developed DF-PIBM allows us to accurately predict all the phenomena of the detachment, dispersion, resuspension and local transport of airborne pollen grains when emitted from a green space – for example, trees and grass – and thus any vegetation zones inside urban environments under different weather conditions.”
Meanwhile, co-author Julien Reveillon confirms that the next steps for the research team will involve the integration of all its physics-based models into a new advanced digital twin of the Rouen-Normandy Metropolitan region in Normandy, France.
“This is with the intention of developing a new advanced multi-risk assessment digital platform that can help our local public authorities in their future territorial management and planning strategies – for example, to better anticipate and fight climate change phenomena, especially those related to local heat islands and aero-allergens like pollen, in addition to environmental pollution of air, water and soil,” he says.
“Moreover, huge efforts are also [being] made in order to develop and integrate advanced models related to predicting and simulating airborne pollutant particle dispersion in our region, for example those related to emissions from both natural fires and industrial accident fires,” co-author Béatrice Patte-Rouland tells Physics World.
The post Pollen dispersion study offers hope for hay fever sufferers appeared first on Physics World.
Stoichiometric iron telluride is a superconductor: magnetic mystery is solved
Antiferromagnetism is caused by excess iron
The post Stoichiometric iron telluride is a superconductor: magnetic mystery is solved appeared first on Physics World.
Pristine iron telluride is a superconductor, with the natural material’s superconductivity suppressed by excess iron in the crystal lattice, researchers in the US have shown. This resolves a long-standing puzzle about why, when other materials with similar structures showed superconductivity at low temperatures, iron telluride had always retained an antiferromagnetic order. The results provide a secure platform for further exploration of iron-based superconductivity, and could open the door to the study of interesting physics such as potential topological superconductivity in iron telluride itself.
Much like the cuprates, iron-based superconductors – including chalcogenides such as iron selenide – often exhibit complex phase diagrams in which antiferromagnetic ground states compete with superconducting ones. Although tellurium sits directly underneath selenium in the periodic table, superconductivity has never been observed in pure iron telluride. It can, however, behave as a “parent compound” for inducing superconductivity via chemical substitution with selenium, for example.
“One thing that’s always been a puzzle in the field is that the magnetic structure of iron telluride is fundamentally different from that of all other iron-based superconductors,” says condensed matter physicist Pengcheng Dai of Rice University in Texas; “People say ‘Oh, it’s more correlated’ – but the problem with that is that when you dope it with selenium and it does become superconducting, all the electric and magnetic properties occur at the exact same wave vector as other iron-based superconductors.”
Barely discussed
Condensed matter experimentalist Cui-Zu Chang of Pennsylvania State University in the US and colleagues had conducted multiple experiments involving the growth of tellurium compounds on iron telluride substrates, and reliably found that these produced superconductivity. Nevertheless, says Chang, the possibility that iron telluride itself might have a superconducting state was barely discussed by theorists.
Following Chang’s philosophy that “for superconductivity, if you follow theory and try to do something, 99% of the time you will fail,” the researchers set out to ascertain the state of pristine iron telluride experimentally. They bombarded a strontium titanate substrate with high-purity beams of gaseous iron and tellurium atoms to produce 40-layer-thick films of iron telluride. When they examined these using a scanning tunnelling microscope, they found that the films showed antiferromagnetic order. However, electron microscopy showed that the structures contained excess iron atoms clustered together periodically.
The researchers therefore performed multiple cycles of post-growth annealing, bombarding the structure with pure tellurium. The tellurium atoms reacted with the interstitial iron, removing it from the structure by forming more iron telluride on the surface. The researchers monitored the electrical behaviour of the sample in tandem with its structural evolution, finding that, as regions approached stoichiometric FeTe, the antiferromagnetic order disappeared. After five cycles of annealing, the material was pure iron telluride, and the researchers showed that it behaved as a robust superconductor with a critical temperature of around 13.5 K. They confirmed this with the observation of the Josephson effect, Cooper-pair tunnelling and other related phenomena.
The researchers now intend to study the specific properties of stoichiometric iron telluride in more detail: “Because tellurium is heavier than selenium you have stronger spin-orbit coupling, so iron telluride should be a topological insulator at the same time as it’s a superconductor,” says Chang; “We call these topological superconductors.” Such topological superconductors – the first of which was uranium ditelluride – are of great interest in quantum computing thanks to their potential to host protected Majorana qubits. More broadly, the researchers believe it is important to study whether other materials may host “hidden” superconducting states suppressed by disorder.
Dai, who was not involved in the research, is impressed: “It’s surprising, in the sense that it solves a fundamental puzzle that’s been in the field for some time,” he says. He notes that definitive proof has not been achieved because the material is on a substrate, so techniques such as neutron diffraction, traditionally used to probe the magnetic structure of bulk materials, cannot be applied. It is also possible to question whether the substrate is influencing the material. Nevertheless, he is persuaded: “At least to me, it really unifies the picture that the magnetism is probably universal for all the iron-based superconductors,” he concludes; “In the same way that in the cuprates, the parent compounds are basically Mott insulators, from this experiment we can basically say that in iron-based superconductors the parent compounds are basically simple stripes, and this oddball is because of the excess iron that stabilizes the particular structure.”
The research is described in Nature.
The post Stoichiometric iron telluride is a superconductor: magnetic mystery is solved appeared first on Physics World.
Gravitational effects could shed more light on the Hubble tension
Two new ways to measure the Hubble constant
The post Gravitational effects could shed more light on the Hubble tension appeared first on Physics World.
There are today two main ways to measure the Hubble constant, which is a parameter that describes the rate at which the universe is expanding. However, these two techniques produce conflicting results. This discrepancy is called the Hubble tension, and it suggests that we may be missing something fundamental about how the universe works. Now, two independent groups of astronomers, one in the US and the other in Germany, are developing two new methods to measure the Hubble constant. One uses gravitational waves; the other uses gravitationally lensed supernovae. Their work could help resolve the Hubble tension.
We know that the universe has been expanding ever since the Big Bang nearly 14 billion years ago – in part, thanks to observations made in the 1920s by the American astronomer Edwin Hubble. By measuring the redshift of various galaxies, he discovered that galaxies further away from Earth are moving away faster than galaxies that are closer to us. The linear relationship between this speed and the galaxies’ distances is defined by the Hubble constant, H0.
While there are many techniques for measuring H0, the problem is that different techniques yield different values. One main approach involves the European Space Agency’s Planck space telescope, which measures the cosmic microwave background (CMB) “left over” from the Big Bang. This produces a value of H0 of about 67 km/s/Mpc, where 1 Mpc (megaparsec) is about 3.3 million light-years. The other main approach is the “cosmic distance ladder” measurement, such as that made by the SH0ES collaboration involving observations of type Ia supernovae, which says H0 is about 73 km/s/Mpc.
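To put the disagreement in concrete terms, here is a quick back-of-the-envelope comparison (ours, purely illustrative) using Hubble’s law, v = H₀d:

```python
# Illustrative only: recession velocity from Hubble's law, v = H0 * d,
# for a galaxy 100 Mpc away, using the two headline values of H0.
H0_cmb = 67.0     # km/s/Mpc, from CMB measurements
H0_ladder = 73.0  # km/s/Mpc, from the cosmic distance ladder (SH0ES)
d = 100.0         # distance in Mpc

for label, H0 in [("CMB", H0_cmb), ("distance ladder", H0_ladder)]:
    print(f"{label}: v = {H0 * d:.0f} km/s")

# The ~600 km/s difference at 100 Mpc is the Hubble tension expressed as a velocity.
```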
Much brighter than typical supernovae
Now, astronomers at the Technical University of Munich, the Ludwig Maximilians University and the Max Planck Institutes for Astrophysics and Extraterrestrial Physics have observed an extremely rare type of supernova – or stellar explosion – that was gravitationally lensed, which by itself is also a very rare phenomenon. The supernova, which is called SN 2025wny (or more affectionately “SN Winny”), is superluminous and therefore much brighter than most gravitationally lensed supernovae discovered to date. This means that it can be studied using ground-based telescopes. Indeed, the researchers, led by Sherry Suyu and Stefan Taubenberger observed it with the Nordic Optical Telescope and the University of Hawaii 88-inch Telescope.
“It was an extraordinary coincidence that the first well-resolved lensed supernova found from the ground turned out to be a superluminous supernova,” says Taubenberger. “Its initial spectrum did not match the types of supernova we expected (that is, Type Ia or Type IIn), so determining its redshift was also difficult without this clear classification. We eventually measured the redshift to be equal to two, so the observed optical light had actually been emitted as energetic UV radiation. The extraordinary UV brightness then allowed us to identify the object as being a superluminous supernova.”
The fact that the supernova can be clearly observed from here on Earth makes it useful for a technique called time-delay cosmography. This method, which dates from 1964, exploits the fact that massive galaxies can act as lenses, deflecting the light from objects behind them so that from our perspective, these objects appear distorted. “This is called gravitational lensing and we actually see multiple copies of the objects,” Taubenberger explains. “The light from each of these will have taken a slightly different pathway to reach us, so we see them at different times. In the case of SN 2025wny, we observed five copy objects that had been deflected by two galaxies in the foreground.”
If we measure the difference in the arrival times of these objects and combine these data with estimates of the distribution of the mass of the deflecting lens galaxies, we can calculate the so-called time-delay distance, he explains. “From the time-delay distance and the redshift, we can then infer H0. Unlike the cosmic distance ladder, which involves many calibration steps and can accumulate errors with each step, this is a one-step technique with fewer and completely different sources of systematic uncertainties.”
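In slightly more quantitative terms – this is standard time-delay cosmography rather than anything specific to SN 2025wny – the delay between two lensed images A and B is set by the so-called time-delay distance:

$$ \Delta t_{AB} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{AB}, \qquad D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}} \propto \frac{1}{H_0}, $$

where Δφ_AB is the difference in the Fermat potential between the two image positions and D_d, D_s and D_ds are angular-diameter distances to the deflector, to the source and between the two. Because each of these distances scales inversely with H₀, a measured delay combined with a model of the lens mass pins down the Hubble constant.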
Making the observations was not without a number of challenges, he remembers. “Initially, we had secured observing time at southern hemisphere telescopes (in particular, the ESO [European Southern Observatory] in Chile). However, the object we discovered was in the northern sky, making this secured time unusable. This meant we had to quickly find alternative observatories and write new proposals for northern hemisphere follow-up observations.”
Using undetectable black hole collisions
Meanwhile, a team of astrophysicists at The Grainger College of Engineering at the University of Illinois Urbana-Champaign and the University of Chicago has developed a way to determine the Hubble constant using gravitational waves and in particular the gravitational-wave background. Gravitational waves are generated when compact astrophysical objects, such as black holes, collide. These collisions, which are extremely energetic, produce tiny ripples in the fabric of space–time that travel at the speed of light, eventually reaching us here on Earth where they are detected by the LIGO–Virgo–KAGRA (LVK) Collaboration.

Individual black hole collisions have been observed by the LVK, which allows us to determine the rates of those collisions happening across the universe, explains study leader Bryce Cousins, who is at Illinois. “Based on those rates, we expect there to be a lot more events that we can’t observe. This is called the gravitational-wave background.”
Their approach uses a unique, previously unexplored relationship between the gravitational-wave background and H0. This relationship is not found in other astrophysical phenomena, meaning that the method is complementary to existing electromagnetic and gravitational-wave measurements of H0.
An upper limit on the background can provide a lower limit on the Hubble constant
The strength of this gravitational-wave background scales directly with the density of gravitational waves in the universe, he says. “For example, if the universe were expanding more slowly, then it would have a smaller total physical volume and a correspondingly higher density of gravitational waves, leading to a stronger background. Thus, an upper limit on the background can provide a lower limit on the Hubble constant.”
The researchers demonstrated their hypothesis by analysing gravitational-wave data from the LVK Collaboration’s third observing run. They have dubbed their method the “stochastic siren” since the gravitational waves (the “sirens”) composing the background arise randomly.
The LVK network is not yet sensitive enough to detect the gravitational-wave background, but researchers expect it will be within the next six years or so. However, when Cousins and colleagues’ new work is combined with existing “spectral siren” measurements – which infer the Hubble constant from the redshifting of gravitational-wave signals – the result is a more accurate value of H0, even without a detection of the gravitational-wave background. The technique should therefore only improve as gravitational-wave detectors become more sensitive.
Cousins says he is “hopeful” that the findings of gravitational-wave cosmology will be able to shed more light on the Hubble tension as gravitational-wave data collection continues.
The researchers are now extending their method to consider other dark energy models, in light of ongoing findings that the standard “cosmological constant” interpretation of dark energy may be incorrect. Cousins is also applying the existing analysis to the latest gravitational-wave dataset and working with other collaborators to modify the stochastic siren procedure so that it can be applied to the next-generation of gravitational-wave detectors.
Two different but complementary techniques
Taubenberger says that Cousins and colleagues’ technique is trying to measure the Hubble constant in a completely different way to his group’s – and also without relying on the cosmic distance ladder. “Since some gravitational waves have no optical counterpart, you cannot take an optical spectrum of them and measure their redshift, so methods like theirs allow us to measure distances in a statistical sense by analysing multiple objects and glean information about the Hubble constant in this way.
“Every independent approach to measure the Hubble constant is welcome, of course.”
Cousins, for his part, says that Taubenberger and colleagues’ work effectively supports an existing method with new data, while his group’s work involves creating a new method that can use existing data. “Taubenberger and his team exclusively use electromagnetic data, which differs from our gravitational wave method, but our approaches are ultimately complementary since they are independent takes on the same underlying question.
“It is interesting and important work since they have found a unique candidate for time-delay cosmography. I am excited to find out what new Hubble constant constraints will come from using this new lensed supernova.”
The post Gravitational effects could shed more light on the Hubble tension appeared first on Physics World.
Quiz of the week: how long will NASA’s Artemis II mission to the Moon last?
Have you been keeping up to date with physics news? Try our short quiz to find out
The post Quiz of the week: how long will NASA’s Artemis II mission to the Moon last? appeared first on Physics World.
Fancy some more? Check out our puzzles page.
The post Quiz of the week: how long will NASA’s Artemis II mission to the Moon last? appeared first on Physics World.
Biomedical optics play crucial roles across medicine
Medical physicist, inventor and entrepreneur Brian Pogue is our podcast guest
The post Biomedical optics play crucial roles across medicine appeared first on Physics World.
This episode of the Physics World Weekly podcast features Brian Pogue, who is professor of biomedical engineering at Dartmouth College in the US. He is also the co-founder of several start-up companies that are developing optics-based systems for medicine.
In conversation with Physics World’s Tami Freeman, Pogue explains that optical technologies underlie many of today’s routine medical procedures. The field of optics is also converging with the world of medical physics, and Pogue talks about exciting new techniques for guidance, dosimetry and in vivo verification of radiation therapy cancer treatments.
- This interview was recorded in association with the journal Physics in Medicine & Biology, which celebrates its 70th anniversary this year.
This podcast is supported by One Physics, your trusted, local partner in medical physics and radiation safety.
The post Biomedical optics play crucial roles across medicine appeared first on Physics World.
NASA launches crewed Artemis II mission to the Moon
Craft will conduct a flyby of the Moon before returning to Earth
The post NASA launches crewed Artemis II mission to the Moon appeared first on Physics World.
NASA has successfully launched four astronauts on a 10-day mission to the Moon. The crew – Reid Wiseman, Victor Glover, Christina Koch and Jeremy Hansen – were aboard the Orion spacecraft that was launched yesterday by a Space Launch System rocket from NASA’s Kennedy Space Center in Florida.
The mission is the first crewed lunar flyby in more than 50 years but it also represents a number of significant firsts with Koch, Glover and Hansen set to be the first woman, Black person and Canadian, respectively, to travel to the Moon.
Following launch, the Orion capsule was put into Earth orbit and, five hours into the flight, the craft deployed four CubeSats – from Argentina’s Comisión Nacional de Actividades Espaciales; the German Aerospace Center; the Korea AeroSpace Administration; and the Saudi Space Agency – that will conduct scientific investigations and technology demonstrations.
The craft is now set to carry out a six-minute rocket firing that will send the spacecraft towards the Moon.
During a lunar flyby on 6 April, the astronauts will take photographs and provide observations of the Moon’s surface, becoming the first people to see some areas of its far side.
Some four days later, the craft will then return to Earth and splash down in the Pacific Ocean.
This mission follows the Artemis I mission, which carried a simulated crew of three mannequins wired with sensors and completed a flyby of the Moon in 2022.
Artemis III, meanwhile, is currently earmarked for launch in 2027 and is planned to be the first crewed lunar landing since the Apollo missions of the 1960s and 70s.
Will the Artemis programme instil the same sense of awe as the Apollo missions?
In the summer of 1969 I was four years old and I have a very distinct memory of my mother calling me and my brother in from the garden to watch something on television. That something had to do with NASA’s Apollo 11 mission to the Moon.
For years, I thought that I had watched Neil Armstrong take his first steps on the Moon on live TV. I now realize that the timing was all wrong. I was in Montreal and it was daytime – whereas the walk occurred at about 11 p.m. EDT, well after my bedtime. So I was (probably) not one of the estimated 500 million people worldwide (including Pope Paul VI) who witnessed this momentous event as it happened.
Regardless of whether I watched it live or not, the first human steps on the Moon made a great impression on me – and who knows, maybe that early exposure to the cutting edge of science and technology encouraged me to pursue a career in physics.
I could be wrong, but I don’t think that the Artemis missions will instil the same awe in people as did the Apollo missions. I didn’t watch the Artemis II launch and I had a distinctly “been there, done that” feeling when I heard about its success.
Indeed, I have been left wondering exactly why the US has decided to return to the Moon now. Is it for reasons of science and exploration (possibly setting the scene for a human mission to Mars), or is this more about nationalism and colonization? I hope it is the former, because for me sending humans to the Moon and beyond is akin to blue-sky research in physics – probing the universe to expand knowledge, with the confidence that this will result in a better world.
Hamish Johnston is an online editor of Physics World
The post NASA launches crewed Artemis II mission to the Moon appeared first on Physics World.
Word flower puzzle no. 2
How many words can you find in this puzzle?
The post Word flower puzzle no. 2 appeared first on Physics World.
How did you get on?
14 words Warming up nicely
20 words Getting hot, hot, hot
26 words Top dog!
Fancy some more? Check out our puzzles page.
The post Word flower puzzle no. 2 appeared first on Physics World.
Trapped ion quantum technology gets smaller
Portable device could find applications in quantum computing and optical clocks
The post Trapped ion quantum technology gets smaller appeared first on Physics World.
A new integrated photonics platform can perform precision quantum experiments that were previously only possible with multiple table-top lasers and other bulky apparatus. According to its US-based developers, the new chip-scale device could find applications in quantum computing and portable optical clocks based on trapped ions.
Today’s quantum computers and optical clocks depend on a range of equipment that typically includes some combination of lasers, cryogenic coolers, vacuum chambers and optical reference cavities. The last of these can take up more than half the device’s total volume, and they are crucial for stabilizing laser frequencies to the high precision required for controlling the quantum states of trapped ions. Such ions can serve as quantum bits (qubits) in quantum computing and can also be used for precision timekeeping in optical clocks. In the latter case, each clock “tick” is defined by the frequency of the light the ions absorb and emit as they undergo a specific, sub-Hz transition (the so-called “clock transition”) between atomic energy levels.
Miniaturizing large laser systems
Researchers led by Daniel Blumenthal of the University of California Santa Barbara (UCSB) and Robert Niffenegger at the University of Massachusetts Amherst have now shown for the first time that these large, stabilized laser systems can be replaced with small photonic chips. They used these chips to prepare and control the quantum state of strontium ions at room temperature as well as driving the clock transition. Though the fidelity of the system is not yet high enough to compete with the best traditionally-constructed devices, Niffenegger describes it as a critical first step for producing next-generation clocks and future quantum computers with millions of qubits. “Reaching such a goal will only be possible with such integrated quantum systems on a chip,” he explains.
Blumenthal, Niffenegger and colleagues used two components to create their chip-based stabilized laser: an integrated Brillouin laser with a wavelength of 674 nm, connected to an integrated 674 nm, 3 m long coil resonator cavity. The team characterized the stability of this laser and coil by measuring the 0.4 Hz quadrupole optical clock transition in strontium-88 (88Sr+) ions trapped at an electrode located on a single surface electrode trap (SET) chip. This transition is one of the most precise used by quantum researchers today, and its narrow linewidth makes it relatively easy to measure using high-resolution trapped ion spectroscopy.
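A quick back-of-the-envelope calculation (ours, not the paper’s) shows why addressing this line is so demanding for the laser:

```python
# Rough arithmetic, for illustration: fractional linewidth of the ~0.4 Hz
# quadrupole clock transition addressed by the 674 nm laser in 88Sr+.
c = 2.998e8           # speed of light, m/s
wavelength = 674e-9   # laser wavelength, m
linewidth = 0.4       # transition linewidth, Hz

optical_frequency = c / wavelength        # about 4.4e14 Hz
print(f"optical frequency    ~ {optical_frequency:.2e} Hz")
print(f"fractional linewidth ~ {linewidth / optical_frequency:.1e}")
# ~9e-16: the laser frequency must be controlled at roughly the parts-in-1e15 level.
```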
“The fact that these results were achieved with the SET at room temperature is remarkable given the precision of the transition, and is a major step forward in realizing portable versions of this quantum technology,” Blumenthal says.
Making optical clocks more portable and robust
As well as being smaller than traditional lasers, the chip’s 674-nm Brillouin laser light also removes the need for bulky frequency conversion equipment. A further advantage is its reduced high-frequency noise, which is important for clock acquisition and qubit state preparation fidelity, and which cannot be achieved using standard electronic feedback loops. The coil, for its part, reduces mid- and low-frequency noise, stabilizing the laser’s carrier frequency even further so that it can be locked to the precision sub-Hz trapped-ion clock transition.
According to Niffenegger, this combination of improvements enabled the team to achieve a frequency noise profile and so-called Allan deviation (a measure of frequency stability) of just 5.3 × 10⁻¹³ – an unprecedented figure for a room-temperature chip. “We can therefore prepare qubit states with high fidelity and interrogate the clock transition, which is essential for quantum computing applications,” he says.
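For readers unfamiliar with the metric, the Allan deviation compares successive averages of the fractional frequency taken over an averaging time τ; a minimal sketch of the non-overlapping estimator, written by us purely for illustration, is:

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    with each average taken over blocks of m consecutive samples."""
    n_blocks = len(y) // m
    block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2))

# Example: simulated white frequency noise at the ~1e-13 fractional level
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(100_000)
print(allan_deviation(y, m=100))
```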
As optical clocks become more portable and robust, they become more feasible for a greater variety of applications. The ultimate goal, says Blumenthal, is to reach a stability range of 10⁻¹⁴ to 10⁻¹⁶, which would allow optical clocks to replace GPS-based navigation on missions to the Moon and Mars. “Such clocks could also help advance fundamental science – for example, by mapping gravity and measuring orbit time around Earth for climate science, detecting gravitational waves and dark matter/energy and for general relativity measurements, to name just a few,” he explains.
Niffenegger says it is now feasible to scale the team’s integrated platform to a grid of 100 or more ions, to further improve performance. He and his colleagues are now working to integrate other experimental components (including the ion trap chip, the optical cavity chip and other photonics) onto a single, full-architecture chip that builds on their current designs. “Preliminary results already show improved performance, with further exciting developments anticipated soon,” they tell Physics World.
The present work is detailed in Nature Communications.
The post Trapped ion quantum technology gets smaller appeared first on Physics World.
Counting photons could redefine the future of CT imaging
Advanced photon-counting detectors could transform clinical imaging
The post Counting photons could redefine the future of CT imaging appeared first on Physics World.
Photon-counting computed tomography (PCCT) is an advanced medical imaging technique that differs from conventional X-ray CT in that it can discriminate between the energies of individual detected photons. Offering higher spatial, spectral and contrast resolution than conventional CT, PCCT could deliver significant benefits for disease characterization and enable new diagnostic approaches.
Conventional CT measures the attenuation of X-rays after they pass through the body, enabling clinicians to monitor normal and abnormal anatomy and providing valuable information for diagnosis and treatment of disease. The advantages promised by PCCT primarily arise from the differing characteristics of the detectors: conventional CT scanners use energy-integrating detectors (EIDs) whilst PCCTs employ photon-counting semiconductor detectors.
The effective dose from diagnostic CT procedures is estimated to be in the range of 1–10 mSv, although this can vary by a factor of 10 or more depending on patient size, the type of CT scan performed, the CT system and the operating technique. PCCT systems offer better dose efficiency than conventional CT and use energy thresholding to eliminate background electrical noise. As a result, PCCT requires lower radiation dose than standard CT – reducing the risk to the person being scanned.
Detector characteristics: limitations and advantages
Conventional CT systems use an EID to collect the total energy deposited by all incident X-ray photons. EIDs are typically composed of gadolinium oxysulfide (Gd₂O₂S) or cadmium tungstate (CdWO₄) and comprise two layers: a solid-state scintillator placed on top of a photodiode array. The detection mechanism is a two-step, indirect process. Incoming photons hit the scintillation layer, which produces a flash of visible light. When the photodiode absorbs this light, it converts it into an electrical signal.
The photodiode array consists of individual detector elements separated by opaque, reflective walls called septa. This design prevents optical cross talk (signals transferring between adjacent channels and reducing image quality) produced by light scattering. The need for septa, however, creates “dead space” on the detector surface, which wastes X-ray dose and limits the spatial resolution since it physically restricts detector size.
As EIDs collect the total energy from all incoming photons, signals from photons of different energies are mixed together. High-energy photons will generate a higher light intensity than low-energy photons and will consequently produce a higher intensity electrical signal. This means that the final output signal will be dominated by the high-energy photons and under-weight the valuable contrast information that the low-energy photons provide. It also prevents the distinction between electrical noise and genuine low-energy photons, which further affects the achievable contrast.

PCCT scanners, on the other hand, employ photon-counting detectors that directly convert the photon energy to electric signals. These detectors consist of a semiconductor layer placed between a cathode on the upper side and an anode underneath. The anodes are pixellated to increase spatial resolution, with each pixel placed on top of an application-specific integrated circuit (ASIC).
This detector uses a direct conversion process in which a high bias voltage is applied across the semiconductor to generate electron–hole pairs when struck by an incoming photon. The strong electric field draws the clouds of charge toward the anode electrodes, creating a current. The ASIC instantly processes this current and converts it into a voltage pulse, with the height of the pulse directly proportional to the incident photon’s energy. Comparators and counters sort the photons into energy bins based on threshold values, a process that can also filter out electronic noise and enable spectral imaging.
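As a toy illustration of that thresholding step – a sketch of the general principle only, not any vendor’s ASIC logic – pulse heights can be sorted into energy bins by comparing them against a set of thresholds:

```python
import numpy as np

# Toy sketch of photon-counting thresholding (illustrative; real ASICs process
# analogue pulse heights in real time). Pulse heights are proportional to
# photon energy; anything below the lowest threshold is rejected as noise.
thresholds_keV = [25, 50, 75, 100]                         # energy-bin edges
pulse_energies = np.array([12, 30, 64, 88, 110, 5, 47])    # simulated photons, keV

counts = np.zeros(len(thresholds_keV), dtype=int)
for E in pulse_energies:
    if E < thresholds_keV[0]:
        continue                                           # treated as electronic noise
    bin_index = np.searchsorted(thresholds_keV, E, side="right") - 1
    counts[bin_index] += 1

print(dict(zip(["25-50", "50-75", "75-100", ">=100"], counts)))
```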
The semiconducting materials used in photon-counting detectors are typically either cadmium telluride (CdTe), cadmium zinc telluride (CZT) or silicon. The cadmium-based detectors have high stopping powers due to their high atomic number, leading to efficient absorption of X-rays via the photoelectric effect and resulting in a high spatial resolution. Another advantage of CZT and CdTe detectors is that the semiconductor can be relatively thin (roughly 2 mm), allowing the detector to be placed perpendicular to the direction of the incident X-rays.
Advanced spectral capabilities
Conventional CT relies on post-processing software to enhance image resolution and reduce the electronic noise that’s inherent to its physical hardware. But the algorithms traditionally used for image reconstruction – which include back projection, filtered back projection and iterative reconstruction algorithms – can reduce spatial resolution and cause blurring.
Deep learning-based reconstruction, meanwhile, can induce artefacts (such as generating objects that don’t exist or removing true small anatomical structures), particularly in low-dose scenarios where training data are limited. To achieve high resolution in conventional CT, a low-energy filter in the X-ray beam is needed, which increases the required radiation dose.
The PCCT detector design, with its small pixel sizes and lack of reflective septa, makes it an inherently high-resolution technique. Image quality can be further improved using algorithms such as quantum iterative reconstruction, which has been shown to reduce image noise by up to 34.5%. Sharp convolution kernels (used to optimize the balance between noise and sharpness) are needed to ensure that the image produced maintains the high resolution provided by the detector.

The ability of PCCT to distinguish photon energy also allows for material decomposition, which enables the generation of a range of advanced images. These include virtual monoenergetic images, reconstructed at a single energy level to amplify contrast agents without reducing dose, and virtual non-contrast images, which allow digital subtraction of particular materials without needing another scan. PCCT can also be used for K-edge imaging, in which contrast agents can be isolated based on their characteristic K-edge energies.
Clinical applications
The technical advantages of PCCT have significantly improved the diagnostic applications of CT across a plethora of medical disciplines.
For instance, a prospective study on 200 adults with lung cancer who underwent both PCCT and EID CT showed that PCCT outperformed conventional CT in lung cancer management. The key findings were that PCCT had a lower effective radiation dose (1.36 mSv) compared with EID CT (4.04 mSv), a lower exposure to iodine (a contrast agent used to increase image contrast), with an iodine load of 20.6 g for PCCT compared with 28.1 g for EID CT, and higher detection and diagnostic confidence for enhancement-related malignant features.
Similarly, in a study of CT pulmonary angiography, PCCT reduced the total iodine load by 26.7% and the CT dose index volume by 24.4% compared with EID CT. This potentially lowers patient risk, as well as providing environmental and financial benefits.
Within coronary imaging, PCCT enables characterization of coronary artery disease and plaque and shows promise in coronary artery calcium quantification by reducing blooming artefacts (where small, high-density structures like calcium appear larger than their true size). PCCT can also provide high-resolution imaging of the lumen for evaluation of coronary stents and assessment of myocardial tissue and perfusion.
The higher dose efficiency of PCCT makes it particularly effective in paediatric applications, as children are more radiosensitive than adults. Children also have smaller organs, making the ultrahigh resolution provided by PCCT especially helpful, for example, in the detection of tiny, complex heart defects in neonates and infants.
As of early 2025, there were two US Food and Drug Administration (FDA)-cleared PCCT systems in clinical use: the NAEOTOM Alpha from Siemens Healthineers and Samsung Healthcare’s OmniTom Elite. And just last month, the Extremity Scanner System from MARS Bioimaging and GE HealthCare’s Photonova Spectra photon-counting CT both received FDA clearance. Other clinical prototypes include systems from Canon Medical Systems and Philips Healthcare.
Ongoing challenges
As with any emerging technology, challenges remain to be solved. With photon-counting detectors, these include effects such as pulse pile-up, charge sharing, K-escape and Compton scattering.

Pulse pile-up occurs when two or more photons arrive at the detector almost simultaneously, which can cause them to be recorded as a single photon. This leads to errors in both the energy registered by the detector and the number of photons counted. If a single photon strikes near the boundary between two pixels, it may be detected as having a lower energy than it actually has. This effect, known as charge sharing, degrades the spectral and spatial resolution of the CT image.
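A toy Monte Carlo helps show why pile-up matters: merging photons that arrive within the detector's resolving time both undercounts events and inflates the recorded energies. The rate, resolving time and spectrum below are assumptions chosen only to make the effect visible, not properties of any real detector.

```python
# Toy Monte Carlo of pulse pile-up (illustrative numbers only).

import numpy as np

rng = np.random.default_rng(1)
rate = 5e6           # incident photons per second (assumed)
dead_time = 100e-9   # detector resolving time in seconds (assumed)
n_photons = 100_000

arrival_times = np.cumsum(rng.exponential(1 / rate, n_photons))
energies = rng.normal(60, 10, n_photons)   # keV, illustrative spectrum

recorded = []
i = 0
while i < n_photons:
    pulse_energy = energies[i]
    j = i + 1
    # merge all photons arriving within one resolving time of the first
    while j < n_photons and arrival_times[j] - arrival_times[i] < dead_time:
        pulse_energy += energies[j]
        j += 1
    recorded.append(pulse_energy)
    i = j

print(f"incident photons: {n_photons}, recorded pulses: {len(recorded)}")
print(f"mean incident energy: {energies.mean():.1f} keV, "
      f"mean recorded energy: {np.mean(recorded):.1f} keV")
```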
Due to their high atomic number, cadmium detectors are also susceptible to an effect known as “K-escape”, in which incident X-rays produce fluorescence that’s detected as a separate event. Compton scattering occurs when a secondary photon produced in the semiconductor material is registered as a separate event, underestimating the real energy value.
Finally, manufacturing the semiconductor materials used in PCCT is expensive – PCCT scanners can cost in excess of £2 million. And the large data sets generated by multi-energy scanning require a large amount of computing power and time to process and reconstruct.
Future impact
PCCT is a highly promising technology that replaces traditional indirect detection mechanisms with direct detection using semiconducting materials. PCCT offers superior image quality due to higher spatial and spectral resolution, higher dose efficiency and the ability to perform quantitative imaging. The multi-energy capabilities of PCCT shift the resulting images from providing purely structural information to delivering functional information as well.
Current clinical use is limited mainly due to cost rather than diagnostic capability, with a lack of clinical studies making the high cost difficult to justify. However, the potential impacts for optimizing healthcare could be vast. Perhaps it is inevitable that, as costs decrease with evolving technology, the clinical use of PCCT will overtake conventional CT in the future and become the standard CT technique.
The post Counting photons could redefine the future of CT imaging appeared first on Physics World.
Invisible force of nature: what the wind does for us
Kate Ravilious reviews The Breath of the Gods: the History and Future of the Wind by Simon Winchester
The post Invisible force of nature: what the wind does for us appeared first on Physics World.
In recent years the news has been dominated by devastating hurricanes, cyclones, tornadoes, wildfires and floods, and data show that these hazardous events are increasing in frequency and strength. It is clear that our weather is becoming more extreme, with a warming world adding more energy to the atmosphere and increasing the power of these wind-fuelled events.
With this in mind, Simon Winchester’s opening question in The Breath of the Gods: the History and Future of the Wind might surprise readers: are Earth’s winds slowing down? There was, indeed, a decrease in wind speeds over land between the 1980s and 2010, which was ominously dubbed the Great Stilling. In fact, observations show a decrease in average wind speeds over land of between 5 and 15% over the last 50 years. So what is going on?
Winchester – a writer and journalist with a background in geology – starts his quest to discover more atop the windiest place in the world, the summit of Mount Washington. With delicious irony, he finds the anemometers are still and a very rare calm hangs in the air.
He goes on to build the case for exceptional weather becoming the norm. He covers recent examples of extreme wind events, such as the exceedingly hot and dry Santa Ana winds of January 2025, which fed the dramatic and devastating wildfires that ripped through suburbs of Los Angeles; the record-breaking storms that pounded Europe during 2024 and 2025; and the freak tornado in March 2023 that killed 17 people and razed the town of Rolling Fork, Mississippi, to the ground.
Ever-present element
This book isn’t simply a tour of wind-related disasters, however. Winchester takes us back through thousands of years of human history, to explore how wind influenced some of the earliest civilizations. The first recorded mention of the wind arose 5000 years ago and comes from the ancient kingdom of Sumer (now south-eastern Iraq). People there identified four different prevailing winds and attributed their characteristics to four different gods. This classification system persists to this day, with our familiar north, east, south and west winds originating from these mythological four Mesopotamian winds.
For much of history humans have made use of the wind: from propelling pioneering populations in tiny boats across the Pacific Ocean some 5000 years ago, to enabling human flight; from milling grain and pumping water with windmills, to using them to generate energy. But it is only in more recent times that we have started to map and understand the major winds on our planet and the role they play in making it habitable.
Winchester romps through the science. We learn how the wind has pummelled, shaped and moulded the Earth since time immemorial, and how the winds work in tandem with the oceans, constantly transporting energy from equator to poles and preventing the planet from overheating. He also introduces key characters along the way, such as Brigadier Ralph Bagnold, a British army engineer. Bagnold used wind tunnel experiments and his extensive desert experience to understand the physics of windblown grains and the circumstances that create everything from tiny ripples in sand, to mighty marching barchan dunes.
Not quite blown away
But it is when the wind works against us that its might is truly revealed, and Winchester devotes an entire chapter to inclement winds. He starts by transporting us into the wretched five years of the American Great Depression in the 1930s, when terrible dust storms tore the topsoil from the prairie states of Oklahoma, Texas, Kansas, Colorado and Nebraska, resulting in starvation and mass migration. We hear how the arrival of the settlers and farming technology triggered this tragedy, with steel-bladed ploughs ripping through the soil and tearing up the grasses that had previously glued the soil to the land.
However, this is a tale that ends well, with President Roosevelt taking sound advice and devising an audacious plan to fix it. As a result, some 220 million trees were planted in a series of windbreaks stretching from the Canadian border down to central Texas. These restored prosperous and stable farmland to the American Midwest, and survive to this day.
Writing a book about this invisible force of nature could be stuffy, but Winchester brings his trademark curiosity and storytelling to the fore. He whisks readers through history and around the world, inserting himself into the story and pulling out the human impacts that bring the topic alive.
But while it’s a thoroughly enjoyable read, The Breath of the Gods lacks a thread to hold the book together. And most frustratingly, it fails to really return to answer the opening question about what’s behind the slowing winds. I would have liked a bit more science – particularly in understanding the impact that climate change is having on the wind – but for those looking for an accessible read with lots of fascinating weather anecdotes to regale friends with, this book won’t disappoint.
- 2025 William Collins 416pp £25hb £11.99ebook
The post Invisible force of nature: what the wind does for us appeared first on Physics World.
The mathematics of quantum entanglement
A team of researchers from Poland have developed new mathematical methods that could help enable better control of quantum entanglement and teleportation experiments
The post The mathematics of quantum entanglement appeared first on Physics World.
Most headline-grabbing advances in quantum mechanics today are experimental in nature: more qubits, entangled particles, fewer errors.
Often overlooked are the advances in the mathematics that underpins the behaviour of these quantum systems.
The walled Brauer algebra is an abstract but increasingly important mathematical structure that appears in quantum information theory whenever physicists study particles, symmetries and transformations involving permutations and partial transposition.
Work in this area inevitably leads to the question of how a system transforms when particles are permuted or when one part of a composite object is flipped (transposed) while the rest is left untouched. Collect all such operations together and you get the walled Brauer algebra. It plays an important role in the mathematical description of problems ranging from entanglement detection to advanced teleportation schemes.
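For readers unfamiliar with the two operations being combined, the short sketch below illustrates them in Python: a swap that permutes two tensor factors, and a partial transposition that flips one factor of a bipartite operator while leaving the other untouched. This is standard quantum-information bookkeeping used to motivate the algebra, not the authors' new iterative construction.

```python
# The two building blocks the walled Brauer algebra combines, for one qubit pair.

import numpy as np

d = 2  # local dimension (a qubit)

def swap_operator(d):
    """SWAP on C^d (x) C^d: permutes the two tensor factors."""
    S = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            S[i * d + j, j * d + i] = 1.0
    return S

def partial_transpose(rho, d):
    """Transpose the second factor of a (d^2 x d^2) bipartite operator."""
    r = rho.reshape(d, d, d, d)           # indices (i, j; k, l) for |ij><kl|
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)

# Example: the maximally entangled Bell state (|00> + |11>)/sqrt(2)
psi = np.eye(d).reshape(-1) / np.sqrt(d)
rho = np.outer(psi, psi)

S = swap_operator(d)
print(np.allclose(S @ rho @ S, rho))      # the Bell state is symmetric under swap

# Its partial transpose has a negative eigenvalue - a standard entanglement test
print(np.linalg.eigvalsh(partial_transpose(rho, d)))
```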

The problem is that this algebra is famously intricate. Until now, physicists have only been able to describe its structure using methods that do not fully align with the natural symmetries of the system, making calculations heavy and sometimes opaque.
The new work changes that. The authors have developed an iterative construction that builds the algebra piece by piece, revealing its architecture in a symmetry-compatible way. Instead of a tangled hierarchy, the algebra unfolds into independent components, each shaped by the action of two symmetric groups.
The result is not just a more elegant mathematical picture; it is also a new framework that can make symmetry-based analysis of complex quantum-information problems more systematic and transparent.
This matters now more than ever. Quantum technologies increasingly involve many-particle configurations where symmetry is both a feature and a challenge. Teleportation schemes that move quantum information without moving particles, algorithms that manipulate unknown quantum operations, and proposals for higher-order quantum processes all rely on understanding how transformations behave under symmetry.
By clarifying this structure, the new framework could help researchers analyse these settings more effectively and support the development of better-controlled entanglement- and teleportation-based protocols.
Read the full article
M. Horodecki et al 2026 Rep. Prog. Phys. 89 027601
The post The mathematics of quantum entanglement appeared first on Physics World.
Revealing the magic in hybrid quantum systems
Quantum technologies rely on more than just entanglement. Another, less well-known ingredient is non-stabiliserness, often called magic
The post Revealing the magic in hybrid quantum systems appeared first on Physics World.
This property determines whether a quantum system can outperform even the fastest classical supercomputer. Until now, scientists could quantify magic in systems of qubits, but not in systems of bosons such as photons or hybrid devices of coupled bosons and spins, like those used in real quantum hardware.
In this new work, a team of researchers from Taiwan and Japan proposed the first unified way to measure magic in systems that combine both spins and bosons. These hybrid platforms appear everywhere from superconducting circuits to trapped-ion quantum processors. However, the quantum resources inside them have remained difficult to identify.
The team’s new framework uses the shape of a quantum state in phase space to define a family of magic entropies that apply cleanly to qubits, bosons and crucially, the interactions between them.
To test the idea, the researchers examined the Dicke model, a paradigmatic system in which many spins couple to a single light field. As the system approaches a superradiant phase transition (a dramatic collective reorganisation), the shared non-classical behaviour across both spins and photons – the hybrid magic – peaks, providing another way to identify the critical point alongside familiar tools such as entanglement. Another interesting result is that, in the finite systems studied here, the quantum magic in the spin sector increases sharply while the bosonic magic saturates to a finite value. This contrast suggests that the two measures capture different aspects of the quantum state.
The team also analysed how magic evolves dynamically in the Jaynes–Cummings model, where a single spin and a single photon exchange energy. As the two systems swap excitations, magic flows back and forth, behaving differently in the bosonic and spin parts – a picture of how computational power migrates through a quantum device in real time.
As quantum computers grow more complex, scientists and engineers need reliable ways to diagnose which parts of their machines produce genuine quantum advantage. This new framework gives them a powerful tool to do just that, and it’s one that works not just for qubits, but for the hybrid architectures likely to define the next generation of quantum technologies.
Read the full article
Magic entropy in hybrid spin-boson systems – IOPscience
S. Crew et al 2026 Rep. Prog. Phys. 89 027602
The post Revealing the magic in hybrid quantum systems appeared first on Physics World.
Perseverance finds evidence for an ancient river delta on Mars
New analyses of the Jezero impact crater reveal signs of flowing water
The post Perseverance finds evidence for an ancient river delta on Mars appeared first on Physics World.

A river delta may have been present on Mars as early as 4.2 billion years ago, which is much earlier than previously thought. This is the conclusion from a new study by researchers at the University of California, Los Angeles, who have analysed ground-penetrating radar (GPR) data collected by the Mars 2020 Perseverance rover from the Jezero impact crater.
“The finding may also extend the period of flowing water and potential habitability for Jezero back further in time,” says astrobiologist Emily Cardarelli, who led the research effort.
The surface of Mars carries many traces of a past watery climate, including ancient river channels, deltas, and paleolakes. Indeed, observations from space provide evidence for the existence of minerals possibly left behind as Mars’ atmosphere was gradually lost to space and its surface dried up.
Researchers are particularly interested in carbonate minerals because these preserve a record of the Red Planet’s ancient water through its interactions with carbon dioxide in the Martian atmosphere at the time. How these minerals formed at large scales in the Margin unit has, however, remained unclear.
Data collected from more than 35 metres underground
In the new work, Cardarelli and her colleagues in the Department of Earth, Planetary and Space Sciences at UCLA analysed data collected by Perseverance’s Radar Imager for Mars Subsurface Experiment (RIMFAX) instrument. They focused on a sedimentary deposit known as the Margin unit, which is rich in magnesium carbonates and lies near a fluvial inlet to the Jezero impact crater in the Nili Fossae region near Syrtis Major. The researchers already knew that this region hosts features typical of a paleolake basin and river delta deposits.
RIMFAX acquired a continuous 6.1 km GPR image along the Margin unit campaign path, with soundings every 10 cm, and the researchers analysed 78 traverses made between September 2023 and February 2024 over 250 sols (Martian days, which are about 40 minutes longer than Earth days). The instrument collected data from more than 35 m underground, which is 1.75 times deeper than previous measurements at the Jezero crater.
The researchers found that the Margin unit contains a well-preserved paleolandscape with distinct river and deltaic features. These, they say, could be the remnants of a meandering river, an alluvial fan or a braided river. This environment could have developed before the Jezero Western Delta that is visible from orbit, as early as the Noachian epoch (around 4.2 to 3.7 billion years ago).
Jezero might have hosted a habitable ancient environment
From the stratigraphic features mapped by RIMFAX, Cardarelli and colleagues conclude that the Jezero crater might have hosted an aqueous, possibly habitable environment capable of preserving biosignatures. “RIMFAX confirms that the Margin unit is distinct from a geological region known as the Upper Fan, which was deposited earlier and different in composition as well as in physical area,” says Cardarelli. “Our work suggests that there is some continuity of formation between the Margin unit and the Upper Fan, with a repeated process in Jezero crater, but at completely separate formation and deposition times.”
Indeed, a body of water might once have fed Jezero crater, she tells Physics World, and deposited sedimentary layers of varying scales, similar in size and morphology to those observed in an area known as the Western Fan. “We suggest that this was once an extensive system that included the Margin unit, although it is now a buried remnant.”
This study, which is detailed in Science Advances, highlights only some of the features found since the mission began. To date, Perseverance has traversed around 40 km and has moved out of Jezero onto the crater’s rim, and the researchers say they will continue publishing analyses from both these areas.
“I am also excited about one day returning to the Neretva Vallis region where we have detected the most compelling potential biosignatures. These may have a biological origin, but require additional study before determining if they may be evidence of past microbial life,” says Cardarelli.
The post Perseverance finds evidence for an ancient river delta on Mars appeared first on Physics World.
Shock as CERN antiproton lorry vanishes in staff car park
Truck was used last month to transport 92 antiprotons around CERN
The post Shock as CERN antiproton lorry vanishes in staff car park appeared first on Physics World.
WE HOPE YOU ENJOYED OUR APRIL FOOL’S JOKE FOR 2026. KEN HEARTLY-WRIGHT WILL BE BACK AGAIN NEXT YEAR.
Researchers at the CERN particle-physics lab near Geneva have been left stunned after a lorry containing a vial of antiprotons went missing. The lorry had been used by the Baryon-Antibaryon Symmetry Experiment (BASE) to successfully transport 92 antiprotons around the CERN site last month.
Following their work, BASE researchers had left the lorry in the main CERN car park but found it had vanished the following morning. The antiprotons were contained in a cryogenically cooled Penning trap composed of gold-plated cylindrical electrode stacks made from oxygen-free copper surrounded by a superconducting magnet bore.
Initial suspicion was that the lorry might have been stolen by visiting US researchers from Fermilab, but a review of CCTV footage by CERN scientist Vittoria Vetra suggests it had been left overnight with the handbrake off.
I should have paid more attention. But I was just reaching into my bag to get my baguette lunch.
CERN lorry driver Herwig Chopper
Vetra discovered that following the test run, the driver – Herwig Chopper – had hit a pine marten dashing across the car park. “I should have paid more attention,” admitted Chopper. “But I was just reaching into my bag to get my baguette lunch”.
The driver swiftly went to get help for the stricken marten, with the suspicion being that in the rush he accidentally left the truck’s handbrake off.
Footage taken later in the day revealed that the antiproton lorry began moving slowly forwards towards an identical vehicle containing protons, which had been used in 2024 to successfully transport protons across the lab’s campus.
Moments later, the two trucks collided and annihilated in a brilliant flash of light that dazzled the CCTV camera.
The light was so intense that it was even picked up at CERN’s Antiproton Proton RecoIL-1 (APRIL-1) experiment, which lies just a few hundred metres away.
Initial analysis by experiment head Silvano Bentivoglio suggests that the significant centre-of-mass energy of the collision could have produced two new particles, which the team have dubbed an “angelon” and a “demon”.
This new discovery opens up a new branch of particle physics to probe the full collision spectrum of trucks containing matter and antimatter.
TV physicist Brian Cox
“This new discovery opens up a new branch of particle physics to probe the full collision spectrum of trucks containing matter and antimatter,” says TV particle physicist Brian Cox. “Who knows what we might find and it could also be possible to collide other methods of transportation to search for new forces.”
There are now calls for CERN to build the 91 km Future Truck Collider in an underground tunnel with the Vatican and other private sponsors already coming forward with significant funding.
The post Shock as CERN antiproton lorry vanishes in staff car park appeared first on Physics World.
Exploring the astrophysics behind Project Hail Mary
Author Andy Weir and astrophysicist Becky Smethurst unpack the physics in the new Hollywood space epic
The post Exploring the astrophysics behind <em>Project Hail Mary</em> appeared first on Physics World.
What happens when hard science fiction collides with big-budget cinema? The latest episode of Physics World Stories delves into the ideas within Project Hail Mary – a new film about a science teacher (portrayed by Ryan Gosling) who finds himself alone on a spacecraft with the job of saving humanity from a star-dimming threat.
Host Andrew Glester talks to science-fiction author Andy Weir, whose 2021 novel inspired the production. Weir, also known for The Martian and Artemis – both adapted for the screen – has built a reputation for scientific rigour, sometimes spending days perfecting calculations for the smallest plot details. In the interview, he reflects on how his writing has evolved over time, with a growing focus on character development alongside the hardcore science.
Also in the episode is astrophysicist and science communicator Becky Smethurst, who gives her take on the film’s science. From the treatment of relativity to its refreshingly plausible take on alien life, Smethurst loves how Project Hail Mary avoids many familiar sci-fi clichés. She also shares some of her favourite recent science fiction.
Smethurst, who runs the popular YouTube channel Dr Becky, recently released a series about Project Hail Mary. It’s well worth checking out the entertaining interviews with Weir, Gosling and directors Phil Lord and Christopher Miller – all grappling with the challenge of bringing complex physics to the screen.
The post Exploring the astrophysics behind <em>Project Hail Mary</em> appeared first on Physics World.
From the blackboard to the backbenches: how physics teacher Dave Robertson became an MP
Matin Durrani talks to Dave Robertson, a politician who used to be a physics teacher
The post From the blackboard to the backbenches: how physics teacher Dave Robertson became an MP appeared first on Physics World.
Physicists who go into politics are a rare breed. Most famously there was Angela Merkel, who was chancellor of Germany for 16 years. Climate physicist Claudia Sheinbaum Pardo was elected Mexican president in a landslide win in 2024. Alok Sharma, meanwhile, was business secretary in the UK government and president of the COP-26 climate summit.
But Dave Robertson is even more unusual. Having originally studied physics at the University of Liverpool in the UK, he worked as a physics teacher in Birmingham for almost a decade. After spells in the trade-union movement and local politics, Robertson has been the Labour Member of Parliament (MP) for Lichfield, Burntwood and the Villages since 2024.
He’s not the only physicist currently serving as an MP. Others include Layla Moran – another former physics teacher – who’s been Liberal Democrat MP for Oxford West and Abingdon since 2017. There’s also shadow home secretary Chris Philp, who’s been Conservative MP for Croydon South since 2015.
But Robertson is the only physics-teacher-turned-MP in the current Labour government, which came to power at the 2024 general election. It won a 174-seat landslide majority, though Robertson’s own victory was wafer-thin. He squeaked home by just 810 votes over his Conservative rival Michael Fabricant, who had been Lichfield’s MP for more than 25 years.
In an interview with Physics World, Robertson admits he had little idea of what the job of MP would involve (see box). Describing the British parliament as “a truly bonkers and bizarre workplace”, he divides his time between Lichfield and London. “I try to do four days in my constituency a week and four days in parliament. That doesn’t add up, but if I can split my Mondays, I can just about make it work.”
Dave Robertson MP: what happened after I got elected

Dave Robertson recalls the immediate aftermath of his victory in the UK general election on Thursday 4 July 2024.
When you win an election, they give you this envelope. I was expecting a proper, thick A4 envelope, but all they gave me was a single sheet of A4 paper folded in half. It was 4.30 in the morning, I’d had no sleep and I’d been on my feet since 7 a.m. or something stupid. And I thought “I’m not opening this now. I’m going to take it home.”
When I opened it in the morning, it basically said “Congratulations, phone this number.” So I rang and someone said “Oh, when are you coming down to parliament?” And my reaction was “I thought you’d tell me that!” In the end, I went down on the Sunday after the election and I remember walking into Westminster Hall for the first time with the person who was showing me round and she said, “So when was the last time you were in parliament?”
As I put my hand on the door, I had to admit I’d never been in the building before: it was literally the first time I’d ever been there. And it’s nothing like I expected. It is a truly bonkers and truly bizarre workplace. It’s unique and so different to everything else. That comes with its frustrations, but it is also an absolute privilege to be involved – and long may it continue.
Into the classroom
Brought up in Lichfield, Robertson began his physics degree at Liverpool in 2004. Saying he “loved every second” of his time there, Robertson particularly enjoyed nuclear physics. But it was a science-communication course, which Robertson admits he only took because he thought it would be easy marks, that made him realize how much he liked taking complicated concepts and explaining them to non-experts.
After graduating in 2007 and taking a year off, Robertson returned to the Midlands to do a teacher-training degree at the University of Birmingham. The course was largely practical, with Robertson spending most of his time getting hands-on teaching experience at various schools in Birmingham, including one – Great Barr School – that he ended up working at.
Robertson spent seven years as a physics teacher at Great Barr, which was then one of the largest secondary schools in the UK. With about 2500 pupils, it had as many as 16 classes in each year group, from age 11 to 16. Great Barr was also able to offer physics to 17 and 18 year olds who stayed on to do A-levels. “We’d always have one physics group or occasionally two in year 12.”
Rather than just focusing on the syllabus, Robertson would try to make his lessons “loud and engaging” to emphasize the excitement and sheer bizarreness of physics. Claiming he has good control of his voice, Robertson says he would also “put on accents and do silly voices” to keep pupils entertained.
He particularly enjoyed teaching a course called “Science in the news”, where pupils would look into the impact of a particular topic in the syllabus on the wider world. “That was wonderful,” Robertson recalls. “It was effectively a literature review, which let us teach a lot of the skills that we want to see kids developing when they’re learning sciences. It was fascinating.”
Not all pupils enjoyed physics. “For some kids, physics wasn’t their thing – it’s not what drove them,” he says. But he regarded it as “an absolute privilege” to teach students who were engaged with the subject, especially those who went on to study physics at university. One ex-pupil even contacted Robertson after he became an MP to say she’d just passed her PhD. “She’d dropped a note into her thesis thanking Mr Robertson for being an inspiring physics teacher.”
Political moves
Robertson’s time at Great Barr came to an end in 2016 when the school was making job cuts and he accepted voluntary redundancy. After doing supply teaching for about a year, he got wind of a post at the NASUWT teachers’ trade union, which he’d been school rep for at Great Barr. “It was one of those jobs I’d have regretted if I didn’t apply for it,” he says.
It was while working for the NASUWT that Robertson got involved in local politics. He joined the Labour Party and in 2019 was elected to Lichfield District Council, which was then run by the Conservative Party. He also stood in that year’s UK general election, but was beaten by Michael Fabricant, losing by more than 23,000 votes. “I don’t talk about that result,” Robertson jokes.

Robertson is now one of more than 400 Labour MPs and spends most of his time on local Lichfield matters. “My number one focus is very much what’s going on in my constituency, and that will always be the case,” he says. “But I’m very fortunate to be one of a very small number of parliamentarians who’ve got a science background, let alone a physics background.”
That interest saw Robertson host an exhibition in the Houses of Parliament, organized by the Institute of Physics (IOP), in June 2025 to support the International Year of Quantum Science and Technology (IYQ). “Every MP and member of the Lords would have been able to walk past and see that it was the IYQ,” he says. The exhibition was, for him, a great opportunity “to show decision-makers that the UK is one of the world leaders in quantum”.
That month Robertson also hosted a hands-on display of quantum technology for MPs and members of the House of Lords, again organized by the IOP. At the end of 2025 he sponsored another parliamentary reception, this time for physics-based companies that had won IOP Business Awards. “The event was absolutely wonderful,” says Robertson. “Seeing some of the cutting-edge science from companies on show was astonishing.”
Robertson’s focus on science extends to his membership of various cross-party parliamentary groups, including ones about nuclear energy and space. He is also chair of a new group he has set up devoted to quantum science and technology. As a backbench MP, Robertson cannot dictate or implement policy, but he says such groups “can help build up a critical mass of interest in parliament to drive an agenda forwards”.

With his background in teaching, Robertson is also keen to highlight the UK-wide shortage of physics teachers. While at Great Barr School – now rebranded as Fortis Academy – he was lucky. “I remember having a physics group meeting,” he says, “where there were six of us around the table and thinking ‘This is more [physics teachers] than most cities have’.”
As a 2025 IOP report pointed out, a quarter of state schools in England have no specialist physics teachers. In fact, more than half of physics lessons for 14–16 year olds are taught by teachers who never studied a physics-related subject beyond the age of 18. Despite some improvement, only 31% of the government’s target number of physics teachers have been recruited, while 44% of new physics teachers quit within five years.
It’s the responsibility of me and other MPs with a scientific background to spark an interest in physics
Dave Robertson MP
Robertson admits that getting the lack of physics teachers on the radar is an uphill battle. “There are 650 MPs but have they all thought about the importance of getting more physics teachers in the classroom? Probably not, if I’m honest. That’s why it’s the responsibility of me and other MPs with a scientific background to spark an interest in physics and unearth the next Paul Dirac or Isaac Newton.”
Robertson would also like to get on the influential science innovation and technology select committee to spread the message about the importance of physics. But he is wary of spending too much time in parliament with other MPs with a scientific background. “It’s more helpful if all of us have tentacles that spread out into other groups and parties and sections of parliament.”
Spreading the message
For the wider physics community, Robertson believes that physicists need to speak out more strongly about how they can tackle many of the world’s problems, notably climate change. “It’s the biggest issue at the moment and a lot of the solutions are going to come from physics,” he says. “Getting more physicists engaged with decision-makers will not only be good for the future of the economy but ultimately for the future of the planet.”
As for Robertson’s own future, he knows that a career in politics is precarious. Voters rarely hold politicians in high regard and will often boot them out on a whim. It’s therefore hard for any MP to have a predictable career path or plan too far ahead. Robertson himself admits to having “no big aspirations” to be a cabinet minister, which is perhaps just as well given that his majority at the last election was so thin.
With the next general election not due to take place until 2029, Robertson is for now focusing squarely on his role as a backbench constituency MP. “The job I have is just about the most wonderful in the world,” he says. “I want to keep doing it because there’s some wonderful things I can do for my community, whether it’s physics, quantum or football.” But if Robertson did get kicked out, at least he can go back into the classroom.
“Rumour has it, we could do with a few more physics teachers.”
- Dave Robertson also features on the 19 March 2026 episode of the Physics World Weekly podcast
The post From the blackboard to the backbenches: how physics teacher Dave Robertson became an MP appeared first on Physics World.
Miniature magnets break field strength record
New coiled device could rival expensive magnet facilities, say scientists
The post Miniature magnets break field strength record appeared first on Physics World.
Physicists at ETH Zurich in Switzerland have produced magnetic fields as high as 40 T in a superconducting coil that has a bore diameter of just 3.1 mm. Until now, creating such intense fields required large and expensive facilities and tens of megawatts of power. The new miniaturized structure requires a few thousand times less power than larger magnets and it could help bring ultrastrong benchtop magnets closer to reality.
“All previous 40 T class magnets have been metres in size, weigh more than six tons, and require about 20 MW of power to operate,” says Alexander Barnes, who led the research effort. “Our miniature magnet can also generate a 40 T magnetic field, but it is small enough to fit in the palm of your hand and requires a few watts or less to operate.”
Such a device could be extremely useful for scientists who use strong magnets in their research, he adds. “Rather than having to travel to the few locations in the world that have the resources and space to house a strong magnetic field, with this technology scientists in the future could have access to these magnets in their own laboratory.”
Making the magnet tiny
Barnes and his colleagues, who are nuclear magnetic resonance (NMR) spectroscopists, came up with the idea for their new magnet by asking themselves a simple question: “what do we need to put inside it in our experiments?” The answer was: only the sample and an NMR detection coil.
“So, instead of making magnets expensive and big enough to house all different kinds of equipment, we decided to make the magnet tiny – and just big enough to be able to fit inside it what we need to fit inside it,” says Barnes. In this way, any bulky components can be placed outside the magnet and only the essential elements within the high-field region inside it.
“Think about the right-hand rule and the Biot-Savart law we all learn in first year physics,” he explains. “This law tells us the more electrons moving in a circle, the higher the magnetic field. And the more electrons moving in a circle in a smaller volume close to the sample also means a higher magnetic field. This is all we did – we tried to maximize the electrons moving in a circle near our sample.”
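As a rough back-of-the-envelope illustration of the scaling Barnes describes – not the ETH Zurich coil design – the field at the centre of an ideal stack of N co-located circular turns is B = μ0NI/(2R), so shrinking the radius while keeping the ampere-turns fixed raises the field. The turn count, current and radii below are invented purely for illustration.

```python
# Back-of-the-envelope scaling for an ideal stack of co-located loops
# (illustrative numbers only; not the ETH Zurich coil geometry).

from math import pi

mu0 = 4 * pi * 1e-7          # vacuum permeability, T*m/A

def central_field(n_turns, current_A, radius_m):
    """Field (tesla) at the centre of n_turns co-located circular loops."""
    return mu0 * n_turns * current_A / (2 * radius_m)

# Shrinking the bore while keeping the ampere-turns fixed raises the field
for radius_mm in (50, 10, 1.75):     # 1.75 mm is roughly half a 3.5 mm winding diameter
    B = central_field(n_turns=60, current_A=1000, radius_m=radius_mm * 1e-3)
    print(f"R = {radius_mm:5.2f} mm  ->  B ~ {B:.1f} T")
```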
High-temperature superconducting tapes
Strong magnets are needed in a host of research and technology areas, from magnetic resonance imaging (MRI) and particle accelerators to NMR spectroscopy. Magnetic fields greater than 40 T can be produced using high-temperature superconducting (HTS) tapes. These structures can also be wound together to increase their already very high critical current even further, something that allows the resulting coils to reach higher magnetic fields. A famous example, Barnes reminds us, is the world-record 45.5 T steady-state magnet, which uses a HTS coil as an insert within a resistive background magnet. The problem, however, is that these high-field hybrid magnets are huge and require a lot of power.
Barnes’ team says it might now have overcome this issue with its two compact HTS magnets wound with a conducting tape coated with the superconducting ceramic REBCO. The first magnet, composed of two pancake coils, produces a magnetic field of 38 T and the second, composed of four (quad) pancake coils, a field of 42 T. The researchers say they used a specialized winding technique combined with soldering to make sure there was a jointless connection between the pancake coils at a winding diameter of 3.5 mm.
The strong magnetic fields of the coils stem from the high current-carrying ability of REBCO and the extremely small magnet bore diameter of 3.1 mm. “These magnets reach current densities of 2257 and 1880 A mm−2 at peak currents of 1246 and 1038 A, respectively,” says Barnes, “and despite the much higher current density, they consume a few thousand times less power and require a coil volume over 1000 times smaller than that of the 45.5 T hybrid magnet.”
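A quick arithmetic check on these quoted figures: current density is simply current divided by conductor cross-section, so each pair of numbers implies an effective cross-section of roughly 0.55 mm², consistent between the two coils. The snippet below just carries out that division.

```python
# Consistency check on the quoted figures: J = I / A implies A = I / J.

pairs = [(1246, 2257), (1038, 1880)]   # (peak current in A, density in A/mm^2)
for current, density in pairs:
    area_mm2 = current / density
    print(f"I = {current} A, J = {density} A/mm^2  ->  A ~ {area_mm2:.3f} mm^2")
```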
“Amazing” materials
He says he imagines a “bright future” where there are hundreds and thousands of benchtop magnets capable of 50 T and more, all over the world in academia and industry. These magnets can be used for NMR and electron paramagnetic resonance (EPR) spectroscopy, but also quantum computers and other applications. For instance, the ETH Zurich team is working on a project that uses these magnets to build miniature gyrotrons, which are microwave generators. “We have plans to use such devices for spectroscopy, but also for nuclear fusion heating and even vaporizing holes deep in the Earth to extract geothermal energy,” Barnes tells Physics World.
It will not all be plain sailing, however, say the researchers. One of the main challenges in this work, which is detailed in Science Advances, is to avoid damaging the REBCO-coated tapes. These tapes are “amazing” materials, says Barnes. They are a single crystal of rare-earth barium copper oxide and are more than 100 m long, but the problem is that they are subject to mechanical strain. If this strain exceeds a certain, critical threshold, then the superconducting layer can crack, leading to reduced current-carrying capacity as the structure’s resistance increases.
The researchers say they are now busy working on increasing the magnetic fields – they are targeting 50 T soon – and performing NMR inside their existing coils. “ResonX, the commercial partner on this study, is also actively commercializing these magnets,” reveals Barnes.
The post Miniature magnets break field strength record appeared first on Physics World.
Magnetic microrobot swarm moves objects with water
Microrobots harness fluidic torque to move millimetre-sized objects without physical contact
The post Magnetic microrobot swarm moves objects with water appeared first on Physics World.
Robots tend to move things physically, using arms or other appendages. But what if robots could move objects without physically touching them? Researchers from the Max Planck Institute for Intelligent Systems, the University of Michigan and Cornell University have developed robotic swarms that can manipulate objects using only water, by inducing a fluidic torque.
Strong viscous interactions exist in microscale systems, which can be used to generate fluid flows that actuate passive objects. In previous research, the team found that this manipulation can be influenced by the number of microrobots, their spin rate and their position relative to the object. This latest work, published in Science Advances, has gone one step further, demonstrating that a magnetic robot swarm can assemble, transport and reorganize objects that are many times larger than the microrobots themselves.
“This study is the third in a series of papers where our team explores how microscale robot swarms can coordinate using simple global control signals,” says Kirstin Petersen of Cornell University, “Rather than controlling each robot individually, we broadcast the same signal to the entire group and rely on the robots’ interactions with each other and with their environment to produce different collective behaviours. Here, we showed that those interactions could also be used to manipulate external structures through the fluid flows generated by the swarm”.
The robots are microdisks about 300 µm in diameter and, because they are magnetic, they can be rotated using an externally applied magnetic field. When each individual microrobot spins, it drags the fluid around it, which generates a force in the liquid. While this force is small for an individual robot, combining hundreds of robots spinning in unison (and/or increasing their spin speed) creates a much larger flow force in the water – generating a high enough torque to move objects.
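A hedged toy model of this superposition idea – not the authors' full hydrodynamic analysis – treats each spinning disk as a point torque (rotlet) whose flow decays with distance, and sums the azimuthal velocity the swarm drives around a passive ring. Every parameter below (viscosity, geometry, spin rate) is an assumption chosen for illustration; the point is simply that the drive grows with robot number and spin speed.

```python
# Toy rotlet-superposition model of swarm-driven flow (illustrative only).

import numpy as np

mu = 1e-3            # water viscosity, Pa*s
a = 150e-6           # disk radius (~300 um diameter), m
R_robots = 0.5e-3    # robots assumed to sit on a circle of this radius, m
R_ring = 1.0e-3      # radius of the passive ring being driven (assumed), m

def mean_azimuthal_speed(n_robots, omega):
    """Mean tangential flow speed around the ring from n spinning disks."""
    torque = (32.0 / 3.0) * mu * omega * a**3          # rotlet strength, N*m
    robot_angles = np.linspace(0, 2 * np.pi, n_robots, endpoint=False)
    robots = R_robots * np.column_stack([np.cos(robot_angles), np.sin(robot_angles)])
    ring_angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    ring = R_ring * np.column_stack([np.cos(ring_angles), np.sin(ring_angles)])
    tangents = np.column_stack([-np.sin(ring_angles), np.cos(ring_angles)])
    speeds = []
    for p, t in zip(ring, tangents):
        u = np.zeros(2)
        for q in robots:
            d = p - q
            r = np.linalg.norm(d)
            # in-plane rotlet flow: u = T x d / (8 pi mu r^3)
            u += torque * np.array([-d[1], d[0]]) / (8 * np.pi * mu * r**3)
        speeds.append(u @ t)
    return np.mean(speeds)

for n in (10, 100, 400):
    drive = mean_azimuthal_speed(n, 2 * np.pi * 50)    # all disks spun at 50 Hz
    print(f"{n:4d} robots: mean azimuthal drive ~ {drive:.2e} m/s")
```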
“The most exciting result is that the robot collective can use the fluidic torque it generates to manipulate structures much larger than the robots themselves, without physical contact. It suggests that you could add actuation to otherwise passive objects simply by introducing microrobots in the surrounding fluid,” Petersen tells Physics World.
To demonstrate this approach, the researchers positioned the microrobots inside and outside of concentric floating ring structures, and used the number of robots, their positions and spin speeds to act as a form of control for moving objects. They found that the robots could spread out and surround the object, rotating it in the process, or they could crawl around the edges of an object, allowing them to reorganize objects. The ability to change these parameters and obtain different torques provided a tuneable and programmable way of using the microrobot swarms.
The researchers extended the principles to mechanical systems, using the microrobot to turn miniature gear trains (after turning the first gear, the other gears moved by conventional mechanical contact). They also rotated 3D floating objects that were 45,000 times the mass of an individual robot. Here, placing the robots on top of the object generated sufficient torque to rotate it, despite the mass difference.
The team also found that the microrobot swarm could dynamically assemble objects using coordinated fluid flows, in which the robots switched between their rotational function and crawling ability to move objects along a surface. This adaptive behaviour not only allowed the manipulation of objects, but also their reorganization – including expelling, dispersing and aggregating objects – based on the environment and task requirements.
The introduction of these small robots into fluids essentially turns the fluid from a passive medium into a small-scale motor. For applications where there is a risk of structural damage from mechanical manipulation, contactless manipulation could be highly beneficial. For example, this type of mechanism could be useful in microscale manufacturing and biomedical engineering, particularly for miniature device assembly, biological matter transport and targeted manipulation within the human body.
When asked about what’s next for this research, Petersen tells Physics World that “the other authors are focusing specifically on innovating microrobots, whereas my lab is studying the broader question of how collectives coordinate through their shared environment while keeping individual agents simple. We are exploring natural and engineered fluid-coupled swarms across a wide range of size scales”.
The post Magnetic microrobot swarm moves objects with water appeared first on Physics World.
Why mentorship is vital for the future of physics
Honor Powrie explains why giving back through mentorship is so valuable
The post Why mentorship is vital for the future of physics appeared first on Physics World.
A couple of months ago I wrote about whether it’s possible to teach the art of entrepreneurship or if it’s a skill that’s innate to individuals. My article led to some invaluable feedback, notably from one reader who said that, yes, of course it can be taught. Not, they said, from formal lectures but mainly through mentoring by people who’ve learned the art of entrepreneurship themselves.
That idea got me thinking about the wider benefit of “giving back” one’s experience to others who could gain from that wisdom. All professional scientists and engineers will have benefited at one time or another from the generous guidance of other people – be they teachers, lecturers, or work colleagues. So perhaps we should think about how we can do the same.
The value of a professional interaction, however small, should not be overlooked
It’s easy to imagine our lives are so inconsequential that we have nothing to teach – and even if we do have something to say, we certainly haven’t got the time to tell others about it. But the value of a professional interaction, however small, should not be overlooked. A timely moment at any career stage can make all the difference to an individual’s professional impact and future success. The scope of opportunity for giving back is broad.
Volunteering and internships
In my experience, local schools are always grateful for career guidance from professionals. Staff at my company, for example, often give career talks at their children’s schools. We take part in events such as assemblies, career evenings or careers weeks and we are currently keen to provide work experience for 16- and 17-year-olds in year 12. If we go ahead, I am sure pupils will be eager to snap opportunities up.
I have also seen the benefit of scientists and engineers developing videos, workbooks and other materials for primary-school children to learn about concepts in science and technology. It is important to make an impact at the earliest possible stage, which is where the talent pipeline starts. Once students are in their teens and have made their subject choices, it becomes hard – if not impossible – to influence them.
Internships are another great way of giving back. For the last eight years, I have been running a data-science internship programme at GE – and I just wish I’d started it sooner. Initially, we offered summer-long placements, but after a year we added year-long roles to the mix. I will be honest, colleagues were hugely sceptical about how much value these roles would bring, but their worry proved unfounded.
The vast majority of our interns have been extremely productive under our guidance and, after finishing, have gone on to secure graduate positions within GE or other tech firms. It’s vital, however, that interns are properly supported. As well as being given comprehensive induction and training, interns must be part of an established project team, whose members are always on hand to give guidance, answer questions, and provide the interns with clear tasks and goals.
It’s also important to set expectations of professionalism when at work. We are fortunate in GE that interns are taken on as regular employees and so have access to a wide range of employee and company benefits. Interns therefore find it easier to feel part of the company and adopt its ethos. Remember too, that the benefits work both ways. Interns bring you new perspectives and fresh ideas, while also keeping the rest of the team stimulated.
Professional societies and professorships
Being a member of a professional body is also a great way to give back to the community. The Institute of Physics (IOP), for example, has an active volunteer community, along with special interest groups and regional and national branches that are all run by member volunteers, with help from IOP staff. Becoming an IOP volunteer also gives you the chance to influence and help shape the physics community.
By meeting like-minded colleagues, you can build your network and give back to the community at the same time
You could, for example, get involved with running lectures, seminars, webinars and career outreach events. By meeting like-minded colleagues, you can build your network and give back to the community at the same time. There are some great examples, notably Deborah Phelps, a physicist in engineering who ended up launching the IOP’s girl-guiding badge.
For more experienced industrialists, another way to give back is to become a visiting professor. I am fortunate enough to hold such a position myself: it lets you go back to university and share your knowledge and experience with current students. It’s invaluable for universities too, allowing students to learn what real-life careers look like and what skills they might need beyond the technical knowledge gained during a degree.
Visiting professorships tend to be awarded directly by universities. But competitive awards exist too. The Royal Academy of Engineering, for example, runs a scheme that brings engineers, entrepreneurs, consultants and other industry insiders into UK universities to boost undergraduate engineering education. Covering areas that would appeal to physicists, such as energy, materials and electronics, the scheme lets experts deliver face-to-face teaching, mentoring and curriculum development for three years.
The Royal Society, meanwhile, runs an entrepreneur-in-residence scheme that’s been taken up by people like Fiona Riddich, who originally studied maths and physics before joining the energy industry. She’s mentored students at the University of Edinburgh and developed a project called Energy@Edinburgh to raise awareness of researchers’ work, promote interdisciplinary exchange, grow staff understanding of the energy market, and encourage innovation and translation of research.
I have only scratched the surface of what can be done for the good of our scientific and engineering community, but there is plenty of opportunity and few, if any, barriers to entry. I can’t emphasise enough the importance of doing this, especially for growing our pipeline of technical breakthroughs and developing talented people for the future.
My challenge to you is to tell your colleagues what you’re already doing to “give back” – and why. And if you’re doing nothing to give back, now is the perfect time to get started.
The post Why mentorship is vital for the future of physics appeared first on Physics World.
Where do thunderstorms form?
Soil moisture and wind patterns are important, reveals new study
The post Where do thunderstorms form? appeared first on Physics World.
The amount of moisture in soil – and the way this moisture is distributed – combined with wind patterns in the lowest few kilometres of the atmosphere can influence where thunderstorms begin and how they develop. This new finding, from researchers at the UK Centre for Ecology and Hydrology (UKCEH) could help in the development of new early warning systems for such events, which are increasing worldwide and becoming more intense and dangerous as the climate warms.
Thunderstorms can develop quickly on hot afternoons, sometimes within half an hour of clouds starting to build up, but predicting where they originate can be difficult.
A team of researchers led by meteorologist Christopher Taylor has now discovered that patches of dry soil 10–50 km across can combine with the wind field and affect how quickly convective storm clouds (cumulonimbus) form and grow.
“We already knew that differences in wind speed and direction with height (the ‘vertical wind shear’) in the atmosphere are critical ingredients for severe storm development, whilst gradients in land surface heating across the landscape can induce weak winds near the ground,” explains Taylor. “These two elements are usually studied separately, but we put them together and found that convective clouds grow very rapidly when the winds that steer them, some 3–4 km above the ground, oppose local surface-generated winds near the ground.”
This combination, he says, effectively increases the supply of moist, buoyant air into a cloud, accelerating the updraughts responsible for lightning and heavy rain.
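To make the geometry concrete, the toy calculation below compares an assumed surface-generated breeze with an assumed steering wind 3–4 km aloft: a negative dot product (opposition) is the configuration the study links to rapid cloud growth, and the difference between the two vectors gives the vertical wind shear. All numbers are invented for illustration and are not taken from the study.

```python
# Toy comparison of a surface breeze and the mid-level steering wind
# (all numbers invented for illustration).

import numpy as np

def opposition_index(surface_wind, steering_wind):
    """Cosine of the angle between the two winds: -1 means directly opposed."""
    s = np.asarray(surface_wind, float)
    u = np.asarray(steering_wind, float)
    return float(s @ u / (np.linalg.norm(s) * np.linalg.norm(u)))

surface = np.array([2.0, 0.5])       # m/s, weak breeze driven by uneven heating
steering = np.array([-8.0, -1.0])    # m/s, wind 3-4 km above the ground

print(f"opposition index: {opposition_index(surface, steering):+.2f}")
print(f"vertical wind shear: {np.linalg.norm(steering - surface):.1f} m/s over ~3.5 km")
```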
“Storm initiations are clearly favoured in specific locations”
The result, he explains, challenges the conventional view that, over flat terrain, the locations where cumulonimbus first develop are essentially random. “In fact, under the conditions we studied – across sub-Saharan Africa – storm initiations are clearly favoured in specific locations, based on a combination of soil and wind conditions on that day.”
The work, which is detailed in Nature, could help in the development of more localized storm forecasting, he says, particularly in tropical areas where soil moisture gradients and wind shear are strong and can lead to flash flooding, lightning and strong winds.
The UKCEH team obtained its result by studying satellite images of 2.2 million afternoon storms from 2004 to 2024. The high-resolution data extracted from these images allowed the researchers to observe fine-scale details of soil wetness.
The principle they have identified would be applicable to predicting thunderstorm formation in other parts of the world, such as Asia, the Americas, Australia and Europe – and not just the worst-hit tropical regions in Africa.
Ground-based measurement networks are scarce in Africa
Taylor and colleagues say they have been working with meteorological services in Africa for the last few years and contributing to international efforts to provide early warning systems for severe storms. Convective storms can be particularly damaging in built-up urban areas, with intense rainfall damaging infrastructure such as roads and sanitation systems. “Unlike in the UK, where ground-based measurement networks are the backbone of weather forecasting, they are scarce in Africa and there are only a handful of meteorological radars here,” explains Taylor. “We therefore had to rely on satellite data, which provide good quality information on some aspects of the coupled land–atmosphere system – notably the temperature (and therefore the height) of clouds and estimates of moisture in the top few centimetres of the soil.”
From this information, the researchers inferred how soil moisture affects evapotranspiration and atmospheric heating, how pressure gradients created by these heating patterns affect winds locally and, finally, how these inferred local winds interact with growing convective clouds.
The insights gleaned from this study could help improve the accuracy of short-term weather forecasts by providing a better indication of where storms are likely to appear within a region, Taylor says. “Just how much more skilful a forecast will be is an open question, but we have good reason to believe that in parts of Africa it could provide a big advance. In general, weather forecasting is a rapidly evolving field thanks to AI, and so the translation from research finding to application could be rapid.”
The researchers say they are now starting to look at how weather forecast models depict the processes described in their work. “Early indications suggest that models solving physical equations on a fine enough grid (of around 4 km) can capture the relationships between soil moisture, wind shear and cloud growth, but operational weather forecast models will require more accurate information on spatial variations of soil moisture to produce better forecasts,” says Taylor.
“We are also looking at how predictive models based on deep learning can exploit the new knowledge to provide forecasters with early indications of where storms may appear later in the day,” he reveals.
The post Where do thunderstorms form? appeared first on Physics World.
Researchers from China dominate IOPP outstanding reviewer awards
Some 1621 individuals from 74 countries have been honoured
The post Researchers from China dominate IOPP outstanding reviewer awards appeared first on Physics World.
More than 1600 researchers from 74 different countries have won “outstanding reviewer awards” from IOP Publishing, with researchers from China making up almost a third of awardees. The annual award recognises scientists who have delivered exceptional peer-review reports for IOP Publishing journals over the past year.
Reviewer feedback to authors plays a crucial role in the peer-review process, boosting the quality of published papers for the benefit of authors and the wider scientific community. Awards such as those from IOP Publishing are an attempt by publishers to raise the importance of courteous and constructive peer review.
This year’s recipients were selected from about 35,000 reviewers who submitted peer-review reports to IOP Publishing journals in 2025. Journal editors evaluated nominees based on the volume, timeliness and quality of their reviews.
A total of 1621 individuals have been honoured with a 2025 award. China makes up 30% of awardees followed by 16% from the US and just over 6% from India. Some 10% of this year’s award winners are also based in lower middle-income countries or territories.
“High quality peer review is essential to maintaining trust in science as it safeguards the quality and integrity of academic work,” notes Laura Feetham-Walker, IOP Publishing’s reviewer engagement manager. “I’d like to thank this year’s winners, whose thoughtful and rigorous reviews help advance scientific discovery and strengthen the communities we serve.”
The IOPP’s outstanding reviewer awards have been presented annually since 2016. The IOPP also recently introduced a peer-review excellence certification programme that provides free peer-review training and certification; in 2025, more than 1500 reviewers took part in the initiative.
The post Researchers from China dominate IOPP outstanding reviewer awards appeared first on Physics World.
Quiz of the week: how many antiprotons did CERN transport by truck?
Have you been keeping up to date with physics news? Try our short quiz to find out
The post Quiz of the week: how many antiprotons did CERN transport by truck? appeared first on Physics World.
Fancy some more? Check out our puzzles page.
The post Quiz of the week: how many antiprotons did CERN transport by truck? appeared first on Physics World.
Magnetic friction defies centuries-old law
Sensing and programmable metamaterials could benefit from discovery
The post Magnetic friction defies centuries-old law appeared first on Physics World.
Through new experiments with magnetic materials, physicists in Austria, Hong Kong and Germany have overturned a simple law of friction that has held for over 300 years. Led by Clemens Bechinger at the University of Konstanz, the team’s discovery shows how internal collective dynamics in these materials can cause friction to peak at a certain applied load before dropping sharply. The effect could prove especially promising in applications where friction needs to be precisely controlled.
In 1699, French physicist Guillaume Amontons published his rediscovery of an effect first observed by Leonardo da Vinci: that the force of friction between two sliding surfaces is proportional to the load pressing them together. He also showed that this relationship is monotonic, meaning friction continues to grow as the load increases, forcing stronger interactions between the surfaces.
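In its textbook form (a standard statement included here for context, not a result of the new study), Amontons’ law says that the sliding friction force is proportional to the normal load pressing the surfaces together and is independent of the apparent contact area:

\[ F_{\mathrm{f}} = \mu F_{\mathrm{N}}, \]

where μ is a constant coefficient of friction. It is this simple proportionality between friction and load that the new magnetic experiments put to the test.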
Since then, Amontons’ law has held up to close experimental scrutiny. “It is actually quite remarkable that this simple law holds across a wide range of very different materials,” Bechinger says. “At the same time, this classical picture does not account for systems where internal degrees of freedom – such as magnetic order – play an active role.”
Little microscopic insight
For all its success, Amontons’ law offers little insight into the microscopic mechanisms underlying friction. To probe these mechanisms, many studies have turned to atomic force microscopy, which measures the motion of a nanoscale tip as it is scanned across a surface. While powerful, this technique can only capture frictional mechanisms over extremely local regions. As a result, it is less well suited to systems where friction emerges from larger-scale effects.
In particular, magnetic materials host regions of aligned atomic spins that can extend across millimetres. When two magnetic surfaces slide past each other, these spins continuously reorient in response to their changing interactions. However, this reconfiguration isn’t instantaneous.
Famously, magnetic systems can display hysteresis, whereby a material’s response to an external magnetic field depends on the history of its magnetization. For two interacting magnetic surfaces, hysteresis means that spin realignments lag behind the sliding motion, causing the system to undergo repeated cycles of delayed switching. In the process, the kinetic energy of the sliding motion is partly dissipated, increasing the overall friction experienced by the surfaces.
To explore these effects in more detail, Bechinger’s team developed a new experimental platform that moves beyond the constraints of conventional techniques. Instead of applying a load directly, they varied the interaction strength between two extended magnetic surfaces by precisely controlling their separation distance.
Monitoring magnetization
“Using millimetre-sized rotatable magnets, this allowed us to directly monitor the orientations of their magnetization during sliding, and to correlate these changes quantitatively with the measured friction force,” Bechinger explains.
As the surfaces were brought closer together, the researchers observed that friction initially rose, in line with the expectations of Amontons’ law. However, this trend did not continue indefinitely: at an intermediate separation distance, friction reached a maximum.
“A peak occurs when competing magnetic interactions drive the system into a frustrated state,” Bechinger continues. “This causes repeated, hysteretic switching of magnetic orientations during sliding, which strongly enhances energy dissipation.”
Beyond this point, the effect was weakened by further decreases in separation distance, and friction dropped sharply: a clear departure from the monotonic behaviour predicted by Amontons’ law.
Altogether, the team’s findings show that friction can arise entirely from the internal collective dynamics of the material, rather than from direct mechanical contact alone. As Bechinger explains, the ability to tune these effects could open up new technological possibilities.
“This opens up new possibilities for designing wear-free, contactless frictional systems and suggests that friction itself can serve as a sensitive probe of microscopic ordering,” he says. “Potential applications could range from magnetic sensing to programmable metamaterials.”
The research is described in Nature Materials.
The post Magnetic friction defies centuries-old law appeared first on Physics World.
Word wave puzzle no.1
Try out our new word finder game today
The post Word wave puzzle no.1 appeared first on Physics World.
- Enter a word guess – in this game the word has six letters.
- After submitting your guess, each letter in the guessed word is coloured to provide feedback (see the code sketch after this list):
- Green: The letter is correct and is in the correct position in the target word.
- Yellow: The letter is correct but is in the wrong position in the target word.
- Grey: The letter is not in the target word at all.
- Using this colour feedback, refine your next guess.
- Continue guessing until you correctly identify the hidden word(s) or run out of attempts.
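To make the colour rule above concrete, here is a minimal Python sketch of one common way to score a guess against a hidden target word. The function name and the handling of repeated letters are illustrative assumptions, not the puzzle’s official implementation.

```python
from collections import Counter

def score_guess(guess: str, target: str) -> list[str]:
    """Return 'green', 'yellow' or 'grey' for each letter of the guess."""
    guess, target = guess.lower(), target.lower()
    colours = ["grey"] * len(guess)

    # First pass: exact matches are green; count the remaining target letters.
    remaining = Counter()
    for i, (g, t) in enumerate(zip(guess, target)):
        if g == t:
            colours[i] = "green"
        else:
            remaining[t] += 1

    # Second pass: a correct letter in the wrong position is yellow, but only
    # while unmatched copies of that letter remain in the target.
    for i, g in enumerate(guess):
        if colours[i] == "grey" and remaining[g] > 0:
            colours[i] = "yellow"
            remaining[g] -= 1

    return colours

# Toy example: guessing "python" against a hidden word "photon" gives
# green, grey, yellow, yellow, green, green.
print(score_guess("python", "photon"))
```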
If you need any hints, read our news article here.
Fancy some more? Check out our puzzles page.
The post Word wave puzzle no.1 appeared first on Physics World.
How IOP Publishing cut its carbon footprint by 36% since 2020
The company’s sustainability lead Liz Martin is our podcast guest
The post How IOP Publishing cut its carbon footprint by 36% since 2020 appeared first on Physics World.
My guest in this episode of the Physics World Weekly podcast is Liz Martin, who is sustainability lead at IOP Publishing. We chat about how the scholarly publisher has reduced its carbon emissions by 36% when compared to a 2020 baseline – and the challenges and opportunities for achieving further reductions.
Martin talks about the importance of cooperation and partnerships – both internal and external – to achieving environmental goals. This includes engaging with both suppliers and employees on how to reduce carbon emissions.
IOP Publishing is a wholly owned subsidiary of the Institute of Physics, which is the professional body and learned society for physics in the UK and Ireland. It produces over 100 scholarly journals, around half of which are published jointly with or on behalf of partner societies and research organizations. Physics World is also brought to you by IOP Publishing.
- You can download a PDF of IOP Publishing’s Sustainability Report 2025 here.
The post How IOP Publishing cut its carbon footprint by 36% since 2020 appeared first on Physics World.
Many-body effects at the world’s largest physics conference
A personal reflection on the Global Physics Summit – from artificial intelligence to ultracold atoms
The post Many-body effects at the world’s largest physics conference appeared first on Physics World.
Many-body physics is the study of large ensembles of interacting particles and their collective behaviour. These systems are notoriously difficult to simulate, yet they underpin phenomena such as superconductivity and superfluidity. Thus, they are of great interest to understand. As a many-body physicist myself, I arrived at my first American Physical Society (APS) meeting with a different curiosity: understanding what the largest physics conference in the world was all about.
Last week, I joined a crowd of 14,000 scientists convening in Denver, Colorado for the annual Global Physics Summit, hosted by the APS.
On Sunday morning, the day before the conference, I walked alone through the streets of downtown Denver. Silence filled the frigid air. A light flurry of snow covered the empty streets in white. It seemed that the city was still asleep.
But Denver was abruptly awakened on Monday morning, as I found myself well-accompanied by the crowd collectively moving towards the Colorado Convention Center for an 8 a.m. start. Inside, the conference was humming with its own emergent dynamics, with lines forming around coffee stations and people bustling to find their way to wherever they were going.
Throughout the day, I was faced with the repeated indecision of choosing between over 80 simultaneous sessions. Some sessions housed APS’s infamous blitz talks, with speakers racing to pack as many graphs and equations as possible into their allotted 10 minutes. Having barely enough time to write down the takeaways, I tried, often in vain, to fill my memory as quickly as possible.
Other sessions featured longer talks on hot topics in physics. By evening, my mind was swimming with notions of scalable quantum computing and physics funding issues and public engagement opportunities and the infiltration of AI slop into every corner of the scientific process. These sessions offered me a necessary reminder that science is not performed in a vacuum. With that said, the purely technical sessions on ultracold atomic gases served as a necessary reprieve for me that day.
Ultracold atoms, cooled to only a fraction of a degree above absolute zero, provide physicists with a clean and controllable platform for studying quantum many-body physics. At its heart, this physics is governed by interparticle correlations.

During my PhD, we measured two-body correlations and observed bosons spatially bunching together—unlike their antisocial fermionic counterparts. While the stereotypical physicist may be notoriously antisocial, the APS lanyard seemed to overturn that reputation.
Over dinner one evening, I requested a table for one. Only a moment later, I was joined by a physicist I’d never met before, and the evening unfolded amid pleasant chatter about 2D materials and the lack of vegetables in our travel diets.
Two tables down sat a professor whose work I admired. I’ll admit that I embarrassingly (or, more favourably, courageously) walked to the washroom so that I could pass by his table and say hello. I had met him once last year, but he didn’t remember me. So, I kept talking until he agreed that he remembered, and that it was nice to run into each other again. Whether true or not, I accepted it as a win. Without an APS lanyard, I probably would have avoided that conversation.
Single-atom resolution
On Thursday, a session titled “Novel imaging and quantum sensing technologies” caught my eye since I work with a quantum gas microscope. The microscope is a high-magnification imaging system that affords us the resolution of individual atoms. The microscopic information is far richer than what is obtained by a bulk imaging technique such as absorption.
Similarly, at the conference, I found the greatest value in individual conversations. Conversing with employees at the career fair, though exhausting, was far more effective than listening to panels on how to plan for careers that I couldn’t decide if I wanted.
By the end of the week, I started to recognize people I had already met over the few days prior. I saw every reunion or simple “Oh! Hi” as miraculous rather than a given, based on the size of the conference. People shared with me their personal journeys navigating the hardships and uncertainties of today’s world, others about the trade-offs and uncertainties in their experimental results. Some of the most fulfilling and deeply human conversations were the spontaneous ones that arose outside the doors of sessions that we had meant to be in.
When Friday rolled around, the city emptied as quickly as it had filled. For me, I retreated into the sunny Boulder mountains, mulling over the lingering resolution of singular people whose shared words and ideas were now intertwined with my own. Ignoring my fear of getting lost, I followed my instincts deeper into the dry heat of the afternoon, one step at a time.
The post Many-body effects at the world’s largest physics conference appeared first on Physics World.
Pressure quench increases superconducting transition temperature
New protocol could lead to ambient-pressure room-temperature superconductivity
The post Pressure quench increases superconducting transition temperature appeared first on Physics World.
Could a new pressure-quenching technique help researchers move forward on the road to reaching room-temperature superconductivity? Researchers at the University of Houston are pinning their hopes on this approach and say they have already used it to achieve a record-high superconducting transition temperature (Tc) of 151 K at ambient pressure in a metastable phase of HgBa2Ca2Cu3O8+δ (or HBCCO). The phase remains stable for at least three days when held at 77 K, although its Tc degrades when heated to above 200 K.
Achieving ambient-pressure room-temperature superconductivity remains the holy grail for scientists working in this field. This is because superconductors that work at ambient temperatures and pressures could revolutionize a host of application areas, including increasing the efficiency of electrical generators and transmission lines through lossless electricity transmission. They would also greatly simplify technologies such as magnetic resonance imaging (MRI) that rely on the generation or detection of magnetic fields.
While much progress has been made in recent decades, increasing the Tc often relies on squashing materials at extremely high pressure – usually in a device known as a diamond anvil cell (DAC). Examples include the sulphide material H3S, which has a Tc of 203 K when compressed to pressures of 150 GPa, and the cerium hydrides CeH9 and CeH10, which boast high-temperature superconductivity at lower pressures of about 80 GPa with a Tc of around 100 K.
HBCCO is a high-temperature superconducting cuprate that has a Tc of 133 K at ambient pressure. This can be pushed to 164 K by applying a pressure of 31 GPa to it.
High-pressure-induced metastable superconducting phase
The high Tc of HBCCO is thought to come from the high electron density of states of a possible “van Hove singularity” associated with the two-dimensional CuO2 planes in it. In the new work, a team led by Ching-Wu Chu and Liangzi Deng of the Department of Physics and Texas Center for Superconductivity at the University of Houston decided to study a high-pressure-induced metastable superconducting phase in the material that they think might be able to form at ambient pressure as a result of this singularity (which leads to strong interactions between electrons) and/or other anomalies in the electronic energy spectrum.
To investigate further, the researchers developed a pressure-quench protocol to stabilize this metastable phase at ambient pressure. Their process involves first identifying the target phase in a DAC under high pressures of between 10–30 GPa. Next, the material is quenched (that is, the pressure is rapidly removed) at 4.2 K.
Chu and Deng confirmed that they had indeed isolated this phase and not another using synchrotron X-ray diffraction (at the 16-ID-B beamline of the Advanced Photon Source) before removing it from the cell. These measurements also show that the pressure-quenched phase at ambient pressure retains its original crystal structure but possibly contains defects generated under pressure and during quenching. The researchers think that these defects might help preserve the metastable high-Tc phase.
Thanks to their technique, they say they have achieved a hitherto unreported ambient-pressure Tc of 151 K.
Tiny samples
The experiments were far from easy, however, the researchers say. The samples were extremely small (just 50–80 microns in size), so handling them in high-pressure experiments was inherently challenging, explains Chu. Another major difficulty was preventing the electrical leads used for the resistivity measurements from breaking during the pressure-quenching process. Recovering the samples after quenching for more detailed analyses at ambient pressure was technically demanding too.
Looking ahead, the researchers say they would now like to better understand where the high Tc in HBCCO comes from – both under pressure before quenching and at ambient pressure after quenching. “We would also like to elucidate the mechanisms that lock in the high Tc phase at ambient pressure after quenching,” says Chu.
The impact of the new work, which is detailed in PNAS, might even extend beyond superconductivity, adds Deng. “Indeed, our approach could allow us to stabilize metastable quantum states at ambient pressure that have enhanced or unique properties that only emerge under pressure. Based on our experimental results, and using theoretical modelling and AI-driven approaches, we would like to identify different types of quantum materials that are suitable for pressure quenching.”
The post Pressure quench increases superconducting transition temperature appeared first on Physics World.
Researchers at CERN transport antiprotons by truck in world‑first experiment
A cloud of 92 antiprotons has been on a journey around CERN’s campus
The post Researchers at CERN transport antiprotons by truck in world‑first experiment appeared first on Physics World.
Researchers at the CERN particle-physics lab have successfully transported antiprotons in a lorry across the lab’s main site. The feat, the first of its kind, follows a similar test with protons in 2024. CERN says the achievement is “a huge leap” towards being able to transport antimatter between labs across Europe.
Antimatter is almost identical to ordinary matter except that the electric charge and magnetic moment are reversed. But if equal amounts of matter and antimatter were created in the Big Bang, as is widely believed, they would have annihilated each other, leaving an empty universe. Physicists therefore suspect there are hidden differences that may explain why matter survived and antimatter all but disappeared.
CERN’s Baryon–Antibaryon Symmetry Experiment (BASE) focuses on measuring the magnetic moment and charge-to-mass ratio of protons and antiprotons to search for such differences.
These measurements need to be extremely precise, but this is difficult at CERN’s “Antimatter Factory”, which produces the antiprotons, due to interference from nearby equipment. To carry out more precise measurements, the team therefore needs a way of transporting the antiprotons to labs further afield.
To do so, in 2020 the BASE team began developing a device, known as BASE-STEP (for Symmetry Tests in Experiments with Portable Antiprotons), to store and transport antiprotons.
It works by trapping particles in a Penning trap composed of gold-plated cylindrical electrode stacks made from oxygen-free copper, which are surrounded by the bore of a superconducting magnet operated at cryogenic temperatures.
The device, which also contains a carbon-steel vacuum chamber to shield the particles from stray magnetic fields, is then mounted on an aluminium frame. This allows it to be transported using standard forklifts and cranes and withstand the bumps and vibrations of transport.
In 2024, BASE researchers used the device to transport a cloud of about 105 trapped protons across CERN’s Meyrin campus for four hours.
After that feat, the researchers began to adjust BASE-STEP to handle antiprotons and yesterday the team successfully transported a trap containing a cloud of 92 antiprotons around the campus for 30 minutes, travelling up to 42 km/h.
With further improvements and tests, the team now hope to transport the antiprotons further afield. The first destination on the team’s list is the Heinrich Heine University (HHU) in Düsseldorf, Germany, which would take about eight hours.
“This means we’d have to keep the trap’s superconducting magnet at a temperature below 8.2 K for that long,” says BASE-STEP’s leader Christian Smorra. “So, in addition to the liquid helium, we’d need to have a generator to power a cryocooler on the truck. We are currently investigating this possibility.”
If the antiprotons can be transported to HHU, physicists would then use the particles to search for charge–parity–time violations in protons and antiprotons with a precision at least 100 times higher than is currently possible at CERN.
The post Researchers at CERN transport antiprotons by truck in world‑first experiment appeared first on Physics World.
Heavier cousin of the proton discovered at the LHC
LHCb spots elusive particle in just one year of data
The post Heavier cousin of the proton discovered at the LHC appeared first on Physics World.
Researchers at the Large Hadron Collider (LHC) have discovered a new particle, the Ξcc⁺ (“Xi cc plus”), a heavier cousin of the proton. The particle’s fleeting existence had made it invisible for decades, but the upgraded LHCb detector captured it in just one year of data, opening a new window into the forces that hold quarks together.
Quarks are the fundamental building blocks of protons and neutrons, which in turn combine to form atomic nuclei. Protons themselves are made from two up quarks and one down quark, held together by the strong force. This is described by a sophisticated theory known as quantum chromodynamics (QCD). The Ξcc⁺ is unusual because it replaces the two up quarks with heavier charm quarks, keeping just one down quark.
“Up and down quarks are labels we give to distinguish the different types of quark,” Tim Gershon of the University of Warwick, told Physics World in an email. “In the Ξcc⁺, both up quarks are replaced by the heavier charm quark. Since the charm and up quarks differ only by their mass – in particular having the same charge – this provides an ideal way to test QCD,” explains Gershon who is spokesperson-elect for LHCb.
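As a quick consistency check (simple quark-charge arithmetic, not a figure quoted by LHCb), the electric charge of the new baryon follows from its quark content in exactly the same way as the proton’s:

\[ Q(\mathrm{p}) = Q(uud) = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \qquad Q(\Xi_{cc}^{+}) = Q(ccd) = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \]

so swapping the two up quarks for charm quarks leaves the electric charge unchanged, which is part of what makes the comparison with the proton such a clean test of QCD.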
This quark content change makes the Ξcc⁺ roughly four times heavier than a proton. Its extremely short lifetime, less than a trillionth of a second, is why previous experiments could not detect it, despite the particle being produced frequently in LHC collisions.
Upgrade was crucial
“The key development that made the observation possible was the upgrade of the LHCb detector,” Gershon says. “We could observe the Ξcc⁺ in one year of data-taking, while we had not been able to do so in a decade of data collected with the original LHCb detector.”
The Ξcc⁺ appears briefly in proton–proton collisions before decaying into three lighter particles: a Λc⁺ baryon, a K⁻ meson, and a π⁺ meson. These decay further into five final particles, including a proton, two K⁻ mesons, and two π⁺ mesons. By reconstructing the trajectories of these particles, researchers saw a sharp signal corresponding to the existence of the Ξcc⁺ particle.
This observation also settles a long-standing question. Over twenty years ago, the SELEX experiment at Fermilab in the US reported hints of the particle. However, the signal could not be confirmed. The LHCb measurement provides a clear, unambiguous detection.
“Studies of particles containing two heavy quarks are very interesting for tests of the QCD binding mechanisms, and this observation provides important new data in that direction,” Gershon says.
The discovery relied on upgrades to the LHCb detector. A silicon pixel system called the Vertex Locator tracks particle paths with incredible precision, while a Ring Imaging Cherenkov system identifies particle types based on the light they emit. These improvements allow the detector to collect much larger amounts of data than before, making rare particle discoveries possible.
The discovery of the Ξcc⁺ is just the beginning. Physicists now aim to measure its properties in detail, including its lifetime and additional decay channels. Beyond this, they hope to find even heavier cousins, where one or both charm quarks are replaced by a beauty quark – called Ξbc and Ξbb respectively.
“These may be out of reach with the current LHCb detector – although we will try our best!” Gershon says. “But we do expect to be able to observe them with a future upgrade called LHCb Upgrade II. Unfortunately, the UK funding for this upgrade has recently been put in doubt due to decisions made at the UKRI funding agency. This latest result reiterates the uniqueness of LHCb – no other experiment can make these measurements – and the importance of finding a solution to be able to fund LHCb Upgrade II.”
The post Heavier cousin of the proton discovered at the LHC appeared first on Physics World.
Diamond films cool down electronics precisely where needed
Patterned diamond films could extend the lifespan of electronic devices by removing unwanted heat
The post Diamond films cool down electronics precisely where needed appeared first on Physics World.
A new technique for directly growing diamond layers in selected areas on technologically relevant substrates could help remove heat precisely where it is needed in electronic devices, improving their performance. The scalable technique, which relies on microwave plasma chemical vapour deposition, can create diamond patterns on silicon and gallium nitride across length scales ranging from microns to full 2-inch wafers.
Unwanted heat is a major problem in electronics, and the issue only gets worse as devices become smaller. Synthetic polycrystalline diamond could come into its own here, thanks to the material’s high thermal conductivity, which allows it to efficiently dissipate heat. The problem, however, is that diamond is very hard and chemically resistant. This makes it difficult to shape using the conventional “top-down” techniques employed to carve fully-grown diamond layers to the sizes required.
In the new work, a team of researchers led by materials scientists Xiang Zhang and Pulickel Ajayan and electrical and computer engineer Yuji Zhao of Rice University in the US turned to a bottom-up approach in which they build up diamond layer-by-layer using a plasma chemical vapour deposition technique. Their process, which is detailed in Applied Physics Letters, involves using microwave energy to ionize methane gas (CH4) so that it breaks down into its constituent carbon and hydrogen atoms. The carbon atoms then settle onto the substrate and assemble via a process that begins with nucleation. “Here, individual carbon atoms act as ‘seeds’ that other carbon atoms can latch on to,” explains Zhao.
Under these conditions, the researchers are able to control the thickness of the diamond by varying the growth time.
Controlling the seed location
To control the precise location of the carbon seeds, the team employed two techniques. The first was photolithography – a routine method in microelectronics that involves passing a light beam through a transmission mask to project an image of the mask’s light-absorption pattern onto a (usually silicon) wafer. The wafer itself is covered with a photosensitive polymer called a resist. Changing the intensity of the light leads to different exposure levels in the resist-covered material, making it possible to create small, finely detailed structures.
The approach, explains Zhao, is akin to using light to create a precise stencil, with the resulting structure acting as a mould for the diamond seeds. “Once the substrate wafers have been prepped, we spread a liquid containing nanodiamonds over their surface. These tiny specks then act as the starters for the diamond growth.”
The particle size of the nanodiamond seeds was 5–10 nm, which ensured a high nucleation density (estimated to be around 10¹¹–10¹² cm⁻²) for subsequent diamond growth, Zhao adds. High-magnification scanning electron microscopy revealed that the diamond films consisted of densely packed grains that were smaller than a micron and that the patterned diamond films were around 2.5–3.5 µm thick. Raman spectroscopy confirmed that a diamond film had formed across the entire patterned region and that it was highly crystalline.
To prove how versatile this approach was, the team decided to selectively fabricate complex geometries – for example, a diamond structure in the shape of an owl, which is the mascot of Rice University – on a gallium nitride substrate.
A different technique for larger wafers
This technique worked well for small-area patterns, but for larger wafers, a different approach was required, explains Zhao. Instead of conventional photoresist lithography, the team laminated a commercially available lapping film onto a silicon wafer that served as a removable masking layer. A standard laser cutter was then used to define the boundaries of the desired pattern by selectively cutting through the film.
Next, the engraved regions were peeled off, exposing the underlying substrate only in the predefined areas. “We then carried out nanodiamond seeding by spin-coating a nanodiamond suspension over the entire wafer,” says Zhao. “After solvent evaporation, we mechanically lifted off the remaining lapping film, removing the nanodiamond seeds from the masked regions to leave a patterned seed layer on the exposed substrate that diamond can then grow on.”
This approach allowed the researchers to scale up to a full 2-inch wafer.
“The key result is that we can grow diamond on selected, predefined areas on technologically relevant substrates,” Zhao tells Physics World. “This will allow diamond – the best bulk thermal conductor known – to be placed precisely where heat removal is needed in a device, making practical integration much more feasible. Indeed, we showed that our films, when employed as heat spreaders on a silicon substrate, can reduce the operating temperature by more than 23 °C compared to bare silicon.”
The team also discovered that smaller diamond islands were better at dissipating heat than a continuous diamond coating. “We found that the 50-micron diamond patterns achieved the most effective cooling because of their higher perimeter-to-area ratio,” Zhang explains. “These geometric features increase the density of the edge regions and help the heat dissipate more efficiently in three dimensions down into the silicon substrate.”
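A rough geometric illustration (our own back-of-the-envelope picture, not a calculation from the paper) shows why smaller islands carry proportionally more edge: for a square diamond pad of side L, the perimeter-to-area ratio is

\[ \frac{P}{A} = \frac{4L}{L^{2}} = \frac{4}{L}, \]

so a 50 µm pattern has twice the edge density of a 100 µm one, and far more than a continuous film, giving heat more routes to spread laterally and down into the substrate.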
Thermal management is now a universal challenge – and is needed everywhere from AI GPUs and advanced logic (for example, FinFET technologies) to power electronics and photonics, Zhang adds. “As the global demand for AI accelerates, the associated power consumption and heat generation are becoming critical limits. Selective diamond integration offers a pathway to more efficient heat spreading across a broad range of technologies.”
Looking ahead, the researchers say they will now be working on direct device-level integration and making quantitative thermal measurements. They will also further optimize the material quality and interface engineering.
The post Diamond films cool down electronics precisely where needed appeared first on Physics World.
Superconductivity’s new contender
Doubly charged excitons in transition‑metal dichalcogenide bilayers may unlock an entirely new route to friction‑free charge flow
The post Superconductivity’s new contender appeared first on Physics World.
Researchers have experimentally observed a new kind of particle in transition‑metal dichalcogenide bilayers called doubly charged excitons, or quaternions. A single exciton is an electron bound to a hole, and combining an even number of fermions can create a boson with integer spin. In this system, one electron and three holes (or one hole and three electrons) bind together into a stable, doubly charged bosonic complex. Because bosons can occupy the same quantum state, these quaternions could in principle form a Bose-Einstein condensate, a collective phase in which all particles share a single macroscopic wavefunction. For charged bosons, such a condensate could carry electrical current with zero resistance, opening a pathway to a new kind of superconductivity.
The researchers confirmed the existence of quaternions through two key measurements. By continuously tuning the electron and hole densities, they observed the expected population behaviour of the bound state, and by applying magnetic fields, they identified the complex as a spin‑triplet. These signatures match theoretical predictions for a doubly charged exciton.
Unlike exciton or polariton condensates, a quaternion condensate is not expected to emit coherent light, and the experiments indeed show no signs of spectral narrowing or other coherence effects. Achieving condensation will require overcoming practical challenges, including heating from the optical pump and nonradiative Auger recombination at high densities, both of which raise the critical density for condensation. Better cooling and possible lateral confinement could help reach the required regime.
Although true Bose-Einstein condensation is not possible in an infinite two‑dimensional system, finite 2D systems can still undergo a transition that is effectively indistinguishable from condensation if the coherence length exceeds the system size. This makes it reasonable to search for superfluidity, and potentially superconductivity, in this platform. The strong long‑range Coulomb repulsion between quaternions also raises the possibility of entirely different quantum phases, such as a bosonic Wigner crystal or even a supersolid.
The establishment of these doubly charged exciton complexes in screened transition‑metal dichalcogenide bilayers opens a promising new direction in quantum materials research, with the real prospect of discovering a non‑BCS form of superconductivity (one that does not rely on the conventional Cooper‑pair mechanism) and other exotic states of matter.
Read the full article
Light-induced electron pairing in a bilayer structure
Qiaochu Wan et al 2026 Rep. Prog. Phys. 89 018003
Do you want to learn more about this topic?
Bose–Einstein condensation and indirect excitons: a review by Monique Combescot, Roland Combescot and François Dubin (2017)
The post Superconductivity’s new contender appeared first on Physics World.
A single theory for complicated quantum systems
A unified field‑theoretic framework models open quantum spins across all coupling and memory regimes
The post A single theory for complicated quantum systems appeared first on Physics World.
Open quantum systems appear in quantum computers, quantum magnets and spintronics, but their behaviour is extremely difficult to model. The environment introduces memory effects (non‑Markovian dynamics) and strong system-bath interactions (non‑perturbative regimes), where most existing methods fail or require switching between entirely different techniques depending on the parameters. This research presents a single unified framework that can handle all these regimes for interacting quantum spins coupled to bosonic environments.
The approach combines Schwinger-Keldysh field theory with the two‑particle‑irreducible (2PI) effective action and crucially uses a 1/N expansion of Schwinger bosons rather than a perturbative expansion in the system-bath coupling. This allows the method to remain accurate even in strongly non‑perturbative regimes. The framework can compute advanced quantities such as multitime spin correlations, which are essential for understanding quantum phase transitions and nonequilibrium transport in quantum materials.
The authors benchmark their method against quasi‑exact tensor‑network simulations of the spin‑boson model, showing excellent agreement in the regimes where tensor‑network methods are applicable, and then apply it to more complex spin‑chain models with multiple baths where no other method currently works. Because it supports arbitrary spin value, geometry, dimensionality, and bath spectral function, the framework offers a general and computationally tractable route to simulating many‑body open quantum systems.
Overall, this work provides a powerful field‑theoretic tool for studying driven‑dissipative quantum systems, with applications ranging from quantum computing to quantum magnonics and spintronics.
Read the full article
Felipe Reyes-Osorio et al 2026 Rep. Prog. Phys. 89 018002
Do you want to learn more about this topic?
Keldysh field theory for driven open quantum systems by L M Sieberer, M Buchhold and S Diehl (2016)
The post A single theory for complicated quantum systems appeared first on Physics World.
Sunken nuclear submarine is leaking radioactive material intermittently
Little evidence of radionuclide accumulation in vessel's surrounding environment
The post Sunken nuclear submarine is leaking radioactive material intermittently appeared first on Physics World.
In April 1989 the Soviet Navy’s nuclear submarine Komsomolets caught fire while cruising 335 m beneath the surface of the Norwegian Sea. It was able to surface and 27 of 69 crew members survived the ordeal. The vessel then sank and now lies in 1680 m of water about 180 km off the coast of Norway’s Bear Island.
As well as being powered by a nuclear reactor, the Komsomolets is believed to contain two torpedo-mounted nuclear warheads. Not surprisingly, people are very concerned about the wreck and the possibility of radioactive materials leaking from the vessel.
Indeed, a Russian expedition in 1994 revealed that plutonium was leaking from one of the warheads. The following year, fractures in the hull and the torpedo tubes were sealed. Since then, measurements taken near the Komsomolets suggest that any radioactive leakage is rapidly diluted by the surrounding water.
Now, scientists in Norway led by Justin Gwynn and Hilde Elise Heldal have completed a comprehensive analysis of data taken during a 2019 survey of the Komsomolets. The wreck’s marine environment was explored using Ægir 6000, a remote-controlled vehicle that is equipped with an array of cameras and other instruments and is capable of diving to 6000 m.
Writing in the Proceedings of the National Academy of Sciences, the team says analysis of seawater and sediment samples collected near the torpedo compartment reveals no evidence of plutonium being released from the warheads. However, analysis of samples from near a ventilation pipe show that radioactive material is being released intermittently from the nuclear reactor. By measuring the ratio of plutonium to uranium in the region, the team concluded that the fuel in the reactor is corroding.
Despite releases over the past three decades, Ægir 6000 found little evidence that radionuclides were accumulating in the region of the wreck – most likely because of the diluting effect of seawater.
The research is described in PNAS, where the team concludes, “Considering the global increase in military activities and geopolitical tensions, the fate of Komsomolets and the nuclear material within it can provide us with important insights as to impacts of any future accident involving nuclear powered vessels and nuclear weapons at sea”.
The post Sunken nuclear submarine is leaking radioactive material intermittently appeared first on Physics World.
Electrosolvation force can act over long distances
Attractive force between a pair of like-charged colloidal particles is measured
The post Electrosolvation force can act over long distances appeared first on Physics World.

Two particles carrying electrical charge with the same sign should not attract each other, but in recent years, researchers have found that they can do this when they are dispersed in a liquid. A team at the University of Oxford in the UK has now discovered that the distance over which this counterintuitive “electrosolvation” force acts is much longer than theoretical models currently predict. They have also shown that the range of the force can depend on particle properties such as size and surface chemistry.
“The new finding reveals a missing piece in our understanding of electrostatic forces in liquids,” says physical chemist Madhavi Krishnan, who led this research study. “It is likely to reshape our understanding of how biological matter may self-organize and how molecules like DNA, RNA and proteins may naturally condense and cluster inside cells.”
In their work, Krishnan and colleagues used optical imaging to observe how pairs of charged micron-sized spheres with various surface coatings, such as DNA, polypeptides and anionic lipid bilayers (which make up cell membranes) interact in water.
Not a uniform medium
“Conventional electrostatic models treat the solvent as a uniform medium with a dielectric constant, but real liquids (such as water) cannot be described in this way because they form hydrogen bond networks and orient themselves around surfaces. Liquids also exhibit long-range correlations. All these properties may play a role in giving rise to an additional force, which we call the electrosolvation force,” explains Krishnan.
“To come up with a comprehensive understanding of the electrosolvation interaction, we have to dissect and carefully examine the phenomenology in question,” she explains. A key feature of an interaction is its range. To measure the range of the attractive electrosolvation force accurately, Krishnan says that the students who carried out the experiments – her graduate student Sida Wang in particular – performed careful microscopy measurements on particles interacting with each other, observing individual pairs for periods of up to an hour and sometimes longer.
“We also performed exhaustive computer simulations to vet the measurements and estimate their accuracy,” Krishnan adds.
The researchers observed that DNA-coated particles exhibit particularly long-range attraction, which implies that the interaction depends not only on the solvent but also on the chemical and physical structure of the particles’ surface. This contrasts with the long-held view that the (Debye) screening length governing the interaction of charged particles in solutions depends only on the properties of the solvent medium.
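For orientation, the Debye screening length of standard continuum theory for a symmetric 1:1 electrolyte (a textbook expression, not one taken from the Oxford study) is

\[ \lambda_{\mathrm{D}} = \sqrt{\frac{\varepsilon_{\mathrm{r}} \varepsilon_{0} k_{\mathrm{B}} T}{2 N_{\mathrm{A}} e^{2} I}}, \]

where ε_r is the solvent’s relative permittivity, T the temperature and I the ionic strength. Nothing in this expression refers to the surface chemistry of the suspended particles, which is why a coating-dependent interaction range sits so uncomfortably with the conventional picture.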
Krishnan explains that the fact that the measured range of the attractive electrosolvation force can significantly exceed the nominal Debye length is, to the team’s knowledge, not readily accounted for within any existing theoretical framework, and that it points to major gaps in our understanding of the basic question of how two charged particles interact in a liquid. Indeed, it highlights the need for a more sophisticated view of the intervening medium than that offered by standard continuum electrostatics models.
“Current electrostatic models are incomplete”
“In short, anionic matter seems poised to attract; and the ability to either attract or repel in water, depending on the conditions, appears to be an intrinsic feature of negatively charged matter,” she tells Physics World. “It is entirely possible that the underlying mechanisms behind this process are broadly exploited in biology.”
The new work, which is detailed in Reports on Progress in Physics, is the most recent in a series of investigations on the physics of interparticle interactions in the fluid phase, she says, and once again shows that current electrostatic models are incomplete – even under conditions in which they are expected to work well.
Looking ahead, the researchers say they would now like to examine the same interactions in bulk solution and compare them with the observations of sedimented colloids made in the present work.
The post Electrosolvation force can act over long distances appeared first on Physics World.
Spectroscopic OCT plus AI detects high-risk plaque in coronary arteries
Artificial intelligence-based optical coherence tomography could improve long-term management of patients with coronary artery disease
The post Spectroscopic OCT plus AI detects high-risk plaque in coronary arteries appeared first on Physics World.

Identifying lipid-rich plaques inside coronary arteries is critical to assess a patient’s risk of having a heart attack. These fatty deposits adhere to the walls of blood vessels and, if they rupture, can trigger adverse cardiovascular events.
Currently, physicians use near-infrared spectroscopy and intravascular ultrasound (NIRS-IVUS) to quantitatively assess plaque lipid burden. Optical coherence tomography (OCT) is another intravascular imaging modality used during catheter-based procedures and provides micrometre-resolution visualization of plaque structure. But its diagnostic accuracy is limited by imaging artefacts and signals originating from non-lipid plaque components.
Researchers in Korea are developing a different approach: combining the biochemical specificity of spectroscopic OCT (S-OCT) with artificial intelligence (AI). This combination enables automated, composition-aware tissue characterization, and offers interpretable and annotation-efficient lipid mapping.
“By enabling efficient lipid screening and spatial interpretation, [the technique] establishes a scalable foundation for downstream assessment of lipid burden and clinically relevant plaque characterization, with potential utilization for automated risk stratification,” the researchers explain in Biomedical Optics Express.
Model training and validation
The AI-enhanced S-OCT system, which utilizes existing OCT systems without requiring hardware modification, incorporates an AI model developed by researchers at the Korea Advanced Institute of Science and Technology (KAIST) and the Multimodal Imaging and Theranostic Lab of the Korea University Guro Hospital. The AI model receives wavelength-dependent information from the OCT images and, by recognizing signal patterns associated with lipid-rich tissue, automatically highlights any suspicious regions in the image.
Team leader Hyeong Soo Nam from KAIST and collaborators created a dataset to train the AI model, using 848 lipid-positive and 622 non-lipid frames acquired from images of five rabbits with induced atherosclerotic plaques. They manually annotated each OCT frame to indicate lipid presence, and employed complex calibrated interferometric signals obtained through standard OCT processing to extract depth-resolved spectroscopic information for S-OCT. Finally, they applied a vessel region selection procedure with a depth range selected to focus the analysis on biologically relevant vessel regions with potential lipid content.
After training the deep-learning model, the researchers evaluated its performance in classifying lipid presence and spatially localizing lipid-associated regions, on both lipid-positive and non-lipid image frames. They also performed histopathological analyses to validate the predictions against ground truth, and compared the performance of the trained S-OCT model with an identically trained greyscale OCT-only model to assess the benefit of incorporating spectroscopic information.
When assessing the relative importance of distinct spectral regions for lipid detection, the researchers discovered that training the model on a short-wavelength spectral subset (below 1300 nm) resulted in a higher lipid localization Dice score (a similarity metric) than a model trained on the long-wavelength band (above 1300 nm).
“This performance difference suggests the network relies more on spectroscopic features in the short-wavelength region, where lipid absorption shows a more pronounced spectral gradient,” they write. “The superior performance with short-wavelength data implies that the model effectively utilizes this spectral gradient, rather than relying on shared morphological features, to enhance lipid detection.”
The researchers validated their approach by imaging two rabbits with atherosclerotic plaques, and comparing the AI-generated predictions against histopathology results using lipid-specific tissue staining. The proposed model accurately localized lipid regions with strong spatial correspondence to histology, achieving a lipid localization Dice score of 83.9%.
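The Dice score quoted above is a standard overlap metric between a predicted region and the histology-derived ground truth. Below is a minimal, illustrative Python sketch of how such a score is computed for binary masks; the array names and toy values are our own assumptions, not the team’s actual pipeline.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy example: a predicted lipid mask compared with a histology mask.
pred_mask = np.array([[0, 1, 1], [0, 1, 0]])
truth_mask = np.array([[0, 1, 1], [1, 1, 0]])
print(f"Dice = {dice_score(pred_mask, truth_mask):.3f}")  # 2*3/(3+4) ≈ 0.857
```

A Dice score of 1 would mean perfect overlap, so the reported 83.9% indicates close, though not exact, agreement with histology.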
“The results showed strong classification performance along with good spatial agreement with the pathological findings,” says Nam in a press statement. “By analysing wavelength-dependent information hidden in the OCT signal and combining it with AI, we were able to identify the presence and distribution of lipids within the vessel wall.”
“During a coronary intervention, this method could provide clinicians with additional information to support risk assessment, procedural planning and evaluation of treatment response,” Nam emphasizes. “Ultimately it has the potential to contribute to safer clinical decision making, more individualized treatment strategies and improved long-term management of patients with coronary artery disease.”
The team is currently working to improve the processing speed and robustness of their approach to make it more practical for real-time clinical use, and plan to perform validation studies using data from human coronary arteries. In addition, they aim to create a seamless method for integrating data reporting (the presence or absence of lipid plaque) into the clinical workflow.
The post Spectroscopic OCT plus AI detects high-risk plaque in coronary arteries appeared first on Physics World.
Rocket re-entry pollutes the upper atmosphere
Lidar measurements have detected a plume of lithium created by falling space debris from a rocket stage
The post Rocket re-entry pollutes the upper atmosphere appeared first on Physics World.
Thanks to new resonance lidar measurements, researchers in Germany, the UK and Peru have successfully measured and traced a lithium plume created by a rocket stage as it uncontrollably re-entered and broke up in the upper atmosphere. The work represents the first time that upper-atmospheric pollution from space debris re-entry has been directly detected, they say. Such pollution is a growing concern and is only likely to worsen as more and more satellites are being launched into space, and in particular into low-Earth orbit.
The number of satellite and rocket launches has increased dramatically over the last decade and this number is set to increase as ever more commercial mega-constellations are deployed. For example, the Starlink constellation is planned to consist of over 40 000 satellites, each with a mass of between 305 and 960 kg. Given their typical operational lifetimes of five years, these satellites are expected to re-enter Earth’s atmosphere through uncontrolled decay within the next several years.
Previous studies in this domain have mainly focused on the dangers of space debris falling to the ground, but we still know little about the environmental effects that the debris can have on our atmosphere. We do know, however, that the upper atmosphere is today host to many exotic atomic and molecular species that cannot be explained as having naturally come from meteors. This is worrying since the upper atmosphere is crucial for shielding life on Earth from meteoroids and UV radiation.
An intense fireball
At roughly 03:42 UTC on 19 February 2025, the upper stage of a SpaceX Falcon 9 rocket uncontrollably re-entered the atmosphere at an altitude of around 100 km, off the western coast of Ireland. This event produced an intense fireball that many people (and radar systems) were witness to, as well as a persistent high-altitude plume of lithium vapour. It also made headline news when fragments of the debris, including a fuel tank, were recovered near Poznań in Poland.
A team of researchers led by Robin Wing of the Leibniz Institute of Atmospheric Physics in Germany measured the concentration of lithium atoms in the mesosphere (which lies between 50 and 85 km in altitude) and the lower thermosphere (between around 85 and 120 km in altitude). They detected the lithium plume in the latter, using a resonance fluorescence lidar in Kühlungsborn in Germany. They also used locally measured winds from the SIMONe Germany meteor radar and global winds from the Upper Atmosphere ICON (UA-ICON) model to determine the path the lithium plume took and where it had originated.
Lidar is a laser-based remote sensing instrument that can be used to measure conditions in the atmosphere. The researchers chose to focus on lithium because it is routinely employed in spacecraft components, such as lithium-ion batteries and lithium-aluminium (Li-Al) alloy hull plating, but it is only naturally present in trace amounts at the altitudes studied. The flux of natural lithium (which comes from meteoric sources) is estimated to be around 80 g per day, while the amount of lithium contained in a single rocket stage is about 30 kg. This large disparity therefore makes lithium a sensitive tracer of man-made input from space debris re-entries, explain the researchers.
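A back-of-the-envelope comparison (our own illustration, using the figures quoted above) makes the disparity concrete:

\[ \frac{30\ \mathrm{kg}}{80\ \mathrm{g\ per\ day}} \approx 375\ \mathrm{days}, \]

so a single rocket stage carries roughly a year’s worth of the planet’s natural meteoric lithium input, even if only part of it is ablated and deposited in the upper atmosphere during re-entry.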
Vaporization of lithium begins at approximately 98 km altitude
Scientists already know that lithium rapidly vaporizes when a Li-Al structure ablates and it appears in the atmosphere as the aluminium matrix melts at 933 K. In their work, Wing and colleagues estimated the altitudes at which a Li-Al hull will begin to melt using the Leeds Chemical Ablation Model. For the hull thickness of the Falcon 9, they expect melting and vaporization of lithium to begin at approximately 98 km.
The strong atomic resonance fluorescence line of lithium at 670.7926 nm allows lidar to detect even trace amounts of lithium, both in the mesosphere and the lower thermosphere. This enabled the researchers to perform altitude- and time-resolved measurements of the amount of lithium during and after re-entry events. Thanks to six hours of measurements on the night of 19–20 February, they detected a sudden increase in the signal at about 96 km altitude, by a factor of 10 from the baseline value, just after midnight UTC on 20 February.
The results from this work, which is detailed in Communications Earth & Environment, also back up a recent study of the lower stratosphere conducted by Daniel Murphy at the US National Oceanic and Atmospheric Administration (NOAA) and colleagues, which attributed significant middle-atmospheric pollution to space debris.
Potential harm to the ozone layer
Analysing the impact of re-entering space debris on the atmosphere is quite new, says Wing. “The paper by Murphy and colleagues, which was published in 2023, showed that 10% of stratospheric aerosols are already contaminated by materials from space debris. This previous work really motivated us to build a lidar capable of measuring what is left behind when rockets or satellites disintegrate in the atmosphere.”
The primary concern surrounding how space debris impacts the atmosphere is currently potential harm to the ozone layer, he tells Physics World. “Our work shows that we can now measure emissions from re-entering space objects and can use winds from radar observations and models to identify potential sources. By applying similar or improved setups to ours around the globe, the scientific community could provide the space industry with solid findings so we can all optimize the use of space.”
The researchers say they are now working on building a new and improved system to measure lithium and sodium. “We would also like to conduct the first survey of various metals such as copper, titanium and lead in the atmosphere that could be connected to space debris,” says Wing.
The post Rocket re-entry pollutes the upper atmosphere appeared first on Physics World.
Academic collaboration with industry is no longer optional – it is now essential
Mark Procter outlines his advice for academics working with scientists in industry
The post Academic collaboration with industry is no longer optional – it is now essential appeared first on Physics World.
Anyone paying even cursory attention to the research landscape in recent months would have noticed the growing turbulence in public science funding on both sides of the Atlantic. In the UK, the research community has been shaken not by a single dramatic cut, but by a prolonged period of budgetary tightening at UK Research and Innovation (UKRI), driven by flat-cash settlements, rising inflation and increasing pressure to redirect funding towards government-defined missions.
Although government ministers continue to emphasize “record” overall R&D spending, UKRI has been forced to make difficult reprioritization decisions, leading to pauses and closures of several schemes across the research councils. The effects are already being felt, with competition for remaining funding intensifying. Success rates are coming under strain and many researchers are facing heightened uncertainty about the viability of pursuing curiosity-driven research.
Globally, the picture is similar. In the US, the National Science Foundation has become a focal point of intense budgetary uncertainty, with proposed reductions and flat-cash congressional settlements placing growing strain on its ability to sustain investigator-led research. In Europe the €95.5bn Horizon Europe programme faces mounting political pressure to demonstrate impact and value for money amid economic uncertainty and competing fiscal priorities.
For academics, these dynamics translate into tougher competition for grants, longer odds of success and an increasing reliance on short-term, project-specific funding rather than stable, long-horizon research support.
Academic science has always been under pressure to deliver more with less. But the current climate feels different. The combination of shrinking government budgets, rising operational costs and increasing competition for limited grants has created a perfect storm. For early-career researchers and established labs alike, the traditional model of securing public funding is becoming unsustainable.
The implications are profound. Without adequate resources, research groups risk losing momentum, emptying their talent pipelines and stalling innovation. For many the question is no longer “how do we grow?” but “how do we survive?” Yet amid these challenges lies an opportunity: forging deeper, more strategic partnerships with industry.
The path ahead
You may ask the question “why would companies invest in academic research?” The answer is simply innovation. Industry thrives on differentiation and academic partnerships offer a cost-effective way to access cutting-edge science without bearing the full burden of in-house R&D.
Consider the pharmaceutical sector. Drug discovery is notoriously expensive and time-consuming but collaborating with academic labs allows companies to tap into specialized expertise, advanced facilities and novel methodologies. Similarly in energy and materials science, universities often lead the way in developing next-generation technologies that can redefine markets.
Beyond innovation, partnerships also offer credibility. Peer-reviewed publications and independent validation enhance a company’s reputation and can accelerate regulatory approval. For industries facing complex challenges such as sustainability, cybersecurity or quantum computing, academic collaboration is not a luxury; it’s a necessity.
So, what can be done to strengthen academic collaboration with industry? The first step is a subtle but important mindset shift. For many researchers, academia has traditionally operated with a strong internal focus, where industry engagement is seen not as undesirable but as an add-on – something that sits alongside core research rather than at its centre.
This isn’t about viewing collaboration as secondary or compromising, but about recognizing that aligning fundamental research with industry priorities takes time and sustained effort. It introduces new constraints, different timelines and added complexity into already demanding research programmes.
The challenge, then, is not one of principle, but of practicality. Collaboration is not about box-ticking or “selling out”; it’s about creating the conditions in which fundamental research can remain connected, impactful and resilient in an increasingly complex research ecosystem.
Academics should look for companies with long-term goals that align with their research expertise – creating shared value, not just chasing sponsorships. Another aspect to remember is that industry mandates tangible outcomes. While fundamental research remains vital, framing projects in terms of applied benefits can unlock funding.
It is also important that academics learn to communicate impact. Industry leaders speak the language of “minimum viable product”, “return on investment” and “risk mitigation”. Academics must learn to articulate how their work translates into competitive advantage.
This mindset shift requires effort, but the payoff can be significant in sustained funding streams, access to real-world data, as well as opportunities to test theories in practical settings. When done right, such collaborations also create a virtuous cycle. Academics secure funding and maintain research momentum, while industry gains competitive advantage, joint publications, shared intellectual property and co-developed technologies that strengthen both ecosystems.
Such partnerships can also foster talent development. Graduate students and postdocs gain exposure to realistic problems, enhancing employability and bridging the gap between theory and practice – a critical outcome, given the current bleak outlook for graduate employment worldwide.
For industry, this means access to a pipeline of skilled professionals who understand both scientific rigor and commercial realities. The benefits also extend beyond economics. Collaborative projects often tackle grand challenges – climate change, healthcare, digital security – that no single entity can solve alone. By pooling resources and expertise, academia and industry can drive progress at a scale that matters.
An industrial collaboration ‘playbook’
Mark Procter outlines his five principles for building successful partnerships between academia and industry.
1 Align on impact, not just intellectual property
Focus on creating measurable outcomes rather than solely on rigid intellectual property battles. Impact drives funding and reputation for both sides.
2 Define mutual gains early
Establish clear objectives that benefit both academic advancement and industrial innovation. Document these in a collaboration charter before work begins.
3 Streamline governance
Simplify legal frameworks and reduce administrative friction. Negotiating non-disclosure and intellectual property agreements should not take longer than the research itself.
4 Embed talent exchange
Include opportunities for student placements, joint supervision and secondments. This builds trust and creates a pipeline of skilled professionals. Reciprocally, universities should structure their own professional development opportunities in collaboration with industry.
5 Measure success beyond publications
Track metrics such as technology readiness progression, prototype development and demonstrable economic impact, not just journal citations.
To make this vision a reality, collaboration must be incentivized. Funding agencies can play a pivotal role by enabling grants that include industrial partners, while tax incentives for collaborative R&D could further accelerate uptake.
At the same time, universities must embrace cultural change. Academics must move beyond the notion that collaboration dilutes scientific integrity. Transparency and clear governance can safeguard independence while enabling impact.
The future of academic science may well depend on its ability to align with industry. The current rhetoric from UKRI focuses on return on investment from publicly funded research to meet the UK’s industrial strategy.
In a world where resources are scarce and challenges are complex, working together is the only way forward. The coming decade will test the resilience of academic research. Those who cling to old models risk obsolescence. Those who adapt by embracing industry partnerships will not only survive but thrive. The question is not whether collaboration is necessary, but how quickly we can make it happen.
The post Academic collaboration with industry is no longer optional – it is now essential appeared first on Physics World.
Could lightning occur on Mars?
MAVEN probe captures signature of a "whistler"
The post Could lightning occur on Mars? appeared first on Physics World.
Researchers in the Czech Republic say they may have observed the signature of a “whistler” in a one-second snapshot captured by the MAVEN probe orbiting Mars. The event, observed in the ionosphere of the planet, would be the first lightning-like electric discharge activity ever to be seen there and the finding will be important for understanding atmospheric processes in the Martian atmosphere.
“Whistlers are well known on Earth and are associated with lightning,” explains space physicist František Němec at Charles University, who led this research effort. “Our result implies that this phenomenon also occurs on our planetary neighbour.”
Unlike Earth, Mars does not have a global magnetic field, but only localized fields created by magnetized materials in the planet’s crust. And because its atmosphere is thin, lightning on this planet does not originate in water clouds but instead in dust storms, similar to those observed in volcanic eruptions here on Earth, and in dust devils.
During dust storms, dust grains become electrically charged as they collide with each other and generate an electric field. On Mars, previous studies have predicted that this field can discharge when its value exceeds the breakdown threshold in the low-pressure Martian atmosphere, which is around 15 kV/m.
Dust devils, for their part, can produce ultralow-frequency radiation on Earth thanks to the electrical charges that fluctuate as the dust swirls around. Since both dust devils and dust storms are much stronger on Mars, theory suggests that they could generate wideband radiation that we could detect on Earth. Despite recent measurements by the Allen Telescope Array, the Mars Global Surveyor (MGS) and Mars Atmosphere and Volatile Evolution (MAVEN) missions and the Mars Express spacecraft, conclusive evidence for Martian lightning has yet to be found.
Analysing electromagnetic radiation
Another way to detect these electric discharges, says Němec, is to analyse the electromagnetic radiation that accompanies them. This radiation lies in the extremely low frequency/very low frequency range and, under some conditions, can reach the ionosphere of a planet. The phenomenon was first identified and observed on Earth shortly before the space era and such electromagnetic waves have successfully been used to provide evidence for lightning on Jupiter, Saturn and Neptune since then.
These waves are known as whistlers, he explains, because of their characteristic spectral pattern in the plasma medium of the ionosphere. Here, higher frequency waves propagate faster and arrive at the observation point sooner than lower frequency ones, resulting in a characteristic “whistling” spectral shape.
The observational challenge is that these waves can penetrate the Martian ionosphere only on the nightside of the planet and when the magnetic field is pointing in the vertical direction. This largely restricts the areas over Mars in which whistlers can be observed by spacecraft – namely, to the relatively small crustal field regions in the southern hemisphere of the planet.
Němec says he has now identified the electromagnetic wave signature of a whistler on Mars in a snapshot captured by the MAVEN probe on 21 June 2015. “I first identified it at night in a region with a strong and nearly vertical magnetic field, something that is crucial for the wave to be able to propagate to the altitude where the probe is orbiting without its signal attenuating too much.”
Out of the many wave snapshots analysed (108,418 in total), only this single event contained a whistler signature, he tells Physics World. “This likely reflects the rarity of the phenomenon itself, as well as the specific ionospheric and magnetic field conditions required for the wave to propagate all the way to the spacecraft.”
The MAVEN probe has been orbiting Mars since 2014 and sent back data to Earth until communication with it was lost last year. While no large-scale dust storms were recorded on the planet at the time the probe captured the whistler, Němec and colleagues say the effect might have come from a local dust event.
Different propagation speeds
“Whistlers are formed because, in the ionized plasma of the ionosphere, different signal frequencies propagate at different speeds,” explains Němec. “As a result, although all frequencies are generated simultaneously during a lightning discharge, the higher frequencies – which propagate faster – arrive at the spacecraft first, followed later by the lower frequencies.”
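The textbook low-frequency limit of this effect is often written as Eckersley's law – quoted here as general background rather than as the specific dispersion model used in the study – in which the arrival-time delay scales inversely with the square root of the frequency:

\[
t(f) \approx t_0 + \frac{D}{\sqrt{f}},
\]

where \(t_0\) is the time of the discharge and the dispersion constant \(D\) depends on the plasma density and magnetic field along the propagation path, so lower frequencies arrive progressively later.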
The researchers, who detail their work in Science Advances, calculated these corresponding time delays and say that their observations agree “very well” with theoretical predictions. They also calculated how the waves attenuate by adapting methods used for Earth to the assumed composition of the Martian ionosphere. The results revealed that higher frequencies are more strongly attenuated, which explains why only the lower-frequency portion of the whistler is observed, says Němec.
The existence of strong lightning-like electrical discharges in the Martian atmosphere highlights the need to better understand the relevant atmospheric processes on the Red Planet, with a particular focus on dust storms and dust devils, he adds. The sudden energy release accompanying such discharges also clearly has the potential to locally alter the atmospheric chemistry.
The Charles University researchers together with their colleagues at the Institute of Atmospheric Physics say they are now working on a detailed analysis of how waves attenuate as they propagate through Mars’ ionosphere. “Importantly, we are actively involved in the design and development of the European Space Agency’s M7 mission candidate M-MATISSE,” reveals Němec. “This two-spacecraft mission, scheduled for launch in 2037, would feature advanced instrumentation for, among other things, wave measurements, and would allow for more detailed investigations of the relevant phenomena.
“We are very excited about this opportunity and hope that the mission will ultimately be adopted.”
The post Could lightning occur on Mars? appeared first on Physics World.
Is ‘vibe physics’ the future?
Scientists discuss whether AI could surpass human contributions to physics by 2035
The post Is ‘vibe physics’ the future? appeared first on Physics World.
At the American Physical Society’s Global Physics Summit in Denver, a session on “Navigating the AI revolution: future-proofing your science career” drew in a crowd of early-stage physicists searching for practical career advice. What they received was much more philosophical in nature.
Malachi Schram of the Pacific Northwest National Lab and Hilary Egan of the National Laboratory of the Rockies delivered back-to-back talks full of similar rhetoric, emphasizing the fast-paced development of AI used for specialized tasks in science, such as detecting equipment failure or identifying ways of retrofitting older buildings.
But the third speaker, Matthew Schwartz, a theoretical physicist from Harvard University, took his optimism about AI far further. In a punchy presentation, he predicted that large language models (LLMs) will surpass human intelligence in five years.
“There’s definitely exponential growth of the intellectual capacity of these [large language] models as a function of time,” Schwartz told the audience, using the number of model parameters as a proxy for intelligence. “The machines are still growing by roughly 10 times each year, and we” – he paused for dramatic effect – “are not growing much smarter.” This drew a wave of laughter from the crowd.
Unlike humans, machines can visualize higher dimensional spaces, hold far more information in memory and process more complex equations. “We are not the endpoint of intelligence. We are only the smartest things to evolve on Earth so far,” Schwartz argued. He went on to suggest that humans may simply be incapable of understanding long-standing physics problems such as a theory of everything. He compared it to cats, which he suggested will never understand chess.
If the talent of physicists exists on a bell curve, Schwartz claimed we can push the bell curve higher on the talent axis: “If we use AI augmentation, we can get 10,000 Einsteins a century instead of one Einstein.”
Opposing views
The next speaker after Schwartz was Matthew Ginsberg from Google DeepMind. Speaking in a personal capacity, he expressed strong disagreement over AI’s ability to advance physics. “Asking questions is the essence of being a good physicist, and this is, at least so far, 100% our domain,” he argued. In a direct response to Schwartz, Ginsberg added, “We aren’t being eaten away at exponentially. No, asking good questions is us. It’s what we’re good at.” He concluded, “I remain hopeful that we have a role to play.”
In his talk, Ginsberg emphasized the importance of human creativity. “LLMs generate the consensus response to hard questions,” he said. “You are the best physicist when you give the non-consensus answer, which is what AI is incapable of doing.”
In a concluding panel discussion, the four experts seemed to converge on the idea that the human contribution to physics has to do with what some called “taste,” others “creativity,” or “asking good questions” (seemingly, questions that humans find interesting). However, over the three-hour session, Schwartz and Ginsberg independently predicted that AI may develop the ability to ask good questions in the next decade.
If so, this could undermine the main argument for the value of humans in science. So, does there exist a deeply human role in the physics of the future, or is “vibe physics” on the 10-year horizon? Perhaps only time will tell.
The post Is ‘vibe physics’ the future? appeared first on Physics World.
Extracting entropy information from quantum dots
New work could help optimize quantum memories and information processing
The post Extracting entropy information from quantum dots appeared first on Physics World.
Researchers have succeeded in measuring how energy dissipates in quantum dots by quantifying the entropy they produce. The work, by a team at Stanford University in the US, could help in the optimization of real-world nanoscale devices used in applications such as quantum memories and information processing.
Technologies like memory storage devices and information processors are intrinsically dissipative, explains materials scientist and engineer Aaron Lindenberg, who led this new study. Energy is lost as heat in many ways, but at a fundamental level this arises from the Landauer principle, which defines a lower limit for these energy costs. “When physical or computational processes evolve non-quasi-statically – for example, over a finite amount of time and out of equilibrium – the energy costs increase. Despite its fundamental and practical importance, directly measuring this dissipation remains extremely challenging, particularly as modern devices continue to shrink in size.”
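For reference – general background rather than a figure from the study – the Landauer bound on the heat dissipated when one bit of information is erased at temperature \(T\) is

\[
E_{\mathrm{min}} = k_{\mathrm{B}} T \ln 2 \approx 2.9\times10^{-21}\ \mathrm{J} \approx 18\ \mathrm{meV} \quad \text{at } T = 300\ \mathrm{K}.
\]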
In the new work, Lindenberg and colleagues wanted to measure energy dissipation directly in real materials in contrast to previous experiments that measured entropy production in very clean systems, such as defect centres in diamond. “Previously studied materials behaved like simple two-state ‘Markov’ systems, where the probability of moving to the next step is determined only by the current state,” explains Lindenberg, “but real materials often have memory effects and hidden internal states.”
Good test systems
Quantum dots, which are tiny semiconductor crystals that emit fluorescent light when excited with ultraviolet light, are good test systems in this context, he says. When they emit light, charge carriers (electrons and holes) can tunnel into nearby defect states, temporarily stopping the emission, causing the quantum dot to “blink” between bright and dark states. This non-Markovian and stochastic blinking follows statistical patterns (a power law waiting-time distribution) that hint at memory effects and hidden states.
In their experiment, Lindenberg and colleagues in the School of Engineering and Photon Science at the SLAC National Accelerator Laboratory kept the ultraviolet excitation on continuously and switched an additional strong laser field on and off. This process changed the blinking statistics and drove the system out-of-equilibrium. The researchers then recorded the fluorescence blinking traces and used machine learning to optimize a physics-based “hidden Markov model”. “This allowed us to reconstruct the hidden state trajectories that are Markovian and then compute entropy production from them,” says Yuejun Shen, the first author of the study, which is detailed in Nature Physics.
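To illustrate that final step – and only as a rough sketch, not the authors' actual analysis pipeline – the snippet below computes the steady-state entropy production rate of a toy three-state Markov model using the standard Schnakenberg formula. All rates are made-up numbers chosen so that detailed balance is broken.

```python
import numpy as np

# Illustrative sketch only (not the authors' pipeline): steady-state entropy
# production of a toy three-state Markov model, in units of k_B per unit time.
# w[i, j] is the (made-up) transition rate from state i to state j; clockwise
# rates exceed anticlockwise ones, so the cycle is driven and entropy
# production is non-zero.
w = np.array([[0.0, 2.0, 0.5],
              [0.5, 0.0, 2.0],
              [2.0, 0.5, 0.0]])

# Master equation dp/dt = K @ p; the steady state is the eigenvector of K
# with eigenvalue zero, normalized to unit total probability.
K = w.T - np.diag(w.sum(axis=1))
eigvals, eigvecs = np.linalg.eig(K)
p = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p /= p.sum()

# Schnakenberg formula: sum over state pairs of (net probability flux)
# multiplied by the log of the forward/backward flux ratio.
sigma = 0.0
for i in range(3):
    for j in range(3):
        if i != j:
            flux_ij = p[i] * w[i, j]
            flux_ji = p[j] * w[j, i]
            sigma += 0.5 * (flux_ij - flux_ji) * np.log(flux_ij / flux_ji)

print(f"entropy production rate: {sigma:.2f} k_B per unit time")  # ~2.08 here
```

For a system in equilibrium (rates obeying detailed balance) the same calculation returns zero, which is what makes the quantity a useful fingerprint of irreversibility.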
Entropy production of quantum dots is a quantity that describes how reversible a microscopic process is. It encodes information about memory, information loss (as the distribution of charge carriers in the dots evolves) and energy dissipation. Such measurements therefore open up new possibilities for determining the ultimate efficiency limits of a device, he explains.
Measuring entropy production
Lindenberg adds that the new work provides a general method to measure entropy production in complex, stochastic and non-equilibrium systems in which we cannot observe all internal states directly.
While practical applications may still be a way off, the approach could eventually help measure and reduce dissipation in nanoscale devices, he tells Physics World. “This is especially important as device sizes continue to shrink and stochastic fluctuations become unavoidable.”
As to future work, the Stanford researchers say they would now like to measure energy dissipation in other material systems and implement optimization algorithms to minimize this dissipation. “This new type of calorimetry could have applications in many other types of information storage devices and technologies,” says Lindenberg.
The post Extracting entropy information from quantum dots appeared first on Physics World.
Rechargeable liquid solar battery stores sunlight in molecules
A new bio-inspired material has a record-breaking energy density and outperforms lithium-ion batteries
The post Rechargeable liquid solar battery stores sunlight in molecules appeared first on Physics World.
Being able to store renewable energy, such as that produced by sunlight, so that it can be used at night or on cloudy days remains a major challenge. In recent years, researchers have been looking into molecular solar thermal (MOST) energy storage systems that harness the energy from photons and release it when needed. Now, a joint US team from Grace Han’s lab at the University of California, Santa Barbara, and Kendall Houk’s lab at the University of California, Los Angeles, has published details of a bio-inspired pyrimidone-based molecule that, when highly strained, stores a record amount of photon energy in its chemical bonds. The energy released when the molecule is allowed to relax is enough to boil water, the researchers say.
The structure of pyrimidone looks very much like that of a component found in DNA, which, when exposed to ultraviolet light, can reversibly form “Dewar lesions”. “These lesions naturally contain significant ring strain, something that immediately stood out to us as a promising feature for energy storage,” says first author Han Nguyen.
The researchers set out to engineer a synthetic version of this structure, the Dewar isomer of pyrimidone, which they also designed to be highly strained. They did this by combining a de-aromatization strategy with a compounded strain effect from fusing two already strained rings contained within the molecule.
Better than previous MOST energy storage systems and Li-ion batteries
“As a result, each molecule can store a large amount of energy, reaching 228 kJ/mol,” says Nguyen. “This translates to a gravimetric energy density of 1.6 MJ/kg, a value that is at least 1.6 times higher than previous MOST energy storage systems and nearly double the energy density of a standard lithium-ion battery (around 0.9 MJ/kg).”
The system can be described as a mechanical spring, she says. “When hit with sunlight, it twists into a strained, high-energy shape. It stays locked in that shape until a trigger – such as a small amount of heat or a catalyst – snaps it back to its relaxed state, releasing the stored energy as heat. It can be thought of as a liquid solar battery that stores sunlight and can be recharged.”
The result, Nguyen explains, addresses a long-standing limitation of MOST materials: insufficient energy density for practical use. Until now, the stored heat could only be released and used in well-insulated environments to minimize losses. “In contrast, our system releases enough heat to operate under ambient conditions and in our demonstrations, the heat output is strong enough to boil a sample of 0.5 mL of water in under one second.”
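As a rough sanity check on that figure (assuming the heat goes only into warming the water from room temperature to 100 °C, and neglecting losses and vaporization), bringing 0.5 mL of water to the boil requires

\[
0.5\ \mathrm{g}\times 4.18\ \mathrm{J\,g^{-1}\,K^{-1}}\times 75\ \mathrm{K}\approx 160\ \mathrm{J},
\]

which corresponds to the stored energy of roughly 0.1 g of the material at 1.6 MJ/kg.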
According to the researchers, the study “marks an important step toward real-world applications and shows that MOST systems can now move beyond controlled laboratory settings and function robustly in practical environments”.
A simplified structure that is soluble in water
This pyrimidone system had not been explored before as a candidate for MOST materials, so the researchers first had to design a simplified structure based on the Dewar lesion. The challenge, remembers Nguyen, was to strip away parts that were not relevant to the application in hand while retaining the features responsible for efficient storage and release of energy. “Through iterative design and testing, we arrived at a structure that is both efficient and practical,” she says.
Another challenge was improving functionality. “Our early designs still required solvents, which limit practical use,” she explains. “To solve this, we engineered the system into a liquid-state molecule that can operate without solvent.”
The fact that the material is soluble in water means that it could be pumped through roof-mounted solar collectors to charge during the day and stored in tanks to provide heat at night, adds team member Benjamin Baker. “With solar panels, you need an additional battery system to store the energy, but with molecular solar thermal energy storage, the material itself is able to store that energy from sunlight.”
The UCSB researchers hope their work, which they detail in Science, will encourage further research in the field, so that pyrimidone and other heterocycles like it can be further improved and optimized. “We would like to design and develop molecules that absorb in a broader range of solar radiation,” says Houk. “We also want to maintain high energy density, thermal stability and energy release upon thermal ring opening and will use quantum mechanical calculations to make these predictions.”
To this end, he adds, the team plans to screen hundreds to thousands of molecules, perhaps with AI assistance, to open up new avenues for experimental research.
Nguyen tells Physics World that the goal of her laboratory is “to make heat more affordable and accessible, especially in situations where people need it most. For example, our materials could be useful in emergency or disaster settings where access to power and fuel is limited”.
Looking further ahead, she says that the technology could be integrated into real-world systems, such as heating for houses and buildings, helping to provide more reliable and accessible heat in everyday life.
The post Rechargeable liquid solar battery stores sunlight in molecules appeared first on Physics World.
The coming hurricane: early-career physicists and the crisis in American science
Parallel sessions at the American Physical Society’s Global Physics Summit reveal a stark divide
The post The coming hurricane: early-career physicists and the crisis in American science appeared first on Physics World.
With several dozen talks taking place at any one time, figuring out which session to attend at the American Physical Society’s Global Physics Summit is a challenge. On Wednesday, though, my choice was particularly stark. Should I depress myself by attending the session on “The Crisis in American Science”? Or restore my faith in humanity by finding out “How Early-Career Physicists Are Solving Society’s Greatest Challenges”?
A quirk of scheduling – or an APS organizer with a dark sense of humour – meant that the two sessions were practically next door to each other in Denver’s Colorado Convention Center. So, after a bit of dithering, I decided to oscillate between the two, hoping to strike a balance between cheer and gloom.
Reasons to be cheerful
The first speaker in the early-career physicists session was Rosimar Rios-Berrios, a physicist-turned-atmospheric-scientist at the US National Center for Atmospheric Research (NCAR). Located up the road from Denver in Boulder, Colorado, NCAR’s iconic, adobe-style Mesa Laboratory was designed by the architect I M Pei, and it supports the work of several hundred scientists who study weather and climate.
Rios-Berrios is originally from the US island territory of Puerto Rico, and her APS talk focused on a weather phenomenon that’s hugely important for anyone living on tropical islands or coasts: hurricanes. Rios-Berrios is trying to understand how these storms develop. In particular, she wants to know what happens in the atmosphere to cause gentle Caribbean breezes and fluffy white clouds to morph into devastating hurricanes like 2017’s Hurricane Maria, which killed nearly 3000 people in Puerto Rico alone, or 2025’s Hurricane Melissa, which devastated parts of Jamaica.
In previous studies, a specific type of atmospheric wave called a convectively coupled Kelvin wave had emerged as a possible hallmark of incipient tropical cyclones (a generic term for hurricanes and their Pacific Ocean equivalents, typhoons). These waves appear in the region near the equator where northeast and southeast trade winds meet, and they move eastward at around 15-20 metres per second, with wavelengths of 7000 km.
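For scale – a simple estimate from the numbers above – a wave of that wavelength and speed takes roughly

\[
T = \frac{\lambda}{c} \approx \frac{7000\ \mathrm{km}}{17\ \mathrm{m\,s^{-1}}} \approx 4\text{–}5\ \mathrm{days}
\]

to complete one cycle at a fixed location.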
As they travel, the Kelvin waves perturb local concentrations of moisture, and their passing heralds a change in the direction of the prevailing winds. The resulting “zonal wind anomalies” are thought to play a role in spawning early-stage tropical cyclones. Observational data support this idea: two days after these waves pass through an area, there’s a noticeable uptick in tropical cyclones.
Rios-Berrios decided to investigate this correlation by developing an idealized model that captures the convective dynamics of both Kelvin waves and tropical cyclones. To simplify things, she based her model on an Earth-like “aquaplanet” that lacks land and sea ice, and where temperatures vary only with latitude, and never with longitude or the passage of the seasons.
Despite these simplifications, Rios-Berrios found that this watery model world nevertheless produces hurricane-like swirls of wind and cloud. And after observing the full life cycle of more than 100 simulated tropical cyclones, she concluded that Kelvin waves do, in fact, influence their formation, with a two-day time lag that corresponds well to the data. “These results could help forecast active tropical cyclone periods several weeks in advance,” Rios-Berrios told the APS audience.
A gathering storm
Fortified by this promise, I picked up my laptop and walked the short distance to the “science crisis” session down the hall. I missed the session’s first talk, which aimed to draw comparisons between the Trump administration’s cuts to the National Science Foundation and the 1990s round of belt-tightening that doomed the Superconducting Super Collider project. However, I arrived in time for the second talk, in which a science historian, Zuoyue Wang, was scheduled to discuss parallels between the current situation and a succession of Cold War-era science budget crises.
To my surprise, Wang’s first slide featured a photo of NCAR’s iconic Mesa Lab – but not because of the great work being done there by Rios-Berrios and her colleagues. The fact that NCAR studies climate as well as weather means it has fallen foul of MAGA Republicans’ denial of climate change. In December 2025, the Trump White House issued a memo specifically targeting the centre and claiming that “NCAR’s activities veer far from strong or useful science”.
As a consequence of this disfavour, Wang told the audience, “NCAR is being dismantled as we speak.” Though some members of the US Congress have pushed back against the administration’s cuts, Wang described the attacks on science as “never-ending”.
By the end of Wang’s talk, all the optimism I’d built up in hearing about Rios-Berrios’ efforts to protect people from tropical cyclones was blasted from its foundations. If this is the kind of science the current US regime deems “useless”, I wondered, what on Earth is going to happen to the rest of the work on show at events like the APS Summit?
The post The coming hurricane: early-career physicists and the crisis in American science appeared first on Physics World.
From the classroom to the committee room: Dave Robertson MP on politics and physics
British politician talks about the importance of physicists and physics education
The post From the classroom to the committee room: Dave Robertson MP on politics and physics appeared first on Physics World.
This episode of the Physics World Weekly podcast features a conversation with Dave Robertson, who was elected member of the UK parliament for Lichfield in 2024. Robertson spent eight years teaching physics after studying the subject at the University of Liverpool. He then worked for a teachers’ union, which inspired him to become a candidate for the Labour Party.
He chats with Physics World’s Matin Durrani about his transition from the classroom to the committee room and how parliament “is a truly bonkers and truly bizarre workplace”.
Robertson has already sponsored three physics-related events at the Palace of Westminster and he talks about his membership of various cross-party parliamentary groups – including those on nuclear energy and space.
Robertson has not forgotten his roots in education and is adamant that the UK must address its nationwide shortage of physics teachers. He also urges physicists to speak out about how they can help address many of the world’s problems, notably climate change.
- You can also read a feature article about Dave Robertson’s career by following this link.
The post From the classroom to the committee room: Dave Robertson MP on politics and physics appeared first on Physics World.
Geometry induces chirality in nickel – and magnons flow
Twisted nickel nanotubes use shape as the source of asymmetry
The post Geometry induces chirality in nickel – and magnons flow appeared first on Physics World.
The ability to control the direction in which a signal travels – without external switching, without added circuitry – is a longstanding goal in the design of compact magnetic devices. Magnetochiral anisotropy offers exactly that: a material-level asymmetry in which magnetic waves (known as magnons) travelling in opposite directions are physically inequivalent, opening a route to magnetic logic operations and memory that retains data without a continuous power supply.
The effect has been understood in principle for decades, but always felt like a phenomenon that nature deliberately made inconvenient. Accessing magnetochiral anisotropy required materials that are chiral at the crystalline level – compounds like Cu2OSeO3, where the Dzyaloshinskii-Moriya interaction (DMI, a quantum mechanical force that pushes neighbouring magnetic moments to twist relative to each other) emerges only from a non-centrosymmetric crystal lattice that takes considerable effort to synthesize.
And even after synthesis, the device still needs cooling to cryogenic temperatures and application of an external magnetic field just to function. As a result, a phenomenon with genuine technological promise has spent most of its life confined to fundamental research, perpetually interesting and perpetually out of reach.
A research team headed up at École Polytechnique Fédérale de Lausanne (EPFL) has come up with a way to move this technology closer to real-world application. The idea is deceptively simple: stop asking which material can provide chirality, and start asking whether the shape of the structure itself can do the job instead. It turns out it can – as the team demonstrated, not just in a theoretical prediction, but as an actual measurement, at room temperature and with zero applied magnetic field.
Dirk Grundler, head of EPFL’s Laboratory of Nanoscale Magnetic Materials and Magnonics, and collaborators showed that a structural twist, in the form of a helical surface relief printed onto an otherwise ordinary polycrystalline nickel nanotube, is sufficient to induce chirality. The torsion and curvature of the twist generate a shape anisotropy and force the magnetic ground state into a spiralling spin texture. Its toroidal moment does exactly what the DMI does in a natural chiral crystal – it breaks the symmetry of the magnon dispersion relation, which describes how the energy of a magnetic wave depends on its direction of travel.
These twisted structures, termed artificial chiral magnets (ACMs), satisfy three conditions that no natural chiral magnet has jointly met: room temperature stability, zero field operation and realization in polycrystalline nickel – a material that is naturally achiral.
ACM outperforms a natural chiral crystal
The researchers used two-photon lithography to write the twisted polymeric scaffold and atomic layer deposition to coat it with a conformal 30-nm thick nickel shell. The handedness of the design is directly inherited by the magnetic ground state – as confirmed by X-ray magnetic circular dichroism microscopy – and can be flipped by field history, producing opposite helicity states in the same structural device on demand.
The team also performed microfocused Brillouin light scattering spectroscopy to resolve the magnon dynamics. At zero field and room temperature, the intensity non-reciprocity parameter (the difference in signal strength between waves travelling in opposite directions) reached 35.7%, switching reproducibly between two stable configurations (spin texture spiralling clockwise or anticlockwise) under field cycling without drift or degradation. At ±250 mT, the frequency non-reciprocity parameter (how much the frequency of a magnetic wave changes depending on which way it travels) peaked at 5.4 × 10⁻², nearly three times the value reported for the bulk chiral material Cu2OSeO3 at cryogenic temperature.
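A common convention in the magnonics literature – quoted here as background, since the paper may use a different normalization – defines the intensity non-reciprocity as the normalized difference between counter-propagating signals:

\[
\Delta I = \frac{I(+k) - I(-k)}{I(+k) + I(-k)},
\]

so a value of 35.7% means that waves travelling one way are detected substantially more strongly than those travelling the other way.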
Overall, the geometrically engineered nickel tube at room temperature outperformed a natural chiral crystal at low temperature. Using micromagnetic simulations and analytical modelling, the team traced the origin of this non-reciprocity to two cooperating mechanisms, both of which are tied to the geometry of the tube rather than the chemistry of the material, and both of which scale with decreasing tube radius. This implies that the numbers reported in this study are not a ceiling – they are a starting point.
A scalable blueprint for chirality-engineered magnonics
The most important factor in this result is not the nickel. While nickel was the material used, the principle does not belong exclusively to nickel. Because the chirality here is geometric – written into the shape of the structure rather than the chemistry of the lattice – it is transferable to any ferromagnet that can be deposited conformally over a three-dimensional scaffold.
Analytical calculations predict that permalloy, a nickel–iron magnetic alloy with higher saturation magnetization and exchange stiffness than nickel, should produce stronger non-reciprocity in an identical geometry. And since non-reciprocity scales with decreasing tube radius, sub-100-nm geometries accessible through next-generation two-photon lithography represent a direct route to significantly amplified effects.
Moreover, this ACM structure is multifunctional by design. Spin waves travelling through the helical magnetic structure behave differently depending on their characteristics. Some move quickly and directionally, making them suitable for carrying signals. Others are nearly stationary and strongly asymmetric – they travel in one direction and are blocked in the other, which is the defining behaviour of a diode.
The twist of the magnetic texture (clockwise or anticlockwise) can also be set by a magnetic field pulse and held indefinitely without requiring any power, functioning as a memory that stores information in the handedness of the spin arrangement rather than in a voltage or a charge. Because this directional asymmetry of magnetochiral anisotropy is a property of the geometry and not just of the spin waves, electrical current passing through the same structure is expected to experience the same effect – flowing more easily in one direction than the other.
In other words, a single nanoscale helix could simultaneously route signals, switch them, remember them and rectify them. One structure, four functions, no exotic material – the chirality was never in the crystal, it was in the geometry.
The findings are reported in Nature Nanotechnology.
The post Geometry induces chirality in nickel – and magnons flow appeared first on Physics World.
Fluid flow: how heat can move from cooler to warmer regions
New work could help design electronic devices in which heat can be guided in certain directions, minimizing heat loss
The post Fluid flow: how heat can move from cooler to warmer regions appeared first on Physics World.
We are all familiar with the fact that heat flows from warmer regions to colder ones. A team of researchers at the EPFL in Switzerland has now shown that the reverse can happen in many highly ordered materials, without violating the laws of thermodynamics. Besides the fundamental scientific appeal of this finding, the researchers say their work could help in the design of electronic devices in which heat flow could be guided, potentially minimizing heat losses.
Scientists have always thought of heat as following Fourier’s law of diffusion, explains physicist Nicola Marzari, who led this new study. Fourier built his law on earlier work by Newton and established that heat transfer through a solid depends on an intrinsic material property (how easily the material conducts heat) and on the temperature difference (more precisely, the gradient) between a hot and a cold region. “The minus sign between the heat current and the temperature gradient in his equations captures the fact that heat always flows from hot to cold regions,” Marzari explains.
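In symbols, Fourier's law reads

\[
\mathbf{q} = -\kappa\,\nabla T,
\]

where \(\mathbf{q}\) is the heat flux, \(\kappa\) the thermal conductivity and \(\nabla T\) the temperature gradient; the minus sign encodes the flow from hot to cold.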
In the 1960s, however, theory argued – and experiments in solid helium revealed – that heat could also propagate like a wave; this behaviour was called “second sound”, in analogy with an acoustic wave travelling through a solid. “At the time, the phenomenon was considered fairly exotic and could only occur at very low temperatures – just a few Kelvin above absolute zero,” says Marzari. “But, in 2015, we showed that this behaviour could be found in many materials – from two-dimensional monolayers to graphite and diamond – and at much higher temperatures. Indeed, experiments on graphite in 2019 and 2022 confirmed these predictions at 100 K and 200 K.”
“Directing heat as we want”
This mode of heat propagation, which could also occur at room temperature, is known as phonon hydrodynamics because heat is now considered as moving like a fluid, Marzari adds. Phonons are the extended atomic vibrations that transport heat. Normally, interference or collisions between these cause heat to dissipate slowly, following Fourier’s law. “However, the emergence of fluid-like behaviour means that vortices can appear and that obstacles can send fluid backwards – from a cold region to a hot one,” he explains. “This is very counterintuitive, but doesn’t break thermodynamics. It also means we can start thinking of guiding and directing heat as we want, or maybe build thermal diodes in which heat can only flow in one direction.”
Back in 2020, the researchers derived a unified theory of heat propagation that encompassed Fourier diffusion and the hydrodynamic regime. This was a feat in itself, says Marzari: Fourier’s law dates from 1822 and the microscopic theory of heat (the Peierls-Boltzmann transport equation) from 1929. “Our new ‘viscous heat equations’ from 2020 are both accurate and reasonably simple to solve, so there was a lot of excitement at being able to look at what they would predict.”
“Simple” here is relative, he admits. “But the community has learnt a lot from the study of hydrodynamics phenomena in real fluids – for example, how ships, airplanes and even bumblebees stay afloat – and we used some of that knowledge in our new work.”
Temperature profile of a hydrodynamic system contains two contributions
Reporting their work in Physical Review Letters, the EPFL researchers showed how they could maximize hydrodynamic flow in a strip of graphite using a Fourier-space framework. They knew that the temperature profile of a hydrodynamic system contains two contributions: vorticity (how heat flow swirls) and compressibility (how heat flow is squeezed). They were therefore able to show how compressibility plays a critical role in phonon fluids. This, says team member Enrico Di Lucente, provides an explanation for why heat backflow is at a maximum when compressibility is minimized: incompressible heat flow cannot be squeezed when it encounters resistance, but is instead redirected backwards.
In such hydrodynamic heat backflow, heat flows from cooler regions to warmer ones, leading to a negative temperature difference and overall negative thermal resistance across the device. While the effect observed is very small, Di Lucente says that he and his colleagues could now design experiments to maximize it, “potentially changing how we think about energy loss in electronic systems”. For example, “you could imagine a smartphone with a hydrodynamic shield to direct thermal energy away from the battery, so it doesn’t overheat”.
Looking ahead, the researchers are now working with experimental colleagues who are able to carve very precise microscopic structures that could confirm the predicted phenomena. “We will also explore novel geometries and architectures, to make the effects we have observed larger and larger,” says Marzari. “These come with fancy names – such as Christmas trees and Tesla valves – so, stay tuned.”
Di Lucente has now moved to Columbia University to work in the team of Michele Simoncelli, who was involved in the earlier studies on the viscous heat equations. And Marzari is moving to the Cavendish Laboratory at the University of Cambridge, where he has been elected as the new Cavendish Professor of Physics.
The post Fluid flow: how heat can move from cooler to warmer regions appeared first on Physics World.
Surface contamination holds the key to a static electricity mystery
Carbon-rich “schmutz” determines how charge moves between objects made from identical insulating oxides
The post Surface contamination holds the key to a static electricity mystery appeared first on Physics World.
When Scott Waitukaitis set out to understand a puzzling aspect of static electricity, he didn’t expect to find the answer in a substance colloquially known as “schmutz”. But after a painstaking series of experiments, Waitukaitis and colleagues at the Institute of Science and Technology Austria (ISTA) have strong evidence that carbon-based surface contaminants – in other words, schmutz – are, in fact, the determining factor in how electric charge flows when certain types of insulating materials come into contact. By clearing up this mystery, the ISTA scientists say that their findings, which are published today in Nature, could shed fresh light on phenomena such as lightning and protoplanetary disk formation where static electricity plays a significant role.
If you’ve ever felt a shock after rubbing your hair with a balloon or shuffling across a carpet, you’ll know that static electricity can be a real pain. But for the scientists who study it, the pain runs much deeper. “Experimentally, it’s really hard,” Waitukaitis says. “There’s just a tonne of problems with this topic.”
During his talk at the APS Global Physics Summit in Denver, Colorado this week, Waitukaitis listed a few of these problems. Measuring an object’s charge is tricky. You can’t tell whether surface charge is coming from electrons or ions. And whenever you touch the object, you change its charge in unpredictable ways. Because of these complications, Waitukaitis says that even the most careful experiments are plagued by systematic effects.
Acoustic levitation to the (partial) rescue
To avoid the worst of these effects, an experimental team led by Galien Grosjean built an apparatus that uses sound waves to suspend a tiny sphere of silicon dioxide above a plate made from the same material. Turning this acoustic potential off and on again enabled the team to drop the initially neutral sphere onto the plate and “catch” it on the rebound without actually touching it. By applying a varying electric field to the sphere and measuring how it oscillates during its “catch” phase, they could also measure how much charge the sphere gained or lost via contact with the plate to within 500 electrons.
It wasn’t easy, though. “Bouncing this tiny sphere on a plate and catching it again is tricky enough to achieve once, but to understand the charging behaviour, we need to repeat this hundreds or even thousands of times in a row, without ever losing the particle,” Grosjean tells Physics World.
Preparing the samples was also challenging, he continues. “We were looking for what could possibly cause same-material samples to charge differently, so it was absolutely crucial for the samples to be prepared in exactly the same conditions.”
After these careful preparations, Grosjean, Waitukaitis and colleagues observed a curious pattern. Although every individual sphere charged in a consistent way with every individual plate, some spheres became more positively charged with each bounce, while others became more negatively charged. In effect, the spheres behaved as if they were made of completely different materials.
Spectroscopic investigations
Suspecting that surface contamination could be the culprit, the ISTA scientists tried cleaning the spheres and plates with plasma and baking them at 200 °C. The results were stark: the “clean” spheres all became more negatively charged with each bounce, regardless of how they’d responded before. Then, over the next day or so, the previous pattern reemerged. Whatever they’d removed, it was obviously coming back.
At this point, the ISTA scientists started working with spectroscopists to identify what, exactly, was on the surfaces of their spheres. The answer? Carbon, in the form of carbon dioxide, methane and various longer-chain carbon-rich molecules. “We never get the same cocktail of carbon on the surface twice, but the fact that it’s there really matters,” Waitukaitis says. By adding carbon to their spheres or removing it, he adds, “we can make everything that was charging positively charge negatively and vice versa.”
What they can’t do – at least, not yet – is explain exactly how this carbon schmutz changes the spheres’ charging behaviour. “Everybody starts on this topic thinking, ‘I’m an awesome physicist, I can kick its butt in less than two years,’” Waitukaitis says. “That’s not the case.”
In a future experiment, the ISTA scientists plan to deliberately dope their spheres with specific functional groups of carbon, in effect creating their own, tailored versions of schmutz to study. “Schmutz is a pain, but for us, it’s the thing that matters,” Waitukaitis concludes.
The post Surface contamination holds the key to a static electricity mystery appeared first on Physics World.
Quantum physicists Charles Bennett and Gilles Brassard win $1m Turing Award
Duo bag award often described as the “Nobel Prize in Computing”
The post Quantum physicists Charles Bennett and Gilles Brassard win $1m Turing Award appeared first on Physics World.
The quantum physicists Charles Bennett and Gilles Brassard have been awarded the 2025 ACM Turing Award “for their essential role in establishing the foundations of quantum information science and transforming secure communication and computing”.
The Turing Award, often referred to as the “Nobel Prize in Computing,” is awarded by the Association for Computing Machinery (ACM) and carries a $1m prize. The award is named after Alan Turing, the British mathematician who formulated the mathematical basis of computing.
For over 40 years, Bennett and Brassard have played a crucial role in the foundations of quantum information science, in particular establishing a quantum cryptography protocol in the 1980s.
Classical cryptography today is a vital part of computer and communication networks, protecting everything from business e-mails to bank transactions.
Information is kept secret using an encryption algorithm together with a secret “key” that the sender uses to scramble a message into a form that cannot be understood by an eavesdropper. The recipient then uses the same key with a decryption algorithm to read the message.
The issue with standard encryption is that the key must be known to both parties, which raises the problem of how to distribute the key securely.
Quantum cryptography, or quantum key distribution (QKD), however, provides an automated method for distributing secret keys using standard communication fibres. Based on the principles of quantum mechanics, QKD is inherently secure and allows the key to be changed frequently, reducing the threat of key theft.
The first method for distributing secret keys encoded in quantum states was proposed in 1984 by Bennett, then working at IBM Research, and Gilles Brassard at the University of Montreal.
In their “BB84” protocol, a bit of information is represented by the polarization state of a single photon – “0” by horizontal and “1” by vertical, for example. The sender transmits a string of polarized single photons to the receiver and by carrying out a series of measurements they are able to establish a shared key and to test whether an eavesdropper has intercepted any information en-route.
The BB84 protocol not only tests for eavesdropping, but also guarantees that sender and receiver can establish a secret key even if an eavesdropper has determined some of the bits in their shared binary sequence, using a technique called “privacy amplification”.
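For readers who want to see the sifting logic laid out, below is a minimal, purely classical toy simulation of the BB84 basis-reconciliation step. It assumes an ideal channel with no eavesdropper and omits error estimation and privacy amplification; all function names are ours, introduced only for illustration.

```python
import secrets

def random_bits(n):
    """Return n independent uniformly random bits."""
    return [secrets.randbelow(2) for _ in range(n)]

def bb84_sift(n=32):
    """Toy simulation of BB84 key sifting (ideal channel, no eavesdropper).

    Bases: 0 = rectilinear (horizontal/vertical polarization),
           1 = diagonal (+45/-45 degrees).
    A photon measured in the same basis it was prepared in is read faithfully;
    a photon measured in the wrong basis gives a random outcome.
    """
    alice_bits = random_bits(n)
    alice_bases = random_bits(n)
    bob_bases = random_bits(n)

    # Bob's measurement results, photon by photon.
    bob_bits = [
        bit if a_basis == b_basis else secrets.randbelow(2)
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Public discussion: keep only the positions where the bases agreed.
    kept = [
        (a_bit, b_bit)
        for a_bit, b_bit, a_basis, b_basis
        in zip(alice_bits, bob_bits, alice_bases, bob_bases)
        if a_basis == b_basis
    ]
    alice_key = [a for a, _ in kept]
    bob_key = [b for _, b in kept]
    return alice_key, bob_key

if __name__ == "__main__":
    alice_key, bob_key = bb84_sift()
    print("sifted key length:", len(alice_key))          # ~half of the photons sent
    print("keys agree:", alice_key == bob_key)           # always True without Eve or noise
```

In the real protocol, Alice and Bob would additionally sacrifice a random subset of the sifted key to estimate the error rate: any eavesdropping introduces discrepancies, which is what the public comparison is designed to expose.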
In 1993 Bennett and Brassard, along with other collaborators, introduced the concept of quantum teleportation, demonstrating how an arbitrary quantum state could be transmitted between distant parties using quantum entanglement and classical communication.
Subsequent work by the duo also led to the development of scalable quantum communication, an effort that continues today. “Bennett and Brassard fundamentally changed our understanding of information itself,” notes ACM president Yannis Ioannidis. “Their insights expanded the boundaries of computing and set in motion decades of discovery across disciplines. The global momentum behind quantum technologies today underscores the enduring importance of their contributions.”
A life in science
Bennett was born on 7 April 1943 in New York, US. After earning his Bachelor’s degree from Brandeis University in 1964 and his PhD from Harvard University in 1971, he moved to Argonne National Laboratory. In 1972, Bennett joined IBM Research where he has remained since.
Brassard was born on 20 April 1955 in Montreal, Canada. He earned his Bachelor’s and Master’s degrees from the University of Montreal and his PhD in theoretical computer science from Cornell University in 1979. He joined the faculty of the University of Montreal shortly thereafter, where he has remained since.
As well as the Turing Award, both Brassard and Bennett have been awarded the Wolf Prize in Physics, the Micius Quantum Prize and the Breakthrough Prize in Fundamental Physics.
The post Quantum physicists Charles Bennett and Gilles Brassard win $1m Turing Award appeared first on Physics World.
Superfluid plasmon appears in a two-dimensional superconductor
New finding could advance our understanding of high-temperature superconductors
The post Superfluid plasmon appears in a two-dimensional superconductor appeared first on Physics World.
A collective mode of electrons predicted to exist in high-temperature superconductors, but difficult to observe in experiments, has been identified by physicists at the Massachusetts Institute of Technology (MIT). The finding could advance our understanding of these materials, they say.
According to the Bardeen–Cooper–Schrieffer (BCS) theory, superconductivity occurs when electrons in a material overcome their mutual electrical repulsion to form electron pairs. These Cooper pairs, as they are known, can then travel unhindered through the material as a supercurrent without scattering off phonons (quasiparticles arising from vibrations of the material’s crystal lattice) or other impurities.
Cooper pairing is characterized by a tell-tale energy gap near the Fermi level, which is the highest energy level that electrons can occupy in a solid at absolute zero temperature. This gap is equivalent to the minimum energy required to break up a Cooper pair, and identifying it is regarded as unequivocal proof of a material’s superconducting nature.
In high-temperature cuprate superconductors, which are layered materials, the Cooper pairs are confined to two-dimensional copper–oxygen (CuO2) planes that are only weakly coupled to each other. Researchers are able to study the collective oscillations of these conduction electrons – known as plasmons – that travel perpendicular to these superconducting layers using terahertz (THz) spectroscopy at millielectronvolt energies, which are lower than the superconducting gap of the material. They can do this because the plasmons interact strongly with light.
However, doing the same for the electrons within the CuO2 layers themselves is not so easy. This is because the collective electron behaviour occurs at energies that are much higher than the superconducting gap.
A 2D superfluid plasmon
Now, a team of physicists led by Nuh Gedik say they have succeeded in identifying a 2D superfluid plasmon in the layered superconductor bismuth strontium calcium copper oxide (or “BSCCO”) using a new THz microscope they developed in their laboratory. This plasmon has energies that are lower than the superconducting gap of the material.
THz radiation spans wavelengths from 30 μm (10 THz) in the mid-infrared part of the electromagnetic spectrum to 1–3 mm (0.1–0.3 THz) in the microwave domain. This is much larger than the size of atoms and molecules, so THz light cannot be used to resolve microscale structures. To overcome this fundamental diffraction limit, which restricts spatial resolution to roughly half of the wavelength of the light being used, Gedik and colleagues used spintronic emitters, which are devices that produce sharp pulses of THz light.
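As a rough sanity check of those scales (our own back-of-envelope arithmetic, not figures from the paper), the wavelength follows from λ = c/f and the conventional diffraction limit sits at roughly half of that:

```python
# Back-of-envelope check of THz wavelengths and the ~lambda/2 diffraction limit
c = 3.0e8  # speed of light, m/s

for f_thz in (0.1, 0.3, 1.0, 10.0):
    wavelength_um = c / (f_thz * 1e12) * 1e6   # wavelength in micrometres
    print(f"{f_thz:4.1f} THz -> wavelength ~ {wavelength_um:6.0f} um, "
          f"diffraction limit ~ {wavelength_um / 2:6.0f} um")
```

Even at the highest frequency, the limit is tens of micrometres – far too coarse for the features of interest, hence the near-field approach described next.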
The researchers explain that when they shone this light on the multi-layered BSCCO, it triggered a cascade of effects in the electrons within each layer. By placing their sample, held at ultracold temperatures so that it became a superconductor, close to the spintronic emitter (in its near field), they were able to trap the THz light before it had time to spread. They were thereby able to “squeeze” it into a space much smaller than its wavelength. In this regime, the light can bypass the diffraction limit and resolve features previously too small to observe – in this case, collective THz oscillations of superconducting electrons within the material. Such a “jiggling” superfluid, as the researchers have dubbed it, was predicted to exist but had never been directly visualized until now.
“The new mode of electrons we have seen will provide a novel way of studying high temperature superconductivity in these systems,” says Gedik, “and we will now be looking into how this collective mode changes as a function of temperature, doping and sample geometry.”
“The tool that we built could also be used to study the properties of 2D materials other than high temperature superconductors – for example, the optical behaviour in the THz regime for many small samples and heterostructures,” he tells Physics World.
The present work is detailed in Nature.
The post Superfluid plasmon appears in a two-dimensional superconductor appeared first on Physics World.
Will the demise of the US penny damage science education?
Robert P Crease wonders what the death of the US penny will do to how we teach and learn science
The post Will the demise of the US penny damage science education? appeared first on Physics World.
Let us mourn the demise of the American penny. With each of the one-cent coins costing about three cents to make, it was “wasteful” to keep producing them, pronounced President Trump. US pennies won’t vanish soon. While the last was minted in November 2025, about 250 billion will remain in circulation for a time despite the rising number of cash-free transactions.
The US penny has been around since 1793. Lamenting its passing is faintly obscene compared to other things that the US government has done lately, such as terminating science agencies, cutting jobs, and slashing budgets, environmental regulations and vaccine research. But I can’t stop thinking about what the penny meant to my own science education.
Science collaborators
Pennies, which until the early 1980s were 95% copper, taught me about corrosion. I learned, for instance, that the Statue of Liberty’s green colour is due to oxidized copper. At school, we were taught how to make pennies a light shade of green by immersing them in salt and vinegar; a plant food such as Miracle Gro works even better as it contains ammonia. We were then instructed to figure out how to clean off the green, discovering that an acid like lemon juice did the trick.
When I placed a drop of water on the surface of a penny, the dome-like shape it adopted – caused simply by surface tension – was an impressive sight. My first lessons on ions, meanwhile, involved placing pennies and steel nails in a bath of salt and vinegar: the nails got electroplated with copper; the pennies with zinc.
We also had to determine the density of pennies, which are 19 mm in diameter and 1.52 mm thick, by submerging them in a graduated cylinder to find their volume and then weighing them to determine their mass. From 1983 – years after my high-school career – this exercise turned more interesting still, because pennies became 97.5% zinc and were only plated with copper, so you had to be eagle-eyed to tell old and new apart.
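As a rough reconstruction of that exercise (our own numbers plugged into the dimensions quoted above, not anything from an original lab sheet), even treating the coin as an ideal cylinder gets you into the right ballpark:

```python
import math

# Density of a post-1983 penny, treating it as an ideal cylinder with the
# dimensions quoted above. The raised rim and relief mean this overestimates
# the true volume, so a graduated-cylinder (displacement) measurement gives a
# smaller volume and a density closer to that of zinc (about 7.1 g/cm^3).
diameter_mm, thickness_mm, mass_g = 19.0, 1.52, 2.50

volume_cm3 = math.pi * (diameter_mm / 20.0) ** 2 * (thickness_mm / 10.0)
density_g_per_cm3 = mass_g / volume_cm3

print(f"cylinder volume ~ {volume_cm3:.2f} cm^3, density ~ {density_g_per_cm3:.1f} g/cm^3")
```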
Pennies were indispensable lab props too. All the kilogram weights for mechanics experiments were bags of 400 pennies (the 1983+ penny weighing exactly 2.50 g). They were great for coin-tossing in statistics classes too, although I assume other coins gave the same result, even if that was an experiment we never tried.
The humble penny wasn’t some piece of lab equipment manufactured by an educational company but a familiar part of our world
The humble penny was effective for all these uses because it wasn’t some piece of lab equipment manufactured by an educational company but a familiar part of our world. The coins were cheap and available, and nobody cared if you lost them or took a few home.
You could stick pennies under a leg to prop up a wobbly table. They made makeshift washers if you punched a hole in them and inserted nails or screws. If you were bored by the hand-cranked penny-squishing machines at tourist sites and amusement parks, whose results are fully predictable, a more exciting way to deface currency would be to lay pennies on railroad tracks and hunt for the results in the stones after the train passes – though never do this, because it’s dangerous.
Unit value
Pennies taught me something indirectly. After a breakup, my ex left behind some clothing, a cat and a large bowl containing literally thousands of pennies. The clothes I could throw out, and I had to learn to love the cat. But the bowl?
I tried to put it at the bottom of my closet, but the damned thing continued to haunt me. Should I toss the bowl and its contents in the garbage? Wasteful, un-environmental and avoidant. Stuff the pennies into 50-cent coin-roll holders, or take them to a coin-counting kiosk in a bank and then present them to a teller? Psychologically unsatisfying.
No, I had to deal with the pennies by doing with them what they were meant for: I had to spend them. At a bar one night I tried to pay the tab using all pennies. They were legal tender, right? The bouncer was summoned. Another night a taxi driver furiously threw my pennies back at me, accusing me of treating him like a waiter. I was astonished that he thought I was disrespecting him rather than engaging in post-breakup, self-absorbed infantile behaviour.
I managed to befriend a sympathetic newsstand worker who, a few times a week, was willing to let me buy the New York Times all in pennies
I could only use the fundamental unit of US currency in anomalous circumstances that I had to generate myself. In those days the New York Times cost 35 cents and I managed to befriend a sympathetic newsstand worker who, a few times a week, was willing to let me buy it all in pennies. He’d cheerfully greet me with “Here come my pennies!” and claimed I was becoming a better person now I was greeting vendors with smiles, not scowls.
I learned to work the monetary system methodically. For things that cost a little over 25 cents, I used a quarter and then pennies for what was left; for things that cost a little over a dollar I handed over a bill and the rest in pennies. I’d choreograph my purchases in advance so that I could use the appropriate lesser unit of currency plus pennies.
I often exploited the fact that sales tax in the US is only added on at the till, which means that something priced $2 with a sales tax of 6.5% will be $2.13 when you pay the cashier. So I’d hunt around in my pocket for a moment, and then in feigned chagrin say that I only had pennies, and hand the cashier the 13 of them that I had carefully calculated beforehand would be needed.
Soon, keeping exacting track of purchases, I managed to spend an average of about 200 pennies a week. From day to day and even week to week the pile in the bowl barely dwindled. But, finally, after a little less than a year only a handful remained. I was thrilled – it was better than seeing a therapist.
The critical point
You might think that the moral I’m about to draw is the need for faith in incremental change – that, penny by penny, you can move mountains. That’s certainly the lesson teachers urge on you if you’re learning a foreign language or playing a musical instrument.
No, I was instead moved by the humbler experience of valuing an entire system of units moored to a stable, familiar, simple but all-important base unit that you can literally count on.
I still value that lesson, though it’s less concrete than what I learned from corroded, nailed or squished pennies.
The post Will the demise of the US penny damage science education? appeared first on Physics World.
Revealing hidden orbital topology in light-element materials
Phosphorene hosts an orbital Chern insulator with an experimentally distinct orbital Hall effect
The post Revealing hidden orbital topology in light-element materials appeared first on Physics World.
Topological insulators are insulators in the bulk and conductors on the surface. This behaviour is caused by spin-orbit coupling, a property that is stronger in heavier elements. Therefore, most topological insulators are made using heavy elements, such as bismuth selenide (Bi₂Se₃) and antimony telluride (Sb₂Te₃). In this research, the authors introduce orbital Chern insulators, a topological phase in which the orbital angular momentum of electrons, rather than their spin, drives the nontrivial topology. This allows topological behaviour to emerge in materials composed of much lighter elements, demonstrated using monolayer blue phosphorus, which was previously regarded as a trivial insulator.
The authors introduce a feature‑spectrum topology framework, a systematic method for identifying and characterizing materials with orbital‑driven topology. Using this approach, they show that phosphorene hosts the first pure orbital Chern insulator, where the orbital topology is fully disentangled from spin and valley degrees of freedom. As a result, the material exhibits a pure orbital Hall effect that can be experimentally distinguished from spin and valley Hall responses, unlike in transition‑metal dichalcogenides where spin-orbit coupling and valley physics are intertwined.
Because orbital Chern insulators do not rely on spin-orbit coupling, they are not constrained by the small band gaps typical of topological insulators driven by spin-orbit coupling, and can potentially support larger band gaps in light-element systems. The authors also show that orbital nontriviality is expected more broadly in Group 5A monolayers with buckled or puckered structures, expanding the landscape of candidate materials. This research opens a path for orbitronics, in which currents of orbital angular momentum, rather than the spin currents used in spintronics, can be generated, controlled, and applied in future quantum and electronic devices.
Read the full article
Orbital topology induced orbital Hall effect in two-dimensional insulators
Yueh-Ting Yao et al 2026 Rep. Prog. Phys. 89 018001
Do you want to learn more about this topic?
Interacting topological insulators: a review by Stephan Rachel (2018)
The post Revealing hidden orbital topology in light-element materials appeared first on Physics World.
Decoding the impact of sudden shocks: A new predictive framework for climate and complex systems
Accurately predicting how a system responds to sudden changes is a major challenge across fields like climate science, finance, and epidemiology. Now, a team of researchers has developed a powerful new mathematical framework to do just that, using a generalized linear response theory.
The post Decoding the impact of sudden shocks: A new predictive framework for climate and complex systems appeared first on Physics World.
Linear Response Theory (LRT) is a cornerstone of statistical physics. It predicts how a system at (or near) equilibrium responds to small external perturbations—an idea tied to the fluctuation-dissipation relation. Essentially, if you understand a system’s natural fluctuations, you can infer how it will react to weak forcing without running a full, computationally heavy simulation.
Traditionally, LRT was developed for systems with Gaussian noise—smooth, continuous fluctuations. While this works well for phenomena like thermal fluctuations, many real-world systems also experience sudden jumps or shocks, modeled mathematically as Lévy processes. Think volcanic eruptions, market crashes, or sudden disease outbreaks.
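The classical, Gaussian-noise version of that idea is easy to demonstrate numerically. The Python sketch below is our own toy illustration (an Ornstein–Uhlenbeck process, not the jump-diffusion models treated in the paper): it estimates the equilibrium autocorrelation from an unperturbed run, uses it to predict the mean response to a small constant forcing switched on at t = 0, and checks that prediction against a directly forced ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Ornstein-Uhlenbeck model, dx = -gamma*x dt + sigma dW (Gaussian noise only)
gamma, sigma, dt = 1.0, 0.5, 0.01
n_steps, n_traj = 4000, 5000

def simulate(force=0.0):
    """Euler-Maruyama integration of an ensemble of trajectories."""
    x = np.zeros(n_traj)
    traj = np.empty((n_steps, n_traj))
    for i in range(n_steps):
        x += (-gamma * x + force) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_traj)
        traj[i] = x
    return traj

# 1) Unperturbed run: estimate the equilibrium autocorrelation C(t)
free = simulate()
equil = free[n_steps // 2:]                    # discard the initial transient
n_lags = 400
corr = np.array([np.mean(equil[: -lag or None] * equil[lag:]) for lag in range(n_lags)])

# 2) Fluctuation-dissipation prediction: the step response is the time integral
#    of the normalized autocorrelation, computed from fluctuations alone
predicted = np.cumsum(corr / corr[0]) * dt

# 3) Direct check: switch on a constant force and measure the mean response
force = 0.5
measured = simulate(force=force)[:n_lags].mean(axis=1) / force

for lag in (50, 150, 350):
    exact = (1 - np.exp(-gamma * lag * dt)) / gamma
    print(f"t = {lag*dt:3.1f}: FDT prediction {predicted[lag]:.2f}, "
          f"forced ensemble {measured[lag]:.2f}, analytic {exact:.2f}")
```

For this Gaussian example the fluctuation-based prediction matches the directly forced ensemble; the point of the new work is to derive analogous response formulas when the driving noise also includes sudden, Lévy-type jumps.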
Incorporating these sudden shocks into LRT has been a long-standing goal for statistical physicists. A recent paper published in ROPP has made a major step forward by establishing linear response theory for a broad and fundamental class of systems: mixed jump-diffusion models, which include Lévy processes.
By generalizing the fluctuation-dissipation theorem for this class of models, their response formulas allow scientists to assess how these systems respond to structural perturbations. Crucially, this works even with respect to changes in the underlying noise law itself, allowing for much tighter uncertainty quantification.
The authors—a team of researchers from Israel, UK, USA and Sweden—note that this framework provides foundational support for “optimal fingerprinting”—a statistical methodology used to confidently associate observed changes with specific causal mechanisms. By proving this approach works even under complex stochastic forcings, their findings strengthen a key aspect of the science behind climate change, grounding and expanding Hasselmann’s seminal work on detection and attribution. Importantly, this pathway for causally linking signals with acting forcings extends well beyond climate to a massive class of complex systems.
To demonstrate the theory’s predictive power, the team applied it to complex climate scenarios, including the El Niño-Southern Oscillation (ENSO)—a large-scale climate pattern in the tropical Pacific Ocean. In a more challenging application, they used their LRT to perform accurate climate change projections in the spatially extended Ghil–Sellers energy balance climate model subject to random, abrupt perturbations. They showed that despite strong nonlinearities in model formulations—such as the complex “if-then” decision-making structures often used to parameterize ocean and atmospheric convection—LRT can still be robustly applied. This strengthens the argument for using this approach to perform accurate climate change projections and to rigorously assess a system’s proximity to tipping points.
Ultimately, this work doesn’t just improve predictions of how climate models respond to perturbations; it provides a new blueprint for understanding how any complex system reacts to sudden shocks, paving the way for better predictions in biology, finance, and the quantitative social sciences.
Read the full article
Kolmogorov modes and linear response of jump-diffusion models
Mickaël D Chekroun et al 2025 Rep. Prog. Phys. 88 127601
The post Decoding the impact of sudden shocks: A new predictive framework for climate and complex systems appeared first on Physics World.
Love, Tito’s: vodka maker funds physics research
Texas distiller has donated tens of millions of dollars to scientific research
The post Love, Tito’s: vodka maker funds physics research appeared first on Physics World.
As a freelance writer, I’m not usually one to go down rabbit holes when it comes to research funding. That changed when Physics World spotted an intriguing source of support in a paper I was covering on exotic phases in quantum materials. The study, published in Nature Materials and led by Edoardo Baldini at the University of Texas at Austin, was partially funded by Love, Tito’s – the philanthropic arm of the Texas-based distillery, Tito’s Handmade Vodka.
The link between vodka and quantum materials was a story I was eager to explore, but a quick look at the company’s website was enough to shed some light on how this connection came about. Since its beginnings in 2015, Love, Tito’s has donated tens of millions of dollars to multidisciplinary research, with no direct ties to the company’s business operations.
Much of this funding has supported projects ranging from cancer therapies to ocean cleanups – efforts whose vital importance will be immediately clear to the public. Yet the team at Love, Tito’s also demonstrates a clear appreciation for the broader, often understated relevance of physics.
New passion
As I read more about Bert “Tito” Beveridge, the roots of this appreciation became even clearer. In 1992, between jobs in oil rigging and mortgage lending, Beveridge developed a new passion: infusing affordable, high-quality vodka using fresh ingredients.
Faced with limited funding and uninterested investors, his operation encountered hurdles at every turn. But with an engaged and analytical mindset, he tackled these challenges systematically – even studying prohibition-era photographs of distillery setups, and curating advice from colleagues in the oil industry. Working with the resources available to him, Beveridge refined his process, and within a decade, his vodka was winning national awards.
“Tito Beveridge has always been a scientist at heart,” says Sarah Everett, director of global impact and research at Tito’s Handmade Vodka.
Even before achieving success, the company had committed itself to philanthropic work – finding homes for stray dogs that wandered onto the distillery grounds, and connecting local communities with volunteer opportunities. Alongside these efforts, “we have a special focus on scientific research, through our CHEERS initiative: Creating Hope and Elevating Emerging Research and Science,” Everett says.
Invaluable grant
Today, CHEERS provides grants across a wide range of disciplines, including physics. For Edoardo Baldini’s team, a $1.4 million donation from the programme has proved invaluable in advancing their discovery of exotic quantum clock states.
“The gift helped us establish a state-of-the-art sample preparation and handling station for atomically thin materials, which directly supports experiments like those in this study by improving sample quality and reproducibility. It also supported a laser system for a time-resolved momentum microscope,” Baldini says.
These resources are already enabling the next phase of the team’s research: probing ultrafast phenomena in atomically thin systems and tracking how their electronic structure evolves following photoexcitation.
“The experimental work in Baldini’s group provides the basis for developing advanced materials for a wide range of applications, with implications that will be far reaching beyond the walls of his lab,” Everett adds.
With the foresight to recognize the societal relevance of physics – beyond the fields that typically dominate headlines – a company that once defied the odds to build a vodka brand is now helping to support research that could lead to technological solutions for some of the world’s most urgent challenges.
“By supporting fundamental research on quantum materials, this gift reflects Love, Tito’s broader interest in advancing scientific discovery that can ultimately contribute to addressing major societal challenges, including the development of future energy and information technologies,” Baldini says.
The post Love, Tito’s: vodka maker funds physics research appeared first on Physics World.
A rubbish challenge: how do we dump space junk?
Katherine Courtney and Alice Gorman talk about the danger of space debris and what we need to do about it
The post A rubbish challenge: how do we dump space junk? appeared first on Physics World.
Among the working satellites and telescopes orbiting our planet is a lot of rubbish. From full satellites that no longer work to tiny bolts shed as spacecraft release spent rockets, there are millions of human-made pieces of debris in the space around Earth.
The problem is a hot topic within the space community. The presence of space junk has implications for both ground- and space-based astronomy; there is an impact on atmospheric science that we’re only just beginning to understand; and it also presents a threat to our highly space-reliant society.
To highlight what is being – and needs to be – done to tackle the issue of space junk, experts Katherine Courtney and Alice Gorman talked to Physics World online editor Margaret Harris as part of a Physics World Live panel discussion in November 2025.
Courtney started her career developing products and services for the telecoms industry before moving to the public sector and working in the UK government. While she was the chief executive of the UK Space Agency she came to realize the impact of space debris.
Courtney is now chair of the Global Network on Sustainability in Space (GNOSIS), which has about 1000 members from research and industry across more than 45 countries. GNOSIS aims to accelerate research and development efforts to tackle problems like space debris. Courtney also mentors start-up companies that are trying to solve these problems and does outreach with young people to educate them on the topic.
Gorman studied archaeology and for several years worked on terrestrial projects before becoming a space archaeologist. Now at Flinders University, Australia, she is known as Dr Space Junk, and focuses not just on debris in Earth orbit, but also planetary landing sites, deep space probes, terrestrial rocket launch sites and tracking stations.
Gorman’s research into space junk involves looking at objects in an environmental context, examining their cultural value and what it means to retain these objects. Along with Justin Walsh, she trained crew on the International Space Station to do what was effectively the first archaeological field survey outside Earth.
What is space junk and how much is there in orbit around Earth?
Alice Gorman: Space junk is commonly defined as any object in space that does not now or in the foreseeable future serve a useful purpose. The biggest contributors to the space debris population are the US, Russia and China.
The latest figures estimate that there are 54,000 human-made objects in orbit that are larger than 10 cm, including over 14,000 operating satellites and spacecraft. Envisat is one of the largest in that category, being 26 m long. There are also medium-size objects, which can be anything from 1–10 cm. Current statistical models estimate there are about 1.2 million objects of this size. At an even smaller scale, there’s an estimated 140 million objects 1 mm to 1 cm in size.
Not all these objects are tracked and catalogued – the number regularly tracked by Space Surveillance Networks is only about 44,870. But that doesn’t mean that’s everything there is – that’s just the things we can see and know are there.
Taking up space

How the number of different types of human-made debris in geocentric orbit has evolved over time, as recorded by the European Space Agency:
- Payload – an object designed to perform a specific function in space (excluding launch functionality). This includes operational satellites as well as calibration objects.
- Payload fragmentation debris – an object that has fragmented from, or been unintentionally released by, a payload as space debris, with origins that can be traced back to a unique event. This class includes objects created when a payload explodes or when it collides with another object.
- Payload debris – an object that has fragmented from, or been unintentionally released by, a payload as space debris for an unknown reason, but whose orbital or physical properties allow it to be traced to a source.
- Payload mission-related object – an object that served a purpose for the payload and has been intentionally released as space debris. Common examples include covers for optical instruments or astronaut tools.
- Rocket body – an object designed to perform launch-related functionality. This includes the various orbital stages of launch vehicles, but not payloads that release smaller payloads themselves.
- Rocket fragmentation debris – an object that has fragmented from, or been unintentionally released by, a rocket body as space debris, with origins that can be traced back to a unique event. This class includes objects created when a launch vehicle explodes.
- Rocket debris – an object that has fragmented from, or been unintentionally released by, a rocket body as space debris for an unknown reason, but whose orbital or physical properties allow it to be traced to a source.
- Rocket mission-related object – an object intentionally released as space debris that served a purpose for the function of a rocket body. Common examples include shrouds and engines.
- Unidentified – an object that has not been traced back to a launch event.
What sorts of objects make up space junk?
Alice Gorman: First, there are whole satellites that no longer work. There are the upper stage rocket bodies that are left in orbit after they’ve delivered their payload – and in some cases are still attached. There are bolts, lens caps, fuel tanks – all kinds of debris that are released into orbit as part of a spacecraft’s mission or satellite launch.
Then you have the hundreds and thousands of fragments from exploded spacecraft. There have also been a number of anti-satellite tests that have added to the debris population. One notorious example was when China destroyed its own Fengyun-1C satellite using a missile in 2007. The event created around 3500 trackable objects and many more smaller pieces of debris, a lot of which are still in orbit.
There are also all the tiny fragments resulting from debris being continually bombarded by micrometeoroids and other bits of space junk. Plus, materials decay and erode when they’re in space.
Where is all this space debris?
Alice Gorman: The most congested area is low-Earth orbit – about 200 to 2000 km above sea level. Among the working satellites in this orbit are around 9000 that are part of SpaceX’s Starlink network.
Medium-Earth orbit (between roughly 2000 and 35,000 km) has a lot of stuff in it but also contains the Van Allen radiation belts so tends to be avoided. Then we get to geosynchronous and geostationary orbit at 35,786 km, where a lot of telecoms satellites are. Finally, beyond that is the graveyard orbit, where geostationary satellites that no longer work are sometimes boosted up to.
What hazards do these human-made objects pose to the space environment?
Katherine Courtney: First you have to consider just how dependent we are on the infrastructure that is orbiting the planet. The Internet, mobile telephones, banking networks, utility grids, emergency services, food distribution, climate change monitoring, stock markets – all of these things and so many more depend on space.
In 1978 American astrophysicist Donald Kessler proposed that if certain orbits get too congested with debris and active satellites there could be a collision that triggers a chain reaction of further collisions, making those areas of space unusable for generations. It’s what’s known as the Kessler Syndrome.
Kessler and UK astronautics engineer Hugh Lewis recently released an update to that original paper. Using European Space Agency (ESA) data on space debris, they determined that Kessler Syndrome is actually already happening at some orbits, and there are a whole range of other orbits that are now considered unstable and potentially at risk.
We don’t know for sure that we’re at that catastrophe scenario where the orbits become too congested with objects that can’t be controlled by humans. But the modelling suggests we are well on our way to that situation.
Even tiny debris can make a satellite inoperable. Satellites often just stop working, and nobody knows if that’s because they’ve had a debris strike, an electrical malfunction or some other fault. In ESA’s latest annual report on the debris population, they say that even if no further launches occur, the debris population will continue to expand because of the decay and fragmentation of those legacy rocket bodies and big defunct objects that we have no way of retrieving, reusing or controlling.
Debris isn’t the only hazard. There’s quite a complex system up there where hazards are impacting each other. Some orbits are now getting so congested that it’s getting very difficult for operators to avoid collisions, and they are having to manoeuvre satellites daily to avoid them. Starlink publishes their collision manoeuvre statistics and – when you plot it – you can see how it’s going up and up as they increase the size of their constellation.
But debris doesn’t advertise where it is. As Alice described, we only have a certain number of trackable objects – the other million plus are not trackable. So there’s an interplay between how crowded inoperable things are and how crowded manoeuvrable things are.
Distribution of space debris
Debris distribution: an ESA animation released in 2019 showing the distribution of debris in orbit around Earth. The colours represent different object types – functional and dysfunctional satellites (red), rocket bodies (yellow), mission-related objects (green), and fragments (blue).
What impact does space weather have on debris?
Katherine Courtney: Every time we see an aurora in the night sky, it might look pretty but it means that the satellites in orbit around the Earth are being washed with some serious magnetic particles from the Sun. Along with the risk that a massive solar storm could knock out satellites if it was blasted in Earth’s direction, the influx of these particles increases the atmospheric drag and moves the debris in unpredictable ways. Space weather interacts with both active satellites and debris in a way that increases the uncertainties about just how many things we can safely operate up there.
This is also becoming more hazardous because constellation operators in low Earth orbit have started to introduce artificial intelligence and automated manoeuvring systems. They’ve done that because if you have 9000 satellites, you can’t employ (or don’t want to employ) 9000 people to operate them from the ground. So they have all developed automated station keeping, which is a good idea if the idea is to keep the satellite in place.
But there isn’t really a system in place whereby operators announce in advance these automated manoeuvres. Yes, they will try and contact other operators on a sort of “best efforts” basis if they are going to do planned manoeuvres, but unplanned ones are a whole new hazard.
What impact can space junk have on astronomy?
Katherine Courtney: Space junk is quite a challenge for astronomers. They have facilities that have taken 10 years to build and cost billions, but they are getting streaks in their imagery and they are losing data points. It’s a real challenge to deal with that because when these telescopes were designed, we didn’t have 13,000 satellites flying around and more than 10,000 of them moving fast in low Earth orbit.
Radio astronomy is also being interfered with because satellites are transmitting signals all the time. There is some evidence they are also leaking unintentional emissions from their electrical systems, which – again – interferes with astronomy.
And what impact does space debris have on the environment?
Katherine Courtney: There is emerging evidence that when debris re-enters the Earth’s atmosphere, it deposits particulate matter into the atmosphere that we have never experienced before. Naturally occurring matter from meteorites and micrometeorites doesn’t carry the metals we’ve extracted from Earth and launched into space, which are now burning up on their way back down.
And not all objects burn up. You can find some quite scary pictures of very large things that have landed on Earth – thankfully not on anybody’s head as far as we know. They usually land in places like Australia, a long way from inhabited areas, or in the middle of the Pacific Ocean – but they’re not being controlled as they descend.
A couple of years ago, a Chinese Long March rocket body re-entered the atmosphere uncontrolled. If it had arrived 15 minutes earlier, it would have landed on New York City. All you can do is cross your fingers and hope that when objects come down, they’re not landing on a bunch of people somewhere.

What is being done – or could be done in the future – to reduce the hazards of space junk?
Alice Gorman: This is an urgent problem that we need action on. At the moment, there are many proposals and missions in testing or development to actively remove debris from orbit, but none are actually working. For new missions, however, there has been a really interesting shift.
We used to look on the atmosphere as a natural incinerator, and all the plans to get rid of stuff in orbit involved tipping them back into the atmosphere to mostly burn up. It was considered to be the logical and most harmless way to dispose of space junk. But objects don’t always burn up, and stuff still makes it to the ground.
We also now know that these aluminium and soot particulates [created by objects burning up] in the upper atmosphere are affecting the ozone layer. We thought we had solved that problem with the 1987 Montreal Protocol, when the world came together to stop the ozone layer being destroyed.
People now realize you can’t just let satellites burn up. In fact, there are now proposals, like ESA’s Zero Debris Charter, for new missions to not create any new debris – to be “debris neutral”. That’s great for current and future missions but are people actually going to do it?
There used to be a rule – don’t leave anything in orbit for 25 years and have an end of life strategy to get rid of it. That’s now down to five years, which is good. But apparently only 40 to 60% of satellite operators followed that protocol – the rest would simply do nothing to prevent their spacecraft from contributing to the debris problem.
We rely on satellite operators and launch operators complying with these international standards and norms. And when profit is at stake, I don’t think we can have any guarantee that they will actually do that.
Katherine Courtney: When I first began focusing on space debris, I sometimes felt there were just the United Nations (UN) long-term sustainability guidelines. They were voluntary, but people are now bringing them into their national space law. There is increasing awareness of the issue and satellite operators are beginning to engage in those conversations differently.
ESA’s Zero Debris Charter is a great initiative because it sets timed targets and detailed technical specifications for how not to create additional debris with your missions. Unfortunately, it still calls for five-year design-for-demise as best practice, which maybe isn’t the answer. Missions should be designed for reuse and recycling. Or we need to not only not create debris, but use new materials that have less impact when they re-enter the atmosphere.
The International Telecommunication Union (ITU) [the UN agency for digital technologies] are really the only multilateral body that have any sort of binding powers. They allocate global radio spectrum and satellite orbits to ensure telecommunication operations run smoothly. They have started holding an annual sustainability conference where they get ITU delegates together to talk about how to fix the problem of space debris.
In 2025, the UN’s Committee on the Peaceful Uses of Outer Space (COPUOS) also set up the Expert Group on Space Situational Awareness – under the Working Group on the Long-term Sustainability of Outer Space Activities (LTS) – because one of the real problems is that we don’t have a clear enough picture of what is going on in orbit.
As Alice described, we can’t see the vast majority of the debris, but we also collect the data about debris through lots of non-standardized observations that are not interoperable and are made by different space agencies around the world. There are competing models that give different estimates and different forecasts.
We need to come up with a standardized way of monitoring the space environment and modelling what impact increasing numbers of spacecraft are having, so it’s great to hear that COPUOS has decided to encourage that.
There is also hope from the UN’s Summit of the Future in 2024. Action 56 in the resulting Pact for the Future proposes a fourth UN Conference on the Peaceful Exploration of Outer Space (UNISPACE IV) in 2027. It will focus on debris, debris mitigation and management, space traffic management, and how the world can cooperate more effectively in this area.
So what do we need to make these initiatives work?
Katherine Courtney: We currently don’t have an international treaty with binding rules. Different countries require different things of their licensed operators, don’t necessarily keep other countries informed of their activities, and some space objects don’t even go into the intended orbits at all. We need something like the ITU – a non-military, cross-border independent authority – that could monitor and enforce standards internationally.
A little ray of light for me is that NASA recently received an e-mail from the Chinese National Space Administration to warn them of a potential collision between a Chinese object and a NASA mission. It was the first time that had happened. Communicating to avoid collisions should be the bare minimum to ensure a more sustainable space environment.

What missions and tests have happened or are in the pipeline to deal with individual pieces of debris?
Katherine Courtney: The most advanced to date has been Japan’s Astroscale ADRAS-J mission, which has demonstrated its ability to safely approach a target object and examine it closely. Meanwhile, ESA is launching its ClearSpace-1 mission in 2029 to clear an old PROBA-1 satellite from low-Earth orbit.
These missions are tricky because the first rule is don’t make any more debris – but you have an object that is tumbling, maybe fragmenting and could be carrying fuel. You have to be very careful to prove that you have the technology that can safely capture that object. For ClearSpace-1, they are going to use a sort of robotic grappling mechanism, while Astroscale will use a magnetic solution to grab things.
The UK government has also announced further funding to not just remove one UK licensed object from orbit, but go back and remove a second. This is quite a technical feat – you have to safely take an uncooperative debris object, lower it to a point where Earth’s gravity will cause it to deorbit, and then go back and get another one, all without bumping into anything on the way.
China and Russia have also demonstrated their ability to safely approach objects but they haven’t published the outcomes of those missions. China’s efforts have been defence-focused but they have also started to look at commercial operations in that area. In fact, there are quite a few companies now really interested in being involved in this space.
Do some craft have heritage value?

Alice Gorman: There have been proposals to test some debris removal technologies on older spacecraft on the basis that they might one day be a risk and they’re old so nobody cares. But to me many such craft have incredible heritage value.
People sometimes say to me, “But Alice, we can’t leave them there, they’re junk”. However, if they’re not currently collision risks, we don’t have to do anything about them. Instead we can assess their cultural heritage value. We can rank the objects so we can say, if something has to be removed, this object has a lower value than this one.
So, from my perspective, every nation needs to have a look at its heritage assets in orbit, assess their significance, and from that point decide what needs to be done. And in heritage terms you don’t do anything until you need to – the place where something is, is an important part of its cultural significance.
You could argue that the definition of space junk is something which has no use, but these objects actually do have a purpose. Their purpose is to connect people to their history in space and to space as a place. I want to see all proposals for active debris removal incorporate cultural heritage management.
How long might it take before the orbits get so crowded that we really just can’t put anything else into orbit?
Katherine Courtney: I’ve not seen a forecast on how long this will take, but currently people are launching satellites weekly and in batches. According to ITU filings, there are over a million permissions to operate in certain spectrum on file now. So, over the next 10 years, a million more satellites could theoretically be launched, which could be problematic.
Imagine a motorway where everybody can drive at whatever speed they want with no indicator lights. If your car broke down and you just left it in the middle of the road, that would soon become an unusable environment. Space orbits would soon be like that.
But we can use orbital capacity more efficiently. It’s just that it requires a great force of global collaboration to solve that problem because space, by definition, is a place without national borders.
My view is that 90% of space activity today is commercial. Businesses have to manage these hazards and risks or else they will close down. In fact, I see a day where something happens that makes everybody sit up. I call it the Exxon Valdez moment, a disaster that is small enough to hurt some operators financially, but not big enough that we have a Kessler syndrome and we all have to wait 200 years before we can use that space again. I think that’s when the economic incentives will be there for people to actually start collaborating.
Five years ago you never heard an operator say regulation was a good thing – I now regularly attend events where operators ask for regulation. So I think we can solve these problems.
Are there any alternative approaches to avoid more space debris?
Alice Gorman: Although we depend on space, we are in fact neglecting terrestrial infrastructure. The Starlink satellites, for example, have been strongly pushed in because they promise to provide communication to remote places – but only because there has been no investment in terrestrial infrastructure. We can choose to pull some functions back from space. We’re not completely committed to space for all these functions, and we shouldn’t be so dependent on space.
- This article is based on the 10 November 2025 Physics World Live event, which you can watch on demand here
The post A rubbish challenge: how do we dump space junk? appeared first on Physics World.
International scientists head into the fast-lane of Denmark’s burgeoning quantum ecosystem
Ambition and international talent converge as Denmark scales up in quantum science
The post International scientists head into the fast-lane of Denmark’s burgeoning quantum ecosystem appeared first on Physics World.

Denmark, it seems, is increasingly walking the walk, not just talking the talk, when it comes to quantum science and innovation. Structurally, the country’s “quantum ecosystem” is on a roll, with more than 75 organizations now actively engaged around a shared national mission via the Danish Quantum Community, a network of start-ups, scale-ups, incumbent technology companies, investors, research institutions and government agencies.
Money is greasing the wheels. In October last year, Denmark launched 55North, the world’s largest venture-capital fund dedicated exclusively to quantum technologies and applications. Headquartered in Copenhagen and backed by the Novo Nordisk Foundation and the Export and Investment Fund of Denmark (EIFO), the fund opened with a capital injection of €134 million (and a target base of €300 million) to back high-growth companies in the nascent quantum supply chain – within Denmark and beyond.
Workforce development is also mandatory – a strategic acknowledgement that Denmark must scale the “quantum talent pipeline” if it is to translate advances in fundamental science and applied R&D into next-generation quantum technologies. Capacity-building is well under way as Danish universities work with industry and government partners to train a skilled and diverse quantum workforce of “all the talents”, with recruitment of international scientists and engineers seen as fundamental to Denmark’s long-term quantum ambitions.
Joined-up thinking in quantum
A case study in this regard is Maria Cerdà Sevilla, head of Quantum DTU, the Center for Quantum Technologies at the Technical University of Denmark (DTU). Located in Lyngby, just north of Copenhagen, Quantum DTU coordinates the research activities of around 300 quantum scientists, working across 12 departments at DTU and focused around five main research themes: quantum computing, quantum communications, quantum sensing, advanced materials as well as cross-cutting initiatives in nanofabrication and next-generation quantum chips.
“The goal is to ensure that DTU is not merely participating in quantum science but also shaping the trajectory of technology translation and commercial innovation in the field,” explains Cerdà Sevilla. Put another way: Quantum DTU is all about outcomes versus three broad-scope metrics: scientific depth (world-class research in quantum physics and engineering); building the quantum ecosystem (integrating diverse research disciplines, developing infrastructure, plus education and training); and, finally, readiness for market deployment (meaning responsible and scalable implementation of quantum technologies).
“Our success will be defined not only by high-impact publications and prototypes, but whether DTU – and, by extension, Denmark – has established ‘durable capacity’ in quantum technologies and applications,” says Cerdà Sevilla.
It’s better to travel
For her part, Cerdà Sevilla is the quintessential pan-European scientist, albeit taking the “road less-travelled” to her role at Quantum DTU. After completing a PhD in particle physics at the University of Liverpool, UK, she moved on to postdoctoral research positions in Germany – at Humboldt University of Berlin and the Technical University of Munich – before a mid-career pivot into research strategy and innovation management.

“While I no longer do research myself, I work with quantum scientists every day at DTU,” explains Cerdà Sevilla. That engagement extends to other stakeholders, including policy-makers, funding agencies, manufacturers in the quantum supply chain, as well as industrial end-users looking to deploy quantum technologies. “My role is essentially about leadership and strategic alignment,” she adds. “That means defining research priorities, understanding what we’re doing at a granular level, and ensuring Quantum DTU’s scientific efforts translate into a joined-up action plan across diverse specialisms.”
One of the most powerful aspects of Quantum DTU – indeed the wider quantum sector in Denmark – is this sense of shared purpose. “The quantum community here is internationally connected and recognized as well as being locally cohesive,” notes Cerdà Sevilla. “As an international scientist, it’s a given that you will get to conduct leading-edge research here; at the same time, you will also have a voice in shaping priorities at the departmental, institutional and even national level.”
By extension, institutional and interpersonal trust are defining features of Denmark’s research culture, enabling scientific collaborations and long-term initiatives to take shape organically without undue friction or hierarchical blockers. That same mindset informs life outside the laboratory and the workplace.
“The work-life balance in Denmark is great, though productivity is mandatory,” says Cerdà Sevilla. “Danish people work very hard, but they also understand the need for downtime with family and friends to ensure creativity and clarity of thinking. Overall, there’s a culture of psychological safety in the research community – an implicit acknowledgement that teams function best when individuals feel secure with their colleagues and management.”
Heading north
Another international scientist making an out-sized impact in the Danish quantum community is Francesco Borsoi, an assistant professor of physics and spin qubit pilot-line lead within the Novo Nordisk Foundation Quantum Computing Programme (NQCP), part of the renowned Niels Bohr Institute (NBI) at the University of Copenhagen (KU).
The NQCP is a 12-year collaborative research effort, backed with €200 million of funding through to 2035, to develop fault-tolerant quantum computing hardware and quantum algorithms for chemical and biological challenges in the life sciences. Underpinning the programme is a technology-agnostic approach to hardware development and the infrastructure required to support it (currently implemented across four qubit pilot lines).
“Right now, my research at NBI explores the development, control and scaling aspects of solid-state quantum devices and investigation of the properties that may enable universal quantum computing,” explains Borsoi. While his focus, in large part, is on quantum-confined spins in semiconductor quantum dots, Borsoi works closely with the other three NQCP pilot-line teams developing platforms based on superconducting, photonic and neutral-atom technologies.
As an assistant professor, Borsoi also plays a proactive role in training the next generation of quantum scientists and engineers. Notably, he is the lead creator of a hands-on experimental course on advanced qubit technologies – part of a joint KU/DTU Masters programme in quantum information science that’s helping Denmark to scale its quantum workforce.
Here for the long term
Like Cerdà Sevilla at Quantum DTU, Borsoi’s back-story reflects the pan-European mobility of scientific talent. He received his MSc in condensed-matter physics from the University of Pisa, Italy, in 2016, before moving on to QuTech at the Delft University of Technology, The Netherlands, where he completed a PhD in applied physics (on semiconductor/superconductor quantum heterostructures) followed by three years of postdoctoral research (and a shift in direction to focus exclusively on semiconductor quantum-dot qubits).
After six years at QuTech, Borsoi wasn’t actively seeking a move to another institution – let alone another country – but was attracted by the NQCP opportunity and, as he puts it, “the chance to build from the ground up and be part of something this ambitious”.
He’s been in Copenhagen for 18 months and has settled well, both within NBI and outside. “Day to day,” he says, “I get to work with talented colleagues and students across NBI and KU, plus I get to develop my career in one of the world’s most liveable, sustainable cities. Five-star food scene, amazing architecture, lots of green space and excellent public transport – what’s not to like?”

Back in the laboratory, meanwhile, Borsoi also engages extensively with domain experts working on the three other qubit pilot lines – a systematic and collaborative research model that underpins NQCP’s approach to quantum science. “The core enabling technologies may differ,” notes Borsoi, “but many of the design, engineering and scalability challenges are common to all the pilot lines. I guess we all talk the same language when it comes to the NQCP mission.”
For Borsoi, the transition to NBI and Denmark’s quantum community could hardly have gone better and already feels like a long-term commitment. “Government, private equity and philanthropic foundations are all making big investments in quantum,” he concludes, “so there’s no shortage of opportunities in Denmark for talented quantum scientists and engineers seeking to develop their careers in a university or industry setting.”
- Visit Science Hub Denmark for more information on quantum job opportunities in Denmark. This nationally coordinated initiative aims to enhance the global visibility of Danish research and career opportunities in the natural sciences, engineering and life sciences. Delegates attending the APS Global Physics Summit (15-20 March) in Denver, Colorado, can find out more by visiting the Danish Quantum Pavilion at the co-located industry exhibition.
The post International scientists head into the fast-lane of Denmark’s burgeoning quantum ecosystem appeared first on Physics World.
At low exciton density, a superfluid suddenly stops flowing
Physicists say they may have observed a supersolid phase in a superfluid
The post At low exciton density, a superfluid suddenly stops flowing appeared first on Physics World.
Physicists at Columbia University in the US say they may have found evidence for a phenomenon in which a superfluid suddenly stops flowing inside a solid-state material. If confirmed, the finding – made in experiments using two atom-thin layers of graphene – could be the first superfluid-to-insulator phase transition ever observed in a naturally occurring material.
“For the first time, we’ve seen a superfluid undergo a phase transition to become what appears to be a supersolid,” says Cory Dean, who led the new study. “It’s like water freezing to ice, but at the quantum level.”
Supersolids are a hypothetical state of matter that can be both liquid- and solid-like at the same time – that is, they have a crystal structure and superfluid properties. In this description, first put forward by physicists in the 1970s, the crystal lattice and superfluidity are all part of the same phase coherent ground state and are not two separate systems, explains Dean.
In the new work, the researchers studied graphene, which is a sheet of carbon just one atom thick. When two sheets of graphene are placed atop each other, they can be manipulated so that one layer contains extra electrons and the other extra holes.
The electrons and holes can combine to form quasiparticles known as excitons, which can then travel through the graphene bilayer as a superfluid when a strong magnetic field is applied.
Graphene, sometimes called the “wonder material”, is ideal for such fundamental physics studies because its properties can be fine-tuned by adjusting parameters like temperature, the applied electromagnetic fields and even the distance between the layers.
Controlling the density of excitons
In their experiments, Dean and co-workers were able to move the excitons in their bilayer samples by applying electric fields of opposite sign to the two layers. This, explains Dean, causes the positive and negative parts of each exciton to be pulled in the same direction, allowing them to indirectly drive and detect exciton flow. This ability to control layer imbalance allowed the team to tune the exciton density. Such control is normally hard to achieve because excitons are electrically neutral and do not respond directly to ordinary electrical measurements, which makes tracking their motion difficult.
Thanks to their technique, which they detail in Nature, the researchers found that at high densities, the excitons behaved like a superfluid. At lower densities, however, these excitons “froze” and the superfluid became insulating. Even more striking, says Dean, is that warming the system restored the superfluid flow. “This result suggests that a supersolid-like phase emerges spontaneously, driven solely by particle interactions.”
The Dean lab has been studying the superfluid exciton phase for many years, though most of their work to date focused almost exclusively on the “layer balanced” condition that occurs when there is an equal density of electrons and holes in the two graphene layers. More recently, they began to study the layer imbalanced regime, which has been much less explored in experiments.
“To our surprise we found that under very large imbalance, the exciton transitions to an insulating state beyond some critical imbalance,” says Dean. “This observation alone could have many trivial explanations, but the real shock came when we found that upon heating the system, the superfluid is recovered.”
This behaviour, which has been discussed in some theoretical literature, has no precedent in any existing experiment on superfluidity, he explains, making it something researchers should try to understand better.
“To view the situation in the opposite sense: when cooling a fluid and it transitions to a superfluid, the superfluid is already in a thermodynamic ground state. So why, upon further cooling, should it undergo a transition to any other phase?” asks team member Jia Li. “We eventually realized that in our experiment, the role of layer imbalance is really a tuning of the exciton density, and the insulating phase onsets when the exciton density crosses a critical value,” he tells Physics World. “Once we had adopted this view, understanding the observed phase transition, and how it fits in with existing theoretical predictions, fell into place.”
A true supersolid or not?
While the researchers say they have firmly established the existence of an insulating state within the superfluid phase diagram, whether this state is truly a supersolid or some other as-yet unknown quantum ground state remains less clear. The challenge with understanding an insulating material is that it becomes more difficult to probe its behaviour, says Dean. “This is made even more difficult by the experimental requirements to stabilize the insulating phase: we need ultraclean samples, low temperatures and high magnetic fields.”
And the difficulties do not end there: “having to work with strong magnetic fields also limits what experimental probes we can use,” he adds. “To progress further, we need to develop new tools to probe the insulating state – for example, we are developing a scan probe technique that we hope can directly image and spatially map the exciton condensate.”
“We have also been working on realizing this condensate in material systems with strong interactions that do not require magnetic fields,” he reveals.
The post At low exciton density, a superfluid suddenly stops flowing appeared first on Physics World.
Wanted: an electrical grid that runs on 100% renewable energy
Global conflicts are making renewable energy more attractive, but an all-renewable grid will require solving physics problems as well as political and economic ones
The post Wanted: an electrical grid that runs on 100% renewable energy appeared first on Physics World.
With the conflict in Iran and the resulting closure of the Strait of Hormuz pushing oil and gas prices upwards, the prospect of a world that runs on 100% renewable energy seems even more attractive than usual. Before we can get there, though, experts in a range of fields say we’re going to need to solve a few physics problems – including one that goes straight back to Maxwell’s equations.
Unlike energy that comes from processes such as burning fossil fuels, sending water downhill through turbines, or harnessing the heat from nuclear reactions, the supply of wind and solar energy varies in ways we cannot control. To complicate matters further, consumer demand also varies, and the two variations “do not necessarily match in time or in space” observes Michael Jack, a physicist at the University of Otago in New Zealand.
Speaking on Monday at the American Physical Society’s Global Physics Summit in Denver, Colorado, Jack explained that there are two ways of making sure demand matches supply in an all-renewable grid. The first is to smooth out demand over time, for example by storing energy in batteries and using it when the wind isn’t blowing or the Sun isn’t shining. The second is to smooth out demand over space, for example by creating a grid that connects large numbers of consumers. “It’s very unlikely that all consumers’ demand will peak at the same time,” Jack noted.
To understand how peak demand scales with the number of consumers, Jack and his colleagues are using tools from an area of mathematics called extreme value theory. As its name implies, the goal of extreme value theory is to understand the probability of events that are either extremely large or extremely small compared to the norm. Once we can do that, Jack told the APS audience, we’ll be able to build renewable energy systems that deal efficiently with periods of peak demand.
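As a rough illustration of the statistics involved – this is not Jack’s model, and the baseline demand, fluctuation size and time window below are invented numbers – the following Python sketch draws random hourly demand for groups of consumers and looks at how the peak of the aggregate grows with the size of the group.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 30          # one month of hourly demand (illustrative)
trials = 50              # repeat to average over random realizations

def mean_peak(n_consumers):
    peaks = []
    for _ in range(trials):
        # Each consumer: 1 kW baseline plus independent random fluctuations
        demand = np.clip(1.0 + 0.5 * rng.standard_normal((n_consumers, hours)), 0, None)
        peaks.append(demand.sum(axis=0).max())   # peak of the aggregate demand
    return np.mean(peaks)

for n in (1, 10, 100, 1000):
    p = mean_peak(n)
    print(f"{n:5d} consumers: aggregate peak ≈ {p:8.1f} kW ({p / n:.2f} kW per consumer)")
```

Because individual peaks rarely coincide, the peak demand per consumer shrinks as the group grows; extreme value theory is the machinery for making that scaling precise.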
“The opposite of quantum mechanics”
Another speaker in the same session, Charles Meneveau, is working on the supply side of the variability problem. As a fluid dynamics expert at Johns Hopkins University in Maryland, US, his goal is to understand how turbulent gusts of wind lead to fluctuations in the power output of wind farms – a problem he described as “the opposite of quantum mechanics” because “it’s intuitive and we feel like we understand it, but we can’t compute it”.
Meneveau and his collaborators began by building a micro-scale wind farm, sticking it in a wind tunnel and monitoring how it behaved. More recently, they’ve added computer simulations to the mix, generating around a petabyte of simulated turbulence data.
As expected, these studies showed that the power output of an array of turbines fluctuates much less than the output of a single turbine. However, an array’s output does spike at intervals set by the rotation frequency of the turbine blades, and also when gusts of wind propagate from one turbine to the next. Meneveau has developed a model that can predict this second type of spike, and he’s now working to extend it to floating offshore wind farms, which experience watery turbulence as well as the windy kind.
Everything under control
The third speaker in the session, Bri-Mathias Hodge, is an energy systems engineer at the University of Colorado, Boulder. He’s interested in ways of ensuring that renewable energy systems remain stable in the face of disturbances that could otherwise send the grid into a tailspin, leading to blackouts like the one that struck the Iberian Peninsula in 2025.
Hodge explained that in traditional grids dominated by thermal energy sources, one of the main ways of maintaining stability is to use devices called synchronous machine generators. These are essentially large rotating masses that all spin at the same rate: the frequency of the grid, which in the US is 60 Hz. When coupled to an AC power system, they give the system a degree of inertia, enabling it to resist potentially damaging fluctuations in the supply of electricity.
These devices have existed for 100 years, and Hodge says our current power system is designed around them. But because renewable energy generation is primarily DC rather than AC, an all-renewable grid will require a fundamentally different approach. “We have to reimagine what the system looks like when we have 100% renewable energy,” Hodge told the APS audience.
The solution, Hodge explained, is to replace synchronous machine generators with electronic inverters. These devices have the advantage of reacting much faster to system fluctuations. However, they also come with a big disadvantage. Unlike massive spinning objects that follow ponderous Newtonian physics, they don’t react automatically. They have to be told, and Hodge says that will require completely different control systems than the ones used in today’s electrical grids.
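To get a feel for why that inertia matters, the toy calculation below integrates the classical swing equation found in power-systems textbooks. It is purely illustrative and not taken from Hodge’s talk; the disturbance size, damping term and inertia constants are made-up values.

```python
import numpy as np

f0 = 60.0                  # nominal grid frequency (Hz)
dP = -0.1                  # sudden loss of 10% of generation (per unit)
D = 1.0                    # crude load-damping coefficient (per unit)
dt, t_end = 0.01, 5.0      # time step and window (s)

def frequency_trace(H):
    """Integrate the classical swing equation 2H d(dw)/dt = dP - D*dw,
    where dw is the per-unit frequency deviation and H is the inertia
    constant in seconds. Toy numbers, for illustration only."""
    dw, trace = 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        dw += dt * (dP - D * dw) / (2.0 * H)
        trace.append(f0 * (1.0 + dw))
    return np.array(trace)

for H in (5.0, 1.0):       # a thermal-heavy grid versus a low-inertia one
    f = frequency_trace(H)
    rocof = (f[1] - f[0]) / dt
    print(f"H = {H:3.1f} s: initial df/dt ≈ {rocof:+.2f} Hz/s, "
          f"frequency after 5 s ≈ {f[-1]:.1f} Hz")
```

In this sketch the low-inertia grid’s frequency initially falls several times faster after the same disturbance – exactly the kind of rapid excursion that inverter control systems would have to be programmed to catch.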
Return of Maxwell’s equations
While studying this problem, Hodge realized that the engineers who designed electrical grids back in the 1960s made an important simplifying assumption. Because they were working with a system composed entirely of thermal, synchronous generators (and because they were doing all their calculations with slide rules), they treated voltage as being separate from frequency, even though the two are inherently coupled. In other words, they treated the grid as an electromechanical network rather than an electromagnetic one.
To understand how this simplification plays out in a renewable-dominated grid, Hodge and colleagues went back to Maxwell’s equations. Specifically, they focused on what these equations have to say about the momentum associated with a mass that is moving around in an electromagnetic field. In an electrical grid controlled by large inertias from thermal generators, this momentum isn’t important. But in a renewable-dominated grid, Hodge says it can’t be ignored.
He and his colleagues have therefore developed a new model of electric power networks that highlights the significance of this electromagnetic momentum and restores the link between frequency and voltage dynamics. Ultimately, though, Hodge says that avoiding blackouts in an all-renewable energy system will require advances in simulation technologies. “We need to improve our decision-making processes on a whole range of timescales, from seconds to years,” he concluded.
The post Wanted: an electrical grid that runs on 100% renewable energy appeared first on Physics World.
Ask me anything: Giannis Zacharakis – ‘The ability to pursue questions that genuinely interest you is a privilege’
Giannis Zacharakis is a biophotonics and biomedical imaging researcher and CEO of the precision photonics spin-off Kymatonics
The post Ask me anything: Giannis Zacharakis – ‘The ability to pursue questions that genuinely interest you is a privilege’ appeared first on Physics World.
Giannis Zacharakis is a research director at the Institute of Electronic Structure and Laser (FORTH) in Greece, where he leads the Laboratory for Biophotonics and Molecular Imaging. Zacharakis has served as president and vice-president of the European Society for Molecular Imaging. His main focus is on developing key enabling technologies for imaging biological processes in living systems.
Zacharakis is also the CEO of the precision photonics spin-off Kymatonics. The company recently secured a highly competitive €2.1m European Innovation Council (EIC) Transition Open grant, to advance the development and commercialization of their innovative wavefront-shaping objective lens.
What skills do you use every day in your job?
My everyday work involves both hard and soft skills, which are equally important for a successful career.
At its core, my work is about asking questions and defining the path to discovery, through scientific knowledge and rigour. This requires being able to break down complex physical and biological problems into manageable and measurable components under certain hypotheses. Much of my day therefore involves analytical thinking and judgement: evaluating whether an observed effect is physically meaningful or an artefact of instrumentation or data processing. That defines the path forward.
Problem solving constantly requires creativity and thinking out of the box, because experiments rarely behave exactly as planned. You need patience, persistence and the ability to stay calm when instruments misbehave or data contradict expectations.
Communication is another central skill. I regularly explain technical concepts to students, collaborators from other disciplines, and biologists or clinicians who may not share the same vocabulary. Translating physics into accessible language, without oversimplifying the science, is something I consciously practise and it takes time and effort to achieve.
Project management also plays a surprisingly large role. Co-ordinating experiments, supervising students, meeting deadlines for proposals or manuscripts, and balancing long-term research goals with short-term deliverables requires structured planning.
Finally, mentoring is an important part of my routine. Guiding students and young scientists through experimental design, encouraging independent thinking, and helping them develop scientific confidence is both a responsibility and an integral component of academic work.
Essentially, while physics provides the foundation, my job relies on a blend of analytical rigour, practical problem-solving, communication and leadership.
What do you like best and least about your job?
What I value most is intellectual freedom: the ability to pursue questions that genuinely interest you is a privilege. There is something deeply satisfying about seeing a concept move from hypothesis to experimental evidence. Even incremental progress can feel meaningful when it clarifies a mechanism or resolves ambiguity.
I also appreciate the interdisciplinary environment. Working at the interface of physics, biology and biomedicine forces me to continuously learn and think beyond boundaries. It prevents intellectual stagnation and keeps curiosity alive.
Mentoring students is another highlight. Watching someone gain confidence, moving from following instructions to proposing their own ideas, is deeply rewarding. Research training is not only about technical knowledge; it is also about developing judgement and rigour.
On the more challenging side, uncertainty is a constant companion. Funding cycles; competitive grant applications and proposal rejections; and the unpredictability of research outcomes can be demanding. Not every idea works, and not every effort translates into immediate output. Maintaining momentum despite setbacks requires persistence and resilience.
Administrative responsibilities can also fragment time and reduce deep focus. Balancing research, supervision and institutional duties often requires careful prioritization.
What do you know today that you wish you knew when you were starting out in your career?
I wish I had understood earlier that uncertainty is not a sign of inadequacy but is the natural state of research. Early in my career, I expected clarity to come quickly if I worked hard enough. In reality, meaningful progress often requires extended periods of ambiguity. Learning earlier to tolerate that and even see it as productive would have reduced unnecessary self-doubt.
I also underestimated the importance of communication. Being technically correct is not enough; ideas need structure, clarity and narrative. Writing well and presenting clearly are not secondary skills; they are core scientific tools.
Another lesson is that collaboration is essential. Scientific progress increasingly happens at disciplinary boundaries with impactful discoveries emerging at interfaces. Engaging with people who think differently challenges assumptions and strengthens work.
Finally, remember that career paths are less rigid than they appear. There is rarely a single “correct” trajectory. Developing transferable skills, analytical thinking, adaptability, mentoring and project management provides resilience across different opportunities.
I would tell my younger self to focus less on short-term milestones and more on building depth, clarity of thought and professional relationships. Those foundations endure longer than any single milestone.
The post Ask me anything: Giannis Zacharakis – ‘The ability to pursue questions that genuinely interest you is a privilege’ appeared first on Physics World.
Single metasurface could generate record numbers of trapped neutral atoms
Technique boosts prospects for building quantum computers with more than 100,000 qubits
The post Single metasurface could generate record numbers of trapped neutral atoms appeared first on Physics World.
Physicists in China have demonstrated that a structure called an optical metasurface can individually trap up to 78,400 neutral atoms – a promising development in efforts to build a large-scale quantum computer. The method, which is similar to one demonstrated independently by a team at Columbia University in the US, could help overcome a troublesome bottleneck for computers that use neutral atoms as their quantum bits (qubits).
Arrays of trapped neutral atoms are widely employed in physics research, and they are a promising platform for quantum computing. Their main drawback is scalability, explains physicist Zhongchi Zhang, who co-led the new study together with his Tsinghua University colleague Xue Feng. The components normally used to make such arrays, such as spatial light modulators (SLMs) and acousto-optic deflectors (AODs), can only create around 10,000 atom traps at any one time, and are thus limited to a maximum of 10,000 atomic qubits.
Flat optical surfaces made up of 2D arrays of metasurfaces
In their work, which is detailed in Chinese Physics Letters, Zhang and colleagues replaced SLMs and AODs with two-dimensional arrays of metasurfaces – artificial nanostructures that manipulate light in much the same way as traditional optics, but with far less bulk. To do this, they used a method known as a weighted Gerchberg-Saxton algorithm to design a metasurface made up of nanoscale pillars that can transform a single input laser beam into a 280 x 280 array. They then constructed this metasurface from silicon nitride using electron-beam lithography and reactive ion etching. Both methods are compatible with standard complementary metal–oxide–semiconductor (CMOS) manufacturing techniques and are thus highly reproducible.
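For readers curious about the design step, the sketch below shows the core loop of a weighted Gerchberg–Saxton calculation in Python. It is a generic illustration of the hologram-optimization idea rather than the team’s actual code, and it produces a phase mask for a small toy spot grid; in the real device, such a phase profile is translated into the geometry of the silicon nitride nanopillars.

```python
import numpy as np

def weighted_gs(target, n_iter=50, seed=0):
    """Minimal weighted Gerchberg–Saxton sketch (illustrative only).

    target : 2D array of desired focal-plane intensities (a grid of spots).
    Returns a phase profile that, imprinted on a uniform beam, reproduces
    the spot pattern in the focal (Fourier) plane.
    """
    rng = np.random.default_rng(seed)
    amp_target = np.sqrt(target)
    spots = amp_target > 0
    weights = np.ones_like(amp_target)
    phase = 2 * np.pi * rng.random(target.shape)          # random starting phase
    for _ in range(n_iter):
        focal = np.fft.fft2(np.exp(1j * phase))           # propagate to focal plane
        amp = np.abs(focal)
        # Boost the weight of spots that came out dimmer than average
        weights[spots] *= amp[spots].mean() / np.maximum(amp[spots], 1e-12)
        # Impose the weighted target amplitude but keep the computed phase
        focal = weights * amp_target * np.exp(1j * np.angle(focal))
        # Propagate back and keep only the phase (a phase-only element)
        phase = np.angle(np.fft.ifft2(focal))
    return phase

# Toy example: an 8 x 8 grid of spots standing in for the 280 x 280 tweezer array
tgt = np.zeros((256, 256))
tgt[64:192:16, 64:192:16] = 1.0
phase_profile = weighted_gs(tgt)
```

The weighting step is what distinguishes this from the plain Gerchberg–Saxton algorithm: spots that emerge too dim are boosted on the next pass, which is how a high intensity uniformity across the array can be reached.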
The result is a set of nanoscale, light-manipulating, pixel-like structures that act like a superposition of tens of thousands of flat lenses. When a laser beam hits these “lenses”, they produce a unique pattern that contains tens of thousands of focal points. As long as the laser light is intense enough, each of these focal points can be used to trap and manipulate atoms via a well-established technique called optical tweezing.
Zhang explains that the main advantage of trapping atoms this way is that the metasurface generates the array of optical tweezers on its own, without the need for additional bulky and expensive optical components such as microscope objectives to focus the light. Another benefit is that such arrays are very robust to high laser intensities, which are a prerequisite when the goal is to trap hundreds of thousands of atoms. Indeed, Zhang says that arrays of this type can handle powers several orders of magnitude higher than is possible with arrays made using SLMs and AODs. The intensity of the light is also highly uniform (90.6%) across the array, and individual beams feature an Airy disk-like profile with an average first dark radius of around 1.017 µm – parameters that Zhang says are “ideal for trapping single atoms”.
Improving fault-tolerant quantum computing
“Our work addresses the critical need for scalable physical qubit arrays required for improving ‘fault-tolerant’ quantum computing and making it more robust to errors,” Zhang tells Physics World. “Since quantum error-correcting codes may call for hundreds of physical qubits to build a single logical qubit, scalability here becomes paramount.”
Researchers at Columbia University also recently demonstrated an atom-trapping array that replaced SLMs and AODs with flat optical metasurfaces. But whereas the Columbia team managed to create 360,000 tweezers with extreme pixel efficiency (around 300 pixels/tweezer, with over 95% uniformity), the Tsinghua University group prioritized the array’s robustness at higher laser power, achieving around 1354 pixels/tweezer. Both studies have validated the use of metasurfaces as a scalable platform beyond the limitations imposed by AODs and SLMs, says Zhang.
Spurred on by their preliminary results, Zhang and colleagues report that they are now fabricating a 19.5 mm-diameter metasurface designed to generate approximately 18,000 optical trapping sites. Their goal is to place this metasurface outside the vacuum chamber that contains the trapped atoms. “Such an external configuration represents a significant departure from conventional approaches and is expected to enable the trapping of over 10,000 atoms, surpassing current records while substantially simplifying the experimental setup,” Zhang explains.
The team is also developing a next-generation integrated architecture in which metasurfaces will replace the fluorescence imaging microscopes used to characterize trapped atoms, as well as the optical tweezer arrays used to trap them. “This approach aims to create a completely new system paradigm for neutral-atom quantum computing that eliminates the need for traditional bulky optics, enabling unprecedented compactness and scalability for future quantum processors,” Zhang says.
The post Single metasurface could generate record numbers of trapped neutral atoms appeared first on Physics World.
Physicists demonstrate long-predicted exotic magnetic phases in 2D material
Observations of how magnetism behaves in atomically thin materials could pave the way for new generations of ultracompact magnetic technologies
The post Physicists demonstrate long-predicted exotic magnetic phases in 2D material appeared first on Physics World.
Physicists in the US and Taiwan have performed new experiments that verify long-standing theoretical predictions of how long-range magnetic order can emerge in atomically thin materials. Led by Edoardo Baldini at the University of Texas at Austin, the researchers showed how the transformation occurs through two distinct phase transitions – possibly paving the way for new generations of ultracompact magnetic materials.
Atomically thin two-dimensional (2D) materials are widely studied for their diverse electrical, optical, mechanical and thermal properties. So far, however, their magnetic properties have generally remained far more elusive. Underlying the problem are inevitable thermal fluctuations, which make it extremely difficult to sustain magnetic order over distances larger than atomic scales.
For decades, theorists have investigated a possible exception to this rule in “2D XY” systems, which feature flat arrays of spins that can rotate continuously within the plane and interact with neighbouring spins. One particularly interesting extension of this model describes how a phase transition can occur when these spins become locked into one of six preferred directions, corresponding to the symmetry of the crystal lattice.
“In the 1970s, theoretical work showed that 2D XY magnetic systems with this six-fold anisotropy could exhibit an unusual sequence of phase transitions described by the six-state ‘clock model’, including an intermediate Berezinskii–Kosterlitz–Thouless (BKT) phase,” Baldini explains. “These ideas became central to the theory of low-dimensional magnetism.”
Since these theories emerged, however, such effects have proven far more challenging to observe in real 2D materials.
Verifying the predictions
To tackle this challenge, Baldini’s team turned to a technique involving nonlinear optical microscopy, based on second-harmonic generation, in which a material probed by intense light at one frequency emits secondary light at twice that frequency. Crucially, the polarization of this secondary light is highly sensitive to magnetic behaviour. This allowed the researchers to examine magnetic order in the atomically thin antiferromagnet nickel phosphorus trisulphide (NiPS3) without disrupting the system with invasive electrical contacts.
“By tracking how the optical response evolves with temperature, we were able to directly follow successive magnetic phase transitions and determine the universality class of the emergent magnetic phases,” Baldini explains. “In addition, polarization-resolved measurements allowed us to reconstruct the symmetry of the magnetic order parameter.”
As the researchers cooled the material, their measurements revealed two key phase transitions – each occurring suddenly below a distinct critical temperature. “The first transition marks the onset of a BKT phase, an unusual state in which magnetic correlations extend over long distances without forming conventional long-range order,” Baldini says.
In this phase, the material forms bound pairs of vortices and antivortices: topological defects in the spin field triggered by thermal fluctuations. Within these swirling patterns, spins collectively curl around single points, either in clockwise or anticlockwise directions.
At higher temperatures, these swirling patterns are isolated and can roam freely through the material, disrupting the emergence of long-range magnetic order. But when vortices and antivortices are bound together, their disruptive influences largely cancel each other out, allowing spin correlations to persist over longer distances while still remaining sensitive to thermal fluctuations.
As the researchers cooled the NiPS3 further, they observed a second phase transition, in which vortices and antivortices are suppressed and a six-state clock phase emerges. But this symmetry was constrained even further: across the whole system the six possible spin orientations could themselves be arranged in just two distinct ways. This interplay between six- and two-fold anisotropy ultimately gives rise to stable long-range magnetic order, just as earlier theories had predicted.
Through their experimental validation, the team’s results shed new light on the rich and unexpected magnetic phenomena that can emerge in 2D materials. Revealing two distinct phases, the work highlights how magnetism can arise in fundamentally different ways from those seen in more familiar three-dimensional materials.
“More broadly, these results establish atomically thin magnets as a powerful platform for exploring topological phase transitions and may inspire new approaches to controlling magnetism at the nanoscale for future ultracompact technologies,” Baldini says.
The findings are reported in Nature Materials.
The post Physicists demonstrate long-predicted exotic magnetic phases in 2D material appeared first on Physics World.
Inside the world’s particle‑physics labs: Global Physics Photowalk 2025 winners revealed
This is the fifth international photowalk following events held in 2010, 2012, 2015 and 2018
The post Inside the world’s particle‑physics labs: Global Physics Photowalk 2025 winners revealed appeared first on Physics World.
From an image of a detector hunting for signs of dark matter to a picture of a deep-sea neutrino telescope studying astrophysical phenomena, the winning entries for the 2025 Global Physics Photowalk have been announced by the Interactions Collaboration – an international network of particle physics institutions.
Some 16 labs around the world took part in the event, opening their doors for a day in 2025 to amateur and professional photographers.
Each lab then entered its top three images into the global competition. From those 48 images, a panel of judges selected their top three photos, while the public chose its three favourite images via an online vote held on 13–27 January.
Marco Donghia’s photograph, main image above, was picked in first place by the judges. It features a researcher sitting in front of the Cryogenic Laboratory for Detectors, which is based at the INFN National Laboratories of Frascati. The experiment aims to detect extremely weak and rare signals such as those produced by dark matter.

“Finding out I had won left me speechless,” notes Donghia. “The cryostat I photographed is just a few fractions of a degree above absolute zero, yet this recognition filled me with such warmth and emotion that no cryogenic temperature could cool them down.”
The image on the left won first place in the public vote. It shows the back of the linear accelerator of SPIRAL2 at the Large Heavy Ion National Accelerator, GANIL, based in Caen, France.
Second place in the judges’ competition, meanwhile, went to Matteo Monzali for his photo, shown below, of the Advanced Gamma Tracking Array photon detector coupled with the PRISMA magnetic spectrometer.
The experiment is based at the TANDEM-ALPI-PIAVE accelerator complex at INFN National Laboratories in Legnaro.

Third place in the judges’ competition, shown below, went to a spectacular close-up image of a photomultiplier from the KM3NeT/ORCA experiment, a neutrino telescope currently being installed in the Mediterranean Sea off the coast of Provence, France.

This is the fifth international Photowalk following events held in 2010, 2012, 2015 and 2018.
All 48 images that were submitted to the 2025 competition can be viewed here.
The post Inside the world’s particle‑physics labs: Global Physics Photowalk 2025 winners revealed appeared first on Physics World.
Stripes of Enceladus: a jigsaw puzzle
Can you reconstruct the astrophysics image we’ve pulled apart?
The post Stripes of Enceladus: a jigsaw puzzle appeared first on Physics World.
There are two difficulty settings: choose between an 88-piece jigsaw and the 40-piece version.
Image courtesy: NASA/JPL/Space Science Institute
Fancy some more? Check out our puzzles page.
The post Stripes of Enceladus: a jigsaw puzzle appeared first on Physics World.
Self-healing materials could make automobile parts last over 100 years
Scientists have created a material with the ability to repeatedly and autonomously repair cracks
The post Self-healing materials could make automobile parts last over 100 years appeared first on Physics World.
Researchers from North Carolina State University and the University of Houston have achieved sustained self-healing of a composite material. The findings promise to extend the lifetime of aircraft and automotive parts by a century, according to a recent paper published in the Proceedings of the National Academy of Sciences.
Composite materials bond two or more components to achieve balanced strength, flexibility and durability. Bone is a naturally occurring example, combining flexible collagen fibres with the stiffness of various minerals. Fibre-reinforced polymers (FRPs) are synthetic analogues that embed strong fibres within a polymer matrix to achieve similar material advantages, making them ubiquitous in aerospace, naval and wind energy sectors.
While bonding multiple layers is necessary for strength, it makes the material prone to interlaminar delamination, or the separation of layers. Lead researcher Jack Turicek describes this type of delamination as “one of the most common and life-limiting failure modes in FRPs”. While nature boasts the remarkable ability to autonomously and repeatedly heal from delamination, achieving a similar feat in synthetic materials has only now become possible.
Healing by thermal remending
The researchers used a method known as “thermal remending” to enable self-healing. First, a healing agent, poly(ethylene-co-methacrylic acid) or EMAA, is embedded into a glass-fibre epoxy-matrix composite during curing. This forms strong covalent bonds between EMAA and the epoxy.
To test their materials, the researchers systematically created a fracture by applying controlled tensile loading until the fracture reached 50 mm. Then, to initiate healing, they warmed the material using built-in electrical heaters. The heat vaporized small water bubbles created during the initial curing process, which produced a microporous network that physically expanded and spread the EMAA into the fracture – the so-called “pressure delivery mechanism”.
Afterwards, 30 min of natural convective cooling to room temperature allowed the EMAA to solidify, forming new hydrogen and ionic bonds between EMAA and epoxy. The bonds reconnected the interfaces that had fractured, recovering the structural integrity of the material.

The team repeated the entire procedure over 1000 cycles. Such a prolonged study was previously infeasible due to multi-day cycle lengths. In this work, the researchers set up programmable electrical, thermal and mechanical devices that automatically initiated fracturing, sensed progress to trigger healing, and monitored the rebonded crack before repeating the cycle. This automation reduced cycle lengths to an hour and the full experiment to only 40 days.
Understanding sustained healing
The team quantified the healing effectiveness using the critical strain energy release rate (GIC), a measure of the energy required to propagate a crack. A high GIC means that the material is resilient and well-healed. The EMAA-containing material showed maximum healing at test cycle 7, with 230% of the GIC value of an FRP containing no EMAA. The results declined to 180% by cycle 100 and 60% by cycle 1000. When the data was fitted to a Weibull distribution, a common model for material failure, healing asymptotically approached a lower limit of 40% – suggesting that sustained repair is possible.
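To show the shape of such a decay towards a lower limit – with parameter values that are illustrative guesses chosen only to roughly mimic the figures quoted above, not the values from the paper’s fit – a Weibull-type model can be written in a few lines of Python.

```python
import numpy as np

def healing_efficiency(n, eta_inf=0.40, eta0=2.3, lam=400.0, k=1.0):
    """Weibull-type decay of healing efficiency towards a lower limit eta_inf.
    All parameter values here are illustrative guesses, not the paper's fit."""
    return eta_inf + (eta0 - eta_inf) * np.exp(-(n / lam) ** k)

for n in (7, 100, 1000, 10_000):
    print(f"cycle {n:6d}: healing ≈ {healing_efficiency(n) * 100:3.0f}% of the control GIC")
```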
Optical and electron microscopy revealed two reasons for the observed decline in healing performance. First, the repeated fracture and healing process resulted in accumulation of glass fibre debris in EMAA, which blocked bonding sites. Second, chemical reactions between EMAA and the epoxy matrix are responsible for creating strong covalent bonds between them (necessary for cohesive fracturing of EMAA) and producing the bubbles for the pressure delivery mechanism. The microscopy showed a decline in both reactions, reducing the effectiveness of fracture recovery.
From prototype to practice
Out of the 1000 cycles tested, the self-healing composite maintained over 100% fracture recovery compared with non-EMAA materials for 500 cycles. Based on a 500-cycle lifetime, parts made using the new material could last 125 to 500 years, assuming a quarterly or annual repair schedule (500 healing cycles at four repairs a year corresponds to 125 years; at one repair a year, to 500 years) – a timeline that far exceeds current design lifetimes of about 40 years.
Integration with existing industry infrastructure is forthcoming. “We have designed both the healing agent interlayers and the resistive heaters to be easily integrated into real-world composites with existing fabrication processes. These functional components enable in situ self-healing (i.e., in the service environment) via electrical power input to the heaters,” says Turicek. “To enable autonomous self-healing, a sensing element that can detect damage is needed to automatically trigger the power on, and power off once repaired. We have such technology on the near horizon.”
The technology has been patented by Jason Patrick, the principal investigator of this research and chief technology officer of the startup company Structeryx. Patrick says that the company intends to “engage with existing and new defence/industry partners to customize the technology for various needs”, in addition to scaling manufacturing.
While we often search for ways to fix broken items, materials of the future may perhaps fix themselves.
The post Self-healing materials could make automobile parts last over 100 years appeared first on Physics World.
A bursting bubble can make a puddle jump
In a breakthrough in droplet physics, researchers find a way to get centimetre-scale water droplets to jump into the air
The post A bursting bubble can make a puddle jump appeared first on Physics World.

On a quiet spring morning, when dew settles on leaves, something curious sometimes happens. A droplet sitting there peacefully will suddenly lift off. No wind. No vibration. Just a tiny leap into the air.
Physicists call this phenomenon droplet jumping. In simple terms, it means that a droplet lifts off from the surface it sits on. If a raindrop hits a leaf and rebounds upward, that rebound can also be considered droplet jumping.
While this may seem like a minor detail in fluid behaviour, removing liquid from surfaces is important for many technologies. When droplets detach from a contaminated surface, they can carry away particles, a process that forms the basis of self-cleaning materials. When droplets leave hot surfaces, they remove heat. And on cold surfaces, quickly removing droplets can help prevent ice buildup.
For years, scientists believed that there was a physical limit to how large these jumping droplets could be. A new study published in Nature has now shown that this limit can be broken, with the help of a bubble.
The research was headed up by Jiangtao Cheng’s lab at Virginia Tech, and performed in collaboration with researchers from the Hong Kong University of Science and Technology and Wuhan University of Technology.
A stubborn limit in droplet physics
Within a droplet, two forces compete constantly: surface tension and gravity.
Surface tension tries to pull the droplet into a sphere, which minimizes its surface area and, therefore, its energy. Gravity, meanwhile, pulls the droplet downward, flattening it against the surface.
The balance between these two forces defines the so-called capillary length, which for water is 2.7 mm. Below this length, surface tension dominates and droplets can sometimes propel themselves upward; above it, gravity takes over.
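The capillary length follows from equating the two effects: it is the square root of the surface tension divided by the liquid’s density times the gravitational acceleration. A couple of lines of Python, using standard room-temperature values for water, recovers the 2.7 mm figure.

```python
from math import sqrt

gamma = 0.072   # surface tension of water at room temperature (N/m)
rho = 1000.0    # density of water (kg/m^3)
g = 9.81        # gravitational acceleration (m/s^2)

capillary_length = sqrt(gamma / (rho * g))   # metres
print(f"capillary length of water ≈ {capillary_length * 1e3:.1f} mm")   # ≈ 2.7 mm
```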
This balance has long been a fundamental barrier in the field of self-propelled droplet jumping. “For droplets larger than the capillary length, gravity dominates,” Cheng tells Physics World. “Simply releasing surface energy from shape relaxation is no longer sufficient to generate enough upward momentum for jumping.”
That is why most previous studies have observed droplets no larger than about 3 mm jumping on their own.
Inspiration from nature
The idea behind the new research began with observations in nature. First author Wenge Huang, who grew up in rural South China, often saw dew droplets on lotus leaves containing tiny air bubbles. Occasionally, when those bubbles burst, the droplets moved.
Years later, that observation led to a question: “Could a bubble trapped inside a droplet provide the extra energy needed for jumping?”
A bubble-powered launch
To test this idea, the researchers placed a water droplet on a superhydrophobic surface, which strongly repels water. They then injected air into the droplet using a fine needle, forming a bubble inside the liquid. After a short time, the bubble burst.
High-speed cameras captured what happened next: the droplet lifted cleanly off the surface.
What surprised the researchers most was that droplets nearly 1 cm wide were able to jump – far exceeding the previously accepted capillary length limitation.
A bubble inside the droplet creates additional air–liquid interfaces, increasing the system’s stored surface energy while adding almost no mass. When the bubble bursts, that energy is released as capillary waves that focus momentum upward.
“Embedding a bubble increases the system’s surface energy without increasing its weight,” explains Cheng.
Small bubbles, strong possibilities
The researchers also found that the mechanism was extremely efficient, converting more than 90% of the energy into upward momentum, well above that of many conventional droplet propulsion methods.
The implications extend beyond basic physics; the discovery could help improve self-cleaning surfaces, heat transfer systems and anti-icing technologies. The bubble-burst process can also create directional liquid jets, which could be useful for microscale 3D printing and material deposition.
In simple terms, the study revealed something unexpected. A single bursting bubble can launch a much larger droplet than scientists once thought possible, even at the centimetre scale.
The post A bursting bubble can make a puddle jump appeared first on Physics World.
Word flower puzzle no. 1
How many words can you find in this new interactive puzzle?
The post Word flower puzzle no. 1 appeared first on Physics World.
How did you get on?
20 words Warming up nicely
32 words Getting hot, hot, hot
45 words Top dog!
Fancy some more? Check out our puzzles page.
The post Word flower puzzle no. 1 appeared first on Physics World.
Droplet scientists push the boundary between living and non-living matter
Systems governed by chemistry and physics, not biology, can behave in surprisingly lifelike ways, as Giorgio Volpe, Rob Malinowski and Joe Forth explain
The post Droplet scientists push the boundary between living and non-living matter appeared first on Physics World.
In this episode of the Physics World Weekly podcast, we hear from a trio of scientists with a common interest in the physics of droplets. Specifically, Joe Forth, Rob Malinowski and Giorgio Volpe share a fascination with droplets that are “animate” – that is, capable of responding to their surroundings in ways that resemble the behaviour of living organisms.
As they explain in the podcast, systems must tick three boxes to qualify as animate. First, they must be active, able to use energy from their environment to do work and perform tasks. Second, they must be adaptive, able to move between different dynamical states in response to changes to their environment or their own internal states. Finally, they must be autonomous, able to process multiple inputs and choose how to respond to them without intervention from the outside world.
Incorporating all these behaviours into a droplet – or a system of many droplets – is challenging. The boundary between autonomous and non-autonomous systems is proving especially hard to overcome, and Volpe, Malinowski and Forth have a friendly disagreement over whether any droplet-based system has managed it yet.
Crossing disciplinary borders
Part of the challenge, they say, is that the field crosses disciplinary borders. Although Volpe thinks the community of droplet researchers is getting better at finding a common vocabulary for discussions, Forth jokes that it is still the case that “the chemists are scared of physics, the physicists are scared of chemists, everyone is scared of biology”. The potential rewards of overcoming these fears are great, however, with possible future applications of animate droplets ranging from consumer products such as deodorant to oil spill clean-up.
This discussion is based on a Perspective article that Volpe (a professor of soft matter in the chemistry department at University College London, UK), Malinowski (a research fellow in soft matter physics in the same department) and Forth (a colloid scientist and lecturer in the chemistry department at the University of Liverpool, UK) wrote for the journal EPL, which sponsors this episode of the podcast.
The post Droplet scientists push the boundary between living and non-living matter appeared first on Physics World.
The American Physical Society’s 2026 Global Physics Summit opens in Denver
The "shared future" theme of the world's biggest physics meeting this year is opportune
The post The American Physical Society’s 2026 Global Physics Summit opens in Denver appeared first on Physics World.
The Global Physics Summit (GPS) bills itself as “the world’s largest physics research conference”. Organized by the American Physical Society (APS), it combines the previously separate APS March and April meetings, with at least 14,000 people expected to attend this year’s event in Denver, Colorado, which has the theme “science for a shared future”.
The two APS meetings (especially APS March) have long been pilgrimages for physicists. They’re a chance to meet people whose papers you’ve read, learn about new research, land a dream job or perhaps decide what your future physics career should look like. They offer unparalleled opportunities for gossiping, networking and making your name.
Sometimes they even host extraordinary announcements, such as in 2023 when one group claimed to have discovered room-temperature superconductors, or in 1987 when several groups really did present the first data on high-temperature ones.
Due to the current state of US politics, however, physicists from many countries may well have second thoughts about travelling to this and other scientific meetings in the US.
Indeed, if you’re from one of almost 40 nations to which the US government has partially or fully suspended issuing visas – supposedly “to protect the security of the United States” – you probably won’t be able to get into the country at all.
Among the countries affected by the Trump administration’s ban is Ethiopia, which is home to people like the physicist Mulugeta Bekele, who almost single-handedly kept Ethiopian physics alive in the 1970s and 1980s despite being jailed and tortured.
As Robert P Crease recounts in his latest feature, Mulugeta was awarded the APS’s Sakharov human-rights prize in 2012, picking up his award at that year’s APS March meeting in Boston. Would Mulugeta, I wonder, be able to enter the US in current circumstances?
One US physicist told me that outsiders should respond to the situation in America by boycotting the US entirely. To me, that’s a step too far, not least because breaking contact would show a lack of solidarity with US-based scientists suffering from funding cuts or worse. After all, physics is a global enterprise, as two recent Physics World articles make clear.
The first is a feature about quantifying the environmental impact of military conflicts by Ben Skuse. Numbers are hard to come by, but according to a 2022 estimate extrapolated from the small number of nations that do share their data, the total military carbon footprint is about 5.5% of global emissions. This would make the world’s militaries the fourth biggest carbon emitter if they were a nation.
In another feature, Michael Allen examines how climate change could trigger extreme changes in the activity of earthquakes and volcanoes. Worryingly, increased volcanic eruptions not only contribute to the build-up of greenhouse gases but create other problems too. A warming climate, meanwhile, melts ice caps, lowering surface loads and potentially causing more earthquakes to occur.
Both issues – and many more besides – will only be solved through global, interdisciplinary collaborations. As the theme of the GPS quite rightly puts it, we need science for a shared future.
That’s why it’s great that the APS, AIP Publishing and IOP Publishing – which together form the Purpose-led Publishing (PLP) coalition – are hosting a network of 23 satellite events in Africa, Asia and South America to expand participation in this year’s GPS.
PLP’s satellite hubs, which will take place both in person and online, aim to let researchers engage with the summit programme, contribute to discussions, and take part in locally organized workshops and presentations.
Taking place in countries ranging from Brazil and Benin to the Philippines and Pakistan, the events will host livestreamed and recorded content from Denver as well as offering debates, expert-led sessions and opportunities for networking.
One event will be held in Ethiopia, which, I hope, Mulugeta at least will be pleased to hear.
- More information about the satellite events can be found by clicking this link.
The post The American Physical Society’s 2026 Global Physics Summit opens in Denver appeared first on Physics World.
Interplaying hazards: can you solve our crossword on geophysical processes?
Try out our quick crossword all about climate and geophysics
The post Interplaying hazards: can you solve our crossword on geophysical processes? appeared first on Physics World.
See how much you know about the subject by trying our interactive crossword. Most of the clues are based on the article, but there are a few additional brain teasers thrown in. If you’re feeling stuck, check out the “assist” menu for help.
The post Interplaying hazards: can you solve our crossword on geophysical processes? appeared first on Physics World.
Lunar magnetic field mystery may finally have an explanation
Physicists weren’t sure why Moon rocks brought back during the Apollo missions are more strongly magnetized than models predict
The post Lunar magnetic field mystery may finally have an explanation appeared first on Physics World.
When the Apollo astronauts returned from the Moon, they brought a puzzle back with them. Some of the rocks they collected were so strongly magnetic, it implied that the Moon’s magnetic field must have been stronger than the Earth’s when the rocks formed 3.9‒3.5 billion years ago. “That doesn’t make any sense with the physics that we understand about how planets generate magnetic fields,” says Claire Nichols, a planetary geologist at the University of Oxford, UK.
Nichols and her Oxford colleagues Jon Wade and Simon N Stephenson have now identified a possible explanation. The key, they say, lies in the rocks’ composition, which happens to provide ideal spacecraft landing sites, leading to sampling bias. “It was a proper kind of Eureka moment,” Nichols says.
The lunar dynamo
The magnetic fields of planets and moons stem from convective currents in their largely iron cores. Scientists expect that objects with smaller cores, such as the Moon, will have lower magnetic field strengths. But measurements of the Apollo samples suggested that the magnetic field strength might, in some cases, have exceeded 100 μT – higher than the typical value of 40 μT on the surface of the Earth. It’s as if an AA battery were somehow powering a fridge.
“The dynamo modelling community have been trying to come up with all sorts of mechanisms to give you these really strong fields,” Nichols tells Physics World.
When Nichols mentioned this problem to Wade, a petrologist, his response intrigued her. “He said, kind of as a throwaway comment, ‘Have you looked to see if there’s any link between the composition and the intensities?’”
Upon inspecting the data, Nichols realized that Wade could be onto something. While all the lunar basalt samples with high magnetization contained large quantities of titanium, samples with low magnetization contained little.
A possible mechanism
Other researchers had previously suggested a process that could have supercharged the Moon’s dynamo, boosting the magnetization of titanium-bearing basalt in the process. When the Moon formed, an ocean of molten magma developed that gradually crystallized into today’s lunar mantle. The last material to solidify was a titanium-rich mineral called ilmenite. Solid ilmenite is incredibly dense, so once it solidified, it sank towards the Moon’s core.
According to the hypothesis, heat transfer across the core-mantle boundary then pushed the ilmenite to its melting point and increased the local temperature gradient, thereby boosting convection and, by extension, magnetic field strength. This means that the ilmenite-bearing rocks supercharged the dynamo behind the Moon’s magnetic field and became unusually highly magnetized in the process. Eventually, volcanic activity brought the rocks to the lunar surface, where the Apollo astronauts collected them.
The problem with this explanation, Nichols says, is that the heat flux at the boundary would only be raised for brief periods, meaning that by this mechanism, only two in every thousand Apollo samples would be strongly magnetized. The real figure is roughly half.
A further role for heat transfer?
Nichols and her colleagues therefore dug deeper into the process. They realized that while the period of melting was brief, it played a crucial role in creating the samples the Apollo astronauts found. “Those samples are all being erupted only at the times where the heat flux is high,” Nichols tells Physics World. And when they eventually made their way to the lunar surface, they did so as part of basaltic flows, which happen to make perfect landing sites for spacecraft.
Case solved? Not quite. According to widely accepted theories of convection in the lunar mantle, the ilmenite lumps could not have got as far as the boundary between the core and mantle, because if they did, they would have lacked the buoyancy to rise again. Still, John Tarduno, whose research at the University of Rochester, US, centres on the origins of Earth’s dynamo, describes Nichols and colleagues’ ideas as “intriguing and certainly worth further consideration through data collection and modelling”.
Tarduno, who was not involved in this work, adds that he isn’t sure that core heat flux alone would ensure that the lunar core once had an intermittent strong dynamo. “The work should motivate numerical dynamo simulations as well as modelling of mantle evolution to test the authors’ ideas,” he says.
Nichols is up for the challenge. By studying additional Apollo samples, together with new ones from the Artemis and Chang’e missions to other parts of the Moon, she aims to determine whether magnetization intensity really does correlate with titanium content, and thereby lay the mystery to rest.
The study appears in Nature Geoscience.
The post Lunar magnetic field mystery may finally have an explanation appeared first on Physics World.
Licensing puts the power into nuclear fusion
A consultancy firm with expertise in radiation safety can help companies developing a new generation of commercial fusion reactors to navigate the regulatory framework
The post Licensing puts the power into nuclear fusion appeared first on Physics World.

Nuclear fusion has long held the promise of providing an unlimited supply of clean energy, but turning such a compelling concept into a practical reality has always seemed just beyond reach. That could be about to change, with a new wave of commercial operators developing compact nuclear reactors that they believe could be providing the grid with useful amounts of electricity within the next 10 years.
Leading the way is the US, where a combination of federal grants and private capital is fuelling the drive towards commercial production. One company grabbing the headlines is Helion, which has broken ground on a power plant that is due to supply 50 MW of power to Microsoft by 2028. Commonwealth Fusion Systems, set up with the backing of the Massachusetts Institute of Technology, has also announced an agreement with Google that trades an early strategic investment for 200 MW of power when the company’s first reactor comes online in the early 2030s.
Such commercial interest has been buoyed by a clarification in the licensing regime, at least within the US. In 2023 the Nuclear Regulatory Commission (NRC), the federal agency responsible for nuclear safety, ruled that fusion reactors need not be governed by the highly restrictive framework that applies to existing power plants based on nuclear fission. Instead, fusion developers must comply with the part of the code that is primarily focused on the handling of radioactive material.
“That was a big win for the industry,” says Steve Bump, an expert in radiation safety and licensing at consultancy firm Dade Moeller, part of the NV5 group. “Fusion is a much safer process because there is no spent fuel to deal with and there is no risk of the reaction running out of control. In the event of a system failure, everything just stops.”
Growth industry
Almost 50 companies are now actively involved in fusion development and research within the US, while others are active in the UK, China and Europe. Different reactor designs are being pursued, but all rely on heating a plasma containing deuterium and tritium to extreme temperatures and then confining the superheated plasma. When the light atomic nuclei collide and fuse together – which requires the plasma to reach temperatures above 100 million degrees Celsius – the nuclear reaction releases helium gas and high-energy neutrons, along with a vast amount of energy.
Nuclear fusion has already been shown to deliver intense bursts of energy that exceed the energy needed to generate and sustain the plasma, but no-one has yet managed to produce a steady supply of electricity from the process. “The fusion industry is often characterized as a race,” says Bump. “There are many new companies that are aiming to build a commercially viable power plant that can be scaled up and replicated in multiple locations.”
Amid this rapid expansion, one upshot from the NRC ruling is that state-level regulators now have the authority to award licences for fusion reactors, provided that they follow the framework set out by the federal agency. But these state regulators are more accustomed to issuing licences to healthcare providers or research institutes that need to handle small amounts of radioactive material, and they are often wary of applications from fusion developers that ask for large quantities of radioactive tritium. “The amounts required for fusion can produce thousands and thousands of curies, while most other applications need less than a microcurie,” says Bump. “That makes it very different from a licensing standpoint, and the state agencies don’t have much experience with activities that use that much material. It makes them nervous.”
Bump and his colleagues can help fusion companies to reassure the state regulators that all the evaluations have been done correctly. “Each state agency is a little different, and we need to work with each one to find out what they need and what they will accept,” adds Bump. “They need to consider the impact of the facility on public safety and the local environment, and they are going to ask questions before they are confident enough to issue a licence.”
That abundance of caution means that each application must be customized to address the concerns of each regulator. One area that receives particular scrutiny is the amount of shielding needed to protect people from the energetic neutrons produced by the fusion reaction. Slowing down and absorbing these neutral particles is a difficult process, requiring a multi-stage strategy that typically includes water-cooled steel and walls made of reinforced concrete.
As part of the licence application, companies need to demonstrate that their shielding mechanisms reduce the radiation dose to acceptable levels, both for people working inside the facility and those living and working in the neighbourhood. “We can review the shielding evaluations produced by companies before they are submitted to the state regulators,” says Bump. “A big priority for them is to ensure that people in and around the plant are safe from any exposure, and we can help to ensure that the information provided by the company is clear, thorough and accurate.”
Practical advice
The experts at Dade Moeller can also help fusion developers to make a realistic assessment of the amount of tritium they will need, since any licence will place a limit on how much radioactive material can be held within the facility. In addition, they can advise companies on how to establish and document failsafe procedures for storing and using tritium, along with real-time monitoring systems to ensure that emissions of tritium gas are kept within regulated limits. “We also look at the potential dose consequences if there is an accidental release, along with any emergency planning that may be needed if any radioactive material does escape,” adds Bump.
As well as providing the technical documentation needed by the regulators, fusion companies need to gain the support of local residents and businesses. Outreach events and public meetings are critical to explain how the technology works, openly discuss the risks and mitigation strategies, and highlight the benefits to the surrounding community. “We have attended some of the public meetings where people have had the opportunity to ask questions and voice their concerns,” says Bump. “We can help companies to prepare helpful and informative answers, particularly when questions are submitted prior to the meeting.”
If these efforts are successful, many local communities welcome the economic boost that could be produced by a commercial power plant, such as the creation of highly skilled jobs and the potential to attract other businesses to the area. Several fusion companies are planning to build their production facilities on the sites of previous coal-fired power stations, potentially breathing new life into small cities suffering from a post-industrial malaise.
These sites also provide prospective commercial operators with easy access to the existing electrical infrastructure. “It’s convenient for them because there is no need to install new transmission lines,” says Bump. “If they can make electricity, they can simply connect to the grid through the existing substation.”
Most commercial developers are currently building and testing pilot machines, with commercial production expected in the 2030s. As they make that transition, Bump and his colleagues can provide the expertise needed to navigate the licensing requirements across different states. “We can offer advice on how to get started, and how to set up a framework for radiation protection that will support companies as they scale up their operations,” says Bump. “It’s a growing industry, and we are here to help.”
The post Licensing puts the power into nuclear fusion appeared first on Physics World.
Celebrating 100 years of physics at Tsinghua University
Wenhui Duan outlines how Tsinghua aims to continue its rich history in physics
The post Celebrating 100 years of physics at Tsinghua University appeared first on Physics World.
Can you tell us about your career in physics?
My academic path in physics at Tsinghua University began in 1981; I completed a bachelor’s and a master’s there before earning a PhD in 1992. I then did a postdoc at the Central Iron & Steel Research Institute in Beijing before returning to Tsinghua University in 1994 as a faculty member in the physics department.
Have you always studied and worked in China?
During my time at Tsinghua I carried out two research visits abroad, first at the University of Minnesota from 1996 to 1999 and then at the University of California, Berkeley from 2002 to 2003.
What is your research focus?
My career has been centred on employing and developing theoretical computational methods to understand, predict and design the physical properties of materials from the microscopic level of atoms and electrons. My work is an attempt to use a “computational microscope” to probe the fundamental nature of materials and sketch blueprints for new ones. This journey from fundamental theory to potential application has been continuously challenging and immensely rewarding.
Can you explain some examples?
One is in the theoretical study of topological quantum materials. We have performed theoretical work predicting the potential for the quantum spin Hall effect in two-dimensional systems and we have explored new states of matter such as topological semimetals. Another avenue of research is on the physics of low-dimensional and artificial microstructures. My group has a long-standing interest in the electronic structure, magnetic properties, and optical responses of low-dimensional systems like graphene and two-dimensional magnetic materials. Recently, our team discovered a novel spin chirality-driven nonlinear optical effect in a 2D magnetic material.
Are you using AI in this endeavour?
Yes. A significant recent focus is pioneering the integration of artificial intelligence with computational materials science. We are developing deep-learning models that are compatible with mainstream computational frameworks to increase the efficiency of simulating complex material systems and accelerate the discovery of new materials.
What areas of physics research is Tsinghua active in?
Our department boasts a robust and comprehensive research portfolio, organized mainly around three core directions. The first is condensed-matter physics, which has historically been one of our largest and most prominent areas. Research here spans from fundamental quantum phenomena to materials design for future technologies.
Experimentally we work in areas such as topological quantum materials, high-temperature superconductivity, two-dimensional systems, and novel magnetic phenomena. The recent experimental discovery of the quantum anomalous Hall effect at Tsinghua is one example. Theoreticians, including my group, focus on predicting new quantum states and understanding complex electronic behaviours using first-principles calculations and model analysis.
What about the other two areas?
The second area is atomic, molecular, and optical physics. Key topics include ultra-cold atoms for quantum simulation of complex many-body problems, quantum optics and quantum communication and precision measurement science. Work here often provides the physical platforms and techniques that enable advances in quantum-information science.
The other area is nuclear physics and particle physics. In particle physics, our faculty and students work in major international collaborations such as the Large Hadron Collider. Besides these core directions, our research is also focused on programmes in astrophysics/cosmology and in biophysics. The emergent field of quantum-information science also connects nearly all these areas, making it a defining feature of our current research environment.
Are there some areas of physics that Tsinghua might increase its efforts in?
One is the integration of artificial intelligence and machine learning with fundamental physics research. In my own field of computational materials science, we are already using AI to accelerate the discovery of new quantum materials and predict complex properties with unprecedented speed. This approach should be expanded and deepened across the department — from using AI to analyse data from particle colliders and gravitational-wave detectors, to developing new algorithms for quantum many-body problems and astrophysical simulations.
Any other areas?
We must also intensify our efforts in the development and application of quantum technologies. We already have excellent groups in quantum information, quantum optics and quantum materials so the next step is to combine these strengths towards the engineering of functional quantum systems.
What are some of the major international institutions that Tsinghua collaborates with?
Internationally, our researchers are embedded in several “big science” projects such as the XENON collaboration for direct dark-matter detection, particle physics experiments like ATLAS, CMS and FASER at CERN as well as the LIGO collaboration in gravitational-wave astronomy.
What about those closer to home?
Domestically, we work with the Institute of Physics at the Chinese Academy of Sciences and the Beijing Academy of Quantum Information Sciences, particularly in areas like condensed matter and quantum science. We also value industry partnerships, a notable example being our long-standing collaboration with Foxconn, which formed the joint Foxconn Nanotechnology Center within our department.
How many students and staff are there in Tsinghua’s physics department?
We have an academic community of more than 900 people: 85 faculty members, around 100 staff members, 420 graduate students and 320 undergraduate students.
How many foreign staff and students do you have?
We currently have four foreign professors together with 11 international undergraduates and five international PhD candidates – from Malaysia, Germany, Belarus, Russia, and Iran.
Would you like to see these numbers increase?
Yes, but my emphasis is more on qualitative enhancement than just quantitative increase. A more diverse international community brings essential perspectives that challenge assumptions, spark innovation and elevate our collective work to a global standard. We are working to create an even more welcoming and supportive environment – through dedicated discussions on internationalization, fostering research collaborations, and hosting global conferences.
Why is Tsinghua an attractive place to work?
Its appeal lies not in any single attribute, but in a unique ecosystem that fosters research and innovation. First is Tsinghua’s strength across science and engineering, which creates a natural incubator for interdisciplinary work. My own research, particularly in integrating advanced computational methods with materials discovery, has been significantly accelerated by collaboration with leading experts in adjacent fields.
Second is the balance of academic freedom and responsibility. The university provides substantial intellectual freedom and long-term support, allowing researchers to pursue high-risk, fundamental questions without being bound solely by short-term deliverables. Coupled with this freedom is a profound sense of responsibility to contribute to national and global scientific efforts, an ethos deeply embedded in Tsinghua’s tradition.
Third is the quality of the students. Engaging with some of China’s most talented and driven young minds is perhaps the greatest privilege. Their curiosity, rigour and fresh perspectives constantly challenge and renew my own thinking. Mentoring them from promising undergraduates to independent researchers is a core part of the scientific legacy we build here.
What events do you have planned to mark the centenary of physics at Tsinghua?
We have a number of activities planned including the publication of an updated departmental history book that formally documents our century-long journey from 1926 to the present, as well as producing a centennial documentary film. We also have an alumni interview series and department exhibitions to visually narrate our history and scientific contributions.
We are collaborating with the Chinese Physical Society, the Chinese Academy of Sciences and the National Natural Science Foundation as well as IOP Publishing to publish commemorative special issues throughout the year. There will also be a series of high-level academic forums and lecture series at Tsinghua. The culmination of the year’s celebration will be the Centennial Commemoration Conference on Saturday 5 September.
What do you hope for Tsinghua in the coming 100 years?
First, I hope we become the world’s leading centre for a new way of doing physics: integrating AI directly into the core of our research cycle. This means moving beyond using AI just as a tool. I envision a future where AI actively helps us formulate new theories about quantum materials, guides the design of critical experiments in astrophysics and particle detection and even controls advanced instruments to run complex measurements. Our goal should be to pioneer an “AI-scientist” partnership, making it as natural as using a microscope.
Second, I hope we are known not just for our discoveries, but for building essential research “bridges” that solve big problems. This means deeply partnering with our engineering schools to turn quantum science into reliable technology as well as with life sciences and environmental science to apply physical principles to global challenges in health and sustainability. We aim to educate students who are not just technically able, but who are also ethically grounded and driven.
If we succeed, then Tsinghua Physics will continue to contribute meaningfully, not just to the scientific community, but to the broader human endeavour of understanding our world. That is the enduring legacy we strive for.
The post Celebrating 100 years of physics at Tsinghua University appeared first on Physics World.
Cobalt dissolution from PtₓCo/C cathode catalysts in PEM fuel cells: in situ quantification and removal methods
Tracking Co²⁺ leaching from PtₓCo/C catalysts and recovering cation‑induced performance losses in PEM fuel cells. Learn more in this webinar
The post Cobalt dissolution from PtₓCo/C cathode catalysts in PEM fuel cells: in situ quantification and removal methods appeared first on Physics World.

Pt-alloy/C catalysts, such as PtₓCo/C, are used as cathode catalysts in proton-exchange membrane (PEM) fuel cells due to their exceptionally high kinetic activity for the oxygen reduction reaction (ORR). However, the performance and durability of membrane electrode assemblies (MEAs) with a PtₓCo/C cathode catalyst are impaired by the dissolution of Co²⁺ cations in the ionomer phase of the MEA.
In the first part of this webinar, an in situ method to quantify the amount of Co²⁺ contamination in an MEA via electrochemical impedance spectroscopy (EIS) is presented. Pt/C model MEAs doped with different amounts of Co²⁺ ions are used to analyse the effects of Co²⁺ contamination on the H₂/air performance and on ionic resistances under various conditions, highlighting the role of the inactive membrane area. Based on these model MEAs, a calibration curve is established that correlates the high-frequency resistance (HFR) under dry conditions to the amount of Co²⁺ in the MEA. Due to the high sensitivity of the dry HFR to metal cations, this method enables the tracking of Co²⁺ leaching from a Pt2.5Co/C MEA in voltage cycling accelerated stress tests.
In the second part, a recovery method to remove cationic contaminants from an MEA using CO₂–O₂ cathode gas feeds is presented. With this method, cation-induced performance losses of aged PtₓCo/C MEAs can be largely recovered. The mechanism of cation removal and opportunities for the durability of Pt-alloy/C MEAs are discussed.

Markus Schilling is a PhD student at the chair of technical electrochemistry under the supervision of Prof Hubert A Gasteiger at the Technische Universität München. In his research, he investigates the degradation of Pt-alloy on carbon cathode catalysts (e.g., PtCo/C) for PEM fuel cells, with the aim of deepening the understanding of aging mechanisms and identifying strategies to increase durability. Current works include catalyst pre-treatments, development of diagnostic methods on the cell level, voltage cycling accelerated stress testing, and recovery methods.
Schilling received his BSc in 2019 from the Universität Konstanz and his MSc in 2022 from the Technische Universität München, where he investigated PEM fuel cell catalyst inks in his thesis, supervised by Prof Gasteiger.

The post Cobalt dissolution from PtₓCo/C cathode catalysts in PEM fuel cells: in situ quantification and removal methods appeared first on Physics World.
Compact optical amplifier is efficient enough for on-chip integration
Low-power optical device achieves around 100 times amplification using just a couple of hundred milliwatts of input power
The post Compact optical amplifier is efficient enough for on-chip integration appeared first on Physics World.
Light forms the backbone of many of today’s advanced technologies, offering the ability to transmit data and information much more quickly than electrons can. Within optical networks, optical amplifiers are used to increase the intensity of light and enable its transmission over long distances. Without this ability to amplify optical signals, satellite technology, long-distance fibre-optic communications and quantum information processing would not be possible. But many optical amplifiers use a lot of power, limiting their deployment.
Modern-day photonic devices are continually getting smaller and more efficient, and researchers from Stanford University have now developed a fingertip-sized optical amplifier that consumes very little power, achieved by recycling the energy used to drive it.
The low-power optical amplifier operates across the optical spectrum and is small and efficient enough to be integrated on a chip. The device achieved more than 17 dB gain using less than 200 mW of input power – an order of magnitude improvement over previous optical amplifiers of a similar size.
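For readers more used to linear factors than decibels, the standard conversion (a general relation, not a figure taken from the paper) is

$$G_{\mathrm{dB}} = 10\,\log_{10}\!\left(\frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}\right),$$

so a gain of 17 dB corresponds to a power ratio of roughly 50, while 20 dB corresponds to a factor of exactly 100.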
“We wanted to store up optical energy and release it in intense bursts, kind of like how a Q-switched laser works, but now with an optical resonator being the store of energy that fills up,” explains senior author Amir Safavi-Naeini. “After a few months we started to see that it could address other challenges we had in the lab, like building a broadband low-power amplifier for squeezing light in a chip-scale device.”
Optical parametric amplifiers
There are many types of optical amplifiers. Erbium-doped amplifiers are common in telecommunications but only work within specific wavelength bands, while semiconductor amplifiers function over a larger range of wavelengths but are limited by high noise. Optical parametric amplifiers (OPAs) are seen as the bridge between the two. OPAs, which use nonlinear interactions to transfer energy from a pump beam into signal photons, offer high gain, wide bandwidth and low noise.
A high gain boosts signals above noise levels, while the broad bandwidth enables amplification of ultrafast or wavelength-division-multiplexed optical signals. However, as they typically require watt-level power, OPAs have been difficult to miniaturize and integrate onto tiny photonic chips. For most amplifiers, achieving a high gain requires a high power input, which is counterproductive to miniaturization.
Integrating lasers into the photonic chip is not ideal, and an external optical pump is now seen as an alternative option, but this usually requires a pump at the second harmonic (twice the frequency of the light being amplified). In the new design, the researchers use an external pump laser at the fundamental wavelength, coupled by lensed fibre onto the chip, where it generates the resonant second-harmonic pump – using a new loop design to reduce power requirements.
“The trick is that we trap and recirculate the shorter-wavelength pump light in a loop, not the signal,” Safavi-Naeini explains. “This gives you the efficiency boost of a resonator without narrowing the amplification bandwidth.”
A low-power optical amplifier
The team built the low-power OPA using thin-film lithium niobate, which offers large second-order nonlinearity and tight optical confinement. The big advantage, however, lies in its second-harmonic resonant design, in which the optical pump is doubled into a second harmonic inside a cavity. The pump light travels in a circular loop, increasing its intensity until the desired power is reached. Once this amplification is complete, the signal is output with near-quantum-limited noise performance over a broad amplification bandwidth of 110 nm.
Performing the amplification inside the cavity reduces the required power because the OPA is powered by energy stored inside the light beam. “The pump light is generated inside the pump resonator, not coupled in. This means we can efficiently fill up this resonator without dealing with impedance matching constraints that limit other nonlinear devices,” explains Safavi-Naeini. “The pump field is therefore larger than what we can even couple into the chip, so we get a boost that otherwise wouldn’t be possible.”
The small-scale and low-power architecture could be used to build on-chip OPAs across a range of applications, including data communications technology, biosensors and novel light sources. The amplifier is also small and efficient enough to be powered by a battery, making it suitable for use in laptops and smartphones.
Looking ahead, Safavi-Naeini says that the goal is “to combine this amplifier with a small on-chip laser, so the whole thing is self-contained without bulky external equipment, and use it to generate large amounts of quantum squeezing in an integrated device”. In the short-term, he suggests that fabrication improvements could cut the power requirements by another factor of ten. “We’re looking to push the sensitivity beyond what’s currently possible with classical technologies.”
The research is reported in Nature.
The post Compact optical amplifier is efficient enough for on-chip integration appeared first on Physics World.
The search for new bosons beyond Higgs
CMS researchers probed top‑quark pairs for signs of new scalar and pseudoscalar particles
The post The search for new bosons beyond Higgs appeared first on Physics World.
Particle physicists have been searching for new fundamental scalar and pseudoscalar bosons because, if discovered, they could reveal physics beyond the Standard Model and help explain mysteries such as dark matter and even why the Higgs exists. The Higgs remains the only confirmed scalar boson, and no pseudoscalar bosons have yet been observed, though they are predicted, for example, in theories involving axions and axion‑like particles. One promising way to find them is to look for their decay into a top quark and antiquark pair (tt̄).
Using the CMS detector at the Large Hadron Collider, researchers analysed 138 fb⁻¹ of proton–proton collision data. They reconstructed the invariant mass of the tt̄ system and used angular variables sensitive to its spin and parity to distinguish potential signals from the Standard Model background. Crucially, the analysis includes interference between any new boson and the Standard Model tt̄ production, which can create peak-dip distortions in the invariant mass of the tt̄ system rather than a simple bump. The observed event yield is consistent with the Standard Model prediction over the majority of the invariant mass spectrum, thus excluding a contribution from a potential new boson.
However, CMS observed a significant excess near the threshold of tt̄ production where the energy of colliding particles is just enough to produce top quarks and antiquarks. This excess has a local significance above five standard deviations and the kinematics of these events is more consistent with a pseudoscalar than a scalar interpretation. However, the excess could also be explained by a predicted tt̄ quasi‑bound state, known as toponium, which fits the data without requiring new particles beyond the Standard Model.
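To put the significance figure in context, a local significance of five standard deviations corresponds, for a one-sided Gaussian tail, to a p-value of about

$$p = 1 - \Phi(5) \approx 2.9 \times 10^{-7},$$

that is, roughly a one-in-3.5-million chance that background fluctuations alone would produce an excess at least this large at that point in the spectrum. This is a standard statistical conversion rather than a number quoted by the collaboration, and it does not include any look-elsewhere correction.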
The researchers set upper limits on how strongly new bosons could couple to top quarks across masses from 365 to 1000 GeV and widths from 0.5% to 25%. These constraints exclude couplings down to around 0.3 for pseudoscalars and 0.4 for scalars, providing the most stringent limits to date for scalar resonances decaying to tt̄.
Read the full article
The CMS Collaboration 2025 Rep. Prog. Phys. 88 127801
Do you want to learn more about this topic?
Prospects for Higgs physics at energies up to 100 TeV by Julien Baglio, Abdelhak Djouadi and Jérémie Quevillon (2016)
The post The search for new bosons beyond Higgs appeared first on Physics World.
Pushing thermopower to the 2D limit
LETO/ETO superlattices achieve 20× thermopower enhancement through true 2D electron behaviour
The post Pushing thermopower to the 2D limit appeared first on Physics World.
Thermoelectric materials convert heat into electricity, and their effectiveness is largely determined by their thermopower, which reflects how charge carriers respond to their environment. Designing materials with very high thermopower is important because it boosts overall thermoelectric efficiency, enabling sensors with stronger voltage output, higher sensitivity, and the ability to detect smaller temperature changes. High thermopower also allows for thinner, lighter, and potentially flexible devices that use less material. In 2D materials, electrons become confined to very thin layers, altering their energy levels in ways that can dramatically increase thermopower.
The researchers explore this effect using superlattices made of La-doped EuTiO₃ (LETO) and EuTiO₃ (ETO), where both dimensional confinement and electronic correlation effects play key roles. These structures achieve stronger 2D confinement than the commonly used SrTiO₃, which has a large Bohr radius that prevents electrons from being tightly localized. In contrast, the LETO/ETO system has a much smaller effective Bohr radius, allowing electrons to behave more like a true 2D gas. The Eu 4f electrons further modify the local potential landscape, strengthening confinement and producing orbital‑selective localization, particularly of the Ti 3dₓᵧ states that dominate the enhanced thermopower response.

As a result, the thermopower becomes extremely large, up to 950 μV K⁻¹, and as much as 20 times higher in the 2D configuration than in the 3D case, an improvement roughly twice that achieved in comparable SrTiO₃-based superlattices. Thermopower measurements and hybrid density functional theory calculations confirm that this enhancement arises from the combined effects of strong confinement, modified band structure, and correlation-driven changes to the Ti 3d electron distribution.
Overall, the study demonstrates a new design strategy for thermoelectric materials that combines material selection (small Bohr radius, 4f-assisted confinement) with dimensional engineering to create ultrathin superlattices that force electrons into 2D behaviour. The authors note that future Hall measurements and conductivity optimization will be important for evaluating the power factor and ZT (the dimensionless figure of merit that quantifies how good a thermoelectric material is), and that integrating these oxide superlattices with emerging freestanding membrane techniques could enable flexible, high-sensitivity thermal sensing platforms.
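For reference, the power factor and figure of merit mentioned above are conventionally defined (these are the standard textbook definitions rather than expressions specific to this study) as

$$\mathrm{PF} = S^{2}\sigma, \qquad ZT = \frac{S^{2}\sigma T}{\kappa},$$

where $S$ is the thermopower (Seebeck coefficient), $\sigma$ the electrical conductivity, $\kappa$ the thermal conductivity and $T$ the absolute temperature. Because the thermopower enters quadratically, a large enhancement in $S$ is particularly valuable, provided the electrical conductivity can be kept high.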
Read the full article
Improving 2D-ness to enhance thermopower in oxide superlattices
Dongwon Shin et al 2026 Rep. Prog. Phys. 89 010501
Do you want to learn more about this topic?
Tuning phonon properties in thermoelectric materials by G P Srivastava (2015)
The post Pushing thermopower to the 2D limit appeared first on Physics World.
A physicist’s journey into nuclear energy
Early-career physicist Natasha Khan talks about her journey to becoming a nuclear safety engineer
The post A physicist’s journey into nuclear energy appeared first on Physics World.
When I started my physics degree, I knew it could open the door to a range of career opportunities, but I wasn’t sure what path it would take me down. In the end, it was the optional modules that encouraged my interest in nuclear energy physics, steering me towards my current job as a nuclear safety engineer.
When I was looking at university degrees, I thought about studying chemical engineering, but my A-level physics teacher inspired me to consider physics instead. I’d always been fascinated with the subject, and enjoyed maths (and a challenge) too, so I thought why not give it a go.
I went on to study physics at the University of Liverpool, graduating in 2021. I absolutely loved the city and would highly recommend it to anyone considering physics – or any degree, for that matter. The campus is fantastic and Liverpool is an amazing place to be a student.
My undergraduate experience was incredibly rewarding. I met some of my closest friends and had countless memorable adventures. While the course was challenging at times, I have no regrets about choosing physics. I particularly enjoyed being able to pick specialist optional modules as it meant I could follow my interest in applied physics with topics such as nuclear power and medical physics.
Making a difference
In my final year, I started doing the obligatory job applications for those wanting to go into industry. But after receiving some rejections, I decided to explore an opportunity outside of science and ended up working for nearly a year in the charity sector as a Climate Action intern. There I undertook research projects related to decolonization in international development, and anti-racism and social justice, supporting the delivery of international development programmes.
While my time at Climate Action was rewarding and worthwhile, I wanted to move back into science and use my degree. Nuclear physics had been an area of interest for me since school, and my modules at university had encouraged that, so I turned my attention to the nuclear energy sector. Having worked for a charity, I was keen to find an organization whose values aligned with mine. Employee-owned engineering, management and development consultancy, Mott MacDonald, caught my eye, with its commitment to net zero, social outcomes and the UN’s Sustainable Development Goals.
I joined the company’s three-year graduate scheme and, although I didn’t have any direct experience in safety, was offered a graduate nuclear safety position. It is a great role that ties in skills from my degree and my interest in nuclear while still presenting challenges and an opportunity to learn.
After two years at Mott MacDonald, I won Graduate of the Year at the UK Nuclear Skills Awards 2024. My colleagues had kindly nominated me, recognizing my dedication and drive, and the contribution I’d made to the organization. This opportunity was highly valuable for me and elevated my profile not only at Mott MacDonald but also within the sector. Then, after only two and a half years in the graduate scheme, I was promoted to my current position of nuclear safety engineer.
My role focuses on developing nuclear safety cases with the guidance and support of our experienced team. The work involves analysing potential hazards and risks, outlining safety measures, and presenting a structured, evidence-based argument that the facility is safe for operation. I’ve worked on a variety of different projects including small modular reactors, nuclear medicine and flood alleviation schemes.
A typical day for me involves project meetings, writing safety reports, conducting hazard identification studies, and reviewing documents. A key aspect of the work is identifying, assessing and effectively controlling all project-related risks.

Beyond my technical role at Mott MacDonald, I am also part of committees for our internal Women in Nuclear and Europe and UK Advancing Race and Culture networks. These positions allow me to contribute to a range of equality, diversity and inclusion (EDI) initiatives. Creating an inclusive environment is important to allow people the space to be authentically themselves, share and bring diverse perspectives and feel psychologically safe. This is a big driver for me – by supporting equity and equal opportunities, I am helping ensure others like me have role models in the sector.
A nuclear skillset
Physics plays a crucial role in nuclear safety by providing the fundamental principles underlying nuclear processes. Studying nuclear physics at university has helped me understand and analyse reactor behaviour, radiation effects and potential hazards. This knowledge forms the basis for designing nuclear facility safety systems, for the protection of the workforce, environment and general public.
Throughout my degree, I also developed transferable skills such as analytical thinking, logical problem-solving and teamwork, all of which I apply daily in my role. As a safety-case engineer, I work as part of a team, and collaborate with specialists across fields, including process engineering, mechanical engineering and radioactive waste management. My ability to work effectively in teams and maintain strong interpersonal relationships has been key to success in my role.
Applying my research and scientific report writing skills I developed at university, I can identify relevant information for safety-case updates, and present safety claims, arguments and evidence in a way that is understandable to a broad, non-specialist audience.
I also mentor and support more junior colleagues with various project and non-project related issues. Skills like critical thinking and the ability to tailor my communication style directly influence how I approach my work and support others.
I would encourage other physics students to explore a career in the nuclear industry. It offers a broad range of career paths, and the opportunity to contribute to some of the most diverse, exciting and challenging projects within the energy sector. You don’t need an engineering background to have a career in nuclear – there are many ways to contribute including beyond the technical route. As physicists we have a wide range of transferable skills, often more than we realize, making us highly adaptable and valuable in this sector.
It’s an incredible time to join the nuclear industry. With advancements like Sizewell C, small modular reactors, and cutting-edge medical nuclear-research facilities, there’s a wealth of diverse projects happening right now to get involved in. I hadn’t planned on a career in nuclear safety, but honestly, I’m really glad my path led this way. I am passionate about driving innovative nuclear solutions, and support progress towards reduced emissions and the global transition to net zero.
While I may be early on in my nuclear career, I have already worked on some interesting projects and met fantastic people. Now, I’m going through a structured training programme at Mott MacDonald to help me achieve chartership status with the Institute of Physics. I look forward to seeing what the future has to offer.
The post A physicist’s journey into nuclear energy appeared first on Physics World.
A glimpse into the future of particle therapy
Particle therapy experts gathered in London to discuss the transformative potential of laser-driven proton and ion beams
The post A glimpse into the future of particle therapy appeared first on Physics World.
Particle therapy is an incredibly powerful cancer treatment. But it is also an incredibly expensive option that relies on massive, bulky accelerator systems. As such, in 2025 there were only 137 proton and carbon-ion therapy facilities in operation worldwide. So how can more people benefit?
Hoping to resolve this challenge, the LhARA collaboration is investigating a new take on particle therapy delivery: a laser-hybrid accelerator for radiobiological applications. The idea is to use laser-driven proton and ion beams to create a compact, high-throughput treatment facility to advance our understanding of cancer and its response to radiation (see: “A novel hybrid design”).
Last month, in the first of a series of CP4CT workshops, experts in the field came together at Imperial College London to discuss the potential advantages of laser-driven charged particles. The workshop aimed to examine the current status of particle therapy technology, assess how the unique properties of laser-driven beams could revolutionize particle therapy, and identify the key research needed to develop personalized cancer therapy with laser-driven ions.
“We want to lay the foundation for the transformation of ion beam therapy,” said Kenneth Long (Imperial College London/STFC), who co-organized the event together with Richard Amos (University College London). “We are aiming to engage with the communities that we will target when the technology is mature.”
A novel hybrid design
LhARA uses a high-power, fast-pulsed laser to create high-flux proton and ion beams with arbitrary spatial and time structures, such as bunches as short as 10 to 40 ns. The beams are captured and focused by a novel electron-plasma lens, and then accelerated using a fixed-field alternating gradient accelerator, to energies of 15–127 MeV for protons and 5–34 MeV/u for ion beams.
The LhARA team recently completed its conceptual design report for the proposed new accelerator facility and is now running radiobiology programmes to prove the feasibility of laser-driven hybrid acceleration, for both radiation biology and clinical studies.
Particle therapy today
The day’s first speaker, Alejandro Mazal (Centro de Protonterapia Quirónsalud) pointed out that despite huge clinical potential, only about 400,000 patients have been treated with proton therapy to date (and 65,000 with carbon ions), with a typical saturation of about 250 patients per year per treatment room. To increase this throughput, factors such as image guidance, adaptive tools, uptime and modularity for upgrades could prove vital.
Mazal cited some development priorities to address, including cost control, vendor robustness, system reliability and throughput optimization. It’s also vital to consider biological modulation techniques, integration into hospitals and generation of clinical evidence. “We used to say that randomized trials are not ethical with particle therapy but this is not always true, evidence must guide expansion,” he said.
Mazal emphasized that technology itself is not the endpoint, but that specifications must be driven by clinical benefit. “The goal is to be transformative, but only when we can measure a clinical value,” he explained.
Sandro Rossi (CNAO) then presented an update on the latest developments at the National Centre of Oncological Hadrontherapy (CNAO) in Italy. Since starting clinical treatments in 2011, the facility has now treated over 6000 patients – roughly half with protons and half with carbon ions. He noted that for some of the most challenging tumours, CNAO’s particle therapy delivered considerably better local tumour control than achieved by conventional X-ray treatments.
CNAO is also a research facility, currently hosting 17 funded research projects and seven active clinical trials. Looking forward, an expansion project will see the centre commission an additional proton therapy gantry, introduce boron neutron capture therapy (BNCT) and install an upright positioning system (from Leo Cancer Care) in one of the treatment rooms.
The killer biological questions
In parallel with the development of laser-based accelerators, researchers are investigating various radiobiological modulation strategies that could enhance the impact of particle therapy. The workshop examined three such options: proton minibeams, FLASH irradiation and combination with immunotherapies.
Minibeam therapy uses an array of submillimetre-sized radiation beams to deliver a pattern of alternating high-dose peaks and low-dose valleys. This spatially fractionated dose greatly reduces treatment toxicity while providing excellent tumour control, as demonstrated in extensive preclinical experiments.

The first patient treatments (using X-ray minibeams) took place in 2024, and clinical investigations on proton minibeams are just starting, explained Yolanda Prezado (CiMUS). Recent studies revealed that minibeams induce a favourable immune response, with high T cell infiltration, vascular renormalization and reduced hypoxia dependence. Further evaluation is essential to explore the underlying radiobiological mechanisms, but Prezado noted that existing accelerators are limited in their ability to modulate treatment beams.
“It would be really interesting to have a system where we can flexibly vary all of the parameters to understand all of these techniques; LhARA could be a very interesting facility for this,” she suggested.
As for the second option, FLASH therapy, this is an emerging treatment approach in which radiation delivery at ultrahigh dose rates reduces normal tissue damage while effectively killing cancer cells. But how the FLASH effect works, and how to optimize this approach, remain key questions.
Joao Seco (DKFZ) presented a novel interpretation of FLASH, focusing on radiation chemistry and emphasizing the role of H₂O₂ generation in the FLASH process. Production of H₂O₂, a key molecule in cell damage, depends on the activity of a particular enzyme called superoxide dismutase 1 (SOD1). Seco hypothesized that inhibiting SOD1 could control H₂O₂ production and thus control cellular damage, effectively mimicking the FLASH effect.
“Forget radiation biology, we are missing a key component: redox chemistry,” he said. “If we know the redox chemistry, we can predict the response before we give radiotherapy.”
Marco Durante (GSI) suggested that the most urgent challenge for radiotherapy may be to combine it with immunotherapy, noting that charged particle beams offer both physical and biological advantages to achieve this. Citing various trials of combined immunotherapy and X-ray-based radiotherapy for cancer treatment, he showed some impressive examples of the benefit of the combination, but also cases with negative results.
“The question to understand is why doesn’t it always work,” he explained, suggesting that this may be due to the timing and sequencing of the two therapies, the fractionation scheme or biological factors. But perhaps a more promising approach would be to combine immunotherapy with particle therapy, he said, sharing examples where immunotherapy plus carbon-ions had better clinical outcomes than combinations with X-ray radiotherapy.
This superior outcome may arise from the various biological advantages of high-LET irradiation. In addition, the lower integral dose from particle therapy compared with X-rays results in less lymphopenia (a low level of white blood cells), which is indicative of improved prognosis.
“Pre-clinical studies are essential to address timing and sequencing,” he concluded. “We also need more clinical trials to determine the impact of physical and biological properties of charged particles in radioimmunotherapy.”
Democratizing access
Manjit Dosanjh (University of Oxford) discussed the continuing need to increase global access to radiotherapy, noting that while radiotherapy is a key tool for over 50% of cancer patients, not all countries have access to sufficient treatment systems, nor to the expert personnel needed to run them.
Across Africa, for instance, there is just one linac per 3.5 million people, in stark contrast to the one per 86,000 people in the US. Many European countries also lack sufficient quality or quantity of radiotherapy facilities – a disparity that’s mirrored in terms of access to CT scanners, oncologists and medical physicists, which must be addressed in tandem. “If we could improve imaging, treatments and care quality, we could prevent 9.6 million deaths per year worldwide,” Dosanjh said.

She described some initiatives designed to encourage collaboration and increase access, including ENLIGHT, the European Network for Light Ion Hadron Therapy. Launched in 2002 at CERN, ENLIGHT brings together clinicians, physicists, biologists and engineers working within particle therapy to develop new technologies and provide training, education and access to beams to move the field forward.
More recently, the STELLA (smart technologies to extend lives with linear accelerators) project was established to create a cost-effective, robust radiotherapy linac with lower staff requirements and maximal uptime. A global collaboration, STELLA aims to expand access to high-quality cancer treatment for all patients via innovative transformation of the treatment system, as well as providing training, education and mentoring.
Dosanjh also introduced SAPPHIRE, a UK-led initiative that partners with institutions in Ghana and South Africa to strengthen radiotherapy services across Africa. She stressed that improving access to radiotherapy is a big challenge that can only be achieved by building really good collaborations. “Collaboration is the invisible force that makes the impossible possible,” she said.
Konrad Nesteruk (Harvard) continued the theme of democratizing particle therapy, noting that advancement of beam technologies calls for innovations in space (the facility size), time (both irradiation and total treatment time) and dose (via techniques such as FLASH, proton arc and minibeams). All of these factors interact to create a multidimensional optimization problem, he explained.
The final speaker in this session, Rock Mackie (University of Wisconsin) examined how to translate innovative radiotherapy technology into clinical practice. Academia is the source of breakthrough ideas, he said, but most R&D is funded and refined by companies. And forming a company involves a series of key tasks: identifying an important problem; developing a technical solution; patenting it; customer testing; and procuring investment. If this final stage doesn’t happen, Mackie remarked, it wasn’t an important enough problem.
In particle therapy, the main problems are size and cost limiting patient access, a lack of effective imaging solutions and the fact that the gain in therapeutic ratio does not compensate for increased costs. Aiming to solve these problems, Mackie co-founded Leo Cancer Care in 2018 to commercialize an upright patient positioning system and CT scanner. This approach enables a proton therapy machine to fit into a photon vault, as well as easing patient positioning, thus reducing installation costs while simultaneously increasing throughput.
Mackie applied this startup scenario to LhARA. Here, the problem to solve is achieving high-energy, multi-ion, high-intensity beams for radiotherapy, FLASH, spatial fractionation and proton imaging. The solution is the development of a low-cost particle accelerator that meets all of these needs and fits in a single-storey vault. He also emphasized the importance of consulting with as many potential customers as time permits before defining specifications.
“The most important problem is finding a big enough problem to solve,” he concluded. “It will find a market if the product is less costly, works better and is easier to use.”
Development roadmap
Alexander Gerbershagen (PARTREC) told delegates about PARTREC, the particle therapy research centre at the University Medical Center Groningen. The facility’s superconducting accelerator, AGOR, provides protons with energies up to 190 MeV, as well as ion beams of all elements up to xenon. Ongoing projects at PARTREC include: developing glioblastoma treatments using boron proton capture therapy (NuCapCure); production of terbium isotopes for theranostics; image-guided pharmacotherapy using photon-activated drugs; and real-time in vivo verification of proton therapy dose.
The day closed with a look at the potential of LhARA as an international research facility. Kenneth Long emphasized the importance of investigating how ionizing radiation interacts with tissue, in vivo and in vitro, while considering all of the factors that may impact outcome. This includes time and space domains, different ion species and energies, and combinations with chemo- and immunotherapy. “If one flexible beam facility can do all that, it’s a substantial opportunity for a step change in understanding,” he said.
Long presented some initial cell irradiations using laser-driven beams at the SCAPA research centre in Strathclyde, and noted that component optimization is also underway in Swansea. He also shared designs for the envisaged research facility, with various in vivo and in vitro end-stations and robotic automation to move experiments around. “We have written a mission statement, now our business is to execute that programme,” he concluded.
The post A glimpse into the future of particle therapy appeared first on Physics World.
Mulugeta Bekele: the jailed and tortured scientist who kept Ethiopian physics alive
Robert P Crease talks to Mulugeta Bekele, who almost single-handedly kept Ethiopian physics going
The post Mulugeta Bekele: the jailed and tortured scientist who kept Ethiopian physics alive appeared first on Physics World.
Mulugeta Bekele paid a heavy price for remaining in Ethiopia in the 1970s and 1980s. While many other academics had fled their homeland to avoid being targeted by its military rulers, Mulugeta did not. He stayed to teach physics, almost single-handedly keeping it alive in the country. But Mulugeta was arrested and brutally tortured by members of the Derg, Ethiopia’s ruling military junta. “I still have scars,” he says when we meet at his tiny, second-floor office at Addis Ababa University (AAU) in January 2026.
Gentle and softly spoken, Mulugeta, 79, is formally retired but still active as a research physicist. In 2012 his efforts led to him being awarded the Sakharov prize by the American Physical Society (APS) “for his tireless efforts in defence of human rights and freedom of expression and education anywhere in the world, and for inspiring students, colleagues and others to do the same”.
Mulugeta was born in 1947 near Asela, a small town south of Ethiopia’s capital Addis Ababa. The district had only a single secondary school that depended on volunteer teachers from other countries. One was a US Peace Corps volunteer named Ronald Lee, who taught history, maths and science for two years. Mulugeta recalls Lee as a dramatic and inventive teacher, who would climb trees in physics classes to demonstrate the actions of pulleys and hold special after-school calculus classes for advanced students.
Mulugeta and other Asela students were entranced. So when he entered AAU – then called Haile Selassie I University – in 1965, Mulugeta declared he wanted to study both mathematics and physics. Impossible, he was informed; he could do one or the other but not both. “I told myself that if I choose mathematics I will miss physics,” Mulugeta says. “But if I do physics, I will be continually engaged with mathematics.” Physics it was.
At the end of his third year, Mulugeta’s studies appeared in doubt. The university’s only physics teacher was an American named Ennis Pilcher, who was about to return to Union College in Schenectady, New York, after spending a year in Addis on a fellowship from the Fulbright Program. Pilcher, though, managed to convince Union to support Mulugeta so he could travel to the US and study physics there for his final year.
As I talk to Mulugeta, he pulls a dusty book off his shelf. “This was given to me by Pilcher,” he says, pointing to Walter Meyerhof’s classic undergraduate textbook Elements of Nuclear Physics. Mulugeta turns to the inside of the front cover and proudly shows me the inscription: “Mulugeta Bekele, Union College. Schenectady, 1969–1970”.
When Mulugeta returned to AAU in the summer of 1970, he was awarded a BSc in physics. He then received a grant from the US Agency for International Development (USAID) to attend the University of Maryland for a master’s degree. After two more years in the US, Mulugeta returned to Addis Ababa in 1973. As an accomplished researcher and teacher, he was made department chair and began to expand the physics programme at the university.
In the firing line
It was a time when political turmoil was upending Ethiopia, as well as the lives of Mulugeta and many other academics. For centuries the country had been ruled by a dynasty whose present emperor was Haile Selassie. Having come to the throne in 1930, he had tried to reform Ethiopia by bringing it into the League of Nations, drawing up a constitution, and taking measures to abolish slavery.
When fascist Italy invaded Ethiopia in October 1935, Selassie went into exile the following year, spending five years in the UK during the Italian occupation of the country. He returned as emperor in 1941 after British and Ethiopian forces recaptured Addis Ababa. But famine, unemployment and corruption, as well as a brief unsuccessful coup attempt, undermined his rule and made him unexpectedly vulnerable.
While in Maryland, Mulugeta and other Ethiopian students in the US started supporting the Ethiopian People’s Revolutionary Party (EPRP) – a pro-democracy group that sought to build popular momentum against the monarchy. In September 1974 Selassie was deposed by the Derg – a repressive military junta named after the word for “committee” in Amharic, the most widely spoken language in Ethiopia. Selassie was assassinated the following year.

Led by an army officer named Mengistu Haile Mariam, the Derg’s radical totalitarianism was in sharp contrast to the student-led EPRP’s efforts and its agenda included seizing property from landowners. Mulugeta’s family lost all its land, and his father was killed fighting the Derg. “Land ownership was still inequitable,” Mulugeta remarks ruefully, “only the landlords changed.”
In September 1976 the EPRP tried, unsuccessfully, to assassinate Mengistu. The following February, on becoming chairman of the Derg – and therefore head of state – Mengistu began ruthlessly to crush any opposition, particularly the EPRP, in what he himself called the “Red Terror” campaign of political suppression. About half a million people in Ethiopia were killed.
“It was a police state,” recalls Solomon Bililign, Mulugeta’s then graduate assistant, now a professor of atomic and molecular physics at North Carolina Agricultural and Technical State University. “The police didn’t need any reason to arrest you. They would arrest people openly in the streets, break into homes, and leave people dead in roads and parks. Many were tortured; others simply disappeared.”
Captured and tortured
Mulugeta himself was a target. In the summer of 1977, a policeman showed up at his office with an informant. Mulugeta was arrested and imprisoned for his role in helping to organize anti-Derg activities, as was Bililign. Mulugeta still recalls exactly how long he was jailed for: “Eight months and 20 days”.
After his release, Mulugeta knew it would be unsafe to stay in Addis and lived in hiding for several months. So he devised a plan to travel 500 km north to a holdout region not controlled by the Derg. However, while using a fake ID to pass through checkpoints to reach a compatriot, he was betrayed again, captured, and taken back to Addis.
En route to Addis, he managed to steal back the fake ID that he’d been using from the pocket of the policeman travelling with him. He then tore it up to shield the identity of his compatriot, and tossed the pieces into a toilet. But the policeman noticed and retrieved the pieces. Mulugeta was then savagely tortured using a method that the Derg meted out on thousands of other prisoners. His arms and legs were tied around a pole, and he was hung in the foetal position between two chairs, upside down. His feet were then beaten until he could no longer walk.
Mulugeta was sent to Maekelawi, an infamous jail in Addis, where up to 70 prisoners could be jammed into rooms barely four metres long and four metres wide. Inmates were tortured without warning, could not have visitors, never had trials, were denied books and paper, and at night heard screams from periodic executions. Mulugeta helped those who were beaten by tending to their wounds.
“People who knew him in prison told me that his mental strength helped all of them endure,” remembers Mesfin Tsige, an undergraduate student of Mulugeta at the time, who is now a polymer physicist at the University of Akron in Ohio. Despite the awful conditions, Mulugeta managed to continue working on physics by surreptitiously taking paper from the foil linings of cigarette packets to compose problems.

Another prisoner was Nebiy Mekonnen, a chemistry student of Mulugeta. Later a gifted artist, translator and newspaper editor, Mekonnen began translating the US writer Margaret Mitchell’s classic 1936 book Gone with the Wind into Amharic. It was the one book that the Maekelawi prisoners had in their hands, having retrieved it from the possessions of someone who had been executed.
Surreptitiously writing his translation onto the foil linings of cigarette packets, Mekonnen would read passages to fellow prisoners in the evening for what passed for entertainment. Mekonnen’s translation of Mitchell’s almost 1000-page book was recorded onto 3000 of the linings, which were then smuggled out of the prison stuffed in tobacco pouches and published years later.
Gone with the Wind might seem a strange choice to translate, but as Mulugeta reminds me: “It was the only book we had at the time”. More smuggled books did eventually arrive at the prison, but Gone with the Wind, which describes life in a war-torn country, has several passages that resonated with prisoners. One was: “In the end what will happen will be what has happened whenever a civilization breaks up. The people with brains and courage come through and the ones who haven’t are winnowed out.”
Release and recapture
In 1982 Mulugeta was moved to Kerchele, another prison. There, as at Maekelawi, inmates were forced to listen to Mengistu’s pompous speeches on radio and TV. During one, Mengistu pontificated that he would turn prisons into places of education. A clever inmate, knowing that the prison wardens were also cowering in terror, proposed that Kerchele establish a school with the prisoners as teachers.
The wardens found this a great idea, not least because it let them show off their loyalty to Mengistu. The Kerchele prisoners were promptly put to work erecting a schoolhouse of half a dozen rooms out of asbestos slabs. Unlike schools in the rest of Ethiopia, the Kerchele prison school was not short of teachers, as the prisoners included a wide range of professionals, such as architects, scientists and engineers.
Students included prison guards and their families, along with numerous inmates who had been jailed for non-political reasons. Mulugeta and Bililign taught physics. “It was therapy for us,” Bililign says – and the school was soon known as one of the best in Ethiopia.
When I ask Mulugeta how he maintained his interest in physics in jail, despite being locked up for so many years, he becomes animated. “In those days, prisons were full of ideas,” he smiles. “We were university students, university teachers. We had a cause. It was exciting. Intellectually, we flourished.”
In the summer of 1985 Mulugeta was released. Many colleagues were not. “They were given release papers and as they left the building, one by one, they were strangled. I had a tenth-grade student who was one of the best; he didn’t make it. There were plenty of stories like this.” Mulugeta pauses. “Somehow we survived. But not them.”
Mulugeta returned to the university – by now renamed Addis Ababa University – and started teaching physics full time. With the Derg in full control, no opposition was possible except in outer regions of Ethiopia. In summer 1991, after Mulugeta had taught physics for another six years, political turmoil erupted yet again.
Mengistu was overthrown that May by a political coalition representing pro-democracy groups from five of Ethiopia’s ethnic regions, the Ethiopian People’s Revolutionary Democratic Front (EPRDF). But ethnic tensions rose and human rights violations continued. “Even though the Derg was overthrown,” Mulugeta recalls, “we knew we were entering another dark age.”
In the same year Mulugeta was put in touch with a Swedish programme seeking to build networks of scientists across countries in the southern hemisphere. Mulugeta knew a physicist from Bangalore, India, who had visited Addis twice as an examiner for his master’s programme and arranged to work with him for his PhD.
That July, Mulugeta married Malefia, who worked in the university’s registrar office, and the two left for Bangalore. As a wedding present, his student Mekonnen painted a picture of two hands coming together, each with a ring on a finger, against a black Sun in the background. “Two rings, in the time of a dark sun” Mekonnen’s caption read, “Happy marriage!” Mulugeta still has the painting.
Mulugeta thrived in Bangalore. Here, he was finally able to combine his two loves, physics and maths, studying statistical physics and stochastic processes and applying them to issues in non-equilibrium thermodynamics. He has worked in that field ever since. He received his PhD in 1998 from the Indian Institute of Science in Bangalore and returned to Addis once more to teach.
Shortly after Mulugeta’s return from Bangalore to Ethiopia in August 1998, some of his former students formed the Ethiopian Physical Society, electing him as its first president. Other students of his who had taken positions in the US created the Ethiopian Physical Society of North America (EPSNA), formally established in 2008. Bililign organized and convened its first meeting.
In 2007 Philip Taylor, a soft-condensed-matter physicist from Case Western Reserve University in the US, who had been Tsige’s PhD supervisor, heard the story of Mulugeta’s imprisonment. Astonished, he spearheaded the successful 2012 application for Mulugeta to receive the APS’s Sakharov prize, which is given every two years to physicists who have displayed “outstanding leadership and achievements of scientists in upholding human rights”.

Unsure that he would receive travel funds to attend a special award ceremony at that year’s APS March meeting in Boston, the EPSNA raised money for Mulugeta and his wife to attend. Jetlagged, worn out by the cold, and somewhat overwhelmed by the attention, Mulugeta could not be found as the ceremony began. EPSNA members tracked him down to his hotel room, where he was dressing in traditional Ethiopian clothes for the occasion – all white from head to toe, including shoes.
Under a dark Sun
In recent years, Mulugeta has continued to teach and collaborate with students and former students, publishing in a wide range of journals, as well as helping out with the Ethiopian Physical Society. But while I was in Ethiopia to talk to Mulugeta at the start of 2026, the Trump administration curtailed immigrant visas from Ethiopia and almost half of all nations in Africa, supposedly in an attempt to “protect the security of the United States”. A few months before, it had imposed a $100,000 fee on work visas, all but preventing US universities from hiring non-US citizens. It also killed the USAID programme that had once sent Mulugeta to the US for his master’s degree.
The Trump administration has also withdrawn the US from international scientific organizations, conventions and panels, and has gutted the most important US scientific agencies. These and other measures are destroying the networks of international physics collaborations of the kind that Mulugeta both promoted and benefited from – networks that nurture education, careers and knowledge.
“We are not yet in good hands,” Mulugeta warns me as I start to leave. “We are,” he says, “still under the dark Sun.”
The post Mulugeta Bekele: the jailed and tortured scientist who kept Ethiopian physics alive appeared first on Physics World.
Condensed-matter physics pioneer and Nobel laureate Anthony Leggett dies aged 87
Leggett made Nobel-prize-winning contributions to the theory of superfluidity in the 1970s
The post Condensed-matter physics pioneer and Nobel laureate Anthony Leggett dies aged 87 appeared first on Physics World.
The British-American theoretical physicist Anthony Leggett died on 8 March at the age of 87. Leggett shared the 2003 Nobel Prize in Physics with Alexei Abrikosov and Vitaly Ginzburg for their “pioneering contributions to the theory of superconductors and superfluidity”.
Born on 26 March 1938 in London, UK, Leggett graduated in literae humaniores (classical languages and literature, philosophy and Greco-Roman history) at the University of Oxford in 1959.
While philosophy was Leggett’s strongest subject, he did not envisage a career as a philosopher because he felt that the subject depended more on turns of phrase than objective criteria.
As part of an experiment at Oxford to see if it was possible to convert a classicist with minimal qualifications in maths and science into a physicist, Leggett was awarded a degree in physics in 1961.
Leggett then embarked on a DPhil in physics, which he completed at Oxford in 1964, followed by postdocs at the University of Illinois Urbana-Champaign in the US and Kyoto University, Japan.
In 1967 he moved back to the UK, spending the next 15 years at Sussex University. It was at Sussex that he carried out his Nobel-prize-winning work on the theory of superfluidity – the ability of a fluid to flow without viscosity.
Superfluidity in helium-4 was discovered in the 1930s, and in the 1960s several theorists predicted that helium-3 might also be a superfluid.
However, the two forms of helium are fundamentally different. Helium-4 atoms are bosons and can all condense into the same quantum ground state at low enough temperatures – an essential feature of both superfluidity and superconductivity.
Helium-3 atoms, on the other hand, are fermions and the Pauli exclusion principle prevents them from entering such a quantum state.
Electrons, which are also fermions, overcome this problem by forming Cooper pairs as described by the BCS theory of superconductivity that was developed in the mid-1950s by John Bardeen, Leon Cooper and Robert Schrieffer.
Theorists predicted that helium-3 atoms could do something similar and in 1972 superfluidity in helium-3 was finally observed at Cornell University – a feat that earned David Lee, Douglas Osheroff and Robert Richardson the 1996 Nobel Prize in Physics.
Yet many of the results puzzled theorists. In particular there were three different superfluid phases, and the results of nuclear magnetic resonance experiments on the samples could not be explained.
Leggett showed that these results could be explained by the spontaneous breaking of various symmetries in the superfluid and for the work he was awarded a third of the 2003 Nobel Prize in Physics, with Abrikosov and Ginzburg being honoured for their work on type-II superconductors.
A life in science
In 1983 Leggett moved to the University of Illinois at Urbana-Champaign where he remained for the rest of his career until retiring in 2019. There he focussed on problems in high-temperature superconductivity, superfluidity in quantum gases and the fundamentals of quantum mechanics.
In 1998 he was elected an Honorary Fellow of the Institute of Physics and in 2004 was appointed Knight Commander of the Order of the British Empire (KBE) “for services to physics”. In 2023 the Institute for Condensed Matter Theory at the University of Illinois at Urbana-Champaign was renamed the Sir Anthony Leggett Institute.
As well as the Nobel prize, Leggett won many other awards including the 2002 Wolf Prize for physics. He also published two books: The Problems of Physics (Oxford University Press, 1987) and Quantum Liquids (Oxford University Press, 2006).
Peter McClintock from Lancaster University, who has carried out work in superfluidity, says he is “very sad” to hear the news. “[Leggett] was a brilliant physicist whose genius was to comprehend underlying mechanisms and processes and explain their physical essence in comprehensible ways,” says McClintock. “My dominant memory is of the discovery of the superfluid phases of helium-3 and of the way in which [Leggett] was able to interpret each new item of experimental information and slot it into a nascent theoretical framework to build up a coherent picture of what was going on – while always enumerating the remaining loose ends and possible alternative explanations.”
James Sauls, a theorist at Louisiana State University, says that Leggett made discoveries in several areas of physics such as the foundations of quantum mechanics, quantum tunnelling in amorphous glasses and superconducting devices as well as the theory of heat transport at ultra-low temperatures. “Leggett’s contributions to quantum mechanics and low-temperature physics are remarkable and enduring,” adds Sauls. “[Leggett’s] style in theoretical physics was unique in its clarity and originality.”
In a statement, Makoto Gonokami, president of the RIKEN labs in Japan, also said that he is “deeply saddened” by the news and that Leggett had “provided warm support for researchers in Japan” through his many trips to the country.
“Leggett made pioneering contributions to our understanding of how quantum mechanics manifests itself in macroscopic matter [and] his theoretical work on superfluid helium-3 provided profound insights into quantum order in strongly interacting fermionic systems,” notes Gonokami. “His work significantly advanced the study of quantum condensed matter and macroscopic quantum coherence.”
The post Condensed-matter physics pioneer and Nobel laureate Anthony Leggett dies aged 87 appeared first on Physics World.
Physicists identify unexpected quantum advantage in a permutation parity task
The list of things that quantum features enable us to do just got a little longer
The post Physicists identify unexpected quantum advantage in a permutation parity task appeared first on Physics World.
Imagine all the different ways you can rearrange a list of labelled items. If you know only a tiny fraction of the labels describing the elements of the list, it’s easy to assume you have almost no information about how the list as a whole has been permuted. After all, if you shuffle a large deck of cards and then hide most of the labels on the cards, how could anyone possibly tell what permutation you made?
Recent theoretical work by physicists at Universitat Autonoma de Barcelona (UAB), Spain, and Hunter College of the City University of New York (CUNY), US, reveals that this intuition can fail in surprising ways, hinting at deep links between information, symmetry and computation. Specifically, the UAB-CUNY team found that quantum mechanics plays a key role in preserving parity – a global property of a permutation – even when most local information is erased.
An impressive parity identification
Imagine a clever magician named Alice. She hands you a stack of n coloured disks in a known order and leaves the room while you shuffle them. When she returns, she asks: “Can I tell how you permuted the disks?”
If every disk has its own unique label, the answer is obviously “yes”. But if Alice removes some of the labels, she can pose a subtler challenge: “Can I at least tell whether your shuffle swapped the positions of the disks an even or odd number of times?”
Classically, the answer is “no”. With fewer labels than disks, some labels must be repeated. Swapping two disks with the same label leaves the observed configuration unchanged, yet flips the parity of the underlying permutation. As a result, determining parity with certainty requires one unique label per disk. Anything less, and the information is fundamentally lost.
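To make the classical argument concrete, here is a minimal Python sketch (an illustration only, not taken from the paper) with three disks and just two distinct labels. It computes the parity of every possible shuffle and shows that, once two disks share a label, every observable arrangement is compatible with both an even and an odd permutation.

```python
# Toy example (not from the paper): with duplicate labels, opposite-parity
# shuffles produce identical observable arrangements.
from itertools import permutations

def parity(perm):
    """Return +1 for an even permutation of range(len(perm)), -1 for odd,
    by counting inversions."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return 1 if inversions % 2 == 0 else -1

# Three disks, but only two distinct labels: the first two disks both carry "A"
labels = ["A", "A", "B"]

seen = {}  # observed arrangement -> set of parities that can produce it
for perm in permutations(range(3)):
    observed = tuple(labels[i] for i in perm)
    seen.setdefault(observed, set()).add(parity(perm))

for observed, parities in seen.items():
    print(observed, "compatible parities:", parities)
# Every visible arrangement is compatible with both parities, so no classical
# inspection of the labels can reveal whether the shuffle was even or odd.
```

Running the sketch prints each visible arrangement alongside the set {-1, +1}, confirming that the parity information is classically unrecoverable once labels repeat.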
Quantum mechanics changes this conclusion. In their paper, which is published in Physical Review Letters, UAB’s Arnau Diebra and colleagues showed that as long as there are at least √n labels, far fewer than the total number of disks, one can still determine the parity of any permutation applied to the system when the game follows the rules of quantum mechanics. The problem remains the same; the only difference is that the initial state is now prepared as a quantum state. In other words, even when most of the detailed information about individual elements is erased, a global feature of the transformation survives, and a carefully chosen quantum strategy can extract it. This is not sleight of hand: it is a genuine mathematical insight into how much information certain global properties retain under massive data reduction.
Quantum advantage
In the field of quantum science, it’s common to ask whether quantum systems can outperform classical ones at specific tasks, a phenomenon known as quantum advantage. Here, “advantage” does not necessarily mean doing everything faster, but rather the ability to solve carefully chosen problems using fewer resources such as time, memory or information. Notable examples include quantum algorithms that factor large numbers more efficiently than any known classical method, and quantum communication protocols that achieve tasks that would be impossible with classical correlations alone.
The parity-identification problem fits naturally into this landscape. Parity is a global property, insensitive to most local details. In this respect, it resembles many other quantities studied in quantum physics, from topological invariants to entanglement measures.
What makes quantum advantage possible in this problem is entanglement – and lots of it. A compound quantum system is said to be entangled when its subsystems are correlated in a nonclassical way. A simple example might be a pair of qubits (quantum bits) for which measuring the state of one qubit gives you information about the state of the other in a way that cannot be reproduced by any classical correlation. In their work, the UAB-CUNY physicists used a geometric measure of entanglement: the “distance” between the state of the system and a state in which all subsystems are separable (that is, not entangled). If this distance is too short, the protocol fails entirely.
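As a rough numerical illustration of such a geometric quantity, the sketch below estimates the maximum overlap of a two-qubit Bell state with product states and hence its distance from the separable set. This is the standard geometric measure for pure states, used here only to convey the idea; it is not necessarily the exact distance employed in the paper, and the use of numpy and scipy is an implementation choice for the illustration.

```python
# A minimal numerical sketch of a geometric entanglement measure for a pure
# two-qubit state: 1 - max |<psi| a (x) b>|^2 over all product states a (x) b.
import numpy as np
from scipy.optimize import minimize

def qubit(theta, phi):
    """Single-qubit pure state cos(theta)|0> + e^{i phi} sin(theta)|1>."""
    return np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])

def neg_overlap(params, psi):
    """Negative squared overlap between psi and the product state a (x) b."""
    t1, p1, t2, p2 = params
    product = np.kron(qubit(t1, p1), qubit(t2, p2))
    return -abs(np.vdot(product, psi)) ** 2

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled Bell state

# Maximise the overlap over product states, using several random starts to
# avoid poor local optima of the 4-parameter search.
best = min(
    minimize(neg_overlap, np.random.uniform(0, np.pi, 4), args=(bell,)).fun
    for _ in range(20)
)
max_overlap = -best
print("max overlap with product states:", round(max_overlap, 3))   # ~0.5
print("geometric entanglement:", round(1 - max_overlap, 3))        # ~0.5
```

For the Bell state the maximum squared overlap with any product state is 1/2, so the geometric distance comes out at about 0.5, whereas a separable state would sit at zero distance and the protocol would fail.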
The crucial point is that entanglement allows information about the permutation to be stored in genuinely nonlocal correlations among particles (the “cards” in the deck), rather than in properties of each particle/card individually. In effect, the “memory” needed to identify the parity is written into the joint quantum state. No single particle carries the answer, but the system as a whole does. This is precisely what classical systems cannot replicate: once local labels are lost, there is nowhere left for the information to hide.
Can one do better than √n?
The fact that the threshold for quantum advantage scales with √n is one of the most intriguing aspects of the work. At present, the reason for this remains an open question. While Diebra and colleagues emphasize that the scaling is provably optimal within quantum mechanics, they acknowledge that a more intuitive or fundamental explanation is still missing. Finding such an explanation could illuminate broader principles governing how quantum systems compress and protect global information.
While the parity-identification problem has no immediate known applications, understanding how properties can be inferred from limited information is also crucial when dealing with realistic quantum devices, where noise, decoherence and imperfect measurements severely restrict what information can be accessed. Results like this therefore suggest that some computational or informational tasks may remain feasible even when our view of the system is drastically incomplete.
Speaking more broadly, the conceptual implications of proving new examples of quantum advantage are clear: even for extremely simple inference tasks, quantum strategies can outperform classical ones in unexpected and qualitative ways. The result therefore provides a clean testing ground for deeper questions about quantum resources, symmetry and information compression. Which specific features of entanglement are responsible for the advantage? Can similar thresholds be found for other groups or more complex symmetries? And does the square-root scaling reflect a universal principle?
For now, the work serves as a reminder that – even decades into the development of quantum information theory – basic questions about how information is stored, hidden, and revealed in quantum systems can still produce genuine surprises.
The post Physicists identify unexpected quantum advantage in a permutation parity task appeared first on Physics World.
Long-distance quantum sensor network advances the search for dark matter
Physicists in China have set the tightest constraint yet on a parameter known as axion-nucleon coupling
The post Long-distance quantum sensor network advances the search for dark matter appeared first on Physics World.
A new way of searching for dark-matter candidate particles called axions has produced the tightest constraint yet on how they can interact with normal matter. Using a two-city network of quantum sensors based on nuclear spins, physicists in China narrowed the possible values of a parameter known as axion-nucleon coupling below a limit previously set by astrophysical observations. As well as insights on the nature of dark matter, the technique could aid investigations of other beyond-the-Standard-Model physics phenomena such as axion stars, axion strings and Q-balls.
Dark matter is thought to make up over 25% of the universe’s mass, but it has never been detected directly. Instead, we infer its existence from its gravitational interactions with visible matter and its effect on the large-scale structure of the universe.
While the Standard Model of particle physics does not incorporate dark matter, several physicists have proposed ideas for how to bring it into the fold. One of the most promising involves particles called axions. First hypothesized in the 1970s as a way of explaining the unexpected absence of charge-parity violation in the strong interaction, axions are chargeless and much less massive than electrons. This means they interact only weakly with matter and electromagnetic radiation.
According to theoretical calculations, the Big Bang should have produced axions in abundance. During phase transitions in the early universe, these axions would have formed topological defects – defects that study leader Xinhua Peng of the University of Science and Technology of China (USTC) says should, in principle, be detectable. “These defects are expected to interact with nuclear spins and induce signals as the Earth crosses them,” Peng explains.
A new axion search method
The problem, Peng continues, is that such signals are expected to be extremely weak and transient. She and her colleagues therefore developed an alternative axion search method that exploits a different predicted behaviour.
When fermions (particles with half-integer spin) interact, or couple, with axions, they should produce a pseudo-magnetic field. Peng and colleagues looked for evidence of this interaction using a network of five quantum sensors, four in Hefei and one in Hangzhou. These sensors combined a large ensemble of polarized rubidium-87 (⁸⁷Rb) atoms with polarized xenon-129 (¹²⁹Xe) nuclear spins.
“Using nuclear spins has many advantages,” Peng explains. “These include a higher energy resolution detection for topological dark matter (TDM) axions thanks to a much smaller gyromagnetic ratio of nuclear spins; substantial spin amplification owing to the high ensemble density of noble-gas spins; and efficient optimal filtering enabled by the long nuclear-spin coherence time.”
The USTC researchers’ setup also has other advantages over previous laboratory-based TDM searches, including the Global Network of Optical Magnetometers for Exotic physics searches (GNOME). While GNOME operates in a steady-state detection mode, the USTC researchers use a detection scheme that probes transient “free-decay oscillating” signals generated on spins after a TDM crossing. The USTC team also implemented a dual-phase optimal filtering algorithm to extract TDM signals with a signal-to-noise ratio at the theoretical maximum.
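The idea behind optimal filtering for such transients can be illustrated with a generic matched filter, sketched below. Every number in it (sampling rate, oscillation frequency, decay time, signal amplitude, noise level) is an invented placeholder rather than a parameter of the USTC pipeline; the point is simply that correlating the data with a template of the expected free-decay oscillation pulls a weak transient out of the noise.

```python
# A minimal matched-filter sketch for a transient "free-decay oscillating"
# signal buried in white noise. All numerical values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)         # 10 s of simulated sensor data

# Template: an exponentially decaying sinusoid, the generic shape of a
# free-decay transient (frequency and decay time invented for illustration).
f0, tau = 25.0, 0.5
t_template = t[: int(2 * fs)]
template = np.exp(-t_template / tau) * np.sin(2 * np.pi * f0 * t_template)

# Simulated data: the transient starts at t = 4 s, buried in unit white noise.
data = rng.normal(0.0, 1.0, t.size)
start = int(4 * fs)
data[start : start + template.size] += 0.5 * template

# Matched filter: correlate the data with the template and normalise so the
# output is expressed in units of the noise standard deviation.
mf_output = np.correlate(data, template, mode="valid")
mf_output /= np.sqrt(np.sum(template ** 2))

peak = int(np.argmax(np.abs(mf_output)))
print(f"recovered transient start: {peak / fs:.2f} s (true value 4.00 s)")
print(f"peak significance: {abs(mf_output[peak]):.1f} sigma")
```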
Peng tells Physics World that these advantages enabled the team to explore regions of TDM parameter space well beyond limits set by astrophysical searches. The transient-state detection scheme also enables sensitive searches for TDM in the region where the axion mass exceeds 100 peV – a region that GNOME cannot access.
Most stringent constraints
The researchers have not yet recorded a statistically significant topological crossing event using their setup, so the dark matter search is not over. However, they have set more stringent constraints on axion-nucleon coupling across a range of axion masses from 10 peV to 0.2 μeV. Notably, they calculated that the energy scale characterizing the coupling must be greater than 4.1 × 10¹⁰ GeV at an axion mass of 84 peV. This limit is stricter than those obtained from astrophysical observations, though Peng notes that these rely on different assumptions.
Peng says the technique developed in this study, which is published in Nature, could lead to the development of even larger, more sensitive networks for detecting transient spin signals such as those from TDM. It also opens new avenues for investigating other physical phenomena beyond the Standard Model that have been theoretically proposed, but have so far lacked a pathway for experimental exploration.
The researchers now plan to increase the number of sensor stations in their network and extend their geographical baselines to intercontinental and even space-based scales. Peng explains that doing so will enhance the network’s detection sensitivity and boost signal confidence. “We also want to enhance the sensitivity of individual sensors via better spin polarization, longer coherence times and advanced quantum control techniques,” she says. Switching to a ³He–K system, she adds, could boost their current spin-rotation sensitivity by up to four orders of magnitude.
The post Long-distance quantum sensor network advances the search for dark matter appeared first on Physics World.
Pathways to a career in quantum: what skills do you need?
Matin Durrani reports from the Careers in Quantum event at the University of Bristol, UK
The post Pathways to a career in quantum: what skills do you need? appeared first on Physics World.
Careers in Quantum, which was held on 5 March 2026, is an unusual event. Now in its seventh year, it’s entirely organized by PhD students who are part of the Quantum Engineering Centre for Doctoral Training (CDT) at the University of Bristol in the UK.
As well as giving them valuable practical experience of creating an event featuring businesses in the burgeoning quantum sector, it also lets them build links with the very firms they – and the students and postdocs who attended – might end up working for.
A clever win-win if you like, with the day featuring talks, a panel discussion and a careers fair made up of companies such as Applied Quantum Computing, Duality, Hamamatsu, Orca Computing, Phasecraft, QphoX, Riverlane, Siloton and Sparrow Quantum.
IOP Publishing featured too, with Antigoni Messaritaki talking about her journey from researcher to senior publisher and Physics World features and careers editor Tushna Commissariat taking part in a panel discussion on careers in quantum.
The importance of communication and other “soft skills” was emphasized by all speakers in the discussion, but what struck me most was a comment by Carrie Weidner, a lecturer in quantum engineering at Bristol, who underlined that it’s fine – in fact important – to learn to fail.
“If you’re resilient and can think critically, you can do anything,” said Weidner, who is also director of the quantum-engineering CDT. She warned too of the dangers of generative AI, joking that “every time you use ChatGPT, your brain is atrophying”.

Another great talk was by Diya Nair, a computer-science undergraduate at the University of Birmingham, who is head of global outreach and UK ambassador for Girls in Quantum.
The organization is now active in almost 70 countries around the world, with the aim of “democratizing quantum education”. As Nair explained, Girls in Quantum does everything from arranging quantum-computing courses and hackathons to creating its crowdfunded quantum-computing game, Hop.
The event also included a discussion about taking quantum research “from concept to commercialization”. It featured Jack Russel Bruce from Universal Quantum, Euan Allen from eye-imaging tech firm Siloton, Joe Longden from Duality Quantum Photonics, and Stewart Noakes, who has mentored numerous companies over the years.
Noakes emphasized that all hi-tech firms have three main needs: talent, money and ideas. In fact, as he explained, companies can sometimes suffer from having too much money as well as too little, especially if they grow too fast and hire people on big salaries who might then need to be let go if funding dries up.
Bruce, though, was positive about the overall state of the quantum-tech sector. “For me, the future is bright,” he said. But as all speakers underlined, if you want to join the industry, make sure you’ve got good communication skills, an open-minded attitude – and a willingness to learn on the go.
- If you’d like more information about careers in quantum, do check out the 2026 Physics World Careers Guide (now in its 10th year) and explore Physics World Jobs, which has up-to-date vacancies across physics.
The post Pathways to a career in quantum: what skills do you need? appeared first on Physics World.
Metamaterial antennas enhance MR images of the eye and brain
Integrating metamaterials into radiofrequency antennas improves image sharpness and enables faster data acquisition using existing MRI scanners
The post Metamaterial antennas enhance MR images of the eye and brain appeared first on Physics World.

MRI is one of the most important imaging tools employed in medical diagnostics. But for deep-lying tissues or complex anatomic features, MRI can struggle to create clear images in a reasonable scan time. A research team led by Thoralf Niendorf at the Max Delbrück Center in Germany is using metamaterials to create a compact radiofrequency (RF) antenna that enhances image quality and enables faster MRI scanning.
Imaging the subtle structures of the eye and orbit (the surrounding eye socket) is a particular challenge for MRI, due to the high spatial resolution and small fields-of-view required, which standard MRI systems struggle to achieve. These limitations are generally due to the antennas (or RF coils) that transmit and receive the RF signals. Increasing the sensitivity of these antennas will increase signal strength and improve the resolution of the resulting MR images.
To achieve this, Niendorf and colleagues turned to electromagnetic metamaterials – artificially manufactured, regularly arranged structures made of periodic subwavelength unit cells (UCs) that interact with electromagnetic waves in ways that natural materials do not. They based the metamaterial UCs on a double-square split-ring resonator design, tailored for operation at a high magnetic field strength of 7.0 T.
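For context, the article does not quote an operating frequency, but an RF antenna for proton MRI at 7.0 T must be tuned near the proton Larmor frequency, which follows from the standard relation f = (γ/2π)B₀; the gyromagnetic ratio used below is the textbook proton value, and the snippet is simply a back-of-envelope check.

```python
# Back-of-envelope check (not from the paper): an RF resonator for 7.0 T
# proton MRI must resonate near the Larmor frequency f = (gamma / 2 pi) * B0.
GAMMA_OVER_2PI_H1 = 42.577  # MHz per tesla, standard proton gyromagnetic ratio

B0 = 7.0  # tesla
larmor_mhz = GAMMA_OVER_2PI_H1 * B0
print(f"Proton Larmor frequency at {B0} T: {larmor_mhz:.0f} MHz")  # ~298 MHz
```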
Metamaterials improve transmit–receive performance
In their latest study, led by doctoral student Nandita Saha and reported in Advanced Materials, the researchers created a metamaterial-integrated RF antenna (MTMA) by fabricating the UCs into a 5 × 8 array. They built two configurations: a planar antenna (planar-MTMA); and a version with a 90° bend in the centre (bend-MTMA) to conform to the human face. For comparison, they also built conventional counterparts without the metamaterial (planar-loop and bend-loop).
The researchers simulated the MRI performances of the four antennas and validated their findings via measurements at 7.0 T. Tests in a rectangular phantom showed that the planar-MTMA demonstrated between 14% and 20% higher transmit efficiency than the planar-loop (assessed via B₁+ mapping).
They next imaged a head phantom, placing planar antennas behind the head to image the occipital lobe (the part of the brain involved in visual processing) and bend antennas over the eyes for ocular imaging. For the planar antennas, B₁+ mapping revealed that the planar-MTMA generated around 21% (axial), 19% (sagittal) and 13% (coronal) higher intensity than the planar-loop. Gradient-echo imaging showed that planar-MTMA also improved the receive sensitivity, by 106% (axial), 94% (sagittal) and 132% (coronal).

The bend antennas exhibited similar trends, with B₁+ maps showing transmit gains of roughly 20% for the bend-MTMA over the bend-loop. The bend-MTMA also outperformed the bend-loop in terms of receive signal intensity, by approximately 30%.
“With the metamaterials we developed, we were able to guide and modulate the RF fields generated in MRI more efficiently,” says Niendorf. “By integrating metamaterials into MRI antennas, we created a new type of transmitter and detector hardware that increases signal strength from the target tissue, improves image sharpness and enables faster data acquisition.”
In vivo imaging
Importantly, the new MRI antenna design is compatible with existing MRI scanners, meaning that no new infrastructure is needed for use in the clinic. The researchers validated their technology in a group of volunteers, working closely with partners at Rostock University Medical Center.
Before use on human subjects, the researchers evaluated the MRI safety of the four antennas. All configurations remained well below the IEC’s specific absorption rate (SAR) limit. They also assessed the bend-MTMA (which showed the highest SAR) using MR thermometry and fibre optic sensors. After 30 min at 10 W input power, the temperature increased by about 1.5°C. At 5 W, the increase was below 0.5°C, well within IEC safety thresholds, so this lower power was used for the in vivo MRI exams.
The team first performed MRI of the eye and orbit in three healthy adults, using the bend-loop and bend-MTMA antennas positioned over the eyes. Across all volunteers, the bend-MTMA exhibited better transmit performance in the ocular region than the bend-loop.
The bend-MTMA antenna also generated larger intraocular signals than the bend-loop (assessed via T2-weighted turbo spin-echo imaging), with signal increases of 51%, 28% and 25% in the left eyes, for volunteers 1, 2 and 3, respectively, and corresponding gains of 27%, 26% and 29% for their right eyes. Overall, the bend-MTMA provided more uniform and higher-intensity signal coverage of the ocular region at 7.0 T than the bend-loop.
To further demonstrate clinical application of the bend-MTMA, the team used it to image a volunteer with a retinal haemangioma in their left eye. A 7.0 T MRI scan performed 16 days after treatment revealed two distinct clusters of structural change due to the therapy. In addition, one of the volunteer’s ocular scans revealed a sinus cyst, an unexpected finding that showed the diagnostic benefit of the bend-MTMA being able to image beyond the orbit and into the paranasal sinuses and inferior frontal lobe.
The team used the planar antennas to image the occipital lobe, a clinically relevant target for neuro-ophthalmic examinations. The planar-MTMA exhibited significantly higher transmit efficiency than the planar-loop, as well as higher signal intensity and wider coverage, enhancing the anatomical depiction of posterior brain regions.
“Clearer signals and better images could open new doors in diagnostic imaging,” says Niendorf. “Early ophthalmology applications could include diagnostic confirmation of ambiguous ophthalmoscopic findings, visualization and local staging of ocular masses, 3D MRI, fusion with colour Doppler ultrasound, and physio-metabolic imaging to probe iron concentration or water diffusion in the eye.”
He notes that with slight modifications, the new antennas could enable MRI scans depicting the release and transport of drugs within the body. Their geometry and design could also be tuned to image organs such as the heart, kidneys or brain. “Another pioneering clinical application involves thermal magnetic resonance, which adds a thermal intervention dimension to an MRI device and integrates diagnostic guidance, thermal treatment and therapy monitoring facilitated by metamaterial RF antenna arrays,” he tells Physics World.
The post Metamaterial antennas enhance MR images of the eye and brain appeared first on Physics World.
Laser-written glass plates could store data for thousands of years
Scientists at Microsoft Research tout a potential long-term alternative to standard digital archives
The post Laser-written glass plates could store data for thousands of years appeared first on Physics World.
Humans are generating more data than ever before. While most of these data do not need to be stored long-term, some – such as scientific and historical records – would ideally still be retrievable in decades, or even centuries. The problem is that modern digital archive systems such as hard disk drives do not last that long. This means that data must regularly be transferred to new media, which is costly and time-consuming.
A team at Microsoft Research now claims to have found a solution. By using ultrashort, intense laser pulses to “write” data units called phase voxels into glass chips, the team says it has created a medium that could store 4.8 terabytes (TB) of data error-free for more than 10,000 years – a span that exceeds the age of history’s oldest surviving written records.
Direct laser writing
The idea of writing data into glass or other durable media with lasers is not new. Direct laser writing, as it is known, involves focusing high-power pulses, usually just femtoseconds (10⁻¹⁵ s) long, on a three-dimensional region within a medium. This modifies the medium’s optical properties in that region, and each modified region becomes a data-storage unit known as a voxel, which is the 3D equivalent of a pixel.
Because the laser’s energy is focused on a very small volume, the voxels created with this method can be very densely packed. Changing the amplitude and polarization of the laser’s output changes what information gets encoded at each voxel, and an optical microscope can “read out” this information by picking up changes in the light as it passes through each modified region. In terms of the media used, glass is particularly promising because it is thermally and chemically stable and is robust to moisture and electromagnetic interference.
Direct laser writing does have some limitations, however. In particular, encoding information generally requires multiple laser pulses per voxel, restricting the technique’s throughput and efficiency.
Two types of voxel, one laser pulse
Microsoft Research’s “Project Silica” team says it overcame this problem by encoding information in two types of voxel: phase voxels and birefringent voxels. Both types involve modifying the refractive index of the medium, and thus the speed of light within it. The difference is that whereas phase voxels create an isotropic change in the refractive index, birefringent voxels create an anisotropic change by rotating the voxel in the plane of the 120-mm square, 2-mm-thick glass chip.
Crucially, both types of voxel can be produced using a single laser pulse. According to Project Silica team leader Richard Black, this makes the modified region smaller and more uniform, minimizing effects such as light scattering that can interfere with read-outs from neighbouring voxels. It also allows many voxel layers to be written into, and then read out from, a single glass chip. The result is a system that can generate up to 10 million voxels per second, which equates to 25.6 million bits of data per second (Mbit s⁻¹).
Performance of different types of glass
The Microsoft researchers studied two types of glass, both of which have better mechanical properties than ordinary window glass. In 301 layers of fused silica glass, they achieved a data density of 1.59 Gbit mm⁻³ using birefringent voxels, with a write throughput of 25.6 Mbit s⁻¹ and a write efficiency of 10.1 nJ per bit. In 258 layers of borosilicate glass, the data density reached 0.678 Gbit mm⁻³ using phase voxels. Here, the write throughput was 18.4 Mbit s⁻¹ and the write efficiency 8.85 nJ per bit.
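A quick back-of-envelope check, sketched below, shows how these figures relate to the headline 4.8 TB capacity. It assumes, purely for illustration, that the entire 120 mm × 120 mm × 2 mm plate volume is writable at the quoted raw density; the lower quoted capacity presumably reflects edge margins and error-correction overhead.

```python
# Rough consistency check (assumption: the full plate volume is writable at
# the quoted raw density; the 4.8 TB figure will be lower because of edge
# margins and error-correction overhead).
plate_volume_mm3 = 120 * 120 * 2          # 120-mm-square, 2-mm-thick chip
density_gbit_per_mm3 = 1.59               # fused silica, birefringent voxels

raw_capacity_tbit = plate_volume_mm3 * density_gbit_per_mm3 / 1e3
raw_capacity_tbyte = raw_capacity_tbit / 8
print(f"raw capacity: {raw_capacity_tbit:.1f} Tbit = {raw_capacity_tbyte:.1f} TB")
# ~45.8 Tbit, i.e. ~5.7 TB, the same order of magnitude as the quoted 4.8 TB

# Time implied by the quoted single-beam throughput of 25.6 Mbit/s:
write_days = 4.8e12 * 8 / 25.6e6 / 86400
print(f"time to fill 4.8 TB at 25.6 Mbit/s: {write_days:.0f} days")
```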
“The phase voxel discovery in particular is quite notable because it lets us store data in ordinary borosilicate glass, rather than pure fused silica; do it with a single laser pulse per voxel; and do it highly parallel in close proximity,” says Black. “That combination of cheaper material and much simpler and faster writing and reading was a genuinely exciting moment for us.”
The researchers also showed that they could directly inscribe the glass using four independent laser beams in parallel, further increasing the write speeds for both types of glass.
Surviving “benign neglect”
To determine how long these inscribed glass plates could store data, the team repeatedly heated them to 500 °C, simulating their long-term ageing at lower temperatures. The results of these experiments suggest that encoded data could be retrieved after 10,000 years of storage at 290 °C. However, Black acknowledges that this figure does not account for external effects such as mechanical stress or chemical corrosion that could degrade the glass and the data it stores. Another unaddressed challenge is that storage capacity and writing speed will both need to grow before the technology can compete with today’s data centres.
If these deficiencies can be remedied, Black thinks the clearest potential applications would be in national libraries and other facilities that store scientific data and cultural records. “It’s also compelling for cloud archives where data is written once and kept indefinitely,” Black says. He points out that the team has already demonstrated proofs of concept with Warner Bros., the Global Music Vault and the Golden Record 2.0 project, a “cultural time capsule” inspired by the literal golden records launched on the Voyager spacecraft in the 1970s.
A common factor across all these organizations, Black explains, is that they need media that can survive “benign neglect” – something he says Project Silica delivers. He adds that the project also provides what he calls operational proportionality, meaning that its costs are primarily a function of the operations performed on the data, not the length of time the data are kept. “This completely alters the way we think about keeping archival material,” he says. “Once you have paid to keep the data, there is little point in deleting it, and you might as well keep it.”
Microsoft began exploring direct laser data storage in glass nearly a decade ago thanks to team member Ant Rowstron, who recognized the potential of work being done by physicist Peter Kazansky and colleagues at the University of Southampton, UK. The latest version of the technique, which is detailed in Nature, grew out of that collaboration, and Black says its capabilities are limited only by the power and speed of the femtosecond laser being used. “We have now concluded our research study and are sharing our results so that others may build on our work,” he says.
The post Laser-written glass plates could store data for thousands of years appeared first on Physics World.
Ultrasound system solves the ‘unsticking problem’ in biomedical research
Impulsonics’ “surround sound” technology frees up living cells
The post Ultrasound system solves the ‘unsticking problem’ in biomedical research appeared first on Physics World.
“Surround sound for biological cells” is how Luke Cox describes the ultrasound technology that Impulsonics has developed to solve the “unsticking problem” in biomedical science. Cox is co-founder and chief executive of UK-based Impulsonics, which spun out of the University of Bristol in 2023.
He is also my guest in this episode of the Physics World Weekly podcast. He explains why living cells grown in a petri dish tend to stick together, and why this can be a barrier to scientific research and the development of new medical treatments.
The system uses an array of ultrasound transducers to focus sound so that it frees up and manipulates cells in a way that does not alter their biological properties. This is unlike chemical unsticking processes, which can change cells and impact research results.
We also chat about Cox’s career arc from PhD student to chief executive and explore opportunities for physicists in the biomedical industry.
The following articles are mentioned in the podcast:
- “Materials probed by ultrasound…” podcast with Bruce Drinkwater
- “Portable imaging system targets eye diseases…” podcast with Siloton
- “Holographic acoustic tweezers could be used to create 3D displays” research done in Bruce Drinkwater’s lab
The post Ultrasound system solves the ‘unsticking problem’ in biomedical research appeared first on Physics World.
Scientists are failing to disclose their use of AI despite journal mandates, finds study
Analysis finds that the use of AI in scientific writing is increasing
The post Scientists are failing to disclose their use of AI despite journal mandates, finds study appeared first on Physics World.
An analysis of more than 5.2 million papers in 5000 different journals has revealed a dramatic rise in the use of artificial intelligence (AI) tools in academic writing across all scientific disciplines, especially physics.
However, the analysis has revealed a big gap between the number of researchers who use AI and those who admit to doing so – even though most scientific journals have policies requiring the use of AI to be disclosed.
Carried out by data scientist Yi Bu from Peking University and colleagues, the analysis looks at papers that are listed in the OpenAlex dataset and were published between 2021 and 2025.
To assess the impact of editorial guidelines introduced in response to the growing use of generative AI tools such as ChatGPT, they examined journal AI-writing policies, looked at author disclosures and used AI to see if papers had been written with the help of technology.
The AI detection analysis reveals that the use of AI writing tools has increased dramatically across all scientific disciplines since 2023. It also finds that 70% of journals have adopted AI policies, which primarily require authors to disclose the use of AI-writing tools.
IOP Publishing, which publishes Physics World, for example, has a journals policy that supports authors who use AI in a “responsible and appropriate” manner. It encourages authors, however, to be “transparent about their use of any generative AI tools in either the research or the drafting of the manuscript”.
A new framework
But in the new study, a full-text analysis of 75 000 papers published since 2023 reveals that only 76 articles (about 0.1% of the total) explicitly disclosed the use of AI writing tools.
In addition, the study finds no significant difference in the use of AI between journals that have disclosure policies and those that do not, which suggests that disclosure requirements are being ignored – what the authors call a “transparency gap”.
The study also finds that researchers from non-English-speaking countries are more likely to rely on AI writing tools than native English speakers. Increases in the use of AI writing tools are found to be particularly rapid in journals with high levels of open-access publishing.
The authors now call for a re-evaluation of ethical frameworks to foster responsible AI integration in science. They state that prohibition or disclosure requirements are insufficient to regulate AI use, with their results showing that researchers are not complying with policies.
The authors argue that instead of “opposition and resistance”, “proactive engagement and institutional innovation” is needed “to ensure AI technology truly enhances the value of science”.
The post Scientists are failing to disclose their use of AI despite journal mandates, finds study appeared first on Physics World.
The humanity of machines: the relationship between technology and our bodies
Anita Chandran reviews The Body Digital: a Brief History of Humans and Machines from Cuckoo Clocks to ChatGPT by Vanessa Chang
The post The humanity of machines: the relationship between technology and our bodies appeared first on Physics World.
Humanity has had a complicated relationship with machines and technology for centuries. While we created these inventions to make our lives easier, and have become heavily reliant upon them, we have often feared their impact on society.
In her debut book, The Body Digital: a Brief History of Humans and Machines from Cuckoo Clocks to ChatGPT, Vanessa Chang tells the story of this symbiotic partnership, covering tools as diverse as the self-playing piano and generative AI products. The short book combines creative storytelling, an inward look at our bodies and interpersonal relationships, and a detailed history of invention. Chang – who is the director of programmes at Leonardo, the International Society for the Arts, Sciences, and Technology in California – offers us a framework for examining future worlds based on the relationship between humanity and machines.
“Technology” has no easy definition. The Body Digital therefore takes a broad approach, looking at software, machines, infrastructure and tools. Chang examines objects as mundane as the pen and as complex as the road networks that define our cities. She focuses on the interplay between machine and human: how tools have lightened our load and become embedded in our behaviour. In doing this she asks the reader: is it possible for the human body to extract itself from technology?
Each chapter of the book centres on a different part of the human anatomy – hand, voice, ear, eye, foot, body and mind – looking at the historical relationship between that body part and technology. Chang follows this thread through to the modern day and the large-scale impact these technologies have had on the development of our communities, communications and social structures. The chapters are a vehicle for Chang to present interesting pieces of history and discussions about society and culture. Her explanations are tightly knit, and the book covers huge ground in its relatively concise page count.
Chang avoids “doomerism”, remaining even-handed about our reservations towards technological advancement. She is careful in her discussion of new technology, particularly those that are often fraught in the public discourse, such as the use of generative AI in creating art, and the potential harms of facial-recognition software.
She includes genuine concerns – like biases creeping into training data for large language models – but mitigates these fears by discussing how technologies have become enmeshed in human culture through history. Our fear of some technologies has been unfounded – take, for example, the idea that the self-playing piano would supersede live piano concerts. These debates, Chang argues, have happened throughout the history of technology, and some of the same arguments from the past can easily be applied to future technology.
While this commentary is often thought-provoking, it sometimes doesn’t go as far as it might. There is relatively limited discussion throughout the book about the technological ecosystem we currently live in and how that might impact our level of optimism about the future. In particular, the topics of human labour being supplanted by machine labour, and the impact of tech monoliths like Apple and Google, receive relatively little attention.
In one example, Chang discusses the ways in which “telecommunication technologies might serve as channels into the afterlife”, allowing us to use technology to artificially recreate the voices of our loved ones after death. While the book contains a full discussion of how uncanny and alarming this type of “artistic necrophilia” might be, Chang tempers fear by pointing out that by being careful with our data, careful with our digital selves, we might be able to “mitigate the transformation of [our] voices into pure commodities”. However, the discussion of who controls our data, the relationship between data and capital, and the level of control we have over how our data are used is somewhat limited.
Poetic technology
The difference between offering interesting ideas and overexplaining is a hard needle to thread, and one that Chang navigates successfully. One striking feature of The Body Digital is the quality of the prose. Chang has a background in fiction writing and her descriptions reflect this. An automaton is anthropomorphized as a “petite, barefoot boy” with a “cloud of brown hair”; and the humble footpath is described as “veer[ing] at a jaunty angle from the pavement, an unruly alternative to concrete”. As a consequence, her ideas are interesting and memorable, making the book readable and often moving.
Particularly impressive is Chang’s attitude to exposition, which mimics fiction’s age-old adage of “show, don’t tell”. She gives the reader enough information to learn something new in context and ask follow-up questions, without banging the reader over the head with an answer to these questions. The book mimics the same relationship between the written word and human consciousness that Chang discusses within it. The Body Digital marinates with the reader in the way any good novel might, while teaching them something new.
The result is a poetic and well-observed text, which offers the reader a different way of understanding humanity’s relationship with technology. It reminds us that we have coexisted with machines throughout the history of our species, and that they have been helpful and positively shaped the direction of our world. While she covers too much ground to gaze in any one direction for too long, the reader is likely to come away enriched and perhaps even hopeful. And, as Chang points out, we have the opportunity to shape the future of technology, by “attending to the rich, idiosyncratic intelligence of our bodies”.
- 2025 Melville House Publishing 256pp £14.99 pb / £9.49 ebook
The post The humanity of machines: the relationship between technology and our bodies appeared first on Physics World.
Making multipartite entanglement easier to detect
New advances in entanglement witnesses allow researchers to verify genuine multipartite entanglement even in noisy, high‑dimensional and computationally relevant quantum states
The post Making multipartite entanglement easier to detect appeared first on Physics World.
Genuine multipartite entanglement is the strongest form of entanglement, where every part of a quantum system is entangled with every other part. It plays a central role in advanced quantum tasks such as quantum metrology and quantum error correction. To detect this deep form of entanglement in practice, researchers often use entanglement witnesses – fast, experimentally friendly tests that certify entanglement whenever a measurable quantity exceeds a certain bound.
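To make the idea concrete, consider a textbook example rather than the specific construction introduced in the paper: a fidelity-based witness for an N-qubit Greenberger–Horne–Zeilinger (GHZ) state,

W = \tfrac{1}{2}\,\mathbb{1} - |\mathrm{GHZ}_N\rangle\langle\mathrm{GHZ}_N| , \qquad \operatorname{Tr}(W\rho) < 0 \;\Rightarrow\; \rho \text{ is genuinely multipartite entangled.}

In other words, measuring a GHZ fidelity above one half is enough to certify genuine multipartite entanglement, without reconstructing the full quantum state.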
In this work, the researchers significantly extend previous witness‑construction methods to cover a much broader family of multipartite quantum states. Their approach is built within the multi‑qudit stabiliser formalism, a powerful framework widely used in quantum error correction and known for describing large classes of entangled states, both pure and mixed. They generalise earlier results in two major directions: (i) to systems with arbitrary prime local dimension, going far beyond qubits, and (ii) to stabiliser subspaces, where the stabiliser defines not just a single state but an entire entangled subspace.
This generalisation allows them to construct witnesses tailored to high‑dimensional graph states and to stabiliser‑defined subspaces, and they show that these witnesses can be more robust to noise than those designed for multiqubit systems. In particular, witnesses tailored to GHZ‑type states achieve the strongest resistance to white noise, and in some cases the authors identify the most noise‑robust witness possible within this construction. They also demonstrate that stabiliser‑subspace witnesses can outperform graph‑state witnesses when the local dimension is greater than two.
Overall, this research provides more powerful and flexible tools for detecting genuine multipartite entanglement in noisy, high‑dimensional and computationally relevant quantum systems. It strengthens our ability to certify complex entanglement in real‑world quantum technologies and opens the door to future extensions beyond the stabiliser framework.
Read the full article
Entanglement witnesses for stabilizer states and subspaces beyond qubits
Jakub Szczepaniak et al 2025 Rep. Prog. Phys. 88 117602
Do you want to learn more about this topic?
Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)
The post Making multipartite entanglement easier to detect appeared first on Physics World.
Resolving the spin of sound
Researchers show how sound waves can hold conserved spin angular momentum, resolving a long‑standing theoretical debate
The post Resolving the spin of sound appeared first on Physics World.
Acoustic waves are usually thought of as purely longitudinal, moving back and forth in the direction the wave is travelling, with no intrinsic rotation and therefore no spin (spin‑0). Recent work has shown that acoustic waves can in fact carry local spin‑like behaviour. However, until now, the total spin angular momentum of an acoustic field was believed to vanish, with the local positive and negative spin contributions cancelling each other to give an overall global spin‑0. In this work, the researchers show that acoustic vortex beams can carry a non‑zero longitudinal spin angular momentum when the beam is guided by certain boundary conditions. This overturns the long‑held assumption that longitudinal waves cannot possess a global spin degree of freedom.
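For background, the local spin density discussed in the recent acoustic‑spin literature is often written (conventions vary, and this expression is quoted as general context rather than taken from the paper itself) as

\mathbf{s} = \frac{\rho_0}{2\omega}\,\operatorname{Im}\!\left(\mathbf{v}^{*} \times \mathbf{v}\right),

where ρ₀ is the equilibrium mass density, ω the angular frequency and v the complex velocity‑field amplitude. The question the new work addresses is whether the volume integral of such a density can be genuinely non‑zero and conserved.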
Using a self‑consistent theoretical framework, the researchers derive the full spin, orbital and total angular momentum of these beams and reveal a new kind of spin–orbit interaction that appears when the beam is compressed or expanded. They also uncover a detailed relationship between the two competing descriptions of angular momentum in acoustics: the canonical (Minkowski) and kinetic (Abraham) forms. They demonstrate that only the canonical‑Minkowski form is truly conserved and directly tied to the beam’s azimuthal quantum number, which describes how the wave twists as it travels.
The team further demonstrates this mechanism experimentally using a waveguide with a slowly varying cross‑section. They show that the effect is not limited to this setup: it can also arise in evanescent acoustic fields and even in other wave systems such as electromagnetism. These results introduce a missing fundamental degree of freedom in longitudinal waves, offer new strategies for manipulating acoustic spin and orbital angular momentum, and open the door to future applications in wave‑based devices, underwater communication and particle manipulation.
Read the full article
Longitudinal acoustic spin and global spin–orbit interaction in vortex beams
Wei Wang et al 2025 Rep. Prog. Phys. 88 110501
Do you want to learn more about this topic?
Acoustic manipulation of multi-body structures and dynamics by Melody X Lim, Bryan VanSaders and Heinrich M Jaeger (2024)
The post Resolving the spin of sound appeared first on Physics World.
Quantum memories could help make long-baseline optical astronomy a reality
Single-photon interferometry achieved over 1.5 km
The post Quantum memories could help make long-baseline optical astronomy a reality appeared first on Physics World.
Quantum-entangled sensors placed over a kilometre apart could allow interferometric measurements of optical light with single photon sensitivity, experiments in the US suggest. While this proof-of-principle demonstration of a theoretical proposal first made in 2012 is not yet practically useful for astronomy, it marks a significant step forward in quantum sensing.
Radio telescopes are often linked together to provide more detailed images with better angular resolution than would otherwise be possible. The Event Horizon Telescope array, for example, performs very long baseline interferometry of signals from observatories on four continents to take astrophysical images such as the first picture of a black hole in 2019. At shorter wavelengths, however, much weaker signals are often parcelled into higher-energy photons. “You start getting this granularity at the single photon level,” says Pieter-Jan Stas at Harvard University.
According to textbook quantum mechanics, one can create an interferometric image from single photons by recombining their paths at a single detector – provided that their paths are not measured before then. This principle is used in laboratory spectroscopy. In astronomical observations, however, attempting to transport single photons from widely spread telescopes to a central detector would almost certainly result in them being lost. The baseline of infrared and optical telescopes is therefore restricted to about 300 m.
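As a rough illustration (the numbers here are generic, not taken from the new experiment), an interferometer’s angular resolution scales as θ ≈ λ/B, where λ is the wavelength and B the baseline. At λ = 500 nm,

\theta \approx \frac{\lambda}{B} = \frac{500\ \mathrm{nm}}{300\ \mathrm{m}} \approx 1.7\ \mathrm{nrad} \approx 0.3\ \mathrm{mas}, \qquad \frac{500\ \mathrm{nm}}{1.5\ \mathrm{km}} \approx 0.07\ \mathrm{mas},

so a five-fold longer baseline gives a five-fold sharper resolution – provided the photons can be made to interfere at all.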
In 2012, theorist Daniel Gottesman, then at the Perimeter Institute for Theoretical Physics in Canada, and colleagues proposed using a central single source of entangled photons as a quantum repeater to generate entanglement between two detection sites, putting them into the same quantum state. The effect of an incoming photon on this combined state could therefore be measured without having to recombine the paths and collect the photon at a central detector.
Hidden information
“In reality, the photon will be in a superposition of arriving at both of the detectors,” says Stas. “That’s where this advantage comes from – you have this photon that is delocalized and arrives at both the left and the right station – so you truly have this baseline that helps you with improving your resolution, but to do this you have to keep the ‘which path’ information hidden.”
The 2012 proposal was not thought to be practical, because it required distributing entanglement at a rate comparable with the telescope’s spectral bandwidth. In 2019, however, Harvard’s Mikhail Lukin and colleagues proposed integrating a quantum memory into the system. In the new research, they demonstrate this in practice.
The team used qubits made from silicon–vacancy centres in diamond. These can be very long lived because the spin of the centre’s electron (which interacts with the photon) is mapped to the nuclear spin, which is very stable. The researchers used a central laser as a coherent photon source to generate heralded entanglement, with the herald certifying that the qubits were “event-ready”. “It’s not like you have to receive the space signal to be simultaneous with the arrival of the photon,” says team member Aziza Suleymanzade at the University of California, Berkeley. “In our case, we distribute entanglement, and it has some coherence time, and during that time you can detect your signal.”
Using two detectors placed in adjacent laboratories and synthetic light sources, the researchers demonstrated photon detection above vacuum fluctuations in fibres over 1.5 km in length. They acknowledge that much work remains before this can be viable in practical astronomy, such as a higher rate of entanglement generation, but Stas says that “this is one step towards bringing quantum techniques into sensing”.
Similar work in China
The research is described in Nature. Researchers in China led by Jian-Wei Pan have achieved a similar result, but their work has yet to be peer reviewed.
Yujie Zhang of the University of Waterloo in Canada points out that Lukin and colleagues have done similar work on distributed quantum communication and the quantum internet. “The major difference is that for most of the original protocols, what people care about is trying to entangle different quantum memories in the quantum network so then they can do gates on those quantum memories,” he says. “There’s nothing about extra information from the environment… This one is different in that they have to get the information mapped from the starlight to their quantum memory.” He notes several difficulties acknowledged by the researchers – such as the fact that vacancy centres are very narrowband – but says that now people know the system can work, they can work to show that it can beat classical systems in practice.
“I think this is definitely a step towards [realizing the protocol envisaged in 2012],” says Gottesman, now at the University of Maryland, College Park. “There have been previous experiments where they generated the entanglement and they did some interference but they didn’t have the repeater aspect, which is the real value-added aspect of doing quantum-assisted interferometry. Its rate is still well short of what you’d need to have a functioning telescope, but this is putting one of the important pieces into place.”
The post Quantum memories could help make long-baseline optical astronomy a reality appeared first on Physics World.
UK physics leaders express ‘deep concern’ over funding cuts in letter to science minister Patrick Vallance
Heads of almost 60 physics departments sign letter saying UK funding cuts are causing “reputational risk”
The post UK physics leaders express ‘deep concern’ over funding cuts in letter to science minister Patrick Vallance appeared first on Physics World.
The heads of university physics departments in the UK have published an open letter expressing their “deep concern” about funding changes announced late last year by UK Research and Innovation (UKRI), the umbrella organization for the UK’s research councils.
Addressed to science minister Patrick Vallance, the letter says the cuts are causing “reputational risk” and calls for “strategic clarity and stability” to ensure that UK physics can thrive.
It has so far been signed by 58 people who represent 45 different universities, including Birmingham, Bristol, Cambridge, Durham, Imperial College, Liverpool, Manchester and Oxford.
The letter says that the changes at UKRI “risk undermining science’s fundamental role in improving our prosperity, health and quality of life, as well as delivering sustainable growth through innovation, productivity and scientific leadership”.
The signatories warn that the UK’s international standing in physics is “a strategic asset” and that areas such as particle physics, astronomy and nuclear physics are “especially important”.
Raising concerns
The decision by the heads of physics to write to Vallance comes in the wake of UKRI stating in December that it will be adjusting how it allocates government funding for scientific research and infrastructure.
The Science and Technology Facilities Council (STFC), which is part of UKRI, stated that projects would need to be cut given inflation and rising energy costs, as well as “unfavourable movements in foreign exchange rates” that have increased STFC’s annual costs by over £50m a year.
The STFC noted that it would need to reduce spending from its core budget by at least 30% over 2024/2025 levels while also cutting the number of projects financed by its infrastructure fund.
The council has already said two UK national facilities – the Relativistic Ultrafast Electron Diffraction and Imaging facility and a mass spectrometry centre dubbed C‑MASS – will now not be prioritized.
In addition, two international particle-physics projects will not be supported: a UK-led upgrade to the LHCb experiment at CERN as well as a contribution to the Electron-Ion Collider at the Brookhaven National Laboratory that is currently being built.
Philip Burrows, director of the John Adams Institute for Accelerator Science at the University of Oxford, who is one of the signatories of the letter, told Physics World that the cuts are “like buying a Formula-1 car but not being able to afford the driver”.
Burrows admits that the STFC has been hit “particularly hard” by its flat-cash settlement, given that a large fraction of its expenditure goes on the UK’s subscriptions to international facilities and on operating the UK’s flagship national facilities.
But because most of the rest of the STFC’s budget supports scientists to do research at those facilities, he is concerned that the funding cuts will fall disproportionately on the science programme.
“Constraining these areas risks weakening the very talent pipeline on which the UK’s innovation economy depends,” the letter states. “Fundamental physics also delivers substantial public engagement and cultural impact, strengthening public support for science and reinforcing the UK’s reputation as a global scientific leader.”
The signatories also say they are “particularly concerned” about the UK’s capacity to lead the scientific exploitation of major international projects. “An abrupt pause in funding for key international science programmes risks damaging UK researchers’ competitive advantage into the 2040s,” they note.
The letter now calls on the government to work with UKRI and STFC to “stabilize” curiosity-driven grants for physics within STFC “at a minimum of flat funding in real terms” as well as protect postdocs, students and technicians from the cuts.
It also calls on the UK to develop a long-term strategy for infrastructure and urges the government to address facilities cost pressures through “dedicated and equitable mechanisms so that external shocks do not singularly erode the UK’s research base in STFC-funded research areas”.
The news comes as Michele Dougherty today formally stepped down from her role as IOP president. Dougherty, who also holds the position of executive chair of the STFC, had previously stepped back from presidential duties on 26 January due to a conflict of interest.
Paul Howarth, who has been IOP president-elect since September, will now become IOP president.
The post UK physics leaders express ‘deep concern’ over funding cuts in letter to science minister Patrick Vallance appeared first on Physics World.
Ancient reversal of Earth’s magnetic field took an extraordinarily long time
Field-flipping event 40 million years ago in the Eocene epoch lasted 70,000 years
The post Ancient reversal of Earth’s magnetic field took an extraordinarily long time appeared first on Physics World.
The Earth’s magnetic poles have reversed 540 times over the past 170 million years. Usually, these reversals are relatively speedy in geological terms, taking around 10,000 years to complete. Now, however, scientists in the US, France and Japan have found evidence of much slower reversals deep in Earth’s geophysical past. Their findings could have important implications for our understanding of Earth’s climate and evolutionary history.
Scientists think the Earth’s magnetic field arises from a dynamo effect created by molten metal circulating inside the planet’s outer core. Its consequences include the bubble-like magnetosphere, which shields us from the solar wind and cosmic radiation that would otherwise erode our atmosphere.
From time to time, this field weakens, and the Earth’s magnetic north and south poles switch places. This is known as a geomagnetic reversal, and we know about it because certain types of terrestrial rocks and marine sediment cores contain evidence of past reversals. Judging from this evidence, reversals usually take a few thousand years, during which time the poles drift before settling again on opposite sides of the globe.
Looking into the past
Researchers led by Yuhji Yamamoto of Kochi University, Japan and Peter Lippert at the University of Utah, US, have now identified two major exceptions to this rule. Drawing on evidence obtained during the Integrated Ocean Drilling Program expedition in 2012, they say that around 40 million years ago, during the Eocene epoch, the Earth experienced two reversals that took 18,000 and 70,000 years.
The team based these findings on cores of sediment extracted off the coast of Newfoundland, Canada, up to 250 metres below the seabed. These cores contain crystals of magnetite that were produced by a combination of ancient microorganisms and other natural processes. The iron oxide particles within these crystals align with the polarity of the Earth’s magnetic field at the time the sediments were deposited. Because marine sediments are far less affected by erosion and weathering than sediments onshore, Yamamoto says the information they preserve about past Earth environments – including geomagnetic conditions – is exceptionally clean.
Significance for evolutionary history
The team says the difference between a geomagnetic reversal that takes 10,000 years and one that takes 70,000 years is significant because prolonged intervals of weaker geomagnetic fields would have exposed the Earth to higher amounts of cosmic radiation for longer. The effects on living creatures could have been devastating, says Lippert. As well as higher rates of genetic mutations due to increased radiation, he points out that organisms from bacteria to birds use the Earth’s magnetic field while navigating. “A lower strength field would create sustained pressures on these organisms to adapt,” he says.
If humans had existed at the time of these reversals, the effects on our species could have been similarly profound. “Modern humans (Homo sapiens) are thought to have begun dispersing out of Africa only about 50,000 years ago,” Yamamoto observes. “If a geomagnetic reversal can persist for a period comparable to – or even longer than – this timescale, it implies that the Earth’s environment could undergo substantial and continuous change throughout the entire period of human evolution.”
Although our genetic ancestors dodged that particular bullet, Yamamoto thinks the team’s findings, which are published in Nature Communications Earth & Environment, offer a valuable perspective on how evolution and environmental change could interact in the future. “This period corresponds to an epoch when Earth was far warmer than it is today, and when Greenland is thought to have been a truly ‘green land’,” he explains. “We also know that atmospheric CO₂ concentrations during this era were comparable to levels projected for the end of this century, making it an important ‘climate analogue’ for understanding near‑future climate conditions.”
The discovery could also have more direct implications for future life on Earth. The magnitude of the Earth’s magnetic field has decreased by around 5% in each century since records began. This decrease, combined with the slow drift of our current magnetic North Pole towards Siberia, could indicate that we are in the early stages of a new geomagnetic reversal. Re‑evaluating the duration of such reversals is thus not only an issue for geophysicists, Yamamoto says. It’s also an important opportunity to reconsider fundamental questions about how we should coexist with our planet and how we ought to confront a continually changing environment.
Motivation for future studies
John Tarduno, a geophysicist at the University of Rochester, US, who was not involved in the study, describes it as “outstanding” work that “documents an exciting discovery bearing on the nature of magnetic shielding through time and the geomagnetic reversal process”. He agrees that reduced shielding could have had biotic effects, and adds that the discovery of long reversal transitions could influence scientific thinking on the statistics of field reversals – including questions of whether the field retains some “memory” of previous events. “This new study will provide motivation to examine reversal transitions at very high resolution,” Tarduno says.
For their next project, Yamamoto and colleagues aim to use sequences of lava flows in Iceland to analyse how the Earth’s magnetic field evolved. Lippert’s team, for its part, will be studying features called geomagnetic excursions that appear in both deep sea and terrestrial sediments. Such excursions are evidence of short-lived, incomplete attempts at field reversals, and Lippert explains that they can be excellent stratigraphic markers, helping scientists correlate records on geological timescales and compare them with samples taken from different parts of the world. “Excursions, like long reversals, can inform our understanding of what ultimately causes a geomagnetic field reversal to start and persist to completion,” he says.
The post Ancient reversal of Earth’s magnetic field took an extraordinarily long time appeared first on Physics World.
Focusing on fusion: Debbie Callahan talks commercial laser fusion
Plasma physicist Debbie Callahan, chief strategy officer at Focused Energy, talks to Hamish Johnston about her work in laser fusion research
The post Focusing on fusion: Debbie Callahan talks commercial laser fusion appeared first on Physics World.

With the world’s energy demands increasing, and our impact on the climate becoming ever clearer, the search is on for greener, cleaner energy production. That’s why research into fusion energy is undergoing something of a renaissance.
Construction of the International Thermonuclear Experimental Reactor (ITER) in France – the world’s largest fusion experiment – is currently under way, while there are numerous other large-scale facilities and academic research projects too. There has also been a rise in the number of smaller commercial companies joining the race.
One person at the forefront of fusion research is Debbie Callahan – a plasma physicist who spent 35 years working at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in the US. She is now chief strategy officer at Focused Energy, a laser-fusion firm based in Germany and California, which is trying to generate energy from the laser-driven fusion of hydrogen isotopes.
Callahan recently talked to Physics World online editor Hamish Johnston about working in the fusion sector, Focused Energy’s research and technology, and the career opportunities available. The following is an edited extract of their conversation, which you can hear in full on the Physics World Weekly podcast.
How does NIF’s approach to fusion differ from that taken by magnetic confinement facilities such as ITER?
To get fusion to happen, you need three elements that we sometimes call the triple product. You need a certain amount of density in your plasma, you need temperature, and you need time. The product of those has to be over a certain value.
Magnetic fusion and inertial fusion are kind of the opposite of each other. In a magnetic fusion system like ITER, you have a low-density plasma, but you hold it for a long time. You do that by using magnetic fields that trap the plasma and keep it from escaping.
In inertial fusion – like at NIF – it’s the opposite. You don’t hold the plasma together at all, it’s only held by its own inertia, and you have a very high density for a short time. In both cases, you can make fusion happen.
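For reference, this condition can be written out explicitly. A commonly quoted ignition threshold for deuterium–tritium fuel (a ballpark figure, not specific to ITER or NIF) is

n\,T\,\tau_E \gtrsim 3\times 10^{21}\ \mathrm{keV\,s\,m^{-3}},

where n is the plasma density, T its temperature and τ_E the energy confinement time. Magnetic confinement reaches this with a tenuous plasma held for seconds; inertial confinement does it with an extremely dense plasma confined for well under a nanosecond.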
What is the current state of the art at NIF, in terms of how much energy you have to put in to achieve fusion versus how much you get out?
To date, the best shot at NIF – by which I mean an individual, high-energy laser bombardment of the target capsule – occurred during an experiment in April 2025, which had a target gain of about 4.1. That means that they got out 4.1 times the amount of energy that they put in. The incident laser energy for those shots is around two megajoules, so they got out about eight megajoules.
This is a tremendous accomplishment that’s taken decades to get to. But to make inertial fusion energy successful and use it in a power plant, we need significantly higher gains of more like 50 to 100.

Can you explain Focused Energy’s approach to fusion?
Focused Energy was founded in July 2021, and has offices in the US and Germany. Just a month later, at NIF, we achieved fusion ignition, which is when the fusion fuel becomes hot enough for the reactions to sustain themselves through their own internal heating (it is not the same as gain).
At NIF, lasers are fired into a small cylinder of gold or depleted uranium and the energy is converted into X-rays, which then drive the capsule. It’s what’s called laser indirect drive. At Focused Energy, however, we’re directly driving the capsule. The laser energy is put directly on the capsule, with no intermediate X-rays.
The advantage of this approach is that converting laser energy to X-rays is not very efficient. It makes it much harder to get the high target gains that we need. At Focused Energy, we believe that direct drive is the best option for fusion energy to get us to a gain of over 50.
So is boosting efficiency one of your key goals to make fusion practical at an industrial level?
Yes, exactly. You have to remember that NIF was funded for national security purposes, not for fusion energy. It wasn’t designed to be a power plant – the goal was just to generate fusion energy for the first time.
In particular, the laser at NIF is less than 1% efficient but we believe that for fusion power generation, the laser needs to be about 10% efficient.
So one of the big thrusts for our company is to develop more efficient lasers that are driven by diodes – called diode-pumped solid-state lasers.
Can you tell us about Focused Energy’s two technologies called LightHouse and Pearl Fuel?
LightHouse is our fusion pilot plant. When operational, it will be the first power plant to produce engineering gain greater than one, meaning it will produce more energy than it took to drive it. In other words, we’ll be producing net electricity.
For NIF, in contrast, gain is the amount of energy out relative to the amount of laser energy in. But the laser is very inefficient, so the amount of electricity they had to put in to produce that eight megajoules of fusion energy is a lot.
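A simplified back-of-the-envelope version of this argument (illustrative numbers only, ignoring the thermal-to-electric conversion step and other plant loads): the electricity-level gain is roughly the laser’s wall-plug efficiency multiplied by the target gain,

\eta_{\mathrm{laser}} \times G_{\mathrm{target}} \approx 0.01 \times 4.1 \approx 0.04 \qquad \text{versus} \qquad 0.1 \times 50 = 5,

the first product using NIF-like numbers and the second a 10% efficient laser with a target gain of 50 – which is why both a more efficient laser and a much higher target gain are needed before net electricity becomes plausible.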
Meanwhile, Pearl is the capsule the laser is aimed at in our direct drive system. It’s filled with deuterium–tritium fuel derived from sea water and lithium.

How do you develop the capsule to absorb the laser energy and give as much of it to the fuel as possible?
The development of the capsule for a fusion power plant is quite complicated. First, we need it to be a perfect sphere so it compresses spherically. The materials also need to efficiently absorb the laser light so you can minimize the size of that laser.
You have to be able to cheaply and quickly mass produce these targets too. While NIF does 400 shots per year, we will need to do about 900,000 shots a day – about 10 per second. We’ll also have to efficiently remove the exploded target material from the reactor chamber so that it can be cleared for the next shot.
It’s a very complicated design that needs to bring together all the pieces of the power plant in a consistent way.
When you are designing these elements, what plays a bigger role – computer simulations or experiments?
Computer simulations play a large part in developing these designs. But one of the lessons that I learned from NIF was that, although the simulation codes are state of the art, you need very precise answers, and the codes are not quite good enough – experimental data play a huge role in optimizing the design. I expect the same will be true at Focused Energy.
A third factor that’s developing is artificial intelligence (AI) and machine learning. In fact, at Livermore, a project working on AI contributed to achieving gain for the first time in December 2022. I only see AI’s role in fusion getting bigger, especially once we are able to do higher repetition rate experiments, which will provide more training data.
What intellectual property (IP) does Focused Energy have in addition to that for the design of the Pearl target and the LightHouse plant?
We also have IP in the design of the lasers – they are not the same lasers as used at NIF. And I think there’ll be a lot of IP around how we fabricate the targets. After all, it’s pretty complicated to figure out how to build 900,000 targets a day at a reasonable cost.
We’ll see a lot of IP coming out of this project in those areas, but there’s also the act of putting it all together. How we integrate these things in order to make a successful plant is important.
What are the challenges of working with deuterium and tritium as materials for fusion?
We chose deuterium and tritium because they are the easiest elements to fuse, and have been successfully demonstrated as fusion fuel by NIF.
Deuterium can be found naturally in sea water, but getting tritium – which is radioactive – is more complicated. We breed it from lithium. Our reactor designs have lithium in them, and the neutrons from the fusion reactions breed the tritium.
Making sure that we have enough tritium, and figuring out how to extract that material to use it for future shots, is a big task. We have to be able to breed enough tritium to keep the plant going.
To work on this, we have a collaboration funded by the US Department of Energy to work with Savannah River National Lab in South Carolina. They have a lot of expertise in designing these tritium-extraction systems.
How will you capture the heat from the deuterium–tritium fusion reaction?
We will use a conventional steam cycle to convert the heat into electricity. It’s funny – we’ll have this very hi-tech way of producing heat, but at the end of the day, we will use a traditional system to produce the electricity from that heat.
So what’s the timeline on development?
Our plan is to have a pilot plant up by the end of the 2030s. It’s a fairly aggressive timeline given the things that we have to do. But that’s part of being a start-up – we have to take some risks and try to move quickly to achieve our goal.
To help that we have, in my view, a superpower – we have one foot in Europe and one foot in the US. There are a lot of opportunities between the two continents to partner with other companies, universities and governments. I think that makes us strong because we have access to some of the best talent from around the world.
How does working at Focused Energy compare with life as an academic at Lawrence Livermore?
There are a lot of similarities. My role now is to bring the knowledge and skills I learned at NIF to Focused Energy, so it’s been a natural transition.
In fact, there was a lot of pressure working at NIF. We were trying to move very quickly, so it’s actually very similar to working in a start-up like Focused Energy.
One of the big differences is the level of bureaucracy. Working for a government-funded lab meant there were lots of rules and paperwork, which takes up your time and you don’t always see the value in it.
In contrast, working for a small start-up means we can move more quickly because we don’t have as many of those kinds of constraints. Personally, I find that great because it leaves more time for the fun and interesting things – like trying to get fusion on the grid.
Are you still involved in academic research in any way?
As a firm, we are still out there collaborating with academics. Last year, for example, we gave four separate presentations at the American Physical Society Division of Plasma Physics meeting.

I feel very strongly about peer review. Of course, publishing isn’t our number one priority, but we need feedback from others. We’re trying to do something that no-one’s done before, so it’s important to have our colleagues give us feedback on what we’re doing, point out mistakes we’re making or things we’re forgetting.
Working with universities and national labs in both Europe and the US is vital. Communicating with others in the field is important for us to get to where we want to go.
And of course, being an active part of the fusion community is good for recruitment too. We regularly give presentations at conferences that students attend. We meet those students and they learn about our work – and they might be future employees for our company.
What’s your advice for early-career physicists keen on joining the fusion industry?
There are so many opportunities right now, especially compared to the start of my career when the work was mainly just at universities or national labs. Nowadays, there are a lot of companies in the sector. Not all of them will survive because there’s only so much money, but there are still lots of opportunities. If you’re interested in fusion energy, go for it.
The field is always developing. There’s new stuff happening every day – and new problems. So if you like problem-solving, it’s great, especially if you want to do something good for the world.
There are also opportunities for people who are not plasma physicists. At Focused Energy we have people across so many fields – those who work on lasers, others who work on reactor design, some developing the AI and machine learning, and those who work on target physics, like me. To achieve fusion energy, we need physicists, engineers, mathematicians and computer scientists. We need researchers, technicians and operators. There’s going to be tremendous growth in this sector.
The post Focusing on fusion: Debbie Callahan talks commercial laser fusion appeared first on Physics World.
Shadow sculptures evoke quantum physics
The Midnight Ballet by Will Budgett is a treat for the eyes
The post Shadow sculptures evoke quantum physics appeared first on Physics World.
This winter in Bristol has been even gloomier than usual – so I was really looking forward to the Bristol Light Festival 2026. We went on the last evening of the event (28 February) and we were blessed with dry weather and warmish temperatures.
The festival featured 10 illuminated installations that were scattered throughout Bristol and the crowds were out in force to enjoy them. I wasn’t expecting to be thinking about physics as I wandered through town, but that’s exactly what I found myself doing at an installation called The Midnight Ballet by the British sculptor Will Budgett. Rather appropriately, it was located next to the HH Wills Physics Laboratory at the University of Bristol.
The display comprises seven sculptures that are illuminated from two different directions. The result is two very different images of ballerinas projected onto two screens (see image).
Art and science
So, why was I thinking about physics while admiring the work? To me the pieces embody – in a purely artistic way – the idea of superposition and measurement in quantum mechanics. A sculpture is capable of producing two different images (a superposition of states), but neither of these images is observable until a sculpture is illuminated from specific directions (the measurements).
Now, I know that this analogy is far from perfect. Measurements can be made simultaneously in two orthogonal planes, for example. But, Budgett’s beautiful artworks really made me think about quantum physics. Given the exhibit’s close proximity to the university’s physics department, I suspect I am not the only one.
The post Shadow sculptures evoke quantum physics appeared first on Physics World.
Nuclear-powered transport – how far can it take us?
Honor Powrie looks at the perils and plus-sides of nuclear-powered transport
The post Nuclear-powered transport – how far can it take us? appeared first on Physics World.
In 1942 physicists in Chicago, led by Enrico Fermi, famously produced the world’s first self-sustaining nuclear chain reaction. But it was to be another nine years before electricity was generated from fission for the first time. That landmark event occurred in 1951 when the Experimental Breeder Reactor-I in southern Idaho powered a string of four 200-watt light bulbs.
Our ability to harness nuclear power has been under constant development since then. In fact, according to the Nuclear Energy Association, a record 2667 terawatt-hours of electricity was generated by nuclear reactors around the world in 2024 – up 2.5% on the year before. But what, I wonder, is the potential of nuclear-powered transport?
A “nuclear engine” has many advantages, notably providing a vehicle with an almost unlimited supply of onboard power, with no need for regular refuelling. That’s particularly attractive for large ships and submarines, where fuel stops at sea are few and far between. It’s even better for spacecraft, which cannot refuel at all.
The downside is that a vehicle needs to be fairly large to carry even a small nuclear fission reactor – plus all the heavy shielding to protect passengers onboard. Stringent safety requirements also have to be met: if the vehicle were to crash or explode, the shield around the reactor would need to stay fully intact.
Ships and planes
Perhaps the best known transport application of nuclear power is at sea, where it’s used for warships, submarines and supercarriers. The world’s first nuclear-powered ship was the US Navy submarine Nautilus, which was launched in 1954. As the first vessel to have a nuclear reactor for propulsion, it revolutionized naval capabilities.
Compared to oil or coal-fired ships, nuclear-powered vessels can travel far greater distances. All the fuel is in the reactor, which means there is no need for additional fuel to be carried onboard – or for exhaust chimneys or air intakes. Even better, the fuel is relatively cheap. But operating and infrastructure costs are steep, which is why almost all nuclear-powered marine vessels belong to the military.
There have, however, been numerous attempts to develop other forms of nuclear-powered transport. While a nuclear-powered aircraft might seem unlikely, the idea of flying non-stop to the other side of the world, without giving off any greenhouse-gas emissions, is appealing. Incredible as it might seem, airborne nuclear reactors were actually trialled in the mid-1950s.
That was when the United States Air Force converted a B-36 bomber to carry an operational air-cooled reactor, weighing around 18 tons. The aircraft was not actually nuclear powered but it was operated in this configuration to assess the feasibility of flying a nuclear reactor. The aircraft made a total of 47 flights between July 1955 and March 1957.
In 1955 the Soviet Union also ran a project to adapt a Tupolev Tu-95 “Bear” aircraft for nuclear power. However, because of the radiation hazard to the crew and the difficulties in providing adequate shielding, the project was soon abandoned. Neither the American nor the Soviet aircraft ever flew under nuclear power and – because the technology was inherently dangerous – it was never considered for commercial aviation.
Cars and trains
The same fate befell nuclear-powered trains. In 1954 the US nuclear physicist Lyle Borst, then at the University of Utah, proposed a 360-tonne locomotive carrying a uranium-235 fuelled nuclear reactor. Several other countries, including Germany, Russia and the UK, also had schemes for nuclear locos. But public concerns about safety could not be overcome and nuclear trains were never built. The $1.2m price tag of Borst’s train didn’t help either.

In the late 1950s, meanwhile, there were at least four theoretical nuclear-powered “concept cars”: the Ford Nucleon, the Studebaker Packard Astral, the Simca Fulgur and the Arbel Symétric. Based on the assumption that nuclear reactors would get much smaller over time, it was felt that such a car would need relatively light radiation shielding. I certainly wouldn’t have wanted to take one of those for a spin; in the end none got beyond the concept stage.
Perhaps the real success story of nuclear propulsion has been in space
But perhaps the real success story of nuclear propulsion has been in space. Between 1967 and 1988, the Soviet Union pioneered the use of fission reactors for powering surveillance satellites, with over 30 nuclear-powered satellites being launched during that period. And since the early 1960s, radioisotopes have been a key source of energy in space.
Driven by the desire for faster, more capable and longer duration space missions to the Moon, Mars and beyond, China, Russia and the US are now investing significantly in the next generation of nuclear reactor technology for space propulsion, where solar or radioisotope power will be inadequate. Several options are on the table.
One is nuclear thermal propulsion, whereby energy from a fission reactor heats a propellant fuel. Another is nuclear electric propulsion, in which the fission energy ionizes a gas that gets propelled out the back of the spacecraft. Both involve using tiny nuclear reactors of the kind used in submarines, except they’re cooled by gas, not water. Key programmes are aiming for in-space demonstrations in the next 5–10 years.
Where next?
Many of the first ideas for nuclear-powered transport were dreamed up little more than a decade after the first self-sustaining chain reaction. The appeal was clear: compared to other fuels, nuclear power has a high energy density and lasts much longer. It also has zero carbon emissions. Nuclear power must have seemed a panacea for all our energy needs – using it for cars and planes must have seemed an obvious next step.
However, there are major safety issues to address when nuclear sources are mobilized, from protecting passengers and crew, to ensuring appropriate safeguards should anything go wrong. And today we understand all too well the legacy of nuclear systems, from the safe disposal of spent fuel to the decommissioning of nuclear infrastructure and equipment.
We’ve struck the right balance when it comes to using nuclear power, confining it to sea-faring vessels under the watchful eye of the military
Here on Earth, I think we’ve struck the right balance when it comes to using nuclear power, confining it to sea-faring vessels under the watchful eye of the military. But as human-crewed, deep-space exploration beckons, a whole new set of issues will arise. There will, of course, be lots of technical and engineering challenges.
How, for example, will we maintain, repair and decommission nuclear-powered spacecraft? How will we avoid endangering crews or polluting the environment, especially when craft take off? Who should set appropriate legislation – and how do we police those rules? When it comes to space, nuclear will help us “to boldly go”; but it will also require bold regulation.
The post Nuclear-powered transport – how far can it take us? appeared first on Physics World.
Bubbles, foams and self-assembly: a conversation with Early Career Award winner Aurélie Hourlier-Fargette
Aurélie Hourlier‑Fargette, winner of the 2025 JPhys Materials Early Career Award, discusses the inspirations behind her interdisciplinary work on bubble assemblies and foam-based materials
The post Bubbles, foams and self-assembly: a conversation with Early Career Award winner Aurélie Hourlier-Fargette appeared first on Physics World.
Congratulations on winning the 2025 JPhys Materials Early Career Award. What does this mean for you at this stage of your career?
I am really grateful to the Editorial Board of JPhys Materials for this award and for highlighting our work. This is a key recognition for the whole team behind the results presented in this research paper. We were taking a new turn in our research with this topic – trying to convince bubbles to assemble into crystalline structures towards architected materials – and this award is an important encouragement to continue pushing in this direction. At the crossroads of physics, physical chemistry, materials science and mechanics, we hope that this is only the beginning of our interdisciplinary journey around bubble assemblies and foam-based materials.
Your research explores elasto-capillarity and foam architectures. What inspired you to work in this fascinating area?
I always say that research is a series of encounters – with people, and with scientific themes and objects. I was lucky to discover this interdisciplinary world as an undergraduate, during an internship on elasto-capillarity at the intersection of physics and mechanics. The scientific communities working on these topics – and also on foams – are fantastic. In both fields, I was fortunate to meet talented people who inspired my future work, combining scientific skills and creativity.
In France, the GDR MePhy (mechanics and physics of complex systems) played a key role in broadening my perspective, by organizing workshops on many different topics, always with interdisciplinarity in mind.
You have demonstrated mechanically guided self-assembly of bubbles leading to crystalline foam structures. What’s the significance of this finding and how could it impact materials design?
In the paper, part of the journal’s Emerging Leaders collection, we provide a proof-of-concept with alginate and polyurethane materials to demonstrate that it is possible to use a fibre array to order bubbles into a crystalline structure, which can be tuned by choosing the fibre pattern, and to keep this ordering upon solidification to provide an alternative approach to additive manufacturing. This work is mainly fundamental, and we hope it paves the way toward a wider use of mechanical self-assembly principles in the context of porous architected materials.
The use of solidifying materials for those studies is two-fold: first, it allows us to observe the systems with X-ray microtomography once solidified, and second, it demonstrates that we could use such techniques to build actual solid materials.

What excites you most about this field right now, and where do you see the biggest opportunities for breakthroughs?
Combining physical understanding and materials science is certainly a great area of opportunity to better exploit mechanical self-assembly. It is very compelling to search for strategies based on physical principles to generate materials with non-trivial mechanical or acoustic properties. Capillarity, elasticity, stimuli-induced modification of systems, as well as geometrical considerations, all offer a great playground to explore. Curiosity-driven research has many advantages, and often, unexpected observations completely reshape the trajectory that we had in mind.
Could you tell us about your team’s current research priorities and the directions you are most focused on?
We believe that focusing first on the underlying physical principles, especially in terms of mechanical self-assembly, will provide the building blocks to generate novel materials. One key research axis we are exploring now is widening the range of materials that can be used for “liquid foam templating” (a general approach that involves controlling the properties of a foam in its liquid state to control the resulting properties of the foam after solidification). We focus on the solidification mechanisms, either by playing with external stimuli or by controlling the solidification reactions via the introduction of catalysts or solidifying agents.
What are the key challenges in achieving ordered structures during solidification?
Liquid foams provide beautiful hierarchical structures that are also short-lived. To take advantage of the mechanical self-assembly of bubbles to build solid materials, understanding the relevant timescales is key: depending on whether the foam has time to drain and destabilize before solidification or not, its final morphology can be completely different. Controlling both the ageing mechanisms and the solidification of the matrix is particularly challenging.
How do you see foam-based materials impacting real-world applications?
Both biomedical devices and soft robots often rely on soft materials – either to match the mechanical properties of biological tissues or to provide the mechanical properties to build soft robots to enable motion. Being able to customize self-assembled hierarchical structures could allow us to explore a wider range of even softer materials, with specific properties resulting from their structural features. Applications could also extend to stiffer materials, mainly in the context of acoustic properties and wave propagation in such architected structures.
What are the most surprising behaviours you have observed during the processes of self-assembly and solidification of foams?
For the experiments detailed in the paper, the structures revealed their beauty once the X-ray tomography scans were performed. When we varied the parameters, we could only guess what was going to happen before getting the visual confirmation a few hours later. We were really happy to see that changing the pattern of the fibre array could indeed provide different ordered foam structures. In some other projects we are working on, foam stability has been a real challenge. We were sometimes surprised to obtain long-lasting liquid systems.

Looking ahead, what are the next big questions you hope to tackle in your field?
In the fundamental context of the physics and mechanics of elasto-capillarity, the study of model systems involving self-assembly mechanisms will be a key aspect of our research. I then hope to successfully identify key applications for such architected systems – mainly in the fields of mechanical or acoustic metamaterials, but also for biomedical engineering. Regarding foam solidification, understanding the mechanisms of pore opening during the solidification process – leading to either closed-cell or open-cell foams – is also an important question for the community.
You worked on bio-integrated electronics during your postdoc and contributed to a seminal paper on skin-interfaced biosensors for wireless monitoring in neonatal ICUs. How has that shaped your current research interests?
That fantastic experience allowed me to work in a group with numerous people from many different backgrounds, pushing the frontiers of interdisciplinarity in ways I could not have imagined before joining the Rogers group as a postdoc. At the moment, I am focusing on more fundamental questions, but it is definitely important to keep in mind what physics and materials science can bring to a broad variety of applications that offer solutions for society, in biomedical engineering and beyond.
Your research often combines theory and experiment and involves interdisciplinary collaboration. How do you see these collaborations shaping the future of your field?
It is always the scientific questions we want to answer – or the goals we aim to achieve – that should define the collaborations, bringing together multiple skills and backgrounds to tackle a shared challenge. Clearly, at the intersection of physics, physical chemistry, materials science and mechanics, there are many interesting questions that require contributions from different disciplines and skillsets. A key aspect is how people trained in different areas learn to “speak the same language” in order to advance interdisciplinary topics.

How do you envision your research evolving over the next 5–10 years?
I hope to be able to combine fundamental research and meaningful applications successfully – perhaps in the form of medical devices or tools for soft robots. There are many exciting possibilities, but it is certainly still too early for me to predict.
What advice would you give early-career researchers pursuing interdisciplinary projects?
Believe in what you are doing! We push boundaries more easily in areas we are passionate about, and we are also more productive when we work on topics for which we have found a supportive environment – with a unique combination of collaborators and access to state-of-the-art equipment.
In research, and especially in interdisciplinary fields, a key challenge is finding the right balance: you need to stay focused on the research projects that matter for you, while also keeping an open mind and staying aware of what others are doing. This broader vision helps you understand how your work integrates into a larger, more complex landscape.
Finally, what inspires you most as a scientist, and what keeps you motivated during challenging phases of research?
I have always liked working with desktop-scale experiments, where we can touch the objects and have an intuition for the physical mechanisms behind the observed phenomena.
Another source of inspiration is the beauty of the scientific objects that we study. With droplets, bubbles and foams – which are not only scientifically interesting but also beautiful – there is a strong connection with art and photography.
And finally, a key aspect of our professional life is the people we work with. It is clearly an additional motivation to feel part of a community where we can discuss both scientific questions and ways to improve how research is organized, as well as help younger students, PhDs and postdocs find their professional path. Working with amazing colleagues definitely helps when the path is longer or more difficult than expected.
The post Bubbles, foams and self-assembly: a conversation with Early Career Award winner Aurélie Hourlier-Fargette appeared first on Physics World.
From bunkers to bright spaces: the future of smart shielded radiosurgery treatment rooms
Discover how smart shielding enables bright, modern radiosurgery rooms beyond traditional bunker designs
The post From bunkers to bright spaces: the future of smart shielded radiosurgery treatment rooms appeared first on Physics World.

This webinar explores how smart shielding is transforming the design of Leksell Gamma Knife radiosurgery environments, shifting from bunker‑like spaces to open, patient‑centric treatment rooms. Drawing from dose‑rate maps, room‑dimension considerations and modern shielding innovations, we’ll demonstrate how treatment rooms can safely incorporate features such as windows and natural light, improving both functionality and patient experience.
Dr Riccardo Bevilacqua will walk through the key questions that clinicians, planners and hospital administrators should ask when evaluating new builds or upgrading existing treatment rooms. We will highlight how modern shielding approaches expand design possibilities, debunk outdated assumptions and offer practical guidance on evaluating sites and educating stakeholders on what lies “beyond bunkers”.

Dr Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation‑safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather and writes popular‑science articles on physics and radiation.
The post From bunkers to bright spaces: the future of smart shielded radiosurgery treatment rooms appeared first on Physics World.
The physics of why basketball shoes are so squeaky
The noise is down to the base of the shoe forming wrinkles that travel at near supersonic speeds
The post The physics of why basketball shoes are so squeaky appeared first on Physics World.
If you have ever watched a basketball match, you will know that along with the sound of the ball being bounced, there is also the constant squeaking of shoes as the players move across the court.
Such noise is a common occurrence in everyday life, from the scraping of chalk on a blackboard to the squeal of bicycle brakes.
Physicists in France, Israel, the UK and the US have now recreated the phenomenon in a lab and discovered that the squeaking is due to a previously unseen mechanism.
Katia Bertoldi from the Harvard John A. Paulson School of Engineering and Applied Sciences and colleagues slid a basketball shoe, or a rubber sample, across a smooth glass plate and used high-speed imaging and audio measurements to analyse the squeak.
Previous studies looking at the effect suggested that “pulses” are created when two materials “stick and slip”, but such studies focused on slow movements, which do not create squeaks.
Bertoldi and team instead found that the noise was not caused by random stick-slip events, but rather by deformations of the rubber sole pulsing in bursts, or rippling, across the surface.
In this case, small parts of the sole change shape and lose and regain contact with the surface, with the “ripple” travelling at near supersonic speeds.
The pitch of the squeak even matches the rate of the “bursts”, which is determined by the stiffness and thickness of the shoe sole.
The authors also found that if a soft surface is smooth, the pulses are irregular and produce no sharp sounds, whereas ridged surfaces – like the grip patterns on sports shoes – produce consistent pulse frequencies, resulting in a high-pitched squeak.
In another twist, lab experiments showed that in some instances, the slip pulses are triggered by triboelectric discharges – miniature lightning bolts caused by the friction of the rubber.
Indeed, the physics of these pulses shares features with fracture fronts in plate tectonics, so a better understanding of the dynamics that occur between two sliding surfaces may offer insights into friction across a range of systems.
“These results bridge two fields that are traditionally disconnected: the tribology of soft materials and the dynamics of earthquakes,” notes Shmuel Rubinstein from Hebrew University. “Soft friction is usually considered slow, yet we show that the squeak of a sneaker can propagate as fast as, or even faster than, the rupture of a geological fault, and that their physics is strikingly similar.”
The post The physics of why basketball shoes are so squeaky appeared first on Physics World.
Dark optical cavity alters superconductivity
Quantum fluctuations couple to stretching bonds
The post Dark optical cavity alters superconductivity appeared first on Physics World.
An international team of researchers has shown that superconductivity can be modified by coupling a superconductor to a dark electromagnetic cavity. The research opens the door to the control of a material’s properties by modifying its electromagnetic environment.
Electronic structure defines many material properties – and this means that some properties can be changed by applying electromagnetic fields. The destruction of superconductivity by a magnetic field and the use of electric fields to control currents in semiconductors are two familiar examples.
There is growing interest in how electronic properties could be controlled by placing a material in a dark electromagnetic cavity that resonates with an electronic transition in that material. In this scenario, an external field is not applied to the material. Rather, interactions occur via quantum vacuum fluctuations within the cavity.
Holy Grail
“The Holy Grail of cavity materials research is to alter the properties of complex materials by engineering the electromagnetic environment,” explains the team – which includes Itai Keren, Tatiana Webb and Dmitri Basov at Columbia University in the US.
They created an optical cavity from a small slab of hexagonal boron nitride. This was interfaced with a slab of κ-ET, which is an organic low-temperature superconductor. The cavity was designed to resonate with an infrared transition in κ-ET involving the vibrational stretching of carbon–carbon bonds.
Hexagonal boron nitride was chosen because it is a hyperbolic van der Waals material. Van der Waals materials are stacks of atomically-thin layers. Atoms are strongly bound within each layer, but the layers are only weakly bound to each other by the van der Waals force. The gaps between layers can act as waveguides, confining light that bounces back and forth within the slab. As a result the slab behaves like an optical cavity with an isofrequency surface that is a hyperboloid in momentum space. Such a cavity supports a large number of modes and vacuum fluctuations, which enhances interactions with the superconductor.
Superfluid suppression
The researchers found that the presence of the cavity caused a strong suppression of superfluid density in κ-ET (a superconductor can be thought of as a superfluid of charged particles). The team mapped the superfluid density using magnetic force microscopy. This involved placing a tiny magnetic tip near to the surface of the superconductor. The magnetic field of the tip cannot penetrate into the superconductor (the Meissner effect) and this results in a force on the tip that is related to the superfluid density. They found that the density dropped by as much as 50% near the cavity interface.
The team also investigated the optical properties of the cavity using a scattering-type scanning near-field optical microscope (s-SNOM). This involves firing tightly focused laser light at an atomic force microscope (AFM) tip that taps on the surface of the cavity. The scattered light is processed to reveal the near-field component of light from just the region of the cavity below the tip.
The tapping tip creates phonon polaritons in the cavity, which are particle-like excitations that couple lattice vibrations to light. Analysing the near-field light across the cavity confirmed that the carbon stretching mode of κ-ET is coupled to the cavity. Calculations done by the team suggest that cavity coupling reduces the amplitude of the stretching mode vibrations.
Physicists know that superconductivity can arise from interactions between electrons and phonons (lattice vibrations), so it is possible that the reduction in superfluid density is related to the suppression of stretching-mode vibrations. This, however, is not certain because κ-ET is an unconventional superconductor, which means that physicists do not understand the mechanism that causes its superconductivity. Further experiments could therefore shed light on the mysteries of unconventional superconductors.
“We are confident that our experiments will prompt further theoretical pursuits,” the team tells Physics World. The researchers also believe that practical applications could be possible. “Our work shows a new path towards the manipulation of superconducting properties.”
The research is described in Nature.
The post Dark optical cavity alters superconductivity appeared first on Physics World.
Chernobyl at 40: physics, politics and the nuclear debate today
Jim Smith reflects on the 1986 disaster and how it still shapes public perception of nuclear power
The post Chernobyl at 40: physics, politics and the nuclear debate today appeared first on Physics World.
On 26 April 2026, it will be 40 years since the explosion at Unit 4 of the Chernobyl Nuclear Power Plant – the worst nuclear accident the world has known. In the early hours of 26 April 1986, a badly designed reactor, operated under intense pressure during a safety test, ran out of control. A powerful explosion and prolonged fire followed, releasing radioactive material across Ukraine, Belarus and Russia, with smaller quantities spreading across the rest of Europe.
In this episode of Physics World Stories, host Andrew Glester speaks with Jim Smith, an environmental physicist at the University of Portsmouth. Smith began his academic life studying astrophysics, but always had an interest in environmental issues. His PhD in applied mathematics at Liverpool focused on modelling how radioactive material from Chernobyl was transported through the atmosphere and deposited as far away as the Lake District in north-western England.
Smith recounts his visits to the abandoned Chernobyl plant and the 1000-square-mile exclusion zone, now home to roaming wolves and other thriving wildlife. He wants a rational debate about the relative risks, arguing that the accident’s social and economic consequences have significantly outweighed the long-term impacts of radiation itself.
The discussion ranges from the politics of nuclear energy and the hierarchical culture of the Soviet system, to lessons later applied during the Fukushima accident. Smith makes the case for nuclear power as a vital complement to renewables.
He also shares the story behind the Chernobyl Spirit Company – a social enterprise he has launched with Ukrainian colleagues, producing safe, high-quality spirits to support Ukrainian communities. Listen to find out whether Andrew Glester dared to try one.
The post Chernobyl at 40: physics, politics and the nuclear debate today appeared first on Physics World.
LHCb upgrade: CERN collaboration responds to UK funding cut
LHCb spokesperson-elect Tim Gershon is our podcast guest
The post LHCb upgrade: CERN collaboration responds to UK funding cut appeared first on Physics World.
Later this year, CERN’s Large Hadron Collider (LHC) and its huge experiments will shut down for the High Luminosity upgrade. When the upgrade is complete in 2030, the particle-collision rate in the LHC will increase by a factor of 10 and the experiments will be upgraded so that they can better capture and analyse the results of these collisions. This will allow physicists to study particle interactions at unprecedented precision and could even reveal new physics beyond the Standard Model.
Earlier this year, however, the UK government announced that it will no longer fund the upgrade of the LHCb experiment on the LHC, which is run by a collaboration of more than 1700 physicists worldwide. The UK had promised to contribute about £50 million to the upgrade – which is a significant chunk of the overall cost.
In this episode of the Physics World Weekly podcast I am in conversation with the particle physicist Tim Gershon, who is based at the UK’s University of Warwick. Gershon is spokesperson-elect for the LHCb collaboration and is playing a leading role in the upgrade.
Gershon explains that UK participation and leadership have been crucial to the success of LHCb and cautions that the future of the experiment, and of UK particle physics more broadly, has been imperilled by the funding cut.
We also chat about recent discoveries made by LHCb and look forward to what new physics the experiment could find after the upgrade.
The post LHCb upgrade: CERN collaboration responds to UK funding cut appeared first on Physics World.
Read-out of Majorana qubits reveals their hidden nature
Mechanism could pave the way for more robust quantum computation, but questions remain over scalability
The post Read-out of Majorana qubits reveals their hidden nature appeared first on Physics World.
Quantum computers could solve problems that are out of reach for today’s classical machines. However, the quantum states they rely on are prone to decohering – that is, losing their quantum information due to local noise. One possible way around this is to use quantum bits (qubits) constructed from quasiparticle states known as Majorana zero modes (MZMs) that are protected from this noise. But there’s a catch. To perform computations, you need to be able to measure, or read out, the states of your qubits. How do you do that in a system that is inherently protected from its environment?
Scientists at QuTech in the Netherlands, together with researchers from the Madrid Institute of Materials Science (ICMM) in Spain, say they may have found an answer. By measuring a property known as quantum capacitance, they report that they have read out the parity of their MZM system, backing up an earlier readout demonstration from a team at Microsoft Quantum Hardware on a different Majorana platform.
Measuring parity
The QuTech/ICMM researchers generated their MZMs across two quantum dots – semiconductor structures that can confine electrons – connected by a superconducting nanowire. Electrons can transfer, or tunnel, between the quantum dots through this wire. Majorana-based qubits store their quantum information across these separated MZMs, with both elements in the pair required to encode a single “parity” bit. A pair of parity bits (combining four MZMs in total) forms a qubit.
A parity bit has two possible states. When the two quantum dots are in a superposition of both holding one electron and both holding none, the system is said to have even parity (a “0”). When the system is instead in a superposition of states in which only one of the two dots holds an electron, the parity is said to be odd (a “1”). Importantly, these even and odd parity states have the same average electric charge, meaning that a charge sensor cannot tell them apart.
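In occupation-number notation (a labelling chosen here purely for illustration, not taken from the paper), these two sectors can be written schematically as:

```latex
% Schematic parity states of the two coupled quantum dots (illustrative notation);
% |n_L, n_R> labels the electron occupation of the left and right dot.
\[
\begin{aligned}
  |\text{even}\rangle &= \alpha\,|0,0\rangle + \beta\,|1,1\rangle ,\\
  |\text{odd}\rangle  &= \gamma\,|1,0\rangle + \delta\,|0,1\rangle .
\end{aligned}
\]
```

For equal-weight superpositions the two sectors carry the same average charge, which is why the parity has to be read out through the pairing behaviour described next rather than with a simple charge sensor.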
The key to measuring parity lies in the electrons’ behaviour. In the even-parity state, an even number of electrons can pair up and enter the superconductor together as a Cooper pair. In the odd-parity state, however, the lone electron lacks a partner and cannot flow through the wire in the same way. By measuring the charge flowing into the superconductor, the team was therefore able to determine the parity state. The researchers also determined that the lifetimes of these states were in the millisecond range, which they say is promising for quantum computations.
Competing platforms
According to Nick van Loo, a quantum engineer at QuTech and the first author of a Nature paper on the work, similar chains of quantum dots (known as Kitaev chains) are a promising platform for realizing Majorana modes because each element in the chain can be controlled and tuned. This control, he adds, makes results easier to reproduce, helping to overcome some of the interpretation challenges that have affected Majorana results over the past decade.
Van Loo also stresses that his team uses a different architecture from the Microsoft Quantum Hardware team to create its Majorana modes – one that he says allows for better tuneability as well as easier and more scalable readout. He adds that this architecture also allows an independent charge sensor to be used to confirm the MZM’s charge neutrality.
In response, Chetan Nayak, a technical fellow at Microsoft Quantum Hardware, says it is important that the QuTech/ICMM team independently measured a millisecond time scale for parity fluctuations. However, he notes that the team did not extend this parity lifetime and adds that the so-called “poor man’s Majoranas” used in this research do not constitute a scalable platform for topological qubits, as they lack topological protection.
Seeking full protection
Van Loo acknowledges that the team’s two-site Kitaev chain is not topologically protected. However, he says the degree of protection is expected to improve exponentially as more sites are added. In the near term, he and his colleagues hope to operate their qubit by inducing rotations through coupling pairs of Majorana modes. Once these hurdles are overcome, he tells Physics World that “one major milestone will still remain: demonstrating braiding of Majorana modes to establish their non-Abelian exchange statistics”.
Jay Deep Sau, a physicist at the University of Maryland, US, who was not involved in either the QuTech/ICMM or the Microsoft Quantum Hardware research, describes this as the first measurement of fermion parity in the smallest quantum dot chain platform for creating MZMs. Compared to the Microsoft result, Sau agrees that the quantum dot chain is more controlled. However, he is sceptical that this control will apply to larger chains, casting doubt on whether this is truly a scalable way of realizing MZMs. The significance of these results, he adds, will only be apparent if the quantum dot chain approach can demonstrate a coherent qubit before its semiconductor nanowire counterpart.
The post Read-out of Majorana qubits reveals their hidden nature appeared first on Physics World.
Quantum-secure Internet expands to citywide scale
Device-independent quantum-encrypted keys distributed over 100 km
The post Quantum-secure Internet expands to citywide scale appeared first on Physics World.
Researchers in China have distributed device-independent quantum cryptographic keys over city-scale distances for the first time – a significant improvement compared to the previous record of a few hundred metres. Led by Jian-Wei Pan of the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS), the researchers say the achievement brings the world a step closer to a completely quantum-secure Internet.
Many of us use Internet encryption almost daily, for example when transferring sensitive information such as bank details. Today’s encryption techniques use keys based on mathematical algorithms, and classical supercomputers cannot crack them in any practical amount of time. Powerful quantum computers could change this, however, which has driven researchers to explore potential alternatives.
One such alternative, known as quantum key distribution (QKD), encrypts information by exploiting the quantum properties of photons. The appeal of this approach is that when quantum-entangled photons transmit a key between two parties, any attempted hack by a third party will be easy to detect because their intervention will disturb the entanglement.
While the basic form of QKD enables information to be transmitted securely, it does have some weak points. One of them is that a malicious third party could steal the key by hacking the devices the sender and/or receiver is using.
A more advanced version of QKD is device-independent QKD (DI-QKD). As its name suggests, this version does not depend on the state of a device. Instead, it derives its security key directly from fundamental quantum phenomena – namely, the violation of conditions known as Bell’s inequalities. Establishing this violation ensures that a third party has not interfered with the process employed to generate the secure key.
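The Bell test behind this security argument is usually quantified by the CHSH parameter. The sketch below is purely illustrative – it uses ideal spin-singlet correlators at textbook analyser angles, not data from the USTC experiment – but it shows the quantity whose violation of the classical bound certifies the key.

```python
import numpy as np

# Illustrative CHSH calculation for a maximally entangled (singlet) pair.
# For a singlet measured along directions at angles a and b, the correlator is E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard optimal angles (radians) for a maximal CHSH violation.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4

S = abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))
print(f"S = {S:.3f}")                      # ~2.828, the quantum (Tsirelson) bound
print("Local hidden-variable bound: S <= 2")
# In DI-QKD, observing S significantly above 2 certifies the entanglement from which
# the secure key is distilled, without trusting the internal workings of the devices.
```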
The main drawback of DI-QKD is that it is extremely technically demanding, requiring high-quality entanglement and an efficient means of detecting it. “Until now, this has only been possible over short distances – 700 m at best – and in laboratory-based proof-of-principle experiments,” says Pan.
High-fidelity entanglement over 11 km of fibre
In the latest work, Pan and colleagues constructed two quantum nodes consisting of single trapped atoms. Each node was equipped with four high-numerical-aperture lenses to efficiently collect single photons emitted by the atoms. These photons have a wavelength of 780 nm, which is not optimal for transmission through optical fibres. The team therefore used a process known as quantum frequency conversion to shift the emitted photons to a longer wavelength of 1315 nm, which is less prone to optical loss in fibres.
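Quantum frequency conversion of this kind is typically carried out by difference-frequency generation in a nonlinear crystal; on that assumption (the article does not specify the scheme used), energy conservation fixes the wavelength of the strong pump field that bridges the two photon energies:

```latex
% Energy conservation for difference-frequency conversion (the scheme is assumed here, not stated above)
\[
\frac{1}{\lambda_{\mathrm{pump}}} \;=\; \frac{1}{\lambda_{\mathrm{in}}} - \frac{1}{\lambda_{\mathrm{out}}}
\;=\; \frac{1}{780\ \mathrm{nm}} - \frac{1}{1315\ \mathrm{nm}}
\quad\Rightarrow\quad \lambda_{\mathrm{pump}} \approx 1.9\ \mu\mathrm{m}
\]
```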
By interfering and detecting a single photon, the team was able to generate what’s known as heralded entanglement between the two quantum nodes – something Pan describes as “an essential resource” for DI-QKD. While significant progress has been made in extending the entangling distance for qubits of this type, Pan notes that these advances have been hampered by low fidelities and low entangling rates.
To address this, Pan and his colleagues employed a single-photon-based entangling scheme that boosts remote entangling probability by more than two orders of magnitude. They also placed their atoms in highly excited Rydberg states to generate single photons with high purity and low noise. “It is these innovations that allow us to achieve high-fidelity and high-rate entanglement over a long distance,” Pan explains.
Using this setup, the researchers explored the feasibility of performing DI-QKD between two entangled atoms linked by optical fibres up to 100 km in length. In this study, which is detailed in Science, they demonstrated practical DI-QKD under finite-key security over 11 km of fibre.
Metropolitan-scale quantum key distribution
Based on the technologies they developed, Pan thinks it could now be possible to implement DI-QKD over metropolitan scales with existing optical fibres. Such a system could provide encrypted communication with the highest level of physical security, but Pan notes that it could also have other applications. For example, high-fidelity entanglement could also serve as a fundamental building block for constructing quantum repeaters and scaling up quantum networks.
Carlos Sabín, a physicist at the Autonomous University of Madrid (UAM), Spain, who was not involved in the study, says that while the work is an important step, there is still a long way to go before we are able to perform completely secure and error-free quantum key distribution on an inter-city scale. “This is because quantum entanglement is an inherently fragile property,” Sabín explains. “As light travels through the fibre, small losses accumulate and the entanglement generated is of poorer quality, which translates into higher error rates in the cryptographic keys generated. Indeed, the results of the experiment show that errors in the key range from 3% when the distance is 11 km to more than 7% for 100 km.”
Pan and colleagues now plan to add more atoms to each node and to use techniques like tweezer arrays to further enhance both the entangling rate and the secure key rate over longer distances. “We are aiming for 1000 km, over which we hope to incorporate quantum repeaters,” Pan tells Physics World. “By using processes like ‘entanglement swapping’ to connect a series of such two-node entangled links, we anticipate that we will be able to maintain a similar entangling rate over much longer distances.”
The post Quantum-secure Internet expands to citywide scale appeared first on Physics World.
Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans
Medical physicist Todd McNutt explains how Plan AI, an artificial intelligence-powered plan quality software solution, uses data mining to streamline and improve radiotherapy planning for cancer treatments
The post Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans appeared first on Physics World.
Todd McNutt is a radiation oncology physicist at Johns Hopkins University in the US and the co-founder of Oncospace, where he led the development of an artificial intelligence (AI)-powered tool that simultaneously accelerates radiation planning and elevates plan quality and consistency. The software, now rebranded as Plan AI and available from US manufacturer Sun Nuclear, draws upon data from thousands of previous radiotherapy treatments to predict the lowest possible dose to healthy tissues for each new patient. Treatment planners then use this information to define goals that streamline and automate the creation of a best achievable plan.
Physics World’s Tami Freeman spoke with McNutt about the evolution of Oncospace and the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres.
Can you describe how the Oncospace project began?
Back in 2007, several groups were discussing how we could better use clinical data for discovery and knowledge generation. I had several meetings with folks at Johns Hopkins, including Alex Szalay who helped develop the Sloan Digital Sky Survey. He built a large database of galaxies and stars and it became a huge research platform for both amateur and professional astronomers.
From that discussion, and other initiatives, we looked at moving towards structured data collection for patients in the clinical environment. By marrying these data with radiation treatment plans we could study how dose distributions across the anatomy affect patient outcomes. And we took that opportunity to build a database for radiotherapy.
What inspired the transition from academic research to founding the company Oncospace Inc in 2019?
After populating the database with data from many patients, we could examine which anatomic features impact our ability to generate a plan that minimizes radiation dose to normal tissues while treating target volumes as best as possible. We came up with a feature set that characterized the relationships between normal anatomy and targets, as well as target complexity.
This early work allowed us to predict expected doses from these shape-relationship features, and it worked well. At that point, we knew we could tap into this database and generate a prediction that could help create treatment plans for new patients. We thought of this as personalized medicine: for the first time, we could see the level of treatment plan quality that we could achieve for a specific patient.
I thought that this was useful commercially and that we should get it out to other clinics. Praveen Sinha, who I’d known from my previous work at Philips and now leads Sun Nuclear’s software business line, asked if I wanted to create a startup. The timing was right for both of us and I had a team here ready to go, so we went ahead and did it. With his knowledge of startups and my knowledge of what we wanted to achieve, we had perfect timing and a perfect group to work with.
Plan AI enables both predictive planning and peer review – how do these functions work?
The idea behind predictive planning is that, for a given patient, I can predict the expected dose that I should be able to achieve for them.
Treatment planning involves specifying dosimetric objectives to the planning system and asking it to optimize radiation delivery to meet these. But nobody really knows what the right objectives even are – it is just a trial-and-error process. Plan AI’s prediction provides a rational set of objectives for plan optimization, allowing the planning system’s algorithm to move towards a good solution and making treatment planning an easier problem to solve.
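As a rough sketch of this idea (all organ names, dose values and field names below are hypothetical illustrations, not Plan AI’s actual interface), a predicted best-achievable dose metric for each organ at risk can be turned directly into the objective list handed to the optimizer:

```python
# Hypothetical sketch: turning predicted best-achievable doses into optimizer objectives.
# Organ names, dose values and the objective format are illustrative only.

# Predicted best-achievable dose metrics for this patient (Gy), as produced by a
# dose-prediction model trained on previous treatments.
predicted_best = {
    "parotid_left": {"mean": 18.0},
    "parotid_right": {"mean": 22.0},
    "spinal_cord": {"max": 38.0},
}

def build_objectives(predictions, margin_gy=1.0):
    """Convert per-organ predictions into upper-dose objectives for the planning system.

    A small margin gives the optimizer a feasible, patient-specific target instead of
    a generic protocol value or a trial-and-error guess.
    """
    objectives = []
    for organ, metrics in predictions.items():
        for metric, dose in metrics.items():
            objectives.append({
                "structure": organ,
                "type": f"upper_{metric}",   # e.g. upper_mean, upper_max
                "dose_gy": dose + margin_gy,
                "priority": "high",
            })
    return objectives

for obj in build_objectives(predicted_best):
    print(obj)
```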
Peer review involves a peer physician looking at every treatment plan to evaluate it for quality and safety. But again, people don’t really know what level of quality can be achieved – it depends on the patient’s anatomy. Providing a predicted dose alongside the clinical dose goals enables a rapid review of whether or not a plan is high quality.
In the past we looked at simple things like whether a contour is missing slices or contains discontinuities, and Plan AI checks for this, but you can do far more with AI. For example, you could look at all the contoured rectums in the system and flag a contour that extends too far into the sigmoid colon, because it may be mis-contoured. We have research software that can flag such potential anomalies so they don’t get overlooked.
The Plan AI models are developed using Oncospace’s database of previous treatments; can you describe this data lake?
When we first started, we developed a large SQL database containing all the shape-relationship features and dosimetry features. The SQL language is ideal for being able to query and sift through the data, but when the company was formed, we recognized that there was some age to that technology.
So for the Plan AI data lake, we extracted all the different shape-relationship and shape-complexity features and put them into a Parquet database in the cloud. This made the data lake much more amenable to applying machine learning algorithms to it. The SQL data lake at Johns Hopkins is maintained separately and primarily used to investigate toxicity predictions and spatial dose patterns. But for Plan AI, the models are fixed and streamlined for the specific task of dose prediction.
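The practical appeal of a Parquet-based data lake is that columnar feature tables can be pulled straight into standard machine-learning tooling. The snippet below is a generic sketch of that pattern; the file path and column names are invented for illustration, not Oncospace’s actual schema.

```python
import pandas as pd

# Generic sketch of reading a columnar (Parquet) feature store for model training.
# The path and column names are hypothetical stand-ins, not the real schema.
features = pd.read_parquet(
    "plan_features.parquet",
    columns=["patient_id", "organ", "overlap_volume_fraction",
             "min_distance_to_target_mm", "target_complexity", "mean_dose_gy"],
)

# Columnar storage makes it cheap to pull only the features a given model needs,
# which is what makes the format convenient for cloud-hosted ML pipelines.
X = features[["overlap_volume_fraction", "min_distance_to_target_mm", "target_complexity"]]
y = features["mean_dose_gy"]
print(X.shape, y.shape)
```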
What does the model training process entail?
One of the first tasks was to curate the data, using the AAPM’s standardized structure-naming model. Our data scientist Julie Shade wrote some tools for automatic name mapping and target identification; that helped us process much larger amounts of data for the model.
Once we had all the shape-relationship and shape-complexity features and all the doses, we trained the models by anatomical region. We have FDA-approved models for the male and female pelvis, thorax, abdomen and head-and-neck. For each of these, we predict the doses for every organ-at-risk. Then we used five-fold cross-validation to make sure that the predictions were good on an internal data set.
We also performed external validation at institutions including Johns Hopkins and Montefiore hospitals. We created predicted plans from recent treatment plans that had been evaluated by physicians. For almost all cases, both plan quality and plan efficiency were improved with Plan AI.
One aspect of this training is that whenever we drive optimization via predictive planning we want to push towards the best achievable dose. Regular machine learning predicts an expected, or average, dose across all patients. But you never want to drive a treatment plan towards the average dose, because then every plan you generate will be happy being average. Our model predicts both the average and the best achievable dose, and drives plan optimization towards the best achievable.
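One generic way to frame this “best achievable” prediction is quantile regression: instead of fitting the mean dose across past patients, a second model targets the low end of the historical dose distribution. The sketch below illustrates that idea on synthetic data with scikit-learn – it is an assumption about the modelling style, not a description of Plan AI’s actual algorithm.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

# Synthetic stand-in for shape-relationship features and achieved organ doses (Gy).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                                     # e.g. overlap, distance, complexity
y = 20 * X[:, 0] + 10 * X[:, 1] + rng.gamma(2.0, 2.0, size=500)    # achieved dose

# Mean model: what an "average" plan achieved for similar anatomy.
mean_model = GradientBoostingRegressor()

# Low-quantile model: the "best achievable" end of the historical dose distribution.
best_model = GradientBoostingRegressor(loss="quantile", alpha=0.1)

# Five-fold validation, mirroring the internal check described in the interview.
for fold, (train, test) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    mean_model.fit(X[train], y[train])
    best_model.fit(X[train], y[train])
    print(f"fold {fold}: mean prediction {mean_model.predict(X[test]).mean():.1f} Gy, "
          f"best-achievable prediction {best_model.predict(X[test]).mean():.1f} Gy")
```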
When implementing new technology in the clinic, it’s important to fit into the existing treatment workflow. How clinic-ready are these AI tools?
Radiation therapy is protocol-driven: we know what technique we’re going to use to treat and what our clinical dose goals are for different structures. What we don’t know is the patient-specific part of that. So for each anatomical region, we built models out of a wide range of treatment protocols, with many different types of patients, to ensure that the same prediction model works for any protocol. This means a user can use any protocol for treatment and the predictions will still work – they don’t have to retrain anything. It’s ready to go out of the box: there’s a library of protocols to start with, and you can change protocols as needed for your own clinic.
The other part of being clinic-ready is aligning with the way that planning is currently performed, which is using dose–volume histograms. Treatment plans are optimized by manipulating these dose objectives, and that’s exactly what we predict. So users aren’t changing the whole paradigm of how planners operate. They still use their treatment planning system (TPS) – we just put the objectives in there. Basically, a TPS script sends the patient’s CT and contours to the cloud, where Plan AI makes the predictions. The TPS then pulls back in the objectives built from the models, based on this specific patient’s anatomy. The TPS runs the optimization and, as a last step, can send the plan back to Plan AI to check that it fits within the best achievable predictions.
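Schematically, that round trip looks something like the sketch below. Every function and data value here is a hypothetical stand-in illustrating the workflow just described, not an actual vendor API.

```python
# Schematic round trip between a treatment planning system (TPS) and a cloud
# prediction service; all functions are illustrative placeholders.

def tps_export(patient_id):
    """Stand-in for a TPS script exporting the patient's CT and contours."""
    return "ct_volume", {"parotid_left": "contour", "spinal_cord": "contour"}

def cloud_predict(ct, contours):
    """Stand-in for the cloud service returning best-achievable doses (Gy) per organ."""
    return {organ: 20.0 for organ in contours}

def tps_optimize(objectives):
    """Stand-in for the TPS running its usual inverse optimization against the objectives."""
    return {organ: dose - 0.5 for organ, dose in objectives.items()}

def cloud_check(plan, prediction, tolerance_gy=1.0):
    """Stand-in for the final check that the plan sits within the predicted best achievable."""
    return all(plan[o] <= prediction[o] + tolerance_gy for o in plan)

ct, contours = tps_export("patient_001")
prediction = cloud_predict(ct, contours)   # patient-specific predictions from the cloud
objectives = dict(prediction)              # predictions become the optimization objectives
plan = tps_optimize(objectives)            # the TPS optimizes as usual
print("plan within predicted best achievable?", cloud_check(plan, prediction))
```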
Did you encounter any challenges bringing AI into a clinical setting?
Interestingly, the challenges aren’t technical, they are more human related. One of the more systemic challenges is data security when using medical data for training. A nice thing about our system is that the features we generate from treatment plans are just mathematical shape-relationship features and don’t involve a lot of identifiable information.
AI has been used in radiation therapy for image contouring and auto-segmentation, and early efforts were not so good. So, there’s always a good, healthy scepticism. But once you show people that it works and works well, this can be overcome. I have seen some people worried about job security and AI taking over. We are medical professionals designing a treatment plan to care for a patient and there’s a lot of pride and art in that – if you automate that, it takes away some of this pride and art.
I tell people that if we automate the easier things, then they can spend their quality time on the more difficult and challenging cases, because that’s where their talent might be needed more.
Do you have any advice for clinics looking to adopt AI-driven planning?
Introduce it as an assistant, not as a solution. You want people that already know what they’re doing to be able to use their knowledge more efficiently. We want to make their jobs easier and show them that it also improves quality.
With dosimetrists, for example, they create a plan and work hard getting the dose down – and then the physician looks at it and suggests that they can do better. Predictive planning gives them confidence that they are right and takes the uncertainty out of the physician review process. And once you’ve gained that level of confidence, you can start using it for adaptive planning or other technologies.
Where do you see predictive modelling and AI in oncology in five years from now?
Right now, there’s been a lot of data collected, but we want that data to advance and learn. Having multiple centres adding to this pool of knowledge and being able to continually update those models from new, broader data sets could be of huge value.
In terms of patient outcomes, we’ve done a lot of the work looking at how the spatial pattern of dose impacts toxicity and outcomes. This is part of the research being performed at Johns Hopkins and still in discovery mode. But down the road, some of these predictions of normal tissue outcomes could be fed into the planning process to help reduce toxicity at the patient level.
Finally, what’s been the most rewarding part of this journey for you?
During my prior experience building treatment planning systems, the biggest problem was always that nobody knew what the objective was. Nobody knew how to tell the system: “this is the dose I expect to receive, now optimize to get it for me”, because you didn’t know what you could do. For any given patient, you could ask for too much or too little. Now, for the first time, I argue that we actually know what our objective is in our treatment planning.
This levels the playing field between different environments, different countries, or even different dosimetrists with different levels of experience. The Plan AI tool brings all this to a consistent state and enables high quality, efficient planning everywhere. We can provide this predictive planning tool to clinics around the world. Now we just have to get everybody using it.
- You can listen to the full interview with Todd McNutt in the Physics World Weekly podcast.
The post Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans appeared first on Physics World.
The future of particle physics: what can the past teach us?
Robert P Crease reports from a conference at CERN on particle physics in the 1980s and 1990s
The post The future of particle physics: what can the past teach us? appeared first on Physics World.
In his opening remarks to the 4th International Symposium on the History of Particle Physics, Chris Llewellyn Smith – who was a director-general of CERN in the 1990s – suggested participants should speak about “what’s not written in the journals”, including “mistakes, dead-ends and problems with getting funding”. Doing so, he said, would “provide insight into the way science really progresses”.
The symposium was not your usual science conference. Held last November at CERN, it took place inside the lab’s 400-seat main auditorium, which has been the venue for many historic announcements, including the discovery of the Higgs boson. Its brown-beige walls are covered with lively designs by the Finnish artist Ilona Rista, suggesting to me the aftermath of a collision of high-energy bar codes.
The focus of the meeting was the development of particle physics in the 1980s and 1990s – a period that saw the construction and operation of various important accelerators and detectors. At CERN, these included the UA1 and UA2 experiments at the Super Proton Synchrotron, where the W and Z bosons were discovered. Later, there was the Large Electron-Positron Collider (LEP), which came online in 1989, and the Large Hadron Collider (LHC), approved five years later.
Delegates also heard about the opening of various accelerators in the US during those two decades, including two at the Stanford Linear Accelerator Center – the Positron-Electron Project in 1980 and the Stanford Linear Collider in 1989. Most famous of all was the start-up of the Tevatron at Fermilab in 1983. Over at Dubna in the former Soviet Union, meanwhile, scientists built the Nuclotron, a superconducting synchrotron, which opened in 1992.
Conference speakers covered unfinished machines of the era as well. The US cancelled two proton–proton facilities – ISABELLE in 1983 and the Superconducting Super Collider (SSC) a decade later. The Soviet Union, meanwhile, abandoned the multi-TeV proton–proton collider UNK a few years later, though news has recently emerged that Russia might revive the project.
Several speakers recounted the discovery of the W and Z particles at CERN in 1983 and the discovery of the top quark at Fermilab in 1995. Others addressed the strange fact that fewer neutrinos from the Sun had been detected than theory suggested. The “solar-neutrino problem”, as it was known, was finally resolved by the discovery of neutrino oscillations – observed by Takaaki Kajita’s Super-Kamiokande team in 1998 and confirmed for solar neutrinos by Art McDonald’s SNO collaboration – for which the pair shared the 2015 Nobel Prize for Physics.
The conference also addressed unsuccessful searches for proton decay, axions, magnetic monopoles, the Higgs boson, supersymmetry particles and other targets. Other speakers described projects with highly positive outcomes, such as the advent of particle cosmology, or what some have jokingly dubbed “the heavenly lab”. The development of string theory, grand unified theories and perturbative quantum chromodynamics was tackled too.
In an exchange in the question-and-answer session after one talk, the Greek physicist Kostas Gavroglu referred to many such quests as “failures”. That remark prompted the Australian-born US theoretical physicist Helen Quinn to say she preferred the term “falling forward”; such failures, she said, were instances of “I tried this, and it didn’t work, so I tried that”.
In relating his work on detecting gravitational waves, the US Nobel-prize-winning physicist Barry Barish said he felt his charge was not to celebrate the importance of his discoveries nor the ingenuity of the route he took. Instead, Barish explained, his job was to answer the much more informal question: “What made me do what?”.
His point was illustrated by the US theorist Alan Guth, who described the very human and serendipitous path he took to working on cosmic inflation – the super-fast expansion of the universe just after the Big Bang. When he started, Guth said, “all the ingredients were already invented”. But the startling idea of inflation hinged on accidental meetings, chance conversations, unexpected visits, a restricted word count for Physical Review Letters, competitions, insecurities and “spectacular realizations” coalescing.
Wider world
Another theme that arose in the conference was that science does not unfold inside its own bubble but can have extensive and immediate impacts on the world around it. Two speakers, for instance, recounted the invention of the World Wide Web at CERN in the late 1980s. It’s fair to say that no other discovery by a single individual – Tim Berners-Lee – has so radically and quickly transformed the world.
The growing role of international politics in promoting and protecting projects was mentioned too, with various speakers explaining how high-level political negotiations enabled physicists to work at institutions and experiments in other nations. The Polish physicist Agnieszka Zalewska, for example, described her country’s path to membership in CERN, while Russian-born US physicist Vladimir Shiltsev spoke about the “diaspora” of Russian particle physicists after the fall of the Soviet Union in 1991.
Sometimes politics created destructive interference. The US physicist, historian and author Michael Riordan described how the US’s determination to “go it alone” to outcompete Europe in high-energy physics was a major factor in bringing about the opposite: the termination of the SSC in 1993. As a result of that project’s controversial closure, the centre of gravity of high-energy physics shifted to Europe.
Indeed, contemporary politics occasionally hit the conference itself in incongruous and ironic ways. Two US physicists, for example, were denied permission to attend because budgets had been cut and travel restrictions increased. In the end, one took personal time off and paid his own way, leaving his affiliation off the programme.
Before the conference, some people complained that conference organizers hadn’t paid enough attention to physicists who’d worked in the Soviet Union but were from occupied republics. Several speakers addressed this shortcoming by mentioning people like Gersh Budker (1918–1977). A Ukrainian-born physicist who worked and died in the Soviet Union, Budker was nominated for a Nobel Prize (1957) and even has had a street named after him at CERN. Unmentioned, though, was that Budker was Jewish and that his father was killed by Ukrainian nationalists in a pogrom.
On the final day of the conference, which just happened to be World Science Day for Peace and Development, CERN mounted a public screening of the 2025 documentary film The Peace Particle. Directed by Alex Kiehl, much of it was about CERN’s internationalism, with a poster for the film describing the lab as “Mankind’s biggest experiment…science for peace in a divided world”.
But in the Q&A afterwards, some audience members criticized CERN for allegedly whitewashing Russia for its invasion of Ukraine and Israel for genocide. Those onstage defended CERN on the grounds of its desire to promote internationalism.
The critical point
The keynote speaker of the conference was John Krige, a science historian from Georgia Tech who has worked on a three-volume history of CERN. Those who launched the lab, Krige reminded the audience, had radical “scientific, political and cultural aspirations” for the institution. Their dream was that CERN wouldn’t just revive European science and promote regional collaborative efforts after the Second World War, but potentially improve the global world order too.
Krige went on to quote one CERN founder, who’d said that international science facilities such as CERN would be “one of the best ways of saving Western civilization”. Recent events, however, have shown just how fragile those ambitions are. Alluding to CERN’s Future Circular Collider and other possible projects, Llewellyn Smith ended his closing remarks with a warning.
“The perennial hope that the next big high-energy project will be genuinely global,” he said, “seems to be receding over the horizon due to the polarization of world politics”.
The post The future of particle physics: what can the past teach us? appeared first on Physics World.
A breakthrough in modelling open quantum matter
By analysing the Liouville gap in imaginary time, scientists reveal universal phase‑transition behaviour in both ground and finite‑temperature states
The post A breakthrough in modelling open quantum matter appeared first on Physics World.
Attempts to understand quantum phase transitions in open systems usually rely on real‑time Lindbladian evolution, which tracks how a quantum state changes as it relaxes toward a steady state. This approach is powerful for studying decoherence, dissipation and long‑time behaviour, but it often fails to reveal the deeper structure of the system including the phase transitions, critical points and hidden quantum order that define its underlying physics.
In this work, the researchers introduce a new framework called imaginary‑time Lindbladian evolution, which allows them to define and classify quantum phases in open systems using the spectrum of an imaginary‑Liouville superoperator. This approach works not only for pure ground states but also for finite‑temperature Gibbs states of stabilizer Hamiltonians, showing its relevance for realistic, mixed‑state conditions.
A key diagnostic in their method is the imaginary‑Liouville gap, the spectral gap between the lowest and next‑lowest decay modes. When this gap closes, the system undergoes a phase transition, a change that is accompanied by diverging correlation lengths and nonanalytic shifts in physical observables. The closing of this gap also coincides with the divergence of the Markov length, a recently proposed indicator of criticality in open quantum systems.
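In a schematic notation (chosen here for illustration – the paper’s own conventions may differ), ordering the eigenvalues of the imaginary‑Liouville superoperator by their real parts, the gap and its role at criticality read:

```latex
% Schematic spectral-gap condition; notation illustrative, not taken from the paper
\[
\Delta \;=\; \operatorname{Re}\lambda_{1} - \operatorname{Re}\lambda_{0},
\qquad \Delta \rightarrow 0 \;\;\text{at the phase transition, where the correlation length } \xi \rightarrow \infty .
\]
```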
To demonstrate the power of their framework, the researchers map out phase diagrams for systems with a given symmetry, capturing both spontaneous symmetry breaking and average symmetry‑protected topological phases. Their method reveals universal critical behaviour that real‑time Lindbladian steady states fail to detect, highlighting why imaginary‑time evolution fills a missing piece in the theory of open‑system phases.
Importantly, the authors emphasise that real‑time Lindbladians remain essential for modelling dissipation in practical settings. Their new framework complements this conventional approach, offering a systematic way to study phase transitions in open systems. They also outline how phase diagrams can be constructed using both bottom‑up (state‑based) and top‑down (Hamiltonian‑based) strategies, illustrating the method with a dissipative transverse‑field Ising model.
Overall, this work provides a unified and versatile way to understand quantum phases in open systems, revealing critical behaviour and topological structure that were previously inaccessible. It opens new directions for studying mixed‑state quantum matter and advances the theoretical foundations needed for future quantum technologies.
Read the full article
Yuchen Guo et al 2025 Rep. Prog. Phys. 88 118001
Do you want to learn more about this topic?
Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)
The post A breakthrough in modelling open quantum matter appeared first on Physics World.
How reversibility becomes irreversible
A new framework shows how lost information in quantum systems gives rise to macroscopic entropy and the arrow of time
The post How reversibility becomes irreversible appeared first on Physics World.
In the macroscopic world, we see irreversible processes everywhere: heat flowing from hot to cold, gases mixing, systems decaying. Yet at the microscopic level, quantum mechanics is perfectly reversible, with its equations running equally well forwards and backwards in time. How, then, does irreversibility emerge from fundamentally reversible dynamics?
A common explanation is coarse-graining, which simplifies a complex system by ignoring microscopic details and focusing only on large-scale behaviour. To make the micro–macro divide precise, however, one must first define what “macroscopic” means. Here it is given a quantitative inferential meaning: a state is macroscopic if it is perfectly inferable from the perspective of a specified measurement and prior. Central to this framework is a coarse-graining map built from the measurement and its optimal Bayesian recovery via the Petz map; macroscopic states are precisely its fixed points, turning macroscopicity into a sharp condition of perfect inferability. This construction is grounded in Bayesian retrodiction, which infers what a system likely was before it was measured, together with an observational deficit that quantifies how much information is lost in forming a macroscopic description.
States that are macroscopically inferable can be characterised in several equivalent ways, all tied to a new measure of disorder called macroscopic entropy, which captures how irreversible, or “uninferable”, a macroscopic process appears from the observer’s perspective. This perspective is formalised through inferential reference frames, built from the combination of a prior and a measurement, which determine what an observer can and cannot recover about the underlying quantum state.
The researchers also develop a resource theory of microscopicity, treating macroscopic states as free and identifying the operations that cannot generate microscopic detail. This unifies and extends existing resource theories of coherence, athermality, and asymmetry. They further introduce observational discord, a new way to understand quantum correlations when observational power is limited, and provide conditions for when this discord vanishes.
Altogether, this work reframes macroscopic irreversibility as an information-theoretic phenomenon, grounded not in a fundamental dynamical asymmetry but in an inferential asymmetry arising from the observer’s limited perspective. It offers a unified way to understand coarse-graining, entropy, and the emergence of classical behaviour from quantum mechanics. It deepens our understanding of time’s direction and has implications for quantum computing, thermodynamics, and the study of quantum correlations in realistic, constrained settings.
Read the full article
Macroscopicity and observational deficit in states, operations, and correlations
Teruaki Nagasawa et al 2025 Rep. Prog. Phys. 88 117601
Do you want to learn more about this topic?
Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)
The post How reversibility becomes irreversible appeared first on Physics World.
Visible light paints patterns onto chiral antiferromagnets
New technique for manipulating domains helps pave the way towards antiferromagnetic data storage
The post Visible light paints patterns onto chiral antiferromagnets appeared first on Physics World.
Researchers at Los Alamos National Laboratory in New Mexico, US have used visible light to both image and manipulate the domains of a chiral antiferromagnet (AFM). By “painting” complex patterns onto samples of cobalt-intercalated niobium disulfide (Co1/3NbS2), they demonstrated that it is possible to control AFM domain formation and dynamics, boosting prospects for data storage devices based on antiferromagnetic materials rather than the ferromagnetic ones commonly used today.
In antiferromagnetic materials, the spins of neighbouring atoms in the material’s lattice are opposed to each other (they are antiparallel). For this reason, they do not exhibit a net magnetization in the absence of a magnetic field. This characteristic makes them largely immune to disturbances from external magnetic fields, but it also makes them all but invisible to simple electrical and optical probes, and extremely difficult to manipulate.
A special structure
In the new work, a Los Alamos team led by Scott Crooker focused on Co1/3NbS2 because of its topological nature. In this material, layers of cobalt atoms are positioned, or intercalated, between monolayers of niobium disulfide, creating 2D triangular lattices with ABAB stacking. The spins of these cobalt atoms point either toward or away from the centers of the tetrahedra formed by the atoms. The result is a noncoplanar spin ordering that produces a chiral, or “handed,” spin texture.
This chirality affects the motion of electrons in the material because when an electron passes through a chiral pattern of spins, it picks up a geometrical phase known as a Berry phase. This makes it move as if it were “seeing” a region with a real magnetic field, giving the material a nonzero Hall conductivity which, in turn, affects how it absorbs circularly polarized light.
Characterizing a topological antiferromagnet
To characterize this behaviour, the researchers used an optical technique called magnetic circular dichroism (MCD) that measures the difference in absorption between left and right circularly polarized light and depends explicitly on the Hall conductivity.
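Written out in one common convention (the precise normalization used in the study is not given in the article), the signal is simply the differential absorption of the two circular polarizations, whose sign flips between domains of opposite chirality:

```latex
% Differential-absorption definition of magnetic circular dichroism (one common convention)
\[
\mathrm{MCD}(\lambda) \;\propto\; A_{\mathrm{LCP}}(\lambda) \;-\; A_{\mathrm{RCP}}(\lambda)
\]
```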
Similar to the MCD that is measured in well-known ferromagnets such as iron or nickel, the amplitude and sign of the MCD measured in Co1/3NbS2 varied as a function of the wavelength of the light. This dependence occurs because light prompts optical transitions between filled and empty energy bands. “In more complex materials like this, there is a whole spaghetti of bands, and one needs to consider all of them,” Crooker explains. “Precisely which mix of transitions are being excited depends of course on the photon energy, and this mix changes with energy. Sometimes the net response is positive, sometimes negative; it just depends on the details of the band structure.”
To understand the mix of transitions taking place, as well as the topological character of those transitions, scientists use the concept of Berry curvature, which is the momentum-space version of the magnetic field-like effect described earlier. If the accumulated Berry phase is positive (negative), the electron is moving through a spin texture with right-handed (left-handed) chirality, which is captured by the Berry curvature of the band structure in momentum space.
Imaging and painting chiral AFM domains
To image the domains with positive and negative chirality directly, the researchers cooled the sample below its ordering temperature, shone light of a particular wavelength onto it and measured its MCD using a scanning MCD microscope. The sign of the measured MCD value revealed the chirality of the AFM domains.
To “write” a different chirality into these AFM domains, the researchers again cooled the sample below its ordering temperature, this time in the presence of a small positive magnetic field B, which fixed the sample in a positive chiral AFM state. They then reversed the polarity of B and illuminated a spot of the sample to heat it above the ordering temperature. Once the spot cooled down, the negative-polarity B-field changed the AFM state in the illuminated region into a negative chirality. When the “painting” was finished, the researchers imaged the patterns with the MCD microscope.
In the past, a similar thermo-magnetic scheme gave rise to ferromagnetic-based data storage disks. This work, which is published in Physical Review Letters, marks the first time that light has been used to manipulate AFM chiral domains – a fundamental requirement for developing AFM-based information storage technology and spintronics. In the future, Crooker says the group plans to extend this technique to characterize other complex antiferromagnets with nontrivial magnetic configurations, use light to “write” interesting spatial patterns of chiral domains (patterns of Berry phase), and see how this influences electrical transport.
The post Visible light paints patterns onto chiral antiferromagnets appeared first on Physics World.
Green concrete: paving the way for sustainable structures
Concrete is one of the world's biggest carbon emitters. Benjamin Skuse asks if AI can help tame concrete’s climate impact
The post Green concrete: paving the way for sustainable structures appeared first on Physics World.
Grey, ugly, dull. Concrete is not the most exciting material in the world. That is, until you start to think about its impact on our lives. Concrete is the second most consumed material on the planet after water. Humanity uses about 30 billion tonnes of the stuff every year, the equivalent of building an entire new New York City every month. Put another way, there is so much concrete in the world and so much being made that by the 2040s it will outweigh all living matter.
As the son of a builder, I have made a few concrete mixes over the years myself, usually following my father’s tried and trusted recipe. Take one part cement (fine mineral powder), two parts sand, and four parts aggregate (crushed stone), then mix and add enough water until it all goes gloopy.
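Put as a trivial batch calculation (the 0.5 water-to-cement ratio below is a common rule of thumb standing in for “until it all goes gloopy”, not part of the recipe above), the split looks like this:

```python
# Splitting a dry concrete batch by the 1:2:4 cement:sand:aggregate recipe above.
# The water/cement ratio of 0.5 is a common rule of thumb, not part of the quoted recipe.
def batch(total_dry_kg, ratio=(1, 2, 4), water_cement_ratio=0.5):
    parts = sum(ratio)
    cement, sand, aggregate = (total_dry_kg * r / parts for r in ratio)
    return {"cement": cement, "sand": sand, "aggregate": aggregate,
            "water": water_cement_ratio * cement}

print(batch(70))   # {'cement': 10.0, 'sand': 20.0, 'aggregate': 40.0, 'water': 5.0}
```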
The ubiquity and low cost of these simple ingredients are just two of the reasons for concrete’s global reach. In liquid form, it can be moulded into almost any shape, and once set, it is as hard and durable as stone. What’s more, it doesn’t burn, rot or get eaten by animals.
These factors make concrete the ideal material for everything from vast imposing dams to sleek kitchen floors. However, its gargantuan presence across society comes at an equally epic environmental cost. If concrete were a country, it would rank third behind only the US and China as a greenhouse gas emitter.
Though raw material processing and transport of concrete are part of the problem, concrete’s biggest environmental impact comes from the heat and chemical processes involved in producing cement. Ordinary cement clinker (the raw form of cement before it is ground to a powder) is the product of heating limestone up to 1450 °C until it breaks apart into lime and carbon dioxide (CO2). This heating requires lots of energy and the chemical process releases huge amounts of the greenhouse gas CO2 – meaning that cement makes up around 90% of the carbon footprint of an average concrete mix.
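The stoichiometry alone shows why the numbers are so large: calcining limestone (CaCO3 → CaO + CO2) releases a fixed mass of CO2 per tonne of limestone before any fuel-related emissions are counted. The small calculation below uses standard molar masses; the extrapolation at the end leans on the illustrative 1:2:4 mix above rather than a global-average cement fraction, so treat it as an order-of-magnitude check only.

```python
# Process CO2 released by calcining limestone: CaCO3 -> CaO + CO2.
# Standard molar masses only; fuel-combustion emissions come on top of this figure.
M_CACO3 = 100.09   # g/mol, calcium carbonate
M_CO2 = 44.01      # g/mol, carbon dioxide

co2_per_tonne_limestone = M_CO2 / M_CACO3   # tonnes of CO2 per tonne of CaCO3 decomposed
print(f"{co2_per_tonne_limestone:.2f} t CO2 per t of limestone calcined")   # ~0.44 t

# Very rough extrapolation, using the 1:2:4 mix as an illustrative cement fraction
# and treating cement mass as roughly the limestone mass calcined.
concrete_per_year_t = 30e9
cement_fraction = 1 / 7
print(f"~{concrete_per_year_t * cement_fraction * co2_per_tonne_limestone / 1e9:.1f} "
      "billion t of process CO2 per year (order of magnitude only)")
```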

In the UK and some other parts of the world, this climate impact is well recognized, with the industry having made significant efforts to decarbonize over the last few decades. “Since 1990, the UK concrete industry has decreased its direct and indirect environmental impacts by over 53% through various technology levers,” says Elaine Toogood – an architect and senior director at the Mineral Products Association’s Concrete Centre, the UK’s technical hub for all things concrete.
This reduction has been achieved through actions such as fuel switching, decarbonizing electricity and transport networks, and carbon capture technology. “For example, over 50% of all the heat that’s needed to make cement is now supplied by waste-derived fuels,” Toogood adds.
Yet the sheer scale of the global concrete industry means that much more needs to be done to fully mitigate concrete’s carbon impact. Can physics, and more specifically AI, lend a hand?
Low-carbon replacements
Replacing cement – concrete’s least green ingredient – with low-carbon alternatives seems like a good place to start. Two well-proven options have been available for decades.
Fly ash – the by-product of burning coal at power plants – can replace about 30% of cement in concrete mixes. It has been used in the construction of many prominent structures including the Channel Tunnel, which opened in 1994. Blast furnace slag – the by-product of iron and steel production – is another capable replacement, and can replace up to 70% of the cement content. Slag was used in 2009 to substitute half of the regular cement in the precast concrete units that now make up the sea defences on Blackpool beach.
Yet although these waste materials are currently extensively used as cement or concrete additions in the UK and elsewhere, they rely on very polluting sources (coal-fired power plants and blast furnaces) that are gradually being phased out globally to meet climate targets. As a result, fly ash and blast furnace slag are not long-term solutions. New low-carbon materials are needed, which is where physics can play a decisive role.
Based at Debre Tabor University in Ethiopia, Gashaw Abebaw Adanu is an expert in innovative construction materials. In 2021 he and colleagues investigated the potential of partially replacing standard cement (at levels of 0%, 5%, 10%, 15% and 20%) with ash from burnt lowland Ethiopian bamboo leaves, a common local construction waste material (Adv. Civ. Eng 10.1155/2021/6468444). The findings were encouraging. Though the concrete took longer to set as the bamboo leaf ash content increased, the material’s strength, water absorption and resistance to sulphate attack (concrete breakdown caused by sulphate ions reacting with the hardened cement paste) all improved for 5–10% bamboo leaf ash mixes. The results suggest that up to 10% of cement could be swapped for this local low-carbon alternative.
Steel, copper – or hair?
More recently, Adanu has turned his focus to concrete fibre reinforcement. Adding small amounts of steel, copper or polyethene fibres is known to increase concrete’s ductility and crack resistance by up to 200% and 90%, respectively. The tiny fibres act like micro-stitches throughout the entire mix, transforming concrete from a brittle material into a tough, energy-absorbing composite.
Fibre reinforcement also leads to major cost savings and a reduced carbon footprint, primarily by removing the need for traditional steel rebar and mesh, where 50 kg of steel fibres can often do the work of 100 kg+ of traditional rebar. Eliminating this expensive material also reduces labour and maintenance costs.
In his latest research, Adanu has explored an unexpected alternative fibre-reinforcement material that would cut costs further, as it would otherwise go to landfill: human hair (Eng. Res. Express 7 015115). Adanu took waste hair from barbershops in Debre Tabor (with permission, of course) and added it in small, varying quantities to standard concrete mixes. “It’s not biodegradable, it’s not compostable, but as a fibre reinforcement material, we found that using 1–2% human hair improves the concrete’s tensile strength, compressive strength, cracking resistance and reduces shrinkage,” says Adanu. “It makes concrete more clean and sustainable, and because it improves the quality of the concrete, it reduces cost at the same time.”
Research like Adanu’s, involving experimentation with local materials, has been the driving force for innovation in construction for millennia. Examples range from the ancient Neolithic practice of boosting mudbricks’ strength with local straw to the Romans’ use of volcanic dust as high-quality cement in constructions like the Pantheon in Rome – a structure that still stands to this day, its 43.3 m-diameter unreinforced concrete dome remaining the largest in the world. But testing one material at a time is no longer the only way.

Taking a more modern, wide-ranging approach, a team of researchers led by Soroush Mahjoubi and Elsa Olivetti of the Massachusetts Institute of Technology (MIT) recently mined the cement and concrete literature, together with a database of over one million rock samples, looking for cement ingredient substitutes (Communications Materials 6 99). The study confirmed the potential not only of the well-known alternatives fly ash and metallurgical slags, but also of various biomass ashes like the bamboo leaf ash Adanu investigated, as well as rice husk, sugarcane bagasse, wood, tree bark and palm oil fuel ashes.
The meta-review also identified various other waste materials with high potential. These include construction and demolition wastes (ceramics, bricks, concrete), waste glass, municipal solid waste incineration ashes and mine tailings (iron ore, copper, zinc), as well as 25 igneous rock types that could significantly reduce cement’s carbon impact.
AI to the rescue
Although a number of these alternative concrete materials have been known for some time, they have struggled to make an impact, with very few being used to partially replace regular cement in ready-mix concretes. Getting construction companies or concrete contractors to give them a try is no simple task.
“Concrete contractors are used to using certain mixes for certain jobs at certain times of the year, so they can plan a site and project based on how those materials are going to behave,” says Toogood. “Newer mixes act slightly differently when fresh,” she adds, which makes life tricky for those running a construction site, where concrete that behaves in a predictable manner is critical so that things run smoothly and efficiently.
Two physicists – Raphael Scheps and Gideon Farrell – aim to build this trust in low-carbon alternatives through their UK construction technology company Converge. Starting out by using sensors to measure the real-time performance of different concrete mixes in situ, they have built one of the world’s largest datasets on how concrete behaves.

They can now apply an AI model underpinned by physics principles. The program simulates the physical and chemical interactions of different components to predict the performance of a vast number of concrete mixes in a wide range of situations to a high level of accuracy. And this is key, as it builds trust to experiment with lower-carbon mixes. “With projects in the UK and Australia, we’ve helped people tweak the mix that they’re using and achieve quite major carbon savings,” says Scheps. “Anywhere from 10% all the way up to 44%.”
Currently used to recommend existing cost-saving concrete recipes, Scheps sees Converge’s AI model becoming more sophisticated over time. “As it starts to uncover the real fundamental physics-based rules for what drives concrete chemistry, our model will make projections for entirely new materials,” he enthuses.
Also exploring the power of AI to optimize concrete production is US company Concrete.ai. Like Converge, Concrete.ai was born from the idea of applying physics principles to optimize traditional materials and industries; specifically, how AI can be used to reduce the carbon footprint of concrete. And also like Converge, the company’s technology rests on one of the world’s largest concrete databases, consisting of vast amounts of different recipes and materials, alongside their associated performances.
Trained on this dataset, Concrete.ai’s generative AI model creates millions of possible mix designs to identify the optimal concrete recipe for any particular application. “The main difference between a solution like Concrete.ai’s and general models like ChatGPT or Gemini is that our goal is really to create recipes that don’t exist yet,” explains chief technology officer and co-founder Mathieu Bauchy. “Popular large language models regurgitate what they have been trained on and tend to hallucinate, whereas our model discovers new recipes that have never been produced before without breaking the laws of physics or chemistry, and in a reliable way.”
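As a rough sketch of what generating and screening large numbers of candidate mixes can look like, the toy search below scores random recipes against an Abrams’-law-style strength surrogate and an embodied-carbon estimate. It is not Converge’s or Concrete.ai’s model, and every coefficient in it is an illustrative placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mix-design search: generate random recipes, keep the lowest-carbon one
# that still meets a strength target. All numbers are illustrative placeholders.
N = 100_000
scm_frac = rng.uniform(0.0, 0.5, N)   # fraction of cement replaced by a low-carbon
                                      # supplementary cementitious material (SCM)
w_c = rng.uniform(0.35, 0.6, N)       # water-to-binder ratio
binder = 350.0                        # kg of binder per m^3 of concrete (assumed)

# Abrams'-law-style strength surrogate (placeholder coefficients), with a simple
# penalty for high SCM replacement.
strength = 120.0 / (7.0 ** w_c) * (1.0 - 0.4 * scm_frac)   # MPa (toy estimate)

# Embodied CO2 of the binder (illustrative emission factors, kg CO2 per kg).
co2 = binder * ((1 - scm_frac) * 0.9 + scm_frac * 0.05)

ok = strength >= 40.0                               # require a 40 MPa-class mix
best = np.argmin(np.where(ok, co2, np.inf))         # lowest-carbon feasible mix
print(f"best mix: {scm_frac[best]:.0%} SCM, w/c = {w_c[best]:.2f}, "
      f"{strength[best]:.1f} MPa, {co2[best]:.0f} kg CO2 (binder) per m^3")
```

A production system would replace the hand-written surrogate with models trained on measured mix performance, which is where the large proprietary datasets described above come in.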
Bauchy sees Concrete.ai’s role as a bridge between concrete producers keen to cut their costs and carbon footprint, and innovators like Adanu or the MIT group exploring new low-carbon concrete materials who are unable to demonstrate the performance of these materials in real-world scenarios and at scale.
Circular benefits
It is perhaps apt that the industry most in need of AI insights from the likes of Converge, Concrete.ai and their growing number of competitors is the AI industry itself. New data centres being used to train, deploy and deliver AI applications and services are the cause of a huge spike in the greenhouse gas emissions of tech giants such as Google, Meta, Microsoft and Amazon. And one of the biggest contributors to those emissions is the concrete from which these hyperscale facilities are built.

This is the reason Meta recently partnered with concrete maker Amrize to develop AI-optimized concrete. For Meta’s new 66,500 m² data centre in Rosemount, Minnesota, the partners applied Meta’s AI models and Amrize’s materials-engineering expertise to deliver concrete that met key criteria including high strength and low carbon content, as well as practical performance characteristics like decent cure speed and surface quality. The partners estimate that the custom mix will reduce the total carbon footprint of this concrete by 35%.
“There is an interesting synergy between concrete and AI,” says Bauchy. “AI can help design greener concrete, and on the other hand, concrete can be used to build more sustainable data centres to power AI.” With other tech giants exploring AI’s potential in reducing the carbon footprint of the concrete they use too, it may well be that the very places in which AI is developed become the testbeds for AI-derived sustainable green concrete solutions.
The post Green concrete: paving the way for sustainable structures appeared first on Physics World.
New journal aims to advance the interdisciplinary field of personalized health
Medical Sensors and Imaging will publish innovative research at the intersection of engineering, biomedical and computer sciences
The post New journal aims to advance the interdisciplinary field of personalized health appeared first on Physics World.
Personalized health – the use of individualized measurements to address each patient’s specific needs – is a research field that’s evolving at pace. Bringing this level of personalization into the clinic is an interdisciplinary challenge, requiring the development of sensors that generate clinically meaningful data outside the hospital, new imaging modalities and analysis techniques, and computational tools that address the uncertainties of dealing with just one individual.
Much of the most impactful work in this field sits in the spaces between established disciplines. And for researchers looking to publish their findings or read about the latest breakthroughs, this work is often scattered across discipline-specific journals. A new open access journal from IOP Publishing – Medical Sensors & Imaging (MSI) – aims to remedy this shortfall, providing a dedicated home for authors working across sensing, imaging, modelling and data-driven healthcare.

“We want a journal where physicists, engineers, computer scientists, biomedical researchers and clinicians can publish and read work that advances personalized health, without confinement into traditional silos,” explains founding editor-in-chief Marco Palombo from Cardiff University. “MSI also aims to play an important role in strengthening interdisciplinary exchange.”
“The community needs a specialized forum that doesn’t just report on new materials or a clinical trial, but validates innovations that can specifically solve complex biomedical challenges,” adds deputy editor Xiliang Luo from Qingdao University of Science and Technology. “I think this journal is a perfect fit for that gap.”
Connecting communities
Published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine (IPEM), MSI aims to dismantle the barriers between engineering innovation and clinical application by creating a community of experts that work together to translate innovative technology into clinical settings.
MSI sits within IPEM’s journal portfolio that includes Physics in Medicine & Biology, Physiological Measurement and Medical Engineering & Physics. Its aims and scope were designed to complement, rather than overlap with, these existing journals and provide a dedicated venue for translational work and practical applied research that may otherwise struggle to fit a traditional scope.

Being part of this established family of journals brings with it strong editorial standards, an established readership base and a commitment to scientific integrity. The journal also offers rapid, high-quality peer review, with feedback that’s constructive, rigorous and fair. MSI is fully open access, which maximizes the visibility, reach and impact of its published papers.
“For a new journal in a dynamic field, ensuring content is discoverable and barrier-free is essential for building an audience quickly and establishing credibility,” says Palombo. “We also wanted MSI to support global participation. Many excellent groups operate with limited budgets but make major scientific contributions. Open access reduces inequities in who can read and build on published work.”
“For the authors, we can provide a specialized platform for scientists whose work transcends traditional boundaries, offering visibility to a broad audience that’s eager for translational solutions,” says Luo. “And for the readers, I think we will be the go-to resource for academic researchers, industry R&D leaders, and healthcare innovators seeking the latest breakthroughs in personalized health monitoring and advanced diagnostics.”
Hot topics
Palombo contributed to the strategic development of the journal at an early stage, drawing upon his experience in healthcare and medical imaging research and engaging with the research community to identify the scientific niche that MSI could fill. Working with IOP Publishing, he helped shape the journal’s aims and scope and assembled a diverse, internationally recognized editorial board with knowledge aligned with the journal’s mission – including Luo, who brings specialist expertise in wearable technologies and biosensors.

The journal will publish high-quality research on novel biomedical sensing and imaging techniques, along with the algorithms, validation frameworks and translational studies that demonstrate their application in real-world medicine. MSI also provides a platform to showcase research on hot topics such as wearable and implantable sensors for continuous physiological monitoring, microneedle-based sensing technologies and breath analysis.
The development of flexible and biocompatible materials will be key for the growth of bio-integrated devices and biodegradable or transient electronics, as will anti-fouling strategies that enable use of sensors in complex biological environments. On the imaging side, the journal scope encompasses mainstay medical imaging techniques such as MRI, CT, ultrasound, PET and SPECT, as well as emerging multimodal and hybrid approaches, with a focus on technical innovation and translational relevance.
“Given my own background, I’m particularly keen to see strong submissions in the area of MRI, including advanced quantitative biomarkers and approaches that probe tissue microstructure,” notes Palombo. “I also see huge potential in connecting imaging to computational modelling – particularly digital twins – and in building imaging pipelines that enable personalized diagnosis and prognosis.”
“Other exciting areas include combining sensing and imaging technologies into one system, and closed-loop ‘sense then act’ systems, which sense something and can then release medicine to treat the disease,” says Luo.
The rise of AI
Artificial intelligence (AI) is becoming increasingly central to both sensing and imaging, and will likely play a major role in the evolution of personalized health, enabling a shift towards multimodal fusion of sensor streams, imaging and clinical data. AI could also facilitate the introduction of integrated sensor systems that collect data and interpret signals in real time, and digital twins that link patient-specific data with computational models to simulate disease progression or treatment response.
Palombo emphasizes the importance of trustworthy AI: methods that don’t just provide an output, but are explainable, robust and explicitly handle uncertainty. This is a direction seen in the general field of AI, but is especially important within healthcare. He also cites the increasing momentum around green healthcare and green AI, with personalized health technologies designed to reduce waste and minimize energy consumption, and clinical models developed with far greater computational efficiency.
“It would be fantastic to have an AI model running directly on the sensor, for example, and this ties in with the environmental impact of AI,” he explains. “If we keep AI small and manageable, then it pollutes less, is more affordable for everybody and can be deployed on small, lightweight devices.”
A community focal point
Looking ahead, Palombo hopes that MSI will become a leading platform for interdisciplinary innovation in personalized health, and the routine home for publishing major advances in sensing, imaging, modelling and trustworthy AI. “Over time, I’d like the journal to build depth in core areas, while also actively shaping emerging directions such as digital twins, uncertainty-aware and explainable AI, multimodal integration and technologies that are genuinely deployable in clinical workflows.”
“Currently, the fields of sensor engineering and clinical medicine often run on parallel tracks. My hope is that this journal will force these tracks to converge over time,” adds Luo. “I see the journal fostering a new language where chemists, physicists, engineers and doctors can understand each other by publishing papers in MSI.”
- Medical Sensors & Imaging is fully open for submissions, with the first issue expected to publish in Q2/Q3 of this year. During the launch phase, IOPP is covering the article processing charge (APC) for all accepted papers, enabling early contributors to publish at no cost while helping the journal establish a strong foundation of high-quality inaugural content. Beyond this period, many authors will benefit from support through IOPP’s transformative agreements, while others may be eligible for APC waivers and discounts.
The post New journal aims to advance the interdisciplinary field of personalized health appeared first on Physics World.
Olympian Eileen Gu rules the piste with physics and international relations
There is lots of classical mechanics in freestyle skiing
The post Olympian Eileen Gu rules the piste with physics and international relations appeared first on Physics World.
Here at Physics World we are always on the lookout for physicists with extraordinary talents outside of science. In 2023, for example, we were in awe of Harvard University’s Jenny Hoffman, who ran across the US in 47 days, 12 hours and 35 minutes – shattering the previous record by one week.
Now, coverage of the Winter Olympics in Italy has revealed that the Chinese freestyle skier Eileen Gu had studied physics at Stanford University. The most decorated female Olympic freestyle skier in history, US-born Gu bagged two gold medals and a silver at the 2022 Beijing games and added three silvers at Milano Cortina.
Gu has subsequently switched majors to international relations at Stanford, but we can still celebrate her as an honorary physicist.
Physics-rich event
Indeed, freestyle skiing is quite possibly the most physics-rich of all Olympic events. Athletes must consider friction, gravity and the conservation of momentum and angular momentum to perfect their skiing.
Now, I’m not suggesting that studying free-body diagrams of freestyle manoeuvres is essential for Olympic success, but I live in hope that an understanding of classical mechanics can improve one’s skiing. (I’m not sure why I believe this, because a PhD and decades of writing about physics certainly haven’t improved my skiing!)
As well as being lauded for her prowess on the snow, Gu has found herself at the centre of an international furore regarding her choice of competing for China rather than for the US. So, international relations combined with physics seems like a very good course of study!
- Article has been updated to include Gu’s third silver medal at Milano Cortina.
The post Olympian Eileen Gu rules the piste with physics and international relations appeared first on Physics World.
Wobbling gyroscopes could harvest energy from ocean waves
Design can be tuned to work at a wide range of wave frequencies
The post Wobbling gyroscopes could harvest energy from ocean waves appeared first on Physics World.
A new way of extracting energy from ocean waves has been proposed by a researcher in Japan. The system couples a gyroscope to an electrical generator and could be fine tuned to extract energy from a wide range of wave conditions. A prototype of the design is currently being built for testing in a wave tank. If successful, the system could be used to generate electricity onboard ships.
Ocean waves contain huge amounts of energy and humans have tried to harness this energy for centuries. But, despite the development of myriad technologies and a number of trials, the widespread commercial conversion of wave energy remains an elusive goal. One important problem is that most generation schemes only work within a narrow range of wave conditions – and the ocean can be a very messy place.
Now, Takahito Iida at the University of Osaka has proposed a new energy-harvesting technology that uses a gyroscopic flywheel system that can be tuned to absorb energy efficiently over a broad range of wave frequencies.
“Wave energy devices often struggle because ocean conditions are constantly changing,” says Iida. “However, a gyroscopic system can be controlled in a way that maintains high energy absorption, even as wave frequencies vary.”
Wobbling top
At the heart of the technology is gyroscopic precession, whereby a torque on a rotating object causes the object’s axis of rotation to trace out a circle. This is familiar to anyone who has played with a spinning top, which will wobble (precess) when perturbed.
Iida’s device is called a gyroscopic wave energy converter and comprises a spinning flywheel mounted on a floating platform. On calm seas, the gyroscope’s axis of rotation points in a fixed direction thanks to the conservation of angular momentum. However, waves will cause the platform to pitch from side to side, exerting torques on the gyroscope and causing it to precess. It is this precession that drives a generator to deliver electrical power.
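For reference, the textbook precession rate of a spinning flywheel under an applied torque is (a general relation, not a design figure from Iida’s study):

$$\Omega_{\mathrm{prec}} = \frac{\tau}{I\,\omega_{\mathrm{spin}}},$$

where τ is the wave-induced torque, I the flywheel’s moment of inertia and ω_spin its spin rate – so a heavier or faster-spinning flywheel precesses more slowly for the same wave torque.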
To design the system, Iida used linear wave theory to model the coupled interactions between waves, the platform, the gyroscope and the generator. This allowed him to devise a scheme for tuning the gyroscope frequency and generator parameters so that an energy conversion efficiency of 50% is achieved for a variety of wave conditions.
The effect of the generator was modelled as a spring-damper. This is a system that responds to a torque by storing and then returning some energy to the gyroscope (the spring), and removing some energy by converting it to electricity (the damper). Iida discovered that a maximum conversion of 50% occurs when the spring coefficient of the generator is adjusted such that the gyroscope’s resonant frequency matches the resonant frequency of the floating platform.
Fundamental constraint
Iida explains that 50% is the maximum efficiency that can be achieved. “This efficiency limit is a fundamental constraint in wave energy theory. What is exciting is that we now know that it can be reached across broadband frequencies, not just at a single resonant condition.”
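One standard way a 50% figure arises in wave-energy theory is through impedance matching in a single resonant mode: at the optimum generator damping, exactly half of the power removed from the wave reaches the generator, with the rest re-radiated. The sketch below illustrates that result for a toy single-mode absorber at resonance – it is not Iida’s coupled gyroscope–platform model, and the damping values are arbitrary:

```python
import numpy as np

# Toy single-mode absorber driven at resonance. The wave exerts a force of
# amplitude F; c_rad is the radiation damping (energy re-radiated as waves)
# and c_pto is the generator ("power take-off") damping. Illustrative values only.
F, c_rad = 1.0, 1.0
c_pto = np.linspace(0.01, 10.0, 1000)

v = F / (c_rad + c_pto)            # velocity amplitude at resonance
p_absorbed = 0.5 * F * v           # mean power removed from the wave
p_electric = 0.5 * c_pto * v**2    # mean power reaching the generator

best = np.argmax(p_electric)       # damping that maximizes electrical output
print(f"optimum c_pto/c_rad = {c_pto[best] / c_rad:.2f}")                    # ~1.0
print(f"converted fraction at optimum = "
      f"{p_electric[best] / p_absorbed[best]:.2f}")                          # ~0.5
```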
Iida tells Physics World that a small prototype (approximately 50 cm³ in size) is being built and will be tested in a 100 m-long tank.
The next step will be the development of a system with a generating capacity of about 5 kW. Iida says that the ultimate goal is a 300 kW generator.
Iida also explains that the gyroscopic wave energy converter is designed to operate untethered to the seabed. As a result he says the system would be ideal for use as an auxiliary power system for a ship. “The target output of 300 kW is based on the assumed auxiliary power demand of a typical commercial vessel,” says Iida.
The research is described in the Journal of Fluid Mechanics.
The post Wobbling gyroscopes could harvest energy from ocean waves appeared first on Physics World.
World’s smallest QR code paves the way for ultralong-life data storage
Tiny QR code etched on ceramic sets the Guinness World Record as the world’s smallest
The post World’s smallest QR code paves the way for ultralong-life data storage appeared first on Physics World.
A team headed up at TU Wien in Austria has set the Guinness World Record for creating the world’s smallest QR code. Working with industry partner Cerabyte, the researchers produced a stable and repeatedly readable QR code with an area of just 1.977 µm². When read out – using an electron microscope, as its structure is too fine to be seen with a standard optical microscope – the QR code links to a scientific webpage at TU Wien.
But this wasn’t just a ploy to get into the record books: the QR code was created as part of the team’s research into ceramic data storage materials. Unlike conventional magnetic or electronic data storage media, which degrade within decades, ceramic-based storage is designed to withstand extreme temperatures, radiation, chemical corrosion and mechanical damage.
As such, information stored in ceramic materials could endure for centuries, or even millennia. And in contrast to today’s data centres, ceramics preserve stored information without any energy input and without requiring cooling.

To create these ultralong-life data storage systems, the researchers use focused ion beams to mill the QR code into a thin film of chromium nitride, a durable ceramic often used to coat high-performance cutting tools. As each individual pixel is just 49 nm in size, roughly 10 times smaller than the wavelength of visible light, the code cannot be imaged using visible light. But when examined with an electron microscope, the QR code could indeed be read out reliably.
After the writing process, the entire stack of ceramic films is subjected to extreme conditions, such as high temperatures, corrosive environments and mechanical stress, to evaluate the material’s long-term durability and readout stability.
Pushing storage to its limits
Creating a “tiny QR code” was not the team’s initial goal, but emerged as a natural outcome of pushing this storage technology to its limits, says Paul Mayrhofer from TU Wien’s Institute of Materials Science and Technology.
“During a discussion with one of my PhD students, Erwin Peck, we realised that the writing procedure we had developed already produced features smaller than what had previously been reported for QR codes,” he explains. “This sparked the idea: if we can reliably write structures at that scale, why not intentionally create the smallest QR code possible?”
To claim its place in the record books, the QR code was successfully milled and read out in the presence of witnesses and its size independently verified using calibrated scanning electron microscopy at the University of Vienna. It is now officially recognized by Guinness as the world’s smallest QR code, and is roughly one third the size of the previous record holder.
Mayrhofer points out that the storage capacity of the ceramic data storage technology far surpasses that of a single QR code. “Based on current estimates, a cartridge of 100 x 100 x 20 mm with ceramic storage medium could potentially store on the order of 290 terabytes of raw data,” he says.
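A quick back-of-the-envelope check, assuming one bit per 49 nm pixel across the full 100 × 100 mm face (an illustrative estimate only – the actual encoding, pixel pitch and number of ceramic layers are not given here):

```python
# Back-of-envelope check on the quoted cartridge capacity. Purely illustrative:
# it assumes one bit per 49 nm pixel and ignores error correction, addressing
# and the real layer structure, none of which are specified here.
pixel = 49e-9                              # m, pixel size of the record QR code
layer_area = 0.1 * 0.1                     # m^2, one 100 mm x 100 mm face
bits_per_layer = layer_area / pixel**2     # ~4e12 bits if every pixel stores a bit
tb_per_layer = bits_per_layer / 8 / 1e12   # terabytes per ceramic layer
layers_needed = 290 / tb_per_layer         # layers to reach the quoted 290 TB
print(f"~{tb_per_layer:.2f} TB per layer, so roughly {layers_needed:.0f} layers")
print(f"average layer pitch: {20e-3 / layers_needed * 1e6:.0f} um in a 20 mm stack")
```

On these assumptions, each ceramic layer holds about half a terabyte, so a few hundred stacked films within the 20 mm cartridge thickness would reach the quoted capacity.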
As well as offering this impressive raw capacity, for practical applications it’s also crucial that the ceramic storage offers high writing speed, which determines how efficiently large datasets can be stored, and low energy consumption during writing, which will influence the potential for scalability and sustainability. The researchers are currently working to optimize both of these parameters.
“Humanity has preserved information for millennia when carved in stone, yet much of today’s digital information risks being lost within decades,” project leader Alexander Kirnbauer tells Physics World. “Our long-term goal is to create an ultrastable, sustainable data storage technology capable of preserving information for extremely long times – potentially thousands to millions of years. In essence, we want to develop a form of storage that ensures the knowledge of our digital age does not disappear over time.”
The post World’s smallest QR code paves the way for ultralong-life data storage appeared first on Physics World.
Quantum Systems Accelerator focuses on technologies for computing
Bert de Jong of Lawrence Berkeley National Lab is our podcast guest
The post Quantum Systems Accelerator focuses on technologies for computing appeared first on Physics World.
Developing practical technologies for quantum information systems requires the cooperation of academic researchers, national laboratories and industry. That is the mission of the Quantum Systems Accelerator (QSA), which is based at the Lawrence Berkeley National Laboratory in the US.
The QSA’s director Bert de Jong is my guest in this episode of the Physics World Weekly podcast. His academic research focuses on computational chemistry and he explains how this led him to realise that quantum phenomena can be used to develop technologies for solving scientific problems.
In our conversation, de Jong explains why the QSA is developing a range of qubit platforms − including neutral atoms, trapped ions, and superconducting qubits – rather than focusing on a single architecture. He champions the co-development of quantum hardware and software to ensure that quantum computing is effective at solving a wide range of problems from particle physics to chemistry.
We also chat about the QSA’s strong links to industry and de Jong reveals his wish list of scientific problems that he would solve if he had access today to a powerful quantum computer.
The post Quantum Systems Accelerator focuses on technologies for computing appeared first on Physics World.
Metallic material breaks 100-year thermal conductivity record
Transition metal nitride conducts heat nearly three times better than copper
The post Metallic material breaks 100-year thermal conductivity record appeared first on Physics World.
A newly identified metallic material that conducts heat nearly three times better than copper could redefine thermal management in electronics. The material, which is known as theta-phase tantalum nitride (θ-TaN), has a thermal conductivity comparable to low-grade diamond, and its discoverers at the University of California Los Angeles (UCLA), US say it breaks a record on heat transport in metals that had held for more than 100 years.
Semiconductors and insulators mainly carry heat via vibrations, or phonons, in their crystalline lattices. A notable example is boron arsenide, a semiconductor that the UCLA researchers previously identified as also having a high thermal conductivity. Conventional metals, in contrast, mainly transport heat via the flow of electrons, which are strongly scattered by lattice vibrations.
Heat transport in θ-TaN combines aspects of both mechanisms. Although the material retains a metal-like electronic structure, study leader Yongjie Hu explains that its heat transport is phonon-dominated. Hu and his UCLA colleagues attribute this behaviour to the material’s unusual crystal structure, which features tantalum atoms interspersed with nitrogen atoms in a hexagonal pattern. Such an arrangement suppresses both electron–phonon and phonon–phonon scattering, they say.
Century-old upper limit for metallic heat transport
Materials with high thermal conductivity are vital in electronic devices because they remove excess heat that would otherwise impair the devices’ performance. Among metals, copper has long been the material of choice for thermal management thanks to its relative abundance and its thermal conductivity of around 400 Wm−1 K−1, which is higher than any other pure metal apart from silver.
Recent theoretical studies, however, had suggested that some metallic-like materials could break this record. θ-TaN, a metastable transition metal nitride, was among the most promising contenders, but it proved hard to study because high-quality samples were previously unavailable.
Highest thermal conductivity reported for a metallic material to date
Hu and colleagues overcame this problem using a flux-assisted metathesis reaction. This technique removed the need for the high pressures and temperatures required to make pure samples of the material using conventional techniques.
The team’s high-resolution structural measurements revealed that the as-synthesized θ-TaN crystals had smooth, clean surfaces and ranged in size from 10 to 100 μm. The researchers also used a variety of techniques, including electron diffraction, Raman spectroscopy, single-crystal X-ray diffraction, high-resolution transmission electron microscopy and electron energy loss spectroscopy to confirm that the samples contained single crystals.
The researchers then turned their attention to measuring the thermal conductivity of the θ-TaN crystals. They did this using an ultrafast optical pump-probe technique based on time-domain thermoreflectance, a standard approach that had already been used to measure the thermal conductivity of high-thermal-conductivity materials such as diamond, boron phosphide, boron nitride and metals.
Hu and colleagues made their measurements at temperatures between 150 and 600 K. At room temperature, the thermal conductivity of the θ-TaN crystals was 1100 Wm−1 K−1. “This represents the highest value reported for any metallic materials to date,” Hu says.
The researchers also found that the thermal conductivity remained uniformly high across an entire crystal. Hu says this reflects the samples’ high crystallinity, and it also confirms that the measured ultrahigh thermal conductivity originates from intrinsic lattice behaviour, in agreement with first-principles predictions.
Another interesting finding is that while θ-TaN has a metallic electronic structure, its thermal conductivity decreased with increasing temperature. This behaviour contrasts with the weak temperature dependence typically observed in conventional metals, in which heat transport is dominated by electrons and is limited by electron-phonon interactions.
Applications in technologies limited by heat
As well as cooling microelectronics, the researchers say the discovery could have applications in other technologies that are increasingly limited by heat. These include AI data centres, aerospace systems and emerging quantum platforms.
The UCLA team, which reports its work in Science, now plans to explore scalable ways of integrating θ-TaN into device-relevant platforms, including thin films and interfaces for microelectronics. They also aim to identify other candidate materials with lattice and electronic dynamics that could allow for similarly efficient heat transport.
The post Metallic material breaks 100-year thermal conductivity record appeared first on Physics World.
Nickel-enhanced biomaterial becomes stronger when wet
A biomaterial that increases its strength when in contact with water could provide a biodegradable alternative to plastics
The post Nickel-enhanced biomaterial becomes stronger when wet appeared first on Physics World.
Synthetic materials such as plastics are designed to be durable and water resistant. But the processing required to achieve these properties results in a lack of biodegradability, leading to an accumulation of plastic pollution that affects both the environment and human health. Researchers at the Institute for Bioengineering of Catalonia (IBEC) are developing a possible replacement for plastics: a novel biomaterial based on chitin, the second most abundant natural polymer on Earth.
“Every year, nature produces on the order of 10¹¹ tonnes of chitin, roughly equivalent to more than three centuries of today’s global plastic production,” says study leader Javier G Fernández. “Chitin and [its derivative] chitosan are the ultimate natural engineering polymers. In nature, variations of this material produce stiff insect wings enabling flight, elastic joints enabling extraordinary jumping in grasshoppers, and armour-like protective exoskeletons in lobsters or clams.”
But while biomaterials provide a more environmentally friendly alternative to conventional plastics, most biological materials weaken when exposed to water. In this latest work, Fernández and first author Akshayakumar Kompa took inspiration from nature and developed a new biomaterial that increases its strength when in contact with water, while maintaining its natural biodegradability.
Metal matters
In the exoskeletons of insects and crustaceans, chitin is secreted in a gel-like form into water and then transitions into a hard structure. Following a chance observation that removing zinc from a sandworm’s fangs caused them to soften in water, Kompa and Fernández investigated whether adding a different transition metal, nickel, to chitosan could have the opposite effect.
By mixing nickel chloride solution (at concentrations from 0.6 to 1.4 M) with dispersions of chitosan extracted from discarded shrimp shells, the researchers entrapped varying amounts of nickel within the chitosan structure. Fourier-transform infrared spectra of resulting chitosan films revealed the presence of nickel ions, which form weak hydrogen bonds with water molecules and increase the biomaterial’s capacity to bond with water.
“In our films, water molecules form reversible bridges between polymer chains through weak interactions that can rapidly break and reform under load,” Fernández explains. “That fast reconfiguration is what gives the material high strength and toughness under wet conditions: essentially a built-in, stress-activated ‘self-rearrangement’ mechanism. Nickel ions act as stabilizing anchors for these water-mediated bridges, enabling more and longer-range connections and making inter-chain connectivity more robust”.
The nickel-doped chitosan samples had tensile strengths of between 30 and 40 MPa, similar to that of standard plastics. Adding low concentrations of nickel did not significantly impact the mechanical properties of the films. Concentrations of 1 M or more, however, preserved the material’s strength while increasing its toughness (the ability to stretch before breaking) – a key goal in the field of structural materials and a feature unique to biological composites.

Upon immersion in water, the nickel-doped films exhibited greater tensile strength, increasing from 36.12±2.21 MPa when dry to 53.01±1.68 MPa, moving into the range of higher-performance engineering plastics. In particular, samples created from an optimal 0.8 M nickel concentration almost doubled in strength when wet (and were used for the remainder of the team’s experiments).
Scaling production
The manufacturing process involves an initial immersion in water, followed by drying for 24 h and then re-wetting. During the first immersion, any nickel ions that are not incorporated into the material’s functional bridging network are released into the water, ensuring that nickel is present only where it is structurally useful.
The researchers developed a zero-waste production cycle in which this water is used as a primary component for fabricating the next object. “The expelled nickel is recovered and used to make the next batch of material, so the process operates at essentially 100% nickel utilization across batches,” says Fernández.

They used this process to produce various nickel-doped chitosan objects, including watertight containers and a 1 m² film that could support a 20 kg weight after 24 h of water immersion. They also created a 244 × 122 cm film with similar mechanical behaviour to the smaller samples, demonstrating the potential for rapid scale-up to ecologically relevant sizes. A standard half-life test revealed that after approximately four months buried in garden soil, half of the material had biodegraded.
The researchers suggest that the biomaterial’s first real-world use may be in sectors such as agriculture and fishing that require strong, water-compatible and ultimately biodegradable materials, likely for packaging, coatings and other water-exposed applications. Both nickel and chitosan are already employed within biomedicine, making medicine another possible target, although any new medical product will require additional regulatory and performance validation.
The team is currently setting up a 1000 m² lab facility in Barcelona, scheduled to open in 2028, for academia–industry collaborations in sustainable bioengineering research. Fernández suggests that we are moving towards a “biomaterial age”, defined by the ability to “control, integrate, and broadly use biomaterials and biological principles within engineering applications”.
“Over the last 20 years, working on bioinspired manufacturing, we have been able to produce the largest bioprinted objects in the world, demonstrated pathways for resource-secure and sustainable production in urban environments, and even explored how these approaches can support interplanetary colonization,” he tells Physics World. “Now we are achieving material properties that were considered out of reach by designing the material to work with its environment, rather than isolating itself from it.”
The researchers report their findings in Nature Communications.
The post Nickel-enhanced biomaterial becomes stronger when wet appeared first on Physics World.
2D materials help spacecraft electronics resist radiation damage
Transistors based on atomically thin transition-metal dichalcogenides appear particularly robust
The post 2D materials help spacecraft electronics resist radiation damage appeared first on Physics World.
Electronics made from certain atomically thin materials can survive harsh radiation environments up to 100 times longer than traditional silicon-based devices. This finding, which comes from researchers at Fudan University in Shanghai, China, could bring significant benefits for satellites and other spacecraft, which are prone to damage from intense cosmic radiation.
Cosmic radiation consists of a mixture of heavy ions and cosmic rays, which are high-energy protons, electrons and atomic nuclei. The Earth’s magnetic field protects us from 99.9% of this ionizing radiation, and our atmosphere significantly attenuates the rest. Space-based electronics, however, have no such protection, and this radiation can damage or even destroy them.
Adding radiation shielding to spacecraft mitigates these harmful effects, but the extra weight and power consumption increases the spacecraft’s costs. “This conflicts with the requirements of future spacecraft, which call for lightweight and cost-effective architectures,” says team leader Peng Zhou, a physicist in Fudan’s College of Integrated Circuits and Micro-Nano Electronics. “Implementing radiation tolerant electronic circuits is therefore an important challenge and if we can find materials that are intrinsically robust to this radiation, we could incorporate these directly into the design of onboard electronic circuits.”
Promising transition-metal dichalcogenides
Previous research had suggested that 2D materials might fit the bill, with transistors based on transition-metal dichalcogenides appearing particularly promising. Within this family of materials, 2D molybdenum disulphide (MoS2) proved especially robust to irradiation-induced defects, and Zhou points out that its electrical, mechanical and thermal properties are also highly attractive for space applications.
The studies that revealed these advantages were, however, largely limited to simulations and ground-based experiments. This meant they were unable to fully replicate the complex and dynamic radiation fields such circuits would encounter under real space conditions.
Better than NMOS transistors
In their work, Zhou and colleagues set out to fill this gap. After growing monolayer 2D MoS2 using chemical vapour deposition, they used this material to fabricate field-effect transistors. They then exposed these transistors to 10 Mrad of gamma-ray irradiation and looked for changes to their structure using several techniques, including cross-sectional transmission electron microscopy (TEM) imaging and corresponding energy-dispersive spectroscopy (EDS) mapping.
These measurements indicated that the 2D MoS2 in the transistors was about 0.7 nm thick (typical for a monolayer structure) and showed no obvious signs of defects or damage. Subsequent Raman characterization on five sites within the MoS2 film confirmed the devices’ structural integrity.
The researchers then turned their attention to the transistors’ electrical properties. They found that even after irradiation, the transistors’ on-off ratios remained ultra-high, at about 10⁸. They note that this is considerably better than a similarly sized Si N-channel metal–oxide–semiconductor (NMOS) transistor fabricated through a CMOS process, for which the on-off ratio decreased by a factor of more than 4000 after the same 10 Mrad irradiation.
The team also found that the MoS2 system consumes only about 49.9 mW per channel, making its power requirement at least five times lower than that of the NMOS device. This is important owing to the strict energy limitations and stringent power budgets of spacecraft, Zhou says.
Surviving the space environment
In their final experiment, the researchers tested their MoS2 structures on a spacecraft orbiting at an altitude of 517 km, similar to the low-Earth orbit of many communication satellites. These tests showed that the bit-error rate in data transmitted from the structures remained below 10⁻⁸ even after nine months of operation, which Zhou says indicates significant radiation tolerance and long-term stability. Indeed, based on test data, electronic devices made from these 2D materials could operate for 271 years in geosynchronous orbit – 100 times longer than conventional silicon electronics.
“The discovery of intrinsic radiation tolerance in atomically thin 2D materials, and the successful on-orbit validation of the atomic-layer semiconductor-based spaceborne radio-frequency communication system have opened a uniquely promising pathway for space electronics leveraging 2D materials,” Zhou says. “And their exceptionally long operational lifetimes and ultra-low power consumption establishes the unique competitiveness of 2D electronic systems in frontier space missions, such as deep-space exploration, high-Earth-orbit satellites and even interplanetary communications.”
The researchers are now working to optimize these structures by employing advanced fabrication processes and circuit designs. Their goal is to improve certain key performance parameters of spaceborne radio-frequency chips employed in inter-satellite and satellite-to-ground communications. “We also plan to develop an atomic-layer semiconductor-based radiation-tolerant computing platform, providing core technological support for future orbital data centres, highly autonomous satellites and deep-space probes,” Zhou tells Physics World.
The researchers describe their work in Nature.
The post 2D materials help spacecraft electronics resist radiation damage appeared first on Physics World.
Rethinking how quantum phases change
A new framework explains direct transitions between ordered states, offering insights into real quantum materials
The post Rethinking how quantum phases change appeared first on Physics World.
In this work, the researchers theoretically explore how quantum materials can transition continuously from one ordered state to another, for example, from a magnetic phase to a phase with crystalline or orientational order. Traditionally, such order‑to‑order transitions were thought to require fractionalisation, where particles effectively split into exotic components. Here, the team identifies a new route that avoids this complexity entirely.
Their mechanism relies on two renormalisation‑group fixed points in the system colliding and annihilating, which reshapes the flow of the system and removes the usual disordered phase. A separate critical fixed point, unaffected by this collision, then becomes the new quantum critical point linking the two ordered phases. This allows for a continuous, seamless transition without invoking fractionalised quasiparticles.
The authors show that this behaviour could occur in several real or realistic systems, including rare‑earth pyrochlore iridates, kagome quantum magnets, quantum impurity models and even certain versions of quantum chromodynamics. A striking prediction of the mechanism is a strong asymmetry in energy scales on the two sides of the transition, such as a much lower critical temperature and a smaller order parameter where the order emerges from fixed‑point annihilation.
This work reveals a previously unrecognised kind of quantum phase transition, expands the landscape beyond the usual Landau-Ginzburg-Wilson framework, which is the standard theory for phase transitions, and offers new ways to understand and test the behaviour of complex quantum systems.
Read the full article
Continuous order-to-order quantum phase transitions from fixed-point annihilation
David J Moser and Lukas Janssen 2025 Rep. Prog. Phys. 88 098001
Do you want to learn more about this topic?
Dynamical quantum phase transitions: a review by Markus Heyl (2018)
The post Rethinking how quantum phases change appeared first on Physics World.
How a single parameter reveals the hidden memory of glass
Glass may look just like a normal solid, but at the microscopic level it behaves in surprisingly complex ways
The post How a single parameter reveals the hidden memory of glass appeared first on Physics World.
Unlike crystals, whose atoms arrange themselves in tidy, repeating patterns, glass is a non‑equilibrium material. A glass is formed when a liquid is cooled so quickly that its atoms never settle into a regular pattern, instead forming a disordered, unstructured arrangement.
In this process, as temperature decreases, atoms move more and more slowly. Near a certain temperature – the glass transition temperature – the atoms move so slowly that the material effectively stops behaving like a liquid and becomes a glass.
This isn’t a sharp, well‑defined transition like water turning to ice. Instead, it’s a gradual slowdown: the structure appears solid long before the atoms would theoretically cease to rearrange.
This slowdown can be extrapolated to predict the temperature at which the material’s internal rearrangement would take infinitely long. This hypothetical point is known as the ideal glass transition. It cannot be reached in practice, but it provides an important reference for understanding how glasses behave.
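A common way to perform this extrapolation is to fit relaxation-time data with the Vogel–Fulcher–Tammann (VFT) form τ = τ₀ exp[B/(T − T₀)] and read off the divergence temperature T₀. The sketch below does this on synthetic data with an assumed T₀ of 250 K – it is purely illustrative and is not the analysis from the paper discussed here:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_tau_vft(T, log_tau0, B, T0):
    # log10 of the VFT relaxation time tau = tau0 * exp(B / (T - T0))
    return log_tau0 + B / (np.log(10) * (T - T0))

# Synthetic "measurements" generated with an assumed divergence temperature of 250 K
T = np.linspace(330, 430, 11)                       # temperatures above T0 (K)
true_params = (-13.0, 2500.0, 250.0)                # log10(tau0/s), B (K), T0 (K)
rng = np.random.default_rng(1)
log_tau = log_tau_vft(T, *true_params) + rng.normal(0, 0.05, T.size)

# Fit and extrapolate: T0 is the temperature at which tau would diverge
popt, _ = curve_fit(log_tau_vft, T, log_tau, p0=(-12.0, 2000.0, 240.0))
print(f"extrapolated T0 = {popt[2]:.1f} K (value used to generate the data: 250 K)")
```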
Despite years of research, it’s still not clear exactly how glass properties depend on how it was made – how fast it was cooled, how long it aged, or how it was mechanically disturbed. Each preparation route seems to give slightly different behaviour.
For decades, scientists have struggled to find a single measure that captures all these effects. How do you describe, in one number, how disordered a glass is?
Recent research has emerged that provides a compelling answer: a configurational distance metric. This is a way of measuring how far the internal structure of a piece of glass is from a well‑defined reference state.
When the researchers used this metric, they could neatly collapse data from many different experiments onto a single curve. In other words, they found a single physical parameter controlling the behaviour.
This worked across a wide range of conditions: glasses cooled at different rates, allowed to age for different times, or tested under different strengths and durations of mechanical probing.
As long as the experiments were conducted above the ideal glass transition temperature, the metric provided a unified description of how the material dissipates energy.
This insight is significant. It suggests that even though glass never fully reaches equilibrium, its behaviour is still governed by how close it is to this idealised transition point. In other words, the concept of the kinetic ideal glass transition isn’t just theoretical, it leaves a measurable imprint on real materials.
This research offers a powerful new way to understand and predict the mechanical behaviour of glasses in everyday technologies, from smartphone screens to industrial coatings.
Read the full article
Order parameter for non-equilibrium dissipation and ideal glass
Junying Jiang, Liang Gao and Hai-Bin Yu, 2025 Rep. Prog. Phys. 88 118002
The post How a single parameter reveals the hidden memory of glass appeared first on Physics World.
Challenges in CO2 reduction selectivity measurements by hydrodynamic methods
Does electrolyte purity really matter in CO2 electroreduction research? Quite a lot, if you’re doing rotating ring-disk electrode studies. Learn more in this webinar
The post Challenges in CO2 reduction selectivity measurements by hydrodynamic methods appeared first on Physics World.

Electrochemical CO2 reduction converts CO2 to higher-value products using an electrocatalyst and could pave the way for electrification of the chemical industry. A key challenge for CO2 reduction is its poor selectivity (faradaic efficiency) due to competition with the hydrogen evolution reaction in aqueous electrolytes. Rotating ring-disk electrode (RRDE) experiments have become a popular method to quantify faradaic efficiencies, especially for gold electrocatalysts. However, such measurements suffer from poor inter-laboratory reproducibility. This work identifies the causes of variability in RRDE selectivity measurements by comparing protocols with different electrochemical methods, reagent purities and glassware cleaning procedures. Electroplating of electrolyte impurities onto the disk and ring surfaces was identified as a major contributor to electrocatalyst deactivation. These results highlight the need for standardized and cross-laboratory validation of CO2RR selectivity measurements using RRDE. Researchers implementing this technique for CO2RR selectivity measurements need to be cognizant of electrode deactivation and its potential impacts on faradaic efficiencies and the overall conclusions of their work.

Maria Kelly is a Jill Hruby Postdoctoral Fellow at Sandia National Laboratories. She earned her PhD in Professor Wilson Smith’s research group at the University of Colorado Boulder and the National Renewable Energy Laboratory. Her doctoral work focused on characterization of carbon dioxide conversion interfaces using analytical electrochemical and in situ scanning probe methods. Her research interests broadly encompass advancing experimental measurement techniques to investigate the near-electrode environment during electrochemical reactions.

The post Challenges in CO2 reduction selectivity measurements by hydrodynamic methods appeared first on Physics World.
Time crystal emerges in acoustic tweezers
System could shed light on emergent periodic phenomena in biological systems
The post Time crystal emerges in acoustic tweezers appeared first on Physics World.

Pairs of nonidentical particles trapped in adjacent nodes of a standing wave can harvest energy from the wave and spontaneously begin to oscillate, researchers in the US have shown. What is more, these interactions appear to violate Newton’s third law. The researchers believe their system, which is a simple example of a classical time crystal, could offer an easy way to measure mass with high precision. It might also, they hope, provide insights into emergent periodic phenomena in nature.
Acoustic tweezers use sound waves to create a potential-energy well that can hold an object in place – they are the acoustic analogue of optical tweezers. In the case of a single trapped object, this can be treated as a dissipationless process, in which the particle neither gains nor loses energy from the trapping wave.
In the new work, David Grier of New York University, together with graduate student Mia Morrell and undergraduate Leela Elliott, created an ultrasound standing wave in a cavity and levitated two objects (beads) in adjacent nodes.
“Ordinarily, you’d say ‘OK, they’re just going to sit there quietly and do nothing’,” says Grier. “And if the particles are identical, that’s exactly what’s going to happen.”
Breaking the law
If the two particles differ in size, material or any other property that affects acoustic scattering, they can spontaneously begin to oscillate. Even more surprisingly, the forces the particles exert on each other appear not to be equal and opposite, seemingly flouting Newton’s third law and the conservation of momentum.
“Who ordered that?”, muses Grier.
The periodic oscillation, which has a frequency parametrized only by the properties of the particles and independent of the trapping frequency, forms a very simple type of emergent active matter called a time crystal.
The trio analysed the behaviour of adjacent particles trapped in this manner using the laws of classical mechanics, and discovered an important subtlety had been missed. When identical particles are trapped in nearby nodes, they interact by scattering waves, but the interactions are equal and opposite and therefore cancel.
“The part that had never been worked out before in detail is what happens when you have two particles with different properties interacting with each other,” says Grier. “And if you put in the hard work, which Mia and Leela did, what you find is that to the first approximation there’s nothing out of the ordinary.” At the second order, however, the expansion contains a nonreciprocal term. “That opens up all sorts of opportunities for new physics, and one of the most striking and surprising outcomes is this time crystal.”
Stealing energy
This nonreciprocity arises because, if one particle is more strongly affected by the mutual scattering than the other, it can be pushed farther away from the node of the standing wave and pick up potential energy, which can then be transferred through scattering to the other particle. “The unbalanced forces give the levitated particles the opportunity to steal some energy from the wave that they ordinarily wouldn’t have had access to,” explains Grier. The wave also carries away the missing momentum, resolving the apparent violation of Newton’s third law.
If it were acting in isolation, this energy input would make the oscillations unstable and throw the particles out of the nodes. However, energy is removed by viscosity: “If everything is absolutely right, the rate at which the particles consume energy exactly balances the rate at which they lose energy to viscous drag, and if you get that perfect, delicious balance, then the particles can jiggle in place forever, taking the fuel from the wave and dumping it back into the system as heat.” This can be stable indefinitely.
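As a schematic illustration of this balance (our simplification, not the authors’ full calculation), the steady “jiggling” can be pictured as a limit cycle in which the average power delivered by the nonreciprocal force $F_{\mathrm{nr}}$ equals the average power lost to drag:

$$\langle F_{\mathrm{nr}}\,\dot{x}\rangle = \gamma\,\langle \dot{x}^{2}\rangle,$$

where $\dot{x}$ is the particle velocity and $\gamma$ is the viscous drag coefficient (for a small sphere at low Reynolds number, Stokes’ law gives $\gamma = 6\pi\eta a$, with $\eta$ the fluid viscosity and $a$ the particle radius). When this condition is met, the oscillation amplitude settles at a fixed value and the motion can continue indefinitely.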
The researchers have filed a patent application for the use of the system to measure particle masses with microgram-scale precision from the oscillation frequency. Beyond this, they hope the phenomenon will offer insights into emergent periodic phenomena across timescales in nature: “Your neurons fire at kilohertz, but the pacemaker in your heart hopefully goes about once per second,” explains Grier.
The research is described in Physical Review Letters.
“When I read this I got somehow surprised,” says Glauber Silva of the Federal University of Alagoas in Brazil. “The whole thing of how to get energy from the surrounding fields and produce motion of the coupled particles is something that the theoretical framework of this field didn’t spot before.”
“I’ve done some work in the past, both in simulations and in optical systems that are analogous to this, where similar things happen, but not nearly as well controlled as in this particular experiment,” says Dustin Kleckner of the University of California, Merced. He believes this will open up a variety of further questions: “What happens if you have more than two? What are the rules? How do we understand what’s going on and can we do more interesting things with it?” he says.
The post Time crystal emerges in acoustic tweezers appeared first on Physics World.
Giant barocaloric cooling effect offers a new route to refrigeration
Environmentally friendly dissolution-based method reduces the temperature of water by nearly 27 K in just 20 seconds
The post Giant barocaloric cooling effect offers a new route to refrigeration appeared first on Physics World.
A new cooling technique based on the principles of dissolution barocaloric cooling could provide an environmentally friendly alternative to existing refrigeration methods. With a cooling capacity of 67 J/g and an efficiency of nearly 77%, the method developed by researchers from the Institute of Metal Research of the Chinese Academy of Sciences can reduce the temperature of a sample by 27 K in just 20 seconds – far more than is possible with standard barocaloric materials.
Traditional refrigeration relies on vapour-compression cooling. This technology has been around since the 19th century and is based on a fluid changing phase. Typically, an expansion valve allows a liquid refrigerant to evaporate into a gas, absorbing heat from its surroundings as it does so. A compressor then forces the refrigerant back into the liquid state, releasing the heat.
While this process is effective, it consumes a lot of electricity, and there is little room left for improvement: after more than a century of refinement, the vapour-compression cycle is fast approaching the maximum efficiency set by the Carnot limit. The refrigerants involved are also often toxic, contributing to environmental damage.
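For refrigeration, the Carnot limit takes the form of a maximum coefficient of performance for a device pumping heat from a cold reservoir at temperature $T_c$ to a hot one at $T_h$:

$$\mathrm{COP_{Carnot}} = \frac{T_c}{T_h - T_c}.$$

For a domestic fridge operating between roughly 275 K inside and 300 K in the kitchen, this ideal value is about 11, and it is this thermodynamic ceiling that mature vapour-compression technology is now approaching.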
In recent years, researchers have been exploring caloric cooling as a possible alternative. Caloric cooling works by controlling the entropy, or disorder, within a material using magnetic or electric fields, mechanical forces or applied pressure. The last option, known as barocaloric cooling, is in some ways the most promising. However, most of the known barocaloric materials are solids, which suffer from poor heat transfer efficiency and limited cooling capacity. Transferring heat in and out of such materials is therefore slow.
A liquid system
The new technique overcomes this limitation thanks to a fundamental thermodynamic process called endothermic dissolution. The principle of endothermic dissolution is that when a salt dissolves in a solvent, some of the bonds in the solvent break. Breaking those bonds takes energy, and so the solvent cools down – sometimes dramatically.
In the new work, researchers led by metallurgist and materials scientist Bing Li discovered a way to reverse this process by applying pressure. They began by dissolving a salt, ammonium thiocyanate (NH4SCN), in water. When they applied pressure to the resulting solution, the salt precipitated out (an exothermic process) in line with Le Chatelier’s principle, which states that when a system in chemical equilibrium is disturbed, it will adjust itself to a new equilibrium by counteracting as far as possible the effect of the change.
When they then released the pressure, the salt re-dissolved almost immediately. This highly endothermic process absorbs a massive amount of heat, causing the temperature of the solution to drop by nearly 27 K at room temperature, and by up to 54 K at higher temperatures.
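As a rough sanity check (our own estimate, assuming the quoted cooling capacity of 67 J/g refers to a gram of solution and taking an effective specific heat of about 2.5 J g⁻¹ K⁻¹ for a concentrated aqueous salt solution), the expected temperature drop is

$$\Delta T \approx \frac{q}{c_p} \approx \frac{67\ \mathrm{J\,g^{-1}}}{2.5\ \mathrm{J\,g^{-1}\,K^{-1}}} \approx 27\ \mathrm{K},$$

in line with the measured drop at room temperature.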
A chaotropic salt
Li and colleagues did not choose NH4SCN by chance. The material is a chaotropic agent, meaning that it disrupts hydrogen bonding, and it is highly soluble in water, which helps to maximize the amount present in the solution during that part of the cooling cycle. It also has a large enthalpy of solution, meaning that the solution’s temperature drops dramatically when the salt dissolves. Finally, and most importantly, it is highly sensitive to applied pressures in the range of hundreds of megapascals, which is within the capacity of conventional hydraulic systems.
Li says that he and his colleagues’ approach, which they detail in Nature, could encourage other researchers to find similar techniques that likewise do not rely on phase transitions. As for applications, he notes that because aqueous NH4SCN barocaloric cooling works well at high temperatures, it could be suited to the demanding thermal management requirements of AI data centres. Other possibilities include air conditioning in domestic and industrial vehicles and buildings.
There are, however, some issues that need to be resolved before such cooling systems find their way onto the market. NH4SCN and similar salts are corrosive, which could damage refrigerator components. The high pressures required in the current system could also prove damaging over the long run, Li adds.
To address these and other drawbacks, the researchers now plan to study other such near-saturated solutions at the atomic level, with a particular focus on how they respond to pressure. “Such fundamental studies are vital if we are to optimize the performance of these fluids as refrigerants,” Li tells Physics World.
The post Giant barocaloric cooling effect offers a new route to refrigeration appeared first on Physics World.
The hidden footprint of hydrogen
Leaked hydrogen boosts methane’s lifetime, yet its overall impact remains small compared to other emissions
The post The hidden footprint of hydrogen appeared first on Physics World.
Hydrogen is considered a clean fuel because it produces water rather than carbon dioxide when burned, and it is seen as a promising route toward lower emissions. It is especially valuable for replacing fossil fuels in industrial processes that require extremely high temperatures and are difficult to electrify. Although hydrogen itself is not a greenhouse gas like carbon dioxide, methane, or nitrous oxide (gases that trap heat in the Earth’s atmosphere), it can still indirectly contribute to warming. Normally, hydroxyl radicals, which are highly reactive atmospheric molecules made of one oxygen and one hydrogen atom with an unpaired electron, break down methane into carbon dioxide and water. But when hydroxyl radicals react with hydrogen instead, fewer radicals are available to remove methane, allowing methane to persist longer in the atmosphere and increasing its warming effect.
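The competition can be summarized by the two reactions that consume hydroxyl (OH) radicals:

$$\mathrm{CH_4 + OH \rightarrow CH_3 + H_2O} \qquad\qquad \mathrm{H_2 + OH \rightarrow H + H_2O}$$

The first is the step that initiates methane’s removal from the atmosphere; the more leaked hydrogen there is to drive the second, the less OH remains for the first, and the longer methane survives.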
This study examines how hydrogen leakage in hydrogen-based energy systems could influence the climate. The researchers analysed 23 different US future scenarios, including some that eliminate fossil fuels entirely. They estimated how much hydrogen might leak in each scenario, compared those leaks to the remaining carbon dioxide and methane emissions, and calculated how much additional emissions reduction and/or carbon removal would be needed to offset the warming from hydrogen under low, medium and high leak rates, and over both short-term and long-term warming timescales.
They found that although hydrogen leaks do contribute to warming, their impact is much smaller than the warming from the remaining carbon dioxide and methane in all scenarios. Hydrogen’s warming effect appears much larger over a 20-year period because its short-lived chemical interactions amplify methane and ozone quickly, even though its long-term impact remains relatively modest. Only small increases in carbon dioxide removal or small reductions in other emissions are needed to offset the warming caused by hydrogen leaks. However, because estimates of hydrogen leakage rates vary widely in the scientific literature, improved measurement and monitoring are essential.
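As a minimal sketch of the bookkeeping involved (the leak rate below is a made-up figure and the global-warming-potential values are illustrative numbers from recent literature, not those used in the study), a hydrogen leak can be converted into CO2-equivalent emissions – and hence into the extra carbon removal needed to offset it – as follows:

```python
# Minimal sketch: convert an assumed hydrogen leak into CO2-equivalent
# emissions. GWP values are illustrative (roughly 37 over 20 years and
# 12 over 100 years in recent literature), not the study's own numbers.

def co2_equivalent(leak_tg_per_year: float, gwp: float) -> float:
    """CO2-equivalent emissions (Mt CO2-eq per year) for a given hydrogen leak."""
    return leak_tg_per_year * gwp   # 1 Tg = 1 Mt, so the conversion is direct

leak = 2.0  # hypothetical annual hydrogen leakage in Tg per year
for horizon, gwp in [("20-year", 37.0), ("100-year", 12.0)]:
    print(f"{horizon} horizon: ~{co2_equivalent(leak, gwp):.0f} Mt CO2-eq/yr to offset")
```

The contrast between the two horizons captures the article’s point: hydrogen’s indirect warming is front-loaded, so it looks much larger over 20 years than over a century.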
Read the full article
Estimating the climate impacts of hydrogen emissions in a net-zero US economy
Ansh N Nasta et al 2025 Prog. Energy 7 045001
Do you want to learn more about this topic?
Hydrogen storage in liquid hydrogen carriers: recent activities and new trends, Tolga Han Ulucan et al. (2023)
The post The hidden footprint of hydrogen appeared first on Physics World.
Transfer learning could help muon tomography identify illicit nuclear materials
Hidden coated materials could be detected using new technique
The post Transfer learning could help muon tomography identify illicit nuclear materials appeared first on Physics World.
Machine learning could help us use cosmic muons to peer inside large objects such as nuclear reactors. Developed by researchers in China, the technique is capable of identifying target materials such as uranium even if they are coated with other materials.
The muon is a subatomic particle that is essentially a heavier version of the electron. Huge numbers of cosmic muons are created in Earth’s atmosphere when cosmic rays collide with gas molecules. Thousands of cosmic muons per second rain down on every square metre of Earth’s surface and these particles can penetrate tens to hundreds of metres through solid materials.
As a result, cosmic muons are used to peer inside large objects such as nuclear reactors, volcanoes and ancient pyramids. This involves placing detectors next to an object and detecting muons that have passed through or scattered within the object. Detector data are then processed using a tomography algorithm to create a 3D image of the object’s interior.
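One widely used reconstruction method for scattering muon tomography – offered here as a generic illustration, not necessarily the algorithm used in this work – is the point-of-closest-approach (PoCA) approach: the incoming and outgoing tracks of each muon are extrapolated, the point where they pass closest to one another is taken as the scattering location, and the scattering angles accumulated in each voxel act as a proxy for the density (and atomic number) of the material there. A minimal sketch:

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between an incoming track (point p_in,
    direction d_in) and an outgoing track (p_out, d_out), plus the
    scattering angle between the two directions."""
    d1 = d_in / np.linalg.norm(d_in)
    d2 = d_out / np.linalg.norm(d_out)
    w0 = p_in - p_out
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:               # near-parallel tracks: no unique PoCA
        s, t = 0.0, (d / b if abs(b) > 1e-12 else 0.0)
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    midpoint = 0.5 * ((p_in + s * d1) + (p_out + t * d2))
    angle = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
    return midpoint, angle

# One synthetic event: a muon travelling downwards that scatters slightly
point, theta = poca(np.array([0.0, 0.0, 100.0]), np.array([0.0, 0.0, -1.0]),
                    np.array([1.0, 0.0, -100.0]), np.array([0.02, 0.0, -1.0]))
print(point, np.degrees(theta))          # scattering vertex and angle in degrees
```

Binning these angles into a 3D grid of voxels and mapping the mean (or variance) of the angle per voxel produces the tomographic image; high-atomic-number materials such as uranium show up as regions of anomalously large scattering.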
Illicit nuclear materials
Muons tend to scatter more from high-atomic-number materials, so the technique is particularly sensitive to the presence of materials such as uranium. As a result, it has been used to create systems for the detection of illicit nuclear materials hidden in freight containers.
Muon tomography is relatively straightforward when the object is of simple construction – such as a pyramid built of stone and containing voids. Producing useful images of more complex targets – such as a freight container full of unknown objects – is much more difficult. The conventional computational approach is to calculate the muon-scattering physics of many different materials and combine these data with muon-tracking algorithms. This, however, tends to require huge computational resources.
Supervised machine learning has been used to reduce the computational overhead, but this requires prior knowledge of the target materials – limiting efficacy when imaging unknown and concealed materials. What is more, many materials in complex objects are coated with other materials and these coatings can affect muon scattering.
Now, Liangwen Chen at the Institute of Modern Physics of the Chinese Academy of Sciences and colleagues have used a technique called transfer learning to improve cosmic muon tomography of objects that contain coated materials. The idea of transfer learning is to begin with knowledge of the muon-scattering parameters of bare, uncoated materials and use machine learning to predict the parameters of coated materials. Chen and colleagues believe that this is the first application of transfer learning to muon tomography.
Monte Carlo simulations
The team began by creating a database describing how cosmic muons interact with representative materials with a wide range of atomic numbers. This was done by using Geant4 to do Monte Carlo simulations of how muons interact as they pass through materials. Geant4 is the most recent incarnation of the GEANT series of computer simulations, which have been used for over 50 years to design particle detectors and interpret the data that they produce.
Chen and colleagues used Geant4 to calculate how muons are scattered within nine materials ranging from magnesium (atomic number 12) to uranium (atomic number 92). These included common elements such as aluminium, copper and iron. The geometry of the scattering involves incoming cosmic muons with energies of 1 GeV and incident angles that are typical of cosmic muons. After scattering from a material target, the simulation assumes that the muons travel through two successive detectors, which measure the scattering angles. Data were generated for bare targets of the nine materials, as well as the nine materials coated with aluminium and polyethylene. Each simulation involved 500,000 muons passing through a target.
These data were then sampled using an inverse cumulative distribution function, as well as integration and interpolation, to convert them into a form that is optimal for training a neural network.
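The article does not give implementation details, but inverse-transform sampling of a simulated scattering-angle histogram typically looks something like the sketch below (the binning, toy distribution and variable names are our assumptions):

```python
import numpy as np

def sample_angles(bin_edges, counts, n_samples, rng=None):
    """Draw scattering angles from a simulated histogram by inverting its
    cumulative distribution function and interpolating."""
    rng = rng or np.random.default_rng()
    cdf = np.cumsum(counts / counts.sum())       # normalized cumulative distribution
    u = rng.random(n_samples)                    # uniform samples in [0, 1)
    # Inverse CDF via interpolation: map each u back onto the angle axis
    return np.interp(u, np.concatenate(([0.0], cdf)), bin_edges)

# Toy example: an exponential-looking angle distribution up to 0.1 rad
edges = np.linspace(0.0, 0.1, 51)
counts = np.exp(-edges[:-1] / 0.01)
angles = sample_angles(edges, counts, n_samples=10_000)
print(angles.mean())
```

Resampling the simulation output in this way produces smooth, uniformly formatted training inputs regardless of how the raw Geant4 histograms were binned.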
To use these data, the team created two lightweight neural-network frameworks for transfer learning: one based on fine-tuning and the other on a domain-adversarial neural network. According to the team, both frameworks were able to identify correlations between muon scattering-angle distributions and different target materials. Crucially, this was the case even when the target materials were coated in aluminium or polyethylene.
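The article does not describe the fine-tuning framework in detail, but the general transfer-learning recipe is to pretrain a small classifier on scattering-angle features from bare targets, then freeze its early layers and retrain only the output layer on the coated-target data. A minimal PyTorch-style sketch, with all layer sizes, feature dimensions and names assumed for illustration:

```python
import torch
import torch.nn as nn

class MaterialClassifier(nn.Module):
    """Toy network mapping scattering-angle features to one of nine materials."""
    def __init__(self, n_features=32, n_materials=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_materials)

    def forward(self, x):
        return self.head(self.backbone(x))

model = MaterialClassifier()
# ... pretrain here on features simulated for bare, uncoated targets ...

# Transfer learning by fine-tuning: freeze the backbone, retrain only the head
for param in model.backbone.parameters():
    param.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy update step on placeholder "coated-target" data
x_coated = torch.randn(128, 32)           # placeholder scattering features
y_coated = torch.randint(0, 9, (128,))    # placeholder material labels
optimizer.zero_grad()
loss_fn(model(x_coated), y_coated).backward()
optimizer.step()
```

The domain-adversarial variant mentioned in the article instead trains the feature extractor so that a separate "domain" classifier cannot tell bare from coated data, encouraging coating-independent features.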
Chen explains, “Transfer learning allows us to preserve the fundamental physical characteristics of muon scattering while efficiently adapting to unknown environments under shielding”.
Chen and colleagues are now trying to apply their process to more complicated scattering geometries. They also plan to include detector effects and targets made of several materials.
“By integrating simulation, physics, and data-driven learning, this research opens new pathways for applying artificial intelligence to nuclear science and security technologies,” says Chen.
The research is described in Nuclear Science and Techniques.
The post Transfer learning could help muon tomography identify illicit nuclear materials appeared first on Physics World.
Ask me anything: Katie Perry – ‘I’d tell my younger self to network like crazy’
Katie Perry is chief executive of the Daphne Jackson Trust, which helps people who’ve had a career break
The post Ask me anything: Katie Perry – ‘I’d tell my younger self to network like crazy’ appeared first on Physics World.
Katie Perry studied physics at the University of Surrey in the UK, staying on there to do a PhD. While at Surrey, she worked with the nuclear physicist Daphne Jackson, who was the first female physics professor in the UK. Perry later worked in science communication – both as a science writer and in public relations.
She is currently chief executive of the Daphne Jackson Trust – a charity that supports returners to research careers after a break of at least two years for family, caring or health reasons. It offers fellowships to support people to overcome the challenges of returning, ensuring that their skills, talent, training and career promise are not lost.
What skills do you use every day in your job?
One of the most important skills is multitasking and working in an agile and flexible way. I’m often travelling to meetings, conferences and other events so I have to work wherever I am, whether it’s on a train, in a hotel or at the office. How I work reminds me of a moment I had towards the end of my physics degree when suddenly everything I’d been learning seemed to fit together; I could see both the detail and the bigger picture. It’s the same now. I have to switch quickly from one project or task to another, while keeping oversight of the overall direction and operation of the charity.
I am a strong advocate for part-time and flexible working, not just for me, but for all my staff and the Daphne Jackson fellows. As a manager, a key skill is to see the person and their value – not just the hours they are working. Communication and networking skills are also vital as much of my role involves developing collaborations and working with stakeholders. I could be meeting a university vice chancellor, attending a networking reception, talking to our fellows or ensuring the trust complies with charity governance – all in one day.
What do you like best and least about your job?
I love my current role, and at the risk of sounding a little cheesy, it’s because of the trust’s amazing staff and the inspiring returners we support. The fact that I knew Daphne Jackson means that leading the organization is personal to me. I’m always blown away by how inspirational, dedicated, motivated and talented our fellows are and I love supporting them to return to successful research careers. It’s a privilege to lead the charity, helping to understand the challenges and barriers that returners face – and finding ways to overcome them.
Leading a small charity requires a broad set of skills. I enjoy the variety but it’s a challenge because you’re not so much a “chief executive officer” as a “chief everything officer”. I don’t have huge teams of people to help me with, say, human resources, finance or health and safety, which makes it a struggle to do them as well as I’d like. It’s therefore important to have a good work-life balance, which is why I recently took up golf. I’ve yet to have a work meeting while out practising my swing, but one day my diary might say I’m “on a course”!
What do you know today that you wish you knew when you were starting out in your career?
If I could go back in time, I’d tell myself – like I now tell my daughter – that it’s fine not to have a defined career path or plan. Sure, it helps to have an idea of what you want to do, but you have to live and work a little to discover what you like and – more importantly – don’t like. Careers these days are highly non-linear. Unexpected life events happen so you have to adapt, just as our Daphne Jackson fellows have done.
If someone had said to me in my 20s, when I was planning a career in science communication, that I’d be a charity chief executive I wouldn’t have believed them. But here I am running a charity founded in memory of the physicist who was such a great mentor to me during my PhD. When one door closes, a window often opens – so don’t be afraid to set off in a new direction. It can be scary, but it’s often worth the effort.
I’d also tell my younger self to network like crazy. So many opportunities have opened up because I love speaking to people. You never know who you might meet at events or what making new connections can lead to. Finally, I wish I’d known that “impostor syndrome” will always be with you – and that it’s okay to feel that way provided you recognize it and manage it. Chances are, you may never defeat it completely.
The post Ask me anything: Katie Perry – ‘I’d tell my younger self to network like crazy’ appeared first on Physics World.
Quantum scientists release ‘manifesto’ opposing the militarization of quantum research
More than 250 quantum scientists have signed the quantum scientists for disarmament manifesto
The post Quantum scientists release ‘manifesto’ opposing the militarization of quantum research appeared first on Physics World.
More than 250 quantum scientists have signed a “manifesto” opposing the use of quantum research for military purposes. The statement – quantum scientists for disarmament – expresses a “deep concern” about the current geopolitical situation and “categorically rejects” the militarization of quantum research or its use in population control and surveillance. The signatories now call for an open debate about the ethical implications of quantum research.
While quantum science has the potential to improve many different areas – from sensors and medicine to computing – some are concerned about its applications for military purposes. These include quantum key distribution and cryptographic networks for communication as well as quantum clocks and sensing for military navigation and positioning.
Marco Cattaneo from the University of Helsinki in Finland, who co-authored the manifesto, says that even the potential applications of quantum technologies in warfare can be used to militarize universities and research agendas, which he says is already happening. He notes it is not unusual for scientists to openly discuss military applications at conferences or to include such details in scientific papers.
“We are already witnessing restrictions on research collaborations with fellow quantum scientists from countries that are geopolitically opposed or ambiguous with respect to the European Union, such as Russia or China,” says Cattaneo. “When talking with our non-European colleagues, we also realized that these concerns are global and multifaceted.”
Long-term aims
The idea for a manifesto originated during a quantum-information workshop that was held in Benasque in Spain between June and July 2025.
“During a session on science policy, we realized that many of us shared the same concerns about the growing militarization of quantum science and academia,” Cattaneo recalls. “As physicists, we have a strong – and terrible – historical example that can guide our actions: the development of nuclear weapons, and the way the physics community organized to oppose them and to push for their control and abolition.”
Cattaneo says that the first goal of the manifesto is to address the militarization of quantum research, which he calls “the elephant in the room”. The document also aims to raise awareness and open a debate within the community and create a forum where concerns can be shared.
“A longer-term goal is to prevent, or at least to limit and critically address, research on quantum technologies for military purposes,” says Cattaneo. He notes that “one concrete proposal” is to push public universities and research institutes to publish a database of all projects with military goals or military funding, which, he says, “would be a major step forward.”
Cattaneo claims the group is “not naïve” and understands that stopping the technology’s military application completely will not be possible. “Even if military uses of some quantum technologies cannot be completely stopped, we can still advocate for excluding them from public universities, for abolishing classified quantum research in public research institutions, and for creating associations and committees that review and limit the militarization of quantum technologies,” he adds.
The post Quantum scientists release ‘manifesto’ opposing the militarization of quantum research appeared first on Physics World.
India announces three new telescopes in the Himalayan desert
The telescopes in Ladakh would significantly improve global coverage of transient and variable phenomena
The post India announces three new telescopes in the Himalayan desert appeared first on Physics World.
India has unveiled plans to build two new optical-infrared telescopes and a dedicated solar telescope in the Himalayan desert region of Ladakh. The three new facilities, expected to cost INR 35bn (about £284m), were announced by the Indian finance minister Nirmala Sitharaman on 1 February.
First up is a 3.7 m optical-infrared telescope, which is expected to come online by 2030. It will be built near the existing 2 m Himalayan Chandra Telescope (HCT) at Hanle, about 4500 m above sea level. Astronomers use the HCT for a wide range of investigations, including stellar evolution, galaxy spectroscopy, exoplanet atmospheres and time-domain studies of supernovae, variable stars and active galactic nuclei.
“The arid and high-altitude Ladakh desert is firmly established as among the world’s most attractive sites for multiwavelength astronomy,” Annapurni Subramaniam, director of the Indian Institute of Astrophysics (IIA) in Bangalore, told Physics World. “HCT has demonstrated both site quality and opportunities for sustained and competitive science from this difficult location.”
The 3.7 m telescope is a stepping stone towards a proposed 13.7 m National Large Optical-Infrared Telescope (NLOT), which is expected to open in 2038. “NLOT is intended to address contemporary astronomy goals, working in synergy with major domestic and international facilities,” says Maheswar Gopinathan, a scientist at the IIA, which is leading all three projects.
Gopinathan says NLOT’s large collecting area will enable research on young stellar systems, brown dwarfs and exoplanets, while also allowing astronomers to detect faint sources and to rapidly follow up extreme cosmic events and gravitational wave detections.
Along with India’s upgraded Giant Metrewave Radio Telescope, a planned gravitational-wave observatory in the country and the Square Kilometre Array in Australasia and South Africa, Gopinathan says that NLOT “will usher in a new era of multimessenger and multiwavelength astronomy.”
The third telescope to be supported is the 2 m National Large Solar Telescope (NLST), which will be built near Pangong Tso lake, 4350 m above sea level. Also expected to come online by 2030, the NLST is an advance on India’s existing 50 cm telescope at the Udaipur Solar Observatory, which provides a spatial resolution of about 100 km. Scientists also plan to combine NLST observations with data from Aditya-L1, India’s space-based solar observatory, which launched in 2023.
“We have two key goals [with NLST],” says Dibyendu Nandi, an astrophysicist at the Indian Institute of Science Education and Research in Kolkata, “to probe small-scale perturbations that cascade into large flares or coronal mass ejections and improve our understanding of space weather drivers and how energy in localised plasma flows is channelled to sustain the ubiquitous magnetic fields.”
While bolstering India’s domestic astronomical capabilities, scientists say the Ladakh telescopes – located between observatories in Europe, the Americas, East Asia and Australia – would significantly improve global coverage of transient and variable phenomena.
The post India announces three new telescopes in the Himalayan desert appeared first on Physics World.
Black hole is born with an infrared whimper
Observation sheds new light on how some massive stars fade away
The post Black hole is born with an infrared whimper appeared first on Physics World.
A faint flash of infrared light in the Andromeda galaxy was emitted at the birth of a stellar-mass black hole – according to a team of astronomers in the US. Kishalay De at Columbia University and the Flatiron Institute, and colleagues, noticed that the flash was followed by the rapid dimming of a once-bright star. They say that the star collapsed, with almost all of its material falling into a newly forming black hole. Their analysis suggests that there may be many more such black holes in the universe than previously expected.
When a massive star runs out of fuel for nuclear fusion it can no longer avoid gravitational collapse. As it implodes, such a star is believed to emit an intense burst of neutrinos, whose energy can be absorbed by the star’s outer layers.
In some cases, this energy is enough to tear material away from the core, triggering spectacular explosions known as core-collapse supernovae. Sometimes, however, this energy transfer is insufficient to halt the collapse, which continues until a stellar-mass black hole is created. These stellar deaths are far less dramatic than supernovae, and are therefore very difficult to observe.
Observational evidence for these stellar-mass black holes includes their gravitational influence on the motions of stars, and the gravitational waves emitted when they merge together. So far, however, their initial formation has proven far more difficult to observe.
Mysterious births
“While there is consensus that these objects must be formed as the end products of the lives of likely very massive stars, there has remained little convincing observational evidence of watching stars turn into black holes,” De explains. “As a result, we don’t even have constraints on questions as fundamental as which stars can turn into black holes.”
The main problem is the low-key nature of the stellar implosions. While core-collapse supernovae shine brightly in the sky, “finding an individual star disappearing in a galaxy is remarkably difficult,” De says. “A typical galaxy has a 100 billion stars in it, and being able to spot one that disappears makes it very challenging.”
Fortunately, it is believed that these stars do not vanish without a trace. “Whenever a black hole does form from the near complete inward collapse of a massive star, its very outer envelope must be still ejected because it is too loosely bound to the star,” De explains. As it expands and cools, models predict that this ejected material should emit a flash of infrared radiation – vastly dimmer than a supernova, but still bright enough for infrared surveys to detect.
To search for these flashes, De’s team examined data from NASA’s NEOWISE infrared survey and several other telescopes. They identified a near-infrared flash that was observed in 2014 and closely matched their predictions for a collapsing star. That flash was emitted by a supergiant star in the Andromeda galaxy.
Nowhere to be seen
Between 2017 and 2022, the star dimmed rapidly before disappearing completely across all regions of the electromagnetic spectrum. “This star used to be one of the most luminous stars in the Andromeda Galaxy, and now it was nowhere to be seen,” says De.
“Astronomers can spot supernovae billions of light years away – but even at this remarkable proximity, we didn’t see any evidence of an explosive supernova,” De says. “This suggests that the star underwent a near pure implosion, forming a black hole.”
The team also examined a previously-observed dimming in a galaxy 10 times more distant. While several competing theories had emerged to explain that disappearance, the pattern of dimming bore a striking resemblance to their newly-validated model, strongly suggesting that this event too signalled the birth of a stellar-mass black hole.
Because these events occurred so recently in ordinary galaxies like Andromeda, De’s team believe that similar implosions must be happening routinely across the universe – and they hope that their work will trigger a new wave of discoveries.
“The estimated mass of the star we observed is about 13 times the mass of the Sun, which is lower than what astronomers have assumed for the mass of stars that turn into black holes,” De says. “This fundamentally changes our understanding of the landscape of black hole formation – there could be many more black holes out there than we estimate.”
The research is described in Science.
The post Black hole is born with an infrared whimper appeared first on Physics World.
International Year of Quantum Science and Technology draws to a close
Two-day event in Ghana marked the official end of IYQ
The post International Year of Quantum Science and Technology draws to a close appeared first on Physics World.
The International Year of Quantum Science and Technology (IYQ) has officially closed following a two-day event in Accra, Ghana. The year has seen hundreds of events worldwide celebrating the science and applications of quantum physics.
Officially launched in February at the headquarters of the UN Educational, Scientific and Cultural Organization (UNESCO) in Paris, IYQ has involved hundreds of organizations – including the Institute of Physics, which publishes Physics World.
The year 2025 was chosen for an international year dedicated to quantum physics as it marks the centenary of the initial development of quantum mechanics by Werner Heisenberg. A range of international and national events have been held touching on quantum in everything from communications and computing to medicine and the arts.
One of the highlights of the year was a workshop on 9–14 June 2025 in Helgoland – the island off the coast of Germany where Heisenberg made his breakthrough exactly 100 years earlier. It was attended by more than 300 top quantum physicists, including four Nobel prize-winners, who gathered for talks, poster sessions and debates.
Another was the IOP’s two-day conference – Quantum Science and Technology: The First 100 Years; Our Quantum Future – held at the Royal Institution in London in November.
The closing event in Ghana, held on 10–11 February, was attended by government officials, UNESCO directors, physicists and representatives from international scientific societies, including the IOP. Attendees discussed UNESCO’s official 2025 IYQ report, heard a reading of the winning entry in the IYQ 2025 poetry contest and visited an exhibition with displays from IYQ sponsors.
Organizers behind the IYQ hope its impact will be felt for many years to come. “The entire 2025 year was filled with impactful events happening all over the world. It has been a wonderful experience working alongside such dedicated and distinguished colleagues,” notes Duke University physicist Emily Edwards, who is a member of the IYQ steering committee. “We are thrilled to see the enthusiasm continue through to 2026 with the closing ceremony and are proud that a strong foundation has been laid for the years ahead.”
The UN has declared “international years” since 1959, to draw attention to topics deemed to be of worldwide importance. In recent years, there have been a number of successful science-based themes, including physics (2005), astronomy (2009), chemistry (2011), crystallography (2014) and light and light-based technologies (2015).
- Read our two free-to-read quantum briefings, published in May and October, which feature articles on the history, mystery and industry of quantum mechanics.
- Rewatch our Physics World Live: Quantum held in June that included a discussion of how technological developments have created a whole new ecosystem of “quantum 2.0” businesses
The post International Year of Quantum Science and Technology draws to a close appeared first on Physics World.
Asteroid deflection: why we need to get it right the first time
Aerospace engineer Rahil Makadia on the danger of asteroid “keyholes”
The post Asteroid deflection: why we need to get it right the first time appeared first on Physics World.
Science fiction became science fact in 2022 when NASA’s DART mission took the first steps towards creating a planetary defence system that could someday protect Earth from a catastrophic asteroid collision. However, much more work on asteroid deflection is needed from the latest generation of researchers – including Rahil Makadia, who has just completed a PhD in aerospace engineering at the University of Illinois at Urbana-Champaign.
In this episode of the Physics World Weekly podcast, Makadia talks about his work on how we could deflect asteroids away from Earth. We also chat about the potential threats posed by near-Earth asteroids – from shattered windows to global destruction.
Makadia stresses the importance of getting a deflection right the first time, because his calculations reveal that a poorly deflected asteroid could return to Earth someday. In November, he published a paper that explored how a bad deflection could send an asteroid into a “keyhole” that guarantees its return.
But it is not all gloom and doom: Makadia points out that our current understanding of near-Earth asteroids suggests that no major collision will occur for at least 100 years. So even if there is a threat on the horizon, we have lots of time to develop deflection strategies and technologies.
The post Asteroid deflection: why we need to get it right the first time appeared first on Physics World.
Fluid gears make their debut
New work could promote the development of next-generation machines without mechanical interlocking teeth
The post Fluid gears make their debut appeared first on Physics World.
Flowing fluids that act like the interlocking teeth of mechanical gears offer a possible route to novel machines that suffer less wear-and-tear than traditional devices. This is the finding of researchers at New York University (NYU) in the US, who have been studying how fluids transmit motion and force between two spinning solid objects. Their work sheds new light on how one such object, or rotor, causes another object to rotate in the liquid that surrounds it – sometimes with counterintuitive results.
“The surprising part in our work is that the direction of motion may not be what you expect,” says NYU mathematician Leif Ristroph, who led the study together with mathematical physicist Jun Zhang. “Depending on the exact conditions, one rotor can cause a nearby rotor to spin in the opposite direction, like a pair of gears pressed together. For other cases, the rotors spin in the same direction, as if they are two pulleys connected by a belt that loops around them.”
Making gear teeth using fluids
Gears have been around for thousands of years, with the first records dating back to 3000 BC. While they have advanced over time, their teeth are still made from rigid materials and are prone to wearing out and breaking.
Ristroph says that he and Zhang began their project with a simple question: might it be possible to avoid this problem by making gears that don’t have teeth, and in fact don’t even touch, but are instead linked together by a fluid? The idea, he points out, is not unprecedented. Flowing air and water are commonly used to rotate structures such as turbines, so developing fluid gears to facilitate that rotation is in some ways a logical next step.
To test their idea, the researchers carried out a series of measurements aimed at determining how parameters like the spin rate and the distance between spinning objects affect the motion produced. In these measurements, they immersed the rotors – solid cylinders – in an aqueous glycerol solution with a controllable viscosity and density. They began by rotating one cylinder while allowing the other one to spin in response. Then they placed the cylinders at varying distances from each other and rotated the active cylinder at different speeds.
“The active cylinder should generate fluid flows and could therefore in principle cause rotation of the passive one,” says Ristroph, “and this is exactly what we observed.”
When the cylinders were very close to each other, the NYU team found that the fluid flows functioned like gear teeth – in effect, they “gripped” the passive rotor and caused it to spin in the opposite direction to the active one. However, when the cylinders were spaced farther apart and the active cylinder spun faster, the flows looped around the outside of the passive cylinder like a belt around a pulley, producing rotation in the same direction as the active cylinder.
A model involving gear-like and belt-like modes
Ristroph says the team’s main difficulty was figuring out how to perform such measurements with the necessary precision. “Once we got into the project, an early challenge was to make sure we could make very precise measurements of the rotations, which required a special way to hold the rotors using air bearings,” he explains. Team member Jesse Smith, a PhD student and first author of a paper in Physical Review Letters about the research, was “brilliant in figuring out every step in this process”, Ristroph adds.
Another challenge the researchers faced was figuring out how to interpret their findings. This led them to develop a model involving “gear-like” and “belt-like” modes of induced rotations. Using this model, they showed that, at least in principle, fluid gears could replace regular gears and pulley-and-belt systems – with Ristroph suggesting that transmitting rotations in a machine or keeping time in a mechanical device might be especially well-suited applications.
In general, Ristroph says that fluid gears offer many advantages over mechanical ones. Notably, they cannot become jammed or wear out due to grinding. But that isn’t all: “There has been a lot of recent interest in designing new types of so-called active materials that are composed of many particles, and one class of these involves spinning particles in a fluid,” he explains. “Our results could help to understand how these materials behave based on the interactions between the particles and the flows they generate.”
The NYU researchers say their next step will be to study more complex fluids. “For example, a slurry of corn starch is an everyday example of a shear-thickening fluid and it would be interesting to see if this helps the rotors better ‘grip’ one another and therefore transmit the motions/forces more effectively,” Ristroph says. “We are also numerically simulating the processes, which should allow us to investigate things like non-circular shapes of the rotors or more than just two rotors,” he tells Physics World.
The post Fluid gears make their debut appeared first on Physics World.