https://physicsworld.com/a/a-rubbish-challenge-how-do-we-dump-space-junk/
Margaret Harris

International scientists head into the fast-lane of Denmark’s burgeoning quantum ecosystem

Ambition and international talent converge as Denmark scales up in quantum science

Shared purpose Maria Cerdà Sevilla and her colleagues at Quantum DTU in Lyngby are shaping the trajectory of technology translation and commercial innovation in quantum science. (Courtesy: Bax Lindhardt/DTU)

Denmark, it seems, is increasingly walking the walk, not just talking the talk, when it comes to quantum science and innovation. Structurally, the country’s “quantum ecosystem” is on a roll, with more than 75 organizations now actively engaged around a shared national mission via the Danish Quantum Community, a network of start-ups, scale-ups, incumbent technology companies, investors, research institutions and government agencies.

Money is greasing the wheels. In October last year, Denmark launched 55North, the world’s largest venture-capital fund dedicated exclusively to quantum technologies and applications. Headquartered in Copenhagen and backed by the Novo Nordisk Foundation and the Export and Investment Fund of Denmark (EIFO), the fund opened with a capital injection of €134 million (and a target base of €300 million) to back high-growth companies in the nascent quantum supply chain – within Denmark and beyond.

Workforce development is also mandatory – a strategic acknowledgement that Denmark must scale the “quantum talent pipeline” if it is to translate advances in fundamental science and applied R&D into next-generation quantum technologies. Capacity-building is well under way as Danish universities work with industry and government partners to train a skilled and diverse quantum workforce of “all the talents”, with recruitment of international scientists and engineers seen as fundamental to Denmark’s long-term quantum ambitions.

Joined-up thinking in quantum

A case study in this regard is Maria Cerdà Sevilla, head of Quantum DTU, the Center for Quantum Technologies at the Technical University of Denmark (DTU). Located in Lyngby, just north of Copenhagen, Quantum DTU coordinates the research activities of around 300 quantum scientists working across 12 departments at DTU, focused on five main research themes: quantum computing, quantum communications, quantum sensing, advanced materials, and cross-cutting initiatives in nanofabrication and next-generation quantum chips.

“The goal is to ensure that DTU is not merely participating in quantum science but also shaping the trajectory of technology translation and commercial innovation in the field,” explains Cerdà Sevilla. Put another way, Quantum DTU measures its outcomes against three broad metrics: scientific depth (world-class research in quantum physics and engineering); building the quantum ecosystem (integrating diverse research disciplines, developing infrastructure, plus education and training); and, finally, readiness for market deployment (meaning responsible and scalable implementation of quantum technologies).

“Our success will be defined not only by high-impact publications and prototypes, but whether DTU – and, by extension, Denmark – has established ‘durable capacity’ in quantum technologies and applications,” says Cerdà Sevilla.

It’s better to travel

For her part, Cerdà Sevilla is the quintessential pan-European scientist, albeit one who took the “road less travelled” to her role at Quantum DTU. After completing a PhD in particle physics at the University of Liverpool, UK, she moved on to postdoctoral research positions in Germany – at Humboldt University of Berlin and the Technical University of Munich – before a mid-career pivot into research strategy and innovation management.

Copenhagen calling Talented quantum scientists and engineers from across Europe are eyeing career opportunities in Copenhagen, one of the world’s most liveable, sustainable cities. (Courtesy: Daniel Rasmussen/A State of Denmark)

“While I no longer do research myself, I work with quantum scientists every day at DTU,” explains Cerdà Sevilla. That engagement extends to other stakeholders, including policy-makers, funding agencies, manufacturers in the quantum supply chain, as well as industrial end-users looking to deploy quantum technologies. “My role is essentially about leadership and strategic alignment,” she adds. “That means defining research priorities, understanding what we’re doing at a granular level, and ensuring Quantum DTU’s scientific efforts translate into a joined-up action plan across diverse specialisms.”

One of the most powerful aspects of Quantum DTU – indeed the wider quantum sector in Denmark – is this sense of shared purpose. “The quantum community here is internationally connected and recognized as well as being locally cohesive,” notes Cerdà Sevilla. “As an international scientist, it’s a given that you will get to conduct leading-edge research here; at the same time, you will also have a voice in shaping priorities at the departmental, institutional and even national level.”

By extension, institutional and interpersonal trust are defining features of Denmark’s research culture, enabling scientific collaborations and long-term initiatives to take shape organically without undue friction or hierarchical blockers. That same mindset informs life outside the laboratory and the workplace.

“The work-life balance in Denmark is great, though productivity is mandatory,” says Cerdà Sevilla. “Danish people work very hard, but they also understand the need for downtime with family and friends to ensure creativity and clarity of thinking. Overall, there’s a culture of psychological safety in the research community – an implicit acknowledgement that teams function best when individuals feel secure with their colleagues and management.”

Heading north

Another international scientist making an out-sized impact in the Danish quantum community is Francesco Borsoi, an assistant professor of physics and spin qubit pilot-line lead within the Novo Nordisk Foundation Quantum Computing Programme (NQCP), part of the renowned Niels Bohr Institute (NBI) at the University of Copenhagen (KU).

The NQCP is a 12-year collaborative research effort, backed with €200 million of funding through to 2035, to develop fault-tolerant quantum computing hardware and quantum algorithms for chemical and biological challenges in the life sciences. Underpinning the programme is a technology-agnostic approach to hardware development and the infrastructure required to support it, currently implemented across four qubit pilot lines.

“Right now, my research at NBI explores the development, control and scaling of solid-state quantum devices, and the properties that may enable universal quantum computing,” explains Borsoi. While his focus, in large part, is on quantum-confined spins in semiconductor quantum dots, Borsoi works closely with the other three NQCP pilot-line teams developing platforms based on superconducting, photonic and neutral-atom technologies.

As an assistant professor, Borsoi also plays a proactive role in training the next generation of quantum scientists and engineers. Notably, he is the lead creator of a hands-on experimental course on advanced qubit technologies – part of a joint KU/DTU master’s programme in quantum information science that’s helping Denmark to scale its quantum workforce.

Here for the long term

Like Cerdà Sevilla’s, Borsoi’s back-story reflects the pan-European mobility of scientific talent. He received his MSc in condensed-matter physics from the University of Pisa, Italy, in 2016, before moving on to QuTech at the Delft University of Technology, The Netherlands, where he completed a PhD in applied physics (on semiconductor/superconductor quantum heterostructures) followed by three years of postdoctoral research (and a shift in direction to focus exclusively on semiconductor quantum-dot qubits).

After six years at QuTech, Borsoi wasn’t actively seeking a move to another institution – let alone another country – but was attracted by the NQCP opportunity and, as he puts it, “the chance to build from the ground up and be part of something this ambitious”.

He’s been in Copenhagen for 18 months and has settled well, both within NBI and outside. “Day to day,” he says, “I get to work with talented colleagues and students across NBI and KU, plus I get to develop my career in one of the world’s most liveable, sustainable cities. Five-star food scene, amazing architecture, lots of green space and excellent public transport – what’s not to like?”

Pointing the way Francesco Borsoi moved to Copenhagen for “the chance to build from the ground up and be part of something this ambitious” within the Novo Nordisk Foundation Quantum Computing Programme at the Niels Bohr Institute. (Courtesy: Daniel Rasmussen/A State of Denmark)

Back in the laboratory, meanwhile, Borsoi also engages extensively with domain experts working on the three other qubit pilot lines – a systematic and collaborative research model that underpins NQCP’s approach to quantum science. “The core enabling technologies may differ,” notes Borsoi, “but many of the design, engineering and scalability challenges are common to all the pilot lines. I guess we all talk the same language when it comes to the NQCP mission.”

For Borsoi, the transition to NBI and Denmark’s quantum community could hardly have gone better and already feels like a long-term commitment. “Government, private equity and philanthropic foundations are all making big investments in quantum,” he concludes, “so there’s no shortage of opportunities in Denmark for talented quantum scientists and engineers seeking to develop their careers in a university or industry setting.”

https://physicsworld.com/a/international-scientists-head-into-the-fast-lane-of-denmarks-burgeoning-quantum-ecosystem/
No Author

At low exciton density, a superfluid suddenly stops flowing

Physicists say they may have observed a supersolid phase in a superfluid

Physicists at Columbia University in the US say they may have found evidence for a phenomenon in which a superfluid suddenly stops flowing inside a solid-state material. If confirmed, the finding – made in experiments using two atom-thin layers of graphene – could be the first superfluid-to-insulator phase transition ever observed in a naturally occurring material.

“For the first time, we’ve seen a superfluid undergo a phase transition to become what appears to be a supersolid,” says Cory Dean, who led the new study. “It’s like water freezing to ice, but at the quantum level.”

Supersolids are a hypothetical state of matter that can be both liquid- and solid-like at the same time – that is, they have a crystal structure and superfluid properties. In this description, first put forward by physicists in the 1970s, the crystal lattice and superfluidity are all part of the same phase coherent ground state and are not two separate systems, explains Dean.

In the new work, the researchers studied graphene, which is a sheet of carbon just one atom thick. When two sheets of graphene are placed atop each other, they can be manipulated so that one layer contains extra electrons and the other extra holes.

The electrons and holes can combine to form quasiparticles known as excitons, which can then travel through the graphene bilayer as a superfluid when a strong magnetic field is applied.

Graphene, sometimes called the “wonder material”, is ideal for such fundamental physics studies because its properties can be fine-tuned by adjusting parameters like temperature, the applied electromagnetic fields and even the distance between the layers.

Controlling the density of excitons

In their experiments, Dean and co-workers were able to move the excitons in their bilayer samples by applying oppositely directed electric fields to the two layers. This, explains Dean, causes the positive and negative parts of each exciton to be pulled in the same direction, allowing the team to indirectly drive and detect exciton flow. The same ability to control layer imbalance allowed them to tune the exciton density. Normally, such control is difficult to achieve because excitons are electrically neutral and do not respond directly to ordinary electrical measurements, making their motion hard to track.

Thanks to their technique, which they detail in Nature, the researchers found that at high densities, the excitons behaved like a superfluid. At lower densities, however, these excitons “froze” and the superfluid became insulating. Even more striking, says Dean, is that warming the system restored the superfluid flow. “This result suggests that a supersolid-like phase emerges spontaneously, driven solely by particle interactions.”

The Dean lab has been studying the superfluid exciton phase for many years, though most of its work to date has focused almost exclusively on the “layer balanced” condition that occurs when there is an equal density of electrons and holes in the two graphene layers. More recently, the team began to study the layer-imbalanced regime, which has been much less explored in experiments.

“To our surprise we found that under very large imbalance, the exciton transitions to an insulating state beyond some critical imbalance,” says Dean. “This observation alone could have many trivial explanations, but the real shock came when we found that upon heating the system, the superfluid is recovered.”

This behaviour, which has been discussed in the theoretical literature, has no precedent in any existing experiment on superfluidity, he explains, so it is something the team is keen to understand better.

“To view the situation in the opposite sense: when cooling a fluid and it transitions to a superfluid, the superfluid is already in a thermodynamic ground state. So why, upon further cooling, should it undergo a transition to any other phase?” asks team member Jia Li. “We eventually realized that in our experiment, the role of layer imbalance is really a tuning of the exciton density, and the insulating phase onsets when the exciton density crosses a critical value,” he tells Physics World. “Once we had adopted this view, understanding the observed phase transition, and how it fits in with existing theoretical predictions, fell into place.”

A true supersolid or not?

While the researchers say they have firmly established the existence of an insulating state within the superfluid phase diagram, whether this state is truly a supersolid or some other as-yet unknown quantum ground state remains less clear. The challenge with understanding an insulating material is that it becomes more difficult to probe its behaviour, says Dean. “This is made even more difficult by the experimental requirements to stabilize the insulating phase: we need ultraclean samples, low temperatures and high magnetic fields.”

And the difficulties do not end there: “having to work with strong magnetic fields also limits what experimental probes we can use,” he adds. “To progress further, we need to develop new tools to probe the insulating state – for example, we are developing a scan probe technique that we hope can directly image and spatially map the exciton condensate.”

“We have also been working on realizing this condensate in material systems with strong interactions that do not require magnetic fields,” he reveals.

https://physicsworld.com/a/at-low-exciton-density-a-superfluid-suddenly-stops-flowing/
Isabelle Dumé

Wanted: an electrical grid that runs on 100% renewable energy

Global conflicts are making renewable energy more attractive, but an all-renewable grid will require solving physics problems as well as political and economic ones

With the conflict in Iran and the resulting closure of the Strait of Hormuz pushing oil and gas prices upwards, the prospect of a world that runs on 100% renewable energy seems even more attractive than usual. Before we can get there, though, experts in a range of fields say we’re going to need to solve a few physics problems – including one that goes straight back to Maxwell’s equations.

Unlike energy that comes from processes such as burning fossil fuels, sending water downhill through turbines, or harnessing the heat from nuclear reactions, the supply of wind and solar energy varies in ways we cannot control. To complicate matters further, consumer demand also varies, and the two variations “do not necessarily match in time or in space”, observes Michael Jack, a physicist at the University of Otago in New Zealand.

Speaking on Monday at the American Physical Society’s Global Physics Summit in Denver, Colorado, Jack explained that there are two ways of making sure demand matches supply in an all-renewable grid. The first is to smooth out demand over time, for example by storing energy in batteries and using it when the wind isn’t blowing or the Sun isn’t shining. The second is to smooth out demand over space, for example by creating a grid that connects large numbers of consumers. “It’s very unlikely that all consumers’ demand will peak at the same time,” Jack noted.

To understand how peak demand scales with the number of consumers, Jack and his colleagues are using tools from an area of mathematics called extreme value theory. As its name implies, the goal of extreme value theory is to understand the probability of events that are either extremely large or extremely small compared to the norm. Once we can do that, Jack told the APS audience, we’ll be able to build renewable energy systems that deal efficiently with periods of peak demand.
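A minimal Monte Carlo sketch illustrates the effect Jack described. The demand model below – independent consumers drawing lognormally distributed hourly loads – is purely an illustrative assumption, not the Otago group's actual model; the point is that the "coincidence factor", the ratio of the aggregate peak to the sum of individual peaks, falls steeply as consumers are pooled:

```python
# Monte Carlo sketch of peak-demand aggregation. The demand model
# (independent consumers with lognormal hourly loads) is an illustrative
# assumption, not the Otago group's actual model.
import numpy as np

rng = np.random.default_rng(seed=1)
hours = 24 * 365                       # one simulated year at hourly resolution

for n_consumers in (1, 10, 100, 1000):
    # hourly demand for each consumer, in arbitrary units
    demand = rng.lognormal(mean=0.0, sigma=0.5, size=(n_consumers, hours))
    aggregate_peak = demand.sum(axis=0).max()   # peak of the pooled load
    sum_of_peaks = demand.max(axis=1).sum()     # if every peak coincided
    print(f"{n_consumers:5d} consumers: coincidence factor = "
          f"{aggregate_peak / sum_of_peaks:.2f}")
```

For a single consumer the factor is 1 by definition; for a thousand pooled consumers it drops towards the ratio of mean to peak demand, which is why a well-connected grid needs far less headroom than the sum of its parts would suggest.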

“The opposite of quantum mechanics”

Another speaker in the same session, Charles Meneveau, is working on the supply side of the variability problem. As a fluid dynamics expert at Johns Hopkins University in Maryland, US, his goal is to understand how turbulent gusts of wind lead to fluctuations in the power output of wind farms – a problem he described as “the opposite of quantum mechanics” because “it’s intuitive and we feel like we understand it, but we can’t compute it”.

Meneveau and his collaborators began by building a micro-scale wind farm, sticking it in a wind tunnel and monitoring how it behaved. More recently, they’ve added computer simulations to the mix, generating around a petabyte of simulated turbulence data.

As expected, these studies showed that the power output of an array of turbines fluctuates much less than the output of a single turbine. However, an array’s output does spike at intervals set by the rotation frequency of the turbine blades, and also when gusts of wind propagate from one turbine to the next. Meneveau has developed a model that can predict this second type of spike, and he’s now working to extend it to floating offshore wind farms, which experience watery turbulence as well as the windy kind.

Everything under control

The third speaker in the session, Bri-Mathias Hodge, is an energy systems engineer at the University of Colorado, Boulder. He’s interested in ways of ensuring that renewable energy systems remain stable in the face of disturbances that could otherwise send the grid into a tailspin, leading to blackouts like the one that struck the Iberian Peninsula in 2025.

Hodge explained that in traditional grids dominated by thermal energy sources, one of the main ways of maintaining stability is to use devices called synchronous machine generators. These are essentially large rotating masses that all spin at the same rate: the frequency of the grid, which in the US is 60 Hz. When coupled to an AC power system, they give the system a degree of inertia, enabling it to resist potentially damaging fluctuations in the supply of electricity.

These devices have existed for 100 years, and Hodge says our current power system is designed around them. But because renewable energy generation is primarily DC rather than AC, an all-renewable grid will require a fundamentally different approach. “We have to reimagine what the system looks like when we have 100% renewable energy,” Hodge told the APS audience.

The solution, Hodge explained, is to replace synchronous machine generators with electronic inverters. These devices have the advantage of reacting much faster to system fluctuations. However, they also come with a big disadvantage. Unlike massive spinning objects that follow ponderous Newtonian physics, they don’t react automatically. They have to be told, and Hodge says that will require completely different control systems than the ones used in today’s electrical grids.
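The stakes here can be made concrete with the standard "swing equation" bookkeeping used in power-systems engineering: immediately after a sudden loss of generation, the grid frequency falls at a rate of roughly f0 * dP / (2 * H), where H is the system inertia constant. The sketch below – all parameter values are illustrative assumptions, not figures from Hodge's talk – shows how shrinking H shrinks the time window in which controls, mechanical or electronic, must respond:

```python
# Back-of-envelope sketch of why rotational inertia buys the grid time.
# The initial rate of change of frequency (ROCOF) after losing a fraction
# dP of generation is f0 * dP / (2 * H), where H is the inertia constant.
# All values are illustrative assumptions, not figures from Hodge's talk.

f0 = 60.0          # nominal US grid frequency, Hz
dP = 0.10          # sudden loss of 10% of generation (per unit)
threshold = 0.5    # Hz deviation at which protective action kicks in

for H in (6.0, 3.0, 1.0):   # seconds; lower H means fewer spinning masses
    rocof = f0 * dP / (2.0 * H)
    window = threshold / rocof
    print(f"H = {H:3.1f} s: ROCOF = {rocof:.2f} Hz/s, "
          f"~{window:.2f} s to act before hitting {f0 - threshold} Hz")
```

Halving the inertia halves the time available before under-frequency protection starts tripping, which is precisely why inverters in a low-inertia grid must be told what to do, and told quickly.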

Return of Maxwell’s equations

While studying this problem, Hodge realized that the engineers who designed electrical grids back in the 1960s made an important simplifying assumption. Because they were working with a system composed entirely of thermal, synchronous generators (and because they were doing all their calculations with slide rules), they treated voltage as being separate from frequency, even though the two are inherently coupled. In other words, they treated the grid as an electromechanical network rather than an electromagnetic one.

To understand how this simplification plays out in a renewable-dominated grid, Hodge and colleagues went back to Maxwell’s equations. Specifically, they focused on what these equations have to say about the momentum associated with a mass that is moving around in an electromagnetic field. In an electrical grid controlled by large inertias from thermal generators, this momentum isn’t important. But in a renewable-dominated grid, Hodge says it can’t be ignored.

He and his colleagues have therefore developed a new model of electric power networks that highlights the significance of this electromagnetic momentum and restores the link between frequency and voltage dynamics. Ultimately, though, Hodge says that avoiding blackouts in an all-renewable energy system will require advances in simulation technologies. “We need to improve our decision-making processes on a whole range of timescales, from seconds to years,” he concluded.

https://physicsworld.com/a/wanted-an-electrical-grid-that-runs-on-100-renewable-energy/
Margaret Harris

Ask me anything: Giannis Zacharakis – ‘The ability to pursue questions that genuinely interest you is a privilege’

Giannis Zacharakis is a biophotonics and biomedical imaging researcher and CEO of the precision photonics spin-off Kymatonics

Giannis Zacharakis is a research director at the Institute of Electronic Structure and Laser (FORTH) in Greece, where he leads the Laboratory for Biophotonics and Molecular Imaging. Zacharakis has served as president and vice-president of the European Society for Molecular Imaging. His main focus is on developing key enabling technologies for imaging biological processes in living systems.

Zacharakis is also the CEO of the precision photonics spin-off Kymatonics. The company recently secured a highly competitive €2.1m European Innovation Council (EIC) Transition Open grant to advance the development and commercialization of its innovative wavefront-shaping objective lens.

What skills do you use every day in your job?

My everyday work involves both hard and soft skills, which are equally important for a successful career.

At its core, my work is about asking questions and defining the path to discovery, through scientific knowledge and rigour. This requires being able to break down complex physical and biological problems into manageable and measurable components under certain hypotheses. Much of my day therefore involves analytical thinking and judgement: evaluating whether an observed effect is physically meaningful or an artefact of instrumentation or data processing. That defines the path forward.

Problem solving constantly requires creativity and thinking out of the box, because experiments rarely behave exactly as planned. You need patience, persistence and the ability to stay calm when instruments misbehave or data contradict expectations.

Communication is another central skill. I regularly explain technical concepts to students, collaborators from other disciplines, and biologists or clinicians who may not share the same vocabulary. Translating physics into accessible language, without oversimplifying the science, is something I consciously practise and it takes time and effort to achieve.

Project management also plays a surprisingly large role. Co-ordinating experiments, supervising students, meeting deadlines for proposals or manuscripts, and balancing long-term research goals with short-term deliverables requires structured planning.

Finally, mentoring is an important part of my routine. Guiding students and young scientists through experimental design, encouraging independent thinking, and helping them develop scientific confidence is both a responsibility and an integral component of academic work.

Essentially, while physics provides the foundation, my job relies on a blend of analytical rigour, practical problem-solving, communication and leadership.

What do you like best and least about your job?

What I value most is intellectual freedom: the ability to pursue questions that genuinely interest you is a privilege. There is something deeply satisfying about seeing a concept move from hypothesis to experimental evidence. Even incremental progress can feel meaningful when it clarifies a mechanism or resolves ambiguity.

I also appreciate the interdisciplinary environment. Working at the interface of physics, biology and biomedicine forces me to continuously learn and think beyond boundaries. It prevents intellectual stagnation and keeps curiosity alive.

Mentoring students is another highlight. Watching someone gain confidence, moving from following instructions to proposing their own ideas, is deeply rewarding. Research training is not only about technical knowledge; it is also about developing judgement and rigour.

On the more challenging side, uncertainty is a constant companion. Funding cycles, competitive grant applications, proposal rejections and the unpredictability of research outcomes can all be demanding. Not every idea works, and not every effort translates into immediate output. Maintaining momentum despite setbacks requires persistence and resilience.

Administrative responsibilities can also fragment time and reduce deep focus. Balancing research, supervision and institutional duties often requires careful prioritization.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had understood earlier that uncertainty is not a sign of inadequacy but is the natural state of research. Early in my career, I expected clarity to come quickly if I worked hard enough. In reality, meaningful progress often requires extended periods of ambiguity. Learning earlier to tolerate that and even see it as productive would have reduced unnecessary self-doubt.

I also underestimated the importance of communication. Being technically correct is not enough; ideas need structure, clarity and narrative. Writing well and presenting clearly are not secondary skills; they are core scientific tools.

Another lesson is that collaboration is essential. Scientific progress increasingly happens at disciplinary boundaries with impactful discoveries emerging at interfaces. Engaging with people who think differently challenges assumptions and strengthens work.

Finally, remember that career paths are less rigid than they appear. There is rarely a single “correct” trajectory. Developing transferable skills – analytical thinking, adaptability, mentoring and project management – provides resilience across different opportunities.

I would tell my younger self to focus less on short-term milestones and more on building depth, clarity of thought and professional relationships. Those foundations endure longer than any single milestone.

https://physicsworld.com/a/ask-me-anything-giannis-zacharakis-the-ability-to-pursue-questions-that-genuinely-interest-you-is-a-privilege/
Tushna Commissariat

Single metasurface could generate record numbers of trapped neutral atoms

Technique boosts prospects for building quantum computers with more than 100,000 qubits

Physicists in China have demonstrated that a structure called an optical metasurface can individually trap up to 78,400 neutral atoms – a promising development in efforts to build a large-scale quantum computer. The method, which is similar to one demonstrated independently by a team at Columbia University in the US, could help overcome a troublesome bottleneck for computers that use neutral atoms as their quantum bits (qubits).

Arrays of trapped neutral atoms are widely employed in physics research, and they are a promising platform for quantum computing. Their main drawback is scalability, explains physicist Zhongchi Zhang, who co-led the new study together with his Tsinghua University colleague Xue Feng. The components normally used to make such arrays, such as spatial light modulators (SLMs) and acousto-optic deflectors (AODs), can only create around 10,000 atom traps at any one time, and are thus limited to a maximum of 10,000 atomic qubits.

Flat optical surfaces made from 2D arrays of nanostructures

In their work, which is detailed in Chinese Physics Letters, Zhang and colleagues replaced SLMs and AODs with two-dimensional arrays of metasurfaces – artificial nanostructures that manipulate light in much the same way as traditional optics, but with far less bulk. To do this, they used a method known as the weighted Gerchberg–Saxton algorithm to design a metasurface made up of nanoscale pillars that can transform a single input laser beam into a 280 × 280 array of focal spots. They then constructed this metasurface from silicon nitride using electron-beam lithography and reactive ion etching. Both methods are compatible with standard complementary metal–oxide–semiconductor (CMOS) manufacturing techniques and are thus highly reproducible.
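For readers curious about the hologram-design step, the sketch below shows a toy weighted Gerchberg–Saxton loop: the algorithm bounces a field back and forth between the hologram plane and the focal plane, keeping only the phase at the hologram and re-weighting the target spots so that dim ones are boosted. The grid size, spot layout and iteration count are illustrative choices, not the parameters or code used by the Tsinghua team:

```python
# Toy weighted Gerchberg-Saxton (WGS) loop: design a phase-only hologram
# whose far field is a uniform grid of focal spots. Grid size, spot layout
# and iteration count are illustrative, not the paper's parameters.
import numpy as np

N = 256
target = np.zeros((N, N))
target[32:224:16, 32:224:16] = 1.0        # 12 x 12 grid of desired spots
spots = target > 0

rng = np.random.default_rng(0)
phase = 2 * np.pi * rng.random((N, N))    # random starting hologram phase
weights = np.ones(int(spots.sum()))

for _ in range(30):
    far = np.fft.fft2(np.exp(1j * phase))        # propagate to the focal plane
    amp = np.abs(far[spots])
    weights *= amp.mean() / amp                  # boost spots that came out dim
    constrained = np.zeros((N, N), dtype=complex)
    constrained[spots] = weights * np.exp(1j * np.angle(far[spots]))
    phase = np.angle(np.fft.ifft2(constrained))  # keep only the phase pattern

final = np.abs(np.fft.fft2(np.exp(1j * phase))[spots])
print(f"relative spread in spot intensities: {final.std() / final.mean():.3f}")
```

The weighting step is what distinguishes WGS from plain Gerchberg–Saxton: without it, the spot intensities converge to a noticeably non-uniform pattern.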

The result is a set of nanoscale, light-manipulating, pixel-like structures that act like a superposition of tens of thousands of flat lenses. When a laser beam hits these “lenses”, they produce a unique pattern that contains tens of thousands of focal points. As long as the laser light is intense enough, each of these focal points can be used to trap and manipulate atoms via a well-established technique called optical tweezing.

Zhang explains that the main advantage of trapping atoms this way is that the metasurface generates the array of optical tweezers on its own, without the need for additional bulky and expensive optical components such as microscope objectives to focus the light. Another benefit is that such arrays are very robust to high laser intensities, which are a prerequisite when the goal is to trap hundreds of thousands of atoms. Indeed, Zhang says that arrays of this type can handle powers several orders of magnitude higher than is possible with arrays made using SLMs and AODs. The intensity of the light is also highly uniform (90.6%) across the array, and individual beams feature an Airy disk-like profile with an average first dark radius of around 1.017 µm – parameters that Zhang says are “ideal for trapping single atoms”.

Improving fault-tolerant quantum computing

“Our work addresses the critical need for scalable physical qubit arrays required for improving ‘fault-tolerant’ quantum computing and making it more robust to errors,” Zhang tells Physics World. “Since quantum error-correcting codes may call for hundreds of physical qubits to build a single logical qubit, scalability here becomes paramount.”

Researchers at Columbia University also recently demonstrated an atom-trapping array that replaced SLMs and AODs with flat optical metasurfaces. But whereas the Columbia team managed to create 360,000 tweezers with extreme pixel efficiency (around 300 pixels/tweezer, with over 95% uniformity), the Tsinghua University group prioritized the array’s robustness at higher laser power, achieving around 1354 pixels/tweezer. Both studies have validated the use of metasurfaces as a scalable platform beyond the limitations imposed by AODs and SLMs, says Zhang.

Spurred on by their preliminary results, Zhang and colleagues report that they are now fabricating a 19.5 mm-diameter metasurface designed to generate approximately 18,000 optical trapping sites. Their goal is to place this metasurface outside the vacuum chamber that contains the trapped atoms. “Such an external configuration represents a significant departure from conventional approaches and is expected to enable the trapping of over 10,000 atoms, surpassing current records while substantially simplifying the experimental setup,” Zhang explains.

The team is also developing a next-generation integrated architecture in which metasurfaces will replace the fluorescence imaging microscopes used to characterize trapped atoms, as well as the optical tweezer arrays used to trap them. “This approach aims to create a completely new system paradigm for neutral-atom quantum computing that eliminates the need for traditional bulky optics, enabling unprecedented compactness and scalability for future quantum processors,” Zhang says.

https://physicsworld.com/a/single-metasurface-could-generate-record-numbers-of-trapped-neutral-atoms/
Isabelle Dumé

Physicists demonstrate long-predicted exotic magnetic phases in 2D material

Observations of how magnetism behaves in atomically thin materials could pave the way for new generations of ultracompact magnetic technologies

Physicists in the US and Taiwan have performed new experiments that verify long-standing theoretical predictions of how long-range magnetic order can emerge in atomically thin materials. Led by Edoardo Baldini at the University of Texas at Austin, the researchers showed how the transformation occurs through two distinct phase transitions – possibly paving the way for new generations of ultracompact magnetic materials.

Atomically thin two-dimensional (2D) materials are widely studied for their diverse electrical, optical, mechanical and thermal properties. So far, however, their magnetic properties have generally remained far more elusive. Underlying the problem are inevitable thermal fluctuations, which make it extremely difficult to sustain magnetic order over distances larger than atomic scales.

For decades, theorists have investigated a possible exception to this rule in “2D XY” systems, which feature flat arrays of spins that can rotate continuously within the plane and interact with neighbouring spins. One particularly interesting extension of this model describes how a phase transition can occur when these spins become locked into one of six preferred directions, corresponding to the symmetry of the crystal lattice.

“In the 1970s, theoretical work showed that 2D XY magnetic systems with this six-fold anisotropy could exhibit an unusual sequence of phase transitions described by the six-state ‘clock model’, including an intermediate Berezinskii–Kosterlitz–Thouless (BKT) phase,” Baldini explains. “These ideas became central to the theory of low-dimensional magnetism.”
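The six-state clock model itself is simple enough to simulate directly: spins on a square lattice take one of six angles 2πk/6 and interact via E = -J Σ cos(θi - θj) over nearest neighbours. The minimal Metropolis Monte Carlo sketch below – lattice size, temperatures and sweep counts are illustrative choices, not taken from the paper – shows the XY-like order parameter falling off as the temperature is raised through the expected transition region:

```python
# Metropolis Monte Carlo sketch of the 2D six-state clock model referred to
# above: spins take angles 2*pi*k/6 on a square lattice with energy
# E = -J * sum cos(theta_i - theta_j) over nearest neighbours. Lattice size,
# temperatures and sweep counts are illustrative, not the paper's values.
import numpy as np

L, J, q = 24, 1.0, 6                      # lattice size, coupling, clock states
rng = np.random.default_rng(7)
angles = 2 * np.pi * np.arange(q) / q

def sweep(spins, T):
    """One Metropolis sweep at temperature T (in units of J/k_B)."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        dE = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = spins[(i + di) % L, (j + dj) % L]
            dE += J * (np.cos(angles[spins[i, j]] - angles[nb])
                       - np.cos(angles[new] - angles[nb]))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new

for T in (0.3, 0.8, 1.5):                 # low, intermediate, high temperature
    spins = rng.integers(q, size=(L, L))
    for _ in range(200):
        sweep(spins, T)
    m = abs(np.exp(1j * angles[spins]).mean())   # XY-like order parameter
    print(f"T = {T}: |m| = {m:.2f}")
```

On small lattices the intermediate, BKT-like regime shows up as an order parameter that is neither fully saturated nor fully destroyed – a crude echo of the quasi-long-range order the theory predicts.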

Since these theories emerged, however, such effects have proven far more challenging to observe in real 2D materials.

Verifying the predictions

To tackle this challenge, Baldini’s team turned to a nonlinear optical microscopy technique based on second-harmonic generation, in which a material probed by intense light at one frequency emits secondary light at twice that frequency. Crucially, the polarization of this secondary light is highly sensitive to magnetic behaviour. This allowed the researchers to examine magnetic order in the atomically thin antiferromagnet nickel phosphorus trisulphide (NiPS3) without disrupting the system with invasive electrical contacts.

“By tracking how the optical response evolves with temperature, we were able to directly follow successive magnetic phase transitions and determine the universality class of the emergent magnetic phases,” Baldini explains. “In addition, polarization-resolved measurements allowed us to reconstruct the symmetry of the magnetic order parameter.”

As the researchers cooled the material, their measurements revealed two key phase transitions – each occurring suddenly below a distinct critical temperature. “The first transition marks the onset of a BKT phase, an unusual state in which magnetic correlations extend over long distances without forming conventional long-range order,” Baldini says.

In this phase, the material forms bound pairs of vortices and antivortices: topological defects in the spin field triggered by thermal fluctuations. Within these swirling patterns, spins collectively curl around single points, either in clockwise or anticlockwise directions.

At higher temperatures, these swirling patterns are isolated and can roam freely through the material, disrupting the emergence of long-range magnetic order. But when vortices and antivortices are bound together, their disruptive influences largely cancel each other out, allowing spin correlations to persist over longer distances while still remaining sensitive to thermal fluctuations.

As the researchers cooled the NiPS3 further, they observed a second phase transition, in which vortices and antivortices are suppressed and a six-state clock phase emerges. But this symmetry was constrained even further: across the whole system, the six possible spin orientations could themselves be arranged in just two distinct ways. This interplay between six- and two-fold anisotropy ultimately gives rise to stable long-range magnetic order, just as earlier theories had predicted.

Through their experimental validation, the team’s results shed new light on the rich and unexpected magnetic phenomena that can emerge in 2D materials. By revealing two distinct phases, the work highlights how magnetism can arise in fundamentally different ways from those seen in more familiar three-dimensional materials.

“More broadly, these results establish atomically thin magnets as a powerful platform for exploring topological phase transitions and may inspire new approaches to controlling magnetism at the nanoscale for future ultracompact technologies,” Baldini says.

The findings are reported in Nature Materials.

https://physicsworld.com/a/physicists-demonstrate-long-predicted-exotic-magnetic-phases-in-2d-material/
No Author

Inside the world’s particle‑physics labs: Global Physics Photowalk 2025 winners revealed

This is the fifth international photowalk following events held in 2010, 2012, 2015 and 2018

From an image of a detector hunting for signs of dark matter to a picture of a deep-sea neutrino telescope studying astrophysical phenomena, the winning entries for the 2025 Global Physics Photowalk have been announced by the Interactions Collaboration – an international network of particle physics institutions.

Some 16 labs around the world took part in the event, in which they opened up their labs for a day in 2025 to amateur and professional photographers.

Each lab then entered its top three images into the global competition, and from those 48 images a panel of judges selected their top three photos, while the public chose its three favourite images via an online vote held on 13–27 January.

Marco Donghia’s photograph, main image above, was picked in first place by the judges. It features a researcher sitting in front of the Cryogenic Laboratory for Detectors, which is based at the INFN National Laboratories of Frascati. The experiment aims to detect extremely weak and rare signals such as those produced by dark matter.

“Finding out I had won left me speechless,” notes Donghia. “The cryostat I photographed is just a few fractions of a degree above absolute zero, yet this recognition filled me with such warmth and emotion that no cryogenic temperature could cool them down.”

The image on the left won first place in the public vote. It shows the back of the linear accelerator of SPIRAL2 at the Large Heavy Ion National Accelerator (GANIL) in Caen, France.

Second place in the judges’ competition, meanwhile, went to Matteo Monzali for his photo, shown below, of the Advanced Gamma Tracking Array photon detector coupled with the PRISMA magnetic spectrometer.

The experiment is based at the TANDEM-ALPI-PIAVE accelerator complex at INFN National Laboratories in Legnaro.

The Advanced Gamma Tracking Array photon detector

Third place in the judges’ competition, below, went to a spectacular close-up image of a photomultiplier from the KM3NeT/ORCA experiment, a neutrino telescope currently being installed in the Mediterranean Sea off the coast of Provence, France.

A photomultiplier from the KM3NeT/ORCA experiment

This is the fifth international Photowalk following events held in 2010, 2012, 2015 and 2018.

All 48 images that were submitted to the 2025 competition can be viewed here.

https://physicsworld.com/a/inside-the-worlds-particle-physics-labs-global-physics-photowalk-2025-winners-revealed/
Michael Banks

Stripes of Enceladus: a jigsaw puzzle

Can you reconstruct the astrophysics image we’ve pulled apart?

There are two difficulty settings: choose between an 88-piece jigsaw and the 40-piece version.

Image courtesy: NASA/JPL/Space Science Institute

Fancy some more? Check out our puzzles page.

https://physicsworld.com/a/stripes-of-enceladus-a-jigsaw-puzzle/
No Author

Self-healing materials could make automobile parts last over 100 years

Scientists have created a material with the ability to repeatedly and autonomously repair cracks

Researchers from North Carolina State University and the University of Houston have achieved sustained self-healing of a composite material. The findings promise to extend the lifetime of aircraft and automotive parts by a century, according to a recent paper published in the Proceedings of the National Academy of Sciences.

Composite materials bond two or more components to achieve balanced strength, flexibility and durability. Bone is a naturally occurring example, combining flexible collagen fibres with the stiffness of various minerals. Fibre-reinforced polymers (FRPs) are synthetic analogues that embed strong fibres within a polymer matrix to achieve similar material advantages, making them ubiquitous in aerospace, naval and wind energy sectors.

While bonding multiple layers is necessary to achieve strength, it leaves the material prone to interlaminar delamination, or the separation of layers. Lead researcher Jack Turicek describes this type of delamination as “one of the most common and life-limiting failure modes in FRPs”. While nature boasts the remarkable ability to heal autonomously and repeatedly, achieving a similar feat in synthetic materials has only now become possible.

Healing by thermal remending

The researchers used a method known as “thermal remending” to enable self-healing. First, a healing agent, poly(ethylene-co-methacrylic acid) or EMAA, is embedded into a glass-fibre epoxy-matrix composite during curing. This forms strong covalent bonds between EMAA and the epoxy.

To test their materials, the researchers systematically created a fracture by applying controlled tensile loading until the fracture reached 50 mm. Then, to initiate healing, they warmed the material using built-in electrical heaters. The heat vaporized small water bubbles created during the initial curing process, which produced a microporous network that physically expanded and spread the EMAA into the fracture – the so-called “pressure delivery mechanism”.

Afterwards, 30 min of natural convective cooling to room temperature allowed the EMAA to solidify, forming new hydrogen and ionic bonds between EMAA and epoxy. The bonds reconnected the interfaces that had fractured, recovering the structural integrity of the material.

Self-healing of a composite material

The team repeated the entire procedure over 1000 cycles. Such a prolonged study was previously infeasible due to multi-day cycle lengths. In this work, the researchers set up programmable electrical, thermal and mechanical devices that automatically initiated fracturing, sensed progress to trigger healing, and monitored the rebonded crack before repeating the cycle. This automation reduced cycle lengths to an hour and the full experiment to only 40 days.

Understanding sustained healing

The team quantified the healing effectiveness using the critical strain energy release rate (GIC), a measure of the energy required to propagate a crack. A high GIC means that the material is resilient and well-healed. The EMAA-containing material showed maximum healing at test cycle 7, with 230% of the GIC value of an FRP containing no EMAA. The results declined to 180% by cycle 100 and 60% by cycle 1000. When the data were fitted to a Weibull distribution, a common model for material failure, healing asymptotically approached a lower limit of 40% – suggesting that sustained repair is possible.
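As a rough illustration of this kind of extrapolation, the sketch below fits a Weibull-shaped decay with a fixed 40% asymptote to the three efficiency figures quoted in this article. The functional form, the scipy fit and the extrapolation are our illustration under those assumptions, not the authors' actual analysis:

```python
# Illustrative Weibull-shaped fit to the healing-efficiency figures quoted
# in the text (cycle 7: 230%, cycle 100: 180%, cycle 1000: 60%), with the
# asymptote fixed at the reported 40%. Not the authors' analysis code.
import numpy as np
from scipy.optimize import curve_fit

def healing(n, eta0, lam, k, eta_inf=40.0):
    """Healing efficiency vs cycle number, decaying towards eta_inf."""
    return eta_inf + (eta0 - eta_inf) * np.exp(-(n / lam) ** k)

cycles = np.array([7.0, 100.0, 1000.0])
eff = np.array([230.0, 180.0, 60.0])      # % of the GIC of a non-EMAA FRP

popt, _ = curve_fit(healing, cycles, eff, p0=(230.0, 400.0, 0.8))
print("eta0 = {:.0f}%, lambda = {:.0f} cycles, k = {:.2f}".format(*popt))
print(f"extrapolated efficiency at cycle 5000: {healing(5000, *popt):.0f}%")
```

With three data points and three free parameters the fit passes through the quoted values exactly; its usefulness lies in showing how the curve flattens towards the 40% floor at large cycle numbers.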

Optical and electron microscopy revealed two reasons for the observed decline in healing performance. First, the repeated fracture and healing process resulted in accumulation of glass fibre debris in EMAA, which blocked bonding sites. Second, chemical reactions between EMAA and the epoxy matrix are responsible for creating strong covalent bonds between them (necessary for cohesive fracturing of EMAA) and producing the bubbles for the pressure delivery mechanism. The microscopy showed a decline in both reactions, reducing the effectiveness of fracture recovery.

From prototype to practice

Out of the 1000 cycles tested, the self-healing composite maintained over 100% fracture recovery compared with non-EMAA materials for 500 cycles. Based on a 500-cycle lifetime, parts made using the new material could last 125 to 500 years, assuming a quarterly or annual repair schedule – a timeline that far exceeds current design lifetimes of about 40 years.

Integration with existing industry infrastructure is forthcoming. “We have designed both the healing agent interlayers and the resistive heaters to be easily integrated into real-world composites with existing fabrication processes. These functional components enable in situ self-healing (i.e., in the service environment) via electrical power input to the heaters,” says Turicek. “To enable autonomous self-healing, a sensing element that can detect damage is needed to automatically trigger the power on, and power off once repaired. We have such technology on the near horizon.”

The technology has been patented by Jason Patrick, the principal investigator of this research and chief technology officer of the startup company Structeryx. Patrick says that the company intends to “engage with existing and new defence/industry partners to customize the technology for various needs”, in addition to scaling manufacturing.

While we often search for ways to fix broken items, materials of the future may simply fix themselves.

https://physicsworld.com/a/self-healing-materials-could-make-automobile-parts-last-over-100-years/
Candice Chua

A bursting bubble can make a puddle jump

In a breakthrough in droplet physics, researchers find a way to get centimetre-scale water droplets to jump into the air

Jiangtao Cheng of Virginia Tech

On a quiet spring morning, when dew settles on leaves, something curious sometimes happens. A droplet sitting there peacefully will suddenly lift off. No wind. No vibration. Just a tiny leap into the air.

Physicists call this phenomenon droplet jumping. In simple terms, it means that a droplet lifts off from the surface it sits on. If a raindrop hits a leaf and rebounds upward, that rebound can also be considered droplet jumping.

While this may seem like a minor detail in fluid behaviour, removing liquid from surfaces is important for many technologies. When droplets detach from a contaminated surface, they can carry away particles, a process that forms the basis of self-cleaning materials. When droplets leave hot surfaces, they remove heat. And on cold surfaces, quickly removing droplets can help prevent ice buildup.

For years, scientists believed that there was a physical limit to how large these jumping droplets could be. A new study published in Nature has now shown that this limit can be broken, with the help of a bubble.

The research was headed up by Jiangtao Cheng’s lab at Virginia Tech, and performed in collaboration with researchers from the Hong Kong University of Science and Technology and Wuhan University of Technology.

A stubborn limit in droplet physics

Within a droplet, two forces compete constantly: the first is surface tension; the second is gravity.

Surface tension tries to pull the droplet into a sphere, which minimizes its surface area and, therefore, its energy. Gravity, meanwhile, pulls the droplet downward, flattening it against the surface.

The balance between these two forces defines the so-called capillary length, which for water is 2.7 mm. Below this length, surface tension dominates and droplets can sometimes propel themselves upward; above it, gravity takes over.
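The quoted figure is easy to reproduce: the capillary length is the square root of surface tension divided by density times gravitational acceleration, and plugging in textbook room-temperature values for water (assumed below) gives 2.7 mm:

```python
# Quick check of the quoted capillary length: l_c = sqrt(gamma / (rho * g)).
# Standard textbook values for water at room temperature are assumed.
from math import sqrt

gamma = 0.072    # surface tension of water, N/m
rho = 1000.0     # density of water, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2

l_c = sqrt(gamma / (rho * g))
print(f"capillary length of water: {l_c * 1000:.1f} mm")   # ~2.7 mm
```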

This balance has long been a fundamental barrier in the field of self-propelled droplet jumping. “For droplets larger than the capillary length, gravity dominates,” Cheng tells Physics World. “Simply releasing surface energy from shape relaxation is no longer sufficient to generate enough upward momentum for jumping.”

That is why most previous studies have observed droplets no larger than about 3 mm jumping on their own.

Inspiration from nature

The idea behind the new research began with observations in nature. First author Wenge Huang, who grew up in rural South China, often saw dew droplets on lotus leaves containing tiny air bubbles. Occasionally, when those bubbles burst, the droplets moved.

Years later, that observation led to a question: “could a bubble trapped inside a droplet provide the extra energy needed for jumping?”

A bubble-powered launch

To test this idea, the researchers placed a water droplet on a superhydrophobic surface, which strongly repels water. They then injected air into the droplet using a fine needle, forming a bubble inside the liquid. After a short time, the bubble burst.

High-speed cameras captured what happened next: the droplet lifted cleanly off the surface.

What surprised the researchers most was that droplets nearly 1 cm wide were able to jump – far exceeding the previously accepted capillary length limitation.

A bubble inside the droplet creates additional air–liquid interfaces, increasing the system’s stored surface energy while adding almost no mass. When the bubble bursts, that energy is released as capillary waves that focus momentum upward.

“Embedding a bubble increases the system’s surface energy without increasing its weight,” explains Cheng.

Small bubbles, strong possibilities

The researchers also found that the mechanism was extremely efficient, converting more than 90% of the released energy into upward motion – well above the efficiency of many conventional droplet-propulsion methods.
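A back-of-envelope estimate shows how this energy bookkeeping can translate into a visible jump. The droplet and bubble radii below are illustrative assumptions, combined with the 90% conversion figure quoted above, rather than values from the study:

```python
# Back-of-envelope sketch of the energy bookkeeping described above: the
# extra air-water interface of an embedded bubble stores surface energy
# that, on bursting, can be converted into upward motion. All numbers
# (radii, 90% conversion) are illustrative assumptions.
from math import pi

gamma = 0.072            # surface tension of water, N/m
rho, g = 1000.0, 9.81    # water density (kg/m^3), gravity (m/s^2)

R = 4e-3                 # droplet radius, m (a droplet nearly 1 cm wide)
r = 2e-3                 # embedded bubble radius, m
efficiency = 0.9         # fraction of released energy going into upward motion

surface_energy = gamma * 4 * pi * r**2   # energy in the bubble's air-water wall
mass = rho * (4.0 / 3.0) * pi * R**3     # droplet mass (the bubble adds ~none)
jump_height = efficiency * surface_energy / (mass * g)
print(f"stored energy ~ {surface_energy * 1e6:.1f} uJ, "
      f"jump height ~ {jump_height * 1000:.2f} mm")
```

Even this crude estimate yields a millimetre-scale hop for a centimetre-scale droplet, consistent with the idea that the bubble supplies energy without adding weight.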

The implications extend beyond basic physics; the discovery could help improve self-cleaning surfaces, heat transfer systems and anti-icing technologies. The bubble-burst process can also create directional liquid jets, which could be useful for microscale 3D printing and material deposition.

In simple terms, the study revealed something unexpected. A single bursting bubble can launch a much larger droplet than scientists once thought possible, even at the centimetre scale.

https://physicsworld.com/a/a-bursting-bubble-can-make-a-puddle-jump/
No Author

Word flower puzzle no. 1

How many words can you find in this new interactive puzzle?

How did you get on?

20 words Warming up nicely

32 words Getting hot, hot, hot

45 words Top dog!

Fancy some more? Check out our puzzles page.

https://physicsworld.com/a/word-flower-puzzle-no-1/
No Author

Droplet scientists push the boundary between living and non-living matter

Systems governed by chemistry and physics, not biology, can behave in surprisingly lifelike ways, as Giorgio Volpe, Rob Malinowski and Joe Forth explain

In this episode of the Physics World Weekly podcast, we hear from a trio of scientists with a common interest in the physics of droplets. Specifically, Joe Forth, Rob Malinowski and Giorgio Volpe share a fascination with droplets that are “animate” – that is, capable of responding to their surroundings in ways that resemble the behaviour of living organisms.

As they explain in the podcast, systems must tick three boxes to qualify as animate. First, they must be active, able to use energy from their environment to do work and perform tasks. Second, they must be adaptive, able to move between different dynamical states in response to changes to their environment or their own internal states. Finally, they must be autonomous, able to process multiple inputs and choose how to respond to them without intervention from the outside world.

Incorporating all these behaviours into a droplet – or a system of many droplets – is challenging. The boundary between autonomous and non-autonomous systems is proving especially hard to overcome, and Volpe, Malinowski and Forth have a friendly disagreement over whether any droplet-based system has managed it yet.

Crossing disciplinary borders

Part of the challenge, they say, is that the field crosses disciplinary borders. Although Volpe thinks the community of droplet researchers is getting better at finding a common vocabulary for discussions, Forth jokes that it is still the case that “the chemists are scared of physics, the physicists are scared of chemists, everyone is scared of biology”. The potential rewards of overcoming these fears are great, however, with possible future applications of animate droplets ranging from consumer products such as deodorant to oil spill clean-up.

This discussion is based on a Perspective article that Volpe (a professor of soft matter in the chemistry department at University College London, UK), Malinowski (a research fellow in soft matter physics in the same department) and Forth (a colloid scientist and lecturer in the chemistry department at the University of Liverpool, UK) wrote for the journal EPL, which sponsors this episode of the podcast.

The post Droplet scientists push the boundary between living and non-living matter appeared first on Physics World.

https://physicsworld.com/a/droplet-scientists-push-the-boundary-between-living-and-non-living-matter/
Hamish Johnston

The American Physical Society’s 2026 Global Physics Summit opens in Denver

The "shared future" theme of the world's biggest physics meeting this year is opportune

The post The American Physical Society’s 2026 Global Physics Summit opens in Denver appeared first on Physics World.

The Global Physics Summit (GPS) bills itself as “the world’s largest physics research conference”. Organized by the American Physical Society (APS), it combines the previously separate APS March and April meetings, with at least 14,000 people expected to attend this year’s event in Denver, Colorado, which has the theme “science for a shared future”.

The two APS meetings (especially APS March) have long been pilgrimages for physicists. They’re a chance to meet people whose papers you’ve read, learn about new research, land a dream job or perhaps decide what your future physics career should look like. They offer unparalleled opportunities for gossiping, networking and making your name.

Sometimes they even host extraordinary announcements, such as in 2023 when one group claimed to have discovered room-temperature superconductors, or in 1987 when several groups really did present the first data on high-temperature ones.

Due to the current state of US politics, however, physicists from many countries may well have second thoughts about travelling to this and other scientific meetings in the US.

Indeed, if you’re from one of almost 40 nations to which the US government has partially or fully suspended issuing visas – supposedly “to protect the security of the United States” – you probably won’t be able to get into the country at all.

Among the countries affected by the Trump administration’s ban is Ethiopia, which is home to people like the physicist Mulugeta Bekele, who almost single-handedly kept Ethiopian physics alive in the 1970s and 1980s despite being jailed and tortured.

As Robert P Crease recounts in his latest feature, Mulugeta was awarded the APS’s Sakharov human-rights prize in 2012, picking up his award at that year’s APS March meeting in Boston. Would Mulugeta, I wonder, be able to enter the US in current circumstances?

One US physicist told me that outsiders should respond to the situation in America by boycotting the US entirely. To me, that’s a step too far, not least because breaking contact would show a lack of solidarity with US-based scientists suffering from funding cuts or worse. After all, physics is a global enterprise, as two recent Physics World articles make clear.

The first is a feature about quantifying the environmental impact of military conflicts by Ben Skuse. Numbers are hard to come by, but according to a 2022 estimate extrapolated from the small number of nations that do share their data, the total military carbon footprint is about 5.5% of global emissions. This would make the world’s militaries the fourth biggest carbon emitter if they were a nation.

In another feature, Michael Allen examines how climate change could trigger extreme changes in the activity of earthquakes and volcanoes. Worryingly, increased volcanic eruptions would not only add to the build-up of greenhouse gases but create other problems too. A warming climate also melts ice caps, lowering surface loads and potentially triggering more earthquakes.

Both issues – and many more besides – will only be solved through global, interdisciplinary collaborations. As the theme of the GPS quite rightly puts it, we need science for a shared future.

That’s why it’s great that the APS – along with AIP Publishing and IOP Publishing, which together form the Purpose-led Publishing (PLP) coalition – is hosting a network of 23 satellite events in Africa, Asia and South America to expand participation in this year’s GPS.

PLP’s satellite hubs, which will take place both in person and online, aim to let researchers engage with the summit programme, contribute to discussions, and take part in locally organized workshops and presentations.

Taking place in countries ranging from Brazil and Benin to the Philippines and Pakistan, the events will host livestreamed and recorded content from Denver as well as offering debates, expert-led sessions and opportunities for networking.

One event will be held in Ethiopia, which, I hope, Mulugeta at least will be pleased to hear.

The post The American Physical Society’s 2026 Global Physics Summit opens in Denver appeared first on Physics World.

https://physicsworld.com/a/the-american-physical-societys-2026-global-physics-summit-opens-in-denver/
Matin Durrani

Interplaying hazards: can you solve our crossword on geophysical processes?

Try out our quick crossword all about climate and geophysics

The post Interplaying hazards: can you solve our crossword on geophysical processes? appeared first on Physics World.

See how much you know about the subject by trying our interactive crossword. Most of the clues are based on the article, but there are a few additional brain teasers thrown in. If you’re feeling stuck, check out the “assist” menu for help.

The post Interplaying hazards: can you solve our crossword on geophysical processes? appeared first on Physics World.

https://physicsworld.com/a/interplaying-hazards-can-you-solve-our-crossword-on-geophysical-processes/
No Author

Lunar magnetic field mystery may finally have an explanation

Physicists weren’t sure why Moon rocks brought back during the Apollo missions are more strongly magnetized than models predict

The post Lunar magnetic field mystery may finally have an explanation appeared first on Physics World.

When the Apollo astronauts returned from the Moon, they brought a puzzle back with them. Some of the rocks they collected were so strongly magnetic that the Moon’s magnetic field must have been stronger than the Earth’s when the rocks formed, 3.9–3.5 billion years ago. “That doesn’t make any sense with the physics that we understand about how planets generate magnetic fields,” says Claire Nichols, a planetary geologist at the University of Oxford, UK.

Nichols and her Oxford colleagues Jon Wade and Simon N Stephenson have now identified a possible explanation. The key, they say, lies in the rocks’ composition – which, as it turns out, also made their source regions ideal spacecraft landing sites, introducing a sampling bias. “It was a proper kind of Eureka moment,” Nichols says.

The lunar dynamo

The magnetic fields of planets and moons stem from convective currents in their largely iron cores. Scientists expect that objects with smaller cores, such as the Moon, will have lower magnetic field strengths. But measurements of the Apollo samples suggested that the magnetic field strength might, in some cases, have exceeded 100 μT – higher than the typical value of 40 μT on the surface of the Earth. It’s as if an AA battery were somehow powering a fridge.

“The dynamo modelling community have been trying to come up with all sorts of mechanisms to give you these really strong fields,” Nichols tells Physics World.

When Nichols mentioned this problem to Wade, a petrologist, his response intrigued her. “He said, kind of as a throwaway comment, ‘Have you looked to see if there’s any link between the composition and the intensities?’”

Upon inspecting the data, Nichols realized that Wade could be onto something. While all the lunar basalt samples with high magnetization contained large quantities of titanium, samples with low magnetization contained little.

A possible mechanism

Other researchers had previously suggested a process that could have supercharged the Moon’s dynamo, boosting the magnetization of titanium-bearing basalt in the process. When the Moon formed, an ocean of molten magma developed that gradually crystallized into today’s lunar mantle. The last material to solidify was a titanium-rich mineral called ilmenite. Solid ilmenite is incredibly dense, so once it solidified, it sank towards the Moon’s core.

According to the hypothesis, heat transfer across the core-mantle boundary then pushed the ilmenite to its melting point and increased the local temperature gradient, thereby boosting convection and, by extension, magnetic field strength. This means that the ilmenite-bearing rocks supercharged the dynamo behind the Moon’s magnetic field and became unusually highly magnetized in the process. Eventually, volcanic activity brought the rocks to the lunar surface, where the Apollo astronauts collected them.

The problem with this explanation, Nichols says, is that the heat flux at the boundary would only be raised for brief periods, meaning that by this mechanism, only two in every thousand Apollo samples would be strongly magnetized. The real figure is roughly half.

A further role for heat transfer?

Nichols and her colleagues therefore dug deeper into the process. They realized that while the period of melting was brief, it played a crucial role in creating the samples the Apollo astronauts found. “Those samples are all being erupted only at the times where the heat flux is high,” Nichols tells Physics World. And when they eventually made their way to the lunar surface, they did so as part of basaltic flows, which happen to make perfect landing sites for spacecraft.

Case solved? Not quite. According to widely accepted theories of convection in the lunar mantle, the ilmenite lumps could not have got as far as the boundary between the core and mantle, because if they did, they would have lacked the buoyancy to rise again. Still, John Tarduno, whose research at the University of Rochester, US, centres on the origins of Earth’s dynamo, describes Nichols and colleagues’ ideas as “intriguing and certainly worth further consideration through data collection and modelling”.

Tarduno, who was not involved in this work, adds that he isn’t sure that core heat flux alone would ensure that the lunar core once had an intermittent strong dynamo. “The work should motivate numerical dynamo simulations as well as modelling of mantle evolution to test the authors’ ideas,” he says.

Nichols is up for the challenge. By studying additional Apollo samples, together with new ones from the Artemis and Chang’e missions to other parts of the Moon, she aims to determine whether magnetization intensity really does correlate with titanium content, and thereby lay the mystery to rest.

The study appears in Nature Geoscience.

The post Lunar magnetic field mystery may finally have an explanation appeared first on Physics World.

https://physicsworld.com/a/lunar-magnetic-field-mystery-may-finally-have-an-explanation/
Anna Demming

Licensing puts the power into nuclear fusion

A consultancy firm with expertise in radiation safety can help companies developing a new generation of commercial fusion reactors to navigate the regulatory framework

The post Licensing puts the power into nuclear fusion appeared first on Physics World.

Superheated: A growing number of companies are aiming to build compact reactors that will deliver electricity from nuclear fusion (Credit: shutterstock/Love Employee)

Nuclear fusion has long held the promise of providing an unlimited supply of clean energy, but turning such a compelling concept into a practical reality has always seemed just beyond reach. That could be about to change, with a new wave of commercial operators developing compact nuclear reactors that they believe could be providing the grid with useful amounts of electricity within the next 10 years.

Leading the way is the US, where a combination of federal grants and private capital is fuelling the drive towards commercial production. One company grabbing the headlines is Helion, which has broken ground on a power plant that is due to supply 50 MW of power to Microsoft by 2028. Commonwealth Fusion Systems, set up with the backing of the Massachusetts Institute of Technology, has also announced an agreement with Google that trades an early strategic investment for 200 MW of power when the company’s first reactor comes online in the early 2030s.

Such commercial interest has been buoyed by a clarification in the licensing regime, at least within the US. In 2023 the Nuclear Regulatory Commission (NRC), the federal agency responsible for nuclear safety, ruled that fusion reactors need not be governed by the highly restrictive framework that applies to existing power plants based on nuclear fission. Instead, fusion developers must comply with the part of the code that is primarily focused on the handling of radioactive material.

“That was a big win for the industry,” says Steve Bump, an expert in radiation safety and licensing at consultancy firm Dade Moeller, part of the NV5 group. “Fusion is a much safer process because there is no spent fuel to deal with and there is no risk of the reaction running out of control. In the event of a system failure, everything just stops.”

Growth industry

Almost 50 companies are now actively involved in fusion development and research within the US, while others are active in the UK, China and Europe. Different reactor designs are being pursued, but each relies on heating a plasma containing deuterium and tritium to extreme temperatures and then confining the superheated plasma. When the light atomic nuclei collide and fuse together – which requires the plasma to reach temperatures above 100 million degrees Celsius – the nuclear reaction releases helium nuclei and high-energy neutrons, along with a vast amount of energy.

Nuclear fusion has already been shown to deliver intense bursts of energy that exceed the power needed to generate and sustain the plasma, but no-one has yet managed to produce a steady supply of electricity from the process. “The fusion industry is often characterized as a race,” says Bump. “There are many new companies that are aiming to build a commercially viable power plant that can be scaled up and replicated in multiple locations.”

Amid this rapid expansion, one upshot from the NRC ruling is that state-level regulators now have the authority to award licences for fusion reactors, provided that they follow the framework set out by the federal agency. But these state regulators are more accustomed to issuing licences to healthcare providers or research institutes that need to handle small amounts of radioactive material, and they are often wary of applications from fusion developers that ask for large quantities of radioactive tritium. “The amounts required for fusion can produce thousands and thousands of curies, while most other applications need less than a microcurie,” says Bump. “That makes it very different from a licensing standpoint, and the state agencies don’t have much experience with activities that use that much material. It makes them nervous.”
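
To get a sense of the scale gap Bump describes, the activity of a tritium inventory can be estimated from the isotope’s half-life (about 12.32 years) and molar mass alone. A back-of-the-envelope sketch in Python – the gram quantities here are purely illustrative examples, not licensed limits for any facility:

```python
# Rough activity of a tritium inventory, A = lambda * N, from first principles.
# Illustrative only: the quantities are examples, not actual licensed limits.
import math

HALF_LIFE_S = 12.32 * 365.25 * 24 * 3600  # tritium half-life (~12.32 years) in seconds
MOLAR_MASS_G = 3.016                      # molar mass of tritium (H-3), g/mol
AVOGADRO = 6.022e23                       # atoms per mole
BQ_PER_CI = 3.7e10                        # becquerels per curie

def tritium_activity_ci(grams):
    """Activity of `grams` of tritium, in curies."""
    decay_const = math.log(2) / HALF_LIFE_S      # decay constant, s^-1
    atoms = grams / MOLAR_MASS_G * AVOGADRO
    return decay_const * atoms / BQ_PER_CI

print(f"1 g of tritium ~ {tritium_activity_ci(1.0):,.0f} Ci")       # ~9,600 Ci
print(f"1 microcurie   ~ {1e-6 / tritium_activity_ci(1.0):.1e} g")  # ~1e-10 g
```

In other words, “thousands and thousands of curies” corresponds to mere grams of tritium, while a microcurie licence covers roughly a tenth of a nanogram.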

Bump and his colleagues can help fusion companies to reassure the state regulators that all the evaluations have been done correctly. “Each state agency is a little different, and we need to work with each one to find out what they need and what they will accept,” adds Bump. “They need to consider the impact of the facility on public safety and the local environment, and they are going to ask questions before they are confident enough to issue a licence.”

That abundance of caution means that each application must be customized to address the concerns of each regulator. One area that receives particular scrutiny is the amount of shielding needed to protect people from the energetic neutrons produced by the fusion reaction. Slowing down and absorbing these neutral particles is a difficult process, requiring a multi-stage strategy that typically includes water-cooled steel and walls made of reinforced concrete.

As part of the licence application, companies need to demonstrate that their shielding mechanisms reduce the radiation dose to acceptable levels, both for people working inside the facility and those living and working in the neighbourhood. “We can review the shielding evaluations produced by companies before they are submitted to the state regulators,” says Bump. “A big priority for them is to ensure that people in and around the plant are safe from any exposure, and we can help to ensure that the information provided by the company is clear, thorough and accurate.”

Practical advice

The experts at Dade Moeller can also help fusion developers to make a realistic assessment of the amount of tritium they will need, since any licence will place a limit on how much radioactive material can be held within the facility. In addition, they can advise companies on how to establish and document failsafe procedures for storing and using tritium, along with real-time monitoring systems to ensure that emissions of tritium gas are kept within regulated limits. “We also look at the potential dose consequences if there is an accidental release, along with any emergency planning that may be needed if any radioactive material does escape,” adds Bump.

As well as providing the technical documentation needed by the regulators, fusion companies need to gain the support of local residents and businesses. Outreach events and public meetings are critical to explain how the technology works, openly discuss the risks and mitigation strategies, and highlight the benefits to the surrounding community. “We have attended some of the public meetings where people have had the opportunity to ask questions and voice their concerns,” says Bump. “We can help companies to prepare helpful and informative answers, particularly when questions are submitted prior to the meeting.”

If these efforts are successful, local communities often welcome the economic boost that a commercial power plant could bring, such as the creation of highly skilled jobs and the potential to attract other businesses to the area. Several fusion companies are planning to build their production facilities on the sites of former coal-fired power stations, potentially breathing new life into small cities suffering from a post-industrial malaise.

These sites also provide prospective commercial operators with easy access to the existing electrical infrastructure. “It’s convenient for them because there is no need to install new transmission lines,” says Bump. “If they can make electricity, they can simply connect to the grid through the existing substation.”

Most commercial developers are currently building and testing pilot machines, with commercial production expected in the 2030s. As they make that transition, Bump and his colleagues can provide the expertise needed to navigate the licensing requirements across different states. “We can offer advice on how to get started, and how to set up a framework for radiation protection that will support companies as they scale up their operations,” says Bump. “It’s a growing industry, and we are here to help.”

The post Licensing puts the power into nuclear fusion appeared first on Physics World.

https://physicsworld.com/a/licensing-puts-the-power-into-nuclear-fusion/
No Author

Celebrating 100 years of physics at Tsinghua University

Wenhui Duan outlines how Tsinghua aims to continue its rich history in physics

The post Celebrating 100 years of physics at Tsinghua University appeared first on Physics World.

Can you tell us about your career in physics?

My academic path in physics at Tsinghua University began in 1981, and I completed a bachelor’s and a master’s there before earning a PhD in 1992. I then did a postdoc at the Central Iron & Steel Research Institute in Beijing before returning to Tsinghua University in 1994 as a faculty member in the physics department.

Have you always studied and worked in China?

During my time at Tsinghua I carried out two research visits abroad, first at the University of Minnesota from 1996 to 1999 and then at the University of California, Berkeley from 2002 to 2003.

What is your research focus?

My career has been centred on employing and developing theoretical computational methods to understand, predict and design the physical properties of materials from the microscopic level of atoms and electrons. My work is an attempt to use a “computational microscope” to probe the fundamental nature of materials and sketch blueprints for new ones. This journey from fundamental theory to potential application has been continuously challenging and immensely rewarding.

Can you explain some examples?

One is in the theoretical study of topological quantum materials. We have performed theoretical work predicting the potential for the quantum spin Hall effect in two-dimensional systems and we have explored new states of matter such as topological semimetals. Another avenue of research is on the physics of low-dimensional and artificial microstructures. My group has a long-standing interest in the electronic structure, magnetic properties, and optical responses of low-dimensional systems like graphene and two-dimensional magnetic materials. Recently, our team discovered a novel spin chirality-driven nonlinear optical effect in a 2D magnetic material.

Are you using AI in this endeavour?

Yes. A significant recent focus is pioneering the integration of artificial intelligence with computational materials science. We are developing deep-learning models that are compatible with mainstream computational frameworks to increase the efficiency of simulating complex material systems and accelerate the discovery of new materials.

What areas of physics research is Tsinghua active in?

Our department boasts a robust and comprehensive research portfolio. Our research can be mainly outlined as three core directions. The first is condensed-matter physics, which has historically been one of our largest and most prominent areas. Research here spans from fundamental quantum phenomena to materials design for future technologies.

Experimentally we work in areas such as topological quantum materials, high-temperature superconductivity, two-dimensional systems, and novel magnetic phenomena. The recent experimental discovery of the quantum anomalous Hall effect at Tsinghua is one example. Theoreticians, including my group, focus on predicting new quantum states and understanding complex electronic behaviours using first-principles calculations and model analysis.

What about the other two areas?

The second area is atomic, molecular, and optical physics. Key topics include ultra-cold atoms for quantum simulation of complex many-body problems, quantum optics and quantum communication and precision measurement science. Work here often provides the physical platforms and techniques that enable advances in quantum-information science.

The other area is nuclear physics and particle physics. In particle physics, our faculty and students work in major international collaborations such as the Large Hadron Collider. Besides these core directions, we also have programmes in astrophysics/cosmology and in biophysics. The emergent field of quantum-information science connects nearly all these areas, making it a defining feature of our current research environment.

Are there some areas of physics that Tsinghua might increase its efforts in?

One is the integration of artificial intelligence and machine learning with fundamental physics research. In my own field of computational materials science, we are already using AI to accelerate the discovery of new quantum materials and predict complex properties with unprecedented speed. This approach should be expanded and deepened across the department — from using AI to analyse data from particle colliders and gravitational-wave detectors, to developing new algorithms for quantum many-body problems and astrophysical simulations.

Any other areas?

We must also intensify our efforts in the development and application of quantum technologies. We already have excellent groups in quantum information, quantum optics and quantum materials so the next step is to combine these strengths towards the engineering of functional quantum systems.

What are some of the major international institutions that Tsinghua collaborates with?

Internationally, our researchers are embedded in several “big science” projects such as the XENON collaboration for direct dark-matter detection, particle physics experiments like ATLAS, CMS and FASER at CERN as well as the LIGO collaboration in gravitational-wave astronomy.

What about those closer to home?

Domestically, we work with the Institute of Physics at the Chinese Academy of Sciences and the Beijing Academy of Quantum Information Sciences, particularly in areas like condensed matter and quantum science. We also value industry partnerships, a notable example being our long-standing collaboration with Foxconn, which formed the joint Foxconn Nanotechnology Center within our department.

How many students and staff are there in Tsinghua’s physics department?

We have an academic community of more than 900 people: 85 faculty members, around 100 staff members, 420 graduate students and 320 undergraduate students.

How many foreign staff and students do you have?

We currently have four foreign professors together with 11 international undergraduates and five international PhD candidates – from Malaysia, Germany, Belarus, Russia, and Iran.

Would you like to see these numbers increase?

Yes, but my emphasis is more on qualitative enhancement than just quantitative increase. A more diverse international community brings essential perspectives that challenge assumptions, spark innovation and elevate our collective work to a global standard. We are working to create an even more welcoming and supportive environment – through dedicated discussions on internationalization, fostering research collaborations, and hosting global conferences.

Why is Tsinghua an attractive place to work?

Its appeal lies not in any single attribute but in a unique ecosystem that fosters research and innovation. First is Tsinghua’s strength across science and engineering, which creates a natural incubator for interdisciplinary work. My own research, particularly in integrating advanced computational methods with materials discovery, has been significantly accelerated by collaboration with leading experts in adjacent fields.

Second is the balance of academic freedom and responsibility. The university provides substantial intellectual freedom and long-term support, allowing researchers to pursue high-risk, fundamental questions without being bound solely by short-term deliverables. Coupled with this freedom is a profound sense of responsibility to contribute to national and global scientific efforts, an ethos deeply embedded in Tsinghua’s tradition.

Third is the quality of the students. Engaging with some of China’s most talented and driven young minds is perhaps the greatest privilege. Their curiosity, rigour and fresh perspectives constantly challenge and renew my own thinking. Mentoring them from promising undergraduates to independent researchers is a core part of the scientific legacy we build here.

What events do you have planned to mark the centenary of physics at Tsinghua?

We have a number of activities planned, including the publication of an updated departmental history book that formally documents our century-long journey from 1926 to the present, as well as producing a centennial documentary film. We also have an alumni interview series and department exhibitions to visually narrate our history and scientific contributions.

We are collaborating with the Chinese Physical Society, the Chinese Academy of Sciences and the National Natural Science Foundation as well as IOP Publishing to publish commemorative special issues throughout the year. There will also be a series of high-level academic forums and lecture series at Tsinghua. The culmination of the year’s celebration will be the Centennial Commemoration Conference on Saturday 5 September.

What do you hope for Tsinghua in the coming 100 years?

First, I hope we become the world’s leading centre for a new way of doing physics: integrating AI directly into the core of our research cycle. This means moving beyond using AI just as a tool. I envision a future where AI actively helps us formulate new theories about quantum materials, guides the design of critical experiments in astrophysics and particle detection, and even controls advanced instruments to run complex measurements. Our goal should be to pioneer an “AI-scientist” partnership, making it as natural as using a microscope.

Second, I hope we are known not just for our discoveries, but for building essential research “bridges” that solve big problems. This means deeply partnering with our engineering schools to turn quantum science into reliable technology as well as with life sciences and environmental science to apply physical principles to global challenges in health and sustainability. We aim to educate students who are not just technically able, but who are also ethically grounded and driven.

If we succeed, then Tsinghua Physics will continue to contribute meaningfully, not just to the scientific community, but to the broader human endeavour of understanding our world. That is the enduring legacy we strive for.

The post Celebrating 100 years of physics at Tsinghua University appeared first on Physics World.

https://physicsworld.com/a/celebrating-100-years-of-physics-at-tsinghua-university/
Michael Banks

Cobalt dissolution from PtₓCo/C cathode catalysts in PEM fuel cells: in situ quantification and removal methods

Tracking Co²⁺ leaching from PtₓCo/C catalysts and recovering cation‑induced performance losses in PEM fuel cells. Learn more in this webinar

The post Cobalt dissolution from PtₓCo/C cathode catalysts in PEM fuel cells: in situ quantification and removal methods appeared first on Physics World.

Pt-alloy/C catalysts, such as PtₓCo/C, are used as cathode catalysts in proton-exchange membrane (PEM) fuel cells due to their exceptionally high kinetic activity for the oxygen reduction reaction (ORR). However, the performance and durability of membrane electrode assemblies (MEAs) with a PtₓCo/C cathode catalyst are impaired by the dissolution of Co²⁺ cations into the ionomer phase of the MEA.

In the first part of this webinar, an in situ method to quantify the amount of Co²⁺ contamination in an MEA via electrochemical impedance spectroscopy (EIS) is presented. Pt/C model MEAs doped with different amounts of Co²⁺ ions are used to analyze the effects of Co²⁺ contamination on the H₂/air performance and on ionic resistances under various conditions, highlighting the role of the inactive membrane area. Based on these model MEAs, a calibration curve is established that correlates the high-frequency resistance (HFR) under dry conditions to the amount of Co²⁺ in the MEA. Due to the high sensitivity of the dry HFR to metal cations, this method enables the tracking of Co²⁺ leaching from a Pt2.5Co/C MEA in voltage cycling accelerated stress tests.
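
The abstract does not give the calibration data itself, but the underlying idea – fit the dry HFR against known Co²⁺ doping levels, then invert the fit to estimate contamination in an aged MEA – can be sketched with hypothetical numbers:

```python
# Sketch of the calibration-curve idea: fit dry high-frequency resistance (HFR)
# against known Co2+ doping levels, then invert the fit to estimate the
# contamination in an aged MEA. All data points here are invented placeholders,
# not values from the webinar.
import numpy as np

co_doping = np.array([0.0, 5.0, 10.0, 20.0, 40.0])       # Co2+ content, % of ionomer sites (hypothetical)
dry_hfr   = np.array([55.0, 68.0, 82.0, 109.0, 163.0])   # dry HFR, mOhm*cm^2 (hypothetical)

slope, intercept = np.polyfit(co_doping, dry_hfr, 1)     # linear calibration fit

def co_from_hfr(hfr_measured):
    """Invert the calibration: estimate Co2+ content from a measured dry HFR."""
    return (hfr_measured - intercept) / slope

print(f"calibration: HFR = {slope:.2f} * Co + {intercept:.1f}")
print(f"aged MEA at 95 mOhm*cm^2 -> ~{co_from_hfr(95.0):.1f}% Co2+")
```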

In the second part, a recovery method to remove cationic contaminants from an MEA using CO₂–O₂ cathode gas feeds is presented. With this method, cation-induced performance losses of aged PtₓCo/C MEAs can be largely recovered. The mechanism of cation removal and opportunities for improving the durability of Pt-alloy/C MEAs are discussed.

Markus Schilling

Markus Schilling is a PhD student at the chair of technical electrochemistry under the supervision of Prof Hubert A Gasteiger at the Technische Universität München. In his research, he investigates the degradation of Pt-alloy on carbon cathode catalysts (e.g., PtCo/C) for PEM fuel cells, with the aim of deepening the understanding of aging mechanisms and identifying strategies to increase durability. Current works include catalyst pre-treatments, development of diagnostic methods on the cell level, voltage cycling accelerated stress testing, and recovery methods.

Schilling received his BSc in 2019 from the Universität Konstanz and his MSc in 2022 from the Technische Universität München, where he investigated PEM fuel cell catalyst inks in his thesis, supervised by Prof Gasteiger.

The post Cobalt dissolution from PtₓCo/C cathode catalysts in PEM fuel cells: in situ quantification and removal methods appeared first on Physics World.

https://physicsworld.com/a/cobalt-dissolution-from-pt%e2%82%93co-c-cathode-catalysts-in-pem-fuel-cells-in-situ-quantification-and-removal-methods/
No Author

Compact optical amplifier is efficient enough for on-chip integration

Low-power optical device achieves around 100 times amplification using just a couple of hundred milliwatts of input power

The post Compact optical amplifier is efficient enough for on-chip integration appeared first on Physics World.

Light forms the backbone of many of today’s advanced technologies, offering the ability to transmit data and information much more quickly than electrons can. Within optical networks, optical amplifiers are used to increase the intensity of light and enable its transmission over long distances. Without this ability to amplify optical signals, satellite technology, long-distance fibre-optic communications and quantum information processing would not be possible. But many optical amplifiers use a lot of power, limiting their deployment.

Modern photonic devices are continually getting smaller and more efficient, and researchers from Stanford University have now developed a fingertip-sized optical amplifier that consumes very little power – a feat achieved by recycling the energy used to pump it.

The low-power optical amplifier operates across the optical spectrum and is small and efficient enough to be integrated on a chip. The device achieved more than 17 dB gain using less than 200 mW of input power – an order of magnitude improvement over previous optical amplifiers of a similar size.
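
For orientation, decibels convert to a linear amplification factor as G = 10^(dB/10): 17 dB is a factor of about 50, and the “around 100 times” quoted above corresponds to 20 dB. A two-line check:

```python
# Convert decibel gain to a linear amplification factor: G = 10**(dB / 10).
def db_to_linear(gain_db):
    return 10 ** (gain_db / 10)

for g_db in (17, 20):
    print(f"{g_db} dB -> x{db_to_linear(g_db):.0f}")  # 17 dB ~ x50, 20 dB = x100
```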

“We wanted to store up optical energy and release it in intense bursts, kind of like how a Q-switched laser works, but now with an optical resonator being the store of energy that fills up,” explains senior author Amir Safavi-Naeini. “After a few months we started to see that it could address other challenges we had in the lab, like building a broadband low-power amplifier for squeezing light in a chip-scale device.”

Optical parametric amplifiers

There are many types of optical amplifiers. Erbium-doped amplifiers are common in telecommunications but only work within specific wavelength bands, while semiconductor amplifiers function over a larger range of wavelengths but are limited by high noise. Optical parametric amplifiers (OPAs) are seen as the bridge between the two. OPAs, which use nonlinear interactions to transfer energy from a pump beam into signal photons, offer high gain, wide bandwidth and low noise.

A high gain boosts signals above noise levels, while the broad bandwidth enables amplification of ultrafast or wavelength-division-multiplexed optical signals. However, as they typically require watt-level power, OPAs have been difficult to miniaturize and integrate onto tiny photonic chips. For most amplifiers, achieving a high gain requires a high power input, which is counterproductive to miniaturization.

Integrating pump lasers into the photonic chip is not ideal, and an external optical pump is seen as an alternative, but this usually requires a pump at the second harmonic (twice the frequency of the light being amplified). In the new design, the researchers instead use an external pump laser at the fundamental wavelength, coupled by lensed fibre onto the chip, where it generates the resonant second-harmonic pump – with a new loop design reducing the power requirements.

“The trick is that we trap and recirculate the shorter-wavelength pump light in a loop, not the signal,” Safavi-Naeini explains. “This gives you the efficiency boost of a resonator without narrowing the amplification bandwidth.”

A low-power optical amplifier

The team built the low-power OPA using thin-film lithium niobate, which offers large second-order nonlinearity and tight optical confinement. The big advantage, however, lies in its second-harmonic resonant design, in which the optical pump is frequency-doubled inside a cavity. The pump light travels in a circular loop, its intensity building up until the desired power is reached. The amplified signal is then output with near-quantum-limited noise performance over a broad amplification bandwidth of 110 nm.

Performing the amplification inside the cavity reduces the required power because the OPA is powered by energy stored inside the light beam. “The pump light is generated inside the pump resonator, not coupled in. This means we can efficiently fill up this resonator without dealing with impedance matching constraints that limit other nonlinear devices,” explains Safavi-Naeini. “The pump field is therefore larger than what we can even couple into the chip, so we get a boost that otherwise wouldn’t be possible.”
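
A generic, textbook way to see why recirculating pump light helps: on resonance, a ring resonator builds up circulating power by a factor of roughly t²/(1 − ra)², where t and r are the input coupler’s amplitude transmission and reflection and a is the round-trip amplitude survival. The numbers below are illustrative assumptions, not the actual device parameters:

```python
# Generic ring-resonator build-up factor on resonance:
#   P_circ / P_in = t**2 / (1 - r*a)**2
# with r, t the coupler's amplitude reflection/transmission (r**2 + t**2 = 1
# for a lossless coupler) and a the round-trip amplitude survival.
# Illustrative numbers only -- not the Stanford device's parameters.
def buildup(r, a):
    t_sq = 1 - r**2
    return t_sq / (1 - r * a)**2

print(f"r=0.99, a=0.995 -> circulating power x{buildup(0.99, 0.995):.0f}")  # ~90x
```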

The small-scale and low-power architecture could be used to build on-chip OPAs across a range of applications, including data communications technology, biosensors and novel light sources. The amplifier is also small and efficient enough to be powered by a battery, making it suitable for use in laptops and smartphones.

Looking ahead, Safavi-Naeini says that the goal is “to combine this amplifier with a small on-chip laser, so the whole thing is self-contained without bulky external equipment, and use it to generate large amounts of quantum squeezing in an integrated device”. In the short-term, he suggests that fabrication improvements could cut the power requirements by another factor of ten. “We’re looking to push the sensitivity beyond what’s currently possible with classical technologies.”

The research is reported in Nature.

The post Compact optical amplifier is efficient enough for on-chip integration appeared first on Physics World.

https://physicsworld.com/a/compact-optical-amplifier-is-efficient-enough-for-on-chip-integration/
No Author

The search for new bosons beyond Higgs

CMS researchers probed top‑quark pairs for signs of new scalar and pseudoscalar particles

The post The search for new bosons beyond Higgs appeared first on Physics World.

Particle physicists have been searching for new fundamental scalar and pseudoscalar bosons because, if discovered, they could reveal physics beyond the Standard Model and help explain mysteries such as dark matter and even why the Higgs exists. The Higgs remains the only confirmed scalar boson, and no pseudoscalar bosons have yet been observed, though they are predicted, for example, in theories involving axions and axion‑like particles. One promising way to find them is to look for their decay into a top quark and antiquark pair (tt̄).

Using the CMS detector at the Large Hadron Collider, researchers analysed 138 fb⁻¹ of proton–proton collision data. They reconstructed the invariant mass of the tt̄ system and used angular variables sensitive to its spin and parity to distinguish potential signals from the Standard Model background. Crucially, the analysis includes interference between any new boson and the Standard Model tt̄ production, which can create peak-dip distortions in the invariant mass of the tt̄ system rather than a simple bump. The observed event yield is consistent with the Standard Model prediction over the majority of the invariant mass spectrum, thus excluding a contribution from a potential new boson.
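
To illustrate why interference matters, here is a toy numerical sketch of how a resonance added coherently to a smooth background produces a peak-dip structure rather than a plain bump. All masses, widths and couplings are invented for illustration; this is not the CMS analysis:

```python
# Toy sketch of signal-background interference in an invariant-mass spectrum.
# Adding a resonance amplitude coherently to a smooth background gives
# |A_bkg + A_res|^2, whose cross term changes sign across the resonance mass,
# producing a peak-dip shape instead of a simple bump.
# All masses, widths and couplings here are invented for illustration.
import numpy as np

m = np.linspace(360.0, 440.0, 9)   # invariant mass grid, GeV
m0, width, g = 400.0, 20.0, 2.0    # hypothetical resonance mass, width, coupling

a_bkg = 1.0 / m                                # smooth, real background amplitude (toy)
a_res = g / (m**2 - m0**2 + 1j * m0 * width)   # Breit-Wigner resonance amplitude (toy)

bump_only = np.abs(a_bkg)**2 + np.abs(a_res)**2   # incoherent sum: a plain bump
peak_dip  = np.abs(a_bkg + a_res)**2              # coherent sum: dip below m0, peak above

for mi, b, pd in zip(m, bump_only, peak_dip):
    print(f"m = {mi:5.1f} GeV   incoherent = {b:.3e}   coherent = {pd:.3e}")
```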

However, CMS observed a significant excess near the threshold of tt̄ production, where the energy of the colliding particles is just enough to produce a top quark–antiquark pair. This excess has a local significance above five standard deviations, and the kinematics of these events is more consistent with a pseudoscalar than a scalar interpretation. The excess could also be explained by a predicted tt̄ quasi-bound state, known as toponium, which fits the data without requiring new particles beyond the Standard Model.

The researchers set upper limits on how strongly new bosons could couple to top quarks across masses from 365 to 1000 GeV and widths from 0.5% to 25%. These constraints exclude couplings down to around 0.3 for pseudoscalars and 0.4 for scalars, providing the most stringent limits to date for scalar resonances decaying to tt̄.

Do you want to learn more about this topic?

Prospects for Higgs physics at energies up to 100 TeV by Julien Baglio, Abdelhak Djouadi and Jérémie Quevillon (2016)

The post The search for new bosons beyond Higgs appeared first on Physics World.

https://physicsworld.com/a/the-search-for-new-bosons-beyond-higgs/
Lorna Brigham

Pushing thermopower to the 2D limit

LETO/ETO superlattices achieve 20× thermopower enhancement through true 2D electron behaviour

The post Pushing thermopower to the 2D limit appeared first on Physics World.

Thermoelectric materials convert heat into electricity, and their effectiveness is largely determined by their thermopower (Seebeck coefficient) – the voltage generated per unit temperature difference – which reflects how charge carriers respond to their environment. Designing materials with very high thermopower is important because it boosts overall thermoelectric efficiency, enabling sensors with stronger voltage output, higher sensitivity, and the ability to detect smaller temperature changes. High thermopower also allows for thinner, lighter, and potentially flexible devices that use less material. In 2D materials, electrons become confined to very thin layers, altering their energy levels in ways that can dramatically increase thermopower.

The researchers explore this effect using superlattices made of La-doped EuTiO₃ and undoped EuTiO₃ (LETO/ETO), where both dimensional confinement and electronic correlation effects play key roles. These structures achieve stronger 2D confinement than the commonly used SrTiO₃, which has a large Bohr radius that prevents electrons from being tightly localized. In contrast, the LETO/ETO system has a much smaller effective Bohr radius, allowing electrons to behave more like a true 2D gas. The Eu 4f electrons further modify the local potential landscape, strengthening confinement and producing orbital-selective localization, particularly of the Ti 3dₓᵧ states that dominate the enhanced thermopower response.
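
As a rough guide to the confinement argument, the hydrogenic effective Bohr radius scales as a* = ε_r(m_e/m*)a₀. The sketch below uses assumed, order-of-magnitude parameter values (not measured data for these materials) simply to show how a large dielectric constant inflates an electron’s natural size:

```python
# Hydrogenic effective Bohr radius, a* = eps_r * (m_e / m_eff) * a_0, which sets
# the length scale below which an electron resists being squeezed. Parameter
# values are rough assumptions for illustration, not measured material data.
A0_NM = 0.0529  # hydrogen Bohr radius, nm

def effective_bohr_radius_nm(eps_r, m_eff_over_me):
    return eps_r * A0_NM / m_eff_over_me

print(effective_bohr_radius_nm(300.0, 1.5))  # ~10.6 nm: SrTiO3-like large dielectric constant
print(effective_bohr_radius_nm(40.0, 1.5))   # ~1.4 nm: a smaller-eps_r oxide confines far more tightly
```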

A group photo of the Epitaxial Complex Oxide Laboratory at the summit of Halla Mountain on Jeju Island. Pictured is first author Dr. Dongwon Shin (front row, centre) alongside corresponding author Prof. Woo Seok Choi (back row, second from the right).

As a result, the thermopower becomes extremely large, up to 950 μV K⁻¹, and as much as 20 times higher in the 2D configuration than in the 3D case, an improvement roughly twice that achieved in comparable SrTiO₃-based superlattices. Thermopower measurements and hybrid density functional theory calculations confirm that this enhancement arises from the combined effects of strong confinement, modified band structure, and correlation-driven changes to the Ti 3d electron distribution.

Overall, the study demonstrates a new design strategy for thermoelectric materials that combines material selection (small Bohr radius, 4f-assisted confinement) with dimensional engineering to create ultrathin superlattices that force electrons into 2D behaviour. The authors note that future Hall measurements and conductivity optimization will be important for evaluating the power factor and ZT (the dimensionless figure of merit that quantifies a material’s thermoelectric performance), and that integrating these oxide superlattices with emerging freestanding-membrane techniques could enable flexible, high-sensitivity thermal sensing platforms.
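
For reference, ZT is conventionally defined as ZT = S²σT/κ, with S the thermopower, σ the electrical conductivity, T the absolute temperature and κ the thermal conductivity. A minimal sketch, where the conductivity values are placeholders rather than measurements from this work:

```python
# Dimensionless thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa.
# The conductivity values below are placeholders, not data from the paper.
def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_m_k, temperature_k):
    return seebeck_v_per_k**2 * sigma_s_per_m * temperature_k / kappa_w_per_m_k

# Using the reported S = 950 uV/K with assumed sigma = 1e4 S/m, kappa = 5 W/(m K):
print(figure_of_merit(950e-6, 1.0e4, 5.0, 300.0))  # ~0.54 with these placeholder inputs
```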

Read the full article

Improving 2D-ness to enhance thermopower in oxide superlattices

Dongwon Shin et al 2026 Rep. Prog. Phys. 89 010501

Do you want to learn more about this topic?

Tuning phonon properties in thermoelectric materials by G P Srivastava (2015)

The post Pushing thermopower to the 2D limit appeared first on Physics World.

https://physicsworld.com/a/pushing-thermopower-to-the-2d-limit/
Lorna Brigham

A physicist’s journey into nuclear energy

Early-career physicist Natasha Khan talks about her journey to becoming a nuclear safety engineer

The post A physicist’s journey into nuclear energy appeared first on Physics World.

When I started my physics degree, I knew it could open the door to a range of career opportunities, but I wasn’t sure what path it would take me down. In the end, it was the optional modules that encouraged my interest in nuclear energy physics, steering me towards my current job as a nuclear safety engineer.

When I was looking at university degrees, I thought about studying chemical engineering, but my A-level physics teacher inspired me to consider physics instead. I’d always been fascinated with the subject, and enjoyed maths (and a challenge) too, so I thought why not give it a go.

I went on to study physics at the University of Liverpool, graduating in 2021. I absolutely loved the city and would highly recommend it to anyone considering physics – or any degree, for that matter. The campus is fantastic and Liverpool is an amazing place to be a student.

My undergraduate experience was incredibly rewarding. I met some of my closest friends and had countless memorable adventures. While the course was challenging at times, I have no regrets about choosing physics. I particularly enjoyed being able to pick specialist optional modules as it meant I could follow my interest in applied physics with topics such as nuclear power and medical physics.

Making a difference

In my final year, I started doing the obligatory job applications for those wanting to go into industry. But after receiving some rejections, I decided to explore an opportunity outside of science and ended up working for nearly a year in the charity sector as a Climate Action intern. There I undertook research projects related to decolonization in international development, and anti-racism and social justice, supporting the delivery of international development programmes.

While my time at Climate Action was rewarding and worthwhile, I wanted to move back into science and use my degree. Nuclear physics had been an area of interest for me since school, and my modules at university had encouraged that, so I turned my attention to the nuclear energy sector. Having worked for a charity, I was keen to find an organization whose values aligned with mine. The employee-owned engineering, management and development consultancy Mott MacDonald caught my eye, with its commitment to net zero, social outcomes and the UN’s Sustainable Development Goals.

I joined the company’s three-year graduate scheme and, although I didn’t have any direct experience in safety, was offered a graduate nuclear safety position. It is a great role that ties in skills from my degree and my interest in nuclear while still presenting challenges and an opportunity to learn.

After two years at Mott MacDonald, I won Graduate of the Year at the UK Nuclear Skills Awards 2024. My colleagues had kindly nominated me, recognizing my dedication and drive, and the contribution I’d made to the organization. This opportunity was highly valuable for me and elevated my profile not only at Mott MacDonald but also within the sector. Then, after only two and a half years in the graduate scheme, I was promoted to my current position of nuclear safety engineer.

My role focuses on developing nuclear safety cases with the guidance and support of our experienced team. The work involves analysing potential hazards and risks, outlining safety measures, and presenting a structured, evidence-based argument that the facility is safe for operation. I’ve worked on a variety of different projects including small modular reactors, nuclear medicine and flood alleviation schemes.

A typical day for me involves project meetings, writing safety reports, conducting hazard identification studies, and reviewing documents. A key aspect of the work is identifying, assessing and effectively controlling all project-related risks.

Beyond my technical role at Mott MacDonald, I am also part of committees for our internal Women in Nuclear and Europe and UK Advancing Race and Culture networks. These positions allow me to contribute to a range of equality, diversity and inclusion (EDI) initiatives. Creating an inclusive environment is important to allow people the space to be authentically themselves, share and bring diverse perspectives and feel psychologically safe. This is a big driver for me – by supporting equity and equal opportunities, I am helping ensure others like me have role models in the sector.

A nuclear skillset

Physics plays a crucial role in nuclear safety by providing the fundamental principles underlying nuclear processes. Studying nuclear physics at university has helped me understand and analyse reactor behaviour, radiation effects and potential hazards. This knowledge forms the basis for designing nuclear facility safety systems, for the protection of the workforce, environment and general public.

Throughout my degree, I also developed transferable skills such as analytical thinking, logical problem-solving and teamwork, all of which I apply daily in my role. As a safety-case engineer, I work as part of a team, and collaborate with specialists across fields, including process engineering, mechanical engineering and radioactive waste management. My ability to work effectively in teams and maintain strong interpersonal relationships has been key to success in my role.

Applying my research and scientific report writing skills I developed at university, I can identify relevant information for safety-case updates, and present safety claims, arguments and evidence in a way that is understandable to a broad, non-specialist audience.

I also mentor and support more junior colleagues with various project and non-project related issues. Skills like critical thinking and the ability to tailor my communication style directly influence how I approach my work and support others.

I would encourage other physics students to explore a career in the nuclear industry. It offers a broad range of career paths, and the opportunity to contribute to some of the most diverse, exciting and challenging projects within the energy sector. You don’t need an engineering background to have a career in nuclear – there are many ways to contribute including beyond the technical route. As physicists we have a wide range of transferable skills, often more than we realize, making us highly adaptable and valuable in this sector.

It’s an incredible time to join the nuclear industry. With advancements like Sizewell C, small modular reactors and cutting-edge medical nuclear-research facilities, there’s a wealth of diverse projects happening right now to get involved in. I hadn’t planned on a career in nuclear safety but, honestly, I’m really glad my path led this way. I am passionate about driving innovative nuclear solutions and supporting progress towards reduced emissions and the global transition to net zero.

While I may be early on in my nuclear career, I have already worked on some interesting projects and met fantastic people. Now, I’m going through a structured training programme at Mott MacDonald to help me achieve chartership status with the Institute of Physics. I look forward to seeing what the future has to offer.

The post A physicist’s journey into nuclear energy appeared first on Physics World.

https://physicsworld.com/a/a-physicists-journey-into-nuclear-energy/
No Author

A glimpse into the future of particle therapy

Particle therapy experts gathered in London to discuss the transformative potential of laser-driven proton and ion beams

The post A glimpse into the future of particle therapy appeared first on Physics World.

Particle therapy is an incredibly powerful cancer treatment. But it is also an incredibly expensive option that relies on massive, bulky accelerator systems. As such, in 2025 there were only 137 proton and carbon-ion therapy facilities in operation worldwide. So how can more people benefit?

Hoping to resolve this challenge, the LhARA collaboration is investigating a new take on particle therapy delivery: a laser-hybrid accelerator for radiobiological applications. The idea is to use laser-driven proton and ion beams to create a compact, high-throughput treatment facility to advance our understanding of cancer and its response to radiation (see: “A novel hybrid design”).

Last month, in the first of a series of CP4CT workshops, experts in the field came together at Imperial College London to discuss the potential advantages of laser-driven charged particles. The workshop aimed to examine the current status of particle therapy technology, assess how the unique properties of laser-driven beams could revolutionize particle therapy, and identify the key research needed to develop personalized cancer therapy with laser-driven ions.

“We want to lay the foundation for the transformation of ion beam therapy,” said Kenneth Long (Imperial College London/STFC), who co-organized the event together with Richard Amos (University College London). “We are aiming to engage with the communities that we will target when the technology is mature.”

A novel hybrid design

LhARA uses a high-power, fast-pulsed laser to create high-flux proton and ion beams with arbitrary spatial and time structures, such as bunches as short as 10 to 40 ns. The beams are captured and focused by a novel electron-plasma lens, and then accelerated using a fixed-field alternating gradient accelerator, to energies of 15–127 MeV for protons and 5–34 MeV/u for ion beams.
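
To put those energies in clinical context, a proton’s range in water can be estimated with the Bragg–Kleeman rule, R ≈ αEᵖ with α ≈ 0.0022 cm and p ≈ 1.77 – a standard rule of thumb at therapeutic energies, used here only for orientation:

```python
# Bragg-Kleeman rule of thumb for proton range in water: R ~ alpha * E**p,
# with alpha ~ 0.0022 cm and p ~ 1.77 for E in MeV. A rough orientation aid,
# not part of the LhARA design calculations.
ALPHA_CM, P = 0.0022, 1.77

def proton_range_in_water_cm(energy_mev):
    return ALPHA_CM * energy_mev ** P

for e_mev in (15, 127):
    print(f"{e_mev:3d} MeV -> ~{proton_range_in_water_cm(e_mev):.1f} cm in water")
# 15 MeV stops within a few millimetres (suited to in vitro radiobiology);
# 127 MeV reaches roughly 12 cm (deep-seated targets).
```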

The LhARA team recently completed its conceptual design report for the proposed new accelerator facility and is now running radiobiology programmes to prove the feasibility of laser-driven hybrid acceleration, for both radiation biology and clinical studies.

Particle therapy today

The day’s first speaker, Alejandro Mazal (Centro de Protonterapia Quirónsalud), pointed out that despite huge clinical potential, only about 400,000 patients have been treated with proton therapy to date (and 65,000 with carbon ions), with a typical saturation of about 250 patients per year per treatment room. To increase this throughput, factors such as image guidance, adaptive tools, uptime and modularity for upgrades could prove vital.

Mazal cited some development priorities to address, including cost control, vendor robustness, system reliability and throughput optimization. It’s also vital to consider biological modulation techniques, integration into hospitals and generation of clinical evidence. “We used to say that randomized trials are not ethical with particle therapy but this is not always true, evidence must guide expansion,” he said.

Mazal emphasized that technology itself is not the endpoint, but that specifications must be driven by clinical benefit. “The goal is to be transformative, but only when we can measure a clinical value,” he explained.

Sandro Rossi (CNAO) then presented an update on the latest developments at the National Centre for Oncological Hadrontherapy (CNAO) in Italy. Since starting clinical treatments in 2011, the facility has now treated over 6000 patients – roughly half with protons and half with carbon ions. He noted that for some of the most challenging tumours, CNAO’s particle therapy delivered considerably better local tumour control than conventional X-ray treatments.

CNAO is also a research facility, currently hosting 17 funded research projects and seven active clinical trials. Looking forward, an expansion project will see the centre commission an additional proton therapy gantry, introduce boron neutron capture therapy (BNCT) and install an upright positioning system (from Leo Cancer Care) in one of the treatment rooms.

The killer biological questions

In parallel with the development of laser-based accelerators, researchers are investigating various radiobiological modulation strategies that could enhance the impact of particle therapy. The workshop examined three such options: proton minibeams, FLASH irradiation and combination with immunotherapies.

Minibeam therapy uses an array of submillimetre-sized radiation beams to deliver a pattern of alternating high-dose peaks and low-dose valleys. This spatially fractionated dose greatly reduces treatment toxicity while providing excellent tumour control, as demonstrated in extensive preclinical experiments.
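
To make the geometry concrete, here is a minimal Python sketch of a one-dimensional spatially fractionated dose profile: a row of Gaussian minibeams and the resulting peak-to-valley dose ratio (PVDR), a standard figure of merit for such fields. The beam width and spacing below are illustrative values, not parameters from any study discussed at the workshop.

```python
import numpy as np

# Illustrative 1D minibeam dose profile: a row of Gaussian peaks.
# sigma (beam width) and ctc (centre-to-centre spacing) are assumed values.
x = np.linspace(-5, 5, 2001)                 # lateral position (mm)
sigma, ctc = 0.3, 2.0                        # mm (illustrative only)
centres = np.arange(-4.0, 4.1, ctc)
dose = sum(np.exp(-(x - c)**2 / (2 * sigma**2)) for c in centres)

# PVDR: peak dose divided by the dose at a valley midway between two peaks
valley = dose[np.argmin(np.abs(x - ctc / 2))]
print(f"PVDR ~ {dose.max() / valley:.0f}")
```

Narrow beams and wide spacing drive the PVDR up, which is exactly the regime in which normal-tissue sparing has been reported in the preclinical work mentioned above.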


The first patient treatments (using X-ray minibeams) took place in 2024, and clinical investigations on proton minibeams are just starting, explained Yolanda Prezado (CiMUS). Recent studies revealed that minibeams induce a favourable immune response, with high T cell infiltration, vascular renormalization and reduced hypoxia dependence. Further evaluation is essential to explore the underlying radiobiological mechanisms, but Prezado noted that existing accelerators are limited in their ability to modulate treatment beams.

“It would be really interesting to have a system where we can flexibly vary all of the parameters to understand all of these techniques; LhARA could be a very interesting facility for this,” she suggested.

As for the second option, FLASH therapy, this is an emerging treatment approach in which radiation delivery at ultrahigh dose rates reduces normal tissue damage while effectively killing cancer cells. But how the FLASH effect works, and how to optimize this approach, remain key questions.

Joao Seco (DKFZ) presented a novel interpretation of FLASH, focusing on radiation chemistry and emphasizing the role of H₂O₂ generation in the FLASH process. Production of H₂O₂, a key molecule in cell damage, depends on the activity of a particular enzyme called superoxide dismutase 1 (SOD1). Seco hypothesized that inhibiting SOD1 could control H₂O₂ production and thus control cellular damage, effectively mimicking the FLASH effect.

“Forget radiation biology, we are missing a key component: redox chemistry,” he said. “If we know the redox chemistry, we can predict the response before we give radiotherapy.”

Marco Durante (GSI) suggested that the most urgent challenge for radiotherapy may be to combine it with immunotherapy, noting that charged particle beams offer both physical and biological advantages to achieve this. Citing various trials of combined immunotherapy and X-ray-based radiotherapy for cancer treatment, he showed some impressive examples of the benefit of the combination, but also cases with negative results.

“The question to understand is why doesn’t it always work,” he explained, suggesting that this may be due to the timing and sequencing of the two therapies, the fractionation scheme or biological factors. But perhaps a more promising approach would be to combine immunotherapy with particle therapy, he said, sharing examples where immunotherapy plus carbon ions had better clinical outcomes than combinations with X-ray radiotherapy.

This superior outcome may arise from the various biological advantages of high-LET irradiation. In addition, the lower integral dose from particle therapy compared with X-rays results in less lymphopenia (a depletion of lymphocytes, a type of white blood cell); reduced lymphopenia is associated with improved prognosis.

“Pre-clinical studies are essential to address timing and sequencing,” he concluded. “We also need more clinical trials to determine the impact of physical and biological properties of charged particles in radioimmunotherapy.”

Democratizing access

Manjit Dosanjh (University of Oxford) discussed the continuing need to increase global access to radiotherapy, noting that while radiotherapy is a key tool for over 50% of cancer patients, not all countries have access to sufficient treatment systems, nor to the expert personnel needed to run them.

Across Africa, for instance, there is just one linac per 3.5 million people, in stark contrast to the one per 86,000 people in the US. Many European countries also lack sufficient quality or quantity of radiotherapy facilities – a disparity that’s mirrored in terms of access to CT scanners, oncologists and medical physicists, which must be addressed in tandem. “If we could improve imaging, treatments and care quality, we could prevent 9.6 million deaths per year worldwide,” Dosanjh said.


She described some initiatives designed to encourage collaboration and increase access, including ENLIGHT, the European Network for Light Ion Hadron Therapy. Launched in 2002 at CERN, ENLIGHT brings together clinicians, physicists, biologists and engineers working within particle therapy to develop new technologies and provide training, education and access to beams to move the field forward.

More recently, the STELLA (smart technologies to extend lives with linear accelerators) project was established to create a cost-effective, robust radiotherapy linac with lower staff requirements and maximal uptime. A global collaboration, STELLA aims to expand access to high-quality cancer treatment for all patients via innovative transformation of the treatment system, as well as providing training, education and mentoring.

Dosanjh also introduced SAPPHIRE, a UK-led initiative that partners with institutions in Ghana and South Africa to strengthen radiotherapy services across Africa. She stressed that improving access to radiotherapy is a challenge that can only be met by building strong collaborations. “Collaboration is the invisible force that makes the impossible possible,” she said.

Konrad Nesteruk (Harvard) continued the theme of democratizing particle therapy, noting that advancement of beam technologies calls for innovations in space (the facility size), time (both irradiation and total treatment time) and dose (via techniques such as FLASH, proton arc and minibeams). All of these factors interact to create a multidimensional optimization problem, he explained.

The final speaker in this session, Rock Mackie (University of Wisconsin) examined how to translate innovative radiotherapy technology into clinical practice. Academia is the source of breakthrough ideas, he said, but most R&D is funded and refined by companies. And forming a company involves a series of key tasks: identifying an important problem; developing a technical solution; patenting it; customer testing; and procuring investment. If this final stage doesn’t happen, Mackie remarked, it wasn’t an important enough problem.

In particle therapy, the main problems are the size and cost that limit patient access, the lack of effective imaging solutions, and the fact that the gain in therapeutic ratio does not compensate for the increased costs. Aiming to solve these problems, Mackie co-founded Leo Cancer Care in 2018 to commercialize an upright patient positioning system and CT scanner. This approach enables a proton therapy machine to fit into a photon vault, as well as easing patient positioning, thus reducing installation costs while simultaneously increasing throughput.

Mackie applied this startup scenario to LhARA. Here, the problem to solve is achieving high-energy, multi-ion, high-intensity beams for radiotherapy, FLASH, spatial fractionation and proton imaging. The solution is the development of a low-cost particle accelerator that meets all of these needs and fits in a single-storey vault. He also emphasized the importance of consulting with as many potential customers as time permits before defining specifications.

“The most important problem is finding a big enough problem to solve,” he concluded. “It will find a market if the product is less costly, works better and is easier to use.”

Development roadmap

Alexander Gerbershagen told delegates about PARTREC, the particle therapy research centre at the University Medical Center Groningen. The facility’s superconducting accelerator, AGOR, provides protons with energies up to 190 MeV, as well as ion beams of all elements up to xenon. Ongoing projects at PARTREC include: developing glioblastoma treatments using boron proton capture therapy (NuCapCure); production of terbium isotopes for theranostics; image-guided pharmacotherapy using photon-activated drugs; and real-time in vivo verification of proton therapy dose.

The day closed with a look at the potential of LhARA as an international research facility. Kenneth Long emphasized the importance of investigating how ionizing radiation interacts with tissue, in vivo and in vitro, while considering all of the factors that may impact outcome. This includes time and space domains, different ion species and energies, and combinations with chemo- and immunotherapy. “If one flexible beam facility can do all that, it’s a substantial opportunity for a step change in understanding,” he said.

Long presented some initial cell irradiations using laser-driven beams at the SCAPA research centre at the University of Strathclyde, and noted that component optimization is also underway in Swansea. He also shared designs for the envisaged research facility, with various in vivo and in vitro end-stations and robotic automation to move experiments around. “We have written a mission statement, now our business is to execute that programme,” he concluded.

The post A glimpse into the future of particle therapy appeared first on Physics World.

https://physicsworld.com/a/a-glimpse-into-the-future-of-particle-therapy/
Tami Freeman

Mulugeta Bekele: the jailed and tortured scientist who kept Ethiopian physics alive

Robert P Crease talks to Mulugeta Bekele, who almost single-handedly kept Ethiopian physics going

The post Mulugeta Bekele: the jailed and tortured scientist who kept Ethiopian physics alive appeared first on Physics World.

Mulugeta Bekele paid a heavy price for remaining in Ethiopia in the 1970s and 1980s. While many other academics had fled their homeland to avoid being targeted by its military rulers, Mulugeta did not. He stayed to teach physics, almost single-handedly keeping it alive in the country. But Mulugeta was arrested and brutally tortured by members of the Derg, Ethiopia’s ruling military junta. “I still have scars,” he says when we meet at his tiny, second-floor office at Addis Ababa University (AAU) in January 2026.

Gentle and softly spoken, Mulugeta, 79, is formally retired but still active as a research physicist. In 2012 his efforts led to him being awarded the Sakharov prize by the American Physical Society (APS) “for his tireless efforts in defence of human rights and freedom of expression and education anywhere in the world, and for inspiring students, colleagues and others to do the same”.

Mulugeta was born in 1947 near Asela, a small town south of Ethiopia’s capital Addis Ababa. The district had only a single secondary school that depended on volunteer teachers from other countries. One was a US Peace Corps volunteer named Ronald Lee, who taught history, maths and science for two years. Mulugeta recalls Lee as a dramatic and inventive teacher, who would climb trees in physics classes to demonstrate the actions of pulleys and hold special after-school calculus classes for advanced students.

Mulugeta and other Asela students were entranced. So when he entered AAU – then called Haile Selassie I University – in 1965, Mulugeta declared he wanted to study both mathematics and physics. Impossible, he was informed; he could do one or the other but not both. “I told myself that if I choose mathematics I will miss physics,” Mulugeta says. “But if I do physics, I will be continually engaged with mathematics.” Physics it was.

At the end of his third year, Mulugeta’s studies appeared in doubt. The university’s only physics teacher was an American named Ennis Pilcher, who was about to return to Union College in Schenectady, New York, after spending a year in Addis on a fellowship from the Fulbright Program. Pilcher, though, managed to convince Union to support Mulugeta so he could travel to the US and study physics there for his final year.

As I talk to Mulugeta, he pulls a dusty book off his shelf. “This was given to me by Pilcher,” he says, pointing to Walter Meyerhof’s classic undergraduate textbook Elements of Nuclear Physics. Mulugeta turns to the inside of the front cover and proudly shows me the inscription: “Mulugeta Bekele, Union College. Schenectady, 1969–1970”.

When Mulugeta returned to AAU in the summer of 1970, he was awarded a BSc in physics. He then received a grant from the US Agency for International Development (USAID) to attend the University of Maryland for a master’s degree. After two more years in the US, Mulugeta returned to Addis Ababa in 1973. As an accomplished researcher and teacher, he was made department chair and began to expand the physics programme at the university.

In the firing line

It was a time when political turmoil was upending Ethiopia, as well as the lives of Mulugeta and many other academics. For centuries the country had been ruled by a dynasty whose emperor at the time was Haile Selassie. Having come to the throne in 1930, he had tried to reform Ethiopia by bringing it into the League of Nations, drawing up a constitution, and taking measures to abolish slavery.

When fascist Italy invaded Ethiopia in October 1935, Selassie fled the following May, spending five years in exile in the UK during the Italian occupation of the country. He returned as emperor in 1941 after British and Ethiopian forces recaptured Addis Ababa. But famine, unemployment and corruption, as well as a brief unsuccessful coup attempt, undermined his rule and made him unexpectedly vulnerable.

While in Maryland, Mulugeta and other Ethiopian students in the US started supporting the Ethiopian People’s Revolutionary Party (EPRP) – a pro-democracy group that sought to build popular momentum against the monarchy. In September 1974 Selassie was deposed by the Derg – a repressive military junta named after the word for “committee” in Amharic, the most widely spoken language in Ethiopia. Selassie was assassinated the following year.


Led by an army officer named Mengistu Haile Mariam, the Derg’s radical totalitarianism stood in sharp contrast to the student-led EPRP’s pro-democracy efforts, and its agenda included seizing property from landowners. Mulugeta’s family lost all its land, and his father was killed fighting the Derg. “Land ownership was still inequitable,” Mulugeta remarks ruefully, “only the landlords changed.”

In September 1976 the EPRP tried, unsuccessfully, to assassinate Mengistu. The following February, on becoming chairman of the Derg – and therefore head of state – Mengistu began ruthlessly to crush any opposition, particularly the EPRP, in what he himself called the “Red Terror” campaign of political suppression. About half a million people in Ethiopia were killed.

“It was a police state,” recalls Solomon Bililign, Mulugeta’s then graduate assistant, now a professor of atomic and molecular physics at North Carolina Agricultural and Technical State University. “The police didn’t need any reason to arrest you. They would arrest people openly in the streets, break into homes, and leave people dead in roads and parks. Many were tortured; others simply disappeared.”

Captured and tortured

Mulugeta himself was a target. In the summer of 1977, a policeman showed up at his office with an informant. Mulugeta was arrested and imprisoned for his role in helping to organize anti-Derg activities, as was Bililign. Mulugeta still recalls exactly how long he was jailed for: “Eight months and 20 days”.

After his release, Mulugeta knew it would be unsafe to stay in Addis and lived in hiding for several months. So he devised a plan to travel 500 km north to a holdout region not controlled by the Derg. However, while using a fake ID to pass through checkpoints to reach a compatriot, he was betrayed again, captured, and taken back to Addis.


En route to Addis, he managed to steal back the fake ID that he’d been using from the pocket of the policeman travelling with him. He then tore it up to shield the identity of his compatriot, and tossed the pieces into a toilet. But the policeman noticed and retrieved the pieces. Mulugeta was then savagely tortured using a method that the Derg meted out on thousands of other prisoners. His arms and legs were tied around a pole, and he was hung in the foetal position between two chairs, upside down. His feet were then beaten until he could no longer walk.

Mulugeta was sent to Maekelawi, an infamous jail in Addis, in which up to 70 prisoners could be jammed into rooms each barely four metres long and four metres wide. Inmates were tortured without warning, could not have visitors, never had trials, were denied books and paper, and at night heard screams from periodic executions. Mulugeta helped those who were beaten by tending to their wounds.

“People who knew him in prison told me that his mental strength helped all of them endure,” remembers Mesfin Tsige, an undergraduate student of Mulugeta at the time, who is now a polymer physicist at the University of Akron in Ohio. Despite the awful conditions, Mulugeta managed to continue working on physics by surreptitiously taking paper from the foil linings of cigarette packets to compose problems.


Another prisoner was Nebiy Mekonnen, a chemistry student of Mulugeta. Later a gifted artist, translator and newspaper editor, Mekonnen began translating the US writer Margaret Mitchell’s classic 1936 book Gone with the Wind into Amharic. It was the one book that the Maekelawi prisoners had in their hands, having retrieved it from the possessions of someone who had been executed.

Surreptitiously writing his translation onto the foil linings of cigarette packets, Mekonnen would read passages to fellow prisoners in the evening for what passed for entertainment. Mekonnen’s translation of Mitchell’s almost 1000-page book was recorded onto 3000 of the linings, which were then smuggled out of the prison stuffed in tobacco pouches and published years later.

Gone with the Wind might seem a strange choice to translate, but as Mulugeta reminds me: “It was the only book we had at the time”. More smuggled books did eventually arrive at the prison, but Gone with the Wind, which describes life in a war-torn country, has several passages that resonated with prisoners. One was: “In the end what will happen will be what has happened whenever a civilization breaks up. The people with brains and courage come through and the ones who haven’t are winnowed out.”

Release and recapture

In 1982 Mulugeta was moved to Kerchele, another prison. There, as at Maekelawi, inmates were forced to listen to Mengistu’s pompous speeches on radio and TV. During one, Mengistu pontificated that he would turn prisons into places of education. A clever inmate, knowing that the prison wardens were also cowering in terror, proposed that Kerchele establish a school with the prisoners as teachers.

The wardens found this a great idea, not least because it let them show off their loyalty to Mengistu. The Kerchele prisoners were promptly put to work erecting a schoolhouse of half a dozen rooms out of asbestos slabs. Unlike schools in the rest of Ethiopia, the Kerchele prison school was not short of teachers, as the prisoners included a wide range of professionals, such as architects, scientists and engineers.

Students included prison guards and their families, along with numerous inmates who had been jailed for non-political reasons. Mulugeta and Bililign taught physics. “It was therapy for us,” Bililign says – and the school was soon known as one of the best in Ethiopia.


When I ask Mulugeta how he maintained his interest in physics in jail, despite being locked up for so many years, he becomes animated. “In those days, prisons were full of ideas,” he smiles. “We were university students, university teachers. We had a cause. It was exciting. Intellectually, we flourished.”

In the summer of 1985 Mulugeta was released. Many colleagues were not. “They were given release papers and as they left the building, one by one, they were strangled. I had a tenth-grade student who was one of the best; he didn’t make it. There were plenty of stories like this.” Mulugeta pauses. “Somehow we survived. But not them.”

Mulugeta returned to the university – by then renamed Addis Ababa University – and started teaching physics full time. As the Derg was in full control, no opposition was possible except in outer regions of Ethiopia. In summer 1991, after Mulugeta had taught physics for another six years, political turmoil erupted yet again.

Mengistu was overthrown that May by a political coalition representing pro-democracy groups from five of Ethiopia’s ethnic regions, the Ethiopian People’s Revolutionary Democratic Front (EPRDF). But ethnic tensions rose and human rights violations continued. “Even though the Derg was overthrown,” Mulugeta recalls, “we knew we were entering another dark age.”

In the same year Mulugeta was put in touch with a Swedish programme seeking to build networks of scientists across countries in the southern hemisphere. Mulugeta knew a physicist from Bangalore, India, who had twice visited Addis as an examiner for his master’s programme, and arranged to work with him for his PhD.

That July, Mulugeta married Malefia, who worked in the university’s registrar office, and the two left for Bangalore. As a wedding present, his student Mekonnen painted a picture of two hands coming together, each with a ring on a finger, against a black Sun in the background. “Two rings, in the time of a dark sun” Mekonnen’s caption read, “Happy marriage!” Mulugeta still has the painting.

Mulugeta thrived in Bangalore. Here, he was finally able to combine his two loves, physics and maths, studying statistical physics and stochastic processes and applying them to issues in non-equilibrium thermodynamics. He has worked in that field ever since. He received his PhD in 1998 from the Indian Institute of Science in Bangalore and returned to Addis once more to teach.

Shortly after Mulugeta’s return from Bangalore to Ethiopia in August 1998, some of his former students formed the Ethiopian Physical Society, electing him as its first president. Other students of his who had taken positions in the US created the Ethiopian Physical Society of North America (EPSNA), formally established in 2008. Bililign organized and convened its first meeting.

In 2007 Philip Taylor, a soft-condensed-matter physicist from Case Western Reserve University in the US, who had been Tsige’s PhD supervisor, heard the story of Mulugeta’s imprisonment. Astonished, he spearheaded the successful 2012 application for Mulugeta to receive the APS’s Sakharov prize, which is given every two years to recognize the “outstanding leadership and achievements of scientists in upholding human rights”.


Unsure that he would receive travel funds to attend a special award ceremony at that year’s APS March meeting in Boston, the EPSNA raised money for Mulugeta and his wife to attend. Jetlagged, worn out by the cold, and somewhat overwhelmed by the attention, Mulugeta could not be found as the ceremony began. EPSNA members tracked him down to his hotel room, where he was dressing in traditional Ethiopian clothes for the occasion – all white from head to toe, including shoes.

Under a dark Sun

In recent years, Mulugeta has continued to teach and collaborate with students and former students, publishing in a wide range of journals, as well as helping out with the Ethiopian Physical Society. But while I was in Ethiopia to talk to Mulugeta at the start of 2026, the Trump administration curtailed immigrant visas from Ethiopia and almost half of all nations in Africa, supposedly in an attempt to “protect the security of the United States”. A few months before, it had imposed a $100,000 fee on work visas, all but preventing US universities from hiring non-US citizens. It also killed the USAID programme that had once sent Mulugeta to the US for his master’s degree.

The Trump administration has also withdrawn the US from international scientific organizations, conventions and panels, and has gutted the most important US scientific agencies. These and other measures are destroying the networks of international physics collaborations of the kind that Mulugeta both promoted and benefited from – networks that nurture education, careers and knowledge.

“We are not yet in good hands,” Mulugeta warns me as I start to leave. “We are,” he says, “still under the dark Sun.”

The post Mulugeta Bekele: the jailed and tortured scientist who kept Ethiopian physics alive appeared first on Physics World.

https://physicsworld.com/a/mulugeta-bekele-the-jailed-and-tortured-scientist-who-kept-ethiopian-physics-alive/
Robert P Crease

Condensed-matter physics pioneer and Nobel laureate Anthony Leggett dies aged 87

Leggett made Nobel-prize-winning contributions to the theory of superfluidity in the 1970s

The post Condensed-matter physics pioneer and Nobel laureate Anthony Leggett dies aged 87 appeared first on Physics World.

The British-American theoretical physicist Anthony Leggett died on 8 March at the age of 87. Leggett shared the 2003 Nobel Prize in Physics with Alexei Abrikosov and Vitaly Ginzburg for their “pioneering contributions to the theory of superconductors and superfluidity”.

Born on 26 March 1938 in London, UK, Leggett graduated in literae humaniores (classical languages and literature, philosophy and Greco-Roman history) at the University of Oxford in 1959.

While philosophy was Leggett’s strongest subject, he did not envisage a career as a philosopher because he felt that the subject depended more on turns of phrase than objective criteria.

As part of an experiment at Oxford to see if it was possible to convert a classicist with minimal qualifications in maths and science into a physicist, Leggett was awarded a degree in physics in 1961.

Leggett then embarked on a DPhil in physics, which he completed at Oxford in 1964, followed by postdocs at the University of Illinois Urbana-Champaign in the US and Kyoto University, Japan.

In 1967 he moved back to the UK, spending the next 15 years at Sussex University. It was at Sussex that he carried out his Nobel-prize-winning work on the theory of superfluidity – the ability of a fluid to flow without viscosity.

Superfluidity in helium-4 was discovered in the 1930s, and in the 1960s several theorists predicted that helium-3 might also be a superfluid.

However, the two forms of helium are fundamentally different. Helium-4 atoms are bosons and can all condense into the same quantum ground state at low enough temperatures – an essential feature of both superfluidity and superconductivity.

Helium-3 atoms, on the other hand, are fermions and the Pauli exclusion principle prevents them from entering such a quantum state.

Electrons, which are also fermions, overcome this problem by forming Cooper pairs as described by the BCS theory of superconductivity that was developed in the mid-1950s by John Bardeen, Leon Cooper and Robert Schrieffer.

Theorists predicted that helium-3 atoms could do something similar and in 1972 superfluidity in helium-3 was finally observed at Cornell University – a feat that earned David Lee, Douglas Osheroff and Robert Richardson the 1996 Nobel Prize in Physics.

Yet many of the results puzzled theorists. In particular there were three different superfluid phases, and the results of nuclear magnetic resonance experiments on the samples could not be explained.

Leggett showed that these results could be explained by the spontaneous breaking of various symmetries in the superfluid and for the work he was awarded a third of the 2003 Nobel Prize in Physics, with Abrikosov and Ginzburg being honoured for their work on type-II superconductors.

A life in science

In 1983 Leggett moved to the University of Illinois at Urbana-Champaign where he remained for the rest of his career until retiring in 2019. There he focussed on problems in high-temperature superconductivity, superfluidity in quantum gases and the fundamentals of quantum mechanics.

In 1998 he was elected an Honorary Fellow of the Institute of Physics and in 2004 was appointed Knight Commander of the Order of the British Empire (KBE) “for services to physics”. In 2023 the Institute for Condensed Matter Theory at the University of Illinois at Urbana-Champaign was renamed the Sir Anthony Leggett Institute.

As well as the Nobel prize, Leggett won many other awards including the 2002 Wolf Prize for physics. He also published two books: The Problems of Physics (Oxford University Press, 1987) and Quantum Liquids (Oxford University Press, 2006).

Peter McClintock from Lancaster University, who has carried out work in superfluidity, says he is “very sad” to hear the news. “[Leggett] was a brilliant physicist whose genius was to comprehend underlying mechanisms and processes and explain their physical essence in comprehensible ways,” says McClintock. “My dominant memory is of the discovery of the superfluid phases of helium-3 and of the way in which [Leggett] was able to interpret each new item of experimental information and slot it into a nascent theoretical framework to build up a coherent picture of what was going on – while always enumerating the remaining loose ends and possible alternative explanations.”

James Sauls, a theorist at Louisiana State University, says that Leggett made discoveries in several areas of physics such as the foundations of quantum mechanics, quantum tunnelling in amorphous glasses and superconducting devices as well as the theory of heat transport at ultra-low temperatures. “Leggett’s contributions to quantum mechanics and low-temperature physics are remarkable and enduring,” adds Sauls. “[Leggett’s] style in theoretical physics was unique in its clarity and originality.”

In a statement, Makoto Gonokami, president of the RIKEN labs in Japan, said that he, too, was “deeply saddened” by the news and that Leggett had “provided warm support for researchers in Japan” through his many trips to the country.

“Leggett made pioneering contributions to our understanding of how quantum mechanics manifests itself in macroscopic matter [and] his theoretical work on superfluid helium-3 provided profound insights into quantum order in strongly interacting fermionic systems,” notes Gonokami. “His work significantly advanced the study of quantum condensed matter and macroscopic quantum coherence.”

The post Condensed-matter physics pioneer and Nobel laureate Anthony Leggett dies aged 87 appeared first on Physics World.

https://physicsworld.com/a/condensed-matter-physics-pioneer-and-nobel-laureate-anthony-leggett-dies-aged-87/
Michael Banks

Physicists identify unexpected quantum advantage in a permutation parity task

The list of things that quantum features enable us to do just got a little longer

The post Physicists identify unexpected quantum advantage in a permutation parity task appeared first on Physics World.

Imagine all the different ways you can rearrange a list of labelled items. If you know only a tiny fraction of the labels describing the elements of the list, it’s easy to assume you have almost no information about how the list as a whole has been permuted. After all, if you shuffle a large deck of cards and then hide most of the labels on the cards, how could anyone possibly tell what permutations you made?

Recent theoretical work by physicists at Universitat Autonoma de Barcelona (UAB), Spain, and Hunter College of the City University of New York (CUNY), US, reveals that this intuition can fail in surprising ways, hinting at deep links between information, symmetry and computation. Specifically, the UAB-CUNY team found that quantum mechanics plays a key role in preserving parity – a global property of a permutation – even when most local information is erased.

An impressive parity identification

Imagine a clever magician named Alice. She hands you a stack of n coloured disks in a known order and leaves the room while you shuffle them. When she returns, she asks: “Can I tell how you permuted the disks?”

If every disk has its own unique label, the answer is obviously “yes”. But if Alice removes some of the labels, she can pose a subtler challenge: “Can I at least tell whether your shuffle swapped the positions of the disks an even or odd number of times?”

Classically, the answer is “no”. With fewer labels than disks, some labels must be repeated. Swapping two disks with the same label leaves the observed configuration unchanged, yet flips the parity of the underlying permutation. As a result, determining parity with certainty requires one unique label per disk. Anything less, and the information is fundamentally lost.
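
The classical argument is easy to verify directly. The short Python sketch below – an illustration of the reasoning above, not code from the paper – computes the parity of a permutation from its cycle decomposition and shows that transposing two identically labelled disks flips the parity while leaving the observed label sequence unchanged.

```python
def parity(perm):
    """Parity of a permutation of 0..n-1: even (0) or odd (1).
    A permutation with c cycles decomposes into n - c transpositions."""
    n, seen, cycles = len(perm), [False] * len(perm), 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j], j = True, perm[j]
    return (n - cycles) % 2

labels = ["A", "A", "B", "C"]     # fewer labels than disks: two disks share "A"
identity = (0, 1, 2, 3)
swap_twins = (1, 0, 2, 3)         # transpose the two "A"-labelled disks

print([labels[i] for i in identity] == [labels[i] for i in swap_twins])  # True
print(parity(identity), parity(swap_twins))  # 0 1: same labels, opposite parity
```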

Quantum mechanics changes this conclusion. In their paper, which is published in Physical Review Letters, UAB’s Arnau Diebra and colleagues showed that as long as there are at least √n labels – far fewer than the total number of disks – one can still determine the parity of any permutation applied to the system, provided the game follows the rules of quantum mechanics. The problem remains the same; the only difference is that the initial state is now prepared as a quantum state. In other words, even when most of the detailed information about individual elements is erased, a global feature of the transformation survives, and carefully chosen quantum measurements make it possible to extract it. This is not sleight of hand: it is a genuine mathematical insight into how much information certain global properties retain under massive data reduction.

Quantum advantage

In the field of quantum science, it’s common to ask whether quantum systems can outperform classical ones at specific tasks, a phenomenon known as quantum advantage. Here, “advantage” does not necessarily mean doing everything faster, but rather the ability to solve carefully chosen problems using fewer resources such as time, memory or information. Notable examples include quantum algorithms that factor large numbers more efficiently than any known classical method, and quantum communication protocols that achieve tasks that would be impossible with classical correlations alone.

The parity-identification problem fits naturally into this landscape. Parity is a global property, insensitive to most local details. In this respect, it resembles many other quantities studied in quantum physics, from topological invariants to entanglement measures.

What makes quantum advantage possible in this problem is entanglement – and lots of it. A compound quantum system is said to be entangled when its subsystems are correlated in a nonclassical way. A simple example might be a pair of qubits (quantum bits) for which measuring the state of one qubit gives you information about the state of the other in a way that cannot be reproduced by any classical correlation. In their work, the UAB-CUNY physicists used a geometric measure of entanglement: the “distance” between the state of the system and a state in which all subsystems are separable (that is, not entangled). If this distance is too short, the protocol fails entirely.
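
As a rough numerical illustration of such a geometric measure – a sketch for intuition, not the authors’ calculation – one can estimate the maximum squared overlap between a two-qubit Bell pair and any product state by random search. The maximum approaches 1/2, confirming that the Bell pair sits a finite “distance” from the separable set.

```python
import numpy as np

rng = np.random.default_rng(0)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # the Bell state (|00> + |11>)/sqrt(2)

def random_qubit():
    """A Haar-random pure single-qubit state."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

# Largest squared overlap between the Bell pair and random product states
best = max(abs(np.vdot(np.kron(random_qubit(), random_qubit()), bell))**2
           for _ in range(100_000))
print(f"largest product-state overlap found: {best:.3f}")  # approaches 0.5
```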

The crucial point is that entanglement allows information about the permutation to be stored in genuinely nonlocal correlations among particles (the “cards” in the deck), rather than in properties of each particle/card individually. In effect, the “memory” needed to identify the parity is written into the joint quantum state. No single particle carries the answer, but the system as a whole does. This is precisely what classical systems cannot replicate: once local labels are lost, there is nowhere left for the information to hide.

Can one do better than √n?

The fact that the threshold for quantum advantage scales with √n is one of the most intriguing aspects of the work. At present, the reason for this remains an open question. While Diebra and colleagues emphasize that the scaling is provably optimal within quantum mechanics, they acknowledge that a more intuitive or fundamental explanation is still missing. Finding such an explanation could illuminate broader principles governing how quantum systems compress and protect global information.

While the parity-identification problem has no immediate known applications, understanding how properties can be inferred from limited information is also crucial when dealing with realistic quantum devices, where noise, decoherence and imperfect measurements severely restrict what information can be accessed. Results like this therefore suggest that some computational or informational tasks may remain feasible even when our view of the system is drastically incomplete.

Speaking more broadly, the conceptual implications of proving new examples of quantum advantage are clear: even for extremely simple inference tasks, quantum strategies can outperform classical ones in unexpected and qualitatively different ways. The result therefore provides a clean testing ground for deeper questions about quantum resources, symmetry and information compression. Which specific features of entanglement are responsible for the advantage? Can similar thresholds be found for other groups or more complex symmetries? And does the square-root scaling reflect a universal principle?

For now, the work serves as a reminder that – even decades into the development of quantum information theory – basic questions about how information is stored, hidden, and revealed in quantum systems can still produce genuine surprises.

The post Physicists identify unexpected quantum advantage in a permutation parity task appeared first on Physics World.

https://physicsworld.com/a/physicists-identify-unexpected-quantum-advantage-in-a-permutation-parity-task/
Daniele Iannotti

Long-distance quantum sensor network advances the search for dark matter

Physicists in China have set the tightest constraint yet on a parameter known as axion-nucleon coupling

The post Long-distance quantum sensor network advances the search for dark matter appeared first on Physics World.

A new way of searching for dark-matter candidate particles called axions has produced the tightest constraint yet on how they can interact with normal matter. Using a two-city network of quantum sensors based on nuclear spins, physicists in China narrowed the possible values of a parameter known as the axion-nucleon coupling below a limit previously set by astrophysical observations. As well as offering insights into the nature of dark matter, the technique could aid investigations of other beyond-the-Standard-Model physics phenomena such as axion stars, axion strings and Q-balls.

Dark matter is thought to make up over 25% of the universe’s mass, but it has never been detected directly. Instead, we infer its existence from its gravitational interactions with visible matter and its effect on the large-scale structure of the universe.

While the Standard Model of particle physics does not incorporate dark matter, several physicists have proposed ideas for how to bring it into the fold. One of the most promising involves particles called axions. First hypothesized in the 1970s as a way of explaining unresolved questions about charge-parity violation, axions are chargeless and much less massive than electrons. This means they interact only weakly with matter and electromagnetic radiation.

According to theoretical calculations, the Big Bang should have produced axions in abundance. During phase transitions in the early universe, these axions would have formed topological defects – defects that study leader Xinhua Peng of the University of Science and Technology of China (USTC) says should, in principle, be detectable. “These defects are expected to interact with nuclear spins and induce signals as the Earth crosses them,” Peng explains.

A new axion search method

The problem, Peng continues, is that such signals are expected to be extremely weak and transient. She and her colleagues therefore developed an alternative axion search method that exploits a different predicted behaviour.

When fermions (particles with half-integer spin) interact, or couple, with axions, they should produce a pseudo-magnetic field. Peng and colleagues looked for evidence of this interaction using a network of five quantum sensors, four in Hefei and one in Hangzhou. These sensors combined a large ensemble of polarized rubidium-87 (⁸⁷Rb) atoms with polarized xenon-129 (¹²⁹Xe) nuclear spins.

“Using nuclear spins has many advantages,” Peng explains. “These include higher energy-resolution detection of topological dark matter (TDM) axions, thanks to the much smaller gyromagnetic ratio of nuclear spins; substantial spin amplification owing to the high ensemble density of noble-gas spins; and efficient optimal filtering enabled by the long nuclear-spin coherence time.”

The USTC researchers’ setup also has other advantages over previous laboratory-based TDM searches, including the Global Network of Optical Magnetometers for Exotic physics searches (GNOME). While GNOME operates in a steady-state detection mode, the USTC researchers use a detection scheme that probes transient “free-decay oscillating” signals generated on spins after a TDM crossing. The USTC team also implemented a dual-phase optimal filtering algorithm to extract TDM signals with a signal-to-noise ratio at the theoretical maximum.
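
The sketch below illustrates the general idea behind such dual-phase optimal filtering under simplifying assumptions: white noise and invented signal parameters (it is not the USTC team’s code). For white noise, the optimal filter reduces to correlating the record with the expected template, and using two quadrature templates makes the detection statistic insensitive to the unknown phase of the transient.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)

# Transient "free-decay" signal: a damped oscillation starting at t0
# (frequency, decay time, onset and amplitude are all illustrative values)
f0, tau, t0, amp = 25.0, 1.5, 3.0, 0.5
signal = np.where(t >= t0,
                  amp * np.exp(-(t - t0) / tau) * np.sin(2 * np.pi * f0 * (t - t0)),
                  0.0)
data = signal + rng.normal(0.0, 1.0, t.size)  # signal buried in white noise

# Quadrature templates; correlating slides them across the record
t_tpl = t[t < 3 * tau]
env = np.exp(-t_tpl / tau)
c_sin = np.correlate(data, env * np.sin(2 * np.pi * f0 * t_tpl), mode="valid")
c_cos = np.correlate(data, env * np.cos(2 * np.pi * f0 * t_tpl), mode="valid")
stat = c_sin**2 + c_cos**2                    # phase-insensitive detection statistic

print(f"estimated onset: {np.argmax(stat) / fs:.2f} s (true value {t0} s)")
```

In a real search the noise is coloured and the template frequency is set by the local magnetic field, but the principle is the same: the quadrature statistic peaks at the lag where the template best matches the hidden transient.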

Peng tells Physics World that these advantages enabled the team to explore regions of TDM parameter space well beyond limits set by astrophysical searches. The transient-state detection scheme also enables sensitive searches for TDM in the region where the axion mass exceeds 100 peV – a region that GNOME cannot access.

Most stringent constraints

The researchers have not yet recorded a statistically significant topological crossing event using their setup, so the dark matter search is not over. However, they have set more stringent constraints on axion-nucleon coupling across a range of axion masses from 10 peV to 0.2 μeV. Notably, they calculated that the coupling strength must be greater than 4.1 × 10¹⁰ GeV at an axion mass of 84 peV. This limit is stricter than those obtained from astrophysical observations, though Peng notes that these rely on different assumptions.

Peng says the technique developed in this study, which is published in Nature, could lead to the development of even larger, more sensitive networks for detecting transient spin signals such as those from TDM. It also opens new avenues for investigating other physical phenomena beyond the Standard Model that have been theoretically proposed, but have so far lacked a pathway for experimental exploration.

The researchers now plan to increase the number of sensor stations in their network and extend their geographical baselines to intercontinental and even space-based scales. Peng explains that doing so will enhance the network’s detection sensitivity and boost signal confidence. “We also want to enhance the sensitivity of individual sensors via better spin polarization, longer coherence times and advanced quantum control techniques,” she says. Switching to a ³He–K system, she adds, could boost their current spin-rotation sensitivity by up to four orders of magnitude.

The post Long-distance quantum sensor network advances the search for dark matter appeared first on Physics World.

https://physicsworld.com/a/long-distance-quantum-sensor-network-advances-the-search-for-dark-matter/
Isabelle Dumé

Pathways to a career in quantum: what skills do you need?

Matin Durrani reports from the Careers in Quantum event at the University of Bristol, UK

The post Pathways to a career in quantum: what skills do you need? appeared first on Physics World.

Careers in Quantum, which was held on 5 March 2026, is an unusual event. Now in its seventh year, it’s entirely organized by PhD students who are part of the Quantum Engineering Centre for Doctoral Training (CDT) at the University of Bristol in the UK.

As well as giving them valuable practical experience of creating an event featuring businesses in the burgeoning quantum sector, it also lets them build links with the very firms they – and the students and postdocs who attended – might end up working for.

A clever win-win if you like, with the day featuring talks, a panel discussion and a careers fair made up of companies such as Applied Quantum Computing, Duality, Hamamatsu, Orca Computing, Phasecraft, QphoX, Riverlane, Siloton and Sparrow Quantum.

IOP Publishing featured too with Antigoni Messaritaki talking about her journey from researcher to senior publisher and Physics World features and careers editor Tushna Commissariat taking part in a panel discussion on careers in quantum.

The importance of communication and other “soft skills” was emphasized by all speakers in the discussion, but what struck me most was a comment by Carrie Weidner, a lecturer in quantum engineering at Bristol, who underlined that it’s fine – in fact important – to learn to fail.

“If you’re resilient and can think critically, you can do anything,” said Weidner, who is also director of the quantum-engineering CDT. She warned too of the dangers of generative AI, joking that “every time you use ChatGPT, your brain is atrophying”.

Photo of Diya Nair

Another great talk was by Diya Nair, a computer-science undergraduate at the University of Birmingham, who is head of global outreach and UK ambassador for Girls in Quantum.

The organization is now active in almost 70 countries around the world, with the aim of “democratizing quantum education”. As Nair explained, Girls in Quantum does everything from arranging quantum-computing courses and hackathons to creating its crowdfunded quantum-computing game called Hop.

The event also included a discussion about taking quantum research “from concept to commercialization”. It featured Jack Russel Bruce from Universal Quantum, Euan Allen from eye-imaging tech firm Siloton, Joe Longden from Duality Quantum Photonics, and Stewart Noakes, who has mentored numerous companies over the years.

Noakes emphasized that all hi-tech firms have three main needs: talent, money and ideas. In fact, as he explained, companies can sometimes suffer from having too much money as well as too little, especially if they grow too fast and hire people on big salaries who might then need to be let go if funding dries up.

Bruce, though, was positive about the overall state of the quantum-tech sector. “For me, the future is bright,” he said. But as all speakers underlined, if you want to join the industry, make sure you’ve got good communication skills, an open-minded attitude – and a willingness to learn on the go.

The post Pathways to a career in quantum: what skills do you need? appeared first on Physics World.

https://physicsworld.com/a/pathways-to-a-career-in-quantum-what-skills-do-you-need/
Matin Durrani

Metamaterial antennas enhance MR images of the eye and brain

Integrating metamaterials into radiofrequency antennas improves image sharpness and enables faster data acquisition using existing MRI scanners

The post Metamaterial antennas enhance MR images of the eye and brain appeared first on Physics World.


MRI is one of the most important imaging tools employed in medical diagnostics. But for deep-lying tissues or complex anatomic features, MRI can struggle to create clear images in a reasonable scan time. A research team led by Thoralf Niendorf at the Max Delbrück Center in Germany is using metamaterials to create a compact radiofrequency (RF) antenna that enhances image quality and enables faster MRI scanning.

Imaging the subtle structures of the eye and orbit (the surrounding eye socket) is a particular challenge for MRI, due to the high spatial resolution and small fields-of-view required, which standard MRI systems struggle to achieve. These limitations are generally due to the antennas (or RF coils) that transmit and receive the RF signals. Increasing the sensitivity of these antennas will increase signal strength and improve the resolution of the resulting MR images.

To achieve this, Niendorf and colleagues turned to electromagnetic metamaterials – artificially manufactured, regularly arranged structures made of periodic subwavelength unit cells (UCs) that interact with electromagnetic waves in ways that natural materials do not. They designed the metamaterial UCs based on a double-square split-ring resonator design, tailored for operation at a high magnetic field strength of 7.0 T.
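
As a back-of-the-envelope check (a detail not spelled out in the study), the frequency such resonators must be tuned to is set by the proton Larmor frequency, f = (γ/2π)B₀, which at 7.0 T is close to 300 MHz:

```python
# Proton Larmor frequency at 7.0 T: f = (gamma / 2*pi) * B0
GAMMA_OVER_2PI = 42.577e6   # proton gyromagnetic ratio / 2*pi (Hz per tesla)
B0 = 7.0                    # static field strength (tesla)
print(f"Larmor frequency: {GAMMA_OVER_2PI * B0 / 1e6:.1f} MHz")  # ~298.0 MHz
```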

Metamaterials improve transmit–receive performance

In their latest study, led by doctoral student Nandita Saha and reported in Advanced Materials, the researchers created a metamaterial-integrated RF antenna (MTMA) by fabricating the UCs into a 5 × 8 array. They built two configurations: a planar antenna (planar-MTMA); and a version with a 90° bend in the centre (bend-MTMA) to conform to the human face. For comparison, they also built conventional counterparts without the metamaterial (planar-loop and bend-loop).

The researchers simulated the MRI performances of the four antennas and validated their findings via measurements at 7.0 T. Tests in a rectangular phantom showed that the planar-MTMA demonstrated between 14% and 20% higher transmit efficiency than the planar-loop (assessed via B₁+ mapping).

They next imaged a head phantom, placing planar antennas behind the head to image the occipital lobe (the part of the brain involved in visual processing) and bend antennas over the eyes for ocular imaging. For the planar antennas, B₁+ mapping revealed that the planar-MTMA generated around 21% (axial), 19% (sagittal) and 13% (coronal) higher intensity than the planar-loop. Gradient-echo imaging showed that planar-MTMA also improved the receive sensitivity, by 106% (axial), 94% (sagittal) and 132% (coronal).


The bend antennas exhibited similar trends, with B₁+ maps showing transmit gains of roughly 20% for the bend-MTMA over the bend-loop. The bend-MTMA also outperformed the bend-loop in terms of receive signal intensity, by approximately 30%.

“With the metamaterials we developed, we were able to guide and modulate the RF fields generated in MRI more efficiently,” says Niendorf. “By integrating metamaterials into MRI antennas, we created a new type of transmitter and detector hardware that increases signal strength from the target tissue, improves image sharpness and enables faster data acquisition.”

In vivo imaging

Importantly, the new MRI antenna design is compatible with existing MRI scanners, meaning that no new infrastructure is needed for use in the clinic. The researchers validated their technology in a group of volunteers, working closely with partners at Rostock University Medical Center.

Before use on human subjects, the researchers evaluated the MRI safety of the four antennas. All configurations remained well below the IEC’s specific absorption rate (SAR) limit. They also assessed the bend-MTMA (which showed the highest SAR) using MR thermometry and fibre-optic sensors. After 30 min at 10 W input power, the temperature increased by about 1.5 °C. At 5 W, the increase was below 0.5 °C – well within IEC safety thresholds – so this power level was used for the in vivo MRI exams.

The team first performed MRI of the eye and orbit in three healthy adults, using the bend-loop and bend-MTMA antennas positioned over the eyes. Across all volunteers, the bend-MTMA exhibited better transmit performance in the ocular region than the bend-loop.

The bend-MTMA antenna also generated larger intraocular signals than the bend-loop (assessed via T2-weighted turbo spin-echo imaging), with signal increases of 51%, 28% and 25% in the left eyes, for volunteers 1, 2 and 3, respectively, and corresponding gains of 27%, 26% and 29% for their right eyes. Overall, the bend-MTMA provided more uniform and higher-intensity signal coverage of the ocular region at 7.0 T than the bend-loop.

To further demonstrate clinical application of the bend-MTMA, the team used it to image a volunteer with a retinal haemangioma in their left eye. A 7.0 T MRI scan performed 16 days after treatment revealed two distinct clusters of structural change due to the therapy. In addition, one of the volunteer’s ocular scans revealed a sinus cyst, an unexpected finding that showed the diagnostic benefit of the bend-MTMA being able to image beyond the orbit and into the paranasal sinuses and inferior frontal lobe.

The team used the planar antennas to image the occipital lobe, a clinically relevant target for neuro-ophthalmic examinations. The planar-MTMA exhibited significantly higher transmit efficiency than the planar-loop, as well as higher signal intensity and wider coverage, enhancing the anatomical depiction of posterior brain regions.

“Clearer signals and better images could open new doors in diagnostic imaging,” says Niendorf. “Early ophthalmology applications could include diagnostic confirmation of ambiguous ophthalmoscopic findings, visualization and local staging of ocular masses, 3D MRI, fusion with colour Doppler ultrasound, and physio-metabolic imaging to probe iron concentration or water diffusion in the eye.”

He notes that with slight modifications, the new antennas could enable MRI scans depicting the release and transport of drugs within the body. Their geometry and design could also be tuned to image organs such as the heart, kidneys or brain. “Another pioneering clinical application involves thermal magnetic resonance, which adds a thermal intervention dimension to an MRI device and integrates diagnostic guidance, thermal treatment and therapy monitoring facilitated by metamaterial RF antenna arrays,” he tells Physics World.

The post Metamaterial antennas enhance MR images of the eye and brain appeared first on Physics World.

https://physicsworld.com/a/metamaterial-antennas-enhance-mr-images-of-the-eye-and-brain/
Tami Freeman

Laser-written glass plates could store data for thousands of years

Scientists at Microsoft Research tout a potential long-term alternative to standard digital archives

The post Laser-written glass plates could store data for thousands of years appeared first on Physics World.

Humans are generating more data than ever before. While most of these data do not need to be stored long-term, some – such as scientific and historical records – would ideally still be retrievable in decades, or even centuries. The problem is that modern digital archive systems such as hard disk drives do not last that long. This means that data must regularly be transferred to new media, which is costly and time-consuming.

A team at Microsoft Research now claims to have found a solution. By using ultrashort, intense laser pulses to “write” data units called phase voxels into glass chips, the team says it has created a medium that could store 4.8 terabytes (TB) of data error-free for more than 10,000 years – a span that exceeds the age of history’s oldest surviving written records.

Direct laser writing

The idea of writing data into glass or other durable media with lasers is not new. Direct laser writing, as it is known, involves focusing high-power pulses, usually just femtoseconds (10⁻¹⁵ s) long, on a three-dimensional region within a medium. This modifies the medium’s optical properties in that region, and each modified region becomes a data-storage unit known as a voxel, which is the 3D equivalent of a pixel.

Because the laser’s energy is focused on a very small volume, the voxels created with this method can be very densely packed. Changing the amplitude and polarization of the laser’s output changes what information gets encoded at each voxel, and an optical microscope can “read out” this information by picking up changes in the light as it passes through each modified region. In terms of the media used, glass is particularly promising because it is thermally and chemically stable and is robust to moisture and electromagnetic interference.

Direct laser writing does have some limitations, however. In particular, encoding information generally requires multiple laser pulses per voxel, restricting the technique’s throughput and efficiency.

Two types of voxel, one laser pulse

Microsoft Research’s “Project Silica” team says it overcame this problem by encoding information in two types of voxel: phase voxels and birefringent voxels. Both types involve modifying the refractive index of the medium, and thus the speed of light within it. The difference is that whereas phase voxels create an isotropic change in the refractive index, birefringent voxels create an anisotropic change by rotating the voxel in the plane of the 120-mm square, 2-mm-thick glass chip.

Crucially, both types of voxel can be produced using a single laser pulse. According to Project Silica team leader Richard Black, this makes the modified region smaller and more uniform, minimizing effects such as light scattering that can interfere with read-outs from neighbouring voxels. It also allows many voxel layers to be written into, and then read out from, a single glass chip. The result is a system that can generate up to 10 million voxels per second, which equates to a data rate of 25.6 megabits per second (Mbit s⁻¹).

Performance of different types of glass

The Microsoft researchers studied two types of glass, both of which have better mechanical properties than ordinary window glass. In 301 layers of fused silica glass, they achieved a data density of 1.59 Gbit mm⁻³ using birefringent voxels, with a write throughput of 25.6 Mbit s⁻¹ and a write efficiency of 10.1 nJ per bit. In 258 layers of borosilicate glass, the data density reached 0.678 Gbit mm⁻³ using phase voxels. Here, the write throughput was 18.4 Mbit s⁻¹ and the write efficiency 8.85 nJ per bit.
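
As a rough cross-check on these figures, here is a back-of-envelope sketch: the chip dimensions, densities and throughputs are taken from the article, while the usable-volume fraction is inferred rather than reported.

```python
# Back-of-envelope check on Project Silica's reported figures.
# Chip size, data density and throughput are quoted in the article;
# everything derived from them here is an estimate.

chip_side_mm = 120.0        # 120-mm square chip
chip_thickness_mm = 2.0     # 2 mm thick
density_gbit_mm3 = 1.59     # fused silica, birefringent voxels

volume_mm3 = chip_side_mm**2 * chip_thickness_mm             # 28,800 mm^3
raw_capacity_tb = volume_mm3 * density_gbit_mm3 / 8 / 1000   # Gbit -> TB
print(f"raw capacity: {raw_capacity_tb:.1f} TB")             # ~5.7 TB

# The reported 4.8 TB therefore implies that roughly 84% of the chip
# volume carries user data once margins and error correction are counted.

# Write time at the quoted single-beam throughput of 25.6 Mbit/s,
# and bits per voxel at the quoted 10 million voxels per second:
write_days = 4.8e12 * 8 / 25.6e6 / 86400
print(f"single-beam write time: {write_days:.0f} days")      # ~17 days
print(f"bits per voxel: {25.6e6 / 10e6:.2f}")                # ~2.6
```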

“The phase voxel discovery in particular is quite notable because it lets us store data in ordinary borosilicate glass, rather than pure fused silica; do it with a single laser pulse per voxel; and do it highly parallel in close proximity,” says Black. “That combination of cheaper material and much simpler and faster writing and reading was a genuinely exciting moment for us.”

The researchers also showed that they could directly inscribe the glass using four independent laser beams in parallel, further increasing the write speeds for both types of glass.

Surviving “benign neglect”

To determine how long these inscribed glass plates could store data, the team repeatedly heated them to 500 °C, simulating their long-term ageing at lower temperatures. The results of these experiments suggest that encoded data could be retrieved after 10,000 years of storage at 290 °C. However, Black acknowledges that this figure does not account for external effects such as mechanical stress or chemical corrosion that could degrade the glass and the data it stores. Another unaddressed challenge is that storage capacity and writing speed will both need to grow before the technology can compete with today’s data centres.
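
Accelerated-ageing tests of this kind are usually interpreted with an Arrhenius-type extrapolation. As a sketch of the idea – the paper’s exact kinetic model is not given here, so treat the form below as an assumption – the equivalent storage time at a lower temperature follows from

$$ t_{\text{store}} = t_{\text{test}}\,\exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_{\text{store}}}-\frac{1}{T_{\text{test}}}\right)\right], $$

where $E_a$ is the activation energy of the dominant degradation process and the temperatures are absolute. Because the exponent grows steeply as $T_{\text{store}}$ falls, degradation reached in hours at 500 °C maps onto millennia at lower temperatures.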

If these deficiencies can be remedied, Black thinks the clearest potential applications would be in national libraries and other facilities that store scientific data and cultural records. “It’s also compelling for cloud archives where data is written once and kept indefinitely,” Black says. He points out that the team has already demonstrated proofs of concept with Warner Bros., the Global Music Vault and the Golden Record 2.0 project, a “cultural time capsule” inspired by the literal golden records launched on the Voyager spacecraft in the 1970s.

A common factor across all these organizations, Black explains, is that they need media that can survive “benign neglect” – something he says Project Silica delivers. He adds that the project also provides what he calls operational proportionality, meaning that its costs are primarily a function of the operations performed on the data, not the length of time the data are kept. “This completely alters the way we think about keeping archival material,” he says. “Once you have paid to keep the data, there is little point in deleting it, and you might as well keep it.”

Microsoft began exploring direct laser data storage in glass nearly a decade ago thanks to team member Ant Rowstron, who recognized the potential of work being done by physicist Peter Kazansky and colleagues at the University of Southampton, UK. The latest version of the technique, which is detailed in Nature, grew out of that collaboration, and Black says its capabilities are limited only by the power and speed of the femtosecond laser being used. “We have now concluded our research study and are sharing our results so that others may build on our work,” he says.

The post Laser-written glass plates could store data for thousands of years appeared first on Physics World.

https://physicsworld.com/a/laser-written-glass-plates-could-store-data-for-thousands-of-years/
Isabelle Dumé

Ultrasound system solves the ‘unsticking problem’ in biomedical research

Impulsonics’ “surround sound” technology frees up living cells

The post Ultrasound system solves the ‘unsticking problem’ in biomedical research appeared first on Physics World.

“Surround sound for biological cells,” is how Luke Cox describes the ultrasound technology that Impulsonics has developed to solve the “unsticking problem” in biomedical science. Cox is co-founder and chief executive of UK-based Impulsonics, which spun out of the University of Bristol in 2023.

He is also my guest in this episode of the Physics World Weekly podcast. He explains why living cells grown in a petri dish tend to stick together, and why this can be a barrier to scientific research and the development of new medical treatments.

The system uses an array of ultrasound transducers to focus sound so that it frees up and manipulates cells in a way that does not alter their biological properties. This is unlike chemical unsticking processes, which can change cells and impact research results.

We also chat about Cox’s career arc from PhD student to chief executive and explore opportunities for physicists in the biomedical industry.

The post Ultrasound system solves the ‘unsticking problem’ in biomedical research appeared first on Physics World.

https://physicsworld.com/a/ultrasound-system-solves-the-unsticking-problem-in-biomedical-research/
Hamish Johnston

Scientists are failing to disclose their use of AI despite journal mandates, finds study

Analysis finds that the use of AI in scientific writing is increasing

The post Scientists are failing to disclose their use of AI despite journal mandates, finds study appeared first on Physics World.

An analysis of more than 5.2 million papers in 5000 different journals has revealed a dramatic rise in the use of artificial intelligence (AI) tools in academic writing across all scientific disciplines, especially physics.

However, the analysis has revealed a big gap between the number of researchers who use AI and those who admit to doing so – even though most scientific journals have policies requiring the use of AI to be disclosed.

Carried out by data scientist Yi Bu from Peking University and colleagues, the analysis looks at papers that are listed in the OpenAlex dataset and were published between 2021 and 2025.

To assess the impact of editorial guidelines introduced in response to the growing use of generative AI tools such as ChatGPT, they examined journal AI-writing policies, looked at author disclosures and used AI to see if papers had been written with the help of technology.

The AI detection analysis reveals that the use of AI writing tools has increased dramatically across all scientific disciplines since 2023. It also finds that 70% of journals have adopted AI policies, which primarily require authors to disclose the use of AI-writing tools.

IOP Publishing, which publishes Physics World, for example, has a journals policy that supports authors who use AI in a “responsible and appropriate” manner. It encourages authors, however, to be “transparent about their use of any generative AI tools in either the research or the drafting of the manuscript”.

A new framework

In the new study, however, a full-text analysis of 75,000 papers published since 2023 reveals that only 76 articles (about 0.1% of the total) explicitly disclosed the use of AI writing tools.

In addition, the study finds no significant difference in the use of AI between journals that have disclosure policies and those that do not, which suggests that disclosure requirements are being ignored – what the authors call a “transparency gap”.

The study also finds that researchers from non-English-speaking countries are more likely to rely on AI writing tools than native English speakers. Increases in the use of AI writing tools are found to be particularly rapid in journals with high levels of open-access publishing.

The authors now call for a re-evaluation of ethical frameworks to foster responsible AI integration in science. They state that prohibition or disclosure requirements are insufficient to regulate AI use, with their results showing that researchers are not complying with policies.

The authors argue that instead of “opposition and resistance”, “proactive engagement and institutional innovation” is needed “to ensure AI technology truly enhances the value of science”.

The post Scientists are failing to disclose their use of AI despite journal mandates, finds study appeared first on Physics World.

https://physicsworld.com/a/scientists-are-failing-to-disclose-their-use-of-ai-despite-journal-mandates-finds-study/
No Author

The humanity of machines: the relationship between technology and our bodies

Anita Chandran reviews The Body Digital: a Brief History of Humans and Machines from Cuckoo Clocks to ChatGPT by Vanessa Chang

The post The humanity of machines: the relationship between technology and our bodies appeared first on Physics World.

Humanity has had a complicated relationship with machines and technology for centuries. While we created these inventions to make our lives easier, and have become heavily reliant upon them, we have often feared their impact on society.

In her debut book, The Body Digital: a Brief History of Humans and Machines from Cuckoo Clocks to ChatGPT, Vanessa Chang tells the story of this symbiotic partnership, covering tools as diverse as the self-playing piano and generative AI products. The short book combines creative storytelling, an inward look at our bodies and interpersonal relationships, and a detailed history of invention. Chang – who is the director of programmes at Leonardo, the International Society for the Arts, Sciences, and Technology in California – offers us a framework for examining future worlds based on the relationship between humanity and machines.

“Technology” has no easy definition. The Body Digital therefore takes a broad approach, looking at software, machines, infrastructure and tools. Chang examines objects as mundane as the pen and as complex as the road networks that define our cities. She focuses on the interplay between machine and human: how tools have lightened our load and become embedded in our behaviour. In doing this she asks the reader: is it possible for the human body to extract itself from technology?

Each chapter of the book centres on a different part of the human anatomy – hand, voice, ear, eye, foot, body and mind – looking at the historical relationship between that body part and technology. Chang follows this thread through to the modern day and the large-scale impact these technologies have had on the development of our communities, communications and social structures. The chapters are a vehicle for Chang to present interesting pieces of history and discussions about society and culture. Her explanations are tightly knit, and the book covers huge ground in its relatively concise page count.

Chang avoids “doomerism”, remaining even-handed about our reservations towards technological advancement. She is careful in her discussion of new technologies, particularly those that are fraught in public discourse, such as the use of generative AI in creating art and the potential harms of facial-recognition software.

She includes genuine concerns – like biases creeping into training data for large language models – but mitigates these fears by discussing how technologies have become enmeshed in human culture through history. Our fear of some technologies has been unfounded – take, for example, the idea that the self-playing piano would supersede live piano concerts. These debates, Chang argues, have happened throughout the history of technology, and some of the same arguments from the past can easily be applied to future technology.

While this commentary is often thought-provoking, it sometimes doesn’t go as far as it might. There is relatively limited discussion throughout the book of the technological ecosystem we currently live in and how it might temper our optimism about the future. In particular, the supplanting of human labour by machine labour, and the influence of tech monoliths like Apple and Google, receive only passing attention.

In one example, Chang discusses the ways in which “telecommunication technologies might serve as channels into the afterlife”, allowing us to use technology to artificially recreate the voices of our loved ones after death. While the book contains a full discussion of how uncanny and alarming this type of “artistic necrophilia” might be, Chang tempers fear by pointing out that by being careful with our data, careful with our digital selves, we might be able to “mitigate the transformation of [our] voices into pure commodities”. However, the book’s treatment of who controls our data, of the relationship between data and capital, and of how much control we really have over its use is somewhat limited.

Poetic technology

The difference between offering interesting ideas and overexplaining is a hard needle to thread, and one that Chang navigates successfully. One striking feature of The Body Digital is the quality of the prose. Chang has a background in fiction writing and her descriptions reflect this. An automaton is anthropomorphized as a “petite, barefoot boy” with a “cloud of brown hair”; and the humble footpath is described as “veer[ing] at a jaunty angle from the pavement, an unruly alternative to concrete”. As a consequence, her ideas are interesting and memorable, making the book readable and often moving.

Particularly impressive is Chang’s attitude to exposition, which mimics fiction’s age-old adage of “show, don’t tell”. She gives the reader enough information to learn something new in context and ask follow-up questions, without banging the reader over the head with an answer to these questions. The book mimics the same relationship between the written word and human consciousness that Chang discusses within it. The Body Digital marinates with the reader in the way any good novel might, while teaching them something new.

The result is a poetic and well-observed text, which offers the reader a different way of understanding humanity’s relationship with technology. It reminds us that we have coexisted with machines throughout the history of our species, and that they have been helpful and positively shaped the direction of our world. While she covers too much ground to gaze in any one direction for too long, the reader is likely to come away enriched and perhaps even hopeful. And, as Chang points out, we have the opportunity to shape the future of technology, by “attending to the rich, idiosyncratic intelligence of our bodies”.

The post The humanity of machines: the relationship between technology and our bodies appeared first on Physics World.

https://physicsworld.com/a/the-humanity-of-machines-the-relationship-between-technology-and-our-bodies/
No Author

Making multipartite entanglement easier to detect

New advances in entanglement witnesses allow researchers to verify genuine multipartite entanglement even in noisy, high‑dimensional and computationally relevant quantum states

The post Making multipartite entanglement easier to detect appeared first on Physics World.

Genuine multipartite entanglement is the strongest form of entanglement, where every part of a quantum system is entangled with every other part. It plays a central role in advanced quantum tasks such as quantum metrology and quantum error correction. To detect this deep form of entanglement in practice, researchers often use entanglement witnesses – fast, experimentally friendly tests that certify entanglement whenever a measurable quantity exceeds a certain bound.
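
A textbook example of this construction – not necessarily the exact form used in the paper, which works within the stabiliser formalism – is the projector-based witness for a target state $|\psi\rangle$:

$$ W = \alpha\,\mathbb{1} - |\psi\rangle\langle\psi|, \qquad \alpha = \max_{\rho\ \text{biseparable}} \langle\psi|\rho|\psi\rangle, $$

so that measuring $\mathrm{Tr}(W\rho) < 0$ certifies genuine multipartite entanglement of $\rho$. For an $N$-qubit GHZ state, for instance, $\alpha = 1/2$.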

In this work, the researchers significantly extend previous witness‑construction methods to cover a much broader family of multipartite quantum states. Their approach is built within the multi‑qudit stabiliser formalism, a powerful framework widely used in quantum error correction and known for describing large classes of entangled states, both pure and mixed. They generalise earlier results in two major directions: (i) to systems with arbitrary prime local dimension, going far beyond qubits, and (ii) to stabiliser subspaces, where the stabiliser defines not just a single state but an entire entangled subspace.

This generalisation allows them to construct witnesses tailored to high‑dimensional graph states and to stabiliser‑defined subspaces, and they show that these witnesses can be more robust to noise than those designed for multiqubit systems. In particular, witnesses tailored to GHZ‑type states achieve the strongest resistance to white noise, and in some cases the authors identify the most noise‑robust witness possible within this construction. They also demonstrate that stabiliser‑subspace witnesses can outperform graph‑state witnesses when the local dimension is greater than two.

Overall, this research provides more powerful and flexible tools for detecting genuine multipartite entanglement in noisy, high‑dimensional and computationally relevant quantum systems. It strengthens our ability to certify complex entanglement in real‑world quantum technologies and opens the door to future extensions beyond the stabiliser framework.

Read the full article

Entanglement witnesses for stabilizer states and subspaces beyond qubits

Jakub Szczepaniak et al 2025 Rep. Prog. Phys. 88 117602

Do you want to learn more about this topic?

Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)

The post Making multipartite entanglement easier to detect appeared first on Physics World.

https://physicsworld.com/a/making-multipartite-entanglement-easier-to-detect/
Lorna Brigham

Resolving the spin of sound

Researchers show how sound waves can hold conserved spin angular momentum, resolving a long‑standing theoretical debate

The post Resolving the spin of sound appeared first on Physics World.

Acoustic waves are usually thought of as purely longitudinal, moving back and forth in the direction the wave is travelling and having no intrinsic rotation, therefore no spin (spin‑0). Recent work has shown that acoustic waves can in fact carry local spin‑like behaviour. However, until now, the total spin angular momentum of an acoustic field was believed to vanish, with the local positive and negative spin contributions cancelling each other to give an overall global spin‑0. In this work, the researchers show that acoustic vortex beams can carry a non‑zero longitudinal spin angular momentum when the beam is guided by certain boundary conditions. This overturns the long‑held assumption that longitudinal waves cannot possess a global spin degree of freedom.
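
For orientation, the local acoustic spin density at the heart of this debate is commonly written as follows – a form widely quoted in the recent literature, though not necessarily the paper’s own notation:

$$ \mathbf{s} = \frac{\rho_0}{2\omega}\,\operatorname{Im}\!\left(\mathbf{v}^{*}\times\mathbf{v}\right), $$

where $\rho_0$ is the fluid density, $\omega$ the angular frequency and $\mathbf{v}$ the complex velocity field. The new claim is that, for suitably guided vortex beams, the integral of this density over the beam no longer vanishes.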

Using a self‑consistent theoretical framework, the researchers derive the full spin, orbital and total angular momentum of these beams and reveal a new kind of spin–orbit interaction that appears when the beam is compressed or expanded. They also uncover a detailed relationship between the two competing descriptions of angular momentum in acoustics: the canonical (Minkowski) and kinetic (Abraham) forms. They demonstrate that only the canonical‑Minkowski form is truly conserved and directly tied to the beam’s azimuthal quantum number, which describes how the wave twists as it travels.

The team further demonstrates this mechanism experimentally using a waveguide with a slowly varying cross‑section. They show that the effect is not limited to this setup: it can also arise in evanescent acoustic fields and even in other wave systems such as electromagnetism. These results introduce a missing fundamental degree of freedom in longitudinal waves, offer new strategies for manipulating acoustic spin and orbital angular momentum, and open the door to future applications in wave‑based devices, underwater communication and particle manipulation.

Read the full article

Longitudinal acoustic spin and global spin–orbit interaction in vortex beams

Wei Wang et al 2025 Rep. Prog. Phys. 88 110501

Do you want to learn more about this topic?

Acoustic manipulation of multi-body structures and dynamics by Melody X Lim, Bryan VanSaders and Heinrich M Jaeger (2024)

The post Resolving the spin of sound appeared first on Physics World.

https://physicsworld.com/a/resolving-the-spin-of-sound/
Lorna Brigham

Quantum memories could help make long-baseline optical astronomy a reality

Single-photon interferometry achieved over 1.5 km

The post Quantum memories could help make long-baseline optical astronomy a reality appeared first on Physics World.

Quantum-entangled sensors placed over a kilometre apart could allow interferometric measurements of optical light with single photon sensitivity, experiments in the US suggest. While this proof-of-principle demonstration of a theoretical proposal first made in 2012 is not yet practically useful for astronomy, it marks a significant step forward in quantum sensing.

Radio telescopes are often linked together to provide more detailed images with better angular resolution than would otherwise be possible. The Event Horizon Telescope array, for example, performs very long baseline interferometry of signals from observatories on four continents to take astrophysical images such as the first picture of a black hole in 2019. At shorter wavelengths, however, much weaker signals are often parcelled into higher-energy photons. “You start getting this granularity at the single photon level,” says Pieter-Jan Stas at Harvard University.

According to textbook quantum mechanics, one can create an interferometric image from single photons by recombining their paths at a single detector – provided that their paths are not measured before then. This principle is used in laboratory spectroscopy. In astronomical observations, however, attempting to transport single photons from widely spread telescopes to a central detector would almost certainly result in them being lost. The baseline of infrared and optical telescopes is therefore restricted to about 300 m.
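
The payoff of a longer baseline follows from the diffraction limit on angular resolution – a standard estimate, not a figure from the paper:

$$ \theta \approx \frac{\lambda}{B}, $$

so a 300 m baseline at $\lambda = 500$ nm resolves roughly 1.7 nanoradians (about 0.3 milliarcseconds), and stretching the baseline to 1.5 km would sharpen the resolution fivefold.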

In 2012, theorist Daniel Gottesman, then at the Perimeter Institute for Theoretical Physics in Canada, and colleagues proposed using a central single source of entangled photons as a quantum repeater to generate entanglement between two detection sites, putting them into the same quantum state. The effect of an incoming photon on this combined state could therefore be measured without having to recombine the paths and collect the photon at a central detector.

Hidden information

“In reality, the photon will be in a superposition of arriving at both of the detectors,” says Stas. “That’s where this advantage comes from – you have this photon that is delocalized and arrives at both the left and the right station – so you truly have this baseline that helps you with improving your resolution, but to do this you have to keep the ‘which path’ information hidden.”

The 2012 proposal was not thought to be practical, because it required distributing entanglement at a rate comparable with the telescope’s spectral bandwidth. In 2019, however, Harvard’s Mikhail Lukin and colleagues proposed integrating a quantum memory into the system. In the new research, they demonstrate this in practice.

The team used qubits made from silicon–vacancy centres in diamond. These can be very long lived because the spin of the centre’s electron (which interacts with the photon) is mapped to the nuclear spin, which is very stable. The researchers used a central laser as a coherent photon source to generate heralded entanglement, certifying that the qubits were “event-ready”. “It’s not like you have to receive the space signal to be simultaneous with the arrival of the photon,” says team member Aziza Suleymanzade at the University of California, Berkeley. “In our case, we distribute entanglement, and it has some coherence time, and during that time you can detect your signal.”

Using two detectors placed in adjacent laboratories and synthetic light sources, the researchers demonstrated photon detection above vacuum fluctuations in fibres over 1.5 km in length. They acknowledge that much work remains before this can be viable in practical astronomy, such as a higher rate of entanglement generation, but Stas says that “this is one step towards bringing quantum techniques into sensing”.

Similar work in China

The research is described in Nature. Researchers in China led by Jian-Wei Pan have achieved a similar result, but their work has yet to be peer reviewed.

Yujie Zhang of the University of Waterloo in Canada points out that Lukin and colleagues have done similar work on distributed quantum communication and the quantum internet. “The major difference is that for most of the original protocols, what people care about is trying to entangle different quantum memories in the quantum network so then they can do gates on those quantum memories,” he says. “There’s nothing about extra information from the environment…This one is different in that they have to get the information mapped from the starlight to their quantum memory.” He notes several difficulties acknowledged by the researchers – such as that vacancy centres are very narrowband – but says that now people know the system can work, they can work to show that it can beat classical systems in practice.

“I think this is definitely a step towards [realizing the protocol envisaged in 2012],” says Gottesman, now at the University of Maryland, College Park. “There have been previous experiments where they generated the entanglement and they did some interference but they didn’t have the repeater aspect, which is the real value-added aspect of doing quantum-assisted interferometry. Its rate is still well short of what you’d need to have a functioning telescope, but this is putting one of the important pieces into place.”

The post Quantum memories could help make long-baseline optical astronomy a reality appeared first on Physics World.

https://physicsworld.com/a/quantum-memories-could-help-make-long-baseline-optical-astronomy-a-reality/
No Author

UK physics leaders express ‘deep concern’ over funding cuts in letter to science minister Patrick Vallance

Heads of almost 60 physics departments sign letter saying UK funding cuts are causing “reputational risk”

The post UK physics leaders express ‘deep concern’ over funding cuts in letter to science minister Patrick Vallance appeared first on Physics World.

The heads of university physics departments in the UK have published an open letter expressing their “deep concern” about funding changes announced late last year by UK Research and Innovation (UKRI), the umbrella organization for the UK’s research councils.

Addressed to science minister Patrick Vallance, the letter says the cuts are causing “reputational risk” and calls for “strategic clarity and stability” to ensure that UK physics can thrive.

It has so far been signed by 58 people who represent 45 different universities, including Birmingham, Bristol, Cambridge, Durham, Imperial College, Liverpool, Manchester and Oxford.

The letter says that the changes at UKRI “risk undermining science’s fundamental role in improving our prosperity, health and quality of life, as well as delivering sustainable growth through innovation, productivity and scientific leadership”.

The signatories warn that the UK’s international standing in physics is “a strategic asset” and that areas such as particle physics, astronomy and nuclear physics are “especially important”.

Raising concerns

The decision by the heads of physics to write to Vallance comes in the wake of UKRI stating in December that it will be adjusting how it allocates government funding for scientific research and infrastructure.

The Science and Technology Facilities Council (STFC), which is part of UKRI, stated that projects would need to be cut given inflation, rising energy costs as well as “unfavourable movements in foreign exchange rates” that have increased STFC’s annual costs by over £50m a year.

The STFC noted that it would need to reduce spending from its core budget by at least 30% relative to 2024/2025 levels while also cutting the number of projects financed by its infrastructure fund.

The council has already said two UK national facilities – the Relativistic Ultrafast Electron Diffraction and Imaging facility and a mass spectrometry centre dubbed C‑MASS – will now not be prioritized.

In addition, two international particle-physics projects will not be supported: a UK-led upgrade to the LHCb experiment at CERN as well as a contribution to the Electron-Ion Collider at the Brookhaven National Laboratory that is currently being built.

Philip Burrows, director of the John Adams Institute for Accelerator Science at the University of Oxford, who is one of the signatories of the letter, told Physics World that the cuts are “like buying a Formula-1 car but not being able to afford the driver”.

Burrows admits that the STFC has been hit “particularly hard” by its flat-cash settlement, given that a large fraction of its expenditure goes on paying the UK’s subscriptions to international facilities and on operating the UK’s flagship national facilities.

But because most of the rest of the STFC’s budget supports scientists to do research at those facilities, he is concerned that the funding cuts will fall disproportionately on the science programme.

“Constraining these areas risks weakening the very talent pipeline on which the UK’s innovation economy depends,” the letter states. “Fundamental physics also delivers substantial public engagement and cultural impact, strengthening public support for science and reinforcing the UK’s reputation as a global scientific leader.”

The signatories also say they are “particularly concerned” about the UK’s capacity to lead the scientific exploitation of major international projects. “An abrupt pause in funding for key international science programmes risks damaging UK researchers’ competitive advantage into the 2040s,” they note.

The letter now calls on the government to work with UKRI and STFC to “stabilize” curiosity-driven grants for physics within STFC “at a minimum of flat funding in real terms” as well as protect postdocs, students and technicians from the cuts.

It also calls on the UK to develop a long-term strategy for infrastructure and on the government to address facilities cost pressures through “dedicated and equitable mechanisms so that external shocks do not singularly erode the UK’s research base in STFC-funded research areas”.

The news comes as Michele Dougherty today formally stepped down from her role as IOP president. Dougherty, who also holds the position of executive chair of the STFC, had previously stepped back from presidential duties on 26 January due to a conflict of interest.

Paul Howarth, who has been IOP president-elect since September, will now become IOP president.

The post UK physics leaders express ‘deep concern’ over funding cuts in letter to science minister Patrick Vallance appeared first on Physics World.

https://physicsworld.com/a/uk-physics-leaders-express-deep-concern-over-funding-cuts-in-letter-to-science-minister-patrick-vallance/
Michael Banks

Ancient reversal of Earth’s magnetic field took an extraordinarily long time

Field-flipping event 40 million years ago in the Eocene epoch lasted 70,000 years

The post Ancient reversal of Earth’s magnetic field took an extraordinarily long time appeared first on Physics World.

The Earth’s magnetic poles have reversed 540 times over the past 170 million years. Usually, these reversals are relatively speedy in geological terms, taking around 10,000 years to complete. Now, however, scientists in the US, France and Japan have found evidence of much slower reversals deep in Earth’s geophysical past. Their findings could have important implications for our understanding of Earth’s climate and evolutionary history.

Scientists think the Earth’s magnetic field arises from a dynamo effect created by molten metal circulating inside the planet’s outer core. Its consequences include the bubble-like magnetosphere, which shields us from the solar wind and cosmic radiation that would otherwise erode our atmosphere.

From time to time, this field weakens, and the Earth’s magnetic north and south poles switch places. This is known as a geomagnetic reversal, and we know about it because certain types of terrestrial rocks and marine sediment cores contain evidence of past reversals. Judging from this evidence, reversals usually take a few thousand years, during which time the poles drift before settling again on opposite sides of the globe.

Looking into the past

Researchers led by Yuhji Yamamoto of Kochi University, Japan and Peter Lippert at the University of Utah, US, have now identified two major exceptions to this rule. Drawing on evidence obtained during the Integrated Ocean Drilling Program expedition in 2012, they say that around 40 million years ago, during the Eocene epoch, the Earth experienced two reversals that took 18,000 and 70,000 years.

The team based these findings on cores of sediment extracted off the coast of Newfoundland, Canada, up to 250 metres below the seabed. These cores contain crystals of magnetite produced by ancient microorganisms and other natural processes. The iron oxide crystals aligned with the Earth’s magnetic field as the sediments were deposited, preserving a record of its polarity. Because marine sediments are far less affected by erosion and weathering than sediments onshore, Yamamoto says the information they preserve about past Earth environments – including geomagnetic conditions – is exceptionally clean.

Significance for evolutionary history

The team says the difference between a geomagnetic reversal that takes 10,000 years and one that takes 70,000 years is significant because prolonged intervals of weaker geomagnetic fields would have exposed the Earth to higher amounts of cosmic radiation for longer. The effects on living creatures could have been devastating, says Lippert. As well as higher rates of genetic mutations due to increased radiation, he points out that organisms from bacteria to birds use the Earth’s magnetic field while navigating. “A lower strength field would create sustained pressures on these organisms to adapt,” he says.

If humans had existed at the time of these reversals, the effects on our species could have been similarly profound. “Modern humans (Homo sapiens) are thought to have begun dispersing out of Africa only about 50,000 years ago,” Yamamoto observes. “If a geomagnetic reversal can persist for a period comparable to – or even longer than – this timescale, it implies that the Earth’s environment could undergo substantial and continuous change throughout the entire period of human evolution.”

Although our genetic ancestors dodged that particular bullet, Yamamoto thinks the team’s findings, which are published in Nature Communications Earth & Environment, offer a valuable perspective on how evolution and environmental change could interact in the future. “This period corresponds to an epoch when Earth was far warmer than it is today, and when Greenland is thought to have been a truly ‘green land’,” he explains. “We also know that atmospheric CO₂ concentrations during this era were comparable to levels projected for the end of this century, making it an important ‘climate analogue’ for understanding near‑future climate conditions.”

The discovery could also have more direct implications for future life on Earth. The magnitude of the Earth’s magnetic field has decreased by around 5% in each century since records began. This decrease, combined with the slow drift of our current magnetic North Pole towards Siberia, could indicate that we are in the early stages of a new geomagnetic reversal. Re‑evaluating the duration of such reversals is thus not only an issue for geophysicists, Yamamoto says. It’s also an important opportunity to reconsider fundamental questions about how we should coexist with our planet and how we ought to confront a continually changing environment.

Motivation for future studies

John Tarduno, a geophysicist at the University of Rochester, US, who was not involved in the study, describes it as “outstanding” work that “documents an exciting discovery bearing on the nature of magnetic shielding through time and the geomagnetic reversal process”. He agrees that reduced shielding could have had biotic effects, and adds that the discovery of long reversal transitions could influence scientific thinking on the statistics of field reversals – including questions of whether the field retains some “memory” of previous events. “This new study will provide motivation to examine reversal transitions at very high resolution,” Tarduno says.

For their next project, Yamamoto and colleagues aim to use sequences of lava flows in Iceland to analyse how the Earth’s magnetic field evolved. Lippert’s team, for its part, will be studying features called geomagnetic excursions that appear in both deep sea and terrestrial sediments. Such excursions are evidence of short-lived, incomplete attempts at field reversals, and Lippert explains that they can be excellent stratigraphic markers, helping scientists correlate records on geological timescales and compare them with samples taken from different parts of the world. “Excursions, like long reversals, can inform our understanding of what ultimately causes a geomagnetic field reversal to start and persist to completion,” he says.

The post Ancient reversal of Earth’s magnetic field took an extraordinarily long time appeared first on Physics World.

https://physicsworld.com/a/ancient-reversal-of-earths-magnetic-field-took-an-extraordinarily-long-time/
Isabelle Dumé

Focusing on fusion: Debbie Callahan talks commercial laser fusion

Plasma physicist Debbie Callahan, chief strategy officer at Focused Energy, talks to Hamish Johnston about her work in laser fusion research

The post Focusing on fusion: Debbie Callahan talks commercial laser fusion appeared first on Physics World.

Debbie Callahan

With the world’s energy demands increasing, and our impact on the climate becoming ever clearer, the search is on for greener, cleaner energy production. That’s why research into fusion energy is undergoing something of a renaissance.

Construction of the International Thermonuclear Experimental Reactor (ITER) in France – the world’s largest fusion experiment – is currently under way, while there are numerous other large-scale facilities and academic research projects too. There has also been a rise in the number of smaller commercial companies joining the race.

One person at the forefront of fusion research is Debbie Callahan – a plasma physicist who spent 35 years working at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in the US. She is now chief strategy officer at Focused Energy, a laser-fusion firm based in Germany and California, which is trying to generate energy from the laser-driven fusion of hydrogen isotopes.

Callahan recently talked to Physics World online editor Hamish Johnston about working in the fusion sector, Focused Energy’s research and technology, and the career opportunities available. The following is an edited extract of their conversation, which you can hear in full on the Physics World Weekly podcast.

How does NIF’s approach to fusion differ from that taken by magnetic confinement facilities such as ITER?

To get fusion to happen, you need three elements that we sometimes call the triple product. You need a certain amount of density in your plasma, you need temperature, and you need time. The product of those has to be over a certain value.

Magnetic fusion and inertial fusion are kind of the opposite of each other. In a magnetic fusion system like ITER, you have a low-density plasma, but you hold it for a long time. You do that by using magnetic fields that trap the plasma and keep it from escaping.

In inertial fusion – like at NIF – it’s the opposite. You don’t hold the plasma together at all, it’s only held by its own inertia, and you have a very high density for a short time. In both cases, you can make fusion happen.
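
For deuterium–tritium fuel, this condition is often quoted as the Lawson triple product – the value below is an approximate textbook figure, not one from the interview:

$$ n\,T\,\tau_E \gtrsim 3\times 10^{21}\ \text{keV s m}^{-3}, $$

where $n$ is the plasma density, $T$ its temperature and $\tau_E$ the energy confinement time. Magnetic schemes sit at low $n$ and long $\tau_E$; inertial schemes compress to enormous $n$ for a $\tau_E$ of order tens of picoseconds.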

What is the current state of the art at NIF, in terms of how much energy you have to put in to achieve fusion versus how much you get out?

To date, the best shot at NIF – by which I mean an individual, high-energy laser bombardment of the target capsule – occurred during an experiment in April 2025, which had a target gain of about 4.1. That means that they got out 4.1 times the amount of energy that they put in. The incident laser energy for those shots is around two megajoules, so they got out about eight megajoules.

This is a tremendous accomplishment that’s taken decades to get to. But to make inertial fusion energy successful and use it in a power plant, we need significantly higher gains of more like 50 to 100.

Target chamber at a fusion facility

Can you explain Focused Energy’s approach to fusion?

Focused Energy was founded in July 2021, and has offices in the US and Germany. Just a month later, NIF achieved fusion ignition, which is when the fusion fuel becomes hot enough for the reactions to sustain themselves through their own internal heating (it is not the same as gain).

At NIF, lasers are fired into a small cylinder of gold or depleted uranium and the energy is converted into X-rays, which then drive the capsule. It’s what’s called laser indirect drive. At Focused Energy, however, we’re directly driving the capsule. The laser energy is put directly on the capsule, with no intermediate X-rays.

The advantage of this approach is that converting laser energy to X-rays is not very efficient. It makes it much harder to get the high target gains that we need. At Focused Energy, we believe that direct drive is the best option for fusion energy to get us to a gain of over 50.

So is boosting efficiency one of your key goals to make fusion practical at an industrial level?

Yes, exactly. You have to remember that NIF was funded for national security purposes, not for fusion energy. It wasn’t designed to be a power plant – the goal was just to generate fusion energy for the first time.

In particular, the laser at NIF is less than 1% efficient but we believe that for fusion power generation, the laser needs to be about 10% efficient.

So one of the big thrusts for our company is to develop more efficient lasers that are driven by diodes – called diode-pumped solid-state lasers.

Can you tell us about Focused Energy’s two technologies called LightHouse and Pearl Fuel?

LightHouse is our fusion pilot plant. When operational, it will be the first power plant to produce engineering gain greater than one, meaning it will produce more energy than it took to drive it. In other words, we’ll be producing net electricity.

For NIF, in contrast, gain is the amount of energy out relative to the amount of laser energy in. But the laser is very inefficient, so the amount of electricity they had to put in to produce that eight megajoules of fusion energy is a lot.
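
The distinction between target gain and engineering gain can be made concrete with a rough sketch; the efficiencies below are illustrative round numbers, not Focused Energy figures.

```python
# Illustrative engineering-gain estimate for a laser-fusion plant.
# All efficiencies are assumed round numbers for the sake of the sketch.

def engineering_gain(target_gain, laser_eff, thermal_eff):
    """Electrical energy out per unit of electrical energy fed to the laser."""
    return target_gain * laser_eff * thermal_eff

# NIF-like shot: target gain 4.1, ~1%-efficient laser, ~40% steam cycle.
print(engineering_gain(4.1, 0.01, 0.40))   # ~0.016 -- far below break-even

# Plant-like numbers: target gain 80, 10%-efficient diode-pumped laser.
print(engineering_gain(80, 0.10, 0.40))    # ~3.2 -- net electricity out
```

On these assumptions, even a record NIF shot returns under 2% of the electricity fed into its laser, which is why gains of 50–100 and roughly 10%-efficient lasers come up repeatedly in this conversation.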

Meanwhile, Pearl is the capsule the laser is aimed at in our direct drive system. It’s filled with deuterium–tritium fuel derived from sea water and lithium.

Artist impression of a proposed fusion power plant

How do you develop the capsule to absorb the laser energy and give as much of it to the fuel as possible?

The development of the capsule for a fusion power plant is quite complicated. First, we need it to be a perfect sphere so it compresses spherically. The materials also need to efficiently absorb the laser light so you can minimize the size of that laser.

You have to be able to cheaply and quickly mass produce these targets too. While NIF does 400 shots per year, we will need to do about 900,000 shots a day – about 10 per second. We’ll also have to efficiently remove the exploded target material from the reactor chamber so that it can be cleared for the next shot.

It’s a very complicated design that needs to bring together all the pieces of the power plant in a consistent way.

When you are designing these elements, what plays a bigger role – computer simulations or experiments?

Computer simulations play a large part in developing these designs. But one of the lessons that I learned from NIF was that, although the simulation codes are state of the art, you need very precise answers, and the codes are not quite good enough – experimental data play a huge role in optimizing the design. I expect the same will be true at Focused Energy.

A third factor that’s developing is artificial intelligence (AI) and machine learning. In fact, at Livermore, a project working on AI contributed to achieving gain for the first time in December 2022. I only see AI’s role in fusion getting bigger, especially once we are able to do higher repetition rate experiments, which will provide more training data.

What intellectual property (IP) does Focused Energy have in addition to that for the design of the Pearl target and the LightHouse plant?

We also have IP in the design of the lasers – they are not the same lasers as used at NIF. And I think there’ll be a lot of IP around how we fabricate the targets. After all, it’s pretty complicated to figure out how to build 900,000 targets a day at a reasonable cost.

We’ll see a lot of IP coming out of this project in those areas, but there’s also the act of putting it all together. How we integrate these things in order to make a successful plant is important.

What are the challenges of working with deuterium and tritium as materials for fusion?

We chose deuterium and tritium because they are the easiest elements to fuse, and have been successfully demonstrated as fusion fuel by NIF.

Deuterium can be found naturally in sea water, but getting tritium – which is radioactive – is more complicated. We breed it from lithium. Our reactor designs have lithium in them, and the neutrons from the fusion reactions breed the tritium.
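
The underlying reactions are standard nuclear physics rather than anything proprietary: deuterium and tritium fuse into an alpha particle plus an energetic neutron, and that neutron can regenerate tritium from lithium:

$$ \mathrm{D} + \mathrm{T} \rightarrow {}^{4}\mathrm{He}\,(3.5\ \text{MeV}) + n\,(14.1\ \text{MeV}), \qquad {}^{6}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \text{MeV}. $$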

Making sure that we have enough tritium, and figuring out how to extract that material to use it for future shots, is a big task. We have to be able to breed enough tritium to keep the plant going.

To work on this, we have a collaboration funded by the US Department of Energy to work with Savannah River National Lab in South Carolina. They have a lot of expertise in designing these tritium-extraction systems.

How will you capture the heat from the deuterium–tritium fusion reaction?

We will use a conventional steam cycle to convert the heat into electricity. It’s funny – we’ll have this very hi-tech way of producing heat, but at the end of the day, we will use a traditional system to produce the electricity from that heat.

So what’s the timeline on development?

Our plan is to have a pilot plant up by the end of the 2030s. It’s a fairly aggressive timeline given the things that we have to do. But that’s part of being a start-up – we have to take some risks and try to move quickly to achieve our goal.

To help that we have, in my view, a superpower – we have one foot in Europe and one foot in the US. There are a lot of opportunities between the two continents to partner with other companies, universities and governments. I think that makes us strong because we have access to some of the best talent from around the world.

How does working at Focused Energy compare with life as an academic at Lawrence Livermore?

There are a lot of similarities. My role now is to bring the knowledge and skills I learned at NIF to Focused Energy, so it’s been a natural transition.

In fact, there was a lot of pressure working at NIF. We were trying to move very quickly, so it’s actually very similar to working in a start-up like Focused Energy.

One of the big differences is the level of bureaucracy. Working for a government-funded lab meant there were lots of rules and paperwork, which takes up your time and you don’t always see the value in it.

In contrast, working for a small start-up means we can move more quickly because we don’t have as many of those kinds of constraints. Personally, I find that great because it leaves more time for the fun and interesting things – like trying to get fusion on the grid.

Are you still involved in academic research in any way?

As a firm, we are still out there collaborating with academics. Last year, for example, we gave four separate presentations at the American Physical Society Division of Plasma Physics meeting.

Debbie Callahan speaking on stage

I feel very strongly about peer review. Of course, publishing isn’t our number one priority, but we need feedback from others. We’re trying to do something that no-one’s done before, so it’s important to have our colleagues give us feedback on what we’re doing, point out mistakes we’re making or things we’re forgetting.

Working with universities and national labs in both Europe and the US is vital. Communicating with others in the field is important for us to get to where we want to go.

And of course, being an active part of the fusion community is good for recruitment too. We regularly give presentations at conferences that students attend. We meet those students and they learn about our work – and they might be future employees for our company.

What’s your advice for early-career physicists keen on joining the fusion industry?

There are so many opportunities right now, especially compared to the start of my career when the work was mainly just at universities or national labs. Nowadays, there are a lot of companies in the sector. Not all of them will survive because there’s only so much money, but there are still lots of opportunities. If you’re interested in fusion energy, go for it.

The field is always developing. There’s new stuff happening every day – and new problems. So if you like problem-solving, it’s great, especially if you want to do something good for the world.

There are also opportunities for people who are not plasma physicists. At Focused Energy we have people across so many fields – those who work on lasers, others who work on reactor design, some developing the AI and machine learning, and those who work on target physics, like me. To achieve fusion energy, we need physicists, engineers, mathematicians and computer scientists. We need researchers, technicians and operators. There’s going to be tremendous growth in this sector.

The post Focusing on fusion: Debbie Callahan talks commercial laser fusion appeared first on Physics World.

https://physicsworld.com/a/focusing-on-fusion-debbie-callahan-talks-commercial-laser-fusion/
Hamish Johnston

Shadow sculptures evoke quantum physics

The Midnight Ballet by Will Budgett is a treat for the eyes

The post Shadow sculptures evoke quantum physics appeared first on Physics World.

This winter in Bristol has been even gloomier than usual – so I was really looking forward to the Bristol Light Festival 2026. We went on the last evening of the event (28 February) and we were blessed with dry weather and warmish temperatures.

The festival featured 10 illuminated installations that were scattered throughout Bristol and the crowds were out in force to enjoy them. I wasn’t expecting to be thinking about physics as I wandered through town, but that’s exactly what I found myself doing at an installation called The Midnight Ballet by the British sculptor Will Budgett. Rather appropriately, it was located next to the HH Wills Physics Laboratory at the University of Bristol.

The display comprises seven sculptures that are illuminated from two different directions. The result is two very different images of ballerinas projected onto two screens (see image).

Art and science

So, why was I thinking about physics while admiring the work? To me the pieces embody – in a purely artistic way – the idea of superposition and measurement in quantum mechanics. A sculpture is capable of producing two different images (a superposition of states), but neither of these images is observable until a sculpture is illuminated from specific directions (the measurements).

Now, I know that this analogy is far from perfect. Measurements can be made simultaneously in two orthogonal planes, for example. But, Budgett’s beautiful artworks really made me think about quantum physics. Given the exhibit’s close proximity to the university’s physics department, I suspect I am not the only one.

The post Shadow sculptures evoke quantum physics appeared first on Physics World.

https://physicsworld.com/a/shadow-sculptures-evoke-quantum-physics/
Hamish Johnston

Nuclear-powered transport – how far can it take us?

Honor Powrie looks at the perils and plus-sides of nuclear-powered transport

The post Nuclear-powered transport – how far can it take us? appeared first on Physics World.

In 1942 physicists in Chicago, led by Enrico Fermi, famously produced the world’s first self-sustaining nuclear chain reaction. But it was to be another nine years before electricity was generated from fission for the first time. That landmark event occurred in 1951 when the Experimental Breeder Reactor-I in southern Idaho powered a string of four 200-watt light bulbs.

Our ability to harness nuclear power has been under constant development since then. In fact, according to the Nuclear Energy Association, a record 2667 terawatt-hours of electricity was generated by nuclear reactors around the world in 2024 – up 2.5% on the year before. But what, I wonder, is the potential of nuclear-powered transport?

A “nuclear engine” has many advantages, notably providing a vehicle with an almost unlimited supply of onboard power, with no need for regular refuelling. That’s particularly attractive for large ships and submarines, where fuel stops at sea are few and far between. It’s even better for spacecraft, which cannot refuel at all.

The downside is that a vehicle needs to be fairly large to carry even a small nuclear fission reactor – plus all the heavy shielding to protect passengers onboard. Stringent safety requirements also have to be met. If the vehicle were to crash or explode, the shield around the reactor needs to stay fully intact.

Ships and planes

Perhaps the best known transport application of nuclear power is at sea, where it’s used for warships, submarines and supercarriers. The world’s first nuclear-powered ship was the US Navy submarine Nautilus, which was launched in 1954. As the first vessel to have a nuclear reactor for propulsion, it revolutionized naval capabilities.

Compared to oil or coal-fired ships, nuclear-powered vessels can travel far greater distances. All the fuel is in the reactor, which means there is no need for additional fuel to be carried onboard – or for exhaust chimneys or air intakes. Even better, the fuel is relatively cheap. But operating and infrastructure costs are steep, which is why almost all nuclear-powered marine vessels belong to the military.

There have, however, been numerous attempts to develop other forms of nuclear-powered transport. While a nuclear-powered aircraft might seem unlikely, the idea of flying non-stop to the other side of the world, without giving off any greenhouse-gas emissions, is appealing. Incredible as it might seem, airborne nuclear reactors were actually trialled in the mid-1950s.

That was when the United States Air Force converted a B-36 bomber to carry an operational air-cooled reactor, weighing around 18 tons. The aircraft was not actually nuclear powered but it was operated in this configuration to assess the feasibility of flying a nuclear reactor. The aircraft made a total of 47 flights between July 1955 and March 1957.

In 1955 the Soviet Union also ran a project to adapt a Tupolev Tu-95 “Bear” aircraft for nuclear power. However, because of the radiation hazard to the crew and the difficulties in providing adequate shielding, the project was soon abandoned. Neither the American nor the Soviet atomic-powered aircraft ever flew and – because the technology was inherently dangerous – it was never considered for commercial aviation.

Cars and trains

The same fate befell nuclear-powered trains. In 1954 the US nuclear physicist Lyle Borst, then at the University of Utah, proposed a 360-tonne locomotive carrying a uranium-235 fuelled nuclear reactor. Several other countries, including Germany, Russia and the UK, also had schemes for nuclear locos. But public concerns about safety could not be overcome and nuclear trains were never built. The $1.2m price tag of Borst’s train didn’t help either.

Ford Nucleon design

In the late 1950s, meanwhile, there were at least four theoretical nuclear-powered “concept cars”: the Ford Nucleon, the Studebaker Packard Astral, the Simca Fulgur and the Arbel Symétric. Designers assumed that nuclear reactors would get much smaller over time, so such a car would need only relatively light radiation shielding. I certainly wouldn’t have wanted to take one of those for a spin; in the end, none got beyond the concept stage.

Perhaps the real success story of nuclear propulsion has been in space

But perhaps the real success story of nuclear propulsion has been in space. Between 1967 and 1988, the Soviet Union pioneered the use of fission reactors for powering surveillance satellites, with over 30 nuclear-powered satellites being launched during that period. And since the early 1960s, radioisotopes have been a key source of energy in space.

Driven by the desire for faster, more capable and longer-duration space missions to the Moon, Mars and beyond, China, Russia and the US are now investing significantly in the next generation of nuclear reactor technology for space propulsion, where solar or radioisotope power will be inadequate. Several options are on the table.

One is nuclear thermal propulsion, whereby energy from a fission reactor heats a propellant. Another is nuclear electric propulsion, in which the reactor generates electricity that ionizes a gas and accelerates it out of the back of the spacecraft. Both involve compact fission reactors of the kind used in submarines, except that they’re cooled by gas, not water. Key programmes are aiming for in-space demonstrations in the next 5–10 years.

Where next?

Many of the first ideas for nuclear-powered transport were dreamed up little more than a decade after the first self-sustaining chain reaction. The appeal was clear: compared to other fuels, nuclear power has a high energy density and lasts much longer. It also has zero carbon emissions. Nuclear power must have seemed a panacea for all our energy needs – using it for cars and planes must have seemed an obvious next step.
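
That energy-density claim is easy to make concrete. Here is a back-of-the-envelope comparison using representative textbook figures (my illustration, not numbers from the original proposals):

```python
# Rough figures: complete fission of uranium-235 releases about 8e13 J/kg,
# while kerosene-type jet fuel stores roughly 4.3e7 J/kg when burned.
u235_fission_j_per_kg = 8.0e13
kerosene_j_per_kg = 4.3e7

ratio = u235_fission_j_per_kg / kerosene_j_per_kg
print(f"Fission is roughly {ratio:,.0f} times more energy-dense than jet fuel")
# -> Fission is roughly 1,860,465 times more energy-dense than jet fuel
```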

However, there are major safety issues to address when nuclear sources are mobilized, from protecting passengers and crew, to ensuring appropriate safeguards should anything go wrong. And today we understand all too well the legacy of nuclear systems, from the safe disposal of spent fuel to the decommissioning of nuclear infrastructure and equipment.

We’ve struck the right balance when it comes to using nuclear power, confining it to sea-faring vessels under the watchful eye of the military

Here on Earth, I think we’ve struck the right balance when it comes to using nuclear power, confining it to sea-faring vessels under the watchful eye of the military. But as human-crewed, deep-space exploration beckons, a whole new set of issues will arise. There will, of course, be lots of technical and engineering challenges.

How, for example, will we maintain, repair and decommission nuclear-powered spacecraft? How will we avoid endangering crews or polluting the environment, especially when craft take off? Who should set appropriate legislation – and how do we police those rules? When it comes to space, nuclear will help us “to boldly go”; but it will also require bold regulation.

The post Nuclear-powered transport – how far can it take us? appeared first on Physics World.

https://physicsworld.com/a/nuclear-powered-transport-how-far-can-it-take-us/
Honor Powrie

Bubbles, foams and self-assembly: a conversation with Early Career Award winner Aurélie Hourlier-Fargette

Aurélie Hourlier‑Fargette, winner of the 2025 JPhys Materials Early Career Award, discusses the inspirations behind her interdisciplinary work on bubble assemblies and foam-based materials

The post Bubbles, foams and self-assembly: a conversation with Early Career Award winner Aurélie Hourlier-Fargette appeared first on Physics World.

Congratulations on winning the 2025 JPhys Materials Early Career Award. What does this mean for you at this stage of your career?

I am really grateful to the Editorial Board of JPhys Materials for this award and for highlighting our work. This is a key recognition for the whole team behind the results presented in this research paper. We were taking a new turn in our research with this topic – trying to convince bubbles to assemble into crystalline structures towards architected materials – and this award is an important encouragement to continue pushing in this direction. At the crossroads of physics, physical chemistry, materials science and mechanics, we hope that this is only the beginning of our interdisciplinary journey around bubble assemblies and foam-based materials.

Your research explores elasto-capillarity and foam architectures – what inspired you to work in this fascinating area?

I always say that research is a series of encounters – with people, and with scientific themes and objects. I was lucky to discover this interdisciplinary world as an undergraduate, during an internship on elasto-capillarity at the intersection of physics and mechanics. The scientific communities working on these topics – and also on foams – are fantastic. In both fields, I was fortunate to meet talented people who inspired my future work, combining scientific skills and creativity.

In France, the GDR MePhy (mechanics and physics of complex systems) played a key role in broadening my perspective, by organizing workshops on many different topics, always with interdisciplinarity in mind.

You have demonstrated mechanically guided self-assembly of bubbles leading to crystalline foam structures. What’s the significance of this finding and how could it impact materials design?

In the paper, part of the journal’s Emerging Leaders collection, we provide a proof-of-concept with alginate and polyurethane materials to demonstrate that it is possible to use a fibre array to order bubbles into a crystalline structure, which can be tuned by choosing the fibre pattern, and to keep this ordering upon solidification to provide an alternative approach to additive manufacturing. This work is mainly fundamental, and we hope it paves the way toward a wider use of mechanical self-assembly principles in the context of porous architected materials.

The use of solidifying materials for those studies is two-fold: first, it allows us to observe the systems with X-ray microtomography once solidified, and second, it demonstrates that we could use such techniques to build actual solid materials.

Guiding bubbles with fibre arrays

What excites you most about this field right now, and where do you see the biggest opportunities for breakthroughs?

Combining physical understanding and materials science is certainly a great area of opportunity to better exploit mechanical self-assembly. It is very compelling to search for strategies based on physical principles to generate materials with non-trivial mechanical or acoustic properties. Capillarity, elasticity, stimuli-induced modification of systems, as well as geometrical considerations, all offer a great playground to explore. Curiosity-driven research has many advantages, and often, unexpected observations completely reshape the trajectory that we had in mind.

Could you tell us about your team’s current research priorities and the directions you are most focused on?

We believe that focusing first on the underlying physical principles, especially in terms of mechanical self-assembly, will provide the building blocks to generate novel materials. One key research axis we are exploring now is widening the range of materials that can be used for “liquid foam templating” (a general approach that involves controlling the properties of a foam in its liquid state to control the resulting properties of the foam after solidification). We focus on the solidification mechanisms, either by playing with external stimuli or by controlling the solidification reactions via the introduction of catalysts or solidifying agents.

What are the key challenges in achieving ordered structures during solidification?

Liquid foams provide beautiful hierarchical structures that are also short-lived. To take advantage of the mechanical self-assembly of bubbles to build solid materials, understanding the relevant timescales is key: depending on whether the foam has time to drain and destabilize before solidification or not, its final morphology can be completely different. Controlling both the ageing mechanisms and the solidification of the matrix is particularly challenging.

How do you see foam-based materials impacting real-world applications?

Both biomedical devices and soft robots often rely on soft materials – either to match the mechanical properties of biological tissues or to give soft robots the flexibility they need to move. Being able to customize self-assembled hierarchical structures could allow us to explore a wider range of even softer materials, with specific properties resulting from their structural features. Applications could also extend to stiffer materials, mainly in the context of acoustic properties and wave propagation in such architected structures.

What are the most surprising behaviours you have observed during the processes of self-assembly and solidification of foams?

For the experiments detailed in the paper, the structures revealed their beauty once the X-ray tomography scans were performed. When we varied the parameters, we could only guess what was going to happen before getting the visual confirmation a few hours later. We were really happy to see that changing the pattern of the fibre array could indeed provide different ordered foam structures. In some other projects we are working on, foam stability has been a real challenge. We were sometimes surprised to obtain long-lasting liquid systems.

X-ray tomography scans of foams

Looking ahead, what are the next big questions you hope to tackle in your field?

In the fundamental context of the physics and mechanics of elasto-capillarity, the study of model systems involving self-assembly mechanisms will be a key aspect of our research. I then hope to successfully identify key applications for such architected systems – mainly in the fields of mechanical or acoustic metamaterials, but also for biomedical engineering. Regarding foam solidification, understanding the mechanisms of pore opening during the solidification process – leading to either closed-cell or open-cell foams – is also an important question for the community.

You worked on bio-integrated electronics during your postdoc and contributed to a seminal paper on skin-interfaced biosensors for wireless monitoring in neonatal ICUs. How has that shaped your current research interests?

That fantastic experience allowed me to work in a group with numerous people from many different backgrounds, pushing the frontiers of interdisciplinarity in ways I could not have imagined before joining the Rogers group as a postdoc. At the moment, I am focusing on more fundamental questions, but it is definitely important to keep in mind what physics and materials science can bring to a broad variety of applications that offer solutions for society, in biomedical engineering and beyond.

Your research often combines theory and experiment and involves interdisciplinary collaboration. How do you see these collaborations shaping the future of your field?

It is always the scientific questions we want to answer – or the goals we aim to achieve – that should define the collaborations, bringing together multiple skills and backgrounds to tackle a shared challenge. Clearly, at the intersection of physics, physical chemistry, materials science and mechanics, there are many interesting questions that require contributions from different disciplines and skillsets. A key aspect is how people trained in different areas learn to “speak the same language” in order to advance interdisciplinary topics.

X-ray microtomography on the MINAMEC platform

How do you envision your research evolving over the next 5–10 years?

I hope to be able to combine fundamental research and meaningful applications successfully – perhaps in the form of medical devices or tools for soft robots. There are many exciting possibilities, but it is certainly still too early for me to predict.

What advice would you give early-career researchers pursuing interdisciplinary projects?

Believe in what you are doing! We push boundaries more easily in areas we are passionate about, and we are also more productive when we work on topics for which we have found a supportive environment – with a unique combination of collaborators and access to state-of-the-art equipment.

In research, and especially in interdisciplinary fields, a key challenge is finding the right balance: you need to stay focused on the research projects that matter for you, while also keeping an open mind and staying aware of what others are doing. This broader vision helps you understand how your work integrates into a larger, more complex landscape.

Finally, what inspires you most as a scientist, and what keeps you motivated during challenging phases of research?

I have always liked working with desktop-scale experiments, where we can touch the objects and have an intuition for the physical mechanisms behind the observed phenomena.

Another source of inspiration is the beauty of the scientific objects that we study. With droplets, bubbles and foams – which are not only scientifically interesting but also beautiful – there is a strong connection with art and photography.

And finally, a key aspect of our professional life is the people we work with. It is clearly an additional motivation to feel part of a community where we can discuss both scientific questions and ways to improve how research is organized, as well as help younger students, PhDs and postdocs find their professional path. Working with amazing colleagues definitely helps when the path is longer or more difficult than expected.

The post Bubbles, foams and self-assembly: a conversation with Early Career Award winner Aurélie Hourlier-Fargette appeared first on Physics World.

https://physicsworld.com/a/bubbles-foams-and-self-assembly-a-conversation-with-early-career-award-winner-aurelie-hourlier-fargette/
No Author

From bunkers to bright spaces: the future of smart shielded radiosurgery treatment rooms

Discover how smart shielding enables bright, modern radiosurgery rooms beyond traditional bunker designs

The post From bunkers to bright spaces: the future of smart shielded radiosurgery treatment rooms appeared first on Physics World.

This webinar explores how smart shielding is transforming the design of Leksell Gamma Knife radiosurgery environments, shifting from bunker‑like spaces to open, patient‑centric treatment rooms. Drawing from dose‑rate maps, room‑dimension considerations and modern shielding innovations, we’ll demonstrate how treatment rooms can safely incorporate features such as windows and natural light, improving both functionality and patient experience.

Dr Riccardo Bevilacqua will walk through the key questions that clinicians, planners and hospital administrators should ask when evaluating new builds or upgrading existing treatment rooms. We will highlight how modern shielding approaches expand design possibilities, debunk outdated assumptions and offer practical guidance on evaluating sites and educating stakeholders on what lies “beyond bunkers”.

Dr Riccardo Bevilacqua

Dr Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation‑safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather and writes popular‑science articles on physics and radiation.

The post From bunkers to bright spaces: the future of smart shielded radiosurgery treatment rooms appeared first on Physics World.

https://physicsworld.com/a/from-bunkers-to-bright-spaces-the-future-of-smart-shielded-radiosurgery-treatment-rooms/
No Author

The physics of why basketball shoes are so squeaky

The noise is down to the base of the shoe forming wrinkles that travel at near supersonic speeds

The post The physics of why basketball shoes are so squeaky appeared first on Physics World.

If you have ever watched a basketball match, you will know that along with the sound of the ball being bounced, there is also the constant squeaking of shoes as the players move across the court.

Such noise is a common occurrence in everyday life, from the scraping of chalk on a blackboard to the squeal of bicycle brakes.

Physicists in France, Israel, the UK and the US have now recreated the phenomenon in a lab and discovered that the squeaking is due to a previously unseen mechanism.

Katia Bertoldi from the Harvard John A. Paulson School of Engineering and Applied Sciences and colleagues slid a basketball shoe, or a rubber sample, across a smooth glass plate and used high-speed imaging and audio measurements to analyse the squeak.

Previous studies looking at the effect suggested that “pulses” are created when two materials “stick and slip”, but such studies focused on slow movements, which do not create squeaks.

Bertoldi and team instead found that the noise was not caused by random stick-slip events, but rather deformations of the rubber sole pulsing in bursts, or rippling, across the surface.

In this case, small parts of the sole change shape and lose and regain contact with the surface, with the “ripple” travelling at near supersonic speeds.

The pitch of the squeak even matches the rate of the “bursts”, which is determined by the stiffness and thickness of the shoe sole.

The authors also found that if a soft surface is smooth, the pulses are irregular and produce no sharp sounds, whereas ridged surfaces – like the grip patterns on sports shoes – produce consistent pulse frequencies, resulting in a high-pitched squeak.

In another twist, lab experiments showed that in some instances, the slip pulses are triggered by triboelectric discharges – miniature lightning bolts caused by the friction of the rubber.

Indeed, the physics of these pulses shares similar features with fracture fronts in plate tectonics, and so a better understanding of the dynamics that occur between two surfaces may offer insights into friction across a range of systems.

“These results bridge two fields that are traditionally disconnected: the tribology of soft materials and the dynamics of earthquakes,” notes Shmuel Rubinstein from Hebrew University. “Soft friction is usually considered slow, yet we show that the squeak of a sneaker can propagate as fast as, or even faster than, the rupture of a geological fault, and that their physics is strikingly similar.”

The post The physics of why basketball shoes are so squeaky appeared first on Physics World.

https://physicsworld.com/a/the-physics-of-why-basketball-shoes-are-so-squeaky/
Michael Banks

Dark optical cavity alters superconductivity

Quantum fluctuations couple to stretching bonds

The post Dark optical cavity alters superconductivity appeared first on Physics World.

An international team of researchers has shown that superconductivity can be modified by coupling a superconductor to a dark electromagnetic cavity. The research opens the door to the control of a material’s properties by modifying its electromagnetic environment.

Electronic structure defines many material properties – and this means that some properties can be changed by applying electromagnetic fields. The destruction of superconductivity by a magnetic field and the use of electric fields to control currents in semiconductors are two familiar examples.

There is growing interest in how electronic properties could be controlled by placing a material in a dark electromagnetic cavity that resonates with an electronic transition in that material. In this scenario, an external field is not applied to the material. Rather, interactions occur via quantum vacuum fluctuations within the cavity.

Holy Grail

“The Holy Grail of cavity materials research is to alter the properties of complex materials by engineering the electromagnetic environment,” explains the team – which includes Itai Keren, Tatiana Webb and Dmitri Basov at Columbia University in the US.

They created an optical cavity from a small slab of hexagonal boron nitride. This was interfaced with a slab of κ-ET, which is an organic low-temperature superconductor. The cavity was designed to resonate with an infrared transition in κ-ET involving the vibrational stretching of carbon–carbon bonds.

Hexagonal boron nitride was chosen because it is a hyperbolic van der Waals material. Van der Waals materials are stacks of atomically-thin layers. Atoms are strongly bound within each layer, but the layers are only weakly bound to each other by the van der Waals force. The gaps between layers can act as waveguides, confining light that bounces back and forth within the slab. As a result the slab behaves like an optical cavity with an isofrequency surface that is a hyperboloid in momentum space. Such a cavity supports a large number of modes and vacuum fluctuations, which enhances interactions with the superconductor.
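
For reference, the hyperbolic behaviour follows from the standard dispersion relation for extraordinary waves in a uniaxial medium – a textbook sketch, not the specific model used in the paper. With the optic axis along z, in-plane permittivity ε_t and out-of-plane permittivity ε_z:

```latex
% Extraordinary-wave dispersion in a uniaxial medium (optic axis along z):
\[
  \frac{k_x^2 + k_y^2}{\varepsilon_z} + \frac{k_z^2}{\varepsilon_t}
  = \frac{\omega^2}{c^2}
\]
% When eps_t and eps_z have opposite signs (as in hexagonal boron nitride's
% reststrahlen bands), the isofrequency surface defined by this equation is a
% hyperboloid rather than an ellipsoid, so arbitrarily large wavevectors (and
% hence many modes) are available at a fixed frequency.
```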

Superfluid suppression

The researchers found that the presence of the cavity caused a strong suppression of superfluid density in κ-ET (a superconductor can be thought of as a superfluid of charged particles). The team mapped the superfluid density using magnetic force microscopy. This involved placing a tiny magnetic tip near to the surface of the superconductor. The magnetic field of the tip cannot penetrate into the superconductor (the Meissner effect) and this results in a force on the tip that is related to the superfluid density. They found that the density dropped by as much as 50% near the cavity interface.

The team also investigated the optical properties of the cavity using scattering-type scanning near-field optical microscopy (s-SNOM). This involves firing tightly focused laser light at an atomic force microscope (AFM) tip that is tapping on the surface of the cavity. The scattered light is processed to reveal the near-field component of light from just the region of the cavity below the tip.

The tapping tip creates phonon polaritons in the cavity, which are particle-like excitations that couple lattice vibrations to light. Analysing the near-field light across the cavity confirmed that the carbon stretching mode of κ-ET is coupled to the cavity. Calculations done by the team suggest that cavity coupling reduces the amplitude of the stretching mode vibrations.

Physicists know that superconductivity can arise from interactions between electrons and phonons (lattice vibrations), so it is possible that the reduction in superfluid density is related to the suppression of stretching-mode vibrations. This, however, is not certain because κ-ET is an unconventional superconductor, which means that physicists do not understand the mechanism that causes its superconductivity. Further experiments could therefore shed light on the mysteries of unconventional superconductors.

“We are confident that our experiments will prompt further theoretical pursuits,” the team tells Physics World. The researchers also believe that practical applications could be possible. “Our work shows a new path towards the manipulation of superconducting properties.”

The research is described in Nature.

The post Dark optical cavity alters superconductivity appeared first on Physics World.

https://physicsworld.com/a/dark-optical-cavity-alters-superconductivity/
Hamish Johnston

Chernobyl at 40: physics, politics and the nuclear debate today

Jim Smith reflects on the 1986 disaster and how it still shapes public perception of nuclear power

The post Chernobyl at 40: physics, politics and the nuclear debate today appeared first on Physics World.

On 26 April 2026, it will be 40 years since the explosion at Unit 4 of the Chernobyl Nuclear Power Plant – the worst nuclear accident the world has known. In the early hours of 26 April 1986, a badly designed reactor, operated under intense pressure during a safety test, ran out of control. A powerful explosion and prolonged fire followed, releasing radioactive material across Ukraine, Belarus and Russia, with smaller quantities spreading across Europe.

In this episode of Physics World Stories, host Andrew Glester speaks with Jim Smith, an environmental physicist at the University of Portsmouth. Smith began his academic life studying astrophysics, but always had an interest in environmental issues. His PhD in applied mathematics at Liverpool focused on modelling how radioactive material from Chernobyl was transported through the atmosphere and deposited as far away as the Lake District in north-western England.

Smith recounts his visits to the abandoned Chernobyl plant and the 1000-square-mile exclusion zone, now home to roaming wolves and other thriving wildlife. He wants a rational debate about the relative risks, arguing that the accident’s social and economic consequences have significantly outweighed the long-term impacts of radiation itself.

The discussion ranges from the politics of nuclear energy and the hierarchical culture of the Soviet system, to lessons later applied during the Fukushima accident. Smith makes the case for nuclear power as a vital complement to renewables.

He also shares the story behind the Chernobyl Spirit Company – a social enterprise he has launched with Ukrainian colleagues, producing safe, high-quality spirits to support Ukrainian communities. Listen to find out whether Andrew Glester dared to try one.

The post Chernobyl at 40: physics, politics and the nuclear debate today appeared first on Physics World.

https://physicsworld.com/a/chernobyl-at-40-physics-politics-and-the-nuclear-debate-today/
James Dacey

LHCb upgrade: CERN collaboration responds to UK funding cut

LHCb spokesperson-elect Tim Gershon is our podcast guest

The post LHCb upgrade: CERN collaboration responds to UK funding cut appeared first on Physics World.

Later this year, CERN’s Large Hadron Collider (LHC) and its huge experiments will shut down for the High Luminosity upgrade. When the upgrade is complete in 2030, the particle-collision rate in the LHC will be increased by a factor of 10 and the experiments upgraded so that they can better capture and analyse the results of these collisions. This will allow physicists to study particle interactions at unprecedented precision and could even reveal new physics beyond the Standard Model.

Earlier this year, however, the UK government announced that it will no longer fund the upgrade of the LHCb experiment on the LHC, which is run by a collaboration of more than 1700 physicists worldwide. The UK had promised to contribute about £50 million to the upgrade – which is a significant chunk of the overall cost.

In this episode of the Physics World Weekly podcast I am in conversation with the particle physicist Tim Gershon, who is based at the UK’s University of Warwick. Gershon is spokesperson-elect for the LHCb collaboration and is playing a leading role in the upgrade.

Gershon explains that UK participation and leadership have been crucial for the success of LHCb and cautions that the future of the experiment and the future of UK particle physics have been imperilled by the funding cut.

We also chat about recent discoveries made by LHCb and look forward to what new physics the experiment could find after the upgrade.

The post LHCb upgrade: CERN collaboration responds to UK funding cut appeared first on Physics World.

https://physicsworld.com/a/lhcb-upgrade-cern-collaboration-responds-to-uk-funding-cut/
Hamish Johnston

Read-out of Majorana qubits reveals their hidden nature

Mechanism could pave the way for more robust quantum computation, but questions remain over scalability

The post Read-out of Majorana qubits reveals their hidden nature appeared first on Physics World.

Quantum computers could solve problems that are out of reach for today’s classical machines. However, the quantum states they rely on are prone to decohering – that is, losing their quantum information due to local noise. One possible way around this is to use quantum bits (qubits) constructed from quasiparticle states known as Majorana zero modes (MZMs) that are protected from this noise. But there’s a catch. To perform computations, you need to be able to measure, or read out, the states of your qubits. How do you do that in a system that is inherently protected from its environment?

Scientists at QuTech in the Netherlands, together with researchers from the Madrid Institute of Materials Science (ICMM) in Spain, say they may have found an answer. By measuring a property known as quantum capacitance, they report that they have read out the parity of their MZM system, backing up an earlier readout demonstration from a team at Microsoft Quantum Hardware on a different Majorana platform.

Measuring parity

The QuTech/ICMM researchers generated their MZMs across two quantum dots – semiconductor structures that can confine electrons – connected by a superconducting nanowire. Electrons can transfer, or tunnel, between the quantum dots through this wire. Majorana-based qubits store their quantum information across these separated MZMs, with both elements in the pair required to encode a single “parity” bit. A pair of parity bits (combining four MZMs in total) forms a qubit.

A parity bit has two possible states. When the two quantum dots are in a superposition of both having one electron and both having none, the system is said to have even parity (a “0”). When the system is instead in superposition of only one of the quantum dots having an electron, the parity is said to be odd (a “1”). Importantly, these even and odd parity states have the same average value of electric charge, meaning that a charge sensor cannot tell them apart.
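
To see why the two parity states look identical to a charge sensor, here is a minimal numerical sketch – an illustration of the state-counting argument above, not the team’s model:

```python
import numpy as np

# Basis for two quantum dots: |n_L n_R> = |00>, |01>, |10>, |11>
charge = np.array([0, 1, 1, 2])  # total electron number in each basis state

even = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2): parity "0"
odd = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2): parity "1"

for name, psi in (("even", even), ("odd", odd)):
    print(name, "average charge:", np.sum(np.abs(psi) ** 2 * charge))
# Both print 1.0: the parities differ, but the mean charge does not,
# which is why a charge sensor cannot tell the two states apart.
```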

The key to measuring parity lies in the electrons’ behaviour. In the even-parity state, an even number of electrons can pair up and enter the superconductor together as a Cooper pair. In the odd-parity state, however, the lone electron lacks a partner and cannot flow through the wire in the same way. By measuring the charge flowing into the superconductor, the team was therefore able to determine the parity state. The researchers also determined that the lifetimes of these states were in the millisecond range, which they say is promising for quantum computations.

Competing platforms

According to Nick van Loo, a quantum engineer at QuTech and the first author of a Nature paper on the work, similar chains of quantum dots (known as Kitaev chains) are a promising platform for realizing Majorana modes because each element in the chain can be controlled and tuned. This control, he adds, makes results easier to reproduce, helping to overcome some of the interpretation challenges that have affected Majorana results over the past decade.

Van Loo also stresses that his team uses a different architecture from the Microsoft Quantum Hardware team to create its Majorana modes – one that he says allows for better tuneability as well as easier and more scalable readout. He adds that this architecture also allows an independent charge sensor to be used to confirm the MZM’s charge neutrality.

In response, Chetan Nayak, a technical fellow at Microsoft Quantum Hardware, says it is important that the QuTech/ICMM team independently measured a millisecond time scale for parity fluctuations. However, he notes that the team did not extend this parity lifetime and adds that the so-called “poor man’s Majoranas” used in this research do not constitute a scalable platform for topological qubits, as they lack topological protection.

Seeking full protection

Van Loo acknowledges that the team’s two-site Kitaev chain is not topologically protected. However, he says the degree of protection is expected to improve exponentially as more sites are added. In the near term, he and his colleagues hope to operate their qubit by inducing rotations through coupling pairs of Majorana modes. Once these hurdles are overcome, he tells Physics World that “one major milestone will still remain: demonstrating braiding of Majorana modes to establish their non-Abelian exchange statistics”.

Jay Deep Sau, a physicist at the University of Maryland, US, who was not involved in either the QuTech/ICMM or the Microsoft Quantum Hardware research, describes this as the first measurement of fermion parity in the smallest quantum dot chain platform for creating MZMs. Compared to the Microsoft result, Sau agrees that the quantum dot chain is more controlled. However, he is sceptical that this control will apply to larger chains, casting doubt on whether this is truly a scalable way of realizing MZMs. The significance of these results, he adds, will only be apparent if the quantum dot chain approach can demonstrate a coherent qubit before its semiconductor nanowire counterpart.

The post Read-out of Majorana qubits reveals their hidden nature appeared first on Physics World.

https://physicsworld.com/a/read-out-of-majorana-qubits-reveals-their-hidden-nature/
No Author

Quantum-secure Internet expands to citywide scale

Device-independent quantum-encrypted keys distributed over 100 km

The post Quantum-secure Internet expands to citywide scale appeared first on Physics World.

Researchers in China have distributed device-independent quantum cryptographic keys over city-scale distances for the first time – a significant improvement compared to the previous record of a few hundred metres. Led by Jian-Wei Pan of the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS), the researchers say the achievement brings the world a step closer to a completely quantum-secure Internet.

Many of us use Internet encryption almost daily, for example when transferring sensitive information such as bank details. Today’s encryption techniques use keys based on mathematical algorithms, and classical supercomputers cannot crack them in any practical amount of time. Powerful quantum computers could change this, however, which has driven researchers to explore potential alternatives.

One such alternative, known as quantum key distribution (QKD), encrypts information by exploiting the quantum properties of photons. The appeal of this approach is that when quantum-entangled photons transmit a key between two parties, any attempted hack by a third party will be easy to detect because their intervention will disturb the entanglement.

While the basic form of QKD enables information to be transmitted securely, it does have some weak points. One of them is that a malicious third party could steal the key by hacking the devices the sender and/or receiver is using.

A more advanced version of QKD is device-independent QKD (DI-QKD). As its name suggests, this version does not depend on the state of a device. Instead, it derives its security key directly from fundamental quantum phenomena – namely, the violation of conditions known as Bell’s inequalities. Establishing this violation ensures that a third party has not interfered with the process employed to generate the secure key.
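
The arithmetic behind that security guarantee is the CHSH form of Bell’s inequality. A minimal sketch with textbook angles for a spin singlet (not the settings used in the experiment):

```python
import numpy as np

# For a singlet pair, spin measurements along directions a and b are
# correlated as E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), beating the classical bound |S| <= 2
```

Any local-hidden-variable description, including one engineered by an eavesdropper’s compromised devices, is limited to |S| ≤ 2, so observing a violation certifies the key-generation process without trusting the hardware.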

The main drawback of DI-QKD is that it is extremely technically demanding, requiring high-quality entanglement and an efficient means of detecting it. “Until now, this has only been possible over short distances – 700 m at best – and in laboratory-based proof-of-principle experiments,” says Pan.

High-fidelity entanglement over 11 km of fibre

In the latest work, Pan and colleagues constructed two quantum nodes consisting of single trapped atoms. Each node was equipped with four high-numerical-aperture lenses to efficiently collect single photons emitted by the atoms. These photons have a wavelength of 780 nm, which is not optimal for transmission through optical fibres. The team therefore used a process known as quantum frequency conversion to shift the emitted photons to a longer wavelength of 1315 nm, which is less prone to optical loss in fibres.

By interfering and detecting a single photon, the team was able to generate what’s known as heralded entanglement between the two quantum nodes – something Pan describes as “an essential resource” for DI-QKD. While significant progress has been made in extending the entangling distance for qubits of this type, Pan notes that these advances have been hampered by low fidelities and low entangling rates.

To address this, Pan and his colleagues employed a single-photon-based entangling scheme that boosts remote entangling probability by more than two orders of magnitude. They also placed their atoms in highly excited Rydberg states to generate single photons with high purity and low noise. “It is these innovations that allow us to achieve high-fidelity and high-rate entanglement over a long distance,” Pan explains.

Using this setup, the researchers explored the feasibility of performing DI-QKD between two entangled atoms linked by optical fibres up to 100 km in length. In this study, which is detailed in Science, they demonstrated practical DI-QKD under finite-key security over 11 km of fibre.

Metropolitan-scale quantum key distribution

Based on the technologies they developed, Pan thinks it could now be possible to implement DI-QKD over metropolitan scales with existing optical fibres. Such a system could provide encrypted communication with the highest level of physical security, but Pan notes that it could also have other applications. For example, high-fidelity entanglement could also serve as a fundamental building block for constructing quantum repeaters and scaling up quantum networks.

Carlos Sabín, a physicist at the Autonomous University of Madrid (UAM), Spain, who was not involved in the study, says that while the work is an important step, there is still a long way to go before we are able to perform completely secure and error-free quantum key distribution on an inter-city scale. “This is because quantum entanglement is an inherently fragile property,” Sabín explains. “As light travels through the fibre, small losses accumulate and the entanglement generated is of poorer quality, which translates into higher error rates in the cryptographic keys generated. Indeed, the results of the experiment show that errors in the key range from 3% when the distance is 11 km to more than 7% for 100 km.”

Pan and colleagues now plan to add more atoms to each node and to use techniques like tweezer arrays to further enhance both the entangling rate and the secure key rate over longer distances. “We are aiming for 1000 km, over which we hope to incorporate quantum repeaters,” Pan tells Physics World. “By using processes like ‘entanglement swapping’ to connect a series of such two-node entanglement, we anticipate that we will be able to maintain a similar entangling rate for much longer distances.”

The post Quantum-secure Internet expands to citywide scale appeared first on Physics World.

https://physicsworld.com/a/quantum-secure-internet-expands-to-citywide-scale/
Isabelle Dumé

Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans

Medical physicist Todd McNutt explains how Plan AI, an artificial intelligence-powered plan quality software solution, uses data mining to streamline and improve radiotherapy planning for cancer treatments

The post Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans appeared first on Physics World.

Todd McNutt is a radiation oncology physicist at Johns Hopkins University in the US and the co-founder of Oncospace, where he led the development of an artificial intelligence (AI)-powered tool that simultaneously accelerates radiation planning and elevates plan quality and consistency. The software, now rebranded as Plan AI and available from US manufacturer Sun Nuclear, draws upon data from thousands of previous radiotherapy treatments to predict the lowest possible dose to healthy tissues for each new patient. Treatment planners then use this information to define goals that streamline and automate the creation of a best achievable plan.

Physics World’s Tami Freeman spoke with McNutt about the evolution of Oncospace and the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres.

Can you describe how the Oncospace project began?

Back in 2007, several groups were discussing how we could better use clinical data for discovery and knowledge generation. I had several meetings with folks at Johns Hopkins, including Alex Szalay who helped develop the Sloan Digital Sky Survey. He built a large database of galaxies and stars and it became a huge research platform for both amateur and professional astronomers.

From that discussion, and other initiatives, we looked at moving towards structured data collection for patients in the clinical environment. By marrying these data with radiation treatment plans we could study how dose distributions across the anatomy affect patient outcomes. And we took that opportunity to build a database for radiotherapy.

What inspired the transition from academic research to founding the company Oncospace Inc in 2019?

After populating the database with data from many patients, we could examine which anatomic features impact our ability to generate a plan that minimizes radiation dose to normal tissues while treating target volumes as best as possible. We came up with a feature set that characterized the relationships between normal anatomy and targets, as well as target complexity.

This early work allowed us to predict expected doses from these shape-relationship features, and it worked well. At that point, we knew we could tap into this database and generate a prediction that could help create treatment plans for new patients. We thought of this as personalized medicine: for the first time, we could see the level of treatment plan quality that we could achieve for a specific patient.

I thought that this was useful commercially and that we should get it out to other clinics. Praveen Sinha, who I’d known from my previous work at Philips and now leads Sun Nuclear’s software business line, asked if I wanted to create a startup. The timing was right for both of us and I had a team here ready to go, so we went ahead and did it. With his knowledge of startups and my knowledge of what we wanted to achieve, we had perfect timing and a perfect group to work with.

Plan AI enables both predictive planning and peer review – how do these functions work?

The idea behind predictive planning is that, for a given patient, I can predict the expected dose that I should be able to achieve for them.

Plan AI software
Dose–volume histograms

Treatment planning involves specifying dosimetric objectives to the planning system and asking it to optimize radiation delivery to meet these. But nobody really knows what the right objectives even are – it is just a trial-and-error process. Plan AI’s prediction provides a rational set of objectives for plan optimization, allowing the planning system’s algorithm to move towards a good solution and making treatment planning an easier problem to solve.
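
For readers unfamiliar with these objects: a cumulative dose–volume histogram records, for each dose level, the fraction of an organ’s volume receiving at least that dose, and objectives are read straight off the curve. A minimal sketch with toy numbers (not Plan AI’s code):

```python
import numpy as np

rng = np.random.default_rng(0)
organ_dose = rng.gamma(shape=4.0, scale=5.0, size=10_000)  # toy per-voxel doses, Gy

# Cumulative DVH: fraction of voxels receiving at least each dose level
dose_levels = np.linspace(0.0, organ_dose.max(), 200)
dvh = np.array([(organ_dose >= d).mean() for d in dose_levels])

# Two common objective-style quantities:
v30 = (organ_dose >= 30.0).mean()    # V30: volume fraction receiving >= 30 Gy
d20 = np.percentile(organ_dose, 80)  # D20: dose exceeded by hottest 20% of voxels
print(f"V30 = {v30:.1%}, D20 = {d20:.1f} Gy")
```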

Peer review involves a peer physician looking at every treatment plan to evaluate it for quality and safety. But again, people don’t really know the level of quality you can generate, it depends on the patient’s anatomy. Providing a predicted dose with clinical dose goals enables a rapid review to see whether it is a high-quality plan or not.

In the past we looked at simple things like whether a contour is missing slices or contains discontinuities, and Plan AI checks for this, but you can do far more with AI. For example, you could look at all the contoured rectums in the system and predict whether a given contour extends too far into the sigmoid colon – if it does, it may be mis-contoured. We have research software that can flag such potential anomalies so they don’t get overlooked.

The Plan AI models are developed using Oncospace’s database of previous treatments; can you describe this data lake?

When we first started, we developed a large SQL database containing all the shape-relationship features and dosimetry features. The SQL language is ideal for being able to query and sift through the data, but when the company was formed, we recognized that there was some age to that technology.

So for the Plan AI data lake, we extracted all the different shape-relationship and shape-complexity features and put them into a Parquet database in the cloud. This made the data lake much more amenable to applying machine learning algorithms to it. The SQL data lake at Johns Hopkins is maintained separately and primarily used to investigate toxicity predictions and spatial dose patterns. But for Plan AI, the models are fixed and streamlined for the specific task of dose prediction.

What does the model training process entail?

One of the first tasks was to curate the data, using the AAPM’s standardized structure-naming model. Our data scientist Julie Shade wrote some tools for automatic name mapping and target identification; that helped us process much larger amounts of data for the model.

Once we had all the shape-relationship and shape-complexity features and all the doses, we trained the models by anatomical region. We have FDA-approved models for the male and female pelvis, thorax, abdomen and head-and-neck. For each of these, we predict the doses for every organ-at-risk. Then we used five-fold cross-validation to make sure that the predictions were good on an internal data set.

We also performed external validation at institutions including Johns Hopkins and Montefiore hospitals. We created predicted plans from recent treatment plans that had been evaluated by physicians. For almost all cases, both plan quality and plan efficiency were improved with Plan AI.

One aspect of this training is that whenever we drive optimization via predictive planning we want to push towards the best achievable dose. Regular machine learning predicts an expected, or average, dose across all patients. But you never want to drive a treatment plan towards the average dose, because then every plan you generate will be happy being average. Our model predicts both the average and the best achievable dose, and drives plan optimization towards the best achievable.
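
One standard way to realize this “best achievable” idea is quantile regression: fit one model to the mean and another to a low quantile of the historical dose distribution. A hedged sketch with made-up data (my illustration, not Oncospace’s actual model or features):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Toy shape-relationship feature, e.g. a target-to-organ distance measure
X = rng.uniform(0.0, 5.0, size=(500, 1))
# Toy historical organ doses (Gy): fall off with distance, with a spread
# reflecting planner-to-planner variation in historical plan quality
y = 40.0 * np.exp(-X[:, 0]) + rng.gamma(2.0, 2.0, size=500)

mean_model = GradientBoostingRegressor(loss="squared_error").fit(X, y)
best_model = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, y)

x_new = [[1.0]]
print("expected (average) dose:", mean_model.predict(x_new)[0])
print("best achievable (10th percentile):", best_model.predict(x_new)[0])
```

Driving the optimizer toward the low quantile, rather than the mean, is what stops every new plan from settling for average.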

When implementing new technology in the clinic, it’s important to fit into the existing treatment workflow. How clinic-ready are these AI tools?

Radiation therapy is protocol-driven: we know what technique we’re going to use to treat and what our clinical dose goals are for different structures. What we don’t know is the patient-specific part of that. So for each anatomical region, we built models out of a wide range of treatment protocols, with many different types of patients, to ensure that the same prediction model works for any protocol. This means a user can use any protocol for treatment and the predictions will work, they don’t have to retrain anything. It’s ready to go out of the box, there’s a library of protocols to start with, and you can change protocols as you need for your own clinic.

The other part of being clinic-ready is aligning with the way that planning is currently performed, which is using dose–volume histograms. Treatment plans are optimized by manipulating these dose objectives, and that’s exactly what we predict. So users aren’t changing the whole paradigm of how planners operate. They still use their treatment planning system (TPS) – we just put the objectives in there. Basically, a TPS script sends the patient’s CT and contours to the cloud, where Plan AI makes the predictions. The TPS then pulls back in the objectives built from the models, based on this specific patient’s anatomy. The TPS runs the optimization and, as a last step, can send the plan back to Plan AI to check that it fits within the best achievable predictions.

Did you encounter any challenges bringing AI into a clinical setting?

Interestingly, the challenges aren’t technical, they are more human related. One of the more systemic challenges is data security when using medical data for training. A nice thing about our system is that the features we generate from treatment plans are just mathematical shape-relationship features and don’t involve a lot of identifiable information.

AI has been used in radiation therapy for image contouring and auto-segmentation, and early efforts were not so good. So, there’s always a good, healthy scepticism. But once you show people that it works and works well, this can be overcome. I have seen some people worried about job security and AI taking over. We are medical professionals designing a treatment plan to care for a patient and there’s a lot of pride and art in that – if you automate that, it takes away some of this pride and art.

I tell people that if we automate the easier things, then they can spend their quality time on the more difficult and challenging cases, because that’s where their talent might be needed more.

Do you have any advice for clinics looking to adopt AI-driven planning?

Introduce it as an assistant, not as a solution. You want people that already know what they’re doing to be able to use their knowledge more efficiently. We want to make their jobs easier and show them that it also improves quality.

With dosimetrists, for example, they create a plan and work hard getting the dose down – and then the physician looks at it and suggests that they can do better. Predictive planning gives them confidence that they are right and takes the uncertainty out of the physician review process. And once you’ve gained that level of confidence, you can start using it for adaptive planning or other technologies.

Where do you see predictive modelling and AI in oncology in five years from now?

Right now, there’s been a lot of data collected, but we want that data to advance and learn. Having multiple centres adding to this pool of knowledge and being able to continually update those models from new, broader data sets could be of huge value.

In terms of patient outcomes, we’ve done a lot of the work looking at how the spatial pattern of dose impacts toxicity and outcomes. This is part of the research being performed at Johns Hopkins and still in discovery mode. But down the road, some of these predictions of normal tissue outcomes could be fed into the planning process to help reduce toxicity at the patient level.

Finally, what’s been the most rewarding part of this journey for you?

During my prior experience building treatment planning systems, the biggest problem was always that nobody knew what the objective was. Nobody knew how to tell the system: “this is the dose I expect to receive, now optimize to get it for me”, because you didn’t know what you could do. For any given patient, you could ask for too much or too little. Now, for the first time, I argue that we actually know what our objective is in our treatment planning.

This levels the playing field between different environments, different countries, or even different dosimetrists with different levels of experience. The Plan AI tool brings all this to a consistent state and enables high quality, efficient planning everywhere. We can provide this predictive planning tool to clinics around the world. Now we just have to get everybody using it.


The post Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans appeared first on Physics World.

https://physicsworld.com/a/todd-mcnutt-how-an-ai-software-solution-enables-creation-of-the-best-possible-radiation-treatment-plans/
Tami Freeman

The future of particle physics: what can the past teach us?

Robert P Crease reports from a conference at CERN on particle physics in the 1980s and 1990s

The post The future of particle physics: what can the past teach us? appeared first on Physics World.

In his opening remarks to the 4th International Symposium on the History of Particle Physics, Chris Llewellyn Smith – who was a director-general of CERN in the 1990s – suggested participants should speak about “what’s not written in the journals”, including “mistakes, dead-ends and problems with getting funding”. Doing so, he said, would “provide insight into the way science really progresses”.

The symposium was not your usual science conference. Held last November at CERN, it took place inside the lab’s 400-seat main auditorium, which has been the venue for many historic announcements, including the discovery of the Higgs boson. Its brown-beige walls are covered with lively designs by the Finnish artist Ilona Rista, suggesting to me the aftermath of a collision of high-energy bar codes.

The 1980s and 1990s saw the construction and operation of various important accelerators and detectors.

The focus of the meeting was the development of particle physics in the 1980s and 1990s – a period that saw the construction and operation of various important accelerators and detectors. At CERN, these included the UA1 and UA2 experiments at the Super Proton Synchrotron, where the W and Z bosons were discovered. Later, there was the Large Electron-Positron Collider (LEP), which came online in 1989, and the Large Hadron Collider (LHC), approved five years later.

Delegates also heard about the opening of various accelerators in the US during those two decades, including two at the Stanford Linear Accelerator Center – the Positron-Electron Project in 1980 and the Stanford Linear Collider in 1989. Most famous of all was the start-up of the Tevatron at Fermilab in 1983. Over at Dubna in the former Soviet Union, meanwhile, scientists built the Nuclotron, a superconducting synchrotron, which opened in 1992.

Conference speakers covered unfinished machines of the era as well. The US cancelled two proton–proton facilities – ISABELLE in 1983 and the Superconducting Super Collider (SSC) a decade later. The Soviet Union, meanwhile, abandoned the multi-TeV proton–proton collider UNK a few years later, though news has recently emerged that Russia might revive the project.

Several speakers recounted the discovery of the W and Z particles at CERN in 1983 and the discovery of the top quark at Fermilab in 1995. Others addressed the strange fact that fewer neutrinos from the Sun had been detected than theory suggested. The “solar-neutrino problem”, as it was known, was finally resolved by the discovery of neutrino oscillations – observed in atmospheric neutrinos by Takaaki Kajita’s Super-Kamiokande team in 1998 and confirmed for solar neutrinos by Art McDonald’s SNO collaboration – for which the pair shared the 2015 Nobel Prize for Physics.

The conference also addressed unsuccessful searches for proton decay, axions, magnetic monopoles, the Higgs boson, supersymmetric particles and other targets. Other speakers described projects with highly positive outcomes, such as the advent of particle cosmology, or what some have jokingly dubbed “the heavenly lab”. The development of string theory, grand unified theories and perturbative quantum chromodynamics was tackled too.

In an exchange in the question-and-answer session after one talk, the Greek physicist Kostas Gavroglu referred to many such quests as “failures”. That remark prompted the Australian-born US theoretical physicist Helen Quinn to say she preferred the term “falling forward”; such failures, she said, were instances of “I tried this, and it didn’t work so I tried that”.

In relating his work on detecting gravitational waves, the US Nobel-prize-winning physicist Barry Barish said he felt his charge was not to celebrate the importance of his discoveries nor the ingenuity of the route he took. Instead, Barish explained, his job was to answer the much more informal question: “What made me do what?”.

His point was illustrated by the US theorist Alan Guth, who described the very human and serendipitous path he took to working on cosmic inflation – the super-fast expansion of the universe just after the Big Bang. When he started, Guth said, “all the ingredients were already invented”. But the startling idea of inflation hinged on accidental meetings, chance conversations, unexpected visits, a restricted word count for Physical Review Letters, competitions, insecurities and “spectacular realizations” coalescing.

Wider world

Another theme that arose in the conference was that science does not unfold inside its own bubble but can have extensive and immediate impacts on the world around it. Two speakers, for instance, recounted the invention of the World Wide Web at CERN in the late 1980s. It’s fair to say that no other discovery by a single individual – Tim Berners-Lee – has so radically and quickly transformed the world.

The growing role of international politics in promoting and protecting projects was mentioned too, with various speakers explaining how high-level political negotiations enabled physicists to work at institutions and experiments in other nations. The Polish physicist Agnieszka Zalewska, for example, described her country’s path to membership in CERN, while Russian-born US physicist Vladimir Shiltsev spoke about the “diaspora” of Russian particle physicists after the fall of the Soviet Union in 1991.

Sometimes politics created destructive interference. The US physicist, historian and author Michael Riordan described how the US’s determination to “go it alone” to outcompete Europe in high-energy physics was a major factor in bringing about the opposite: the termination of the SSC in 1993. As a result of that project’s controversial closure, the centre of gravity of high-energy physics shifted to Europe.

Indeed, contemporary politics occasionally hit the conference itself in incongruous and ironic ways. Two US physicists, for example, were denied permission to attend because budgets had been cut and travel restrictions increased. In the end, one took personal time off and paid his own way, leaving his affiliation off the programme.

Before the conference, some people complained that the organizers hadn’t paid enough attention to physicists who’d worked in the Soviet Union but were from occupied republics. Several speakers addressed this shortcoming by mentioning people like Gersh Budker (1918–1977). A Ukrainian-born physicist who worked and died in the Soviet Union, Budker was nominated for a Nobel prize in 1957 and even has a street named after him at CERN. Unmentioned, though, was that Budker was Jewish and that his father was killed by Ukrainian nationalists in a pogrom.

On the final day of the conference, which just happened to be World Science Day for Peace and Development, CERN mounted a public screening of the 2025 documentary film The Peace Particle. Directed by Alex Kiehl, much of it was about CERN’s internationalism, with a poster for the film describing the lab as “Mankind’s biggest experiment…science for peace in a divided world”.

But in the Q&A afterwards, some audience members criticized CERN for allegedly whitewashing Russia over its invasion of Ukraine and Israel over accusations of genocide. Those onstage defended CERN on the grounds of its desire to promote internationalism.

The critical point

The keynote speaker of the conference was John Krige, a science historian from Georgia Tech who has worked on a three-volume history of CERN. Those who launched the lab, Krige reminded the audience, had radical “scientific, political and cultural aspirations” for the institution. Their dream was that CERN wouldn’t just revive European science and promote regional collaborative efforts after the Second World War, but potentially improve the global world order too.

Krige went on to quote one CERN founder, who’d said that international science facilities such as CERN would be “one of the best ways of saving Western civilization”. Recent events, however, have shown just how fragile those ambitions are. Alluding to CERN’s Future Circular Collider and other possible projects, Llewellyn Smith ended his closing remarks with a warning.

“The perennial hope that the next big high-energy project will be genuinely global,” he said, “seems to be receding over the horizon due to the polarization of world politics”.

The post The future of particle physics: what can the past teach us? appeared first on Physics World.

https://physicsworld.com/a/the-future-of-particle-physics-what-can-the-past-teach-us/
Robert P Crease

A breakthrough in modelling open quantum matter

By analysing the Liouville gap in imaginary time, scientists reveal universal phase‑transition behaviour in both ground and finite‑temperature states

The post A breakthrough in modelling open quantum matter appeared first on Physics World.

Attempts to understand quantum phase transitions in open systems usually rely on real‑time Lindbladian evolution, which tracks how a quantum state changes as it relaxes toward a steady state. This approach is powerful for studying decoherence, dissipation and long‑time behaviour, but it often fails to reveal the deeper structure of the system, including the phase transitions, critical points and hidden quantum order that define its underlying physics.

In this work, the researchers introduce a new framework called imaginary‑time Lindbladian evolution, which allows them to define and classify quantum phases in open systems using the spectrum of an imaginary‑Liouville superoperator. This approach works not only for pure ground states but also for finite‑temperature Gibbs states of stabilizer Hamiltonians, showing its relevance for realistic, mixed‑state conditions.

A key diagnostic in their method is the imaginary‑Liouville gap, the spectral gap between the lowest and next‑lowest decay modes. When this gap closes, the system undergoes a phase transition, a change that is accompanied by diverging correlation lengths and nonanalytic shifts in physical observables. The closing of this gap also coincides with the divergence of the Markov length, a recently proposed indicator of criticality in open quantum systems.
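To make the diagnostic concrete, here is a minimal numpy sketch of the object in question – the spectral gap of a Lindbladian superoperator – for a single driven, decaying qubit. All parameters are illustrative, and this builds the conventional real-time generator rather than the paper’s imaginary-time construction:

```python
import numpy as np

# Single-qubit operators (|0> = ground, |1> = excited)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |0><1|

def lindblad_superoperator(H, jumps):
    """Vectorized Lindbladian S with vec(rho_dot) = S @ vec(rho),
    using the column-stacking rule vec(A rho B) = kron(B.T, A) vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    S = -1j * (np.kron(I, H) - np.kron(H.T, I))          # -i[H, rho]
    for g, L in jumps:                                   # dissipators
        LdL = L.conj().T @ L
        S += g * (np.kron(L.conj(), L)
                  - 0.5 * np.kron(I, LdL)
                  - 0.5 * np.kron(LdL.T, I))
    return S

S = lindblad_superoperator(0.5 * sx, [(0.2, sm)])   # toy drive plus decay

# Eigenvalues have non-positive real parts; the steady state sits at zero.
# The gap to the next eigenvalue sets the slowest relaxation rate, and its
# closing is the signature of a phase transition.
re = np.sort(np.linalg.eigvals(S).real)[::-1]
print(f"steady-state eigenvalue ~ {re[0]:.1e}, Liouvillian gap = {re[0] - re[1]:.3f}")
```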

To demonstrate the power of their framework, the researchers map out phase diagrams for systems with Z₂^σ × Z₂^τ symmetry, capturing both spontaneous symmetry breaking and average symmetry‑protected topological phases. Their method reveals universal critical behaviour that real‑time Lindbladian steady states fail to detect, highlighting why imaginary‑time evolution fills a missing piece in the theory of open‑system phases.

Importantly, the authors emphasise that real‑time Lindbladians remain essential for modelling dissipation in practical settings. Their new framework complements this conventional approach, offering a systematic way to study phase transitions in open systems. They also outline how phase diagrams can be constructed using both bottom‑up (state‑based) and top‑down (Hamiltonian‑based) strategies, illustrating the method with a dissipative transverse‑field Ising model.

Overall, this work provides a unified and versatile way to understand quantum phases in open systems, revealing critical behaviour and topological structure that were previously inaccessible. It opens new directions for studying mixed‑state quantum matter and advances the theoretical foundations needed for future quantum technologies.

Read the full article

A new framework for quantum phases in open systems: steady state of imaginary-time Lindbladian evolution

Yuchen Guo et al 2025 Rep. Prog. Phys. 88 118001

Do you want to learn more about this topic?

Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)

The post A breakthrough in modelling open quantum matter appeared first on Physics World.

https://physicsworld.com/a/a-breakthrough-in-modelling-open-quantum-matter/
Lorna Brigham

How reversibility becomes irreversible

A new framework shows how lost information in quantum systems gives rise to macroscopic entropy and the arrow of time

The post How reversibility becomes irreversible appeared first on Physics World.

In the macroscopic world we see irreversible processes everywhere: heat flowing from hot to cold, gases mixing, systems decaying. Yet at the microscopic level, quantum mechanics is perfectly reversible, with its equations running equally well forwards and backwards in time. How, then, does irreversibility emerge from fundamentally reversible dynamics?

A common explanation is coarse-graining, which simplifies a complex system by ignoring microscopic details and focusing only on large-scale behaviour. To make the micro–macro divide precise, however, one must first define what “macroscopic” means. Here it is given a quantitative inferential meaning: a state is macroscopic if it is perfectly inferable from the perspective of a specified measurement and prior. Central to this framework is a coarse-graining map built from the measurement and its optimal Bayesian recovery via the Petz map; macroscopic states are precisely its fixed points, turning macroscopicity into a sharp condition of perfect inferability. This construction is grounded in Bayesian retrodiction, which infers what a system likely was before it was measured, together with an observational deficit that quantifies how much information is lost in forming a macroscopic description.
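The Petz recovery map at the heart of this construction has a closed form that is easy to experiment with numerically. Below is a toy sketch – assuming a qubit dephasing channel and a maximally mixed prior, choices made for illustration rather than taken from the paper – in which diagonal (classical) states are perfectly inferable fixed points of the coarse-graining map, while coherent states are not:

```python
import numpy as np

def mpow(A, p):
    """Fractional power of a Hermitian positive matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, 1e-12) ** p) @ V.conj().T

def apply(Ks, rho):        # channel N(rho) = sum_i K_i rho K_i^dag
    return sum(K @ rho @ K.conj().T for K in Ks)

def adjoint(Ks, w):        # adjoint channel N^dag
    return sum(K.conj().T @ w @ K for K in Ks)

def petz(Ks, prior, omega):
    """Petz map: prior^1/2 N^dag( N(prior)^-1/2 omega N(prior)^-1/2 ) prior^1/2."""
    Np = apply(Ks, prior)
    mid = mpow(Np, -0.5) @ omega @ mpow(Np, -0.5)
    return mpow(prior, 0.5) @ adjoint(Ks, mid) @ mpow(prior, 0.5)

# Dephasing channel with flip probability p; maximally mixed prior
p = 0.3
Z = np.diag([1.0, -1.0]).astype(complex)
Ks = [np.sqrt(1 - p) * np.eye(2, dtype=complex), np.sqrt(p) * Z]
prior = 0.5 * np.eye(2, dtype=complex)

coarse = lambda rho: petz(Ks, prior, apply(Ks, rho))    # measure, then recover

diag = np.diag([0.8, 0.2]).astype(complex)      # classical state
plus = 0.5 * np.ones((2, 2), dtype=complex)     # coherent |+><+| state
print(np.allclose(coarse(diag), diag))   # True: a fixed point, hence "macroscopic"
print(np.allclose(coarse(plus), plus))   # False: coherence cannot be inferred back
```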

States that are macroscopically inferable can be characterised in several equivalent ways, all tied to a new measure of disorder called macroscopic entropy, which captures how irreversible, or “uninferable”, a macroscopic process appears from the observer’s perspective. This perspective is formalised through inferential reference frames, built from the combination of a prior and a measurement, which determine what an observer can and cannot recover about the underlying quantum state.

The researchers also develop a resource theory of microscopicity, treating macroscopic states as free and identifying the operations that cannot generate microscopic detail. This unifies and extends existing resource theories of coherence, athermality, and asymmetry. They further introduce observational discord, a new way to understand quantum correlations when observational power is limited, and provide conditions for when this discord vanishes.

Altogether, this work reframes macroscopic irreversibility as an information-theoretic phenomenon, grounded not in a fundamental dynamical asymmetry but in an inferential asymmetry arising from the observer’s limited perspective. It offers a unified way to understand coarse-graining, entropy, and the emergence of classical behaviour from quantum mechanics. It deepens our understanding of time’s direction and has implications for quantum computing, thermodynamics, and the study of quantum correlations in realistic, constrained settings.

Read the full article

Macroscopicity and observational deficit in states, operations, and correlations

Teruaki Nagasawa et al 2025 Rep. Prog. Phys. 88 117601

Do you want to learn more about this topic?

Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)

The post How reversibility becomes irreversible appeared first on Physics World.

https://physicsworld.com/a/how-reversibility-becomes-irreversible/
Lorna Brigham

Visible light paints patterns onto chiral antiferromagnets

New technique for manipulating domains helps pave the way towards antiferromagnetic data storage

The post Visible light paints patterns onto chiral antiferromagnets appeared first on Physics World.

Researchers at Los Alamos National Laboratory in New Mexico, US have used visible light to both image and manipulate the domains of a chiral antiferromagnet (AFM). By “painting” complex patterns onto samples of cobalt niobium sulfide (Co₁/₃NbS₂), they demonstrated that it is possible to control AFM domain formation and dynamics, boosting prospects for data storage devices based on antiferromagnetic materials rather than the ferromagnetic ones commonly used today.

In antiferromagnetic materials, the spins of neighbouring atoms in the material’s lattice are opposed to each other (they are antiparallel). For this reason, they do not exhibit a net magnetization in the absence of a magnetic field. This characteristic makes them largely immune to disturbances from external magnetic fields, but it also makes them all but invisible to simple electrical and optical probes, and extremely difficult to manipulate.

A special structure

In the new work, a Los Alamos team led by Scott Crooker focused on Co₁/₃NbS₂ because of its topological nature. In this material, layers of cobalt atoms are positioned, or intercalated, between monolayers of niobium disulfide, creating 2D triangular lattices with ABAB stacking. The spins of these cobalt atoms point either toward or away from the centers of the tetrahedra formed by the atoms. The result is a noncoplanar spin ordering that produces a chiral, or “handed,” spin texture.

This chirality affects the motion of electrons in the material because when an electron passes through a chiral pattern of spins, it picks up a geometrical phase known as a Berry phase. This makes it move as if it were “seeing” a region with a real magnetic field, giving the material a nonzero Hall conductivity which, in turn, affects how it absorbs circularly polarized light.

Characterizing a topological antiferromagnet

To characterize this behaviour, the researchers used an optical technique called magnetic circular dichroism (MCD) that measures the difference in absorption between left and right circularly polarized light and depends explicitly on the Hall conductivity.

Similar to the MCD that is measured in well-known ferromagnets such as iron or nickel, the amplitude and sign of the MCD measured in Co₁/₃NbS₂ varied as a function of the wavelength of the light. This dependence occurs because light prompts optical transitions between filled and empty energy bands. “In more complex materials like this, there is a whole spaghetti of bands, and one needs to consider all of them,” Crooker explains. “Precisely which mix of transitions are being excited depends of course on the photon energy, and this mix changes with energy. Sometimes the net response is positive, sometimes negative; it just depends on the details of the band structure.”

To understand the mix of transitions taking place, as well as the topological character of those transitions, scientists use the concept of Berry curvature, which is the momentum-space version of the magnetic field-like effect described earlier. If the accumulated Berry phase is positive (negative), then the electron is moving through a spin texture of right-handed (left-handed) chirality, which is captured by the Berry curvature of the band structure in momentum space.
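For readers who want a feel for what Berry curvature looks like in practice: for any two-band Hamiltonian H(k) = d(k)·σ, the curvature of the lower band has a standard closed form, Ω = −½ d̂·(∂kx d̂ × ∂ky d̂), and integrating it over the Brillouin zone yields a quantized Chern number. The sketch below uses a generic textbook two-band model, not the actual band structure of Co₁/₃NbS₂:

```python
import numpy as np

def d_vec(kx, ky, m=1.0):
    """Toy two-band model H(k) = d(k).sigma (Qi-Wu-Zhang-like), for illustration."""
    return np.array([np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)])

def berry_curvature(kx, ky, h=1e-4):
    """Omega = -1/2 d_hat . (d(d_hat)/dkx x d(d_hat)/dky) for the lower band."""
    n = lambda x, y: d_vec(x, y) / np.linalg.norm(d_vec(x, y))
    dnx = (n(kx + h, ky) - n(kx - h, ky)) / (2 * h)   # finite differences
    dny = (n(kx, ky + h) - n(kx, ky - h)) / (2 * h)
    return -0.5 * np.dot(n(kx, ky), np.cross(dnx, dny))

# Chern number = (1/2pi) * integral of Omega over the Brillouin zone
ks = np.linspace(-np.pi, np.pi, 101)[:-1]
dk = ks[1] - ks[0]
C = sum(berry_curvature(kx, ky) for kx in ks for ky in ks) * dk**2 / (2 * np.pi)
print(f"Chern number ~ {C:.3f}")   # quantized: |C| = 1 for this choice of m
```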

Imaging and painting chiral AFM domains

To directly image the domains with positive and negative chirality, the researchers cooled the sample below its ordering temperature, shined light of a particular wavelength onto it and measured its MCD using a scanning MCD microscope. The sign of the measured MCD value revealed the chirality of the AFM domains.

To “write” a different chirality into these AFM domains, the researchers again cooled the sample below its ordering temperature, this time in the presence of a small positive magnetic field B, which fixed the sample in a positive chiral AFM state. They then reversed the polarity of B and illuminated a spot of the sample to heat it above the ordering temperature. Once the spot cooled down, the negative-polarity B-field changed the AFM state in the illuminated region into a negative chirality. When the “painting” was finished, the researchers imaged the patterns with the MCD microscope.
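In cartoon form, the write protocol reduces to a single rule: any region heated above the ordering temperature adopts the chirality favoured by the applied field as it re-cools, while cold regions keep their existing state. A purely illustrative sketch of that logic on a pixel grid (not a physical simulation):

```python
import numpy as np

def field_cool(domains, B_sign, hot_mask):
    """Pixels heated above the ordering temperature re-order with the sign of the
    applied field; all other AFM domains are unaffected by the field reversal."""
    out = domains.copy()
    out[hot_mask] = B_sign
    return out

# Step 1: global field-cool in +B gives a uniform positive-chirality state
domains = np.ones((64, 64), dtype=int)

# Step 2: reverse the field, illuminate (heat) a spot, and let it re-cool
yy, xx = np.mgrid[0:64, 0:64]
spot = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2       # laser-heated disc
domains = field_cool(domains, B_sign=-1, hot_mask=spot)

# An MCD scan would now show a negative-chirality disc on a positive background
print(int(spot.sum()), "pixels flipped; net chirality =", domains.sum())
```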

In the past, a similar thermo-magnetic scheme gave rise to ferromagnetic-based data storage disks. This work, which is published in Physical Review Letters, marks the first time that light has been used to manipulate AFM chiral domains – a fundamental requirement for developing AFM-based information storage technology and spintronics. In the future, Crooker says, the group plans to extend this technique to characterize other complex antiferromagnets with nontrivial magnetic configurations, use light to “write” interesting spatial patterns of chiral domains (patterns of Berry phase), and see how this influences electrical transport.

The post Visible light paints patterns onto chiral antiferromagnets appeared first on Physics World.

https://physicsworld.com/a/visible-light-paints-patterns-onto-chiral-antiferromagnets/
Nohora Hernández

Green concrete: paving the way for sustainable structures

Concrete is one of the world's biggest carbon emitters. Benjamin Skuse asks if AI can help tame concrete’s climate impact

The post Green concrete: paving the way for sustainable structures appeared first on Physics World.

Grey, ugly, dull. Concrete is not the most exciting material in the world. That is, until you start to think about its impact on our lives. Concrete is the second most consumed material on the planet after water. Humanity uses about 30 billion tonnes of the stuff every year, the equivalent of building an entire new New York City every month. Put another way, there is so much concrete in the world and so much being made that by the 2040s it will outweigh all living matter.

As the son of a builder, I have made a few concrete mixes over the years myself, usually following my father’s tried and trusted recipe. Take one part cement (fine mineral powder), two parts sand, and four parts aggregate (crushed stone), then mix and add enough water until it all goes gloopy.

The ubiquity and low cost of these simple ingredients are just two of the reasons for concrete’s global reach. In liquid form, it can be moulded into almost any shape, and once set, it is as hard and durable as stone. What’s more, it doesn’t burn, rot or get eaten by animals.

These factors make concrete the ideal material for everything from vast imposing dams to sleek kitchen floors. However, its gargantuan presence across society comes at an equally epic environmental cost. If concrete were a country, it would rank third behind only the US and China as a greenhouse gas emitter.

Though raw material processing and transport of concrete are part of the problem, concrete’s biggest environmental impact comes from the heat and chemical processes involved in producing cement. Ordinary cement clinker (the raw form of cement before it is ground to a powder) is the product of heating limestone up to 1450 °C until it breaks apart into lime and carbon dioxide (CO₂). This heating requires lots of energy and the chemical process releases huge amounts of the greenhouse gas CO₂ – meaning that cement makes up around 90% of the carbon footprint of an average concrete mix.
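The chemistry behind that figure is simple stoichiometry: calcination is CaCO₃ → CaO + CO₂, so the process emissions follow directly from molar masses. A quick back-of-the-envelope check (real clinker is not pure CaO, so published process-emission factors of roughly 0.5 tonnes of CO₂ per tonne of clinker sit between these two bounds):

```python
# Molar masses in g/mol
M_CaCO3, M_CaO, M_CO2 = 100.09, 56.08, 44.01

# Process CO2 released per tonne of limestone calcined (fuel emissions excluded)
print(f"{M_CO2 / M_CaCO3:.2f} t CO2 per t limestone")   # ~0.44
# ...and per tonne of lime (CaO) produced
print(f"{M_CO2 / M_CaO:.2f} t CO2 per t CaO")           # ~0.78
```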

Cement factory at twilight

In the UK and some other parts of the world, this climate impact is well recognized, with the industry having made significant efforts to decarbonize over the last few decades. “Since 1990, the UK concrete industry has decreased its direct and indirect environmental impacts by over 53% through various technology levers,” says Elaine Toogood – an architect and senior director at the Mineral Products Association’s Concrete Centre, the UK’s technical hub for all things concrete.

This reduction has been achieved through actions such as fuel switching, decarbonizing electricity and transport networks, and carbon capture technology. “For example, over 50% of all the heat that’s needed to make cement is now supplied by waste-derived fuels,” Toogood adds.

Yet the sheer scale of the global concrete industry means that much more needs to be done to fully mitigate concrete’s carbon impact. Can physics, and more specifically AI, lend a hand?

Low-carbon replacements

Replacing cement – concrete’s least green ingredient – with low-carbon alternatives seems like a good place to start. Two well-proven options have been available for decades.

Fly ash – the by-product of burning coal at power plants – can replace about 30% of cement in concrete mixes. It has been used in the construction of many prominent structures including the Channel Tunnel, which opened in 1994. Blast furnace slag – the by-product of iron and steel production – is another capable replacement, and can account for up to 70% of cement content. Slag was used in 2009 to substitute half of the regular cement in the precast concrete units that now make up the sea defences on Blackpool beach.

Yet although these waste materials are currently extensively used as cement or concrete additions in the UK and elsewhere, they rely on very polluting sources (coal-fired power plants and blast furnaces) that are gradually being phased out globally to meet climate targets. As a result, fly ash and blast furnace slag are not long-term solutions. New low-carbon materials are needed, which is where physics can play a decisive role.

Based at Debre Tabor University in Ethiopia, Gashaw Abebaw Adanu is an expert in innovative construction materials. In 2021 he and colleagues investigated the potential of partially replacing (0%, 5%, 10%, 15% and 20%) standard cement with ash from burning lowland Ethiopian bamboo leaf, a common local construction waste material (Adv. Civ. Eng 10.1155/2021/6468444). The findings were encouraging. Though the concrete took longer to set with increased bamboo leaf ash content, the material’s strength, water absorption and resistance to sulphate attack (concrete breakdown caused by sulphate ions reacting with the hardened cement paste) improved for 5–10% bamboo leaf ash mixes. The results suggest that up to 10% of cement could be swapped for this local low-carbon alternative.

Steel, copper – or hair?

More recently, Adanu has turned his focus to concrete fibre reinforcement. Adding small amounts of steel, copper or polyethene fibres is known to increase concrete’s ductility and crack resistance by up to 200% and 90%, respectively. The tiny fibres act like micro-stitches throughout the entire mix, transforming concrete from a brittle material into a tough, energy-absorbing composite.

Fibre reinforcement also leads to major cost savings and a reduced carbon footprint, primarily by removing the need for traditional steel rebar and mesh, where 50 kg of steel fibres can often do the work of 100 kg+ of traditional rebar. Eliminating this expensive material also reduces labour and maintenance costs.

In his latest research, Adanu has explored an unexpected alternative fibre-reinforcement material that would decrease costs further as it would otherwise go to landfill: human hair (Eng. Res. Express 7 015115). Adanu took waste hair from barbershops in Debre Tabor (with permission, of course) and added it in varying small quantities to standard concrete mixes. “It’s not biodegradable, it’s not compostable, but as a fibre reinforcement material, we found that using 1–2% human hair improves the concrete’s tensile strength, compressive strength, cracking resistance and reduces shrinkage,” says Adanu. “It makes concrete more clean and sustainable, and because it improves the quality of the concrete, it reduces cost at the same time.”

Research like Adanu’s, involving experimentation with local materials, has been the driving force for innovation in construction for millennia. Examples range from the ancient Neolithic practice of boosting mudbricks’ strength by adding local straw, to the Romans using volcanic dust as high-quality cement for concrete constructions like the Pantheon in Rome – a structure that still stands to this day, with its 43.3-m diameter non-reinforced concrete dome remaining the largest in the world. But testing one material at a time is no longer the only way.

Four photos of concrete buildings

Taking a more modern, wide-ranging approach, a team of researchers led by Soroush Mahjoubi and Elsa Olivetti of the Massachusetts Institute of Technology (MIT) recently mined the cement and concrete literature, and a database of over one million rock samples, looking for cement ingredient substitutes (Communications Materials 6 99). The study confirmed the potential not only of the well-known alternatives fly ash and metallurgic slags, but also of various biomass ashes like the bamboo leaf ash Adanu investigated, as well as rice husk, sugarcane bagasse, wood, tree bark and palm oil fuel ashes.

The meta-review also identified various other waste materials with high potential. These include construction and demolition wastes (ceramics, bricks, concrete), waste glass, municipal solid waste incineration ashes, and mine tailings (iron ore, copper, zinc), as well as 25 igneous rock types that could significantly reduce cement’s carbon impact.

AI to the rescue

Although a number of these alternative concrete materials have been known for some time, they have struggled to make an impact, with very few being used to partially replace regular cement in ready-mix concretes. Getting construction companies or concrete contractors to give them a try is no simple task.

“Concrete contractors are used to using certain mixes for certain jobs at certain times of the year, so they can plan a site and project based on how those materials are going to behave,” says Toogood. “Newer mixes act slightly differently when fresh,” she adds, which makes life tricky for those running a construction site, where concrete that behaves in a predictable manner is critical so that things run smoothly and efficiently.

Two physicists – Raphael Scheps and Gideon Farrell – aim to build this trust in low-carbon alternatives through their UK construction technology company Converge. Starting out using sensors to measure the real-time performance of different mixes of concrete in situ, they have built one of the world’s largest datasets on the performance of concrete.

Two photos of sensors on building sites - a macro shot of a probe and a wider shot of a person wearing hi-vis and a hard hat crouched on a concrete surface

They can now apply an AI model underpinned by physics principles. The program simulates the physical and chemical interactions of different components to predict the performance of a vast number of concrete mixes in a wide range of situations to a high level of accuracy. And this is key, as it builds trust to experiment with lower-carbon mixes. “With projects in the UK and Australia, we’ve helped people tweak the mix that they’re using and achieve quite major carbon savings,” says Scheps. “Anywhere from 10% all the way up to 44%.”

Currently used to recommend existing cost-saving concrete recipes, Scheps sees Converge’s AI model becoming more sophisticated over time. “As it starts to uncover the real fundamental physics-based rules for what drives concrete chemistry, our model will make projections for entirely new materials,” he enthuses.

Also exploring the power of AI to optimize concrete production is US company Concrete.ai. Like Converge, Concrete.ai was born from the idea of applying physics principles to optimize traditional materials and industries; specifically, how AI can be used to reduce the carbon footprint of concrete. And also like Converge, the company’s technology rests on one of the world’s largest concrete databases, consisting of vast amounts of different recipes and materials, alongside their associated performances.

Trained on this dataset, Concrete.ai’s generative AI model creates millions of possible mix designs to identify the optimal concrete recipe for any particular application. “The main difference between a solution like Concrete.ai’s and general models like ChatGPT or Gemini is that our goal is really to create recipes that don’t exist yet,” explains chief technology officer and co-founder Mathieu Bauchy. “Popular large language models regurgitate what they have been trained on and tend to hallucinate, whereas our model discovers new recipes that have never been produced before without breaking the laws of physics or chemistry, and in a reliable way.”
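Neither company has published its model, but the basic shape of AI-driven mix design – generate many candidate recipes, score them with a performance surrogate, then keep the lowest-carbon mix that meets the specification – can be sketched in a few lines. Everything below, including the strength surrogate and the emission factors, is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_strength(cement, scm, water):
    """Made-up surrogate: strength rises with binder content and falls with the
    water/binder ratio. Real tools fit such models to large datasets of mixes."""
    binder = cement + 0.7 * scm            # SCM assumed 70% as reactive as cement
    return 200 * binder / (0.3 + water / binder)

def embodied_co2(cement, scm):
    return 0.85 * cement + 0.05 * scm      # illustrative kg CO2 per kg factors

# Sample candidate mixes as (cement, SCM, water) mass fractions
cands = rng.uniform([0.05, 0.00, 0.10], [0.25, 0.15, 0.25], size=(100_000, 3))
strength = toy_strength(*cands.T)
co2 = embodied_co2(cands[:, 0], cands[:, 1])

ok = strength >= 40.0                      # e.g. a 40 MPa design requirement
best = cands[ok][np.argmin(co2[ok])]
print(f"cement {best[0]:.2f}, SCM {best[1]:.2f}, water {best[2]:.2f}, "
      f"CO2 score {co2[ok].min():.3f}")
```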

Bauchy sees Concrete.ai’s role as a bridge between concrete producers keen to cut their costs and carbon footprint, and innovators like Adanu or the MIT group exploring new low-carbon concrete materials who are unable to demonstrate the performance of these materials in real-world scenarios and at scale.

Circular benefits

It is perhaps apt that the industry most in need of AI insights from the likes of Converge, Concrete.ai and their growing number of competitors is the AI industry itself. New data centres being used to train, deploy and deliver AI applications and services are the cause of a huge spike in the greenhouse gas emissions of tech giants such as Google, Meta, Microsoft and Amazon. And one of the biggest contributors to those emissions is the concrete from which these hyperscale facilities are built.

Aerial view of large industrial building complex next to a solar farm

This is the reason Meta recently partnered with concrete maker Amrize to develop AI-optimized concrete. For Meta’s new 66,500 m² data centre in Rosemount, Minnesota, the partners applied Meta’s AI models and Amrize’s materials-engineering expertise to deliver concrete that met key criteria including high strength and low carbon content, as well as practical performance characteristics like decent cure speed and surface quality. The partners estimate that the custom mix will reduce the total carbon footprint of this concrete by 35%.

“There is an interesting synergy between concrete and AI,” says Bauchy. “AI can help design greener concrete, and on the other hand, concrete can be used to build more sustainable data centres to power AI.” With other tech giants exploring AI’s potential in reducing the carbon footprint of the concrete they use too, it may well be that the very places in which AI is developed become the testbeds for AI-derived sustainable green concrete solutions.

The post Green concrete: paving the way for sustainable structures appeared first on Physics World.

https://physicsworld.com/a/green-concrete-paving-the-way-for-sustainable-structures/
No Author

New journal aims to advance the interdisciplinary field of personalized health

Medical Sensors and Imaging will publish innovative research at the intersection of engineering, biomedical and computer sciences

The post New journal aims to advance the interdisciplinary field of personalized health appeared first on Physics World.

Personalized health – the use of individualized measurements to address each patient’s specific needs – is a research field that’s evolving at pace. Bringing this level of personalization into the clinic is an interdisciplinary challenge, requiring the development of sensors that generate clinically meaningful data outside the hospital, new imaging modalities and analysis techniques, and computational tools that address the uncertainties of dealing with just one individual.

Much of the most impactful work in this field sits in the spaces between established disciplines. And for researchers looking to publish their findings or read about the latest breakthroughs, this work is often scattered across discipline-specific journals. A new open access journal from IOP Publishing – Medical Sensors & Imaging (MSI) – aims to remedy this shortfall, providing a dedicated home for authors working across sensing, imaging, modelling and data-driven healthcare.

Medical Sensors & Imaging

“We want a journal where physicists, engineers, computer scientists, biomedical researchers and clinicians can publish and read work that advances personalized health, without confinement into traditional silos,” explains founding editor-in-chief Marco Palombo from Cardiff University. “MSI also aims to play an important role in strengthening interdisciplinary exchange.”

“The community needs a specialized forum that doesn’t just report on new materials or a clinical trial, but validates innovations that can specifically solve complex biomedical challenges,” adds deputy editor Xiliang Luo from Qingdao University of Science and Technology. “I think this journal is a perfect fit for that gap.”

Connecting communities

Published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine (IPEM), MSI aims to dismantle the barriers between engineering innovation and clinical application by creating a community of experts that work together to translate innovative technology into clinical settings.

MSI sits within IPEM’s journal portfolio that includes Physics in Medicine & Biology, Physiological Measurement and Medical Engineering & Physics. Its aims and scope were designed to complement, rather than overlap with, these existing journals and provide a dedicated venue for translational work and practical applied research that may otherwise struggle to fit a traditional scope.

Marco Palombo

Being part of this established family of journals brings with it strong editorial standards, an established readership base and a commitment to scientific integrity. The journal also offers rapid, high-quality peer review, with feedback that’s constructive, rigorous and fair. MSI is fully open access, which maximizes the visibility, reach and impact of its published papers.

“For a new journal in a dynamic field, ensuring content is discoverable and barrier-free is essential for building an audience quickly and establishing credibility,” says Palombo. “We also wanted MSI to support global participation. Many excellent groups operate with limited budgets but make major scientific contributions. Open access reduces inequities in who can read and build on published work.”

“For the authors, we can provide a specialized platform for scientists whose work transcends traditional boundaries, offering visibility to a broad audience that’s eager for translational solutions,” says Luo. “And for the readers, I think we will be the go-to resource for academic researchers, industry R&D leaders, and healthcare innovators seeking the latest breakthroughs in personalized health monitoring and advanced diagnostics.”

Hot topics

Palombo contributed to the strategic development of the journal at an early stage, drawing upon his experience in healthcare and medical imaging research and engaging with the research community to identify the scientific niche that MSI could fill. Working with IOP Publishing, he helped shape the journal’s aims and scope and assembled a diverse, internationally recognized editorial board with knowledge aligned with the journal’s mission – including Luo, who brings specialist expertise in wearable technologies and biosensors.

Xiliang Luo

The journal will publish high-quality research on novel biomedical sensing and imaging techniques, along with the algorithms, validation frameworks and translational studies that demonstrate their application in real-world medicine. MSI also provides a platform to showcase research on hot topics such as wearable and implantable sensors for continuous physiological monitoring, for example, or microneedle-based sensing technologies and breath analysis.

The development of flexible and biocompatible materials will be key for the growth of bio-integrated devices and biodegradable or transient electronics, as will anti-fouling strategies that enable use of sensors in complex biological environments. On the imaging side, the journal scope encompasses mainstay medical imaging techniques such as MRI, CT, ultrasound, PET and SPECT, as well as emerging multimodal and hybrid approaches, with a focus on technical innovation and translational relevance.

“Given my own background, I’m particularly keen to see strong submissions in the area of MRI, including advanced quantitative biomarkers and approaches that probe tissue microstructure,” notes Palombo. “I also see huge potential in connecting imaging to computational modelling – particularly digital twins – and in building imaging pipelines that enable personalized diagnosis and prognosis.”

“Other exciting areas include combining sensing and imaging technologies into one system, and closed-loop ‘sense then act’ systems, which sense something and can then release medicine to treat the disease,” says Luo.

The rise of AI

Artificial intelligence (AI) is becoming increasingly central to both sensing and imaging, and will likely play a major role in the evolution of personalized health, enabling a shift towards multimodal fusion of sensor streams, imaging and clinical data. AI could also facilitate the introduction of integrated sensor systems that collect data and interpret signals in real time, and digital twins that link patient-specific data with computational models to simulate disease progression or treatment response.

Palombo emphasizes the importance of trustworthy AI: methods that don’t just provide an output, but are explainable, robust and explicitly handle uncertainty. This is a direction seen in the general field of AI, but is especially important within healthcare. He also cites the increasing momentum around green healthcare and green AI, with personalized health technologies designed to reduce waste and minimize energy consumption, and clinical models developed with far greater computational efficiency.

“It would be fantastic to have an AI model running directly on the sensor, for example, and this ties in with the environmental impact of AI,” he explains. “If we keep AI small and manageable, then it pollutes less, is more affordable for everybody and can be deployed on small, lightweight devices.”

A community focal point

Looking ahead, Palombo hopes that MSI will become a leading platform for interdisciplinary innovation in personalized health, and the routine home for publishing major advances in sensing, imaging, modelling and trustworthy AI. “Over time, I’d like the journal to build depth in core areas, while also actively shaping emerging directions such as digital twins, uncertainty-aware and explainable AI, multimodal integration and technologies that are genuinely deployable in clinical workflows.”

“Currently, the fields of sensor engineering and clinical medicine often run on parallel tracks. My hope is that this journal will force these tracks to converge over time,” adds Luo. “I see the journal fostering a new language where chemists, physicists, engineers and doctors can understand each other by publishing papers in MSI.”

The post New journal aims to advance the interdisciplinary field of personalized health appeared first on Physics World.

https://physicsworld.com/a/new-journal-aims-to-advance-the-interdisciplinary-field-of-personalized-health/
Tami Freeman

Olympian Eileen Gu rules the piste with physics and international relations

There is lots of classical mechanics in freestyle skiing

The post Olympian Eileen Gu rules the piste with physics and international relations appeared first on Physics World.

Here at Physics World we are always on the lookout for physicists with extraordinary talents outside of science. In 2023, for example, we were in awe of Harvard University’s Jenny Hoffman, who ran across the US in 47 days, 12 hours and 35 minutes – shattering the previous record by one week.

Now, coverage of the Winter Olympics in Italy has revealed that the Chinese freestyle skier Eileen Gu had studied physics at Stanford University. The most decorated female Olympic freestyle skier in history, US-born Gu bagged two gold medals and a silver at the 2022 Beijing games and added three silvers at Milano Cortina.

Gu has subsequently switched majors to international relations at Stanford, but we can still celebrate her as an honorary physicist.

Physics-rich event

Indeed, freestyle skiing is quite possibly the most physics-rich of all Olympic events. Athletes must consider friction, gravity and the conservation of momentum and angular momentum to perfect their skiing.
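Angular momentum conservation is the most visible of these in aerial tricks: once a skier leaves the kicker, their angular momentum L = Iω is fixed, so pulling into a tuck (reducing the moment of inertia I) speeds up the spin. A back-of-the-envelope sketch with made-up numbers:

```python
# L = I * omega is conserved in the air, so omega scales as 1/I
I_open, I_tuck = 14.0, 5.0   # moments of inertia in kg m^2 (rough guesses)
omega_open = 2.0             # spin rate at take-off, revolutions per second
omega_tuck = omega_open * I_open / I_tuck
print(f"{omega_tuck:.1f} rev/s in the tuck")   # ~5.6 rev/s from a 2 rev/s launch
```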

Now, I’m not suggesting that studying free-body diagrams of freestyle manoeuvres is essential for Olympic success, but I live in hope that an understanding of classical mechanics can improve one’s skiing. (I’m not sure why I believe this, because a PhD and decades of writing about physics certainly hasn’t improved my skiing!).

As well as being lauded for her prowess on the snow, Gu has found herself at the centre of an international furore regarding her choice of competing for China rather than for the US. So, international relations combined with physics seems like a very good course of study!

The post Olympian Eileen Gu rules the piste with physics and international relations appeared first on Physics World.

https://physicsworld.com/a/olympian-eileen-gu-rules-the-piste-with-physics-and-international-relations/
Hamish Johnston

Wobbling gyroscopes could harvest energy from ocean waves

Design can be tuned to work at a wide range of wave frequencies

The post Wobbling gyroscopes could harvest energy from ocean waves appeared first on Physics World.

A new way of extracting energy from ocean waves has been proposed by a researcher in Japan. The system couples a gyroscope to an electrical generator and could be fine tuned to extract energy from a wide range of wave conditions. A prototype of the design is currently being built for testing in a wave tank. If successful, the system could be used to generate electricity onboard ships.

Ocean waves contain huge amounts of energy and humans have tried to harness this energy for centuries. But, despite the development of myriad technologies and a number of trials, the widespread commercial conversion of wave energy remains an elusive goal. One important problem is that most generation schemes only work within a narrow range of wave conditions – and the ocean can be a very messy place.

Now, Takahito Iida at the University of Osaka has proposed a new energy-harvesting technology based on a gyroscopic flywheel system that can be tuned to absorb energy efficiently over a broad range of wave frequencies.

“Wave energy devices often struggle because ocean conditions are constantly changing,” says Iida. “However, a gyroscopic system can be controlled in a way that maintains high energy absorption, even as wave frequencies vary.”

Wobbling top

At the heart of the technology is gyroscopic precession, whereby a torque on a rotating object causes the object’s axis of rotation to trace out a circle. This is familiar to anyone who has played with a spinning top, which will wobble (precess) when perturbed.

Iida’s device is called a gyroscopic wave energy converter and comprises a spinning flywheel mounted on a floating platform. On calm seas, the gyroscope’s axis of rotation points in a fixed direction thanks to the conservation of angular momentum. However, waves will cause the platform to pitch from side to side, exerting torques on the gyroscope and causing it to precess. It is this precession that drives a generator to deliver electrical power.

To design the system, Iida used linear wave theory to model the coupled interactions between waves, the platform, the gyroscope and the generator. This allowed him to devise a scheme for tuning the gyroscope frequency and generator parameters so that an energy conversion efficiency of 50% is achieved for a variety of wave conditions.

The effect of the generator was modelled as a spring-damper. This is a system that responds to a torque by storing and then returning some energy to the gyroscope (the spring), and removing some energy by converting it to electricity (the damper). Iida discovered that a maximum conversion of 50% occurs when the spring coefficient of the generator is adjusted such that the gyroscope’s resonant frequency matches the resonant frequency of the floating platform.
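That 50% figure is the classic impedance-matching result for a resonant absorber, and it can be reproduced in a toy one-degree-of-freedom stand-in for the coupled gyroscope–platform system (a deliberate simplification, not Iida’s coupled model). Power delivered to the generator damper peaks when the spring is tuned to resonance and the generator damping matches the radiation damping; at that optimum, exactly half of the power removed from the wave is converted:

```python
import numpy as np

m, c_rad, F, w = 1.0, 0.5, 1.0, 1.2   # mass, radiation damping, force, frequency

def powers(k_spring, c_pto):
    """Time-averaged power into the generator (PTO) damper and total dissipated,
    for a forced mass-spring-damper with impedance Z = c + i(w m - k/w)."""
    Z = (c_rad + c_pto) + 1j * (w * m - k_spring / w)
    v = F / abs(Z)                                    # velocity amplitude
    return 0.5 * c_pto * v**2, 0.5 * (c_rad + c_pto) * v**2

# Sweep stiffness and generator damping, then locate the maximum useful power
ks = np.linspace(0.5, 3.0, 201)
cs = np.linspace(0.05, 2.0, 201)
grid = np.array([[powers(k, c)[0] for c in cs] for k in ks])
i, j = np.unravel_index(grid.argmax(), grid.shape)
p_use, p_tot = powers(ks[i], cs[j])
print(f"best k = {ks[i]:.2f} (resonance: w^2 m = {w * w * m:.2f}), "
      f"best c_pto = {cs[j]:.2f} (c_rad = {c_rad})")
print(f"useful fraction = {p_use / p_tot:.2f}")       # -> 0.50 at the optimum
```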

Fundamental constraint

Iida explains that 50% is the maximum efficiency that can be achieved. “This efficiency limit is a fundamental constraint in wave energy theory. What is exciting is that we now know that it can be reached across broadband frequencies, not just at a single resonant condition.”

Iida tells Physics World that a small prototype (approximately 50 cm³ in size) is being built and will be tested in a 100 m-long tank.

The next step will be the development of a system with a generating capacity of about 5 kW. Iida says that the ultimate goal is a 300 kW generator.

Iida also explains that the gyroscopic wave energy converter is designed to operate untethered to the seabed. As a result he says the system would be ideal for use as an auxiliary power system for a ship. “The target output of 300 kW is based on the assumed auxiliary power demand of a typical commercial vessel,” says Iida.

The research is described in the Journal of Fluid Mechanics.

The post Wobbling gyroscopes could harvest energy from ocean waves appeared first on Physics World.

https://physicsworld.com/a/wobbling-gyroscopes-could-harvest-energy-from-ocean-waves/
Hamish Johnston

World’s smallest QR code paves the way for ultralong-life data storage

Tiny QR code etched on ceramic sets the Guinness World Record as the world’s smallest

The post World’s smallest QR code paves the way for ultralong-life data storage appeared first on Physics World.

A team headed up at TU Wien in Austria has set the Guinness World Record for creating the world’s smallest QR code. Working with industry partner Cerabyte, the researchers produced a stable and repeatedly readable QR code with an area of just 1.977 µm². When read out – using an electron microscope, as its structure is too fine to be seen with a standard optical microscope – the QR code links to a scientific webpage at TU Wien.

But this wasn’t just a ploy to get into the record books: the QR code was created as part of the team’s research into ceramic data storage materials. Unlike conventional magnetic or electronic data storage media, which degrade within decades, ceramic-based storage is designed to withstand extreme temperatures, radiation, chemical corrosion and mechanical damage.

As such, information stored in ceramic materials could endure for centuries, or even millennia. And in contrast to today’s data centres, ceramics preserve stored information without any energy input and without requiring cooling.

Electron microscope image of QR code

To create these ultralong-life data storage systems, the researchers use focused ion beams to mill the QR code into a thin film of chromium nitride, a durable ceramic often used to coat high-performance cutting tools. As each individual pixel is just 49 nm in size, roughly 10 times smaller than the wavelength of visible light, the code cannot be imaged using visible light. But when examined with an electron microscope, the QR code could indeed be read out reliably.
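Those two numbers are mutually consistent, as a quick check shows (the 29 × 29 module count is an inference from the reported figures, not something the team states):

```python
import math

area_um2 = 1.977   # reported QR code area in square microns
pixel_nm = 49      # reported pixel (module) size in nanometres

side_nm = math.sqrt(area_um2) * 1000
print(f"side ~ {side_nm:.0f} nm, ~ {side_nm / pixel_nm:.1f} modules across")
# ~1406 nm and ~29 modules: consistent with a version-3 QR symbol (29 x 29)
```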

After the writing process, the entire stack of ceramic films is subjected to extreme conditions, such as high temperatures, corrosive environments and mechanical stress, to evaluate the material’s long-term durability and readout stability.

Pushing storage to its limits

Creating a “tiny QR code” was not the team’s initial goal, but emerged as a natural outcome of pushing this storage technology to its limits, says Paul Mayrhofer from TU Wien’s Institute of Materials Science and Technology.

“During a discussion with one of my PhD students, Erwin Peck, we realised that the writing procedure we had developed already produced features smaller than what had previously been reported for QR codes,” he explains. “This sparked the idea: if we can reliably write structures at that scale, why not intentionally create the smallest QR code possible?”

To claim its place in the record books, the QR code was successfully milled and read out in the presence of witnesses and its size independently verified using calibrated scanning electron microscopy at the University of Vienna. It is now officially recognized by Guinness as the world’s smallest QR code, and is roughly one third the size of the previous record holder.

Mayrhofer points out that the storage capacity of the ceramic data storage technology far surpasses that of a single QR code. “Based on current estimates, a cartridge of 100 x 100 x 20 mm with ceramic storage medium could potentially store on the order of 290 terabytes of raw data,” he says.
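A rough consistency check of that estimate is possible using the demonstrated 49 nm pixel pitch; the implied layer count is an assumption of this sketch, not a published Cerabyte figure:

```python
# One bit per 49 nm pixel gives the areal density of a single ceramic layer
pixel_m = 49e-9
area_m2 = 0.1 * 0.1                               # 100 mm x 100 mm layer
bytes_per_layer = area_m2 / pixel_m**2 / 8
print(f"{bytes_per_layer / 1e12:.2f} TB per layer")        # ~0.52 TB

layers = 290e12 / bytes_per_layer                 # layers needed for 290 TB
print(f"~{layers:.0f} layers, i.e. ~{20e-3 / layers * 1e6:.0f} um per layer "
      f"in a 20 mm stack")                        # ~36 um of film plus substrate
```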

As well as offering this impressive raw capacity, for practical applications it’s also crucial that the ceramic storage offers high writing speed, which determines how efficiently large datasets can be stored, and low energy consumption during writing, which will influence the potential for scalability and sustainability. The researchers are currently working to optimize both of these parameters.

“Humanity has preserved information for millennia when carved in stone, yet much of today’s digital information risks being lost within decades,” project leader Alexander Kirnbauer tells Physics World. “Our long-term goal is to create an ultrastable, sustainable data storage technology capable of preserving information for extremely long times – potentially thousands to millions of years. In essence, we want to develop a form of storage that ensures the knowledge of our digital age does not disappear over time.”

The post World’s smallest QR code paves the way for ultralong-life data storage appeared first on Physics World.

https://physicsworld.com/a/worlds-smallest-qr-code-paves-the-way-for-ultralong-life-data-storage/
Tami Freeman

Quantum Systems Accelerator focuses on technologies for computing

Bert de Jong of Lawrence Berkeley National Lab is our podcast guest

The post Quantum Systems Accelerator focuses on technologies for computing appeared first on Physics World.

Developing practical technologies for quantum information systems requires the cooperation of academic researchers, national laboratories and industry. That is the mission of the Quantum Systems Accelerator (QSA), which is based at the Lawrence Berkeley National Laboratory in the US.

The QSA’s director Bert de Jong is my guest in this episode of the Physics World Weekly podcast. His academic research focuses on computational chemistry and he explains how this led him to realise that quantum phenomena can be used to develop technologies for solving scientific problems.

In our conversation, de Jong explains why the QSA is developing a range of qubit platforms – including neutral atoms, trapped ions and superconducting qubits – rather than focusing on a single architecture. He champions the co-development of quantum hardware and software to ensure that quantum computing is effective at solving a wide range of problems from particle physics to chemistry.

We also chat about the QSA’s strong links to industry and de Jong reveals his wish list of scientific problems that he would solve if he had access today to a powerful quantum computer.

This podcast is supported by Oxford Ionics.

The post Quantum Systems Accelerator focuses on technologies for computing appeared first on Physics World.

https://physicsworld.com/a/quantum-systems-accelerator-focuses-on-technologies-for-computing/
Hamish Johnston

Metallic material breaks 100-year thermal conductivity record

Transition metal nitride conducts heat nearly three times better than copper

The post Metallic material breaks 100-year thermal conductivity record appeared first on Physics World.

A newly identified metallic material that conducts heat nearly three times better than copper could redefine thermal management in electronics. The material, which is known as theta-phase tantalum nitride (θ-TaN), has a thermal conductivity comparable to low-grade diamond, and its discoverers at the University of California Los Angeles (UCLA), US say it breaks a record on heat transport in metals that had held for more than 100 years.

Semiconductors and insulators mainly carry heat via vibrations, or phonons, in their crystalline lattices. A notable example is boron arsenide, a semiconductor that the UCLA researchers previously identified as also having a high thermal conductivity. Conventional metals, in contrast, mainly transport heat via the flow of electrons, which are strongly scattered by lattice vibrations.

Heat transport in θ-TaN combines aspects of both mechanisms. Although the material retains a metal-like electronic structure, study leader Yongjie Hu explains that its heat transport is phonon-dominated. Hu and his UCLA colleagues attribute this behaviour to the material’s unusual crystal structure, which features tantalum atoms interspersed with nitrogen atoms in a hexagonal pattern. Such an arrangement suppresses both electron–phonon and phonon–phonon scattering, they say.

Century-old upper limit for metallic heat transport

Materials with high thermal conductivity are vital in electronic devices because they remove excess heat that would otherwise impair the devices’ performance. Among metals, copper has long been the material of choice for thermal management thanks to its relative abundance and its thermal conductivity of around 400 W m⁻¹ K⁻¹, which is higher than any other pure metal apart from silver.
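A quick Wiedemann–Franz estimate shows why electron-dominated transport tops out near copper’s value, and why a figure like θ-TaN’s points to phonons instead (the copper numbers below are standard textbook values):

```python
# In ordinary metals heat rides on the electrons: kappa_e ~ L0 * sigma * T
L0 = 2.44e-8        # Sommerfeld value of the Lorenz number, W Ohm K^-2
sigma_cu = 5.96e7   # electrical conductivity of copper, S/m
T = 300.0           # room temperature, K

print(f"kappa_e(Cu) ~ {L0 * sigma_cu * T:.0f} W/m/K")   # ~436, near measured ~400
# No plausible conductivity for a nitride gets electrons to ~1100 W/m/K,
# consistent with the phonon-dominated picture described above.
```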

Recent theoretical studies, however, had suggested that some metallic-like materials could break this record. θ-TaN, a metastable transition metal nitride, was among the most promising contenders, but it proved hard to study because high-quality samples were previously unavailable.

Highest thermal conductivity reported for a metallic material to date

Hu and colleagues overcame this problem using a flux-assisted metathesis reaction. This technique removed the need for the high pressures and temperatures required to make pure samples of the material using conventional techniques.

The team’s high-resolution structural measurements revealed that the as-synthesized θ-TaN crystals had smooth, clean surfaces and ranged in size from 10 to 100 μm. The researchers also used a variety of techniques, including electron diffraction, Raman spectroscopy, single-crystal X-ray diffraction, high-resolution transmission electron microscopy and electron energy loss spectroscopy to confirm that the samples contained single crystals.

The researchers then turned their attention to measuring the thermal conductivity of the θ-TaN crystals. They did this using an ultrafast optical pump-probe technique based on time-domain thermoreflectance, a standard approach that had already been used to measure the thermal conductivity of high-thermal-conductivity materials such as diamond, boron phosphide, boron nitride and metals.

Hu and colleagues made their measurements at temperatures between 150 and 600 K. At room temperature, the thermal conductivity of the θ-TaN crystals was 1100 W m⁻¹ K⁻¹. “This represents the highest value reported for any metallic materials to date,” Hu says.

The researchers also found that the thermal conductivity remained uniformly high across an entire crystal. Hu says this reflects the samples’ high crystallinity, and it also confirms that the measured ultrahigh thermal conductivity originates from intrinsic lattice behaviour, in agreement with first-principles predictions.

Another interesting finding is that while θ-TaN has a metallic electronic structure, its thermal conductivity decreased with increasing temperature. This behaviour contrasts with the weak temperature dependence typically observed in conventional metals, in which heat transport is dominated by electrons and is limited by electron-phonon interactions.

Applications in technologies limited by heat

As well as cooling microelectronics, the researchers say the discovery could have applications in other technologies that are increasingly limited by heat. These include AI data centres, aerospace systems and emerging quantum platforms.

The UCLA team, which reports its work in Science, now plans to explore scalable ways of integrating θ-TaN into device-relevant platforms, including thin films and interfaces for microelectronics. They also aim to identify other candidate materials with lattice and electronic dynamics that could allow for similarly efficient heat transport.

The post Metallic material breaks 100-year thermal conductivity record appeared first on Physics World.

https://physicsworld.com/a/metallic-material-breaks-100-year-thermal-conductivity-record/
Isabelle Dumé

Nickel-enhanced biomaterial becomes stronger when wet

A biomaterial that increases its strength when in contact with water could provide a biodegradable alternative to plastics

The post Nickel-enhanced biomaterial becomes stronger when wet appeared first on Physics World.

Synthetic materials such as plastics are designed to be durable and water resistant. But the processing required to achieve these properties results in a lack of biodegradability, leading to an accumulation of plastic pollution that affects both the environment and human health. Researchers at the Institute for Bioengineering of Catalonia (IBEC) are developing a possible replacement for plastics: a novel biomaterial based on chitin, the second most abundant natural polymer on Earth.

“Every year, nature produces on the order of 10¹¹ tonnes of chitin, roughly equivalent to more than three centuries of today’s global plastic production,” says study leader Javier G Fernández. “Chitin and [its derivative] chitosan are the ultimate natural engineering polymers. In nature, variations of this material produce stiff insect wings enabling flight, elastic joints enabling extraordinary jumping in grasshoppers, and armour-like protective exoskeletons in lobsters or clams.”

But while biomaterials provide a more environmentally friendly alternative to conventional plastics, most biological materials weaken when exposed to water. In this latest work, Fernández and first author Akshayakumar Kompa took inspiration from nature and developed a new biomaterial that increases its strength when in contact with water, while maintaining its natural biodegradability.

Metal matters

In the exoskeletons of insects and crustaceans, chitin is secreted in a gel-like form into water and then transitions into a hard structure. Following a chance observation that removing zinc from a sandworm’s fangs caused them to soften in water, Kompa and Fernández investigated whether adding a different transition metal, nickel, to chitosan could have the opposite effect.

By mixing nickel chloride solution (at concentrations from 0.6 to 1.4 M) with dispersions of chitosan extracted from discarded shrimp shells, the researchers entrapped varying amounts of nickel within the chitosan structure. Fourier-transform infrared spectra of the resulting chitosan films revealed the presence of nickel ions, which form weak hydrogen bonds with water molecules and increase the biomaterial’s capacity to bond with water.

“In our films, water molecules form reversible bridges between polymer chains through weak interactions that can rapidly break and reform under load,” Fernández explains. “That fast reconfiguration is what gives the material high strength and toughness under wet conditions: essentially a built-in, stress-activated ‘self-rearrangement’ mechanism. Nickel ions act as stabilizing anchors for these water-mediated bridges, enabling more and longer-range connections and making inter-chain connectivity more robust”.

The nickel-doped chitosan samples had tensile strengths of between 30 and 40 MPa, similar to that of standard plastics. Adding low concentrations of nickel did not significantly impact the mechanical properties of the films. Concentrations of 1 M or more, however, preserved the material’s strength while increasing its toughness (the ability to stretch before breaking) – a key goal in the field of structural materials and a feature unique to biological composites.

Testing a nickel-doped chitosan film

Upon immersion in water, the nickel-doped films exhibited greater tensile strength, increasing from 36.12±2.21 MPa when dry to 53.01±1.68 MPa, moving into the range of higher-performance engineering plastics. In particular, samples created from an optimal 0.8 M nickel concentration almost doubled in strength when wet (and were used for the remainder of the team’s experiments).

Scaling production

The manufacturing process involves an initial immersion in water, followed by drying for 24 h and then re-wetting. During the first immersion, any nickel ions that are not incorporated into the material’s functional bridging network are released into the water, ensuring that nickel is present only where it is structurally useful.

The researchers developed a zero-waste production cycle in which this water is used as a primary component for fabricating the next object. “The expelled nickel is recovered and used to make the next batch of material, so the process operates at essentially 100% nickel utilization across batches,” says Fernández.

Nickel-doped chitosan structures

They used this process to produce various nickel-doped chitosan objects, including watertight containers and a 1 m² film that could support a 20 kg weight after 24 h of water immersion. They also created a 244 × 122 cm film with similar mechanical behaviour to the smaller samples, demonstrating the potential for rapid scaling to ecologically relevant scales. A standard half-life test revealed that after approximately four months buried in garden soil, half of the material had biodegraded.
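
If the biodegradation follows simple first-order kinetics – an assumption, since the article reports only the half-life – the four-month figure fixes a rate constant from which the residual fraction at any later time follows:

import math

t_half = 4.0              # months, from the garden-soil burial test
k = math.log(2) / t_half  # first-order rate constant, per month (assumed kinetics)

for t in (4, 8, 12, 24):
    print(f"{t:>2} months: {math.exp(-k * t):.0%} remaining")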

The researchers suggest that the biomaterial’s first real-world use may be in sectors such as agriculture and fishing that require strong, water-compatible and ultimately biodegradable materials, likely for packaging, coatings and other water-exposed applications. Both nickel and chitosan are already employed within biomedicine, making medicine another possible target, although any new medical product will require additional regulatory and performance validation.

The team is currently setting up a 1000 m2 lab facility in Barcelona, scheduled to open in 2028, for academia–industry collaborations in sustainable bioengineering research. Fernández suggests that we are moving towards a “biomaterial age”, defined by the ability to “control, integrate, and broadly use biomaterials and biological principles within engineering applications”.

“Over the last 20 years, working on bioinspired manufacturing, we have been able to produce the largest bioprinted objects in the world, demonstrated pathways for resource-secure and sustainable production in urban environments, and even explored how these approaches can support interplanetary colonization,” he tells Physics World. “Now we are achieving material properties that were considered out of reach by designing the material to work with its environment, rather than isolating itself from it.”

The researchers report their findings in Nature Communications.

The post Nickel-enhanced biomaterial becomes stronger when wet appeared first on Physics World.

https://physicsworld.com/a/nickel-enhanced-biomaterial-becomes-stronger-when-wet/
Tami Freeman

2D materials help spacecraft electronics resist radiation damage

Transistors based on atomically thin transition-metal dichalcogenides appear particularly robust

The post 2D materials help spacecraft electronics resist radiation damage appeared first on Physics World.

Electronics made from certain atomically thin materials can survive harsh radiation environments up to 100 times longer than traditional silicon-based devices. This finding, which comes from researchers at Fudan University in Shanghai, China, could bring significant benefits for satellites and other spacecraft, which are prone to damage from intense cosmic radiation.

Cosmic radiation consists of a mixture of heavy ions and cosmic rays, which are high-energy protons, electrons and atomic nuclei. The Earth’s magnetic field protects us from 99.9% of this ionizing radiation, and our atmosphere significantly attenuates the rest. Space-based electronics, however, have no such protection, and this radiation can damage or even destroy them.

Adding radiation shielding to spacecraft mitigates these harmful effects, but the extra weight and power consumption increase the spacecraft’s costs. “This conflicts with the requirements of future spacecraft, which call for lightweight and cost-effective architectures,” says team leader Peng Zhou, a physicist in Fudan’s College of Integrated Circuits and Micro-Nano Electronics. “Implementing radiation tolerant electronic circuits is therefore an important challenge and if we can find materials that are intrinsically robust to this radiation, we could incorporate these directly into the design of onboard electronic circuits.”

Promising transition-metal dichalcogenides

Previous research had suggested that 2D materials might fit the bill, with transistors based on transition-metal dichalcogenides appearing particularly promising. Within this family of materials, 2D molybdenum disulphide (MoS2) proved especially robust to irradiation-induced defects, and Zhou points out that its electrical, mechanical and thermal properties are also highly attractive for space applications.

The studies that revealed these advantages were, however, largely limited to simulations and ground-based experiments. This meant they were unable to fully replicate the complex and dynamic radiation fields such circuits would encounter under real space conditions.

Better than NMOS transistors

In their work, Zhou and colleagues set out to fill this gap. After growing monolayer 2D MoS2 using chemical vapour deposition, they used this material to fabricate field-effect transistors. They then exposed these transistors to 10 Mrad of gamma-ray irradiation and looked for changes to their structure using several techniques, including cross-sectional transmission electron microscopy (TEM) imaging and corresponding energy-dispersive spectroscopy (EDS) mapping.

These measurements indicated that the 2D MoS2 in the transistors was about 0.7 nm thick (typical for a monolayer structure) and showed no obvious signs of defects or damage. Subsequent Raman characterization on five sites within the MoS2 film confirmed the devices’ structural integrity.

The researchers then turned their attention to the transistors’ electrical properties. They found that even after irradiation, the transistors’ on-off ratios remained ultra-high, at about 10⁸. They note that this is considerably better than that of similarly sized silicon N-channel metal–oxide–semiconductor (NMOS) transistors fabricated through a CMOS process, for which the on-off ratio decreased by a factor of more than 4000 after the same 10 Mrad irradiation.

The team also found that the MoS2 system consumes only about 49.9 mW per channel, making its power requirement at least five times lower than the NMOS one. This is important owing to the strict energy limitations and stringent power budgets of spacecraft, Zhou says.

Surviving the space environment

In their final experiment, the researchers tested their MoS2 structures on a spacecraft orbiting at an altitude of 517 km, similar to the low-Earth orbit of many communication satellites. These tests showed that the bit-error rate in data transmitted from the structures remained below 10⁻⁸ even after nine months of operation, which Zhou says indicates significant radiation tolerance and long-term stability. Indeed, based on test data, electronic devices made from these 2D materials could operate for 271 years in geosynchronous orbit – 100 times longer than conventional silicon electronics.
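
The lifetime figure can be read as a simple dose budget, lifetime = tolerated dose / orbital dose rate. The arithmetic below backs out the dose rate implied by the quoted numbers; it is an inference from the reported figures, not a value given by the team:

tolerated_dose_Mrad = 10.0  # dose survived in the gamma-ray ground test
lifetime_years = 271.0      # reported geosynchronous-orbit lifetime

implied_rate = tolerated_dose_Mrad / lifetime_years  # Mrad per year, inferred
print(f"implied GEO dose rate: {implied_rate * 1000:.0f} krad/yr")
print(f"silicon at 1/100 the lifetime: {lifetime_years / 100:.1f} yr")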

“The discovery of intrinsic radiation tolerance in atomically thin 2D materials, and the successful on-orbit validation of the atomic-layer semiconductor-based spaceborne radio-frequency communication system have opened a uniquely promising pathway for space electronics leveraging 2D materials,” Zhou says. “And their exceptionally long operational lifetimes and ultra-low power consumption establishes the unique competitiveness of 2D electronic systems in frontier space missions, such as deep-space exploration, high-Earth-orbit satellites and even interplanetary communications.”

The researchers are now working to optimize these structures by employing advanced fabrication processes and circuit designs. Their goal is to improve certain key performance parameters of spaceborne radio-frequency chips employed in inter-satellite and satellite-to-ground communications. “We also plan to develop an atomic-layer semiconductor-based radiation-tolerant computing platform, providing core technological support for future orbital data centres, highly autonomous satellites and deep-space probes,” Zhou tells Physics World.

The researchers describe their work in Nature.

The post 2D materials help spacecraft electronics resist radiation damage appeared first on Physics World.

https://physicsworld.com/a/2d-materials-help-spacecraft-electronics-resist-radiation-damage/
Isabelle Dumé

Rethinking how quantum phases change

A new framework explains direct transitions between ordered states, offering insights into real quantum materials

The post Rethinking how quantum phases change appeared first on Physics World.

In this work, the researchers theoretically explore how quantum materials can transition continuously from one ordered state to another, for example, from a magnetic phase to a phase with crystalline or orientational order. Traditionally, such order‑to‑order transitions were thought to require fractionalisation, where particles effectively split into exotic components. Here, the team identifies a new route that avoids this complexity entirely.

Their mechanism relies on two renormalisation‑group fixed points in the system colliding and annihilating, which reshapes the flow of the system and removes the usual disordered phase. A separate critical fixed point, unaffected by this collision, then becomes the new quantum critical point linking the two ordered phases. This allows for a continuous, seamless transition without invoking fractionalised quasiparticles.

The authors show that this behaviour could occur in several real or realistic systems, including rare‑earth pyrochlore iridates, kagome quantum magnets, quantum impurity models and even certain versions of quantum chromodynamics. A striking prediction of the mechanism is a strong asymmetry in energy scales on the two sides of the transition, such as a much lower critical temperature and a smaller order parameter where the order emerges from fixed‑point annihilation.

This work reveals a previously unrecognised kind of quantum phase transition, expands the landscape beyond the usual Landau-Ginzburg-Wilson framework, which is the standard theory for phase transitions, and offers new ways to understand and test the behaviour of complex quantum systems.

Read the full article

Continuous order-to-order quantum phase transitions from fixed-point annihilation

David J Moser and Lukas Janssen 2025 Rep. Prog. Phys. 88 098001

Do you want to learn more about this topic?

Dynamical quantum phase transitions: a review by Markus Heyl (2018)

The post Rethinking how quantum phases change appeared first on Physics World.

https://physicsworld.com/a/rethinking-how-quantum-phases-change/
Lorna Brigham

How a single parameter reveals the hidden memory of glass

Glass may look just like a normal solid, but at the microscopic level it behaves in surprisingly complex ways

The post How a single parameter reveals the hidden memory of glass appeared first on Physics World.

Unlike crystals, whose atoms arrange themselves in tidy, repeating patterns, glass is a non‑equilibrium material. A glass is formed when a liquid is cooled so quickly that its atoms never settle into a regular pattern, instead forming a disordered, unstructured arrangement.

In this process, as temperature decreases, atoms move more and more slowly. Near a certain temperature – the glass transition temperature – the atoms move so slowly that the material effectively stops behaving like a liquid and becomes a glass.

This isn’t a sharp, well‑defined transition like water turning to ice. Instead, it’s a gradual slowdown: the structure appears solid long before the atoms would theoretically cease to rearrange.

This slowdown can be extrapolated to predict the temperature at which the material’s internal rearrangement would take infinitely long. This hypothetical point is known as the ideal glass transition. It cannot be reached in practice, but it provides an important reference for understanding how glasses behave.
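
The standard way to perform this extrapolation is a Vogel–Fulcher–Tammann (VFT) fit, τ(T) = τ₀ exp[B/(T − T₀)], whose divergence temperature T₀ provides the estimate of the ideal glass transition. A minimal sketch using synthetic data (all numbers illustrative):

import numpy as np
from scipy.optimize import curve_fit

def log_tau_vft(T, log_tau0, B, T0):
    # VFT law tau = tau0 * exp(B / (T - T0)), fitted in log space for stability
    return log_tau0 + B / (T - T0)

# Synthetic relaxation times for a hypothetical glass-former (T in kelvin)
T = np.array([300.0, 285.0, 270.0, 255.0, 245.0])
log_tau = log_tau_vft(T, np.log(1e-13), 2000.0, 200.0)  # fake "data" to recover

p, _ = curve_fit(log_tau_vft, T, log_tau, p0=(-25.0, 1500.0, 180.0))
print(f"extrapolated ideal glass transition: T0 ~ {p[2]:.0f} K")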

Despite years of research, it’s still not clear exactly how glass properties depend on how it was made – how fast it was cooled, how long it aged, or how it was mechanically disturbed. Each preparation route seems to give slightly different behaviour.

For decades, scientists have struggled to find a single measure that captures all these effects. How do you describe, in one number, how disordered a glass is?

Recent research has emerged that provides a compelling answer: a configurational distance metric. This is a way of measuring how far the internal structure of a piece of glass is from a well‑defined reference state.

When the researchers used this metric, they could neatly collapse data from many different experiments onto a single curve. In other words, they found a single physical parameter controlling the behaviour.

This worked across a wide range of conditions: glasses cooled at different rates, allowed to age for different times, or tested under different strengths and durations of mechanical probing.

As long as the experiments were conducted above the ideal glass transition temperature, the metric provided a unified description of how the material dissipates energy.

This insight is significant. It suggests that even though glass never fully reaches equilibrium, its behaviour is still governed by how close it is to this idealised transition point. In other words, the concept of the kinetic ideal glass transition isn’t just theoretical, it leaves a measurable imprint on real materials.

This research offers a powerful new way to understand and predict the mechanical behaviour of glasses in everyday technologies, from smartphone screens to industrial coatings.

Read the full article

Order parameter for non-equilibrium dissipation and ideal glass

Junying Jiang, Liang Gao and Hai-Bin Yu 2025 Rep. Prog. Phys. 88 118002

The post How a single parameter reveals the hidden memory of glass appeared first on Physics World.

https://physicsworld.com/a/how-a-single-parameter-reveals-the-hidden-memory-of-glass/
Paul Mabey

Challenges in CO2 reduction selectivity measurements by hydrodynamic methods

Does electrolyte purity really matter in CO2 electroreduction research? Quite a lot, if you’re doing rotating ring-disk electrode studies. Learn more in this webinar

The post Challenges in CO2 reduction selectivity measurements by hydrodynamic methods appeared first on Physics World.

Electrochemical CO2 reduction converts CO2 to higher-value products using an electrocatalyst and could pave the way for electrification of the chemical industry. A key challenge for CO2 reduction is its poor selectivity (faradaic efficiency) due to competition with the hydrogen evolution reaction in aqueous electrolytes. Rotating ring-disk electrode (RRDE) experiments have become a popular method to quantify faradaic efficiencies, especially for gold electrocatalysts. However, such measurements suffer from poor inter-laboratory reproducibility. This work identifies the causes of variability in RRDE selectivity measurements by comparing protocols with different electrochemical methods, reagent purities, and glassware cleaning procedures. Electroplating of electrolyte impurities onto the disk and ring surfaces was identified as a major contributor to electrocatalyst deactivation. These results highlight the need for standardized and cross-laboratory validation of CO2RR selectivity measurements using RRDE. Researchers implementing this technique for CO2RR selectivity measurements need to be cognizant of electrode deactivation and its potential impacts on faradaic efficiencies and the overall conclusions of their work.

Maria Kelly is a Jill Hruby Postdoctoral Fellow at Sandia National Laboratories. She earned her PhD in Professor Wilson Smith’s research group at the University of Colorado Boulder and the National Renewable Energy Laboratory. Her doctoral work focused on characterization of carbon dioxide conversion interfaces using analytical electrochemical and in situ scanning probe methods. Her research interests broadly encompass advancing experimental measurement techniques to investigate the near-electrode environment during electrochemical reactions.

The post Challenges in CO2 reduction selectivity measurements by hydrodynamic methods appeared first on Physics World.

https://physicsworld.com/a/challenges-in-co2-reduction-selectivity-measurements-by-hydrodynamic-methods/
No Author

Time crystal emerges in acoustic tweezers

System could shed light on emergent periodic phenomena in biological systems

The post Time crystal emerges in acoustic tweezers appeared first on Physics World.

Photograph of a particle being held in acoustic tweezers

Pairs of nonidentical particles trapped in adjacent nodes of a standing wave can harvest energy from the wave and spontaneously begin to oscillate, researchers in the US have shown. What is more, these interactions appear to violate Newton’s third law. The researchers believe their system, which is a simple example of a classical time crystal, could offer an easy way to measure mass with high precision. It might also, they hope, provide insights into emergent periodic phenomena in nature.

Acoustic tweezers use sound waves to create a potential-energy well that can hold an object in place – they are the acoustic analogue of optical tweezers. In the case of a single trapped object, this can be treated as a dissipationless process, in which the particle neither gains nor loses energy from the trapping wave.

In the new work, David Grier of New York University, together with graduate student Mia Morrell and undergraduate Leela Elliott, created an ultrasound standing wave in a cavity and levitated two objects (beads) in adjacent nodes.

“Ordinarily, you’d say ‘OK, they’re just going to sit there quietly and do nothing’,” says Grier. “And if the particles are identical, that’s exactly what’s going to happen.”

Breaking the law

If the two particles differ in size, material or any other property that affects acoustic scattering, they can spontaneously begin to oscillate. Even more surprisingly, this motion appears unconstrained by the requirement that momentum be conserved – Newton’s third law.

“Who ordered that?”, muses Grier.

The periodic oscillation, which has a frequency set only by the properties of the particles and is independent of the trapping frequency, forms a very simple type of emergent active matter called a time crystal.

The trio analysed the behaviour of adjacent particles trapped in this manner using the laws of classical mechanics, and discovered an important subtlety had been missed. When identical particles are trapped in nearby nodes, they interact by scattering waves, but the interactions are equal and opposite and therefore cancel.

“The part that had never been worked out before in detail is what happens when you have two particles with different properties interacting with each other,” says Grier. “And if you put in the hard work, which Mia and Leela did, what you find is that to the first approximation there’s nothing out of the ordinary.” At the second order, however, the expansion contains a nonreciprocal term. “That opens up all sorts of opportunities for new physics, and one of the most striking and surprising outcomes is this time crystal.”

Stealing energy

This nonreciprocity arises because, if one particle is more strongly affected by the mutual scattering than the other, it can be pushed farther away from the node of the standing wave and pick up potential energy, which can then be transferred through scattering to the other particle. “The unbalanced forces give the levitated particles the opportunity to steal some energy from the wave that they ordinarily wouldn’t have had access to,” explains Grier. The wave also carries away the missing momentum, resolving the apparent violation of Newton’s third law.

If it were acting in isolation, this energy input would make the oscillations unstable and throw the particles out of the nodes. However, energy is removed by viscosity: “If everything is absolutely right, the rate at which the particles consume energy exactly balances the rate at which they lose energy to viscous drag, and if you get that perfect, delicious balance, then the particles can jiggle in place forever, taking the fuel from the wave and dumping it back into the system as heat.” This can be stable indefinitely.
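
That “perfect, delicious balance” is the classic recipe for a stable limit cycle. As a toy analogy (not the team’s equations of motion), a Rayleigh-type oscillator that gains energy at low speed and dissipates it at high speed settles onto a fixed amplitude from any small initial perturbation:

# Toy gain-loss oscillator: the drive term pumps energy in at small velocity,
# the drag term removes it at large velocity, so the motion saturates at a
# fixed amplitude (a minimal classical limit cycle).
omega, eps, dt = 1.0, 0.2, 1e-3
x, v, amp = 0.01, 0.0, 0.0
steps = int(200 / dt)
for step in range(steps):
    a = -omega**2 * x + eps * (1 - v**2) * v  # Rayleigh-type force
    v += a * dt                               # semi-implicit Euler step
    x += v * dt
    if step > int(150 / dt):                  # record the late-time amplitude
        amp = max(amp, abs(x))
print(f"steady amplitude ~ {amp:.2f}")        # saturates near 1.15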

The researchers have filed a patent application for the use of the system to measure particle masses with microgram-scale precision from the oscillation frequency. Beyond this, they hope the phenomenon will offer insights into emergent periodic phenomena across timescales in nature: “Your neurons fire at kilohertz, but the pacemaker in your heart hopefully goes about once per second,” explains Grier.

The research is described in Physical Review Letters.

“When I read this I got somehow surprised,” says Glauber Silva of the Federal University of Alagoas in Brazil. “The whole thing of how to get energy from the surrounding fields and produce motion of the coupled particles is something that the theoretical framework of this field didn’t spot before.”

“I’ve done some work in the past, both in simulations and in optical systems that are analogous to this, where similar things happen, but not nearly as well controlled as in this particular experiment,” says Dustin Kleckner of University of California, Merced. He believes this will open up a variety of further questions: “What happens if you have more than two? What are the rules? How do we understand what’s going on and can we do more interesting things with it?” he says. 

The post Time crystal emerges in acoustic tweezers appeared first on Physics World.

https://physicsworld.com/a/time-crystal-emerges-in-acoustic-tweezers/
No Author

Giant barocaloric cooling effect offers a new route to refrigeration

Environmentally friendly dissolution-based method reduces temperature of water by nearly 27 K in just 20 seconds

The post Giant barocaloric cooling effect offers a new route to refrigeration appeared first on Physics World.

A new cooling technique based on the principles of dissolution barocaloric cooling could provide an environmentally friendly alternative to existing refrigeration methods. With a cooling capacity of 67 J/g and an efficiency of nearly 77%, the method developed by researchers from the Institute of Metal Research of the Chinese Academy of Sciences can reduce the temperature of a sample by 27 K in just 20 seconds – far more than is possible with standard barocaloric materials.

Traditional refrigeration relies on vapour-compression cooling. This technology has been around since the 19th century, and it relies on a fluid changing phase. Typically, an expansion valve allows a liquid refrigerant to evaporate into a gas, absorbing heat from its surroundings as it does so. A compressor then forces the refrigerant back into the liquid state, releasing the heat.

While this process is effective, it consumes a lot of electricity, and there is little room left for improvement: after more than a century of refinement, the vapour-compression cycle is fast approaching the maximum efficiency set by the Carnot limit. The refrigerants are also often toxic, contributing to environmental damage.

In recent years, researchers have been exploring caloric cooling as a possible alternative. Caloric cooling works by controlling the entropy, or disorder, within a material using magnetic or electric fields, mechanical forces or applied pressure. The last option, known as barocaloric cooling, is in some ways the most promising. However, most of the known barocaloric materials are solids, which suffer from poor heat transfer efficiency and limited cooling capacity. Transferring heat in and out of such materials is therefore slow.

A liquid system

The new technique overcomes this limitation thanks to a fundamental thermodynamic process called endothermic dissolution. The principle of endothermic dissolution is that when a salt dissolves in a solvent, some of the bonds in the solvent break. Breaking those bonds takes energy, and so the solvent cools down – sometimes dramatically.

In the new work, researchers led by metallurgist and materials scientist Bing Li discovered a way to reverse this process by applying pressure. They began by dissolving a salt, ammonium thiocyanate (NH4SCN), in water. When they applied pressure to the resulting solution, the salt precipitated out (an exothermic process) in line with Le Chatelier’s principle, which states that when a system in chemical equilibrium is disturbed, it will adjust itself to a new equilibrium by counteracting as far as possible the effect of the change.

When they then released the pressure, the salt re-dissolved almost immediately. This highly endothermic process absorbs a massive amount of heat, causing the temperature of the solution to drop by nearly 27 K at room temperature, and by up to 54 K at higher temperatures.
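
The size of the drop can be sanity-checked with a simple energy balance, ΔT = nΔH_sol/(mc_p), where n is the amount of salt that re-dissolves, ΔH_sol its enthalpy of solution and mc_p the heat capacity of the solution. All numbers below are illustrative assumptions, not values from the paper:

dH_sol = 23e3  # J/mol, assumed endothermic enthalpy of solution
n = 0.5        # mol of salt re-dissolving on pressure release (assumed)
m = 0.1        # kg of solution (assumed)
cp = 3500.0    # J/(kg K), assumed for a concentrated aqueous solution

dT = n * dH_sol / (m * cp)
print(f"temperature drop ~ {dT:.0f} K")  # ~33 K, the same ballpark as reported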

A chaotropic salt

Li and colleagues did not choose NH4SCN by chance. The material is a chaotropic agent, meaning that it disrupts hydrogen bonding, and it is highly soluble in water, which helps to maximize the amount present in the solution during that part of the cooling cycle. It also has a large enthalpy of solution, meaning that its temperature drops dramatically when it dissolves. Finally, and most importantly, it is highly sensitive to applied pressures in the range of hundreds of megapascals, which is within the capacity of conventional hydraulic systems.

Li says that he and his colleagues’ approach, which they detail in Nature, could encourage other researchers to find similar techniques that likewise do not rely on phase transitions. As for applications, he notes that because aqueous NH4SCN barocaloric cooling works well at high temperatures, it could be suited to the demanding thermal management requirements of AI data centres. Other possibilities include air conditioning in domestic and industrial vehicles and buildings.

There are, however, some issues that need to be resolved before such cooling systems find their way onto the market. NH4SCN and similar salts are corrosive, which could damage refrigerator components. The high pressures required in the current system could also prove damaging over the long run, Li adds.

To address these and other drawbacks, the researchers now plan to study other such near-saturated solutions at the atomic level, with a particular focus on how they respond to pressure. “Such fundamental studies are vital if we are to optimize the performance of these fluids as refrigerants,” Li tells Physics World.

The post Giant barocaloric cooling effect offers a new route to refrigeration appeared first on Physics World.

https://physicsworld.com/a/giant-barocaloric-cooling-effect-offers-a-new-route-to-refrigeration/
Isabelle Dumé

The hidden footprint of hydrogen

Leaked hydrogen boosts methane’s lifetime, yet its overall impact remains small compared to other emissions

The post The hidden footprint of hydrogen appeared first on Physics World.

Hydrogen is considered a clean fuel because it produces water rather than carbon dioxide when burned, and it is seen as a promising route toward lower emissions. It is especially valuable for replacing fossil fuels in industrial processes that require extremely high temperatures and are difficult to electrify. Although hydrogen itself is not a greenhouse gas like carbon dioxide, methane, or nitrous oxide (gases that trap heat in the Earth’s atmosphere), it can still indirectly contribute to warming. Normally, hydroxyl radicals, which are highly reactive atmospheric molecules made of one oxygen and one hydrogen atom with an unpaired electron, break down methane into carbon dioxide and water. But when hydroxyl radicals react with hydrogen instead, fewer radicals are available to remove methane, allowing methane to persist longer in the atmosphere and increasing its warming effect.
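
The essence of the effect can be caricatured with a one-box model in which the steady-state methane burden equals emissions times lifetime, so a leak-driven lifetime increase raises the burden in proportion. The numbers below are round illustrative values, not the study’s:

E = 380.0   # Tg CH4 per year, assumed global emissions
tau0 = 9.0  # years, assumed baseline methane lifetime

for boost in (0.00, 0.02, 0.05):  # fractional lifetime increase from H2 leaks
    tau = tau0 * (1 + boost)
    print(f"+{boost:.0%} lifetime -> steady-state burden {E * tau:.0f} Tg")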

This study examines how hydrogen leakage in hydrogen‑based energy systems could influence the planet. The researchers analysed 23 different U.S. future scenarios, including some that eliminate fossil fuels entirely. They estimated how much hydrogen might leak in each scenario, compared those leaks to the remaining carbon dioxide and methane emissions, and calculated how much additional emissions reductions and/or carbon removal would be needed to offset the warming from hydrogen under low, medium, and high leak rates, and over both short‑term and long‑term warming timescales.

They found that although hydrogen leaks do contribute to warming, their impact is much smaller than the warming from the remaining carbon dioxide and methane in all scenarios. Hydrogen’s warming effect appears much larger over a 20 year period because its short‑lived chemical interactions amplify methane and ozone quickly, even though its long‑term impact remains relatively modest. Only small increases in carbon dioxide removal or small reductions in other emissions are needed to offset the warming caused by hydrogen leaks. However, because estimates of hydrogen leakage rates vary widely in the scientific literature, improved measurement and monitoring are essential.

Read the full article

Estimating the climate impacts of hydrogen emissions in a net-zero US economy

Ansh N Nasta et al 2025 Prog. Energy 7 045001

Do you want to learn more about this topic?

Hydrogen storage in liquid hydrogen carriers: recent activities and new trends Tolga Han Ulucan et al. (2023)

The post The hidden footprint of hydrogen appeared first on Physics World.

https://physicsworld.com/a/the-hidden-footprint-of-hydrogen/
Lorna Brigham

Transfer learning could help muon tomography identify illicit nuclear materials

Hidden coated materials could be detected using new technique

The post Transfer learning could help muon tomography identify illicit nuclear materials appeared first on Physics World.

Machine learning could help us use cosmic muons to peer inside large objects such as nuclear reactors. Developed by researchers in China, the technique is capable of identifying target materials such as uranium even if they are coated with other materials.

The muon is a subatomic particle that is essentially a heavier version of the electron. Huge numbers of cosmic muons are created in Earth’s atmosphere when cosmic rays collide with gas molecules. Thousands of cosmic muons per second rain down on every square metre of Earth’s surface and these particles can penetrate tens to hundreds of metres through solid materials.

As a result, cosmic muons are used to peer inside large objects such as nuclear reactors, volcanoes and ancient pyramids. This involves placing detectors next to an object and detecting muons that have passed through or scattered within the object. Detector data are then processed using a tomography algorithm to create a 3D image of the object’s interior.

Illicit nuclear materials

Muons tend to scatter more from high-atomic-number materials, so the technique is particularly sensitive to the presence of materials such as uranium. As a result, it has been used to create systems for the detection of illicit nuclear materials hidden in freight containers.

Muon tomography is relatively straightforward when the object is of simple construction – such as a pyramid built of stone and containing voids. Producing useful images of more complex targets – such as a freight container full of unknown objects – is much more difficult. The conventional computational approach is to calculate the muon-scattering physics of many different materials and combine these data with muon-tracking algorithms. This, however, tends to require huge computational resources.

Supervised machine learning has been used to reduce the computational overhead, but this requires prior knowledge of the target materials – limiting efficacy when imaging unknown and concealed materials. What is more, many materials in complex objects are coated with other materials and these coatings can affect muon scattering.

Now, Liangwen Chen at the Institute of Modern Physics of the Chinese Academy of Sciences and colleagues have used a technique called transfer learning to improve cosmic muon tomography of objects that contain coated materials. The idea of transfer learning is to begin with knowledge of the muon-scattering parameters of bare, uncoated materials and use machine learning to predict the parameters of coated materials. Chen and colleagues believe that this is the first application of transfer learning to muon tomography.

Monte Carlo simulations

The team began by creating a database describing how cosmic muons interact with representative materials with a wide range of atomic numbers. This was done by using Geant4 to do Monte Carlo simulations of how muons interact as they pass through materials. Geant4 is the most recent incarnation of the GEANT series of computer simulations, which have been used for over 50 years to design particle detectors and interpret the data that they produce.

Chen and colleagues used Geant4 to calculate how muons are scattered within nine materials ranging from magnesium (atomic number 12) to uranium (atomic number 92). These included common elements such as aluminium, copper and iron. The geometry of the scattering involves incoming cosmic muons with energies of 1 GeV and incident angles that are typical of cosmic muons. After scattering from a material target, the simulation assumes that the muons travel through two successive detectors, which measure the scattering angles. Data were generated for bare targets of the nine materials, as well as the nine materials coated with aluminium and polyethylene. Each simulation involved 500,000 muons passing through a target.

These data were then sampled using an inverse cumulative distribution function, as well as integration and interpolation. This is done to convert the data to a form that is optimal for training a neural network.
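
Inverse-CDF sampling is a standard trick for turning binned simulation output into training samples: integrate the histogram into a cumulative distribution, then invert it by interpolation. A minimal sketch (an illustration of the technique, not the team’s code):

import numpy as np

def sample_from_hist(bin_edges, counts, n_samples, rng=np.random.default_rng(0)):
    # Integrate the histogram to a CDF, then invert it by interpolation
    cdf = np.concatenate([[0.0], np.cumsum(counts, dtype=float)])
    cdf /= cdf[-1]                       # normalize to [0, 1]
    u = rng.uniform(size=n_samples)      # uniform deviates
    return np.interp(u, cdf, bin_edges)  # inverse CDF via interpolation

# Mock Geant4-style tallies of muon scattering angles (mrad)
edges = np.linspace(0.0, 50.0, 26)                # 25 bins
counts = np.exp(-0.5 * (edges[:-1] / 10.0) ** 2)  # stand-in distribution
angles = sample_from_hist(edges, counts, 10_000)
print(angles.mean(), angles.std())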

To use these data, the team created two lightweight neural-network frameworks for transfer learning: one based on fine tuning; and the other a domain-adversarial neural network. According to the team, both frameworks were able to identify correlations between muon scattering-angle distributions and different target materials. Crucially, this was the case even when the target materials were coated in aluminium or polyethylene.
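
The fine-tuning route follows the textbook transfer-learning recipe: pretrain on bare-target data, freeze the feature-extracting layers, then retrain only the final layers on coated-target data. A schematic PyTorch sketch – the architecture and dimensions here are hypothetical, not those used in the paper:

import torch
import torch.nn as nn

model = nn.Sequential(              # toy classifier over angle histograms
    nn.Linear(64, 128), nn.ReLU(),  # 64-bin scattering-angle histogram in
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 9),              # 9 candidate materials (Mg ... U)
)
# ... pretrain `model` on bare-target simulations here ...

for p in model[:4].parameters():    # freeze the feature-extracting layers
    p.requires_grad = False

head = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(head, lr=1e-4)  # fine-tune only the head on coated data
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 64)             # stand-in batch of coated-target histograms
y = torch.randint(0, 9, (32,))      # stand-in material labels
opt.zero_grad()
loss_fn(model(x), y).backward()
opt.step()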

Chen explains, “Transfer learning allows us to preserve the fundamental physical characteristics of muon scattering while efficiently adapting to unknown environments under shielding”.

Chen and colleagues are now trying to apply their process to more complicated scattering geometries. They also plan to include detector effects and targets made of several materials.

“By integrating simulation, physics, and data-driven learning, this research opens new pathways for applying artificial intelligence to nuclear science and security technologies,” says Chen.

The research is described in Nuclear Science and Techniques.

The post Transfer learning could help muon tomography identify illicit nuclear materials appeared first on Physics World.

https://physicsworld.com/a/transfer-learning-could-help-muon-tomography-identify-illicit-nuclear-materials/
Hamish Johnston

Ask me anything: Katie Perry – ‘I’d tell my younger self to network like crazy’

Katie Perry is chief executive of the Daphne Jackson Trust, which helps people who’ve had a career break

The post Ask me anything: Katie Perry – ‘I’d tell my younger self to network like crazy’ appeared first on Physics World.

Katie Perry studied physics at the University of Surrey in the UK, staying on there to do a PhD. While at Surrey, she worked with the nuclear physicist Daphne Jackson, who was the first female physics professor in the UK. Perry later worked in science communication – both as a science writer and in public relations.

She is currently chief executive of the Daphne Jackson Trust – a charity that supports returners to research careers after a break of at least two years for family, caring or health reasons. It offers fellowships to support people to overcome the challenges of returning, ensuring that their skills, talent, training and career promise are not lost.

What skills do you use every day in your job?

One of the most important skills is multitasking and working in an agile and flexible way. I’m often travelling to meetings, conferences and other events so I have to work wherever I am, whether it’s on a train, in a hotel or at the office. How I work reminds me of a moment I had towards the end of my physics degree when suddenly everything I’d been learning seemed to fit together; I could see both the detail and the bigger picture. It’s the same now. I have to switch quickly from one project or task to another, while keeping oversight of the overall direction and operation of the charity.

I am a strong advocate for part time and flexible working, not just for me, but for all my staff and the Daphne Jackson fellows. As a manager, a key skill is to see the person and their value – not just the hours they are working. Communication and networking skills are also vital as much of my role involves developing collaborations and working with stakeholders. I could be meeting a university vice chancellor, attending a networking reception, talking to our fellows or ensuring the trust complies with charity governance – all in one day.

What do you like best and least about your job?

I love my current role, and at the risk of sounding a little cheesy, it’s because of the trust’s amazing staff and the inspiring returners we support. The fact that I knew Daphne Jackson means that leading the organization is personal to me. I’m always blown away by how inspirational, dedicated, motivated and talented our fellows are and I love supporting them to return to successful research careers. It’s a privilege to lead the charity, helping to understand the challenges and barriers that returners face – and finding ways to overcome them.

Leading a small charity requires a broad set of skills. I enjoy the variety but it’s a challenge because you’re not so much a “chief executive officer” as a “chief everything officer”. I don’t have huge teams of people to help me with, say, human resources, finance or health and safety, which makes it a struggle to do them as well as I’d like. It’s therefore important to have a good work-life balance, which is why I recently took up golf. I’ve yet to have a work meeting while out practising my swing, but one day my diary might say I’m “on a course”!

What do you know today that you wish you knew when you were starting out in your career?

If I could go back in time, I’d tell myself – like I now tell my daughter – that it’s fine not to have a defined career path or plan. Sure, it helps to have an idea of what you want to do, but you have to live and work a little to discover what you like and – more importantly – don’t like. Careers these days are highly non-linear. Unexpected life events happen so you have to adapt, just as our Daphne Jackson fellows have done.

If someone had said to me in my 20s, when I was planning a career in science communication, that I’d be a charity chief executive I wouldn’t have believed them. But here I am running a charity founded in memory of the physicist who was such a great mentor to me during my PhD. When one door closes, a window often opens – so don’t be afraid to set off in a new direction. It can be scary, but it’s often worth the effort.

I’d also tell my younger self to network like crazy. So many opportunities have opened up because I love speaking to people. You never know who you might meet at events or what making new connections can lead to. Finally, I wish I’d known that “impostor syndrome” will always be with you – and that it’s okay to feel that way provided you recognize it and manage it. Chances are, you may never defeat it completely.

The post Ask me anything: Katie Perry – ‘I’d tell my younger self to network like crazy’ appeared first on Physics World.

https://physicsworld.com/a/ask-me-anything-katie-perry-id-tell-my-younger-self-to-network-like-crazy/
Matin Durrani

Quantum scientists release ‘manifesto’ opposing the militarization of quantum research

More than 250 quantum scientists have signed the quantum scientists for disarmament manifesto

The post Quantum scientists release ‘manifesto’ opposing the militarization of quantum research appeared first on Physics World.

More than 250 quantum scientists have signed a “manifesto” opposing the use of quantum research for military purposes. The statement – quantum scientists for disarmament – expresses a “deep concern” about the current geopolitical situation and “categorically rejects” the militarization of quantum research or its use in population control and surveillance. The signatories now call for an open debate about the ethical implications of quantum research.

While quantum science has the potential to improve many different areas – from sensors and medicine to computing – some are concerned about its applications for military purposes. These include quantum key distribution and cryptographic networks for communication as well as quantum clocks and sensing for military navigation and positioning.

Marco Cattaneo from the University of Helsinki in Finland, who co-authored the manifesto, says that even the potential applications of quantum technologies in warfare can be used to militarize universities and research agendas, which he says is already happening. He notes it is not unusual for scientists to openly discuss military applications at conferences or to include such details in scientific papers.

“We are already witnessing restrictions on research collaborations with fellow quantum scientists from countries that are geopolitically opposed or ambiguous with respect to the European Union, such as Russia or China,” says Cattaneo. “When talking with our non-European colleagues, we also realized that these concerns are global and multifaceted.”

Long-term aims

The idea for a manifesto originated during a quantum-information workshop that was held in Benasque in Spain between June and July 2025.

“During a session on science policy, we realized that many of us shared the same concerns about the growing militarization of quantum science and academia,” Cattaneo recalls. “As physicists, we have a strong – and terrible – historical example that can guide our actions: the development of nuclear weapons, and the way the physics community organized to oppose them and to push for their control and abolition.”

Cattaneo says that the first goal of the manifesto is to address the militarization of quantum research, which he calls “the elephant in the room”. The document also aims to raise awareness and open a debate within the community and create a forum where concerns can be shared.

“A longer-term goal is to prevent, or at least to limit and critically address, research on quantum technologies for military purposes,” says Cattaneo. He notes that “one concrete proposal” is to push public universities and research institutes to publish a database of all projects with military goals or military funding, which, he says, “would be a major step forward.”

Cattaneo claims the group is “not naïve” and understands that stopping the technology’s military application completely will not be possible. “Even if military uses of some quantum technologies cannot be completely stopped, we can still advocate for excluding them from public universities, for abolishing classified quantum research in public research institutions, and for creating associations and committees that review and limit the militarization of quantum technologies,” he adds.

The post Quantum scientists release ‘manifesto’ opposing the militarization of quantum research appeared first on Physics World.

https://physicsworld.com/a/quantum-scientists-release-manifesto-opposing-the-militarization-of-quantum-research/
Michael Banks

India announces three new telescopes in the Himalayan desert

The telescopes in Ladakh would significantly improve global coverage of transient and variable phenomena

The post India announces three new telescopes in the Himalayan desert appeared first on Physics World.

India has unveiled plans to build two new optical-infrared telescopes and a dedicated solar telescope in the Himalayan desert region of Ladakh. The three new facilities, expected to cost INR 35bn (about £284m), were announced by the Indian finance minister Nirmala Sitharaman on 1 February.

First up is a 3.7 m optical-infrared telescope, which is expected to come online by 2030. It will be built near the existing 2 m Himalayan Chandra Telescope (HCT) at Hanle, about 4500 m above sea level. Astronomers use the HCT for a wide range of investigations, including stellar evolution, galaxy spectroscopy, exoplanet atmospheres and time-domain studies of supernovae, variable stars and active galactic nuclei.

“The arid and high-altitude Ladakh desert is firmly established as among the world’s most attractive sites for multiwavelength astronomy,” Annapurni Subramaniam, director of the Indian Institute of Astrophysics (IIA) in Bangalore, told Physics World. “HCT has demonstrated both site quality and opportunities for sustained and competitive science from this difficult location.”

The 3.7 m telescope is a stepping stone towards a proposed 13.7 m National Large Optical-Infrared Telescope (NLOT), which is expected to open in 2038. “NLOT is intended to address contemporary astronomy goals, working in synergy with major domestic and international facilities,” says Maheswar Gopinathan, a scientist at the IIA, which is leading all three projects.

Gopinathan says NLOT’s large collecting area will enable research on young stellar systems, brown dwarfs and exoplanets, while also allowing astronomers to detect faint sources and to rapidly follow up extreme cosmic events and gravitational wave detections.

Along with India’s upgraded Giant Metrewave Radio Telescope, a planned gravitational-wave observatory in the country and the Square Kilometre Array in Australasia and South Africa, Gopinathan says that NLOT “will usher in a new era of multimessenger and multiwavelength astronomy.”

The third telescope to be supported is the 2 m National Large Solar Telescope (NLST), which will be built near Pangong Tso lake 4350 m above sea level. Also expected to come online by 2030, the NLST is an advance on India’s existing 50 cm telescope at the Udaipur Solar Observatory, which provides a spatial resolution of about 100 km. Scientists also plan to combine NLST observations with data from Aditya-L1, India’s space-based solar observatory, which launched in 2023.

“We have two key goals [with NLST],” says Dibyendu Nandi, an astrophysicist at the Indian Institute of Science Education and Research in Kolkata, “to probe small-scale perturbations that cascade into large flares or coronal mass ejections and improve our understanding of space weather drivers and how energy in localised plasma flows is channelled to sustain the ubiquitous magnetic fields.”

While bolstering India’s domestic astronomical capabilities, scientists say the Ladakh telescopes – located between observatories in Europe, the Americas, East Asia and Australia – would significantly improve global coverage of transient and variable phenomena.

The post India announces three new telescopes in the Himalayan desert appeared first on Physics World.

https://physicsworld.com/a/india-announces-three-new-telescopes-in-the-himalayan-desert/
No Author

Black hole is born with an infrared whimper

Observation sheds new light on how some massive stars fade away

The post Black hole is born with an infrared whimper appeared first on Physics World.

A faint flash of infrared light in the Andromeda galaxy was emitted at the birth of a stellar-mass black hole – according to a team of astronomers in the US. Kishalay De at Columbia University and the Flatiron Institute, and colleagues, noticed that the flash was followed by the rapid dimming of a once-bright star. They say that the star collapsed, with almost all of its material falling into a newly forming black hole. Their analysis suggests that there may be many more such black holes in the universe than previously expected.

When a massive star runs out of fuel for nuclear fusion it can no longer avoid gravitational collapse. As it implodes, such a star is believed to emit an intense burst of neutrinos, whose energy can be absorbed by the star’s outer layers.

In some cases, this energy is enough to tear material away from the core, triggering spectacular explosions known as core-collapse supernovae. Sometimes, however, this energy transfer is insufficient to halt the collapse, which continues until a stellar-mass black hole is created. These stellar deaths are far less dramatic than supernovae, and are therefore very difficult to observe.

Observational evidence for these stellar-mass black holes includes their gravitational influence on the motions of stars, and the gravitational waves emitted when they merge. So far, however, their initial formation has proven far more difficult to observe.

Mysterious births

“While there is consensus that these objects must be formed as the end products of the lives of likely very massive stars, there has remained little convincing observational evidence of watching stars turn into black holes,” De explains. “As a result, we don’t even have constraints on questions as fundamental as which stars can turn into black holes.”

The main problem is the low-key nature of the stellar implosions. While core-collapse supernovae shine brightly in the sky, “finding an individual star disappearing in a galaxy is remarkably difficult,” De says. “A typical galaxy has 100 billion stars in it, and being able to spot one that disappears makes it very challenging.”

Fortunately, it is believed that these stars do not vanish without a trace. “Whenever a black hole does form from the near complete inward collapse of a massive star, its very outer envelope must be still ejected because it is too loosely bound to the star,” De explains. As it expands and cools, models predict that this ejected material should emit a flash of infrared radiation – vastly dimmer than a supernova, but still bright enough for infrared surveys to detect.

To search for these flashes, De’s team examined data from NASA’s NEOWISE infrared survey and several other telescopes. They identified a near-infrared flash that was observed in 2014 and closely matched their predictions for a collapsing star. That flash was emitted by a supergiant star in the Andromeda galaxy.

Nowhere to be seen

Between 2017 and 2022, the star dimmed rapidly before disappearing completely across all regions of the electromagnetic spectrum. “This star used to be one of the most luminous stars in the Andromeda Galaxy, and now it was nowhere to be seen,” says De.

“Astronomers can spot supernovae billions of light years away – but even at this remarkable proximity, we didn’t see any evidence of an explosive supernova,” De says. “This suggests that the star underwent a near pure implosion, forming a black hole.”

The team also examined a previously-observed dimming in a galaxy 10 times more distant. While several competing theories had emerged to explain that disappearance, the pattern of dimming bore a striking resemblance to their newly-validated model, strongly suggesting that this event too signalled the birth of a stellar-mass black hole.

Because these events occurred so recently in ordinary galaxies like Andromeda, De’s team believe that similar implosions must be happening routinely across the universe – and they hope that their work will trigger a new wave of discoveries.

“The estimated mass of the star we observed is about 13 times the mass of the Sun, which is lower than what astronomers have assumed for the mass of stars that turn into black holes,” De says. “This fundamentally changes our understanding of the landscape of black hole formation – there could be many more black holes out there than we estimate.”

The research is described in Science.

The post Black hole is born with an infrared whimper appeared first on Physics World.

https://physicsworld.com/a/black-hole-is-born-with-an-infrared-whimper/
No Author

International Year of Quantum Science and Technology draws to a close

Two-day event in Ghana marked the official end of IYQ

The post International Year of Quantum Science and Technology draws to a close appeared first on Physics World.

The International Year of Quantum Science and Technology (IYQ) has officially closed following a two-day event in Accra, Ghana. The year has seen hundreds of events worldwide celebrating the science and applications of quantum physics.

Officially launched in February at the headquarters of the UN Educational, Scientific and Cultural Organization (UNESCO) in Paris, IYQ has involved hundreds of organizations – including the Institute of Physics, which publishes Physics World.

The year 2025 was chosen for an international year dedicated to quantum physics as it marks the centenary of the initial development of quantum mechanics by Werner Heisenberg. A range of international and national events have been held touching on quantum in everything from communications and computing to medicine and the arts.

One of the highlights of the year was a workshop on 9–14 June 2025 in Helgoland – the island off the coast of Germany where Heisenberg made his breakthrough exactly 100 years earlier. It was attended by more than 300 top quantum physicists, including four Nobel prize-winners, who gathered for talks, poster sessions and debates.

Another was the IOP’s two-day conference – Quantum Science and Technology: The First 100 Years; Our Quantum Future – held at the Royal Institution in London in November.

The closing event in Ghana, held on 10–11 February, was attended by government officials, UNESCO directors, physicists and representatives from international scientific societies, including the IOP. Attendees discussed UNESCO’s official 2025 IYQ report, heard a reading of the winning entry in the IYQ 2025 poetry contest and visited an exhibition featuring displays from IYQ sponsors.

Organizers behind the IYQ hope its impact will be felt for many years to come. “The entire 2025 year was filled with impactful events happening all over the world. It has been a wonderful experience working alongside such dedicated and distinguished colleagues,” notes Duke University physicist Emily Edwards, who is a member of the IYQ steering committee. “We are thrilled to see the enthusiasm continue through to 2026 with the closing ceremony and are proud that a strong foundation has been laid for the years ahead.”

The UN has declared “international years” since 1959 to draw attention to topics deemed to be of worldwide importance. In recent years, there have been a number of successful science-based themes, including physics (2005), astronomy (2009), chemistry (2011), crystallography (2014) and light and light-based technologies (2015).

The post International Year of Quantum Science and Technology draws to a close appeared first on Physics World.

https://physicsworld.com/a/international-year-of-quantum-science-and-technology-draws-to-a-close/
Michael Banks

Asteroid deflection: why we need to get it right the first time

Aerospace engineer Rahil Makadia on the danger of asteroid “keyholes”

The post Asteroid deflection: why we need to get it right the first time appeared first on Physics World.

Science fiction became science fact in 2022 when NASA’s DART mission took the first steps towards creating a planetary defence system that could someday protect Earth from a catastrophic asteroid collision. However, much more work on asteroid deflection is needed from the latest generation of researchers – including Rahil Makadia, who has just completed a PhD in aerospace engineering at the University of Illinois at Urbana-Champaign.

In this episode of the Physics World Weekly podcast, Makadia talks about his work on how we could deflect asteroids away from Earth. We also chat about the potential threats posed by near-Earth asteroids – from shattered windows to global destruction.

Makadia stresses the importance of getting a deflection right the first time, because his calculations reveal that a poorly deflected asteroid could someday return to Earth. In November, he published a paper that explored how a bad deflection could send an asteroid into a “keyhole” that guarantees its return.
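
As a rough illustration of the keyhole idea, the Python sketch below treats a keyhole as a narrow band on the encounter “b-plane”: if a deflection shifts the asteroid’s crossing point into that band, a resonant return is locked in. All numbers here are invented for illustration – this is not Makadia’s calculation:

```python
# Hedged toy sketch of a gravitational "keyhole" check.
# All values are hypothetical; real keyhole analyses use full orbital dynamics.

def lands_in_keyhole(crossing_km: float, centre_km: float, half_width_km: float) -> bool:
    """True if the post-deflection b-plane crossing falls inside the keyhole band."""
    return abs(crossing_km - centre_km) <= half_width_km

nominal_crossing = 4500.0   # km, assumed crossing point before deflection
shift_from_impulse = 120.0  # km, assumed shift produced by the deflection

# A deflection that is too small (or badly aimed) can park the crossing point
# inside the keyhole instead of pushing it safely past.
print(lands_in_keyhole(nominal_crossing + shift_from_impulse,
                       centre_km=4600.0, half_width_km=30.0))  # True: a "bad" deflection
```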

But it is not all gloom and doom: Makadia points out that our current understanding of near-Earth asteroids suggests that no major collision will occur for at least 100 years. So even if there is a threat on the horizon, we have plenty of time to develop deflection strategies and technologies.

The post Asteroid deflection: why we need to get it right the first time appeared first on Physics World.

https://physicsworld.com/a/asteroid-deflection-why-we-need-to-get-it-right-the-first-time/
Hamish Johnston

Fluid gears make their debut

New work could promote the development of next-generation machines without mechanical interlocking teeth

The post Fluid gears make their debut appeared first on Physics World.

Flowing fluids that act like the interlocking teeth of mechanical gears offer a possible route to novel machines that suffer less wear-and-tear than traditional devices. This is the finding of researchers at New York University (NYU) in the US, who have been studying how fluids transmit motion and force between two spinning solid objects. Their work sheds new light on how one such object, or rotor, causes another object to rotate in the liquid that surrounds it – sometimes with counterintuitive results.

“The surprising part in our work is that the direction of motion may not be what you expect,” says NYU mathematician Leif Ristroph, who led the study together with mathematical physicist Jun Zhang. “Depending on the exact conditions, one rotor can cause a nearby rotor to spin in the opposite direction, like a pair of gears pressed together. For other cases, the rotors spin in the same direction, as if they are two pulleys connected by a belt that loops around them.”

Making gear teeth using fluids

Gears have been around for thousands of years, with the first records dating back to 3000 BC. While they have advanced over time, their teeth are still made from rigid materials and are prone to wearing out and breaking.

Ristroph says that he and Zhang began their project with a simple question: might it be possible to avoid this problem by making gears that don’t have teeth, and in fact don’t even touch, but are instead linked together by a fluid? The idea, he points out, is not unprecedented. Flowing air and water are commonly used to rotate structures such as turbines, so developing fluid gears to facilitate that rotation is in some ways a logical next step.

To test their idea, the researchers carried out a series of measurements aimed at determining how parameters like the spin rate and the distance between spinning objects affect the motion produced. In these measurements, they immersed the rotors – solid cylinders – in an aqueous glycerol solution with a controllable viscosity and density. They began by rotating one cylinder while allowing the other one to spin in response. Then they placed the cylinders at varying distances from each other and rotated the active cylinder at different speeds.

“The active cylinder should generate fluid flows and could therefore in principle cause rotation of the passive one,” says Ristroph, “and this is exactly what we observed.”

When the cylinders were very close to each other, the NYU team found that the fluid flows functioned like gear teeth – in effect, they “gripped” the passive rotor and caused it to spin in the direction opposite to the active one. However, when the cylinders were spaced farther apart and the active cylinder spun faster, the flows looped around the outside of the passive cylinder like a belt around a pulley, producing rotation in the same direction as the active cylinder.
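
As a toy illustration of that regime dependence, the Python sketch below classifies the induced rotation from the cylinder gap and spin rate. The thresholds are invented for illustration; the NYU team’s actual analysis is far more detailed:

```python
# Hedged toy classifier for the two induced-rotation regimes.
# Thresholds are assumptions, not values from the NYU experiments.

def induced_rotation_mode(gap_over_radius: float, spin_rate_hz: float) -> str:
    """Guess the induced-rotation regime from crude, assumed thresholds."""
    if gap_over_radius < 0.5 and spin_rate_hz < 2.0:
        return "gear-like: passive rotor counter-rotates"
    return "belt-like: passive rotor co-rotates"

print(induced_rotation_mode(0.2, 1.0))  # close and slow -> gear-like
print(induced_rotation_mode(2.0, 5.0))  # far and fast  -> belt-like
```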

A model involving gear-like and belt-like modes

Ristroph says the team’s main difficulty was figuring out how to perform such measurements with the necessary precision. “Once we got into the project, an early challenge was to make sure we could make very precise measurements of the rotations, which required a special way to hold the rotors using air bearings,” he explains. Team member Jesse Smith, a PhD student and first author of a paper in Physical Review Letters about the research, was “brilliant in figuring out every step in this process”, Ristroph adds.

Another challenge the researchers faced was figuring out how to interpret their findings. This led them to develop a model involving “gear-like” and “belt-like” modes of induced rotation. Using this model, they showed that, at least in principle, fluid gears could replace conventional gears and pulley-and-belt systems – though Ristroph suggests that applications such as transmitting rotations within a machine, or keeping time via a mechanical device, might be especially well suited.

In general, Ristroph says that fluid gears offer many advantages over mechanical ones. Notably, they cannot become jammed or wear out due to grinding. But that isn’t all: “There has been a lot of recent interest in designing new types of so-called active materials that are composed of many particles, and one class of these involves spinning particles in a fluid,” he explains. “Our results could help to understand how these materials behave based on the interactions between the particles and the flows they generate.”

The NYU researchers say their next step will be to study more complex fluids. “For example, a slurry of corn starch is an everyday example of a shear-thickening fluid and it would be interesting to see if this helps the rotors better ‘grip’ one another and therefore transmit the motions/forces more effectively,” Ristroph says. “We are also numerically simulating the processes, which should allow us to investigate things like non-circular shapes of the rotors or more than just two rotors,” he tells Physics World.

The post Fluid gears make their debut appeared first on Physics World.

https://physicsworld.com/a/fluid-gears-make-their-debut/
Isabelle Dumé