Machine Learning Improves Fusion Modeling
Researchers at MIT are employing machine learning techniques to better understand turbulent plasma phenomena in fusion devices. According to MIT News, a new deep learning framework was developed that leverages artificial neural networks to represent a reduced turbulence theory.
The research is described in two papers, published in Physical Review E and Physics of Plasmas.
If researchers hope to control fusion for energy production, they need a better understanding of the turbulent motion of ions and electrons in the plasmas inside fusion reactors. In toroidal devices known as tokamaks, magnetic field lines guide the plasma particles; the intent is to confine them long enough to produce significant net energy gain, a challenge given the extraordinarily high temperatures and small spaces involved.
Scientists rely on numerical simulations of plasma turbulence to better understand conditions inside fusion reactors, but these calculations are computationally expensive. Simplified theories that run considerably faster while preserving predictive accuracy could speed up progress.
Plasma, often called the fourth state of matter (after solid, liquid, and gas), makes up over 99.9% of the observable universe. At sufficiently high energy, gases become ionized, producing a mixture of positively charged particles (atomic nuclei) and negatively charged particles (electrons). While plasmas in stars are confined by enormous gravitational forces, this is not the case on Earth. One of the main challenges is developing devices that can heat a plasma to the required temperatures and confine it long enough for thermonuclear reactions to release kinetic energy that sustains further fusion reactions. A promising approach known as magnetic confinement is employed in devices called tokamaks (a Russian acronym for “toroidal chamber with magnetic coils”), which use strong magnetic fields to control the charged particles that make up the plasma.
Inside these extraordinarily sophisticated machines, plasmas are contained by magnetic fields. Only a few meters separate the superconducting magnets, cryogenically cooled to below -200 degrees Celsius, from the plasma itself, which must be heated to above 100,000,000 degrees C.
Building these devices is a challenging task, not least because of the instabilities associated with the plasma, which pose a danger of damage to reactor components. (This limitation has an inherent safety benefit, however, in that the chain reaction can essentially never grow uncontrollably.)
The magnetic field in a tokamak has three components. Toroidal field coils create a magnetic field that runs the long way around the torus, guiding the charged plasma particles to flow in that direction. External coils provide vertical fields that control the plasma’s position. A poloidal field, created by the electric current running through the plasma itself, keeps the plasma in equilibrium.
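The combination of toroidal and poloidal fields makes the net field lines wind helically around the torus. A standard textbook quantity describing this winding (not discussed in the article) is the safety factor, q ≈ r·B_tor / (R·B_pol) in the large-aspect-ratio approximation. The sketch below uses purely illustrative machine parameters, not values from any real tokamak:

```python
# Illustrative only: large-aspect-ratio estimate of the tokamak safety
# factor q, which counts roughly how many toroidal turns a field line
# makes per poloidal turn. All numbers are hypothetical.

def safety_factor(r, R, B_tor, B_pol):
    """q ~ (r * B_tor) / (R * B_pol) in the cylindrical approximation."""
    return (r * B_tor) / (R * B_pol)

# Hypothetical parameters: minor radius 0.5 m, major radius 2 m,
# 5 T toroidal field, 0.625 T poloidal field at that radius.
q = safety_factor(r=0.5, R=2.0, B_tor=5.0, B_pol=0.625)
print(f"q = {q:.2f}")  # ~2 toroidal turns per poloidal turn
```

Higher q means a more gently winding field line; in practice, keeping q above certain thresholds is one of the knobs used to avoid the plasma instabilities mentioned above.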
Particle and energy confinement remain major sources of uncertainty in magnetic confinement fusion devices. Because conditions in the boundary plasma substantially influence a variety of processes, the edge region is crucial in evaluating the overall practicality of a fusion device, and modeling of the boundary plasma and the surrounding structure remains a critical task.
One particular transport theory relevant to boundary plasmas and widely applied to analyze edge turbulence is the drift-reduced Braginskii model. For decades, tokamak physicists have routinely used this reduced “two-fluid theory” to simulate boundary plasmas in experiments, despite uncertainty about its accuracy.
In two recent publications, MIT researchers have begun to directly test the accuracy of this reduced model by combining physics with machine learning. According to the researchers, the model examines the dynamic relationships among physical variables such as density, electric potential, and temperature, alongside quantities such as the turbulent electric field and electron pressure. They found that the turbulent electric fields associated with pressure fluctuations predicted by the reduced fluid model are consistent with high-fidelity gyrokinetic predictions in plasmas relevant to existing fusion devices.
With this work, they have also demonstrated a new deep learning technique that can diagnose unknown turbulent field fluctuations consistent with the drift-reduced Braginskii theory. Plasma turbulence is notoriously difficult to simulate, much more so than air or water turbulence. By embedding the governing equations into machine learning models, researchers can extract a great deal of information from a small number of observations. According to the MIT team, these novel analytical approaches open up new pathways for evaluating chaotic systems and broaden the scope of what can be discovered about turbulence in fusion plasmas.
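The papers themselves build physics-informed neural networks on the drift-reduced Braginskii equations. As a much simpler stand-in for the underlying idea — that embedding a governing relation lets sparse measurements determine an entire field — the sketch below recovers a 1D “potential” profile on a grid from just five point observations plus a smoothness constraint, then differentiates it to obtain a derived field. The grid, the sin-shaped profile, and the curvature penalty are all illustrative assumptions, not the authors’ method:

```python
# Toy illustration (not the MIT method): recover a full 1D profile from
# a handful of observations by jointly minimizing data misfit and a
# "physics" (smoothness) residual, then derive a field from the fit.
import numpy as np

N = 51
x = np.linspace(0.0, 1.0, N)
phi_true = np.sin(2.0 * np.pi * x)   # hypothetical potential profile

# Five sparse observations of the potential.
obs_idx = [0, 12, 25, 38, 50]
obs_val = phi_true[obs_idx]

# Data rows: pick out the observed grid points (heavily weighted).
A_data = np.zeros((len(obs_idx), N))
for row, i in enumerate(obs_idx):
    A_data[row, i] = 1.0
w_data = 10.0

# "Physics" rows: a second-difference operator penalizing curvature,
# standing in for a governing equation that links neighboring points.
D2 = np.zeros((N - 2, N))
for i in range(N - 2):
    D2[i, i], D2[i, i + 1], D2[i, i + 2] = 1.0, -2.0, 1.0

# Stack both sets of residuals and solve one least-squares problem.
A = np.vstack([w_data * A_data, D2])
b = np.concatenate([w_data * obs_val, np.zeros(N - 2)])
phi_rec = np.linalg.lstsq(A, b, rcond=None)[0]

# Derived field, analogous to inferring E = -d(phi)/dx from the fit.
E_rec = -np.gradient(phi_rec, x)
print("max reconstruction error:", np.max(np.abs(phi_rec - phi_true)))
```

The five observations alone say nothing about the 46 unobserved grid points; it is the embedded constraint that fills in the rest. The real work replaces this linear smoothness prior with the nonlinear plasma equations and the least-squares solve with neural-network training, but the payoff is the same: a small number of measurements constrains an otherwise unknowable turbulent field.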
The post Machine Learning Improves Fusion Modeling appeared first on EETimes.