DOE Office of Science | SciDAC Review
Computing Atomic Nuclei
Petascale computing helps disentangle the nuclear puzzle. The goal of the Universal Nuclear Energy Density Functional (UNEDF) collaboration is to provide a comprehensive description of all nuclei and their reactions based on the most accurate knowledge of the nuclear interaction, the most reliable theoretical approaches, and the massive use of computer power.

Science of Nuclei
Nuclei comprise 99.9% of all baryonic matter in the Universe and are the fuel that burns in stars. The rather complex nature of the nuclear forces among protons and neutrons generates a broad range and diversity in the nuclear phenomena that can be observed. As shown during the last decade, developing a comprehensive description of all nuclei and their reactions requires theoretical and experimental investigations of rare isotopes with unusual neutron-to-proton ratios. These nuclei are labeled exotic, or rare, because they are not typically found on Earth. They are difficult to produce experimentally because they usually have extremely short lifetimes. The goal of a comprehensive description and reliable modeling of all nuclei—light, heavy, and superheavy—represents one of the great intellectual opportunities for physics in the twenty-first century.
The nuclear many-body problem is of broad intrinsic interest. The phenomena that arise—shell structure, superfluidity, collective motion, phase transitions—and the connections with many-body symmetries, are also fundamental to fields such as atomic physics, condensed matter physics, and quantum chemistry. Although the interactions of nuclear physics differ from the electromagnetic interactions that dominate chemistry, materials, and biological molecules, the theoretical methods and many of the computational techniques to solve the quantum many-body problems are shared (figure 1). All basis expansion methods—configuration interaction in chemistry, interacting shell model in nuclear physics—use exactly the same technique, that of diagonalizing the Hamiltonian matrix. Coupled cluster (CC) techniques, which were formulated by nuclear scientists in the 1950s, are essential techniques in chemistry today and have recently been resurgent in nuclear structure. Quantum Monte Carlo techniques dominate studies of phase transitions in spin systems and nuclei. These methods are used to understand both the nuclear and electronic equations of state in condensed systems, and they are used to investigate the excitation spectra in nuclei, atoms, and molecules.
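At bottom, all of these basis expansion methods reduce to finding the lowest eigenpairs of a large, sparse Hamiltonian matrix. The sketch below illustrates the idea on a random sparse symmetric matrix standing in for a many-body Hamiltonian; the matrix, its dimension, and its sparsity are purely illustrative, and the Lanczos-type solver is SciPy's `eigsh`.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy stand-in for a many-body Hamiltonian: a large, sparse,
# real-symmetric matrix in some configuration basis.
rng = np.random.default_rng(0)
dim = 2000
h = sp.random(dim, dim, density=1e-3, random_state=rng, format="csr")
h = (h + h.T) * 0.5                        # symmetrize the off-diagonal couplings
h = h + sp.diags(rng.normal(size=dim))     # diagonal (single-particle) energies

# A Lanczos-type iteration (eigsh) extracts the few lowest eigenpairs --
# the ground state and low-lying excitations -- without ever
# computing the full spectrum.
energies, states = eigsh(h, k=5, which="SA")
print(np.round(np.sort(energies), 3))
```

In a real shell-model code the matrix is never stored densely; only its action on a vector is implemented, which is exactly what the Lanczos iteration requires.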
The rather complex nature of the nuclear forces among protons and neutrons generates a broad range and diversity in the nuclear phenomena that can be observed.
Figure 1. The theoretical methods and computational techniques used to solve the nuclear many-body problem. On this chart of the nuclides in the (N,Z)-plane, the black squares represent stable nuclei and the yellow squares indicate unstable nuclei that have been produced and studied in the laboratory. The many thousands of these unstable nuclei yet to be explored are indicated in green (terra incognita). Except for the lightest nuclei, where it has been reached experimentally, the neutron drip line (the rightmost border of the nuclear landscape) has to be estimated on the basis of nuclear models—hence it is very uncertain due to the dramatic extrapolations involved. The red vertical and horizontal lines show the magic numbers, reflecting regions where nuclei are expected to be more tightly bound and have longer half-lives. The anticipated path of the astrophysical r-process responsible for nucleosynthesis of heavy elements is also shown (purple line). The thick dotted lines indicate domains of major theoretical approaches to the nuclear many-body problem. For the lightest nuclei, ab initio calculations (Green's function Monte Carlo, no-core shell model, coupled cluster method), based on the bare nucleon-nucleon interaction, are possible (red). Medium-mass nuclei can be treated by configuration interaction techniques (interacting shell model, in green). For heavy nuclei, the density functional theory based on self-consistent/mean field theory (blue) is the tool of choice. By investigating the intersections between these theoretical strategies, one aims at nothing less than developing a unified description of the nucleus.
When applied to systems with many active particles, ab initio and configuration interaction methods present computational challenges as the configuration space explodes rapidly. Thus other models are needed in which the most important degrees of freedom are identified and retained so that a full treatment of all interactions among the active particles can be avoided. This kind of approach to many-body quantum physics can be found in many other fields, such as condensed matter physics, atomic and molecular physics, and quantum chemistry. Density functional theory (DFT), a tool of choice for complex nuclei, is built on theorems showing the existence of universal energy functionals for many-body systems, which include, in principle, all many-body correlations. DFT has been spectacularly successful in condensed matter physics and chemistry, as was recognized by the 1998 Nobel Prize in chemistry, awarded to Walter Kohn. In fact, it was the combined work of many dedicated researchers that culminated in finding remarkably accurate functionals for use in chemistry. A concerted effort rooted in a fundamental understanding of internucleon interactions offers promise to achieve corresponding qualitative improvements in accuracy and applicability for nuclear physics. For the nucleus, a self-bound system of fermions (neutrons and protons), DFT is the only tractable theory that can be applied across the entire table of nuclides. The new challenges faced by the nuclear DFT are the presence of two kinds of fermions, the essential role of pairing, and the need for symmetry restoration in finite, self-bound systems.

Practical Applications
Applications of nuclear physics in today's global economy and national security are numerous. They include the nuclear power industry and nuclear medicine, as well as national defense. As has been illustrated many times in all fields of science, improved understanding of the microworld benefits society. Fusion and fission are excellent examples. The description of these fundamental nuclear processes is still very schematic, yet nuclear fission powers reactors that produce energy for the nation, and fusion, which is responsible for energy production in stars, has the promise of providing a clean alternative source of energy.
Density functional theory is built on theorems showing the existence of universal energy functionals for many-body systems.
There is little question that the nuclear many-body problem has high societal relevance. In the area of national defense, for instance, developing a comprehensive description of nuclei aligns well with the goals of the National Nuclear Security Administration (NNSA) Stockpile Stewardship Program, which entails an accurate and complete modeling of the behavior and performance of devices in the nation's aging nuclear weapons stockpile. Improving the accuracy of that understanding is central to the continuing process of certifying both the safety and the reliability of the stockpile without a resumption of nuclear testing. In short, understanding of nuclei and their reactions is critical to providing a more secure homeland.

Important Questions Remain
In the last few years, perhaps more than ever, there has been an especially productive interplay between theory and experiment in forging a deeper understanding of the nuclear quantum many-body problem. Yet, there are significant components missing from the current understanding, and these must be addressed in order to develop a comprehensive predictive theory of the nucleus and nucleonic matter that will answer a number of fundamental scientific questions. As discussed in the next section, these questions can only be studied with access to new realms of nuclei—those with proton and neutron numbers far different than those of the familiar nuclei found in nature.

Experimental Evidence that Challenges Theory
While only about 300 combinations of protons and neutrons in nuclei are stable enough to exist in nature, several thousand nuclei can be synthesized in the laboratory, and even more can be created in stars. The chart of nuclei (figure 1, p43) shows all possible nuclei in the plane of the neutron number N and the proton (or atomic) number Z. In this landscape, the stable nuclei are bunched along the valley of beta-stability. The ensemble of the heaviest isotopes of each element forms a broken line in the nuclear chart, called the neutron drip line, to the right of beta stability. Atomic nuclei beyond that line are unbound with respect to neutron radioactivity. Coulomb repulsion limits the existence on the proton-rich side of the nuclear landscape.
There is little question that the nuclear many-body problem has high societal relevance.
In 1963 Maria Goeppert-Mayer and J. Hans D. Jensen received the Nobel Prize in physics for explaining why nuclei containing certain numbers of protons and neutrons (2, 8, 20, 28, 50, 82, and 126, for example) are extremely stable. These numbers are called "magic numbers." This discovery led to an extremely successful tool for describing nuclei called the "shell model." In this model, the protons and neutrons—collectively called the nucleons—move in an average mean field generated by the remaining particles, mutually interacting within a given subset of shells that is well separated in energy from the others. This separation means a nucleus can be modeled as a core, representing the magic nucleus, plus valence nucleons. This simple picture works well in many nuclei near the valley of stability.
However, a significant new theme concerns shell structure near the particle drip lines and in superheavy nuclei. Theoretical predictions and experimental discoveries of the last decade indicate that nucleonic shell structure is a more local concept than once believed. The experimental data indicate that the magic numbers in neutron-rich nuclei are not the immutable benchmarks they were once thought to be: the magic numbers at N=20 and N=28 fade away with increasing neutron number, while new magic numbers at N=14, N=16, and N=32 seem to appear. Nuclei far from stability have unusual properties as compared to their stable cousins. Excellent examples are 6He and 11Li, where a two-neutron halo forms around the 4He and 9Li cores. The radial extension of neutron matter in these nuclei is indeed extreme: the halo radius of 11Li is the same as the radius of 208Pb. New experimental facilities such as the proposed Facility for Rare Isotope Beams (FRIB) in the U.S., as well as present radioactive-beam facilities, will undoubtedly find more of these surprising systems.
Why is nuclear structure changing in the exotic environment? There are several good reasons for this. First, the nuclear mean field is expected to strongly depend on the orbits being filled. Second, many-body correlations, such as superconductivity, involving weakly bound and unbound nucleons become crucial when the nucleonic binding gets small. Third, the nucleus is an open quantum system. The presence of states that are unbound to particle emission may have a significant impact on nuclear properties. All this could have a profound impact on the understanding of element production in the Universe as a number of important nucleosynthesis processes—especially those producing nuclei heavier than iron—occur in very neutron-rich or neutron-deficient nuclei. Nature does not have the luxury of dealing only with stable nuclei. A robust nuclear-theoretical capability is required in order to understand stable and exotic nuclei that are the core of matter and the fuel of stars.
Figure 2. The UNEDF collaboration includes researchers from six national laboratories and eight U.S. universities. These include Ames Laboratory, ANL, LBNL, LLNL, LANL, ORNL, Central Michigan University, Iowa State University, Michigan State University, Ohio State University, San Diego State University, the University of North Carolina, the University of Tennessee-Knoxville, and the University of Washington.

The UNEDF Project
On the theoretical side, recent developments of powerful conceptual, analytic, algorithmic, and computational tools enable scientists to peer into the inner workings of nuclei with far greater precision than previously possible. These new tools make researchers optimistic that the goal of developing a comprehensive, quantitative, and predictive theory of the nucleus and nucleonic matter is indeed achievable.
The purpose of SciDAC's Universal Nuclear Energy Density Functional (UNEDF) project is to formulate the next generation of nuclear structure and reaction theory. The mission of the project is threefold:
  • Find an optimal energy density functional using all knowledge of nucleonic, Hamiltonian, and basic nuclear properties

  • Apply DFT and its extensions to validate the functional using all the available relevant nuclear structure data

  • Apply the validated theory to properties of interest that cannot be measured, in particular the properties needed for reaction theory, such as cross sections relevant to NNSA programs
The activities to be supported fall into different areas of nuclear theory and computer science, but the goal can only be achieved by working at the interfaces among these areas. The collaboration involves theoretical physicists and computer scientists from six national laboratories and eight universities (figure 2). The collaboration also involves a number of scientists based in Europe and Japan. Figure 3 shows the main research areas and methods used to achieve the end goal.
Figure 3. The Universal Nuclear Energy Density Functional collaboration, showing the main participants and the main physics (red font) and computational (blue font) themes.

Nuclear Forces and the Energy Density Functional
The past few years have witnessed the re-emergence of a more basic approach to understanding nuclei. In the ab initio strategy, the basic interactions among protons and neutrons are treated explicitly and the many-body problem is solved with as few approximations as possible. DFT provides a way to systematically map the many-body problem onto a one-body problem without explicitly involving inter-nucleon interactions; here the fundamental entity is the energy functional, which depends on one-body densities and currents. The following includes descriptions of both the ab initio and DFT approaches, with an emphasis on the latter because of its broad applicability across the entire table of nuclides.
The types of computations needed to describe physical phenomena always depend on the energy scale and the number of degrees of freedom of the problem, and the nuclear many-body problem is no exception (figure 4, p46). Quantum chromodynamics (QCD), the theory of strong interactions, governs the dynamics and properties of quarks and gluons that form baryons and mesons ("Probing Matter at Subnuclear Scales," SciDAC Review, Summer 2007, p36). Hence, it is also responsible for the forces that bind nuclei. In this area, significant progress is being made by computing properties of inter-nucleon forces using the effective field theory (EFT), which starts from an effective Lagrangian that retains the basic symmetries of QCD and is constructed in terms of nucleon and pion fields and their derivatives. A power-counting scheme enables one to write down various terms of the nuclear interaction in a systematic way. The unknown low-energy coupling strengths that appear in the expansion must be determined from experiment, or eventually from lattice QCD.
Understanding nuclei and their reactions is critical to nuclear power, nuclear medicine, and national security.
Figure 4. The basic elements (degrees of freedom) of strongly-interacting matter depend on the energy of the experimental probe and the distance scale. The building blocks of the theory of strong interactions, quantum chromodynamics (QCD), are quarks and gluons. Hadrons (baryons and mesons) can often be described by the dynamics of the effective (or constituent) quarks, with the gluon degrees of freedom being integrated out. The classical nuclear physics problem is an effective approximation to QCD. It involves a strongly interacting quantum mechanical system of two fermionic species, protons and neutrons. A common starting point for nuclear physics is an inter-nucleon interaction, represented by a potential or by a set of meson-exchange forces. For complex nuclei, calculations involving all protons and neutrons become prohibitively difficult. Therefore, a critical challenge is to develop new approaches that identify the important degrees of freedom of the nuclear system and are practical in use. Such a strategy is similar to what is being used in other fields of science, in particular in condensed matter physics, atomic and molecular physics, and quantum chemistry. Of particular importance is the development of the energy density functional, which may lead to a comprehensive description of the properties of both finite nuclei and extended asymmetric nucleonic matter. Here, the main building blocks are the effective fields represented by local proton and neutron densities and currents. Finally, for certain classes of nuclear models, in particular those representing emergent many-body phenomena that happen on a much lower energy scale, the effective degrees of freedom are collective coordinates describing various vibrations and rotations and the large-amplitude motion as seen in fission.
An important breakthrough in nuclear theory came more than 30 years ago with the realization that three-body forces are an integral part of the nuclear problem, because two-nucleon forces alone could not account for the binding energy of the triton or the alpha particle. This is a natural idea, since nucleons are not point particles. Several models of three-nucleon forces were developed at the time, but the capability to calculate the effect of three-body forces in all but the smallest nuclei has emerged only in the last 10 years. Indeed, the beauty of EFT is that it systematically defines what those forces should look like.
The main ingredient of DFT is the energy density functional that depends on densities and currents representing distributions of nucleonic matter, spins, momentum, and kinetic energy and their derivatives (gradient terms). Standard functionals used in nuclear DFT calculations have been parameterized by means of about 10 coupling constants that are adjusted to basic properties of nuclear matter (such as saturation density, binding energy per nucleon) and to selected data on magic nuclei. The functionals are augmented by a pairing term, which describes nuclear superfluidity. When not corrected by additional phenomenological terms, standard functionals reproduce total binding energies with a root mean square error of about 2 MeV; however, they have been successfully tested over the whole nuclear chart on a broad range of phenomena and usually perform quite well when applied to energy differences, radii, and nuclear moments and deformations. Historically, the first nuclear energy density functionals appeared in the context of Hartree-Fock or Hartree-Fock-Bogoliubov methods and zero-range interactions such as the Skyrme force. However, it was realized afterwards that—in the spirit of DFT—an effective interaction could be secondary to the functional, that is, it is the density functional that defines the force. This is the strategy that the UNEDF collaboration will follow.
The density and gradient dependences of the functional are poorly known. To make progress, a concerted effort will be required to study the nuclear DFT when applied to finite nuclei and bulk nuclear matter. A promising approach to a systematic density expansion, rooted in fundamental inter-nucleon forces, is given by EFT supplemented by renormalization group methods. By evolving EFT interactions to lower momentum, they are softened to the extent that constructing a microscopic energy density functional based on many-body perturbation theory becomes feasible. By varying the cutoff, this approach allows theoretical error estimates, which may be coupled to the optimization of parameters from experiment through efficient global fits, with systematic error and covariance analysis.
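The optimization-with-covariance step can be illustrated with an ordinary weighted least-squares fit. The sketch below fits a deliberately oversimplified two-parameter mass formula (volume and surface terms only; with Coulomb and asymmetry terms omitted, the fitted values will not match textbook liquid-drop coefficients) to a handful of measured binding energies per nucleon, and extracts the parameter covariance matrix. The parameter names and the fit setup are illustrative, not the UNEDF protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical two-parameter "mass formula": binding energy per nucleon
# modeled as a volume term minus a surface correction,
# B/A = a_v - a_s * A**(-1/3). Coulomb and asymmetry terms are omitted.
def binding_per_nucleon(a_mass, a_v, a_s):
    return a_v - a_s * a_mass ** (-1.0 / 3.0)

a_mass = np.array([16, 40, 56, 120, 208], dtype=float)   # 16O ... 208Pb
b_exp = np.array([7.98, 8.55, 8.79, 8.50, 7.87])         # B/A in MeV
sigma = np.full_like(b_exp, 0.05)                        # assumed uncertainties

popt, pcov = curve_fit(binding_per_nucleon, a_mass, b_exp,
                       p0=[16.0, 18.0], sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))            # one-sigma parameter uncertainties
corr = pcov[0, 1] / (perr[0] * perr[1])  # parameter correlation coefficient
```

The off-diagonal element of the covariance matrix is what a sensitivity analysis propagates into error bars on extrapolated observables; the real UNEDF fits do the same thing with many more parameters and thousands of data points.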

Computing Nuclei in Ab Initio and Configuration Interaction Approaches
Given a realistic nuclear interaction, the next step is to solve the quantum many-body problem for a given nucleus with that interaction. Several ab initio techniques are being used today to calculate nuclear properties directly, each of which has advantages and disadvantages. The Green's function Monte Carlo (GFMC) technique has been vigorously pursued in light nuclei (up to 12C) since the mid-1990s and demonstrates that one can build nuclei from scratch. The GFMC is naturally parallel and requires specialized load-balancing algorithms to efficiently scale to thousands of processors.
Another ab initio approach, the no-core shell model (NCSM), involves diagonalization of the nuclear Hamiltonian in a basis of independent-particle states. Parallel Krylov techniques are used to find the lowest energy levels and wave functions in these computations. The methods used rely on sparse algorithms as well. Both the GFMC and NCSM approaches scale exponentially with the number of nucleons; a recent variation of GFMC, auxiliary field diffusion Monte Carlo (AFDMC), has only polynomial scaling. The CC method represents yet another approach to the nuclear problem. This technique is particularly appropriate for closed-shell or sub-shell nuclei. It has the advantage that it scales very gently (polynomially) with increasing numbers of nucleons. A complex-energy version of the CC theory, particularly useful for description of open exotic nuclei, was recently developed to calculate widths of states in the helium isotopic chain and is being run on ORNL's Jaguar (figure 5; sidebar "Calculating the Nuclear Mass Table on Jaguar," p49). CC theory requires the solution of coupled nonlinear algebraic equations and fast matrix-matrix multiplies.
The main ingredient of DFT is the energy density functional that depends on densities and currents representing distributions of nucleonic matter, spins, momentum, and kinetic energy and their derivatives (gradient terms).
Figure 5. ORNL's Jaguar, the world's second fastest computer, enables certain nuclear calculations only dreamt of a few years ago. As an example, Jaguar was used for the first ever ab initio computation of neutron-rich helium nuclei using coupled cluster theory (shown in the figure on the side of the computer). The figure shows the binding energy of these nuclei, while the inset indicates the width, related to lifetime. Experimental data are marked in red. The calculated masses show a systematic deviation from experiment; this can be attributed to a three-body force, missing in the calculation.
Given the dramatic rise in computing power, predicted to reach petaflop-scale within three years and moving toward exascale computing, researchers are likely to soon have the computational power to pursue ab initio calculations using CC techniques in very massive nuclei. A petascale coupled-cluster calculation will probably involve 100 nucleons in 1,000 orbitals. This is within the realm of the possible if algorithms can be generated that will scale to enormous numbers of compute cores. Efforts in this direction are underway. This same computational power should enable ab initio calculations using either GFMC (or its derivatives) or NCSM into the mass 20-40 regions. Because of the complementary nature of the methods, it will be important that all three methods advance to take advantage of petascale computing and beyond (sidebar "Computational Scaling of Ab Initio Techniques," p50). DFT equations present a nonlinear eigenvalue problem that must be solved iteratively. New wavelet expansion techniques are being developed to achieve better accuracy for the nuclear problem.
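The last point can be made concrete with a toy self-consistent loop: because the mean-field potential depends on the density built from the occupied orbitals, the eigenvalue problem is nonlinear and must be re-diagonalized until the input and output densities agree. Everything below (the 1D grid, harmonic confinement, and contact-like coupling g) is an illustrative stand-in, not a realistic nuclear functional.

```python
import numpy as np

# Toy 1D self-consistent-field loop (units with hbar = m = 1).
n, box, n_occ, g = 200, 10.0, 4, -1.0
x = np.linspace(-box / 2, box / 2, n)
dx = x[1] - x[0]
# Finite-difference kinetic-energy operator, -0.5 d^2/dx^2.
kinetic = (np.diag(np.full(n, 2.0)) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2

density = np.zeros(n)
for it in range(200):
    # The mean field depends on the density it is supposed to produce.
    potential = 0.5 * x**2 + g * density
    h = 0.5 * kinetic + np.diag(potential)
    eps, orbitals = np.linalg.eigh(h)
    new_density = (orbitals[:, :n_occ] ** 2).sum(axis=1) / dx
    if np.max(np.abs(new_density - density)) < 1e-7:
        break
    density = 0.5 * density + 0.5 * new_density  # damped mixing for stability
```

The damped mixing step mirrors what production DFT solvers do (often with more sophisticated Broyden-type updates) to keep the nonlinear iteration from oscillating.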
Figure 6. Configuration space dimension of the interacting shell model for fp-shell nuclei.
The ongoing work with configuration interaction techniques is also of note. The interacting shell model, in which the configuration space is truncated by restricting it to valence nucleons only, can be used to make detailed studies of nuclear structure in small regions of the nuclear chart. The method was applied to p-shell nuclei (ranging from N=Z=2 to N=Z=8, initially calculated in the 1960s), sd-shell nuclei (ranging from N=Z=8 to N=Z=20, initially calculated in the 1980s), and fp-shell nuclei (ranging from N=Z=20 to N=Z=40, fully investigated in the 1990s-2000s). The next frontier will be nuclei in the gds-shell region. Growth in computational capability has driven this changing emphasis across shells, but it is clear that either significant approximations or technology breakthroughs must occur to tackle the next shell: in heavier systems with many active nucleons, the configuration space explodes rapidly, resulting in combinatorial growth in the complexity of calculations (figure 6). It is expected that efforts with NCSM will deliver benefits to standard shell model computations as both methods rely on Krylov techniques to diagonalize the Hamiltonian matrix. Auxiliary-field Monte Carlo methods, which have been developed and run on Jaguar, demonstrate an alternative to diagonalization that holds significant promise for certain aspects of the physics of medium-mass and heavier nuclei, where the dimensions of the model space reach 10^30.
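The combinatorial growth is easy to reproduce. A naive m-scheme estimate (a common bookkeeping scheme in shell-model codes) counts proton and neutron Slater determinants independently and multiplies the two counts; the helper function and the mid-shell occupation numbers below are illustrative, and real codes reduce these counts substantially with symmetry projections.

```python
from math import comb

def mscheme_dimension(n_proton_states, n_valence_protons,
                      n_neutron_states, n_valence_neutrons):
    """Naive m-scheme dimension: count proton and neutron Slater
    determinants independently and multiply (no angular-momentum
    or parity projection applied)."""
    return comb(n_proton_states, n_valence_protons) * \
           comb(n_neutron_states, n_valence_neutrons)

# The fp shell (f7/2, p3/2, f5/2, p1/2) offers 8 + 4 + 6 + 2 = 20
# single-particle m-states each for protons and neutrons.
# Mid-shell example: 10 valence protons and 10 valence neutrons.
dim = mscheme_dimension(20, 10, 20, 10)
print(f"{dim:.2e}")   # ~3.4e10 basis states before symmetry reductions
```

Adding even one more major shell multiplies the available single-particle states, so the binomial coefficients, and hence the matrix dimension, grow combinatorially, which is the pattern plotted in figure 6.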

The hope for the longer term is to find a universal functional that would cover the entire chart of nuclei.
Nuclear Energy Density Functional Theory
Because nuclei are self-bound objects, they produce their own confining potential, or mean field. DFT provides the rigorous theoretical foundation for a self-consistent description of the nucleus in terms of one-body densities and currents that build the mean field. As discussed previously, the challenge consists in relating the nuclear DFT to the approaches based on the realistic inter-nucleon interactions and/or directly to low-energy QCD derivations. The hope for the longer term is to find a universal functional that would cover the entire chart of nuclei.
The nuclear DFT efforts have been quite successful in describing a wide variety of nuclear data with very good precision across the nuclear chart (figures 7, 8, and 9). The various parameterizations usually work quite well in regions where nuclear masses and other properties are experimentally determined, but extrapolations into very neutron-rich nuclei have been problematic. Data from next-generation facilities should enable theorists to obtain a functional parameterization that describes bulk properties of all nuclei. Various nuclear data along long isotopic and isotonic chains are needed to constrain the isovector part of the energy functional. More specifically, one needs masses (or mass differences) as well as measures of collectivity and of shell evolution in unknown regions, where predictions of currently used functionals disagree. Data on large deformations (at low and high angular momentum) and multipole strength distributions in neutron-rich nuclei will also be extremely valuable. All these data will be used to determine the coupling constants characterizing the functional (a many-dimensional optimization problem).
Figure 7. An example of large-scale systematic density functional theory (DFT) calculations for complex nuclei produced by the UNEDF collaboration. Results of the deformed DFT calculations of two-neutron separation energies for 1,553 particle-bound even-even nuclei with Z ≤ 108 and N ≤ 188.
At the heart of DFT lies the correlation energy that is rooted in quantum effects and symmetry breaking. The 1975 Nobel Prize in physics was awarded to Aage Niels Bohr, Ben Roy Mottelson, and Leo James Rainwater for showing that a large part of those correlations can be included by considering symmetry-breaking, independent-particle states. However, for finite systems, a quantitative description often requires symmetry restoration. For this purpose, one can apply a variety of techniques, such as projection methods and the generator coordinate method. Ideally, approximations would be developed that avoid full-scale collective calculations but are based on calculations performed on top of self-consistent mean fields.
Defining a universal energy density functional requires computations of ground-state energies and other observable properties across the chart of nuclei.
Reliable extrapolation is possible only with the establishment of theoretical uncertainties. Consequently, construction of new energy density functionals should be supplemented by sensitivity analysis. It is not sufficient to predict properties of exotic nuclei by extrapolating properties of those measured in experiment. The UNEDF collaboration must also quantitatively determine errors related to such extrapolations. Moreover, for experimental work it is essential that an improvement gained by measuring one or two more isotopes be quantitatively known. From a theoretical perspective, scientists must know the confidence level with which the parameters of the functional are determined.
Figure 8. An example of large-scale systematic DFT calculations for complex nuclei produced by the UNEDF collaboration. Alpha-decay energies (Qa values) for even-even heavy and superheavy nuclei with 96 ≤ Z ≤ 118 calculated with the energy density functional SLy4. They are compared to experimental data (closed symbols).
Defining a universal energy density functional requires computations of ground-state energies and other observable properties across the chart of nuclei. This effort also requires the extensive use of computation in order to calculate the properties of several thousand nuclei numerous times. Once the next-generation functional has been obtained, further refinements will be necessary. DFT breaks laboratory-system symmetries such as particle number, parity, and angular momentum. These symmetries should be restored in order to compare theoretical results to observed nuclear excitation spectra. Projection techniques, performed self-consistently, will require about 1,000 computational cycles for each nucleus. The modern energy density functional should also lead to a microscopic theory of nuclear reactions and fission (figure 10). Here, the challenges include the proper treatment of the resonant and non-resonant continuum, development of the microscopic optical potential and theory for indirect reactions, and description of the large-amplitude nuclear collective motion.
The advent of terascale—and very shortly petascale—computing platforms is paving the way for today's progress in theoretical studies of the physics of nuclei.
Figure 9. Comparison between experimental and theoretical excitation energies of the lowest 2+ states in 519 even-even nuclei. Calculations are based on a microscopic collective Hamiltonian in five dimensions in which the potential energy and the tensor of inertia are obtained from constrained triaxial DFT using the Gogny D1S functional.

The advent of terascale—and very shortly petascale—computing platforms is paving the way for today's progress in theoretical studies of the physics of nuclei. The UNEDF collaboration has planned a comprehensive study of all nuclei based on the most accurate knowledge of the strong nuclear interaction, the most reliable theoretical approaches, and the massive use of currently available computer power. Until recently, such an undertaking was hard to imagine, and even today such an ambitious endeavor would be far beyond what a single researcher or a traditional research group could carry out. But the prospects look good: the UNEDF collaboration is witnessing breakthrough calculations of nuclear properties that the previous two generations of scientists had only begun to dream about.
Figure 10. The figure shows the energy surface of the transuranic element 258Fm calculated within the self-consistent nuclear density functional theory with the SkM* functional as a function of two collective variables: the total quadrupole moment Q20 representing the elongation of nuclear shape, and the total octupole moment Q30 representing the left-right shape asymmetry. Indicated are the two static fission valleys: asymmetric path aEF leading to asymmetric mass split of fission fragments, and symmetric-compact path sCF corresponding to a division into nearly spherical fragments. Experimentally, a transition is observed from an asymmetric distribution of mass splits in neutron-deficient fermium isotopes to a more symmetric distribution when getting closer to 264Fm. The density functional theory calculations explain this phenomenon in terms of shell effects in the emerging fission fragments approaching the doubly-magic 132Sn nuclei. In calculations, all possible nuclear shapes, including triaxial and reflection-asymmetric (pear-like) shapes, are allowed.

Contributors: For the UNEDF collaboration: Dr. George F. Bertsch, University of Washington; Dr. David J. Dean, ORNL; and Dr. Witold Nazarewicz, University of Tennessee and ORNL
Further Reading
RIA Theory Bluebook: A Road Map
Scientific Opportunities with a Rare-Isotope Facility in the United States