Exascale for Energy
The Role of Exascale Computing in Energy Security
How will the United States satisfy energy demand in a tightening global energy marketplace while, at the same time, reducing greenhouse gas emissions? Exascale computing, expected to be available within the next 8–10 years, may play a crucial role in answering that question by enabling a paradigm shift from test-based to science-based design and engineering.
 
Energy security has two key dimensions: reliability and resilience. Reliability means that energy users are able to access the energy they need, when they need it, at affordable prices. Resilience means the ability of the system to cope with shocks and change.
In today's world, energy security, economic security, national security, and environmental security are all closely interrelated. For example, world energy consumption is projected to grow by 44% over the period of 2006–2030, according to the Energy Information Administration's (EIA) 2009 International Energy Outlook (figure 1). Competition for energy supplies is certain to increase; but with U.S. oil production declining (figure 2), economic security and national security may be at risk unless we can obtain assured fuel supplies from reliable sources.
EIA; ILLUSTRATION: A. TOVEY
Figure 1. World energy consumption is projected to grow by 44% over the 2006–2030 period, with non-OECD (Organisation for Economic Co-operation and Development) countries accounting for 82% of the increase.

EIA; ILLUSTRATION: A. TOVEY
Figure 2. U.S. oil production and foreign oil imports (thousands of barrels per day).
At the same time, environmental security requires that the world transition to carbon-neutral energy sources that do not contribute to global warming. This constraint is hardly altruistic. In the 2007 report National Security and the Threat of Climate Change, a blue-ribbon panel of military experts concluded that climate change is a threat multiplier in already fragile regions, exacerbating conditions that lead to failed states — the breeding grounds for extremism and terrorism — while adding to tensions even in stable regions of the world. Climate change, national security, and energy dependence are a related set of global challenges.
Improving America's energy security while stabilizing the Earth's climate requires, in the near term, widespread implementation of existing conservation and energy efficiency technologies that reduce carbon dioxide emissions; in the mid-term, substantial improvement of existing energy technologies; and in the long term, development of new technologies and fuel sources.
Modeling and simulation using exascale computers — capable of one million trillion (10¹⁸) calculations per second — will make a significant contribution to mid- and long-term advances in energy technologies by enabling a paradigm shift from test-based to science-based design and engineering. Computational modeling of complete power generation systems and engines, based on scientific first principles, will accelerate the improvement of existing energy technologies and the development of new transformational technologies by pre-selecting the designs most likely to be successful for experimental validation, rather than relying on trial and error.
The predictive understanding of complex engineered systems made possible by computational modeling will also reduce construction and operations costs, optimize performance, and improve safety. Exascale computing will make possible fundamentally new approaches to quantifying uncertainty in safety and performance engineering.
This article discusses potential contributions of exascale modeling in three areas of energy production: nuclear power, combustion, and renewable sources of energy, which include hydrogen fuel, bioenergy conversion, photovoltaic solar energy, and wind turbines. Nuclear, combustion, photovoltaics, and wind turbines represent existing technologies that can be substantially improved, while hydrogen and biofuels represent long-term R&D projects that will be needed for a carbon-neutral economy. More detailed analyses of these topics can be found in the reports listed at the end of this article (Further Reading, p19).
 
Nuclear Power
Nuclear fission plays a significant and growing role in world energy production. Currently, 436 nuclear power plants in 30 countries produce about 15% of the electrical energy used worldwide. Sixteen countries depend on nuclear power for at least a quarter of their electricity. The United States is the world's largest producer of nuclear power, with more than 30% of worldwide nuclear generation of electricity. America's 104 nuclear reactors produced 809 billion kWh in 2008, almost 20% of total electrical output.
Despite more than 30 years with almost no new construction, U.S. reliance on nuclear power has continued to grow. The U.S. nuclear industry has maximized power plant utilization through improved refueling, operating efficiency, maintenance, and safety systems at existing plants. There is growing interest in operating existing reactors beyond their original design lifetimes.
The Energy Policy Act of 2005 has stimulated investment in a broad range of electricity infrastructure, including nuclear power. Over the last few years more than a dozen utility companies have announced their intentions to build a total of 27 new nuclear reactors. In addition, to meet the rising demand for carbon-free energy, a new generation of advanced nuclear energy systems is under development that would be capable of consuming transuranic elements from recycled spent fuel (sidebar "Simulating Reactor Core Coolant Flow" p6). These advanced reactors would extract the full energy value of the fuel, produce waste that does not create long-term hazards, and reduce proliferation of nuclear weapons materials.
Exascale computing is poised to play a major role in the development of next-generation nuclear plants. By enabling the high-fidelity modeling and simulation of complete nuclear power systems, exascale computing would change nuclear engineering from a test-based to a science-based discipline. Such modeling can benefit nuclear energy by:
  • Accelerating the iteration cycle of technology evaluation, design, engineering, and testing to optimize existing and new nuclear energy applications
  • Shortening the licensing process by providing reliably predictive integrated performance models that reduce uncertainties
  • Reducing construction and operations costs while also reducing uncertainty and risk
DOE SC/NE ILLUSTRATION: A. TOVEY
Figure 4. Individual simulation tools and integrated performance and safety codes (IPSCs) involve different physical phenomena at varying scales of interest. Source: Science Based Nuclear Energy Systems Enabled by Advanced Modeling and Simulation at the Extreme Scale workshop.  
Modeling has always played a key role in nuclear engineering, design, and safety analysis. Computational analyses based on large experimental databases have been used to analyze material properties, fuel performance, reactor design, safety scenarios, and waste storage. But legacy applications do not provide the high fidelity required to understand fundamental processes that affect facility efficiency, safety, and cost. These processes include:
  • Determination of material properties of fuels and structural materials under both static and dynamic conditions, including nuclear (for example, neutron and gamma reactions), thermophysical (such as thermal conductivity), mechanical (such as fracture toughness), and chemical (for example, corrosion rates). Nuclear fuel assemblies must perform in extreme environments where they are subjected to stress, heat, corrosion, and irradiation, all of which can lead to progressive degradation of the fuel cladding materials and other structural components. Researchers hope that simulations will help them discover new ways of preventing or mitigating material degradation.
  • Spent fuel reprocessing is an option that was abandoned in the 1970s but is now looked on more favorably. Reprocessing involves dissolving the spent fuel in acid, treating the fuel in a series of solvent extraction processes, and fabricating it into fuel or waste forms. Current models provide only qualitative descriptions of process behavior and are unable to answer many key questions. To support the detailed design and safe operation of reprocessing plants, advanced reprocessing models require improved chemistry, fluid dynamics, interfaces with nuclear criticality calculations, and whole-plant modeling.
  • Fuel development and performance evaluation is currently an empirical process that takes decades. New fuels must be fabricated, be tested in a test reactor under multiple accident scenarios, undergo post-irradiation examinations, and finally be placed in an operational reactor for several cycles. Fuel performance simulation tools could reduce the current 10–15 year qualification time by a factor of 3. But these tools must be comprehensive enough to predict the thermal, mechanical, and chemical response of the fuel rod throughout its irradiation lifetime.
  • Reactor design and safety simulation tools need improved physical, numerical, and geometric fidelity.
There are multiple challenges in modeling the design, performance, and safety of a nuclear reactor. Figure 4 illustrates the wide range of length and time scales spanned by the physical phenomena that must be captured.
DOE SC/NE ILLUSTRATION: A. TOVEY
Figure 5. Computational requirements for nuclear energy modeling, with units petaflop/s (PF) and exaflop/s (EF). Source: Science Based Nuclear Energy Systems Enabled by Advanced Modeling and Simulation at the Extreme Scale workshop.
A May 2009 workshop sponsored by the DOE Office of Science and DOE Office of Nuclear Energy developed a detailed picture of the computational requirements for nuclear energy modeling, culminating in a 10 exaflop/s requirement by the year 2024. The timeline in figure 5 (p8) includes nuclear energy drivers, science simulations to resolve science unknowns, and engineering simulations that use integrated codes. The workshop participants estimated that it will take nearly 15 years to resolve many of the scientific questions identified in figure 5 and to establish fully predictive, integrated codes that can quantify uncertainties. But they also estimated that exascale modeling could reduce the construction cost of a large-scale nuclear plant by 20%, saving as much as $3 billion on a $15 billion plant.
DOE EXASCALE INITIATIVE
Figure 6. Three-dimensional predictive simulations of fuel pin behavior from microstructure evolution will require exascale resources.
Exascale resources will improve the geometric, numerical, and physics fidelity in modeling key phenomena such as the evolution of fuel pin microstructure and behavior (figure 6, p9):
  • Improved geometric fidelity will extend lower-length-scale models to sub-10 micron resolution of three-dimensional phenomena such as microstructure evolution and material failure, validated with uncertainty quantification methodologies.
  • Improved numerical fidelity will involve bridging vastly different time and length scales with multi-physics phenomena, such as bubble-fission fragment interactions (including molecular dynamics), and scaling up oxide and metal models into pellet simulations.
  • Improved physics fidelity will apply to modeling phenomena such as fission gas bubble formation, transport, and release; fuel chemistry and phase stability; fuel-cladding mechanical interactions; and thermal hydraulics, turbulence, and coolant flow in pin assemblies, and their effect on fuel and cladding evolution.
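As a minimal illustration of one ingredient of such fuel modeling, the sketch below evaluates the classic steady-state radial temperature profile of a cylindrical fuel pellet with uniform volumetric heating, T(r) = T_surface + q'''(R^2 - r^2)/(4k). The linear power, pellet radius, conductivity, and surface temperature are hypothetical illustrative values; real fuel performance codes couple this kind of conduction solve to burnup-dependent properties, fission gas behavior, mechanics, and coolant flow.

```python
import math

# Steady-state radial temperature in a cylindrical fuel pellet with uniform
# volumetric heat generation q''' and constant conductivity k:
#     T(r) = T_surface + q''' * (R^2 - r^2) / (4 k)
# All numerical values below are hypothetical and purely illustrative.

linear_power = 20e3          # W/m, hypothetical linear heat rate of the pin
R = 4.1e-3                   # m, pellet radius (illustrative)
k = 3.0                      # W/(m K), illustrative oxide-fuel conductivity
T_surface = 700.0            # K, pellet surface temperature (illustrative)

q_vol = linear_power / (math.pi * R**2)        # volumetric heat rate, W/m^3

def T(r):
    """Pellet temperature (K) at radius r (m)."""
    return T_surface + q_vol * (R**2 - r**2) / (4.0 * k)

for frac in (0.0, 0.5, 1.0):
    r = frac * R
    print(f"r/R = {frac:.1f}   T = {T(r):7.1f} K")
```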
The creation of integrated performance and safety codes faces considerable technical challenges that range from improvements in software engineering and numerical methods to the development of more fully integrated physics models. But if these challenges are met with a sustained and focused effort, exascale computing can revolutionize the modeling and design of nuclear energy systems.
 
LLNL
Figure 7. Combustion accounts for 85% of the energy used in the United States.
Combustion
Currently 85% of our nation's energy comes from hydrocarbon combustion, including petroleum, natural gas, and coal (figure 7, p10). Although there may be long-term alternatives to combustion for some uses, changes in fuels and energy technologies tend to happen gradually (figure 8, p10). High infrastructure costs suggest that combustion may continue to be the predominant source of energy for the next 30–50 years.
EIA; ILLUSTRATION: A. TOVEY
Figure 8. Gradual changes in the U.S. energy supply, 1850–2000.
Transportation is the second largest energy consumer in the United States, accounting for two-thirds of petroleum usage. Transportation technologies offer opportunities for 25–50% improvements in efficiency through strategic technical investments in both advanced fuels and new low-temperature engine concepts. These improvements could save roughly three million barrels of oil per day out of total current U.S. consumption of about 20 million barrels per day.
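As a rough consistency check on these figures, the arithmetic below treats the quoted efficiency improvement as a proportional reduction in transportation fuel use; the two-thirds share and the 20-million-barrel total come from the text, while the proportional-reduction assumption is a simplification.

```python
# Back-of-envelope check of the oil-savings figure quoted above.
# Simplifying assumption: an efficiency gain reduces transportation's
# share of petroleum use proportionally.

us_consumption_bbl_per_day = 20e6          # total U.S. oil use (from the text)
transport_share = 2.0 / 3.0                # transportation's share (from the text)
transport_bbl_per_day = us_consumption_bbl_per_day * transport_share

for efficiency_gain in (0.25, 0.50):
    saved = transport_bbl_per_day * efficiency_gain
    print(f"{efficiency_gain:.0%} gain -> ~{saved / 1e6:.1f} million barrels/day saved")

# A 25% gain on ~13 million barrels/day is ~3 million barrels/day,
# consistent with the savings cited in the text.
```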
A 2006 DOE Office of Basic Energy Sciences workshop on Basic Research Needs for Clean and Efficient Combustion of 21st Century Transportation Fuels identified a single, overarching grand challenge: the development of a validated, predictive, multiscale, combustion modeling capability to optimize the design and operation of evolving fuels in advanced engines for transportation applications.
Concern for energy security is driving the development of alternative fuel sources, such as oil shale, oil sands, syngas, and renewable fuels such as ethanol, biodiesel, and hydrogen. These new fuel sources all have physical and chemical properties that are very different from traditional fuels. New combustion systems, for both stationary power plants and transportation, need to be developed to use these fuels efficiently while meeting strict emissions requirements.
Modeling internal combustion engines is a complex, multi-physics, multiscale problem. Engine combustion processes involve physical and chemical phenomena that span a wide dynamic range (~10⁹) in spatial and temporal scales, involving hundreds of chemical species and thousands of reactions. The microscopic reaction chemistry affects the development of the macroscopic turbulent flow field in engines, and the change in temperature due to the altered flow dramatically affects the reaction rates. Changes in fuel composition directly affect phenomena at several scales. For example, at microscopic scales, fuel changes affect some reaction rates; at larger scales, changes in bulk liquid properties affect fuel injection and evaporation.
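The strong two-way coupling between temperature and chemistry comes largely from the exponential form of reaction rate constants. The sketch below evaluates a generic Arrhenius rate, k(T) = A·exp(-Ea/RT); the pre-exponential factor and activation energy are hypothetical, representative values rather than parameters of any particular fuel mechanism.

```python
import math

# Why combustion chemistry is so sensitive to temperature: an Arrhenius
# rate constant k(T) = A * exp(-Ea / (R*T)) grows exponentially with T.
# A and Ea below are hypothetical, representative values.

R = 8.314          # J/(mol K), universal gas constant
A = 1.0e13         # 1/s, hypothetical pre-exponential factor
Ea = 150e3         # J/mol, hypothetical activation energy

def k(T):
    """Arrhenius rate constant at temperature T (kelvin)."""
    return A * math.exp(-Ea / (R * T))

for T in (1000.0, 1100.0, 1200.0):
    print(f"T = {T:6.0f} K   k = {k(T):.3e} 1/s")

# A 10% rise in temperature near 1,000 K increases this rate by roughly a
# factor of five, which is why turbulence-induced temperature fluctuations
# feed back so strongly on the chemistry.
```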
Understanding how changes at specific scales affect the overall performance of an engine requires very careful coupling across the scales, as well as a wide variety of computational techniques, such as quantum dynamics, molecular dynamics, kinetic Monte Carlo, direct numerical simulation, large eddy simulation, and Reynolds-averaged simulation (figure 9, p11; sidebar "Simulations of Alternative Fuel Combustion Use DNS and LES Methods" p14).
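To make the distinction among these techniques a little more concrete, the sketch below applies the basic operation behind large eddy simulation, spatial filtering, to a synthetic one-dimensional "velocity" signal: the filtered part represents the resolved large eddies, and the remainder is the subgrid-scale content that an LES model must approximate. The signal and filter width are illustrative choices, not taken from any engine simulation.

```python
import numpy as np

# Illustration of the filtering idea behind large eddy simulation (LES):
# a spatial filter splits a field into resolved large scales (computed
# directly) and subgrid scales (which must be modeled). The signal below
# is synthetic and stands in for one velocity component.

n = 1024
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
u = np.sin(3 * x) + 0.5 * np.sin(7 * x) + 0.2 * rng.standard_normal(n)

def box_filter(field, width):
    """Top-hat (box) filter of the given width, applied periodically via FFT."""
    kernel = np.zeros_like(field)
    kernel[:width] = 1.0 / width
    kernel = np.roll(kernel, -width // 2)      # center the kernel on zero
    return np.real(np.fft.ifft(np.fft.fft(field) * np.fft.fft(kernel)))

u_resolved = box_filter(u, width=32)           # large, resolved eddies
u_subgrid = u - u_resolved                     # fluctuations to be modeled

print("resolved-scale variance:", round(float(np.var(u_resolved)), 4))
print("subgrid-scale variance: ", round(float(np.var(u_subgrid)), 4))
```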
DOE EXASCALE INITIATIVE ILLUSTRATION: A. TOVEY
Figure 9. Multiscale modeling describes internal combustion engine processes from quantum scales up to device-level, continuum scales.
Fuel and engine technologies should be developed together if we want to move them as quickly as possible from the laboratory to the marketplace. This co-development requires a new approach to engineering research and development: instead of hardware-intensive, experience-based engine design, which is slow and labor-intensive, we need a faster, simulation-intensive, science-based design process. High-fidelity multiscale modeling on exascale computers will play a key role in enabling this transition.
Lean premixed burners are one example of a new technology being considered for stationary gas turbines, which provide a significant portion of our electric power generation. Theoretically these burners could operate cleanly and efficiently with a variety of fuels, such as hydrogen, syngas, and ethanol, because of their high thermal efficiency and low emissions of NOx due to lower post-flame gas temperatures. However, the lean fuel mix makes some burners susceptible to flame instability and extinction, emissions of unburned fuel, and large pressure oscillations that can result in poor combustion efficiency, toxic emissions, or even mechanical damage to turbine machinery. An important exception to this trend is the low-swirl burner, which produces a stable flame (sidebar "Low-Swirl Combustion: Experiments and Simulations Working Together" p16).
Researchers are only beginning to acquire the fundamental understanding of premixed flame propagation and structure, across a variety of fuels, that is required to meet the engineering design goals for lean premixed burners. Exascale computing will play a decisive role in determining whether we are able to design these types of systems.
Effective design of both power generation and transportation systems will require new computational tools that provide unprecedented levels of chemical and fluid dynamical fidelity. Current engineering practice is based on relatively simple models for turbulence combined with phenomenological models for the interaction of flames with the underlying turbulent flow. Design computations are often restricted to two-dimensional or relatively coarse three-dimensional models with low-fidelity approximations of the chemical kinetics.
A dramatic improvement in fidelity will be required to model the next generation of combustion devices (figure 10, p12). Theory cannot yet provide detailed flame structures or the progression of ignition in complex fuels, while experimental diagnostics provide only a limited picture of flame dynamics and ignition limits. Numerical simulation, working in concert with theory and experiment, has the potential to address the interplay of fluid mechanics, chemistry, and heat transfer needed to address key combustion design issues.
J. OEFELEIN AND R. BARLOW, SNL
Figure 10. Combustion science breakthroughs enabled by algorithms, applications, and HPC capability.
The grand challenge is to develop a validated, predictive, multiscale, combustion modeling capability that can optimize the design and operation of evolving fuels in advanced engines and power plants. Using exascale computing systems, thousands of design iterations — each corresponding to a high-fidelity multiscale simulation — could accelerate the optimization and implementation of new technologies.
 
Alternative and Renewable Energy Sources
According to the Energy Information Administration's Annual Energy Outlook 2006, renewable energy contributes only 6.8% of the present U.S. energy supply, as shown in figure 11 (p13). Despite the modest percentage of energy currently supplied by renewables, ambitious state and national goals have been set for the future renewable energy supply. These goals include making hydrogen a major contributor to future transportation fuel, producing sufficient biofuels to reduce gasoline usage by 20% in 10 years, deploying market-competitive photovoltaic systems by 2015, and generating 20% of the total U.S. electrical supply from wind energy by 2030.
EIA; ILLUSTRATION: A. TOVEY
Figure 11. U.S. primary energy consumption by source and sector, 2006 (quadrillion BTU).
Meeting these ambitious goals will require major technological advances. In response to these challenges, the DOE Office of Science (SC) and the DOE Office of Energy Efficiency and Renewable Energy (EE) convened a workshop on Computational Research Needs for Alternative and Renewable Energy in Rockville, Maryland, September 19–20, 2007. Discussions at the workshop made clear that realizing the potential for alternative and renewable energy will require robust computational capabilities — including ultrascale computing systems, scalable modeling and simulation codes, capacious data storage and informatics, and high-speed communication networks.
Four renewable energy technologies that can greatly benefit from computational research are hydrogen fuel, bioenergy conversion, photovoltaic solar energy conversion, and wind energy.
 
J. OEFELEIN, SNL; J. CHEN, SNL
Figure 13. LES of high-pressure injection processes under actual operating conditions. To the left are penetration measurements and a corresponding shadowgraph image acquired experimentally. In the center is a representative LES calculation at identical conditions showing the complete computational domain. To the right is an enlargement of one of the jets showing the turbulent structure at an instant in time.
Renewable Fuels: Hydrogen
Molecular hydrogen (H2) is an energy carrier, not an energy source, but it can store energy for later use in both stationary and transportation applications. Hydrogen can be produced using a variety of domestically available sources including renewable solar, wind, and biomass resources, fossil fuels (such as natural gas and coal), and nuclear energy. In a fuel cell, the free energy associated with the formation reaction of water from hydrogen and oxygen is harnessed as electrical energy. Hydrogen can also be combusted in an internal combustion engine.
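The electrical work available from that formation reaction sets the ideal cell voltage. The short calculation below uses standard-state thermodynamic values (roughly -237 kJ/mol for the Gibbs free energy of liquid water formation, two electrons transferred per hydrogen molecule, and the Faraday constant) to recover the familiar reversible limit of about 1.23 V; real cells operate below it because of activation, ohmic, and mass-transport losses.

```python
# Ideal (reversible) voltage of a hydrogen fuel cell from the free energy
# of the water-formation reaction mentioned above:
#     H2 + 1/2 O2 -> H2O,   E = -dG / (n * F)
# Standard-state values are used; real cells operate below this limit.

dG = -237.1e3      # J/mol, standard Gibbs free energy of formation of liquid water
n = 2              # electrons transferred per H2 molecule
F = 96485.0        # C/mol, Faraday constant

E_rev = -dG / (n * F)
print(f"Reversible cell voltage: {E_rev:.2f} V")   # ~1.23 V
```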
Advances in computational science are needed to accelerate the rate of discovery and implementation in all aspects of the future hydrogen energy economy. For example, fuel cell technologies require breakthroughs in catalysis, materials design and integration, and a deeper understanding of ion, electron, and molecular transport mechanisms. Computational models can accelerate experimental research and point the way to more efficient and less costly fuel cells.
For hydrogen fuel to displace our reliance on petroleum, we need to develop methods to store hydrogen inexpensively in vehicles in a safe, convenient, compact, and lightweight package. The DOE has been supporting basic research (DOE Office of Science) and applied research (DOE Office of Energy Efficiency and Renewable Energy) to develop hydrogen storage systems that can be incorporated into vehicles that will be desirable to consumers, but the challenge of producing such systems has not yet been met.
Computation and theory have already played an important role in advancing next-generation, solid-state hydrogen storage options. For example, first-principles calculations have identified titanium carbide nanoparticles as a candidate hydrogen storage medium (figure 17, p17). However, much more needs to be done, as no storage method tested to date satisfies all of the requirements for efficiency, size, weight, cost, and safety for transportation vehicles. Computational science will play a critical role in advancing options such as metal hydrides, chemical hydrogen carriers, and sorption materials; but even physical storage methods would benefit from improved computational resources and techniques. For example, a better understanding of the mechanical properties of materials could lead to new, less expensive fibers or composites for containers that could store hydrogen via compression or liquefaction.
R. CHENG, LBNL ILLUSTRATION: A. TOVEY
Figure 17. Developing systems for high-density storage of hydrogen is crucial to successful hydrogen technology deployment. The depicted Ti14C13 titanium carbide nanoparticle displays aspects of both hydrogen spillover and dihydrogen bonding, and can adsorb 68 hydrogen atoms for nearly 8% hydrogen storage by weight.
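As a quick arithmetic check of the "nearly 8%" figure in the caption, the snippet below computes the gravimetric hydrogen fraction of a Ti14C13 cluster carrying 68 hydrogen atoms from standard atomic masses.

```python
# Gravimetric hydrogen capacity of a Ti14C13 cluster with 68 adsorbed H atoms,
# computed from standard atomic masses as a check on the caption's figure.

m_Ti, m_C, m_H = 47.867, 12.011, 1.008     # g/mol

host = 14 * m_Ti + 13 * m_C                # mass of the Ti14C13 cluster
hydrogen = 68 * m_H                        # mass of the adsorbed hydrogen
wt_fraction = hydrogen / (host + hydrogen)

print(f"Gravimetric hydrogen capacity: {wt_fraction:.1%}")   # ~7.7%, "nearly 8%"
```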
The hydrogen panel at the SC/EE workshop identified the following five priority research directions, which cut across all hydrogen-related technologies and address core needs associated with specific underlying fundamental processes:
  • Rate processes in hydrogen production, storage, and use
  • Inverse materials and system design
  • Synthesis of targeted materials
  • Long-term behavior and lifetime simulation
  • Linking models and scales, from atoms to systems.
Renewable Fuels: Bioenergy Conversion
Alternative and renewable fuels derived from biomass offer the potential to reduce our dependence on imported oil, support national economic growth, and mitigate global climate change. However, technological breakthroughs are needed to overcome key barriers to the development and commercialization of these fuels. These barriers include the high cost of pretreatment processes, enzymes, and microbial biocatalysts for biochemical conversion processes.
The lignocellulose in biomass is highly recalcitrant to most of the physical, chemical, and biochemical treatments currently used to liberate sugars. The cell walls of lignocellulose contain highly-ordered, water-excluding microfibrils of crystalline cellulose that pose a significant barrier to enzymatic hydrolysis. The cellulose microfibrils themselves are laminated with hemicellulose, pectin, and lignin polymers. This complex matrix of heteropolymers is the main reason why plant biomass has resisted low-cost chemical and enzymatic treatments.
Cellulases can be used to hydrolyze the polysaccharides in the plant cell wall to fermentable monosaccharides, but the large quantities of expensive cellulases that are currently needed make the process cost-prohibitive. Despite Herculean attempts, the specific activity of cellulases has not been improved after more than three decades of research. A better understanding of the structure-function relationships governing the activity of soluble enzymes on insoluble polymeric substrates is essential to break this bottleneck. Computation uniquely provides a multiscale framework of understanding to guide and interpret experimentation on complex biological systems.
The priority research directions for biomass conversion identified in the SC/EE workshop include understanding lignocellulosic biomass depolymerization and hydrolysis, and chemical energy extraction from heterogeneous biomass. At Oak Ridge National Laboratory's BioEnergy Science Center (BESC), a new method for molecular dynamics simulation of lignocellulosic biomass is currently being tested.
BESC researchers have developed a strategy for a fast all-atom molecular dynamics simulation of multimillion-atom biological systems on massively parallel supercomputers, using models of cellulose and lignocellulosic biomass in an aqueous solution. Their approach involves using the reaction field (RF) method for the computation of long-range electrostatic interactions, which permits efficient scaling on many thousands of cores. Although the range of applicability of the RF method for biomolecular systems remains to be demonstrated, for the benchmark systems the use of the RF produces molecular dipole moments, Kirkwood G factors, other structural properties, and mean-square fluctuations in excellent agreement with those obtained with the commonly used Particle Mesh Ewald (PME) method.
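A common form of the reaction-field pair interaction, sketched below, shows why the method scales so well: each pair contribution depends only on charges and separation inside a finite cutoff, with the medium beyond the cutoff treated as a dielectric continuum, so no global reciprocal-space sum is needed. The cutoff, dielectric constant, and charges are generic illustrative values, and this is a schematic of the RF functional form rather than the BESC simulation code itself.

```python
# Reaction-field (RF) electrostatics in schematic form: pair interactions are
# truncated at a cutoff r_cut, and the medium beyond the cutoff is treated as
# a dielectric continuum. Parameter values below are generic illustrations.

EPS0_FACTOR = 138.935458   # kJ mol^-1 nm e^-2, Coulomb constant in common MD units
eps_rf = 78.0              # dielectric constant assumed beyond the cutoff
r_cut = 1.2                # nm, cutoff radius

k_rf = (eps_rf - 1.0) / ((2.0 * eps_rf + 1.0) * r_cut**3)
c_rf = 1.0 / r_cut + k_rf * r_cut**2       # shift so the energy vanishes at r_cut

def rf_pair_energy(qi, qj, r):
    """RF electrostatic energy (kJ/mol) of two charges at separation r (nm)."""
    if r >= r_cut:
        return 0.0                         # no interaction beyond the cutoff
    return EPS0_FACTOR * qi * qj * (1.0 / r + k_rf * r**2 - c_rf)

# Example: two opposite partial charges 0.4 nm apart.
print(f"{rf_pair_energy(+0.42, -0.42, 0.40):.2f} kJ/mol")
```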
Due to its complexity, lignocellulose poses significant challenges to molecular dynamics simulation. Among these are the characteristic length scales (from angstroms to micrometers) and time scales (nanoseconds to microseconds, and beyond) of events pertinent to the recalcitrance of biomass to hydrolysis into sugars. To access these length and time scales, standard molecular dynamics protocols must be modified to scale up to massively parallel machines.
The BESC researchers simulated a system of lignocellulosic biomass containing 52 lignin molecules each with 61 monomers, a cellulose fibril of 36 chains with 80 monomers per chain (figure 18, p18), and 1,037,585 water molecules, totaling 3,316,463 atoms.
T. SPLETTSTOESSER, ORNL
Figure 18. A simulated cellulose fibril showing (a) the cross-section and (b) a side perspective. The fibril consists of 18 origin chains (blue) and 18 center chains (green). The axes of the unit cell are also indicated.
Their studies showed that the properties derived using the PME method are well reproduced using the computationally less demanding reaction field method. Scaling benchmarks showed that the use of RF drastically improves the parallel efficiency of the algorithm relative to PME, yielding ~30 nanoseconds of simulated reaction time per computing day, running at 16.9 teraflop/s on 12,288 cores of ORNL's Cray XT5 system, Jaguar. Consequently, microsecond time scale molecular dynamics simulations of multimillion-atom biomolecular systems now appear to be within reach.
Despite this important advance, some critical biological phenomena, such as ligand binding, require the simulation of relatively long time scales (up to 1,000 seconds). For this type of application, exascale computing will be required.
 
Renewable Electricity: Photovoltaic Solar Energy Conversion
Electrical generation by solar energy capture with photovoltaic systems has virtually no environmental impact beyond device manufacturing, is ideal for individual homes and other distributed generation, and taps a virtually unlimited resource. Currently, however, it is two to four times more expensive than most residential or commercial electricity rates. So, although photovoltaics serves a number of excellent small markets, it currently supplies only a small fraction of total electricity use and requires further cost reductions and efficiency improvements to make a major contribution to meeting electrical demand.
More than 30 years of experimentation was needed for the relatively simple thin-film silicon solar cell to reach its current efficiency of 24%. To develop next-generation solar cells based on new materials and nanoscience quickly enough to help address the global warming crisis, a different research paradigm is essential. Exascale computing can change the way the research is done — both through a direct numerical material-by-design search and by enabling a better understanding of the fundamental processes in nanosystems that are critical for solar energy applications.
Unlike bulk systems, nanostructures cannot be represented by just a few atoms in computational simulations. They are coordinated systems, and any attempt to understand the materials' properties must simulate the system as a whole. Density functional theory (DFT) allows physicists to simulate the electronic properties of materials, but DFT calculations are time-consuming; and any system with more than 1,000 atoms quickly overwhelms computing resources, because the computational cost of the conventional DFT method scales as the third power of the size of the system. Thus, when the size of a nanostructure increases 10 times, computing power must increase 1,000 times. Photovoltaic nanosystems often contain tens of thousands of atoms. So one of the keys to unleashing the energy harvesting power of nanotechnology is to find a way of retaining DFT's accuracy while performing calculations with tens of thousands of atoms.
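The cubic-scaling argument can be made concrete with a few lines of arithmetic: under conventional O(N³) DFT, each tenfold increase in atom count multiplies the cost by a thousand, whereas a linear-scaling approach grows only tenfold. The one-hour reference time below is hypothetical.

```python
# The cost of conventional density functional theory grows as the cube of
# the system size, while a linear-scaling method grows proportionally.
# The one-hour reference time for 1,000 atoms is hypothetical.

reference_atoms = 1_000
reference_hours = 1.0

for atoms in (1_000, 10_000, 100_000):
    ratio = atoms / reference_atoms
    cubic_hours = reference_hours * ratio**3      # conventional O(N^3) DFT
    linear_hours = reference_hours * ratio        # linear-scaling method
    print(f"{atoms:>7,} atoms:  O(N^3) ~ {cubic_hours:>9,.0f} h   O(N) ~ {linear_hours:>4,.0f} h")
```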
Researchers at Lawrence Berkeley National Laboratory (LBNL) have demonstrated a way to accomplish this using a divide-and-conquer algorithm implemented in the new Linear Scaling Three-Dimensional Fragment (LS3DF) method. In November 2008, this research was honored with the Association for Computing Machinery (ACM) Gordon Bell Prize for Algorithm Innovation.
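The essence of a divide-and-conquer approach is to solve many small, overlapping pieces independently and then combine them so that nothing is double counted. The one-dimensional cartoon below illustrates that bookkeeping with a stand-in "fragment energy"; it is a schematic of the general idea, not the actual LS3DF fragment scheme or its quantum mechanical details.

```python
# One-dimensional cartoon of the divide-and-conquer idea: split a long chain
# into overlapping fragments, "solve" each fragment independently, and
# subtract the overlaps so that every atom is counted exactly once.
# This is a schematic of the bookkeeping, not the LS3DF algorithm.

def fragment_energy(atoms):
    """Stand-in for an expensive quantum calculation on a small fragment."""
    return float(len(atoms))          # pretend each atom contributes 1 unit

chain = list(range(100))              # 100 "atoms" in a line
size, overlap = 10, 2                 # fragment length and overlap region

total = 0.0
start = 0
while start < len(chain):
    frag = chain[start : start + size]
    total += fragment_energy(frag)
    if start > 0:                     # subtract the doubly counted overlap
        total -= fragment_energy(chain[start : start + overlap])
    start += size - overlap

print(total)                          # 100.0: each atom counted exactly once
```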
In a solar cell, there are a few key steps that determine overall efficiency in the conversion of sunlight to electricity: light absorption, exciton generation, exciton dissociation into separated electron and hole, carrier transport, and charge transfer across nanocontacts. A few aspects of nano-solar cells often limit their overall efficiency: weak absorption of light, electron-hole recombination, nanocontact barriers, or large overpotentials. Unfortunately, many of these processes are not well understood. This is one instance where computational simulations can play a critical role.
For example, materials that have separate electron states within the energy band gap, such as oxygen-doped zinc telluride (ZnTeO), have been proposed for next-generation solar cells. Such systems could theoretically increase solar cell efficiencies from 30% to 63%. To test this hypothesis, the LBNL researchers used LS3DF to calculate the electron wave function of a 13,824-atom ZnTeO supercell on 17,280 cores of NERSC's Cray XT4 system, Franklin (figure 19, p18). The LS3DF calculation took just a few hours, compared with the four to six weeks it would have taken using a direct DFT method. The results showed that ZnTeO is a good candidate for photovoltaic applications, with a theoretical power efficiency estimated to be around 60%.
L.-W. WANG, LBNL
Figure 19. The electron wave functions for an oxygen-induced state (left) and ZnTe conduction band edge state (right) in a ZnTeO alloy with 3% O. The gray, blue, and red dots correspond to Zn, Te, and O atoms respectively.
In subsequent tests, LS3DF ran on more than 100,000 supercomputer cores, making it the first variationally accurate, linearly scaling ab initio electronic structure code to be efficiently parallelized to such a large number of processors.
To go beyond nanoscale simulations, algorithmic breakthroughs like LS3DF, combined with multiscale methods and exascale computers, could be used for the integrated simulation and design of entire photovoltaic systems, shortening the cycle for device development and optimization while improving the efficiency and reducing the cost of photovoltaics.
 
Renewable Electricity: Wind Energy
When it comes to generating electricity, wind power is the technology closest to being cost competitive with fossil fuel-driven power generating plants. Although its use by utilities is limited by its intermittent nature, there are sufficient wind energy resources in the continental United States to meet a substantial portion of national energy needs at a competitive cost. The goal of generating 20% of the total U.S. electrical supply from wind energy by 2030, while feasible, is highly challenging. Turbine installations are growing dramatically, but they still provide less than 1% of U.S. electricity.
Current wind plants often underperform their predicted output by more than 10%, and wind turbines often suffer premature failures and shorter lifetimes than predicted during design. Turbine downtimes and failures lead to a reduced return on investment, which lowers the confidence of investors and increases the cost of raising the capital necessary to develop a wind plant. A key reason for these early failures is a lack of detailed knowledge about unsteady wind flows and how they interact with turbines.
Standard meteorological datasets and weather forecasting models do not provide the detailed information on the variability of wind speeds, horizontal and vertical shears, and turbulent velocity fields that are needed for the optimal design and operation of wind turbines and the exact siting of wind plants. For example, wind turbines are frequently deployed in regions of undulating topography to take advantage of the expected speed-up of wind as the atmosphere is forced up over the hill. But recent evidence suggests that the drag imposed by trees can create turbulence that can damage wind turbines on subsequent ridges. More research is needed on the planetary boundary layer (PBL) — the lowest part of the atmosphere — and how it interacts with the shape and ground cover of the land in specific locations in order to understand unsteady wind flows.
Researchers at the National Center for Atmospheric Research (NCAR) have developed a new large-eddy simulation (LES) code for modeling turbulent flows in boundary layers. Running on as many as 16,384 processors of the Franklin Cray XT4 at NERSC, the NCAR–LES code enables fine mesh simulations that allow a wide range of large- and small-scale structures to co-exist and thus interact in a turbulent flow.
P. SULLIVAN AND E. PATTON, NCAR
Figure 20. Visualization of the vertical velocity field in a convective PBL at heights of 100 m, 500 m, and 900 m, from a 512³ simulation. Plumes near the inversion can trace their origin to the hexagonal patterns in the surface layer.

P. SULLIVAN AND E. PATTON, NCAR
Figure 21. Visualization of particles released in a convective PBL from a 1,024³ simulation of convection. The viewed area is ~3.8% of the total horizontal domain. Time advances from left to right beginning along the top row of images. Notice the evolution of the larger-scale line of convection into small-scale vortical dust devils.
High-resolution flow visualizations in figures 20 and 21 illustrate the formation of both large and small structures. In figure 20, we observe the classic formation of plumes in a convective PBL. Vigorous thermal plumes near the top of the PBL can trace their roots through the middle of the PBL down to the surface layer. Closer inspection of the large-scale flow patterns in figure 20 also reveals coherent smaller-scale structures. This is demonstrated in figure 21, which tracks the evolution of 10⁵ particles over about 1,000 seconds and shows how dust devil vortices form in convective boundary layers. Coarse-mesh LES hints at these coherent vortices, but fine-resolution simulations allow a detailed examination of their dynamics within a larger-scale flow.
Petascale computing will permit simulation of turbulent flows over a wide range of scales in realistic outdoor environments, such as flow over tree-covered hills. This will allow researchers to resolve 1–10 meter surface features while still capturing 1–100 km energy scales of motion in the boundary layer. Exascale computing will allow simulation of mesoscale systems with resolved clouds and a host of important small-scale processes that are now parameterized.
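A back-of-envelope grid count suggests the size of that jump. The sketch below assumes an illustrative 50 km by 50 km boundary-layer domain, 2 km deep, with the vertical spacing matching the horizontal; none of these numbers are workshop figures, but they show how meter-scale resolution multiplies into trillions of cells.

```python
# Back-of-envelope grid counts for resolving fine surface features in a
# boundary-layer domain. The 50 km x 50 km x 2 km domain is an illustrative
# choice, not a figure from the wind-energy workshop.

horizontal_extent = 50e3      # m
vertical_extent = 2e3         # m

for dx in (10.0, 1.0):        # target grid spacings, m
    nx = horizontal_extent / dx
    nz = vertical_extent / dx
    cells = nx * nx * nz
    print(f"dx = {dx:4.0f} m  ->  ~{cells:.1e} grid cells")

# Going from 10 m to 1 m spacing multiplies the cell count by about a
# thousand, the kind of jump that pushes such simulations from petascale
# toward exascale resources.
```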
Improved PBL and turbine inflow modeling capabilities will enable the design of wind turbines that extract more power from lighter structures and that are more cost effective, thanks to longer lifetimes and reduced operational costs.
 
Contributors: John Hules (LBNL) and the workshop participants and authors of the reports listed below.
 
Further Reading
H. D. Simon, T. Zacharia, R. Stevens, et al. 2007. Modeling and Simulation at the Exascale for Energy and the Environment (E3 Report).
http://www.sc.doe.gov/ascr/ProgramDocuments/Docs/TownHall.pdf
P. Finck, D. Keyes, R. Stevens, et al. 2006. Report on the Workshop on Simulation and Modeling for Advanced Nuclear Energy Systems.
http://www.sc.doe.gov/ascr/Misc/gnep06-final.pdf
A. McIlroy, G. McRae, et al. 2006. Basic Research Needs for Clean and Efficient Combustion of 21st Century Transportation Fuels.
http://www.sc.doe.gov/bes/reports/files/CTF_rpt.pdf
G. Bothun, S. Hammond, S. Picataggio, et al. 2008. Computational Research Needs for Alternative and Renewable Energy.
http://www.sc.doe.gov/ascr/ProgramDocuments/Docs/CRNAREWorkshopReport.pdf
S. Schreck, J. Lundquist, W. Shaw, et al. 2008. U.S. Department of Energy Workshop Report: Research Needs for Wind Resource Characterization (National Renewable Energy Laboratory technical report NREL/TP-500-43521).
http://www.nrel.gov/docs/fy08osti/43521.pdf