A Computed Flame
Understanding how fuel burns in a diesel engine takes chemistry knowledge and supercomputing muscle.
Modern diesel engines are still pretty dirty, even if they’re not belching out thick smoke. Cleaning them up—and making them even more efficient—presents a real challenge, because understanding how they burn fuel is hard: Diesel engines operate at pressures and temperatures too harsh, and burn fuel too fast, for the accurate measurements needed to inform better engine designs.
Enter computational fluid dynamics, through which scientists can take a virtual snapshot of fuel burning at high pressures and temperatures (right) and even make an animation (view below).
To make such a model, the first step is to pick the fuel. Researchers at Sandia National Laboratories chose dimethyl ether (DME) over diesel. “The chemistry of DME is better known and easier to model than some of the large hydrocarbon fuels,” says Jacqueline Chen, who ran the simulation along with Yuki Minamoto of the Tokyo Institute of Technology. Knowing the chemistry is a prerequisite, because as large hydrocarbons burn, they break up into smaller ones, which must also be modeled. Diesel’s composition is variable; it contains a mixture of large hydrocarbons with 8 to 24 carbon atoms per molecule. With only 2 carbon atoms per molecule, DME (CH3OCH3) has fewer breakups.
Still, burning DME generates lots of different combustion products, or species, and modeling how they break up and where they go in the simulated flow requires a supercomputer. Using Oak Ridge National Laboratory’s Titan—currently the second-fastest supercomputer in the world and capable of around 20 million billion calculations per second—the researchers employed a previously developed simulation code to compute the mass, mass fractions, three components of momentum, and total energy of 30 of DME’s species at each of the roughly one billion points in a three-dimensional grid (3024 × 897 × 384).
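A back-of-envelope calculation suggests why a machine like Titan is needed just to hold the simulation’s state. This sketch takes only the grid dimensions from the article; the variable count (30 species mass fractions plus density, three momentum components, and energy) and the 8-byte double precision are illustrative assumptions, not figures from the researchers.

```python
# Rough estimate of one simulation snapshot's size, assuming
# double-precision (8-byte) values, one per variable per grid point.
nx, ny, nz = 3024, 897, 384        # grid dimensions from the article
points = nx * ny * nz              # ~1.04 billion grid points

# Assumed variable count: 30 species + density + 3 momentum + energy
variables = 30 + 1 + 3 + 1
bytes_per_value = 8                # IEEE 754 double precision

snapshot_bytes = points * variables * bytes_per_value
print(f"grid points: {points:,}")
print(f"one snapshot: ~{snapshot_bytes / 1e12:.2f} TB")
```

At roughly 0.3 TB per snapshot under these assumptions, saving hundreds of snapshots over the course of a run quickly reaches the “hundreds of terabytes” mentioned later in the article.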
Of those 30 species, this visualization includes just two. The first occurs in preignition reactions: DME mixed with air (colored gray, injected turbulently upward as if into an engine’s cylinder) can lose a hydrogen atom to become a radical (CH3OCH2), which combines with oxygen (O2) to form the methoxymethylperoxy radical (CH3OCH2O2). Tracing this species (colored yellow to red) therefore shows the rate of low-temperature chemical reactions, which are key to the eventual ignition of the flame. Tracing the products of high-temperature combustion—including the hydroxyl radical (OH) (colored blue to green)—shows the location of the flame.
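The preignition chain described above can be caricatured as a two-step sequence, DME → radical → peroxy species. This is only a toy sketch: the rate constants below are invented for illustration, oxygen is assumed to be in excess (its effect folded into the second rate constant), and the real simulation solves a detailed 30-species mechanism coupled to turbulent flow.

```python
# Toy forward-Euler integration of the simplified preignition chain:
#   DME (CH3OCH3) -> radical (CH3OCH2) -> peroxy (CH3OCH2O2)
# Rate constants are assumed values, NOT from the Sandia mechanism.
k1, k2 = 0.5, 2.0                      # assumed first-order rates (1/s)
dme, radical, peroxy = 1.0, 0.0, 0.0   # normalized concentrations
dt = 0.001                             # time step (s)

for _ in range(2000):                  # integrate 2 seconds of toy time
    r1 = k1 * dme                      # DME -> radical
    r2 = k2 * radical                  # radical (+ O2) -> peroxy
    dme     -= r1 * dt
    radical += (r1 - r2) * dt
    peroxy  += r2 * dt

print(f"DME {dme:.3f}, radical {radical:.3f}, peroxy {peroxy:.3f}")
```

Because the radical is consumed faster than it is produced, it stays at a low intermediate concentration while the peroxy species accumulates, which is why tracking the peroxy species is a useful marker of how fast the low-temperature chemistry is proceeding.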
“The goal of this simulation is to understand how a flame stabilizes above a burner” at conditions relevant to understanding diesel engines, says Chen, “and what we’re seeing is that the flame seems to be stabilizing on the surfaces of these yellowish-colored species, indicating that low-temperature reactions help stabilize the flame against the disrupting effects of high-velocity turbulence.”
The animation, of which this picture is a single frame, shows this effect more clearly because the researchers carefully chose the simulated conditions—including turbulence intensity, fuel temperature, and pressure—to maximize “turbulence-flame interaction while maintaining a feasible computational cost,” Chen and Minamoto write in the July issue of Combustion and Flame.
Running this one simulation generated hundreds of terabytes of data, but only about a quarter of that was needed to visualize it, says Hongfeng Yu of the University of Nebraska-Lincoln. Still, downloading it all took hours. “I started transferring the data and then I went home,” he says. Once the data were downloaded, though, Yu rendered this image in about a second on his graphics workstation: a consumer-grade desktop computer. He then made about 60 more images to produce the animation. “So,” Yu says, “rendering time is marginal compared with simulation time and data-transferring time.”
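Yu’s point about where the time goes can be checked with simple arithmetic. The data volumes below are taken loosely from the article (“hundreds of terabytes” generated, about a quarter transferred, rendering “in about a second” per frame); the 300-TB total and the 2-GB/s sustained transfer rate are illustrative assumptions, not reported figures.

```python
# Rough comparison of the three time scales: simulation output size,
# wide-area data transfer, and local rendering. Inputs are assumptions.
generated_tb = 300                 # assumed value for "hundreds of TB"
transferred_tb = generated_tb / 4  # roughly a quarter was needed
bandwidth_gb_per_s = 2.0           # assumed sustained transfer rate

transfer_hours = transferred_tb * 1000 / bandwidth_gb_per_s / 3600
render_seconds = 60                # ~60 frames at ~1 second each

print(f"transfer: ~{transfer_hours:.0f} hours")
print(f"rendering: ~{render_seconds} seconds")
```

Even under generous bandwidth assumptions, moving the data takes hours while rendering takes about a minute, which is the imbalance motivating the next paragraph.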
For that reason, Yu and Chen have been working together to integrate data visualization with simulation, taking advantage of the supercomputer’s processing power to visualize and analyze data as soon as they are generated rather than storing them for future transfer and analysis. “You just can’t store all that data,” says Chen, a constraint that will only grow as researchers move on to modeling more complex problems over longer time frames, which will require even faster supercomputers. —Robert Frederick
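The in-situ approach Yu and Chen describe amounts to moving the rendering step inside the solver loop. Here is a minimal sketch of that pattern; `step()` and `render()` are hypothetical stand-ins, not the actual Titan simulation or visualization code.

```python
# Minimal sketch of in-situ visualization: render every few solver
# steps as the data are generated, instead of writing every full
# snapshot to disk for later transfer. All functions are stand-ins.
def step(state):
    return state + 1                 # stand-in for one solver time step

def render(state, t):
    return f"frame_{t:04d}.png"      # stand-in for in-situ rendering

state, frames = 0, []
for t in range(100):
    state = step(state)
    if t % 10 == 0:                  # visualize as data are generated...
        frames.append(render(state, t))

# ...so only small rendered frames, not terabyte-scale snapshots,
# ever need to leave the supercomputer.
print(len(frames), "frames rendered in situ")
```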
This video is a combination of two videos courtesy of Jacqueline Chen at Sandia National Laboratories, who completed this work with Yuki Minamoto now at the Tokyo Institute of Technology. The visualization itself was done by Hongfeng Yu of the University of Nebraska-Lincoln.