Paul Ricker

COSMOLOGICAL SIMULATION GROUP

UNIVERSITY OF ILLINOIS


Simulation codes

Introduction

Simulation "codes" are computer programs that solve approximate versions of the equations that describe the behavior of physical systems. In astrophysics, the "physical systems" being studied could be protoplanetary disks, stars, black holes, galaxies, etc. Our aim is generally to take some assumptions about how matter and energy are distributed in space at one point in time and then to solve the equations to see things will look at later times. By determining how these systems behave in time and comparing the results against observations of the real world, we test different ideas about how things work. To use a "car analogy," imagine we had a computer program that could predict everything that happens in a collision between two cars given their sizes, masses, velocities, and so on; but all we had access to in the real world was snapshots of collisions at various stages. The computer program would let us put the snapshots into the correct order and allow us to deduce what happened in a given collision.

Modern simulation codes tend to be very complex and require a great deal of effort to write, test, and maintain. Fortunately, many simulation codes are now freely available on the Internet for anyone to use. Even so, using them and interpreting results from them are still something of an art. The best approach is to treat them as a kind of experimental apparatus, establishing various checks and controls in order to clarify the question that is being asked of Nature and to eliminate sources of error. One commonly used term for this type of procedure is "verification and validation" or V&V.
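As a small, self-contained example of the "verification" half of V&V: run a solver on a problem with a known exact answer at two resolutions and confirm that the error shrinks at the rate the method promises. The solver below is a deliberately simple first-order advection scheme written just for this sketch, standing in for a real code.

    import numpy as np

    def advect(n, c=1.0, t_end=0.5):
        """Advect a sine wave on [0, 1) with a first-order upwind scheme
        and periodic boundaries; return the L1 error against the exact
        (shifted) solution."""
        x = np.arange(n) / n
        dx = 1.0 / n
        u = np.sin(2 * np.pi * x)
        dt = 0.5 * dx / c
        t = 0.0
        while t < t_end:
            u -= (c * dt / dx) * (u - np.roll(u, 1))   # upwind difference
            t += dt
        return np.abs(u - np.sin(2 * np.pi * (x - c * t))).mean()

    # Doubling the resolution should roughly halve the error for a
    # first-order method; a measured order far from 1 would signal a bug.
    e_coarse, e_fine = advect(100), advect(200)
    print("measured convergence order:", np.log2(e_coarse / e_fine))

A real verification suite applies many such checks, each with a known analytic or benchmark solution, every time the code changes.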

University of Illinois graduate students Kuo-Chuan Pan and Paul Sutter (now at the Institute for Astrophysics in Paris) have worked with me on these projects.

FLASH

[Figure: example FLASH AMR simulation of a common-envelope system]
FLASH is an astrophysical hydrodynamics simulation code that I have contributed to almost since its inception in 1998. It was developed at the University of Chicago ASC Center for Astrophysical Thermonuclear Flashes, where I was a postdoc from 1999 to 2002. FLASH uses a technique called adaptive mesh refinement (AMR) to concentrate resources in regions of space where interesting things are happening. Since astrophysical simulations are generally limited by the amount of computer memory available, AMR allows us to do larger, higher-fidelity simulations than we might otherwise be able to do. FLASH was originally written to study problems associated with Type Ia supernovae, but it has proven useful for a wide range of problems and now has a large user community. It runs on everything from laptops to the largest parallel supercomputers. I created the original framework for FLASH, performed its initial verification, prototyped its automated testing framework, and wrote numerous physics modules for it, including modules for gravity, particles, cosmology, and radiative cooling. FLASH remains my group's primary research tool.
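To give a flavor of how AMR decides where to spend resolution, here is a minimal sketch in Python. It is not FLASH's actual criterion (FLASH uses a second-derivative error estimator, and the function name and cutoff values below are invented for illustration), but the idea is the same: each block of the mesh computes an indicator from the local flow, and blocks containing steep features are flagged for refinement while smooth ones may be coarsened.

    import numpy as np

    def flag_block_for_refinement(density, refine_cutoff=0.3, derefine_cutoff=0.05):
        """Return +1 (refine), -1 (derefine), or 0 (leave) for one mesh block.

        The indicator is the largest cell-to-cell relative jump in the
        density, a simple stand-in for the error estimators real AMR
        codes use."""
        jumps = np.abs(np.diff(density)) / np.maximum(density[:-1], density[1:])
        if jumps.max() > refine_cutoff:
            return +1      # steep feature: add resolution here
        if jumps.max() < derefine_cutoff:
            return -1      # smooth flow: resolution can be reclaimed
        return 0           # leave the block at its current level

    # A blast-wave-like jump triggers refinement; nearly uniform gas does not.
    print(flag_block_for_refinement(np.array([1.0, 1.0, 1.1, 4.0, 4.0])))   # +1
    print(flag_block_for_refinement(np.array([1.00, 1.01, 1.01, 1.00])))    # -1

In FLASH the analogous test is applied block by block across the whole mesh, and flagged blocks are replaced by child blocks at twice the resolution, so memory goes preferentially to shocks, interfaces, and collapsing regions.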

The generation of high-performance computers being developed now is called "petascale" because these machines perform quadrillions ("peta" = 10^15) of mathematical operations per second and access quadrillions of bytes of memory or storage. Using these computers efficiently is a big challenge. We are exploring several approaches to optimizing FLASH for petascale machines. One approach is to abstract out of FLASH some of the tasks it currently performs itself (such as balancing its workload evenly across processors) and hand those tasks over to a specially tuned library that does them very well. This approach has the additional benefit of providing services to FLASH that it could not previously take advantage of, such as fault tolerance. The library we are working with, CHARM++, is developed by the Parallel Programming Laboratory at the University of Illinois, headed by Laxmikant Kale. A second approach we are investigating involves the use of performance annotations with the Orio tool developed by Boyana Norris's group at Argonne National Laboratory. Often the most straightforward way to write a program from a human's point of view is not the most efficient from a computer's. Annotations allow programmers to suggest code transformations to the computer without changing the readability of their existing code. The annotation tool creates optimized source code that can significantly outperform compiler-based optimizations.
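As a toy illustration of the annotation idea, the Python script below reads a loop that carries an unrolling hint and emits a transformed version, leaving the readable original untouched. The "#@ unroll" marker and the transformer itself are invented for this sketch; Orio's actual annotation language and transformations are far more capable.

    import re

    def unroll(source, factor):
        """Rewrite an annotated single-statement loop
               #@ unroll
               for i in range(n):
                   body
           into one that does `factor` iterations per pass, plus a
           cleanup loop for the leftover iterations."""
        pattern = re.compile(r"#@ unroll\nfor (\w+) in range\((\w+)\):\n    (.+)")
        def rewrite(m):
            var, n, body = m.groups()
            # Naive textual substitution of the loop variable; fine for a demo.
            steps = "\n    ".join(body.replace(var, f"({var} + {k})")
                                  for k in range(factor))
            return (f"for {var} in range(0, {n} - {n} % {factor}, {factor}):\n"
                    f"    {steps}\n"
                    f"for {var} in range({n} - {n} % {factor}, {n}):\n"
                    f"    {body}")
        return pattern.sub(rewrite, source)

    print(unroll("#@ unroll\nfor i in range(n):\n    total += a[i] * b[i]", 4))

The point is the division of labor: the programmer states intent in one line, and the tool takes responsibility for generating (and regenerating) the fast but less readable version.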

COSMOS

[Figure: example COSMOS simulation of a galaxy cluster merger]
COSMOS is an older parallel simulation code that I developed in the 1990s with Scott Dodelson at Fermilab as part of my Ph.D. thesis research at the University of Chicago. Unlike FLASH, it uses a single mesh, but the mesh can be deformed to improve resolution within a chosen part of the spatial domain. Like FLASH, it can solve problems in one, two, or three dimensions that include hydrodynamics, particles, self-gravity, and radiative cooling. Also like FLASH, the COSMOS hydrodynamics solver is based on an algorithm optimized for supersonic flows (i.e., flows that include shock waves). COSMOS version 1 was based on the Parallel Virtual Machine (PVM) communications library; I used it on the Cray T3D/T3E systems of the 1990s for my research as a graduate student and as a postdoc. I have also written a second version of COSMOS based on the Message-Passing Interface (MPI) standard, which is now more widespread. However, testing of COSMOS version 2 has gone very slowly, as I have not had a compelling reason to work on it for some time. At some point I intend to make it publicly available "as-is," but first more bugs need to be fixed.
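To illustrate what a solver "optimized for supersonic flows" has to get right, here is a minimal shock-capturing finite-volume scheme in Python, reduced to Burgers' equation in one dimension with a first-order Godunov flux. (COSMOS itself solves the full Euler equations with a higher-order method; the setup below is just for illustration.)

    import numpy as np

    def godunov_flux(uL, uR):
        """Exact Godunov flux for Burgers' equation u_t + (u^2/2)_x = 0."""
        fL, fR = 0.5 * uL**2, 0.5 * uR**2
        # Shock (uL > uR): take the larger flux.  Rarefaction: the smaller,
        # or zero if the fan spans u = 0.
        rare = np.where((uL < 0.0) & (uR > 0.0), 0.0, np.minimum(fL, fR))
        return np.where(uL > uR, np.maximum(fL, fR), rare)

    def step(u, dx, cfl=0.5):
        """One conservative update; upwinded fluxes keep the shock sharp
        without the oscillations a naive centered scheme would produce."""
        dt = cfl * dx / np.abs(u).max()
        F = godunov_flux(u[:-1], u[1:])           # fluxes at cell faces
        u[1:-1] -= (dt / dx) * (F[1:] - F[:-1])   # interior cells only
        return dt

    # A right-moving shock: u = 1 to the left of x = 0.3, 0 to the right.
    x = np.linspace(0.0, 1.0, 200)
    u = np.where(x < 0.3, 1.0, 0.0)
    t = 0.0
    while t < 0.4:
        t += step(u, dx=x[1] - x[0])
    print("shock front near x =", x[np.argmax(u < 0.5)])  # analytic: 0.3 + t/2

The upwinded interface flux is what lets the discontinuity propagate at the correct speed without spurious oscillations; higher-order schemes of the kind COSMOS uses add reconstruction steps on top of this same structure.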

COSMOS evolved out of a self-gravitating hydrodynamics code called PPMnD that I wrote as a graduate student between 1993 and 1996. Scott Dodelson contributed the N-body (particle) solver in 1997, at which point we started calling the code COSMOS. In order to be able to separately test the hydrodynamics, gravity, and particle solvers with a variety of different problem setups, I created a rudimentary configuration system based on a setup script that pulled source code in from different directories as directed, automatically generated code to handle runtime parameter settings, and created a build directory where the desired code could be compiled into a machine-usable form. This configuration system later became the basis for the FLASH 1.0 framework.
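The Python sketch below shows the shape of such a setup script. All directory names, the file layout, and the parameter-file format here are invented for illustration; they are not COSMOS's or FLASH's actual conventions.

    import shutil
    from pathlib import Path

    def setup(problem, modules, source_root="source", build_dir="object"):
        """Stage a build directory for one problem + module combination."""
        build = Path(build_dir)
        build.mkdir(exist_ok=True)

        # 1. Pull source code in from each requested solver directory,
        #    plus the chosen problem setup.
        for module in modules + [f"problems/{problem}"]:
            for src in (Path(source_root) / module).glob("*.f90"):
                shutil.copy(src, build)

        # 2. Auto-generate code to handle runtime parameter settings,
        #    here from a simple "name value" text file.
        params_file = Path(source_root) / "problems" / problem / "params.txt"
        if params_file.exists():
            decls = "\n".join("  real :: {} = {}".format(*line.split()[:2])
                              for line in params_file.read_text().splitlines()
                              if line.strip())
            (build / "runtime_parameters.f90").write_text(
                f"module runtime_parameters\n{decls}\nend module\n")

        # 3. The staged sources are now ready to compile with a makefile.
        print(f"staged build for '{problem}' in '{build}/'")

    setup("sedov", modules=["hydro", "gravity", "particles"])

Keeping solvers and problem setups in separate directories and assembling them on demand is what lets each piece be tested in isolation, and it is this organizing idea that carried over into FLASH 1.0.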