Paul Ricker

COSMOLOGICAL SIMULATION GROUP

UNIVERSITY OF ILLINOIS

Large-scale structure

Introduction

Sloan Digital Sky Survey
At first glance galaxies appear to be scattered randomly through space. Their distribution is far from featureless, however: they cluster together, forming filamentary superclusters up to 300 million light-years long that bound large empty regions known as voids. Over the past few decades this pattern has been mapped in three dimensions in great detail, both to understand how galaxies form and to determine what kind of universe we inhabit. Because the expansion history and matter density of the Universe determine how fast large-scale structure can grow, these maps provide important constraints on the parameters of our cosmological model. The major task now facing simulators is to predict the galaxy distribution with percent-level accuracy, so that cosmologists can better understand dark energy, the phenomenon causing the expansion of the Universe to accelerate.

At the University of Illinois, graduate students Zarija Lukic (now at Lawrence Berkeley National Laboratory) and Paul Sutter (now at the Institute for Astrophysics in Paris) have worked on these projects.

Quantifying the accuracy of simulations

Dark matter appears to be the dominant form of matter in the Universe. We have yet to produce it in the laboratory or to detect it on Earth, but we have managed to narrow down its list of properties. By definition dark matter interacts with ordinary matter (protons, neutrons, electrons) primarily via gravity -- any electromagnetic or weak nuclear interactions it has must be extremely weak. It must be "nonbaryonic," meaning that it cannot be ordinary matter in a form that is simply hard to see, such as black holes or rocks. It does not interact strongly with itself, and dark matter particles must move at speeds considerably less than the speed of light (i.e., it is "cold"). Given these characteristics, we can study its evolution in different cosmological models using N-body simulations.

In an N-body simulation, we define a collection of N computational "particles," each of which has a mass, position, and velocity. Each computational particle represents a large number of actual dark matter particles; as we increase N toward the number of actual particles, the simulation behaves more and more like the real system we are trying to study. In practice the speed and memory of available computers limit how large we can make N. As computer technology has advanced, so too has the largest feasible N: the first large-scale structure simulations, around 1970, used a few hundred particles; today simulations with as many as 100 billion particles are being attempted.
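As a concrete illustration, here is a minimal sketch (in Python with NumPy; the names are ours, not those of any production code) of what an N-body particle set looks like in memory: arrays holding the masses, positions, and velocities. A real simulation would set the positions and velocities by perturbing a lattice according to the cosmological initial power spectrum; uniform-random positions are only a toy stand-in.

```python
import numpy as np

def init_particles(n, box_size, rng=None):
    """Create n computational particles with equal masses, random
    positions in a cubic box of side box_size, and zero initial
    velocities. Total mass is normalized to 1 so each particle
    carries mass 1/n."""
    rng = rng or np.random.default_rng(42)
    mass = np.full(n, 1.0 / n)
    pos = rng.uniform(0.0, box_size, size=(n, 3))
    vel = np.zeros((n, 3))
    return mass, pos, vel

mass, pos, vel = init_particles(1000, box_size=100.0)
```

Each particle here stands in for a vast number of real dark matter particles, which is why only its mass, position, and velocity need to be stored.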

At each step of an N-body simulation, we compute the gravitational field produced by all N particles and use that to determine the particles' accelerations. We then move the particles through a short time interval using these accelerations and the particles' velocities. After this "timestep" we again compute the gravitational field and move the particles through another short time interval. At the same time we solve the equations that describe how much the Universe expands during this interval, given the cosmological model we have adopted. We continue to perform timesteps until we have reached the present day (or some intermediate time of interest). During the simulation the computational particles gravitationally clump together into "halos," which we associate with bound objects like galaxies and clusters of galaxies.
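The timestep cycle described above can be sketched as follows. This toy version (our own illustration, not any group's production code) uses direct O(N^2) gravity summation with a softening length and a kick-drift-kick leapfrog update in static space with G = 1; a real cosmological code would use a much faster gravity solver, periodic boundary conditions, and would also advance the cosmological scale factor each step.

```python
import numpy as np

def accelerations(mass, pos, soft=0.1):
    """Direct O(N^2) summation of gravitational accelerations, with a
    softening length to avoid the singularity at zero separation
    (units with G = 1)."""
    dr = pos[None, :, :] - pos[:, None, :]     # dr[i, j] = r_j - r_i
    r2 = (dr ** 2).sum(axis=-1) + soft ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)              # exclude self-force
    return (mass[None, :, None] * dr * inv_r3[:, :, None]).sum(axis=1)

def step(mass, pos, vel, dt, soft=0.1):
    """One kick-drift-kick (leapfrog) timestep: velocities are kicked
    by half a step, positions drifted a full step, then velocities
    kicked again with the updated accelerations."""
    acc = accelerations(mass, pos, soft)
    vel = vel + 0.5 * dt * acc
    pos = pos + dt * vel
    acc = accelerations(mass, pos, soft)
    vel = vel + 0.5 * dt * acc
    return pos, vel
```

Repeating `step` until the desired final time is reached reproduces, in miniature, the simulation loop described in the text; the symmetric force summation conserves total momentum exactly.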

It is important to understand that an N-body simulation does not give an exact solution to the real physical problem, nor even to the idealized N-body problem constructed from it. Instead it gives an approximate solution. It is well known that the error in the solution decreases as we consider larger and larger values of N, but the magnitude and nature of that error, and how fast it decreases, are not well understood. For some simple problems we have analytical solutions against which we can compare the simulation codes, but the problems of physical interest generally have no analytical solution -- that is why we need simulations in the first place. So we take N-body codes written using different techniques and compare their performance on a well-specified problem to get an idea of how accurately we are solving it. To adequately compare different cosmological models including dark energy as well as dark matter, we need to achieve an accuracy of a few percent or better. This is ambitious but feasible.
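A simple way to quantify agreement between two codes on a derived quantity such as a binned halo mass function is the maximum fractional difference across bins, measured against their mean; a value below about 0.05 corresponds to the few-percent target mentioned above. This helper is purely illustrative and is not taken from any published comparison pipeline.

```python
import numpy as np

def fractional_agreement(mf_a, mf_b):
    """Maximum fractional difference between two codes' binned halo
    mass functions (e.g. halo counts per mass bin), relative to their
    mean. The bin edges are assumed identical for both codes."""
    mf_a = np.asarray(mf_a, dtype=float)
    mf_b = np.asarray(mf_b, dtype=float)
    ref = 0.5 * (mf_a + mf_b)
    return float(np.max(np.abs(mf_a - mf_b) / ref))
```

For example, counts of (100, 50) versus (102, 51) per bin agree at the 2% level, comfortably inside a 5% tolerance.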

N-body simulation test
Together with Salman Habib, Katrin Heitmann, and Michael Warren at Los Alamos National Laboratory, we have conducted a large N-body code comparison involving more than 10 different codes. We found that on quantities of interest to observational cosmologists (such as the halo mass function), the codes agreed at the 5-10% level. We also determined several "error control criteria" that must be satisfied by N-body simulations to achieve maximum accuracy and reproducibility. These criteria indicate the minimum spatial resolution, latest start time, largest timestep, and other parameter choices that give good results, and they are helping us formulate more accurate simulation algorithms.

Structure formation in scalar-field cosmologies

Our understanding of dark energy is at a much earlier stage than our understanding of dark matter: the first conclusive evidence for dark energy (based on the accelerating expansion of the Universe) was published in 1998, whereas we have known about dark matter since Fritz Zwicky first found evidence for it in the 1930s. Thus it makes sense to explore theoretical ideas that might tell us what other observable phenomena to look for in association with dark energy. Among these is the proposal that dark energy may be a form of "scalar field" that interacts with dark matter. The simplest dark energy theory consistent with observations holds instead that dark energy is a "cosmological constant," a repulsive property of spacetime itself. This theory is referred to as Lambda-CDM (Lambda for the cosmological constant, CDM for "cold dark matter"). We would like to know how the predictions of interacting scalar-field dark energy might differ from those of Lambda-CDM.

To investigate the scalar-field model, we made some simple modifications to the N-body code we use to study Lambda-CDM. The dark matter particle mass changes with time, because in the scalar-field model it arises from the dark matter-dark energy interaction. The rate of expansion of the Universe, and the effect the expansion has on particle momenta, are also different. Finally, the dark matter particles experience a "fifth force" not felt by baryons. The most significant result from simulations run with this code is that the dark matter-dark energy coupling slightly changes the shape and evolution of the halo mass distribution. The changes are small, but by observing galaxies both nearby and far away, it may be possible either to detect the coupling or to rule it out entirely.
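Schematically, two of these modifications amount to making the particle mass a function of time and boosting the force on dark matter particles only. The sketch below is a toy illustration: the coupling strength `beta` and the power-law mass dependence are hypothetical placeholders, since the actual functional forms depend on the scalar-field potential chosen.

```python
import numpy as np

def coupled_mass(m0, a, beta=0.05):
    """Toy time-varying dark matter particle mass: in coupled models
    the mass tracks the evolving scalar field. Here we use a
    hypothetical power law in the scale factor a, so for beta > 0 the
    mass decreases as the Universe expands."""
    return m0 * a ** (-3.0 * beta)

def coupled_acceleration(base_acc, is_dark, beta=0.05):
    """Apply a 'fifth force' enhancement factor (1 + 2*beta**2) to
    dark matter particles only; baryons feel the unmodified
    gravitational acceleration. base_acc has shape (N, 3) and
    is_dark is a boolean array of length N."""
    boost = np.where(is_dark, 1.0 + 2.0 * beta ** 2, 1.0)
    return base_acc * boost[:, None]
```

In a full code these hooks would be applied inside the timestep loop, alongside the modified expansion rate, so that only the dark matter sector feels the coupling.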