
Old research pages
I'm keeping these old research pages here for the benefit of anyone
who might arrive here from an external page. They are no longer
updated. Please visit my new web site at
http://sipapu.astro.illinois.edu/~ricker/.
Eventually some content from these pages will migrate to the new site.

Specific entropy and velocity in an off-center
collision between two clusters of galaxies (mass ratio 1:2,
gas fraction 15%), as computed with COSMOS 1.0.

COSMOS is a Fortran 90 code for parallel computers that solves the
evolution equations for gravitationally coupled gas and collisionless
matter in one, two, or three dimensions. It is oriented toward the
solution of astrophysical and cosmological hydrodynamics
problems. The hydrodynamical code uses the piecewise parabolic method (PPM)
to solve the Eulerian gas equations on a static grid.
The collisionless (N-body) code integrates the
particle equations of motion using a variable-timestep leapfrog method.
Both matter components evolve in
their mutual gravitational field, which is determined at each
timestep of a simulation by solving the Poisson equation using
a multigrid or FFT method.
These algorithms are discussed in further detail on the
astrophysical hydrodynamics page.
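
As an illustration of the kind of integrator just described, here is a
minimal variable-timestep kick-drift-kick leapfrog sketch for a single test
particle in a fixed point-mass potential. The potential, the timestep
criterion, and the accuracy parameter eta are assumptions chosen for the
example; this is not the COSMOS N-body routine itself.

  ! Illustrative variable-timestep kick-drift-kick leapfrog for one test
  ! particle in a fixed point-mass potential (GM = 1).  A sketch of the
  ! integration scheme only, not the COSMOS N-body code.
  program leapfrog_sketch
    implicit none
    integer, parameter :: dp = kind(1.0d0)
    real(dp) :: x(3), v(3), a(3), t, dt, tend
    real(dp), parameter :: eta = 0.01_dp          ! accuracy parameter (assumed)

    x = (/ 1.0_dp, 0.0_dp, 0.0_dp /)              ! initial position
    v = (/ 0.0_dp, 1.0_dp, 0.0_dp /)              ! circular-orbit velocity
    t = 0.0_dp
    tend = 10.0_dp

    call accel(x, a)
    do while (t < tend)
       dt = eta * sqrt(sum(x*x)) / max(sqrt(sum(v*v)), 1.0e-30_dp)  ! simple illustrative timestep criterion
       v = v + 0.5_dp*dt*a                        ! half kick
       x = x + dt*v                               ! drift
       call accel(x, a)
       v = v + 0.5_dp*dt*a                        ! half kick
       t = t + dt
    end do
    print *, 'final position: ', x

  contains

    subroutine accel(pos, acc)
      real(dp), intent(in)  :: pos(3)
      real(dp), intent(out) :: acc(3)
      real(dp) :: r
      r = sqrt(sum(pos*pos))
      acc = -pos / r**3                           ! point-mass gravity with GM = 1
    end subroutine accel

  end program leapfrog_sketch
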
I wrote the hydrodynamics and Poisson codes as part of my
Ph.D. thesis work with Don Lamb at the
University of Chicago.
Scott Dodelson of the Fermilab Theoretical Astrophysics Group wrote the
N-body code.
The modular design of COSMOS
permits different sets of initial and boundary conditions to be compiled
into the code in a straightforward manner with an easy-to-use setup script.
Most parameter values are set through an ASCII parameter file, which can
be edited and used without recompiling the COSMOS executable.
In COSMOS 2.0, all of the variable buffers are allocated at runtime,
enabling grid sizes also to be changed without recompilation.
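
A minimal sketch of how such a setup might look, assuming a Fortran
namelist as the ASCII parameter file format and run-time allocation of the
grid buffers as in COSMOS 2.0. The file name, unit number, and parameter
names below are hypothetical, not the actual COSMOS inputs.

  ! Hypothetical sketch: read run parameters from an ASCII namelist file
  ! and size the grid buffers at run time, so neither parameter values
  ! nor grid sizes require recompilation.  Names are illustrative only.
  program read_params_sketch
    implicit none
    integer :: nx, ny, nz, ios
    real    :: tmax, cfl
    logical :: use_gravity
    real, allocatable :: dens(:,:,:)
    namelist /run_params/ nx, ny, nz, tmax, cfl, use_gravity

    ! Defaults, overridden by whatever appears in the parameter file.
    nx = 64; ny = 64; nz = 64
    tmax = 1.0; cfl = 0.8; use_gravity = .false.

    open(unit=10, file='cosmos.par', status='old', action='read', iostat=ios)
    if (ios == 0) then
       read(10, nml=run_params, iostat=ios)
       close(10)
    end if

    allocate(dens(nx,ny,nz))                      ! buffers sized at run time
    print *, 'grid: ', nx, ny, nz, '  tmax =', tmax
  end program read_params_sketch
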
COSMOS currently handles the following physics:

- Nonrelativistic, compressible, inviscid Eulerian hydrodynamics in 1D
  spherical and 1/2/3D Cartesian coordinates, including flows with
  strong shocks
- Collisionless, self-gravitating particle dynamics in 3D Cartesian
  coordinates
- Coupled self-gravity of the collisional and collisionless fluids
- Static or comoving (cosmological) coordinate scaling
- Static, nonuniform meshes with periodic, outflow, isolated, reflecting, or
  user-specified boundaries for hydrodynamics; periodic, Dirichlet,
  isolated, or given-value boundaries for self-gravity; and periodic
  or isolated boundaries for particle dynamics
- Point source functions for the gas, such as a temperature-dependent
  cooling function (see the sketch after this list)
- Additional advected quantities, such as fluid mixtures, tracer
  fields, and abundances of different elements; these can also have
  their own source and sink functions
- Different equations of state, including perfect gas and
  single-temperature gas plus radiation mixture
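
The point source hook mentioned above can be pictured with a small
operator-split update of the gas internal energy. The power-law form of
Lambda(T), the code-unit scalings, and the temperature floor below are
assumptions for illustration; the COSMOS cooling module is not reproduced
here.

  ! Sketch of a point source term: a temperature-dependent cooling
  ! function applied to one zone's internal energy over a timestep,
  ! operator-split from the hydrodynamics.  Lambda(T), the code-unit
  ! scalings, and the floor are illustrative assumptions.
  subroutine cool_zone(dens, eint, dt)
    implicit none
    integer, parameter :: dp = kind(1.0d0)
    real(dp), intent(in)    :: dens, dt          ! gas density and timestep (code units)
    real(dp), intent(inout) :: eint              ! specific internal energy (code units)
    real(dp), parameter :: gamma  = 5.0_dp/3.0_dp
    real(dp), parameter :: tfloor = 1.0e-4_dp    ! temperature floor (assumed)
    real(dp) :: temp, lam

    temp = (gamma - 1.0_dp)*eint                 ! temperature in code units (assumed perfect gas)
    if (temp <= tfloor) return
    lam  = sqrt(temp)                            ! bremsstrahlung-like Lambda ~ T**0.5 (illustrative)
    eint = eint - dens*lam*dt                    ! explicit cooling update
    eint = max(eint, tfloor/(gamma - 1.0_dp))    ! do not cool below the floor
  end subroutine cool_zone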

The code creates a log file for each job along with scalar (time-dependent
global) data files for the gas and dark matter. Checkpointing is accomplished
through alternating restart files which are written to disk at user-specified
timestep intervals. In addition, visualization outputs (3D volumes and 2D
slices) are written at user-specified simulation time intervals. Currently
these use a special data format, which can be read using a collection of
Fortran and IDL routines we have written, but they will soon be converted to
use the Hierarchical Data Format to enable their use with a wider variety of
visualization tools.
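
A sketch of the alternating restart-file idea, assuming two fixed file names
and a caller-supplied state array; the actual COSMOS restart format and file
names are not shown here.

  ! Sketch of checkpointing through alternating restart files: toggle
  ! between two file names so a crash while writing one checkpoint never
  ! destroys the previous good one.  File names, the unit number, and
  ! the interval handling are illustrative assumptions.
  subroutine write_checkpoint(step, nrestart, n, state)
    implicit none
    integer, intent(in) :: step, nrestart        ! current step, steps between checkpoints
    integer, intent(in) :: n
    real,    intent(in) :: state(n)              ! stand-in for the full simulation state
    character(len=12)   :: fname
    integer, save       :: which = 0             ! toggles 0 <-> 1 across calls

    if (mod(step, nrestart) /= 0) return
    if (which == 0) then
       fname = 'restart_a.d'
    else
       fname = 'restart_b.d'
    end if
    open(unit=20, file=fname, form='unformatted', status='replace')
    write(20) step, n, state
    close(20)
    which = 1 - which
  end subroutine write_checkpoint
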
COSMOS uses message-passing with explicit domain decomposition to
achieve scalability on parallel computers.
COSMOS 2.0 uses MPI
for parallel communications, while COSMOS 1.0 uses
PVM.
We have used COSMOS 1.0 for production runs on the Cray T3D and T3E at the
Pittsburgh Supercomputing Center and the San Diego Supercomputer Center.

Each processor (PE) controls a contiguous subdomain
of the complete grid. Communication of boundaries occurs between
logically adjacent PEs.

Each processor (PE) handles a distinct subset
of the full domain. This decomposition is the same for the hydrodynamics,
gravity, and N-body components of the code. PEs are assigned
coordinates in the PE grid according to the scheme depicted here. For
simplicity and efficiency, the decomposition requires that the number of
processors along each dimension be a power of two. If self-gravity is
used, the number of zones along each dimension must also be a power of two.
For the hydrodynamics module, this decomposition produces high scalability and
a balanced computational load. Prior to each timestep, PEs trade boundary
information with their neighbors, then solve the hydrodynamic equations
within their subdomains as though each were the only processor.
Execution time scales as N_{PE}^{-1},
where N_{PE} is the number of processors assigned,
until the subdomain size becomes equal to the number of boundary zones
required in each dimension (four for PPM), whereupon communication,
a fixed cost, begins to dominate.
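
A simplified sketch of this per-timestep exchange, assuming a one-dimensional
slab decomposition with periodic neighbors, four ghost zones per side (the
PPM requirement quoted above), and MPI as in COSMOS 2.0. COSMOS decomposes
the grid in up to three dimensions; this only illustrates the communication
pattern, not the COSMOS communication layer.

  ! Simplified per-timestep boundary exchange: each PE trades its edge
  ! zones with its logical neighbors before solving within its subdomain.
  ! Assumes a 1D slab decomposition with periodic neighbors and 4 ghost
  ! zones per side; an illustration of the pattern only.
  program halo_exchange_sketch
    use mpi
    implicit none
    integer, parameter :: nzone = 32, ng = 4     ! interior zones per PE, ghost zones
    real    :: u(1-ng:nzone+ng)                  ! 1D slab with ghost zones
    integer :: rank, nprocs, left, right, ierr
    integer :: status(MPI_STATUS_SIZE)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    left  = mod(rank - 1 + nprocs, nprocs)       ! periodic neighbors
    right = mod(rank + 1, nprocs)

    u = real(rank)                               ! dummy data for the exchange

    ! Send my rightmost interior zones to the right neighbor's left ghosts,
    ! and receive my own left ghosts from the left neighbor.
    call MPI_SENDRECV(u(nzone-ng+1:nzone), ng, MPI_REAL, right, 0, &
                      u(1-ng:0),           ng, MPI_REAL, left,  0, &
                      MPI_COMM_WORLD, status, ierr)
    ! Same exchange in the other direction.
    call MPI_SENDRECV(u(1:ng),             ng, MPI_REAL, left,  1, &
                      u(nzone+1:nzone+ng), ng, MPI_REAL, right, 1, &
                      MPI_COMM_WORLD, status, ierr)

    call MPI_FINALIZE(ierr)
  end program halo_exchange_sketch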

Progressive redundancy scheme used to parallelize
the V-cycle multigrid algorithm for grids that are too coarse to
simply decompose.

The multigrid code requires a more sophisticated approach. It requires
that we subdivide not merely the grid on which the hydrodynamics is
discretized, but also several levels of coarser grids, each of which has
a factor of two fewer zones per dimension than the next finer one. As
long as the number of PEs along a dimension is smaller than one-half the
number of zones in a grid along that dimension, the grid can be divided
evenly without redundancy. On sufficiently coarse grid levels, however,
some of the PEs must perform redundant work.
We progressively increase the level of redundancy as grid coarseness
increases so as to minimize the impact of this difficulty on the parallel
efficiency of our code. Nevertheless, for a fixed problem size the execution
time for this module scales approximately as a N_{PE}^{-1}
+ b ln N_{PE}, where a and b are
constants.
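
The onset of redundancy can be made concrete with a small sketch that walks
down the multigrid hierarchy and applies the criterion above (PEs per
dimension smaller than one-half the zones per dimension). The grid and PE
counts below are arbitrary examples, not COSMOS defaults.

  ! Sketch: each coarser level has half as many zones per dimension, and
  ! once the PE count along a dimension is no longer smaller than half
  ! the zone count, some PEs must duplicate work on that level.
  program mg_redundancy_sketch
    implicit none
    integer :: nzones, npes, level

    nzones = 256          ! finest-level zones along one dimension (example)
    npes   = 16           ! PEs along that dimension (example)
    level  = 0

    do while (nzones >= 2)
       if (npes < nzones/2) then
          print '(a,i2,a,i4,a)', 'level ', level, ': ', nzones, ' zones, no redundancy'
       else
          print '(a,i2,a,i4,a)', 'level ', level, ': ', nzones, ' zones, some PEs redundant'
       end if
       nzones = nzones/2  ! next coarser level
       level  = level + 1
    end do
  end program mg_redundancy_sketch
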
Information for each particle (position, velocity, mass, etc.) is stored by
the PE which controls the subdomain in which it lies. When a particle leaves
a subdomain, a message is sent to the processor which controls its destination.
While this does not permit good load balancing under some circumstances,
the overhead associated with computing particle densities and forces
(via the particle-mesh procedure) is a small fraction of the total cost of
a timestep; the force calculation easily dominates. We limit the cost of
particle messages by double-buffering and by having each PE communicate only
with its neighbors. This is possible because we limit the timestep in such
a way that no particle can move more than one zone per timestep.
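
A sketch of the destination test for a migrating particle, assuming the local
subdomain is described by simple lower and upper bounds in each dimension.
Because the timestep limits motion to less than one zone, only nearest
neighbors can ever be destinations; the names and argument conventions here
are illustrative, not the COSMOS routines.

  ! Sketch of deciding where a particle goes when it leaves a subdomain:
  ! compare its position with the local bounds and pick the neighbor
  ! offset (-1, 0, or +1) in each dimension.
  subroutine particle_destination(pos, xmin, xmax, offset)
    implicit none
    real,    intent(in)  :: pos(3)               ! particle position
    real,    intent(in)  :: xmin(3), xmax(3)     ! local subdomain bounds
    integer, intent(out) :: offset(3)            ! neighbor offset per dimension
    integer :: i

    do i = 1, 3
       if (pos(i) < xmin(i)) then
          offset(i) = -1                         ! lower neighbor
       else if (pos(i) >= xmax(i)) then
          offset(i) = +1                         ! upper neighbor
       else
          offset(i) = 0                          ! still in the local subdomain
       end if
    end do
  end subroutine particle_destination
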
As with any scientific code of this complexity, it is important to apply
COSMOS to a number of test problems with known solutions before using it
on research problems. This is important not simply for the purpose of
verifying that the code works as it should, but also because it gives the
user an intuitive understanding of the code's strengths and weaknesses.
Such an understanding is essential for making effective use of any
hydrodynamical code.
We have performed a number of tests to exercise the various
code modules in different combinations.
The COSMOS test paper
gives detailed descriptions and results of tests performed with COSMOS 1.0.
We are in the process of documenting test results for COSMOS 2.0;
these are described in pages linked from the following table.
Problem                  Hydro  Gravity  N-body  Redshift  Cooling
------------------------------------------------------------------
Sedov explosion            X
Sod shock tube             X
Square wave advection      X
Emery wind tunnel          X
Two blast waves            X
Cooling test                                                  X
Radiative shock            X                                  X
Redshift test                                       X
Dust cloud potential                X
Sine potential                      X
Dust collapse I            X        X
Jeans instability          X        X
Dust collapse II           X        X               X
Zel'dovich pancake I       X        X               X
Cooling flow               X        X                         X
Two particle orbit                  X       X
Two particle force                  X       X
Dust collapse III                   X       X
Dust collapse IV                    X       X       X
Zel'dovich pancake II               X       X       X
SCDM cosmology I           X        X               X
SCDM cosmology II                   X       X       X
SCDM cosmology III         X        X       X       X
Santa Barbara cluster      X        X       X       X

At present COSMOS is not publicly distributed, though this may change in the
future. We encourage scientists interested in using the code to contact us
for up-to-date information.

