These data are for collaborators only.

Raw data for mock catalogs.

These data were created from one of two 1024^3 N-body simulations of a LCDM model. The first (lcdm500A) has OmegaM=0.3, Lambda=0.7, h=0.7, OmegaB h^2=0.024, n=1 and sigma_8=0.9. The box side was 500 Mpc/h (comoving). The particle mass is near 10^10 Msun/h and the (Plummer equivalent) force softening was 18 kpc/h. Phase space data were dumped at equal conformal time intervals, every 100 Mpc/h along the line-of-sight. The second (lcdm500B) has the same number of particles and box size, but a slightly different cosmology: OmegaM=0.25, Lambda=0.75, h=0.72, OmegaB h^2=0.0224, n=0.97 and sigma_8=0.8.
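As a quick sanity check of the quoted particle mass, it follows from the mean matter density and the number of particles per box volume. This is an illustrative calculation, not part of the data files; the critical-density constant is the standard ~2.775e11 h^2 Msun/Mpc^3, so in Msun/h and Mpc/h units the h's cancel.

```python
# Illustrative check: particle mass for lcdm500A from OmegaM, box size, N.
RHO_CRIT = 2.775e11   # critical density, Msun/h per (Mpc/h)^3
OMEGA_M  = 0.3
BOX      = 500.0      # box side, Mpc/h (comoving)
NPART    = 1024**3    # number of particles

m_p = RHO_CRIT * OMEGA_M * BOX**3 / NPART
print(f"particle mass ~ {m_p:.2e} Msun/h")   # close to 10^10 Msun/h, as stated
```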

The files halo_?.????.dat.gz give the halo masses and positions at fixed output times (the scale factor is given in the file name). There is one line per halo, with the columns being halo number, mass of the FoF group (using a linking length of b=0.168 in units of the mean interparticle spacing) and the x, y and z positions of the particle, in box units --- so running from [0,1). The simulation assumes periodic boundary conditions so that 1=0 in every coordinate.
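A minimal sketch of reading one of these halo files (after gunzip), assuming whitespace-separated columns in the order above; the filename and function name are placeholders.

```python
import numpy as np

def read_halos(fname):
    """Read a halo_?.????.dat file: halo number, FoF mass, x, y, z in [0,1)."""
    data = np.loadtxt(fname)
    num  = data[:, 0].astype(int)   # halo number
    mass = data[:, 1]               # FoF group mass
    pos  = data[:, 2:5] % 1.0       # box units; wrap so that 1=0 periodically
    return num, mass, pos
```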

The files box_?.????.gal are mock galaxy catalogs. There is one line per galaxy, with the columns being position, velocity, group number of the hosting halo and luminosity. For these catalogs the luminosity is 1.0 for all galaxies (i.e. it is not used). The positions are in units of the box length (i.e. run from [0,1) for all coordinates) and the velocities are stored as a distance offset in the same units. To go into redshift space one simply adds the velocity offset to the relevant coordinate. For example, if we view the box down the z-axis, the redshift-space coordinate of the galaxy is (x, y, z+vz).
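The redshift-space mapping above can be sketched as follows, with the velocity offset (already in box-length units) added to the chosen coordinate and wrapped periodically; the function name and axis convention are our own.

```python
import numpy as np

def to_redshift_space(pos, vel, axis=2):
    """Shift positions into redshift space along `axis` (default z)."""
    s = pos.copy()
    # velocities are stored as distance offsets in box units, so just add
    s[:, axis] = (s[:, axis] + vel[:, axis]) % 1.0   # periodic box, 1=0
    return s
```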

The files passive_?.????.gal are mock galaxy catalogs made from the z=0.9 output assuming passive evolution of the population. The file format is the same as the box_?.????.gal files. At z=0.9 the mock catalog is made by anointing dark matter simulation particles based on the input HOD. These particles are then simply tracked to lower redshift and their later-time positions, velocities and group memberships output.

Markov chains have been run fitting to the observed data with the supplied dn/dz and covariance matrices. They are called "mcmc_wt_?.????.dat". The files are ASCII text with one line per chain entry. The columns are: element number, log10(Mcut), log10(M1), sigma, kappa, alpha, chi^2, Ngal. There is a prior of kappa=1+/-0.01, which effectively removes this parameter from the fit and makes the functional form [1+([M-Mmin]/M1)^a] times an error-function turn-on. See the README for more information.
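A hedged sketch of summarizing one of these chain files: load the ASCII table and report per-column posterior means and standard deviations. The column names follow the description above; the function and filename are placeholders, and no weighting beyond the chain's own sampling is assumed.

```python
import numpy as np

COLS = ["element", "logMcut", "logM1", "sigma", "kappa", "alpha", "chi2", "Ngal"]

def chain_summary(fname):
    """Return {column: (mean, std)} for an mcmc_wt_?.????.dat chain."""
    chain = np.loadtxt(fname)
    means = chain.mean(axis=0)
    stds  = chain.std(axis=0)
    return dict(zip(COLS, zip(means, stds)))
```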

There are also mocks with luminosities made either with a multi-HOD technique or an abundance matching method. These can be found in the relevant sub-directory.

Finally we have mocks made from larger boxes (Gpc/h) for use in computing covariance matrices. There are "old" files in mocks_old.tgz, which unpacks to 4 subdirectories, each containing 600 mock Bootes fields made from a 1.1 Gpc/h simulation with OmegaM=0.3. There are two newer sets of files made from a 1.0 Gpc/h simulation with the same cosmology as above (OmegaM=0.25). The file mock_galaxies.tgz contains the boxes of galaxies for z=0.5, 0.7 and 0.9. The file mocks.tgz unpacks to 3 sub-directories (for z=0.5, 0.7 and 0.9) with 200 mock Bootes fields each.
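Estimating a covariance matrix from the mock fields reduces to the standard unbiased sample covariance over mocks (e.g. 200 mock Bootes fields per redshift). A minimal sketch, assuming each mock supplies one measurement vector (say w(theta) in several bins):

```python
import numpy as np

def mock_covariance(measurements):
    """Unbiased sample covariance from an (Nmocks, Nbins) array."""
    x = np.asarray(measurements)
    mean = x.mean(axis=0)
    diff = x - mean
    # divide by Nmocks-1 for the unbiased estimator;
    # equivalent to np.cov(x, rowvar=False)
    return diff.T @ diff / (x.shape[0] - 1)
```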

Index of Mock/NOAO

Name               Size (MB)  Last modified
lcdm1000B                  0  2007/10/01
lcdm500A                   0  2006/11/09
lcdm500B                   0  2007/10/01
mock_galaxies.tgz         63  2007/08/01
mocks.tgz                 27  2007/08/01
mocks_old.tgz             79  2007/03/06
ms.pdf                     0  2006/11/15


Last modified Fri Oct 12 09:14:59 2007