File: simulation_analysis.py

Package: yt 4.1.4-2 (Debian bookworm)
import yt

# Enable parallelism in the script (assuming it was called with
# `mpirun -np <n_procs>`)
yt.enable_parallelism()
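
# For example, a four-process run would be launched as follows (assuming
# mpi4py is installed; the script name is illustrative):
#   mpirun -np 4 python simulation_analysis.py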

# By using wildcards such as ? and * with the load command, we can load up a
# Time Series containing all of these datasets simultaneously.
ts = yt.load("enzo_tiny_cosmology/DD????/DD????")
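
# The same series could also be built from an explicit list of outputs
# (a sketch; only the first two outputs are listed):
#   ts = yt.DatasetSeries(
#       ["enzo_tiny_cosmology/DD0000/DD0000",
#        "enzo_tiny_cosmology/DD0001/DD0001"]
#   )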

# Calculate and store density extrema for all datasets along with redshift
# in a data dictionary with entries as tuples

# Create an empty dictionary
data = {}

# Iterate through each dataset in the Time Series (using piter allows it
# to happen in parallel automatically across available processors).
# Passing the dictionary as the `storage` keyword gathers every
# processor's results into `data` once the loop completes; filling a
# plain dict inside the loop would leave each MPI rank with only the
# datasets it processed itself.
for sto, ds in ts.piter(storage=data):
    ad = ds.all_data()
    extrema = ad.quantities.extrema(("gas", "density"))

    # Store the extrema and redshift information for each dataset
    sto.result_id = ds.basename
    sto.result = (extrema, ds.current_redshift)

# Sort dict by keys
data = {k: v for k, v in sorted(data.items())}
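# (Sorting by name is also chronological here, because the zero-padded
# output names DD0000, DD0001, ... sort lexicographically in time order.)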

# Print out all the values we calculated, on the root processor only so
# the table is not repeated once per MPI rank.
if yt.is_root():
    print("Dataset      Redshift        Density Min      Density Max")
    print("---------------------------------------------------------")
    for key, val in data.items():
        print(
            "%s       %05.3f          %5.3g g/cm^3   %5.3g g/cm^3"
            % (key, val[1], val[0][0], val[0][1])
        )
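
# Note: the extrema come back as a unyt array carrying units, so other
# units are one conversion away (a sketch using the loop variables above):
#   extrema.to("Msun/pc**3")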