File: filedriver.py

r"""
This script acts as a Catalyst-instrumented simulation code. Instead of
actually computing values, it reads the files passed on the command line
and feeds them to Catalyst as if they were simulation-computed values.

This script must be run with pvbatch, either in serial or in parallel.
The arguments are as follows:
* --sym if run with more than a single MPI process, so that pvbatch
  has all processes run the script.
* a list of files in quotes (wildcards are acceptable); these are
  treated as a single argument to the script.
* a list of Catalyst Python scripts.

e.g., in serial:
<path>/pvbatch filedriver.py "temporalFile.ex2" gridwriter.py

e.g., in parallel:
mpirun -np 5 <path>/pvbatch --sym filedriver.py "input_*.pvtu" makeanimage.py makeaslice.py

This script currently only handles a single channel. It will try to find
an appropriate reader for the list of filenames and loop through the
timesteps. It also attempts to sort the filenames, first by name length
and then alphabetically.
"""

import sys
import glob

# initialize and read input parameters
import paraview
paraview.options.batch = True
paraview.options.symmetric = True

import paraview.simple as pvsimple
from paraview.modules import vtkPVCatalyst, vtkPVPythonCatalyst
pm = pvsimple.servermanager.vtkProcessModule.GetProcessModule()
rank = pm.GetPartitionId()
nranks = pm.GetNumberOfLocalPartitions()

if len(sys.argv) < 2:
    if rank == 0:
        print("ERROR: must pass in a set of files to read in")
    sys.exit(1)

files = glob.glob(sys.argv[1])

# In case the filenames aren't padded we sort first by shorter length and then
# alphabetically. This is a slight modification based on the question by Adrian and answer by
# Jochen Ritzel at:
# https://stackoverflow.com/questions/4659524/how-to-sort-by-length-of-string-followed-by-alphabetical-order
files.sort(key=lambda item: (len(item), item))
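# A quick sketch of how this key orders unpadded numeric suffixes
# (hypothetical filenames):
#   sorted(["t10.ex2", "t2.ex2", "t9.ex2"], key=lambda item: (len(item), item))
#   returns ["t2.ex2", "t9.ex2", "t10.ex2"]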
if rank == 0:
    print("Reading in ", files)
reader = pvsimple.OpenDataFile(files)

if not pm.GetSymmetricMPIMode() and nranks > 1:
    if rank == 0:
        print("ERROR: must run pvbatch with --sym when running with more than a single MPI process")
    sys.exit(1)

catalyst = vtkPVCatalyst.vtkCPProcessor()
# We don't need to initialize Catalyst since we run from pvbatch
# with the --sym argument which acts exactly like we're running
# Catalyst from a simulation code.
#catalyst.Initialize()

for script in sys.argv[2:]:
    if rank == 0:
        print("Adding script ", script)
    pipeline = vtkPVPythonCatalyst.vtkCPPythonPipeline.NewAndInitializePipeline(script)
    if pipeline:
        catalyst.AddPipeline(pipeline)
    elif rank == 0:
        print("failed to add pipeline for script:", script)

# Get the channel name from the reader's dataset. If there isn't a
# channel name in the field data, assume the channel name is 'input',
# since that's the convention for a single input.
reader.UpdatePipeline()
dataset = pvsimple.servermanager.Fetch(reader)
array = dataset.GetFieldData().GetArray(catalyst.GetInputArrayName())
if array:
    channelname = array.GetValue(0)
else:
    channelname = 'input'

if rank == 0:
    print("The channel name is ", channelname)

if hasattr(reader, "TimestepValues"):
    timesteps = reader.TimestepValues
    if not timesteps:
        timesteps = [0]
else:
    timesteps = [0]

for step, time in enumerate(timesteps):
    datadescription = vtkPVCatalyst.vtkCPDataDescription()
    datadescription.SetTimeData(time, step)
    datadescription.AddInput(channelname)
    if time == timesteps[-1]:
        # last time step so we force the output
        datadescription.ForceOutputOn()

    retval = catalyst.RequestDataDescription(datadescription)

    if retval == 1:
        reader.UpdatePipeline(time)
        dataset = pvsimple.servermanager.Fetch(reader)
        inputdescription = datadescription.GetInputDescriptionByName(channelname)
        inputdescription.SetGrid(dataset)
        if dataset.IsA("vtkImageData") or dataset.IsA("vtkRectilinearGrid") \
           or dataset.IsA("vtkStructuredGrid"):
            from mpi4py import MPI
            extent = dataset.GetExtent()
            # Negate the max entries so that a single MIN reduction computes
            # the global min of the mins and the global max of the maxes.
            wholeextent = [extent[0], -extent[1], extent[2], -extent[3], extent[4], -extent[5]]
            # allreduce() returns the reduced result; it does not modify its
            # argument in place, so capture the return value.
            wholeextent = MPI.COMM_WORLD.allreduce(wholeextent, op=MPI.MIN)
            for i in range(3):
                wholeextent[2*i+1] = -wholeextent[2*i+1]

            inputdescription.SetWholeExtent(wholeextent)

        catalyst.CoProcess(datadescription)
catalyst.Finalize()