## About

The Python tools for UFO are add-ons that make interacting with
[UFO](https://github.com/ufo-kit/ufo-core) easier.


### NumPy integration

UFO and NumPy buffers can be freely converted into one another:

```python
import numpy as np
import ufo.numpy

# Make a Ufo.Buffer from a NumPy array
a = np.ones((640, 480))
b = ufo.numpy.fromarray(a)

# Make a NumPy array from a Ufo.Buffer
a = ufo.numpy.asarray(b)
```


### Simpler task setup

Creating tasks becomes as simple as importing a class:

```python
from ufo import Read, Backproject, Write

read = Read(path='../*.tif')
backproject = Backproject(axis_pos=512.2)
write = Write(filename='foo-%05i.tif')
```

Hooking up tasks is wrapped by calls. You can use the outer call to schedule
execution:

```python
from gi.repository import Ufo

scheduler = Ufo.Scheduler()
scheduler.run(write(backproject(read())).graph)
```
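The way nested calls assemble a pipeline can be sketched in plain Python. This is an illustrative analogue only, not the actual `ufo` implementation; the `Task` class and its `graph` method are stand-ins invented for this sketch:

```python
class Task:
    """Minimal stand-in for a ufo task: calling it links it to its input."""
    def __init__(self, name):
        self.name = name
        self.inputs = []

    def __call__(self, *inputs):
        # Nesting calls records the upstream tasks, forming a graph.
        self.inputs = list(inputs)
        return self

    def graph(self):
        # Walk upstream to list the pipeline in execution order.
        order = []
        for task in self.inputs:
            order.extend(task.graph())
        order.append(self.name)
        return order

read = Task('read')
backproject = Task('backproject')
write = Task('write')

pipeline = write(backproject(read()))
print(pipeline.graph())  # ['read', 'backproject', 'write']
```

The outermost call returns the last task in the chain, which is why scheduling only needs the final object.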

Alternatively, use the chain's `run` method to launch execution asynchronously,
and `join` on the result if you want to wait:

```python
write(backproject(read())).run().join()
```

If no final endpoint is specified, you must iterate over the data:

```python
import numpy as np

for item in backproject(read()):
    print(np.mean(item))
```

Similarly, you can also input data by specifying an iterable of NumPy arrays:

```python
import numpy as np

sinogram = np.ones((512, 512))

for slice_data in backproject([sinogram]):
    print(slice_data)
```
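The iterable-in, iterable-out shape of this API can be mimicked with a plain generator. This is a pure-NumPy sketch for illustration; `process` is a hypothetical placeholder and does no real backprojection:

```python
import numpy as np

def process(arrays):
    """Stand-in for a task consuming an iterable of NumPy arrays."""
    for a in arrays:
        # Placeholder computation; a real task would run on the GPU.
        yield a * 2.0

sinogram = np.ones((512, 512))

for slice_data in process([sinogram]):
    print(slice_data.mean())  # 2.0
```

Each input array yields one output array, which is exactly the contract the task chain exposes.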


### TomoPy integration

Using the `tomopy` module we can hook into the TomoPy pipeline:

```python
import dxchange
import tomopy
import ufo.tomopy

proj, flat, dark = dxchange.read_aps_32id('aps_data.h5', sino=(0, 2))
theta = tomopy.angles(proj.shape[0])  # assumes evenly spaced projection angles
proj = tomopy.minus_log(tomopy.normalize(proj, flat, dark))
center = tomopy.find_center(proj, theta, init=290, ind=0, tol=0.5)
tomopy.recon(proj, theta, center=center, algorithm=ufo.tomopy.fbp, ncore=1)
```


### Fabfile for easier cluster setup

Depending on the use case, it may be necessary to start several instances of
`ufod` on the same machine. To ease startup and connection, you can use the
provided Fabric `fabfile.py` to run binaries that accept the `-a` flag for
specifying a host address. This requires Fabric to be installed on the master
machine. To run a simple pipeline, you would issue:

```bash
fab -H 123.123.123.123 -u user start:cmd='ufo-launch',args='dummy-data ! blur ! null'
```

This starts as many `ufod` instances on 123.123.123.123 as that machine has GPUs
installed. To limit the number, you can use the `limit` argument, e.g.

```bash
fab start:cmd='ufo-launch',limit=2
```