<html>
<head>
<title>VolPack User's Guide</title>
</head>
<body>
<h1>VolPack User's Guide</h1>
<i>Version 2.0beta1</i>
<h2>Table of Contents</h2>
<dl>
<dt> <a href="#Section1"> Section 1: Overview </a>
<dd>
<ol>
<li> <a href="#Intro"> Introduction to VolPack </a>
<li> <a href="#Pipeline"> The Volume Rendering Pipeline </a>
<li> <a href="#Datatypes"> Data Structures and Rendering Algorithms </a>
</ol>
<dt> <a href="#Section2"> Section 2: Using VolPack </a>
<dd>
<ol>
<li> <a href="#Compilation"> Include Files and Libraries </a>
<li> <a href="#Contexts"> Rendering Contexts </a>
<li> <a href="#Volumes"> Volumes </a>
<li> <a href="#Classifiers"> Classification </a>
<li> <a href="#RLEVolume"> Classified Volumes </a>
<li> <a href="#Octree"> Min-Max Octrees </a>
<li> <a href="#View"> View Transformations </a>
<li> <a href="#Shaders"> Shading and Lighting </a>
<li> <a href="#Image"> Images </a>
<li> <a href="#Rendering"> Rendering </a>
<li> <a href="#State"> State Variables </a>
<li> <a href="#Utilities"> Utility Functions </a>
<li> <a href="#Errors"> Result Codes and Error Handling </a>
</ol>
<dt> <a href="#Section3"> Section 3: Tips and Pointers </a>
<dd>
<ol>
<li> <a href="#Speed"> Maximizing Rendering Speed </a>
<li> <a href="#Quality"> Maximizing Image Quality </a>
<li> <a href="#Help"> Software Support </a>
<li> <a href="#Source"> Obtaining the Software </a>
</ol>
</dl>
<h2> <a name="Section1"> Section 1: Overview </a></h2>
<h3> <a name="Intro"> Introduction to VolPack </a></h3>
VolPack is a portable software library for volume rendering.
It is based on a new family of fast volume rendering algorithms
(see <a href="http://www-graphics.stanford.edu/~lacroute"> Philippe
Lacroute </a> and <a href="http://www-graphics.stanford.edu/~levoy">
Marc Levoy </a>, <a
href="http://www-graphics.stanford.edu/papers/shear/"><cite>Fast
Volume Rendering Using a Shear-Warp Factorization of the Viewing
Transformation</cite></a>, Proc. SIGGRAPH '94 (Orlando, Florida, July
24-29, 1994). In Computer Graphics Proceedings, Annual Conference
Series, 1994, ACM SIGGRAPH, pp. 451-458.
The library has the following features:
<ul>
<li> Renders data sampled on a regular, three-dimensional grid.
<li> Supports user-specified transfer functions for both opacity and
color.
<li> Provides a shading model with directional light sources, multiple
material types with different reflective properties, depth cueing,
and shadows.
<li> Produces color (24 bits/pixel) or grayscale (8 bits/pixel) renderings,
with or without an alpha channel.
<li> Supports arbitrary affine view transformations.
<li> Supports a flexible data format that allows an arbitrary C
structure to be associated with each grid point.
<li> Achieves very fast rendering times without specialized hardware.
</ul>
The library is intended for use in C or C++ programs but may be
useful with other programming languages.
The current implementation does not support perspective projections or
clipping planes. These features will be added in a future release.
<p>
The remainder of this section contains a brief introduction to the
conceptual volume rendering pipeline used by VolPack, followed by a
high-level description of the data structures and algorithms used by
the library. This background material lays the foundation for
<a href="#Section2">Section 2</a> which describes each of the routines
provided by VolPack. The routines are grouped by function and are
presented roughly in the order that they would be called in a typical
application. More detailed descriptions of each command can be found
by consulting the man pages for VolPack. Finally, <a href="#Section3">
Section 3</a> covers some tips for maximizing rendering performance
and image quality, and describes how to obtain the VolPack software.
<h3> <a name="Pipeline"> The Volume Rendering Pipeline </a></h3>
The input to the volume renderer is a three-dimensional array of data.
Each element of the array is a C structure containing any number of
fields of data, such as tissue density or temperature. Each element
is called a "voxel." The first stage in the volume rendering pipeline
is to <i>classify</i> the volume data, which means to assign an
opacity to each voxel. Opacity is the inverse of transparency: an
opacity of 0.0 indicates a fully-transparent voxel, while an
opacity of 1.0 indicates a voxel which completely occludes anything
behind it. Intermediate values between 0.0 and 1.0 indicate
semi-transparent voxels. The purpose of classification is to assign
low opacities to regions of the data set which are uninteresting or
distracting and high opacities to regions of the data set which should
be visible in the rendering. Intermediate opacity values are used for
smooth transitions from transparent to opaque regions, and for effects
such as semi-transparent voxels which should not completely occlude
objects behind them.
<p>
VolPack provides a classification method based on lookup tables. To
use this method you specify a transfer function which maps the scalar
data in a particular array element into the opacity for that element.
Alternatively you can implement other classification techniques such as
context-sensitive segmentation and then provide VolPack with a
pre-classified volume.
<p>
The second rendering stage is to assign a color to each voxel, an
operation which is called <i>shading</i> (or more precisely,
<i>lighting</i>). VolPack includes support for the
standard Phong shading equation. To use this shading technique, the
volume data is preprocessed before rendering in order to compute a
gradient vector for each voxel. The gradient vector can then
be used as a pseudo surface normal to compute how light reflects off
of each voxel. The user specifies the position and color of one or
more light sources, and the reflective properties of the volume data.
See <cite>Computer Graphics: Principles and Practice</cite> (Chapter
16, 2nd ed.), by Foley, van Dam, Feiner and Hughes, for a detailed
discussion of the Phong shading equation. Alternative shading models
can be implemented through a callback function.
<p>
The third rendering stage is to specify a view transformation and to
transform
the volume accordingly. This step can be as simple as choosing the
position from which to look at the volume, or it can include an
arbitrary affine transformation of the volume including non-uniform
scaling and shearing. The view transformation also specifies how the
volume is projected onto a 2D image plane.
<p>
The fourth and final rendering stage is to composite the voxels into
an image. Digital compositing is analogous to
the compositing process used in the film industry: several layers of
semi-transparent film are merged together into a final image.
VolPack provides several rendering algorithms that use different
techniques to accelerate the compositing stage. The next subsection
briefly describes the available algorithms.
<p>
<h3> <a name="Datatypes"> Data Structures and Rendering
Algorithms </a></h3>
VolPack includes three rendering algorithms which are useful in different
situations. The algorithms differ in the degree to which they trade
flexibility for speed and in the type of preprocessing required before
rendering.
<p>
The fastest algorithm allows the user to
rapidly render a volume with any view transformation and with any shading
parameters while keeping the classification fixed. This algorithm
relies on a special data structure which contains run-length encoded,
classified volume data. Depending on the volume size it can take
several minutes to precompute the run-length encoded volume, so this
algorithm is most suitable when many renderings will be made
from the same volume without changing the classification.
<p>
The steps when using this algorithm to render a classified volume are:
<ul>
<li> load the volume data
<li> choose the classification function
<li> precompute the classified volume
<li> repeat:
<ul>
<li> set the view and shading parameters
<li> render with <code>vpRenderClassifiedVolume()</code>
</ul>
</ul>
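The steps above can be sketched in C as follows. This is only an
outline: the routines that describe the volume, classifier, view, and
shading parameters are covered in <a href="#Section2">Section 2</a>,
and the loop condition is application-defined.
```c
#include <volpack.h>

/* Sketch of the classified-volume rendering loop; the calls that
 * describe the volume and classifier are covered later in this
 * guide and are elided here. */
void render_classified(int num_frames)
{
    vpContext *vpc = vpCreateContext();

    /* ... describe the volume: vpSetVolumeSize(), vpSetVoxelSize(),
     *     vpSetVoxelField(), vpSetRawVoxels() ... */
    /* ... choose the classifier: vpSetClassifierTable(), vpRamp() ... */

    vpClassifyVolume(vpc);              /* precompute the encoded volume */

    for (int frame = 0; frame < num_frames; frame++) {
        /* ... set the view and shading parameters ... */
        vpRenderClassifiedVolume(vpc);  /* render into the current image */
        /* ... display or store the image ... */
    }

    vpDestroyContext(vpc);
}
```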
<p>
The second algorithm is useful in situations where the classification
will be adjusted frequently. It also relies on a special data
structure: a min-max octree which contains the minimum and maximum
values of each voxel field. This data
structure must be computed once when a new
volume is acquired. The volume can then be rendered multiple times with any
opacity transfer function, any view transformation and any shading
parameters.
<p>
The steps when using this algorithm to render an unclassified volume are:
<ul>
<li> load the volume data
<li> precompute the min-max octree with <code>vpCreateMinMaxOctree()</code>
<li> repeat:
<ul>
<li> choose the classification function
<li> set the view and shading parameters
<li> render with <code>vpRenderRawVolume()</code>
</ul>
</ul>
<p>
Finally, the third algorithm does not use any precomputed data
structures. In most cases it is significantly slower than the other
two algorithms and is useful only if you wish to make a single
rendering from a volume. The steps for using this algorithm are
identical to the previous algorithm except that there is no need to
compute the min-max octree.
<p>
<h2> <a name="Section2"> Section 2: Using VolPack </a></h2>
This section describes how to use the routines provided by VolPack.
For more specific information about a particular routine, consult the
man pages provided with the library.
<p>
<h3> <a name="Compilation"> Include Files and Libraries </a></h3>
All of the definitions needed by a program which uses VolPack are
included in the header file <code>volpack.h</code>. The program must
be linked with the VolPack library by adding the switch
<code>-lvolpack</code> to the link command line. Other useful
free libraries you may wish to use are John Ousterhout's Tcl/Tk libraries
to build a graphical user interface, and Jef Poskanzer's pbmplus
library or Sam Leffler's TIFF library to store images.
<p>
The header file defines the following data types:
<ul>
<li> <code>vpContext</code>: a rendering context.
<li> <code>vpResult</code>: a result code.
<li> <code>vpVector3</code>: a three-element double-precision vector.
<li> <code>vpVector4</code>: a four-element double-precision vector.
<li> <code>vpMatrix3</code>: a three-by-three double-precision matrix.
<li> <code>vpMatrix4</code>: a four-by-four double-precision matrix.
</ul>
<p>
<h3> <a name="Contexts"> Rendering Contexts </a></h3>
The first argument of most of the routines in the VolPack library is a
<i>rendering context</i>, declared as a variable of type
<code>vpContext</code>. A rendering context contains all of the
information required to render a volume, including the classification and
shading parameters, the view transformation, a description of the
format of the volume, and private data structures used by the
rendering routines. The contents of a rendering context are not
directly accessible to the application programmer; instead, you use
the routines provided by the library to set, modify and query the
state in a context. A program can have multiple active contexts, for
instance to render different volumes or to render the same volume with
different parameters simultaneously.
<p>
To create a new context, use <code>vpCreateContext()</code>:
<pre>
vpContext *vpCreateContext();
</pre>
The return value is a pointer to the new context. It contains default
values for most of the rendering parameters, but you can change all of
them with the routines described later in this section.
<p>
To destroy a context and free the memory associated with it, use
<code>vpDestroyContext()</code>:
<pre>
void vpDestroyContext(vpContext *vpc);
</pre>
<p>
<h3> <a name="Volumes"> Volumes </a></h3>
A volume is simply a 3D array of data. The type of data
can be almost anything, but if you choose to use the classification
and shading routines provided by VolPack then you must supply the
fields these routines require. You may also wish to precompute
information required by your shader or classifier and store it in the
voxel. Here is an example layout for a voxel:
<pre>
typedef unsigned char Scalar;
typedef unsigned short Normal;
typedef unsigned char Gradient;
typedef struct {
Normal normal;
Scalar scalar;
Gradient gradient;
} Voxel;
</pre>
In this example the data stored in a voxel includes an 8-bit scalar
value and two precomputed fields. The first precomputed field is a
surface normal vector encoded in a 16-bit field; this field is used by
VolPack's shading routines. The second precomputed field is the
gradient-magnitude of the scalar value; this field can be used for
detecting surface boundaries during classification, for instance.
<p>
Note that the structure fields have been specified in the voxel
structure in a very particular order. Many machines have alignment
restrictions which require two-byte quantities to be aligned to
two-byte boundaries, four-byte quantities to be aligned to four-byte
boundaries, and so on. The compiler may have to insert wasted space
in between fields to satisfy these requirements if you are not
careful. Use the <code>sizeof()</code> operator to make sure the size
of the voxel matches your expectations.
<p>
You should also place the fields which are required for shading first,
followed by any other fields used only for classification. Ordering
the fields this way makes it possible to store just the fields for
shading when a classified volume is created for the fast rendering
algorithm. This saves memory and improves cache performance.
<p>
Once you have decided on the format of your volume you must describe
it to VolPack. To set the dimensions of the volume use
<code>vpSetVolumeSize</code>:
<pre>
vpResult
vpSetVolumeSize(vpContext *vpc, int xlen, int ylen, int zlen);
</pre>
The first argument is the context whose state you wish to modify, and
the remaining arguments are the number of elements in each dimension
of the 3D volume array. The return value is a result code (type
<code>vpResult</code>, which is an integer). The value VP_OK means
the arguments are valid and the routine completed successfully. Other
values indicate the type of error which occurred. See the man pages
for the specific types of errors which can occur for each routine, or
see the list of error codes in the <a href="#Errors">Result Codes and
Error Handling</a> section.
<p>
Use <code>vpSetVoxelSize()</code> to declare the size of the voxel and
the number of fields it contains:
<pre>
vpResult
vpSetVoxelSize(vpContext *vpc, int bytes_per_voxel,
               int num_voxel_fields, int num_shade_fields,
               int num_classify_fields);
</pre>
<code>Bytes_per_voxel</code> is the total size of a voxel in bytes.
<code>Num_voxel_fields</code> is the number of fields in the voxel.
<code>Num_shade_fields</code> is the number of fields required for
shading. <code>Num_classify_fields</code> is the number of fields
required for classification. The return value is a result code.
<p>
Continuing the earlier example, use the following call:
<pre>
#define NUM_FIELDS 3
#define NUM_SHADE_FIELDS 2
#define NUM_CLASSIFY_FIELDS 2
vpSetVoxelSize(vpc, sizeof(Voxel), NUM_FIELDS, NUM_SHADE_FIELDS,
               NUM_CLASSIFY_FIELDS);
</pre>
<p>
Now call <code>vpSetVoxelField()</code> and the
<code>vpFieldOffset()</code> macro once for each field to declare
its size and position in the voxel:
<pre>
int
vpFieldOffset(void *voxel_ptr, field_name);
vpResult
vpSetVoxelField(vpContext *vpc, int field_num, int field_size,
                int field_offset, int field_max);
</pre>
<code>Voxel_ptr</code> is a pointer to a dummy variable of the same
type as your voxel, and <code>field_name</code> is the name of the
voxel field. The return value of the macro is the byte offset of the
field from the beginning of the voxel.
<code>Field_num</code> is the ordinal index of the voxel field you are
declaring, starting with 0 for the first field.
<code>Field_size</code> is the size of the field in bytes. Use the
<code>sizeof()</code> operator (e.g.
<code>sizeof(voxel_ptr->field_name)</code>).
<code>Field_offset</code> is the byte offset returned by
<code>vpFieldOffset()</code>. <code>Field_max</code> is the maximum
value of the quantity stored in the field. The return value is a
result code.
<p>
Strictly speaking, <code>vpSetVoxelField()</code> need only be
called for the voxel fields which will be used by the VolPack
classifier and shader. However, if you declare the other fields too
then VolPack can automatically convert volumes that were created on
machines with a different byte ordering. Only fields with size
1, 2 or 4 bytes can be declared with <code>vpSetVoxelField()</code>.
<p>
For the example voxel layout, make the following calls:
<pre>
#define NORM_FIELD 0
#define NORM_MAX VP_NORM_MAX
#define SCALAR_FIELD 1
#define SCALAR_MAX 255
#define GRAD_FIELD 2
#define GRAD_MAX VP_GRAD_MAX
Voxel *dummy_voxel;
vpSetVoxelField(vpc, NORM_FIELD, sizeof(dummy_voxel->normal),
                vpFieldOffset(dummy_voxel, normal), NORM_MAX);
vpSetVoxelField(vpc, SCALAR_FIELD, sizeof(dummy_voxel->scalar),
                vpFieldOffset(dummy_voxel, scalar), SCALAR_MAX);
vpSetVoxelField(vpc, GRAD_FIELD, sizeof(dummy_voxel->gradient),
                vpFieldOffset(dummy_voxel, gradient), GRAD_MAX);
</pre>
The constants <code>VP_NORM_MAX</code> and <code>VP_GRAD_MAX</code>
are predefined by VolPack. In this example these fields will be
computed using standard routines provided by the library.
<p>
To specify the volume data itself, use <code>vpSetRawVoxels()</code>:
<pre>
vpResult
vpSetRawVoxels(vpContext *vpc, void *voxels, int size,
               int xstride, int ystride, int zstride);
</pre>
<code>Voxels</code> is a pointer to the voxel data. <code>Size</code>
is the number of bytes of voxel data. The remaining arguments are the
strides in bytes for each of the three dimensions of the volume. For
instance, <code>xstride</code> is the byte offset from the beginning
of one voxel to the beginning of the next voxel along the x axis.
Some of the VolPack routines operate faster if the volume is stored in
z-major order (xstride &lt; ystride &lt; zstride), but it is not strictly
necessary. If <code>voxels</code> is a pointer to dynamically
allocated storage then the caller is responsible for freeing the
memory at the appropriate time. VolPack does not free the voxel array
when a context is destroyed. The data in the voxel array may be
initialized or modified at any time, before or after calling
<code>vpSetRawVoxels</code>.
<p>
Our running example continues as follows:
<pre>
Voxel *volume;
unsigned size;
#define VOLUME_XLEN 256
#define VOLUME_YLEN 256
#define VOLUME_ZLEN 256
size = VOLUME_XLEN * VOLUME_YLEN * VOLUME_ZLEN * sizeof(Voxel);
volume = malloc(size);
vpSetRawVoxels(vpc, volume, size, sizeof(Voxel),
               VOLUME_XLEN * sizeof(Voxel),
               VOLUME_YLEN * VOLUME_XLEN * sizeof(Voxel));
</pre>
<p>
VolPack provides a number of routines to help initialize some of the
fields of the volume. If your input data consists of a
three-dimensional array of 8-bit values and you wish to compute
gradient-magnitude data or encoded normal vectors, then you can use
<code>vpVolumeNormals()</code>:
<pre>
vpResult
vpVolumeNormals(vpContext *vpc, unsigned char *scalars,
                int size, int scalar_field,
                int gradient_field, int normal_field);
</pre>
<code>Scalars</code> is a pointer to the array of 8-bit values.
<code>Size</code> is the size of the array in bytes. It must equal
the number of voxels in the volume as previously specified with
<code>vpSetVolumeSize()</code>. <code>Scalar_field</code>,
<code>gradient_field</code> and <code>normal_field</code> are the
voxel field numbers in which to store the scalar values from the
array, the gradient-magnitudes of the scalar values, and the encoded
surface normals respectively. Any of these field numbers may be equal
to the constant <code>VP_SKIP_FIELD</code> if that item should not be
stored in the volume. This function computes the specified fields and
loads them into the volume array last specified with
<code>vpSetRawVoxels()</code>.
<p>
In our example, we can initialize the volume array as follows:
<pre>
unsigned char *scalars;
scalars = LoadScalarData();
vpVolumeNormals(vpc, scalars, VOLUME_XLEN*VOLUME_YLEN*VOLUME_ZLEN,
                SCALAR_FIELD, GRAD_FIELD, NORM_FIELD);
</pre>
<code>LoadScalarData()</code> might be a routine to load volume data
from a file.
<p>
If your volume is large it may be inefficient to load all of the
scalar data into one array and then copy it to the volume array. If
this is the case then you can use <code>vpScanlineNormals()</code>
to compute one scanline of the volume at a time:
<pre>
vpResult
vpScanlineNormals(vpContext *vpc, int size,
                  unsigned char *scalars,
                  unsigned char *scalars_minus_y,
                  unsigned char *scalars_plus_y,
                  unsigned char *scalars_minus_z,
                  unsigned char *scalars_plus_z,
                  void *voxel_scan, int scalar_field,
                  int gradient_field, int normal_field);
</pre>
<code>Size</code> is the length in bytes of one scanline of scalar
data (which should equal the x dimension of the volume).
<code>Scalars</code> points to the beginning of one scanline of
scalars. <code>Scalars_minus_y</code> and <code>scalars_plus_y</code>
point to the beginning of the previous and next scanlines in the y
dimension, respectively. Similarly, <code>scalars_minus_z</code> and
<code>scalars_plus_z</code> point to the beginning of the previous and
next scanlines in the z dimension. These last four scanlines are the
immediately-adjacent neighbors of the first scanline and are used to
compute the gradient and surface normal vector. The next argument,
<code>voxel_scan</code>, points to the scanline of the voxel array
to write the result data into. The last three arguments are the voxel
fields to write each type of data into and are identical to the
corresponding arguments to <code>vpVolumeNormals()</code>. You can
use <code>vpScanlineNormals()</code> in a loop which reads in the
scalar data slice-by-slice, keeping at most three slices of data in
memory at a time (in addition to the entire volume).
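Such a loop might be sketched as follows. This is an outline only:
<code>ReadSlice()</code> is a hypothetical application routine that
reads one z slice of scalars from a file, and handling of the
boundary slices and scanlines is omitted for brevity.
```c
#include <stdlib.h>
#include <volpack.h>

/* Sketch: compute voxel fields one scanline at a time, keeping only
 * three z slices of raw scalars in memory.  ReadSlice() is a
 * hypothetical application routine; boundary handling is omitted. */
void build_volume(vpContext *vpc, int xlen, int ylen, int zlen,
                  Voxel *volume)
{
    unsigned char *prev = ReadSlice(0);
    unsigned char *curr = ReadSlice(1);
    unsigned char *next;
    int y, z;

    for (z = 1; z < zlen - 1; z++) {
        next = ReadSlice(z + 1);
        for (y = 1; y < ylen - 1; y++) {
            vpScanlineNormals(vpc, xlen,
                curr + y * xlen,            /* scalars         */
                curr + (y - 1) * xlen,      /* scalars_minus_y */
                curr + (y + 1) * xlen,      /* scalars_plus_y  */
                prev + y * xlen,            /* scalars_minus_z */
                next + y * xlen,            /* scalars_plus_z  */
                volume + (z * ylen + y) * xlen,
                SCALAR_FIELD, GRAD_FIELD, NORM_FIELD);
        }
        free(prev);
        prev = curr;
        curr = next;
    }
    free(prev);
    free(curr);
}
```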
<p>
If you wish to compute normal vectors yourself but you still want to
use the shading routines provided by VolPack, you can use
<code>vpNormalIndex()</code> to encode a vector into the form expected
by the shaders:
<pre>
int vpNormalIndex(double nx, double ny, double nz);
</pre>
The arguments are the components of the normal vector, which must be
normalized (nx*nx + ny*ny + nz*nz == 1), and the return value is the
16-bit encoded normal. A routine is also provided to decode normals:
<pre>
vpResult
vpNormal(int n, double *nx, double *ny, double *nz);
</pre>
The encoded normal given by <code>n</code> is decoded, and the normal
vector components are stored in the locations specified by the
remaining arguments.
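For example, a normal produced by your own gradient operator can be
encoded for storage in the voxel and later decoded again (a sketch;
the components must already be normalized, and the decoded vector is
only an approximation of the original because of quantization):
```c
#include <volpack.h>

/* Sketch: encode a user-computed, normalized normal vector for
 * VolPack's shaders, then decode it again. */
void example_normal(void)
{
    double nx = 0.0, ny = 0.6, nz = 0.8;   /* already unit length */
    int n = vpNormalIndex(nx, ny, nz);     /* 16-bit encoded form */

    double dx, dy, dz;
    vpNormal(n, &dx, &dy, &dz);
    /* (dx, dy, dz) is now approximately (0.0, 0.6, 0.8) */
}
```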
<p>
<h3> <a name="Classifiers"> Classification </a></h3>
The classification routines provided by VolPack allow you to customize
the opacity transfer function by specifying a collection of lookup
tables. Each lookup table is associated with one voxel field.
To classify a voxel, VolPack uses the value in each of the specified
fields of the voxel to index the corresponding tables. The table
values are then multiplied together to get the opacity of the voxel.
The tables should contain numbers in the range 0.0-1.0 so that the
final opacity is also in that range.
<p>
A lookup table is specified with <code>vpSetClassifierTable()</code>:
<pre>
vpResult
vpSetClassifierTable(vpContext *vpc, int param_num, int param_field,
                     float *table, int table_size);
</pre>
<code>Param_num</code> is the parameter number associated with the
table you are declaring. The total number of tables must equal
the <code>num_classify_fields</code> argument to
<code>vpSetVoxelSize()</code>. The first table is numbered 0.
<code>Param_field</code> is the number of the voxel field which should
be used to index the table. <code>Table</code> is a pointer to the
lookup table itself, and <code>table_size</code> is the size of the
table in bytes (not the number of entries). Note that even if
<code>table</code> is dynamically allocated it is never deallocated
by VolPack, even if the rendering context is destroyed. The data in
the table may be initialized or modified at any time, before or after
calling <code>vpSetClassifierTable</code>.
<p>
We could declare a two-parameter classifier for our example using the
following calls:
<pre>
float scalar_table[SCALAR_MAX+1];
float gradient_table[GRAD_MAX+1];
vpSetClassifierTable(vpc, 0, SCALAR_FIELD, scalar_table,
                     sizeof(scalar_table));
vpSetClassifierTable(vpc, 1, GRAD_FIELD, gradient_table,
                     sizeof(gradient_table));
</pre>
<p>
VolPack provides a useful utility routine for initializing
classification tables with piecewise linear ramps:
<pre>
vpResult
vpRamp(float array[], int stride, int num_points,
       int ramp_x[], float ramp_y[]);
</pre>
<code>Array</code> is the table to be initialized.
<code>Stride</code> is the number of bytes from the start of one array
element to the start of the next (useful if there are other fields in
the array which you want to skip over). <code>Num_points</code> is
the number of endpoints of the piecewise linear segments.
<code>Ramp_x</code> is an array of x coordinates (table indices), and
<code>ramp_y</code> is an array of y coordinates (values to store in
the array). <code>vpRamp</code> linearly-interpolates values for the
table entries in between the specified x coordinates.
<p>
For example, we can initialize our two classification tables as
follows:
<pre>
#define SCALAR_RAMP_POINTS 3
int scalar_ramp_x[] = { 0, 24, 255};
float scalar_ramp_y[] = {0.0, 1.0, 1.0};
vpRamp(scalar_table, sizeof(float), SCALAR_RAMP_POINTS,
       scalar_ramp_x, scalar_ramp_y);
#define GRAD_RAMP_POINTS 4
int grad_ramp_x[] = { 0, 5, 20, 221};
float grad_ramp_y[] = {0.0, 0.0, 1.0, 1.0};
vpRamp(gradient_table, sizeof(float), GRAD_RAMP_POINTS,
       grad_ramp_x, grad_ramp_y);
</pre>
<p>
If you wish to use an alternative classification algorithm instead of
the lookup-table classifier then you should store the voxel opacities
you compute in one of the fields of the voxel and define a lookup
table which converts the values in that field into floating-point
numbers. For instance, define a 1-byte opacity field which contains
values in the range 0-255, and declare a lookup table with a linear
ramp mapping those numbers to the range 0.0-1.0.
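<p>
For instance, with a hypothetical 1-byte precomputed-opacity field,
the lookup table would simply map each stored byte back to a
floating-point opacity:

```c
/* Hypothetical setup for precomputed opacities: the voxel stores an
 * opacity byte in the range 0-255, and this table maps it back to
 * 0.0-1.0.  The table would then be registered with
 * vpSetClassifierTable() like any other classification table. */
static float opacity_table[256];

static void init_opacity_table(void)
{
    for (int i = 0; i < 256; i++)
        opacity_table[i] = (float)i / 255.0f;
}
```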
<p>
In addition to setting the classification function, you should also
set the minimum opacity threshold with <code>vpSetd</code>. This
threshold is used to discard voxels which are so transparent that they
do not contribute significantly to the image. The higher the
threshold, the faster the rendering algorithms. For example, to
discard voxels which are at most 5% opaque, use the following:
<pre>
vpSetd(vpc, VP_MIN_VOXEL_OPACITY, 0.05);
</pre>
<p>
<h3> <a name="RLEVolume"> Classified Volumes </a></h3>
The fastest rendering algorithm provided by VolPack uses a
run-length encoded volume data structure which must be computed before
rendering. Three routines are provided to compute this data
structure. Remember to set the opacity transfer function and the
minimum voxel opacity before calling the functions in this subsection.
<p>
If you have already constructed an unclassified volume and defined the
classification function as described in the previous subsections then
use <code>vpClassifyVolume()</code>:
<pre>
vpResult
vpClassifyVolume(vpContext *vpc);
</pre>
This routine reads data from the currently-defined volume array,
classifies it using the current classifier, and then stores it in
run-length encoded form in the rendering context. The volume array is
not modified or deallocated.
<p>
If you wish to load an array of 8-bit scalars and compute a classified
volume directly without building an unclassified volume,
then use <code>vpClassifyScalars()</code>:
<pre>
vpResult
vpClassifyScalars(vpContext *vpc, unsigned char *scalars,
int size, int scalar_field,
int gradient_field, int normal_field);
</pre>
The arguments to this routine are identical to those for
<code>vpVolumeNormals()</code> described above. The difference
between the two routines is that <code>vpClassifyScalars()</code>
stores the result as a classified, run-length encoded volume instead
of as an unclassified volume. The volume size, voxel size, voxel
fields, and classifier must all be declared before calling this routine,
but there is no need to call <code>vpSetRawVoxels()</code>.
<p>
If you wish to classify one scanline of voxel data at a time instead
of loading the entire array of scalar data at once then use
<code>vpClassifyScanline()</code>:
<pre>
vpResult
vpClassifyScanline(vpContext *vpc, void *voxel_scan);
</pre>
<code>Voxel_scan</code> is a pointer to one scanline of voxel data, in
the same format as the full unclassified volume. You could, for
instance, use <code>vpScanlineNormals()</code> to compute the fields
of the scanline before passing it to <code>vpClassifyScanline()</code>.
Each call to this routine appends one new scanline to the current
classified volume. Scanlines cannot be added out of order, and the volume
cannot be rendered until all of the scanlines have been loaded.
<p>
Only one classified volume may be stored in a rendering context at a time.
If you start classifying a new volume, any old classified volume data
is deallocated. You can also force the current classified volume to
be deallocated with <code>vpDestroyClassifiedVolume()</code>:
<pre>
vpResult
vpDestroyClassifiedVolume(vpContext *vpc);
</pre>
Note that if you change the contents of the unclassified volume array
and you wish the classified volume to reflect those changes then you
must call one of the routines in this section to recompute the
classified volume.
<p>
<h3> <a name="Octree"> Min-Max Octrees </a></h3>
A min-max octree is a hierarchical data structure which contains
minimum and maximum values for each field used to index the
classification tables. This data structure can be used to accelerate
rendering unclassified volumes, and it can also accelerate the
computation of a classified volume from an unclassified volume.
<p>
To compute a min-max octree, first define an unclassified volume with
<code>vpSetVolumeSize()</code>, <code>vpSetVoxelSize()</code>,
<code>vpSetVoxelField()</code>, and <code>vpSetRawVoxels()</code>.
Also be sure to initialize the volume data. Now for each
classification table make one call to
<code>vpMinMaxOctreeThreshold()</code>:
<pre>
vpResult
vpMinMaxOctreeThreshold(vpContext *vpc, int param_num, int range);
</pre>
<code>Param_num</code> is the same parameter number you passed to
<code>vpSetClassifierTable()</code>. <code>Range</code> is a range of
table indices for this parameter which you consider to be "small".
The opacity of a voxel should not vary much if the table index is
changed by the amount specified in <code>range</code>. Choosing a
value which is too small or too large may result in a reduced
performance benefit during rendering. You may wish to experiment, but
the octree should improve performance even if you don't use the
optimum range value. You can use the routine
<code>vpOctreeMask()</code> to visualize the effectiveness of the
octree (see the man pages).
<p>
To compute the octree, call <code>vpCreateMinMaxOctree()</code>:
<pre>
vpResult
vpCreateMinMaxOctree(vpContext *vpc, int root_node_size,
int base_node_size);
</pre>
<code>Root_node_size</code> is currently not used but is reserved for
future use. <code>Base_node_size</code> specifies the size in voxels
of one side of the smallest node in the octree. The smaller the
value, the better the resolution of the data structure at the expense
of an increase in size. A value of 4 is a good starting point.
This routine reads the data in the unclassified volume array, computes
an octree, and stores it in the rendering context.
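<p>
As a rough illustration (not VolPack code), the tradeoff can be seen
by counting octree levels: halving <code>base_node_size</code> adds
one level, roughly octupling the number of nodes at the finest level.
Assuming power-of-two sizes:

```c
/* Number of octree levels needed for leaf nodes of side `base` voxels
 * to cover a cubic volume of side `size` voxels (both assumed to be
 * powers of two; an illustration of the size tradeoff only). */
static int octree_levels(int size, int base)
{
    int levels = 1;              /* count the leaf level itself */

    while (base < size) {
        base *= 2;
        levels++;
    }
    return levels;
}
```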
<p>
Once the octree has been computed it will be used automatically
whenever you call <code>vpClassifyVolume()</code> or
<code>vpRenderRawVolume()</code>. If you change the data in the volume
array you MUST call <code>vpCreateMinMaxOctree()</code> to recompute the
octree, or else your renderings will be incorrect. You can also
destroy the octree by calling <code>vpDestroyMinMaxOctree()</code>:
<pre>
vpResult
vpDestroyMinMaxOctree(vpContext *vpc);
</pre>
<p>
<h3> <a name="View"> View Transformations </a></h3>
VolPack maintains four transformation matrices: a modeling
transform, a viewing transform, a projection transform, and a viewport
transform. The primary use of these matrices is to specify a
transformation from the volume data's coordinate system to the image
coordinate system. However, they also affect light direction vectors
(and in future releases of the library they will affect the positioning
of clipping planes and polygon primitives).
<p>
There are five coordinate systems implied by the transformation
matrices: object coordinates, world coordinates, eye coordinates, clip
coordinates, and image coordinates. In the object coordinate system
the volume is entirely contained in a unit cube centered at the
origin. The modeling transform is an affine transform which converts
object coordinates into world coordinates. The modeling transform is
also applied to light direction vectors to transform them to world
coordinates. The view transform is an affine transform that
converts world coordinates into eye coordinates. In eye
coordinates the viewer is looking down the -Z axis. The view
transform is typically used to specify the position of the viewer in
the world coordinate system. The projection
transform converts eye coordinates into clip coordinates. This
transform may specify a perspective or a parallel projection, although
perspective rendering is not yet supported. Finally, the viewport
transform converts the clip coordinate system into image coordinates.
<p>
VolPack provides a number of routines to change the modeling matrix,
viewing matrix and the projection matrix. First, use
<code>vpCurrentMatrix()</code> to select the matrix you wish to
modify:
<pre>
vpResult
vpCurrentMatrix(vpContext *vpc, int option);
</pre>
<code>Option</code> is one of the constants <code>VP_MODEL</code>,
<code>VP_VIEW</code> or <code>VP_PROJECT</code>. Now use the
following functions to modify the matrix contents (see the man pages
for specifics):
<dl>
<dt> <code>vpIdentityMatrix(vpContext *vpc)</code>
<dd> Load the identity matrix into the current transformation matrix.
<dt> <code>vpTranslate(vpContext *vpc, double tx, double ty, double tz)</code>
<dd> Multiply the current transformation matrix by a translation matrix.
<dt> <code>vpRotate(vpContext *vpc, int axis, double degrees)</code>
<dd> Multiply the current transformation matrix by a rotation matrix.
<code>Axis</code> is one of the constants <code>VP_X_AXIS</code>,
<code>VP_Y_AXIS</code> or <code>VP_Z_AXIS</code>.
<dt> <code>vpScale(vpContext *vpc, double sx, double sy, double sz)</code>
<dd> Multiply the current transformation matrix by a scaling matrix.
<dt> <code>vpMultMatrix(vpContext *vpc, vpMatrix4 m)</code>
<dd> Multiply the current transformation matrix by the given matrix.
<dt> <code>vpSetMatrix(vpContext *vpc, vpMatrix4 m)</code>
<dd> Load the given matrix into the current transformation matrix.
</dl>
By default, all of the routines use post-multiplication. For
instance, if the current modeling matrix is M and a rotation matrix R
is applied, then the new transformation is M*R. If a light direction
vector v is now specified (using commands discussed in the section on
shading), it is transformed into M*R*v before it is stored in
the current rendering context. If you prefer pre-multiplication of
matrices then call <code>vpSeti</code> with the
<code>VP_CONCAT_MODE</code> option. Note that vectors are always
post-multiplied.
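<p>
The effect of concatenation order can be illustrated with a minimal
4x4 matrix sketch (plain C, not VolPack code): with
post-multiplication a new matrix R is appended on the right, so it is
applied to vectors first.

```c
#include <string.h>

typedef double Mat4[4][4];

static void mat_identity(Mat4 m)
{
    memset(m, 0, sizeof(Mat4));
    for (int i = 0; i < 4; i++)
        m[i][i] = 1.0;
}

/* out = a * b (out may alias a or b; a temporary is used) */
static void mat_mult(Mat4 a, Mat4 b, Mat4 out)
{
    Mat4 t;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            t[i][j] = 0.0;
            for (int k = 0; k < 4; k++)
                t[i][j] += a[i][k] * b[k][j];
        }
    memcpy(out, t, sizeof(Mat4));
}

/* out = m * v (column vector) */
static void mat_vec(Mat4 m, double v[4], double out[4])
{
    for (int i = 0; i < 4; i++) {
        out[i] = 0.0;
        for (int k = 0; k < 4; k++)
            out[i] += m[i][k] * v[k];
    }
}
```

With a scale S and a translation R, post-multiplication yields S*R
(translate first, then scale), while pre-multiplication yields R*S.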
<p>
Two special routines are provided for creating projection matrices.
These routines always store their result in the projection matrix, not
the current matrix. The first is <code>vpWindow()</code>:
<pre>
vpResult
vpWindow(vpContext *vpc, int type, double left, double right,
double bottom, double top, double near, double far);
</pre>
<code>Type</code> must be the constant <code>VP_PARALLEL</code> to
specify a parallel projection. In a future release perspective
projections will be allowed. The remaining arguments specify the
left, right, bottom, top, near, and far coordinates of the
planes bounding the view volume in eye coordinates. This routine
works just like the <code>glFrustum()</code> and
<code>glOrtho()</code> routines in OpenGL.
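<p>
A sketch of the parallel projection matrix such a call produces,
following the <code>glOrtho</code> convention (VolPack's internal
conventions may differ): the box [left,right] x [bottom,top] x
[-near,-far] in eye coordinates is mapped to the [-1,1] cube in clip
coordinates.

```c
#include <string.h>

/* glOrtho-style parallel projection matrix (illustrative sketch). */
static void ortho_matrix(double l, double r, double b, double t,
                         double n, double f, double m[4][4])
{
    memset(m, 0, 16 * sizeof(double));
    m[0][0] = 2.0 / (r - l);   m[0][3] = -(r + l) / (r - l);
    m[1][1] = 2.0 / (t - b);   m[1][3] = -(t + b) / (t - b);
    m[2][2] = -2.0 / (f - n);  m[2][3] = -(f + n) / (f - n);
    m[3][3] = 1.0;
}
```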
<p>
The second routine for creating a projection matrix uses the PHIGS
viewing model:
<pre>
vpResult
vpWindowPHIGS(vpContext *vpc, vpVector3 vrp, vpVector3 vpn,
vpVector3 vup, vpVector3 prp, double viewport_umin,
double viewport_umax, double viewport_vmin,
double viewport_vmax, double viewport_front,
double viewport_back, int type);
</pre>
<code>Vrp</code> is the view reference point, <code>vpn</code> is the
view plane normal, <code>vup</code> is the view up vector,
<code>prp</code> is the projection reference point, the next six
arguments are the bounds of the viewing volume in view reference
coordinates, and <code>type</code> is the constant
<code>VP_PARALLEL</code> to specify a parallel projection.
Since these parameters specify a viewpoint as well as a viewing
volume, the view matrix is typically left as the identity.
See <cite>Computer Graphics: Principles and Practice</cite> (Chapter
6, 2nd ed.), by Foley, van Dam, Feiner and Hughes for a complete
discussion of the PHIGS viewing model.
<p>
The viewport transform is set automatically when you set the size of
the image, which is discussed in the next subsection.
<p>
Here is an example showing all the steps to set the view
transformation:
<pre>
vpCurrentMatrix(vpc, VP_MODEL);
vpIdentityMatrix(vpc);
vpRotate(vpc, VP_X_AXIS, 90.0);
vpRotate(vpc, VP_Y_AXIS, 23.0);
vpCurrentMatrix(vpc, VP_VIEW);
vpIdentityMatrix(vpc);
vpTranslate(vpc, 0.1, 0.0, 0.0);
vpCurrentMatrix(vpc, VP_PROJECT);
vpWindow(vpc, VP_PARALLEL, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5);
</pre>
<p>
Note that light direction vectors are transformed according to the
modeling matrix in effect at the time of the call to
<code>vpSetLight</code>, and volumes are transformed according to the
modeling matrix in effect at the time of rendering. The same viewing,
projection and viewport transforms are applied to everything at the
time of rendering.
<p>
<h3> <a name="Shaders"> Shading and Lighting </a></h3>
VolPack supports two shading methods: shading via lookup tables, and
shading via callback functions. In addition, routines are provided to
initialize shading tables for the Phong illumination model.
<p>
The built-in routines are designed to support the multiple-material
voxel model described in <cite>Volume Rendering</cite> by Drebin,
Carpenter and Hanrahan in Proceedings of SIGGRAPH 88. Each voxel is
assumed to contain a mixture of basic material types. Each
material type has its own shading parameters, such as color and
shininess. The color of a voxel is found by computing a color for
each material type and then combining the colors in proportion to the
fraction of each material in the voxel.
<p>
This functionality is implemented by storing two table indices in each
voxel and using two lookup tables. One voxel field must contain an encoded
surface normal vector as computed by <code>vpNormalIndex()</code>.
This field is used to index a table which contains a color for each of
the material types. The actual colors retrieved from the table
depend on the surface normal, so directional lights can be
implemented by storing appropriate values in the table. The second voxel
field contains a value which is used to index the second table. Each
row of the second table contains a fractional occupancy for each
material type. These fractional occupancies are used as weights to
determine the relative strength of each color retrieved from the first table.
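<p>
Conceptually, the combination step is just a weighted sum over
material types. A sketch (with hypothetical two-material, RGB
dimensions; an illustration of the shading model, not VolPack code):

```c
#define NUM_MATERIALS  2
#define COLOR_CHANNELS 3

/* Combine the per-material colors retrieved from the first table
 * using the material fractions from the second table as weights. */
static void combine_colors(float color[NUM_MATERIALS][COLOR_CHANNELS],
                           float weight[NUM_MATERIALS],
                           float out[COLOR_CHANNELS])
{
    for (int c = 0; c < COLOR_CHANNELS; c++) {
        out[c] = 0.0f;
        for (int m = 0; m < NUM_MATERIALS; m++)
            out[c] += weight[m] * color[m][c];
    }
}
```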
<p>
To declare a lookup-table shader, use
<code>vpSetLookupShader()</code>:
<pre>
vpResult
vpSetLookupShader(vpContext *vpc, int color_channels,
int num_materials, int color_field,
float *color_table, int color_table_size,
int weight_field,
float *weight_table, int weight_table_size);
</pre>
<code>Color_channels</code> is 1 for grayscale renderings or 3 for
color (RGB) renderings. <code>Num_materials</code> is the number of
material types. <code>Color_field</code> is the voxel field number
for the color lookup table index. <code>Color_table</code> is the
corresponding lookup table, and <code>color_table_size</code> is the
size of the table in bytes. <code>Weight_field</code>,
<code>weight_table</code> and <code>weight_table_size</code> are the
field number, lookup table and table size for the second table which
contains weights for each material type. The color table must be an
array with the following dimensions:
<pre>
float color_table[n][num_materials][color_channels];
</pre>
where <code>n</code> is the number of possible values for the color
field. The colors are values in the range 0.0-1.0 (zero intensity to
full intensity). The weight table must be an array with the following
dimensions:
<pre>
float weight_table[m][num_materials];
</pre>
where <code>m</code> is the number of possible values for the weight
field. Weights are in the range 0.0-1.0. If there is only one
material type then the weight table is not used and the corresponding
parameters may be set to 0.
<p>
Returning to our example, the following code declares an RGB
shader with two material types:
<pre>
#define COLOR_CHANNELS 3
#define MATERIALS 2
float color_table[NORM_MAX+1][MATERIALS][COLOR_CHANNELS];
float weight_table[SCALAR_MAX+1][MATERIALS];
vpSetLookupShader(vpc, COLOR_CHANNELS, MATERIALS,
NORM_FIELD, color_table, sizeof(color_table),
SCALAR_FIELD, weight_table, sizeof(weight_table));
</pre>
<p>
The weight table can be initialized using the <code>vpRamp()</code>
function previously described, or using a loop which fills in values
in whatever way you choose. To initialize the color table, VolPack
provides a routine called <code>vpShadeTable()</code>. Before calling
the routine you must set the lighting and shading parameters as
follows.
<p>
To set the lighting parameters, use <code>vpSetLight()</code>:
<pre>
vpResult
vpSetLight(vpContext *vpc, int light_num, int property,
double n0, double n1, double n2);
</pre>
<code>Light_num</code> is one of the constants <code>VP_LIGHT0</code>,
<code>VP_LIGHT1</code>, ..., <code>VP_LIGHT5</code> and indicates
which of the six light sources you wish to adjust.
<code>Property</code> is either <code>VP_COLOR</code> or
<code>VP_DIRECTION</code>. For <code>VP_COLOR</code> the remaining
three arguments are the RGB components of the light color, in the
range 0.0-1.0. For <code>VP_DIRECTION</code> the remaining three
arguments are the x, y and z components of the direction of the light
source. This vector is transformed by the current modeling matrix
before it is stored in the rendering context (see <a
href="#View">View Transformations</a>). You must also call
<code>vpEnable()</code> to enable the light. By default, light 0 is
enabled and all others are disabled.
<p>
For example, to create a cyan light coming from above the viewer's
right shoulder, use the following:
<pre>
vpSetLight(vpc, VP_LIGHT1, VP_COLOR, 0.0, 1.0, 1.0);
vpSetLight(vpc, VP_LIGHT1, VP_DIRECTION, -0.6, 0.6, 1.0);
vpEnable(vpc, VP_LIGHT1, 1);
</pre>
<p>
You can also select "two-sided" lights using <code>vpEnable()</code>
with the <code>VP_LIGHT_BOTH_SIDES</code> option.
Under this lighting model each directional light shines in two
directions, both in the specified direction and in the opposite
direction.
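<p>
The difference can be illustrated with the diffuse term of the
lighting model (a sketch, not VolPack internals): a one-sided light
contributes nothing when the surface normal faces away from it, while
a two-sided light contributes the absolute value of the dot product.

```c
#include <math.h>

/* Diffuse contribution given n_dot_l = N . L (one-sided light). */
static double diffuse_one_sided(double n_dot_l)
{
    return n_dot_l > 0.0 ? n_dot_l : 0.0;
}

/* Two-sided light: it shines along both +L and -L, so only the
 * magnitude of the dot product matters. */
static double diffuse_two_sided(double n_dot_l)
{
    return fabs(n_dot_l);
}
```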
<p>
To set the material parameters for a particular material type, call
<code>vpSetMaterial()</code>:
<pre>
vpResult
vpSetMaterial(vpContext *vpc, int material_num, int property,
int surface_side, double r, double g, double b);
</pre>
<code>Material_num</code> is one of the constants
<code>VP_MATERIAL0</code>, <code>VP_MATERIAL1</code>, ...,
<code>VP_MATERIAL5</code> and indicates which material you wish to
adjust. <code>Property</code> is one of the following:
<dl>
<dt> <code>VP_AMBIENT</code>
<dd> Set the R, G and B ambient light reflection coefficients.
<dt> <code>VP_DIFFUSE</code>
<dd> Set the R, G and B diffuse light reflection coefficients.
<dt> <code>VP_SPECULAR</code>
<dd> Set the R, G and B specular light reflection coefficients.
<dt> <code>VP_SHINYNESS</code>
<dd> Set the specular exponent. The <code>g</code> and <code>b</code>
arguments are not used.
</dl>
<code>Surface_side</code> is either <code>VP_EXTERIOR</code>,
<code>VP_INTERIOR</code>, or <code>VP_BOTH_SIDES</code>. In the first
case the parameters will only affect voxels on the "exterior" side of
a surface, which by default means that the voxel's gradient points
towards the viewer (you can use <code>vpEnable()</code> with the
<code>VP_REVERSE_SURFACE_SIDES</code> option to reverse the
meaning of exterior and interior). In the second case the parameters
will only affect voxels whose gradient points away from the viewer.
In the third case, all voxels are affected.
<p>
Here is an example which sets material 0 to reflect red and green
ambient and diffuse light, and to have fairly strong specular
highlights which retain the color of the light source:
<pre>
vpSetMaterial(vpc, VP_MATERIAL0, VP_AMBIENT,
VP_BOTH_SIDES, 0.1, 0.1, 0.0);
vpSetMaterial(vpc, VP_MATERIAL0, VP_DIFFUSE,
VP_BOTH_SIDES, 0.4, 0.4, 0.0);
vpSetMaterial(vpc, VP_MATERIAL0, VP_SPECULAR,
VP_BOTH_SIDES, 0.5, 0.5, 0.5);
vpSetMaterial(vpc, VP_MATERIAL0, VP_SHINYNESS,
VP_BOTH_SIDES, 10.0, 0.0, 0.0);
</pre>
<p>
Now that all of the lighting and shading parameters have been set,
the color lookup table has been declared with
<code>vpSetLookupShader()</code>, and the viewing parameters have been
set, you can call
<code>vpShadeTable()</code> to recompute the entries of the lookup table:
<pre>
vpResult
vpShadeTable(vpContext *vpc);
</pre>
This routine computes all of the entries in the currently-defined
color table using the current lighting and material parameters and the
current view transformation. You should
call <code>vpShadeTable()</code> after any changes to the shading or
viewing parameters, but before calling any of the rendering routines.
<p>
If you wish to use some other shading model you have two options.
One approach is to create your own routine to initialize the
shading lookup tables. If you take this approach then you may define
tables of any size (there is no need to use VolPack's encoded normal
vectors). For example, you could use a color transfer function which
assigns a unique color to each possible value of the scalar field in
your volume. The second option is to define a callback routine which
will be called to shade each voxel during rendering. You do so by
calling <code>vpSetCallback()</code> instead of
<code>vpSetLookupShader()</code>. For example, to declare a grayscale
shading callback function use the following call:
<pre>
void myshader();
vpSetCallback(vpc, VP_GRAY_SHADE_FUNC, myshader);
</pre>
The function <code>myshader()</code> can do whatever you like to
compute a color. See the man page for <code>vpSetCallback()</code>
for more details. Using callback functions can lead to significant
performance degradation during rendering.
<p>
There is one more shading option which is independent of the shading
model you choose: depth cueing. Depth cueing allows you to introduce
black "fog" which makes more distant voxels appear darker than voxels which
are close to the viewer, thereby making it easier to distinguish
foreground objects from background objects. To enable depth cueing
call <code>vpEnable()</code>:
<pre>
vpEnable(vpc, VP_DEPTH_CUE, 1);
</pre>
You can use <code>vpSetDepthCueing()</code> to change the depth cueing
parameters:
<pre>
vpResult
vpSetDepthCueing(vpContext *vpc, double front_factor,
double density);
</pre>
<code>Front_factor</code> is the transparency of the fog at the front
plane of the viewing volume. It must be a positive number and it is
usually less than 1.0 (although larger numbers can be used to
brighten the foreground). <code>Density</code> controls the
"density" of the fog, or how rapidly objects recede into darkness.
The equation for the transparency of the fog is:
<blockquote>
T = front_factor * exp(-density * depth)
</blockquote>
where "depth" is 0 at the front plane of the viewing volume and 1 at
the back plane. Each voxel color component is multiplied by the fog
transparency during rendering.
<p>
VolPack also supports a fast one-pass shadow algorithm implemented
with lookup tables (in a similar fashion to the procedure described
above). See the man page for <code>vpSetShadowLookupShader</code>.
<p>
<h3> <a name="Image"> Images </a></h3>
The last step before rendering is to declare the array in which
VolPack should store the image. Use <code>vpSetImage</code>:
<pre>
vpResult
vpSetImage(vpContext *vpc, unsigned char *image, int width,
int height, int bytes_per_scan, int pixel_type);
</pre>
<code>Image</code> is a pointer to the array for the image. The next
two arguments are the size of the image. These arguments also
implicitly determine the viewport transformation: the clip coordinates are
scaled to make the left, right, top and bottom planes of the viewing
volume align with the sides of the image. The next
argument is the number of bytes in one scanline of the image. This
argument can be used to add padding to the end of each scanline in
case the image display routines on your system impose alignment
restrictions on the beginning of each scanline. Finally, the last
argument is a code that specifies the format of the pixels in the
image. The following formats are allowed:
<dl>
<dt> <code>VP_ALPHA</code>
<dd> opacity (1 byte/pixel)
<dt> <code>VP_LUMINANCE</code>
<dd> grayscale color (1 byte/pixel)
<dt> <code>VP_LUMINANCEA</code>
<dd> grayscale color plus opacity (2 bytes/pixel)
<dt> <code>VP_RGB</code>
<dd> RGB color (3 bytes/pixel)
<dt> <code>VP_RGBA</code>
<dd> RGB color plus opacity (4 bytes/pixel)
<dt> <code>VP_BGR</code>
<dd> RGB color, byte-swapped (3 bytes/pixel)
<dt> <code>VP_ABGR</code>
<dd> RGB color plus opacity, byte-swapped (4 bytes/pixel)
</dl>
Use the luminance formats only with grayscale shaders, and the RGB
formats only with color shaders. The image should have dimensions:
<pre>
unsigned char image[height][width][bytes_per_pixel];
</pre>
where <code>bytes_per_pixel</code> is the size of the pixel as
determined by the pixel format. (This layout assumes
<code>bytes_per_scan</code> equals <code>width*bytes_per_pixel</code>;
if the scanlines are padded then each row of the array must instead be
<code>bytes_per_scan</code> bytes long.)
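<p>
Given these parameters, the byte offset of pixel (x, y) follows
directly (a sketch; any padding bytes at the end of each scanline are
simply skipped over):

```c
#include <stddef.h>

/* Address of pixel (x, y) in an image declared with vpSetImage(). */
static unsigned char *pixel_addr(unsigned char *image, int x, int y,
                                 int bytes_per_scan, int bytes_per_pixel)
{
    return image + (size_t)y * (size_t)bytes_per_scan
                 + (size_t)x * (size_t)bytes_per_pixel;
}
```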
<p>
<h3> <a name="Rendering"> Rendering </a></h3>
VolPack provides two rendering routines. The first routine is used to
render pre-classified volumes which are created with the routines in
the <a href="#RLEVolume">Classified Volumes</a> subsection:
<pre>
vpResult
vpRenderClassifiedVolume(vpContext *vpc);
</pre>
This routine uses the current viewing and shading parameters to render
the classified volume stored in the rendering context. The result is
placed in the image buffer declared with <code>vpSetImage()</code>.
<p>
The second routine is used to render unclassified volumes which are
created with the routines in the <a href="#Volumes">Volumes</a> subsection:
<pre>
vpResult
vpRenderRawVolume(vpContext *vpc);
</pre>
This routine is identical to <code>vpRenderClassifiedVolume()</code>
except that the source of the volume data is the raw volume data
stored in the rendering context, and the volume data is classified
on-the-fly during rendering. If a min-max octree data structure is
present in the rendering context then it is used to accelerate rendering.
However, even with the octree this routine is slower than
<code>vpRenderClassifiedVolume()</code> because of the additional work
which must be performed.
<p>
There is one important state variable which can be used to improve
rendering performance: the maximum ray opacity threshold. During
compositing, if the opacity of an image pixel reaches this threshold
then no more voxel data is composited into the pixel. The threshold
should be a number slightly less than one (0.95 is a good value), so
that there is very little image degradation but voxels which do not
make a significant contribution to the image can be skipped. You set
the threshold with <code>vpSetd()</code> and the
<code>VP_MAX_RAY_OPACITY</code> option. For example:
<pre>
vpSetd(vpc, VP_MAX_RAY_OPACITY, 0.95);
</pre>
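<p>
The effect of the threshold can be sketched with a plain-C model of
front-to-back compositing (an illustration, not VolPack's actual inner
loop): once the accumulated opacity reaches the threshold, the
remaining samples along the ray are skipped.

```c
/* Composite sample opacities front to back, stopping early when the
 * accumulated opacity reaches max_ray_opacity.  Returns the number of
 * samples actually composited; the final opacity is left in
 * *ray_opacity. */
static int composite_ray(const float *sample_opacity, int n,
                         float max_ray_opacity, float *ray_opacity)
{
    float alpha = 0.0f;
    int i;

    for (i = 0; i < n; i++) {
        alpha += (1.0f - alpha) * sample_opacity[i];
        if (alpha >= max_ray_opacity) {
            i++;                 /* this sample was composited */
            break;
        }
    }
    *ray_opacity = alpha;
    return i;
}
```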
<p>
<h3> <a name="State"> State Variables </a></h3>
The previous subsections have described many routines which set state
variables in a rendering context. This subsection briefly mentions
the routines available to retrieve the values of these variables.
<p>
The function <code>vpGeti()</code> is used to retrieve integer
state variables:
<pre>
vpResult
vpGeti(vpContext *vpc, int option, int *iptr);
</pre>
<code>Option</code> is a constant indicating the particular value you
wish to get. The man page for <code>vpGeti</code> lists all of the
options. The value is stored in the integer pointed to by
<code>iptr</code>. As always, the return value of the routine is a
result code (not the state variable value).
<p>
To retrieve floating-point state variables use <code>vpGetd()</code>:
<pre>
vpResult
vpGetd(vpContext *vpc, int option, double *dptr);
</pre>
This routine stores its result in the double pointed to by
<code>dptr</code>. To retrieve pointers (e.g. the current raw volume
data pointer) use <code>vpGetp</code>:
<pre>
vpResult
vpGetp(vpContext *vpc, int option, void **pptr);
</pre>
<code>Pptr</code> is a pointer to a pointer, so the value of the state
variable is stored in <code>*pptr</code>. Transformation matrices can
be retrieved with <code>vpGetMatrix()</code>:
<pre>
vpResult
vpGetMatrix(vpContext *vpc, int option, vpMatrix4 m);
</pre>
The matrix values are stored in <code>m</code>.
<p>
Lighting and material parameters can be retrieved with
<code>vpGetLight()</code> and <code>vpGetMaterial()</code>, whose
arguments are similar to those of the corresponding routines for
setting these parameters.
<p>
<h3> <a name="Utilities"> Utility Functions </a></h3>
VolPack provides a small collection of convenient utility functions.
First, there are routines to store volume data structures in files and
load them back into a rendering context. They allow you to perform
all of the time-consuming preprocessing steps once and save the
results in a file. See the man pages for the following routines:
<ul>
<li> <code>vpStoreRawVolume()</code>
<li> <code>vpLoadRawVolume()</code>
<li> <code>vpStoreClassifiedVolume()</code>
<li> <code>vpLoadClassifiedVolume()</code>
<li> <code>vpStoreMinMaxOctree()</code>
<li> <code>vpLoadMinMaxOctree()</code>
</ul>
<p>
The routine <code>vpExtract()</code> allows you to extract a
rectangular solid region from either the raw volume data or the
classified volume data. You can extract individual fields of the
volume (e.g. just the scalar data), or computed values (e.g. opacity
computed with the current classification function).
<p>
The routine <code>vpTranspose()</code> allows you to transpose the raw
volume data. This can be useful to improve rendering performance for
very large volumes. You can use <code>vpGeti</code> with the
<code>VP_VIEW_AXIS</code> option to determine how the volume should be
transposed for optimum performance given the current viewing parameters.
<p>
The routine <code>vpResample()</code> allows you to scale a volume to
a different resolution using a variety of resampling filters. It is
useful for scaling very large volumes down to a smaller size for fast
previewing, or to filter low-resolution data sets to a higher
resolution with a high-quality filter before rendering.
<p>
<h3> <a name="Errors"> Result Codes and Error Handling </a></h3>
Almost all of the routines in VolPack return a result of type
<code>vpResult</code> which is an integer. Routines return the value
<code>VP_OK</code> to indicate success. Any other value indicates an
error. See the man page for each function for the possible error
codes and their specific meanings.
<p>
When an error occurs VolPack also records the error code in the
rendering context. You can retrieve the error code later by calling
<code>vpGetError()</code>. If another error occurs before you call
<code>vpGetError()</code> then only the first one is returned. The
recorded value is then reset.
<p>
The routine <code>vpGetErrorString()</code> can be used to convert an
error code into a printable string.
<p>
<h2> <a name="Section3"> Section 3: Tips and Pointers </a></h2>
<h3> <a name="Speed"> Maximizing Rendering Speed </a></h3>
There are several techniques to keep in mind to get the maximum
possible performance out of VolPack. First of all, use the appropriate
rendering algorithm for the task at hand. If you want to render a
volume from several viewpoints without changing the classification
function then it is well worth the time to preprocess the volume into
the run-length encoded format before rendering. Use the min-max octree data
structure if the classification function does change for every
rendering but the volume data remains fixed.
<p>
Second, choose the various thresholds carefully. Changing the minimum
opacity threshold for classification and the maximum ray opacity for
rendering can have a big impact on rendering speed. Changing the
parameter range thresholds for the min-max octree can also improve
performance.
<p>
Third, minimize the need for reallocating internal data structures by
predeclaring their sizes. Internal buffers are used to store an
intermediate image during rendering and a depth cueing lookup table.
The sizes of these tables can change as the viewing parameters change,
so the tables may have to be reallocated over the course of a
multi-frame rendering loop. You can give VolPack "hints" for the
sizes of these data structures using <code>vpSeti()</code> with the
<code>VP_INT_WIDTH_HINT</code>, <code>VP_INT_HEIGHT_HINT</code> and
<code>VP_DEPTH_CUE_SIZE_HINT</code> options.
<p>
Finally, if you are using <code>vpRenderRawVolume()</code> with a
large volume then you may need to transpose the volume as the viewing
direction changes from one principal axis to another.
<p>
<h3> <a name="Quality"> Maximizing Image Quality </a></h3>
Two important techniques will help you produce
images free of distracting aliasing artifacts. The first is to
choose fairly smooth classification functions. Functions
with discontinuities or very abrupt transitions introduce very sharp
transitions in the classified volume, and these transitions may be too
sharp to sample properly. The result can be jagged boundaries and
spurious patterns in the rendered image, artifacts that may be
difficult to distinguish from real features of the data set.
To diagnose this problem, try extracting slices of the classified volume
data with <code>vpExtract()</code> and check whether the opacity images
contain a lot of aliasing. Smooth transitions produce the best images.
<p>
The second technique is to prefilter the volume data with a high-quality
filter before scaling or zooming, rather than using the viewing
transformation to do the scaling. Prefiltering may help for two
reasons. The rendering routines use a simple bilinear
reconstruction filter, but if you prefilter you can use a
higher-quality filter that does a better job of reconstruction.
Furthermore, the resolution of the rendered image is limited by the
number of samples in the volume, so very large magnification factors
produce visible aliasing artifacts. Upscaling the volume with a
high-quality filter before rendering avoids this problem.
Several utility routines, described in the <code>vpResample()</code>
man page, are provided for prefiltering a volume.
<p>
<h3> <a name="Help"> Software Support </a></h3>
If you encounter problems, or have bug reports or bug fixes, please send mail to:
<blockquote>
<address>
volpack@graphics.stanford.edu
</address>
</blockquote>
The author makes no commitment to fix bugs or provide support.
However, future releases with fixes and enhancements are planned.
<p>
If you wish to be informed of future updates to the software then you
should subscribe to the volpack-announce mailing list. To do so, send
an email message to
<blockquote>
<address>
majordomo@lists.stanford.edu
</address>
</blockquote>
with the following message body:
<blockquote>
subscribe volpack-announce
</blockquote>
To be removed from the list, send the message:
<blockquote>
unsubscribe volpack-announce
</blockquote>
Mail will be sent to the list only to announce bug fixes and new releases.
<p>
If you like the library then drop us a note describing what you use it
for!
<h3> <a name="Source"> Obtaining the Software </a></h3>
VolPack is available from the Stanford Computer Graphics Laboratory's
Web page <a
href="http://www-graphics.stanford.edu/software/volpack/#Distribution">
(http://www-graphics.stanford.edu/software/volpack/#Distribution) </a>
or via anonymous ftp (ftp://www-graphics.stanford.edu/pub/volpack/).
<hr>
Last update: 16 December 1994
<address>
volpack@graphics.stanford.edu
</address>
</body>
</html>