/* StarPU --- Runtime system for heterogeneous multicore architectures.
*
* Copyright (C) 2009-2020 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
* Copyright (C) 2016 Uppsala University
*
* StarPU is free software; you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation; either version 2.1 of the License, or (at
* your option) any later version.
*
* StarPU is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
*
* See the GNU Lesser General Public License in COPYING.LGPL for more details.
*/
/*! \page ExecutionConfigurationThroughEnvironmentVariables Execution Configuration Through Environment Variables
The behavior of the StarPU library and tools may be tuned thanks to
the following environment variables.
\section EnvConfiguringWorkers Configuring Workers
<dl>
<dt>STARPU_NCPU</dt>
<dd>
\anchor STARPU_NCPU
\addindex __env__STARPU_NCPU
Specify the number of CPU workers (thus not including workers
dedicated to control accelerators). Note that by default, StarPU will
not allocate more CPU workers than there are physical CPUs, and that
some CPUs are used to control the accelerators.
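For instance, here is a minimal sketch (the worker count 4 is only an arbitrary illustration) of setting this variable from the program itself, before starpu_init() reads the environment; exporting it from the shell is equivalent:
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Equivalent to "export STARPU_NCPU=4" in the shell; the variable
	 * must be set before starpu_init() reads the environment.
	 * The value 4 is only an illustration. */
	setenv("STARPU_NCPU", "4", 1);

	if (starpu_init(NULL) != 0)
		return 1;

	/* ... register data and submit tasks as usual ... */

	starpu_shutdown();
	return 0;
}
\endcode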
</dd>
<dt>STARPU_RESERVE_NCPU</dt>
<dd>
\anchor STARPU_RESERVE_NCPU
\addindex __env__STARPU_RESERVE_NCPU
Specify the number of CPU cores that should not be used by StarPU, so the
application can use starpu_get_next_bindid() and starpu_bind_thread_on() to bind
its own threads.
This option is ignored if \ref STARPU_NCPU or starpu_conf::ncpus is set.
</dd>
<dt>STARPU_NCPUS</dt>
<dd>
\anchor STARPU_NCPUS
\addindex __env__STARPU_NCPUS
This variable is deprecated. You should use \ref STARPU_NCPU.
</dd>
<dt>STARPU_NCUDA</dt>
<dd>
\anchor STARPU_NCUDA
\addindex __env__STARPU_NCUDA
Specify the number of CUDA devices that StarPU can use. If
\ref STARPU_NCUDA is lower than the number of physical devices, it is
possible to select which CUDA devices should be used by means of the
environment variable \ref STARPU_WORKERS_CUDAID. By default, StarPU will
create as many CUDA workers as there are CUDA devices.
</dd>
<dt>STARPU_NWORKER_PER_CUDA</dt>
<dd>
\anchor STARPU_NWORKER_PER_CUDA
\addindex __env__STARPU_NWORKER_PER_CUDA
Specify the number of workers per CUDA device, and thus the number of kernels
which will be concurrently running on the devices, i.e. the number of CUDA
streams. The default value is 1.
</dd>
<dt>STARPU_CUDA_THREAD_PER_WORKER</dt>
<dd>
\anchor STARPU_CUDA_THREAD_PER_WORKER
\addindex __env__STARPU_CUDA_THREAD_PER_WORKER
Specify whether the CUDA driver should use one thread per stream (1) or a
single thread to drive all the streams of a device or of all devices (0);
\ref STARPU_CUDA_THREAD_PER_DEV then determines whether there is one thread per device or one
thread for all devices. The default value is 0. Setting it to 1 is contradictory
with setting \ref STARPU_CUDA_THREAD_PER_DEV.
</dd>
<dt>STARPU_CUDA_THREAD_PER_DEV</dt>
<dd>
\anchor STARPU_CUDA_THREAD_PER_DEV
\addindex __env__STARPU_CUDA_THREAD_PER_DEV
Specify whether the CUDA driver should use one thread per device (1) or a
single thread to drive all the devices (0). The default value is 1. It does not
make sense to set this variable if \ref STARPU_CUDA_THREAD_PER_WORKER is set to 1
(since \ref STARPU_CUDA_THREAD_PER_DEV is then meaningless).
</dd>
<dt>STARPU_CUDA_PIPELINE</dt>
<dd>
\anchor STARPU_CUDA_PIPELINE
\addindex __env__STARPU_CUDA_PIPELINE
Specify how many asynchronous tasks are submitted in advance on CUDA
devices. This makes it possible, for instance, to overlap task management with the execution
of previous tasks; it also allows concurrent execution on Fermi cards, which
otherwise incur spurious synchronizations. The default is 2. Setting the value to 0 forces a synchronous
execution of all tasks.
</dd>
<dt>STARPU_NOPENCL</dt>
<dd>
\anchor STARPU_NOPENCL
\addindex __env__STARPU_NOPENCL
OpenCL equivalent of the environment variable \ref STARPU_NCUDA.
</dd>
<dt>STARPU_OPENCL_PIPELINE</dt>
<dd>
\anchor STARPU_OPENCL_PIPELINE
\addindex __env__STARPU_OPENCL_PIPELINE
Specify how many asynchronous tasks are submitted in advance on OpenCL
devices. This makes it possible, for instance, to overlap task management with the execution
of previous tasks. The default is 2. Setting the value to 0 forces a synchronous
execution of all tasks.
</dd>
<dt>STARPU_OPENCL_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ON_CPUS
\addindex __env__STARPU_OPENCL_ON_CPUS
By default, the OpenCL driver only enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ON_CPUS
to 1, the OpenCL driver will also enable CPU devices.
</dd>
<dt>STARPU_OPENCL_ONLY_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ONLY_ON_CPUS
\addindex __env__STARPU_OPENCL_ONLY_ON_CPUS
By default, the OpenCL driver enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ONLY_ON_CPUS
to 1, the OpenCL driver will ONLY enable CPU devices.
</dd>
<dt>STARPU_NMIC</dt>
<dd>
\anchor STARPU_NMIC
\addindex __env__STARPU_NMIC
MIC equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MIC devices to use.
</dd>
<dt>STARPU_NMICTHREADS</dt>
<dd>
\anchor STARPU_NMICTHREADS
\addindex __env__STARPU_NMICTHREADS
Number of threads to use on the MIC devices.
</dd>
<dt>STARPU_NMPI_MS</dt>
<dd>
\anchor STARPU_NMPI_MS
\addindex __env__STARPU_NMPI_MS
MPI Master Slave equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MPI Master Slave devices to use.
</dd>
<dt>STARPU_NMPIMSTHREADS</dt>
<dd>
\anchor STARPU_NMPIMSTHREADS
\addindex __env__STARPU_NMPIMSTHREADS
Number of threads to use on the MPI Slave devices.
</dd>
<dt>STARPU_MPI_MASTER_NODE</dt>
<dd>
\anchor STARPU_MPI_MASTER_NODE
\addindex __env__STARPU_MPI_MASTER_NODE
Specify which MPI node (given by its MPI rank) will be the master.
</dd>
<dt>STARPU_WORKERS_NOBIND</dt>
<dd>
\anchor STARPU_WORKERS_NOBIND
\addindex __env__STARPU_WORKERS_NOBIND
Setting it to non-zero will prevent StarPU from binding its threads to
CPUs. This is for instance useful when running the testsuite in parallel.
</dd>
<dt>STARPU_WORKERS_GETBIND</dt>
<dd>
\anchor STARPU_WORKERS_GETBIND
\addindex __env__STARPU_WORKERS_GETBIND
Setting it to non-zero makes StarPU use the OS-provided CPU binding to determine
how many and which CPU cores it should use. This is notably useful when running
several StarPU-MPI processes on the same host, to let the MPI launcher set the
CPUs to be used.
</dd>
<dt>STARPU_WORKERS_CPUID</dt>
<dd>
\anchor STARPU_WORKERS_CPUID
\addindex __env__STARPU_WORKERS_CPUID
Passing an array of integers in \ref STARPU_WORKERS_CPUID
specifies on which logical CPU the different workers should be
bound. For instance, if <c>STARPU_WORKERS_CPUID = "0 1 4 5"</c>, the first
worker will be bound to logical CPU #0, the second CPU worker will be bound to
logical CPU #1 and so on. Note that the logical ordering of the CPUs is either
determined by the OS, or provided by the library <c>hwloc</c> in case it is
available. Ranges can be provided: for instance, <c>STARPU_WORKERS_CPUID = "1-3
5"</c> will bind the first three workers on logical CPUs #1, #2, and #3, and the
fourth worker on logical CPU #5. Unbound ranges can also be provided:
<c>STARPU_WORKERS_CPUID = "1-"</c> will bind the workers starting from logical
CPU #1 up to the last CPU.
Note that the first workers correspond to the CUDA workers, then come the
OpenCL workers, and finally the CPU workers. For example if
we have <c>STARPU_NCUDA=1</c>, <c>STARPU_NOPENCL=1</c>, <c>STARPU_NCPU=2</c>
and <c>STARPU_WORKERS_CPUID = "0 2 1 3"</c>, the CUDA device will be controlled
by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and
the logical CPUs #1 and #3 will be used by the CPU workers.
If the number of workers is larger than the array given in
\ref STARPU_WORKERS_CPUID, the workers are bound to the logical CPUs in a
round-robin fashion: if <c>STARPU_WORKERS_CPUID = "0 1"</c>, the first
and the third (resp. second and fourth) workers will be put on CPU #0
(resp. CPU #1).
This variable is ignored if the field
starpu_conf::use_explicit_workers_bindid passed to starpu_init() is
set.
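As a hedged sketch of the example above (the values merely reproduce it), the same configuration can be set programmatically before starpu_init() reads the environment; exporting the variables from the shell is equivalent:
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* One CUDA worker, one OpenCL worker and two CPU workers... */
	setenv("STARPU_NCUDA", "1", 1);
	setenv("STARPU_NOPENCL", "1", 1);
	setenv("STARPU_NCPU", "2", 1);
	/* ... bound respectively to logical CPUs #0, #2, #1 and #3. */
	setenv("STARPU_WORKERS_CPUID", "0 2 1 3", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	starpu_shutdown();
	return 0;
}
\endcode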
</dd>
<dt>STARPU_MAIN_THREAD_BIND</dt>
<dd>
\anchor STARPU_MAIN_THREAD_BIND
\addindex __env__STARPU_MAIN_THREAD_BIND
When defined, this makes StarPU bind the thread that calls starpu_initialize() to
a reserved CPU, subtracted from the CPU workers.
</dd>
<dt>STARPU_MAIN_THREAD_CPUID</dt>
<dd>
\anchor STARPU_MAIN_THREAD_CPUID
\addindex __env__STARPU_MAIN_THREAD_CPUID
When defined, this makes StarPU bind the thread that calls starpu_initialize() to
the given CPU ID.
</dd>
<dt>STARPU_MPI_THREAD_CPUID</dt>
<dd>
\anchor STARPU_MPI_THREAD_CPUID
\addindex __env__STARPU_MPI_THREAD_CPUID
When defined, this makes StarPU bind its MPI thread to the given CPU ID. Setting
it to -1 (the default value) will use a reserved CPU, subtracted from the CPU
workers.
</dd>
<dt>STARPU_MPI_NOBIND</dt>
<dd>
\anchor STARPU_MPI_NOBIND
\addindex __env__STARPU_MPI_NOBIND
Setting it to non-zero will prevent StarPU from binding the MPI thread to
a separate core. This is for instance useful when running the testsuite on a single system.
</dd>
<dt>STARPU_WORKERS_CUDAID</dt>
<dd>
\anchor STARPU_WORKERS_CUDAID
\addindex __env__STARPU_WORKERS_CUDAID
Similarly to the \ref STARPU_WORKERS_CPUID environment variable, it is
possible to select which CUDA devices should be used by StarPU. On a machine
equipped with 4 GPUs, setting <c>STARPU_WORKERS_CUDAID = "1 3"</c> and
<c>STARPU_NCUDA=2</c> specifies that 2 CUDA workers should be created, and that
they should use CUDA devices #1 and #3 (the logical ordering of the devices is
the one reported by CUDA).
This variable is ignored if the field
starpu_conf::use_explicit_workers_cuda_gpuid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_OPENCLID</dt>
<dd>
\anchor STARPU_WORKERS_OPENCLID
\addindex __env__STARPU_WORKERS_OPENCLID
OpenCL equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_opencl_gpuid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_MICID</dt>
<dd>
\anchor STARPU_WORKERS_MICID
\addindex __env__STARPU_WORKERS_MICID
MIC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_mic_deviceid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKER_TREE</dt>
<dd>
\anchor STARPU_WORKER_TREE
\addindex __env__STARPU_WORKER_TREE
Define to 1 to enable the tree iterator in schedulers.
</dd>
<dt>STARPU_SINGLE_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SINGLE_COMBINED_WORKER
\addindex __env__STARPU_SINGLE_COMBINED_WORKER
If set, StarPU will create several workers which won't be able to work
concurrently. It will by default create combined workers whose size goes from 1
to the total number of CPU workers in the system. \ref STARPU_MIN_WORKERSIZE
and \ref STARPU_MAX_WORKERSIZE can be used to change this default.
</dd>
<dt>STARPU_MIN_WORKERSIZE</dt>
<dd>
\anchor STARPU_MIN_WORKERSIZE
\addindex __env__STARPU_MIN_WORKERSIZE
\ref STARPU_MIN_WORKERSIZE
specifies the minimum size of the combined workers (instead of the default 2).
</dd>
<dt>STARPU_MAX_WORKERSIZE</dt>
<dd>
\anchor STARPU_MAX_WORKERSIZE
\addindex __env__STARPU_MAX_WORKERSIZE
\ref STARPU_MAX_WORKERSIZE
specifies the maximum size of the combined workers (instead of the
number of CPU workers in the system).
</dd>
<dt>STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
\addindex __env__STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
Let the user decide how many elements are allowed between combined workers
created from hwloc information, i.e. the maximum arity of the synthesized tree
of combined workers. For instance, in the case of sockets with 6
cores without shared L2 caches, if \ref STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER is
set to 6, no combined worker will be synthesized beyond one for the socket
and one per core. If it is set to 3, 3 intermediate combined workers will be
synthesized, to divide the socket cores into 3 chunks of 2 cores. If it is set to
2, 2 intermediate combined workers will be synthesized, to divide the socket
cores into 2 chunks of 3 cores, and then 3 additional combined workers will be
synthesized, to divide the former synthesized workers into a group of 2 cores
and the remaining core (for which no combined worker is synthesized since there
is already a normal worker for it).
The default, 2, thus makes StarPU tend to build binary trees of combined
workers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_COPY
Disable asynchronous copies between CPU and GPU devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
Disable asynchronous copies between CPU and CUDA devices.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
Disable asynchronous copies between CPU and OpenCL devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
Disable asynchronous copies between CPU and MIC devices.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
Disable asynchronous copies between CPU and MPI Slave devices.
</dd>
<dt>STARPU_ENABLE_CUDA_GPU_GPU_DIRECT</dt>
<dd>
\anchor STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
\addindex __env__STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
Enable (1) or disable (0) direct CUDA transfers from GPU to GPU, without copying
through RAM. The default is enabled.
This makes it possible to measure the performance impact of GPU-Direct.
</dd>
<dt>STARPU_DISABLE_PINNING</dt>
<dd>
\anchor STARPU_DISABLE_PINNING
\addindex __env__STARPU_DISABLE_PINNING
Disable (1) or enable (0) the pinning of host memory allocated through starpu_malloc(), starpu_memory_pin()
and friends. The default is enabled.
This makes it possible to measure the performance impact of memory pinning.
</dd>
<dt>STARPU_BACKOFF_MIN</dt>
<dd>
\anchor STARPU_BACKOFF_MIN
\addindex __env__STARPU_BACKOFF_MIN
Set the minimum number of cycles to pause when spinning (exponential backoff). The default value is 1.
</dd>
<dt>STARPU_BACKOFF_MAX</dt>
<dd>
\anchor STARPU_BACKOFF_MAX
\addindex __env__STARPU_BACKOFF_MAX
Set the maximum number of cycles to pause when spinning (exponential backoff). The default value is 32.
</dd>
<dt>STARPU_MIC_SINK_PROGRAM_NAME</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_NAME
\addindex __env__STARPU_MIC_SINK_PROGRAM_NAME
todo
</dd>
<dt>STARPU_MIC_SINK_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_PATH
\addindex __env__STARPU_MIC_SINK_PROGRAM_PATH
todo
</dd>
<dt>STARPU_MIC_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_PROGRAM_PATH
\addindex __env__STARPU_MIC_PROGRAM_PATH
todo
</dd>
</dl>
\section ConfiguringTheSchedulingEngine Configuring The Scheduling Engine
<dl>
<dt>STARPU_SCHED</dt>
<dd>
\anchor STARPU_SCHED
\addindex __env__STARPU_SCHED
Choose between the different scheduling policies proposed by StarPU: work
stealing, random, greedy, with performance models, etc.
Use <c>STARPU_SCHED=help</c> to get the list of available schedulers.
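As a hedged sketch, the same choice can be made programmatically through the starpu_conf::sched_policy_name field; <c>dmda</c> below is just one of the available policies:
\code{.c}
#include <starpu.h>

int main(void)
{
	struct starpu_conf conf;
	starpu_conf_init(&conf);
	/* Select a scheduling policy by name, as STARPU_SCHED=dmda would. */
	conf.sched_policy_name = "dmda";

	if (starpu_init(&conf) != 0)
		return 1;
	starpu_shutdown();
	return 0;
}
\endcode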
</dd>
<dt>STARPU_MIN_PRIO</dt>
<dd>
\anchor STARPU_MIN_PRIO_env
\addindex __env__STARPU_MIN_PRIO
Set the minimum priority used by priorities-aware schedulers.
</dd>
<dt>STARPU_MAX_PRIO</dt>
<dd>
\anchor STARPU_MAX_PRIO_env
\addindex __env__STARPU_MAX_PRIO
Set the maximum priority used by priorities-aware schedulers.
</dd>
<dt>STARPU_CALIBRATE</dt>
<dd>
\anchor STARPU_CALIBRATE
\addindex __env__STARPU_CALIBRATE
If this variable is set to 1, the performance models are calibrated during
the execution. If it is set to 2, the previous values are dropped to restart
calibration from scratch. Setting this variable to 0 disables calibration, which
is the default behaviour.
Note: this currently only applies to <c>dm</c> and <c>dmda</c> scheduling policies.
</dd>
<dt>STARPU_CALIBRATE_MINIMUM</dt>
<dd>
\anchor STARPU_CALIBRATE_MINIMUM
\addindex __env__STARPU_CALIBRATE_MINIMUM
Define the minimum number of calibration measurements that will be made
before considering that the performance model is calibrated. The default value is 10.
</dd>
<dt>STARPU_BUS_CALIBRATE</dt>
<dd>
\anchor STARPU_BUS_CALIBRATE
\addindex __env__STARPU_BUS_CALIBRATE
If this variable is set to 1, the bus is recalibrated during initialization.
</dd>
<dt>STARPU_PREFETCH</dt>
<dd>
\anchor STARPU_PREFETCH
\addindex __env__STARPU_PREFETCH
Indicate whether data prefetching should be enabled (0 means
that it is disabled). If prefetching is enabled, when a task is scheduled to be
executed e.g. on a GPU, StarPU will request an asynchronous transfer in
advance, so that data is already present on the GPU when the task starts. As a
result, computation and data transfers are overlapped.
Note that prefetching is enabled by default in StarPU.
</dd>
<dt>STARPU_SCHED_ALPHA</dt>
<dd>
\anchor STARPU_SCHED_ALPHA
\addindex __env__STARPU_SCHED_ALPHA
To estimate the cost of a task StarPU takes into account the estimated
computation time (obtained thanks to performance models). The alpha factor is
the coefficient to be applied to it before adding it to the communication part.
</dd>
<dt>STARPU_SCHED_BETA</dt>
<dd>
\anchor STARPU_SCHED_BETA
\addindex __env__STARPU_SCHED_BETA
To estimate the cost of a task StarPU takes into account the estimated
data transfer time (obtained thanks to performance models). The beta factor is
the coefficient to be applied to it before adding it to the computation part.
</dd>
<dt>STARPU_SCHED_GAMMA</dt>
<dd>
\anchor STARPU_SCHED_GAMMA
\addindex __env__STARPU_SCHED_GAMMA
Define the execution time penalty of a joule (\ref Energy-basedScheduling).
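As an indicative sketch only (the exact expression depends on the scheduling policy), the <c>dm</c>/<c>dmda</c> family of schedulers combines these factors roughly as <c>cost = alpha * estimated_computation_time + beta * estimated_transfer_time + gamma * estimated_energy</c>, and the worker minimizing this cost is selected.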
</dd>
<dt>STARPU_SCHED_READY</dt>
<dd>
\anchor STARPU_SCHED_READY
\addindex __env__STARPU_SCHED_READY
For a modular scheduler with sorted queues below the decision component, workers
pick up a task which has most of its data already available. Setting this to 0
disables this.
</dd>
<dt>STARPU_IDLE_POWER</dt>
<dd>
\anchor STARPU_IDLE_POWER
\addindex __env__STARPU_IDLE_POWER
Define the idle power of the machine (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_PROFILING</dt>
<dd>
\anchor STARPU_PROFILING
\addindex __env__STARPU_PROFILING
Enable on-line performance monitoring (\ref EnablingOn-linePerformanceMonitoring).
</dd>
</dl>
\section Extensions Extensions
<dl>
<dt>SOCL_OCL_LIB_OPENCL</dt>
<dd>
\anchor SOCL_OCL_LIB_OPENCL
\addindex __env__SOCL_OCL_LIB_OPENCL
The SOCL test suite is only run when the environment variable
\ref SOCL_OCL_LIB_OPENCL is defined. It should contain the location
of the file <c>libOpenCL.so</c> of the OCL ICD implementation.
</dd>
<dt>OCL_ICD_VENDORS</dt>
<dd>
\anchor OCL_ICD_VENDORS
\addindex __env__OCL_ICD_VENDORS
When using SOCL with OpenCL ICD
(https://forge.imag.fr/projects/ocl-icd/), this variable may be used
to point to the directory where ICD files are installed. The default
directory is <c>/etc/OpenCL/vendors</c>. StarPU installs ICD
files in the directory <c>$prefix/share/starpu/opencl/vendors</c>.
</dd>
<dt>STARPU_COMM_STATS</dt>
<dd>
\anchor STARPU_COMM_STATS
\addindex __env__STARPU_COMM_STATS
Communication statistics for starpumpi (\ref MPIDebug)
will be enabled when the environment variable \ref STARPU_COMM_STATS
is defined to a value other than 0.
</dd>
<dt>STARPU_MPI_CACHE</dt>
<dd>
\anchor STARPU_MPI_CACHE
\addindex __env__STARPU_MPI_CACHE
Communication cache for starpumpi (\ref MPISupport) will be
disabled when the environment variable \ref STARPU_MPI_CACHE is set
to 0. It is enabled by default or for any other value of the variable
\ref STARPU_MPI_CACHE.
</dd>
<dt>STARPU_MPI_COMM</dt>
<dd>
\anchor STARPU_MPI_COMM
\addindex __env__STARPU_MPI_COMM
Communication trace for starpumpi (\ref MPISupport) will be
enabled when the environment variable \ref STARPU_MPI_COMM is set
to 1, and StarPU has been configured with the option
\ref enable-verbose "--enable-verbose".
</dd>
<dt>STARPU_MPI_CACHE_STATS</dt>
<dd>
\anchor STARPU_MPI_CACHE_STATS
\addindex __env__STARPU_MPI_CACHE_STATS
When set to 1, statistics are enabled for the communication cache (\ref MPISupport). For now,
it prints messages on the standard output when data are added or removed from the received
communication cache.
</dd>
<dt>STARPU_MPI_PRIORITIES</dt>
<dd>
\anchor STARPU_MPI_PRIORITIES
\addindex __env__STARPU_MPI_PRIORITIES
When set to 0, the use of priorities to order MPI communications is disabled
(\ref MPISupport).
</dd>
<dt>STARPU_MPI_NDETACHED_SEND</dt>
<dd>
\anchor STARPU_MPI_NDETACHED_SEND
\addindex __env__STARPU_MPI_NDETACHED_SEND
This sets the number of send requests that StarPU-MPI will emit concurrently. The default is 10.
</dd>
<dt>STARPU_MPI_NREADY_PROCESS</dt>
<dd>
\anchor STARPU_MPI_NREADY_PROCESS
\addindex __env__STARPU_MPI_NREADY_PROCESS
This sets the number of requests that StarPU-MPI will submit to MPI before
polling for termination of existing requests. The default is 10.
</dd>
<dt>STARPU_MPI_FAKE_SIZE</dt>
<dd>
\anchor STARPU_MPI_FAKE_SIZE
\addindex __env__STARPU_MPI_FAKE_SIZE
Setting to a number makes StarPU believe that there are as many MPI nodes, even
if it was run on only one MPI node. This allows one e.g. to simulate the execution
of one of the nodes of a big cluster without actually running the rest.
It of course does not provide computation results or timings.
</dd>
<dt>STARPU_MPI_FAKE_RANK</dt>
<dd>
\anchor STARPU_MPI_FAKE_RANK
\addindex __env__STARPU_MPI_FAKE_RANK
Setting to a number makes StarPU believe that it runs the given MPI node, even
if it was run on only one MPI node. This allows one e.g. to simulate the execution
of one of the nodes of a big cluster without actually running the rest.
It of course does not provide computation results or timings.
</dd>
<dt>STARPU_MPI_DRIVER_CALL_FREQUENCY</dt>
<dd>
\anchor STARPU_MPI_DRIVER_CALL_FREQUENCY
\addindex __env__STARPU_MPI_DRIVER_CALL_FREQUENCY
When set to a positive value, activates the interleaving of the execution of
tasks with the progression of MPI communications (\ref MPISupport). The
starpu_mpi_init_conf() function must have been called by the application
for that environment variable to be used. When set to 0, the MPI progression
thread does not use the driver given by the user at all, and only focuses on
making MPI communications progress.
</dd>
<dt>STARPU_MPI_DRIVER_TASK_FREQUENCY</dt>
<dd>
\anchor STARPU_MPI_DRIVER_TASK_FREQUENCY
\addindex __env__STARPU_MPI_DRIVER_TASK_FREQUENCY
When set to a positive value, allows the mechanism interleaving the execution of
tasks with the progression of MPI communications to execute several tasks before
checking communication requests again (\ref MPISupport). The
starpu_mpi_init_conf() function must have been called by the application
for that environment variable to be used, and the
\ref STARPU_MPI_DRIVER_CALL_FREQUENCY environment variable must be set to a positive value.
</dd>
<dt>STARPU_SIMGRID_TRANSFER_COST</dt>
<dd>
\anchor STARPU_SIMGRID_TRANSFER_COST
\addindex __env__STARPU_SIMGRID_TRANSFER_COST
When set to 1 (which is the default), data transfers (over PCI bus, typically) are taken into account
in SimGrid mode.
</dd>
<dt>STARPU_SIMGRID_CUDA_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_CUDA_MALLOC_COST
\addindex __env__STARPU_SIMGRID_CUDA_MALLOC_COST
When set to 1 (which is the default), CUDA malloc costs are taken into account
in SimGrid mode.
</dd>
<dt>STARPU_SIMGRID_CUDA_QUEUE_COST</dt>
<dd>
\anchor STARPU_SIMGRID_CUDA_QUEUE_COST
\addindex __env__STARPU_SIMGRID_CUDA_QUEUE_COST
When set to 1 (which is the default), CUDA task and transfer queueing costs are
taken into account in SimGrid mode.
</dd>
<dt>STARPU_PCI_FLAT</dt>
<dd>
\anchor STARPU_PCI_FLAT
\addindex __env__STARPU_PCI_FLAT
When unset or set to 0, the platform file created for SimGrid will
contain PCI bandwidths and routes.
</dd>
<dt>STARPU_SIMGRID_QUEUE_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_QUEUE_MALLOC_COST
\addindex __env__STARPU_SIMGRID_QUEUE_MALLOC_COST
When unset or set to 1, simulate within SimGrid the GPU transfer queueing.
</dd>
<dt>STARPU_MALLOC_SIMULATION_FOLD</dt>
<dd>
\anchor STARPU_MALLOC_SIMULATION_FOLD
\addindex __env__STARPU_MALLOC_SIMULATION_FOLD
Define the size of the file used for folding virtual allocation, in
MiB. The default is 1, thus allowing 64GiB virtual memory when Linux's
<c>sysctl vm.max_map_count</c> value is the default 65535.
</dd>
<dt>STARPU_SIMGRID_TASK_SUBMIT_COST</dt>
<dd>
\anchor STARPU_SIMGRID_TASK_SUBMIT_COST
\addindex __env__STARPU_SIMGRID_TASK_SUBMIT_COST
When set to 1 (which is the default), task submission costs are taken into
account in SimGrid mode. This provides more accurate SimGrid predictions,
especially for the beginning of the execution.
</dd>
<dt>STARPU_SIMGRID_FETCHING_INPUT_COST</dt>
<dd>
\anchor STARPU_SIMGRID_FETCHING_INPUT_COST
\addindex __env__STARPU_SIMGRID_FETCHING_INPUT_COST
When set to 1 (which is the default), fetching input costs are taken into
account in SimGrid mode. This provides more accurate SimGrid predictions,
especially regarding data transfers.
</dd>
<dt>STARPU_SIMGRID_SCHED_COST</dt>
<dd>
\anchor STARPU_SIMGRID_SCHED_COST
\addindex __env__STARPU_SIMGRID_SCHED_COST
When set to 1 (0 is the default), scheduling costs are taken into
account in SimGrid mode. This provides more accurate SimGrid predictions,
and allows studying scheduling overhead of the runtime system. However,
it also makes simulation non-deterministic.
</dd>
<dt>STARPU_SINK</dt>
<dd>
\anchor STARPU_SINK
\addindex __env__STARPU_SINK
Variable defined by StarPU when running MPI Xeon Phi on the sink.
</dd>
</dl>
\section MiscellaneousAndDebug Miscellaneous And Debug
<dl>
<dt>STARPU_HOME</dt>
<dd>
\anchor STARPU_HOME
\addindex __env__STARPU_HOME
Specify the main directory in which StarPU stores its
configuration files. The default is <c>$HOME</c> on Unix environments,
and <c>$USERPROFILE</c> on Windows environments.
</dd>
<dt>STARPU_PATH</dt>
<dd>
\anchor STARPU_PATH
\addindex __env__STARPU_PATH
Only used on Windows environments.
Specify the main directory in which StarPU is installed
(\ref RunningABasicStarPUApplicationOnMicrosoft)
</dd>
<dt>STARPU_PERF_MODEL_DIR</dt>
<dd>
\anchor STARPU_PERF_MODEL_DIR
\addindex __env__STARPU_PERF_MODEL_DIR
Specify the main directory in which StarPU stores its
performance model files. The default is <c>$STARPU_HOME/.starpu/sampling</c>.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_CPU</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CPU
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CPU
When this is set to 0, StarPU will assume that CPU devices do not have the same
performance, and thus use a different performance model for each of them, making
kernel calibration much longer, since measurements have to be made for each CPU
core.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_CUDA</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
When this is set to 1, StarPU will assume that all CUDA devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be made once for all
CUDA GPUs.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
When this is set to 1, StarPU will assume that all OpenCL devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be made once for all
OpenCL devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MIC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MIC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MIC
When this is set to 1, StarPU will assume that all MIC devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be made once for all
MIC devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
When this is set to 1, StarPU will assume that all MPI Slave devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be made once for all
MPI Slave devices.
</dd>
<dt>STARPU_HOSTNAME</dt>
<dd>
\anchor STARPU_HOSTNAME
\addindex __env__STARPU_HOSTNAME
When set, force the hostname to be used when dealing with performance model
files. Models are indexed by machine name. When running for example on
a homogeneous cluster, it is possible to share the models between
machines by setting <c>export STARPU_HOSTNAME=some_global_name</c>.
</dd>
<dt>STARPU_OPENCL_PROGRAM_DIR</dt>
<dd>
\anchor STARPU_OPENCL_PROGRAM_DIR
\addindex __env__STARPU_OPENCL_PROGRAM_DIR
Specify the directory where the OpenCL codelet source files are
located. The function starpu_opencl_load_program_source() looks
for the codelet in the current directory, in the directory specified
by the environment variable \ref STARPU_OPENCL_PROGRAM_DIR, in the
directory <c>share/starpu/opencl</c> of the installation directory of
StarPU, and finally in the source directory of StarPU.
</dd>
<dt>STARPU_SILENT</dt>
<dd>
\anchor STARPU_SILENT
\addindex __env__STARPU_SILENT
Allow disabling verbose mode at runtime when StarPU
has been configured with the option \ref enable-verbose "--enable-verbose". It also
disables the display of StarPU information and warning messages.
</dd>
<dt>STARPU_MPI_DEBUG_LEVEL_MIN</dt>
<dd>
\anchor STARPU_MPI_DEBUG_LEVEL_MIN
\addindex __env__STARPU_MPI_DEBUG_LEVEL_MIN
Set the minimum level of debug when StarPU
has been configured with the option \ref enable-mpi-verbose "--enable-mpi-verbose".
</dd>
<dt>STARPU_MPI_DEBUG_LEVEL_MAX</dt>
<dd>
\anchor STARPU_MPI_DEBUG_LEVEL_MAX
\addindex __env__STARPU_MPI_DEBUG_LEVEL_MAX
Set the maximum level of debug when StarPU
has been configured with the option \ref enable-mpi-verbose "--enable-mpi-verbose".
</dd>
<dt>STARPU_LOGFILENAME</dt>
<dd>
\anchor STARPU_LOGFILENAME
\addindex __env__STARPU_LOGFILENAME
Specify in which file the debugging output should be saved to.
</dd>
<dt>STARPU_FXT_PREFIX</dt>
<dd>
\anchor STARPU_FXT_PREFIX
\addindex __env__STARPU_FXT_PREFIX
Specify in which directory to save the generated trace if FxT is enabled.
</dd>
<dt>STARPU_FXT_SUFFIX</dt>
<dd>
\anchor STARPU_FXT_SUFFIX
\addindex __env__STARPU_FXT_SUFFIX
Specify in which file to save the generated trace if FxT is enabled.
</dd>
<dt>STARPU_FXT_TRACE</dt>
<dd>
\anchor STARPU_FXT_TRACE
\addindex __env__STARPU_FXT_TRACE
Specify whether to generate (1) or not (0) the FxT trace in /tmp/prof_file_XXX_YYY (the directory and file name can be changed with \ref STARPU_FXT_PREFIX and \ref STARPU_FXT_SUFFIX). The default is 1 (generate it)
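A hedged sketch of redirecting the FxT trace output; the directory and suffix below are arbitrary examples:
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Generate the raw FxT trace, saving it in /tmp/mytraces with an
	 * application-chosen suffix instead of the default location. */
	setenv("STARPU_FXT_TRACE", "1", 1);
	setenv("STARPU_FXT_PREFIX", "/tmp/mytraces", 1);
	setenv("STARPU_FXT_SUFFIX", "my_run", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... run tasks ... */
	starpu_shutdown();
	return 0;
}
\endcode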
</dd>
<dt>STARPU_LIMIT_CUDA_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_devid_MEM
\addindex __env__STARPU_LIMIT_CUDA_devid_MEM
Specify the maximum number of megabytes that should be
available to the application on the CUDA device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, the variable overwrites the value of the variable
\ref STARPU_LIMIT_CUDA_MEM.
</dd>
<dt>STARPU_LIMIT_CUDA_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_MEM
\addindex __env__STARPU_LIMIT_CUDA_MEM
Specify the maximum number of megabytes that should be
available to the application on each CUDA device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
</dd>
<dt>STARPU_LIMIT_OPENCL_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_devid_MEM
\addindex __env__STARPU_LIMIT_OPENCL_devid_MEM
Specify the maximum number of megabytes that should be
available to the application on the OpenCL device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, the variable overwrites the value of the variable
\ref STARPU_LIMIT_OPENCL_MEM.
</dd>
<dt>STARPU_LIMIT_OPENCL_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_MEM
\addindex __env__STARPU_LIMIT_OPENCL_MEM
Specify the maximum number of megabytes that should be
available to the application on each OpenCL device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
</dd>
<dt>STARPU_LIMIT_CPU_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CPU_MEM
\addindex __env__STARPU_LIMIT_CPU_MEM
Specify the maximum number of megabytes that should be
available to the application in the main CPU memory. Setting it enables allocation
cache in main memory. Setting it to zero lets StarPU overflow memory.
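As a hedged illustration (the limits below are arbitrary values), several memory limits can be combined to emulate small devices:
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Emulate devices with little memory: at most 1024 MB in main
	 * memory and 512 MB on each CUDA device (arbitrary values). */
	setenv("STARPU_LIMIT_CPU_MEM", "1024", 1);
	setenv("STARPU_LIMIT_CUDA_MEM", "512", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... run tasks ... */
	starpu_shutdown();
	return 0;
}
\endcode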
</dd>
<dt>STARPU_LIMIT_CPU_NUMA_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CPU_NUMA_devid_MEM
\addindex __env__STARPU_LIMIT_CPU_NUMA_devid_MEM
Specify the maximum number of megabytes that should be available to the
application on the NUMA node with the OS identifier <c>devid</c>. Setting it
overrides the value of STARPU_LIMIT_CPU_MEM.
</dd>
<dt>STARPU_LIMIT_CPU_NUMA_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CPU_NUMA_MEM
\addindex __env__STARPU_LIMIT_CPU_NUMA_MEM
Specify the maximum number of megabytes that should be available to the
application on each NUMA node. This is the same as specifying that same amount
with \ref STARPU_LIMIT_CPU_NUMA_devid_MEM for each NUMA node number. The total
memory available to StarPU will thus be this amount multiplied by the number of
NUMA nodes used by StarPU. Any \ref STARPU_LIMIT_CPU_NUMA_devid_MEM additionally
specified will take over STARPU_LIMIT_CPU_NUMA_MEM.
</dd>
<dt>STARPU_LIMIT_BANDWIDTH</dt>
<dd>
\anchor STARPU_LIMIT_BANDWIDTH
\addindex __env__STARPU_LIMIT_BANDWIDTH
Specify the maximum available PCI bandwidth of the system in MB/s. This can only
be effective with SimGrid simulation. This allows one to easily override the
bandwidths stored in the platform file generated from measurements on the native
system, which is convenient e.g. for experimenting with various bandwidth values.
</dd>
<dt>STARPU_MINIMUM_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_MINIMUM_AVAILABLE_MEM
\addindex __env__STARPU_MINIMUM_AVAILABLE_MEM
Specify the minimum percentage of memory that should be available in GPUs
(or in main memory, when using out of core), below which a reclaiming pass is
performed. The default is 0%.
</dd>
<dt>STARPU_TARGET_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_TARGET_AVAILABLE_MEM
\addindex __env__STARPU_TARGET_AVAILABLE_MEM
Specify the target percentage of memory that should be reached in
GPUs (or in main memory, when using out of core), when performing a periodic
reclaiming pass. The default is 0%.
</dd>
<dt>STARPU_MINIMUM_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_MINIMUM_CLEAN_BUFFERS
\addindex __env__STARPU_MINIMUM_CLEAN_BUFFERS
Specify the minimum percentage of number of buffers that should be clean in GPUs
(or in main memory, when using out of core), below which asynchronous writebacks will be
issued. The default is 5%.
</dd>
<dt>STARPU_TARGET_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_TARGET_CLEAN_BUFFERS
\addindex __env__STARPU_TARGET_CLEAN_BUFFERS
Specify the target percentage of number of buffers that should be reached in
GPUs (or in main memory, when using out of core), when performing an asynchronous
writeback pass. The default is 10%.
</dd>
<dt>STARPU_DIDUSE_BARRIER</dt>
<dd>
\anchor STARPU_DIDUSE_BARRIER
\addindex __env__STARPU_DIDUSE_BARRIER
When set to 1, StarPU will never evict a piece of data if it has not been used
by at least one task. This avoids odd behaviors under high memory pressure, but
can lead to deadlocks, so is to be considered experimental only.
</dd>
<dt>STARPU_DISK_SWAP</dt>
<dd>
\anchor STARPU_DISK_SWAP
\addindex __env__STARPU_DISK_SWAP
Specify a path where StarPU can push data when the main memory is getting
full.
</dd>
<dt>STARPU_DISK_SWAP_BACKEND</dt>
<dd>
\anchor STARPU_DISK_SWAP_BACKEND
\addindex __env__STARPU_DISK_SWAP_BACKEND
Specify the backend to be used by StarPU to push data when the main
memory is getting full. The default is unistd (i.e. using read/write functions),
other values are stdio (i.e. using fread/fwrite), unistd_o_direct (i.e. using
read/write with O_DIRECT), leveldb (i.e. using a leveldb database), and hdf5
(i.e. using HDF5 library).
</dd>
<dt>STARPU_DISK_SWAP_SIZE</dt>
<dd>
\anchor STARPU_DISK_SWAP_SIZE
\addindex __env__STARPU_DISK_SWAP_SIZE
Specify the maximum size in MiB to be used by StarPU to push data when the main
memory is getting full. The default is unlimited.
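A hedged sketch of enabling disk swap through the three STARPU_DISK_SWAP* variables above; the path, backend and size below are arbitrary examples:
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Let StarPU push data to /tmp/starpu_swap with the unistd backend
	 * (plain read/write calls) when main memory gets full, up to
	 * 10240 MiB (arbitrary values). */
	setenv("STARPU_DISK_SWAP", "/tmp/starpu_swap", 1);
	setenv("STARPU_DISK_SWAP_BACKEND", "unistd", 1);
	setenv("STARPU_DISK_SWAP_SIZE", "10240", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... run tasks ... */
	starpu_shutdown();
	return 0;
}
\endcode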
</dd>
<dt>STARPU_LIMIT_MAX_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MAX_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MAX_SUBMITTED_TASKS
Allow users to control the task submission flow by specifying
to StarPU a maximum number of submitted tasks allowed at a given time, i.e. when
this limit is reached, task submission becomes blocking until enough tasks have
completed, as specified by \ref STARPU_LIMIT_MIN_SUBMITTED_TASKS.
Setting it enables allocation cache buffer reuse in main memory.
</dd>
<dt>STARPU_LIMIT_MIN_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MIN_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MIN_SUBMITTED_TASKS
Allow users to control the task submission flow by specifying
to StarPU a submitted task threshold to wait before unblocking task submission. This
variable has to be used in conjunction with \ref STARPU_LIMIT_MAX_SUBMITTED_TASKS
which puts the task submission thread to
sleep. Setting it enables allocation cache buffer reuse in main memory.
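A hedged sketch of throttled task submission with these two variables; the codelet and the thresholds are arbitrary, the point being that starpu_task_submit() blocks once the maximum is reached and resumes once the number of pending tasks falls back to the minimum:
\code{.c}
#include <stdlib.h>
#include <starpu.h>

static void dummy_cpu_func(void *buffers[], void *cl_arg)
{
	(void)buffers; (void)cl_arg;
}

static struct starpu_codelet dummy_cl =
{
	.cpu_funcs = {dummy_cpu_func},
	.nbuffers = 0,
};

int main(void)
{
	/* Block submission above 1000 pending tasks, resume below 800
	 * (arbitrary thresholds). */
	setenv("STARPU_LIMIT_MAX_SUBMITTED_TASKS", "1000", 1);
	setenv("STARPU_LIMIT_MIN_SUBMITTED_TASKS", "800", 1);

	if (starpu_init(NULL) != 0)
		return 1;

	int i;
	for (i = 0; i < 100000; i++)
	{
		struct starpu_task *task = starpu_task_create();
		task->cl = &dummy_cl;
		/* Blocks here whenever too many tasks are already submitted. */
		if (starpu_task_submit(task) != 0)
			break;
	}

	starpu_task_wait_for_all();
	starpu_shutdown();
	return 0;
}
\endcode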
</dd>
<dt>STARPU_TRACE_BUFFER_SIZE</dt>
<dd>
\anchor STARPU_TRACE_BUFFER_SIZE
\addindex __env__STARPU_TRACE_BUFFER_SIZE
Set the buffer size for recording trace events in MiB. Setting it to a large
size helps avoid pauses in the trace while it is recorded to disk. This
however also consumes memory. The default value is 64.
</dd>
<dt>STARPU_GENERATE_TRACE</dt>
<dd>
\anchor STARPU_GENERATE_TRACE
\addindex __env__STARPU_GENERATE_TRACE
When set to <c>1</c>, indicate that StarPU should automatically
generate a Paje trace when starpu_shutdown() is called.
</dd>
<dt>STARPU_GENERATE_TRACE_OPTIONS</dt>
<dd>
\anchor STARPU_GENERATE_TRACE_OPTIONS
\addindex __env__STARPU_GENERATE_TRACE_OPTIONS
When the variable \ref STARPU_GENERATE_TRACE is set to <c>1</c> to
generate a Paje trace, this variable can be set to specify options (see
<c>starpu_fxt_tool --help</c>).
</dd>
<dt>STARPU_ENABLE_STATS</dt>
<dd>
\anchor STARPU_ENABLE_STATS
\addindex __env__STARPU_ENABLE_STATS
When defined, enable gathering various data statistics (\ref DataStatistics).
</dd>
<dt>STARPU_MEMORY_STATS</dt>
<dd>
\anchor STARPU_MEMORY_STATS
\addindex __env__STARPU_MEMORY_STATS
When set to 0, disables the display of memory statistics on data which
have not been unregistered at the end of the execution (\ref MemoryFeedback).
</dd>
<dt>STARPU_MAX_MEMORY_USE</dt>
<dd>
\anchor STARPU_MAX_MEMORY_USE
\addindex __env__STARPU_MAX_MEMORY_USE
When set to 1, display at the end of the execution the maximum memory used by
StarPU for internal data structures during execution.
</dd>
<dt>STARPU_BUS_STATS</dt>
<dd>
\anchor STARPU_BUS_STATS
\addindex __env__STARPU_BUS_STATS
When defined, statistics about data transfers will be displayed when calling
starpu_shutdown() (\ref Profiling). By default, statistics are printed
on the standard error stream; use the environment variable \ref
STARPU_BUS_STATS_FILE to define another filename.
</dd>
<dt>STARPU_BUS_STATS_FILE</dt>
<dd>
\anchor STARPU_BUS_STATS_FILE
\addindex __env__STARPU_BUS_STATS_FILE
Define the name of the file where to display data transfers
statistics, see \ref STARPU_BUS_STATS.
</dd>
<dt>STARPU_WORKER_STATS</dt>
<dd>
\anchor STARPU_WORKER_STATS
\addindex __env__STARPU_WORKER_STATS
When defined, statistics about the workers will be displayed when calling
starpu_shutdown() (\ref Profiling). When combined with the
environment variable \ref STARPU_PROFILING, it displays the energy
consumption (\ref Energy-basedScheduling). By default, statistics are
printed on the standard error stream; use the environment variable
\ref STARPU_WORKER_STATS_FILE to define another filename.
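A hedged sketch combining this variable with \ref STARPU_PROFILING, so that per-worker statistics are printed when starpu_shutdown() is called; the output file name is arbitrary:
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Enable on-line profiling and worker statistics; the report is
	 * printed when starpu_shutdown() is called. */
	setenv("STARPU_PROFILING", "1", 1);
	setenv("STARPU_WORKER_STATS", "1", 1);
	/* Optionally redirect the report to a file instead of stderr. */
	setenv("STARPU_WORKER_STATS_FILE", "worker_stats.txt", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... run tasks ... */
	starpu_shutdown();
	return 0;
}
\endcode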
</dd>
<dt>STARPU_WORKER_STATS_FILE</dt>
<dd>
\anchor STARPU_WORKER_STATS_FILE
\addindex __env__STARPU_WORKER_STATS_FILE
Define the name of the file where to display workers statistics, see
\ref STARPU_WORKER_STATS.
</dd>
<dt>STARPU_STATS</dt>
<dd>
\anchor STARPU_STATS
\addindex __env__STARPU_STATS
When set to 0, data statistics will not be displayed at the
end of the execution of an application (\ref DataStatistics).
</dd>
<dt>STARPU_WATCHDOG_TIMEOUT</dt>
<dd>
\anchor STARPU_WATCHDOG_TIMEOUT
\addindex __env__STARPU_WATCHDOG_TIMEOUT
When set to a value other than 0, makes StarPU print an error
message whenever it has not terminated any task for the given time (in µs),
while letting the application continue normally. Should
be used in combination with \ref STARPU_WATCHDOG_CRASH
(see \ref DetectionStuckConditions).
</dd>
<dt>STARPU_WATCHDOG_CRASH</dt>
<dd>
\anchor STARPU_WATCHDOG_CRASH
\addindex __env__STARPU_WATCHDOG_CRASH
When set to a value other than 0, trigger a crash when the watchdog
timeout is reached, thus allowing the situation to be caught in gdb, etc.
(see \ref DetectionStuckConditions)
</dd>
<dt>STARPU_WATCHDOG_DELAY</dt>
<dd>
\anchor STARPU_WATCHDOG_DELAY
\addindex __env__STARPU_WATCHDOG_DELAY
Delay the activation of the watchdog by the given time (in µs). This can
be convenient for letting the application initialize data etc. before starting
to look for idle time.
</dd>
<dt>STARPU_TASK_BREAK_ON_PUSH</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_PUSH
\addindex __env__STARPU_TASK_BREAK_ON_PUSH
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being pushed to the scheduler, which will be nicely caught by debuggers
(see \ref DebuggingScheduling)
</dd>
<dt>STARPU_TASK_BREAK_ON_SCHED</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_SCHED
\addindex __env__STARPU_TASK_BREAK_ON_SCHED
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being scheduled by the scheduler (at a scheduler-specific
point), which will be nicely caught by debuggers.
This only works for schedulers which have such a scheduling point defined
(see \ref DebuggingScheduling)
</dd>
<dt>STARPU_TASK_BREAK_ON_POP</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_POP
\addindex __env__STARPU_TASK_BREAK_ON_POP
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being popped from the scheduler, which will be nicely caught by debuggers
(see \ref DebuggingScheduling)
</dd>
<dt>STARPU_TASK_BREAK_ON_EXEC</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_EXEC
\addindex __env__STARPU_TASK_BREAK_ON_EXEC
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being executed, which will be nicely caught by debuggers
(see \ref DebuggingScheduling)
</dd>
<dt>STARPU_DISABLE_KERNELS</dt>
<dd>
\anchor STARPU_DISABLE_KERNELS
\addindex __env__STARPU_DISABLE_KERNELS
When set to a value other than 1, it disables actually calling the kernel
functions, thus allowing to quickly check that the task scheme is working
properly, without performing the actual application-provided computation.
</dd>
<dt>STARPU_HISTORY_MAX_ERROR</dt>
<dd>
\anchor STARPU_HISTORY_MAX_ERROR
\addindex __env__STARPU_HISTORY_MAX_ERROR
History-based performance models will drop measurements which are really far
from the measured average. This specifies the allowed variation. The default is
50 (%), i.e. the measurement is allowed to be x1.5 faster or /1.5 slower than the
average.
</dd>
<dt>STARPU_RAND_SEED</dt>
<dd>
\anchor STARPU_RAND_SEED
\addindex __env__STARPU_RAND_SEED
The random scheduler and some examples use random numbers for their own
working. Depending on the example, the seed is by default either always 0 or
the current time() (unless SimGrid mode is enabled, in which case it is always
0). \ref STARPU_RAND_SEED allows setting the seed to a specific value.
</dd>
<dt>STARPU_IDLE_TIME</dt>
<dd>
\anchor STARPU_IDLE_TIME
\addindex __env__STARPU_IDLE_TIME
When set to a valid filename, a corresponding file
will be created when shutting down StarPU. The file will contain the
sum of all the workers' idle time.
</dd>
<dt>STARPU_GLOBAL_ARBITER</dt>
<dd>
\anchor STARPU_GLOBAL_ARBITER
\addindex __env__STARPU_GLOBAL_ARBITER
When set to a positive value, StarPU will create an arbiter, which
implements an advanced but centralized management of concurrent data
accesses (see \ref ConcurrentDataAccess).
</dd>
<dt>STARPU_USE_NUMA</dt>
<dd>
\anchor STARPU_USE_NUMA
\addindex __env__STARPU_USE_NUMA
When defined, NUMA nodes are taken into account by StarPU. Otherwise, memory
is considered as a single node. This is experimental for now.
When enabled, STARPU_MAIN_MEMORY is a pointer to the NUMA node associated to the
first CPU worker if it exists, and otherwise to the NUMA node associated to the first discovered GPU.
If StarPU doesn't find any NUMA node after these steps, STARPU_MAIN_MEMORY is the first NUMA node
discovered by StarPU.
</dd>
<dt>STARPU_IDLE_FILE</dt>
<dd>
\anchor STARPU_IDLE_FILE
\addindex __env__STARPU_IDLE_FILE
If the environment variable STARPU_IDLE_FILE is defined, a file named after its contents will be created at the end of the execution.
The file will contain the sum of the idle times of all the workers.
</dd>
<dt>STARPU_HWLOC_INPUT</dt>
<dd>
\anchor STARPU_HWLOC_INPUT
\addindex __env__STARPU_HWLOC_INPUT
If the environment variable STARPU_HWLOC_INPUT is defined to the path of an XML file, hwloc will be made to use it as input instead of detecting the current platform topology, which can save significant initialization time.
To produce this XML file, use <c>lstopo file.xml</c>
</dd>
<dt>STARPU_CATCH_SIGNALS</dt>
<dd>
\anchor STARPU_CATCH_SIGNALS
\addindex __env__STARPU_CATCH_SIGNALS
By default, StarPU catches the signals SIGINT, SIGSEGV and SIGTRAP to
perform final actions such as dumping FxT trace files even though the
application has crashed. Setting this variable to a value other than 1
will disable this behaviour. This should be done on JVM systems which
may use these signals for their own needs.
The flag can also be set through the field starpu_conf::catch_signals.
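A hedged sketch of achieving the same effect programmatically through the starpu_conf::catch_signals field mentioned above:
\code{.c}
#include <starpu.h>

int main(void)
{
	struct starpu_conf conf;
	starpu_conf_init(&conf);
	/* Equivalent to setting STARPU_CATCH_SIGNALS to a value other
	 * than 1: let the application (e.g. a JVM) handle SIGINT, SIGSEGV
	 * and SIGTRAP itself. */
	conf.catch_signals = 0;

	if (starpu_init(&conf) != 0)
		return 1;
	starpu_shutdown();
	return 0;
}
\endcode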
</dd>
<dt>STARPU_DISPLAY_BINDINGS</dt>
<dd>
\anchor STARPU_DISPLAY_BINDINGS
\addindex __env__STARPU_DISPLAY_BINDINGS
Display the binding of all processes and threads running on the machine. If MPI is enabled, display the binding of each node.<br>
Users can manually display the binding by calling starpu_display_bindings().
</dd>
</dl>
\section ConfiguringTheHypervisor Configuring The Hypervisor
<dl>
<dt>SC_HYPERVISOR_POLICY</dt>
<dd>
\anchor SC_HYPERVISOR_POLICY
\addindex __env__SC_HYPERVISOR_POLICY
Choose between the different resizing policies proposed by StarPU for the hypervisor:
idle, app_driven, feft_lp, teft_lp, ispeed_lp, throughput_lp, etc.
Use <c>SC_HYPERVISOR_POLICY=help</c> to get the list of available policies for the hypervisor.
</dd>
<dt>SC_HYPERVISOR_TRIGGER_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_TRIGGER_RESIZE
\addindex __env__SC_HYPERVISOR_TRIGGER_RESIZE
Choose how the hypervisor should be triggered: <c>speed</c> if the resizing algorithm should
be called whenever the speed of the context does not correspond to an optimal precomputed value,
<c>idle</c> if the resizing algorithm should be called whenever the workers are idle for a period
longer than the value indicated when configuring the hypervisor.
</dd>
<dt>SC_HYPERVISOR_START_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_START_RESIZE
\addindex __env__SC_HYPERVISOR_START_RESIZE
Indicate the moment when the resizing should be available. The value corresponds to the percentage
of the total execution time of the application. The default value is the resizing frame.
</dd>
<dt>SC_HYPERVISOR_MAX_SPEED_GAP</dt>
<dd>
\anchor SC_HYPERVISOR_MAX_SPEED_GAP
\addindex __env__SC_HYPERVISOR_MAX_SPEED_GAP
Indicate the ratio of speed difference between contexts that should trigger the hypervisor.
This situation may occur only when a theoretical speed could not be computed and the hypervisor
has no value to compare the speed to. Otherwise the resizing of a context is not influenced by the
speed of the other contexts, but only by the value that a context should have.
</dd>
<dt>SC_HYPERVISOR_STOP_PRINT</dt>
<dd>
\anchor SC_HYPERVISOR_STOP_PRINT
\addindex __env__SC_HYPERVISOR_STOP_PRINT
By default the speed values of the workers are printed during the execution
of the application. If this environment variable is set to 1, this printing
is disabled.
</dd>
<dt>SC_HYPERVISOR_LAZY_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_LAZY_RESIZE
\addindex __env__SC_HYPERVISOR_LAZY_RESIZE
By default the hypervisor resizes the contexts in a lazy way, that is, workers are first added to a new context
before being removed from the previous one. Once these workers are clearly taken into account
in the new context (a task was popped there), they are removed from the previous one. However, if the application
wants the change in the distribution of workers to take effect right away, this variable should be set to 0.
</dd>
<dt>SC_HYPERVISOR_SAMPLE_CRITERIA</dt>
<dd>
\anchor SC_HYPERVISOR_SAMPLE_CRITERIA
\addindex __env__SC_HYPERVISOR_SAMPLE_CRITERIA
By default the hypervisor uses a sample of flops when computing the speed of the contexts and of the workers.
If this variable is set to <c>time</c> the hypervisor uses a sample of time (10% of an approximation of the total
execution time of the application).
</dd>
</dl>
*/