<pre>Internet Engineering Task Force (IETF) A. Morton
Request for Comments: 6703 G. Ramachandran
Category: Informational G. Maguluri
ISSN: 2070-1721 AT&T Labs
August 2012
<span class="h1">Reporting IP Network Performance Metrics: Different Points of View</span>
Abstract
Consumers of IP network performance metrics have many different uses
in mind. This memo provides "long-term" reporting considerations
(e.g., hours, days, weeks, or months, as opposed to 10 seconds),
based on analysis of the points of view of two key audiences. It
describes how these audience categories affect the selection of
metric parameters and options when seeking information that serves
their needs.
Status of This Memo
This document is not an Internet Standards Track specification; it is
published for informational purposes.
This document is a product of the Internet Engineering Task Force
(IETF). It represents the consensus of the IETF community. It has
received public review and has been approved for publication by the
Internet Engineering Steering Group (IESG). Not all documents
approved by the IESG are a candidate for any level of Internet
Standard; see <a href="./rfc5741#section-2">Section 2 of RFC 5741</a>.
Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
<a href="http://www.rfc-editor.org/info/rfc6703">http://www.rfc-editor.org/info/rfc6703</a>.
<span class="grey">Morton, et al. Informational [Page 1]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-2" ></span>
<span class="grey"><a href="./rfc6703">RFC 6703</a> Reporting Metrics August 2012</span>
Copyright Notice
Copyright (c) 2012 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to <a href="https://www.rfc-editor.org/bcp/bcp78">BCP 78</a> and the IETF Trust's Legal
Provisions Relating to IETF Documents
(<a href="http://trustee.ietf.org/license-info">http://trustee.ietf.org/license-info</a>) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
This document may contain material from IETF Documents or IETF
Contributions published or made publicly available before November
10, 2008. The person(s) controlling the copyright in some of this
material may not have granted the IETF Trust the right to allow
modifications of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling
the copyright in such materials, this document may not be modified
outside the IETF Standards Process, and derivative works of it may
not be created outside the IETF Standards Process, except to format
it for publication as an RFC or to translate it into languages other
than English.
</pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-3" ></span>
Table of Contents
<a href="#section-1">1</a>. Introduction ....................................................<a href="#page-4">4</a>
<a href="#section-2">2</a>. Purpose and Scope ...............................................<a href="#page-4">4</a>
<a href="#section-3">3</a>. Reporting Results ...............................................<a href="#page-5">5</a>
<a href="#section-3.1">3.1</a>. Overview of Metric Statistics ..............................<a href="#page-5">5</a>
<a href="#section-3.2">3.2</a>. Long-Term Reporting Considerations .........................<a href="#page-6">6</a>
<a href="#section-4">4</a>. Effect of POV on the Loss Metric ................................<a href="#page-8">8</a>
<a href="#section-4.1">4.1</a>. Loss Threshold .............................................<a href="#page-8">8</a>
<a href="#section-4.1.1">4.1.1</a>. Network Characterization ............................<a href="#page-8">8</a>
<a href="#section-4.1.2">4.1.2</a>. Application Performance ............................<a href="#page-11">11</a>
<a href="#section-4.2">4.2</a>. Errored Packet Designation ................................<a href="#page-11">11</a>
<a href="#section-4.3">4.3</a>. Causes of Lost Packets ....................................<a href="#page-11">11</a>
<a href="#section-4.4">4.4</a>. Summary for Loss ..........................................<a href="#page-12">12</a>
<a href="#section-5">5</a>. Effect of POV on the Delay Metric ..............................<a href="#page-12">12</a>
<a href="#section-5.1">5.1</a>. Treatment of Lost Packets .................................<a href="#page-12">12</a>
<a href="#section-5.1.1">5.1.1</a>. Application Performance ............................<a href="#page-13">13</a>
<a href="#section-5.1.2">5.1.2</a>. Network Characterization ...........................<a href="#page-13">13</a>
<a href="#section-5.1.3">5.1.3</a>. Delay Variation ....................................<a href="#page-14">14</a>
<a href="#section-5.1.4">5.1.4</a>. Reordering .........................................<a href="#page-15">15</a>
<a href="#section-5.2">5.2</a>. Preferred Statistics ......................................<a href="#page-15">15</a>
<a href="#section-5.3">5.3</a>. Summary for Delay .........................................<a href="#page-16">16</a>
<a href="#section-6">6</a>. Reporting Raw Capacity Metrics .................................<a href="#page-16">16</a>
<a href="#section-6.1">6.1</a>. Type-P Parameter ..........................................<a href="#page-17">17</a>
<a href="#section-6.2">6.2</a>. A Priori Factors ..........................................<a href="#page-17">17</a>
<a href="#section-6.3">6.3</a>. IP-Layer Capacity .........................................<a href="#page-17">17</a>
<a href="#section-6.4">6.4</a>. IP-Layer Utilization ......................................<a href="#page-18">18</a>
<a href="#section-6.5">6.5</a>. IP-Layer Available Capacity ...............................<a href="#page-18">18</a>
<a href="#section-6.6">6.6</a>. Variability in Utilization and Available Capacity .........<a href="#page-19">19</a>
<a href="#section-6.6.1">6.6.1</a>. General Summary of Variability .....................<a href="#page-19">19</a>
<a href="#section-7">7</a>. Reporting Restricted Capacity Metrics ..........................<a href="#page-20">20</a>
<a href="#section-7.1">7.1</a>. Type-P Parameter and Type-C Parameter .....................<a href="#page-21">21</a>
<a href="#section-7.2">7.2</a>. A Priori Factors ..........................................<a href="#page-21">21</a>
<a href="#section-7.3">7.3</a>. Measurement Interval ......................................<a href="#page-22">22</a>
<a href="#section-7.4">7.4</a>. Bulk Transfer Capacity Reporting ..........................<a href="#page-22">22</a>
<a href="#section-7.5">7.5</a>. Variability in Bulk Transfer Capacity .....................<a href="#page-23">23</a>
<a href="#section-8">8</a>. Reporting on Test Streams and Sample Size ......................<a href="#page-23">23</a>
<a href="#section-8.1">8.1</a>. Test Stream Characteristics ...............................<a href="#page-23">23</a>
<a href="#section-8.2">8.2</a>. Sample Size ...............................................<a href="#page-24">24</a>
<a href="#section-9">9</a>. Security Considerations ........................................<a href="#page-25">25</a>
<a href="#section-10">10</a>. Acknowledgements ..............................................<a href="#page-25">25</a>
<a href="#section-11">11</a>. References ....................................................<a href="#page-25">25</a>
<a href="#section-11.1">11.1</a>. Normative References .....................................<a href="#page-25">25</a>
<a href="#section-11.2">11.2</a>. Informative References ...................................<a href="#page-26">26</a>
</pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-4" ></span>
<span class="h2"><a class="selflink" id="section-1" href="#section-1">1</a>. Introduction</span>
When designing measurements of IP networks and presenting a result,
knowledge of the audience is a key consideration. To present a
useful and relevant portrait of network conditions, one must answer
the following question:
"How will the results be used?"
There are two main audience categories for the report of results:
1. Network Characterization - describes conditions in an IP network
for quality assurance, troubleshooting, modeling, Service Level
Agreements (SLAs), etc. This point of view (POV) looks inward
toward the network where the report consumer intends their
actions.
2. Application Performance Estimation - describes the network
conditions in a way that facilitates determining effects on user
applications, and ultimately the users themselves. This POV
looks outward, toward the user(s), accepting the network as is.
This report consumer intends to estimate a network-dependent
aspect of performance or design some aspect of an application's
accommodation of the network. (These are *not* application
metrics; they are defined at the IP layer.)
This memo considers how these different POVs affect both the
measurement design (parameters and options of the metrics) and
statistics reported when serving the report consumer's needs.
The IP Performance Metrics (IPPM) Framework [<a href="./rfc2330" title=""Framework for IP Performance Metrics"">RFC2330</a>] and other RFCs
describing IPPM provide a background for this memo.
<span class="h2"><a class="selflink" id="section-2" href="#section-2">2</a>. Purpose and Scope</span>
The purpose of this memo is to clearly delineate two POVs for using
measurements and describe their effects on the test design, including
the selection of metric parameters and reporting the results.
The scope of this memo primarily covers the test design and reporting
of the loss and delay metrics [<a href="./rfc2680" title=""A One-way Packet Loss Metric for IPPM"">RFC2680</a>] [<a href="./rfc2679" title=""A One-way Delay Metric for IPPM"">RFC2679</a>]. It will also
discuss the delay variation [<a href="./rfc3393" title=""IP Packet Delay Variation Metric for IP Performance Metrics (IPPM)"">RFC3393</a>] and reordering metrics
[<a href="./rfc4737" title=""Packet Reordering Metrics"">RFC4737</a>] where applicable.
</pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-5" ></span>
With capacity metrics growing in relevance to the industry, the memo
also covers POV and reporting considerations for metrics resulting
from the Bulk Transfer Capacity Framework [<a href="./rfc3148" title=""A Framework for Defining Empirical Bulk Transfer Capacity Metrics"">RFC3148</a>] and Network
Capacity Definitions [<a href="./rfc5136" title=""Defining Network Capacity"">RFC5136</a>]. These memos effectively describe two
different categories of metrics:
o Restricted [<a href="./rfc3148" title=""A Framework for Defining Empirical Bulk Transfer Capacity Metrics"">RFC3148</a>]: includes restrictions of congestion control
and the notion of unique data bits delivered, and
o Raw [<a href="./rfc5136" title=""Defining Network Capacity"">RFC5136</a>]: uses a definition of raw capacity without the
restrictions of data uniqueness or congestion awareness.
It might seem, at first glance, that each of these metrics has an
obvious audience (raw = network characterization, restricted =
application performance), but reality is more complex and consistent
with the overall topic of capacity measurement and reporting. For
example, TCP is usually used in restricted capacity measurement
methods, while UDP appears in raw capacity measurement. The raw and
restricted capacity metrics will be treated in separate sections,
although they share one common reporting issue: representing
variability in capacity metric results as part of a long-term report.
Sampling, or the design of the active packet stream that is the basis
for the measurements, is also discussed.
<span class="h2"><a class="selflink" id="section-3" href="#section-3">3</a>. Reporting Results</span>
This section gives an overview of recommendations, followed by
additional considerations for reporting results in the "long term",
based on the discussion and conclusions of the major sections that
follow.
<span class="h3"><a class="selflink" id="section-3.1" href="#section-3.1">3.1</a>. Overview of Metric Statistics</span>
This section gives an overview of reporting recommendations for all
the metrics considered in this memo.
The minimal report on measurements must include both loss and delay
metrics.
For packet loss, the loss ratio defined in [<a href="./rfc2680" title=""A One-way Packet Loss Metric for IPPM"">RFC2680</a>] is a sufficient
starting point -- especially the existing guidance for setting the
loss threshold waiting time. In <a href="#section-4.1.1">Section 4.1.1</a>, we have calculated a
waiting time -- 51 seconds -- that should be sufficient to
differentiate between packets that are truly lost and those with long finite
delays under general measurement circumstances. Knowledge of
</pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-6" ></span>
specific conditions can help to reduce this threshold, and a waiting
time of approximately 50 seconds is considered to be manageable in
practice.
We note that a loss ratio calculated according to [<a href="#ref-Y.1540" title=""Internet protocol data communication service - IP packet transfer and availability performance parameters"">Y.1540</a>] would
exclude errored packets from the numerator. In practice, the
difference between these two loss metrics is small, if any, depending
on whether the last link prior to the Destination contributes errored
packets.
For packet delay, we recommend providing both the mean delay and the
median delay with lost packets designated as undefined (as permitted
by [<a href="./rfc2679" title=""A One-way Delay Metric for IPPM"">RFC2679</a>]). Both statistics are based on a conditional
distribution, and the condition is packet arrival prior to a waiting
time dT, where dT has been set to take maximum packet lifetimes into
account, as discussed above for loss. Using a long dT helps to
ensure that delay distributions are not truncated.
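As an illustration only (not part of [RFC2679]), the conditional mean and median can be computed from a list of delay singletons in a few lines; the function name, the use of None for undefined (lost) packet delays, and the sample values are assumptions made for this sketch.

```python
from statistics import mean, median

def conditional_delay_stats(delays, dT=51.0):
    """Mean and median delay over the conditional distribution:
    only packets that arrived within the waiting time dT count.
    Lost packets are represented as None (delay undefined)."""
    arrived = [d for d in delays if d is not None and d <= dT]
    if not arrived:
        return None, None
    return mean(arrived), median(arrived)

# Hypothetical sample of one-way delays in seconds; None marks loss.
m, med = conditional_delay_stats([0.021, 0.023, None, 0.020,
                                  0.250, None, 0.022])
```

Because dT is set near the 51-second bound calculated above, long-but-finite delays remain in the distribution rather than being truncated.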
For Packet Delay Variation (PDV), the minimum delay of the
conditional distribution should be used as the reference delay for
computing PDV according to [<a href="#ref-Y.1540" title=""Internet protocol data communication service - IP packet transfer and availability performance parameters"">Y.1540</a>] or [<a href="./rfc5481" title=""Packet Delay Variation Applicability Statement"">RFC5481</a>] and [<a href="./rfc3393" title=""IP Packet Delay Variation Metric for IP Performance Metrics (IPPM)"">RFC3393</a>]. A
useful value to report is a "pseudo" range of delay variation based
on calculating the difference between a high percentile of delay and
the minimum delay. For example, the 99.9th percentile minus the
minimum will give a value that can be compared with objectives in
[<a href="#ref-Y.1541" title=""Network performance objectives for IP-based services"">Y.1541</a>].
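A sketch of that pseudo-range calculation, using a simple nearest-rank percentile (real tools may interpolate differently); the function name and sample values are illustrative, not drawn from [RFC3393] or [RFC5481].

```python
def pdv_pseudo_range(delays, percentile=99.9):
    """Pseudo range of delay variation: a high percentile of delay
    minus the minimum delay (the PDV reference delay).  `delays`
    holds only packets that arrived (the conditional distribution)."""
    s = sorted(delays)
    # Nearest-rank percentile; interpolation conventions vary.
    rank = max(0, min(len(s) - 1, round(percentile / 100.0 * len(s)) - 1))
    return s[rank] - s[0]
```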
For both raw capacity and restricted capacity, reporting the
variability in a useful way is identified as the main challenge. The
min, max, and range statistics are suggested along with a ratio of
max to min and moving averages. In the end, a simple plot of the
singleton results over time may succeed where summary metrics fail or
may serve to confirm that the summaries are valid.
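A minimal sketch of those suggested summary statistics for a series of capacity singletons; the window length and sample values are arbitrary choices for illustration.

```python
def capacity_summary(samples, window=3):
    """Summary statistics for reporting variability in capacity results:
    min, max, range, the max-to-min ratio, and a simple moving average
    of the singletons (a stand-in for plotting them over time)."""
    lo, hi = min(samples), max(samples)
    moving = [sum(samples[i:i + window]) / window
              for i in range(len(samples) - window + 1)]
    return {"min": lo, "max": hi, "range": hi - lo,
            "max_to_min": hi / lo, "moving_avg": moving}
```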
<span class="h3"><a class="selflink" id="section-3.2" href="#section-3.2">3.2</a>. Long-Term Reporting Considerations</span>
[<a id="ref-IPPM-RPT">IPPM-RPT</a>] describes methods to conduct measurements and report the
results on a near-immediate time scale (10 seconds, which we consider
to be "short-term").
Measurement intervals and reporting intervals need not be the same
length. Sometimes, the user is only concerned with the performance
levels achieved over a relatively long interval of time (e.g., days,
weeks, or months, as opposed to 10 seconds). However, there can be
risks involved with running a measurement continuously over a long
period without recording intermediate results:
</pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-7" ></span>
o Temporary power failure may cause loss of all results to date.
o Measurement system timing synchronization signals may experience a
temporary outage, causing subsets of measurements to be in error
or invalid.
o Maintenance on the measurement system or on its connectivity to
the network under test may be necessary.
For these and other reasons, such as
o the constraint to collect measurements on intervals similar to
user session length,
o the dual use of measurements in monitoring activities where
results are needed on a period of a few minutes, or
o the ability to inspect results of a single measurement interval
for deeper analysis,
there is value in conducting measurements on intervals that are much
shorter than the reporting interval.
There are several approaches for aggregating a series of measurement
results over time in order to make a statement about the longer
reporting interval. One approach requires the storage of all metric
singletons collected throughout the reporting interval, even though
the measurement interval stops and starts many times.
Another approach is described in [<a href="./rfc5835" title=""Framework for Metric Composition"">RFC5835</a>] as "temporal aggregation".
This approach would estimate the results for the reporting interval
based on combining many individual short-term measurement interval
statistics to yield a long-term result. The result would ideally
appear in the same form as though a continuous measurement had been
conducted. A memo addressing the details of temporal aggregation is
yet to be prepared.
Yet another approach requires a numerical objective for the metric,
and the results of each measurement interval are compared with the
objective. Every measurement interval where the results meet the
objective contributes to the fraction of time with performance as
specified. When the reporting interval contains many measurement
intervals, it is possible to present the results as "metric A was
less than or equal to objective X during Y% of time".
NOTE that numerical thresholds of acceptability are not set in
IETF performance work and are therefore excluded from the scope of
this memo.
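The third approach can be sketched as follows; because numerical objectives come from outside the IETF (e.g., an SLA), the objective value in any real report, and the one in this example, is purely hypothetical.

```python
def percent_meeting_objective(interval_results, objective):
    """Fraction of (equal-length) measurement intervals in which the
    metric met the objective, for statements of the form
    'metric A was <= objective X during Y% of time'."""
    met = sum(1 for r in interval_results if r <= objective)
    return 100.0 * met / len(interval_results)
```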
</pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-8" ></span>
In all measurements, it is important to avoid unintended
synchronization with network events. This topic is treated in
[<a href="./rfc2330" title=""Framework for IP Performance Metrics"">RFC2330</a>] for Poisson-distributed inter-packet time streams and in
[<a href="./rfc3432" title=""Network performance measurement with periodic streams"">RFC3432</a>] for Periodic streams. Both avoid synchronization by using
random start times.
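One way to realize such a stream is sketched below: exponentially distributed inter-packet gaps (yielding a Poisson process, as in [RFC2330]) with a randomized start offset; the rate and duration parameters are illustrative assumptions.

```python
import random

def poisson_send_times(rate_pps, duration_s, seed=None):
    """Send times for a Poisson-distributed packet stream: the gaps are
    exponential with mean 1/rate, and the first packet is offset by a
    random start time to avoid synchronization with network events."""
    rng = random.Random(seed)
    t = rng.uniform(0.0, 1.0 / rate_pps)  # random start offset
    times = []
    while t < duration_s:
        times.append(t)
        t += rng.expovariate(rate_pps)
    return times
```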
There are network conditions where it is simply more useful to report
the connectivity status of the Source-Destination path, and to
distinguish time intervals where connectivity can be demonstrated
from other time intervals (where connectivity does not appear to
exist). [<a href="./rfc2678" title=""IPPM Metrics for Measuring Connectivity"">RFC2678</a>] specifies a number of one-way and two-way
connectivity metrics of increasing complexity. In this memo, we
recommend that long-term reporting of loss, delay, and other metrics
be limited to time intervals where connectivity can be demonstrated,
and that other intervals be summarized as the percent of time where
connectivity does not appear to exist. We note that this same
approach has been adopted in ITU-T Recommendation [<a href="#ref-Y.1540" title=""Internet protocol data communication service - IP packet transfer and availability performance parameters"">Y.1540</a>] where
performance parameters are only valid during periods of service
"availability" (evaluated according to a function based on packet
loss, and sustained periods of loss ratio greater than a threshold
are declared "unavailable").
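That style of reporting might be sketched as below; the loss-ratio threshold and the interval results are illustrative placeholders, since this memo sets no numerical values.

```python
def split_by_connectivity(interval_loss_ratios, threshold=0.75):
    """Separate measurement intervals where connectivity can be
    demonstrated from those where it cannot (loss ratio above a
    threshold), and report the percent of time without connectivity."""
    available = [r for r in interval_loss_ratios if r <= threshold]
    pct_no_connectivity = (100.0 *
                           (len(interval_loss_ratios) - len(available)) /
                           len(interval_loss_ratios))
    return available, pct_no_connectivity
```

Long-term loss and delay summaries would then be computed only over the "available" intervals.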
<span class="h2"><a class="selflink" id="section-4" href="#section-4">4</a>. Effect of POV on the Loss Metric</span>
This section describes the ways in which the loss metric can be tuned
to reflect the preferences of the two audience categories, or
different POVs. The waiting time before declaring that a packet is
lost -- the loss threshold -- is one area where there would appear to
be a difference, but the ability to post-process the results may
resolve it.
<span class="h3"><a class="selflink" id="section-4.1" href="#section-4.1">4.1</a>. Loss Threshold</span>
<a href="./rfc2680">RFC 2680</a> [<a href="./rfc2680" title=""A One-way Packet Loss Metric for IPPM"">RFC2680</a>] defines the concept of a waiting time for packets
to arrive, beyond which they are declared lost. The text of the RFC
declines to recommend a value, instead saying that "good engineering,
including an understanding of packet lifetimes, will be needed in
practice". Later, in the methodology, they give reasons for waiting
"a reasonable period of time" and leave the definition of
"reasonable" intentionally vague. Below, we estimate a practical
bound on waiting time.
<span class="h4"><a class="selflink" id="section-4.1.1" href="#section-4.1.1">4.1.1</a>. Network Characterization</span>
Practical measurement experience has shown that unusual network
circumstances can cause long delays. One such circumstance is when
routing loops form during IGP re-convergence following a failure or
drastic link cost change. Packets will loop between two routers
</pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-9" ></span>
until new routes are installed or until the IPv4 Time-to-Live (TTL)
field (or the IPv6 Hop Limit) decrements to zero. Very long delays
on the order of several seconds have been measured [<a href="#ref-Casner" title=""A Fine- Grained View of High-Performance Networking"">Casner</a>] [<a href="#ref-Cia03" title=""Standardized Active Measurements on a Tier 1 IP Backbone"">Cia03</a>].
Therefore, network characterization activities prefer a long waiting
time in order to distinguish these events from other causes of loss
(such as packet discard at a full queue, or tail drop). This way,
the metric design helps to distinguish more reliably between packets
that might yet arrive and those that are no longer traversing the
network.
It is possible to calculate a worst-case waiting time, assuming that
a routing loop is the cause. We model the path between Source and
Destination as a series of delays in links (t) and queues (q), as
these are the dominant contributors to delay (in active measurement,
the Source and Destination hosts contribute minimal delay). The
normal path delay, D, across n queues (where TTL is decremented at a
node with a queue) and n+1 links without encountering a loop, is
Path model with n=5
   Source --- q1 --- q2 --- q3 --- q4 --- q5 --- Destination
          t0     t1     t2     t3     t4     t5
      D = t_0 + Sum[i=1..n] (t_i + q_i)
Figure 1: Normal Path Delay
</pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-10" ></span>
and the time spent in the loop with L queues is
Path model with n=5 and L=3
Time in one loop = (qx+tx + qy+ty + qz+tz)
                             qy --- qz
                              |    /  (exit?)
                             qx --/
   Src --- q1 --- q2 ---/    q3 --- q4 --- q5 --- Dst
    t0     t1     t2         t3     t4     t5
      R = C_max * Sum[i=j..j+L-1] (t_i + q_i),  where C_max = (TTL - n) / L
Figure 2: Delay Due to Rotations in a Loop
where n is the total number of queues in the non-loop path (with n+1
links), j is the queue number where the loop begins, C is the number
of times a packet circles the loop, and TTL is the packet's initial
Time-to-Live value at the Source (or Hop Count in IPv6).
If we take the delays of all links and queues as 100 ms each, the
TTL=255, the number of queues n=5, and the queues in the loop L=4,
then using C_max:
D = 1.1 seconds and R ~= 50 seconds, and D + R ~= 51.1 seconds
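The arithmetic can be checked directly from the two formulas (all link and queue delays 100 ms, TTL = 255, n = 5, L = 4):

```python
t = q = 0.100                 # link and queue delays, seconds
TTL, n, L = 255, 5, 4

D = t + n * (t + q)           # normal path delay (Figure 1)
C_max = (TTL - n) / L         # maximum number of loop rotations
R = C_max * L * (t + q)       # time spent in the loop (Figure 2)
# D is 1.1 s and R is 50 s, so D + R is approximately 51.1 s.
```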
We note that the link delays of 100 ms would span most continents,
and a constant queue length of 100 ms is also very generous. When a
loop occurs, it is almost certain to be resolved in 10 seconds or
less. The value calculated above is an upper limit for almost any
real-world circumstance.
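The calculation can be reproduced directly from the formulas in Figures 1
and 2; the sketch below (illustrative only) uses the same assumed values of
100 ms per link and queue, TTL=255, n=5, and L=4.

```python
# Worst-case waiting time sketch, using the illustrative values above.
t = q = 0.100          # each link delay t and queue delay q, in seconds
n = 5                  # number of queues on the non-loop path
L = 4                  # number of queues in the loop
TTL = 255              # initial Time-to-Live at the Source

# Figure 1: D = t0 + sum_{i=1..n} (t_i + q_i)
D = t + n * (t + q)

# Figure 2: C_max = (TTL - n) / L; R sums (t_i + q_i) over the L loop
# queues for each of the C_max rotations.
C_max = (TTL - n) / L
R = C_max * L * (t + q)

print(f"D = {D:.1f} s, R = {R:.1f} s, D + R = {D + R:.1f} s")
```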
A waiting time threshold parameter, dT, set consistent with this
calculation, would not truncate the delay distribution (possibly
causing a change in its mathematical properties), because the packets
that might arrive have been given sufficient time to traverse the
network.
It is worth noting that packets that are stored and deliberately
forwarded at a much later time constitute a replay attack on the
measurement system and are beyond the scope of normal performance
reporting.
<span class="h4"><a class="selflink" id="section-4.1.2" href="#section-4.1.2">4.1.2</a>. Application Performance</span>
Fortunately, application performance estimation activities are not
adversely affected by the long estimated limit on waiting time,
because most applications will use shorter time thresholds. Although
the designer's tendency might be to set the loss threshold at a value
equivalent to a particular application's threshold, this specific
threshold can be applied when post-processing the measurements. A
shorter waiting time can be enforced by locating packets with delays
longer than the application's threshold and re-designating such
packets as lost. Thus, the measurement system can use a single loss
waiting time and support both application and network performance
POVs simultaneously.
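The post-processing described here can be sketched as follows (a
hypothetical example: the 200 ms application threshold and the recorded
delays are invented, and None marks a packet that never arrived within the
long waiting time):

```python
app_threshold = 0.200   # hypothetical application delay threshold, seconds

# Delays recorded with a single long loss waiting time; None = never arrived.
measured = [0.050, 0.120, None, 0.350, 0.080, None, 0.500]

# Network characterization POV: only non-arriving packets are lost.
network_lost = sum(1 for d in measured if d is None)

# Application POV: re-designate packets exceeding the threshold as lost.
app_view = [None if d is None or d > app_threshold else d for d in measured]
app_lost = sum(1 for d in app_view if d is None)

print(network_lost, app_lost)   # the same sample serves both POVs
```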
<span class="h3"><a class="selflink" id="section-4.2" href="#section-4.2">4.2</a>. Errored Packet Designation</span>
<a href="./rfc2680">RFC 2680</a> designates packets that arrive containing errors as lost
packets. Many packets that are corrupted by bit errors are discarded
within the network and do not reach their intended destination.
This is consistent with applications that would check the payload
integrity at higher layers and discard the packet. However, some
applications prefer to deal with errored payloads on their own, and
even a corrupted payload is better than no packet at all.
To address this possibility, and to make network characterization
more complete, distinguishing between packets that do not arrive
(lost) and errored packets that arrive (conditionally lost) is
recommended.
<span class="h3"><a class="selflink" id="section-4.3" href="#section-4.3">4.3</a>. Causes of Lost Packets</span>
Although many measurement systems use a waiting time to determine
whether or not a packet is lost, most of the waiting is in vain. The
packets are no longer traversing the network and have not reached
their destination.
There are many causes of packet loss, including the following:
1. Queue drop, or discard
2. Corruption of the IP header, or other essential header
information
3. TTL expiration (or use of a TTL value that is too small)
4. Link or router failure
5. Layers below the Source-to-Destination IP layer can discard
packets that fail error checking, and link-layer checksums often
cover the entire packet
It is reasonable to consider a packet that has not arrived after a
large amount of time to be lost (due to one of the causes above)
because packets do not "live forever" in the network or have infinite
delay.
<span class="h3"><a class="selflink" id="section-4.4" href="#section-4.4">4.4</a>. Summary for Loss</span>
Given that measurement post-processing is possible (even encouraged
in the definitions of IPPM), measurements of loss can easily serve
both POVs:
o Use a long waiting time to serve network characterization and
revise results for specific application delay thresholds as
needed.
o Distinguish between errored packets and lost packets when possible
to aid network characterization, and combine the results for
application performance if appropriate.
<span class="h2"><a class="selflink" id="section-5" href="#section-5">5</a>. Effect of POV on the Delay Metric</span>
This section describes the ways in which the delay metric can be
tuned to reflect the preferences of the two consumer categories, or
different POVs.
<span class="h3"><a class="selflink" id="section-5.1" href="#section-5.1">5.1</a>. Treatment of Lost Packets</span>
The delay metric [<a href="./rfc2679" title=""A One-way Delay Metric for IPPM"">RFC2679</a>] specifies the treatment of packets that do
not successfully traverse the network: their delay is undefined.
>>The *Type-P-One-way-Delay* from Src to Dst at T is undefined
(informally, infinite)<< means that Src sent the first bit of a
Type-P packet to Dst at wire-time T and that Dst did not receive
that packet.
It is an accepted but informal practice to assign infinite delay to
lost packets. We next look at how these two different treatments
align with the needs of measurement consumers who wish to
characterize networks or estimate application performance. Also, we
look at the way that lost packets have been treated in other metrics:
delay variation and reordering.
<span class="h4"><a class="selflink" id="section-5.1.1" href="#section-5.1.1">5.1.1</a>. Application Performance</span>
Applications need to perform different functions, dependent on
whether or not each packet arrives within some finite tolerance. In
other words, a receiver's packet processing takes only one of two
alternative directions (a "fork" in the road):
o Packets that arrive within expected tolerance are handled by
removing headers, restoring smooth delivery timing (as in a
de-jitter buffer), restoring sending order, checking for errors in
payloads, and many other operations.
o Packets that do not arrive when expected lead to attempted
recovery from the apparent loss, such as retransmission requests,
loss concealment, or forward error correction to replace the
missing packet.
So, it is important to maintain a distinction between packets that
actually arrive and those that do not. Therefore, it is preferable
to leave the delay of lost packets undefined and to characterize the
delay distribution as a conditional distribution (conditioned on
arrival).
<span class="h4"><a class="selflink" id="section-5.1.2" href="#section-5.1.2">5.1.2</a>. Network Characterization</span>
In this discussion, we assume that both loss and delay metrics will
be reported for network characterization (at least).
Assume that packets that do not arrive are reported as lost, usually
as a fraction of all sent packets. If these lost packets are
assigned an undefined delay, then the network's inability to deliver
them (in a timely way) is captured only in the loss metric when we
report statistics on the delay distribution conditioned on the event
of packet arrival (within the loss waiting time threshold). We can
say that the delay and loss metrics are orthogonal in that they
convey non-overlapping information about the network under test.
This is a valuable property whose absence is discussed below.
However, if we assign infinite delay to all lost packets, then
o The delay metric results are influenced both by packets that
arrive and those that do not.
o The delay singleton and the loss singleton do not appear to be
orthogonal (delay is finite when loss=0; delay is infinite when
loss=1).
o The network is penalized in both the loss and delay metrics,
effectively double-counting the lost packets.
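The double-counting can be seen in a small sketch (the delay values are
invented for illustration):

```python
import math

delays = [0.040, 0.050, 0.046, 0.064]   # arriving packets, seconds
lost = 1                                 # one packet never arrived

# Assigning infinite delay to the lost packet drives the sample mean (and
# most other delay statistics) to infinity, so the single loss dominates
# the delay report as well as the loss report.
with_inf = delays + [math.inf] * lost
print(sum(with_inf) / len(with_inf))     # inf

# Conditioning on arrival keeps the two metrics orthogonal.
loss_ratio = lost / (len(delays) + lost)
cond_mean = sum(delays) / len(delays)
print(loss_ratio, cond_mean)
```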
As further evidence of overlap, consider the Cumulative Distribution
Function (CDF) of delay when the value "positive infinity" is
assigned to all lost packets. Figure 3 shows a CDF where a small
fraction of packets are lost.
   1 |- - - - - - - - - - - - - - - - - -+
     |                                   |
     |          _..----''''''''''''''''''
     |      ,-''
     |    ,'
     |   /                     Mass at
     |  /                    +infinity
     | /                    = fraction
     ||                          lost
     |/
   0 |_____________________________________
     0               Delay               +oo

        Figure 3: Cumulative Distribution Function for Delay
                       When Loss = +Infinity
We note that a delay CDF that is conditioned on packet arrival would
not exhibit this apparent overlap with loss.
Although infinity is a familiar mathematical concept, it is somewhat
disconcerting to see any time-related metric reported as infinity.
Questions are bound to arise and tend to detract from the goal of
informing the consumer with a performance report.
<span class="h4"><a class="selflink" id="section-5.1.3" href="#section-5.1.3">5.1.3</a>. Delay Variation</span>
[<a id="ref-RFC3393">RFC3393</a>] excludes lost packets from samples, effectively assigning
an undefined delay to packets that do not arrive in a reasonable
time. <a href="./rfc3393#section-4.1">Section 4.1 of [RFC3393]</a> describes this specification and its
rationale (ipdv = inter-packet delay variation in the quote below).
The treatment of lost packets as having "infinite" or "undefined"
delay complicates the derivation of statistics for ipdv.
Specifically, when packets in the measurement sequence are lost,
simple statistics such as sample mean cannot be computed. One
possible approach to handling this problem is to reduce the event
space by conditioning. That is, we consider conditional
statistics; namely we estimate the mean ipdv (or other derivative
statistic) conditioned on the event that selected packet pairs
arrive at the Destination (within the given timeout). While this
itself is not without problems (what happens, for example, when
every other packet is lost), it offers a way to make some (valid)
statements about ipdv, at the same time avoiding events with
undefined outcomes.
We note that the argument above applies to all forms of packet delay
variation that can be constructed using the "selection function"
concept of [<a href="./rfc3393" title=""IP Packet Delay Variation Metric for IP Performance Metrics (IPPM)"">RFC3393</a>]. In recent work, the two main forms of delay
variation metrics have been compared, and the results are summarized
in [<a href="./rfc5481" title=""Packet Delay Variation Applicability Statement"">RFC5481</a>].
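The conditioning quoted above can be sketched for consecutive-pair
selection (hypothetical delays; None marks a packet that did not arrive
within the timeout):

```python
delays = [0.050, 0.052, None, 0.049, 0.055, None]   # one-way delays, seconds

# ipdv singletons only for pairs in which both packets arrived; pairs
# that span a loss are excluded from the conditional sample.
ipdv = [b - a
        for a, b in zip(delays, delays[1:])
        if a is not None and b is not None]

print(ipdv)
```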
<span class="h4"><a class="selflink" id="section-5.1.4" href="#section-5.1.4">5.1.4</a>. Reordering</span>
[<a id="ref-RFC4737">RFC4737</a>] defines metrics that are based on evaluation of packet
arrival order and that include a waiting time before declaring that a
packet is lost (to exclude the packet from further processing).
If non-arriving packets were assigned a delay value (such as
infinity), then the reordering metric would declare them to be
reordered, because their sequence numbers will surely be less than
the "Next Expected" threshold when (or if) they arrive. But this
practice
would fail to maintain orthogonality between the reordering metric
and the loss metric. Confusion can be avoided by designating the
delay of non-arriving packets as undefined and reserving delay values
only for packets that arrive within a sufficiently long waiting time.
<span class="h3"><a class="selflink" id="section-5.2" href="#section-5.2">5.2</a>. Preferred Statistics</span>
Today in network characterization, the sample mean is one statistic
that is almost ubiquitously reported. It is easily computed and
understood by virtually everyone in this audience category. Also,
the sample is usually filtered on packet arrival, so that the mean is
based on a conditional distribution.
The median is another statistic that summarizes a distribution,
having somewhat different properties from the sample mean. The
median is stable in distributions with a few outliers or without
them. However, the median's stability prevents it from indicating
when a large fraction of the distribution changes value. 50% or more
of the values would need to change for the median to capture the
change.
Both the median and sample mean have difficulty with bimodal
distributions. The median will reside in only one of the modes, and
the mean may not lie in either mode range. For this and other
reasons, additional statistics such as the minimum, maximum, and 95th
percentile have value when summarizing a distribution.
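These statistics can be reported together; the sketch below uses an
invented bimodal sample, where the median falls in the lower mode and the
mean falls between the modes (statistics.quantiles requires Python 3.8 or
later):

```python
import statistics

# Hypothetical conditional delay sample (arriving packets only), seconds.
sample = [0.020, 0.021, 0.022, 0.023, 0.080, 0.081, 0.082]

summary = {
    "mean":   statistics.mean(sample),
    "median": statistics.median(sample),
    "min":    min(sample),
    "max":    max(sample),
    "p95":    statistics.quantiles(sample, n=100)[94],   # 95th percentile
}
print(summary)
```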
When both the sample mean and median are available, a comparison will
sometimes be informative, because these two statistics are equal only
under unusual circumstances, such as when the delay distribution is
perfectly symmetrical.
Also, these statistics are generally useful from the application
performance POV, so there is a common set that should satisfy
audiences.
Plots of the delay distribution may also be useful when single-value
statistics indicate that new conditions are present. An empirically
derived probability distribution function will usually describe
multiple modes more efficiently than any other form of result.
<span class="h3"><a class="selflink" id="section-5.3" href="#section-5.3">5.3</a>. Summary for Delay</span>
From the perspectives of
1. application/receiver analysis, where subsequent processing
depends on whether the packet arrives or times out,
2. straightforward network characterization without double-counting
defects, and
3. consistency with delay variation and reordering metric
definitions,
the most efficient practice is to use a sufficiently long waiting
time to distinguish between packets that are truly lost and those
that are merely delayed, and to designate the delay of non-arriving
packets as undefined.
<span class="h2"><a class="selflink" id="section-6" href="#section-6">6</a>. Reporting Raw Capacity Metrics</span>
Raw capacity refers to the metrics defined in [<a href="./rfc5136" title=""Defining Network Capacity"">RFC5136</a>], which do not
include restrictions such as data uniqueness or flow-control response
to congestion.
The metrics considered are IP-layer capacity, utilization (or used
capacity), and available capacity, for individual links and complete
paths. These three metrics form a triad: knowing one metric
constrains the other two (within their allowed range), and knowing
two determines the third. The link metrics have another key aspect
in common: they are single-measurement-point metrics at the egress of
a link. The path capacity and available capacity are derived by
examining the set of single-point link measurements and taking the
minimum value.
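The triad relationship and the minimum-over-links derivation can be
sketched as follows (the link figures are invented):

```python
# (capacity, utilization) per link of a path, in Mbit/s (hypothetical).
links = [
    (1000.0, 400.0),
    ( 100.0,  30.0),
    (1000.0, 850.0),
]

# Knowing two members of the triad determines the third.
available = [cap - used for cap, used in links]

# Path values are the minimum over the single-point link measurements.
path_capacity = min(cap for cap, _ in links)
path_available = min(available)
print(path_capacity, path_available)   # 100.0 70.0
```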
<span class="h3"><a class="selflink" id="section-6.1" href="#section-6.1">6.1</a>. Type-P Parameter</span>
The concept of "packets of Type-P" is defined in [<a href="./rfc2330" title=""Framework for IP Performance Metrics"">RFC2330</a>]. The
Type-P categorization has critical relevance in all forms of capacity
measurement and reporting. The ability to categorize packets based
on header fields for assignment to different queues and scheduling
mechanisms is now commonplace. When unused resources are shared
across queues, the conditions in all packet categories will affect
capacity and related measurements. This is one source of variability
in the results that all audiences would prefer to see reported in a
useful and easily understood way.
Communication of Type-P within the One-Way Active Measurement
Protocol (OWAMP) and the Two-Way Active Measurement Protocol (TWAMP)
is essentially confined to the Diffserv Code Point (DSCP) [<a href="./rfc4656" title=""A One-way Active Measurement Protocol (OWAMP)"">RFC4656</a>].
DSCP is the most common qualifier for Type-P.
Each audience will have a set of Type-P qualifications and value
combinations that are of interest. Measurements and reports should
have the flexibility to report per-type and aggregate performance.
<span class="h3"><a class="selflink" id="section-6.2" href="#section-6.2">6.2</a>. A priori Factors</span>
The audience for network characterization may have detailed
information about each link that comprises a complete path (due to
ownership, for example), or some of the links in the path but not
others, or none of the links.
There are cases where the measurement audience only has information
on one of the links (the local access link) and wishes to measure one
or more of the raw capacity metrics. This scenario is quite common
and has spawned a substantial number of experimental measurement
methods (e.g., <a href="http://www.caida.org/tools/taxonomy/">http://www.caida.org/tools/taxonomy/</a>). Many of these
methods respect that their users want a result fairly quickly and in
one trial. Thus, the measurement interval is kept short (a few
seconds to a minute). For long-term reporting, a sample of
short-term results needs to be summarized.
<span class="h3"><a class="selflink" id="section-6.3" href="#section-6.3">6.3</a>. IP-Layer Capacity</span>
For links, this metric's theoretical maximum value can be determined
from the physical-layer bit rate and the bit rate reduction due to
the layers between the physical layer and IP. When measured, this
metric takes additional factors into account, such as the ability of
the sending device to process and forward traffic under various
conditions. For example, the arrival of routing updates may spawn
high-priority processes that reduce the sending rate temporarily.
Thus, the measured capacity of a link will be variable, and the
maximum capacity observed applies to a specific time, time interval,
and other relevant circumstances.
For paths composed of a series of links, it is easy to see how the
sources of variability for the results grow with each link in the
path. Variability of results will be discussed in more detail below.
<span class="h3"><a class="selflink" id="section-6.4" href="#section-6.4">6.4</a>. IP-Layer Utilization</span>
The ideal metric definition of link utilization [<a href="./rfc5136" title=""Defining Network Capacity"">RFC5136</a>] is based on
the actual usage (bits successfully received during a time interval)
and the maximum capacity for the same interval.
In practice, link utilization can be calculated by counting the
IP-layer (or other layer) octets received over a time interval and
dividing by the theoretical maximum number of octets that could have
been delivered in the same interval. A commonly used time interval
is 5 minutes, and this interval has been sufficient to support
network operations and design for some time. 5 minutes is somewhat
long compared with the expected download time for web pages but short
with respect to large file transfers and TV program viewing. It is
fair to say that considerable variability is concealed by reporting a
single (average) utilization value for each 5-minute interval. Some
performance management systems have begun to make 1-minute averages
available.
There is also a limit on the smallest useful measurement interval.
Intervals on the order of the serialization time for a single Maximum
Transmission Unit (MTU) packet will observe on/off behavior and
report 100% or 0%. The smallest interval needs to be some multiple
of MTU serialization time for averaging to be effective.
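Both the utilization calculation and the lower bound on the interval can
be sketched with hypothetical figures (a 100 Mbit/s link, the common
5-minute interval, and a 1500-octet MTU):

```python
link_rate_bps = 100e6      # theoretical maximum IP-layer rate (hypothetical)
octets_received = 1.5e9    # IP-layer octets counted in the interval
interval_s = 300           # the common 5-minute interval

# Utilization = octets received / maximum octets deliverable in the interval.
max_octets = link_rate_bps / 8 * interval_s
utilization = octets_received / max_octets
print(f"utilization = {utilization:.1%}")

# Smallest useful interval: some multiple of one MTU serialization time;
# a single-MTU interval would only report the link's on/off behavior.
mtu_octets = 1500
serialization_s = mtu_octets * 8 / link_rate_bps
print(f"MTU serialization time = {serialization_s * 1e6:.0f} microseconds")
```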
<span class="h3"><a class="selflink" id="section-6.5" href="#section-6.5">6.5</a>. IP-Layer Available Capacity</span>
The available capacity of a link can be calculated using the capacity
and utilization metrics.
When available capacity of a link or path is estimated through some
measurement technique, the following parameters should be reported:
o Name and reference to the exact method of measurement
o IP packet length, octets (including IP header)
o Maximum capacity that can be assessed in the measurement
configuration
o Time duration of the measurement
o All other parameters specific to the measurement method
Many methods of available capacity measurement have a maximum
capacity that they can measure, and this maximum may be less than the
actual available capacity of the link or path. Therefore, it is
important to know the capacity value beyond which there will be no
measured improvement.
The application performance estimation audience may have a desired
target capacity value and simply wish to assess whether there is
sufficient available capacity. This case simplifies the measurement
of link and path capacity to some degree, as long as the measurable
maximum exceeds the target capacity.
<span class="h3"><a class="selflink" id="section-6.6" href="#section-6.6">6.6</a>. Variability in Utilization and Available Capacity</span>
As with most metrics and measurements, assessing the consistency or
variability in the results gives the user an intuitive feel for the
degree (or confidence) that any one value is representative of other
results, or the spread of the underlying distribution of the
singleton measurements.
How can utilization be measured and summarized to describe the
potential variability in a useful way?
How can the variability in available capacity estimates be reported,
so that the confidence in the results is also conveyed?
We suggest some methods below.
<span class="h4"><a class="selflink" id="section-6.6.1" href="#section-6.6.1">6.6.1</a>. General Summary of Variability</span>
With a set of singleton utilization or available capacity estimates,
each representing a time interval needed to ascertain the estimate,
we seek to describe the variation over the set of singletons as
though reporting summary statistics of a distribution. Three useful
summary statistics are
o Minimum,
o Maximum, and
o Range
An alternate way to represent the range is as a ratio of maximum to
minimum value. This enables an easily understandable statistic to
describe the range observed. For example, when maximum = 3*minimum,
then the max/min ratio is 3, and users may see variability of this
order. On the other hand, capacity estimates with a max/min ratio
near 1 are quite consistent and near the central measure or statistic
reported.
For an ongoing series of singleton estimates, a moving average of n
estimates may provide a single value estimate to more easily
distinguish substantial changes in performance over time. For
example, in a window of n singletons observed in time interval t, a
percentage change of x% is declared to be a substantial change and
reported as an exception.
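The summaries suggested above can be sketched as follows (the singleton
estimates, window size n, and percentage threshold x are illustrative
choices):

```python
estimates = [90.0, 95.0, 92.0, 270.0, 265.0, 268.0]   # hypothetical Mbit/s

# Min, max, range, and the max/min ratio as an easily understood statistic.
lo, hi = min(estimates), max(estimates)
print(f"min={lo} max={hi} range={hi - lo} max/min={hi / lo:.1f}")

# Moving average of n singletons; declare an exception when consecutive
# window averages differ by more than x percent.
n, x = 3, 50.0
windows = [sum(estimates[i:i + n]) / n for i in range(len(estimates) - n + 1)]
for prev, cur in zip(windows, windows[1:]):
    change = abs(cur - prev) / prev * 100
    if change > x:
        print(f"exception: moving average changed by {change:.0f}%")
```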
Often, the most informative summary of the results is a two-axis plot
rather than a table of statistics, where time is plotted on the
x-axis and the singleton value on the y-axis. The time-series plot
can illustrate sudden changes in an otherwise stable range, identify
bi-modality easily, and help quickly assess correlation with other
time-series. Plots of frequency of the singleton values are likewise
useful tools to visualize the variation.
<span class="h2"><a class="selflink" id="section-7" href="#section-7">7</a>. Reporting Restricted Capacity Metrics</span>
Restricted capacity refers to the metrics defined in [<a href="./rfc3148" title=""A Framework for Defining Empirical Bulk Transfer Capacity Metrics"">RFC3148</a>], which
include criteria of data uniqueness or flow-control response to
congestion.
One primary metric considered is Bulk Transfer Capacity (BTC) for
complete paths. [<a href="./rfc3148" title=""A Framework for Defining Empirical Bulk Transfer Capacity Metrics"">RFC3148</a>] defines BTC as
BTC = data_sent / elapsed_time
for a connection with congestion-aware flow control, where data_sent
is the total number of unique payload bits (no headers).
We note that this definition *differs* from the raw capacity
definition in <a href="./rfc5136#section-2.3.1">Section 2.3.1 of [RFC5136]</a>, where IP-layer capacity
*includes* all bits in the IP header and payload. This means that
restricted capacity BTC is already operating at a disadvantage when
compared to the raw capacity at layers below TCP. Further, there are
cases where one IP layer is encapsulated in another IP layer or other
form of tunneling protocol, designating more and more of the
fundamental transport capacity as header bits that are pure overhead
to the BTC measurement.
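The difference between the two definitions can be sketched with invented
transfer figures (an IPv4 header and a TCP header with the timestamp
option are assumed for the overhead sizes):

```python
elapsed_time = 10.0        # seconds
segments = 10_000          # TCP segments delivered (no retransmissions)
payload = 1448             # unique payload octets per segment
ip_tcp_headers = 20 + 32   # IPv4 header + TCP header with timestamp option

# RFC 3148 BTC counts only unique payload bits.
btc = segments * payload * 8 / elapsed_time

# The RFC 5136 IP-layer view of the same transfer also counts header bits,
# so BTC necessarily reports less than the raw IP-layer capacity in use;
# tunneling adds still more header overhead.
ip_rate = segments * (payload + ip_tcp_headers) * 8 / elapsed_time
print(f"BTC = {btc / 1e6:.2f} Mbit/s, IP-layer = {ip_rate / 1e6:.2f} Mbit/s")
```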
We also note that raw and restricted capacity metrics are not
orthogonal in the sense defined in <a href="#section-5.1.2">Section 5.1.2</a> above. The
information they convey about the network under test is certainly
overlapping, but they reveal two different and important aspects of
performance.
When thinking about the triad of raw capacity metrics, BTC is most
akin to the "IP-Type-P Available Path Capacity", at least in the eyes
of a network user who seeks to know what transmission performance a
path might support.
<span class="h3"><a class="selflink" id="section-7.1" href="#section-7.1">7.1</a>. Type-P Parameter and Type-C Parameter</span>
The concept of "packets of Type-P" is defined in [<a href="./rfc2330" title=""Framework for IP Performance Metrics"">RFC2330</a>]. The
considerations for restricted capacity are identical to the raw
capacity section on this topic, with the addition that the various
fields and options in the TCP header must be included in the
description.
The vast array of TCP flow-control options is not well captured by
Type-P, because these options do not exist in the TCP header bits.
Therefore,
we introduce a new notion here: TCP Configuration of "Type-C". The
elements of Type-C describe all of the settings for TCP options and
congestion control algorithm variables, including the main form of
congestion control in use. Readers should consider the parameters
and variables of [<a href="./rfc3148" title=""A Framework for Defining Empirical Bulk Transfer Capacity Metrics"">RFC3148</a>] and [<a href="./rfc6349" title=""Framework for TCP Throughput Testing"">RFC6349</a>] when constructing Type-C.
<span class="h3"><a class="selflink" id="section-7.2" href="#section-7.2">7.2</a>. A Priori Factors</span>
The audience for network characterization may have detailed
information about each link that comprises a complete path (due to
ownership, for example), or some of the links in the path but not
others, or none of the links.
There are cases where the measurement audience only has information
on one of the links (the local access link) and wishes to measure one
or more BTC metrics. The discussion in <a href="#section-6.2">Section 6.2</a> applies here
as well.
<span class="h3"><a class="selflink" id="section-7.3" href="#section-7.3">7.3</a>. Measurement Interval</span>
There are limits on a useful measurement interval for BTC. Three
factors that influence the interval duration are listed below:
1. Measurements may choose to include or exclude the 3-way handshake
of TCP connection establishment, which requires at least 1.5 *
RTT (round-trip time) and contains both the delay of the path and
the host processing time for responses. However, user experience
includes the 3-way handshake for all new TCP connections.
2. Measurements may choose to include or exclude Slow-Start,
preferring instead to focus on a portion of the transfer that
represents "equilibrium" (which needs to be defined for
particular circumstances if used). However, user experience
includes the Slow-Start for all new TCP connections.
3. Measurements may choose to use a fixed block of data to transfer,
where the size of the block has a relationship to the file size
of the application of interest. This approach yields variable
size measurement intervals, where a path with faster BTC is
measured for less time than a path with slower BTC, and this has
implications when path impairments are time-varying, or
transient. Users are likely to turn their immediate attention
elsewhere when a very large file must be transferred; thus, they
do not directly experience such a long transfer -- they see the
result (success or failure) and possibly an objective measurement
of the transfer time (which will likely include the 3-way
handshake, Slow-Start, and application file management processing
time as well as the BTC).
Individual measurement intervals may be short or long, but there is a
need to report the results on a long-term basis that captures the BTC
variability experienced between each interval. Consistent BTC is a
valuable commodity along with the value attained.
<span class="h3"><a class="selflink" id="section-7.4" href="#section-7.4">7.4</a>. Bulk Transfer Capacity Reporting</span>
When BTC of a link or path is estimated through some measurement
technique, the following parameters should be reported:
o Name and reference to the exact method of measurement
o Maximum Transmission Unit (MTU)
o Maximum BTC that can be assessed in the measurement configuration
o Time and duration of the measurement
o Number of BTC connections used simultaneously
o *All* other parameters specific to the measurement method,
especially the congestion control algorithm in use
See also [<a href="./rfc6349" title=""Framework for TCP Throughput Testing"">RFC6349</a>].
Many methods of BTC measurement have a maximum capacity that they can
measure, and this maximum may be less than the available capacity of
the link or path. Therefore, it is important to specify the measured
BTC value beyond which there will be no measured improvement.
The application performance estimation audience may have a desired
target capacity value and simply wish to assess whether there is
sufficient BTC. This case simplifies the measurement of link and
path capacity to some degree, as long as the measurable maximum
exceeds the target capacity.
<span class="h3"><a class="selflink" id="section-7.5" href="#section-7.5">7.5</a>. Variability in Bulk Transfer Capacity</span>
As with most metrics and measurements, assessing the consistency or
variability in the results gives the user an intuitive feel for the
degree (or confidence) that any one value is representative of other
results, or the underlying distribution from which these singleton
measurements have come.
With two questions looming --
1. What ways can BTC be measured and summarized to describe the
potential variability in a useful way?
2. How can the variability in BTC estimates be reported, so that the
confidence in the results is also conveyed?
-- we suggest the methods listed in <a href="#section-6.6.1">Section 6.6.1</a> above, and the
additional results presentations given in [<a href="./rfc6349" title=""Framework for TCP Throughput Testing"">RFC6349</a>].
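One simple way to convey both the attained BTC and its consistency is
to report summary statistics across measurement intervals. A minimal
sketch follows; the per-interval values are invented, and the choice
of statistics (here min/mean/max and coefficient of variation) is one
option among the methods referenced above.

```python
import statistics

# Hypothetical BTC estimates (Mbit/s) from successive intervals;
# one interval (62.3) shows markedly lower capacity.
btc_mbps = [91.2, 88.7, 90.5, 62.3, 89.9, 90.8]

mean = statistics.mean(btc_mbps)
stdev = statistics.stdev(btc_mbps)
# Coefficient of variation: unitless indication of (in)consistency
cov = stdev / mean

print(f"min={min(btc_mbps)} mean={mean:.1f} "
      f"max={max(btc_mbps)} CoV={cov:.2f}")
```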
<span class="h2"><a class="selflink" id="section-8" href="#section-8">8</a>. Reporting on Test Streams and Sample Size</span>
This section discusses two key aspects of measurement that are
sometimes omitted from the report: the description of the test stream
on which the measurements are based, and the sample size.
<span class="h3"><a class="selflink" id="section-8.1" href="#section-8.1">8.1</a>. Test Stream Characteristics</span>
Network characterization has traditionally used Poisson-distributed
inter-packet spacing, as this provides an unbiased sample. The
average inter-packet spacing may be selected to allow observation of
<span class="grey">Morton, et al. Informational [Page 23]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-24" ></span>
<span class="grey"><a href="./rfc6703">RFC 6703</a> Reporting Metrics August 2012</span>
specific network phenomena. Other test streams are designed to
sample some property of the network, such as the presence of
congestion, link bandwidth, or packet reordering.
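A Poisson stream has exponentially distributed inter-packet gaps.  A
minimal sketch of generating such a send schedule is shown below; the
mean spacing and packet count are arbitrary choices for illustration,
not values recommended by this memo.

```python
import random

def poisson_send_times(n_packets, mean_gap_s, seed=1):
    """Return n_packets send times whose inter-packet gaps are
    exponentially distributed, i.e., a Poisson stream with mean
    rate 1/mean_gap_s packets per second."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_packets):
        t += rng.expovariate(1.0 / mean_gap_s)
        times.append(t)
    return times

times = poisson_send_times(1000, mean_gap_s=0.010)  # ~100 pkts/s
gaps = [b - a for a, b in zip(times, times[1:])]
print(f"mean gap ~ {sum(gaps) / len(gaps) * 1000:.1f} ms")
```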
If measuring a network in order to make inferences about applications
or receiver performance, then there are usually efficiencies derived
from a test stream that has similar characteristics to the sender.
In some cases, it is essential to synthesize the sender stream, as
with BTC estimates. In other cases, it may be sufficient to sample
with a "known bias", e.g., a Periodic stream to estimate real-time
application performance.
<span class="h3"><a class="selflink" id="section-8.2" href="#section-8.2">8.2</a>. Sample Size</span>
Sample size is directly related to the accuracy of the results and
plays a critical role in the report. Even if only the sample size
(in terms of number of packets) is given for each value or summary
statistic, it imparts a notion of the confidence in the result.
In practice, the sample size will be selected taking both statistical
and practical factors into account. Among these factors are the
following:
1. The estimated variability of the quantity being measured.
2. The desired confidence in the result (although this may be
dependent on assumption of the underlying distribution of the
measured quantity).
3. The effects of active measurement traffic on user traffic.
A sample size may sometimes be referred to as "large". This is a
relative and qualitative term. It is preferable to describe what one
is attempting to achieve with the sample. For example, stating an
implication may be helpful: this sample is large enough that a single
outlying value at ten times the "typical" sample mean (the mean
without the outlying value) would influence the mean by no more
than X.
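The implication above can be checked directly: with one outlier at
ten times the "typical" mean among n singletons, the sample mean moves
by a relative factor of 9/n. A small sketch (the sample size and mean
are invented for illustration):

```python
def outlier_influence(n, typical_mean):
    """Relative change in the sample mean when one of n values is an
    outlier at 10x the mean of the remaining n - 1 ("typical") values.
    Algebraically this equals 9 / n."""
    new_mean = ((n - 1) * typical_mean + 10 * typical_mean) / n
    return (new_mean - typical_mean) / typical_mean

# With 900 singletons, a single 10x outlier moves the mean by only 1%:
print(f"{outlier_influence(900, 1.0):.3f}")  # -> 0.010
```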
The Appendix of [<a href="./rfc2330" title=""Framework for IP Performance Metrics"">RFC2330</a>] indicates that a sample size of 128
singletons worked well for goodness-of-fit testing, while a much
larger size (8192 singletons) almost always failed.
<span class="grey">Morton, et al. Informational [Page 24]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-25" ></span>
<span class="grey"><a href="./rfc6703">RFC 6703</a> Reporting Metrics August 2012</span>
<span class="h2"><a class="selflink" id="section-9" href="#section-9">9</a>. Security Considerations</span>
The security considerations that apply to any active measurement of
live networks are relevant here as well. See the Security
Considerations section of [<a href="./rfc4656" title=""A One-way Active Measurement Protocol (OWAMP)"">RFC4656</a>] for mandatory-to-implement
security features that intend to mitigate attacks.
Measurement systems conducting long-term measurements are more
exposed to threats, both because ports remain open longer to perform
their task and because measurement activity on those ports is more
easily detected. Further, use of long packet waiting times affords
an attacker a better opportunity to prepare and launch a replay
attack.
<span class="h2"><a class="selflink" id="section-10" href="#section-10">10</a>. Acknowledgements</span>
The authors thank Phil Chimento for his suggestion to employ
conditional distributions for delay, Steve Konish Jr. for his careful
review and suggestions, Dave McDysan and Don McLachlan for useful
comments based on their long experience with measurement and
reporting, Daniel Genin for his observation of non-orthogonality
between raw and restricted capacity metrics (and for noticing our
previous omission of this fact), and Matt Zekauskas for suggestions
on organizing the memo for easier consumption.
<span class="h2"><a class="selflink" id="section-11" href="#section-11">11</a>. References</span>
<span class="h3"><a class="selflink" id="section-11.1" href="#section-11.1">11.1</a>. Normative References</span>
[<a id="ref-RFC2330">RFC2330</a>] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
"Framework for IP Performance Metrics", <a href="./rfc2330">RFC 2330</a>,
May 1998.
[<a id="ref-RFC2678">RFC2678</a>] Mahdavi, J. and V. Paxson, "IPPM Metrics for Measuring
Connectivity", <a href="./rfc2678">RFC 2678</a>, September 1999.
[<a id="ref-RFC2679">RFC2679</a>] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
Delay Metric for IPPM", <a href="./rfc2679">RFC 2679</a>, September 1999.
[<a id="ref-RFC2680">RFC2680</a>] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way
Packet Loss Metric for IPPM", <a href="./rfc2680">RFC 2680</a>, September 1999.
[<a id="ref-RFC3148">RFC3148</a>] Mathis, M. and M. Allman, "A Framework for Defining
Empirical Bulk Transfer Capacity Metrics", <a href="./rfc3148">RFC 3148</a>,
July 2001.
[<a id="ref-RFC3393">RFC3393</a>] Demichelis, C. and P. Chimento, "IP Packet Delay
Variation Metric for IP Performance Metrics (IPPM)",
<a href="./rfc3393">RFC 3393</a>, November 2002.
<span class="grey">Morton, et al. Informational [Page 25]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-26" ></span>
<span class="grey"><a href="./rfc6703">RFC 6703</a> Reporting Metrics August 2012</span>
[<a id="ref-RFC3432">RFC3432</a>] Raisanen, V., Grotefeld, G., and A. Morton, "Network
performance measurement with periodic streams", <a href="./rfc3432">RFC 3432</a>,
November 2002.
[<a id="ref-RFC4656">RFC4656</a>] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
Zekauskas, "A One-way Active Measurement Protocol
(OWAMP)", <a href="./rfc4656">RFC 4656</a>, September 2006.
[<a id="ref-RFC4737">RFC4737</a>] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
S., and J. Perser, "Packet Reordering Metrics", <a href="./rfc4737">RFC 4737</a>,
November 2006.
[<a id="ref-RFC5136">RFC5136</a>] Chimento, P. and J. Ishac, "Defining Network Capacity",
<a href="./rfc5136">RFC 5136</a>, February 2008.
<span class="h3"><a class="selflink" id="section-11.2" href="#section-11.2">11.2</a>. Informative References</span>
[<a id="ref-Casner">Casner</a>] Casner, S., Alaettinoglu, C., and C. Kuan, "A Fine-
Grained View of High-Performance Networking",
NANOG 22 Conf., May 20-22 2001,
<<a href="http://www.nanog.org/presentations/archive/index.php">http://www.nanog.org/presentations/archive/index.php</a>>.
[<a id="ref-Cia03">Cia03</a>] Ciavattone, L., Morton, A., and G. Ramachandran,
"Standardized Active Measurements on a Tier 1 IP
Backbone", IEEE Communications Magazine, Vol. 41
No. 6, pp. 90-97, June 2003.
[<a id="ref-IPPM-RPT">IPPM-RPT</a>] Shalunov, S. and M. Swany, "Reporting IP Performance
Metrics to Users", Work in Progress, March 2011.
[<a id="ref-RFC5481">RFC5481</a>] Morton, A. and B. Claise, "Packet Delay Variation
Applicability Statement", <a href="./rfc5481">RFC 5481</a>, March 2009.
[<a id="ref-RFC5835">RFC5835</a>] Morton, A., Ed., and S. Van den Berghe, Ed., "Framework
for Metric Composition", <a href="./rfc5835">RFC 5835</a>, April 2010.
[<a id="ref-RFC6349">RFC6349</a>] Constantine, B., Forget, G., Geib, R., and R. Schrage,
"Framework for TCP Throughput Testing", <a href="./rfc6349">RFC 6349</a>,
August 2011.
[<a id="ref-Y.1540">Y.1540</a>] International Telecommunication Union, "Internet protocol
data communication service - IP packet transfer and
availability performance parameters", ITU-T
Recommendation Y.1540, March 2011.
[<a id="ref-Y.1541">Y.1541</a>] International Telecommunication Union, "Network
performance objectives for IP-based services", ITU-T
Recommendation Y.1541, December 2011.
<span class="grey">Morton, et al. Informational [Page 26]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-27" ></span>
<span class="grey"><a href="./rfc6703">RFC 6703</a> Reporting Metrics August 2012</span>
Authors' Addresses
Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ 07748
USA
Phone: +1 732 420 1571
Fax: +1 732 368 1192
EMail: acmorton@att.com
URI: <a href="http://home.comcast.net/~acmacm/">http://home.comcast.net/~acmacm/</a>
Gomathi Ramachandran
AT&T Labs
200 Laurel Avenue South
Middletown, New Jersey 07748
USA
Phone: +1 732 420 2353
EMail: gomathi@att.com
Ganga Maguluri
AT&T Labs
200 Laurel Avenue South
Middletown, New Jersey 07748
USA
Phone: +1 732 420 2486
EMail: gmaguluri@att.com
</pre>