<pre>Independent Submission                                         Y. Nachum
Request for Comments: 7586
Category: Experimental                                         L. Dunbar
ISSN: 2070-1721                                                   Huawei
                                                           I. Yerushalmi
                                                              T. Mizrahi
                                                                 Marvell
                                                               June 2015
<span class="h1">The Scalable Address Resolution Protocol (SARP)</span>
<span class="h1">for Large Data Centers</span>
Abstract
This document introduces the Scalable Address Resolution Protocol
(SARP), an architecture that uses proxy gateways to scale large data
center networks. SARP is based on fast proxies that significantly
reduce switches' Filtering Database (FDB) table sizes and reduce the
impact of ARP and Neighbor Discovery (ND) on network elements in an
environment where hosts within one subnet (or VLAN) can spread over
various locations. SARP is targeted for massive data centers with a
significant number of Virtual Machines (VMs) that can move across
various physical locations.
Independent Submissions Editor Note
This is an Experimental document; that experiment will end two years
after the RFC is published. At that point, the RFC authors will
attempt to determine how widely SARP has been implemented and used.
IESG Note
The IESG notes that the problems described in <a href="./rfc6820">RFC 6820</a> can already be
addressed through the simple combination of existing standardized or
other published techniques including Layer 2 VPN (<a href="./rfc4664">RFC 4664</a>), proxy
ARP (<a href="./rfc925">RFC 925</a>), proxy Neighbor Discovery (<a href="./rfc4389">RFC 4389</a>), IGMP and MLD
snooping (<a href="./rfc4541">RFC 4541</a>), and ARP mediation for IP interworking of Layer 2
VPNs (<a href="./rfc6575">RFC 6575</a>).
<span class="grey">Nachum, et al. Experimental [Page 1]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-2" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
Status of This Memo
This document is not an Internet Standards Track specification; it is
published for examination, experimental implementation, and
evaluation.
This document defines an Experimental Protocol for the Internet
community. This is a contribution to the RFC Series, independently
of any other RFC stream. The RFC Editor has chosen to publish this
document at its discretion and makes no statement about its value for
implementation or deployment. Documents approved for publication by
the RFC Editor are not a candidate for any level of Internet
Standard; see <a href="./rfc5741#section-2">Section 2 of RFC 5741</a>.
Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
<a href="http://www.rfc-editor.org/info/rfc7586">http://www.rfc-editor.org/info/rfc7586</a>.
Copyright Notice
Copyright (c) 2015 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to <a href="https://www.rfc-editor.org/bcp/bcp78">BCP 78</a> and the IETF Trust's Legal
Provisions Relating to IETF Documents
(<a href="http://trustee.ietf.org/license-info">http://trustee.ietf.org/license-info</a>) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.
<span class="grey">Nachum, et al. Experimental [Page 2]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-3" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
Table of Contents
<a href="#section-1">1</a>. Introduction ....................................................<a href="#page-3">3</a>
<a href="#section-1.1">1.1</a>. SARP Motivation ............................................<a href="#page-4">4</a>
<a href="#section-1.2">1.2</a>. SARP Overview ..............................................<a href="#page-7">7</a>
<a href="#section-1.3">1.3</a>. SARP Deployment Options ....................................<a href="#page-8">8</a>
<a href="#section-1.4">1.4</a>. Comparison with Existing Solutions .........................<a href="#page-9">9</a>
<a href="#section-2">2</a>. Terms and Abbreviations Used in This Document ..................<a href="#page-10">10</a>
<a href="#section-3">3</a>. SARP: Theory of Operation ......................................<a href="#page-11">11</a>
<a href="#section-3.1">3.1</a>. Control Plane: ARP/ND .....................................<a href="#page-11">11</a>
<a href="#section-3.1.1">3.1.1</a>. ARP/NS Request for a Local VM ......................<a href="#page-11">11</a>
<a href="#section-3.1.2">3.1.2</a>. ARP/NS Request for a Remote VM .....................<a href="#page-12">12</a>
3.1.3. Gratuitous ARP and Unsolicited Neighbor
Advertisement (UNA) ................................<a href="#page-13">13</a>
<a href="#section-3.2">3.2</a>. Data Plane: Packet Transmission ...........................<a href="#page-13">13</a>
<a href="#section-3.2.1">3.2.1</a>. Local Packet Transmission ..........................<a href="#page-13">13</a>
<a href="#section-3.2.2">3.2.2</a>. Packet Transmission between Sites ..................<a href="#page-13">13</a>
<a href="#section-3.3">3.3</a>. VM Migration ..............................................<a href="#page-14">14</a>
<a href="#section-3.3.1">3.3.1</a>. VM Local Migration .................................<a href="#page-14">14</a>
<a href="#section-3.3.2">3.3.2</a>. VM Migration from One Site to Another ..............<a href="#page-14">14</a>
3.3.2.1. Impact on IP-to-MAC Mapping Cache
Table of Migrated VMs .....................<a href="#page-16">16</a>
<a href="#section-3.4">3.4</a>. Multicast and Broadcast ...................................<a href="#page-17">17</a>
<a href="#section-3.5">3.5</a>. Non-IP Packet .............................................<a href="#page-17">17</a>
<a href="#section-3.6">3.6</a>. High Availability and Load Balancing ......................<a href="#page-17">17</a>
<a href="#section-3.7">3.7</a>. SARP Interaction with Overlay Networks ....................<a href="#page-18">18</a>
<a href="#section-4">4</a>. Security Considerations ........................................<a href="#page-18">18</a>
<a href="#section-5">5</a>. References .....................................................<a href="#page-19">19</a>
<a href="#section-5.1">5.1</a>. Normative References ......................................<a href="#page-19">19</a>
<a href="#section-5.2">5.2</a>. Informative References ....................................<a href="#page-20">20</a>
Acknowledgments ...................................................<a href="#page-21">21</a>
Authors' Addresses ................................................<a href="#page-21">21</a>
<span class="h2"><a class="selflink" id="section-1" href="#section-1">1</a>. Introduction</span>
This document describes a proxy gateway technique, called the
Scalable Address Resolution Protocol (SARP), which reduces switches'
Filtering Database (FDB) size and ARP/Neighbor Discovery impact on
network elements in an environment where hosts within one subnet (or
VLAN) can spread over various access domains in data centers.
The main idea of SARP is to represent all VMs (or hosts) under each
access domain by the MAC address of their corresponding access node
(or aggregation node). For example (Figure 1), when host A in the
west site needs to communicate with host B, which is on the same VLAN
but connected to a different access domain (east site), SARP requires
host A to use the MAC address of SARP proxy 2, rather than the
address of host B. By doing so, switches in each domain do not need
<span class="grey">Nachum, et al. Experimental [Page 3]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-4" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
to maintain a list of MAC addresses for all the VMs (hosts) in
different access domains; every switch only needs to be familiar with
MAC addresses that reside in the current domain, and addresses of
remote SARP proxy gateways. Therefore, the switches' FDB size is
limited regardless of the number of access domains.
+-------+     +-------+      _  __        +-------+     +-------+
|       |     | SARP  |    / \_/  \_      | SARP  |     |       |
|host A |<===>| proxy |<=>\_         \<==>| proxy |<===>|host B |
|       |     |   1   |    /         _/   |   2   |     |       |
+-------+     +-------+    \__   _/       +-------+     +-------+
                              \_/
<------West Site------>                   <------East Site------>
Figure 1: A Brief Overview of SARP
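As an informal illustration of this idea (not part of the protocol
specification; the set names below are invented for the example), the
following Python sketch contrasts what a west-site switch must hold
in its FDB with and without SARP for the topology of Figure 1:
   # Hypothetical MAC-address sets for the topology of Figure 1.
   west_host_macs  = {"MAC of host A"}        # hosts local to the west site
   east_host_macs  = {"MAC of host B"}        # hosts behind the east site
   east_proxy_macs = {"MAC of SARP proxy 2"}  # remote SARP proxy gateways

   # Without SARP, a west-site switch may learn every remote host MAC.
   fdb_without_sarp = west_host_macs | east_host_macs

   # With SARP, it only learns local MACs plus the remote proxy MACs, so
   # the table grows with the number of remote proxies, not remote VMs.
   fdb_with_sarp = west_host_macs | east_proxy_macs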
<span class="h3"><a class="selflink" id="section-1.1" href="#section-1.1">1.1</a>. SARP Motivation</span>
[<a id="ref-RFC6820">RFC6820</a>] discusses the impacts and scaling issues that arise in data
center networks when subnets span across multiple Layer 2 / Layer 3
(L2/L3) boundary routers.
Unfortunately, when the combined number of VMs (or hosts) in all
those subnets is large, this can lead to an explosion of the size of
the switches' MAC address tables and a heavy load on network
elements.
There are four major issues associated with subnets spanning across
multiple L2/L3 boundary router ports:
1) Explosion of the size of the intermediate switches' MAC address
table (FDB).
When hosts in a VLAN (or subnet) span across multiple access
domains and each access domain has hosts belonging to different
VLANs, each access switch has to enable multiple VLANs. Thus,
those access switches are exposed to all MAC addresses across all
VLANs.
For example, for an access switch with 40 attached physical
servers, where each server has 100 VMs, the access switch has
4,000 attached MAC addresses. If hosts/VMs can indeed be moved
anywhere, the worst case for the access switch is when all those
4,000 VMs belong to different VLANs, i.e., the access switch has
4,000 VLANs enabled. If each VLAN has 200 hosts, this access
switch's MAC address table potentially has 200 * 4,000 = 800,000
entries (this computation is restated in the short sketch that
follows this list of issues).
<span class="grey">Nachum, et al. Experimental [Page 4]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-5" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
It is important to note that the example above is relevant
regardless of whether IPv4 or IPv6 is used.
The example illustrates a scenario that is worse than what today's
L2/L3 gateway has to face. In today's environment, where each
subnet is limited to a few access switches, the number of MAC
addresses the gateway has to learn is of a significantly smaller
scale.
2) ARP/ND processing load impact on the L2/L3 boundary routers.
All VMs periodically send ARP/ND requests to their corresponding
gateway nodes to obtain the gateways' MAC addresses. When the
combined number of VMs across all the VLANs is large, processing
and responding to those requests can easily saturate the gateway's
CPU.
An L2/L3 boundary router could be hit with ARP/ND twice when the
originating and destination stations are in different subnets
attached to the same router and when those hosts do not
communicate with external peers very frequently. The first hit is
when the originating station in subnet 1 initiates an ARP/ND
request to the L2/L3 boundary router. The second hit is when the
L2/L3 boundary router initiates an ARP/ND request to the target in
subnet 2 if the target is not in the router's ARP/ND cache.
3) In IPv4, every end station in a subnet receives ARP broadcast
messages from all other end stations in the subnet. IPv6 ND has
eliminated this issue by using multicast.
However, most devices support a limited number of multicast
addresses, due to the scaling of multicast filtering. Once the
number of multicast addresses exceeds the multicast filter limit,
the multicast addresses have to be processed by the devices' CPUs
(i.e., the slow path).
It is less of an issue in data centers without VM mobility, since
each port is only dedicated to one (or a small number of) VLANs.
Thus, the number of multicast addresses hitting each port is
significantly lower.
4) The ARP/ND messages are flooded to many physical link segments,
which can reduce the bandwidth available for user traffic.
ARP/ND flooding is, in most cases, an insignificant issue in
today's data center networks, as the majority of data center
servers are shifting towards 1G or 10G Ethernet ports. The
bandwidth used by ARP/ND, even when flooded to all physical links,
<span class="grey">Nachum, et al. Experimental [Page 5]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-6" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
becomes negligible compared to the link bandwidth. Furthermore,
IGMP and Multicast Listener Discovery (MLD) snooping [<a href="./rfc4541" title=""Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches"">RFC4541</a>] can
further reduce the ND multicast traffic to some physical link
segments.
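The worst-case computation from issue 1 can be restated as a short,
purely illustrative Python calculation; the numbers are the example
figures used above, not parameters defined by SARP:
   servers_per_access_switch = 40    # physical servers on the switch
   vms_per_server            = 100   # VMs hosted on each server
   hosts_per_vlan            = 200   # hosts in each VLAN

   attached_vms = servers_per_access_switch * vms_per_server   # 4,000
   worst_case_vlans = attached_vms     # every VM in a different VLAN
   worst_case_fdb = worst_case_vlans * hosts_per_vlan           # 800,000

   print(attached_vms, worst_case_fdb)  # 4000 800000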
Statistics gathered by Merit Network [<a href="#ref-ARMDStats" title=""Address Resolution Statistics"">ARMDStats</a>] have shown that the
major impact of a large number of VMs in data centers is on the L2/L3
boundary routers, i.e., issue 2 above. An L2/L3 boundary router
could be hit with ARP/ND twice when 1) the originating and
destination stations are in different subnets attached to the same
router, and 2) those hosts do not communicate with external peers
often enough.
Overlay approaches, e.g., [<a href="./rfc7364" title=""Problem Statement: Overlays for Network Virtualization"">RFC7364</a>], can hide addresses of hosts
(VMs) in the core, but they do not prevent the MAC address table
explosion problem (issue 1) unless the Network Virtualization Edge
(NVE) is on a server.
The scaling practices documented in [<a href="./rfc7342" title=""Practices for Scaling ARP and Neighbor Discovery (ND) in Large Data Centers"">RFC7342</a>] can only reduce some
ARP impact on L2/L3 boundary routers in some scenarios, but not all.
In order to protect router CPUs from being overburdened by target
resolution requests, some routers rate-limit the target MAC
resolution requests to the router's CPU. When the rate limit is
exceeded, the incoming data frames are dropped. In traditional data
centers, this issue is less significant, since the number of hosts
attached to one L2/L3 boundary router is limited by the number of
physical ports of the switches/routers. When servers are virtualized
to support 30+ VMs, the number of hosts under one router can grow by
a factor of 30+. Furthermore, in traditional data center networks,
each subnet is neatly bound to a limited number of server racks,
i.e., switches only need to be familiar with MAC addresses of hosts
that reside in this small number of subnets. In contemporary data
center networks, as subnets are spread across many server racks,
switches are exposed to VLAN/MAC addresses of many subnets, greatly
increasing the size of switches' FDB tables.
The solution proposed in this document can eliminate or reduce the
likelihood of inter-subnet data frames being dropped and reduce the
number of host MAC addresses that intermediate switches are exposed
to, thus reducing switches' FDB table sizes.
<span class="grey">Nachum, et al. Experimental [Page 6]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-7" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
<span class="h3"><a class="selflink" id="section-1.2" href="#section-1.2">1.2</a>. SARP Overview</span>
The SARP approach uses proxy gateways to address the problems
discussed above.
Note: The guidelines to proxy developers [<a href="./rfc4389" title=""Neighbor Discovery Proxies (ND Proxy)"">RFC4389</a>] have been
carefully considered for SARP. <a href="#section-3.3">Section 3.3</a> discusses how SARP works
when VMs are moved from one segment to another.
In order to enable VMs to be moved across servers while ensuring
their MAC/IP addresses remain unchanged, the Layer 2 network (e.g.,
VLAN) that interconnects those VMs may spread across different server
racks, different rows of server racks, or even different data center
sites.
A multisite data center network comprises two main building
blocks: an interconnecting segment and an access segment. While the
access network is, in most cases, a Layer 2 network, the
interconnecting segment is not necessarily a Layer 2 network.
The SARP proxies are located at the boundaries where the access
segment connects to its interconnecting segment. The boundary node
can be a hypervisor virtual switch, a top-of-rack switch, an
aggregation switch (or end-of-row switch), or a data center core
switch. Figure 2 depicts an example of two remote data centers that
are managed as a single, flat Layer 2 domain. SARP proxies are
implemented at the edge devices connecting the data center to the
transport network. SARP significantly reduces the ARP/ND
transmissions over the interconnecting network.
<span class="grey">Nachum, et al. Experimental [Page 7]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-8" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
                      *-------------------*
                      |                   |
              +-------|  Interconnecting  |-------+
              |       |      network      |       |
              |       *-------------------*       |
              |                                   |
     *-----------------*                 *----------------*
     |  SARP Proxies   |                 |  SARP Proxies  |
     *-----------------*                 *----------------*
      |           |                       |           |
  *-------*   *-------*               *-------*   *-------*
  |Access |   |Access |               |Access |   |Access |
  *-------*   *-------*               *-------*   *-------*
      |
 *----------*
 |Hypervisor|
 *----------*
      |
  *--------*
  |Virtual |
  |Machine |
  *--------*
       (West Site)                         (East Site)
Figure 2: SARP: Network Architecture Example
<span class="h3"><a class="selflink" id="section-1.3" href="#section-1.3">1.3</a>. SARP Deployment Options</span>
SARP deployment is tightly coupled with the data center architecture.
SARP proxies are located at the point where the Layer 2
infrastructure connects to its Layer 2 cloud using overlay networks.
SARP proxies can be located at the data center edge (as Figure 2
depicts), data center core, or data center aggregation (denoted by
"Agg" in the figure). SARP can also be implemented by the hypervisor
(as Figure 3 depicts).
To simplify the description, we will focus on data centers that are
managed as a single, flat Layer 2 network, where SARP proxies are
located at the boundary where the data center connects to the
transport network (as Figure 2 depicts).
<span class="grey">Nachum, et al. Experimental [Page 8]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-9" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
                      *-------------------*
                      |                   |
              +-------|     TRANSPORT     |-------+
              |       |                   |       |
              |       *-------------------*       |
              |                                   |
     *-----------------*                 *----------------*
     |   Edge Device   |                 |  Edge Device   |
     *-----------------*                 *----------------*
              |                                   |
     *-----------------*                 *----------------*
     |      Core       |                 |      Core      |
     *-----------------*                 *----------------*
      |           |                       |           |
  *-------*   *-------*               *-------*   *-------*
  |  Agg  |   |  Agg  |               |  Agg  |   |  Agg  |
  *-------*   *-------*               *-------*   *-------*
      |
 *----------*
 |Hypervisor|
 *----------*
       (West Site)                         (East Site)
Figure 3: SARP Deployment Options
<span class="h3"><a class="selflink" id="section-1.4" href="#section-1.4">1.4</a>. Comparison with Existing Solutions</span>
The IETF has developed several mechanisms to address issues
associated with Layer 2 networks over multiple geographic locations,
for example, Layer 2 VPN [<a href="./rfc4664" title=""Framework for Layer 2 Virtual Private Networks (L2VPNs)"">RFC4664</a>], proxy ARP [<a href="./rfc925" title=""Multi-LAN address resolution"">RFC925</a>] [<a href="#ref-ProxyARP" title=""Using ARP to implement transparent subnet gateways"">ProxyARP</a>],
proxy Neighbor Discovery [<a href="./rfc4389" title=""Neighbor Discovery Proxies (ND Proxy)"">RFC4389</a>], IGMP and MLD snooping [<a href="./rfc4541" title=""Considerations for Internet Group Management Protocol (IGMP) and Multicast Listener Discovery (MLD) Snooping Switches"">RFC4541</a>],
and ARP mediation for IP interworking of Layer 2 VPNs [<a href="./rfc6575" title=""Address Resolution Protocol (ARP) Mediation for IP Interworking of Layer 2 VPNs"">RFC6575</a>].
However, all those solutions work well when hosts within one subnet
are placed together under one access domain, so that the intermediate
switches in each access domain are only exposed to host addresses
from a limited number of subnets. SARP provides a solution for the
case where hosts within one subnet are spread across multiple access
domains and each access domain has hosts from many subnets. In such
an environment, the intermediate switches in each access domain are
exposed to the combined host addresses of all the subnets enabled in
that access domain.
<span class="grey">Nachum, et al. Experimental [Page 9]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-10" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
<span class="h2"><a class="selflink" id="section-2" href="#section-2">2</a>. Terms and Abbreviations Used in This Document</span>
ARP: Address Resolution Protocol [<a href="#ref-ARP" title=""Ethernet Address Resolution Protocol: Or Converting Network Protocol Addresses to 48.bit Ethernet Address for Transmission on Ethernet Hardware"">ARP</a>]
FDB: Filtering Database, which is used for Layer 2 switches
[<a href="#ref-802.1Q" title=""IEEE Standard for Local and metropolitan area networks -- Bridges and Bridged Networks"">802.1Q</a>]. Layer 2 switches flood data frames when the
Destination Address (DA) is not in the FDB, whereas routers
drop data frames when the DA is not in the Forwarding
Information Base (FIB). That is why the FDB is used for Layer
2 switches.
FIB: Forwarding Information Base
Hypervisor: a software layer that creates and runs virtual machines
on a server
IP-D: IP address of the destination virtual machine
IP-S: IP address of the source virtual machine
MAC-D: MAC address of the destination virtual machine
MAC-E: MAC address of the SARP proxy of the east site
MAC-S: MAC address of the source virtual machine
NA: IPv6 ND's Neighbor Advertisement
ND: IPv6 Neighbor Discovery Protocol [<a href="#ref-ND" title=""Neighbor Discovery for IP version 6 (IPv6)"">ND</a>]. In this document, ND
also refers to Neighbor Solicitation, Neighbor Advertisement,
and Unsolicited Neighbor Advertisement messages defined by <a href="./rfc4861">RFC</a>
<a href="./rfc4861">4861</a>.
NS: IPv6 ND's Neighbor Solicitation
SARP Proxy: A network component that participates in SARP
UNA: IPv6 ND's Unsolicited Neighbor Advertisement [<a href="#ref-ND" title=""Neighbor Discovery for IP version 6 (IPv6)"">ND</a>]
VM: Virtual Machine
<span class="grey">Nachum, et al. Experimental [Page 10]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-11" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
<span class="h2"><a class="selflink" id="section-3" href="#section-3">3</a>. SARP: Theory of Operation</span>
<span class="h3"><a class="selflink" id="section-3.1" href="#section-3.1">3.1</a>. Control Plane: ARP/ND</span>
This section describes the ARP/ND procedure scenarios. The first
scenario addresses a case where both the source and destination VMs
reside in the same access segment. In the second scenario, the
source VM is in the local access segment and the destination VM is
located at the remote access segment.
In all scenarios, the VMs (source and destination) share the same L2
broadcast domain.
<span class="h4"><a class="selflink" id="section-3.1.1" href="#section-3.1.1">3.1.1</a>. ARP/NS Request for a Local VM</span>
When source and destination VMs are located at the same access
segment (Figure 4), the address resolution process is as described in
[<a href="#ref-ARP" title=""Ethernet Address Resolution Protocol: Or Converting Network Protocol Addresses to 48.bit Ethernet Address for Transmission on Ethernet Hardware"">ARP</a>] and [<a href="#ref-ND" title=""Neighbor Discovery for IP version 6 (IPv6)"">ND</a>]; host A sends an ARP request or an IPv6 Neighbor
Solicitation (NS) to learn the IP-to-MAC mapping of host B, and it
receives a reply from host B with the IP-D to MAC-D mapping.
+-------+        _  __      +-------+        _  __
|host A |      / \_/  \_    | SARP  |      / \_/  \_
| IP-S  |<--->\_access \<==>| proxy |<===>\_interc.\
| MAC-S |     /network_/    |   1   |     /network_/
+-------+  +->\__   _/      +-------+     \__   _/
           |     \_/                         \_/
+-------+  |
|host B |<-+
| IP-D  |
| MAC-D |
+-------+
<--------------West Site------------>
Figure 4: SARP: Two Hosts in the Same Access Segment
<span class="grey">Nachum, et al. Experimental [Page 11]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-12" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
<span class="h4"><a class="selflink" id="section-3.1.2" href="#section-3.1.2">3.1.2</a>. ARP/NS Request for a Remote VM</span>
When the source and destination VMs are located at different access
segments, the address resolution process is as follows.
+-------+     +-------+      _  __        +-------+     +-------+
|host A |     | SARP  |    / \_/  \_      | SARP  |     |host B |
| IP-S  |<===>|proxy 1|<=>\_         \<==>|proxy 2|<===>| IP-D  |
| MAC-S |     | MAC-W |    /         _/   | MAC-E |     | MAC-D |
+-------+     +-------+    \__   _/       +-------+     +-------+
                              \_/
<------West Site------>                   <------East Site------>
Figure 5: SARP: Two Hosts That Reside in Different Segments
In the example illustrated in Figure 5, the source VM is located at
the west access segment and the destination VM is located at the east
access segment.
When host A sends an ARP/NS request to find out the IP-to-MAC mapping
of host B:
1. If SARP proxy 1 does not have IP-D in its ARP cache, the ARP/NS
request is propagated to all access segments that might have VMs
in the same virtual network as the originating VM, including the
east access segment.
2. As SARP proxy 1 forwards the ARP/NS message, it replaces the
source MAC address, MAC-S, with its own MAC address, MAC-W. Thus,
the switches that reside in the interconnecting segment are not
exposed to MAC-S.
3. The ARP/NS request reaches SARP proxy 2.
4. If SARP proxy 2 does not have IP-D in its ARP cache, the ARP/NS
request is forwarded to the east access network. Host B responds
with an ARP reply (IPv4) or a Neighbor Advertisement (IPv6) to the
request with MAC-D.
5. When the response message reaches SARP proxy 2, it replaces MAC-D
with MAC-E; thus, the response reaches SARP proxy 1 with MAC-E.
6. As SARP proxy 1 forwards the response to host A, it replaces the
destination MAC address MAC-W with MAC-S.
<span class="grey">Nachum, et al. Experimental [Page 12]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-13" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
SARP Proxy ARP/ND Cache
SARP proxies maintain a cache of the IP-to-MAC mapping. This cache
is based on ARP/ND messages that are sent by hosts and traverse the
SARP proxies.
In steps 1 and 4 above, if the SARP proxy has IP-D in its ARP cache,
it responds with MAC-E, without forwarding the ARP/NS request.
This caching approach significantly reduces the volume of the ARP/ND
transmission over the network and reduces the round-trip time of
ARP/ND requests.
When the west SARP proxy caches the IP-to-MAC mapping entries for
remote VMs, the expiration timers should be set to relatively low
values to prevent stale entries due to remote VMs being moved or
deleted. In environments where VMs move frequently, it is not
recommended for SARP proxies to cache the IP-to-MAC mapping entries
of remote VMs.
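The behaviour of steps 1-6, together with the cache shortcut
described above, can be summarized by the following non-normative
Python sketch. The class and table names (SarpProxy, cache, pending)
are illustrative assumptions; in particular, the pending table used
to restore MAC-S in step 6 is only one possible implementation
choice and is not mandated by this document.
   from dataclasses import dataclass

   @dataclass
   class ArpMessage:
       src_mac: str        # sender hardware address
       src_ip: str         # sender protocol address
       target_ip: str      # address being resolved
       dst_mac: str = ""   # target hardware address (used in replies)

   class SarpProxy:
       def __init__(self, my_mac):
           self.my_mac = my_mac  # MAC-W for the west proxy, MAC-E for east
           self.cache = {}       # IP -> MAC, learned from traversing ARP/ND
           self.pending = {}     # target IP -> MAC of the local requester

       def on_local_request(self, req):
           """ARP/NS request from a local host (steps 1-2)."""
           if req.target_ip in self.cache:
               # Cache hit: answer locally without forwarding the request.
               return ArpMessage(src_mac=self.cache[req.target_ip],
                                 src_ip=req.target_ip,
                                 target_ip=req.target_ip,
                                 dst_mac=req.src_mac)
           # Cache miss: remember the requester, hide MAC-S, and forward.
           self.pending[req.target_ip] = req.src_mac
           req.src_mac = self.my_mac
           return None           # the caller propagates the rewritten request

       def on_remote_request(self, req):
           """Request arriving from the interconnect (steps 3-4)."""
           if req.target_ip in self.cache:
               # Known local VM: answer with this proxy's own MAC (MAC-E).
               return ArpMessage(src_mac=self.my_mac,
                                 src_ip=req.target_ip,
                                 target_ip=req.target_ip,
                                 dst_mac=req.src_mac)
           return None           # otherwise forward into the local segment

       def on_local_reply(self, reply):
           """Reply from a local host, heading to a remote site (step 5)."""
           self.cache[reply.src_ip] = reply.src_mac  # e.g., IP-D -> MAC-D
           reply.src_mac = self.my_mac               # MAC-D -> MAC-E
           return reply

       def on_remote_reply(self, reply):
           """Reply arriving from the remote proxy (step 6)."""
           self.cache[reply.src_ip] = reply.src_mac  # e.g., IP-D -> MAC-E
           reply.dst_mac = self.pending.pop(reply.src_ip, reply.dst_mac)
           return reply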
<span class="h4"><a class="selflink" id="section-3.1.3" href="#section-3.1.3">3.1.3</a>. Gratuitous ARP and Unsolicited Neighbor Advertisement (UNA)</span>
Hosts (or VMs) send out Gratuitous ARP (IPv4) [<a href="#ref-TcpIp" title=""TCP/IP Illustrated, Volume 1: The Protocols"">TcpIp</a>] and Unsolicited
Neighbor Advertisement (UNA) (IPv6) messages to allow other nodes to
refresh IP-to-MAC entries in their caches.
The local SARP proxy processes the Gratuitous ARP or UNA message in
the same way as the ARP reply or IPv6 NA, i.e., replaces the MAC
addresses in the same manner.
<span class="h3"><a class="selflink" id="section-3.2" href="#section-3.2">3.2</a>. Data Plane: Packet Transmission</span>
<span class="h4"><a class="selflink" id="section-3.2.1" href="#section-3.2.1">3.2.1</a>. Local Packet Transmission</span>
When a VM transmits packets to a destination VM that is located at
the same site (Figure 4), the data plane is unaffected by SARP;
packets are sent from (IP-S, MAC-S) to (IP-D, MAC-D).
<span class="h4"><a class="selflink" id="section-3.2.2" href="#section-3.2.2">3.2.2</a>. Packet Transmission between Sites</span>
Packets that are sent between sites (Figure 5) traverse the SARP
proxy of both sites.
A packet sent from host A to host B undergoes the following
procedure:
1. Host A sends a packet to IP-D, and based on its ARP table, it uses
the MAC addresses {MAC-E, MAC-S}.
<span class="grey">Nachum, et al. Experimental [Page 13]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-14" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
2. SARP proxy 1 receives the packet and replaces the source MAC
address, such that the packet includes {MAC-E, MAC-W}.
3. SARP proxy 2 receives the packet and replaces the destination MAC
address, and the packet is sent to host B with {MAC-D, MAC-W}.
SARP proxy 1 replaces the source MAC address with its own, since
switches in the interconnecting segment are only familiar with SARP
proxy MAC addresses and are not familiar with host addresses.
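A non-normative sketch of these data-plane rewrites, in the same
illustrative Python style used earlier (the frame fields and helper
names are invented for the example):
   from dataclasses import dataclass

   @dataclass
   class Frame:
       dst_mac: str
       src_mac: str
       dst_ip: str

   def egress_rewrite(frame, local_proxy_mac):
       """Step 2: the source-site proxy hides MAC-S behind its own MAC."""
       frame.src_mac = local_proxy_mac
       return frame

   def ingress_rewrite(frame, local_arp_cache):
       """Step 3: the destination-site proxy restores the real target MAC."""
       frame.dst_mac = local_arp_cache[frame.dst_ip]   # IP-D -> MAC-D
       return frame

   # Host A emits {MAC-E, MAC-S}; host B receives {MAC-D, MAC-W}.
   f = Frame(dst_mac="MAC-E", src_mac="MAC-S", dst_ip="IP-D")
   f = egress_rewrite(f, "MAC-W")
   f = ingress_rewrite(f, {"IP-D": "MAC-D"})
   assert (f.dst_mac, f.src_mac) == ("MAC-D", "MAC-W")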
Note: it is a common security practice in data center networks to use
access lists, allowing each VM to communicate only with a list of
authorized peer VMs. In most cases, such access control lists are
based on IP addresses and, hence, are not affected by the MAC address
replacement in SARP.
<span class="h3"><a class="selflink" id="section-3.3" href="#section-3.3">3.3</a>. VM Migration</span>
<span class="h4"><a class="selflink" id="section-3.3.1" href="#section-3.3.1">3.3.1</a>. VM Local Migration</span>
When a VM migrates locally within its access segment, SARP does not
require any special behavior. VM migration is resolved entirely by
the Layer 2 mechanisms.
<span class="h4"><a class="selflink" id="section-3.3.2" href="#section-3.3.2">3.3.2</a>. VM Migration from One Site to Another</span>
This section focuses on a scenario where a VM migrates from the west
site to the east site while maintaining its MAC and IP addresses.
VM migration might affect networking elements based on their
respective locations:
- origin site (west site)
- destination site (east site)
- other sites
+-------+     +-------+      _  __        +-------+     +-------+
|host A |     | SARP  |    / \_/  \_      | SARP  |     |host A |
| IP-D  |<===>|proxy 1|<=>\_         \<==>|proxy 2|<===>| IP-D  |
| MAC-D |     | MAC-W |    /         _/   | MAC-E |     | MAC-D |
+-------+     +-------+    \__   _/       +-------+     +-------+
                              \_/
<------West Site------>                   <------East Site------>
      Origin Site                            Destination Site
Figure 6: SARP: Host A Migrates from West Site to East Site
<span class="grey">Nachum, et al. Experimental [Page 14]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-15" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
Origin Site
The origin site is the site where the VM resides before the
migration (west site).
Before the VM (IP=IP-D, MAC=MAC-D) is moved, all VMs at the west
site that have an ARP entry of IP-D in their ARP table have the
IP-D -> MAC-D mapping. VMs on other access segments have an ARP
entry of IP-D -> MAC-W mapping where MAC-W is the MAC address of
the SARP proxy on the west access segment.
After the VM (IP-D) in the west site moves to the east site, if a
Gratuitous ARP (IPv4) or an Unsolicited Neighbor Advertisement
(IPv6) message is sent out by the destination hypervisor on behalf
of the VM (IP-D), then the IP-to-MAC mapping cache of the VMs in
all access segments is updated by IP-D -> MAC-E, where MAC-E is
the MAC address of the SARP proxy on the east site. If no
Gratuitous ARP or UNA message is sent out by the destination
hypervisor, the IP-to-MAC cache on the VMs in the west site (and
other sites) is eventually aged out.
Until the IP-to-MAC mapping cache tables are updated, the source
VMs from the west site continue sending packets locally to MAC-D,
and switches at the west site are still configured with the old
location of MAC-D. This transient condition can be resolved by
having the VM manager send out a fake Gratuitous ARP or UNA
message on behalf of the destination Hypervisor. Another
alternative is to have a shorter aging timer configured for the
IP-to-MAC cache table.
Destination Site
The destination site is the site to which the VM migrated, i.e.,
the east site in Figure 6.
Before any Gratuitous ARP or UNA messages are sent out by the
destination hypervisor, all VMs at the east site (and all other
sites) might have an IP-D -> MAC-W mapping in their IP-to-MAC
mapping cache. The IP-to-MAC mapping cache is updated by aging or
by a Gratuitous ARP or UNA message sent by the destination
hypervisor. Until the IP-to-MAC mapping caches are updated, VMs
from the east site continue to send packets to MAC-W. This can be
resolved by having the VM manager send out a fake Gratuitous ARP
or UNA message immediately after the VM migration or by
redirecting the packets from the SARP proxy of the east site back
to the migrated VM by updating the destination MAC of the packets
to MAC-D.
<span class="grey">Nachum, et al. Experimental [Page 15]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-16" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
Other Sites
All VMs at the other sites that have an ARP entry of IP-D in their
ARP table have the IP-D -> MAC-W mapping. The ARP mapping is
updated by aging or by a Gratuitous ARP message sent by the
destination hypervisor of the migrated VM and modified by the SARP
proxy of the east site to an IP-D -> MAC-E mapping. Until ARP
tables are updated, VMs from other sites continue sending packets
to MAC-W.
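The cache updates described for the three kinds of sites can be
condensed into the following non-normative Python sketch; the
function and argument names are invented for illustration, and the
behaviour assumes that a Gratuitous ARP or UNA is indeed sent on
behalf of the migrated VM:
   def refresh_caches_after_migration(vm_ip, vm_mac, new_proxy_mac,
                                      destination_site_caches,
                                      other_site_caches):
       # Inside the destination (east) site the real MAC is learned.
       for cache in destination_site_caches:
           cache[vm_ip] = vm_mac              # IP-D -> MAC-D

       # The east SARP proxy rewrites the source MAC of the Gratuitous
       # ARP / UNA (as for an ARP reply, Section 3.1.3), so caches at
       # the origin site and at other sites converge on MAC-E instead
       # of the stale MAC-W entry.
       for cache in other_site_caches:
           cache[vm_ip] = new_proxy_mac       # IP-D -> MAC-E

   # Example: a remote VM that mapped IP-D to MAC-W before migration.
   remote = [{"IP-D": "MAC-W"}]
   local = [{}]
   refresh_caches_after_migration("IP-D", "MAC-D", "MAC-E", local, remote)
   assert remote[0]["IP-D"] == "MAC-E" and local[0]["IP-D"] == "MAC-D"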
<span class="h5"><a class="selflink" id="section-3.3.2.1" href="#section-3.3.2.1">3.3.2.1</a>. Impact on IP-to-MAC Mapping Cache Table of Migrated VMs</span>
When a VM (IP-D) is moved from one site to another, its IP-to-MAC
mapping entries for VMs located at other sites (i.e., neither the
east site nor the west site) are still valid, even though most guest
OSs (or VMs) will refresh their IP-to-MAC cache after migration.
The migrated VM's IP-to-MAC mapping entries for VMs located at the
east site, if not refreshed after migration, can be kept with no
change until the ARP aging time, as these entries are mapped to
MAC-E. All traffic originating from the migrated VM in its new location
to VMs located at the east site traverses the SARP proxy of the east
site. That SARP proxy can redirect the traffic back to the
corresponding destinations on the east site. Furthermore, an ARP/UNA
message sent by the SARP proxy of the east site or by the VMs on the
east site can refresh the corresponding entries in the migrated VM's
IP-to-MAC cache.
The migrated VM's ARP entries for VMs located at the west site remain
unchanged until either the ARP entries age out or new data frames are
received from the remote sites. Since all MAC addresses of the VMs
located at the west site are unknown at the east site, all unknown
traffic from the VM is intercepted by the SARP proxy of the east site
and forwarded to the SARP proxy of the west site (during the
transient period before the ARP entries age out). This transient
behavior can be avoided if the SARP proxy has the destination IP
address in its ARP cache: upon receiving a packet with an unknown
destination MAC address, it can send a Gratuitous ARP or UNA message
to the migrated VM.
Note that overlay networks providing Layer 2 network virtualization
services configure their edge-device MAC aging timers to be greater
than the ARP request interval.
<span class="grey">Nachum, et al. Experimental [Page 16]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-17" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
<span class="h3"><a class="selflink" id="section-3.4" href="#section-3.4">3.4</a>. Multicast and Broadcast</span>
Multicast and broadcast traffic is forwarded by SARP proxies as
follows:
o SARP proxies modify the source MAC address of multicast and
broadcast packets as described in <a href="#section-3.2">Section 3.2</a>.
o SARP proxies do not modify the destination MAC address of
multicast and broadcast packets.
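A minimal illustration of this rule in the same sketch style (the
frame representation is again an assumption, not a defined format):
   def rewrite_multicast(frame, local_proxy_mac):
       """Rewrite the source MAC as in Section 3.2; leave the group or
       broadcast destination MAC untouched."""
       return dict(frame, src_mac=local_proxy_mac)

   frame = {"dst_mac": "ff:ff:ff:ff:ff:ff", "src_mac": "MAC-S"}
   out = rewrite_multicast(frame, "MAC-W")
   assert out["dst_mac"] == "ff:ff:ff:ff:ff:ff" and out["src_mac"] == "MAC-W"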
<span class="h3"><a class="selflink" id="section-3.5" href="#section-3.5">3.5</a>. Non-IP Packet</span>
The L2/L3 boundary routers considered in this document are capable of
forwarding non-IP IEEE 802.1 Ethernet frames (Layer 2) without
changing the MAC headers. When subnets span across multiple ports of
those routers, they still fall under the category of a single link, or
the multi-access link model recommended by [<a href="./rfc4903" title=""Multi-Link Subnet Issues"">RFC4903</a>]. They differ from
the "multi-link" subnets described in [<a href="#ref-MultLinkSub">MultLinkSub</a>] and [<a href="./rfc4903" title=""Multi-Link Subnet Issues"">RFC4903</a>],
which refer to a different physical media with the same prefix
connected to a router, where the Layer 2 frames cannot be natively
forwarded without changing the headers.
<span class="h3"><a class="selflink" id="section-3.6" href="#section-3.6">3.6</a>. High Availability and Load Balancing</span>
The SARP proxy is located at the boundary where the local Layer 2
infrastructure connects to the interconnecting network. All traffic
from the local site to the remote sites traverses the SARP proxy.
The SARP proxy is subject to high-availability and bandwidth
requirements.
The SARP architecture supports multiple SARP proxies connecting a
single site to the transport network. In the SARP architecture, all
proxies can be active and can back up one another. The SARP
architecture is robust and allows network administrators to allocate
proxies according to bandwidth and high-availability requirements.
Traffic is segregated between SARP proxies by using VLANs. A SARP
proxy is the Master SARP proxy of one set of VLANs and the Backup
SARP proxy of another set of VLANs.
For example, assume the SARP proxies of the west site are SARP proxy
1 and SARP proxy 2. The west site supports VLAN 1 and VLAN 2, while
SARP proxy 1 is the Master SARP proxy of VLAN 1 and the Backup SARP
proxy of VLAN 2, and SARP proxy 2 is the Master SARP proxy of VLAN 2
and the Backup SARP proxy of VLAN 1. Both proxies are members of
VLAN 1 and VLAN 2.
<span class="grey">Nachum, et al. Experimental [Page 17]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-18" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
The Master SARP proxy updates its Backup SARP proxy with all the ARP
reply messages. The Backup SARP proxy maintains a backup database for
all the VLANs for which it serves as the Backup SARP proxy.
The Master and the Backup SARP proxies maintain a keepalive
mechanism. In case of a failure, the Backup SARP proxy becomes the
Master SARP proxy. The failure decision is per VLAN. When the
Master and the Backup SARP proxies switch over, the Backup SARP proxy
can use the MAC address of the Master SARP proxy. The Backup SARP
proxy then sends a Gratuitous ARP message locally, with the MAC
address of the Master SARP proxy, to update the forwarding tables on
the local switches. The Backup SARP proxy also notifies the remote
SARP proxies of the change.
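The per-VLAN Master/Backup arrangement and the failover step can be
sketched as follows (non-normative; the role table and callback
hooks are illustrative assumptions rather than a defined state
machine):
   # Example from the text: proxy 1 is Master for VLAN 1 and Backup for
   # VLAN 2, and proxy 2 is the mirror image.
   roles = {
       1: {"master": "SARP proxy 1", "backup": "SARP proxy 2"},
       2: {"master": "SARP proxy 2", "backup": "SARP proxy 1"},
   }

   def on_master_failure(vlan, roles, master_mac, send_local_garp,
                         notify_remote_proxies):
       """Failure handling is per VLAN: the Backup becomes Master, may
       keep using the failed Master's MAC, announces it locally with a
       Gratuitous ARP, and informs the remote SARP proxies."""
       entry = roles[vlan]
       entry["master"], entry["backup"] = entry["backup"], entry["master"]
       send_local_garp(vlan, master_mac)        # refresh local switch FDBs
       notify_remote_proxies(vlan, entry["master"])

   # Usage with placeholder callbacks:
   on_master_failure(1, roles, master_mac="MAC of SARP proxy 1",
                     send_local_garp=lambda vlan, mac: None,
                     notify_remote_proxies=lambda vlan, proxy: None)
   assert roles[1]["master"] == "SARP proxy 2"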
<span class="h3"><a class="selflink" id="section-3.7" href="#section-3.7">3.7</a>. SARP Interaction with Overlay Networks</span>
SARP can be used over overlay networks, providing L2 network
virtualization (such as IP, Virtual Private LAN Service (VPLS),
Transparent Interconnection of Lots of Links (TRILL), Overlay
Transport Virtualization (OTV), Network Virtualization using GRE
(NVGRE), and Virtual eXtensible Local Area Network (VXLAN)). The
mapping of SARP to overlay networks is straightforward; the VM does
the mapping of the destination IP to the SARP proxy MAC address. The
mapping of the proxy MAC to its correct tunnel is done by the overlay
networks.
SARP significantly reduces the complexity of the overlay and
transport networks by shrinking their mapping tables to one entry per
SARP proxy.
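The two-level mapping can be illustrated with a short sketch (again
non-normative; the table and tunnel names are invented):
   # Resolution done via SARP (Section 3.1): destination IP -> proxy MAC.
   sarp_resolution = {"IP-D": "MAC-E"}

   # Mapping done by the overlay edge: proxy MAC -> tunnel.  The table
   # needs one entry per SARP proxy, not one per VM.
   overlay_tunnels = {"MAC-E": "tunnel to east site",
                      "MAC-W": "tunnel to west site"}

   def pick_tunnel(dst_ip):
       proxy_mac = sarp_resolution[dst_ip]      # done by the sending VM
       return overlay_tunnels[proxy_mac]        # done by the overlay network

   assert pick_tunnel("IP-D") == "tunnel to east site"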
<span class="h2"><a class="selflink" id="section-4" href="#section-4">4</a>. Security Considerations</span>
SARP proxies are located at the boundaries of access networks, where
the local Layer 2 infrastructure connects to its Layer 2 cloud. SARP
proxies interoperate with overlay network protocols that extend the
Layer 2 subnet across data centers or between different systems
within a data center.
SARP does not expose the network to security threats beyond those
that exist whether or not SARP is present.
SARP proxies may be exposed to denial-of-service (DoS) attacks by
means of ARP/ND message flooding. Thus, SARP proxies must have
sufficient resources to support the SARP control plane without making
the network more vulnerable to DoS than it was without SARP proxies.
<span class="grey">Nachum, et al. Experimental [Page 18]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-19" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
SARP adds security to the data plane in terms of network
reconnaissance, by hiding all the local Layer 2 MAC addresses from
potential attackers located at the interconnecting network and
significantly limiting the number of addresses exposed to an attacker
at a remote site.
<span class="h2"><a class="selflink" id="section-5" href="#section-5">5</a>. References</span>
<span class="h3"><a class="selflink" id="section-5.1" href="#section-5.1">5.1</a>. Normative References</span>
[<a id="ref-ARP">ARP</a>] Plummer, D., "Ethernet Address Resolution Protocol: Or
Converting Network Protocol Addresses to 48.bit Ethernet
Address for Transmission on Ethernet Hardware", STD 37,
<a href="./rfc826">RFC 826</a>, DOI 10.17487/RFC0826, November 1982,
<<a href="http://www.rfc-editor.org/info/rfc826">http://www.rfc-editor.org/info/rfc826</a>>.
[<a id="ref-ND">ND</a>] Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
"Neighbor Discovery for IP version 6 (IPv6)", <a href="./rfc4861">RFC 4861</a>,
DOI 10.17487/RFC4861, September 2007,
<<a href="http://www.rfc-editor.org/info/rfc4861">http://www.rfc-editor.org/info/rfc4861</a>>.
[<a id="ref-ProxyARP">ProxyARP</a>] Carl-Mitchell, S. and J. Quarterman, "Using ARP to
implement transparent subnet gateways", <a href="./rfc1027">RFC 1027</a>,
DOI 10.17487/RFC1027, October 1987,
<<a href="http://www.rfc-editor.org/info/rfc1027">http://www.rfc-editor.org/info/rfc1027</a>>.
[<a id="ref-RFC925">RFC925</a>] Postel, J., "Multi-LAN address resolution", <a href="./rfc925">RFC 925</a>,
DOI 10.17487/RFC0925, October 1984,
<<a href="http://www.rfc-editor.org/info/rfc925">http://www.rfc-editor.org/info/rfc925</a>>.
[<a id="ref-RFC4389">RFC4389</a>] Thaler, D., Talwar, M., and C. Patel, "Neighbor Discovery
Proxies (ND Proxy)", <a href="./rfc4389">RFC 4389</a>, DOI 10.17487/RFC4389,
April 2006, <<a href="http://www.rfc-editor.org/info/rfc4389">http://www.rfc-editor.org/info/rfc4389</a>>.
[<a id="ref-RFC4541">RFC4541</a>] Christensen, M., Kimball, K., and F. Solensky,
"Considerations for Internet Group Management Protocol
(IGMP) and Multicast Listener Discovery (MLD) Snooping
Switches", <a href="./rfc4541">RFC 4541</a>, DOI 10.17487/RFC4541, May 2006,
<<a href="http://www.rfc-editor.org/info/rfc4541">http://www.rfc-editor.org/info/rfc4541</a>>.
[<a id="ref-RFC4664">RFC4664</a>] Andersson, L., Ed., and E. Rosen, Ed., "Framework for
Layer 2 Virtual Private Networks (L2VPNs)", <a href="./rfc4664">RFC 4664</a>,
DOI 10.17487/RFC4664, September 2006,
<<a href="http://www.rfc-editor.org/info/rfc4664">http://www.rfc-editor.org/info/rfc4664</a>>.
<span class="grey">Nachum, et al. Experimental [Page 19]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-20" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
[<a id="ref-RFC6575">RFC6575</a>] Shah, H., Ed., Rosen, E., Ed., Heron, G., Ed., and V.
Kompella, Ed., "Address Resolution Protocol (ARP)
Mediation for IP Interworking of Layer 2 VPNs", <a href="./rfc6575">RFC 6575</a>,
DOI 10.17487/RFC6575, June 2012,
<<a href="http://www.rfc-editor.org/info/rfc6575">http://www.rfc-editor.org/info/rfc6575</a>>.
<span class="h3"><a class="selflink" id="section-5.2" href="#section-5.2">5.2</a>. Informative References</span>
[<a id="ref-802.1Q">802.1Q</a>] IEEE, "IEEE Standard for Local and metropolitan area
networks -- Bridges and Bridged Networks", IEEE Std
802.1Q.
[<a id="ref-ARMDStats">ARMDStats</a>] Karir, M., and J. Rees, <a style="text-decoration: none" href='https://www.google.com/search?sitesearch=datatracker.ietf.org%2Fdoc%2Fhtml%2F&q=inurl:draft-+%22Address+Resolution+Statistics%22'>"Address Resolution Statistics"</a>,
Work in Progress, <a href="./draft-karir-armd-statistics-01">draft-karir-armd-statistics-01</a>, July
2011.
[<a id="ref-MultLinkSub">MultLinkSub</a>]
Thaler, D., and C. Huitema, "Multi-link Subnet Support in
IPv6", Work in Progress,
<a href="./draft-ietf-ipv6-multi-link-subnets-00">draft-ietf-ipv6-multi-link-subnets-00</a>, June 2002.
[<a id="ref-RFC4903">RFC4903</a>] Thaler, D., "Multi-Link Subnet Issues", <a href="./rfc4903">RFC 4903</a>,
DOI 10.17487/RFC4903, June 2007,
<<a href="http://www.rfc-editor.org/info/rfc4903">http://www.rfc-editor.org/info/rfc4903</a>>.
[<a id="ref-RFC6820">RFC6820</a>] Narten, T., Karir, M., and I. Foo, "Address Resolution
Problems in Large Data Center Networks", <a href="./rfc6820">RFC 6820</a>,
DOI 10.17487/RFC6820, January 2013,
<<a href="http://www.rfc-editor.org/info/rfc6820">http://www.rfc-editor.org/info/rfc6820</a>>.
[<a id="ref-RFC7342">RFC7342</a>] Dunbar, L., Kumari, W., and I. Gashinsky, "Practices for
Scaling ARP and Neighbor Discovery (ND) in Large Data
Centers", <a href="./rfc7342">RFC 7342</a>, DOI 10.17487/RFC7342, August 2014,
<<a href="http://www.rfc-editor.org/info/rfc7342">http://www.rfc-editor.org/info/rfc7342</a>>.
[<a id="ref-RFC7364">RFC7364</a>] Narten, T., Ed., Gray, E., Ed., Black, D., Fang, L.,
Kreeger, L., and M. Napierala, "Problem Statement:
Overlays for Network Virtualization", <a href="./rfc7364">RFC 7364</a>,
DOI 10.17487/RFC7364, October 2014,
<<a href="http://www.rfc-editor.org/info/rfc7364">http://www.rfc-editor.org/info/rfc7364</a>>.
[<a id="ref-TcpIp">TcpIp</a>] Stevens, W., "TCP/IP Illustrated, Volume 1: The
Protocols", Addison-Wesley, 1994.
<span class="grey">Nachum, et al. Experimental [Page 20]</span></pre>
<hr class='noprint'/><!--NewPage--><pre class='newpage'><span id="page-21" ></span>
<span class="grey"><a href="./rfc7586">RFC 7586</a> SARP June 2015</span>
Acknowledgments
The authors thank Ted Lemon, Eric Gray, and Adrian Farrel for
providing valuable comments and suggestions for the document.
Authors' Addresses
Youval Nachum
EMail: youval.nachum@gmail.com
Linda Dunbar
Huawei Technologies
5430 Legacy Drive, Suite #175
Plano, TX 75024
United States
Phone: (469) 277 5840
EMail: ldunbar@huawei.com
Ilan Yerushalmi
Marvell
6 Hamada St.
Yokneam, 20692
Israel
EMail: yilan@marvell.com
Tal Mizrahi
Marvell
6 Hamada St.
Yokneam, 20692
Israel
EMail: talmi@marvell.com
Nachum, et al. Experimental [Page 21]
</pre>