Data Center Architecture Design

This document presented several spine-and-leaf architecture designs from Cisco, including the most important technology components and design considerations for each architecture at the time of writing. Each section outlines the most important technology components (encapsulation; end-host detection and distribution; broadcast, unknown unicast, and multicast traffic forwarding; underlay and overlay control plane; multitenancy support; etc.), common designs, and design considerations (Layer 3 gateway, etc.). For feature support and more information about VXLAN MP-BGP EVPN, please refer to the configuration guides, release notes, and reference documents listed at the end of this document.

We will review codes, design standards, and operational standards. If you have multiple facilities across the US, then the US standards may apply. Data center architects are responsible for adequately securing the data center and should examine factors such as facility design and architecture. Data-centered architecture serves as a blueprint for designing and deploying a data center facility. Facility ratings are based on Availability Classes, from 1 to 4. In 2013, the Uptime Institute (UI) requested that TIA stop using the Tier system to describe reliability levels, and TIA switched to using the word "Rated" in lieu of "Tiers," defined as Rated 1 through 4.

Multicast group scaling needs to be designed carefully. For Layer 2 multicast traffic, traffic entering the FabricPath switch is hashed to a multidestination tree for forwarding. For Layer 3 IP multicast traffic, traffic needs to be forwarded by Layer 3 multicast using Protocol-Independent Multicast (PIM). The FabricPath network supports up to four anycast gateways for internal VLAN routing. You also need to design multicast group scaling carefully in VXLAN networks, as described earlier in the section discussing Cisco VXLAN flood-and-learn multicast traffic. Cisco VXLAN flood-and-learn technology complies with the IETF VXLAN standard (RFC 7348), which defines a multicast-based flood-and-learn VXLAN without a control plane.

Regarding routing design, the Cisco MSDC control plane uses dynamic Layer 3 protocols such as eBGP to build the routing table that most efficiently routes a packet from a source to a spine node. Cisco DCNM can be installed in four modes, including:
●      Classic LAN mode: manages Cisco Nexus data center infrastructure deployed in legacy designs, such as vPC and FabricPath designs.

In the VXLAN MP-BGP EVPN design, each VTEP performs local learning to obtain MAC address information (through traditional MAC address learning) and IP address information (based on Address Resolution Protocol [ARP] snooping) from its locally attached hosts. The border leaf switch learns external routes and advertises them to the EVPN domain as EVPN routes so that other VTEP leaf nodes can also learn about the external routes for sending outbound traffic; it also performs internal inter-VXLAN routing and external routing. Internal and external routed traffic needs to travel only one underlay hop from the leaf VTEP to the spine switch to be routed. The design supports both Layer 2 and Layer 3 multitenancy and complies with RFC 7348 and RFC 8365 (previously draft-ietf-bess-evpn-overlay). With the anycast gateway function in EVPN, end hosts in a VNI can always use their local VTEP for that VNI as their default gateway to send traffic out of their IP subnet. This provides optimal forwarding for east-west and north-south traffic and supports workload mobility with the distributed anycast gateway function on each ToR switch.
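As a concrete illustration, here is a minimal NX-OS-style sketch of the distributed anycast gateway; the virtual MAC address, VRF name, VLAN, and subnet are hypothetical values, and the same configuration would be repeated on every leaf VTEP:

```
! Same anycast gateway MAC on every leaf VTEP in the fabric
feature fabric forwarding
fabric forwarding anycast-gateway-mac 0000.2222.3333

! Tenant SVI: each VTEP answers ARP locally as the default gateway
interface Vlan100
  no shutdown
  vrf member Tenant-A
  ip address 10.1.100.1/24
  fabric forwarding mode anycast-gateway
```

Because every leaf advertises the same gateway IP and MAC, a virtual machine can move between racks without changing its default gateway.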
Code minimum fire suppression would involve having wet pipe sprinklers in your data center. That is definitely not best practice. Not all facilities supporting your specific industry will meet your defined mission, so your facility may not look or operate like another, even in the same industry. Data center architecture is usually created in the data center design and construction phase. Will has experience with large US hyperscale clients, serving as project architect for three years on a hyperscale project in Holland, and with some of the largest engineering firms.

The FabricPath spine-and-leaf network is proprietary to Cisco but is based on the TRILL standard. It provides a simple, flexible, and stable network, with good scalability and fast convergence characteristics, and it can use multiple parallel paths at Layer 2. But the FabricPath network is a flood-and-learn-based Layer 2 technology. FabricPath is a Layer 2 network fabric technology that allows you to easily scale network capacity simply by adding more spine and leaf nodes at Layer 2. Broadcast and unknown unicast traffic in FabricPath is flooded to all FabricPath edge ports in the VLAN or broadcast domain, and the path is randomly chosen so that the traffic load is evenly distributed among the top-tier switches. Layer 3 IP multicast traffic is forwarded by Layer 3 PIM-based multicast routing.

In the VXLAN MP-BGP EVPN design, hosts attached to remote VTEPs are learned remotely through the MP-BGP control plane. This approach reduces network flooding for end-host learning and provides better control over end-host reachability information distribution. It provides control-plane and data-plane separation and a unified control plane for both Layer 2 and Layer 3 forwarding in a VXLAN overlay network. The spine switch is just part of the underlay Layer 3 IP network that transports the VXLAN-encapsulated packets; it only needs to run the BGP-EVPN control plane and IP routing and doesn't need to support the VXLAN VTEP function. For more details regarding MSDC designs with Cisco Nexus 9000 and 3000 switches, please refer to "Cisco's Massively Scalable Data Center Network Fabric White Paper." Please review the comparison table and each section of this document carefully, and read the reference documents to obtain additional information to help you choose the technology that best fits your data center environment.

The VXLAN flood-and-learn spine-and-leaf network supports up to two active-active gateways with vPC for internal VXLAN routing; note that the maximum number of inter-VXLAN active-active gateways is two, with a Hot Standby Router Protocol (HSRP) and vPC configuration. Verify that each end system resolves the virtual gateway MAC address for a subnet using the gateway IRB address on the central gateways (spine devices). Each VTEP device is independently configured with this multicast group and participates in PIM routing. You can also have multiple VXLAN segments share a single IP multicast group in the core network; however, the overloading of multicast groups leads to suboptimal multicast forwarding.
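A minimal NX-OS-style sketch of that two-gateway arrangement follows; the vPC domain, peer-keepalive addresses, VLAN, and HSRP group are hypothetical, and the mirror-image configuration would sit on the vPC peer:

```
feature vpc
feature hsrp

! vPC domain pairing the two gateway switches
vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1

! Gateway SVI: HSRP over vPC forwards actively on both peers
interface Vlan100
  no shutdown
  ip address 10.1.100.2/24      ! vPC peer uses 10.1.100.3/24
  hsrp 100
    ip 10.1.100.1               ! shared virtual gateway address
```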
Data center design is a relatively new field that houses a dynamic and evolving technology. Enterprise and high-performance computing users recognize the value of critical facilities: connecting to a brand is as important as connecting to the campus. A data center is probably going to be the most expensive facility your company ever builds or operates, and a good data center design should plan to automate as many of the operational functions that employees perform as possible. The architect must demonstrate the capacity to develop a robust server and storage architecture. There are also many operational standards to choose from. The Tiers are compared in the table below and can be found in greater definition in UI's white paper TUI3026E. Its architecture is based around the idea of a simple volumetric block enveloped by opaque, transparent, and translucent surfaces.

Moreover, scalability is another major issue in the three-tier DCN. With vPC technology, Spanning Tree Protocol is still used as a fail-safe mechanism. However, vPC can provide only two active parallel uplinks, and so bandwidth becomes a bottleneck in a three-tier data center architecture.

The leaf layer consists of access switches that connect to devices such as servers. The spine-and-leaf architecture has been proven to deliver high-bandwidth, low-latency, nonblocking server-to-server connectivity. Common Layer 3 designs use centralized routing: that is, the Layer 3 routing function is centralized on specific switches (spine switches or border leaf switches). The SVIs on the border leaf switches perform inter-VLAN routing for east-west internal traffic and exchange routing adjacency with Layer 3 routed uplinks to route north-south external traffic; routed traffic needs to traverse two hops, leaf to spine and then to the default gateway on the border leaf, to be routed. If the spine-and-leaf network has more than four spine switches, the Layer 2 and Layer 3 boundary needs to be distributed across the spine switches. Figure 20 shows an example of a Layer 3 MSDC spine-and-leaf network with an eBGP control plane (AS = autonomous system).

FabricPath links (switchport mode: fabricpath) carry VN-segment-tagged frames for VLANs that have VXLAN network identifiers (VNIs) defined; these are the VN-segment core ports. The VN-segment feature uses an increased, 24-bit name space.

These encapsulation formats include Virtual Extensible LAN (VXLAN), Network Virtualization Using Generic Routing Encapsulation (NVGRE), Transparent Interconnection of Lots of Links (TRILL), and Location/Identifier Separation Protocol (LISP). VXLAN uses a 24-bit segment ID, or VNID, which enables up to 16 million VXLAN segments to coexist in the same administrative domain. The spine switch runs MP-BGP EVPN on the inside with the other VTEPs in the VXLAN fabric and exchanges EVPN routes with them. Note that the ingress replication feature is supported only on Cisco Nexus 9000 Series Switches. These VTEPs are Layer 2 VXLAN gateways for VXLAN-to-VLAN or VLAN-to-VXLAN bridging, and each VTEP device is independently configured with this multicast group and participates in PIM routing. To support multitenancy, the same VLAN can be reused on different VTEP switches, and IEEE 802.1Q tagged frames received on VTEPs are mapped to specific VNIs. This scoping allows potential overlap in MAC and IP addresses between tenants.
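A minimal NX-OS-style sketch of that VLAN-to-VNI mapping; the VLAN and VNI numbers are hypothetical, and because VLAN IDs are only locally significant, another VTEP can reuse VLAN 100 for a different tenant by mapping it to a different VNI:

```
feature vn-segment-vlan-based

! Locally significant VLAN mapped to a globally significant 24-bit VNI
vlan 100
  vn-segment 30100
```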
Servers are virtualized into sets of virtual machines that can move freely from server to server without the need to change their operating parameters. This revolutionary technology created a need for a larger Layer 2 domain, from the access layer to the core layer, as shown in Figure 3. Modern virtualized data center fabrics must meet certain requirements to accelerate application deployment and support DevOps needs.

Network overlays are virtual networks of interconnected nodes that share an underlying physical network, allowing deployment of applications that require specific network topologies without the need to modify the underlying network (Figure 5). The Cisco VXLAN MP-BGP EVPN spine-and-leaf architecture is one of the latest innovations from Cisco. In MP-BGP EVPN, multiple tenants can coexist and share a common IP transport network while having their own separate VPNs in the VXLAN overlay network (Figure 19). This design complies with the IETF RFC 7348 and draft-ietf-bess-evpn-overlay standards.

In a typical VXLAN flood-and-learn spine-and-leaf network design, the leaf top-of-rack (ToR) switches are enabled as VTEP devices to extend the Layer 2 segments between racks. The VXLAN VTEP uses a list of IP addresses of the other VTEPs in the network to send broadcast and unknown unicast traffic. For feature support and more information about Cisco VXLAN flood-and-learn technology, please refer to the configuration guides, release notes, and reference documents listed at the end of this document.

Figure 4 shows a typical two-tiered spine-and-leaf topology. Depending on the number of servers that need to be supported, there are different flavors of MSDC designs: a two-tiered spine-leaf topology, a three-tiered spine-leaf topology, and a hyperscale fabric plane Clos design. As shown in the design for internal and external routing at the border spine in Figure 6, the spine switch functions as the Layer 2 and Layer 3 boundary and server subnet gateway; to act as a border spine switch for external routing, the spine switch needs to support VXLAN routing in hardware. Figure 18 shows a typical design with a pair of spine switches connected to the outside routing devices.

Best practices mean different things to different people and organizations. We are continuously innovating the design and systems of our data centers to protect them from man-made and natural risks. Government regulations for data centers will depend on the nature of the business and can include HIPAA (Health Insurance Portability and Accountability Act), SOX (Sarbanes-Oxley) 2002, SAS 70 Type I or II, and GLBA (Gramm-Leach-Bliley Act), as well as new regulations that may be implemented depending on the nature of your business and the present security situation.

The layered methodology is the elementary foundation of the data center design that improves scalability, flexibility, performance, maintenance, and resiliency. To learn end-host reachability information, FabricPath switches rely on initial data-plane traffic flooding, so the placement of the Layer 3 function in a FabricPath network needs to be carefully designed. The FabricPath spine-and-leaf network supports Layer 2 multitenancy with the virtual network (VN) segment feature (Figure 8).
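A minimal NX-OS-style sketch of the FabricPath building blocks referenced above; the switch ID, interface, and VLAN are hypothetical:

```
! Enable FabricPath and assign a fabric-wide switch ID
install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11

! Core-facing port carries FabricPath (MAC-in-MAC) encapsulated frames
interface Ethernet1/1
  switchport mode fabricpath

! VLANs extended across the fabric must run in fabricpath mode
vlan 100
  mode fabricpath
```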
The choice of standards should be driven by the organization's business mission. There is no single way to build a data center. Operational standards guide your day-to-day processes and procedures once the data center is built; these standards will also vary based on the nature of the business and include guidelines associated with detailed operations and maintenance procedures for all of the equipment in the data center. TIA has a certification system in place with dedicated vendors that can be retained to provide facility certification. Software management tools such as DCIM (Data Center Infrastructure Management), CMMS (Computerized Maintenance Management System), EPMS (Electrical Power Monitoring System), and DMS (Document Management System) for operations and maintenance can provide a "single pane of glass" to view all required procedures, infrastructure assets, maintenance activities, and operational issues.

In the traditional three-tier design, VLANs are extended within each pod so that servers can move freely within the pod without the need to change IP addresses and default gateway configurations. In a spine-and-leaf fabric, an additional spine switch can be added, and uplinks can be extended to every leaf switch, resulting in the addition of interlayer bandwidth and reduction of the oversubscription. The leaf layer is responsible for advertising server subnets in the network fabric, and the routing protocol can be regular eBGP or any Interior Gateway Protocol (IGP) of choice. Features exist, such as the FabricPath Multitopology feature, to help limit traffic flooding in a subsection of the FabricPath network.

NIA constantly scans the customer's network and provides proactive advice with a focus on maintaining availability and alerting customers about potential issues that can impact uptime, and it provides rich-insights telemetry information and other advanced analytics information. The data center architecture specifies where and how the server, storage networking, racks, and other data center resources will be physically placed.

The VXLAN MP-BGP EVPN spine-and-leaf architecture uses MP-BGP EVPN for the control plane. The spine switch is part of the underlay Layer 3 IP network and transports the VXLAN-encapsulated packets. VTEP IP addresses are exchanged between VTEPs through the static ingress replication configuration (Figure 10). The control plane learns end-host Layer 2 and Layer 3 reachability information (MAC and IP addresses) and distributes this information through the EVPN address family, thus providing integrated bridging and routing in VXLAN overlay networks; this capability enables optimal forwarding for northbound traffic from end hosts in the VXLAN overlay network. Similarly, Layer 3 segmentation among VXLAN tenants is achieved by applying Layer 3 VRF technology and enforcing routing isolation among tenants by using a separate Layer 3 VNI mapped to each VRF instance.
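A minimal NX-OS-style sketch of that per-tenant Layer 3 isolation; the VRF name and Layer 3 VNI are hypothetical:

```
! One VRF per tenant, bound to a dedicated Layer 3 VNI
vrf context Tenant-A
  vni 50001
  rd auto
  address-family ipv4 unicast
    route-target both auto evpn

! Associate the Layer 3 VNI with the VRF on the VTEP's NVE interface
interface nve1
  member vni 50001 associate-vrf
```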
For a FabricPath network, the FabricPath IS-IS control plane by default creates two multidestination trees that carry broadcast, unknown unicast, and multicast traffic through the FabricPath network. After traffic is routed to the destination VLAN, it is forwarded using the multidestination tree in the destination VLAN. IP multicast traffic is by default constrained to only those FabricPath edge ports that have either an interested multicast receiver or a multicast router attached, using Internet Group Management Protocol (IGMP) snooping. Spine switches perform intra-VLAN FabricPath frame switching. The fabric is simple, flexible, and stable; it has good scalability and fast convergence characteristics; and it supports multiple parallel paths at Layer 2. Up to four FabricPath anycast gateways can be enabled in the design with routing at the border leaf.

With overlays used at the fabric edge, the spine and core devices are freed from the need to add end-host information to their forwarding tables. Interest in overlay networks has also increased with the introduction of new encapsulation frame formats specifically built for the data center. Table 5 compares the four Cisco spine-and-leaf architectures discussed in this document: FabricPath, VXLAN flood-and-learn, VXLAN MP-BGP EVPN, and MSDC Layer 3 networks.

Today, most web-based applications are built as multi-tier applications. If deviations from a standard are necessary because of site limitations, financial limitations, or availability limitations, they should be documented and accepted by all stakeholders of the facility. In a data-centered architecture, interactions or communication between the data accessors occur only through the data store, which represents the current state.

Because the fabric network is so large, MSDC customers typically use software-based approaches to introduce more automation and more modularity into the network. Two major design options are available: internal and external routing at the border spine, and internal and external routing at the border leaf. Also, the border leaf Layer 3 VXLAN gateway learns the host MAC address, so you need to consider the MAC address scale to avoid exceeding the scalability limits of your hardware. IP subnets of the VNIs for a given tenant are in the same Layer 3 VRF instance, which separates the Layer 3 routing domain from the other tenants.

The VXLAN flood-and-learn network is a Layer 2 overlay network, and Layer 3 SVIs are laid on top of the Layer 2 overlay network. Each VXLAN segment has a VXLAN network identifier (VNID), and the VNID is mapped to an IP multicast group in the transport IP network. Note that ingress replication is supported only on Cisco Nexus 9000 Series Switches.
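A minimal NX-OS-style sketch of the VNID-to-multicast-group mapping on a flood-and-learn VTEP; the VNIs and group addresses are hypothetical:

```
feature nv overlay

! VTEP interface: BUM traffic for each VNI floods via its multicast group
interface nve1
  no shutdown
  source-interface loopback0
  member vni 30100
    mcast-group 239.1.1.100
  member vni 30200
    mcast-group 239.1.1.200
```

Several VNIs can share one group to conserve multicast state in the core, at the cost of the suboptimal flooding described above.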
Operational and management standards worth evaluating include:
●      EN 50600-2-4 Telecommunications cabling infrastructure
●      EN 50600-2-6 Management and operational information systems
●      Uptime Institute: Operational Sustainability (with and without Tier certification)
●      ISO 14000 - Environmental Management System
●      PCI – Payment Card Industry Security Standard
●      SOC, SAS 70 & ISAE 3402 or SSAE 16, FFIEC (USA) - Assurance Controls
●      AMS-IX – Amsterdam Internet Exchange - Data Centre Business Continuity Standard


This architecture is the physical and logical layout of the resources and equipment within a data center facility. Data center design is the process of modeling and designing (Jochim 2017) a data center's IT resources, architectural layout, and entire infrastructure. This course encompasses the basic principles of data center design, tracking its history from the early days of the mainframe to the modern enterprise data center in its many forms and the future.

Ratings and reliability are defined by Class 0 to 4 and certified by BICSI-trained and certified professionals. An international series of data center standards in continuous development is the EN 50600 series. Following appropriate codes and standards would seem to be an obvious direction when designing a new data center or upgrading an existing one. Mr. Shapiro has extensive experience in the design and management of corporate and mission-critical facilities projects, with over 4 million square feet of raised-floor experience, over 175 MW of UPS experience, and over 350 MW of generator experience. Fidelity is opening a new data center in Nebraska this fall.

VXLAN, one of many available network virtualization overlay technologies, offers several advantages. Benefits of a network virtualization overlay include the following:
●      Optimized device functions: Overlay networks allow the separation (and specialization) of device functions based on where a device is being used in the network.
●      Scalability and flexibility: Overlay technologies allow the network to scale by focusing scaling on the network overlay edge devices.

Cisco began supporting VXLAN flood-and-learn spine-and-leaf technology in about 2014 on multiple Cisco Nexus switches, such as the Cisco Nexus 5600 platform and the Cisco Nexus 7000 and 9000 Series. As shown in the design for internal and external routing on the spine layer in Figure 12, the leaf ToR VTEP switch is a Layer 2 VXLAN gateway that transports the Layer 2 segment over the underlay Layer 3 IP network. MSDC-scale environments have a unique set of network requirements, with an emphasis on application performance, network simplicity and stability, visibility, easy troubleshooting, easy life-cycle management, etc. In most cases, the spine switch is not used to directly connect to the outside world or to other MSDC networks, but it will forward such traffic to specialized leaf switches acting as border leaf switches. Each host is associated with a host subnet and talks with other hosts through Layer 3 routing. The Layer 2 and Layer 3 function is enabled on some FabricPath leaf switches called border leaf switches. Intel RSD defines key aspects of a logical architecture to implement composable disaggregated infrastructure (CDI).

The VXLAN MP-BGP EVPN spine-and-leaf network needs to provide Layer 3 internal VXLAN routing and maintain connectivity with the networks that are external to the VXLAN fabric, including the campus network, WAN, and Internet. With the border leaf design, tenant traffic needs to take two underlay hops (VTEP to spine to border leaf) to reach the external network. This technology provides control-plane and data-plane separation and a unified control plane for both Layer 2 and Layer 3 forwarding in a VXLAN overlay network. It enables control-plane learning of end-host Layer 2 and Layer 3 reachability information, enabling organizations to build more robust and scalable VXLAN overlay networks, and it reduces network flooding through protocol-based host MAC and IP address route distribution and ARP suppression on the local VTEPs. In the VXLAN flood-and-learn design, by contrast, end-host information in the overlay network is learned through the flood-and-learn mechanism with conversational learning. In the border spine design, however, the spine switch needs to run the BGP-EVPN control plane and IP routing and the VXLAN VTEP function.
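A minimal NX-OS-style sketch of the MP-BGP EVPN peering that carries this control-plane state; the AS number and addresses are hypothetical, with the spine acting as a route reflector:

```
feature bgp
nv overlay evpn

router bgp 65000
  router-id 10.0.0.1
  neighbor 10.0.0.101            ! spine route reflector
    remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
```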
Spanning Tree Protocol provides several benefits: it is simple, and it is a plug-and-play technology requiring little configuration. Hyperscale users and increased demand have turned data into the new utility, making quicker, leaner facilities a must. Green certifications, such as LEED, Green Globes, and Energy Star, are also considered optional. Best practices ensure that you are doing everything possible to keep the facility operating as intended. Regardless of the standard followed, documentation and record keeping of your operation and maintenance activities is one of the most important parts of the process. The data center is at the foundation of modern software technology, serving a critical role in expanding capabilities for enterprises. AWS pioneered cloud computing in 2006, creating cloud infrastructure that allows you to securely build and innovate faster.

Common Layer 3 designs provide centralized routing: that is, the Layer 3 routing function is centralized on specific switches (spine switches or border leaf switches). The common designs used are internal and external routing on the spine layer, and internal and external routing on the leaf layer. Every leaf switch connects to every spine switch in the fabric. The FabricPath network is a Layer 2 network, and Layer 3 SVIs are laid on top of the Layer 2 FabricPath switch. Customer edge links (access and trunk) carry traditional VLAN-tagged and untagged frames; these are the VN-segment edge ports.

In the border leaf design, the border leaf runs the MP-BGP EVPN control plane on the inside of the fabric; at the same time, it runs normal IPv4 or IPv6 unicast routing in the tenant VRF instances with the external routing device on the outside. The spine switch only needs to run the BGP-EVPN control plane and IP routing; it doesn't learn the overlay host MAC addresses. The VXLAN MP-BGP EVPN spine-and-leaf architecture uses Layer 3 IP for the underlay network.

Cisco VXLAN flood-and-learn network characteristics: the underlay can run any unicast routing protocol (static routes, Open Shortest Path First [OSPF], IS-IS, External BGP [eBGP], etc.), and ingress replication is supported only on Cisco Nexus 9000 Series Switches. The VXLAN flood-and-learn spine-and-leaf network complies with the IETF VXLAN standard (RFC 7348). The overlay network uses flood-and-learn semantics (Figure 11). The original Layer 2 frame is encapsulated in a VXLAN header and then placed in a UDP-IP packet and transported across the IP network. After MAC-to-VTEP mapping is complete, the VTEPs forward VXLAN traffic in a unicast stream. When traffic needs to be routed between VXLAN segments, or from a VXLAN segment to a VLAN segment and vice versa, the Layer 3 VXLAN gateway function needs to be enabled on some VTEPs.
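A minimal NX-OS-style sketch of static ingress replication as the unicast alternative to multicast flooding; the VNI and peer VTEP addresses are hypothetical:

```
! Flood BUM traffic as head-end unicast replication to a static VTEP list
interface nve1
  source-interface loopback0
  member vni 30100
    ingress-replication protocol static
      peer-ip 10.2.2.2
      peer-ip 10.3.3.3
```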
This document reviews several spine-and-leaf architecture designs that Cisco has offered in the recent past, as well as current designs and those Cisco expects to offer in the near future, to address fabric requirements in the modern virtualized data center:
●      Cisco FabricPath spine-and-leaf network
●      Cisco VXLAN flood-and-learn spine-and-leaf network
●      Cisco VXLAN Multiprotocol Border Gateway Protocol (MP-BGP) Ethernet Virtual Private Network (EVPN) spine-and-leaf network
●      Cisco Massively Scalable Data Center (MSDC) Layer 3 spine-and-leaf network

Data Centre World Singapore speaker and mission critical architect Will Ringer attests to the importance of an architect's eye to data centre design. Data center architects must also play an active role in manageability and operations of the data center.

The FabricPath spine-and-leaf network also supports Layer 3 multitenancy using Virtual Routing and Forwarding lite (VRF-lite), as shown in Figure 9. MP-BGP EVPN provides mechanisms for building active-active multihoming at Layer 2. Servers may talk with other servers in different subnets or talk with clients in remote branch offices over the WAN or Internet. In the border spine design, the spine switch needs to support the VXLAN routing VTEP function in hardware, and the underlay can run any unicast routing protocol (static routes, OSPF, IS-IS, eBGP, etc.).
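A minimal NX-OS-style sketch using OSPF as that underlay routing protocol; the process tag, addresses, and interface are hypothetical:

```
feature ospf

! Underlay reachability between VTEP loopbacks
router ospf UNDERLAY
  router-id 10.0.0.1

interface loopback0
  ip address 10.0.0.1/32
  ip router ospf UNDERLAY area 0.0.0.0

interface Ethernet1/49
  no switchport
  ip address 192.168.1.1/30
  ip router ospf UNDERLAY area 0.0.0.0
```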
Data centers often have multiple fiber connections to the Internet provided by multiple carriers. TIA uses tables within the standard to easily identify the ratings for telecommunications, architectural, electrical, and mechanical systems. Your facility must meet the business mission. Data Center Design, Inc. provides customers with projects ranging from new data center design and construction to data center renovation and expansion with follow-up service. It has modules on all the major subsystems of a mission-critical facility and their interdependencies, including power, cooling, compute, and network. The top data center architecture firms by 2015 revenue were: 1. Gensler ($34,240,000); 2. Corgan ($32,400,000); 3. HDR ($15,740,000); 4. Page ($14,100,000); 5. CallisonRTKL ($6,102,000); 6. RS&H ($5,400,000); …

The traditional data center uses a three-tier architecture, with servers segmented into pods based on location, as shown in Figure 1. vPC technology works well in a relatively small data center environment in which most traffic consists of northbound and southbound communication between clients and servers.

The FabricPath spine-and-leaf network uses Layer 2 FabricPath MAC-in-MAC frame encapsulation, and it uses FabricPath IS-IS for the control plane in the underlay network. The Layer 3 routing function is laid on top of the Layer 2 network. To overcome the limitations of flood-and-learn VXLAN, the Cisco VXLAN MP-BGP EVPN spine-and-leaf architecture uses Multiprotocol Border Gateway Protocol Ethernet Virtual Private Network, or MP-BGP EVPN, as the control plane for VXLAN. The VXLAN MP-BGP EVPN spine-and-leaf architecture offers the following main benefits:
●      The MP-BGP EVPN protocol is based on industry standards, allowing multivendor interoperability.
In the Layer 2 and Layer 3 fabric comparison, BUM traffic in this design is forwarded by underlay PIM or by ingress replication (ingress replication is supported only on Cisco Nexus 9000 Series Switches).

Massively scalable data centers (MSDCs) are large data centers, with thousands of physical servers (sometimes hundreds of thousands), that have been designed to scale in size and computing capacity with little impact on the existing infrastructure.
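A minimal NX-OS-style sketch of an MSDC-style eBGP leaf configuration, with one private AS per leaf and ECMP across spines; all AS numbers and addresses are hypothetical:

```
feature bgp

router bgp 65101                 ! unique AS per leaf
  router-id 10.0.1.1
  address-family ipv4 unicast
    maximum-paths 4              ! ECMP across the spine layer
  neighbor 192.168.11.0
    remote-as 65001              ! spine AS
    address-family ipv4 unicast
  neighbor 192.168.12.0
    remote-as 65001
    address-family ipv4 unicast
```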
As the number of hosts in a broadcast domain increases, the negative effects of flooding packets become more pronounced. The spine switch can also be configured to send EVPN routes learned in the Layer 2 VPN EVPN address family to the IPv4 or IPv6 unicast address family and advertise them to the external routing device.

For additional information, see the following references:
●      Data center overlay technologies: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-730116.html
●      VXLAN network with MP-BGP EVPN control plane: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/guide-c07-734107.html
●      Cisco Massively Scalable Data Center white paper: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-743245.html
●      VXLAN EVPN TRM blog: https://blogs.cisco.com/datacenter/vxlan-innovations-on-the-nexus-os-part-1-of-2
●      Cisco Data Center Network Manager: https://www.cisco.com/c/en/us/products/cloud-systems-management/prime-data-center-network-manager/index.html
●      Cisco Network Insights configuration guides: https://www.cisco.com/c/en/us/support/data-center-analytics/network-insights-data-center/products-installation-and-configuration-guides-list.html

Cisco Data Center Network Manager (DCNM) is designed to simplify, optimize, and automate the modern multitenant data center fabric. It provides real-time health summaries, alarms, and visibility information. Beyond the Classic LAN mode described earlier, its installation modes include:
●      Media Controller mode: manages the Cisco IP Fabric Network for Media solution and helps the transition from an SDI router to an IP-based infrastructure. It provides workflow automation, flow policy management, and third-party studio equipment integration.
●      LAN Fabric mode: provides Fabric Builder for automated VXLAN EVPN fabric underlay deployment, overlay deployment, end-to-end flow trace, alarm and troubleshooting, configuration compliance, and device life-cycle management.
For more information about DCNM, see https://www.cisco.com/c/en/us/products/cloud-systems-management/prime-data-center-network-manager/index.html.

Tenant Routed Multicast (TRM) for Cisco Nexus 9000 Series Switches brings multicast routing into the VXLAN MP-BGP EVPN overlay; it is based on the standards described in IETF RFC 6513 and RFC 6514. Note that TRM is supported only on newer generations of Nexus 9000 switches, such as Cloud Scale ASIC-based switches. Internal and external routed traffic is routed by the distributed anycast gateway on each ToR switch, and each tenant has its own VRF routing instance.

In 2010, Cisco introduced virtual port channel (vPC) technology to overcome the limitations of Spanning Tree Protocol, which always blocks redundant paths in a Layer 2 network. vPC eliminates the spanning-tree blocked ports, provides active-active uplinks from the access switches to the aggregation routers, and makes full use of the available bandwidth, as shown in Figure 2. FabricPath introduces a control-plane protocol, FabricPath IS-IS, which is designed to determine FabricPath switch ID reachability information. On each FabricPath leaf switch, the network keeps the 4096 VLAN spaces, but across the whole FabricPath network it can support up to 16 million VN-segments, at least in theory. Most MSDC customers use eBGP because of its scalability.

Intel RSD is an implementation specification enabling interoperability across hardware and software vendors. Data center capacity continues to grow at an exponential pace, and the achievement of an effective data center design comes down to a clear definition of the business mission; which standards are appropriate for your facility depends on that mission.

Mr. Shapiro is the author of numerous technical articles and is also a speaker at many technical industry seminars. Settling within the mountainous site of Sejong City, BEHIVE presents the "cloud ring" data center for Naver, the largest Internet enterprise in Korea. Our client-first culture and multi-disciplinary architecture and engineering experts recognize the power of design in transforming the human experience. From client-inclusive idea generation to collaborative community engagement, Shive-Hattery is grounded in the belief that design-thinking is a … Learn more about our thought leaders and innovative projects for a variety of market sectors ranging from Corporate Commercial to Housing, Pre-K – 12 to Higher Education, Healthcare to Science & Technology (including automotive, data centers and crime laboratories).
