In the previous tutorial, we…
- joined two XenServer (XS) hosts together to form a XenServer resource pool;
- configured an NFS (Network File System, a distributed file system protocol originally developed by Sun Microsystems in 1984) shared storage repository (SR) for the XS hosts to store guests’ virtual hard disk drives (VHDs);
- created a dedicated storage network for the XenServer hosts to use when communicating with the SR; and
- configured that NFS shared SR as the default SR for the resource pool.
At a high level, most XenServer network design decisions stem from a combination of three major design goals: the need for redundancy, performance, or isolation. For many organizations, these goals may overlap and are not necessarily mutually exclusive.
While considering the goals, keep in mind that the physical network configuration you create for one host should match that on all other hosts in the pool.
Citrix XenServer Design: Designing XenServer Network Configurations
Our objective in this tutorial will be to improve the resiliency and the performance of networking within XenServer; we will not be concerned with isolation.
The Two Alternative Network Stacks of XenServer
For all intents and purposes, XenServer (XS) is a virtualization appliance. It is built from two major components: the Xen Project Hypervisor and a highly customized version of CentOS Linux:
- The Xen Project Hypervisor provides the virtualization component.
- CentOS Linux provides the control domain (i.e., Dom0) in the form of a virtual machine (VM).
In XenServer, two alternative components are used to extend the network functionality provided by Linux: Linux Bridge* and Open vSwitch. The two network stacks are mutually exclusive. XS administrators can easily select the network stack used in their environments using tools provided with XAPI.
From a conceptual perspective, [Open vSwitch] functions the same way as the existing Linux bridge. Regardless of whether or not you use [Open vSwitch] or the Linux bridge, you can still use the same networking features in XenCenter and the same xe networking commands listed in the XenServer Administrator’s Guide.
Citrix XenServer Design: Designing XenServer Network Configurations
XenServer and Linux Bridge
Linux Bridge was the original network stack of XenServer. It is still available in XS but has been deprecated since Version 6.5.
What is Linux Bridge?
Virtual networking requires the presence of a virtual switch inside a server/hypervisor. Even though it is called a bridge, the Linux bridge is really a virtual switch… Linux Bridge is a kernel module… And it is administered using brctl command on Linux.
Blogs by Sriram: Linux Bridge and Virtual Networking
Linux Bridge was introduced in Version 2.2 of the Linux Kernel. It was later rewritten for Versions 2.4 and 2.6 of the Kernel and remains present in the current version.
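As the blog quote above notes, on a generic Linux host a bridge is administered with the brctl command. A minimal sketch, for illustration only (the bridge name xenbr0 is an assumption based on XenServer's usual naming; in XenServer itself, configuration is performed with xe):

[root@linux ~]# brctl show # list all bridges and their member interfaces
[root@linux ~]# brctl showmacs xenbr0 # show the MAC addresses learned on bridge xenbr0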
Why use Linux Bridge?
The Linux Bridge is stable and well understood precisely because it has been around for so long and is so widely used. Linux administrators may interact with Linux Bridge using standard command-line tools (e.g., the brctl command). (However, in XenServer, all network configuration is performed using the XAPI command: xe.)
XenServer and Open vSwitch
Open vSwitch is the next-generation network stack for XenServer. Open vSwitch was introduced into XS in Version 5.6, Feature Pack 1, and, as of Version 6.0, is the default network stack in XS.
What is Open vSwitch?
From the Open vSwitch Web site: Open vSwitch is a production-quality, multilayer, virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, 802.1ag). In addition, it is designed to support distribution across multiple [XenServer hosts] – similar to VMware’s vNetwork distributed vswitch or Cisco’s Nexus 1000V.
Open vSwitch Homepage
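Open vSwitch also ships with its own management CLI, ovs-vsctl. A minimal sketch, assuming Open vSwitch is the active network stack (shown for illustration; day-to-day XenServer configuration is still performed with xe):

[root@xs-1 ~]# ovs-vsctl show # print each bridge with its ports and interfaces
[root@xs-1 ~]# ovs-vsctl list-br # list the names of all bridges (e.g., xenbr0)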
Why use Open vSwitch?
Using Open vSwitch is required for many of the advanced network features of XS:
- Cross-server private networks
- NIC Bonds that contain more than two NICs
- NIC Bonds that operate in LACP bonding mode
- OpenFlow®
Why Open vSwitch? Open vSwitch targets a different point in the design space than previous hypervisor networking stacks, focusing on the need for automated and dynamic network control in large-scale Linux-based virtualization environments.
The Open vSwitch FAQ (Summary)
The Future of Networking in XenServer
Open vSwitch has been available in XS since Version 5.6, Feature Pack 1; it has been the default network stack since Version 6.0 and will continue to be the default in future versions of XenServer: As of XenServer 6.0, the new XenServer vSwitch component is the default networking configuration. However, you can still use the Linux Bridge, which was the default networking configuration prior to XenServer 6.0, by running an xe command to change your networking configuration.
Citrix XenServer Design: Designing XenServer Network Configurations
In this tutorial we will use the default network stack in Version 6.5 of XenServer: Open vSwitch.
Identifying and Changing the Network Stack in XenServer
XS administrators almost never change the network stack in their infrastructure, but identifying and changing the network stack can easily be performed from the command-line interface (CLI). (Remember that, in this Scenario, we'll be making use of the default network stack in Version 6.5 of XenServer [Open vSwitch], so we will not be using any of the commands illustrated throughout this section of the tutorial.)
Identifying the Current Network Stack in XenServer
Two different commands can be used to identify the network stack that is currently configured on an XS host:

[root@xs-1 ~]# /opt/xensource/bin/xe-get-network-backend
openvswitch

[root@xs-1 ~]# xe host-list params=software-version | grep --color network_backend
software-version (MRO) : product_version: 6.5.0; product_version_text: 6.5; product_version_text_short: 6.5; platform_name: XCP; platform_version: 1.9.0; product_brand: XenServer; build_number: 90233c; hostname: taboth-1; date: 2016-11-11; dbv: 2015.0101; xapi: 1.3; xen: 4.4.1-xs131111; linux: 3.10.0+2; xencenter_min: 2.3; xencenter_max: 2.4; network_backend: openvswitch; xs:xenserver-transfer-vm: XenServer Transfer VM, version 6.5.0, build 90158c; xcp:main: Base Pack, version 1.9.0, build 90233c; xs:main: XenServer Pack, version 6.5.0, build 90233c
Obviously, the first command is the most convenient and the least prone to human error, so we recommend using the xe-get-network-backend command to identify the network stack currently in use on the XS host.
Changing the Current Network Stack in XenServer
The xe-get-network-backend command has a complement: the xe-switch-network-backend command! The xe-switch-network-backend command can be used, along with the openvswitch command-line argument or the bridge command-line argument, to select which network stack the XS host will use. The Citrix whitepaper 'XenServer Design: Designing XenServer Network Configurations' outlines the process this way (a short command sketch follows the excerpt):
Configuring [Open vSwitch] on Running Pools
If your pool is already up-and-running... consider the following before [changing the network stack]:
- You must run the xe-switch-network-backend command on each host in the pool separately. The xe-switch-network-backend command is not a pool-wide command. This command can also be used to revert to the standard Linux bridge.
- All hosts in the pool must use the same networking backend. Do not configure some hosts in the pool to use the Linux bridge and others to use [Open vSwitch] bridge.
- When you are changing your hosts to use [Open vSwitch], you do not need to put the hosts into Maintenance mode. You just need to run the xe-switch-network-backend command on each host and reboot the hosts.
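Putting the pieces together, a minimal sketch of switching the network stack on a single host (repeated on every host in the pool, followed by a reboot, per the guidance above):

[root@xs-1 ~]# xe-switch-network-backend bridge # revert to Linux Bridge...
[root@xs-1 ~]# xe-switch-network-backend openvswitch # ...or select Open vSwitch
[root@xs-1 ~]# reboot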
Bridges and Switches and Networks - Oh, my!
In the XS vernacular, a bridge is really the same thing as a [virtual] switch. Adding to the peculiarity of XS terminology is the fact that a bridge is called a network: A network is the logical network switching fabric built into XenServer that lets you network your virtual machines. It links the physical NICs to the virtual interfaces and connects the virtual interfaces together. These networks are virtual switches that behave as regular L2 learning switches. Some vendors’ virtualization products refer to networks as virtual switches or bridges.
Citrix XenServer Design: Designing XenServer Network Configurations
To reiterate: In XenServer...
- A bridge is the same thing as a switch.
- Both are called a network.
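You can observe this naming overlap directly from the CLI: each XenServer network object carries a bridge parameter naming the underlying bridge. A minimal sketch:

[root@xs-1 ~]# xe network-list params=uuid,name-label,bridge # list networks and their underlying bridges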
Network Bonding
In order to improve the resiliency and the performance of the networking in XS, we'll use a common technique that goes by many names: Network Bonding. The combining or aggregating together of network links in order to provide a logical link with higher throughput, or to provide redundancy, is known by many names such as “channel bonding”, “Ethernet bonding”, “port trunking”, “channel teaming”, “NIC teaming”, “link aggregation”, and so on. This concept as originally implemented in the Linux kernel is widely referred to as “bonding”.
Chapter 4 of the 'Red Hat Enterprise Linux 7 Networking Guide'
Though NIC Bonding has many different names, it always describes the same concept: the joining of multiple physical NICs into a single logical NIC. The resulting logical NIC behaves as a single NIC that offers increased resiliency and, in some configurations, increased throughput.
NIC bonding is a technique for increasing resiliency and/or bandwidth in which an administrator configures two [or more] NICs together so they logically function as one network card...
Citrix XenServer Design: Designing XenServer Network Configurations
Though it's recommended to configure bonds prior to creating the resource pool...
Whenever possible, create NIC bonds as part of initial resource pool creation prior to joining additional hosts to the pool or creating VMs. Doing so allows the bond configuration to be automatically replicated to hosts as they are joined to the pool and reduces the number of steps required... Adding a NIC bond to an existing pool requires one of the following:
- Using the CLI to configure the bonds on the master and then each member of the pool.
- Using the CLI to configure the bonds on the master and then restarting each member of the pool so that it inherits its settings from the pool master. [Or...]
- Using XenCenter to configure the bonds on the master. XenCenter automatically synchronizes the networking settings on the member servers with the master, so you do not need to reboot the member servers.
If you are not using XenCenter for [configuring] NIC bonding, the quickest way to create pool-wide NIC bonds is to create the bond on the master, and then restart the other pool members. Alternatively, you can use the service xapi restart command. This causes the bond and VLAN settings on the master to be inherited by each host. The management interface of each host must, however, be manually reconfigured.
Chapter 4.4.6.2 of the 'Citrix XenServer® 6.5, Service Pack 1 Administrator's Guide'
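For illustration, a hedged sketch of creating a pool-wide NIC bond from the CLI on the pool master, assuming the xe bond-create syntax documented in the Administrator's Guide (the UUIDs in angle brackets are placeholders to be looked up first):

[root@xs-1 ~]# xe network-create name-label=bond0 # create the network for the bond; prints its UUID
[root@xs-1 ~]# xe pif-list params=uuid,device,host-name-label # find the UUIDs of the member NICs (PIFs)
[root@xs-1 ~]# xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2>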
Network Bonding Modes
XenServer 6.5, Service Pack 1 supports 3 modes of network bonding:
- Active-Active,
- Active-Passive, and
- LACP
XenServer provides support for active-active, active-passive, and LACP bonding modes. The number of NICs supported and the bonding mode supported varies according to network stack:
- LACP bonding is only available for [Open vSwitch] whereas active-active and active-passive are available for both [Open vSwitch] and Linux Bridge.
- When [Open vSwitch] is the network stack, you can bond either two, three, or four NICs.
- When the Linux Bridge is the network stack, you can only bond two NICs.
Chapter 4.3.5 of the 'Citrix XenServer® 6.5, Service Pack 1 Administrator's Guide'
The technical details of NIC bonding can be very complex, but Wikipedia provides a good explanation of the different bonding modes, using slightly different names for them:
Active-backup (active-backup)
Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single, logical bonded interface's MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.
IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP)
Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification... The link is set up dynamically between two LACP-supporting peers.
Adaptive transmit load balancing (balance-tlb)
[balance-tlb mode] does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Adaptive load balancing (balance-alb)
Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic... [balance-alb mode] does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by [guests] on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the logical bonded interface, such that different network peers use different MAC addresses for their network packet traffic.
Wikipedia: Link aggregation
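Tying these modes back to XenServer: the bond mode can be supplied when the bond is created. A hedged sketch, assuming the mode argument of xe bond-create as documented in the Administrator's Guide (UUIDs are placeholders; balance-slb and active-backup are XenServer's names for active-active and active-passive):

[root@xs-1 ~]# xe bond-create network-uuid=<network-uuid> pif-uuids=<pif-uuid-1>,<pif-uuid-2> mode=lacp # or mode=balance-slb / mode=active-backup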
Different Bond Modes for Different Use-Cases
Citrix makes the following recommendations regarding the creation and configuration of different types of interfaces: XenServer can only send traffic over two or more NICs when there is more than one MAC address associated with the bond. XenServer can use the virtual MAC addresses in the VIF to send traffic across multiple links. Specifically:
- VM traffic. Provided you enable bonding on NICs carrying only VM (guest) traffic, all links are active and NIC bonding can balance VM traffic across NICs. An individual VIF's traffic is never split between NICs.
- Management or storage traffic. Only one of the links (NICs) in the bond is active and the other NICs remain unused unless traffic fails over to them...
Citrix XenServer Design: Designing XenServer Network Configurations
To be direct: Citrix recommends...
- active-passive bonds for primary and secondary interfaces, and
- active-active bonds for guest [external] interfaces.
TABLE #1

NIC #1 | NIC #2 | FUNCTION | IP ADDRESS (XS-1) | IP ADDRESS (XS-2)
---|---|---|---|---
eth0 | eth3 | Primary Management Interface (PMI) | 172.16.0.10/27 | 172.16.0.12/27
eth1 | eth4 | External | N/A | N/A
eth2 | eth5 | Storage | 172.16.0.35/28 | 172.16.0.36/28
Conclusion
In the previous tutorial, we configured the IP address of two interfaces on the host xs-1:
- The Primary Management Interface (172.16.0.10/28), and
- The Storage Interface (172.16.0.35/28).
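For reference, a hedged sketch of how such a static IP is assigned from the CLI, assuming the standard xe pif-reconfigure-ip command (the PIF UUID is a placeholder; 255.255.255.240 is the dotted-quad netmask for a /28):

[root@xs-1 ~]# xe pif-reconfigure-ip uuid=<pif-uuid> mode=static IP=172.16.0.35 netmask=255.255.255.240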
* Technically speaking, Linux Bridge has been integrated into the Linux network stack since Kernel Version 2.2; as such, it does not so much extend the Linux network stack as form an important piece of it. However, for the purposes of this discussion, we're going to consider it to be separate from the Linux Kernel.
Questions? Comments? Visit the forums to discuss this tutorial!
Other Reading
Changelog: This tutorial was last modified 20-Jul-2017