
NCD – Configuring Load Balancing



A load balancer is a service that routes client network traffic arriving on an external ("public") or internal interface, distributing requests among multiple servers to share the traffic load and achieve optimal resource utilization. The load balancer accepts incoming IP requests and distributes them across a pool of servers according to a selected distribution algorithm.
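
As a rough illustration of the distribution step, the Python sketch below shows how two common policies, weighted round robin and least connected, might pick the next pool member for a request. The member names, weights, and connection counts are hypothetical examples, not values taken from NCD, and the code is a conceptual sketch rather than how the load balancing service is actually implemented.

    from itertools import cycle

    # Hypothetical pool: each member has a name, a weight, and a count of
    # currently open connections; none of these values come from NCD.
    pool = [
        {"name": "web-01", "weight": 3, "connections": 12},
        {"name": "web-02", "weight": 1, "connections": 4},
    ]

    # Weighted round robin: repeat each member in proportion to its weight,
    # then hand requests to the members in turn.
    weighted_sequence = cycle(
        [member for member in pool for _ in range(member["weight"])]
    )

    def next_round_robin() -> str:
        return next(weighted_sequence)["name"]

    # Least connected: always pick the member with the fewest open connections.
    def next_least_connected() -> str:
        return min(pool, key=lambda member: member["connections"])["name"]

    print([next_round_robin() for _ in range(4)])  # ['web-01', 'web-01', 'web-01', 'web-02']
    print(next_least_connected())                  # 'web-02'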

Load balancing is available on vDC networks which are not isolated (see Identifying Isolated Networks for more information). Load balancing can be applied to TCP, HTTP, HTTPS, and UDP traffic.
Note: vDC networks in the same vDataCenter share load balancers (as well as firewall and NAT Rules). If a load balancer is enabled for a vDC network, it is also enabled for all vDC networks in the same vDC.

To configure load balancing for a vDC network:
  1. At the Navisite Cloud Director Dashboard page, click vDataCenters in the navigation bar on the left side of the page to display the vDataCenters page.

  2. In the "vDataCenters" list, click the name of the vDC for which load balancing is being configured. The vDataCenter detail page appears.

  3. Click Load Balancing in the "Network Services" section of the vDataCenter page. Any existing load balancers are displayed in the "Load Balancers" list.


Enabling/Disabling Load Balancing for the vDataCenter

  1. Click the "Gear" icon to the right of the "Enabled:" field.

  2. Select "Yes" or "No" from the resulting drop-down menu.

  3. Click the green check mark to make your selection, or click X to cancel the operation.

Enabling/Disabling Acceleration

Enabling acceleration allows the load balancer to use the faster L4 load balancing engine, rather than the L7 engine, which requires more processing. Note that acceleration must be enabled in order to utilize the UDP protocol.
  1. Click the "Gear" icon to the right of the "Acceleration Enabled:" field.

  2. Select "Yes" or "No" from the resulting drop-down menu.

  3. Click the green check mark to make your selection, or click X to cancel the operation.
Note: The L4 virtual IP address (VIP) is processed before the Edge Gateway firewall, so no "Allow" firewall rule is required when acceleration is enabled.
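
To make the L4/L7 distinction concrete, the hypothetical Python sketch below contrasts the two decision paths: an accelerated (L4) decision needs only the connection's source IP address and port, while an L7 decision must first read and parse the HTTP request, which is why it requires more processing and why the HTTP-aware balancing methods and cookie persistence depend on it. The function and server names are illustrative assumptions and do not correspond to anything in NCD or the Edge Gateway.

    from zlib import crc32

    def choose_server_l4(src_ip: str, src_port: int, servers: list[str]) -> str:
        # Accelerated (L4) path: decide per connection from packet header
        # fields only; the payload is never inspected.
        return servers[crc32(f"{src_ip}:{src_port}".encode()) % len(servers)]

    def choose_server_l7(raw_request: bytes, servers: list[str]) -> str:
        # L7 path: the HTTP request must be read and parsed before a decision
        # can be made on the URI, a header, or a cookie -- more work per
        # request, but it is what makes the HTTP-aware methods possible.
        request_line = raw_request.split(b"\r\n", 1)[0]
        _method, uri, _version = request_line.split(b" ", 2)
        return servers[crc32(uri) % len(servers)]

    servers = ["web-01", "web-02"]
    print(choose_server_l4("203.0.113.10", 54321, servers))
    print(choose_server_l7(b"GET /app/login HTTP/1.1\r\nHost: example.com\r\n\r\n", servers))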

Adding/Editing Load Balancers

  1. To create a new load balancer, click +Create Load Balancer. To edit an existing load balancer, click the "Gear" icon in the "Load Balancers" list corresponding to the load balancer to be edited. The Create/Edit Load Balancer page appears.


  2. Enable or disable the load balancer virtual server by selecting/deselecting the "Enabled" checkbox.

  3. Enter a name for the load balancer in the "Name" field.

  4. Select a load balancing algorithm for the load balancer virtual server from the "Balancing Method" drop-down menu. Available options include the following (an illustrative sketch of the hash-based methods appears after the list):

    • Round Robin – each server is used in turn according to the weight assigned to it. This is the optimal algorithm when the server's processing time remains equally distributed.

    • Least Connected – distributes client requests to multiple servers based on the number of existing connections on the server. New connections are sent to the server with the fewest connections.

    • IP Hash – selects a server based on a hash of the source and destination IP address of each packet.

    • URI – a URI (Uniform Resource Identifier) is a string of characters that unambiguously identifies a particular resource. Using this algorithm, the request URI is hashed and divided by the total weight of the running servers. The result designates which server receives the request, ensuring that a URI is always directed to the same server as long as all servers remain available.

      If you select URI, the "URI Length" and "URI Depth" fields are displayed. These options allow the algorithm to balance servers based on the beginning of the URI only. If both parameters are specified, the evaluation stops when either is reached.

      1. In the "URI Length" field, enter the number of defined characters at the beginning of the URI that should be used to compute the hash.

      2. In the "URI Depth" field, enter the maximum directory depth to be used to compute the hash. One level is counted for each slash in the request.

    • HTTP Header – using this algorithm, the HTTP header name is looked up in each HTTP request. The header name is not case sensitive. If the header is absent or does not contain any value, the round robin algorithm is applied.

      If you select HTTP Header, the "Header Name" field is displayed, allowing you to specify an HTTP header name to be looked up in each HTTP request.

      1. Enter the desired HTTP header name in the "Header Name" field.

    • URL – using this algorithm, the URL parameter value is hashed and divided by the total weight of the running servers. The result designates which server receives the request. This process is used to track user identifiers in requests, and ensure that the same user ID is always sent to the same server as long as all servers remain available. If no URL parameter value is found, the round robin algorithm is applied.

      If you select URL, the "URL" field is displayed, allowing you to specify a URL value to be looked up in the query string of each HTTP GET request.

      1. Enter the desired URL value in the "URL" field.
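
    The hash-based methods above (IP Hash, URI, HTTP Header, and URL) all follow the same pattern: compute a hash of some key and map it onto the weighted pool, so that the same key reaches the same server for as long as all members remain available. The Python sketch below is illustrative only; the pool contents, the helper functions, and the use of a CRC in place of whatever hash the load balancer actually applies are assumptions rather than NCD behavior. It also shows how the "URI Length" and "URI Depth" options shorten the key before it is hashed.

      from zlib import crc32

      # Hypothetical weighted pool; each entry is (member name, weight).
      pool = [("web-01", 3), ("web-02", 1)]

      def pick_by_hash(key: str) -> str:
          # Expand the members by weight, then map the hash onto that list so
          # the same key always lands on the same member while the pool is
          # unchanged.
          expanded = [name for name, weight in pool for _ in range(weight)]
          return expanded[crc32(key.encode()) % len(expanded)]

      def uri_key(uri: str, length: int | None = None, depth: int | None = None) -> str:
          # "URI Depth" keeps only the first N path levels (one level per
          # slash); "URI Length" keeps only the first N characters. Applying
          # both yields the shorter prefix, i.e. whichever limit is reached
          # first.
          key = uri
          if depth is not None:
              key = "/".join(key.split("/")[: depth + 1])
          if length is not None:
              key = key[:length]
          return key

      print(pick_by_hash("198.51.100.7"))                         # IP Hash-style key
      print(pick_by_hash(uri_key("/app/reports/2024", depth=1)))  # hashes "/app" only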

  5. Select a virtual server protocol from the "Protocol" drop-down menu. Available options include:

    • HTTP

    • HTTPS – not available for use with the URI balancing method.

    • TCP – not available for use with the URI balancing method.

    • UDP – not available for use with URI, HTTP Header, or URL balancing methods.
    Note: Your virtual server "Protocol" selection affects the options available/required for balancing method, virtual server acceleration, and application profile persistence.

  6. In the "Port" field, enter the number, or range of numbers, of the port(s) on which the load balancer listens.
    Note: In order to use FTP, a load balancer using the TCP protocol must have port 21 assigned to it.

  7. In the "Connection Limit" field, enter the maximum number of concurrent connections that the virtual server can process.

  8. In the "Connection Rate Limit (CPS)" field, enter the maximum number of incoming new connection requests per second for the virtual server.

  9. To allow the virtual server to use the faster L4 load balancing engine, rather than the L7 engine (which requires more processing), select the "Acceleration" checkbox.
    Note: Acceleration must be enabled in order for the virtual server to use the UDP protocol. Note also that this selection is separate from the "Acceleration Enabled" setting for the load balancing service as a whole; both must be selected in order to use the UDP protocol option.

  10. To apply the load balancer on the external internet, select the "Public" checkbox. If this checkbox is not selected, the load balancer is applied on the internal network selected using the "Create In" drop-down menu in the following step.
    Note: When the load balancer is created, the system creates a firewall rule to make the load balancer's IP accessible from the external internet if the "Public" checkbox is selected. Refer to Configuring Firewall Rules for details on managing firewall rules.

  11. From the "Create In" drop-down menu, select the internal network on which the load balancer is to be applied. Note that this field is not available if the "Public" checkbox is selected.

  12. In the "IP Address" field, enter the IP address at which incoming traffic is to be distributed by the load balancer service.

  13. The "Application Profile" section allows you to define a load balancer application profile to define the behavior of a particular type of network traffic.

  14. Enter an application profile name in the "Name" field in the "Application Profile" section.

  15. From the "Protocol" drop-down menu in the "Application Profile" section, select the traffic type for which you are creating the profile (HTTP, HTTPS, TCP, or UDP).
    Note: If you have selected HTTPS for both the virtual server and application profile protocols, the "SSL Passthrough" field appears, and is selected by default. SSL Passthrough (in which encrypted traffic is passed through to the pool member without being terminated at the load balancer) is required for this configuration.

  16. From the "Persistence" drop-down menu in the "Application Profile" section, select a persistence setting for the profile.

    Persistence tracks and stores session data, such as the specific pool member that serviced a client request. This ensures that client requests are directed to the same pool member throughout the life of a session or during subsequent sessions.

    Available persistence settings include:

    • None

    • Source IP

    • Cookie – supported for HTTP and HTTPS protocols. Cookie session persistence inserts a cookie to uniquely identify the session and persist the connection to the server during subsequent requests, as illustrated in the sketch following this list.
      1. If you select "Cookie" from the "Persistence" drop-down menu, the "Cookie Mode" and "Cookie Name" fields appear.
        1. Select a cookie insertion mode from the "Cookie Mode" drop-down menu. Available options include:
          • Insert – clients receive cookies from the Edge Gateway and the server.

          • Prefix – clients receive a single cookie from the server with Edge Gateway cookie information added as a prefix.

          • App – clients receive a URL with session ID information appended to it.

        2. Enter a name for the cookie in the "Cookie Name" field.

    • SSL SessionID – supported for HTTPS. SSL Session ID persistence uses the SSL session ID from the initial SSL "handshake" process to direct requests with the same session ID to the same server.

    • MSRDP – supported for TCP. MSRDP persistence uses a session broker token to maintain persistence records to ensure that Microsoft Terminal Services user sessions are assigned to specific servers.
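
    As an illustration of how cookie persistence keeps a client on the same pool member, the hypothetical Python sketch below mimics the "Insert" cookie mode: a request arriving without the persistence cookie is assigned a member and a cookie naming that member is inserted into the response, and later requests presenting the cookie are sent straight back to the same member. The cookie name, pool members, and routing function are placeholders, not actual NCD or Edge Gateway behavior.

      import random

      pool = ["web-01", "web-02", "web-03"]
      COOKIE_NAME = "LB_SESSION"  # placeholder; corresponds to the "Cookie Name" field

      def route(request_cookies: dict, response_headers: dict) -> str:
          # Reuse the member named in the cookie if it is still in the pool;
          # otherwise pick a member and insert a new persistence cookie.
          member = request_cookies.get(COOKIE_NAME)
          if member in pool:
              return member
          member = random.choice(pool)
          response_headers["Set-Cookie"] = f"{COOKIE_NAME}={member}"
          return member

      # First request: no cookie, so a member is chosen and the cookie is inserted.
      headers = {}
      first = route({}, headers)
      # A later request that presents the cookie stays on the same member.
      assert route({COOKIE_NAME: first}, {}) == first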

  17. In the "Members" section, specify the IP addresses of VMs providing the load balancing service (i.e., the members of the load balancer "pool").

    To add or edit members:
    1. To add a new load balancer pool member, click +Add Member. To edit an existing member, click the "gear" icon in the "Members" list row for the member to be edited.

    2. To enable the pool member, select the "Enabled" checkbox.

    3. Enter a name for the pool member in the "Name" field.

    4. In the "IP Address" field, enter the IP address of the pool member.

    5. In the "Port" field, enter the number of the port on which the pool member is to receive traffic.

    6. In the "Weight" field, enter the proportion of traffic that the pool member is to handle.

    7. Click the green check mark to add or edit the member, or click X to cancel the operation.

    To delete members:
    1. Click Delete in the "Members" list row for the member to be deleted.

  18. Click Create/Update Load Balancer to add or modify the load balancer. Click Cancel to cancel the operation.

Deleting Load Balancers

To delete a load balancer:
  1. In the "Load Balancers" list in the "Load Balancing" section of the vDataCenter page, click the Delete button corresponding to the load balancer to be deleted. A confirmation pop-up window appears.

  2. Click OK to delete the load balancer, or Cancel to cancel the operation.

Related vCloud Director Documents

Note: When you use vCloud Director (vCD) to configure load balancing, VMware refers to individual load balancers as "virtual servers." Also, vCD permits the reuse of member pools for more than one load balancer. Be aware that in Navisite Cloud Director, when editing the members of a load balancer that uses a shared pool configured via vCD, your edits affect all load balancers that employ that member pool.

Load Balancer Service Configurations
Managing Load Balancer Service on an Edge Gateway



