Proxy HA on a Blade Server
You can configure the proxy HA (High Availability) mode for Web Gateway on a blade server. This mode combines the functions of a proxy running in explicit proxy mode with High Availability functions.
This High Availability configuration is also known as a High Availability cluster. In this cluster, multiple instances of Web Gateway on blade servers run as nodes. There must be at least two director nodes, so that a failover can be performed if one of them goes down. A director node distributes incoming data packets among the nodes that scan the data, balancing the load across them.
The director node that acts in this role at any given time is known as the active director. The second director node, which takes over when the first goes down, is also known as a backup node. You can also configure more than one backup node.
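The election of an active director and the failover to a backup can be sketched as follows. This is an illustrative model only, not Web Gateway's implementation; the node names, priority values, and the `up` flag are hypothetical stand-ins for the VRRP-style mechanism the cluster uses.

```python
from dataclasses import dataclass

@dataclass
class DirectorNode:
    name: str
    priority: int      # higher priority wins the election (VRRP-style)
    up: bool = True

def elect_active_director(directors):
    """Return the reachable director with the highest priority, or None."""
    candidates = [d for d in directors if d.up]
    return max(candidates, key=lambda d: d.priority, default=None)

directors = [
    DirectorNode("director-1", priority=100),  # intended active director
    DirectorNode("director-2", priority=80),   # backup node
]

assert elect_active_director(directors).name == "director-1"

# If the active director goes down, a backup node takes over (failover):
directors[0].up = False
assert elect_active_director(directors).name == "director-2"
```

With more than one backup node, the same rule applies: the reachable director with the highest priority becomes the active director.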
We recommend that you configure the proxy HA mode as a two-legged proxy solution. This means the following is configured on a director node:
- Network interface for inbound web traffic
- Network interface for outbound web traffic
The network interface that handles inbound traffic must have a virtual IP address of its own. The network interface for outbound web traffic is also used for load balancing.
This is achieved by filling in a table with the IP addresses of the nodes in the cluster when configuring the director node. For each node, enter the following in this table:
- For a backup node — IP address and Peer/Director as type
- For a node that runs as a scanning node only — IP address and Scanner as type
If the node that initially runs as the active director also runs as a scanning node, its IP address must additionally be entered in this table with Scanner as type.
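The table entries above can be pictured as a list of IP address and type pairs, from which the director balances requests across all entries of type Scanner. The sketch below is illustrative only: the IP addresses are examples, and the round-robin policy is an assumption, as Web Gateway's actual balancing algorithm may differ.

```python
from itertools import cycle

# Hypothetical node table on the director (IPs are examples)
node_table = [
    {"ip": "192.168.1.11", "type": "Peer/Director"},  # backup node
    {"ip": "192.168.1.12", "type": "Scanner"},        # scanning node
    {"ip": "192.168.1.13", "type": "Scanner"},        # scanning node
    # The active director also scans, so it appears with Scanner as type:
    {"ip": "192.168.1.10", "type": "Scanner"},
]

# Only entries of type Scanner receive scanning work
scanners = cycle(n["ip"] for n in node_table if n["type"] == "Scanner")

# Round-robin dispatch of four incoming requests:
targets = [next(scanners) for _ in range(4)]
print(targets)
# → ['192.168.1.12', '192.168.1.13', '192.168.1.10', '192.168.1.12']
```

Note that the backup node (Peer/Director type) is not assigned scanning work unless it is also entered with Scanner as type.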
We also recommend that you configure the following on each director node:
- Network interface for out-of-band management
Configuring this network interface allows you to perform management communication separately.
- Network interface for internal communication within the blade system enclosure
This network interface has its IP address configured under VRRP (Virtual Router Redundancy Protocol).
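For illustration only: under VRRP, the director nodes share a virtual IP address on this interface, and the node with the highest priority holds it. The fragment below uses keepalived-style syntax purely as a stand-in; Web Gateway configures VRRP through its own user interface, and the interface name, router ID, and addresses are examples.

```
vrrp_instance internal_comm {
    interface eth3            # internal communication interface (example)
    virtual_router_id 51
    priority 100              # a backup director uses a lower value
    virtual_ipaddress {
        10.10.10.1/24         # shared virtual IP within the enclosure
    }
}
```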
The virtual IP address that the active director node uses on its interface for communication with the Web Gateway clients must be added to the settings of the HTTP and FTP proxies, on the ports that listen for requests coming in from the clients.
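As an example of such a listener entry, assuming a virtual IP address of 10.10.10.1 and the common default proxy ports (both the address and the ports are illustrative and may differ in your setup):

```
HTTP proxy port definition:
  Listener address: 10.10.10.1:9090    # virtual IP + HTTP proxy port
FTP proxy port definition:
  Listener address: 10.10.10.1:2121    # virtual IP + FTP proxy port
```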
If switches are installed as interconnect modules on an enclosure, link resilience can be achieved in the following way:
- Two of the ports used as uplink ports on a switch are bundled in a trunk group.
- Each of these ports is connected by a network cable to a physical link.
This means that if one of the two links fails, the trunk group remains active.
The interconnect modules and the trunk groups are mapped to the ports on the network interfaces, for example, as shown in the following table. For the network interface that handles internal communication, no port mapping to a trunk group is required.
|Port on network interface|Interconnect module|Trunk group|
|---|---|---|
|Inbound web traffic interface|Switch in interconnect bay 1|Group 1: port 21, port 22|
|Outbound web traffic interface|Switch in interconnect bay 2|Group 2: port 21, port 22|
|Out-of-band management interface|Switch in interconnect bay 3|Group 3: port 21, port 22|
|Internal communication interface|Switch in interconnect bay 4|No uplink ports required|
For more information on how to configure the interconnect modules, refer to the GbE2c Ethernet Blade Switch for c-Class BladeSystem Application Guide, which is available on the Skyhigh Security partner's website.