How do I set up Load Balancing on an Unmanaged VPS?
20i Load Balancers distribute traffic across multiple virtual private servers (VPS) to create better-performing, more robust infrastructure for your HTTP and TCP applications.
Setting up a Load Balancer is a two-step process: first, add the Load Balancer; then choose the VPS you’d like to balance traffic between. VPS can be added or removed as required.
Each Load Balancer is assigned a single unique IP. This becomes the target address for your website or applications.
What improvements am I likely to see?
Your apps will be more available and will perform faster, as they have more resources to draw from.
If you’re already using the 20i VPS platform in a highly available way, you’ve probably got at least three servers: one VPS acting as a Load Balancer and the other two serving the requests.
With 20i Load Balancing, the Load Balancer is (a) managed by us, (b) scaled by us, and (c) fully highly available. You’ll no longer need that Load Balancer server, which means there’s one less server to maintain, and you don’t need to worry about that server’s capacity management.
Before you begin
You’ll need at least two Unmanaged VPS to distribute incoming requests between. Each VPS will need to contain identical content, so that every balanced request receives the same response regardless of which server handles it.
Ordering a Load Balancer
Assigning guests to your Load Balancer
Assigning your VPS to your Load Balancer is quick and easy:
- Head to Manage VPS > Manage Load Balancing
- Select Options > Configure Guests
- Choose the VPS you’d like to balance load between and select Save.
You can now use the Load Balancer’s IP address as the target address for your domain, for example as the value of an A record. Load will then be balanced between your selected guest machines.
Configuring your Load Balancer
There are several configurable options you can set to get the desired behaviour and the best performance from your Load Balancer.
To manage and configure your Load Balancer, head to Manage VPS > Manage Load Balancing and select Options. Here’s what you can configure:
Name – this is what you’d like your Load Balancer to be called. It’s just for your reference.
Location – at present, Load Balancers are only based in the UK. We’re working on further deployment locations.
The load balancing algorithm determines what method the Load Balancer uses to divert traffic between your selected guest machines. There are three available options for the algorithm:
- Least connection - balances to the server with the least number of connections
- Source - balances a single client IP to the same server (as long as it’s healthy)
- Round robin - balances to a server in sequence (A>B>C>A…)
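To make the differences concrete, here’s a minimal Python sketch of how each algorithm picks a server. The server names and connection counts are hypothetical, and the source algorithm is shown as a simple hash of the client IP; this illustrates the idea, not 20i’s exact implementation.

```python
import hashlib
from itertools import cycle

# Hypothetical guest machines attached to the Load Balancer.
servers = ["vps-a", "vps-b", "vps-c"]

# Round robin: each request goes to the next server in sequence (A>B>C>A...).
_rr = cycle(servers)
def round_robin():
    return next(_rr)

# Least connection: pick the server currently handling the fewest connections.
def least_connection(active_connections):
    # active_connections maps server name -> current open connection count
    return min(active_connections, key=active_connections.get)

# Source: hash the client IP so the same client always lands on the same
# server, as long as that server stays in the healthy pool.
def source(client_ip, healthy_servers):
    digest = hashlib.sha256(client_ip.encode()).digest()
    return healthy_servers[digest[0] % len(healthy_servers)]
```

Note that the source algorithm only remains sticky while the pool is unchanged; if a server fails its healthcheck, clients hashed to it are re-mapped to a remaining healthy server.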
The mode sets the protocol your Load Balancer uses. There are two options:
- HTTP – standard HTTP load balancing routes requests using the HTTP protocol. For example, if you’re distributing requests between two web servers, you’d select HTTP.
- TCP – TCP load balancing is required for applications or infrastructure that don’t use the HTTP protocol. For example, if your Load Balancer is deployed in front of two database instances, you’d need to use TCP.
In HTTP mode, you need to tell the Load Balancer which domains you want traffic balanced for. This also allows us to perform SSL offloading.
A healthcheck verifies that the virtual machines are live and reachable. The Load Balancer does this by periodically sending a request (ping) to check the status of the attached servers. If a virtual machine fails a healthcheck, the Load Balancer stops traffic to that instance and routes traffic to another live server.
A healthcheck URL will continue to be checked even if an instance is down. This way, when the instance becomes live again the Load Balancer will begin balancing requests back to that server. All servers that are assigned to the Load Balancer are checked via the same healthcheck URL.
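In practice this means each guest VPS should serve a fast, reliable response at the healthcheck URL. As a minimal sketch, here’s a Python endpoint a guest could expose, assuming the Load Balancer is configured to probe a path such as `/healthcheck`; the path and port are illustrative, not fixed by 20i.

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HealthcheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthcheck":
            body = b"OK"
            self.send_response(200)  # 200 keeps this guest in the pool
        else:
            body = b"Not Found"
            self.send_response(404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# To run on a guest (illustrative port):
# ThreadingHTTPServer(("0.0.0.0", 8080), HealthcheckHandler).serve_forever()
```

A real healthcheck handler would usually also verify the application’s own dependencies (for example, that it can reach its database) before returning 200.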
The healthcheck interval is the frequency at which a healthcheck is run. By default, this is set to 5000 ms (i.e. 5 seconds).
The rise is the number of successful checks before a server is put back into balancing after a failure. The fall is the number of failed checks before a server is taken out of balancing.
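This rise/fall hysteresis stops a flapping server from being pulled in and out of balancing on every single check. A small Python sketch of the state machine, with illustrative values of rise=2 and fall=3:

```python
class HealthState:
    """Tracks one server's health using rise/fall hysteresis.

    Illustrative defaults: 3 consecutive failures take the server out
    of balancing; 2 consecutive successes put it back in.
    """

    def __init__(self, rise=2, fall=3):
        self.rise, self.fall = rise, fall
        self.healthy = True
        self.streak = 0  # consecutive checks pointing away from current state

    def record(self, check_passed):
        if self.healthy:
            if check_passed:
                self.streak = 0          # still healthy; reset failure streak
            else:
                self.streak += 1
                if self.streak >= self.fall:
                    self.healthy = False # fall reached: remove from balancing
                    self.streak = 0
        else:
            if check_passed:
                self.streak += 1
                if self.streak >= self.rise:
                    self.healthy = True  # rise reached: return to balancing
                    self.streak = 0
            else:
                self.streak = 0          # still down; reset success streak
        return self.healthy
```

With these values, one stray failed check does nothing, three in a row remove the server, and two consecutive passes bring it back.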
Accept proxy (proxy protocol) forwards client information through the Load Balancer to the attached backend guest instances. A human-readable header is added to the request with information such as the original client IP address.
This helps in situations where you need access to the original client information, for example in your logs. You must make sure your VPS are configured to process the proxy protocol information.
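For illustration, the proxy protocol’s version 1 header is a single human-readable line sent ahead of the application data, e.g. `PROXY TCP4 203.0.113.5 10.0.0.1 51234 443\r\n`. Here’s a minimal Python sketch of how a backend could parse it; the field names in the returned dict are our own choices, and a production service would normally rely on its web server’s built-in proxy protocol support instead.

```python
def parse_proxy_v1(line: bytes) -> dict:
    """Parse a PROXY protocol v1 header line from the Load Balancer.

    Format: PROXY <TCP4|TCP6> <src_ip> <dst_ip> <src_port> <dst_port>\r\n
    """
    parts = line.rstrip(b"\r\n").split(b" ")
    if parts[0] != b"PROXY" or len(parts) != 6:
        raise ValueError("not a PROXY protocol v1 header")
    _, proto, src_ip, dst_ip, src_port, dst_port = parts
    return {
        "protocol": proto.decode(),
        "client_ip": src_ip.decode(),    # the original client's address
        "client_port": int(src_port),
        "server_ip": dst_ip.decode(),
        "server_port": int(dst_port),
    }
```

In nginx, for example, the equivalent is enabling `proxy_protocol` on the `listen` directive so the original client IP is available to your logs.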