In a production environment, ensuring your infrastructure is always available is critical. Combining HAProxy (for load balancing) and Keepalived (for automatic failover) provides a robust solution for high availability.
High Availability refers to systems designed to ensure continuous operation and minimal downtime. It involves redundancy, failover mechanisms, and robust monitoring to avoid single points of failure.
Load Balancing distributes incoming traffic across multiple servers to ensure no single server becomes overwhelmed.
Automatic Failover ensures that if the primary service or server fails, another takes over seamlessly to maintain uptime.
Installing HAProxy
Install HAProxy on all nodes that will act as load balancers.
apt install haproxy -y
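Once installed, confirm the version and that the service is registered (assuming a systemd-based Debian/Ubuntu host, matching the apt command above):
haproxy -v
systemctl status haproxy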
Default HAProxy Configuration
HAProxy’s default configuration file is located at /etc/haproxy/haproxy.cfg. Start by enabling the built-in stats page to monitor traffic and backend health:
listen stats
    bind :9000
    mode http
    stats enable
    stats uri /
    stats realm Haproxy\ Statistics
    stats auth admin:password
    stats refresh 10s
    stats hide-version
admin:password can be replaced with a secure username and password.
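After saving the change, validate the configuration and reload HAProxy, then browse to port 9000 on the node and log in with the credentials defined in stats auth:
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy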
Configuring HAProxy for Database Backend
To load balance a MySQL database backend, assuming replication is already handled and the cluster runs in active/active mode (for example, with Galera Cluster), add the following:
frontend db_frontend
    bind *:3306
    mode tcp
    option tcplog
    default_backend db_backend

backend db_backend
    mode tcp
    option tcpka
    balance source
    option mysql-check user haproxy
    server db1 prod.db1.lab:3306 check
    server db2 prod.db2.lab:3306 check backup
    server db3 prod.db3.lab:3306 check backup
Ensure that the haproxy user exists on all database nodes:
CREATE USER 'haproxy'@'%';
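To double-check that the account exists on each database node, a quick query from the shell is enough; run it as a MySQL admin user (the mysql-check option only needs the user to exist, with no password or extra privileges):
mysql -e "SELECT User, Host FROM mysql.user WHERE User = 'haproxy';"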
Configuring HAProxy for Web Backend
To load balance the web frontend servers:
frontend web_frontend
    bind *:80
    mode http
    option httplog
    default_backend web_backend

backend web_backend
    mode http
    balance roundrobin
    option httpchk
    http-check send meth GET uri /
    server web1 prod.app01.lab:8080 check
    server web2 prod.app02.lab:8080 check
    server web3 prod.app03.lab:8080 check
The health check (httpchk) ensures servers are only used when responsive.
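Before relying on the pool, you can issue the same GET / request the health check uses to confirm each backend answers (hostnames and port are the lab examples above):
for h in prod.app01.lab prod.app02.lab prod.app03.lab; do
    curl -s -o /dev/null -w "$h %{http_code}\n" "http://$h:8080/"
done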
Configuring Stickiness in HAProxy
Session stickiness (or persistence) ensures a user’s requests consistently go to the same backend server.
Cookie-Based Stickiness
backend web_backend
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server app01 prod.app01.lab:8080 check cookie prod.app01.lab
    server app02 prod.app02.lab:8080 check cookie prod.app02.lab
Clients store a SERVERID cookie identifying the backend server.
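To confirm the cookie is being set, inspect the response headers from the frontend; run this on the HAProxy node itself, since the frontend binds to all local addresses:
curl -s -D - -o /dev/null http://localhost/ | grep -i set-cookie
The output should contain something like SERVERID=prod.app01.lab; path=/.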
Source-Based Stickiness
backend web_backend
    mode http
    option forwardfor
    balance source
    server app01 prod.app01.lab:8080 check
    server app02 prod.app02.lab:8080 check
Requests are routed based on the client’s IP address.
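To confirm source persistence, send several requests from one client and watch the per-server session counters on the stats page; with balance source they should all land on the same backend (localhost stands in for whichever address the client reaches the frontend on):
for i in $(seq 5); do curl -s -o /dev/null http://localhost/; done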
Installing Keepalived
To support failover between HAProxy nodes, install Keepalived:
apt install keepalived -y
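A quick version check confirms the install:
keepalived --version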
Configuring Keepalived (Primary Node)
On the primary HAProxy node, create /etc/keepalived/keepalived.conf with the following:
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface mainbr0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass password
    }
    virtual_ipaddress {
        10.0.10.100
    }
    track_script {
        chk_haproxy
    }
}
virtual_ipaddress is the shared IP that floats between nodes.
Configuring Keepalived (Secondary Nodes)
On secondary nodes, use the same configuration file with slight changes:
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface mainbr0
    virtual_router_id 51
    priority 100 # or 99 on other backups
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass password
    }
    virtual_ipaddress {
        10.0.10.100
    }
    track_script {
        chk_haproxy
    }
}
Use a slightly lower priority than on the master node to allow automatic failover. With the weight 2 track script, the master advertises an effective priority of 101 + 2 = 103 and the first backup 100 + 2 = 102 while HAProxy is healthy; if HAProxy dies on the master, its priority drops back to 101, below the backup's 102, and the virtual IP moves.
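Before restarting anything, recent Keepalived releases can syntax-check the configuration on each node; if your version supports the -t flag, this catches typos early:
keepalived -t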
Starting and Verifying Keepalived
After configuration, restart Keepalived on all nodes:
systemctl restart keepalived
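It is also worth enabling the service at boot and checking the journal, where Keepalived logs its VRRP state transitions (entering MASTER or BACKUP state):
systemctl enable keepalived
journalctl -u keepalived --no-pager -n 20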
Check if the virtual IP (10.0.10.100) is active on the master node:
ip a | grep 10.0.10.100
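To test failover end to end, stop HAProxy on the master; the chk_haproxy script then fails, the node's effective priority drops, and the VIP should appear on a backup node within a few seconds:
systemctl stop haproxy        # on the current master
ip a | grep 10.0.10.100       # on a backup node: the VIP should now be present
systemctl start haproxy       # restore the master when done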