== HaProxy description ==
HaProxy is a widely used load balancer that is available on almost every Linux distribution. You can install it on Ubuntu using the following bash command:
{{{
sudo apt install haproxy
}}}
Official website: http://www.haproxy.org/ <<BR>>
Man page: http://manpages.ubuntu.com/manpages/xenial/man1/haproxy.1.html <<BR>>

== Sample Configurations ==
This section contains a few examples for configuring HaProxy. For production environments, please refer to the official haproxy.org documentation to make sure the configuration is optimal for your solution.

=== HaProxy for Web servers ===
This section contains an example of how to configure HaProxy to load balance traffic between web servers. For a quick setup, '''append''' the following code to '''/etc/haproxy/haproxy.cfg''':
{{{
frontend http
    bind *:80
    mode http
    default_backend web-backend

backend web-backend
    balance roundrobin
    server httpd-01 10.10.20.21:80 check
    server httpd-02 10.10.20.22:80 check
    server httpd-03 10.10.20.23:80 check
}}}
More complex solutions might require SSL (HTTPS).
{{{
# code block for https required
}}}
For information regarding the Apache2 HTTPD web server, please visit the [[https://help.ubuntu.com/lts/serverguide/httpd.html|official server guide]].

=== HaProxy for Galera Cluster ===
HaProxy can be used together with a Galera cluster to load balance traffic between the MariaDB/MySQL nodes. If you want to run a quick setup, '''append''' the following code to '''/etc/haproxy/haproxy.cfg''':
{{{
listen galera
    bind 127.0.0.1:3306
    balance leastconn
    mode tcp
    option tcpka
    option mysql-check user haproxy
    server mariadb-server1 10.10.10.11:3306 check weight 1
    server mariadb-server2 10.10.10.12:3306 check weight 1
    server mariadb-server3 10.10.10.13:3306 check weight 1
}}}
For the checks to work, you must create a haproxy user within the database:
{{{
CREATE USER 'haproxy'@'10.10.20.21';
}}}
This setup assumes that your database servers are part of the network 10.10.10.0/24 and that the server where haproxy runs has the IP 10.10.20.21. <<BR>>
The bind address is 127.0.0.1:3306 since you will probably install this on the server where your Apache2 is running, and Apache2 will need to connect to the DB. <<BR>>
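To confirm that HaProxy is actually distributing connections between the Galera nodes, a quick check from the Apache2/haproxy host could look like the following. This is only a sketch: it assumes the mysql command-line client is installed on that host and it reuses the passwordless haproxy check user created above, which only needs permission to connect.
{{{
# Open a few separate connections through HaProxy (127.0.0.1:3306).
# Each connection may be answered by a different MariaDB node, and
# @@hostname shows which node handled it.
for i in 1 2 3; do
    mysql -h 127.0.0.1 -P 3306 -u haproxy -e "SELECT @@hostname;"
done
}}}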
=== Using keepalived for high availability ===
For production environments, you might need to always have a haproxy instance running. If downtime is not acceptable, please consider using keepalived to shift the IP to another server in case your main server goes down. Use the following command to install keepalived:
{{{
sudo apt install keepalived
}}}
Let's assume that your haproxy setup has the following IPs: <<BR>>
haproxy-node-01: 10.10.30.11 on eth0 <<BR>>
haproxy-node-02: 10.10.30.12 on eth0 <<BR>>
You will need a third IP that shifts between the two servers when one of them goes down. Let's assume the IP that will always be available is '''10.10.30.50'''. For such an example, the configuration would be as follows.

On node '''haproxy-node-01''', create '''/etc/keepalived/keepalived.conf''' and add the following to it:
{{{
global_defs {
    smtp_server localhost
    smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.30.50
    }
}
}}}
On node '''haproxy-node-02''', create '''/etc/keepalived/keepalived.conf''' and add the following to it:
{{{
global_defs {
    smtp_server localhost
    smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.30.50
    }
}
}}}
As you can see, the two nodes have a similar configuration. The main difference is the priority, which makes '''haproxy-node-01''' the '''active''' node and '''haproxy-node-02''' the '''standby''' node. The lower priority on the second node means that it will take over 10.10.30.50 only when node-01 is down. <<BR>> <<BR>>
This configuration can be tested by running '''ip addr''' to check the IPs on each machine. With a running setup, node-02 will take over the virtual IP when node-01 goes down.
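For example, a minimal failover test could look like the following. This is only a sketch: it assumes the interface and addresses used above, and it simulates a failure of node-01 simply by stopping keepalived on that node.
{{{
# On haproxy-node-01: the virtual IP should currently be assigned to eth0
ip addr show eth0 | grep 10.10.30.50

# Simulate a failure of node-01 by stopping keepalived on it
sudo systemctl stop keepalived

# On haproxy-node-02: after a few seconds the virtual IP should appear on eth0
ip addr show eth0 | grep 10.10.30.50

# Bring node-01 back; with its higher priority it will reclaim the virtual IP
sudo systemctl start keepalived
}}}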