Nginx can run as a software load balancer using the http_proxy module, and this is one of the most common and useful roles for nginx in a web stack. The default configuration for an upstream is to balance requests with the round-robin method. Round-robin is oblivious to server state: factors such as the actual load on a server or its number of active connections are not taken into account.
By default, when a single request to a server listed in the upstream block fails, that server is removed from the pool for 10 seconds.
Install nginx
apt -y install nginx
Configure nginx
Copy the original configuration to /root as a backup, then edit it:
cp -r /etc/nginx /root/
vim /etc/nginx/nginx.conf
HTTP Load Balancing
The configuration below proxies incoming HTTP requests to two backend servers, balanced with the default round-robin method:
user nginx;
worker_processes auto;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    keepalive_timeout 65;
    default_type application/octet-stream;
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;

    upstream backend {
        # you can also use a hostname as the server value, e.g.:
        # server node01.darin.web.id;
        server 103.43.x.34;
        server 103.43.x.36;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }
}
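Optionally, the backends can be given the original Host header and client address, since their logs otherwise only see the load balancer's IP. A minimal sketch of the location block above with the usual forwarding headers added:
location / {
    proxy_pass http://backend;
    # pass the original host and client address on to the upstream servers
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}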
TCP Load Balancing
Arbitrary TCP traffic is load balanced with the stream module rather than the http module; the upstream and server are defined inside a stream block:
user nginx;
worker_processes auto;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    keepalive_timeout 65;
    default_type application/octet-stream;
    include /etc/nginx/mime.types;
    include /etc/nginx/conf.d/*.conf;
}

stream {
    server {
        listen 80;
        proxy_pass backend;
    }

    upstream backend {
        server 103.43.47.x:80;
        server 103.43.47.x:80;
    }
}
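Before starting or reloading nginx, validate the configuration. Note that the stream server above listens on port 80, so it must not collide with an HTTP server block pulled in via /etc/nginx/conf.d/*.conf that listens on the same port:
nginx -t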
systemctl start nginx
systemctl restart nginx
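To check that requests are being balanced round-robin, send a handful of requests to the load balancer and see which backend answers each one (this assumes the backends return something that identifies them, such as a distinct index page):
for i in $(seq 1 6); do curl -s http://localhost/; done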
Weighted servers
Weights are useful if you want to send more traffic to a particular server because it has faster hardware, or if you want to send less traffic to a particular server to test a change on it. For example:
upstream backend {
    server 103.43.x.34 weight=3; # receives 3x as many requests as the default weight of 1
    server 103.43.x.36;
}

# Because node03 has a lower weight, it receives only 1 out of
# every 5 requests (2 + 2 + 1 = 5), i.e. 20% of the total traffic.
upstream backend {
    server node01.darin.web.id weight=2;
    server node02.darin.web.id weight=2;
    server node03.darin.web.id weight=1;
}
Health checks
The open-source version of nginx does not have real (active) health checks. What you get out of the box are passive checks: checks that remove a server from the pool if it causes errors a certain number of times.
The default behaviour is that if a request to an upstream server errors out or times out once, the server is removed from the pool for 10 seconds. You can tune this behaviour with the following directives:
max_fails and fail_timeout
upstream backend {
    # the server needs to fail 2 times in 5 seconds to be marked unhealthy
    server 103.43.x.34 max_fails=2 fail_timeout=5;
    # the server needs to fail 100 times in 50 seconds to be marked unhealthy
    server 103.43.x.36 max_fails=100 fail_timeout=50;
}
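What counts as an unsuccessful attempt is governed by the proxy_next_upstream directive; by default only connection errors and timeouts count, but specific HTTP status codes can be counted as well. A sketch of the location block with this added:
location / {
    proxy_pass http://backend;
    # also treat HTTP 500 and 503 responses from a backend as failed attempts
    proxy_next_upstream error timeout http_500 http_503;
}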
Removing a server from the pool
When a server is marked as down, it is considered completely unavailable and no traffic is routed to it. This lets you take a server out of rotation without deleting its server directive.
upstream backend {
    server node01.darin.web.id down;
    server node02.darin.web.id;
    server node03.darin.web.id;
}
Backup servers
node04.darin.web.id will only receive traffic if all of the other hosts in the pool are marked as unavailable. The usefulness of backup servers is limited to smaller workloads, because you would need enough backup capacity to handle the traffic of the entire pool.
upstream backend {
    server node01.darin.web.id;
    server node02.darin.web.id;
    server node03.darin.web.id;
    server node04.darin.web.id backup;
}
Turn on the least_conn algorithm
least_conn sends each new request to the server with the fewest active connections:
upstream backend {
    least_conn;
    server 103.43.x.34;
    server 103.43.x.36;
}
Turn on sticky sessions with ip_hash
ip_hash routes requests from the same client IP address to the same upstream server, so a client keeps hitting the same backend:
upstream backend {
    ip_hash;
    server 103.43.x.34;
    server 103.43.x.36;
}
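If clients sit behind a shared proxy (so ip_hash would send them all to one backend), or you want to key on something else entirely, the generic hash directive can be used instead. A sketch that hashes on the request URI, with consistent (ketama) hashing so that adding or removing a server remaps as few keys as possible:
upstream backend {
    # distribute requests based on a hash of the request URI
    hash $request_uri consistent;
    server 103.43.x.34;
    server 103.43.x.36;
}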
SSL Termination
TLS can be terminated on the load balancer so that the backend servers receive decrypted traffic; for TCP load balancing this is configured on the stream server:
stream {
    upstream stream_backend {
        server backend1.example.com:12345;
        server backend2.example.com:12345;
        server backend3.example.com:12345;
    }

    server {
        listen 12345 ssl;
        proxy_pass stream_backend;
        ssl_certificate /etc/ssl/certs/server.crt;
        ssl_certificate_key /etc/ssl/certs/server.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_session_cache shared:SSL:20m;
        ssl_session_timeout 4h;
        ssl_handshake_timeout 30s;
        #...
    }
}
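For testing, a self-signed certificate can be generated at the paths used above; the subject name below is only a placeholder:
# create a throwaway key and certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /etc/ssl/certs/server.key \
    -out /etc/ssl/certs/server.crt \
    -subj "/CN=lb.example.com"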
Reference: Nginx practical guide.pdf