Episode 454: Nginx Load Balancing, from Basics to Architecture in Practice
1. Overview
1.1 Why Load Balancing Matters
Load balancing is a core component of distributed system architecture: by spreading requests across multiple servers it delivers high availability, high performance, and horizontal scalability. Nginx, a high-performance reverse proxy server, is an ideal choice for the job.
The value of load balancing:
- High availability: a single server failure does not take the service down
- Better performance: request pressure is spread out, raising overall throughput
- Horizontal scaling: server nodes can be added dynamically
- Resource optimization: server resources are used more evenly
1.2 Advantages of Nginx Load Balancing
Characteristics of Nginx as a load balancer:
- High performance: epoll-based event model, handles high concurrency
- Low resource consumption: small memory footprint, low CPU usage
- Flexible configuration: multiple load balancing algorithms
- Rich features: health checks, session persistence, failover
1.3 Structure of This Article
This article walks through Nginx load balancing from the following angles:
- Load balancing fundamentals: concepts, types, algorithms
- Nginx configuration: the upstream block and algorithm selection
- Health checks: active and passive
- Session persistence: ip_hash and the sticky module
- Database load balancing: MySQL, Oracle, PostgreSQL, SQL Server, and more
- High availability: active/standby, dual-master, clusters
- Performance optimization: connection pooling, caching, compression
- Monitoring and alerting: status and performance monitoring
- Case studies: e-commerce and other production scenarios
2. Load Balancing Fundamentals
2.1 Load Balancing Concepts
2.1.1 What Is Load Balancing?
Definition:
Load balancing is a technique for distributing network traffic or computational load across multiple servers in order to improve a system's availability, performance, and scalability.
Architecture:
```
User requests
    ↓
Load balancer (Nginx)
    ↓
   ├──→ Backend server 1
   ├──→ Backend server 2
   ├──→ Backend server 3
   └──→ Backend server N
```
2.1.2 Types of Load Balancing
By network layer:

| Layer | Type | Description |
|---|---|---|
| L4 (transport layer) | TCP/UDP load balancing | Based on IP address and port |
| L7 (application layer) | HTTP/HTTPS load balancing | Based on the HTTP protocol |
By implementation:

| Approach | Description | Pros | Cons |
|---|---|---|---|
| Hardware load balancer | F5, A10, and similar appliances | High performance, stable | Expensive |
| Software load balancer | Nginx, HAProxy, etc. | Low cost, flexible | Comparatively lower performance |
2.2 Load Balancing Algorithms
2.2.1 Common Algorithms
1. Round Robin:
- Requests are handed to the servers in turn
- Nginx's default algorithm
2. Weighted Round Robin:
- Requests are distributed according to server weights
- Servers with higher weights receive more requests
3. IP Hash:
- A hash is computed from the client IP address
- The same IP always lands on the same server (session persistence)
4. Least Connections:
- The request goes to the server with the fewest active connections
5. Least Time:
- The request goes to the server with the shortest response time
- Requires Nginx Plus (a configuration sketch follows this list)
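As a hedged illustration of the last item: in the commercial Nginx Plus this algorithm is exposed through the least_time directive (it is not available in open-source Nginx). A minimal sketch, reusing the sample addresses from this article:

```nginx
upstream backend {
    # Nginx Plus only: prefer the server with the lowest average time to the
    # response header; use "last_byte" instead to measure the full response
    least_time header;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}
```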
2.3 Load Balancing Scenarios
2.3.1 Web Application Load Balancing
Scenario:
- Several Tomcat application servers
- Nginx acting as the reverse proxy
- HTTP requests distributed across the Tomcat instances (a minimal configuration sketch follows)
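A minimal sketch of this scenario, assuming two Tomcat instances listening on port 8080 (the addresses are illustrative):

```nginx
upstream tomcat_pool {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://tomcat_pool;
        # pass the original host and client address through to the application
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```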
2.3.2 Database Load Balancing
Scenario: a primary database takes writes while one or more replicas serve reads, with read traffic spread across the replicas (covered in depth in Section 4).
2.3.3 API Service Load Balancing
Scenario: multiple stateless API instances sit behind a gateway, with requests routed to services by URL path (see the microservice case in Section 8.2).
3. Nginx Load Balancing Configuration
3.1 The upstream Module
3.1.1 Basic upstream Configuration
Basic syntax:
```nginx
http {
    upstream backend {
        server 192.168.1.100:8080;
        server 192.168.1.101:8080;
        server 192.168.1.102:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
```
3.1.2 upstream Parameters
Parameters of the server directive:

| Parameter | Description | Example |
|---|---|---|
| weight | Weight | `server 192.168.1.100:8080 weight=3;` |
| max_fails | Maximum number of failed attempts | `server 192.168.1.100:8080 max_fails=3;` |
| fail_timeout | Failure window / downtime after being marked failed | `server 192.168.1.100:8080 fail_timeout=30s;` |
| backup | Backup server | `server 192.168.1.100:8080 backup;` |
| down | Marked as permanently unavailable | `server 192.168.1.100:8080 down;` |
Complete configuration example:
```nginx
upstream backend {
    server 192.168.1.100:8080 weight=3 max_fails=2 fail_timeout=10s;
    server 192.168.1.101:8080 weight=2 max_fails=2 fail_timeout=10s;
    server 192.168.1.102:8080 weight=1 max_fails=2 fail_timeout=10s backup;
}
```
3.2 Configuring Load Balancing Algorithms
3.2.1 Round Robin (Default)
```nginx
upstream backend {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
}
```
3.2.2 Weighted Round Robin
```nginx
upstream backend {
    server 192.168.1.100:8080 weight=3;
    server 192.168.1.101:8080 weight=2;
    server 192.168.1.102:8080 weight=1;
}
```
3.2.3 IP Hash
```nginx
upstream backend {
    ip_hash;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
}
```
Characteristics:
- The same client IP always reaches the same server
- Provides session persistence
- If a server goes down, its clients are re-hashed to the remaining servers
3.2.4 Least Connections
```nginx
upstream backend {
    least_conn;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
}
```
Typical use cases: requests with widely varying processing times, or long-lived connections, where plain round robin would leave some servers holding far more active connections than others.
3.2.5 Consistent Hashing (Requires a Third-Party Module)
```nginx
upstream backend {
    consistent_hash $request_uri;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
}
```
Characteristics: when servers are added or removed, only a small share of keys gets remapped, which makes consistent hashing a good fit for cache clusters. A stock-Nginx alternative is sketched below.
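If building a third-party module is not an option, stock Nginx offers a similar effect through the built-in hash directive with the consistent parameter (ketama consistent hashing). A minimal sketch:

```nginx
upstream backend {
    # built-in ngx_http_upstream_hash_module; "consistent" enables ketama hashing
    hash $request_uri consistent;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
}
```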
3.3 Health Checks
3.3.1 Passive Health Checks
Nginx's built-in (passive) health check:
```nginx
upstream backend {
    server 192.168.1.100:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.101:8080 max_fails=3 fail_timeout=30s;
}
```
How it works:
- Each failed request increments the server's failure counter
- Once max_fails is reached, the server is marked unavailable
- After fail_timeout elapses, the server is tried again
What counts as a "failure" is controlled on the proxy side by proxy_next_upstream, as sketched below.
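A minimal sketch of a common proxy_next_upstream combination (connection errors, timeouts, and 5xx responses); the exact values are illustrative, not prescriptive:

```nginx
server {
    location / {
        proxy_pass http://backend;
        # these conditions fail over to the next server and count as failed attempts
        proxy_next_upstream error timeout http_502 http_503 http_504;
        proxy_next_upstream_tries 2;    # try at most two servers per request
        proxy_connect_timeout 2s;
        proxy_read_timeout 10s;
    }
}
```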
3.3.2 Active Health Checks (Requires a Third-Party Module)
nginx_upstream_check_module:
```nginx
upstream backend {
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;

    check interval=3000 rise=2 fall=3 timeout=1000 type=http;
    check_http_send "GET /health HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}
```
Parameters:
- interval: check interval in milliseconds
- rise: consecutive successes before a server is marked healthy
- fall: consecutive failures before a server is marked unhealthy
- timeout: check timeout
- type: check type (http, tcp, and so on)
The commercial Nginx Plus ships its own built-in active health check; a sketch follows this list.
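For comparison, a minimal Nginx Plus sketch (the /health URI is an assumption about the backend; open-source Nginx does not have this directive):

```nginx
upstream backend {
    zone backend 64k;            # shared-memory zone, required for health_check
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}

server {
    location / {
        proxy_pass http://backend;
        # Nginx Plus only: probe /health every 5s; 3 failures mark a server down,
        # 2 successes bring it back
        health_check uri=/health interval=5 fails=3 passes=2;
    }
}
```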
3.4 Session Persistence
3.4.1 The ip_hash Approach
```nginx
upstream backend {
    ip_hash;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}
```
Characteristics:
- Simple to use
- The same client IP always reaches the same server
- If a server goes down, its clients are re-hashed to other servers
3.4.2 The sticky Module (Requires a Third-Party Module)
```nginx
upstream backend {
    sticky cookie srv_id expires=1h domain=.example.com path=/;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}
```
Characteristics:
- Cookie-based session persistence
- More flexible than ip_hash
- Supports expires, domain, path, and other parameters
If third-party modules are not available, a rough cookie-based substitute using stock Nginx is sketched below.
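A minimal sketch, assuming the application already sets a JSESSIONID session cookie (any stable cookie name works); note that requests without the cookie all hash to the same bucket, so this is only an approximation of sticky sessions:

```nginx
upstream backend {
    # route by the value of the JSESSIONID cookie set by the application
    hash $cookie_JSESSIONID consistent;
    server 192.168.1.100:8080;
    server 192.168.1.101:8080;
}
```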
4. Database Load Balancing
4.1 MySQL Load Balancing
4.1.1 MySQL Read/Write Splitting Architecture
Architecture:
```
Application servers
    ↓
Nginx (TCP load balancing)
    ↓
   ├──→ MySQL primary (writes)
   └──→ MySQL replicas (reads)
```
4.1.2 Nginx TCP Load Balancing Configuration
Building Nginx with TCP load balancing support:
Since Nginx 1.9.0, TCP/UDP proxying is provided by the built-in stream module, which is what the configuration below relies on (the older third-party nginx_tcp_proxy_module uses a separate tcp {} syntax and is not required). Enable the stream module at build time; most official packages already include it:
```bash
# in the Nginx source directory
./configure --with-stream
make && make install
```
Configuring TCP load balancing:
```nginx
stream {
    upstream mysql_master {
        server 192.168.1.100:3306 weight=1;
        server 192.168.1.101:3306 weight=1 backup;
    }

    upstream mysql_slave {
        server 192.168.1.102:3306 weight=3;
        server 192.168.1.103:3306 weight=2;
        server 192.168.1.104:3306 weight=1;
    }

    server {
        listen 3307;
        proxy_pass mysql_master;
        proxy_connect_timeout 3s;
        proxy_timeout 10m;   # idle timeout; a 1s value would cut off quiet client sessions
        error_log /var/log/nginx/mysql_master.log;
    }

    server {
        listen 3308;
        proxy_pass mysql_slave;
        proxy_connect_timeout 3s;
        proxy_timeout 10m;
        error_log /var/log/nginx/mysql_slave.log;
    }
}
```
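One detail that is easy to miss: the stream block sits at the top level of nginx.conf, parallel to http, not inside it. A layout sketch (the include paths are illustrative):

```nginx
# /etc/nginx/nginx.conf -- layout sketch
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    # L7 (HTTP) upstreams and virtual servers
    include /etc/nginx/conf.d/*.conf;
}

stream {
    # L4 (TCP) upstreams and servers, e.g. the MySQL proxies above
    include /etc/nginx/stream.d/*.conf;
}
```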
4.1.3 Read/Write Splitting at the Application Layer
Read/write splitting with MyCat:
```xml
<dataNode name="dn1" dataHost="localhost1" database="testdb" />

<dataHost name="localhost1" maxCon="1000" minCon="10" balance="1"
          writeType="0" dbType="mysql" dbDriver="native">
    <heartbeat>select user()</heartbeat>
    <writeHost host="hostM1" url="192.168.1.100:3306" user="root" password="123456">
        <readHost host="hostS1" url="192.168.1.102:3306" user="root" password="123456"/>
        <readHost host="hostS2" url="192.168.1.103:3306" user="root" password="123456"/>
    </writeHost>
</dataHost>
```
Read/write splitting with ShardingSphere:
```yaml
dataSources:
  master:
    url: jdbc:mysql://192.168.1.100:3306/testdb
    username: root
    password: 123456
  slave1:
    url: jdbc:mysql://192.168.1.102:3306/testdb
    username: root
    password: 123456
  slave2:
    url: jdbc:mysql://192.168.1.103:3306/testdb
    username: root
    password: 123456

masterSlaveRule:
  name: ms_ds
  masterDataSourceName: master
  slaveDataSourceNames:
    - slave1
    - slave2
  loadBalanceAlgorithmType: ROUND_ROBIN
```
4.1.4 MySQL Primary/Replica Replication
Primary configuration:
```ini
[mysqld]
server-id     = 1
log-bin       = mysql-bin
binlog-format = ROW
```
Replica configuration:
```ini
[mysqld]
server-id = 2
relay-log = mysql-relay-bin
read-only = 1
```
Setting up replication:
```sql
-- on the primary
CREATE USER 'repl'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- on the replica
CHANGE MASTER TO
    MASTER_HOST='192.168.1.100',
    MASTER_USER='repl',
    MASTER_PASSWORD='password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=154;

START SLAVE;
```
4.2 Oracle Load Balancing
4.2.1 Oracle RAC Load Balancing
Oracle RAC architecture:
```
Application servers
    ↓
Nginx (TCP load balancing)
    ↓
   ├──→ Oracle RAC node 1
   ├──→ Oracle RAC node 2
   └──→ Oracle RAC node 3
```
Nginx configuration:
```nginx
stream {
    upstream oracle_rac {
        server 192.168.1.100:1521 weight=1;
        server 192.168.1.101:1521 weight=1;
        server 192.168.1.102:1521 weight=1;
    }

    server {
        listen 1522;
        proxy_pass oracle_rac;
        proxy_timeout 3s;
        proxy_connect_timeout 1s;
    }
}
```
4.2.2 Oracle Connection String Configuration
TNS configuration:
```
ORACLE_RAC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.100)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.101)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.102)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```
JDBC connection string:
```java
String url = "jdbc:oracle:thin:@(DESCRIPTION="
        + "(ADDRESS_LIST="
        + "(LOAD_BALANCE=ON)"
        + "(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.100)(PORT=1521))"
        + "(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.101)(PORT=1521))"
        + ")"
        + "(CONNECT_DATA=(SERVICE_NAME=orcl)))";
```
4.2.3 Oracle Data Guard Load Balancing
Primary/standby failover configuration:
```nginx
stream {
    upstream oracle_primary {
        server 192.168.1.100:1521 weight=10;
        server 192.168.1.101:1521 weight=1 backup;
    }

    server {
        listen 1522;
        proxy_pass oracle_primary;
        proxy_timeout 3s;
    }
}
```
4.3 PostgreSQL Load Balancing
4.3.1 PostgreSQL Streaming Replication
Primary configuration:
```ini
# postgresql.conf
wal_level = replica
max_wal_senders = 3
wal_keep_segments = 16     # wal_keep_size in PostgreSQL 13 and later

# pg_hba.conf
host replication repl 192.168.1.0/24 md5
```
Standby configuration:
```bash
# take a base backup from the primary
pg_basebackup -h 192.168.1.100 -D /var/lib/postgresql/data -U repl -v -P -W

# recovery.conf (PostgreSQL 11 and earlier; from version 12 on, put
# primary_conninfo in postgresql.conf and create a standby.signal file instead)
standby_mode = 'on'
primary_conninfo = 'host=192.168.1.100 port=5432 user=repl'
```
4.3.2 Nginx Load Balancing Configuration
```nginx
stream {
    upstream postgresql_read {
        least_conn;
        server 192.168.1.102:5432 weight=3;
        server 192.168.1.103:5432 weight=2;
        server 192.168.1.104:5432 weight=1;
    }

    upstream postgresql_write {
        server 192.168.1.100:5432 weight=1;
        server 192.168.1.101:5432 weight=1 backup;
    }

    server {
        listen 5433;
        proxy_pass postgresql_read;
        proxy_timeout 3s;
    }

    server {
        listen 5434;
        proxy_pass postgresql_write;
        proxy_timeout 3s;
    }
}
```
4.3.3 Connection Pooling with PgBouncer
PgBouncer configuration:
```ini
[databases]
testdb = host=192.168.1.100 port=5432 dbname=testdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
```
Load balancing PgBouncer instances with Nginx:
```nginx
stream {
    upstream pgbouncer {
        server 192.168.1.200:6432;
        server 192.168.1.201:6432;
        server 192.168.1.202:6432;
    }

    server {
        listen 6433;
        proxy_pass pgbouncer;
    }
}
```
4.4 SQL Server Load Balancing
4.4.1 SQL Server AlwaysOn Availability Groups
AlwaysOn architecture:
```
Application servers
    ↓
Nginx (TCP load balancing)
    ↓
   ├──→ SQL Server primary replica (read/write)
   └──→ SQL Server secondary replicas (read-only)
```
Nginx configuration:
```nginx
stream {
    upstream sqlserver_primary {
        server 192.168.1.100:1433 weight=10;
        server 192.168.1.101:1433 weight=1 backup;
    }

    upstream sqlserver_secondary {
        server 192.168.1.102:1433 weight=3;
        server 192.168.1.103:1433 weight=2;
    }

    server {
        listen 1434;
        proxy_pass sqlserver_primary;
        proxy_timeout 3s;
    }

    server {
        listen 1435;
        proxy_pass sqlserver_secondary;
        proxy_timeout 3s;
    }
}
```
4.4.2 SQL Server Connection Strings
ADO.NET connection strings:
```csharp
// writes go through the Nginx listener on port 1434
string writeConnection = "Server=192.168.1.100,1434;Database=testdb;User Id=sa;Password=123456;";

// reads go through the read-only listener on port 1435
string readConnection = "Server=192.168.1.100,1435;Database=testdb;User Id=sa;Password=123456;ApplicationIntent=ReadOnly;";
```
JDBC connection strings:
```java
String writeUrl = "jdbc:sqlserver://192.168.1.100:1434;databaseName=testdb";

String readUrl = "jdbc:sqlserver://192.168.1.100:1435;databaseName=testdb;applicationIntent=ReadOnly";
```
4.5 MongoDB Load Balancing
4.5.1 MongoDB Replica Sets
Replica set configuration:
```javascript
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "192.168.1.100:27017", priority: 10 },
    { _id: 1, host: "192.168.1.101:27017", priority: 5 },
    { _id: 2, host: "192.168.1.102:27017", priority: 1, arbiterOnly: true }
  ]
});
```
Nginx load balancing configuration:
```nginx
stream {
    upstream mongodb {
        server 192.168.1.100:27017 weight=10;
        server 192.168.1.101:27017 weight=5;
    }

    server {
        listen 27018;
        proxy_pass mongodb;
        proxy_timeout 3s;
    }
}
```
MongoDB connection string (note that a replica-set-aware driver connects to the members directly and discovers the primary itself, so most applications use this form rather than the TCP proxy above):
```java
String uri = "mongodb://192.168.1.100:27017,192.168.1.101:27017/testdb?replicaSet=rs0";
```
4.6 Redis Load Balancing
4.6.1 Redis Primary/Replica Replication
Replica configuration (redis.conf on the replica):
```ini
# Redis 5+ uses the equivalent replicaof / replica-read-only directives
slaveof 192.168.1.100 6379
slave-read-only yes
```
Nginx load balancing configuration:
```nginx
stream {
    upstream redis_master {
        server 192.168.1.100:6379 weight=10;
    }

    upstream redis_slave {
        server 192.168.1.101:6379 weight=3;
        server 192.168.1.102:6379 weight=2;
    }

    server {
        listen 6380;
        proxy_pass redis_master;
    }

    server {
        listen 6381;
        proxy_pass redis_slave;
    }
}
```
4.6.2 Redis Cluster Load Balancing
Creating the cluster:
```bash
redis-cli --cluster create \
    192.168.1.100:6379 \
    192.168.1.101:6379 \
    192.168.1.102:6379 \
    192.168.1.103:6379 \
    192.168.1.104:6379 \
    192.168.1.105:6379 \
    --cluster-replicas 1
```
Load balancing Redis Cluster with Nginx (note that cluster-aware clients follow MOVED/ASK redirects to individual node addresses, so a TCP proxy mainly helps with the initial connection; most clients should be given the node addresses directly):
```nginx
stream {
    upstream redis_cluster {
        hash $remote_addr consistent;
        server 192.168.1.100:6379;
        server 192.168.1.101:6379;
        server 192.168.1.102:6379;
        server 192.168.1.103:6379;
        server 192.168.1.104:6379;
        server 192.168.1.105:6379;
    }

    server {
        listen 6380;
        proxy_pass redis_cluster;
    }
}
```
4.7 Other Databases
4.7.1 DB2 Load Balancing
```nginx
stream {
    upstream db2 {
        server 192.168.1.100:50000 weight=1;
        server 192.168.1.101:50000 weight=1;
    }

    server {
        listen 50001;
        proxy_pass db2;
    }
}
```
4.7.2 Informix Load Balancing
```nginx
stream {
    upstream informix {
        server 192.168.1.100:9088 weight=1;
        server 192.168.1.101:9088 weight=1;
    }

    server {
        listen 9089;
        proxy_pass informix;
    }
}
```
5. High Availability Architecture
5.1 Nginx Active/Standby Architecture
5.1.1 Keepalived + Nginx
Architecture:
```
User requests
    ↓
Virtual IP (VIP)
    ↓
   ├──→ Nginx master (192.168.1.10)
   └──→ Nginx backup (192.168.1.11)
```
Keepalived configuration (master):
```
# /etc/keepalived/keepalived.conf
global_defs {
    router_id nginx_master
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        chk_nginx
    }
}
```
Keepalived configuration (backup):
```
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    # ... the rest is identical to the master configuration
}
```
Health check script:
```bash
#!/bin/bash
# If nginx is not running, try to restart it once; if it is still not running,
# exit non-zero so keepalived lowers this node's priority and fails over.
if ! pgrep -x nginx > /dev/null; then
    systemctl start nginx
    sleep 2
    if ! pgrep -x nginx > /dev/null; then
        exit 1
    fi
fi
exit 0
```
5.2 Nginx Dual-Master Architecture
5.2.1 DNS Round Robin
DNS configuration:
```
www.example.com    A    192.168.1.10
www.example.com    A    192.168.1.11
```
Characteristics:
- Simple to set up
- Failover takes effect only after the DNS TTL expires
- Not suitable when real-time switchover is required
5.2.2 Intelligent DNS
With an intelligent DNS service:
- Clients are directed to the geographically closest Nginx
- Traffic is switched automatically based on health status
- Failover is supported
5.3 Database High Availability
5.3.1 MySQL MHA Architecture
Architecture:
```
Application servers
    ↓
Nginx (load balancing)
    ↓
   ├──→ MySQL primary
   ├──→ MySQL replica 1
   └──→ MySQL replica 2
          ↓
MHA Manager (failover)
```
MHA configuration:
```ini
# /etc/mha/app1.cnf
[server default]
manager_workdir=/var/log/mha/app1
manager_log=/var/log/mha/app1/manager.log
master_binlog_dir=/var/lib/mysql
user=mha
password=mha
ping_interval=3
remote_workdir=/tmp
repl_user=repl
repl_password=repl
ssh_user=root

[server1]
hostname=192.168.1.100
port=3306
candidate_master=1

[server2]
hostname=192.168.1.101
port=3306
candidate_master=1

[server3]
hostname=192.168.1.102
port=3306
no_master=1
```
5.3.2 Oracle Data Guard Architecture
Architecture:
```
Application servers
    ↓
Nginx (load balancing)
    ↓
   ├──→ Oracle primary
   └──→ Oracle standby
          ↓
Data Guard Broker (automatic switchover)
```
Data Guard Broker configuration:
```
DGMGRL> CREATE CONFIGURATION dg_config AS
          PRIMARY DATABASE IS orcl_primary
          CONNECT IDENTIFIER IS orcl_primary;

DGMGRL> ADD DATABASE orcl_standby AS
          CONNECT IDENTIFIER IS orcl_standby
          MAINTAINED AS PHYSICAL;

DGMGRL> ENABLE CONFIGURATION;
```
6. Performance Optimization
6.1 Connection Optimization
6.1.1 Connection Pooling
Nginx upstream keepalive connections:
```nginx
upstream backend {
    server 192.168.1.100:8080;
    keepalive 32;
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```
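On recent Nginx versions (keepalive_requests and keepalive_timeout became valid inside upstream around 1.15.3), the cached connections can be tuned further; a sketch with illustrative values:

```nginx
upstream backend {
    server 192.168.1.100:8080;

    keepalive 32;              # idle keepalive connections cached per worker process
    keepalive_requests 1000;   # recycle a connection after this many requests
    keepalive_timeout 60s;     # close idle upstream connections after 60 seconds
}
```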
Database connection pool configuration (HikariCP):
```java
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://192.168.1.100:3306/testdb");
config.setUsername("root");
config.setPassword("123456");
config.setMaximumPoolSize(20);
config.setMinimumIdle(5);
config.setConnectionTimeout(30000);
config.setIdleTimeout(600000);
config.setMaxLifetime(1800000);
```
6.2 Caching
6.2.1 Nginx Proxy Cache
```nginx
http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=10g;

    upstream backend {
        server 192.168.1.100:8080;
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_cache my_cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $host$uri$is_args$args;
        }
    }
}
```
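Two related directives are worth considering when the backend is under pressure: serving stale content while a backend is failing or an entry is being refreshed, and collapsing concurrent misses into a single upstream request. A sketch with illustrative values:

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_valid 200 304 12h;

    # serve a stale copy if the backend errors out or while the entry is refreshed
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_background_update on;

    # let only one request populate a missing cache entry at a time
    proxy_cache_lock on;

    # expose HIT/MISS/STALE for debugging
    add_header X-Cache-Status $upstream_cache_status;
}
```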
6.3 Compression
```nginx
http {
    gzip on;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;    # also compress responses received from proxied upstreams
    gzip_types text/plain text/css application/json application/javascript;

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}
```
7. Monitoring and Alerting
7.1 Nginx Status Monitoring
7.1.1 The stub_status Module
```nginx
location /nginx_status {
    stub_status on;
    access_log off;
    allow 127.0.0.1;
    allow 192.168.1.0/24;
    deny all;
}
```
Metrics:
- Active connections: currently active connections
- accepts: total accepted connections
- handled: total handled connections
- requests: total requests
- Reading: connections currently reading request headers
- Writing: connections currently writing responses
- Waiting: idle keepalive connections
The endpoint returns plain text; illustrative sample output follows this list.
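For reference, the response has the following shape (the numbers here are purely illustrative), which is what the script in the next section parses:

```
Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
```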
7.1.2 Monitoring Script
```bash
#!/bin/bash
# Scrape the stub_status endpoint and print a few key counters.
STATUS=$(curl -s http://localhost/nginx_status)

# "Active connections: N" -> the third field is N
ACTIVE=$(echo "$STATUS" | grep "Active connections" | awk '{print $3}')
# the third line holds the accepts/handled/requests counters; field 3 is total requests
REQUESTS=$(echo "$STATUS" | awk 'NR==3 {print $3}')

echo "Active Connections: $ACTIVE"
echo "Total Requests: $REQUESTS"
```
7.2 Database Monitoring
7.2.1 MySQL Monitoring
```sql
-- replication status
SHOW SLAVE STATUS\G

-- current connections
SHOW STATUS LIKE 'Threads_connected';

-- slow query log setting
SHOW VARIABLES LIKE 'slow_query_log';
```
7.2.2 Prometheus Monitoring
Nginx exporter scrape configuration:
```yaml
scrape_configs:
  - job_name: 'nginx'
    static_configs:
      - targets: ['192.168.1.10:9113']
```
8. Case Studies
8.1 Case 1: Load Balancing for an E-Commerce System
8.1.1 Architecture
```
User requests
    ↓
CDN
    ↓
Nginx (load balancing)
    ↓
   ├──→ Web server 1 (Tomcat)
   ├──→ Web server 2 (Tomcat)
   └──→ Web server 3 (Tomcat)
          ↓
Nginx (database load balancing)
          ↓
   ├──→ MySQL primary (writes)
   └──→ MySQL replicas (reads)
```
8.1.2 Configuration Example
```nginx
# http context: web tier
upstream web_backend {
    least_conn;
    server 192.168.1.100:8080 weight=3;
    server 192.168.1.101:8080 weight=2;
    server 192.168.1.102:8080 weight=1;
}

# top-level stream context: database tier
stream {
    upstream mysql_write {
        server 192.168.1.200:3306;
    }

    upstream mysql_read {
        least_conn;
        server 192.168.1.201:3306 weight=3;
        server 192.168.1.202:3306 weight=2;
    }

    server {
        listen 3307;
        proxy_pass mysql_write;
    }

    server {
        listen 3308;
        proxy_pass mysql_read;
    }
}
```
8.2 Case 2: Load Balancing in a Microservice Architecture
8.2.1 Architecture
```
API Gateway (Nginx)
    ↓
   ├──→ User Service
   ├──→ Order Service
   ├──→ Payment Service
   └──→ Product Service
          ↓
   Database clusters
   ├──→ MySQL cluster
   ├──→ Redis cluster
   └──→ MongoDB cluster
```
8.2.2 Configuration Example
```nginx
upstream user_service {
    server 192.168.1.100:8001;
    server 192.168.1.101:8001;
}

upstream order_service {
    server 192.168.1.100:8002;
    server 192.168.1.101:8002;
}

server {
    listen 80;

    location /api/user/ {
        proxy_pass http://user_service;
    }

    location /api/order/ {
        proxy_pass http://order_service;
    }
}
```
9. Summary
9.1 Key Points
- Load balancing fundamentals: concepts, algorithms, types
- Nginx configuration: the upstream module, algorithm selection, health checks
- Database load balancing: MySQL, Oracle, PostgreSQL, SQL Server, MongoDB, Redis, and more
- High availability: active/standby, dual-master, clusters
- Performance optimization: connection pooling, caching, compression
- Monitoring and alerting: status and performance monitoring
9.2 Recommendations for Architects
Choosing an algorithm:
- General case: round robin or weighted round robin
- Session persistence required: ip_hash or sticky
- Long-lived connections: least_conn
Database load balancing:
- Read/write splitting: writes to the primary, reads to the replicas
- High availability: primary/standby failover, clusters
- Connection pooling: size connection counts sensibly
Monitoring and alerting:
- Monitor the load balancer's status in real time
- Monitor backend server health
- Define alert thresholds
9.3 Best Practices
- Standardize: a unified load balancing configuration standard
- Automate: automated configuration and failover
- Monitor: real-time monitoring and alerting
- Document: keep configuration documents and architecture diagrams current