Episode 299: Druid Connection Pool Configuration Analysis and Optimization in Practice: Performance Tuning, Monitoring and Alerting, and an Enterprise-Grade Database Connection Pool Solution
Preface
Connection pool configuration analysis and optimization is one of the core capabilities of an enterprise-grade database connection pool, and it directly affects the database access performance and stability of a system. With a well-designed pool configuration strategy and a complete performance monitoring mechanism, you can build a stable and reliable connection pool that sustains high-concurrency database access for enterprise applications. This article walks through a complete solution for Druid connection pool configuration analysis and optimization, from pool architecture design to configuration tuning, and from basic principles to enterprise practice.
1. Druid Connection Pool Architecture Design
1.1 Overall Connection Pool Architecture
1.2 Core Connection Pool Components
```java
// Lombok's @Slf4j supplies the `log` field used throughout these examples.
@Slf4j
@Component
public class DruidConnectionPoolManager {

    @Autowired private DruidDataSource druidDataSource;
    @Autowired private ConnectionPoolConfigService configService;
    @Autowired private ConnectionPoolMonitorService monitorService;
    @Autowired private ConnectionPoolOptimizer optimizer;
    @Autowired private ConnectionPoolHealthChecker healthChecker;

    public void initializeConnectionPool() {
        try {
            loadConnectionPoolConfig();
            initializeDataSource();
            configureMonitoring();
            startHealthCheck();
            startPerformanceOptimization();
            log.info("Druid connection pool initialized");
        } catch (Exception e) {
            log.error("Druid connection pool initialization failed", e);
            throw new ConnectionPoolInitializationException("Connection pool initialization failed", e);
        }
    }

    public void startConnectionPool() {
        try {
            checkConnectionPoolStatus();
            warmupConnectionPool();
            monitorService.startMonitoring();
            validateConnectionPoolHealth();
            log.info("Druid connection pool started");
        } catch (Exception e) {
            log.error("Druid connection pool start failed", e);
            throw new ConnectionPoolStartException("Connection pool start failed", e);
        }
    }

    public void stopConnectionPool() {
        try {
            monitorService.stopMonitoring();
            stopHealthCheck();
            closeDataSource();
            log.info("Druid connection pool stopped");
        } catch (Exception e) {
            log.error("Druid connection pool stop failed", e);
        }
    }

    // The individual steps are kept as logging stubs in this article; the helpers referenced by
    // startConnectionPool()/stopConnectionPool() (checkConnectionPoolStatus, warmupConnectionPool,
    // validateConnectionPoolHealth, stopHealthCheck, closeDataSource) are assumed to follow the same pattern.
    private void loadConnectionPoolConfig() { log.info("Loading connection pool configuration"); }
    private void initializeDataSource() { log.info("Initializing data source"); }
    private void configureMonitoring() { log.info("Configuring monitoring and statistics"); }
    private void startHealthCheck() { log.info("Starting health check"); }
    private void startPerformanceOptimization() { log.info("Starting performance optimization"); }
}
```
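Since loadConnectionPoolConfig(), initializeDataSource(), and the warm-up step referenced by startConnectionPool() are left as stubs above, the following is a minimal, self-contained sketch of what eager initialization and warm-up can look like with plain Druid APIs (DruidDataSource.init() and getConnection()). The class name DruidPoolBootstrap and the pool sizes are illustrative assumptions, not part of the original design.
```java
import com.alibaba.druid.pool.DruidDataSource;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class DruidPoolBootstrap {

    // Build and eagerly initialize a DruidDataSource; init() pre-creates `initialSize` physical
    // connections, so bad URLs or credentials fail at startup instead of on the first request.
    public static DruidDataSource createDataSource(String url, String user, String password) throws SQLException {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl(url);
        ds.setUsername(user);
        ds.setPassword(password);
        ds.setInitialSize(5);
        ds.setMinIdle(5);
        ds.setMaxActive(20);
        ds.setMaxWait(60_000);
        ds.init();
        return ds;
    }

    // Simple warm-up: borrow a batch of connections, then return them all to the pool.
    public static void warmUp(DruidDataSource ds, int connections) throws SQLException {
        List<Connection> held = new ArrayList<>();
        try {
            for (int i = 0; i < connections; i++) {
                held.add(ds.getConnection());
            }
        } finally {
            for (Connection c : held) {
                try {
                    c.close(); // for a pooled connection, close() returns it to the pool
                } catch (SQLException ignored) {
                    // best-effort during warm-up
                }
            }
        }
    }
}
```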
2. Druid Connection Pool Configuration Management
2.1 Connection Pool Configuration Service
```java
@Service public class DruidConnectionPoolConfigService {
@Autowired private ConfigurationRepository configRepository;
@Autowired private DruidDataSource druidDataSource;
public DruidConnectionPoolConfig getConnectionPoolConfig(String configId) { return configRepository.findConnectionPoolConfigById(configId) .orElseThrow(() -> new ConfigNotFoundException("连接池配置不存在: " + configId)); }
public void saveConnectionPoolConfig(DruidConnectionPoolConfig config) { try { validateConnectionPoolConfig(config);
configRepository.saveConnectionPoolConfig(config);
applyConnectionPoolConfig(config);
log.info("连接池配置保存成功: {}", config.getConfigId());
} catch (Exception e) { log.error("连接池配置保存失败", e); throw new ConfigSaveException("连接池配置保存失败", e); } }
public void updateConnectionPoolConfig(String configId, DruidConnectionPoolConfig config) { try { if (!configRepository.existsConnectionPoolConfig(configId)) { throw new ConfigNotFoundException("连接池配置不存在: " + configId); }
validateConnectionPoolConfig(config);
config.setConfigId(configId); configRepository.saveConnectionPoolConfig(config);
applyConnectionPoolConfig(config);
log.info("连接池配置更新成功: {}", configId);
} catch (Exception e) { log.error("连接池配置更新失败", e); throw new ConfigUpdateException("连接池配置更新失败", e); } }
public void deleteConnectionPoolConfig(String configId) { try { if (!configRepository.existsConnectionPoolConfig(configId)) { throw new ConfigNotFoundException("连接池配置不存在: " + configId); }
configRepository.deleteConnectionPoolConfig(configId);
log.info("连接池配置删除成功: {}", configId);
} catch (Exception e) { log.error("连接池配置删除失败", e); throw new ConfigDeleteException("连接池配置删除失败", e); } }
public List<DruidConnectionPoolConfig> getAllConnectionPoolConfigs() { return configRepository.findAllConnectionPoolConfigs(); }
private void applyConnectionPoolConfig(DruidConnectionPoolConfig config) { try { druidDataSource.setUrl(config.getUrl()); druidDataSource.setUsername(config.getUsername()); druidDataSource.setPassword(config.getPassword()); druidDataSource.setDriverClassName(config.getDriverClassName());
druidDataSource.setInitialSize(config.getInitialSize()); druidDataSource.setMinIdle(config.getMinIdle()); druidDataSource.setMaxActive(config.getMaxActive()); druidDataSource.setMaxWait(config.getMaxWait());
druidDataSource.setTimeBetweenEvictionRunsMillis(config.getTimeBetweenEvictionRunsMillis()); druidDataSource.setMinEvictableIdleTimeMillis(config.getMinEvictableIdleTimeMillis()); druidDataSource.setMaxEvictableIdleTimeMillis(config.getMaxEvictableIdleTimeMillis()); druidDataSource.setValidationQuery(config.getValidationQuery()); druidDataSource.setValidationQueryTimeout(config.getValidationQueryTimeout()); druidDataSource.setTestWhileIdle(config.isTestWhileIdle()); druidDataSource.setTestOnBorrow(config.isTestOnBorrow()); druidDataSource.setTestOnReturn(config.isTestOnReturn());
druidDataSource.setFilters(config.getFilters()); druidDataSource.setStatLoggerClassName(config.getStatLoggerClassName()); druidDataSource.setStatLoggerLevel(config.getStatLoggerLevel()); druidDataSource.setUseGlobalDataSourceStat(config.isUseGlobalDataSourceStat());
druidDataSource.setConnectionProperties(config.getConnectionProperties()); druidDataSource.setPasswordCallbackClassName(config.getPasswordCallbackClassName()); druidDataSource.setPasswordCallback(config.getPasswordCallback());
log.info("连接池配置应用成功");
} catch (Exception e) { log.error("连接池配置应用失败", e); throw new ConfigApplyException("连接池配置应用失败", e); } }
private void validateConnectionPoolConfig(DruidConnectionPoolConfig config) { if (config.getConfigId() == null || config.getConfigId().isEmpty()) { throw new ConfigValidationException("配置ID不能为空"); }
if (config.getUrl() == null || config.getUrl().isEmpty()) { throw new ConfigValidationException("数据库URL不能为空"); }
if (config.getUsername() == null || config.getUsername().isEmpty()) { throw new ConfigValidationException("数据库用户名不能为空"); }
if (config.getDriverClassName() == null || config.getDriverClassName().isEmpty()) { throw new ConfigValidationException("数据库驱动类名不能为空"); }
if (config.getInitialSize() < 0) { throw new ConfigValidationException("初始连接数不能小于0"); }
if (config.getMinIdle() < 0) { throw new ConfigValidationException("最小空闲连接数不能小于0"); }
if (config.getMaxActive() <= 0) { throw new ConfigValidationException("最大活跃连接数必须大于0"); }
if (config.getMinIdle() > config.getMaxActive()) { throw new ConfigValidationException("最小空闲连接数不能大于最大活跃连接数"); }
if (config.getInitialSize() > config.getMaxActive()) { throw new ConfigValidationException("初始连接数不能大于最大活跃连接数"); }
if (config.getMaxWait() <= 0) { throw new ConfigValidationException("最大等待时间必须大于0"); }
if (config.getTimeBetweenEvictionRunsMillis() <= 0) { throw new ConfigValidationException("空闲连接回收器运行间隔时间必须大于0"); }
if (config.getMinEvictableIdleTimeMillis() <= 0) { throw new ConfigValidationException("连接在池中最小生存时间必须大于0"); }
if (config.getValidationQuery() == null || config.getValidationQuery().isEmpty()) { throw new ConfigValidationException("验证查询SQL不能为空"); }
if (config.getValidationQueryTimeout() <= 0) { throw new ConfigValidationException("验证查询超时时间必须大于0"); } } }
```
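The configuration service above assumes a shared DruidDataSource bean already exists. A minimal sketch of that wiring is shown below; ConfigurationRepository and DruidConnectionPoolConfig are the article's own abstractions, and the "default" config id is a placeholder, so treat this as one possible bootstrap rather than the definitive setup.
```java
import com.alibaba.druid.pool.DruidDataSource;
import java.sql.SQLException;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DruidDataSourceConfiguration {

    // Spring calls init() after construction and close() on shutdown.
    @Bean(initMethod = "init", destroyMethod = "close")
    public DruidDataSource druidDataSource(ConfigurationRepository configRepository) throws SQLException {
        DruidConnectionPoolConfig cfg = configRepository
                .findConnectionPoolConfigById("default")
                .orElseThrow(() -> new IllegalStateException("missing default connection pool config"));

        DruidDataSource ds = new DruidDataSource();
        ds.setUrl(cfg.getUrl());
        ds.setUsername(cfg.getUsername());
        ds.setPassword(cfg.getPassword());
        ds.setDriverClassName(cfg.getDriverClassName());
        ds.setInitialSize(cfg.getInitialSize());
        ds.setMinIdle(cfg.getMinIdle());
        ds.setMaxActive(cfg.getMaxActive());
        ds.setMaxWait(cfg.getMaxWait());
        ds.setValidationQuery(cfg.getValidationQuery());
        ds.setTestWhileIdle(cfg.isTestWhileIdle());
        ds.setFilters(cfg.getFilters()); // throws SQLException if a filter alias is unknown
        return ds;
    }
}
```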
2.2 Connection Pool Configuration Optimizer
```java
@Service public class DruidConnectionPoolOptimizer {
@Autowired private DruidDataSource druidDataSource;
@Autowired private ConnectionPoolMetricsCollector metricsCollector;
@Autowired private ConnectionPoolAnalyzer analyzer;
public void optimizeConnectionPoolConfig() { try { DruidConnectionPoolConfig currentConfig = getCurrentConfig(); ConnectionPoolMetrics currentMetrics = metricsCollector.collectMetrics();
ConnectionPoolAnalysis analysis = analyzer.analyzeConnectionPool(currentConfig, currentMetrics);
List<OptimizationSuggestion> suggestions = generateOptimizationSuggestions(analysis);
applyOptimizationSuggestions(suggestions);
log.info("连接池配置优化完成");
} catch (Exception e) { log.error("连接池配置优化失败", e); throw new ConnectionPoolOptimizationException("连接池配置优化失败", e); } }
public void autoOptimizeConnectionPoolConfig() { try { List<ConnectionPoolMetrics> historicalMetrics = metricsCollector.collectHistoricalMetrics();
PerformanceTrendAnalysis trendAnalysis = analyzer.analyzePerformanceTrend(historicalMetrics);
PerformancePrediction prediction = analyzer.predictPerformanceNeeds(trendAnalysis);
List<OptimizationSuggestion> autoSuggestions = generateAutoOptimizationSuggestions(prediction);
applyAutoOptimizationSuggestions(autoSuggestions);
log.info("连接池配置自动优化完成");
} catch (Exception e) { log.error("连接池配置自动优化失败", e); throw new ConnectionPoolAutoOptimizationException("连接池配置自动优化失败", e); } }
public void optimizeConnectionPoolSize() { try { ConnectionUsageAnalysis usageAnalysis = analyzer.analyzeConnectionUsage();
OptimalPoolSize optimalSize = calculateOptimalPoolSize(usageAnalysis);
adjustConnectionPoolSize(optimalSize);
log.info("连接池大小优化完成: 初始={}, 最小={}, 最大={}", optimalSize.getInitialSize(), optimalSize.getMinIdle(), optimalSize.getMaxActive());
} catch (Exception e) { log.error("连接池大小优化失败", e); throw new ConnectionPoolSizeOptimizationException("连接池大小优化失败", e); } }
public void optimizeConnectionTimeoutConfig() { try { ConnectionTimeoutAnalysis timeoutAnalysis = analyzer.analyzeConnectionTimeout();
OptimalTimeoutConfig optimalTimeout = calculateOptimalTimeoutConfig(timeoutAnalysis);
adjustTimeoutConfig(optimalTimeout);
log.info("连接超时配置优化完成: 最大等待={}ms", optimalTimeout.getMaxWait());
} catch (Exception e) { log.error("连接超时配置优化失败", e); throw new ConnectionTimeoutOptimizationException("连接超时配置优化失败", e); } }
public void optimizeConnectionEvictionConfig() { try { ConnectionEvictionAnalysis evictionAnalysis = analyzer.analyzeConnectionEviction();
OptimalEvictionConfig optimalEviction = calculateOptimalEvictionConfig(evictionAnalysis);
adjustEvictionConfig(optimalEviction);
log.info("连接回收配置优化完成: 回收间隔={}ms, 最小生存时间={}ms", optimalEviction.getTimeBetweenEvictionRunsMillis(), optimalEviction.getMinEvictableIdleTimeMillis());
} catch (Exception e) { log.error("连接回收配置优化失败", e); throw new ConnectionEvictionOptimizationException("连接回收配置优化失败", e); } }
private DruidConnectionPoolConfig getCurrentConfig() { DruidConnectionPoolConfig config = new DruidConnectionPoolConfig();
config.setUrl(druidDataSource.getUrl()); config.setUsername(druidDataSource.getUsername()); config.setPassword(druidDataSource.getPassword()); config.setDriverClassName(druidDataSource.getDriverClassName()); config.setInitialSize(druidDataSource.getInitialSize()); config.setMinIdle(druidDataSource.getMinIdle()); config.setMaxActive(druidDataSource.getMaxActive()); config.setMaxWait(druidDataSource.getMaxWait()); config.setTimeBetweenEvictionRunsMillis(druidDataSource.getTimeBetweenEvictionRunsMillis()); config.setMinEvictableIdleTimeMillis(druidDataSource.getMinEvictableIdleTimeMillis()); config.setMaxEvictableIdleTimeMillis(druidDataSource.getMaxEvictableIdleTimeMillis()); config.setValidationQuery(druidDataSource.getValidationQuery()); config.setValidationQueryTimeout(druidDataSource.getValidationQueryTimeout()); config.setTestWhileIdle(druidDataSource.isTestWhileIdle()); config.setTestOnBorrow(druidDataSource.isTestOnBorrow()); config.setTestOnReturn(druidDataSource.isTestOnReturn()); config.setFilters(druidDataSource.getFilters()); config.setStatLoggerClassName(druidDataSource.getStatLoggerClassName()); config.setStatLoggerLevel(druidDataSource.getStatLoggerLevel()); config.setUseGlobalDataSourceStat(druidDataSource.isUseGlobalDataSourceStat()); config.setConnectionProperties(druidDataSource.getConnectionProperties());
return config; }
private List<OptimizationSuggestion> generateOptimizationSuggestions(ConnectionPoolAnalysis analysis) { List<OptimizationSuggestion> suggestions = new ArrayList<>();
if (analysis.getConnectionUsageRate() > 0.8) { OptimizationSuggestion suggestion = new OptimizationSuggestion(); suggestion.setType(OptimizationType.POOL_SIZE); suggestion.setPriority(Priority.HIGH); suggestion.setTitle("增加连接池大小"); suggestion.setDescription("当前连接使用率过高,建议增加最大连接数"); suggestion.setCurrentValue(analysis.getMaxActive()); suggestion.setSuggestedValue(analysis.getMaxActive() * 1.5); suggestions.add(suggestion); }
if (analysis.getConnectionTimeoutRate() > 0.1) { OptimizationSuggestion suggestion = new OptimizationSuggestion(); suggestion.setType(OptimizationType.TIMEOUT); suggestion.setPriority(Priority.MEDIUM); suggestion.setTitle("调整连接超时时间"); suggestion.setDescription("当前连接超时率过高,建议增加最大等待时间"); suggestion.setCurrentValue(analysis.getMaxWait()); suggestion.setSuggestedValue(analysis.getMaxWait() * 2); suggestions.add(suggestion); }
if (analysis.getConnectionEvictionRate() > 0.2) { OptimizationSuggestion suggestion = new OptimizationSuggestion(); suggestion.setType(OptimizationType.EVICTION); suggestion.setPriority(Priority.LOW); suggestion.setTitle("调整连接回收策略"); suggestion.setDescription("当前连接回收率过高,建议调整回收间隔"); suggestion.setCurrentValue(analysis.getTimeBetweenEvictionRunsMillis()); suggestion.setSuggestedValue(analysis.getTimeBetweenEvictionRunsMillis() * 1.5); suggestions.add(suggestion); }
return suggestions; }
private void applyOptimizationSuggestions(List<OptimizationSuggestion> suggestions) { for (OptimizationSuggestion suggestion : suggestions) { try { switch (suggestion.getType()) { case POOL_SIZE: adjustConnectionPoolSize(suggestion); break; case TIMEOUT: adjustConnectionTimeout(suggestion); break; case EVICTION: adjustConnectionEviction(suggestion); break; default: log.warn("未知的优化类型: {}", suggestion.getType()); }
log.info("应用优化建议: {}", suggestion.getTitle());
} catch (Exception e) { log.error("应用优化建议失败: {}", suggestion.getTitle(), e); } } }
private OptimalPoolSize calculateOptimalPoolSize(ConnectionUsageAnalysis usageAnalysis) { OptimalPoolSize optimalSize = new OptimalPoolSize();
double avgUsageRate = usageAnalysis.getAverageUsageRate(); int currentMaxActive = usageAnalysis.getCurrentMaxActive();
double targetUsageRate = 0.8; int optimalMaxActive = (int) (currentMaxActive * avgUsageRate / targetUsageRate);
optimalMaxActive = Math.max(optimalMaxActive, 10); optimalMaxActive = Math.min(optimalMaxActive, 100);
optimalSize.setMaxActive(optimalMaxActive); optimalSize.setMinIdle(optimalMaxActive / 4); optimalSize.setInitialSize(optimalMaxActive / 2);
return optimalSize; }
private OptimalTimeoutConfig calculateOptimalTimeoutConfig(ConnectionTimeoutAnalysis timeoutAnalysis) { OptimalTimeoutConfig optimalTimeout = new OptimalTimeoutConfig();
long avgWaitTime = timeoutAnalysis.getAverageWaitTime(); long currentMaxWait = timeoutAnalysis.getCurrentMaxWait();
long optimalMaxWait = Math.max(avgWaitTime * 2, 30000); optimalMaxWait = Math.min(optimalMaxWait, 120000);
optimalTimeout.setMaxWait(optimalMaxWait);
return optimalTimeout; }
private OptimalEvictionConfig calculateOptimalEvictionConfig(ConnectionEvictionAnalysis evictionAnalysis) { OptimalEvictionConfig optimalEviction = new OptimalEvictionConfig();
long avgIdleTime = evictionAnalysis.getAverageIdleTime(); long currentEvictionInterval = evictionAnalysis.getCurrentEvictionInterval();
long optimalEvictionInterval = Math.max(avgIdleTime / 3, 60000); optimalEvictionInterval = Math.min(optimalEvictionInterval, 300000);
long optimalMinIdleTime = optimalEvictionInterval * 2;
optimalEviction.setTimeBetweenEvictionRunsMillis(optimalEvictionInterval); optimalEviction.setMinEvictableIdleTimeMillis(optimalMinIdleTime);
return optimalEviction; }
private void adjustConnectionPoolSize(OptimalPoolSize optimalSize) { druidDataSource.setInitialSize(optimalSize.getInitialSize()); druidDataSource.setMinIdle(optimalSize.getMinIdle()); druidDataSource.setMaxActive(optimalSize.getMaxActive()); }
private void adjustTimeoutConfig(OptimalTimeoutConfig optimalTimeout) { druidDataSource.setMaxWait(optimalTimeout.getMaxWait()); }
private void adjustEvictionConfig(OptimalEvictionConfig optimalEviction) { druidDataSource.setTimeBetweenEvictionRunsMillis(optimalEviction.getTimeBetweenEvictionRunsMillis()); druidDataSource.setMinEvictableIdleTimeMillis(optimalEviction.getMinEvictableIdleTimeMillis()); } }
```
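To make the sizing heuristic in calculateOptimalPoolSize() concrete, here is a small worked example of the same arithmetic with assumed inputs (92% average usage against a cap of 40 connections and a 0.8 target usage rate). It is a standalone illustration, not production code.
```java
public class PoolSizeHeuristicDemo {
    public static void main(String[] args) {
        double avgUsageRate = 0.92;   // observed activeCount / maxActive over the sampling window
        int currentMaxActive = 40;
        double targetUsageRate = 0.8; // keep roughly 20% headroom

        int optimalMaxActive = (int) (currentMaxActive * avgUsageRate / targetUsageRate);
        optimalMaxActive = Math.max(10, Math.min(optimalMaxActive, 100)); // clamp to [10, 100]

        int minIdle = optimalMaxActive / 4;
        int initialSize = optimalMaxActive / 2;

        // 40 * 0.92 / 0.8 = 46 -> maxActive=46, minIdle=11, initialSize=23
        System.out.printf("maxActive=%d, minIdle=%d, initialSize=%d%n", optimalMaxActive, minIdle, initialSize);
    }
}
```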
3. Connection Pool Monitoring and Analysis
3.1 Connection Pool Monitoring Service
```java
@Service public class DruidConnectionPoolMonitorService {
@Autowired private DruidDataSource druidDataSource;
@Autowired private ConnectionPoolMetricsCollector metricsCollector;
@Autowired private ConnectionPoolHealthChecker healthChecker;
@Autowired private AlertService alertService;
private final ScheduledExecutorService monitorScheduler;
public DruidConnectionPoolMonitorService() { this.monitorScheduler = Executors.newScheduledThreadPool(3); }
public void startMonitoring() { monitorScheduler.scheduleAtFixedRate( this::monitorConnectionPool, 0, 30, TimeUnit.SECONDS );
log.info("Druid连接池监控启动成功"); }
public void stopMonitoring() { try { monitorScheduler.shutdown(); if (!monitorScheduler.awaitTermination(30, TimeUnit.SECONDS)) { monitorScheduler.shutdownNow(); }
log.info("Druid连接池监控停止成功");
} catch (Exception e) { log.error("Druid连接池监控停止失败", e); } }
private void monitorConnectionPool() { try { ConnectionPoolMetrics metrics = metricsCollector.collectMetrics();
ConnectionPoolHealthStatus healthStatus = healthChecker.checkHealth(metrics);
if (!healthStatus.isHealthy()) { handleUnhealthyConnectionPool(healthStatus); }
recordMonitoringResult(metrics, healthStatus);
} catch (Exception e) { log.error("连接池监控失败", e); } }
private void handleUnhealthyConnectionPool(ConnectionPoolHealthStatus healthStatus) { try { sendConnectionPoolAlert(healthStatus);
recordConnectionPoolIssue(healthStatus);
} catch (Exception e) { log.error("不健康连接池处理失败", e); } }
private void sendConnectionPoolAlert(ConnectionPoolHealthStatus healthStatus) { ConnectionPoolAlert alert = new ConnectionPoolAlert(); alert.setAlertType(AlertType.CONNECTION_POOL_UNHEALTHY); alert.setSeverity(healthStatus.getSeverity()); alert.setMessage("Druid连接池状态异常"); alert.setHealthStatus(healthStatus); alert.setTimestamp(System.currentTimeMillis());
alertService.sendAlert(alert); }
private void recordMonitoringResult(ConnectionPoolMetrics metrics, ConnectionPoolHealthStatus healthStatus) { ConnectionPoolMonitoringResult result = new ConnectionPoolMonitoringResult(); result.setTimestamp(System.currentTimeMillis()); result.setMetrics(metrics); result.setHealthStatus(healthStatus);
} }
```
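If a dedicated scheduler feels heavyweight, the same 30-second sampling can be done with Spring's @Scheduled and Druid's built-in counters, as in the sketch below. It assumes @EnableScheduling is present on some configuration class; the logger class name is illustrative.
```java
import com.alibaba.druid.pool.DruidDataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class DruidPoolSnapshotLogger {

    private static final Logger log = LoggerFactory.getLogger(DruidPoolSnapshotLogger.class);

    @Autowired
    private DruidDataSource druidDataSource;

    // Sample the pool every 30 seconds; every getter below is a built-in DruidDataSource counter.
    @Scheduled(fixedRate = 30_000)
    public void logSnapshot() {
        log.info("druid pool: active={}, idle={}, waiting={}, connectErrors={}",
                druidDataSource.getActiveCount(),      // connections currently borrowed
                druidDataSource.getPoolingCount(),     // idle connections sitting in the pool
                druidDataSource.getWaitThreadCount(),  // threads blocked waiting for a connection
                druidDataSource.getConnectErrorCount());
    }
}
```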
3.2 Connection Pool Metrics Collector
```java
@Component public class DruidConnectionPoolMetricsCollector {
@Autowired private DruidDataSource druidDataSource;
public ConnectionPoolMetrics collectMetrics() { ConnectionPoolMetrics metrics = new ConnectionPoolMetrics();
try { collectBasicMetrics(metrics);
collectPerformanceMetrics(metrics);
collectSQLMetrics(metrics);
collectConnectionMetrics(metrics);
} catch (Exception e) { log.error("连接池指标收集失败", e); }
return metrics; }
private void collectBasicMetrics(ConnectionPoolMetrics metrics) { try { metrics.setInitialSize(druidDataSource.getInitialSize()); metrics.setMinIdle(druidDataSource.getMinIdle()); metrics.setMaxActive(druidDataSource.getMaxActive()); metrics.setMaxWait(druidDataSource.getMaxWait());
metrics.setActiveCount(druidDataSource.getActiveCount()); metrics.setPoolingCount(druidDataSource.getPoolingCount()); metrics.setWaitThreadCount(druidDataSource.getWaitThreadCount());
} catch (Exception e) { log.error("基础指标收集失败", e); } }
private void collectPerformanceMetrics(ConnectionPoolMetrics metrics) { try { metrics.setConnectCount(druidDataSource.getConnectCount()); metrics.setCloseCount(druidDataSource.getCloseCount()); metrics.setConnectErrorCount(druidDataSource.getConnectErrorCount());
metrics.setNotEmptyWaitCount(druidDataSource.getNotEmptyWaitCount()); metrics.setNotEmptyWaitMillis(druidDataSource.getNotEmptyWaitMillis()); metrics.setNotEmptyWaitThreadCount(druidDataSource.getNotEmptyWaitThreadCount());
metrics.setEvictCount(druidDataSource.getEvictCount()); metrics.setEvictErrorCount(druidDataSource.getEvictErrorCount());
} catch (Exception e) { log.error("性能指标收集失败", e); } }
private void collectSQLMetrics(ConnectionPoolMetrics metrics) { try { DruidStatManagerFacade statManager = DruidStatManagerFacade.getInstance(); Map<String, Object> sqlStatMap = statManager.getSqlStatData();
if (sqlStatMap != null) { metrics.setSqlStatMap(sqlStatMap);
calculateSQLStatistics(metrics, sqlStatMap); }
} catch (Exception e) { log.error("SQL指标收集失败", e); } }
private void collectConnectionMetrics(ConnectionPoolMetrics metrics) { try { metrics.setConnectionHoldTimeHistogram(druidDataSource.getConnectionHoldTimeHistogram()); metrics.setConnectionHoldTimeHistogramRanges(druidDataSource.getConnectionHoldTimeHistogramRanges());
metrics.setConnectionLeakDetectionThreshold(druidDataSource.getConnectionLeakDetectionThreshold()); metrics.setConnectionLeakDetectionCount(druidDataSource.getConnectionLeakDetectionCount());
} catch (Exception e) { log.error("连接指标收集失败", e); } }
private void calculateSQLStatistics(ConnectionPoolMetrics metrics, Map<String, Object> sqlStatMap) { try { int totalSQLCount = 0; long totalExecuteTime = 0; int slowSQLCount = 0; int errorSQLCount = 0;
for (Map.Entry<String, Object> entry : sqlStatMap.entrySet()) { Object sqlStat = entry.getValue(); if (sqlStat instanceof JdbcSqlStat) { JdbcSqlStat stat = (JdbcSqlStat) sqlStat;
totalSQLCount += stat.getExecuteCount(); totalExecuteTime += stat.getExecuteTimeMillis();
if (stat.getExecuteTimeMillis() > 1000) { slowSQLCount++; }
if (stat.getErrorCount() > 0) { errorSQLCount++; } } }
metrics.setTotalSQLCount(totalSQLCount); metrics.setTotalExecuteTime(totalExecuteTime); metrics.setSlowSQLCount(slowSQLCount); metrics.setErrorSQLCount(errorSQLCount);
if (totalSQLCount > 0) { metrics.setAverageExecuteTime(totalExecuteTime / totalSQLCount); }
} catch (Exception e) { log.error("SQL统计指标计算失败", e); } }
public List<ConnectionPoolMetrics> collectHistoricalMetrics() { List<ConnectionPoolMetrics> historicalMetrics = new ArrayList<>();
try {
} catch (Exception e) { log.error("历史指标收集失败", e); }
return historicalMetrics; } }
```
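collectHistoricalMetrics() is left empty above. One possible backing store is a bounded in-memory buffer like the following sketch, which reuses the article's ConnectionPoolMetrics type; the capacity handling and coarse synchronization are assumptions, and a real deployment would more likely persist samples to a time-series store.
```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical in-memory history buffer the collector could delegate to.
public class ConnectionPoolMetricsHistory {

    private final int capacity;
    private final Deque<ConnectionPoolMetrics> buffer = new ArrayDeque<>();

    public ConnectionPoolMetricsHistory(int capacity) {
        this.capacity = capacity;
    }

    // Keep only the most recent `capacity` samples (e.g. 240 samples = 2 hours of 30-second snapshots).
    public synchronized void record(ConnectionPoolMetrics sample) {
        if (buffer.size() == capacity) {
            buffer.removeFirst();
        }
        buffer.addLast(sample);
    }

    public synchronized List<ConnectionPoolMetrics> snapshot() {
        return new ArrayList<>(buffer);
    }
}
```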
3.3 Connection Pool Health Checker
```java
@Component public class DruidConnectionPoolHealthChecker {
@Autowired private DruidDataSource druidDataSource;
public ConnectionPoolHealthStatus checkHealth(ConnectionPoolMetrics metrics) { ConnectionPoolHealthStatus healthStatus = new ConnectionPoolHealthStatus();
try { BasicHealthStatus basicHealth = checkBasicHealth(metrics); healthStatus.setBasicHealth(basicHealth);
PerformanceHealthStatus performanceHealth = checkPerformanceHealth(metrics); healthStatus.setPerformanceHealth(performanceHealth);
SQLHealthStatus sqlHealth = checkSQLHealth(metrics); healthStatus.setSqlHealth(sqlHealth);
ConnectionHealthStatus connectionHealth = checkConnectionHealth(metrics); healthStatus.setConnectionHealth(connectionHealth);
evaluateOverallHealth(healthStatus);
} catch (Exception e) { log.error("连接池健康检查失败", e); healthStatus.setHealthy(false); healthStatus.setSeverity(Severity.HIGH); }
return healthStatus; }
private BasicHealthStatus checkBasicHealth(ConnectionPoolMetrics metrics) { BasicHealthStatus basicHealth = new BasicHealthStatus();
try { if (druidDataSource.isClosed()) { basicHealth.setPoolAvailable(false); basicHealth.setPoolAvailableLevel(StatusLevel.CRITICAL); } else { basicHealth.setPoolAvailable(true); basicHealth.setPoolAvailableLevel(StatusLevel.NORMAL); }
int activeCount = metrics.getActiveCount(); int maxActive = metrics.getMaxActive(); double usageRate = (double) activeCount / maxActive;
basicHealth.setConnectionUsageRate(usageRate); if (usageRate > 0.9) { basicHealth.setConnectionUsageLevel(StatusLevel.CRITICAL); } else if (usageRate > 0.8) { basicHealth.setConnectionUsageLevel(StatusLevel.WARNING); } else { basicHealth.setConnectionUsageLevel(StatusLevel.NORMAL); }
int waitThreadCount = metrics.getWaitThreadCount(); basicHealth.setWaitThreadCount(waitThreadCount); if (waitThreadCount > 10) { basicHealth.setWaitThreadLevel(StatusLevel.CRITICAL); } else if (waitThreadCount > 5) { basicHealth.setWaitThreadLevel(StatusLevel.WARNING); } else { basicHealth.setWaitThreadLevel(StatusLevel.NORMAL); }
} catch (Exception e) { log.error("基础健康检查失败", e); basicHealth.setPoolAvailable(false); basicHealth.setPoolAvailableLevel(StatusLevel.CRITICAL); }
return basicHealth; }
private PerformanceHealthStatus checkPerformanceHealth(ConnectionPoolMetrics metrics) { PerformanceHealthStatus performanceHealth = new PerformanceHealthStatus();
try { long connectCount = metrics.getConnectCount(); long connectErrorCount = metrics.getConnectErrorCount(); double connectErrorRate = connectCount > 0 ? (double) connectErrorCount / connectCount : 0.0; // guard on the denominator (connectCount), matching the efficiency analysis in section 4.2
performanceHealth.setConnectErrorRate(connectErrorRate); if (connectErrorRate > 0.1) { performanceHealth.setConnectErrorLevel(StatusLevel.CRITICAL); } else if (connectErrorRate > 0.05) { performanceHealth.setConnectErrorLevel(StatusLevel.WARNING); } else { performanceHealth.setConnectErrorLevel(StatusLevel.NORMAL); }
long notEmptyWaitCount = metrics.getNotEmptyWaitCount(); long notEmptyWaitMillis = metrics.getNotEmptyWaitMillis(); double averageWaitTime = notEmptyWaitCount > 0 ? (double) notEmptyWaitMillis / notEmptyWaitCount : 0.0;
performanceHealth.setAverageWaitTime(averageWaitTime); if (averageWaitTime > 5000) { performanceHealth.setWaitTimeLevel(StatusLevel.CRITICAL); } else if (averageWaitTime > 2000) { performanceHealth.setWaitTimeLevel(StatusLevel.WARNING); } else { performanceHealth.setWaitTimeLevel(StatusLevel.NORMAL); }
long evictCount = metrics.getEvictCount(); long evictErrorCount = metrics.getEvictErrorCount(); double evictErrorRate = evictCount > 0 ? (double) evictErrorCount / evictCount : 0.0;
performanceHealth.setEvictErrorRate(evictErrorRate); if (evictErrorRate > 0.1) { performanceHealth.setEvictErrorLevel(StatusLevel.CRITICAL); } else if (evictErrorRate > 0.05) { performanceHealth.setEvictErrorLevel(StatusLevel.WARNING); } else { performanceHealth.setEvictErrorLevel(StatusLevel.NORMAL); }
} catch (Exception e) { log.error("性能健康检查失败", e); }
return performanceHealth; }
private SQLHealthStatus checkSQLHealth(ConnectionPoolMetrics metrics) { SQLHealthStatus sqlHealth = new SQLHealthStatus();
try { int totalSQLCount = metrics.getTotalSQLCount(); int slowSQLCount = metrics.getSlowSQLCount(); int errorSQLCount = metrics.getErrorSQLCount();
sqlHealth.setTotalSQLCount(totalSQLCount); sqlHealth.setSlowSQLCount(slowSQLCount); sqlHealth.setErrorSQLCount(errorSQLCount);
double slowSQLRate = totalSQLCount > 0 ? (double) slowSQLCount / totalSQLCount : 0.0; sqlHealth.setSlowSQLRate(slowSQLRate); if (slowSQLRate > 0.1) { sqlHealth.setSlowSQLLevel(StatusLevel.CRITICAL); } else if (slowSQLRate > 0.05) { sqlHealth.setSlowSQLLevel(StatusLevel.WARNING); } else { sqlHealth.setSlowSQLLevel(StatusLevel.NORMAL); }
double errorSQLRate = totalSQLCount > 0 ? (double) errorSQLCount / totalSQLCount : 0.0; sqlHealth.setErrorSQLRate(errorSQLRate); if (errorSQLRate > 0.05) { sqlHealth.setErrorSQLLevel(StatusLevel.CRITICAL); } else if (errorSQLRate > 0.01) { sqlHealth.setErrorSQLLevel(StatusLevel.WARNING); } else { sqlHealth.setErrorSQLLevel(StatusLevel.NORMAL); }
long averageExecuteTime = metrics.getAverageExecuteTime(); sqlHealth.setAverageExecuteTime(averageExecuteTime); if (averageExecuteTime > 1000) { sqlHealth.setExecuteTimeLevel(StatusLevel.CRITICAL); } else if (averageExecuteTime > 500) { sqlHealth.setExecuteTimeLevel(StatusLevel.WARNING); } else { sqlHealth.setExecuteTimeLevel(StatusLevel.NORMAL); }
} catch (Exception e) { log.error("SQL健康检查失败", e); }
return sqlHealth; }
private ConnectionHealthStatus checkConnectionHealth(ConnectionPoolMetrics metrics) { ConnectionHealthStatus connectionHealth = new ConnectionHealthStatus();
try { int connectionLeakDetectionCount = metrics.getConnectionLeakDetectionCount(); connectionHealth.setConnectionLeakCount(connectionLeakDetectionCount); if (connectionLeakDetectionCount > 0) { connectionHealth.setConnectionLeakLevel(StatusLevel.CRITICAL); } else { connectionHealth.setConnectionLeakLevel(StatusLevel.NORMAL); }
long[] connectionHoldTimeHistogram = metrics.getConnectionHoldTimeHistogram(); if (connectionHoldTimeHistogram != null) { long totalConnections = 0; long longHoldConnections = 0;
for (long count : connectionHoldTimeHistogram) { totalConnections += count; }
if (connectionHoldTimeHistogram.length > 6) { for (int i = 6; i < connectionHoldTimeHistogram.length; i++) { longHoldConnections += connectionHoldTimeHistogram[i]; } }
double longHoldRate = totalConnections > 0 ? (double) longHoldConnections / totalConnections : 0.0; connectionHealth.setLongHoldRate(longHoldRate); if (longHoldRate > 0.2) { connectionHealth.setLongHoldLevel(StatusLevel.WARNING); } else { connectionHealth.setLongHoldLevel(StatusLevel.NORMAL); } }
} catch (Exception e) { log.error("连接健康检查失败", e); }
return connectionHealth; }
private void evaluateOverallHealth(ConnectionPoolHealthStatus healthStatus) { boolean isHealthy = true; Severity maxSeverity = Severity.LOW;
if (healthStatus.getBasicHealth().hasCriticalIssues()) { isHealthy = false; maxSeverity = Severity.max(maxSeverity, Severity.HIGH); }
if (healthStatus.getPerformanceHealth().hasCriticalIssues()) { isHealthy = false; maxSeverity = Severity.max(maxSeverity, Severity.MEDIUM); }
if (healthStatus.getSqlHealth().hasCriticalIssues()) { isHealthy = false; maxSeverity = Severity.max(maxSeverity, Severity.MEDIUM); }
if (healthStatus.getConnectionHealth().hasCriticalIssues()) { isHealthy = false; maxSeverity = Severity.max(maxSeverity, Severity.HIGH); }
healthStatus.setHealthy(isHealthy); healthStatus.setSeverity(maxSeverity); } }
```
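In a Spring Boot application the same signals can also be surfaced through Actuator so that existing infrastructure probes pick them up. The sketch below is a minimal HealthIndicator built directly on DruidDataSource counters, assuming spring-boot-starter-actuator is on the classpath; the 90% usage threshold mirrors the checker above but is otherwise arbitrary.
```java
import com.alibaba.druid.pool.DruidDataSource;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Exposes a coarse pool health signal through /actuator/health.
@Component
public class DruidPoolHealthIndicator implements HealthIndicator {

    private final DruidDataSource druidDataSource;

    public DruidPoolHealthIndicator(DruidDataSource druidDataSource) {
        this.druidDataSource = druidDataSource;
    }

    @Override
    public Health health() {
        if (druidDataSource.isClosed()) {
            return Health.down().withDetail("reason", "datasource closed").build();
        }
        int active = druidDataSource.getActiveCount();
        int maxActive = druidDataSource.getMaxActive();
        double usage = maxActive > 0 ? (double) active / maxActive : 0.0;

        Health.Builder builder = usage > 0.9 ? Health.down() : Health.up();
        return builder
                .withDetail("activeCount", active)
                .withDetail("maxActive", maxActive)
                .withDetail("waitThreadCount", druidDataSource.getWaitThreadCount())
                .build();
    }
}
```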
4. Enterprise-Grade Druid Connection Pool Solutions
4.1 Connection Pool Configuration Templates
```java
@Service public class DruidConnectionPoolConfigTemplateService {
public DruidConnectionPoolConfig getHighPerformanceConfig() { DruidConnectionPoolConfig config = new DruidConnectionPoolConfig();
config.setInitialSize(10); config.setMinIdle(10); config.setMaxActive(50); config.setMaxWait(60000);
config.setTimeBetweenEvictionRunsMillis(60000); config.setMinEvictableIdleTimeMillis(300000); config.setMaxEvictableIdleTimeMillis(900000); config.setValidationQuery("SELECT 1"); config.setValidationQueryTimeout(3); config.setTestWhileIdle(true); config.setTestOnBorrow(false); config.setTestOnReturn(false);
config.setFilters("stat,wall,log4j2"); config.setStatLoggerClassName("com.alibaba.druid.support.logging.Log4j2Impl"); config.setStatLoggerLevel("INFO"); config.setUseGlobalDataSourceStat(true);
config.setConnectionProperties("druid.stat.mergeSql=true;druid.stat.slowSqlMillis=2000");
return config; }
public DruidConnectionPoolConfig getHighAvailabilityConfig() { DruidConnectionPoolConfig config = new DruidConnectionPoolConfig();
config.setInitialSize(5); config.setMinIdle(5); config.setMaxActive(20); config.setMaxWait(30000);
config.setTimeBetweenEvictionRunsMillis(30000); config.setMinEvictableIdleTimeMillis(180000); config.setMaxEvictableIdleTimeMillis(600000); config.setValidationQuery("SELECT 1"); config.setValidationQueryTimeout(5); config.setTestWhileIdle(true); config.setTestOnBorrow(true); config.setTestOnReturn(false);
config.setFilters("stat,wall,log4j2"); config.setStatLoggerClassName("com.alibaba.druid.support.logging.Log4j2Impl"); config.setStatLoggerLevel("WARN"); config.setUseGlobalDataSourceStat(true);
config.setConnectionProperties("druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000");
return config; }
public DruidConnectionPoolConfig getDevelopmentConfig() { DruidConnectionPoolConfig config = new DruidConnectionPoolConfig();
config.setInitialSize(2); config.setMinIdle(2); config.setMaxActive(10); config.setMaxWait(10000);
config.setTimeBetweenEvictionRunsMillis(60000); config.setMinEvictableIdleTimeMillis(300000); config.setMaxEvictableIdleTimeMillis(900000); config.setValidationQuery("SELECT 1"); config.setValidationQueryTimeout(3); config.setTestWhileIdle(true); config.setTestOnBorrow(false); config.setTestOnReturn(false);
config.setFilters("stat,wall,log4j2"); config.setStatLoggerClassName("com.alibaba.druid.support.logging.Log4j2Impl"); config.setStatLoggerLevel("DEBUG"); config.setUseGlobalDataSourceStat(true);
config.setConnectionProperties("druid.stat.mergeSql=true;druid.stat.slowSqlMillis=1000");
return config; }
public DruidConnectionPoolConfig getProductionConfig() { DruidConnectionPoolConfig config = new DruidConnectionPoolConfig();
config.setInitialSize(20); config.setMinIdle(20); config.setMaxActive(100); config.setMaxWait(120000);
config.setTimeBetweenEvictionRunsMillis(60000); config.setMinEvictableIdleTimeMillis(300000); config.setMaxEvictableIdleTimeMillis(900000); config.setValidationQuery("SELECT 1"); config.setValidationQueryTimeout(3); config.setTestWhileIdle(true); config.setTestOnBorrow(false); config.setTestOnReturn(false);
config.setFilters("stat,wall,log4j2"); config.setStatLoggerClassName("com.alibaba.druid.support.logging.Log4j2Impl"); config.setStatLoggerLevel("INFO"); config.setUseGlobalDataSourceStat(true);
config.setConnectionProperties("druid.stat.mergeSql=true;druid.stat.slowSqlMillis=2000");
return config; } }
```
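How these templates get selected is left open above. One straightforward approach is to key off the active Spring profile, as in the hypothetical glue class below; the profile names and the "active-profile-template" config id are illustrative, and the connection coordinates (URL, credentials, driver) still have to be filled in before the article's validator will accept the config.
```java
import java.util.Arrays;
import java.util.List;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

// Hypothetical glue: pick a template by Spring profile and hand it to the service from section 2.1.
@Component
public class DruidConfigTemplateSelector {

    private final DruidConnectionPoolConfigTemplateService templates;
    private final DruidConnectionPoolConfigService configService;
    private final Environment environment;

    public DruidConfigTemplateSelector(DruidConnectionPoolConfigTemplateService templates,
                                       DruidConnectionPoolConfigService configService,
                                       Environment environment) {
        this.templates = templates;
        this.configService = configService;
        this.environment = environment;
    }

    public void applyTemplateForActiveProfile() {
        List<String> profiles = Arrays.asList(environment.getActiveProfiles());

        DruidConnectionPoolConfig config;
        if (profiles.contains("prod")) {
            config = templates.getProductionConfig();
        } else if (profiles.contains("dev")) {
            config = templates.getDevelopmentConfig();
        } else {
            config = templates.getHighAvailabilityConfig();
        }

        // Required by validateConnectionPoolConfig(); url/username/driver must also be set before saving.
        config.setConfigId("active-profile-template");
        configService.saveConnectionPoolConfig(config);
    }
}
```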
4.2 Connection Pool Performance Analyzer
```java
@Service public class DruidConnectionPoolAnalyzer {
@Autowired private ConnectionPoolMetricsCollector metricsCollector;
public ConnectionPoolPerformanceAnalysis analyzePerformance() { ConnectionPoolPerformanceAnalysis analysis = new ConnectionPoolPerformanceAnalysis();
try { ConnectionPoolMetrics currentMetrics = metricsCollector.collectMetrics();
List<ConnectionPoolMetrics> historicalMetrics = metricsCollector.collectHistoricalMetrics();
ConnectionUsageAnalysis usageAnalysis = analyzeConnectionUsage(currentMetrics); analysis.setUsageAnalysis(usageAnalysis);
SQLPerformanceAnalysis sqlAnalysis = analyzeSQLPerformance(currentMetrics); analysis.setSqlAnalysis(sqlAnalysis);
ConnectionPoolEfficiencyAnalysis efficiencyAnalysis = analyzeConnectionPoolEfficiency(currentMetrics); analysis.setEfficiencyAnalysis(efficiencyAnalysis);
PerformanceTrendAnalysis trendAnalysis = analyzePerformanceTrend(historicalMetrics); analysis.setTrendAnalysis(trendAnalysis);
String performanceReport = generatePerformanceReport(analysis); analysis.setPerformanceReport(performanceReport);
} catch (Exception e) { log.error("连接池性能分析失败", e); throw new ConnectionPoolPerformanceAnalysisException("连接池性能分析失败", e); }
return analysis; }
private ConnectionUsageAnalysis analyzeConnectionUsage(ConnectionPoolMetrics metrics) { ConnectionUsageAnalysis usageAnalysis = new ConnectionUsageAnalysis();
try { int activeCount = metrics.getActiveCount(); int maxActive = metrics.getMaxActive(); double usageRate = (double) activeCount / maxActive;
usageAnalysis.setCurrentUsageRate(usageRate); usageAnalysis.setActiveCount(activeCount); usageAnalysis.setMaxActive(maxActive);
int poolingCount = metrics.getPoolingCount(); double idleRate = (double) poolingCount / maxActive;
usageAnalysis.setIdleRate(idleRate); usageAnalysis.setPoolingCount(poolingCount);
int waitThreadCount = metrics.getWaitThreadCount(); usageAnalysis.setWaitThreadCount(waitThreadCount);
if (usageRate > 0.9) { usageAnalysis.setUsageLevel(StatusLevel.CRITICAL); usageAnalysis.setRecommendation("连接使用率过高,建议增加最大连接数"); } else if (usageRate > 0.8) { usageAnalysis.setUsageLevel(StatusLevel.WARNING); usageAnalysis.setRecommendation("连接使用率较高,建议监控连接使用情况"); } else { usageAnalysis.setUsageLevel(StatusLevel.NORMAL); usageAnalysis.setRecommendation("连接使用率正常"); }
} catch (Exception e) { log.error("连接使用情况分析失败", e); }
return usageAnalysis; }
private SQLPerformanceAnalysis analyzeSQLPerformance(ConnectionPoolMetrics metrics) { SQLPerformanceAnalysis sqlAnalysis = new SQLPerformanceAnalysis();
try { int totalSQLCount = metrics.getTotalSQLCount(); int slowSQLCount = metrics.getSlowSQLCount(); int errorSQLCount = metrics.getErrorSQLCount(); long averageExecuteTime = metrics.getAverageExecuteTime();
sqlAnalysis.setTotalSQLCount(totalSQLCount); sqlAnalysis.setSlowSQLCount(slowSQLCount); sqlAnalysis.setErrorSQLCount(errorSQLCount); sqlAnalysis.setAverageExecuteTime(averageExecuteTime);
double slowSQLRate = totalSQLCount > 0 ? (double) slowSQLCount / totalSQLCount : 0.0; sqlAnalysis.setSlowSQLRate(slowSQLRate);
double errorSQLRate = totalSQLCount > 0 ? (double) errorSQLCount / totalSQLCount : 0.0; sqlAnalysis.setErrorSQLRate(errorSQLRate);
if (slowSQLRate > 0.1 || errorSQLRate > 0.05) { sqlAnalysis.setPerformanceLevel(StatusLevel.CRITICAL); sqlAnalysis.setRecommendation("SQL性能问题严重,建议优化慢SQL和错误SQL"); } else if (slowSQLRate > 0.05 || errorSQLRate > 0.01) { sqlAnalysis.setPerformanceLevel(StatusLevel.WARNING); sqlAnalysis.setRecommendation("SQL性能需要关注,建议监控慢SQL"); } else { sqlAnalysis.setPerformanceLevel(StatusLevel.NORMAL); sqlAnalysis.setRecommendation("SQL性能正常"); }
} catch (Exception e) { log.error("SQL性能分析失败", e); }
return sqlAnalysis; }
private ConnectionPoolEfficiencyAnalysis analyzeConnectionPoolEfficiency(ConnectionPoolMetrics metrics) { ConnectionPoolEfficiencyAnalysis efficiencyAnalysis = new ConnectionPoolEfficiencyAnalysis();
try { long connectCount = metrics.getConnectCount(); long connectErrorCount = metrics.getConnectErrorCount(); double connectErrorRate = connectCount > 0 ? (double) connectErrorCount / connectCount : 0.0;
efficiencyAnalysis.setConnectErrorRate(connectErrorRate);
long notEmptyWaitCount = metrics.getNotEmptyWaitCount(); long notEmptyWaitMillis = metrics.getNotEmptyWaitMillis(); double averageWaitTime = notEmptyWaitCount > 0 ? (double) notEmptyWaitMillis / notEmptyWaitCount : 0.0;
efficiencyAnalysis.setAverageWaitTime(averageWaitTime);
long evictCount = metrics.getEvictCount(); long evictErrorCount = metrics.getEvictErrorCount(); double evictErrorRate = evictCount > 0 ? (double) evictErrorCount / evictCount : 0.0;
efficiencyAnalysis.setEvictErrorRate(evictErrorRate);
if (connectErrorRate > 0.1 || averageWaitTime > 5000) { efficiencyAnalysis.setEfficiencyLevel(StatusLevel.CRITICAL); efficiencyAnalysis.setRecommendation("连接池效率问题严重,建议检查数据库连接和网络"); } else if (connectErrorRate > 0.05 || averageWaitTime > 2000) { efficiencyAnalysis.setEfficiencyLevel(StatusLevel.WARNING); efficiencyAnalysis.setRecommendation("连接池效率需要关注,建议优化连接配置"); } else { efficiencyAnalysis.setEfficiencyLevel(StatusLevel.NORMAL); efficiencyAnalysis.setRecommendation("连接池效率正常"); }
} catch (Exception e) { log.error("连接池效率分析失败", e); }
return efficiencyAnalysis; }
private PerformanceTrendAnalysis analyzePerformanceTrend(List<ConnectionPoolMetrics> historicalMetrics) { PerformanceTrendAnalysis trendAnalysis = new PerformanceTrendAnalysis();
try { if (historicalMetrics.size() < 2) { trendAnalysis.setTrendDirection(TrendDirection.STABLE); trendAnalysis.setTrendDescription("历史数据不足,无法分析趋势"); return trendAnalysis; }
List<Double> usageRates = historicalMetrics.stream() .map(metrics -> (double) metrics.getActiveCount() / metrics.getMaxActive()) .collect(Collectors.toList());
TrendDirection usageTrend = analyzeTrend(usageRates); trendAnalysis.setUsageTrend(usageTrend);
List<Double> slowSQLRates = historicalMetrics.stream() .map(metrics -> metrics.getTotalSQLCount() > 0 ? (double) metrics.getSlowSQLCount() / metrics.getTotalSQLCount() : 0.0) .collect(Collectors.toList());
TrendDirection sqlTrend = analyzeTrend(slowSQLRates); trendAnalysis.setSqlTrend(sqlTrend);
List<Double> waitTimes = historicalMetrics.stream() .map(metrics -> metrics.getNotEmptyWaitCount() > 0 ? (double) metrics.getNotEmptyWaitMillis() / metrics.getNotEmptyWaitCount() : 0.0) .collect(Collectors.toList());
TrendDirection efficiencyTrend = analyzeTrend(waitTimes); trendAnalysis.setEfficiencyTrend(efficiencyTrend);
TrendDirection overallTrend = determineOverallTrend(usageTrend, sqlTrend, efficiencyTrend); trendAnalysis.setOverallTrend(overallTrend);
String trendDescription = generateTrendDescription(trendAnalysis); trendAnalysis.setTrendDescription(trendDescription);
} catch (Exception e) { log.error("性能趋势分析失败", e); }
return trendAnalysis; }
private TrendDirection analyzeTrend(List<Double> values) { if (values.size() < 2) { return TrendDirection.STABLE; }
double firstHalf = values.subList(0, values.size() / 2).stream() .mapToDouble(Double::doubleValue) .average() .orElse(0.0);
double secondHalf = values.subList(values.size() / 2, values.size()).stream() .mapToDouble(Double::doubleValue) .average() .orElse(0.0);
if (secondHalf > firstHalf * 1.1) { return TrendDirection.INCREASING; } else if (secondHalf < firstHalf * 0.9) { return TrendDirection.DECREASING; } else { return TrendDirection.STABLE; } }
private TrendDirection determineOverallTrend(TrendDirection usageTrend, TrendDirection sqlTrend, TrendDirection efficiencyTrend) { if (usageTrend == TrendDirection.INCREASING || sqlTrend == TrendDirection.INCREASING || efficiencyTrend == TrendDirection.INCREASING) { return TrendDirection.INCREASING; }
if (usageTrend == TrendDirection.DECREASING && sqlTrend == TrendDirection.DECREASING && efficiencyTrend == TrendDirection.DECREASING) { return TrendDirection.DECREASING; }
return TrendDirection.STABLE; }
private String generateTrendDescription(PerformanceTrendAnalysis trendAnalysis) { StringBuilder description = new StringBuilder();
description.append("性能趋势分析:"); description.append("连接使用率趋势=").append(trendAnalysis.getUsageTrend()).append(","); description.append("SQL性能趋势=").append(trendAnalysis.getSqlTrend()).append(","); description.append("连接池效率趋势=").append(trendAnalysis.getEfficiencyTrend()).append(","); description.append("整体趋势=").append(trendAnalysis.getOverallTrend());
return description.toString(); }
private String generatePerformanceReport(ConnectionPoolPerformanceAnalysis analysis) { StringBuilder report = new StringBuilder();
report.append("Druid连接池性能分析报告\n"); report.append("==================\n\n");
ConnectionUsageAnalysis usageAnalysis = analysis.getUsageAnalysis(); report.append("1. 连接使用情况\n"); report.append(" 当前使用率: ").append(String.format("%.2f", usageAnalysis.getCurrentUsageRate() * 100)).append("%\n"); report.append(" 活跃连接数: ").append(usageAnalysis.getActiveCount()).append("\n"); report.append(" 最大连接数: ").append(usageAnalysis.getMaxActive()).append("\n"); report.append(" 等待线程数: ").append(usageAnalysis.getWaitThreadCount()).append("\n"); report.append(" 评估结果: ").append(usageAnalysis.getUsageLevel()).append("\n"); report.append(" 建议: ").append(usageAnalysis.getRecommendation()).append("\n\n");
SQLPerformanceAnalysis sqlAnalysis = analysis.getSqlAnalysis(); report.append("2. SQL性能\n"); report.append(" 总SQL数: ").append(sqlAnalysis.getTotalSQLCount()).append("\n"); report.append(" 慢SQL数: ").append(sqlAnalysis.getSlowSQLCount()).append("\n"); report.append(" 错误SQL数: ").append(sqlAnalysis.getErrorSQLCount()).append("\n"); report.append(" 平均执行时间: ").append(sqlAnalysis.getAverageExecuteTime()).append("ms\n"); report.append(" 慢SQL比例: ").append(String.format("%.2f", sqlAnalysis.getSlowSQLRate() * 100)).append("%\n"); report.append(" 错误SQL比例: ").append(String.format("%.2f", sqlAnalysis.getErrorSQLRate() * 100)).append("%\n"); report.append(" 评估结果: ").append(sqlAnalysis.getPerformanceLevel()).append("\n"); report.append(" 建议: ").append(sqlAnalysis.getRecommendation()).append("\n\n");
ConnectionPoolEfficiencyAnalysis efficiencyAnalysis = analysis.getEfficiencyAnalysis(); report.append("3. 连接池效率\n"); report.append(" 连接错误率: ").append(String.format("%.2f", efficiencyAnalysis.getConnectErrorRate() * 100)).append("%\n"); report.append(" 平均等待时间: ").append(efficiencyAnalysis.getAverageWaitTime()).append("ms\n"); report.append(" 回收错误率: ").append(String.format("%.2f", efficiencyAnalysis.getEvictErrorRate() * 100)).append("%\n"); report.append(" 评估结果: ").append(efficiencyAnalysis.getEfficiencyLevel()).append("\n"); report.append(" 建议: ").append(efficiencyAnalysis.getRecommendation()).append("\n\n");
PerformanceTrendAnalysis trendAnalysis = analysis.getTrendAnalysis(); report.append("4. 性能趋势\n"); report.append(" 趋势描述: ").append(trendAnalysis.getTrendDescription()).append("\n");
return report.toString(); } }
```
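The trend detection above boils down to comparing the averages of the first and second halves of the sample window with a plus or minus 10% band. The following self-contained snippet reproduces that logic with a sample series so the thresholds are easy to verify; the class and enum names here are standalone stand-ins for the article's types.
```java
import java.util.Arrays;
import java.util.List;

// Minimal standalone version of the first-half / second-half trend check used by the analyzer.
public class TrendDemo {

    enum TrendDirection { INCREASING, DECREASING, STABLE }

    static TrendDirection analyzeTrend(List<Double> values) {
        if (values.size() < 2) return TrendDirection.STABLE;
        int mid = values.size() / 2;
        double first = values.subList(0, mid).stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double second = values.subList(mid, values.size()).stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        if (second > first * 1.1) return TrendDirection.INCREASING;
        if (second < first * 0.9) return TrendDirection.DECREASING;
        return TrendDirection.STABLE;
    }

    public static void main(String[] args) {
        // usage rate climbing from ~0.55 to ~0.80 -> prints INCREASING
        System.out.println(analyzeTrend(Arrays.asList(0.55, 0.58, 0.60, 0.74, 0.78, 0.80)));
    }
}
```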
5. Best Practices and Summary
5.1 Druid Connection Pool Configuration Best Practices
Connection pool sizing
- Set the initial connection count according to the application's concurrency level
- Set the minimum idle connection count to roughly one quarter of the maximum connection count
- Size the maximum connection count according to the capacity of the database server (a worked estimate follows this list)
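A quick way to sanity-check the maximum connection count is Little's law: connections in use is roughly peak QPS times the average time a connection is held. The sketch below applies it with assumed numbers (400 QPS at 50 ms per query) plus the one-quarter and one-half ratios recommended above; treat the output as a starting point for load testing, not a final answer.
```java
// Back-of-the-envelope pool sizing: concurrent connections ~= peak QPS * average hold time.
public class PoolSizingEstimate {
    public static void main(String[] args) {
        double peakQps = 400.0;          // peak database queries per second (assumed)
        double avgQuerySeconds = 0.05;   // average time a connection is held per query (50 ms)

        double concurrentInUse = peakQps * avgQuerySeconds;     // ~20 connections busy at peak
        int maxActive = (int) Math.ceil(concurrentInUse / 0.8); // keep ~20% headroom -> 25
        int minIdle = Math.max(1, maxActive / 4);               // 6
        int initialSize = maxActive / 2;                        // 12

        System.out.printf("maxActive=%d, minIdle=%d, initialSize=%d%n", maxActive, minIdle, initialSize);
    }
}
```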
Performance optimization strategy
- Enable connection validation and eviction
- Set connection timeouts to sensible values
- Optimize and monitor SQL execution (see the settings sketch after this list)
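Below is a minimal sketch of the validation and eviction knobs mentioned above, using standard DruidDataSource setters. The values are illustrative only, and removeAbandoned in particular adds overhead, so it is usually enabled only while hunting connection leaks.
```java
import com.alibaba.druid.pool.DruidDataSource;

public class DruidEvictionSettings {
    public static void apply(DruidDataSource ds) {
        ds.setValidationQuery("SELECT 1");
        ds.setTestWhileIdle(true);                   // validate idle connections in the evictor thread
        ds.setTestOnBorrow(false);                   // avoid a validation round-trip on every borrow
        ds.setTimeBetweenEvictionRunsMillis(60_000); // evictor runs every 60 s
        ds.setMinEvictableIdleTimeMillis(300_000);   // connections idle > 5 min may be evicted (down to minIdle)
        ds.setKeepAlive(true);                       // ping the minIdle connections so they stay usable
        ds.setRemoveAbandoned(true);                 // reclaim connections held too long (leak mitigation)
        ds.setRemoveAbandonedTimeout(180);           // seconds a connection may be held before reclaim
        ds.setLogAbandoned(true);                    // log the stack trace of the leaking borrow site
    }
}
```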
Monitoring and alerting
- Build a comprehensive set of monitoring metrics
- Define reasonable alert thresholds
- Automate routine operations (the console registration sketch after this list is one starting point)
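On the monitoring side, Druid also ships with a built-in stat console that exposes these metrics without extra code. The sketch below registers it in a servlet-based Spring Boot application; the credentials are placeholders, and in production the console should be access-restricted or disabled.
```java
import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Registers Druid's monitoring console at /druid/* and the web-stat filter for URI statistics.
@Configuration
public class DruidMonitorConfig {

    @Bean
    public ServletRegistrationBean<StatViewServlet> statViewServlet() {
        ServletRegistrationBean<StatViewServlet> bean =
                new ServletRegistrationBean<>(new StatViewServlet(), "/druid/*");
        bean.addInitParameter("loginUsername", "admin");     // console credentials (example values)
        bean.addInitParameter("loginPassword", "change-me");
        bean.addInitParameter("resetEnable", "false");       // forbid resetting counters from the UI
        return bean;
    }

    @Bean
    public FilterRegistrationBean<WebStatFilter> webStatFilter() {
        FilterRegistrationBean<WebStatFilter> bean = new FilterRegistrationBean<>(new WebStatFilter());
        bean.addUrlPatterns("/*");
        bean.addInitParameter("exclusions", "*.js,*.css,*.gif,*.jpg,*.png,/druid/*");
        return bean;
    }
}
```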
Security configuration
5.2 Architect-Level Druid Connection Pool Operations Skills
Connection pool management
- Understand the Druid connection pool architecture in depth
- Master connection pool configuration and optimization
- Manage the connection lifecycle
Performance tuning
Incident handling
Monitoring and operations
5.3 Suggestions for Continuous Improvement
Configuration optimization
Greater automation
- Automate more operational tasks
- Streamline the configuration management process
- Improve operational efficiency
Knowledge accumulation
Summary
Druid connection pool configuration analysis and optimization is a core capability of an enterprise-grade database connection pool. With an intelligent pool configuration strategy, a complete performance monitoring mechanism, and systematic optimization analysis, you can build a stable and reliable connection pool that sustains high-concurrency database access for enterprise applications. This article has walked through a complete solution for Druid connection pool configuration analysis and optimization, from pool architecture design to configuration tuning, and from basic principles to enterprise practice.
Key points:
- Connection pool architecture design: a highly available pool architecture and component design
- Configuration management strategy: the configuration service and the configuration optimizer
- Monitoring and analysis: pool monitoring, metrics collection, and health checks
- Performance optimization: the performance analyzer, configuration templates, and continuous optimization
- Enterprise practice: configuration templates, performance analysis, and continuous improvement
By internalizing these points, an architect can design a complete Druid connection pool system, improve the stability and reliability of database connection pooling, and help ensure the high availability of enterprise applications.