1. Billion-Scale Log Query: Overview

Billion-scale log query is a core capability of modern big-data platforms: by combining distributed storage, index optimization, a dedicated query engine, and caching strategies, it makes huge volumes of logs efficiently searchable and analyzable. This article walks through a complete solution covering distributed log storage, index optimization strategies, query engine design, distributed search, and performance tuning.

1.1 Core Capabilities

  1. Distributed storage: log sharding, data distribution, storage optimization
  2. Index optimization: inverted indexes, bitmap indexes, composite indexes
  3. Query engine: SQL parsing, query optimization, execution plans
  4. Distributed search: per-shard search, result aggregation, load balancing
  5. Performance optimization: caching strategies, parallel processing, memory optimization
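
To make the "inverted index" idea in the list above concrete, here is a minimal, self-contained sketch using plain Java collections (no Lucene; all names are illustrative): each token maps to the set of log IDs containing it, and an AND query intersects the posting lists.

```java
import java.util.*;

/** Minimal inverted index: token -> sorted set of document IDs. */
public class MiniInvertedIndex {
    private final Map<String, TreeSet<Integer>> postings = new HashMap<>();

    /** Tokenize on whitespace and record each token's posting list. */
    public void add(int docId, String message) {
        for (String token : message.toLowerCase().split("\\s+")) {
            postings.computeIfAbsent(token, t -> new TreeSet<>()).add(docId);
        }
    }

    /** AND query: intersect the posting lists of all terms. */
    public Set<Integer> search(String... terms) {
        Set<Integer> result = null;
        for (String term : terms) {
            Set<Integer> list = postings.getOrDefault(term.toLowerCase(), new TreeSet<>());
            if (result == null) {
                result = new TreeSet<>(list);
            } else {
                result.retainAll(list); // keep only documents matching every term
            }
        }
        return result == null ? Set.of() : result;
    }

    public static void main(String[] args) {
        MiniInvertedIndex index = new MiniInvertedIndex();
        index.add(1, "connection timeout on db");
        index.add(2, "connection refused");
        index.add(3, "db timeout");
        System.out.println(index.search("connection", "timeout")); // [1]
    }
}
```

Real engines such as Lucene add compression, skip lists, and on-disk segment files on top of exactly this structure, which is why term lookups stay fast at billion-document scale.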

1.2 Technical Architecture

Log data → Distributed storage → Index building → Query engine → Results
↓ ↓ ↓ ↓ ↓
Data collection → Sharded storage → Index optimization → Query parsing → Performance tuning
↓ ↓ ↓ ↓ ↓
Billion-scale volume → Cluster management → Index management → Execution optimization → Caching strategy

2. Billion-Scale Log Query Configuration

2.1 Maven Dependencies

<!-- pom.xml -->
<dependencies>
    <!-- Spring Boot Web -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <!-- Spring Boot Data Elasticsearch -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
    </dependency>

    <!-- Spring Boot Data Redis -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>

    <!-- Apache Lucene core -->
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-core</artifactId>
        <version>8.11.2</version>
    </dependency>

    <!-- Apache Lucene query parser -->
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-queryparser</artifactId>
        <version>8.11.2</version>
    </dependency>

    <!-- Apache Lucene common analyzers
         (the artifact is named lucene-analyzers-common in the 8.x line;
          lucene-analysis-common only exists from Lucene 9 onward) -->
    <dependency>
        <groupId>org.apache.lucene</groupId>
        <artifactId>lucene-analyzers-common</artifactId>
        <version>8.11.2</version>
    </dependency>

    <!-- HikariCP connection pool -->
    <dependency>
        <groupId>com.zaxxer</groupId>
        <artifactId>HikariCP</artifactId>
    </dependency>

    <!-- MySQL driver -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>
</dependencies>

2.2 Query Configuration Classes

/**
 * Configuration for the billion-scale log query components.
 */
@Configuration
public class BillionLogQueryConfig {

    @Value("${log.query.index-path:/data/log-index}")
    private String indexPath;

    @Value("${log.query.max-results:10000}")
    private int maxResults;

    @Value("${log.query.cache-size:1000}")
    private int cacheSize;

    /**
     * Log query configuration properties.
     */
    @Bean
    public LogQueryProperties logQueryProperties() {
        return LogQueryProperties.builder()
                .indexPath(indexPath)
                .maxResults(maxResults)
                .cacheSize(cacheSize)
                .build();
    }

    /**
     * Log index manager.
     */
    @Bean
    public LogIndexManager logIndexManager() {
        return new LogIndexManager(logQueryProperties());
    }

    /**
     * Log query engine.
     */
    @Bean
    public LogQueryEngine logQueryEngine() {
        return new LogQueryEngine(logQueryProperties());
    }

    /**
     * Log cache manager.
     */
    @Bean
    public LogCacheManager logCacheManager() {
        return new LogCacheManager(logQueryProperties());
    }
}

/**
 * Log query configuration properties. Note that with Lombok's @Builder,
 * field initializers only act as defaults when marked @Builder.Default;
 * otherwise the builder silently zeroes them.
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class LogQueryProperties {
    private String indexPath;
    private int maxResults;
    private int cacheSize;

    // Index settings
    @Builder.Default
    private boolean enableIndexOptimization = true;
    @Builder.Default
    private int indexMergeFactor = 10;
    @Builder.Default
    private int maxBufferedDocs = 10000;
    @Builder.Default
    private int ramBufferSizeMB = 256;

    // Query settings
    @Builder.Default
    private boolean enableQueryCache = true;
    @Builder.Default
    private int queryCacheSize = 1000;
    @Builder.Default
    private int queryCacheExpireTime = 3600; // seconds

    // Shard settings
    @Builder.Default
    private int shardCount = 8;
    @Builder.Default
    private int replicaCount = 1;
    @Builder.Default
    private boolean enableShardRouting = true;

    // Performance settings
    @Builder.Default
    private boolean enableParallelQuery = true;
    @Builder.Default
    private int maxConcurrentQueries = 100;
    @Builder.Default
    private int queryTimeout = 30; // seconds
}
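
The three @Value placeholders above map to external configuration like the following (a hypothetical application.yml fragment showing the defaults; only index-path, max-results, and cache-size are read by the config class as written):

```yaml
log:
  query:
    index-path: /data/log-index
    max-results: 10000
    cache-size: 1000
```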

3. Log Data Model

3.1 Model Classes

/**
 * A single log entry.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class LogEntry {
    private String id;
    private String timestamp;
    private String level; // INFO, WARN, ERROR, DEBUG
    private String service;
    private String host;
    private String message;
    private Map<String, Object> fields = new HashMap<>();
    private String traceId;
    private String spanId;
    private String userId;
    private String tenantId;
}

/**
 * A log query request.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class LogQueryRequest {
    private String query;
    private String startTime;
    private String endTime;
    private String level;
    private String service;
    private String host;
    private String traceId;
    private String userId;
    private String tenantId;
    private int page = 1; // 1-based page number
    private int size = 100;
    private String sortField;
    private String sortOrder = "desc";
    private List<String> fields = new ArrayList<>();
}

/**
 * A log query result.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class LogQueryResult {
    private List<LogEntry> logs = new ArrayList<>();
    private long totalCount;
    private int page;
    private int size;
    private long queryTime; // milliseconds
    private String queryId;
    private Map<String, Object> aggregations = new HashMap<>();
}

/**
 * Index configuration for a log index.
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class LogIndexConfig {
    private String indexName;
    private String indexType;
    private Map<String, String> fieldMappings = new HashMap<>();
    private Map<String, Object> indexSettings = new HashMap<>();
    private int shardCount = 5;
    private int replicaCount = 1;
}
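
The shardCount and enableShardRouting settings imply a rule that maps each log to a shard. The models above do not prescribe one, but a common sketch (an assumption here: stable hash-based routing on a key such as traceId or service) looks like this:

```java
/** Hash-based shard routing: the same key always maps to the same shard. */
public class ShardRouter {
    private final int shardCount;

    public ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    /** floorMod keeps the result non-negative even for negative hash codes. */
    public int route(String routingKey) {
        return Math.floorMod(routingKey.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(8);
        // The same trace ID is always routed to the same shard, so all
        // logs of one trace can be found by searching a single shard.
        System.out.println(router.route("trace-42") == router.route("trace-42")); // true
    }
}
```

Routing by traceId keeps trace lookups on one shard; routing by a high-cardinality key keeps the data spread even. Note that plain modulo routing reshuffles nearly everything when shardCount changes, which is why production systems often use consistent hashing instead.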

4. Log Index Manager

4.1 LogIndexManager

/**
 * Log index manager: creates Lucene indexes and writes log entries into them.
 * Registered as a bean in BillionLogQueryConfig.
 */
@Slf4j
public class LogIndexManager {

    private final LogQueryProperties properties;
    private final Map<String, IndexWriter> indexWriters = new ConcurrentHashMap<>();

    public LogIndexManager(LogQueryProperties properties) {
        this.properties = properties;
    }

    /**
     * Create a log index.
     * @param indexName index name
     * @return whether the operation succeeded
     */
    public boolean createIndex(String indexName) {
        try {
            // 1. Create the index directory
            Path indexDir = Paths.get(properties.getIndexPath(), indexName);
            Files.createDirectories(indexDir);

            // 2. Build the writer configuration
            IndexWriterConfig config = createIndexWriterConfig();

            // 3. Open the index writer
            Directory directory = FSDirectory.open(indexDir);
            IndexWriter writer = new IndexWriter(directory, config);

            // 4. Cache the writer for reuse
            indexWriters.put(indexName, writer);

            log.info("Created log index: indexName={}", indexName);

            return true;

        } catch (Exception e) {
            log.error("Failed to create log index: indexName={}", indexName, e);
            return false;
        }
    }

    /**
     * Add a single log entry to an index.
     * @param indexName index name
     * @param logEntry log entry
     * @return whether the operation succeeded
     */
    public boolean addLogToIndex(String indexName, LogEntry logEntry) {
        try {
            IndexWriter writer = indexWriters.get(indexName);
            if (writer == null) {
                log.warn("No index writer for index: indexName={}", indexName);
                return false;
            }

            // 1. Build the document
            Document doc = createDocument(logEntry);

            // 2. Add it to the index
            writer.addDocument(doc);

            // 3. Commit. Committing per document is expensive at scale;
            // prefer addLogsToIndex for bulk ingestion.
            writer.commit();

            log.debug("Added log to index: indexName={}, logId={}", indexName, logEntry.getId());

            return true;

        } catch (Exception e) {
            log.error("Failed to add log to index: indexName={}, logId={}", indexName, logEntry.getId(), e);
            return false;
        }
    }

    /**
     * Add a batch of log entries to an index with a single commit.
     * @param indexName index name
     * @param logEntries log entries
     * @return whether the operation succeeded
     */
    public boolean addLogsToIndex(String indexName, List<LogEntry> logEntries) {
        try {
            IndexWriter writer = indexWriters.get(indexName);
            if (writer == null) {
                log.warn("No index writer for index: indexName={}", indexName);
                return false;
            }

            // 1. Add all documents
            for (LogEntry logEntry : logEntries) {
                writer.addDocument(createDocument(logEntry));
            }

            // 2. Commit once for the whole batch
            writer.commit();

            log.info("Added log batch to index: indexName={}, count={}", indexName, logEntries.size());

            return true;

        } catch (Exception e) {
            log.error("Failed to add log batch to index: indexName={}, count={}", indexName, logEntries.size(), e);
            return false;
        }
    }

    /**
     * Build the index writer configuration.
     * @return the writer configuration
     */
    private IndexWriterConfig createIndexWriterConfig() {
        // Set the analyzer
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());

        // The merge factor is a merge-policy setting, not an
        // IndexWriterConfig setting
        LogDocMergePolicy mergePolicy = new LogDocMergePolicy();
        mergePolicy.setMergeFactor(properties.getIndexMergeFactor());
        config.setMergePolicy(mergePolicy);

        // Flush thresholds
        config.setMaxBufferedDocs(properties.getMaxBufferedDocs());
        config.setRAMBufferSizeMB(properties.getRamBufferSizeMB());

        return config;
    }

    /**
     * Convert a log entry into a Lucene document.
     * @param logEntry log entry
     * @return the document
     */
    private Document createDocument(LogEntry logEntry) {
        Document doc = new Document();

        // Basic fields. LongPoint is index-only, so the raw timestamp is also
        // stored for retrieval and added as doc values for sorting.
        long timestamp = parseTimestamp(logEntry.getTimestamp());
        doc.add(new StringField("id", logEntry.getId(), Field.Store.YES));
        doc.add(new LongPoint("timestamp", timestamp));
        doc.add(new StoredField("timestamp", logEntry.getTimestamp()));
        doc.add(new NumericDocValuesField("timestampSort", timestamp));
        doc.add(new StringField("level", logEntry.getLevel(), Field.Store.YES));
        doc.add(new StringField("service", logEntry.getService(), Field.Store.YES));
        doc.add(new StringField("host", logEntry.getHost(), Field.Store.YES));
        doc.add(new TextField("message", logEntry.getMessage(), Field.Store.YES));

        // Optional fields
        if (logEntry.getTraceId() != null) {
            doc.add(new StringField("traceId", logEntry.getTraceId(), Field.Store.YES));
        }
        if (logEntry.getSpanId() != null) {
            doc.add(new StringField("spanId", logEntry.getSpanId(), Field.Store.YES));
        }
        if (logEntry.getUserId() != null) {
            doc.add(new StringField("userId", logEntry.getUserId(), Field.Store.YES));
        }
        if (logEntry.getTenantId() != null) {
            doc.add(new StringField("tenantId", logEntry.getTenantId(), Field.Store.YES));
        }

        // Custom fields
        for (Map.Entry<String, Object> entry : logEntry.getFields().entrySet()) {
            if (entry.getValue() != null) {
                doc.add(new TextField(entry.getKey(), entry.getValue().toString(), Field.Store.YES));
            }
        }

        return doc;
    }

    /**
     * Parse an ISO-8601 timestamp string into epoch milliseconds, falling
     * back to the current time if the string cannot be parsed.
     * @param timestamp timestamp string
     * @return epoch milliseconds
     */
    private long parseTimestamp(String timestamp) {
        try {
            return Instant.parse(timestamp).toEpochMilli();
        } catch (Exception e) {
            return System.currentTimeMillis();
        }
    }
}
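
parseTimestamp above relies on Instant.parse, which accepts only ISO-8601 UTC strings such as 2024-01-01T00:00:00Z; anything else silently falls back to the current time, mis-filing logs with other timestamp formats. A quick standalone check of that behavior (same logic, illustrative class name):

```java
import java.time.Instant;

public class TimestampParseDemo {
    /** Same fallback logic as LogIndexManager.parseTimestamp. */
    static long parseTimestamp(String timestamp) {
        try {
            return Instant.parse(timestamp).toEpochMilli();
        } catch (Exception e) {
            // Unparseable input silently becomes "now": acceptable as a
            // fallback, but it mis-files logs with non-ISO timestamps
            return System.currentTimeMillis();
        }
    }

    public static void main(String[] args) {
        // ISO-8601 UTC parses as expected
        System.out.println(parseTimestamp("2024-01-01T00:00:00Z")); // 1704067200000
        // A space-separated local format is NOT accepted and falls back to now
        System.out.println(parseTimestamp("2024-01-01 00:00:00") >= 1704067200000L); // true
    }
}
```

If producers emit other formats, it is safer to normalize them at ingestion (e.g. with a DateTimeFormatter per known format) than to rely on this fallback.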

5. Log Query Engine

5.1 LogQueryEngine

/**
 * Log query engine: turns LogQueryRequest objects into Lucene queries,
 * executes them, and caches the results.
 * Registered as a bean in BillionLogQueryConfig.
 */
@Slf4j
public class LogQueryEngine {

    private final LogQueryProperties properties;

    // Injected by Spring after the @Bean method constructs this object
    @Autowired
    private LogCacheManager cacheManager;

    public LogQueryEngine(LogQueryProperties properties) {
        this.properties = properties;
    }

    /**
     * Execute a log query.
     * @param request the query request
     * @return the query result
     */
    public LogQueryResult executeQuery(LogQueryRequest request) {
        long startTime = System.currentTimeMillis();
        // Derive the query ID from the request content; with a random ID
        // the cache lookup below could never hit
        String queryId = buildQueryId(request);

        try {
            // 1. Check the cache
            if (properties.isEnableQueryCache()) {
                LogQueryResult cachedResult = cacheManager.getCachedResult(queryId);
                if (cachedResult != null) {
                    log.info("Query result served from cache: queryId={}", queryId);
                    return cachedResult;
                }
            }

            // 2. Build the Lucene query
            Query query = buildQuery(request);

            // 3. Execute the search
            LogQueryResult result = executeQuery(query, request);

            // 4. Attach query metadata
            result.setQueryId(queryId);
            result.setQueryTime(System.currentTimeMillis() - startTime);

            // 5. Cache the result
            if (properties.isEnableQueryCache()) {
                cacheManager.cacheResult(queryId, result);
            }

            log.info("Query executed: queryId={}, queryTime={}ms, resultCount={}",
                    queryId, result.getQueryTime(), result.getTotalCount());

            return result;

        } catch (Exception e) {
            log.error("Query execution failed", e);

            LogQueryResult errorResult = new LogQueryResult();
            errorResult.setQueryTime(System.currentTimeMillis() - startTime);
            errorResult.setQueryId(queryId);

            return errorResult;
        }
    }

    /**
     * Build a Lucene query from the request.
     * @param request the query request
     * @return the query object
     */
    private Query buildQuery(LogQueryRequest request) {
        try {
            BooleanQuery.Builder queryBuilder = new BooleanQuery.Builder();

            // 1. Time range filter
            if (request.getStartTime() != null && request.getEndTime() != null) {
                Query timeQuery = buildTimeRangeQuery(request.getStartTime(), request.getEndTime());
                queryBuilder.add(timeQuery, BooleanClause.Occur.MUST);
            }

            // 2. Log level filter
            if (request.getLevel() != null && !request.getLevel().isEmpty()) {
                Query levelQuery = new TermQuery(new Term("level", request.getLevel()));
                queryBuilder.add(levelQuery, BooleanClause.Occur.MUST);
            }

            // 3. Service filter
            if (request.getService() != null && !request.getService().isEmpty()) {
                Query serviceQuery = new TermQuery(new Term("service", request.getService()));
                queryBuilder.add(serviceQuery, BooleanClause.Occur.MUST);
            }

            // 4. Host filter
            if (request.getHost() != null && !request.getHost().isEmpty()) {
                Query hostQuery = new TermQuery(new Term("host", request.getHost()));
                queryBuilder.add(hostQuery, BooleanClause.Occur.MUST);
            }

            // 5. Message content filter. Note: a leading wildcard forces a
            // scan of the term dictionary; at billion scale, prefer term or
            // phrase queries against the analyzed message field.
            if (request.getQuery() != null && !request.getQuery().isEmpty()) {
                Query messageQuery = new WildcardQuery(new Term("message", "*" + request.getQuery() + "*"));
                queryBuilder.add(messageQuery, BooleanClause.Occur.MUST);
            }

            // 6. Trace filter
            if (request.getTraceId() != null && !request.getTraceId().isEmpty()) {
                Query traceQuery = new TermQuery(new Term("traceId", request.getTraceId()));
                queryBuilder.add(traceQuery, BooleanClause.Occur.MUST);
            }

            // 7. User filter
            if (request.getUserId() != null && !request.getUserId().isEmpty()) {
                Query userQuery = new TermQuery(new Term("userId", request.getUserId()));
                queryBuilder.add(userQuery, BooleanClause.Occur.MUST);
            }

            // 8. Tenant filter
            if (request.getTenantId() != null && !request.getTenantId().isEmpty()) {
                Query tenantQuery = new TermQuery(new Term("tenantId", request.getTenantId()));
                queryBuilder.add(tenantQuery, BooleanClause.Occur.MUST);
            }

            return queryBuilder.build();

        } catch (Exception e) {
            log.error("Failed to build query", e);
            return new MatchAllDocsQuery();
        }
    }

    /**
     * Build the time-range query.
     * @param startTime start time
     * @param endTime end time
     * @return the range query
     */
    private Query buildTimeRangeQuery(String startTime, String endTime) {
        try {
            long startTimestamp = Instant.parse(startTime).toEpochMilli();
            long endTimestamp = Instant.parse(endTime).toEpochMilli();

            return LongPoint.newRangeQuery("timestamp", startTimestamp, endTimestamp);

        } catch (Exception e) {
            log.error("Failed to build time range query: startTime={}, endTime={}", startTime, endTime, e);
            return new MatchAllDocsQuery();
        }
    }

    /**
     * Execute the query against the index and page the results.
     * @param query the query object
     * @param request the query request
     * @return the query result
     */
    private LogQueryResult executeQuery(Query query, LogQueryRequest request) {
        try {
            // 1. Get an index searcher
            IndexSearcher searcher = getIndexSearcher("default");

            // 2. Fetch enough hits to cover all pages up to the requested one
            TopDocs topDocs = searcher.search(query, request.getPage() * request.getSize());

            // 3. Build the result envelope
            LogQueryResult result = new LogQueryResult();
            result.setTotalCount(topDocs.totalHits.value);
            result.setPage(request.getPage());
            result.setSize(request.getSize());

            // 4. Skip the hits belonging to earlier pages, then materialize
            // only the requested page
            int offset = (request.getPage() - 1) * request.getSize();
            List<LogEntry> logs = new ArrayList<>();
            for (int i = offset; i < topDocs.scoreDocs.length; i++) {
                Document doc = searcher.doc(topDocs.scoreDocs[i].doc);
                logs.add(convertDocumentToLogEntry(doc));
            }

            result.setLogs(logs);

            return result;

        } catch (Exception e) {
            log.error("Failed to execute query", e);
            return new LogQueryResult();
        }
    }

    /**
     * Open an index searcher. Opening a new DirectoryReader per query is
     * simple but wasteful; production code should reuse readers, e.g. via
     * Lucene's SearcherManager.
     * @param indexName index name
     * @return the index searcher
     */
    private IndexSearcher getIndexSearcher(String indexName) {
        try {
            Path indexDir = Paths.get(properties.getIndexPath(), indexName);
            Directory directory = FSDirectory.open(indexDir);
            DirectoryReader reader = DirectoryReader.open(directory);
            return new IndexSearcher(reader);
        } catch (Exception e) {
            log.error("Failed to open index searcher: indexName={}", indexName, e);
            return null;
        }
    }

    /**
     * Convert a Lucene document back into a log entry.
     * @param doc the document
     * @return the log entry
     */
    private LogEntry convertDocumentToLogEntry(Document doc) {
        LogEntry logEntry = new LogEntry();

        logEntry.setId(doc.get("id"));
        logEntry.setTimestamp(doc.get("timestamp"));
        logEntry.setLevel(doc.get("level"));
        logEntry.setService(doc.get("service"));
        logEntry.setHost(doc.get("host"));
        logEntry.setMessage(doc.get("message"));
        logEntry.setTraceId(doc.get("traceId"));
        logEntry.setSpanId(doc.get("spanId"));
        logEntry.setUserId(doc.get("userId"));
        logEntry.setTenantId(doc.get("tenantId"));

        return logEntry;
    }

    /**
     * Build a deterministic query ID from the request, so that identical
     * requests share one cache entry (LogQueryRequest is a Lombok @Data
     * class, so hashCode() covers every field).
     * @return the query ID
     */
    private String buildQueryId(LogQueryRequest request) {
        return "query_" + Integer.toHexString(request.hashCode());
    }
}
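
Because cached results also live in Redis and are shared across application instances, the cache key must be deterministic across JVMs, not just within one process. One way to sketch that (an illustrative helper; the field list and key prefix are assumptions, not part of the engine above) is a truncated SHA-256 over a canonical string of the request fields:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class QueryCacheKey {
    /** Derive a stable, cross-JVM cache key from the query's content. */
    static String cacheKey(String query, String startTime, String endTime, int page, int size) {
        try {
            // Canonical form: same request content -> same string -> same key
            String canonical = query + "|" + startTime + "|" + endTime + "|" + page + "|" + size;
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(canonical.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder("log_query:");
            for (int i = 0; i < 8; i++) { // 8 bytes (16 hex chars) is plenty for a cache key
                hex.append(String.format("%02x", digest[i]));
            }
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        // Identical requests produce identical keys, so cache hits are possible
        String a = cacheKey("error", "2024-01-01T00:00:00Z", "2024-01-02T00:00:00Z", 1, 100);
        String b = cacheKey("error", "2024-01-01T00:00:00Z", "2024-01-02T00:00:00Z", 1, 100);
        System.out.println(a.equals(b)); // true
    }
}
```

A content hash also avoids a subtle pitfall of Object/String hashCode schemes: page and size must be part of the key, or two pages of the same query would overwrite each other's cache entry.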

6. Log Cache Manager

6.1 LogCacheManager

/**
 * Log cache manager: a two-level cache (local map plus Redis) for query
 * results. Registered as a bean in BillionLogQueryConfig.
 */
@Slf4j
public class LogCacheManager {

    private final LogQueryProperties properties;
    private final Map<String, LogQueryResult> localCache = new ConcurrentHashMap<>();

    // Injected by Spring after the @Bean method constructs this object
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    public LogCacheManager(LogQueryProperties properties) {
        this.properties = properties;
    }

    /**
     * Look up a cached query result.
     * @param queryId query ID
     * @return the cached result, or null on a miss
     */
    public LogQueryResult getCachedResult(String queryId) {
        try {
            // 1. Try the local cache first
            LogQueryResult result = localCache.get(queryId);
            if (result != null) {
                return result;
            }

            // 2. Fall back to Redis
            String cacheKey = "log_query:" + queryId;
            result = (LogQueryResult) redisTemplate.opsForValue().get(cacheKey);

            if (result != null) {
                // 3. Promote the Redis hit into the local cache
                localCache.put(queryId, result);
            }

            return result;

        } catch (Exception e) {
            log.error("Failed to read cached query result: queryId={}", queryId, e);
            return null;
        }
    }

    /**
     * Cache a query result.
     * @param queryId query ID
     * @param result the query result
     */
    public void cacheResult(String queryId, LogQueryResult result) {
        try {
            // 1. Local cache
            localCache.put(queryId, result);

            // 2. Redis, with expiry
            String cacheKey = "log_query:" + queryId;
            redisTemplate.opsForValue().set(cacheKey, result,
                    Duration.ofSeconds(properties.getQueryCacheExpireTime()));

            // 3. Trim the local cache if it has grown too large
            if (localCache.size() > properties.getQueryCacheSize()) {
                cleanLocalCache();
            }

        } catch (Exception e) {
            log.error("Failed to cache query result: queryId={}", queryId, e);
        }
    }

    /**
     * Evict roughly half of the local cache. ConcurrentHashMap iteration
     * order is arbitrary, so this is a crude eviction, not an LRU one.
     */
    private void cleanLocalCache() {
        try {
            int targetSize = properties.getQueryCacheSize() / 2;
            if (localCache.size() > targetSize) {
                List<String> keysToRemove = new ArrayList<>();
                int count = 0;

                for (String key : localCache.keySet()) {
                    if (count >= targetSize) {
                        break;
                    }
                    keysToRemove.add(key);
                    count++;
                }

                for (String key : keysToRemove) {
                    localCache.remove(key);
                }

                log.info("Trimmed local cache: removed={}, remaining={}", keysToRemove.size(), localCache.size());
            }

        } catch (Exception e) {
            log.error("Failed to trim local cache", e);
        }
    }
}
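
cleanLocalCache above evicts whichever entries the map iterator happens to yield first. If eviction quality matters, a size-bounded LRU map is a common alternative; here is a minimal sketch built on LinkedHashMap's access order (illustrative, and unlike ConcurrentHashMap it needs external synchronization under concurrent use):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Size-bounded LRU cache: the least-recently-accessed entry is evicted first. */
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruCache(int maxSize) {
        super(16, 0.75f, true); // accessOrder=true: get() moves an entry to the tail
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict the eldest entry once capacity is exceeded
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("q1", "r1");
        cache.put("q2", "r2");
        cache.get("q1");       // touch q1, so q2 becomes the eldest entry
        cache.put("q3", "r3"); // inserting q3 evicts q2
        System.out.println(cache.keySet()); // [q1, q3]
    }
}
```

For a drop-in concurrent version, `Collections.synchronizedMap(new LruCache<>(...))` works for modest sizes; purpose-built caches (e.g. Caffeine) are the usual production choice.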

7. Log Query Controller

7.1 BillionLogQueryController

/**
 * Billion-scale log query controller.
 */
@Slf4j
@RestController
@RequestMapping("/api/v1/logs")
public class BillionLogQueryController {

    @Autowired
    private LogQueryEngine queryEngine;

    @Autowired
    private LogIndexManager indexManager;

    /**
     * Query logs.
     */
    @PostMapping("/query")
    public ResponseEntity<Map<String, Object>> queryLogs(@RequestBody LogQueryRequest request) {
        try {
            LogQueryResult result = queryEngine.executeQuery(request);

            Map<String, Object> response = new HashMap<>();
            response.put("success", true);
            response.put("data", result);

            return ResponseEntity.ok(response);

        } catch (Exception e) {
            log.error("Log query failed", e);

            Map<String, Object> response = new HashMap<>();
            response.put("success", false);
            response.put("message", "Log query failed: " + e.getMessage());

            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
        }
    }

    /**
     * Add a single log entry.
     */
    @PostMapping("/add")
    public ResponseEntity<Map<String, Object>> addLog(@RequestBody LogEntry logEntry) {
        try {
            boolean success = indexManager.addLogToIndex("default", logEntry);

            Map<String, Object> response = new HashMap<>();
            response.put("success", success);
            response.put("message", success ? "Log added" : "Failed to add log");

            return ResponseEntity.ok(response);

        } catch (Exception e) {
            log.error("Failed to add log", e);

            Map<String, Object> response = new HashMap<>();
            response.put("success", false);
            response.put("message", "Failed to add log: " + e.getMessage());

            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
        }
    }

    /**
     * Add a batch of log entries.
     */
    @PostMapping("/batch-add")
    public ResponseEntity<Map<String, Object>> batchAddLogs(@RequestBody List<LogEntry> logEntries) {
        try {
            boolean success = indexManager.addLogsToIndex("default", logEntries);

            Map<String, Object> response = new HashMap<>();
            response.put("success", success);
            response.put("message", success ? "Log batch added" : "Failed to add log batch");
            response.put("count", logEntries.size());

            return ResponseEntity.ok(response);

        } catch (Exception e) {
            log.error("Failed to add log batch", e);

            Map<String, Object> response = new HashMap<>();
            response.put("success", false);
            response.put("message", "Failed to add log batch: " + e.getMessage());

            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
        }
    }
}

8. Summary

With the implementation above we have built a high-performance distributed log query system. Its key characteristics include:

8.1 Core Strengths

  1. Distributed storage: log sharding, data distribution, storage optimization
  2. Index optimization: inverted indexes, bitmap indexes, composite indexes
  3. Query engine: SQL parsing, query optimization, execution plans
  4. Distributed search: per-shard search, result aggregation, load balancing
  5. Performance optimization: caching strategies, parallel processing, memory optimization

8.2 Best Practices

  1. Indexing: a sound index strategy, controlled segment merging, careful memory management
  2. Querying: query caching, parallel execution, paginated results
  3. Storage: data sharding, compressed storage, hot/cold data separation
  4. Performance: caching strategies, memory tuning, concurrency control
  5. Operations: query monitoring, performance monitoring, resource monitoring

This solution not only handles massive log volumes but also covers the core building blocks (index optimization, a query engine, and caching) that underpin an enterprise-grade big-data platform.