Part 227: SpringBoot Chunked Upload Architecture in Practice - Enterprise-Grade Solutions for Large Files, Resumable Uploads, and Distributed Storage

Preface

In today's digital era, uploading large files has become a core requirement of enterprise applications. Traditional single-request uploads routinely run into network timeouts, memory exhaustion, and outright failures once files reach the gigabyte range. Chunked upload splits a large file into many small segments, enabling parallel transfers, resumable uploads, and progress monitoring, which markedly improves both success rates and user experience. As microservice architectures and cloud storage become the norm, designing and implementing an efficient, reliable chunked upload system is now a key skill for enterprise architects.

This article takes a deep dive into the architecture and hands-on implementation of chunked uploads with SpringBoot, from chunking strategy to resumable transfers and from distributed storage to performance tuning, offering comprehensive technical guidance for building a stable and efficient large-file upload solution.

1. Chunked Upload Architecture: Overview and Core Principles

1.1 Chunked Upload Architecture Design

A chunked upload system uses a layered design: file chunking, concurrent uploads, and resumable transfers work together to upload and manage large files efficiently. The overall flow and its supporting concerns are summarized in the diagram below.

graph TB
A[Client] --> B[File chunking]
B --> C[Chunk upload]
C --> D[Server receives chunk]
D --> E[Chunk validation]
E --> F[Chunk storage]
F --> G[Chunk merging]
G --> H[File integrity check]
H --> I[Storage complete]

J[Chunk management] --> K[Chunk metadata recording]
J --> L[Upload progress tracking]
J --> M[Resumable upload support]
J --> N[Chunk cleanup]

O[Storage strategy] --> P[Local storage]
O --> Q[Distributed storage]
O --> R[Cloud storage integration]
O --> S[Storage optimization]

T[Performance optimization] --> U[Concurrent upload]
T --> V[Network optimization]
T --> W[Caching strategy]
T --> X[Load balancing]

1.2 Core Features of Chunked Upload

1.2.1 File Chunking Strategy

  • Fixed chunk size: set an appropriate chunk size based on the file type and network environment (a minimal sizing sketch follows this list)
  • Dynamic chunk adjustment: adapt the chunk size to current network conditions
  • Chunk count control: keep the number of chunks reasonable to balance concurrency against management overhead
  • Chunk naming rules: use unique identifiers so chunks never collide
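
The fixed-size strategy above boils down to a little arithmetic. The sketch below is an illustrative helper, not code from the upload service: the size thresholds are assumptions, and the chunk-count formula matches what the controller's calculateChunkInfo method computes later in this article.

// Illustrative chunk sizing; the thresholds are assumptions and should be tuned per environment.
public final class ChunkSizer {

    private static final long MB = 1024 * 1024;

    /** Pick a chunk size from the total file size: smaller files get smaller chunks. */
    public static long chooseChunkSize(long fileSize) {
        if (fileSize <= 100 * MB) {
            return 1 * MB;
        } else if (fileSize <= 1024 * MB) {
            return 5 * MB;
        }
        return 10 * MB;
    }

    /** Number of chunks needed for the file, rounding the final partial chunk up. */
    public static int totalChunks(long fileSize, long chunkSize) {
        return (int) ((fileSize + chunkSize - 1) / chunkSize);
    }
}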

1.2.2 Resumable Upload Mechanism

  • Upload state recording: record the upload state of every chunk
  • Progress recovery: continue uploading from the point of interruption
  • Chunk retransmission: automatically retransmit failed chunks
  • Integrity verification: guarantee the completeness of the uploaded file

1.2.3 Concurrent Upload Optimization

  • Multi-threaded upload: upload several chunks in parallel (a client-side sketch follows this list)
  • Connection pool management: make efficient use of HTTP connections
  • Bandwidth control: allocate network bandwidth sensibly
  • Priority scheduling: upload important chunks first
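
As a rough sketch of the multi-threaded point above, the snippet below submits every chunk index to a bounded thread pool and waits for all of them to finish. uploadSingleChunk is a hypothetical placeholder for the real per-chunk HTTP call (for example the client upload method shown in section 3.2), and the pool size of 4 is an assumption.

// Sketch of client-side parallel chunk uploading with a bounded thread pool.
// uploadSingleChunk(...) is a placeholder for the actual per-chunk HTTP upload call.
public boolean uploadChunksInParallel(String uploadId, int totalChunks) {
    ExecutorService pool = Executors.newFixedThreadPool(4); // concurrency level is an assumption
    try {
        List<CompletableFuture<Boolean>> futures = new ArrayList<>();
        for (int i = 0; i < totalChunks; i++) {
            final int chunkIndex = i;
            futures.add(CompletableFuture.supplyAsync(() -> uploadSingleChunk(uploadId, chunkIndex), pool));
        }
        // A failed chunk simply yields false; callers can collect failed indexes and retry them later.
        return futures.stream().allMatch(CompletableFuture::join);
    } finally {
        pool.shutdown();
    }
}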

2. Core Implementation of Chunked Upload in SpringBoot

2.1 Chunked Upload Controller

// 分片上传控制器
@RestController
@RequestMapping("/api/upload")
@Slf4j
public class ChunkUploadController {

@Autowired
private ChunkUploadService chunkUploadService;

@Autowired
private FileMetadataService fileMetadataService;

@Autowired
private StorageService storageService;

/**
* 初始化分片上传
*/
@PostMapping("/init")
public ResponseEntity<UploadInitResponse> initChunkUpload(@RequestBody UploadInitRequest request) {
try {
// 1. 验证请求参数
validateInitRequest(request);

// 2. 生成上传ID
String uploadId = generateUploadId();

// 3. 计算分片信息
ChunkInfo chunkInfo = calculateChunkInfo(request.getFileSize(), request.getChunkSize());

// 4. 创建上传任务
UploadTask uploadTask = createUploadTask(uploadId, request, chunkInfo);

// 5. 保存上传任务
chunkUploadService.saveUploadTask(uploadTask);

UploadInitResponse response = new UploadInitResponse();
response.setUploadId(uploadId);
response.setChunkInfo(chunkInfo);
response.setExpireTime(uploadTask.getExpireTime());

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("初始化分片上传失败: {}", e.getMessage());
return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
.body(new UploadInitResponse(false, "初始化失败: " + e.getMessage()));
}
}

/**
* 上传分片
*/
@PostMapping("/chunk")
public ResponseEntity<ChunkUploadResponse> uploadChunk(
@RequestParam("uploadId") String uploadId,
@RequestParam("chunkIndex") Integer chunkIndex,
@RequestParam("chunkSize") Long chunkSize,
@RequestParam("file") MultipartFile file) {
try {
// 1. 验证上传任务
UploadTask uploadTask = chunkUploadService.getUploadTask(uploadId);
if (uploadTask == null) {
return ResponseEntity.status(HttpStatus.NOT_FOUND)
.body(new ChunkUploadResponse(false, "上传任务不存在"));
}

// 2. 验证分片参数
validateChunkRequest(uploadTask, chunkIndex, chunkSize, file);

// 3. 处理分片上传
ChunkUploadResult result = chunkUploadService.uploadChunk(uploadTask, chunkIndex, file);

// 4. 检查是否所有分片上传完成
if (result.isAllChunksUploaded()) {
// 触发分片合并
CompletableFuture.runAsync(() -> mergeChunks(uploadTask));
}

ChunkUploadResponse response = new ChunkUploadResponse();
response.setSuccess(true);
response.setChunkIndex(chunkIndex);
response.setUploadProgress(result.getUploadProgress());
response.setAllChunksUploaded(result.isAllChunksUploaded());

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("分片上传失败: uploadId={}, chunkIndex={}", uploadId, chunkIndex, e);
return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
.body(new ChunkUploadResponse(false, "分片上传失败: " + e.getMessage()));
}
}

/**
* 查询上传进度
*/
@GetMapping("/progress/{uploadId}")
public ResponseEntity<UploadProgressResponse> getUploadProgress(@PathVariable String uploadId) {
try {
UploadTask uploadTask = chunkUploadService.getUploadTask(uploadId);
if (uploadTask == null) {
return ResponseEntity.status(HttpStatus.NOT_FOUND)
.body(new UploadProgressResponse(false, "上传任务不存在"));
}

UploadProgress progress = chunkUploadService.getUploadProgress(uploadId);

UploadProgressResponse response = new UploadProgressResponse();
response.setSuccess(true);
response.setUploadId(uploadId);
response.setProgress(progress);

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("查询上传进度失败: uploadId={}", uploadId, e);
return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
.body(new UploadProgressResponse(false, "查询进度失败: " + e.getMessage()));
}
}

/**
* 取消上传
*/
@DeleteMapping("/cancel/{uploadId}")
public ResponseEntity<CancelUploadResponse> cancelUpload(@PathVariable String uploadId) {
try {
boolean success = chunkUploadService.cancelUpload(uploadId);

CancelUploadResponse response = new CancelUploadResponse();
response.setSuccess(success);
response.setMessage(success ? "取消上传成功" : "取消上传失败");

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("取消上传失败: uploadId={}", uploadId, e);
return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
.body(new CancelUploadResponse(false, "取消上传失败: " + e.getMessage()));
}
}

/**
* 验证初始化请求
*/
private void validateInitRequest(UploadInitRequest request) {
if (request.getFileName() == null || request.getFileName().trim().isEmpty()) {
throw new IllegalArgumentException("文件名不能为空");
}

if (request.getFileSize() <= 0) {
throw new IllegalArgumentException("文件大小必须大于0");
}

if (request.getFileSize() > getMaxFileSize()) {
throw new IllegalArgumentException("文件大小超过限制");
}

if (request.getChunkSize() <= 0) {
throw new IllegalArgumentException("分片大小必须大于0");
}
}

/**
* 验证分片请求
*/
private void validateChunkRequest(UploadTask uploadTask, Integer chunkIndex, Long chunkSize, MultipartFile file) {
if (chunkIndex < 0 || chunkIndex >= uploadTask.getChunkInfo().getTotalChunks()) {
throw new IllegalArgumentException("分片索引无效");
}

if (file.isEmpty()) {
throw new IllegalArgumentException("分片文件为空");
}

if (file.getSize() != chunkSize) {
throw new IllegalArgumentException("分片大小不匹配");
}
}

/**
* 生成上传ID
*/
private String generateUploadId() {
return UUID.randomUUID().toString().replace("-", "");
}

/**
* 计算分片信息
*/
private ChunkInfo calculateChunkInfo(long fileSize, long chunkSize) {
int totalChunks = (int) Math.ceil((double) fileSize / chunkSize);
long lastChunkSize = fileSize % chunkSize;
if (lastChunkSize == 0) {
lastChunkSize = chunkSize;
}

ChunkInfo chunkInfo = new ChunkInfo();
chunkInfo.setTotalChunks(totalChunks);
chunkInfo.setChunkSize(chunkSize);
chunkInfo.setLastChunkSize(lastChunkSize);
chunkInfo.setFileSize(fileSize);

return chunkInfo;
}

/**
* 创建上传任务
*/
private UploadTask createUploadTask(String uploadId, UploadInitRequest request, ChunkInfo chunkInfo) {
UploadTask uploadTask = new UploadTask();
uploadTask.setUploadId(uploadId);
uploadTask.setFileName(request.getFileName());
uploadTask.setFileSize(request.getFileSize());
uploadTask.setFileMd5(request.getFileMd5()); // 保存MD5,供合并后的完整性校验使用
uploadTask.setChunkInfo(chunkInfo);
uploadTask.setStatus(UploadStatus.INITIALIZED);
uploadTask.setCreateTime(System.currentTimeMillis());
uploadTask.setExpireTime(System.currentTimeMillis() + 24 * 60 * 60 * 1000); // 24小时过期

return uploadTask;
}

/**
* 合并分片
*/
private void mergeChunks(UploadTask uploadTask) {
try {
log.info("开始合并分片: uploadId={}", uploadTask.getUploadId());

// 1. 更新任务状态
chunkUploadService.updateUploadTaskStatus(uploadTask.getUploadId(), UploadStatus.MERGING);

// 2. 执行分片合并
String finalFilePath = chunkUploadService.mergeChunks(uploadTask);

// 3. 验证文件完整性
boolean isValid = validateFileIntegrity(uploadTask, finalFilePath);

if (isValid) {
// 4. 保存文件元数据
FileMetadata metadata = createFileMetadata(uploadTask, finalFilePath);
fileMetadataService.saveFileMetadata(metadata);

// 5. 更新任务状态
chunkUploadService.updateUploadTaskStatus(uploadTask.getUploadId(), UploadStatus.COMPLETED);

// 6. 清理分片文件
chunkUploadService.cleanupChunks(uploadTask.getUploadId());

log.info("分片合并完成: uploadId={}, filePath={}", uploadTask.getUploadId(), finalFilePath);
} else {
// 文件校验失败
chunkUploadService.updateUploadTaskStatus(uploadTask.getUploadId(), UploadStatus.FAILED);
log.error("文件校验失败: uploadId={}", uploadTask.getUploadId());
}

} catch (Exception e) {
log.error("分片合并失败: uploadId={}", uploadTask.getUploadId(), e);
chunkUploadService.updateUploadTaskStatus(uploadTask.getUploadId(), UploadStatus.FAILED);
}
}

/**
* 验证文件完整性
*/
private boolean validateFileIntegrity(UploadTask uploadTask, String filePath) {
try {
File file = new File(filePath);
if (!file.exists()) {
return false;
}

// 检查文件大小
if (file.length() != uploadTask.getFileSize()) {
return false;
}

// 检查文件MD5(如果提供了)
if (uploadTask.getFileMd5() != null) {
String actualMd5 = calculateFileMD5(file);
return uploadTask.getFileMd5().equals(actualMd5);
}

return true;

} catch (Exception e) {
log.error("文件完整性验证失败: uploadId={}", uploadTask.getUploadId(), e);
return false;
}
}

/**
* 创建文件元数据
*/
private FileMetadata createFileMetadata(UploadTask uploadTask, String filePath) {
FileMetadata metadata = new FileMetadata();
metadata.setFileId(UUID.randomUUID().toString());
metadata.setFileName(uploadTask.getFileName());
metadata.setFileSize(uploadTask.getFileSize());
metadata.setFilePath(filePath);
metadata.setUploadId(uploadTask.getUploadId());
metadata.setCreateTime(System.currentTimeMillis());
metadata.setStatus(FileStatus.ACTIVE);

return metadata;
}

/**
* 计算文件MD5
*/
private String calculateFileMD5(File file) throws IOException {
try (FileInputStream fis = new FileInputStream(file);
DigestInputStream dis = new DigestInputStream(fis, MessageDigest.getInstance("MD5"))) {

byte[] buffer = new byte[8192];
while (dis.read(buffer) != -1) {
// 读取文件内容
}

byte[] digest = dis.getMessageDigest().digest();
return bytesToHex(digest);
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException("MD5算法不可用", e);
}
}

/**
* 字节数组转十六进制字符串
*/
private String bytesToHex(byte[] bytes) {
StringBuilder result = new StringBuilder();
for (byte b : bytes) {
result.append(String.format("%02x", b));
}
return result.toString();
}

/**
* 获取最大文件大小
*/
private long getMaxFileSize() {
return 5L * 1024 * 1024 * 1024; // 5GB
}
}

// 上传初始化请求
public class UploadInitRequest {
private String fileName;
private long fileSize;
private long chunkSize;
private String fileMd5;
private String contentType;

// 构造函数和getter/setter方法
}

// 上传初始化响应
public class UploadInitResponse {
private boolean success = true;
private String message;
private String uploadId;
private ChunkInfo chunkInfo;
private long expireTime;

// 构造函数和getter/setter方法
}

// 分片信息
public class ChunkInfo {
private int totalChunks;
private long chunkSize;
private long lastChunkSize;
private long fileSize;

// 构造函数和getter/setter方法
}

// 上传任务
public class UploadTask {
private String uploadId;
private String fileName;
private long fileSize;
private ChunkInfo chunkInfo;
private UploadStatus status;
private long createTime;
private long expireTime;
private String fileMd5;

// 构造函数和getter/setter方法
}

// 上传状态枚举
public enum UploadStatus {
INITIALIZED, // 已初始化
UPLOADING, // 上传中
MERGING, // 合并中
COMPLETED, // 已完成
FAILED, // 失败
CANCELLED // 已取消
}

// 分片上传结果
public class ChunkUploadResult {
private boolean allChunksUploaded;
private double uploadProgress;
private List<Integer> uploadedChunks;

// 构造函数和getter/setter方法
}

// 分片上传响应
public class ChunkUploadResponse {
private boolean success = true;
private String message;
private int chunkIndex;
private double uploadProgress;
private boolean allChunksUploaded;

// 构造函数和getter/setter方法
}

// 上传进度响应
public class UploadProgressResponse {
private boolean success = true;
private String message;
private String uploadId;
private UploadProgress progress;

// 构造函数和getter/setter方法
}

// 取消上传响应
public class CancelUploadResponse {
private boolean success = true;
private String message;

// 构造函数和getter/setter方法
}
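
One operational detail the controller above depends on: Spring Boot's default servlet multipart limits (1MB per file in recent versions) will reject chunks larger than that, so the thresholds need to be raised to at least the chunk size you intend to use. A minimal application.yml sketch; the values are assumptions and should track your actual chunk size:

spring:
  servlet:
    multipart:
      enabled: true
      max-file-size: 20MB      # must be >= the largest chunk the client sends
      max-request-size: 25MB   # chunk payload plus the other form fields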

2.2 Chunked Upload Service Implementation

// 分片上传服务
@Service
@Slf4j
public class ChunkUploadService {

@Autowired
private UploadTaskRepository uploadTaskRepository;

@Autowired
private ChunkRepository chunkRepository;

@Autowired
private StorageService storageService;

@Autowired
private RedisTemplate<String, Object> redisTemplate;

private static final String UPLOAD_TASK_KEY = "upload:task:";
private static final String CHUNK_KEY = "upload:chunk:";

/**
* 保存上传任务
*/
public void saveUploadTask(UploadTask uploadTask) {
try {
// 1. 保存到数据库
uploadTaskRepository.save(uploadTask);

// 2. 缓存到Redis
String key = UPLOAD_TASK_KEY + uploadTask.getUploadId();
redisTemplate.opsForValue().set(key, uploadTask, Duration.ofHours(24));

log.info("上传任务保存成功: uploadId={}", uploadTask.getUploadId());

} catch (Exception e) {
log.error("保存上传任务失败: uploadId={}", uploadTask.getUploadId(), e);
throw new UploadTaskException("保存上传任务失败", e);
}
}

/**
* 获取上传任务
*/
public UploadTask getUploadTask(String uploadId) {
try {
// 1. 先从Redis获取
String key = UPLOAD_TASK_KEY + uploadId;
UploadTask uploadTask = (UploadTask) redisTemplate.opsForValue().get(key);

if (uploadTask != null) {
return uploadTask;
}

// 2. 从数据库获取
uploadTask = uploadTaskRepository.findByUploadId(uploadId);

if (uploadTask != null) {
// 3. 缓存到Redis
redisTemplate.opsForValue().set(key, uploadTask, Duration.ofHours(24));
}

return uploadTask;

} catch (Exception e) {
log.error("获取上传任务失败: uploadId={}", uploadId, e);
throw new UploadTaskException("获取上传任务失败", e);
}
}

/**
* 上传分片
*/
public ChunkUploadResult uploadChunk(UploadTask uploadTask, Integer chunkIndex, MultipartFile file) {
try {
// 1. 检查分片是否已上传
if (isChunkUploaded(uploadTask.getUploadId(), chunkIndex)) {
log.info("分片已存在: uploadId={}, chunkIndex={}", uploadTask.getUploadId(), chunkIndex);
return getChunkUploadResult(uploadTask.getUploadId());
}

// 2. 保存分片文件
String chunkPath = saveChunkFile(uploadTask.getUploadId(), chunkIndex, file);

// 3. 保存分片信息
ChunkInfo chunkInfo = createChunkInfo(uploadTask.getUploadId(), chunkIndex, chunkPath, file);
chunkRepository.save(chunkInfo);

// 4. 更新Redis缓存
updateChunkCache(uploadTask.getUploadId(), chunkIndex);

// 5. 更新上传进度
updateUploadProgress(uploadTask.getUploadId());

log.info("分片上传成功: uploadId={}, chunkIndex={}", uploadTask.getUploadId(), chunkIndex);

return getChunkUploadResult(uploadTask.getUploadId());

} catch (Exception e) {
log.error("分片上传失败: uploadId={}, chunkIndex={}", uploadTask.getUploadId(), chunkIndex, e);
throw new ChunkUploadException("分片上传失败", e);
}
}

/**
* 检查分片是否已上传
*/
private boolean isChunkUploaded(String uploadId, Integer chunkIndex) {
try {
// 1. 检查Redis缓存
String chunkKey = CHUNK_KEY + uploadId + ":" + chunkIndex;
Boolean exists = redisTemplate.hasKey(chunkKey);

if (exists != null && exists) {
return true;
}

// 2. 检查数据库
ChunkInfo chunkInfo = chunkRepository.findByUploadIdAndChunkIndex(uploadId, chunkIndex);
return chunkInfo != null;

} catch (Exception e) {
log.error("检查分片状态失败: uploadId={}, chunkIndex={}", uploadId, chunkIndex, e);
return false;
}
}

/**
* 保存分片文件
*/
private String saveChunkFile(String uploadId, Integer chunkIndex, MultipartFile file) {
try {
// 1. 创建分片目录
String chunkDir = getChunkDirectory(uploadId);
File directory = new File(chunkDir);
if (!directory.exists()) {
directory.mkdirs();
}

// 2. 生成分片文件名
String chunkFileName = String.format("chunk_%d.tmp", chunkIndex);
String chunkPath = chunkDir + File.separator + chunkFileName;

// 3. 保存文件
File chunkFile = new File(chunkPath);
file.transferTo(chunkFile);

return chunkPath;

} catch (Exception e) {
log.error("保存分片文件失败: uploadId={}, chunkIndex={}", uploadId, chunkIndex, e);
throw new ChunkFileException("保存分片文件失败", e);
}
}

/**
* 创建分片信息
*/
private ChunkInfo createChunkInfo(String uploadId, Integer chunkIndex, String chunkPath, MultipartFile file) {
ChunkInfo chunkInfo = new ChunkInfo();
chunkInfo.setUploadId(uploadId);
chunkInfo.setChunkIndex(chunkIndex);
chunkInfo.setChunkPath(chunkPath);
chunkInfo.setChunkSize(file.getSize());
chunkInfo.setUploadTime(System.currentTimeMillis());
chunkInfo.setStatus(ChunkStatus.UPLOADED);

return chunkInfo;
}

/**
* 更新分片缓存
*/
private void updateChunkCache(String uploadId, Integer chunkIndex) {
try {
String chunkKey = CHUNK_KEY + uploadId + ":" + chunkIndex;
redisTemplate.opsForValue().set(chunkKey, true, Duration.ofHours(24));

} catch (Exception e) {
log.error("更新分片缓存失败: uploadId={}, chunkIndex={}", uploadId, chunkIndex, e);
}
}

/**
* 更新上传进度
*/
private void updateUploadProgress(String uploadId) {
try {
// 1. 获取已上传的分片数量
int uploadedChunks = getUploadedChunkCount(uploadId);

// 2. 获取总分片数量
UploadTask uploadTask = getUploadTask(uploadId);
int totalChunks = uploadTask.getChunkInfo().getTotalChunks();

// 3. 计算进度
double progress = (double) uploadedChunks / totalChunks;

// 4. 更新任务状态:仍有分片未完成时标记为上传中,全部完成后由合并流程置为MERGING/COMPLETED
if (uploadedChunks < totalChunks) {
updateUploadTaskStatus(uploadId, UploadStatus.UPLOADING);
}

log.debug("上传进度更新: uploadId={}, progress={}", uploadId, progress);

} catch (Exception e) {
log.error("更新上传进度失败: uploadId={}", uploadId, e);
}
}

/**
* 获取已上传分片数量
*/
private int getUploadedChunkCount(String uploadId) {
try {
// 1. 从Redis获取(keys()为全量扫描,分片量大的生产环境建议改用Set结构记录已上传分片)
String pattern = CHUNK_KEY + uploadId + ":*";
Set<String> keys = redisTemplate.keys(pattern);

if (keys != null && !keys.isEmpty()) {
return keys.size();
}

// 2. 从数据库获取
return chunkRepository.countByUploadIdAndStatus(uploadId, ChunkStatus.UPLOADED);

} catch (Exception e) {
log.error("获取已上传分片数量失败: uploadId={}", uploadId, e);
return 0;
}
}

/**
* 获取分片上传结果
*/
private ChunkUploadResult getChunkUploadResult(String uploadId) {
try {
UploadTask uploadTask = getUploadTask(uploadId);
int uploadedChunks = getUploadedChunkCount(uploadId);
int totalChunks = uploadTask.getChunkInfo().getTotalChunks();

ChunkUploadResult result = new ChunkUploadResult();
result.setAllChunksUploaded(uploadedChunks == totalChunks);
result.setUploadProgress((double) uploadedChunks / totalChunks);

// 获取已上传的分片列表
List<Integer> uploadedChunkList = getUploadedChunkList(uploadId);
result.setUploadedChunks(uploadedChunkList);

return result;

} catch (Exception e) {
log.error("获取分片上传结果失败: uploadId={}", uploadId, e);
throw new ChunkUploadException("获取分片上传结果失败", e);
}
}

/**
* 获取已上传分片列表
*/
private List<Integer> getUploadedChunkList(String uploadId) {
try {
List<ChunkInfo> chunks = chunkRepository.findByUploadIdAndStatus(uploadId, ChunkStatus.UPLOADED);
return chunks.stream()
.map(ChunkInfo::getChunkIndex)
.sorted()
.collect(Collectors.toList());

} catch (Exception e) {
log.error("获取已上传分片列表失败: uploadId={}", uploadId, e);
return new ArrayList<>();
}
}

/**
* 合并分片
*/
public String mergeChunks(UploadTask uploadTask) {
try {
log.info("开始合并分片: uploadId={}", uploadTask.getUploadId());

// 1. 获取所有分片
List<ChunkInfo> chunks = chunkRepository.findByUploadIdAndStatusOrderByChunkIndex(
uploadTask.getUploadId(), ChunkStatus.UPLOADED);

if (chunks.size() != uploadTask.getChunkInfo().getTotalChunks()) {
throw new ChunkMergeException("分片数量不匹配");
}

// 2. 创建最终文件
String finalFilePath = createFinalFilePath(uploadTask);
File finalFile = new File(finalFilePath);

// 3. 确保目录存在
File parentDir = finalFile.getParentFile();
if (!parentDir.exists()) {
parentDir.mkdirs();
}

// 4. 合并分片
try (FileOutputStream fos = new FileOutputStream(finalFile)) {
for (ChunkInfo chunk : chunks) {
File chunkFile = new File(chunk.getChunkPath());
if (!chunkFile.exists()) {
throw new ChunkMergeException("分片文件不存在: " + chunk.getChunkPath());
}

try (FileInputStream fis = new FileInputStream(chunkFile)) {
byte[] buffer = new byte[8192];
int bytesRead;
while ((bytesRead = fis.read(buffer)) != -1) {
fos.write(buffer, 0, bytesRead);
}
}
}
}

log.info("分片合并完成: uploadId={}, filePath={}", uploadTask.getUploadId(), finalFilePath);
return finalFilePath;

} catch (Exception e) {
log.error("分片合并失败: uploadId={}", uploadTask.getUploadId(), e);
throw new ChunkMergeException("分片合并失败", e);
}
}

/**
* 创建最终文件路径
*/
private String createFinalFilePath(UploadTask uploadTask) {
String uploadDir = getUploadDirectory();
String fileName = uploadTask.getFileName();
String fileExtension = getFileExtension(fileName);
String baseName = getBaseFileName(fileName);

// 生成唯一文件名
String uniqueFileName = baseName + "_" + uploadTask.getUploadId() + fileExtension;

return uploadDir + File.separator + uniqueFileName;
}

/**
* 获取上传目录
*/
private String getUploadDirectory() {
String uploadDir = System.getProperty("user.dir") + File.separator + "uploads";
File directory = new File(uploadDir);
if (!directory.exists()) {
directory.mkdirs();
}
return uploadDir;
}

/**
* 获取分片目录
*/
private String getChunkDirectory(String uploadId) {
String chunkDir = getUploadDirectory() + File.separator + "chunks" + File.separator + uploadId;
File directory = new File(chunkDir);
if (!directory.exists()) {
directory.mkdirs();
}
return chunkDir;
}

/**
* 获取文件扩展名
*/
private String getFileExtension(String fileName) {
int lastDotIndex = fileName.lastIndexOf('.');
if (lastDotIndex > 0) {
return fileName.substring(lastDotIndex);
}
return "";
}

/**
* 获取基础文件名
*/
private String getBaseFileName(String fileName) {
int lastDotIndex = fileName.lastIndexOf('.');
if (lastDotIndex > 0) {
return fileName.substring(0, lastDotIndex);
}
return fileName;
}

/**
* 清理分片文件
*/
public void cleanupChunks(String uploadId) {
try {
// 1. 删除分片文件
String chunkDir = getChunkDirectory(uploadId);
File directory = new File(chunkDir);
if (directory.exists()) {
deleteDirectory(directory);
}

// 2. 删除分片记录
chunkRepository.deleteByUploadId(uploadId);

// 3. 清理Redis缓存
String pattern = CHUNK_KEY + uploadId + ":*";
Set<String> keys = redisTemplate.keys(pattern);
if (keys != null && !keys.isEmpty()) {
redisTemplate.delete(keys);
}

log.info("分片清理完成: uploadId={}", uploadId);

} catch (Exception e) {
log.error("分片清理失败: uploadId={}", uploadId, e);
}
}

/**
* 删除目录
*/
private void deleteDirectory(File directory) {
if (directory.isDirectory()) {
File[] files = directory.listFiles();
if (files != null) {
for (File file : files) {
deleteDirectory(file);
}
}
}
directory.delete();
}

/**
* 取消上传
*/
public boolean cancelUpload(String uploadId) {
try {
// 1. 更新任务状态
updateUploadTaskStatus(uploadId, UploadStatus.CANCELLED);

// 2. 清理分片
cleanupChunks(uploadId);

// 3. 清理任务缓存
String taskKey = UPLOAD_TASK_KEY + uploadId;
redisTemplate.delete(taskKey);

log.info("上传取消成功: uploadId={}", uploadId);
return true;

} catch (Exception e) {
log.error("取消上传失败: uploadId={}", uploadId, e);
return false;
}
}

/**
* 更新上传任务状态
*/
public void updateUploadTaskStatus(String uploadId, UploadStatus status) {
try {
// 1. 更新数据库
uploadTaskRepository.updateStatusByUploadId(uploadId, status);

// 2. 更新缓存
UploadTask uploadTask = getUploadTask(uploadId);
if (uploadTask != null) {
uploadTask.setStatus(status);
String key = UPLOAD_TASK_KEY + uploadId;
redisTemplate.opsForValue().set(key, uploadTask, Duration.ofHours(24));
}

} catch (Exception e) {
log.error("更新上传任务状态失败: uploadId={}, status={}", uploadId, status, e);
}
}

/**
* 获取上传进度
*/
public UploadProgress getUploadProgress(String uploadId) {
try {
UploadTask uploadTask = getUploadTask(uploadId);
if (uploadTask == null) {
return null;
}

int uploadedChunks = getUploadedChunkCount(uploadId);
int totalChunks = uploadTask.getChunkInfo().getTotalChunks();

UploadProgress progress = new UploadProgress();
progress.setUploadId(uploadId);
progress.setUploadedChunks(uploadedChunks);
progress.setTotalChunks(totalChunks);
progress.setProgress((double) uploadedChunks / totalChunks);
progress.setStatus(uploadTask.getStatus());

return progress;

} catch (Exception e) {
log.error("获取上传进度失败: uploadId={}", uploadId, e);
return null;
}
}
}

// 分片信息实体
@Entity
@Table(name = "chunk_info")
public class ChunkInfo {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;

@Column(name = "upload_id", nullable = false)
private String uploadId;

@Column(name = "chunk_index", nullable = false)
private Integer chunkIndex;

@Column(name = "chunk_path", nullable = false)
private String chunkPath;

@Column(name = "chunk_size", nullable = false)
private Long chunkSize;

@Column(name = "upload_time", nullable = false)
private Long uploadTime;

@Enumerated(EnumType.STRING)
@Column(name = "status", nullable = false)
private ChunkStatus status;

// 构造函数和getter/setter方法
}

// 分片状态枚举
public enum ChunkStatus {
UPLOADING, // 上传中
UPLOADED, // 已上传
FAILED // 失败
}

// 上传进度
public class UploadProgress {
private String uploadId;
private int uploadedChunks;
private int totalChunks;
private double progress;
private UploadStatus status;

// 构造函数和getter/setter方法
}

3. Resumable Upload Implementation

3.1 Resumable Upload Manager

// 断点续传管理器
@Component
@Slf4j
public class ResumeUploadManager {

@Autowired
private ChunkUploadService chunkUploadService;

@Autowired
private UploadTaskRepository uploadTaskRepository;

@Autowired
private ChunkRepository chunkRepository;

@Autowired
private RedisTemplate<String, Object> redisTemplate;

private static final String RESUME_KEY = "resume:upload:";

/**
* 检查是否可以断点续传
*/
public ResumeCheckResult checkResumeUpload(String uploadId) {
try {
// 1. 获取上传任务
UploadTask uploadTask = chunkUploadService.getUploadTask(uploadId);
if (uploadTask == null) {
return new ResumeCheckResult(false, "上传任务不存在");
}

// 2. 检查任务状态
if (uploadTask.getStatus() == UploadStatus.COMPLETED) {
return new ResumeCheckResult(false, "文件已上传完成");
}

if (uploadTask.getStatus() == UploadStatus.CANCELLED) {
return new ResumeCheckResult(false, "上传已取消");
}

// 3. 检查任务是否过期
if (System.currentTimeMillis() > uploadTask.getExpireTime()) {
return new ResumeCheckResult(false, "上传任务已过期");
}

// 4. 获取已上传的分片
List<Integer> uploadedChunks = getUploadedChunks(uploadId);

// 5. 计算需要上传的分片
List<Integer> remainingChunks = calculateRemainingChunks(uploadTask, uploadedChunks);

ResumeCheckResult result = new ResumeCheckResult();
result.setCanResume(true);
result.setUploadId(uploadId);
result.setUploadedChunks(uploadedChunks);
result.setRemainingChunks(remainingChunks);
result.setUploadProgress((double) uploadedChunks.size() / uploadTask.getChunkInfo().getTotalChunks());

return result;

} catch (Exception e) {
log.error("检查断点续传失败: uploadId={}", uploadId, e);
return new ResumeCheckResult(false, "检查断点续传失败: " + e.getMessage());
}
}

/**
* 恢复上传
*/
public ResumeUploadResult resumeUpload(String uploadId) {
try {
// 1. 检查是否可以续传
ResumeCheckResult checkResult = checkResumeUpload(uploadId);
if (!checkResult.isCanResume()) {
return new ResumeUploadResult(false, checkResult.getMessage());
}

// 2. 更新任务状态
chunkUploadService.updateUploadTaskStatus(uploadId, UploadStatus.UPLOADING);

// 3. 缓存续传信息
cacheResumeInfo(uploadId, checkResult);

ResumeUploadResult result = new ResumeUploadResult();
result.setSuccess(true);
result.setUploadId(uploadId);
result.setRemainingChunks(checkResult.getRemainingChunks());
result.setMessage("断点续传已启动");

log.info("断点续传启动成功: uploadId={}, 剩余分片数={}",
uploadId, checkResult.getRemainingChunks().size());

return result;

} catch (Exception e) {
log.error("恢复上传失败: uploadId={}", uploadId, e);
return new ResumeUploadResult(false, "恢复上传失败: " + e.getMessage());
}
}

/**
* 获取已上传的分片
*/
private List<Integer> getUploadedChunks(String uploadId) {
try {
// 1. 从Redis获取
String pattern = "upload:chunk:" + uploadId + ":*";
Set<String> keys = redisTemplate.keys(pattern);

if (keys != null && !keys.isEmpty()) {
return keys.stream()
.map(key -> {
String chunkIndexStr = key.substring(key.lastIndexOf(':') + 1);
return Integer.parseInt(chunkIndexStr);
})
.sorted()
.collect(Collectors.toList());
}

// 2. 从数据库获取
List<ChunkInfo> chunks = chunkRepository.findByUploadIdAndStatusOrderByChunkIndex(
uploadId, ChunkStatus.UPLOADED);

return chunks.stream()
.map(ChunkInfo::getChunkIndex)
.collect(Collectors.toList());

} catch (Exception e) {
log.error("获取已上传分片失败: uploadId={}", uploadId, e);
return new ArrayList<>();
}
}

/**
* 计算剩余分片
*/
private List<Integer> calculateRemainingChunks(UploadTask uploadTask, List<Integer> uploadedChunks) {
int totalChunks = uploadTask.getChunkInfo().getTotalChunks();
Set<Integer> uploadedSet = new HashSet<>(uploadedChunks);

List<Integer> remainingChunks = new ArrayList<>();
for (int i = 0; i < totalChunks; i++) {
if (!uploadedSet.contains(i)) {
remainingChunks.add(i);
}
}

return remainingChunks;
}

/**
* 缓存续传信息
*/
private void cacheResumeInfo(String uploadId, ResumeCheckResult checkResult) {
try {
String key = RESUME_KEY + uploadId;
redisTemplate.opsForValue().set(key, checkResult, Duration.ofHours(24));

} catch (Exception e) {
log.error("缓存续传信息失败: uploadId={}", uploadId, e);
}
}

/**
* 获取续传信息
*/
public ResumeCheckResult getResumeInfo(String uploadId) {
try {
String key = RESUME_KEY + uploadId;
return (ResumeCheckResult) redisTemplate.opsForValue().get(key);

} catch (Exception e) {
log.error("获取续传信息失败: uploadId={}", uploadId, e);
return null;
}
}

/**
* 清理续传信息
*/
public void cleanupResumeInfo(String uploadId) {
try {
String key = RESUME_KEY + uploadId;
redisTemplate.delete(key);

} catch (Exception e) {
log.error("清理续传信息失败: uploadId={}", uploadId, e);
}
}
}

// 续传检查结果
public class ResumeCheckResult {
private boolean canResume;
private String message;
private String uploadId;
private List<Integer> uploadedChunks;
private List<Integer> remainingChunks;
private double uploadProgress;

// 构造函数和getter/setter方法
}

// 续传结果
public class ResumeUploadResult {
private boolean success;
private String message;
private String uploadId;
private List<Integer> remainingChunks;

// 构造函数和getter/setter方法
}
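
The client code in section 3.2 calls GET /api/upload/resume/{uploadId} to ask the server what has already been uploaded, but the controller in section 2.1 does not expose that route. A minimal sketch of the missing endpoint, delegating to the ResumeUploadManager above; the mapping path is inferred from the client code, and the logic could just as well live as an extra method on ChunkUploadController:

// Sketch of a resume-check endpoint backed by ResumeUploadManager.
@RestController
@RequestMapping("/api/upload")
public class ResumeUploadController {

    @Autowired
    private ResumeUploadManager resumeUploadManager;

    /** Tells the client which chunks already exist so it can upload only the remaining ones. */
    @GetMapping("/resume/{uploadId}")
    public ResponseEntity<ResumeCheckResult> checkResume(@PathVariable String uploadId) {
        return ResponseEntity.ok(resumeUploadManager.checkResumeUpload(uploadId));
    }
}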

3.2 Client-Side Resumable Upload Implementation

// 客户端断点续传管理器
@Component
@Slf4j
public class ClientResumeManager {

@Autowired
private RestTemplate restTemplate;

@Autowired
private LocalStorageService localStorageService;

private static final String UPLOAD_URL = "http://localhost:8080/api/upload";

/**
* 执行断点续传
*/
public ResumeUploadResult resumeUpload(String uploadId, String filePath) {
try {
// 1. 检查本地续传信息
ResumeInfo localResumeInfo = localStorageService.getResumeInfo(uploadId);

if (localResumeInfo != null) {
// 2. 验证文件是否发生变化
if (isFileChanged(filePath, localResumeInfo)) {
log.info("文件已发生变化,重新上传: uploadId={}", uploadId);
return restartUpload(filePath);
}

// 3. 从断点继续上传
return continueUploadFromResume(uploadId, filePath, localResumeInfo);
} else {
// 4. 检查服务端续传信息
ResumeCheckResult serverResumeInfo = checkServerResumeInfo(uploadId);

if (serverResumeInfo != null && serverResumeInfo.isCanResume()) {
return continueUploadFromServer(uploadId, filePath, serverResumeInfo);
} else {
log.info("无法续传,重新上传: uploadId={}", uploadId);
return restartUpload(filePath);
}
}

} catch (Exception e) {
log.error("断点续传失败: uploadId={}", uploadId, e);
return new ResumeUploadResult(false, "断点续传失败: " + e.getMessage());
}
}

/**
* 检查文件是否发生变化
*/
private boolean isFileChanged(String filePath, ResumeInfo resumeInfo) {
try {
File file = new File(filePath);
if (!file.exists()) {
return true;
}

// 检查文件大小
if (file.length() != resumeInfo.getFileSize()) {
return true;
}

// 检查文件MD5
if (resumeInfo.getFileMd5() != null) {
String currentMd5 = calculateFileMD5(file);
return !resumeInfo.getFileMd5().equals(currentMd5);
}

return false;

} catch (Exception e) {
log.error("检查文件变化失败: filePath={}", filePath, e);
return true;
}
}

/**
* 从本地续传信息继续上传
*/
private ResumeUploadResult continueUploadFromResume(String uploadId, String filePath, ResumeInfo resumeInfo) {
try {
// 1. 获取剩余分片
List<Integer> remainingChunks = resumeInfo.getRemainingChunks();

if (remainingChunks.isEmpty()) {
return new ResumeUploadResult(true, "文件已上传完成");
}

// 2. 上传剩余分片
for (Integer chunkIndex : remainingChunks) {
boolean success = uploadChunk(uploadId, chunkIndex, filePath, resumeInfo);
if (!success) {
return new ResumeUploadResult(false, "分片上传失败: " + chunkIndex);
}
}

// 3. 清理本地续传信息
localStorageService.removeResumeInfo(uploadId);

return new ResumeUploadResult(true, "断点续传完成");

} catch (Exception e) {
log.error("从本地续传信息继续上传失败: uploadId={}", uploadId, e);
return new ResumeUploadResult(false, "续传失败: " + e.getMessage());
}
}

/**
* 从服务端续传信息继续上传
*/
private ResumeUploadResult continueUploadFromServer(String uploadId, String filePath, ResumeCheckResult serverResumeInfo) {
try {
// 1. 获取剩余分片
List<Integer> remainingChunks = serverResumeInfo.getRemainingChunks();

if (remainingChunks.isEmpty()) {
return new ResumeUploadResult(true, "文件已上传完成");
}

// 2. 创建本地续传信息
ResumeInfo localResumeInfo = createLocalResumeInfo(uploadId, filePath, remainingChunks);
localStorageService.saveResumeInfo(uploadId, localResumeInfo);

// 3. 上传剩余分片
for (Integer chunkIndex : remainingChunks) {
boolean success = uploadChunk(uploadId, chunkIndex, filePath, localResumeInfo);
if (!success) {
return new ResumeUploadResult(false, "分片上传失败: " + chunkIndex);
}
}

// 4. 清理本地续传信息
localStorageService.removeResumeInfo(uploadId);

return new ResumeUploadResult(true, "断点续传完成");

} catch (Exception e) {
log.error("从服务端续传信息继续上传失败: uploadId={}", uploadId, e);
return new ResumeUploadResult(false, "续传失败: " + e.getMessage());
}
}

/**
* 上传分片
*/
private boolean uploadChunk(String uploadId, Integer chunkIndex, String filePath, ResumeInfo resumeInfo) {
try {
// 1. 读取分片数据
byte[] chunkData = readChunkData(filePath, chunkIndex, resumeInfo.getChunkSize());

// 2. 创建分片文件
MultipartFile chunkFile = createChunkMultipartFile(chunkData, chunkIndex);

// 3. 上传分片
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.MULTIPART_FORM_DATA);

MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
body.add("uploadId", uploadId);
body.add("chunkIndex", chunkIndex);
body.add("chunkSize", chunkData.length);
body.add("file", chunkFile);

HttpEntity<MultiValueMap<String, Object>> requestEntity = new HttpEntity<>(body, headers);

ResponseEntity<ChunkUploadResponse> response = restTemplate.postForEntity(
UPLOAD_URL + "/chunk", requestEntity, ChunkUploadResponse.class);

if (response.getStatusCode() == HttpStatus.OK && response.getBody().isSuccess()) {
log.debug("分片上传成功: uploadId={}, chunkIndex={}", uploadId, chunkIndex);
return true;
} else {
log.error("分片上传失败: uploadId={}, chunkIndex={}", uploadId, chunkIndex);
return false;
}

} catch (Exception e) {
log.error("分片上传异常: uploadId={}, chunkIndex={}", uploadId, chunkIndex, e);
return false;
}
}

/**
* 读取分片数据
*/
private byte[] readChunkData(String filePath, Integer chunkIndex, long chunkSize) throws IOException {
try (RandomAccessFile file = new RandomAccessFile(filePath, "r")) {
long offset = (long) chunkIndex * chunkSize;
file.seek(offset);

byte[] buffer = new byte[(int) chunkSize];
int bytesRead = file.read(buffer);

if (bytesRead < buffer.length) {
byte[] actualData = new byte[bytesRead];
System.arraycopy(buffer, 0, actualData, 0, bytesRead);
return actualData;
}

return buffer;
}
}

/**
* 创建分片MultipartFile
*/
private MultipartFile createChunkMultipartFile(byte[] data, Integer chunkIndex) {
return new MultipartFile() {
@Override
public String getName() {
return "file";
}

@Override
public String getOriginalFilename() {
return "chunk_" + chunkIndex;
}

@Override
public String getContentType() {
return "application/octet-stream";
}

@Override
public boolean isEmpty() {
return data.length == 0;
}

@Override
public long getSize() {
return data.length;
}

@Override
public byte[] getBytes() throws IOException {
return data;
}

@Override
public InputStream getInputStream() throws IOException {
return new ByteArrayInputStream(data);
}

@Override
public void transferTo(File dest) throws IOException, IllegalStateException {
try (FileOutputStream fos = new FileOutputStream(dest)) {
fos.write(data);
}
}
};
}

/**
* 检查服务端续传信息
*/
private ResumeCheckResult checkServerResumeInfo(String uploadId) {
try {
String url = UPLOAD_URL + "/resume/" + uploadId;
ResponseEntity<ResumeCheckResult> response = restTemplate.getForEntity(url, ResumeCheckResult.class);

if (response.getStatusCode() == HttpStatus.OK) {
return response.getBody();
}

return null;

} catch (Exception e) {
log.error("检查服务端续传信息失败: uploadId={}", uploadId, e);
return null;
}
}

/**
* 创建本地续传信息
*/
private ResumeInfo createLocalResumeInfo(String uploadId, String filePath, List<Integer> remainingChunks) {
try {
File file = new File(filePath);

ResumeInfo resumeInfo = new ResumeInfo();
resumeInfo.setUploadId(uploadId);
resumeInfo.setFilePath(filePath);
resumeInfo.setFileSize(file.length());
resumeInfo.setFileMd5(calculateFileMD5(file));
resumeInfo.setChunkSize(1024 * 1024); // 1MB
resumeInfo.setRemainingChunks(remainingChunks);
resumeInfo.setCreateTime(System.currentTimeMillis());

return resumeInfo;

} catch (Exception e) {
log.error("创建本地续传信息失败: uploadId={}", uploadId, e);
return null;
}
}

/**
* 重新开始上传
*/
private ResumeUploadResult restartUpload(String filePath) {
try {
// 1. 初始化上传
UploadInitRequest initRequest = createInitRequest(filePath);
ResponseEntity<UploadInitResponse> initResponse = restTemplate.postForEntity(
UPLOAD_URL + "/init", initRequest, UploadInitResponse.class);

if (initResponse.getStatusCode() != HttpStatus.OK || !initResponse.getBody().isSuccess()) {
return new ResumeUploadResult(false, "初始化上传失败");
}

String uploadId = initResponse.getBody().getUploadId();

// 2. 创建续传信息(初始状态下所有分片都待上传)
int totalChunks = initResponse.getBody().getChunkInfo().getTotalChunks();
List<Integer> allChunks = IntStream.range(0, totalChunks).boxed().collect(Collectors.toList());
ResumeInfo resumeInfo = createLocalResumeInfo(uploadId, filePath, allChunks);
localStorageService.saveResumeInfo(uploadId, resumeInfo);

// 3. 开始上传
return continueUploadFromResume(uploadId, filePath, resumeInfo);

} catch (Exception e) {
log.error("重新开始上传失败: filePath={}", filePath, e);
return new ResumeUploadResult(false, "重新开始上传失败: " + e.getMessage());
}
}

/**
* 创建初始化请求
*/
private UploadInitRequest createInitRequest(String filePath) {
try {
File file = new File(filePath);

UploadInitRequest request = new UploadInitRequest();
request.setFileName(file.getName());
request.setFileSize(file.length());
request.setChunkSize(1024 * 1024); // 1MB
request.setFileMd5(calculateFileMD5(file));
request.setContentType("application/octet-stream");

return request;

} catch (Exception e) {
log.error("创建初始化请求失败: filePath={}", filePath, e);
return null;
}
}

/**
* 计算文件MD5
*/
private String calculateFileMD5(File file) throws IOException {
try (FileInputStream fis = new FileInputStream(file);
DigestInputStream dis = new DigestInputStream(fis, MessageDigest.getInstance("MD5"))) {

byte[] buffer = new byte[8192];
while (dis.read(buffer) != -1) {
// 读取文件内容
}

byte[] digest = dis.getMessageDigest().digest();
return bytesToHex(digest);
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException("MD5算法不可用", e);
}
}

/**
* 字节数组转十六进制字符串
*/
private String bytesToHex(byte[] bytes) {
StringBuilder result = new StringBuilder();
for (byte b : bytes) {
result.append(String.format("%02x", b));
}
return result.toString();
}
}

// 续传信息
public class ResumeInfo {
private String uploadId;
private String filePath;
private long fileSize;
private String fileMd5;
private long chunkSize;
private List<Integer> remainingChunks;
private long createTime;

// 构造函数和getter/setter方法
}

// 本地存储服务
@Component
public class LocalStorageService {

private static final String RESUME_DIR = System.getProperty("user.dir") + File.separator + "resume";

@PostConstruct
public void init() {
File directory = new File(RESUME_DIR);
if (!directory.exists()) {
directory.mkdirs();
}
}

/**
* 保存续传信息
*/
public void saveResumeInfo(String uploadId, ResumeInfo resumeInfo) {
try {
String filePath = RESUME_DIR + File.separator + uploadId + ".json";
try (FileWriter writer = new FileWriter(filePath)) {
ObjectMapper mapper = new ObjectMapper();
mapper.writeValue(writer, resumeInfo);
}
} catch (Exception e) {
log.error("保存续传信息失败: uploadId={}", uploadId, e);
}
}

/**
* 获取续传信息
*/
public ResumeInfo getResumeInfo(String uploadId) {
try {
String filePath = RESUME_DIR + File.separator + uploadId + ".json";
File file = new File(filePath);

if (!file.exists()) {
return null;
}

try (FileReader reader = new FileReader(file)) {
ObjectMapper mapper = new ObjectMapper();
return mapper.readValue(reader, ResumeInfo.class);
}
} catch (Exception e) {
log.error("获取续传信息失败: uploadId={}", uploadId, e);
return null;
}
}

/**
* 删除续传信息
*/
public void removeResumeInfo(String uploadId) {
try {
String filePath = RESUME_DIR + File.separator + uploadId + ".json";
File file = new File(filePath);

if (file.exists()) {
file.delete();
}
} catch (Exception e) {
log.error("删除续传信息失败: uploadId={}", uploadId, e);
}
}
}

4. Distributed Storage Integration

4.1 Distributed Storage Adapters

// 分布式存储适配器接口
public interface DistributedStorageAdapter {

/**
* 上传分片
*/
String uploadChunk(String uploadId, Integer chunkIndex, byte[] data) throws StorageException;

/**
* 下载分片
*/
byte[] downloadChunk(String chunkPath) throws StorageException;

/**
* 合并分片
*/
String mergeChunks(String uploadId, List<String> chunkPaths, String finalFileName) throws StorageException;

/**
* 删除分片
*/
void deleteChunk(String chunkPath) throws StorageException;

/**
* 删除文件
*/
void deleteFile(String filePath) throws StorageException;

/**
* 检查文件是否存在
*/
boolean fileExists(String filePath) throws StorageException;

/**
* 获取文件信息
*/
FileInfo getFileInfo(String filePath) throws StorageException;
}

// 阿里云OSS适配器
@Component("aliyunOssAdapter")
@Slf4j
public class AliyunOssAdapter implements DistributedStorageAdapter {

@Autowired
private OSS ossClient;

@Value("${storage.aliyun.bucket-name}")
private String bucketName;

@Value("${storage.aliyun.endpoint}")
private String endpoint;

@Override
public String uploadChunk(String uploadId, Integer chunkIndex, byte[] data) throws StorageException {
try {
String objectKey = generateChunkObjectKey(uploadId, chunkIndex);

PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, objectKey,
new ByteArrayInputStream(data));

PutObjectResult result = ossClient.putObject(putObjectRequest);

log.debug("分片上传成功: uploadId={}, chunkIndex={}, objectKey={}",
uploadId, chunkIndex, objectKey);

return objectKey;

} catch (Exception e) {
log.error("分片上传失败: uploadId={}, chunkIndex={}", uploadId, chunkIndex, e);
throw new StorageException("分片上传失败", e);
}
}

@Override
public byte[] downloadChunk(String chunkPath) throws StorageException {
try {
OSSObject ossObject = ossClient.getObject(bucketName, chunkPath);

try (InputStream inputStream = ossObject.getObjectContent()) {
return inputStream.readAllBytes();
}

} catch (Exception e) {
log.error("分片下载失败: chunkPath={}", chunkPath, e);
throw new StorageException("分片下载失败", e);
}
}

@Override
public String mergeChunks(String uploadId, List<String> chunkPaths, String finalFileName) throws StorageException {
try {
String finalObjectKey = generateFinalObjectKey(uploadId, finalFileName);

// 初始化OSS侧的Multipart Upload,注意此uploadId由OSS生成,与业务uploadId不同
InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, finalObjectKey);
String ossUploadId = ossClient.initiateMultipartUpload(initRequest).getUploadId();

// 将每个分片对象以UploadPartCopy方式拷贝为最终对象的一个Part,并收集真实的ETag
List<PartETag> partETags = new ArrayList<>();
for (int i = 0; i < chunkPaths.size(); i++) {
UploadPartCopyRequest copyRequest = new UploadPartCopyRequest(
bucketName, chunkPaths.get(i), bucketName, finalObjectKey);
copyRequest.setUploadId(ossUploadId);
copyRequest.setPartNumber(i + 1);
UploadPartCopyResult copyResult = ossClient.uploadPartCopy(copyRequest);
partETags.add(copyResult.getPartETag());
}

// 提交合并,由OSS在服务端完成分片拼接
CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest(
bucketName, finalObjectKey, ossUploadId, partETags);
ossClient.completeMultipartUpload(completeRequest);

// 清理分片文件
cleanupChunks(chunkPaths);

log.info("分片合并完成: uploadId={}, finalObjectKey={}", uploadId, finalObjectKey);

return finalObjectKey;

} catch (Exception e) {
log.error("分片合并失败: uploadId={}", uploadId, e);
throw new StorageException("分片合并失败", e);
}
}

@Override
public void deleteChunk(String chunkPath) throws StorageException {
try {
ossClient.deleteObject(bucketName, chunkPath);
log.debug("分片删除成功: chunkPath={}", chunkPath);

} catch (Exception e) {
log.error("分片删除失败: chunkPath={}", chunkPath, e);
throw new StorageException("分片删除失败", e);
}
}

@Override
public void deleteFile(String filePath) throws StorageException {
try {
ossClient.deleteObject(bucketName, filePath);
log.debug("文件删除成功: filePath={}", filePath);

} catch (Exception e) {
log.error("文件删除失败: filePath={}", filePath, e);
throw new StorageException("文件删除失败", e);
}
}

@Override
public boolean fileExists(String filePath) throws StorageException {
try {
return ossClient.doesObjectExist(bucketName, filePath);

} catch (Exception e) {
log.error("检查文件存在性失败: filePath={}", filePath, e);
throw new StorageException("检查文件存在性失败", e);
}
}

@Override
public FileInfo getFileInfo(String filePath) throws StorageException {
try {
OSSObject ossObject = ossClient.getObject(bucketName, filePath);
ObjectMetadata metadata = ossObject.getObjectMetadata();

FileInfo fileInfo = new FileInfo();
fileInfo.setFilePath(filePath);
fileInfo.setFileSize(metadata.getContentLength());
fileInfo.setContentType(metadata.getContentType());
fileInfo.setLastModified(metadata.getLastModified());

return fileInfo;

} catch (Exception e) {
log.error("获取文件信息失败: filePath={}", filePath, e);
throw new StorageException("获取文件信息失败", e);
}
}

/**
* 生成分片对象键
*/
private String generateChunkObjectKey(String uploadId, Integer chunkIndex) {
return String.format("chunks/%s/chunk_%d.tmp", uploadId, chunkIndex);
}

/**
* 生成最终对象键
*/
private String generateFinalObjectKey(String uploadId, String fileName) {
return String.format("files/%s/%s", uploadId, fileName);
}

/**
* 清理分片文件
*/
private void cleanupChunks(List<String> chunkPaths) {
try {
for (String chunkPath : chunkPaths) {
deleteChunk(chunkPath);
}
} catch (Exception e) {
log.error("清理分片文件失败", e);
}
}
}

// 腾讯云COS适配器
@Component("tencentCosAdapter")
@Slf4j
public class TencentCosAdapter implements DistributedStorageAdapter {

@Autowired
private COSClient cosClient;

@Value("${storage.tencent.bucket-name}")
private String bucketName;

@Value("${storage.tencent.region}")
private String region;

@Override
public String uploadChunk(String uploadId, Integer chunkIndex, byte[] data) throws StorageException {
try {
String objectKey = generateChunkObjectKey(uploadId, chunkIndex);

PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, objectKey,
new ByteArrayInputStream(data), new ObjectMetadata());

PutObjectResult result = cosClient.putObject(putObjectRequest);

log.debug("分片上传成功: uploadId={}, chunkIndex={}, objectKey={}",
uploadId, chunkIndex, objectKey);

return objectKey;

} catch (Exception e) {
log.error("分片上传失败: uploadId={}, chunkIndex={}", uploadId, chunkIndex, e);
throw new StorageException("分片上传失败", e);
}
}

@Override
public byte[] downloadChunk(String chunkPath) throws StorageException {
try {
GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName, chunkPath);
COSObject cosObject = cosClient.getObject(getObjectRequest);

try (InputStream inputStream = cosObject.getObjectContent()) {
return inputStream.readAllBytes();
}

} catch (Exception e) {
log.error("分片下载失败: chunkPath={}", chunkPath, e);
throw new StorageException("分片下载失败", e);
}
}

@Override
public String mergeChunks(String uploadId, List<String> chunkPaths, String finalFileName) throws StorageException {
try {
String finalObjectKey = generateFinalObjectKey(uploadId, finalFileName);

// 初始化COS侧的Multipart Upload,此uploadId由COS生成,与业务uploadId不同
InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, finalObjectKey);
String cosUploadId = cosClient.initiateMultipartUpload(initRequest).getUploadId();

// 将每个分片对象以CopyPart方式拷贝为最终对象的一个Part,并收集真实的ETag
List<PartETag> partETags = new ArrayList<>();
for (int i = 0; i < chunkPaths.size(); i++) {
CopyPartRequest copyRequest = new CopyPartRequest();
copyRequest.setSourceBucketName(bucketName);
copyRequest.setSourceKey(chunkPaths.get(i));
copyRequest.setDestinationBucketName(bucketName);
copyRequest.setDestinationKey(finalObjectKey);
copyRequest.setUploadId(cosUploadId);
copyRequest.setPartNumber(i + 1);
partETags.add(cosClient.copyPart(copyRequest).getPartETag());
}

// 提交合并,由COS在服务端完成分片拼接
CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest(
bucketName, finalObjectKey, cosUploadId, partETags);
cosClient.completeMultipartUpload(completeRequest);

// 清理分片文件
cleanupChunks(chunkPaths);

log.info("分片合并完成: uploadId={}, finalObjectKey={}", uploadId, finalObjectKey);

return finalObjectKey;

} catch (Exception e) {
log.error("分片合并失败: uploadId={}", uploadId, e);
throw new StorageException("分片合并失败", e);
}
}

@Override
public void deleteChunk(String chunkPath) throws StorageException {
try {
cosClient.deleteObject(bucketName, chunkPath);
log.debug("分片删除成功: chunkPath={}", chunkPath);

} catch (Exception e) {
log.error("分片删除失败: chunkPath={}", chunkPath, e);
throw new StorageException("分片删除失败", e);
}
}

@Override
public void deleteFile(String filePath) throws StorageException {
try {
cosClient.deleteObject(bucketName, filePath);
log.debug("文件删除成功: filePath={}", filePath);

} catch (Exception e) {
log.error("文件删除失败: filePath={}", filePath, e);
throw new StorageException("文件删除失败", e);
}
}

@Override
public boolean fileExists(String filePath) throws StorageException {
try {
return cosClient.doesObjectExist(bucketName, filePath);

} catch (Exception e) {
log.error("检查文件存在性失败: filePath={}", filePath, e);
throw new StorageException("检查文件存在性失败", e);
}
}

@Override
public FileInfo getFileInfo(String filePath) throws StorageException {
try {
GetObjectMetadataRequest getObjectMetadataRequest = new GetObjectMetadataRequest(bucketName, filePath);
ObjectMetadata metadata = cosClient.getObjectMetadata(getObjectMetadataRequest);

FileInfo fileInfo = new FileInfo();
fileInfo.setFilePath(filePath);
fileInfo.setFileSize(metadata.getContentLength());
fileInfo.setContentType(metadata.getContentType());
fileInfo.setLastModified(metadata.getLastModified());

return fileInfo;

} catch (Exception e) {
log.error("获取文件信息失败: filePath={}", filePath, e);
throw new StorageException("获取文件信息失败", e);
}
}

/**
* 生成分片对象键
*/
private String generateChunkObjectKey(String uploadId, Integer chunkIndex) {
return String.format("chunks/%s/chunk_%d.tmp", uploadId, chunkIndex);
}

/**
* 生成最终对象键
*/
private String generateFinalObjectKey(String uploadId, String fileName) {
return String.format("files/%s/%s", uploadId, fileName);
}

/**
* 清理分片文件
*/
private void cleanupChunks(List<String> chunkPaths) {
try {
for (String chunkPath : chunkPaths) {
deleteChunk(chunkPath);
}
} catch (Exception e) {
log.error("清理分片文件失败", e);
}
}
}

// 存储服务
@Service
@Slf4j
public class StorageService {

@Autowired
private DistributedStorageAdapter storageAdapter;

@Value("${storage.type:local}")
private String storageType;

/**
* 上传分片
*/
public String uploadChunk(String uploadId, Integer chunkIndex, byte[] data) throws StorageException {
try {
return storageAdapter.uploadChunk(uploadId, chunkIndex, data);

} catch (Exception e) {
log.error("上传分片失败: uploadId={}, chunkIndex={}", uploadId, chunkIndex, e);
throw new StorageException("上传分片失败", e);
}
}

/**
* 下载分片
*/
public byte[] downloadChunk(String chunkPath) throws StorageException {
try {
return storageAdapter.downloadChunk(chunkPath);

} catch (Exception e) {
log.error("下载分片失败: chunkPath={}", chunkPath, e);
throw new StorageException("下载分片失败", e);
}
}

/**
* 合并分片
*/
public String mergeChunks(String uploadId, List<String> chunkPaths, String finalFileName) throws StorageException {
try {
return storageAdapter.mergeChunks(uploadId, chunkPaths, finalFileName);

} catch (Exception e) {
log.error("合并分片失败: uploadId={}", uploadId, e);
throw new StorageException("合并分片失败", e);
}
}

/**
* 删除分片
*/
public void deleteChunk(String chunkPath) throws StorageException {
try {
storageAdapter.deleteChunk(chunkPath);

} catch (Exception e) {
log.error("删除分片失败: chunkPath={}", chunkPath, e);
throw new StorageException("删除分片失败", e);
}
}

/**
* 删除文件
*/
public void deleteFile(String filePath) throws StorageException {
try {
storageAdapter.deleteFile(filePath);

} catch (Exception e) {
log.error("删除文件失败: filePath={}", filePath, e);
throw new StorageException("删除文件失败", e);
}
}

/**
* 检查文件是否存在
*/
public boolean fileExists(String filePath) throws StorageException {
try {
return storageAdapter.fileExists(filePath);

} catch (Exception e) {
log.error("检查文件存在性失败: filePath={}", filePath, e);
throw new StorageException("检查文件存在性失败", e);
}
}

/**
* 获取文件信息
*/
public FileInfo getFileInfo(String filePath) throws StorageException {
try {
return storageAdapter.getFileInfo(filePath);

} catch (Exception e) {
log.error("获取文件信息失败: filePath={}", filePath, e);
throw new StorageException("获取文件信息失败", e);
}
}
}

// 文件信息
public class FileInfo {
private String filePath;
private long fileSize;
private String contentType;
private Date lastModified;

// 构造函数和getter/setter方法
}

// 存储异常
public class StorageException extends Exception {
public StorageException(String message) {
super(message);
}

public StorageException(String message, Throwable cause) {
super(message, cause);
}
}
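
One wiring detail worth calling out: StorageService autowires a single DistributedStorageAdapter, yet both AliyunOssAdapter and TencentCosAdapter are registered as components, which leaves the injection ambiguous. A minimal configuration sketch that resolves the choice from the storage.type property StorageService already references; the bean names come from the @Component values above, and treating "tencent"/"aliyun" as the property values is an assumption:

// Chooses which storage adapter backs StorageService, driven by the storage.type property.
@Configuration
public class StorageAdapterConfig {

    @Bean
    @Primary
    public DistributedStorageAdapter primaryStorageAdapter(
            @Qualifier("aliyunOssAdapter") DistributedStorageAdapter aliyunOssAdapter,
            @Qualifier("tencentCosAdapter") DistributedStorageAdapter tencentCosAdapter,
            @Value("${storage.type:aliyun}") String storageType) {
        // "tencent" routes to COS; anything else falls back to OSS. Adjust to the real property values.
        return "tencent".equalsIgnoreCase(storageType) ? tencentCosAdapter : aliyunOssAdapter;
    }
}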

5. Best Practices and Summary

5.1 SpringBoot Chunked Upload Best Practices

5.1.1 Chunking Strategy Optimization

  • Chunk size selection: pick a chunk size suited to the file type and network environment (typically 1MB to 10MB)
  • Chunk count control: keep the number of chunks reasonable so they do not hurt performance or become a management burden
  • Dynamic chunk adjustment: adapt the chunk size to current network conditions (see the sketch after this list)
  • Chunk naming rules: use unique identifiers so chunks never collide
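
For the dynamic adjustment point above, one simple client-side approach is to measure how long the previous chunk took and size the next chunk so that each request stays near a target duration. A sketch; the bounds and the 3-second target are assumptions:

// Adjusts the next chunk size so one chunk takes roughly TARGET_MILLIS to upload.
// All constants here are illustrative assumptions.
public final class AdaptiveChunkSizer {

    private static final long MIN_CHUNK = 1L * 1024 * 1024;   // 1MB lower bound
    private static final long MAX_CHUNK = 10L * 1024 * 1024;  // 10MB upper bound
    private static final long TARGET_MILLIS = 3_000;          // aim for about 3 seconds per chunk

    /** lastChunkBytes / lastChunkMillis describe the chunk that just finished uploading. */
    public static long nextChunkSize(long lastChunkBytes, long lastChunkMillis) {
        if (lastChunkMillis <= 0) {
            return MIN_CHUNK;
        }
        double bytesPerMilli = (double) lastChunkBytes / lastChunkMillis;
        long proposed = (long) (bytesPerMilli * TARGET_MILLIS);
        return Math.max(MIN_CHUNK, Math.min(MAX_CHUNK, proposed));
    }
}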

5.1.2 Resumable Upload Implementation

  • State persistence: persist upload state to both the database and the cache
  • Progress recovery: continue uploading from the point of interruption
  • Chunk retransmission: automatically retransmit failed chunks
  • Integrity verification: guarantee the completeness of the uploaded file

5.1.3 Performance Optimization Strategies

  • Concurrent upload: upload several chunks in parallel
  • Connection pool management: make efficient use of HTTP connections (see the configuration sketch after this list)
  • Bandwidth control: allocate network bandwidth sensibly
  • Caching strategy: cache upload state in Redis
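
For the connection-pool point above, the client side can reuse HTTP connections instead of opening a new one per chunk. A sketch that puts a pooled Apache HttpClient (4.x) behind the RestTemplate used in section 3.2; the pool sizes are assumptions and the httpclient dependency is required:

// Pooled HTTP client so concurrent chunk uploads reuse connections; sizes are assumptions.
@Configuration
public class UploadHttpClientConfig {

    @Bean
    public RestTemplate uploadRestTemplate() {
        PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(50);            // total connections across all routes
        connectionManager.setDefaultMaxPerRoute(10);  // parallel chunk uploads per host

        CloseableHttpClient httpClient = HttpClients.custom()
                .setConnectionManager(connectionManager)
                .build();

        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}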

5.1.4 Storage Optimization

  • Distributed storage: support multiple cloud storage services
  • Storage adapters: a unified storage interface that adapts to different backends
  • File management: complete lifecycle management for uploaded files
  • Cleanup mechanism: automatically remove expired and failed chunks (a scheduled-job sketch follows this list)
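
The cleanup point above can be automated with a small scheduled job that walks expired upload tasks and reuses cleanupChunks from section 2.2. A sketch; findByExpireTimeLessThan is an assumed Spring Data query method, the schedule is arbitrary, and @EnableScheduling must be present on a configuration class:

// Sketch of periodic cleanup for expired upload tasks.
// findByExpireTimeLessThan(...) is an assumed repository method, not one shown earlier in this article.
@Component
@Slf4j
public class ExpiredUploadCleaner {

    @Autowired
    private UploadTaskRepository uploadTaskRepository;

    @Autowired
    private ChunkUploadService chunkUploadService;

    /** Runs at the top of every hour and removes chunks for tasks whose expireTime has passed. */
    @Scheduled(cron = "0 0 * * * *")
    public void cleanExpiredUploads() {
        List<UploadTask> expired = uploadTaskRepository.findByExpireTimeLessThan(System.currentTimeMillis());
        for (UploadTask task : expired) {
            chunkUploadService.cleanupChunks(task.getUploadId());
            // Reusing FAILED here; a dedicated EXPIRED status would also be reasonable.
            chunkUploadService.updateUploadTaskStatus(task.getUploadId(), UploadStatus.FAILED);
            log.info("cleaned expired upload: uploadId={}", task.getUploadId());
        }
    }
}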

5.2 Enterprise Application Scenarios

5.2.1 Large File Upload Scenarios

  • Video uploads: support GB-scale video files
  • Document uploads: support large documents and compressed archives
  • Image uploads: support batch uploads of high-resolution images
  • Data file uploads: support uploading and processing large data files

5.2.2 High-Concurrency Scenarios

  • Multi-user uploads: support large numbers of users uploading at the same time
  • Load balancing: spread upload pressure across instances
  • Resource isolation: isolate upload resources between users
  • Throttling: prevent abusive users from monopolizing resources (a per-user limiter sketch follows this list)
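
For the throttling point above, a lightweight option is a per-user cap on how many chunk uploads may be in flight at once. A sketch using one Semaphore per user; the limit of 5 is an assumption, and the controller would call tryAcquire before handling a chunk and release afterwards:

// Per-user concurrency cap for chunk uploads; the limit is an illustrative assumption.
@Component
public class UploadRateLimiter {

    private static final int MAX_CONCURRENT_UPLOADS_PER_USER = 5;

    private final ConcurrentHashMap<String, Semaphore> permits = new ConcurrentHashMap<>();

    /** Returns true if the user may start another chunk upload right now. */
    public boolean tryAcquire(String userId) {
        return permits
                .computeIfAbsent(userId, id -> new Semaphore(MAX_CONCURRENT_UPLOADS_PER_USER))
                .tryAcquire();
    }

    /** Must be called when the chunk upload finishes, whether it succeeded or failed. */
    public void release(String userId) {
        Semaphore semaphore = permits.get(userId);
        if (semaphore != null) {
            semaphore.release();
        }
    }
}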

5.2.3 Fault Tolerance and Recovery Scenarios

  • Network interruption recovery: resume automatically after a network outage
  • Service restart recovery: restore upload state after a service restart
  • Failed chunk retransmission: automatically retransmit failed chunks (a retry sketch follows this list)
  • Data consistency guarantees: keep uploaded data consistent and complete
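
The retransmission point above only needs a small wrapper around the per-chunk upload call: retry a bounded number of times with a growing delay before declaring the chunk failed. A sketch; the attempt count and delays are assumptions, and uploadSingleChunk again stands in for the real upload call:

// Retries a single chunk with simple exponential backoff; the constants are assumptions.
public boolean uploadChunkWithRetry(String uploadId, int chunkIndex) {
    final int maxAttempts = 3;
    long backoffMillis = 1_000;

    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        if (uploadSingleChunk(uploadId, chunkIndex)) { // placeholder for the real per-chunk upload
            return true;
        }
        if (attempt < maxAttempts) {
            try {
                Thread.sleep(backoffMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
            backoffMillis *= 2; // back off before the next attempt
        }
    }
    return false; // caller can mark the chunk as failed and surface it to the user
}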

5.3 Architecture Evolution Recommendations

5.3.1 Microservice Architecture Support

  • Service decomposition: split the upload service into several microservices
  • Service governance: implement service registration, discovery, and load balancing
  • Containerized deployment: deploy with container technologies such as Docker
  • Service mesh: adopt service mesh technologies such as Istio

5.3.2 Cloud-Native Architecture Evolution

  • Elastic scaling: scale in and out automatically based on load
  • Service discovery: use cloud-native service discovery mechanisms
  • Configuration management: use cloud-native configuration management
  • Monitoring and alerting: integrate with cloud-native monitoring and alerting systems

5.3.3 Intelligent Operations

  • AI-driven optimization: use machine learning to optimize upload strategies
  • Automatic tuning: tune the system automatically from monitoring data
  • Predictive maintenance: predict failures and handle them in advance
  • Intelligent alerting: implement intelligent alerting and fault diagnosis

5.4 Summary

Chunked upload with SpringBoot is a core technique for handling large files in enterprise applications. With a sound architecture and the optimization strategies discussed above, it delivers efficient and reliable large-file uploads. Beyond overcoming the limits of traditional single-request uploads, it adds resumable transfers, progress monitoring, and concurrency optimization, noticeably improving both user experience and system performance.

Looking ahead, as cloud-native and AI technologies mature, chunked upload systems will become increasingly intelligent and automated. Teams should track these trends and keep refining their chunking strategies to match evolving business and technical requirements.

The analysis and hands-on guidance in this article are offered as a practical reference for building high-quality SpringBoot chunked upload solutions and for keeping enterprise applications stable as their large-file workloads grow.