Spring Boot + Redis: Chunked Upload and Resumable Transfer for Large Files in Practice

1. Overview of Chunked Uploads for Large Files

Uploading large files is a common requirement in modern web applications. Traditional single-request uploads run into timeouts, memory exhaustion, and similar problems as files grow. Chunked uploading solves this by splitting a large file into small pieces and uploading them one at a time. This article walks through a complete Spring Boot + Redis solution covering chunked upload, resumable transfer, progress tracking, file merging, and error handling.

1.1 Core Features

  1. Chunked upload: split a large file into small pieces and upload them individually
  2. Resumable transfer: continue an upload after it has been interrupted
  3. Progress tracking: track upload progress in real time
  4. File merging: automatically merge the uploaded chunks
  5. Error handling: robust error handling and retry mechanisms

1.2 Technical Architecture

Client → file chunking → chunk upload → Redis cache → file merge
↓ ↓ ↓ ↓ ↓
File splitting → progress tracking → resumable transfer → state management → completion notification
↓ ↓ ↓ ↓ ↓
MD5 checksum → concurrent upload → retry on error → cache cleanup → result returned

2. Chunked Upload Configuration

2.1 Maven Dependencies

<!-- pom.xml -->
<dependencies>
<!-- Spring Boot Web -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>

<!-- Spring Boot Data Redis -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

<!-- Apache Commons FileUpload -->
<dependency>
<groupId>commons-fileupload</groupId>
<artifactId>commons-fileupload</artifactId>
<version>1.4</version>
</dependency>

<!-- Apache Commons IO -->
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.11.0</version>
</dependency>

<!-- Jackson JSON processing -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
</dependency>

<!-- Lombok: required by the @Data/@Builder/@Slf4j annotations used below -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<scope>provided</scope>
</dependency>

<!-- Optional: Jedis client (spring-boot-starter-data-redis uses Lettuce by default) -->
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
</dependency>
</dependencies>

2.2 Chunked Upload Configuration Class

/**
 * Chunk upload configuration
 */
@Configuration
public class ChunkUploadConfig {

@Value("${chunk.upload.path:/tmp/uploads}")
private String uploadPath;

@Value("${chunk.upload.max-size:1073741824}")
private long maxFileSize; // 1GB

@Value("${chunk.upload.chunk-size:1048576}")
private int chunkSize; // 1MB

@Value("${chunk.upload.timeout:3600000}")
private long timeout; // 1 hour

@Value("${chunk.upload.cleanup-days:7}")
private int cleanupDays;

/**
 * Chunk upload configuration properties
 */
@Bean
public ChunkUploadProperties chunkUploadProperties() {
return ChunkUploadProperties.builder()
.uploadPath(uploadPath)
.maxFileSize(maxFileSize)
.chunkSize(chunkSize)
.timeout(timeout)
.cleanupDays(cleanupDays)
.build();
}

// ChunkUploadService, FileMergeService and ProgressTrackingService are
// annotated with @Service and registered through component scanning, so they
// are deliberately not redeclared as @Bean methods here; doing both would
// register each bean twice.
}

/**
 * Chunk upload configuration properties
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class ChunkUploadProperties {
private String uploadPath;
private long maxFileSize;
private int chunkSize;
private long timeout;
private int cleanupDays;

// Concurrency settings (@Builder.Default keeps these initial values when
// instances are created through the builder)
@Builder.Default
private int maxConcurrentUploads = 5;
@Builder.Default
private int maxRetryAttempts = 3;
@Builder.Default
private long retryDelay = 1000;

// Storage settings
@Builder.Default
private boolean enableCompression = false;
@Builder.Default
private boolean enableEncryption = false;
private String encryptionKey;

// Cleanup settings
@Builder.Default
private boolean enableAutoCleanup = true;
@Builder.Default
private int cleanupInterval = 24; // hours
}
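The @Value annotations above read everything from the `chunk.upload.*` namespace. A minimal application.yml sketch follows; the values are illustrative, and the `spring.servlet.multipart` limits are an added assumption so the default multipart parser accepts a full 1 MB chunk:

```yaml
# Chunked-upload settings; property names match the @Value annotations above
chunk:
  upload:
    path: /tmp/uploads        # where chunk and merged files are stored
    max-size: 1073741824      # 1 GB upper bound per file
    chunk-size: 1048576       # 1 MB per chunk
    timeout: 3600000          # 1 hour, in milliseconds
    cleanup-days: 7           # retention for Redis metadata

spring:
  servlet:
    multipart:
      max-file-size: 10MB     # must be at least chunk-size
      max-request-size: 20MB
```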

3. Data Model Definitions

3.1 Chunked Upload Data Models

// Note: @Builder is required on all of these models because the service and
// controller code below constructs them via builder().

/**
 * Chunk upload request model
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class ChunkUploadRequest {
private String fileId;
private String fileName;
private long fileSize;
private int chunkIndex;
private int totalChunks;
private String chunkMd5;
private String fileMd5;
private String uploadId;
}

/**
 * Chunk upload response model
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class ChunkUploadResponse {
private boolean success;
private String message;
private String fileId;
private int chunkIndex;
private String uploadId;
private UploadProgress progress;
}

/**
 * Upload progress model
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class UploadProgress {
private String fileId;
private String fileName;
private long fileSize;
private int totalChunks;
private int uploadedChunks;
private long uploadedBytes;
private double progressPercentage;
private String status; // UPLOADING, COMPLETED, FAILED, PAUSED
private LocalDateTime startTime;
private LocalDateTime lastUpdateTime;
private List<Integer> completedChunks;
private String errorMessage;
}

/**
 * File info model
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class FileInfo {
private String fileId;
private String fileName;
private long fileSize;
private String fileMd5;
private String filePath;
private String contentType;
private LocalDateTime createTime;
private LocalDateTime completeTime;
private String status; // UPLOADING, COMPLETED, FAILED
private int totalChunks;
private List<ChunkInfo> chunks;
}

/**
 * Chunk info model
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class ChunkInfo {
private int chunkIndex;
private long chunkSize;
private String chunkMd5;
private String chunkPath;
private boolean uploaded;
private LocalDateTime uploadTime;
private String errorMessage;
}

/**
 * File merge request model
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class FileMergeRequest {
private String fileId;
private String fileName;
private String fileMd5;
private int totalChunks;
private String uploadId;
}

/**
 * File merge response model
 */
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class FileMergeResponse {
private boolean success;
private String message;
private String fileId;
private String filePath;
private long fileSize;
private String fileMd5;
private LocalDateTime completeTime;
}

4. Chunk Upload Service

4.1 ChunkUploadService Implementation

/**
 * Chunk upload service
 */
@Slf4j
@Service
public class ChunkUploadService {

private final ChunkUploadProperties properties;

// Collaborators are injected by Spring. A hand-built "new RedisTemplate<>()"
// has no connection factory and could never reach Redis.
@Autowired
private RedisTemplate<String, Object> redisTemplate;

@Autowired
private ProgressTrackingService progressTrackingService;

@Autowired
private FileMergeService fileMergeService;

public ChunkUploadService(ChunkUploadProperties properties) {
this.properties = properties;
}

/**
* 上传分片
* @param request 分片上传请求
* @param chunkData 分片数据
* @return 上传响应
*/
public ChunkUploadResponse uploadChunk(ChunkUploadRequest request, byte[] chunkData) {
try {
// 1. 验证分片数据
validateChunkData(request, chunkData);

// 2. 生成分片文件路径
String chunkPath = generateChunkPath(request);

// 3. 保存分片文件
saveChunkFile(chunkPath, chunkData);

// 4. 验证分片MD5
String actualMd5 = calculateMD5(chunkData);
if (!actualMd5.equals(request.getChunkMd5())) {
throw new BusinessException("分片MD5校验失败");
}

// 5. 更新上传进度
updateUploadProgress(request);

// 6. 检查是否所有分片上传完成
boolean allChunksUploaded = checkAllChunksUploaded(request.getFileId());

if (allChunksUploaded) {
// 7. 触发文件合并
triggerFileMerge(request.getFileId());
}

return ChunkUploadResponse.builder()
.success(true)
.message("分片上传成功")
.fileId(request.getFileId())
.chunkIndex(request.getChunkIndex())
.uploadId(request.getUploadId())
.progress(progressTrackingService.getProgress(request.getFileId()))
.build();

} catch (Exception e) {
log.error("分片上传失败: fileId={}, chunkIndex={}",
request.getFileId(), request.getChunkIndex(), e);

return ChunkUploadResponse.builder()
.success(false)
.message("分片上传失败: " + e.getMessage())
.fileId(request.getFileId())
.chunkIndex(request.getChunkIndex())
.uploadId(request.getUploadId())
.build();
}
}

/**
* 初始化上传
* @param fileName 文件名
* @param fileSize 文件大小
* @param fileMd5 文件MD5
* @return 文件信息
*/
public FileInfo initializeUpload(String fileName, long fileSize, String fileMd5) {
try {
// 1. 验证文件大小
if (fileSize > properties.getMaxFileSize()) {
throw new BusinessException("文件大小超过限制");
}

// 2. 生成文件ID
String fileId = generateFileId();

// 3. 计算分片数量
int totalChunks = (int) Math.ceil((double) fileSize / properties.getChunkSize());

// 4. 创建文件信息
FileInfo fileInfo = FileInfo.builder()
.fileId(fileId)
.fileName(fileName)
.fileSize(fileSize)
.fileMd5(fileMd5)
.contentType(getContentType(fileName))
.createTime(LocalDateTime.now())
.status("UPLOADING")
.totalChunks(totalChunks)
.chunks(new ArrayList<>())
.build();

// 5. 初始化分片信息
initializeChunks(fileInfo);

// 6. 保存文件信息到Redis
saveFileInfo(fileInfo);

// 7. 初始化上传进度
progressTrackingService.initializeProgress(fileInfo);

return fileInfo;

} catch (Exception e) {
log.error("初始化上传失败: fileName={}", fileName, e);
throw new RuntimeException("初始化上传失败", e);
}
}

/**
* 获取上传进度
* @param fileId 文件ID
* @return 上传进度
*/
public UploadProgress getUploadProgress(String fileId) {
return progressTrackingService.getProgress(fileId);
}

/**
* 暂停上传
* @param fileId 文件ID
* @return 暂停结果
*/
public boolean pauseUpload(String fileId) {
try {
progressTrackingService.pauseProgress(fileId);
return true;
} catch (Exception e) {
log.error("暂停上传失败: fileId={}", fileId, e);
return false;
}
}

/**
* 恢复上传
* @param fileId 文件ID
* @return 恢复结果
*/
public boolean resumeUpload(String fileId) {
try {
progressTrackingService.resumeProgress(fileId);
return true;
} catch (Exception e) {
log.error("恢复上传失败: fileId={}", fileId, e);
return false;
}
}

/**
* 取消上传
* @param fileId 文件ID
* @return 取消结果
*/
public boolean cancelUpload(String fileId) {
try {
// 1. 更新进度状态
progressTrackingService.cancelProgress(fileId);

// 2. 清理分片文件
cleanupChunkFiles(fileId);

// 3. 清理Redis缓存
cleanupRedisCache(fileId);

return true;
} catch (Exception e) {
log.error("取消上传失败: fileId={}", fileId, e);
return false;
}
}

/**
* 验证分片数据
* @param request 分片上传请求
* @param chunkData 分片数据
*/
private void validateChunkData(ChunkUploadRequest request, byte[] chunkData) {
if (chunkData == null || chunkData.length == 0) {
throw new BusinessException("分片数据不能为空");
}

if (request.getChunkIndex() < 0 || request.getChunkIndex() >= request.getTotalChunks()) {
throw new BusinessException("分片索引无效");
}

if (chunkData.length > properties.getChunkSize()) {
throw new BusinessException("分片大小超过限制");
}
}

/**
* 生成分片文件路径
* @param request 分片上传请求
* @return 分片文件路径
*/
private String generateChunkPath(ChunkUploadRequest request) {
String chunkDir = properties.getUploadPath() + "/" + request.getFileId();
File dir = new File(chunkDir);
if (!dir.exists()) {
dir.mkdirs();
}

return chunkDir + "/chunk_" + request.getChunkIndex();
}

/**
* 保存分片文件
* @param chunkPath 分片文件路径
* @param chunkData 分片数据
*/
private void saveChunkFile(String chunkPath, byte[] chunkData) throws IOException {
try (FileOutputStream fos = new FileOutputStream(chunkPath)) {
fos.write(chunkData);
fos.flush();
}
}

/**
* 计算MD5
* @param data 数据
* @return MD5值
*/
private String calculateMD5(byte[] data) {
try {
MessageDigest md = MessageDigest.getInstance("MD5");
byte[] hash = md.digest(data);
StringBuilder hexString = new StringBuilder();

for (byte b : hash) {
String hex = Integer.toHexString(0xff & b);
if (hex.length() == 1) {
hexString.append('0');
}
hexString.append(hex);
}

return hexString.toString();
} catch (Exception e) {
throw new RuntimeException("计算MD5失败", e);
}
}

/**
* 更新上传进度
* @param request 分片上传请求
*/
private void updateUploadProgress(ChunkUploadRequest request) {
progressTrackingService.updateProgress(request.getFileId(), request.getChunkIndex());
}

/**
* 检查所有分片是否上传完成
* @param fileId 文件ID
* @return 是否完成
*/
private boolean checkAllChunksUploaded(String fileId) {
return progressTrackingService.isAllChunksUploaded(fileId);
}

/**
* 触发文件合并
* @param fileId 文件ID
*/
private void triggerFileMerge(String fileId) {
try {
FileInfo fileInfo = getFileInfo(fileId);
FileMergeRequest mergeRequest = FileMergeRequest.builder()
.fileId(fileId)
.fileName(fileInfo.getFileName())
.fileMd5(fileInfo.getFileMd5())
.totalChunks(fileInfo.getTotalChunks())
.uploadId(fileId)
.build();

fileMergeService.mergeFile(mergeRequest);
} catch (Exception e) {
log.error("触发文件合并失败: fileId={}", fileId, e);
}
}

/**
* 生成文件ID
* @return 文件ID
*/
private String generateFileId() {
return UUID.randomUUID().toString().replace("-", "");
}

/**
 * Resolve the content type from the file name
 * @param fileName file name
 * @return content type
 */
private String getContentType(String fileName) {
int dotIndex = fileName.lastIndexOf('.');
if (dotIndex < 0) {
// File name has no extension
return "application/octet-stream";
}
String extension = fileName.substring(dotIndex + 1).toLowerCase();
switch (extension) {
case "jpg":
case "jpeg":
return "image/jpeg";
case "png":
return "image/png";
case "gif":
return "image/gif";
case "pdf":
return "application/pdf";
case "txt":
return "text/plain";
default:
return "application/octet-stream";
}
}

/**
* 初始化分片信息
* @param fileInfo 文件信息
*/
private void initializeChunks(FileInfo fileInfo) {
List<ChunkInfo> chunks = new ArrayList<>();
for (int i = 0; i < fileInfo.getTotalChunks(); i++) {
ChunkInfo chunk = ChunkInfo.builder()
.chunkIndex(i)
.chunkSize(calculateChunkSize(i, fileInfo.getFileSize(), fileInfo.getTotalChunks()))
.uploaded(false)
.build();
chunks.add(chunk);
}
fileInfo.setChunks(chunks);
}

/**
* 计算分片大小
* @param chunkIndex 分片索引
* @param fileSize 文件大小
* @param totalChunks 总分片数
* @return 分片大小
*/
private long calculateChunkSize(int chunkIndex, long fileSize, int totalChunks) {
if (chunkIndex == totalChunks - 1) {
return fileSize - (long) chunkIndex * properties.getChunkSize();
}
return properties.getChunkSize();
}

/**
* 保存文件信息到Redis
* @param fileInfo 文件信息
*/
private void saveFileInfo(FileInfo fileInfo) {
String key = "file_info:" + fileInfo.getFileId();
redisTemplate.opsForValue().set(key, fileInfo, Duration.ofDays(properties.getCleanupDays()));
}

/**
* 获取文件信息
* @param fileId 文件ID
* @return 文件信息
*/
private FileInfo getFileInfo(String fileId) {
String key = "file_info:" + fileId;
return (FileInfo) redisTemplate.opsForValue().get(key);
}

/**
* 清理分片文件
* @param fileId 文件ID
*/
private void cleanupChunkFiles(String fileId) {
try {
String chunkDir = properties.getUploadPath() + "/" + fileId;
File dir = new File(chunkDir);
if (dir.exists()) {
FileUtils.deleteDirectory(dir);
}
} catch (Exception e) {
log.error("清理分片文件失败: fileId={}", fileId, e);
}
}

/**
* 清理Redis缓存
* @param fileId 文件ID
*/
private void cleanupRedisCache(String fileId) {
try {
String fileInfoKey = "file_info:" + fileId;
String progressKey = "upload_progress:" + fileId;
redisTemplate.delete(fileInfoKey);
redisTemplate.delete(progressKey);
} catch (Exception e) {
log.error("清理Redis缓存失败: fileId={}", fileId, e);
}
}
}
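The chunk arithmetic in `initializeUpload` and `calculateChunkSize` is easy to get wrong by one, so here it is extracted into a standalone sketch that can be run directly (the class and method names are illustrative, not part of the service above):

```java
// Standalone sketch of the chunk arithmetic used in ChunkUploadService:
// ceil-division for the chunk count, and a shorter final chunk.
public class ChunkMath {

    static final long CHUNK_SIZE = 1024 * 1024; // 1 MB, the default chunk.upload.chunk-size

    static int totalChunks(long fileSize) {
        return (int) Math.ceil((double) fileSize / CHUNK_SIZE);
    }

    static long chunkSize(int chunkIndex, long fileSize, int totalChunks) {
        // Every chunk is full-size except the last, which carries the remainder
        if (chunkIndex == totalChunks - 1) {
            return fileSize - (long) chunkIndex * CHUNK_SIZE;
        }
        return CHUNK_SIZE;
    }

    public static void main(String[] args) {
        long fileSize = 10L * 1024 * 1024 + 123; // 10 MB plus 123 bytes
        int n = totalChunks(fileSize);
        System.out.println(n);                          // 11 chunks in total
        System.out.println(chunkSize(10, fileSize, n)); // last chunk is 123 bytes
    }
}
```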

5. File Merge Service

5.1 FileMergeService Implementation

/**
 * File merge service
 */
@Slf4j
@Service
public class FileMergeService {

private final ChunkUploadProperties properties;

// Injected by Spring; a bare "new RedisTemplate<>()" would have no
// connection factory and could never reach Redis
@Autowired
private RedisTemplate<String, Object> redisTemplate;

public FileMergeService(ChunkUploadProperties properties) {
this.properties = properties;
}

/**
* 合并文件
* @param request 文件合并请求
* @return 合并响应
*/
public FileMergeResponse mergeFile(FileMergeRequest request) {
try {
// 1. 验证所有分片是否上传完成
if (!validateAllChunks(request)) {
throw new BusinessException("分片上传未完成");
}

// 2. 生成最终文件路径
String finalFilePath = generateFinalFilePath(request);

// 3. 合并分片文件
mergeChunkFiles(request, finalFilePath);

// 4. 验证文件MD5
String actualMd5 = calculateFileMD5(finalFilePath);
if (!actualMd5.equals(request.getFileMd5())) {
throw new BusinessException("文件MD5校验失败");
}

// 5. 更新文件信息
updateFileInfo(request, finalFilePath);

// 6. 清理分片文件
cleanupChunkFiles(request.getFileId());

return FileMergeResponse.builder()
.success(true)
.message("文件合并成功")
.fileId(request.getFileId())
.filePath(finalFilePath)
.fileSize(new File(finalFilePath).length())
.fileMd5(actualMd5)
.completeTime(LocalDateTime.now())
.build();

} catch (Exception e) {
log.error("文件合并失败: fileId={}", request.getFileId(), e);

return FileMergeResponse.builder()
.success(false)
.message("文件合并失败: " + e.getMessage())
.fileId(request.getFileId())
.build();
}
}

/**
* 验证所有分片是否上传完成
* @param request 文件合并请求
* @return 是否完成
*/
private boolean validateAllChunks(FileMergeRequest request) {
String chunkDir = properties.getUploadPath() + "/" + request.getFileId();
File dir = new File(chunkDir);

if (!dir.exists()) {
return false;
}

File[] chunkFiles = dir.listFiles();
if (chunkFiles == null || chunkFiles.length != request.getTotalChunks()) {
return false;
}

// 检查分片文件是否完整
for (int i = 0; i < request.getTotalChunks(); i++) {
File chunkFile = new File(chunkDir + "/chunk_" + i);
if (!chunkFile.exists() || chunkFile.length() == 0) {
return false;
}
}

return true;
}

/**
* 生成最终文件路径
* @param request 文件合并请求
* @return 最终文件路径
*/
private String generateFinalFilePath(FileMergeRequest request) {
String finalDir = properties.getUploadPath() + "/final";
File dir = new File(finalDir);
if (!dir.exists()) {
dir.mkdirs();
}

return finalDir + "/" + request.getFileId() + "_" + request.getFileName();
}

/**
* 合并分片文件
* @param request 文件合并请求
* @param finalFilePath 最终文件路径
*/
private void mergeChunkFiles(FileMergeRequest request, String finalFilePath) throws IOException {
try (FileOutputStream fos = new FileOutputStream(finalFilePath)) {
String chunkDir = properties.getUploadPath() + "/" + request.getFileId();

for (int i = 0; i < request.getTotalChunks(); i++) {
String chunkPath = chunkDir + "/chunk_" + i;
File chunkFile = new File(chunkPath);

if (!chunkFile.exists()) {
throw new IOException("分片文件不存在: " + chunkPath);
}

try (FileInputStream fis = new FileInputStream(chunkFile)) {
byte[] buffer = new byte[8192];
int bytesRead;
while ((bytesRead = fis.read(buffer)) != -1) {
fos.write(buffer, 0, bytesRead);
}
}
}

fos.flush();
}
}

/**
* 计算文件MD5
* @param filePath 文件路径
* @return MD5值
*/
private String calculateFileMD5(String filePath) {
try {
MessageDigest md = MessageDigest.getInstance("MD5");
try (FileInputStream fis = new FileInputStream(filePath)) {
byte[] buffer = new byte[8192];
int bytesRead;
while ((bytesRead = fis.read(buffer)) != -1) {
md.update(buffer, 0, bytesRead);
}
}

byte[] hash = md.digest();
StringBuilder hexString = new StringBuilder();

for (byte b : hash) {
String hex = Integer.toHexString(0xff & b);
if (hex.length() == 1) {
hexString.append('0');
}
hexString.append(hex);
}

return hexString.toString();
} catch (Exception e) {
throw new RuntimeException("计算文件MD5失败", e);
}
}

/**
* 更新文件信息
* @param request 文件合并请求
* @param finalFilePath 最终文件路径
*/
private void updateFileInfo(FileMergeRequest request, String finalFilePath) {
try {
String key = "file_info:" + request.getFileId();
FileInfo fileInfo = (FileInfo) redisTemplate.opsForValue().get(key);

if (fileInfo != null) {
fileInfo.setFilePath(finalFilePath);
fileInfo.setCompleteTime(LocalDateTime.now());
fileInfo.setStatus("COMPLETED");

redisTemplate.opsForValue().set(key, fileInfo, Duration.ofDays(properties.getCleanupDays()));
}
} catch (Exception e) {
log.error("更新文件信息失败: fileId={}", request.getFileId(), e);
}
}

/**
* 清理分片文件
* @param fileId 文件ID
*/
private void cleanupChunkFiles(String fileId) {
try {
String chunkDir = properties.getUploadPath() + "/" + fileId;
File dir = new File(chunkDir);
if (dir.exists()) {
FileUtils.deleteDirectory(dir);
}
} catch (Exception e) {
log.error("清理分片文件失败: fileId={}", fileId, e);
}
}
}
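The digest-to-hex loop inside `calculateFileMD5` is the part most worth sanity-checking. The sketch below isolates the same conversion so it can be verified against the well-known RFC 1321 test vector for MD5("abc") (class and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Same digest-to-lowercase-hex conversion as calculateMD5/calculateFileMD5 above
public class Md5Hex {

    static String md5Hex(byte[] data) {
        try {
            byte[] hash = MessageDigest.getInstance("MD5").digest(data);
            StringBuilder hex = new StringBuilder();
            for (byte b : hash) {
                hex.append(String.format("%02x", b)); // zero-padded, lowercase
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 not available", e);
        }
    }

    static String md5Hex(String s) {
        return md5Hex(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // RFC 1321 test vector
        System.out.println(md5Hex("abc")); // 900150983cd24fb0d6963f7d28e17f72
    }
}
```

Note that MD5 is adequate as an integrity check against transfer corruption, but it is not collision-resistant; prefer SHA-256 if the hash also plays a security role.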

6. Progress Tracking Service

6.1 ProgressTrackingService Implementation

/**
 * Progress tracking service
 */
@Slf4j
@Service
public class ProgressTrackingService {

// Injected by Spring; a bare "new RedisTemplate<>()" would have no
// connection factory and could never reach Redis
@Autowired
private RedisTemplate<String, Object> redisTemplate;

/**
* 初始化上传进度
* @param fileInfo 文件信息
*/
public void initializeProgress(FileInfo fileInfo) {
try {
UploadProgress progress = UploadProgress.builder()
.fileId(fileInfo.getFileId())
.fileName(fileInfo.getFileName())
.fileSize(fileInfo.getFileSize())
.totalChunks(fileInfo.getTotalChunks())
.uploadedChunks(0)
.uploadedBytes(0)
.progressPercentage(0.0)
.status("UPLOADING")
.startTime(LocalDateTime.now())
.lastUpdateTime(LocalDateTime.now())
.completedChunks(new ArrayList<>())
.build();

String key = "upload_progress:" + fileInfo.getFileId();
redisTemplate.opsForValue().set(key, progress, Duration.ofDays(7));

} catch (Exception e) {
log.error("初始化上传进度失败: fileId={}", fileInfo.getFileId(), e);
}
}

/**
 * Update upload progress
 * @param fileId file ID
 * @param chunkIndex chunk index
 */
public void updateProgress(String fileId, int chunkIndex) {
try {
String key = "upload_progress:" + fileId;
UploadProgress progress = (UploadProgress) redisTemplate.opsForValue().get(key);

if (progress != null) {
// Count each chunk only once, so that retried or resumed chunks
// do not inflate the progress numbers
if (!progress.getCompletedChunks().contains(chunkIndex)) {
progress.getCompletedChunks().add(chunkIndex);
progress.setUploadedBytes(progress.getUploadedBytes() + calculateChunkSize(fileId, chunkIndex));
}
progress.setUploadedChunks(progress.getCompletedChunks().size());
progress.setProgressPercentage((double) progress.getUploadedChunks() / progress.getTotalChunks() * 100);
progress.setLastUpdateTime(LocalDateTime.now());

redisTemplate.opsForValue().set(key, progress, Duration.ofDays(7));
}

} catch (Exception e) {
log.error("Failed to update upload progress: fileId={}, chunkIndex={}", fileId, chunkIndex, e);
}
}

/**
* 获取上传进度
* @param fileId 文件ID
* @return 上传进度
*/
public UploadProgress getProgress(String fileId) {
try {
String key = "upload_progress:" + fileId;
return (UploadProgress) redisTemplate.opsForValue().get(key);
} catch (Exception e) {
log.error("获取上传进度失败: fileId={}", fileId, e);
return null;
}
}

/**
* 暂停上传进度
* @param fileId 文件ID
*/
public void pauseProgress(String fileId) {
try {
String key = "upload_progress:" + fileId;
UploadProgress progress = (UploadProgress) redisTemplate.opsForValue().get(key);

if (progress != null) {
progress.setStatus("PAUSED");
progress.setLastUpdateTime(LocalDateTime.now());
redisTemplate.opsForValue().set(key, progress, Duration.ofDays(7));
}

} catch (Exception e) {
log.error("暂停上传进度失败: fileId={}", fileId, e);
}
}

/**
* 恢复上传进度
* @param fileId 文件ID
*/
public void resumeProgress(String fileId) {
try {
String key = "upload_progress:" + fileId;
UploadProgress progress = (UploadProgress) redisTemplate.opsForValue().get(key);

if (progress != null) {
progress.setStatus("UPLOADING");
progress.setLastUpdateTime(LocalDateTime.now());
redisTemplate.opsForValue().set(key, progress, Duration.ofDays(7));
}

} catch (Exception e) {
log.error("恢复上传进度失败: fileId={}", fileId, e);
}
}

/**
* 取消上传进度
* @param fileId 文件ID
*/
public void cancelProgress(String fileId) {
try {
String key = "upload_progress:" + fileId;
UploadProgress progress = (UploadProgress) redisTemplate.opsForValue().get(key);

if (progress != null) {
progress.setStatus("CANCELLED");
progress.setLastUpdateTime(LocalDateTime.now());
redisTemplate.opsForValue().set(key, progress, Duration.ofDays(7));
}

} catch (Exception e) {
log.error("取消上传进度失败: fileId={}", fileId, e);
}
}

/**
* 检查所有分片是否上传完成
* @param fileId 文件ID
* @return 是否完成
*/
public boolean isAllChunksUploaded(String fileId) {
try {
String key = "upload_progress:" + fileId;
UploadProgress progress = (UploadProgress) redisTemplate.opsForValue().get(key);

if (progress != null) {
return progress.getUploadedChunks() >= progress.getTotalChunks();
}

return false;
} catch (Exception e) {
log.error("检查分片上传完成状态失败: fileId={}", fileId, e);
return false;
}
}

/**
 * Calculate chunk size
 * @param fileId file ID
 * @param chunkIndex chunk index
 * @return chunk size in bytes
 */
private long calculateChunkSize(String fileId, int chunkIndex) {
// A real implementation should look up the actual chunk size recorded in
// FileInfo; a fixed size is returned here for simplicity
return 1024 * 1024; // 1MB
}
}
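For resumable transfer, a reconnecting client needs to know which chunks are still missing, and the `completedChunks` list kept by this service is enough to compute that. A standalone sketch (the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Given the completedChunks list stored in UploadProgress, compute the chunk
// indices a client still has to upload after resuming an interrupted transfer.
public class MissingChunks {

    static List<Integer> missing(int totalChunks, List<Integer> completed) {
        Set<Integer> done = new HashSet<>(completed);
        List<Integer> result = new ArrayList<>();
        for (int i = 0; i < totalChunks; i++) {
            if (!done.contains(i)) {
                result.add(i);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Chunks 0, 1 and 3 of 5 made it through before the interruption
        System.out.println(missing(5, Arrays.asList(0, 1, 3))); // [2, 4]
    }
}
```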

7. Chunk Upload Controller

7.1 ChunkUploadController Implementation

/**
 * Chunk upload controller
 */
@Slf4j
@RestController
@RequestMapping("/chunk-upload")
public class ChunkUploadController {

@Autowired
private ChunkUploadService chunkUploadService;

@Autowired
private FileMergeService fileMergeService;

@Autowired
private ProgressTrackingService progressTrackingService;

/**
* 初始化上传
*/
@PostMapping("/initialize")
public ResponseEntity<Map<String, Object>> initializeUpload(
@RequestParam String fileName,
@RequestParam long fileSize,
@RequestParam String fileMd5) {
try {
FileInfo fileInfo = chunkUploadService.initializeUpload(fileName, fileSize, fileMd5);

Map<String, Object> response = new HashMap<>();
response.put("success", true);
response.put("fileId", fileInfo.getFileId());
response.put("totalChunks", fileInfo.getTotalChunks());
response.put("chunkSize", 1024 * 1024); // 1MB; should stay in sync with chunk.upload.chunk-size
response.put("message", "上传初始化成功");

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("初始化上传失败", e);

Map<String, Object> response = new HashMap<>();
response.put("success", false);
response.put("message", "初始化上传失败: " + e.getMessage());

return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
}
}

/**
* 上传分片
*/
@PostMapping("/upload")
public ResponseEntity<Map<String, Object>> uploadChunk(
@RequestParam String fileId,
@RequestParam String fileName,
@RequestParam long fileSize,
@RequestParam int chunkIndex,
@RequestParam int totalChunks,
@RequestParam String chunkMd5,
@RequestParam String fileMd5,
@RequestParam MultipartFile chunk) {
try {
ChunkUploadRequest request = ChunkUploadRequest.builder()
.fileId(fileId)
.fileName(fileName)
.fileSize(fileSize)
.chunkIndex(chunkIndex)
.totalChunks(totalChunks)
.chunkMd5(chunkMd5)
.fileMd5(fileMd5)
.uploadId(fileId)
.build();

ChunkUploadResponse response = chunkUploadService.uploadChunk(request, chunk.getBytes());

Map<String, Object> result = new HashMap<>();
result.put("success", response.isSuccess());
result.put("message", response.getMessage());
result.put("chunkIndex", response.getChunkIndex());
result.put("progress", response.getProgress());

return ResponseEntity.ok(result);

} catch (Exception e) {
log.error("上传分片失败", e);

Map<String, Object> response = new HashMap<>();
response.put("success", false);
response.put("message", "上传分片失败: " + e.getMessage());

return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
}
}

/**
* 获取上传进度
*/
@GetMapping("/progress/{fileId}")
public ResponseEntity<Map<String, Object>> getUploadProgress(@PathVariable String fileId) {
try {
UploadProgress progress = progressTrackingService.getProgress(fileId);

Map<String, Object> response = new HashMap<>();
response.put("success", true);
response.put("progress", progress);

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("获取上传进度失败: fileId={}", fileId, e);

Map<String, Object> response = new HashMap<>();
response.put("success", false);
response.put("message", "获取上传进度失败: " + e.getMessage());

return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
}
}

/**
* 暂停上传
*/
@PostMapping("/pause/{fileId}")
public ResponseEntity<Map<String, Object>> pauseUpload(@PathVariable String fileId) {
try {
boolean success = chunkUploadService.pauseUpload(fileId);

Map<String, Object> response = new HashMap<>();
response.put("success", success);
response.put("message", success ? "暂停上传成功" : "暂停上传失败");

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("暂停上传失败: fileId={}", fileId, e);

Map<String, Object> response = new HashMap<>();
response.put("success", false);
response.put("message", "暂停上传失败: " + e.getMessage());

return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
}
}

/**
* 恢复上传
*/
@PostMapping("/resume/{fileId}")
public ResponseEntity<Map<String, Object>> resumeUpload(@PathVariable String fileId) {
try {
boolean success = chunkUploadService.resumeUpload(fileId);

Map<String, Object> response = new HashMap<>();
response.put("success", success);
response.put("message", success ? "恢复上传成功" : "恢复上传失败");

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("恢复上传失败: fileId={}", fileId, e);

Map<String, Object> response = new HashMap<>();
response.put("success", false);
response.put("message", "恢复上传失败: " + e.getMessage());

return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
}
}

/**
* 取消上传
*/
@PostMapping("/cancel/{fileId}")
public ResponseEntity<Map<String, Object>> cancelUpload(@PathVariable String fileId) {
try {
boolean success = chunkUploadService.cancelUpload(fileId);

Map<String, Object> response = new HashMap<>();
response.put("success", success);
response.put("message", success ? "取消上传成功" : "取消上传失败");

return ResponseEntity.ok(response);

} catch (Exception e) {
log.error("取消上传失败: fileId={}", fileId, e);

Map<String, Object> response = new HashMap<>();
response.put("success", false);
response.put("message", "取消上传失败: " + e.getMessage());

return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
}
}

/**
* 手动合并文件
*/
@PostMapping("/merge")
public ResponseEntity<Map<String, Object>> mergeFile(@RequestBody FileMergeRequest request) {
try {
FileMergeResponse response = fileMergeService.mergeFile(request);

Map<String, Object> result = new HashMap<>();
result.put("success", response.isSuccess());
result.put("message", response.getMessage());
result.put("filePath", response.getFilePath());
result.put("fileSize", response.getFileSize());
result.put("fileMd5", response.getFileMd5());

return ResponseEntity.ok(result);

} catch (Exception e) {
log.error("合并文件失败", e);

Map<String, Object> response = new HashMap<>();
response.put("success", false);
response.put("message", "合并文件失败: " + e.getMessage());

return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(response);
}
}
}

8. Summary

Building on Spring Boot and Redis, we have assembled an efficient and reliable chunked upload system for large files. Its key characteristics include:

8.1 Core Strengths

  1. Chunked upload: split a large file into small pieces and upload them individually
  2. Resumable transfer: continue an upload after it has been interrupted
  3. Progress tracking: track upload progress in real time
  4. File merging: automatically merge the uploaded chunks
  5. Error handling: robust error handling and retry mechanisms

8.2 Best Practices

  1. Chunking strategy: choose a sensible chunk size and concurrency limit
  2. Progress management: real-time progress tracking and state management
  3. Error handling: robust error handling and retry mechanisms
  4. Resource management: clean up temporary files and caches promptly
  5. Performance optimization: efficient concurrency and memory management

This Spring Boot + Redis chunked upload solution handles not only the upload itself but also resumable transfer, progress tracking, and file merging, making it a solid building block for modern web applications.