
Multi-File Upload with Chunking and Resume — Practical Guide (Angular + ASP.NET Core + SQL Server)

Large file uploads are common in enterprise apps (videos, CAD, backups). Network interruptions or browser crashes break uploads and frustrate users. Chunked uploads with resumable sessions solve this — upload a file in parts, let the client retry only missing chunks, and resume later.

This article gives a practical, production-ready design and implementation using Angular (frontend), ASP.NET Core (backend) and SQL Server. It covers architecture, database schema, endpoints, client logic (chunking, hashing, resume), server logic (session tracking, chunk validation, assembly), and operational best practices.

Overview

Key concepts

  • Chunking: Split file into fixed-size chunks (e.g. 5–10 MB).

  • Upload session: Server creates a session for each file with session id, expected chunk count and file hash/size.

  • Chunk upload: Client uploads each chunk with metadata (session id, chunk index).

  • Resume: Client asks server which chunks are received and uploads only missing ones.

  • Assembly: After all chunks uploaded, server assembles them into final file and validates checksum.

  • Multiple files: Support parallel sessions for multiple files.

  • Security: Authenticate requests, validate MIME types, enforce size limits, rate-limit.

  • Scalability: Store chunks on disk or blob storage; use consistent naming; consider using object storage multipart upload (S3/Blob).

Flowchart

User selects files → For each file: compute id/hash → Request upload session → Upload chunks (parallel) →
If interruption: query server for uploaded chunks → upload missing chunks → Request assembly → Server assembles & validates →
Return success → Client notifies user

Workflow

  1. Client computes (or requests) file fingerprint (e.g. SHA-256 or MD5) and file size.

  2. Client POSTs /api/uploads/init with file metadata; server creates UploadSession and returns sessionId, chunkSize, maxParallel.

  3. Client splits the file into chunks and uploads them to /api/uploads/{sessionId}/chunk, passing the chunk index and an optional chunk hash (in the sample implementation below, the index travels as a query-string parameter and the hash in an X-Chunk-Hash header). Uploads can run in parallel.

  4. On retry/resume, client calls /api/uploads/{sessionId}/status to get a list of received chunk indices.

  5. When all chunks uploaded, client calls /api/uploads/{sessionId}/complete. Server validates (count, combined size, checksum) and assembles file. Optionally store assembled file to blob.

  6. Server updates DB and returns final file metadata.

ER Diagram

+--------------------------+
| UploadSession            |
+--------------------------+
| SessionId (PK) GUID      |
| FileName                 |
| FileSize BIGINT          |
| FileHash VARCHAR(128)    |
| ChunkSize INT            |
| TotalChunks INT          |
| UploadedChunksCount INT  |
| Status (Pending/Assembling/Complete/Failed) |
| CreatedBy                |
| CreatedOn DATETIME2      |
+--------------------------+

+--------------------------+
| UploadedChunk            |
+--------------------------+
| ChunkId INT IDENTITY PK  |
| SessionId GUID FK        |
| ChunkIndex INT           |
| Size INT                 |
| Hash VARCHAR(128)        |
| StoredPath NVARCHAR(2000)|
| UploadedOn DATETIME2     |
+--------------------------+

Architecture diagram

[ Angular Client ]  <--->  [ ASP.NET Core API (UploadController) ]  <---> [ Storage ]
    (compute hash, split)          (session, chunk endpoints)            (local disk or S3/Blob)
                                                            |
                                                            v
                                                   [ SQL Server - sessions & chunks ]

Sequence diagram

Client -> API: POST /init { filename, size, hash }
API -> DB: create UploadSession
API -> Client: return sessionId, chunkSize
Client -> API: POST /chunk (sessionId, index, chunkData)
API -> Storage: save chunk
API -> DB: upsert UploadedChunk
Client -> API: GET /status (sessionId)
API -> DB: return uploaded chunk indices
Client -> API: POST /complete (sessionId)
API -> validate chunks, assemble -> Storage final file, compute final hash
API -> DB: update session status Complete
API -> Client: return file url / success

Design Details & Decisions

  • Chunk size: default 5 MB — trade-off between overhead and retry cost.

  • Hashing: client computes file hash (SHA-256 preferred) to identify file uniquely; optionally compute per-chunk hash for integrity. If client hashing is expensive, server can compute during assembly.

  • Parallelism: upload N chunks in parallel (e.g. 3–6) for speed.

  • Storage: For single-server, store chunks on disk under uploads/sessions/{sessionId}/chunk_{index}. For distributed systems, use object storage (S3/Blob multipart or put parts) — then assembly may be serverless or server-assisted.

  • Atomic assembly: assemble into a temporary file and atomically move/rename on success.

  • Idempotency: re-uploading an already existing chunk should be safe (overwrite or ignore).

  • Expiration/cleanup: set session TTL (e.g. 7 days) and run cleanup job for incomplete sessions.

  • Security: require auth (JWT), validate that the user owns the session, limit total size per account, prevent directory traversal (a small path-building sketch follows this list).

  • Resume flow: client queries /status to receive list of uploaded chunks or missing indices. Server returns chunk index list or bitmap.
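
To make the directory-traversal and chunk-naming points concrete, here is a minimal path-building sketch. The class and method names are illustrative (they are not part of the service shown later), and uploadsRoot is assumed to be a configured absolute path:

using System;
using System.IO;

// Illustrative helper: builds the on-disk path for a chunk and refuses anything outside the uploads root.
public static class ChunkPaths
{
    public static string GetChunkPath(string uploadsRoot, Guid sessionId, int chunkIndex, int totalChunks)
    {
        if (chunkIndex < 0 || chunkIndex >= totalChunks)
            throw new ArgumentOutOfRangeException(nameof(chunkIndex));

        // A Guid formatted with "N" contains only hex digits, so the folder name cannot carry ".." or separators.
        var sessionFolder = Path.Combine(uploadsRoot, "sessions", sessionId.ToString("N"));
        var fullPath = Path.GetFullPath(Path.Combine(sessionFolder, $"chunk_{chunkIndex}"));

        // Defence in depth: the resolved path must still sit under the uploads root.
        if (!fullPath.StartsWith(Path.GetFullPath(uploadsRoot), StringComparison.OrdinalIgnoreCase))
            throw new InvalidOperationException("Resolved chunk path escapes the uploads root.");

        return fullPath;
    }
}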

Database schema (SQL Server) — script

CREATE TABLE UploadSession (
  SessionId UNIQUEIDENTIFIER PRIMARY KEY,
  FileName NVARCHAR(512) NOT NULL,
  FileSize BIGINT NOT NULL,
  FileHash NVARCHAR(128) NULL,
  ChunkSize INT NOT NULL,
  TotalChunks INT NOT NULL,
  UploadedChunksCount INT NOT NULL DEFAULT 0,
  Status NVARCHAR(32) NOT NULL DEFAULT 'Pending',
  CreatedBy NVARCHAR(200) NULL,
  CreatedOn DATETIME2 DEFAULT SYSUTCDATETIME()
);

CREATE TABLE UploadedChunk (
  ChunkId INT IDENTITY(1,1) PRIMARY KEY,
  SessionId UNIQUEIDENTIFIER NOT NULL,
  ChunkIndex INT NOT NULL,
  Size INT NOT NULL,
  Hash NVARCHAR(128) NULL,
  StoredPath NVARCHAR(2000) NOT NULL,
  UploadedOn DATETIME2 DEFAULT SYSUTCDATETIME(),
  CONSTRAINT FK_Chunk_Session FOREIGN KEY (SessionId) REFERENCES UploadSession(SessionId)
);

CREATE INDEX IX_UploadedChunk_Session_ChunkIndex ON UploadedChunk(SessionId, ChunkIndex);
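
The backend snippets below reference an ApplicationDbContext that this article does not define. A minimal EF Core mapping for the two tables above might look like the following sketch; the class and property names simply mirror the columns, and everything else is an assumption to adapt to your project:

using Microsoft.EntityFrameworkCore;

public class UploadSession
{
    public Guid SessionId { get; set; }
    public string FileName { get; set; } = string.Empty;
    public long FileSize { get; set; }
    public string? FileHash { get; set; }
    public int ChunkSize { get; set; }
    public int TotalChunks { get; set; }
    public int UploadedChunksCount { get; set; }
    public string Status { get; set; } = "Pending";
    public string? CreatedBy { get; set; }
    public DateTime CreatedOn { get; set; }
}

public class UploadedChunk
{
    public int ChunkId { get; set; }
    public Guid SessionId { get; set; }
    public int ChunkIndex { get; set; }
    public int Size { get; set; }
    public string? Hash { get; set; }
    public string StoredPath { get; set; } = string.Empty;
    public DateTime UploadedOn { get; set; }
}

public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options) { }

    public DbSet<UploadSession> UploadSessions => Set<UploadSession>();
    public DbSet<UploadedChunk> UploadedChunks => Set<UploadedChunk>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Map to the singular table names used in the SQL script above.
        modelBuilder.Entity<UploadSession>().ToTable("UploadSession");
        modelBuilder.Entity<UploadedChunk>().ToTable("UploadedChunk");
    }
}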

Backend (ASP.NET Core) — essential code snippets

These are concise, ready-to-adapt examples. For production, add dependency injection, logging, error handling and authentication.

DTOs

public record InitUploadRequest(string FileName, long FileSize, string? FileHash);
public record InitUploadResponse(Guid SessionId, int ChunkSize, int TotalChunks);

Program.cs — minimal DI

var conn = builder.Configuration.GetConnectionString("Default"); // connection string name is illustrative

builder.Services.AddControllers();
builder.Services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer(conn));
builder.Services.AddScoped<IUploadService, UploadService>();
...
app.MapControllers();
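
Because chunks are posted as raw request bodies, the server's request body size limit matters. Kestrel's default limit (roughly 30 MB) already exceeds a 5–10 MB chunk, but if you raise the chunk size you may need to adjust it. A sketch, with an illustrative 20 MB cap (a [RequestSizeLimit] attribute on the chunk action is an alternative):

// Optional: cap request bodies to a bit more than the chunk size (the value here is illustrative).
builder.WebHost.ConfigureKestrel(options =>
{
    options.Limits.MaxRequestBodySize = 20 * 1024 * 1024; // 20 MB
});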

UploadController.cs

[ApiController]
[Route("api/uploads")]
public class UploadsController : ControllerBase
{
    private readonly IUploadService _svc;
    private readonly IWebHostEnvironment _env; // for path
    public UploadsController(IUploadService svc, IWebHostEnvironment env) { _svc = svc; _env = env; }

    [HttpPost("init")]
    public async Task<IActionResult> Init([FromBody] InitUploadRequest req)
    {
        var session = await _svc.CreateSessionAsync(req.FileName, req.FileSize, req.FileHash);
        int chunkSize = _svc.DefaultChunkSize;
        int totalChunks = (int)Math.Ceiling((double)req.FileSize / chunkSize);
        return Ok(new InitUploadResponse(session.SessionId, chunkSize, totalChunks));
    }

    [HttpPost("{sessionId}/chunk")]
    public async Task<IActionResult> UploadChunk(Guid sessionId, [FromQuery] int index)
    {
        // check request contains body
        if (Request.ContentLength == null || Request.ContentLength == 0) return BadRequest("Empty chunk");

        // Buffer the whole chunk in memory; fine for 5-10 MB chunks (a streaming variant is sketched after this controller)
        using var ms = new MemoryStream();
        await Request.Body.CopyToAsync(ms);
        var data = ms.ToArray();

        // optional: header "X-Chunk-Hash"
        string? chunkHash = Request.Headers["X-Chunk-Hash"].FirstOrDefault();

        await _svc.SaveChunkAsync(sessionId, index, data, chunkHash);

        return Ok();
    }

    [HttpGet("{sessionId}/status")]
    public async Task<IActionResult> Status(Guid sessionId)
    {
        var received = await _svc.GetReceivedChunksAsync(sessionId);
        return Ok(received); // return array of int indices or bitmap
    }

    [HttpPost("{sessionId}/complete")]
    public async Task<IActionResult> Complete(Guid sessionId)
    {
        var result = await _svc.AssembleAsync(sessionId);
        if (!result.Success) return BadRequest(result.Message);
        return Ok(new { fileUrl = result.FileUrl });
    }
}
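
The UploadChunk action above buffers each chunk in a MemoryStream, which keeps the code short and is acceptable for 5–10 MB chunks. If you would rather not hold chunks in memory at all, the body can be streamed straight to the chunk file. A rough sketch; the route name is illustrative, and GetChunkPathAsync / RegisterChunkAsync are assumed helpers that are not part of the IUploadService interface below:

    [HttpPost("{sessionId}/chunk-streamed")]
    public async Task<IActionResult> UploadChunkStreamed(Guid sessionId, [FromQuery] int index)
    {
        // GetChunkPathAsync is an assumed helper: validates the session/index and returns the target path.
        var path = await _svc.GetChunkPathAsync(sessionId, index);

        // Stream the body straight to disk; no chunk-sized buffer is allocated.
        await using (var target = System.IO.File.Create(path))
        {
            await Request.Body.CopyToAsync(target);
        }

        // RegisterChunkAsync is an assumed helper: upserts the UploadedChunk row for this index.
        await _svc.RegisterChunkAsync(sessionId, index, new System.IO.FileInfo(path).Length);
        return Ok();
    }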

UploadService (core methods)

public interface IUploadService {
    int DefaultChunkSize { get; }
    Task<UploadSession> CreateSessionAsync(string fileName, long fileSize, string? fileHash);
    Task SaveChunkAsync(Guid sessionId, int index, byte[] data, string? hash);
    Task<int[]> GetReceivedChunksAsync(Guid sessionId);
    Task<(bool Success, string FileUrl, string? Message)> AssembleAsync(Guid sessionId);
}

Implementation notes (pseudocode):

  • CreateSessionAsync: create DB UploadSession row. Create folder uploads/{sessionId} on disk. Return session.

  • SaveChunkAsync:

    • Validate session exists and not completed.

    • Save chunk file to path uploads/{sessionId}/chunk_{index} (ensure safe filename).

    • Upsert UploadedChunk record (if already exists, update timestamp and size).

    • Optionally verify hash against computed hash of data.

    • Increment UploadedChunksCount if first time.

  • GetReceivedChunksAsync: query UploadedChunk for session and return list of ChunkIndex.

  • AssembleAsync (a streaming sketch follows these notes):

    • Confirm UploadedChunksCount == TotalChunks.

    • Create temp file uploads/{sessionId}/assemble.tmp.

    • For i = 0 .. total-1: open chunk file chunk_{i} and append bytes to temp file.

    • Compute final hash (SHA-256) of assembled file and compare with FileHash if provided.

    • Move temp file to final storage path (e.g. completed/{sessionId}_{safeName}) atomically.

    • Update UploadSession.Status = Complete and return URL or path.

    • On failure, set status to Failed and return message.

  • Concurrency: use locks or transactional DB flags to prevent two assemblies running simultaneously.
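
To make the assembly notes concrete, here is one possible shape for AssembleAsync. It streams chunks into a temporary file with a buffered copy, hashes the result as it goes, and moves the file into place at the end. _db (the DbContext) and _uploadsRoot (a configured path) are assumed fields; error handling, DB status updates and locking are reduced to comments. Requires Microsoft.EntityFrameworkCore, System.IO and System.Security.Cryptography, plus .NET 5+ for Convert.ToHexString:

public async Task<(bool Success, string FileUrl, string? Message)> AssembleAsync(Guid sessionId)
{
    var session = await _db.UploadSessions.FindAsync(sessionId);
    if (session is null) return (false, string.Empty, "Unknown session");

    var received = await _db.UploadedChunks.CountAsync(c => c.SessionId == sessionId);
    if (received != session.TotalChunks) return (false, string.Empty, "Chunks missing");

    var sessionDir = Path.Combine(_uploadsRoot, "sessions", sessionId.ToString("N"));
    var tempPath = Path.Combine(sessionDir, "assemble.tmp");
    var finalPath = Path.Combine(_uploadsRoot, "completed", $"{sessionId:N}_{Path.GetFileName(session.FileName)}");

    using var sha = SHA256.Create();
    await using (var output = new FileStream(tempPath, FileMode.Create, FileAccess.Write, FileShare.None, 81920))
    {
        for (var i = 0; i < session.TotalChunks; i++)
        {
            var chunkPath = Path.Combine(sessionDir, $"chunk_{i}");
            await using var chunk = File.OpenRead(chunkPath);

            // Buffered copy: no chunk is ever loaded into memory as a whole.
            var buffer = new byte[81920];
            int read;
            while ((read = await chunk.ReadAsync(buffer)) > 0)
            {
                sha.TransformBlock(buffer, 0, read, null, 0);
                await output.WriteAsync(buffer.AsMemory(0, read));
            }
        }
    }
    sha.TransformFinalBlock(Array.Empty<byte>(), 0, 0);
    var finalHash = Convert.ToHexString(sha.Hash!);

    if (session.FileHash is not null &&
        !string.Equals(finalHash, session.FileHash, StringComparison.OrdinalIgnoreCase))
    {
        // Mark the session Failed in the DB here.
        return (false, string.Empty, "Checksum mismatch");
    }

    Directory.CreateDirectory(Path.GetDirectoryName(finalPath)!);
    File.Move(tempPath, finalPath, overwrite: true); // effectively atomic on the same volume

    // Set session.Status = "Complete" and save changes here.
    return (true, finalPath, null);
}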

Client (Angular) — practical implementation

Key parts:

  • Compute file fingerprint (optional) — use crypto.subtle.digest('SHA-256', arrayBuffer) (async).

  • Split file into chunks: use file.slice(start, end).

  • Upload with concurrency limit (e.g. 4 concurrent fetches).

  • Persist session info (sessionId, chunkSize) in localStorage to resume after reload.

  • On resume: call /status to get received chunks and upload the rest.

Angular service (simplified)

@Injectable({providedIn: 'root'})
export class ResumableUploadService {
  constructor(private http: HttpClient) {}

  async computeHash(file: File): Promise<string> {
    // Hashes the whole file in memory; simple, but see the Notes below for very large files
    const buffer = await file.arrayBuffer();
    const hashBuffer = await crypto.subtle.digest('SHA-256', buffer);
    const hashArray = Array.from(new Uint8Array(hashBuffer));
    return hashArray.map(b => b.toString(16).padStart(2, '0')).join('');
  }

  async startSession(file: File) {
    const hash = await this.computeHash(file);
    const res: any = await this.http.post('/api/uploads/init', { fileName: file.name, fileSize: file.size, fileHash: hash }).toPromise();
    return res as { sessionId: string, chunkSize: number, totalChunks: number };
  }

  async uploadFile(file: File, onProgress: (p:number)=>void) {
    const session = await this.startSession(file);
    const sessionKey = `upload:${session.sessionId}`;
    localStorage.setItem(sessionKey, JSON.stringify({ fileName: file.name, size: file.size }));

    const chunkSize = session.chunkSize;
    const total = session.totalChunks;
    const uploaded = new Set<number>();

    // check server for already received chunks (resume scenario)
    const rec: number[] = await this.http.get<number[]>(`/api/uploads/${session.sessionId}/status`).toPromise();
    rec.forEach(i => uploaded.add(i));

    // prepare indices to upload
    const indices = [];
    for (let i=0;i<total;i++) if (!uploaded.has(i)) indices.push(i);

    const concurrency = 4;
    let active = 0;
    let completed = uploaded.size;

    const uploadChunk = async (index: number) => {
      const start = index * chunkSize;
      const end = Math.min(start + chunkSize, file.size);
      const blob = file.slice(start, end);
      // send the raw blob as the request body (the server reads Request.Body directly, so no FormData is needed)
      // optionally compute a per-chunk hash here and send it as an X-Chunk-Hash header
      // const chunkHash = await this.computeChunkHash(blob);

      const url = `/api/uploads/${session.sessionId}/chunk?index=${index}`;
      await this.http.post(url, blob, { headers: new HttpHeaders({'Content-Type':'application/octet-stream'}) }).toPromise();
      completed++;
      onProgress(Math.round(completed * 100 / total));
    };

    // limited concurrency runner
    const runUploads = () => new Promise<void>((resolve, reject) => {
      let i = 0;
      const next = () => {
        if (i >= indices.length && active === 0) { resolve(); return; }
        while (active < concurrency && i < indices.length) {
          const idx = indices[i++];
          active++;
          uploadChunk(idx).then(() => { active--; next(); }).catch(err => { active--; reject(err); });
        }
      };
      next();
    });

    await runUploads();
    // complete
    await this.http.post(`/api/uploads/${session.sessionId}/complete`, {}).toPromise();
    localStorage.removeItem(sessionKey);
    return true;
  }
}

Notes:

  • For very large files, computing full SHA-256 on client may block; you can use a progressive hashing library or let server verify final hash.

  • Send chunk binary directly as request body to simplify server side.

Component usage

onFileSelected(e: any) {
  const file: File = e.target.files[0];
  this.uploadService.uploadFile(file, (p)=>this.progress = p).then(()=>alert('Done')).catch(err=>console.error(err));
}

Edge cases & hardening

  • Partial chunk re-upload: Overwrite chunk file on save or compare hash to decide.

  • Concurrent clients: If same session used by multiple clients, track CreatedBy and prevent unauthorized writes.

  • Chunk ordering: Assembly must follow index order.

  • Disk fill / quota: Enforce per-user quotas; reject sessions if user exceeds storage.

  • Chunk corruption: Store chunk hash and compare on upload; if mismatch, request reupload.

  • Cleanup: Background job to remove sessions older than the TTL and delete their chunk files (a hosted-service sketch follows this list).

  • Timeouts: Keep chunk upload request short; allow retries.

  • Large file streaming: When assembling, stream rather than load entire file into memory. Use buffered copy.
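
A minimal shape for the cleanup job mentioned above, written as an ASP.NET Core hosted service. The seven-day TTL, the hourly interval, the configuration key and the class name are all assumptions; swap in your own values and register the service in Program.cs:

// Sketch of a background cleanup job for expired sessions (names, TTL and interval are illustrative).
public class UploadCleanupService : BackgroundService
{
    private readonly IServiceScopeFactory _scopes;
    private readonly string _uploadsRoot;
    private static readonly TimeSpan Ttl = TimeSpan.FromDays(7);

    public UploadCleanupService(IServiceScopeFactory scopes, IConfiguration config)
    {
        _scopes = scopes;
        _uploadsRoot = config["Uploads:Root"] ?? "/data/uploads"; // assumed configuration key
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using (var scope = _scopes.CreateScope())
            {
                var db = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
                var cutoff = DateTime.UtcNow - Ttl;

                var expired = await db.UploadSessions
                    .Where(s => s.Status != "Complete" && s.CreatedOn < cutoff)
                    .ToListAsync(stoppingToken);

                foreach (var session in expired)
                {
                    var dir = Path.Combine(_uploadsRoot, "sessions", session.SessionId.ToString("N"));
                    if (Directory.Exists(dir)) Directory.Delete(dir, recursive: true);
                    session.Status = "Failed"; // or delete the row, depending on audit requirements
                }
                await db.SaveChangesAsync(stoppingToken);
            }

            await Task.Delay(TimeSpan.FromHours(1), stoppingToken);
        }
    }
}

// Registration in Program.cs:
// builder.Services.AddHostedService<UploadCleanupService>();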

Testing strategy

  • Test uploads of various sizes: smaller than one chunk, an exact multiple of the chunk size, and a non-multiple (a small test sketch follows this list).

  • Simulate network interruption: pause client, reload page, resume.

  • Test re-upload of same chunk index with different content — server should handle overwrite or verify.

  • Test parallel uploads of multiple files.

  • Test assembly validation with wrong total chunk count or missing chunks.

  • Test security: unauthorized users should not upload to session.
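
As a small example of the first item, the total-chunk calculation used in the init endpoint can be pinned down with a few xUnit cases. The 5 MB chunk size matches the default suggested earlier; the test class itself is illustrative:

using System;
using Xunit;

// Illustrative xUnit tests for the chunk-count math used by the init endpoint.
public class ChunkCountTests
{
    private const long ChunkSize = 5 * 1024 * 1024; // 5 MB

    private static int TotalChunks(long fileSize) =>
        (int)Math.Ceiling((double)fileSize / ChunkSize);

    [Theory]
    [InlineData(1L, 1)]                  // smaller than one chunk
    [InlineData(ChunkSize, 1)]           // exactly one chunk
    [InlineData(ChunkSize * 3, 3)]       // exact multiple of the chunk size
    [InlineData(ChunkSize * 3 + 1, 4)]   // non-multiple spills into an extra chunk
    public void TotalChunks_matches_expected(long fileSize, int expected) =>
        Assert.Equal(expected, TotalChunks(fileSize));
}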

Production recommendations

  • Use object storage (S3 or Azure Blob) with native multipart uploads for scale — client can perform multipart directly with signed URLs and you avoid server-side assembly. This is preferred for very large files.

  • If using server-side assembly, keep an autoscaling worker or service to perform assembly tasks and move final file to durable storage.

  • Use a message queue to offload assembly to a background worker: respond to the client with a pending state, then notify (email/WebSocket) when the final file is ready (an in-process sketch follows this list). If you prefer application-level async assembly, make sure it is idempotent and do not rely on synchronous assembly in the request thread.

  • Monitor disk usage, chunk counts and failed sessions; implement automated cleanup.
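
If a full message broker is more than you need, an in-process stand-in for the queued-assembly idea is a bounded Channel drained by a hosted worker. The sketch below only shows the shape (class names and capacity are illustrative); a real deployment would replace the channel with a durable queue:

using System.Threading.Channels;

// In-process sketch of queued assembly; a durable queue would replace the Channel in production.
public class AssemblyQueue
{
    private readonly Channel<Guid> _channel = Channel.CreateBounded<Guid>(capacity: 100);

    public ValueTask EnqueueAsync(Guid sessionId) => _channel.Writer.WriteAsync(sessionId);
    public IAsyncEnumerable<Guid> DequeueAllAsync(CancellationToken ct) => _channel.Reader.ReadAllAsync(ct);
}

public class AssemblyWorker : BackgroundService
{
    private readonly AssemblyQueue _queue;
    private readonly IServiceScopeFactory _scopes;

    public AssemblyWorker(AssemblyQueue queue, IServiceScopeFactory scopes)
    {
        _queue = queue;
        _scopes = scopes;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var sessionId in _queue.DequeueAllAsync(stoppingToken))
        {
            using var scope = _scopes.CreateScope();
            var svc = scope.ServiceProvider.GetRequiredService<IUploadService>();
            // Assembly is idempotent by design, so a crash-and-retry simply re-runs it.
            await svc.AssembleAsync(sessionId);
        }
    }
}

// The /complete endpoint then enqueues the session and returns 202 Accepted;
// the client is notified later (for example via WebSocket) when assembly finishes.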

Example: folder layout on server (disk storage)

/data/uploads/
  sessions/
    {sessionId}/
       chunk_0
       chunk_1
       ...
       assemble.tmp
  completed/
    {sessionId}_{sanitisedFilename}

Final checklist before production

  • Authentication & authorization in all endpoints.

  • Input validation (fileName sanitisation, maxFileSize, allowed mime types).

  • Rate limiting and per-user quota.

  • Virus / malware scanning pipeline on completed files.

  • Monitoring and alerts for failed assemblies / storage fill.

  • Proper logging and audit entries for uploads and deletes.

  • Secure temporary storage permissions and cleanup policy.