If you’ve ever tuned a high-throughput .NET service, you already know the cost of allocation churn. Repeatedly allocating short-lived arrays (e.g., for file I/O, network reads, JSON payloads) drives GC pressure, latency spikes, and cache misses. ArrayPool<T> gives you a lightweight way to reuse arrays instead of recreating them, often a “free win” with minimal code changes.
Below I'll explain when to use ArrayPool<T>, the gotchas that bite, and give you copy-pasteable examples you can drop into APIs, background workers, and stream pipelines.
What is ArrayPool<T>?
ArrayPool<T> (in System.Buffers) is a pool of reusable arrays, available as a shared instance or as custom instances you create.
Instead of allocating a new array on every loop iteration, you rent a previously used buffer from the pool and return it when you're done, which dramatically reduces GC work under load.
using System.Buffers;
var pool = ArrayPool<byte>.Shared;
byte[] buffer = pool.Rent(64 * 1024); // request ~64KB
try
{
// use buffer[0..n]
}
finally
{
pool.Return(buffer, clearArray: false);
}
Tip: Always wrap the lifetime with try/finally to guarantee the buffer is returned, even on exceptions.
When should you use it?
Use ArrayPool<T> when you:
Allocate the same size arrays repeatedly (e.g., 8KB–1MB buffers).
Process streams (files, sockets, HTTP) in a loop.
Handle bursty workloads where GC pauses hurt tail latency.
Move large binary payloads (images, PDFs, message packs, Protobuf).
It’s overkill for tiny, one-off arrays or when code clarity matters more than micro-optimisations.
Core behaviours you must know
Size is a minimum, not exact
Rent(minLength) can return a larger array. Use only the slice you filled.
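To make the contract concrete, here's a minimal sketch (RentSizeDemo is an illustrative name): the pool only promises buffer.Length >= requested, so slice to the length you actually need.

```csharp
using System;
using System.Buffers;

public static class RentSizeDemo
{
    public static int Demo(int requested)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(requested);
        try
        {
            // The pool guarantees buffer.Length >= requested, nothing more;
            // the exact bucket sizes are an implementation detail.
            Span<byte> usable = buffer.AsSpan(0, requested); // slice to what you asked for
            usable.Fill(0xFF);                               // work only within the slice
            return buffer.Length;                            // often larger than requested
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```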
Buffers are not zeroed
Data from a previous renter may be present. If you hold sensitive data (tokens, PII), either clear the slices you wrote before returning, or return with clearArray: true.
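A brief sketch of both scrubbing options (SensitiveBuffer is an illustrative name):

```csharp
using System;
using System.Buffers;

public static class SensitiveBuffer
{
    public static byte ScrubDemo()
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(256);
        try
        {
            buffer.AsSpan(0, 32).Fill(0x42); // pretend this slice holds a token
            // Option 1: scrub just the slice you wrote (cheaper for large buffers).
            buffer.AsSpan(0, 32).Clear();
            return buffer[0]; // zero after clearing
        }
        finally
        {
            // Option 2: have the pool zero the whole array on return.
            ArrayPool<byte>.Shared.Return(buffer, clearArray: true);
        }
    }
}
```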
Return exactly once
Never double return or keep references after returning. Treat the array as invalid once returned.
Thread safety
ArrayPool<T>.Shared is thread-safe, but your buffer usage must be too. Don’t share a single rented buffer across concurrent operations unless you synchronise access.
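The simplest safe pattern is renting per operation rather than sharing one buffer. A sketch (PerTaskRent and SumChunksAsync are illustrative names) where each concurrent task rents its own scratch buffer:

```csharp
using System;
using System.Buffers;
using System.Threading.Tasks;

public static class PerTaskRent
{
    // Each concurrent operation rents its own buffer; the pool itself
    // is safe to call from many threads at once.
    public static async Task<long> SumChunksAsync(byte[][] chunks)
    {
        var tasks = new Task<long>[chunks.Length];
        for (int i = 0; i < chunks.Length; i++)
        {
            byte[] chunk = chunks[i];
            tasks[i] = Task.Run(() =>
            {
                byte[] scratch = ArrayPool<byte>.Shared.Rent(chunk.Length);
                try
                {
                    chunk.CopyTo(scratch, 0);
                    long sum = 0;
                    for (int j = 0; j < chunk.Length; j++) sum += scratch[j];
                    return sum;
                }
                finally
                {
                    ArrayPool<byte>.Shared.Return(scratch);
                }
            });
        }
        long total = 0;
        foreach (var t in tasks) total += await t;
        return total;
    }
}
```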
Example 1. Efficient stream copy with a pooled buffer
Classic pattern for copying streams (file uploads, proxies, etc.):
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
public static class StreamCopy
{
private const int DefaultBufferSize = 64 * 1024; // 64KB
public static async Task CopyAsync(Stream source, Stream destination,
CancellationToken ct = default)
{
byte[] buffer = ArrayPool<byte>.Shared.Rent(DefaultBufferSize);
try
{
int read;
while ((read = await source.ReadAsync(buffer.AsMemory(0, buffer.Length), ct)) > 0)
{
await destination.WriteAsync(buffer.AsMemory(0, read), ct);
}
}
finally
{
ArrayPool<byte>.Shared.Return(buffer, clearArray: false);
}
}
}
Why this helps: under load, you avoid allocating a new 64KB on every loop iteration. That’s a lot of GC pressure gone.
Example 2. Reading a file without ballooning memory
Read file contents, then trim to exact size only at the very end:
using System;
using System.Buffers;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
public static class FileReader
{
// Reads entire file; returns a tightly sized array.
public static async Task<byte[]> ReadAllBytesPooledAsync(
string path, int chunkSize = 128 * 1024, CancellationToken ct = default)
{
byte[] rented = ArrayPool<byte>.Shared.Rent(chunkSize);
try
{
await using var fs = File.OpenRead(path);
int total = 0;
int read;
// If file is larger than one chunk, we’ll grow via Array.Resize on a temporary
// *owned* array, but keep reusing a rented chunk buffer for reads.
byte[] owned = new byte[0];
while ((read = await fs.ReadAsync(rented.AsMemory(0, rented.Length), ct)) > 0)
{
int required = total + read;
if (owned.Length < required)
{
int newSize = owned.Length == 0 ? required : System.Math.Max(required, owned.Length * 2);
System.Array.Resize(ref owned, newSize);
}
rented.AsSpan(0, read).CopyTo(owned.AsSpan(total));
total += read;
}
// Trim to exact size
if (owned.Length != total)
System.Array.Resize(ref owned, total);
return owned;
}
finally
{
ArrayPool<byte>.Shared.Return(rented);
}
}
}
This pattern is great when you must return a tightly sized array but still want pooled reads.
Example 3. JSON serialisation with pooled buffers
Serialising to a Stream
with a reusable buffer:
using System;
using System.Buffers;
using System.IO;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
public static class JsonWriter
{
public static async Task WriteJsonAsync<T>(Stream destination, T value, CancellationToken ct = default)
{
byte[] buffer = ArrayPool<byte>.Shared.Rent(32 * 1024);
try
{
using var ms = new MemoryStream(buffer, writable: true);
// A MemoryStream over a pre-sized buffer does NOT auto-expand;
// serialisation throws if the JSON outgrows the rented buffer,
// so size the rental generously for your payloads.
await JsonSerializer.SerializeAsync(ms, value, cancellationToken: ct);
var written = (int)ms.Position; // Position is the number of bytes written
await destination.WriteAsync(buffer.AsMemory(0, written), ct);
}
finally
{
ArrayPool<byte>.Shared.Return(buffer, clearArray: true); // consider clearing for sensitive JSON
}
}
}
Example 4. Building a reusable helper (BufferScope)
A small RAII-style wrapper removes the boilerplate and prevents leaks:
using System;
using System.Buffers;
public readonly ref struct BufferScope<T>
{
public T[] Array { get; }
private readonly bool _clear;
public BufferScope(int minimumLength, bool clearOnReturn = false)
{
Array = ArrayPool<T>.Shared.Rent(minimumLength);
_clear = clearOnReturn;
}
public void Dispose()
{
ArrayPool<T>.Shared.Return(Array, _clear);
}
}
Usage
// Note: ref struct -> must remain on the stack (no async across awaits).
public static void Process()
{
using var scope = new BufferScope<byte>(64 * 1024);
var buffer = scope.Array;
// ... do work ...
}
If you need to cross await boundaries, make it a class and be disciplined about Dispose() in a try/finally.
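A minimal class-based counterpart might look like this (AsyncBufferScope is an illustrative name; a class lives on the heap, so it can cross awaits, and Dispose is made idempotent because double-returning would corrupt the pool):

```csharp
using System;
using System.Buffers;

public sealed class AsyncBufferScope<T> : IDisposable
{
    private T[]? _array;
    private readonly bool _clear;

    public AsyncBufferScope(int minimumLength, bool clearOnReturn = false)
    {
        _array = ArrayPool<T>.Shared.Rent(minimumLength);
        _clear = clearOnReturn;
    }

    public T[] Array =>
        _array ?? throw new ObjectDisposedException(nameof(AsyncBufferScope<T>));

    public void Dispose()
    {
        // Idempotent: a second Dispose is a no-op rather than a double return.
        var a = _array;
        if (a is null) return;
        _array = null;
        ArrayPool<T>.Shared.Return(a, _clear);
    }
}
```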
Example 5. Scheduled retries/network reads (loop)
public static async Task<int> ReadUntilCompleteAsync(Stream stream, Memory<byte> target, CancellationToken ct)
{
int total = 0;
while (total < target.Length)
{
int n = await stream.ReadAsync(target.Slice(total), ct);
if (n == 0) break;
total += n;
}
return total;
}
public static async Task<int> ReadWithPoolAsync(Stream stream, int maxBytes, CancellationToken ct = default)
{
byte[] buffer = ArrayPool<byte>.Shared.Rent(maxBytes);
try
{
return await ReadUntilCompleteAsync(stream, buffer.AsMemory(0, maxBytes), ct);
}
finally
{
ArrayPool<byte>.Shared.Return(buffer);
}
}
ArrayPool<T> vs MemoryPool<T> vs ArrayBufferWriter<T>
ArrayPool<T> - gives you arrays (T[]). Perfect for APIs that want arrays (streams, crypto, legacy code).
MemoryPool<T> - gives you IMemoryOwner<T>. The shared implementation wraps ArrayPool<T>, though custom pools can be backed by pinned or native memory. Great when you want Memory<T> slices without exposing arrays, plus deterministic disposal.
ArrayBufferWriter<T> - a growable buffer implementing IBufferWriter<T>. Excellent when producing variable-length data (JSON, encoders). Internally manages a resizable array.
You can mix them: use ArrayPool<T> for hot paths that demand arrays; use ArrayBufferWriter<T> where IBufferWriter<T> improves ergonomics.
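For instance, variable-length JSON fits ArrayBufferWriter<T> naturally; a sketch (JsonToWriter is an illustrative name) where the writer grows as needed instead of relying on a fixed pre-rented buffer:

```csharp
using System;
using System.Buffers;
using System.Text.Json;

public static class JsonToWriter
{
    // ArrayBufferWriter<byte> implements IBufferWriter<byte> and grows
    // internally, so the output size need not be known up front.
    public static ReadOnlyMemory<byte> Serialize<T>(T value)
    {
        var writer = new ArrayBufferWriter<byte>(initialCapacity: 256);
        using (var json = new Utf8JsonWriter(writer))
        {
            JsonSerializer.Serialize(json, value);
        }
        return writer.WrittenMemory; // exactly the bytes written
    }
}
```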
Common pitfalls (and fixes)
Forgetting to return
Always use try/finally. Consider wrappers to enforce disposal.
Sharing a buffer across threads
Don’t. Rent per operation or add locking.
Assuming exact size
Use only the written slice (buffer.AsSpan(0, count)).
Leaking sensitive data
Use Return(buffer, clearArray: true) or clear slices you wrote.
Double return or use-after-return
Treat a returned buffer as poisoned; never touch it again.
Quick benchmark sketch (BenchmarkDotNet)
If you want to verify for your scenario:
// <PackageReference Include="BenchmarkDotNet" Version="*" />
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System;
using System.Buffers;
public class BufferBench
{
private readonly byte[] _src = new byte[1024 * 1024]; // 1MB
[Benchmark(Baseline = true)]
public int AllocateEachTime()
{
int total = 0;
for (int i = 0; i < 64; i++)
{
var dst = new byte[16 * 1024];
_src.AsSpan(i * 1024, dst.Length).CopyTo(dst);
total += dst.Length;
}
return total;
}
[Benchmark]
public int UsingArrayPool()
{
int total = 0;
var pool = ArrayPool<byte>.Shared;
var dst = pool.Rent(16 * 1024);
try
{
for (int i = 0; i < 64; i++)
{
_src.AsSpan(i * 1024, 16 * 1024).CopyTo(dst);
total += 16 * 1024;
}
return total;
}
finally
{
pool.Return(dst);
}
}
}
// Run: BenchmarkRunner.Run<BufferBench>();
You should see lower allocations and often better throughput in the pooled version, especially as iteration counts rise.
Practical checklist
Use ArrayPool<T>.Shared for hot paths with repeated buffers ≥8KB.
Wrap rentals in try/finally.
Treat Rent(n) as “≥ n”; slice to what you used.
Clear on return if handling secrets.
Don’t share a single buffer across concurrent ops.
Add unit tests that simulate exceptions to ensure you always return.
ArrayPool<T> is a low-friction optimisation that pays off in APIs, file processors, proxies, and any service that lives in a tight I/O loop. Start with the simple rent/return loop, measure with your real workload, and keep things readable. When you need tighter control or more ergonomic writes, layer in MemoryPool<T> or ArrayBufferWriter<T>.