For years, C# developers have relied on garbage collection to handle memory management automatically. And for most applications, that's perfectly fine. But as systems become more performance-critical (think high-frequency trading, real-time gaming, or massive data processing), even small garbage collection pauses can be problematic. That's where modern C#'s memory management features come in.
I've worked on systems where reducing allocations from gigabytes to megabytes per second made the difference between meeting SLA requirements and facing angry customers. The tools we'll explore (`Span<T>`, `Memory<T>`, and memory pools) aren't just academic curiosities. They're practical solutions for real performance problems.
What makes these features special is that they give you low-level control without sacrificing safety. You can write allocation-free code that performs like C++, but with bounds checking and type safety that prevents the memory corruption bugs that plague unsafe languages. We'll explore how these pieces fit together and when you should reach for each one.
The Allocation Problem
Before diving into solutions, let's understand the problem. Every time you create an object in C#, you're allocating memory on the heap. Arrays, strings, classes: they all require heap allocation. And while the garbage collector does an excellent job of cleaning up, each allocation has costs: the time to allocate, the pressure on the GC, and eventually, the pause when collection happens.
Consider a simple string manipulation scenario that many developers write without thinking twice about performance:
```csharp
// This creates multiple temporary strings
string ProcessText(string input)
{
    var result = input.Trim()
        .ToUpper()
        .Replace(" ", "_");
    // Use the processed string's own length so Substring can't go out of range
    return result.Substring(0, Math.Min(result.Length, 20));
}
```
Each method call potentially creates a new string object. For occasional use, this is fine. But call this method millions of times per second, and you're generating significant garbage that the GC must eventually clean up. Modern memory management techniques can often eliminate these allocations entirely.
Span<T>: Your Window into Memory
`Span<T>` is probably the most important addition to C#'s memory management toolkit. Think of it as a lightweight wrapper around a contiguous block of memory, whether that memory lives on the stack, on the heap, or in unmanaged space. It provides a safe, uniform interface for working with different kinds of memory.
```csharp
// Span can wrap different memory sources
int[] heapArray = { 1, 2, 3, 4, 5 };
Span<int> heapSpan = heapArray;
Span<int> stackSpan = stackalloc int[5];
ReadOnlySpan<char> stringSpan = "Hello World".AsSpan();

// All work the same way
Console.WriteLine(heapSpan[0]);
Console.WriteLine(stackSpan.Length);
Console.WriteLine(stringSpan.Slice(0, 5).ToString());
```
What's powerful about `Span<T>` is that it doesn't allocate. It's a `ref struct`, which means it lives on the stack and contains only a pointer and a length. This makes operations like slicing incredibly cheap: you're just creating a new view of existing memory, not copying data.
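To make the "view, not a copy" point concrete, here's a minimal sketch: writing through a slice is visible in the original array, because both refer to the same memory.

```csharp
using System;

int[] numbers = { 10, 20, 30, 40, 50 };

// Slicing creates a new view over the same memory; nothing is copied
Span<int> middle = numbers.AsSpan(1, 3); // views { 20, 30, 40 }

middle[0] = 99; // writes through to the underlying array

Console.WriteLine(numbers[1]); // 99
```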
ReadOnlySpan and String Processing
One area where `Span<T>` really shines is string processing. Let's revisit that text-processing example and see how we can eliminate allocations:
```csharp
string ProcessTextEfficiently(ReadOnlySpan<char> input)
{
    var trimmed = input.Trim();

    // Stack-allocate small buffers; fall back to the heap for oversized inputs
    Span<char> buffer = trimmed.Length <= 256
        ? stackalloc char[trimmed.Length]
        : new char[trimmed.Length];

    for (int i = 0; i < trimmed.Length; i++)
    {
        char c = trimmed[i];
        buffer[i] = c == ' ' ? '_' : char.ToUpper(c);
    }

    var maxLength = Math.Min(buffer.Length, 20);
    return buffer.Slice(0, maxLength).ToString();
}
```
This version eliminates all intermediate string allocations. We use stack allocation for the working buffer and only create a final string at the end. For high-throughput scenarios, this can reduce garbage generation from gigabytes to nearly nothing.
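For reference, here's a standalone sanity check of the span-based approach. The method body mirrors the one above, with a heap fallback for inputs too large to stack-allocate safely:

```csharp
using System;

Console.WriteLine(ProcessTextEfficiently("  hello world  ")); // HELLO_WORLD

static string ProcessTextEfficiently(ReadOnlySpan<char> input)
{
    var trimmed = input.Trim();
    Span<char> buffer = trimmed.Length <= 256
        ? stackalloc char[trimmed.Length]   // small input: stack only
        : new char[trimmed.Length];         // large input: heap fallback
    for (int i = 0; i < trimmed.Length; i++)
        buffer[i] = trimmed[i] == ' ' ? '_' : char.ToUpper(trimmed[i]);
    return buffer.Slice(0, Math.Min(buffer.Length, 20)).ToString();
}
```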
Memory<T>: When You Need Async
`Span<T>` has one significant limitation: it can't cross async boundaries. Since it's a `ref struct`, the compiler prevents you from using it in async methods. That's where `Memory<T>` comes in.
```csharp
async Task ProcessDataAsync(Memory<byte> buffer)
{
    // Memory can be used in async methods
    // (ReadDataAsync and ProcessBytes are placeholders for your own I/O and parsing)
    var result = await ReadDataAsync(buffer);

    // Convert to Span when you need to work with the data
    Span<byte> span = buffer.Span;
    ProcessBytes(span);
}
```
`Memory<T>` is an ordinary struct rather than a `ref struct`, so it can be stored on the heap and survive across `await` boundaries; it provides a `.Span` property to get a `Span<T>` when you need to work with the data. This pattern (pass `Memory<T>` around, use `Span<T>` for the actual work) is becoming common in performance-sensitive APIs.
When designing async APIs, accept `Memory<T>` instead of `Span<T>`. The compiler prevents `Span<T>` from being used in async methods since it can't be safely stored on the heap.
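Here's a self-contained sketch of that handoff; `FillAsync` is a hypothetical stand-in for real async I/O:

```csharp
using System;
using System.Threading.Tasks;

var buffer = new byte[8];

// Pass a Memory<T> slice across the await boundary
int written = await FillAsync(buffer.AsMemory(2, 4));

Console.WriteLine(written);   // 4
Console.WriteLine(buffer[2]); // 0
Console.WriteLine(buffer[5]); // 3

static async Task<int> FillAsync(Memory<byte> destination)
{
    await Task.Delay(1); // stand-in for real async I/O

    // Drop down to Span<T> for the synchronous work between awaits
    var span = destination.Span;
    for (int i = 0; i < span.Length; i++)
        span[i] = (byte)i;
    return span.Length;
}
```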
Memory Pools: Reusing Allocations
Sometimes you can't avoid heap allocation entirely. Maybe you need variable-sized buffers, or you're working with async operations that require `Memory<T>`. In these cases, memory pools let you reuse allocations instead of constantly creating new ones.
```csharp
class BufferProcessor
{
    private readonly ArrayPool<byte> _pool = ArrayPool<byte>.Shared;

    public async Task ProcessStreamAsync(Stream stream)
    {
        byte[] buffer = _pool.Rent(4096);
        try
        {
            int bytesRead = await stream.ReadAsync(buffer);
            ProcessData(buffer.AsSpan(0, bytesRead));
        }
        finally
        {
            _pool.Return(buffer);
        }
    }
}
```
The array pool maintains a cache of buffers in different sizes. When you rent a buffer, you might get a recycled one instead of a fresh allocation. When you return it, it goes back into the pool for reuse. This pattern can dramatically reduce allocation rates in streaming scenarios.
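One detail worth a quick sketch: `Rent` guarantees a buffer of *at least* the requested size, so track the length you actually use rather than relying on the array's length.

```csharp
using System;
using System.Buffers;

var pool = ArrayPool<byte>.Shared;
byte[] rented = pool.Rent(1000);

// The pool may hand back a larger cached array than requested
Console.WriteLine(rented.Length >= 1000); // True

// Return it when done; the array must not be used after this point
pool.Return(rented);
```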
Custom Memory Pools
For specialized scenarios, you might want to create custom memory pools. Here's a simple example for a specific data structure:
```csharp
public class MessagePool
{
    private readonly ConcurrentQueue<Message> _pool = new();
    private readonly int _maxPoolSize;

    public MessagePool(int maxPoolSize = 100) => _maxPoolSize = maxPoolSize;

    public Message Rent()
    {
        if (_pool.TryDequeue(out var message))
        {
            message.Reset();
            return message;
        }
        return new Message();
    }

    public void Return(Message message)
    {
        if (_pool.Count < _maxPoolSize)
            _pool.Enqueue(message);
    }
}
```
Custom pools give you fine-grained control over allocation patterns and can be tailored to your specific use cases. Just remember that pools introduce complexity-you need to be careful about object state and ensure items are properly reset before reuse.
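To see the reuse in action, here's a self-contained sketch of the same queue-based idea; the `Message` type is a hypothetical stand-in with the `Reset` method the pool above assumes:

```csharp
using System;
using System.Collections.Concurrent;

var pool = new ConcurrentQueue<Message>();

var first = new Message { Text = "hello" };
first.Reset();       // clear state before pooling
pool.Enqueue(first); // "return" the instance to the pool

pool.TryDequeue(out var second); // "rent" it back

Console.WriteLine(ReferenceEquals(first, second)); // True: same instance reused
Console.WriteLine(second!.Text is null);           // True: state was reset

class Message
{
    public string? Text;
    public void Reset() => Text = null;
}
```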
Reach for a custom pool only when `ArrayPool<T>.Shared` isn't sufficient. Pools add complexity and require careful state management: always reset objects before returning them to the pool.
Working with Unmanaged Memory
Sometimes you need to work with memory that isn't managed by the .NET runtime, perhaps when interfacing with native libraries or working with memory-mapped files. Modern C# provides safe ways to work with unmanaged memory through `Span<T>` and related types.
```csharp
unsafe void ProcessUnmanagedData(void* ptr, int length)
{
    Span<byte> span = new Span<byte>(ptr, length);

    // Now you can work with unmanaged memory safely
    for (int i = 0; i < span.Length; i++)
    {
        span[i] = (byte)(span[i] ^ 0xFF); // XOR operation
    }
}
```
By wrapping unmanaged memory in a `Span<T>`, you get bounds checking and type safety even when working with raw pointers. This is particularly useful when interfacing with native libraries or in embedded scenarios.
Always wrap unmanaged memory in a `Span<T>` to get bounds checking and type safety. Never work directly with raw pointers in performance-critical code without these safety nets.
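A runnable variant of the idea, using `NativeMemory` (available since .NET 6) to allocate and free the unmanaged block ourselves:

```csharp
using System;
using System.Runtime.InteropServices;

unsafe
{
    const int length = 4;

    // Allocate 4 bytes of unmanaged memory (NativeMemory is .NET 6+)
    byte* ptr = (byte*)NativeMemory.Alloc(length);
    try
    {
        var span = new Span<byte>(ptr, length); // bounds-checked view
        span.Clear();                           // zero the block
        span[0] = 0x0F;
        span[0] ^= 0xFF;                        // same XOR trick as above

        Console.WriteLine(span[0]); // 240
    }
    finally
    {
        NativeMemory.Free(ptr); // unmanaged memory must be freed manually
    }
}
```

This needs `<AllowUnsafeBlocks>true</AllowUnsafeBlocks>` in the project file.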
Performance Patterns and Best Practices
After working with these memory management features across multiple projects, I've learned some patterns that consistently deliver good results. First, start with `Span<T>` for most scenarios; it's the most efficient option when you don't need async support.
Second, use `stackalloc` for small, fixed-size buffers. A general rule of thumb is to keep stack allocations under about 1 KB to avoid stack overflow issues. For larger or variable-size buffers, consider array pools.
```csharp
// Good: small, fixed-size buffer on the stack
Span<int> smallBuffer = stackalloc int[16];

// For larger or variable-size buffers, rent from a pool instead
var largeBuffer = ArrayPool<byte>.Shared.Rent(largeSize);
try { /* use buffer */ }
finally { ArrayPool<byte>.Shared.Return(largeBuffer); }
```
Third, be careful about mixing allocation strategies. If you're optimizing for zero allocation, make sure you're not accidentally allocating in unexpected places. Profiling tools like dotMemory or PerfView can help you identify allocation hotspots.
Real-World Application: High-Performance JSON Parsing
Let me share an example from a project where we needed to parse millions of JSON messages per second. Traditional approaches using `JsonSerializer` were too slow and generated too much garbage. Here's how we solved it using modern memory management:
```csharp
// A struct holding a ReadOnlySpan<byte> field must itself be a ref struct
public readonly ref struct FastJsonReader
{
    private readonly ReadOnlySpan<byte> _data;

    public FastJsonReader(ReadOnlySpan<byte> data) => _data = data;

    public bool TryGetString(ReadOnlySpan<byte> key, out ReadOnlySpan<byte> value)
    {
        // Zero-allocation JSON parsing using spans (simplified: assumes "key":"value" pairs)
        var keyIndex = _data.IndexOf(key);
        if (keyIndex == -1) { value = default; return false; }

        // Skip the key's closing quote, the colon, and the value's opening quote
        var valueStart = keyIndex + key.Length + 3;
        var valueEnd = _data.Slice(valueStart).IndexOf((byte)'"');
        if (valueEnd == -1) { value = default; return false; }

        value = _data.Slice(valueStart, valueEnd);
        return true;
    }
}
```
This approach eliminated all string allocations during parsing. We only converted to strings when absolutely necessary, and we used object pools for the few objects we did need to create. The result was a 10x improvement in throughput with virtually no garbage generation.
Measuring Impact
When optimizing memory allocation, measurement is crucial. Here's a simple pattern I use to measure allocation impact:
```csharp
long MeasureAllocations(Action action)
{
    GC.Collect();
    var before = GC.GetTotalMemory(false);
    action();
    GC.Collect();
    var after = GC.GetTotalMemory(false);
    return after - before;
}
```
This gives you a rough idea of how much garbage your code generates. For more detailed analysis, tools like BenchmarkDotNet provide precise measurements of allocation rates and can help you understand the performance impact of your optimizations.
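For a finer-grained alternative, `GC.GetAllocatedBytesForCurrentThread` (available since .NET Core 3.0) reports bytes allocated by the current thread and isn't disturbed by collections. A quick sketch:

```csharp
using System;

long before = GC.GetAllocatedBytesForCurrentThread();

Span<char> scratch = stackalloc char[64]; // stack only, never touches the heap
scratch.Fill('x');

long after = GC.GetAllocatedBytesForCurrentThread();

// Typically prints 0: the stackalloc produced no heap allocations
Console.WriteLine(after - before);
```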
When to Optimize (and When Not To)
It's important to remember that these techniques add complexity. For most applications, regular garbage collection is perfectly adequate. I typically reach for advanced memory management when I'm processing large volumes of data with tight latency requirements, when profiling tools show GC pressure, or when I'm working in resource-constrained environments. For typical business applications, focus on correctness and maintainability first.
That said, understanding these concepts makes you a better C# developer even if you don't use them daily. They help you understand the costs of your code and make informed decisions about when optimization is worthwhile.
Summary
Modern C# gives you fine-grained control over memory. `Span<T>` and `Memory<T>` let you work safely with contiguous memory, while memory pools help reduce allocations and GC pressure.
These features aren't just optimizations; they change how we design systems. You can now build applications that are both safe and fast, scaling efficiently even under heavy load.