Learn Zig Series (#7) - Memory Management and Allocators
What will I learn
- You will learn the fundamental difference between stack and heap memory;
- why Zig makes every allocation explicit (no hidden malloc, no garbage collector);
- the GeneralPurposeAllocator with automatic leak detection in debug mode;
- defer patterns for guaranteeing deallocation;
- ArrayList -- Zig's growable dynamic array (the equivalent of Python's list);
- StringHashMap -- key-value storage with string keys;
- arena allocators for batch operations ("allocate many, free all at once");
- FixedBufferAllocator for zero-heap, stack-only allocation;
- the allocator parameter pattern -- how idiomatic Zig functions receive their allocation strategy;
- the undefined keyword for uninitialized memory.
Requirements
- A working modern computer running macOS, Windows or Ubuntu;
- An installed Zig 0.14+ distribution (download from ziglang.org);
- The ambition to learn Zig programming.
Difficulty
- Beginner
Curriculum (of the Learn Zig Series):
- Zig Programming Tutorial - ep001 - Intro
- Learn Zig Series (#2) - Hello Zig, Variables and Types
- Learn Zig Series (#3) - Functions and Control Flow
- Learn Zig Series (#4) - Error Handling (Zig's Best Feature)
- Learn Zig Series (#5) - Arrays, Slices, and Strings
- Learn Zig Series (#6) - Structs, Enums, and Tagged Unions
- Learn Zig Series (#7) - Memory Management and Allocators (this post)
Welcome back! In episode #6 we built our own data types -- structs with fields and methods, enums with exhaustive switching, and tagged unions that model "one of several things" with full compiler enforcement. We composed them into an order processing system where Side enums, PriceSpec tagged unions, and Order structs all worked together with the error handling patterns from ep004. At the end I mentioned that everything we'd built so far lived on the stack -- fixed-size arrays, structs with known layouts, data whose size the compiler can determine before the program even runs.
This is where that changes. This is the episode where Zig becomes a systems programming language for real ;-)
Up to this point, every piece of data we've worked with had a size known at compile time. [5]f64 -- five floats, 40 bytes, done. Position{ .entry_price = 64000, .quantity = 0.5 } -- two floats, 16 bytes, the compiler knows exactly how much space to reserve. But real programs deal with data whose size you can't know in advance. A user types a command. A file has ten thousand lines. A network connection sends a variable-length message. A portfolio has 3 assets today and 30 tomorrow. You need memory that can grow.
In Python, you write my_list = [] and then my_list.append(whatever) a million times, and Python handles all the memory behind the scenes -- allocating, reallocating, garbage collecting, never asking you to think about it. In C, you call malloc(size), get a pointer back, use it, and eventually call free(ptr) -- hopefully in the right order, hopefully without forgetting, hopefully without freeing the same pointer twice. In Zig, every allocation is explicit, every allocation can fail, and you choose the allocation strategy. That sounds intimidating, but Zig makes it clean and composable. Let's dive right in.
Solutions to Episode 6 Exercises
Before we start on new material, here are the solutions to last episode's exercises. As always, if you actually typed these out and compiled them (and I really hope you did!), compare your solutions:
Exercise 1 -- Order struct with cost method:
const std = @import("std");
const Side = enum { buy, sell };
const Order = struct {
pair: []const u8,
side: Side,
quantity: f64,
price: f64,
fn cost(self: Order) f64 {
return self.quantity * self.price;
}
fn display(self: Order) void {
const side_str: []const u8 = switch (self.side) {
.buy => "BUY",
.sell => "SELL",
};
std.debug.print("{s} {d:.4} {s} @ ${d:.2} = ${d:.2}\n", .{
side_str, self.quantity, self.pair, self.price, self.cost(),
});
}
};
pub fn main() void {
const orders = [_]Order{
.{ .pair = "BTC/USD", .side = .buy, .quantity = 0.25, .price = 68000 },
.{ .pair = "ETH/USD", .side = .sell, .quantity = 3.0, .price = 3200 },
.{ .pair = "SOL/USD", .side = .buy, .quantity = 100, .price = 142 },
};
for (orders) |order| order.display();
}
Struct with an enum field, two methods (one pure computation, one display), and a for loop over an array of orders. Everything from ep005 and ep006 working together.
Exercise 2 -- TimeFrame with seconds and label:
const TimeFrame = enum {
m1, m5, m15, h1, h4, d1,
fn seconds(self: TimeFrame) u32 {
return switch (self) {
.m1 => 60, .m5 => 300, .m15 => 900,
.h1 => 3600, .h4 => 14400, .d1 => 86400,
};
}
fn label(self: TimeFrame) []const u8 {
return switch (self) {
.m1 => "1 Minute", .m5 => "5 Minutes", .m15 => "15 Minutes",
.h1 => "1 Hour", .h4 => "4 Hours", .d1 => "1 Day",
};
}
};
Exhaustive switching in both methods. Add a .w1 variant and the compiler forces you to update both seconds and label before it lets you compile.
Exercise 3 -- AlertCondition tagged union:
const AlertCondition = union(enum) {
price_above: f64,
price_below: f64,
pct_change: f64,
fn describe(self: AlertCondition) void {
switch (self) {
.price_above => |p| std.debug.print("Price above ${d:.2}", .{p}),
.price_below => |p| std.debug.print("Price below ${d:.2}", .{p}),
.pct_change => |pct| std.debug.print("Change exceeds {d:.1}%", .{pct}),
}
}
};
Three variants, three payloads, one switch. The |p| captures the value. Same pattern we used for OrderPrice in ep006.
Exercise 4 -- high/low/avg: iterate once tracking min, max, and sum, and return the three results as an anonymous struct struct { high: f64, low: f64, avg: f64 }. One pass through the data, three results. The anonymous struct pattern from ep005.
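That single pass can be sketched like this (the stats function name is mine; the field names come from the exercise statement):

```zig
const std = @import("std");

// One pass over the prices: track min, max, and sum,
// then return all three results in an anonymous struct.
fn stats(prices: []const f64) struct { high: f64, low: f64, avg: f64 } {
    var high = prices[0];
    var low = prices[0];
    var sum: f64 = 0;
    for (prices) |p| {
        if (p > high) high = p;
        if (p < low) low = p;
        sum += p;
    }
    return .{
        .high = high,
        .low = low,
        .avg = sum / @as(f64, @floatFromInt(prices.len)),
    };
}

pub fn main() void {
    const prices = [_]f64{ 64000, 65200, 63800, 67100 };
    const s = stats(&prices);
    std.debug.print("high={d:.0} low={d:.0} avg={d:.0}\n", .{ s.high, s.low, s.avg });
}
```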
Exercise 5 -- Portfolio tracker: essentially the same nested-struct composition pattern from ep006. Portfolio contains [N]?Asset, tracks count, errors on overflow. If yours is different but works -- good. There's no single right answer.
Exercise 6 -- Order processing system: the full combined example from ep006. If you got it working, you've internalized structs, enums, tagged unions, and error handling as one cohesive toolkit. Well done.
Now -- memory!
Stack vs Heap: Two Kinds of Memory
Every program has two regions of memory for storing data: the stack and the heap. Understanding the difference is essential for writing correct Zig code, and it's also the single biggest conceptual gap between Zig and Python.
The Stack
The stack is fast, automatic, and limited. When you call a function, the runtime pushes a "stack frame" -- space for that function's local variables. When the function returns, the frame is popped and the memory is gone. No cleanup code needed. No "freeing" anything. The stack on most systems is around 8MB -- plenty for local variables, not enough for large datasets.
Everything we've written in episodes 2-6 lived on the stack:
fn doWork() void {
var buffer: [1024]u8 = undefined; // 1KB on stack -- auto-freed when doWork returns
_ = &buffer;
const prices = [_]f64{ 64000, 65200, 63800 }; // 24 bytes on stack
_ = prices;
}
The keyword undefined is new. It tells Zig "don't initialize this memory -- I'll fill it in myself." For a 1KB buffer that you're about to write data into, zero-initializing it first would be wasted work. undefined skips that. But beware: reading from undefined memory before writing to it is undefined behavior. In debug builds Zig fills undefined memory with the byte pattern 0xAA, which makes these bugs easier to spot, but it is not guaranteed to catch them. Use undefined only when you know you'll write before you read.
Stack allocation is deterministic -- you know exactly when it happens (function entry) and when it's released (function exit). There's no fragmentation, no garbage collector, no surprises. This is why Zig prefers stack allocation whenever possible.
The Heap
The heap is slower, manual, and (practically) unlimited. You request memory from the operating system, use it, and then give it back. The heap is where dynamic data lives -- data whose size depends on user input, file contents, network messages, or anything else you can't predict at compile time.
In Zig, heap allocation goes through an allocator:
fn doWork(allocator: std.mem.Allocator) !void {
const buffer = try allocator.alloc(u8, 1024); // 1KB on heap
defer allocator.free(buffer); // YOU free it
// use buffer...
}
Two critical differences from the stack version. First, allocator.alloc can fail -- it returns an error union (![]u8). If the system is out of memory, you get an error. The try propagates it. Second, you are responsible for calling free. The defer from ep004 guarantees it runs when the scope exits, regardless of how. Remember when I said defer would become critical for memory management? Here it is.
This is the fundamental difference between Zig and Python. In Python, the garbage collector handles all of this invisibly. In Zig, it's explicit. The upside: no GC pauses, no hidden allocations, total control. The downside: you have to think about it. Having said that, defer makes it vastly more manageable than raw C-style malloc/free.
Allocators -- Zig's Key Innovation
Most languages have one way to allocate memory. C has malloc. C++ has new. Python has... whatever the runtime decides internally. Zig is different: it doesn't have a global allocator. Instead, functions that need to allocate memory receive an allocator as a parameter. This is the single most distinctive design decision in Zig's standard library, and once you understand why, you'll wonder why other languages don't do this.
const std = @import("std");
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer {
const check = gpa.deinit();
if (check == .leak) {
std.debug.print("Memory leak detected!\n", .{});
}
}
const allocator = gpa.allocator();
// Allocate a single value on the heap
const ptr = try allocator.create(f64);
defer allocator.destroy(ptr);
ptr.* = 68423.50;
std.debug.print("Price: ${d:.2}\n", .{ptr.*});
// Allocate a dynamic slice on the heap
const prices = try allocator.alloc(f64, 5);
defer allocator.free(prices);
prices[0] = 64000;
prices[1] = 65200;
prices[2] = 63800;
prices[3] = 67100;
prices[4] = 68400;
var sum: f64 = 0;
for (prices) |p| sum += p;
std.debug.print("Average: ${d:.2}\n", .{sum / 5.0});
}
Output:
Price: $68423.50
Average: $65700.00
Let me walk through the allocator setup because this is the pattern you'll write at the top of almost every Zig program that uses dynamic memory.
GeneralPurposeAllocator(.{}){} creates the allocator instance. The (.{}) is a configuration struct (using all defaults). The {} at the end is the initializer. Yes, it looks weird the first time. You get used to it.
defer gpa.deinit() -- when main exits, the GPA checks for leaks. In debug mode (the default build), it reports any memory you allocated but never freed. This is HUGE. In C, you'd need an external tool like Valgrind to detect leaks. In Zig, the allocator itself does it. Free.
gpa.allocator() extracts the std.mem.Allocator interface. This is what you pass to functions. The allocator interface is generic -- functions that take std.mem.Allocator don't know (or care) what kind of allocator is behind it. GPA, arena, fixed buffer, testing allocator -- the function works the same way with any of them.
allocator.create(f64) allocates space for a single f64 on the heap and returns a pointer (*f64). allocator.destroy(ptr) frees it. For single values.
allocator.alloc(f64, 5) allocates a contiguous slice of 5 f64 values and returns []f64. allocator.free(prices) frees the slice. For arrays/slices.
Every create has a destroy. Every alloc has a free. Every allocation has a defer right after it. This is the discipline that prevents leaks. Miss a defer and the GPA will yell at you in debug mode.
If you've been following the Learn Python Series, think of this as the explicit version of what Python does behind my_list = [1, 2, 3]. Python silently allocates heap memory, manages reference counts, and eventually garbage collects. Zig makes you write three lines: allocate, defer free, use. More typing, zero mystery.
ArrayList -- Growable Dynamic Array
Fixed-size arrays are great when you know how many elements you need. But what about a list that grows? A log of trades coming in from a live feed? A collection of user commands? You don't know the count at compile time.
ArrayList is Zig's answer -- the equivalent of Python's list or C++'s std::vector:
const std = @import("std");
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
var trades = std.ArrayList(f64).init(allocator);
defer trades.deinit();
// Record trade PnLs as they come in
try trades.append(2250.0);
try trades.append(-800.0);
try trades.append(1500.0);
try trades.append(-200.0);
try trades.append(3100.0);
var total: f64 = 0;
for (trades.items) |pnl| {
total += pnl;
std.debug.print("Trade: {d:>+10.2}, running total: {d:>+10.2}\n", .{ pnl, total });
}
std.debug.print("\nTotal PnL: {d:+.2} over {d} trades\n", .{ total, trades.items.len });
}
Output:
Trade: +2250.00, running total: +2250.00
Trade: -800.00, running total: +1450.00
Trade: +1500.00, running total: +2950.00
Trade: -200.00, running total: +2750.00
Trade: +3100.00, running total: +5850.00
Total PnL: +5850.00 over 5 trades
ArrayList(f64).init(allocator) creates an empty growable array of f64 values. It receives the allocator so it knows where to allocate memory when it needs to grow. Notice: we don't tell it a size upfront. It starts empty and allocates as needed.
defer trades.deinit() frees all the memory the ArrayList allocated internally. Same defer pattern -- set up cleanup immediately after creation.
try trades.append(2250.0) adds an element. And here's the thing that trips up Python developers: append can fail. It returns !void -- an error union. If the ArrayList needs to grow its internal buffer and the allocator can't provide more memory, you get an error. In Python, list.append() almost never fails visibly -- when the interpreter truly runs out of memory it raises MemoryError, which hardly any code handles. In Zig, the failure is right there in the type signature and you must handle it.
trades.items gives you the current contents as a regular slice ([]f64). You can iterate over it, pass it to functions that take []const f64, slice it further. Everything we learned about slices in ep005 applies.
If you know roughly how many elements you'll need, you can pre-allocate:
var trades = std.ArrayList(f64).init(allocator);
defer trades.deinit();
try trades.ensureTotalCapacity(100); // pre-allocate for 100 elements
Now the first 100 append calls won't need to reallocate. Beyond that it grows automatically, multiplying its capacity by a growth factor so appends stay amortized O(1) -- the same strategy Python's list uses, just with a different factor. This is a performance optimization, not a requirement -- the ArrayList works fine without it.
StringHashMap -- Key-Value Storage
When you need to look up values by string keys, StringHashMap is what you want. It's Zig's hash map with []const u8 keys:
const std = @import("std");
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
var portfolio = std.StringHashMap(f64).init(allocator);
defer portfolio.deinit();
try portfolio.put("BTC", 0.5);
try portfolio.put("ETH", 5.0);
try portfolio.put("SOL", 50.0);
try portfolio.put("AVAX", 200.0);
// Look up a key that exists
if (portfolio.get("BTC")) |quantity| {
std.debug.print("BTC: {d:.4}\n", .{quantity});
}
// Look up a key that doesn't exist
if (portfolio.get("DOGE")) |quantity| {
std.debug.print("DOGE: {d:.4}\n", .{quantity});
} else {
std.debug.print("DOGE: not in portfolio\n", .{});
}
// Check existence without getting the value
std.debug.print("Has SOL: {}\n", .{portfolio.contains("SOL")});
std.debug.print("Has XRP: {}\n", .{portfolio.contains("XRP")});
// Iterate over all entries
std.debug.print("\n=== Full Portfolio ===\n", .{});
var iter = portfolio.iterator();
while (iter.next()) |entry| {
std.debug.print(" {s}: {d:.4}\n", .{ entry.key_ptr.*, entry.value_ptr.* });
}
std.debug.print("Total positions: {d}\n", .{portfolio.count()});
}
Output:
BTC: 0.5000
DOGE: not in portfolio
Has SOL: true
Has XRP: false
=== Full Portfolio ===
SOL: 50.0000
BTC: 0.5000
AVAX: 200.0000
ETH: 5.0000
Total positions: 4
portfolio.get("BTC") returns ?f64 -- an optional. Either the key exists and you get the value, or it doesn't and you get null. Same optional pattern from ep004, same if (optional) |value| unwrapping. Consistency is the theme.
portfolio.put("BTC", 0.5) can fail (allocation needed for internal buckets), so it returns !void. Same try pattern.
The iterator gives you entry.key_ptr.* and entry.value_ptr.* -- those .key_ptr and .value_ptr are pointers to the stored key and value. The .* dereferences them to get the actual data. This is our first real encounter with pointer dereferencing syntax, and we'll go much deeper on that in a future episode.
Note that the iteration order is NOT insertion order. Hash maps don't preserve order -- the entries come out in whatever order the internal hash table stores them. If you need ordered iteration, you'd sort the keys first or use a different data structure.
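If you do need ordered output, one approach is to collect the keys, sort them, and look each one up. A minimal sketch (the stringLessThan helper is my own; the portfolio contents mirror the example above):

```zig
const std = @import("std");

fn stringLessThan(_: void, a: []const u8, b: []const u8) bool {
    return std.mem.lessThan(u8, a, b);
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var portfolio = std.StringHashMap(f64).init(allocator);
    defer portfolio.deinit();
    try portfolio.put("SOL", 50.0);
    try portfolio.put("BTC", 0.5);
    try portfolio.put("ETH", 5.0);

    // Collect the keys into a list we can sort.
    var keys = std.ArrayList([]const u8).init(allocator);
    defer keys.deinit();
    var it = portfolio.keyIterator();
    while (it.next()) |key_ptr| try keys.append(key_ptr.*);

    // Sort alphabetically, then print in that order.
    std.mem.sort([]const u8, keys.items, {}, stringLessThan);
    for (keys.items) |key| {
        std.debug.print("{s}: {d:.4}\n", .{ key, portfolio.get(key).? });
    }
}
```

Regardless of insertion order, this always prints BTC, then ETH, then SOL.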
For Python developers: StringHashMap is the Zig equivalent of dict. But where Python's dict hides all the memory management and resizing behind a simple d["key"] = value syntax, Zig makes you set up the allocator, handle potential allocation failures, and clean up with deinit. More verbose, more control.
Arena Allocator -- "Free Everything At Once"
Sometimes you need to allocate a bunch of things, process them, and then throw everything away at once. Think about processing a batch of orders: you build up temporary data structures, compute results, and then the whole batch is done. You don't need to free each individual piece -- you want to wipe the slate clean in one operation.
That's what ArenaAllocator does:
const std = @import("std");
fn processBatch(arena_alloc: std.mem.Allocator) !void {
// Allocate freely -- no individual frees needed
var prices = std.ArrayList(f64).init(arena_alloc);
// NOTE: no defer deinit! The arena handles it.
try prices.append(68000.0);
try prices.append(67500.0);
try prices.append(69200.0);
try prices.append(68800.0);
var sum: f64 = 0;
for (prices.items) |p| sum += p;
const avg = sum / @as(f64, @floatFromInt(prices.items.len));
const label = try std.fmt.allocPrint(arena_alloc, "Batch avg: ${d:.2}", .{avg});
// NOTE: no defer free on label either!
std.debug.print("{s} ({d} prices)\n", .{ label, prices.items.len });
}
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
var arena = std.heap.ArenaAllocator.init(gpa.allocator());
defer arena.deinit(); // frees ALL arena allocations at once
// Process multiple batches -- all memory freed when arena.deinit() runs
try processBatch(arena.allocator());
try processBatch(arena.allocator());
try processBatch(arena.allocator());
// Optionally reset between batches to reclaim memory mid-run:
// _ = arena.reset(.retain_capacity);
}
Output:
Batch avg: $68375.00 (4 prices)
Batch avg: $68375.00 (4 prices)
Batch avg: $68375.00 (4 prices)
Look at processBatch -- it allocates an ArrayList and a formatted string using the arena allocator, but there's no defer deinit and no defer free. The arena owns all that memory. When arena.deinit() runs in main, everything allocated through arena.allocator() is freed in one shot. Zero fragmentation. Zero individual cleanup.
The arena wraps a "backing allocator" (in this case, the GPA). It requests large chunks from the backing allocator and sub-divides them for individual allocations. Think of it as a scratch pad: write whatever you want, and when you're done, tear off the whole sheet.
When would you use this? Processing HTTP requests (allocate for the request, free when the response is sent). Parsing files (allocate while reading, free when done). Game frames (allocate temporary state, free at frame end). Any situation where a group of allocations has the same lifetime.
The arena is one of the reasons Zig's allocator system is so powerful. In C, you'd either track every individual malloc/free (error-prone) or use a custom memory pool (complex to implement). In Python, the garbage collector handles object lifetimes... most of the time (circular references, anyone?). In Zig, you pick the allocator that matches your usage pattern, and the pattern does the work.
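In a long-running loop you don't have to tear the arena down after every batch: reset wipes the allocations while (optionally) keeping the capacity for reuse. A hedged sketch -- the loop body and sizes are illustrative, not prescriptive:

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    var arena = std.heap.ArenaAllocator.init(gpa.allocator());
    defer arena.deinit();

    var batch: u32 = 0;
    while (batch < 3) : (batch += 1) {
        // Scratch allocations for this batch -- no individual frees.
        const scratch = try arena.allocator().alloc(f64, 1000);
        scratch[0] = 68000.0;
        std.debug.print("batch {d}: first price {d:.0}\n", .{ batch, scratch[0] });
        // Wipe this batch's memory but keep the arena's capacity,
        // so the next batch reuses it without new heap requests.
        _ = arena.reset(.retain_capacity);
    }
}
```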
FixedBufferAllocator -- Zero Heap
What if you don't want to touch the heap at all? Real-time systems, embedded code, hot loops where heap allocation would cause unpredictable latency -- sometimes you need to allocate from a pre-existing buffer on the stack:
const std = @import("std");
pub fn main() !void {
var buf: [4096]u8 = undefined;
var fba = std.heap.FixedBufferAllocator.init(&buf);
const allocator = fba.allocator();
// Allocate from the stack buffer -- no heap involved
const values = try allocator.alloc(f64, 10);
for (values, 0..) |*v, i| {
v.* = @as(f64, @floatFromInt(i + 1)) * 100.0;
}
var sum: f64 = 0;
for (values) |v| sum += v;
std.debug.print("Allocated {d} floats from stack buffer\n", .{values.len});
std.debug.print("Sum: {d:.0}\n", .{sum});
// What happens if you exceed the buffer?
const too_much = allocator.alloc(u8, 8000);
if (too_much) |_| {
std.debug.print("Got memory\n", .{});
} else |_| {
std.debug.print("Out of buffer space! (expected)\n", .{});
}
}
Output:
Allocated 10 floats from stack buffer
Sum: 5500
Out of buffer space! (expected)
FixedBufferAllocator carves allocations out of a byte buffer you provide. The buffer can be on the stack (as above) or anywhere else in memory. There's no system call, no heap interaction, completely deterministic performance. If you request more than what remains in the buffer, you get an error -- which is exactly the right behavior. No silent overflow, no corrupted data, just an error that your code can handle.
Notice undefined again for the buffer. We're about to write into it through the allocator, so zero-initializing would be wasted cycles.
The Allocator Parameter Pattern
Here's the idiomatic Zig pattern that ties everything together. Functions that need to allocate memory take an std.mem.Allocator as a parameter. The caller decides the strategy. The function doesn't know or care what kind of allocator it gets:
const std = @import("std");
fn buildSummary(allocator: std.mem.Allocator, pair: []const u8, price: f64) ![]u8 {
return try std.fmt.allocPrint(allocator, "{s}: ${d:.2}", .{ pair, price });
}
fn buildPortfolioReport(allocator: std.mem.Allocator, pairs: []const []const u8, prices: []const f64) ![]u8 {
var report = std.ArrayList(u8).init(allocator);
defer report.deinit();
const writer = report.writer();
try writer.writeAll("=== Portfolio Report ===\n");
var total_value: f64 = 0;
for (pairs, 0..) |pair, i| {
try writer.print(" {s}: ${d:.2}\n", .{ pair, prices[i] });
total_value += prices[i];
}
try writer.print(" Total: ${d:.2}\n", .{total_value});
return try report.toOwnedSlice();
}
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
// buildSummary works with any allocator
const summary = try buildSummary(allocator, "BTC/USD", 68423.50);
defer allocator.free(summary);
std.debug.print("{s}\n", .{summary});
// So does buildPortfolioReport
const pairs = [_][]const u8{ "BTC/USD", "ETH/USD", "SOL/USD" };
const prices = [_]f64{ 34000, 16000, 7100 };
const report = try buildPortfolioReport(allocator, &pairs, &prices);
defer allocator.free(report);
std.debug.print("{s}\n", .{report});
}
Output:
BTC/USD: $68423.50
=== Portfolio Report ===
BTC/USD: $34000.00
ETH/USD: $16000.00
SOL/USD: $7100.00
Total: $57100.00
Look at buildSummary and buildPortfolioReport -- they take std.mem.Allocator, not GeneralPurposeAllocator specifically. This means you can call them with:
- A GeneralPurposeAllocator in production (with leak detection)
- A std.testing.allocator in tests (reports leaks as test failures)
- An ArenaAllocator when you want batch cleanup
- A FixedBufferAllocator when you need zero-heap operation
Same functions, different strategies. The allocator is dependency-injected. If you've used dependency injection in other contexts (testing frameworks, web servers, configuration systems), this is the same concept applied to memory allocation. And it works brilliantly -- you write your allocation-using code once and swap strategies at the call site.
std.fmt.allocPrint is a standard library function that formats a string and allocates space for the result. The caller owns the returned slice and must free it. report.toOwnedSlice() transfers the ArrayList's internal buffer to the caller (who must free it), leaving the ArrayList empty.
Allocator Quick Reference
Here's a cheat sheet for choosing the right allocator:
| Allocator | Best for | Key property |
|---|---|---|
| GeneralPurposeAllocator | Default. Development. Any program. | Leak detection in debug mode |
| ArenaAllocator | Batch work. Many allocs, one free. | No individual frees needed |
| FixedBufferAllocator | Real-time. Embedded. Zero heap. | Pre-allocated, deterministic |
| page_allocator | Large allocations. Direct from OS. | Minimum 4KB pages |
| std.testing.allocator | Unit tests. | Reports leaks as test failures |
Start with GeneralPurposeAllocator. It's the safe default. If profiling shows allocation is a bottleneck, switch to arena or fixed-buffer for the hot path. The allocator parameter pattern means you can change strategies without rewriting your functions.
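The testing allocator deserves a quick illustration, since it's the one row with no example so far. A sketch of how a function written against std.mem.Allocator gets exercised in a test (buildSummary is the same function from the allocator parameter pattern section; run this with zig test):

```zig
const std = @import("std");

// Any function that takes std.mem.Allocator can be driven by
// std.testing.allocator -- no changes to the function itself.
fn buildSummary(allocator: std.mem.Allocator, pair: []const u8, price: f64) ![]u8 {
    return std.fmt.allocPrint(allocator, "{s}: ${d:.2}", .{ pair, price });
}

test "buildSummary allocates and we free" {
    // std.testing.allocator fails the test if anything leaks,
    // so forgetting this free turns into a test failure.
    const summary = try buildSummary(std.testing.allocator, "BTC/USD", 68000.0);
    defer std.testing.allocator.free(summary);
    try std.testing.expectEqualStrings("BTC/USD: $68000.00", summary);
}
```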
Putting It Together: A Dynamic Portfolio Tracker
Let me show you how allocators, ArrayLists, HashMaps, structs, and everything from previous episodes combine into a real program. This builds on the fixed-size Portfolio struct from ep006, but now with dynamic sizing -- no more [4]?Asset with a hard cap:
const std = @import("std");
const Asset = struct {
symbol: []const u8,
quantity: f64,
fn value(self: Asset, price: f64) f64 {
return self.quantity * price;
}
};
fn buildPortfolio(allocator: std.mem.Allocator) !std.ArrayList(Asset) {
var assets = std.ArrayList(Asset).init(allocator);
errdefer assets.deinit();
try assets.append(.{ .symbol = "BTC", .quantity = 0.5 });
try assets.append(.{ .symbol = "ETH", .quantity = 5.0 });
try assets.append(.{ .symbol = "SOL", .quantity = 50.0 });
try assets.append(.{ .symbol = "AVAX", .quantity = 200.0 });
try assets.append(.{ .symbol = "DOT", .quantity = 100.0 });
return assets;
}
fn lookupPrice(prices: *const std.StringHashMap(f64), symbol: []const u8) ?f64 {
return prices.get(symbol);
}
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
// Build price feed
var prices = std.StringHashMap(f64).init(allocator);
defer prices.deinit();
try prices.put("BTC", 68000.0);
try prices.put("ETH", 3200.0);
try prices.put("SOL", 142.0);
try prices.put("AVAX", 35.0);
try prices.put("DOT", 7.50);
// Build portfolio dynamically
var portfolio = try buildPortfolio(allocator);
defer portfolio.deinit();
// Display
std.debug.print("=== Portfolio ({d} assets) ===\n", .{portfolio.items.len});
var total_value: f64 = 0;
for (portfolio.items) |asset| {
if (lookupPrice(&prices, asset.symbol)) |price| {
const val = asset.value(price);
total_value += val;
std.debug.print(" {s}: {d:.4} x ${d:.2} = ${d:.2}\n", .{
asset.symbol, asset.quantity, price, val,
});
} else {
std.debug.print(" {s}: {d:.4} x (no price data)\n", .{
asset.symbol, asset.quantity,
});
}
}
std.debug.print(" ---\n", .{});
std.debug.print(" Total: ${d:.2}\n", .{total_value});
}
Output:
=== Portfolio (5 assets) ===
BTC: 0.5000 x $68000.00 = $34000.00
ETH: 5.0000 x $3200.00 = $16000.00
SOL: 50.0000 x $142.00 = $7100.00
AVAX: 200.0000 x $35.00 = $7000.00
DOT: 100.0000 x $7.50 = $750.00
---
Total: $64850.00
Notice errdefer assets.deinit() in buildPortfolio. This is the errdefer pattern from ep004 applied to memory: if any append fails (allocation error), the errdefer frees the ArrayList before the error propagates. If the function succeeds, the errdefer does nothing and the caller takes ownership. This is how you write functions that return allocated data safely -- errdefer for the error path, caller's defer for the success path.
Compare this to the fixed-size Portfolio from ep006. That one held at most 4 assets in a [4]?Asset array. This one holds any number of assets because ArrayList grows dynamically. Same Asset struct, same .value() method, but now without an artificial capacity limit. The allocator made the difference.
What You're Building Toward
We've covered the essentials of Zig's memory model: stack vs heap, the four main allocators, ArrayList, StringHashMap, the allocator parameter pattern, and defer/errdefer for leak-free cleanup. This is the foundation that everything else in Zig builds on.
When we get to pointers and memory layout in a future episode, you'll learn what actually happens underneath these allocators -- how a slice is a pointer plus a length, how create and destroy work at the pointer level, and how to build your own data structures from raw memory. You'll also see the relationship between the *T pointer type and the []T slice type, and why Zig distinguishes between single-item pointers and many-item pointers.
And when we get to comptime, you'll see something Zig can do that almost no other language can -- execute arbitrary code at compile time, generating types and functions based on compile-time data. The allocator system feeds into that: some allocations happen at compile time (compile-time arrays, string building) and some at runtime, and Zig's type system tracks which is which.
For now, the key takeaways: every allocation is explicit, every allocation can fail, defer is your best friend, and the allocator parameter makes your functions testable and flexible. Write a few programs with GeneralPurposeAllocator, let it catch your leaks in debug mode, and the pattern will become second nature.
Exercises
You know the drill by now. Type these out. Compile them. Read the compiler errors. Let the GPA's leak detector catch your mistakes -- it's there to help.
1. Create an ArrayList([]const u8) of ticker names using the GPA. Append 5 tickers ("BTC", "ETH", "SOL", "AVAX", "DOT"), iterate over .items and print each one, then deinit. Make sure your program has zero leaks (the GPA will tell you).
2. Use a StringHashMap(f64) to store portfolio holdings (ticker -> quantity). Insert at least 4 entries. Look up keys that exist and keys that don't. Use contains to check membership. Print the total number of entries with .count(). Try removing an entry with portfolio.remove("ETH") and verify the count changes.
3. Write a function fn processData(allocator: std.mem.Allocator) !void that creates an ArrayList, appends some values, computes the average, and prints it. Call this function from main using three different allocators: a GPA, an arena, and a FixedBufferAllocator. Observe that the function works identically with all three.
4. Deliberately introduce a memory leak: allocate something and remove its defer free. Run your program in debug mode and observe the GPA's leak report. Then fix it. Understanding what the leak report looks like is important -- you'll see it in real debugging situations.
5. Build the dynamic portfolio tracker from the walkthrough yourself, but add a removeTicker function that searches the ArrayList for an asset by symbol, removes it with orderedRemove, and returns whether it was found. Add and remove a few assets and verify the final state.
Exercises 1-2 test the basic data structures with proper cleanup. Exercise 3 tests the allocator parameter pattern -- the same function, three allocators. Exercise 4 is intentionally about breaking things to learn what breakage looks like. Exercise 5 combines structs from ep006 with allocators from this episode into something approaching a real application.
The allocator system is what separates Zig from languages like Python (hidden memory management) and from C (untracked memory management). Zig sits in the middle: you manage memory explicitly, but the tools are composable, the patterns are consistent, and the GPA catches mistakes before they reach production. Once you internalize the pattern -- allocate, defer free, use -- the explicitness stops feeling like burden and starts feeling like confidence. You know where your memory lives. You know when it gets freed. No mystery, no surprises.