Learn Zig Series (#13) - Interfaces via Type Erasure
What will I learn
- You will learn how Zig achieves polymorphism without inheritance or virtual methods;
- the *anyopaque pattern for type-erased pointers;
- building a Writer interface that works with files, buffers, and network sockets;
- function pointers stored in structs as vtable-like dispatch;
- the @ptrCast and @alignCast builtins for type recovery;
- comparing this pattern to Go interfaces, Rust traits, and C++ virtual methods;
- when type erasure is the right choice vs comptime generics;
- real-world examples from the Zig standard library (std.mem.Allocator);
- testing type-erased interfaces with injectable implementations.
Requirements
- A working modern computer running macOS, Windows or Ubuntu;
- An installed Zig 0.14+ distribution (download from ziglang.org);
- The ambition to learn Zig programming.
Difficulty
- Intermediate
Curriculum (of the Learn Zig Series):
- Zig Programming Tutorial - ep001 - Intro
- Learn Zig Series (#2) - Hello Zig, Variables and Types
- Learn Zig Series (#3) - Functions and Control Flow
- Learn Zig Series (#4) - Error Handling (Zig's Best Feature)
- Learn Zig Series (#5) - Arrays, Slices, and Strings
- Learn Zig Series (#6) - Structs, Enums, and Tagged Unions
- Learn Zig Series (#7) - Memory Management and Allocators
- Learn Zig Series (#8) - Pointers and Memory Layout
- Learn Zig Series (#9) - Comptime (Zig's Superpower)
- Learn Zig Series (#10) - Project Structure, Modules, and File I/O
- Learn Zig Series (#11) - Mini Project: Building a Step Sequencer
- Learn Zig Series (#12) - Testing and Test-Driven Development
- Learn Zig Series (#13) - Interfaces via Type Erasure (this post)
Learn Zig Series (#13) - Interfaces via Type Erasure
Welcome back! In episode #12 we covered Zig's built-in testing -- test blocks as first-class language constructs, std.testing.expect* assertions, the testing allocator that catches memory leaks automatically, TDD workflows, table-driven tests, and how to run tests across a multi-file project. We built a Stack using test-driven development and added test coverage to the step sequencer from ep011. That was the last "tooling" episode in this first batch.
Now we're shifting gears. The next few episodes explore patterns and techniques for writing larger Zig programs -- the kind of design decisions you'll face once your projects outgrow a single file. And the first one is a big one: how do you write code that works with "any type that has a write method"? How do you store different types in the same collection? How does std.mem.Allocator work -- you've been passing it around since ep007, but what's actually going on under the hood?
The answer is type erasure. And if you've been wondering why Zig doesn't have interfaces, traits, or abstract classes -- it's because it doesn't need them. It has something simpler and more explicit.
Here we go!
Solutions to Episode 12 Exercises
Before we get into type erasure, here are the solutions to last episode's exercises. If you wrote these yourself, compare your approaches:
Exercise 1 -- RingBuffer via TDD:
const std = @import("std");
const testing = std.testing;
fn RingBuffer(comptime T: type, comptime capacity: usize) type {
return struct {
buf: [capacity]T = undefined,
head: usize = 0,
tail: usize = 0,
count: usize = 0,
const Self = @This();
fn push(self: *Self, item: T) void {
self.buf[self.tail] = item;
self.tail = (self.tail + 1) % capacity;
if (self.count == capacity) {
self.head = (self.head + 1) % capacity;
} else {
self.count += 1;
}
}
fn pop(self: *Self) ?T {
if (self.count == 0) return null;
const item = self.buf[self.head];
self.head = (self.head + 1) % capacity;
self.count -= 1;
return item;
}
fn isEmpty(self: *const Self) bool { return self.count == 0; }
fn isFull(self: *const Self) bool { return self.count == capacity; }
};
}
test "ring buffer FIFO order" {
var rb = RingBuffer(i32, 4){};
rb.push(10);
rb.push(20);
rb.push(30);
try testing.expectEqual(@as(?i32, 10), rb.pop());
try testing.expectEqual(@as(?i32, 20), rb.pop());
try testing.expectEqual(@as(?i32, 30), rb.pop());
try testing.expectEqual(@as(?i32, null), rb.pop());
}
test "ring buffer wraps and overwrites oldest" {
var rb = RingBuffer(i32, 3){};
rb.push(1);
rb.push(2);
rb.push(3);
try testing.expect(rb.isFull());
rb.push(4); // overwrites 1
try testing.expectEqual(@as(?i32, 2), rb.pop());
try testing.expectEqual(@as(?i32, 3), rb.pop());
try testing.expectEqual(@as(?i32, 4), rb.pop());
}
test "ring buffer empty and full" {
var rb = RingBuffer(u8, 2){};
try testing.expect(rb.isEmpty());
rb.push('a');
rb.push('b');
try testing.expect(rb.isFull());
try testing.expect(!rb.isEmpty());
}
The whole thing is [capacity]T on the stack -- zero allocations. Tests first, implementation second. The testing allocator would report zero heap usage.
Exercise 2 -- portfolio tests:
const testing = std.testing;
test "addHolding increases count" {
var p = Portfolio{};
try p.addHolding("BTC", 2.5, 45000.0);
try testing.expectEqual(@as(usize, 1), p.count);
}
test "addHolding when full returns error" {
var p = Portfolio{};
for (0..Portfolio.MAX_HOLDINGS) |i| {
var name_buf: [3]u8 = undefined;
_ = std.fmt.bufPrint(&name_buf, "T{d:0>2}", .{i}) catch unreachable;
try p.addHolding(&name_buf, 1.0, 100.0);
}
try testing.expectError(error.PortfolioFull, p.addHolding("XXX", 1.0, 1.0));
}
test "totalValue with zero holdings" {
const p = Portfolio{};
try testing.expectApproxEqAbs(@as(f64, 0.0), p.totalValue(), 0.001);
}
test "largestHolding returns correct entry" {
var p = Portfolio{};
try p.addHolding("ETH", 10.0, 3000.0); // 30000
try p.addHolding("BTC", 1.0, 45000.0); // 45000 -- largest
try p.addHolding("SOL", 50.0, 150.0); // 7500
const largest = p.largestHolding().?;
try testing.expectEqualStrings("BTC", largest.ticker);
}
Each test is independent, sets up its own state, tests one behavior. The expectApproxEqAbs for floats avoids precision traps.
Exercise 3 -- parseKeyValue:
const ParseError = error{InvalidFormat};
const KeyValue = struct { key: []const u8, value: []const u8 };
fn parseKeyValue(input: []const u8) ParseError!KeyValue {
if (input.len == 0) return error.InvalidFormat;
const idx = std.mem.indexOfScalar(u8, input, '=') orelse return error.InvalidFormat;
return .{ .key = input[0..idx], .value = input[idx + 1 ..] };
}
test "parseKeyValue normal" {
const r = try parseKeyValue("name=scipio");
try testing.expectEqualStrings("name", r.key);
try testing.expectEqualStrings("scipio", r.value);
}
test "parseKeyValue empty value" {
const r = try parseKeyValue("key=");
try testing.expectEqualStrings("key", r.key);
try testing.expectEqualStrings("", r.value);
}
test "parseKeyValue no equals" {
try testing.expectError(error.InvalidFormat, parseKeyValue("noequals"));
}
test "parseKeyValue multiple equals" {
const r = try parseKeyValue("a=b=c");
try testing.expectEqualStrings("a", r.key);
try testing.expectEqualStrings("b=c", r.value); // split on FIRST =
}
test "parseKeyValue empty input" {
try testing.expectError(error.InvalidFormat, parseKeyValue(""));
}
Table-driven tests covering every edge case. Split on the FIRST = only -- "a=b=c" has key "a" and value "b=c".
Exercise 4 -- intentional memory leak:
test "intentional leak -- see what the testing allocator reports" {
const buf = try testing.allocator.alloc(u8, 100);
_ = buf;
// deliberately NOT freeing buf
// Run: zig test
// Output: "FAIL (leaked approximately 100 bytes)"
// Fix: uncomment the line below
// defer testing.allocator.free(buf);
}
That error message -- leaked approximately 100 bytes -- is burned into your brain now. The testing allocator catches leaks that the GPA would also catch, but at test time instead of runtime.
Exercise 5 -- StatTracker via TDD:
const StatTracker = struct {
min: f64 = std.math.inf(f64),
max: f64 = -std.math.inf(f64),
sum: f64 = 0,
count: u64 = 0,
fn add(self: *StatTracker, value: f64) void {
self.min = @min(self.min, value);
self.max = @max(self.max, value);
self.sum += value;
self.count += 1;
}
fn mean(self: *const StatTracker) f64 {
if (self.count == 0) return 0;
return self.sum / @as(f64, @floatFromInt(self.count));
}
};
test "stat tracker basics" {
var st = StatTracker{};
st.add(10.0);
st.add(20.0);
st.add(30.0);
try testing.expectApproxEqAbs(@as(f64, 10.0), st.min, 0.001);
try testing.expectApproxEqAbs(@as(f64, 30.0), st.max, 0.001);
try testing.expectApproxEqAbs(@as(f64, 20.0), st.mean(), 0.001);
try testing.expectEqual(@as(u64, 3), st.count);
}
test "stat tracker single value" {
var st = StatTracker{};
st.add(42.0);
try testing.expectApproxEqAbs(@as(f64, 42.0), st.min, 0.001);
try testing.expectApproxEqAbs(@as(f64, 42.0), st.max, 0.001);
try testing.expectApproxEqAbs(@as(f64, 42.0), st.mean(), 0.001);
}
TDD means you wrote try testing.expectApproxEqAbs(@as(f64, 20.0), st.mean(), 0.001) BEFORE implementing mean(). Tests drove the API. The init values use inf and -inf so the first add always sets both min and max.
Exercise 6 -- testing storage.zig save/load:
test "save and load roundtrip" {
const test_file = "test_pattern.seq";
// Save
var seq = Sequencer{};
seq.loadPreset(); // fill with known pattern
try storage.save(&seq, test_file);
// Load into fresh sequencer
var seq2 = Sequencer{};
try storage.load(&seq2, test_file);
// Verify grid matches
for (0..NUM_TRACKS) |t| {
for (0..NUM_STEPS) |s| {
try testing.expectEqual(seq.grid[t][s], seq2.grid[t][s]);
}
}
// Cleanup
std.fs.cwd().deleteFile(test_file) catch {};
}
Create, save, load, compare -- the fundamental roundtrip test. catch {} on deleteFile silently ignores cleanup failures (the file might not exist if save failed). This pattern -- temp file plus a cleanup delete (often via defer) -- is standard for testing I/O code.
Now -- interfaces!
The Problem: Different Types, Same Operation
You have a FileWriter, a BufferWriter, and a NetworkWriter. They all need a write(bytes) -> usize method. In Python you'd use duck typing -- just call .write() on whatever and hope it works. In Java, you'd define an interface Writer { int write(byte[] bytes); }. In Go, you'd define type Writer interface { Write(p []byte) (n int, err error) } and any type with that method signature implicitly satisfies it.
In Zig there's no interface keyword. No trait. No abstract class. No implicit satisfaction. But the standard library is FULL of polymorphic abstractions -- std.mem.Allocator, std.io.Writer, std.io.Reader, std.Random. How?
The answer is two things stored together: a type-erased pointer and a function pointer (or a pointer to a struct of function pointers, which is essentially a vtable). The pointer points to the concrete implementation. The function pointer knows how to call the right method on that concrete type. Together they form an interface.
const Writer = struct {
ptr: *anyopaque,
writeFn: *const fn (*anyopaque, []const u8) anyerror!usize,
pub fn write(self: Writer, bytes: []const u8) !usize {
return self.writeFn(self.ptr, bytes);
}
};
That's the entire interface. Two fields. One public method that delegates to the function pointer. The caller sees Writer and calls .write(). The implementation behind the function pointer does the actual work. The caller never knows (or cares) what the concrete type is.
*anyopaque is Zig's way of saying "pointer to something, but I'm not telling you what". It's similar to void* in C, but the type system forces you to cast it explicitly before you can dereference it. You can't accidentally use an *anyopaque as a *FileWriter -- you have to go through @ptrCast and @alignCast.
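To make that cast discipline concrete, here's a minimal, runnable sketch of erasing a pointer and recovering it (the Point type is just an illustration, not from the episode):

```zig
const std = @import("std");

const Point = struct { x: i32, y: i32 };

test "erase and recover a pointer" {
    const p = Point{ .x = 3, .y = 4 };

    // Erase: all the compiler keeps is a raw address.
    const erased: *const anyopaque = @ptrCast(&p);

    // Recover: we must assert the type AND the alignment explicitly --
    // *anyopaque carries neither.
    const recovered: *const Point = @ptrCast(@alignCast(erased));
    try std.testing.expectEqual(@as(i32, 3), recovered.x);
    try std.testing.expectEqual(@as(i32, 4), recovered.y);
}
```

Skip either cast and the compiler refuses to dereference -- that's the safety margin over C's void*.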
Implementing a Concrete Type
Here's a FileWriter that satisfies the Writer interface:
const std = @import("std");
const FileWriter = struct {
file: std.fs.File,
pub fn writer(self: *FileWriter) Writer {
return .{
.ptr = @ptrCast(self),
.writeFn = @ptrCast(&writeImpl),
};
}
fn writeImpl(ptr: *anyopaque, bytes: []const u8) anyerror!usize {
const self: *FileWriter = @ptrCast(@alignCast(ptr));
return self.file.write(bytes);
}
};
Three things happening here:
writer() returns a Writer -- this is the "conversion" method. It takes *FileWriter (a concrete pointer) and wraps it into a Writer (the abstract interface) by erasing the type. @ptrCast(self) converts *FileWriter to *anyopaque. The concrete type information is gone. All that remains is the raw address.
writeImpl recovers the concrete type -- when the function pointer is called, it receives *anyopaque. The first thing it does is cast back: @ptrCast(@alignCast(ptr)) converts the opaque pointer back to *FileWriter. Now it can access self.file and call the real write method. @alignCast is needed because *anyopaque has no alignment information -- we need to tell the compiler "trust me, this pointer is properly aligned for a FileWriter".
The function pointer is @ptrCast(&writeImpl) -- strictly speaking, because writeImpl already takes *anyopaque, its signature fn(*anyopaque, []const u8) anyerror!usize matches the field type in the Writer struct exactly, so a plain &writeImpl would coerce without any cast. The @ptrCast becomes necessary in the variant of this pattern where the implementation function takes the concrete *FileWriter directly, which you'll also see in the wild.
This is the vtable pattern. One level of indirection: instead of calling file_writer.write(bytes) directly, you call writer.writeFn(writer.ptr, bytes) which internally calls file_writer.write(bytes). One extra pointer dereference. That's the cost of runtime polymorphism.
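Here's the whole round trip as a runnable sketch, using a hypothetical CountingWriter (my own example, not from the episode) as the concrete type so nothing touches the filesystem -- and since writeImpl already takes *anyopaque, a plain &writeImpl suffices:

```zig
const std = @import("std");

// The two-field interface from the article.
const Writer = struct {
    ptr: *anyopaque,
    writeFn: *const fn (*anyopaque, []const u8) anyerror!usize,

    pub fn write(self: Writer, bytes: []const u8) !usize {
        return self.writeFn(self.ptr, bytes);
    }
};

// A trivial concrete type: counts bytes instead of storing them.
const CountingWriter = struct {
    total: usize = 0,

    pub fn writer(self: *CountingWriter) Writer {
        return .{ .ptr = @ptrCast(self), .writeFn = &writeImpl };
    }

    fn writeImpl(ptr: *anyopaque, bytes: []const u8) anyerror!usize {
        const self: *CountingWriter = @ptrCast(@alignCast(ptr));
        self.total += bytes.len;
        return bytes.len;
    }
};

test "dispatch through the type-erased interface" {
    var cw = CountingWriter{};
    const w = cw.writer();
    _ = try w.write("hello");
    _ = try w.write(", world");
    // Both calls went through the function pointer, back to our struct.
    try std.testing.expectEqual(@as(usize, 12), cw.total);
}
```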
A Buffer Writer
Now a second implementation of the same interface:
const BufferWriter = struct {
buf: []u8,
pos: usize = 0,
pub fn writer(self: *BufferWriter) Writer {
return .{
.ptr = @ptrCast(self),
.writeFn = @ptrCast(&writeImpl),
};
}
fn writeImpl(ptr: *anyopaque, bytes: []const u8) anyerror!usize {
const self: *BufferWriter = @ptrCast(@alignCast(ptr));
const available = self.buf.len - self.pos;
const to_write = @min(bytes.len, available);
@memcpy(self.buf[self.pos..][0..to_write], bytes[0..to_write]);
self.pos += to_write;
return to_write;
}
};
Same pattern. Different guts. The writer() method returns a Writer that points at this BufferWriter instance. The writeImpl copies bytes into an in-memory buffer instead of writing to a file. But from the outside, both look identical -- they're both Writer.
And here's the payoff. Any function that takes a Writer works with both:
fn logMessage(w: Writer, msg: []const u8) !void {
_ = try w.write(msg);
_ = try w.write("\n");
}
Call logMessage(file_writer.writer(), "hello") and it writes to a file. Call logMessage(buffer_writer.writer(), "hello") and it writes to a buffer. The function doesn't know or care. That's polymorphism -- different behavior through the same interface.
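You can verify that payoff directly. Here's a runnable sketch (repeating the Writer and BufferWriter definitions from above so the block is self-contained) that drives logMessage through a BufferWriter and inspects the bytes:

```zig
const std = @import("std");

const Writer = struct {
    ptr: *anyopaque,
    writeFn: *const fn (*anyopaque, []const u8) anyerror!usize,

    pub fn write(self: Writer, bytes: []const u8) !usize {
        return self.writeFn(self.ptr, bytes);
    }
};

const BufferWriter = struct {
    buf: []u8,
    pos: usize = 0,

    pub fn writer(self: *BufferWriter) Writer {
        return .{ .ptr = @ptrCast(self), .writeFn = &writeImpl };
    }

    fn writeImpl(ptr: *anyopaque, bytes: []const u8) anyerror!usize {
        const self: *BufferWriter = @ptrCast(@alignCast(ptr));
        const n = @min(bytes.len, self.buf.len - self.pos);
        @memcpy(self.buf[self.pos..][0..n], bytes[0..n]);
        self.pos += n;
        return n;
    }
};

fn logMessage(w: Writer, msg: []const u8) !void {
    _ = try w.write(msg);
    _ = try w.write("\n");
}

test "logMessage is oblivious to the concrete writer" {
    var storage: [64]u8 = undefined;
    var bw = BufferWriter{ .buf = &storage };
    try logMessage(bw.writer(), "hello");
    try std.testing.expectEqualStrings("hello\n", storage[0..bw.pos]);
}
```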
How std.mem.Allocator Actually Works
Now let me show you the real thing. The allocator you've been using since ep007 is this exact pattern, just with a proper vtable struct:
// Simplified from the standard library source
pub const Allocator = struct {
ptr: *anyopaque,
vtable: *const VTable,
pub const VTable = struct {
alloc: *const fn (ctx: *anyopaque, len: usize, ptr_align: u8, ret_addr: usize) ?[*]u8,
resize: *const fn (ctx: *anyopaque, buf: []u8, buf_align: u8, new_len: usize, ret_addr: usize) bool,
free: *const fn (ctx: *anyopaque, buf: []u8, buf_align: u8, ret_addr: usize) void,
};
pub fn alloc(self: Allocator, comptime T: type, n: usize) Error![]T {
// delegates to self.vtable.alloc(self.ptr, ...)
}
};
Instead of one function pointer, there's a VTable -- a struct containing multiple function pointers. This is the same concept as a C++ vtable, but explicit: you can see the struct, you can read the fields, and there's no hidden compiler magic.
Every allocator implementation -- GeneralPurposeAllocator, page_allocator, ArenaAllocator, FixedBufferAllocator, testing.allocator -- fills in these three function pointers differently. GPA tracks every allocation for leak detection. Page allocator goes straight to the OS. Arena allocator bumps a pointer forward and never frees individual allocations. But they all look like std.mem.Allocator from the outside.
This is why you can write fn process(allocator: std.mem.Allocator) and it works with ANY allocator. The function doesn't know which allocator it's using. It just calls allocator.alloc(), which dispatches through the vtable to the concrete implementation. Same pattern as our Writer, just with more methods.
And this is why the testing allocator from ep012 is so powerful -- it's just another allocator that fills in the vtable with leak-detecting implementations. Your production code uses GPA. Your test code uses the testing allocator. Same interface, different behavior. The code being tested never needs to change.
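You can watch this in action with a small sketch -- joinWords is a made-up helper, but the two allocators are real, and the function body never changes between them:

```zig
const std = @import("std");

// This function never learns which allocator backs it -- every
// allocation dispatches through the Allocator vtable at runtime.
fn joinWords(allocator: std.mem.Allocator) ![]u8 {
    return std.mem.join(allocator, " ", &.{ "type", "erasure" });
}

test "same function, two different allocators" {
    // Heap-backed, with leak detection:
    const heap_result = try joinWords(std.testing.allocator);
    defer std.testing.allocator.free(heap_result);
    try std.testing.expectEqualStrings("type erasure", heap_result);

    // Stack-backed, no heap involved at all:
    var buf: [64]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buf);
    const stack_result = try joinWords(fba.allocator());
    try std.testing.expectEqualStrings("type erasure", stack_result);
}
```

Note the fba.allocator() call -- that's the same "conversion method" shape as our writer() and logger() methods: concrete type in, type-erased interface out.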
Building a Logger Interface
Let me walk through building a complete interface from scratch -- a Logger that supports multiple output backends. This is the kind of thing you'd build in a real application:
const std = @import("std");
const Logger = struct {
ptr: *anyopaque,
vtable: *const VTable,
const VTable = struct {
log: *const fn (*anyopaque, []const u8) void,
flush: *const fn (*anyopaque) void,
};
pub fn log(self: Logger, msg: []const u8) void {
self.vtable.log(self.ptr, msg);
}
pub fn flush(self: Logger) void {
self.vtable.flush(self.ptr);
}
};
const StdoutLogger = struct {
prefix: []const u8,
const vtable = Logger.VTable{
.log = @ptrCast(&logImpl),
.flush = @ptrCast(&flushImpl),
};
pub fn logger(self: *StdoutLogger) Logger {
return .{ .ptr = @ptrCast(self), .vtable = &vtable };
}
fn logImpl(ptr: *anyopaque, msg: []const u8) void {
const self: *StdoutLogger = @ptrCast(@alignCast(ptr));
std.debug.print("[{s}] {s}\n", .{ self.prefix, msg });
}
fn flushImpl(_: *anyopaque) void {
// stdout doesn't need flushing in debug.print
}
};
const BufferLogger = struct {
buffer: std.ArrayList(u8),
const vtable = Logger.VTable{
.log = @ptrCast(&logImpl),
.flush = @ptrCast(&flushImpl),
};
pub fn logger(self: *BufferLogger) Logger {
return .{ .ptr = @ptrCast(self), .vtable = &vtable };
}
fn logImpl(ptr: *anyopaque, msg: []const u8) void {
const self: *BufferLogger = @ptrCast(@alignCast(ptr));
self.buffer.appendSlice(msg) catch {};
self.buffer.append('\n') catch {};
}
fn flushImpl(ptr: *anyopaque) void {
const self: *BufferLogger = @ptrCast(@alignCast(ptr));
self.buffer.clearRetainingCapacity();
}
};
Notice the vtable-as-const-struct pattern. Each concrete type declares a const vtable with its function pointers filled in. The logger() method returns a Logger pointing to &vtable -- the vtable itself lives in static memory (it's const), so there's no allocation involved. The only thing that varies between instances is ptr (which points to the specific instance's data).
This is how the standard library does it too. The VTable is a compile-time constant. Only the data pointer is per-instance. Minimal memory overhead.
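A runnable sketch of that point, with the Logger trimmed to a single method and a hypothetical PrefixLogger standing in for the concrete type -- every instance hands out the same static vtable pointer:

```zig
const std = @import("std");

const Logger = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    const VTable = struct {
        log: *const fn (*anyopaque, []const u8) void,
    };

    pub fn log(self: Logger, msg: []const u8) void {
        self.vtable.log(self.ptr, msg);
    }
};

const PrefixLogger = struct {
    prefix: []const u8,

    // One vtable in static memory, shared by every instance.
    const vtable = Logger.VTable{ .log = &logImpl };

    pub fn logger(self: *PrefixLogger) Logger {
        return .{ .ptr = @ptrCast(self), .vtable = &vtable };
    }

    fn logImpl(ptr: *anyopaque, msg: []const u8) void {
        const self: *PrefixLogger = @ptrCast(@alignCast(ptr));
        std.debug.print("[{s}] {s}\n", .{ self.prefix, msg });
    }
};

test "all instances share one static vtable" {
    var a = PrefixLogger{ .prefix = "A" };
    var b = PrefixLogger{ .prefix = "B" };
    const la = a.logger();
    const lb = b.logger();
    // Different data pointers, identical vtable pointer.
    try std.testing.expect(la.ptr != lb.ptr);
    try std.testing.expect(la.vtable == lb.vtable);
}
```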
Testing with Type-Erased Interfaces
Here's where the pattern really shines. Because the interface accepts any implementation, you can inject test doubles trivially:
test "application uses logger correctly" {
var buf_logger = BufferLogger{
.buffer = std.ArrayList(u8).init(std.testing.allocator),
};
defer buf_logger.buffer.deinit();
// Inject the buffer logger where your app expects a Logger
processData(buf_logger.logger());
// Verify what was logged
try std.testing.expectEqualStrings(
"Processing started\nProcessing complete\n",
buf_logger.buffer.items,
);
}
fn processData(log: Logger) void {
log.log("Processing started");
// ... actual work ...
log.log("Processing complete");
}
No mocking framework. No dependency injection container. No abstract base class. Just pass a BufferLogger instead of a StdoutLogger and check what ended up in the buffer. The interface IS the seam. If you remember from ep012 where we talked about testability and architecture -- this is the same principle. Code that accepts an interface is code that can be tested with any implementation of that interface.
Type Erasure vs Comptime Generics
Zig gives you two ways to write "generic" code, and knowing when to use which is important:
Comptime generics (covered more in the next episode) monomorphize -- the compiler generates a separate specialized version for each type. Zero indirection, zero runtime cost, but the type must be known at compile time.
Type erasure uses indirection -- one version of the code that dispatches through function pointers at runtime. Small overhead (one pointer dereference per call), but works when the concrete type isn't known until runtime.
| Situation | Use this |
|---|---|
| Type known at compile time | fn process(comptime T: type, item: T) |
| Stored in a collection (mixed types) | Type erasure |
| Plugin / callback systems | Type erasure |
| Maximum performance (tight loops) | Comptime generics |
| Standard library convention (allocators, writers) | Type erasure |
| Single concrete type, you just want code reuse | Comptime generics |
When you see fn myFunc(allocator: std.mem.Allocator) -- that's type erasure. The function accepts ANY allocator. When you see fn myFunc(comptime T: type) -- that's comptime generics. The compiler generates a specialized version per type.
Both are valid. Both are idiomatic Zig. They solve different problems. A function that takes an Allocator can be called with a GPA, a testing allocator, an arena, or any custom allocator without recompilation. A function that takes comptime T: type generates optimal code for each specific type but requires the type at compile time. You'll use both in real programs.
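For contrast, here's what the comptime-generics side looks like -- sumSlice is a made-up example, and the compiler emits one specialized copy per element type, with no pointers and no runtime dispatch:

```zig
const std = @import("std");

// Comptime generics: monomorphized per T, direct calls, zero indirection.
fn sumSlice(comptime T: type, items: []const T) T {
    var total: T = 0;
    for (items) |item| total += item;
    return total;
}

test "one source function, two specialized versions" {
    const ints = [_]i32{ 1, 2, 3 };
    const floats = [_]f64{ 0.5, 0.25 };
    // sumSlice(i32, ...) and sumSlice(f64, ...) are two distinct
    // functions in the compiled binary.
    try std.testing.expectEqual(@as(i32, 6), sumSlice(i32, &ints));
    try std.testing.expectEqual(@as(f64, 0.75), sumSlice(f64, &floats));
}
```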
The Standard Library Pattern
If you browse the Zig standard library source code, you'll see this type erasure pattern everywhere. Here's the general shape:
- Define an interface struct with ptr: *anyopaque and vtable: *const VTable
- The VTable struct contains one *const fn per method
- Each public method on the interface delegates to the corresponding vtable entry
- Concrete types define a const vtable and a method that returns the interface
This is consistent across std.mem.Allocator, std.io.Writer, std.io.Reader, std.Random, and more. Once you recognize the pattern, you'll see it everywhere. And once you can build your own -- as we did with Logger above -- you can create interfaces for any abstraction your program needs.
Having said that, don't over-abstract. If you have one StdoutLogger and you're never going to swap it for anything else, just use StdoutLogger directly. Type erasure adds a level of indirection -- use it when you actually need the polymorphism (multiple implementations, testing, plugin systems). If you're the only caller and there's only one implementation, a direct function call is simpler and faster.
Lifetime Gotcha: The Dangling Pointer
One thing to watch out for with type erasure -- the *anyopaque pointer must stay valid for as long as the interface value exists. This is the same ownership concern from ep008, but it bites harder here because the type system can't help you. Once the pointer is erased to *anyopaque, the compiler doesn't track what it points to anymore.
fn makeLogger() Logger {
var stdout_logger = StdoutLogger{ .prefix = "APP" };
return stdout_logger.logger(); // BUG: returns pointer to local!
}
The StdoutLogger lives on the stack. When makeLogger returns, that stack frame is gone. The Logger now contains a dangling *anyopaque pointing to freed memory. This compiles fine (the compiler can't see through *anyopaque) but crashes at runtime. Or worse -- it corrupts memory silently.
The fix: make sure the concrete type outlives the interface. Either allocate on the heap, store it in a struct that lives long enough, or ensure the interface doesn't escape the scope where the concrete type lives. defer helps -- if you create the interface and use it within one function, that's always safe.
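Here's the heap-allocation fix as a runnable sketch (the Logger is trimmed to a single method to keep the block short): the factory allocates the concrete type, and the caller takes ownership and must destroy it.

```zig
const std = @import("std");

const Logger = struct {
    ptr: *anyopaque,
    vtable: *const VTable,

    const VTable = struct {
        log: *const fn (*anyopaque, []const u8) void,
    };

    pub fn log(self: Logger, msg: []const u8) void {
        self.vtable.log(self.ptr, msg);
    }
};

const StdoutLogger = struct {
    prefix: []const u8,

    const vtable = Logger.VTable{ .log = &logImpl };

    pub fn logger(self: *StdoutLogger) Logger {
        return .{ .ptr = @ptrCast(self), .vtable = &vtable };
    }

    fn logImpl(ptr: *anyopaque, msg: []const u8) void {
        const self: *StdoutLogger = @ptrCast(@alignCast(ptr));
        std.debug.print("[{s}] {s}\n", .{ self.prefix, msg });
    }
};

// FIX: heap-allocate the concrete type so it outlives the stack frame.
// The caller now owns the StdoutLogger and is responsible for destroy().
fn makeLogger(allocator: std.mem.Allocator) !*StdoutLogger {
    const sl = try allocator.create(StdoutLogger);
    sl.* = .{ .prefix = "APP" };
    return sl;
}

test "heap-backed logger survives the creating function" {
    const sl = try makeLogger(std.testing.allocator);
    defer std.testing.allocator.destroy(sl);
    const l = sl.logger();
    l.log("still valid"); // the pointer behind the interface is alive
}
```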
Exercises
1. Build a Reader interface with a read(buffer: []u8) anyerror!usize method. Implement SliceReader that reads from a fixed byte slice (tracking position), and ZeroReader that always fills the buffer with zeroes. Write tests for both using the testing allocator (even though these particular readers don't allocate -- it's a good habit).
2. Extend the Logger with a level parameter -- add const Level = enum { info, warn, err }; and change the log function signature to fn (*anyopaque, Level, []const u8) void. The BufferLogger should store the level as a prefix string ("[INFO]", "[WARN]", "[ERR]") alongside the message. Test that logging at different levels produces the correct output.
3. Create a Hasher interface with two methods: update(bytes: []const u8) void and final() u64. Implement Djb2Hasher (start with 5381, for each byte: hash = hash *% 33 +% byte) and Fnv1aHasher (start with 0xcbf29ce484222325, for each byte: hash ^= byte; hash *%= 0x100000001b3). Verify that both produce consistent output for the same input, and different output from each other.
4. Read the source of std.mem.Allocator in the Zig standard library. Find the VTable definition. Then look at std.heap.GeneralPurposeAllocator and find where it fills in the vtable. Trace one call from allocator.alloc(u8, 100) through the vtable dispatch to the GPA implementation. Write down each step.
5. Refactor the step sequencer's storage.zig from ep011 to use a Writer interface instead of writing directly to std.fs.File. The save function should accept a Writer instead of a file path. This lets you test it by passing a BufferWriter and checking the output bytes, without touching the filesystem at all.
6. Create a Middleware pattern: a Logger that wraps another Logger. The TimestampLogger prepends a timestamp to every message, then forwards to the inner logger. Chain two: TimestampLogger wrapping a BufferLogger. Verify the output includes timestamps. This is the decorator pattern, implemented with type erasure.
What we learned
- Type erasure = *anyopaque + function pointers (or a vtable) = runtime polymorphism
- The pattern: the interface struct holds ptr + vtable, the concrete type provides a method that returns the interface
- @ptrCast and @alignCast recover the concrete type inside the implementation functions
- std.mem.Allocator and std.io.Writer are the canonical standard library examples -- you've been using this pattern since ep007
- VTables as const structs live in static memory -- zero allocation overhead per interface instance
- Testing becomes trivial: inject a buffer-backed implementation through the interface, check what happened
- Comptime generics for compile-time dispatch, type erasure for runtime dispatch -- use the right tool for the job
- Watch lifetimes: the concrete type must outlive the interface value (the compiler can't check this through *anyopaque)
Next time we're looking at the other side of the generics coin -- comptime parameters. How fn Stack(comptime T: type) type actually works, how to write functions that accept anytype, and how the compiler generates specialized code without the overhead of type erasure. If you've been curious about how that Stack(i32) from ep012's TDD example really works under the hood -- that's what we're covering ;-)