Learn Zig Series (#9) - Comptime (Zig's Superpower)
What will I learn
- You will learn what compile-time execution (comptime) is and why it matters;
- comptime variables and inline for loops that unroll at compile time;
- comptime function parameters and how they determine return types;
- how Zig implements generics -- functions that return types;
- building a generic RingBuffer data structure with comptime;
- compile-time type introspection with @typeInfo and @typeName;
- compile-time validation with @compileError;
- comptime data-driven patterns -- lookup tables baked into the binary;
- the comptime vs runtime mental model and when to use which;
- real-world patterns: generic containers, validation, reflection, and compile-time computed data.
Requirements
- A working modern computer running macOS, Windows or Ubuntu;
- An installed Zig 0.14+ distribution (download from ziglang.org);
- The ambition to learn Zig programming.
Difficulty
- Beginner
Curriculum (of the Learn Zig Series):
- Zig Programming Tutorial - ep001 - Intro
- Learn Zig Series (#2) - Hello Zig, Variables and Types
- Learn Zig Series (#3) - Functions and Control Flow
- Learn Zig Series (#4) - Error Handling (Zig's Best Feature)
- Learn Zig Series (#5) - Arrays, Slices, and Strings
- Learn Zig Series (#6) - Structs, Enums, and Tagged Unions
- Learn Zig Series (#7) - Memory Management and Allocators
- Learn Zig Series (#8) - Pointers and Memory Layout
- Learn Zig Series (#9) - Comptime (Zig's Superpower) (this post)
Learn Zig Series (#9) - Comptime (Zig's Superpower)
Welcome back! In episode #8 we went deep into pointers and memory layout -- single-item pointers (*T) vs const pointers (*const T), passing data by reference, how slices are really just pointer-plus-length under the hood, many-item pointers for C interop, and three different struct layouts (default, extern, packed). We used @ptrCast to reinterpret memory, built optional pointer chains for linked lists, and combined everything with the allocator patterns from ep007 in a heap-allocated price tracker.
At the end of ep007 I mentioned that comptime was coming -- Zig's ability to execute code at compile time, generating types and functions from compile-time data. And at the end of ep008 I said that pointers give you power over the machine at the lowest level. Well, comptime gives you power over the compiler itself ;-)
If I had to pick one feature that makes Zig unique among all programming languages I've worked with -- and I've worked with quite a few over the years -- it would be comptime. Not error unions (those are brilliant, but other languages have sum types). Not explicit allocators (powerful, but conceptually straightforward). Comptime. The ability to run arbitrary Zig code during compilation, producing types, constants, and fully optimized data structures that get baked into the binary with zero runtime overhead. This single mechanism replaces C's preprocessor macros, C++ template metaprogramming, Rust's generics and proc macros, and Python's metaclasses. All of them. One keyword.
Let's dive right in.
Solutions to Episode 8 Exercises
Before we start on new material, here are the solutions to last episode's exercises. As always, if you actually typed these out and compiled them (and I really hope you did!), compare your solutions:
Exercise 1 -- swap two values:
const std = @import("std");
fn swap(a: *f64, b: *f64) void {
const tmp = a.*;
a.* = b.*;
b.* = tmp;
}
pub fn main() void {
var x: f64 = 64000.0;
var y: f64 = 68500.0;
std.debug.print("Before: x={d:.0}, y={d:.0}\n", .{ x, y });
swap(&x, &y);
std.debug.print("After: x={d:.0}, y={d:.0}\n", .{ x, y });
}
Classic pointer pattern -- take *f64 to both values, save one in a temp, swap via dereference. The &x and &y in main pass the addresses. After the swap, x holds 68500 and y holds 64000. Same memory, rearranged.
Exercise 2 -- linked list average with high/low: walk with while (current) |node|, accumulate sum, track min and max, divide by count. Same optional pointer unwrapping from ep004, applied to the ?*PriceNode chain.
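If you want code to compare against, here's a minimal sketch of that walk. The PriceNode layout and field names are assumptions based on ep008 -- adapt them to your own struct:

```zig
const std = @import("std");

const PriceNode = struct {
    price: f64,
    next: ?*PriceNode,
};

// Walk the chain once, accumulating sum, min, and max.
fn stats(head: ?*PriceNode) struct { avg: f64, low: f64, high: f64 } {
    var sum: f64 = 0;
    var low = std.math.inf(f64);
    var high = -std.math.inf(f64);
    var count: usize = 0;
    var current = head;
    while (current) |node| : (current = node.next) {
        sum += node.price;
        low = @min(low, node.price);
        high = @max(high, node.price);
        count += 1;
    }
    const avg = if (count == 0) 0 else sum / @as(f64, @floatFromInt(count));
    return .{ .avg = avg, .low = low, .high = high };
}

pub fn main() void {
    // Stack-allocated nodes just for the demo; ep008 used the heap.
    var c = PriceNode{ .price = 63800, .next = null };
    var b = PriceNode{ .price = 68500, .next = &c };
    var a = PriceNode{ .price = 64000, .next = &b };
    const s = stats(&a);
    std.debug.print("avg={d:.0} low={d:.0} high={d:.0}\n", .{ s.avg, s.low, s.high });
}
```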
Exercise 3 -- @sizeOf comparison: for { a: u8, b: u16, c: u32, d: u8 }:
- Default struct: 8 bytes (Zig reorders for optimal alignment)
- extern struct: 12 bytes (C layout with padding between fields)
- packed struct: 8 bytes (no padding, tightly packed)
If your predictions matched, you understand alignment. If they didn't -- that's exactly why you run the experiment ;-)
Exercise 4 -- reverse a slice in place:
fn reverseSlice(values: []f64) void {
var lo: usize = 0;
var hi: usize = values.len - 1;
while (lo < hi) : ({ lo += 1; hi -= 1; }) {
const tmp = values[lo];
values[lo] = values[hi];
values[hi] = tmp;
}
}
Two indices converging from the ends, swapping as they go. The slice parameter []f64 gives mutable access, and the while with a continue expression (the ({ lo += 1; hi -= 1; }) part) keeps it clean. If you used a for loop instead -- that works too. There's more than one way.
Exercise 5 -- price tracker with removeFirst: save head.next before allocator.destroy(head), return the new head. The "save before free" pattern from the walkthrough. If the GPA reports zero leaks, you got it right.
Exercise 6 -- byte inspection with @ptrCast: cast *u32 to *[4]u8, print each byte in hex. For 0xCAFEBABE on little-endian: BE BA FE CA. Least significant byte first.
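A minimal sketch of that cast, in case you want to compare:

```zig
const std = @import("std");

pub fn main() void {
    const value: u32 = 0xCAFEBABE;
    // Reinterpret the four bytes of the u32 as a byte array.
    const bytes: *const [4]u8 = @ptrCast(&value);
    for (bytes) |b| {
        std.debug.print("{X:0>2} ", .{b});
    }
    std.debug.print("\n", .{});
}
```

On a little-endian machine this prints BE BA FE CA, as described above.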
Now -- comptime!
What Is Comptime?
In most programming languages, code runs at exactly one time: when you execute the program. The compiler translates your source code into machine instructions, and those instructions run later. The compiler doesn't execute your functions -- it just translates them.
Zig is different. In Zig, the compiler can execute your code during compilation. Not a separate macro language. Not a template system. Not a preprocessor with its own syntax. Your actual Zig code -- the same functions, the same loops, the same conditionals you write for runtime -- can run at compile time. The results get baked into the binary as constants, types, or optimized code.
The keyword is comptime. When you mark something as comptime, you're telling the compiler: "evaluate this right now, during compilation, not later when the program runs."
Here's the simplest possible example:
const std = @import("std");
pub fn main() void {
comptime var sum: u32 = 0;
const values = [_]u32{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
inline for (values) |v| {
sum += v;
}
std.debug.print("Sum: {d}\n", .{sum}); // 55
}
Output:
Sum: 55
The comptime var sum exists only during compilation. The inline for unrolls the loop at compile time -- each iteration happens inside the compiler, not in your running program. The binary that comes out contains just the constant 55. No loop. No addition. No runtime computation whatsoever. The compiler did all the work, and the program just prints a constant.
If you're coming from Python, you might think "so what? Python can compute sum(range(1, 11)) pretty fast too." True. But Zig's comptime operates at a completely different level. It doesn't just compute values -- it can compute types. It can create structs, set array sizes, generate specialized code paths, and validate constraints. All at compile time.
Let me show you what that actually means in practice.
Comptime Function Parameters
When a function parameter is marked comptime, the argument MUST be known at compile time. The compiler substitutes the value in and can use it for things that require compile-time knowledge -- like determining array sizes and return types:
const std = @import("std");
fn repeat(comptime n: u32, value: u8) [n]u8 {
return [_]u8{value} ** n;
}
pub fn main() void {
const five_x = repeat(5, 'X');
const ten_dash = repeat(10, '-');
std.debug.print("{s}\n{s}\n", .{ &five_x, &ten_dash });
}
Output:
XXXXX
----------
Look at that return type: [n]u8. The n comes from the comptime n: u32 parameter. When you call repeat(5, 'X'), the compiler substitutes n = 5 and the return type becomes [5]u8. When you call repeat(10, '-'), the return type is [10]u8. Two different calls, two different return types. The compiler generates specialized code for each one.
Try calling repeat(5, 'X') and repeat(10, '-') in the same program and assigning them to variables -- you'll find they have different types. [5]u8 is NOT the same type as [10]u8. This is completely impossible in Python (no compile-time type generation), requires heavyweight template machinery in C++, and needs generic parameters plus const generics in Rust. In Zig, it's just... a function with a comptime parameter. That's it.
The key insight: comptime parameters don't exist at runtime. They're consumed by the compiler. The compiled binary doesn't contain any trace of n -- it contains the fully expanded, specialized function for each distinct call site.
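You can verify the "different types" claim with @TypeOf and @typeName. This sketch fills the array with @memset rather than the ** operator -- same behavior, just a different way to write the body:

```zig
const std = @import("std");

fn repeat(comptime n: u32, value: u8) [n]u8 {
    var result: [n]u8 = undefined;
    @memset(&result, value); // fill all n bytes with the runtime value
    return result;
}

pub fn main() void {
    const five = repeat(5, 'X');
    const ten = repeat(10, '-');
    // Each call site gets its own concrete array type.
    std.debug.print("{s} vs {s}\n", .{
        @typeName(@TypeOf(five)),
        @typeName(@TypeOf(ten)),
    });
}
```

This prints `[5]u8 vs [10]u8` -- two distinct types from one function.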
How Zig Does Generics (Functions That Return Types)
This is where comptime goes from "neat optimization trick" to "fundamental language mechanism." In most languages, generics are a separate feature with their own syntax: ArrayList<T> in Java, Vec<T> in Rust, std::vector<T> in C++. Zig doesn't have a separate generics system. It doesn't need one. Generics in Zig are just functions that take comptime type parameters and return a type.
Let me show you the pattern, starting small:
const std = @import("std");
fn Pair(comptime A: type, comptime B: type) type {
return struct {
first: A,
second: B,
fn display(self: @This()) void {
std.debug.print("({}, {})\n", .{ self.first, self.second });
}
};
}
pub fn main() void {
const IntPair = Pair(i32, i32);
const MixedPair = Pair(f64, bool);
const coords = IntPair{ .first = 42, .second = 99 };
const trade_signal = MixedPair{ .first = 68500.0, .second = true };
coords.display(); // (42, 99)
trade_signal.display(); // (6.85e4, true)
}
Output:
(42, 99)
(6.85e4, true)
Read that function signature again: fn Pair(comptime A: type, comptime B: type) type. A function that takes two types as arguments and returns a type. The returned type is a struct with fields typed by the arguments. Pair(i32, i32) returns a struct with two i32 fields. Pair(f64, bool) returns a struct with one f64 and one bool. Different input types, different output types.
This is how ALL generic data structures work in Zig. std.ArrayList(f64) from ep007? It's a function that takes comptime T: type and returns a specialized struct with .items: []T, .append(), .deinit(), etc. std.StringHashMap(f64)? Same pattern -- a comptime function returning a specialized type. std.AutoHashMap(K, V)? Two comptime type parameters. Every generic container in the standard library is just a function that returns a type. No special generics syntax. No angle brackets. No where clauses. Just comptime.
Let me build something more substantial to drive this home.
Building a Generic RingBuffer
A ring buffer (also called circular buffer) is a fixed-size collection that wraps around. When it's full, new elements overwrite the oldest ones. Useful for keeping a sliding window of the last N data points -- recent prices, log entries, sensor readings, whatever. Let me build a generic one with comptime:
const std = @import("std");
fn RingBuffer(comptime T: type, comptime capacity: usize) type {
return struct {
items: [capacity]T = undefined,
head: usize = 0,
count: usize = 0,
const Self = @This();
fn push(self: *Self, value: T) void {
const idx = (self.head + self.count) % capacity;
self.items[idx] = value;
if (self.count < capacity) {
self.count += 1;
} else {
self.head = (self.head + 1) % capacity;
}
}
fn latest(self: Self) ?T {
if (self.count == 0) return null;
return self.items[(self.head + self.count - 1) % capacity];
}
fn oldest(self: Self) ?T {
if (self.count == 0) return null;
return self.items[self.head];
}
fn isFull(self: Self) bool {
return self.count == capacity;
}
fn get(self: Self, index: usize) ?T {
if (index >= self.count) return null;
return self.items[(self.head + index) % capacity];
}
fn average(self: Self) f64 {
if (self.count == 0) return 0;
var sum: f64 = 0;
for (0..self.count) |i| {
sum += @as(f64, @floatFromInt(self.items[(self.head + i) % capacity]));
}
return sum / @as(f64, @floatFromInt(self.count));
}
};
}
pub fn main() void {
var prices = RingBuffer(u32, 5){};
// Push 7 values into a buffer that holds 5
prices.push(64000);
prices.push(65200);
prices.push(63800);
prices.push(67100);
prices.push(68400); // buffer is now full
prices.push(66900); // overwrites 64000
prices.push(69200); // overwrites 65200
std.debug.print("Count: {d} (capacity: 5)\n", .{prices.count});
std.debug.print("Full: {}\n", .{prices.isFull()});
std.debug.print("Oldest: {d}\n", .{prices.oldest().?});
std.debug.print("Latest: {d}\n", .{prices.latest().?});
std.debug.print("5-period SMA: {d:.0}\n", .{prices.average()});
std.debug.print("\nAll values (oldest to newest):\n", .{});
for (0..prices.count) |i| {
std.debug.print(" [{d}] = {d}\n", .{ i, prices.get(i).? });
}
}
Output:
Count: 5 (capacity: 5)
Full: true
Oldest: 63800
Latest: 69200
5-period SMA: 67080
All values (oldest to newest):
[0] = 63800
[1] = 67100
[2] = 68400
[3] = 66900
[4] = 69200
Let me unpack what comptime is doing here.
fn RingBuffer(comptime T: type, comptime capacity: usize) type takes a type AND a size at compile time, and returns a completely specialized struct. RingBuffer(u32, 5) produces a struct with items: [5]u32. RingBuffer(f64, 100) would produce a different struct with items: [100]f64. Different types. Different sizes. Different structs. All from the same function.
const Self = @This() is a comptime-only built-in that returns the type of the enclosing struct. Since the struct is anonymous (it's created inside the function), @This() gives you a way to refer to it by name within its own methods. You saw self: *Account in ep006 and ep008 -- same idea, but Self is the comptime-computed name for the returned anonymous struct.
The undefined initializer on items means the array starts with uninitialized memory. We used undefined in ep007 for FixedBufferAllocator buffers. Same principle -- we'll write before we read, so zero-initializing would be wasted work.
The push method uses modular arithmetic (% operator) to wrap around. When count < capacity, we haven't filled the buffer yet, so we increment count. Once full, we advance head instead -- the oldest element's slot becomes the new element's slot. No allocation. No shifting. Constant time insertion regardless of buffer size.
Notice that latest() and oldest() return ?T -- optionals. Empty buffer returns null. Same optional pattern from ep004 and ep008. The average() method uses @floatFromInt for the integer-to-float conversion because Zig doesn't do implicit numeric conversions (we covered that in ep002).
The entire ring buffer lives on the stack. No allocator needed. No heap. The fixed capacity is part of the type itself, determined at compile time. Compare this to the ArrayList from ep007 which needs an allocator and grows dynamically -- ring buffers are for when you know the maximum size in advance and want zero-allocation, deterministic behavior.
Type Introspection with @typeInfo
Zig gives you full compile-time reflection -- the ability to inspect the structure of any type during compilation. No runtime reflection overhead. No reflection API that might fail. The compiler knows everything about every type, and @typeInfo lets your comptime code access that knowledge:
const std = @import("std");
const Trade = struct {
pair: []const u8,
price: f64,
quantity: f64,
side: bool,
};
fn describeStruct(comptime T: type) void {
const info = @typeInfo(T).@"struct";
std.debug.print("=== {s} ===\n", .{@typeName(T)});
std.debug.print(" Size: {d} bytes\n", .{@sizeOf(T)});
std.debug.print(" Fields ({d}):\n", .{info.fields.len});
inline for (info.fields) |field| {
std.debug.print(" .{s}: {s} (offset {d}, size {d})\n", .{
field.name,
@typeName(field.type),
@offsetOf(T, field.name),
@sizeOf(field.type),
});
}
}
pub fn main() void {
describeStruct(Trade);
}
Output:
=== main.Trade ===
Size: 40 bytes
Fields (4):
.pair: []const u8 (offset 0, size 16)
.price: f64 (offset 16, size 8)
.quantity: f64 (offset 24, size 8)
.side: bool (offset 32, size 1)
@typeInfo(T) returns a tagged union (remember those from ep006?) describing everything about a type -- its fields, their types, their default values, alignment, and more. The .@"struct" syntax accesses the struct-specific variant (the @"" syntax is Zig's way of using reserved words as identifiers).
inline for is critical here. A regular for loop runs at runtime. An inline for unrolls at compile time -- the compiler generates separate code for each field. This is necessary because field.name and field.type are comptime values that differ per iteration. A runtime loop can't handle values that are different at each iteration when those values determine types and offsets.
@typeName returns the human-readable name of a type as a []const u8 string. @sizeOf returns the size in bytes. @offsetOf returns the byte offset of a field within the struct. All comptime built-ins, all resolved during compilation. The binary doesn't contain any reflection machinery -- just the pre-computed print statements.
This is how serialization libraries, debug formatters, and ORM-like systems work in Zig. You write a generic function that inspects any struct's fields at compile time and generates specialized code for that specific struct. Zero runtime cost. No reflect package. No runtime type descriptors. Just comptime.
Imagine building a CSV exporter: iterate over fields at compile time, generate a header row from field names, then serialize each struct instance's values in field order. One generic function handles any struct you throw at it. That's the power of compile-time reflection.
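Here's a rough sketch of that CSV idea. The printCsv function and this trimmed-down Trade struct are made up for illustration, but the @typeInfo + inline for + @field mechanics are exactly the ones from the example above:

```zig
const std = @import("std");

const Trade = struct {
    pair: []const u8,
    price: f64,
    quantity: f64,
};

// Generic CSV printer: header row from field names, then one row
// per value, with the field loop unrolled at compile time.
fn printCsv(comptime T: type, rows: []const T) void {
    const fields = @typeInfo(T).@"struct".fields;
    inline for (fields, 0..) |field, i| {
        std.debug.print("{s}{s}", .{ field.name, if (i < fields.len - 1) "," else "\n" });
    }
    for (rows) |row| {
        inline for (fields, 0..) |field, i| {
            const sep = if (i < fields.len - 1) "," else "\n";
            // The switch operand is comptime-known, so each field gets
            // exactly one specialized print call in the binary.
            switch (field.type) {
                f64 => std.debug.print("{d}{s}", .{ @field(row, field.name), sep }),
                []const u8 => std.debug.print("{s}{s}", .{ @field(row, field.name), sep }),
                else => std.debug.print("{any}{s}", .{ @field(row, field.name), sep }),
            }
        }
    }
}

pub fn main() void {
    const trades = [_]Trade{
        .{ .pair = "BTC/USD", .price = 68500.0, .quantity = 0.5 },
        .{ .pair = "ETH/USD", .price = 3200.0, .quantity = 2.0 },
    };
    printCsv(Trade, &trades);
}
```

Swap in any other struct and the same function adapts -- no code changes, no runtime reflection.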
Compile-Time Validation with @compileError
Comptime lets you catch invalid configurations before the program even exists as a binary. Instead of runtime checks that might not run until production, you get compile-time checks that fire during zig build:
const std = @import("std");
fn BoundedCounter(comptime max_val: u32) type {
if (max_val == 0) @compileError("BoundedCounter max must be at least 1");
if (max_val > 1_000_000) @compileError("BoundedCounter max exceeds safe limit");
return struct {
value: u32 = 0,
const max = max_val;
const Self = @This();
fn increment(self: *Self) void {
if (self.value < max) {
self.value += 1;
}
}
fn reset(self: *Self) void {
self.value = 0;
}
fn percentage(self: Self) f64 {
return @as(f64, @floatFromInt(self.value)) / @as(f64, @floatFromInt(max)) * 100.0;
}
};
}
pub fn main() void {
var counter = BoundedCounter(100){};
var i: u32 = 0;
while (i < 150) : (i += 1) {
counter.increment();
}
std.debug.print("Value: {d}/{d} ({d:.1}%)\n", .{
counter.value, BoundedCounter(100).max, counter.percentage(),
});
// These lines would NOT compile -- try uncommenting them:
// var zero = BoundedCounter(0){}; // "max must be at least 1"
// var huge = BoundedCounter(9999999){}; // "max exceeds safe limit"
}
Output:
Value: 100/100 (100.0%)
@compileError stops compilation with your message. It doesn't throw an exception. It doesn't log a warning. It prevents the binary from being created. The invalid configuration never reaches production because the program physically cannot be compiled with those parameters.
Compare this to runtime validation. In Python, you'd write if max_val == 0: raise ValueError(...) -- but that only fires when the code actually executes. If nobody tests with max_val=0, the bug ships. With @compileError, the bug is caught the moment someone writes BoundedCounter(0) anywhere in the codebase, regardless of whether that code path would ever execute at runtime. The compiler checks it because comptime evaluation is exhaustive -- every comptime code path is evaluated during compilation.
This pattern is all over Zig's standard library. Try creating an ArrayList of void or a HashMap with an invalid hash function -- you'll get a @compileError explaining exactly what you did wrong and why. It's documentation that enforces itself.
Comptime Data-Driven Patterns
Since comptime values get baked into the binary, you can define configuration and lookup tables as compile-time constants with zero runtime allocation:
const std = @import("std");
const Severity = enum { low, medium, high, critical };
const AlertRule = struct {
name: []const u8,
threshold: f64,
severity: Severity,
};
const alert_rules = [_]AlertRule{
.{ .name = "Price spike", .threshold = 5.0, .severity = .high },
.{ .name = "Volume surge", .threshold = 200.0, .severity = .medium },
.{ .name = "Flash crash", .threshold = -10.0, .severity = .critical },
.{ .name = "Drift warning", .threshold = 2.0, .severity = .low },
};
fn checkAlerts(change_pct: f64, volume_ratio: f64) void {
inline for (alert_rules) |rule| {
const triggered = switch (rule.severity) {
.critical => change_pct <= rule.threshold,
.high => change_pct >= rule.threshold,
.medium => volume_ratio >= rule.threshold,
.low => @abs(change_pct) >= rule.threshold,
};
if (triggered) {
std.debug.print("[{s}] {s} triggered (threshold: {d:.1})\n", .{
@tagName(rule.severity), rule.name, rule.threshold,
});
}
}
}
pub fn main() void {
std.debug.print("=== Scenario 1: moderate move ===\n", .{});
checkAlerts(3.2, 150.0);
std.debug.print("\n=== Scenario 2: flash crash ===\n", .{});
checkAlerts(-12.5, 450.0);
}
Output:
=== Scenario 1: moderate move ===
[low] Drift warning triggered (threshold: 2.0)
=== Scenario 2: flash crash ===
[critical] Flash crash triggered (threshold: -10.0)
[medium] Volume surge triggered (threshold: 200.0)
[low] Drift warning triggered (threshold: 2.0)
The alert_rules array is a compile-time constant -- it's baked into the binary's data section, not allocated on the stack or heap at runtime. The inline for unrolls the loop, generating a separate if check for each rule. No loop overhead, no array indexing, no bounds checking. The compiled code is equivalent to four sequential if statements, each with its specific threshold and severity hard-coded.
@tagName converts an enum variant to its name as a string -- another comptime built-in. And @abs is a built-in that computes the absolute value (works for both integers and floats).
This pattern is powerful for configuration that's known at compile time: feature flags, protocol definitions, supported formats, validation rules. In Python you'd put these in a dictionary or a config file and parse them at startup. In Zig, they exist as literal machine instructions in the binary -- no parsing, no allocation, no startup cost.
Comptime String Processing
Comptime isn't limited to numbers and types. You can process strings at compile time too. Here's a comptime function that formats a version string:
const std = @import("std");
fn comptimeConcat(comptime parts: []const []const u8, comptime sep: []const u8) []const u8 {
comptime {
var total_len: usize = 0;
for (parts, 0..) |part, i| {
total_len += part.len;
if (i < parts.len - 1) total_len += sep.len;
}
var result: [total_len]u8 = undefined;
var pos: usize = 0;
for (parts, 0..) |part, i| {
for (part) |c| {
result[pos] = c;
pos += 1;
}
if (i < parts.len - 1) {
for (sep) |c| {
result[pos] = c;
pos += 1;
}
}
}
// Copy into a const first: the returned slice must not
// reference a comptime var, only a finalized constant.
const final = result;
return &final;
}
}
const app_name = "portfolio-tracker";
const version = comptimeConcat(&.{ "v", "1", ".", "4", ".", "2" }, "");
const full_banner = comptimeConcat(&.{ app_name, " ", version }, "");
pub fn main() void {
std.debug.print("{s}\n", .{full_banner});
}
Output:
portfolio-tracker v1.4.2
The comptime { ... } block forces everything inside to execute at compile time. The comptimeConcat function builds a string by computing the total length, creating a fixed-size array (using the comptime-known length!), and filling it character by character. All during compilation. The binary contains just the literal string "portfolio-tracker v1.4.2" -- no concatenation, no allocation, no std.fmt calls at runtime.
Notice the &.{ ... } syntax for anonymous struct/array literals. This is a common Zig pattern you'll see everywhere -- it creates an array inline. We've been using it since ep005 for format argument tuples in std.debug.print, and here it creates a []const []const u8 (a slice of string slices).
Comptime vs Runtime -- The Mental Model
Here's the framework for thinking about when to use comptime:
| | Comptime | Runtime |
|---|---|---|
| When | During zig build | During program execution |
| Inputs | Must be known at compile time | Can be dynamic (user input, files, network) |
| Outputs | Types, constants, baked data | Values, side effects, I/O |
| Cost | Longer compile time | CPU cycles during execution |
| Errors | @compileError -- program can't be built | Error unions, panics |
| Loops | inline for -- unrolled into the binary | Regular for -- executed each iteration |
| Memory | No allocation (baked into binary) | Stack or heap allocation |
The question to ask yourself: "Is this value known before the program runs?" If yes, consider making it comptime. If no, it must be runtime.
Some things MUST be comptime:
- Array sizes: [n]u8 requires comptime n
- Type parameters: ArrayList(T) requires comptime T
- Struct field types: determined at compile time
- inline for loop ranges: must be comptime-known
Some things CANNOT be comptime:
- User input (you don't know it until the program runs)
- File contents (exist on disk, not in source code)
- Network data (arrives at runtime)
- Anything that depends on the program's state during execution
And some things CAN go either way -- that's where judgment comes in. A lookup table with 100 entries? Comptime if the entries are known at build time. A configuration object? Comptime if it's hardcoded, runtime if it's loaded from a config file. A mathematical function? Comptime if called with constant arguments, runtime if called with dynamic ones.
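To make the "can go either way" point concrete, here's a sketch where one ordinary function (a hypothetical fib) runs in both phases -- once to size an array at compile time, once with a runtime argument:

```zig
const std = @import("std");

// An ordinary function -- no comptime keyword anywhere in its body.
fn fib(n: u32) u32 {
    if (n < 2) return n;
    var a: u32 = 0;
    var b: u32 = 1;
    for (2..n + 1) |_| {
        const next = a + b;
        a = b;
        b = next;
    }
    return b;
}

pub fn main() void {
    // Comptime call: fib(10) = 55 becomes an array length, so the
    // compiler must (and does) evaluate the function during compilation.
    var buffer: [fib(10)]u8 = undefined;
    buffer[0] = 0;
    // Runtime call: the argument is no longer comptime-known.
    var n: u32 = 12;
    n += 1;
    std.debug.print("len={d}, fib({d})={d}\n", .{ buffer.len, n, fib(n) });
}
```

Same function, two execution phases -- the call site, not the function, decides which.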
The beautiful thing about Zig's comptime is that regular code and comptime code are the same language. You don't learn a separate macro language like in C. You don't learn template syntax like in C++. You don't learn proc macro crates like in Rust. You just add the comptime keyword to existing Zig code, and the compiler executes it at build time. Same syntax, same semantics, different execution phase.
Putting It Together: A Comptime-Configured Monitoring System
Let me show you how comptime, generics, type introspection, and validation combine into something practical -- a monitoring system where the configuration is validated at compile time:
const std = @import("std");
fn Monitor(comptime Config: type) type {
// Validate that Config has the required fields
const info = @typeInfo(Config).@"struct";
comptime {
var has_name = false;
var has_threshold = false;
var has_check_fn = false;
for (info.fields) |field| {
if (std.mem.eql(u8, field.name, "name")) has_name = true;
if (std.mem.eql(u8, field.name, "threshold")) has_threshold = true;
if (std.mem.eql(u8, field.name, "check_fn")) has_check_fn = true;
}
if (!has_name) @compileError("Monitor config must have a 'name' field");
if (!has_threshold) @compileError("Monitor config must have a 'threshold' field");
if (!has_check_fn) @compileError("Monitor config must have a 'check_fn' field");
}
return struct {
config: Config,
trigger_count: u32 = 0,
last_value: f64 = 0,
const Self = @This();
fn check(self: *Self, value: f64) void {
self.last_value = value;
if (self.config.check_fn(value, self.config.threshold)) {
self.trigger_count += 1;
std.debug.print("[ALERT] {s}: value={d:.2} threshold={d:.2} (#{d})\n", .{
self.config.name,
value,
self.config.threshold,
self.trigger_count,
});
}
}
};
}
const UpperBound = struct {
name: []const u8,
threshold: f64,
check_fn: *const fn (f64, f64) bool = &above,
fn above(value: f64, threshold: f64) bool {
return value > threshold;
}
};
const LowerBound = struct {
name: []const u8,
threshold: f64,
check_fn: *const fn (f64, f64) bool = &below,
fn below(value: f64, threshold: f64) bool {
return value < threshold;
}
};
pub fn main() void {
var high_alert = Monitor(UpperBound){
.config = .{ .name = "Price ceiling", .threshold = 70000.0 },
};
var low_alert = Monitor(LowerBound){
.config = .{ .name = "Price floor", .threshold = 60000.0 },
};
const samples = [_]f64{ 65000, 68000, 71200, 59800, 72000, 63000, 58500 };
for (samples) |price| {
high_alert.check(price);
low_alert.check(price);
}
std.debug.print("\nHigh alerts: {d}, Low alerts: {d}\n", .{
high_alert.trigger_count, low_alert.trigger_count,
});
}
Output:
[ALERT] Price ceiling: value=71200.00 threshold=70000.00 (#1)
[ALERT] Price floor: value=59800.00 threshold=60000.00 (#1)
[ALERT] Price ceiling: value=72000.00 threshold=70000.00 (#2)
[ALERT] Price floor: value=58500.00 threshold=60000.00 (#2)
High alerts: 2, Low alerts: 2
This example pulls together everything from this episode and the previous ones:
Comptime generics: Monitor(Config) takes any configuration struct type. The function returns a specialized monitor type that's tailored to that specific config. Monitor(UpperBound) and Monitor(LowerBound) are completely different types with different internal logic.
Comptime validation: The comptime { ... } block at the top of Monitor inspects the config struct's fields using @typeInfo. If the config is missing name, threshold, or check_fn, compilation fails with a clear error message. Try passing in struct { x: u32 } and the compiler will tell you exactly what's missing. This is like a compile-time interface check -- Zig doesn't have interfaces, but comptime validation lets you enforce structural requirements.
Structs and methods from ep006. Pointer-based mutation from ep008 (the self: *Self parameter in check). Error patterns from ep004 (though this example doesn't use error unions -- it validates at compile time instead).
The function pointer field (check_fn: *const fn (f64, f64) bool) is a runtime value -- the actual comparison function. We haven't covered function pointers explicitly, but you've already seen them in action: every struct method that takes self is conceptually a function that receives a pointer to the struct. Here we're storing a function pointer as a field, which lets different config types use different comparison logic.
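If function pointers are new to you, here's the pattern in isolation -- above is a made-up comparison function, mirroring the one in the example:

```zig
const std = @import("std");

fn above(value: f64, threshold: f64) bool {
    return value > threshold;
}

pub fn main() void {
    // Store the address of a function, then call through the pointer.
    const check: *const fn (f64, f64) bool = &above;
    std.debug.print("{}\n", .{check(71200.0, 70000.0)}); // prints true
}
```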
When NOT to Use Comptime
Comptime is powerful, but it's not always the right tool. Here are the situations where runtime is the better choice:
User-defined data: If the data comes from outside the program (config files, command-line args, databases, API responses), it's inherently runtime. You can't know it at compile time because it doesn't exist until the program runs.
Large lookup tables: Comptime data gets embedded in the binary. A table with 10,000 entries? That's 10,000 entries in your executable. Sometimes it's better to load data at runtime from a file.
Dynamic sizes: If you need a collection that grows based on runtime input, use ArrayList with an allocator (ep007). Comptime arrays have fixed sizes determined at compile time.
Complex computation: Comptime execution has limits. The compiler will bail out if your comptime code takes too many steps (to prevent infinite loops during compilation). For heavy computation, runtime is appropriate.
The rule of thumb: use comptime for configuration, types, validation, and small constant data. Use runtime for dynamic data, I/O, and user interaction. When you start splitting your programs into multiple files and modules, you'll find that comptime naturally handles the "shape" of your program (types, interfaces, validation) while runtime handles the "behavior" (logic, I/O, state changes).
Exercises
You know the drill by now. Type these out. Compile them. Read the compiler errors -- they're particularly educational for comptime code, because the compiler shows you exactly which comptime branch failed and why.
1. Write fn Pair(comptime A: type, comptime B: type) type that returns a struct with fields first: A and second: B, plus a display method that prints both values. Create a Pair(i32, f64) and a Pair([]const u8, bool) and call display on both.
2. Write fn generateSquares(comptime n: usize) [n]u32 that returns an array where each element is the square of its index: [0, 1, 4, 9, 16, ...]. Use a comptime var and a loop inside a comptime { } block to fill the array. Call it with generateSquares(10) and print all values.
3. Write a describeType function using @typeInfo that prints all fields of any struct passed to it -- field name, field type, and byte offset. Test it on at least two different structs.
4. Create a BoundedValue comptime-generic type where the bounds are comptime parameters: fn BoundedValue(comptime min: i32, comptime max: i32) type. Use @compileError to reject cases where min >= max. The returned struct should have a set method that clamps the value to the bounds. Test it with BoundedValue(0, 100) and try to set values outside the range.
5. Write a comptime lookup table: define a const array of structs (e.g. HTTP status codes with their text descriptions), then write a function that uses inline for to search the table at compile time and return the matching description. Call it with both a comptime argument (should be resolved at compile time) and a runtime argument (should use the unrolled loop at runtime).
6. Build a generic Stack(comptime T: type, comptime max_size: usize) type with push, pop, peek, and isEmpty methods. The stack should use a fixed-size array (no allocator needed). Validate at compile time that max_size > 0. Test it by pushing and popping values, and verify that pop returns null on an empty stack and push returns false when the stack is full.
Exercises 1-2 test basic comptime mechanics. Exercise 3 tests type introspection. Exercise 4 combines generics with compile-time validation. Exercise 5 tests data-driven comptime patterns. Exercise 6 is a full generic data structure -- put everything together.
What Comptime Means For Your Zig Journey
We've now covered the core trio that makes Zig unique: explicit memory management (ep007 -- you choose the allocator), pointers and memory layout (ep008 -- you understand what's happening at the machine level), and now comptime (ep009 -- you control what the compiler does). These three features together give you a level of control that very few languages offer. You can decide exactly what happens at compile time vs runtime, exactly how memory is allocated and freed, and exactly how data is arranged in memory.
Everything we build from here will use these foundations. When we start organizing code into multiple files and modules, @import itself is a comptime operation -- the module system is built on the same comptime machinery. When we build a real project from scratch, comptime will let us define configurations that are validated before the program even runs, generic components that work with any data type, and compile-time computed tables that cost nothing at runtime.
Having said that, don't feel like you need to use comptime everywhere. It's a tool, not a religion. Start with runtime code. When you find yourself writing the same struct with different types, reach for a comptime generic. When you find yourself validating configuration at startup, consider whether that validation could happen at compile time instead. Let the need drive the adoption.
The fact that generics, validation, reflection, and code generation are all just... functions with comptime parameters... that's what makes Zig's design so elegant. One mechanism. No special syntax. No separate language. Just comptime.