# v2026.3.23 — :service Profile: True M:N Fiber Concurrency

Released: March 23, 2026
Janus now has production-grade structured concurrency. Not a wrapper around pthreads. Not green threads with a global lock. Real M:N fibers — lightweight cooperative tasks multiplexed onto OS worker threads with work-stealing, structured lifetimes, and typed channel communication.
## What Changed

### Fiber-Based Main

`main()` now runs as a fiber on the CBC-MN scheduler. No boilerplate. No scheduler initialization. You write `nursery do...end` in `main()` and it works.
```
func main() do
  nursery do
    let h1 = spawn fetch("https://api.alpha.com/data")
    let h2 = spawn fetch("https://api.beta.com/data")

    let a = await h1
    let b = await h2

    println("Alpha: ", a.status)
    println("Beta: ", b.status)
  end
end
```

Before this release, `main()` ran on a bare OS thread. Nursery and spawn were available but required manual scheduler setup. Now the scheduler starts when the program starts and shuts down when `main()` returns.
### Heap-Allocated Spawn Arguments

`spawn` now heap-allocates argument thunks via the scheduler’s arena. Previously, spawn arguments were stack-captured — which meant any non-trivial argument (strings, structs, slices) could be invalidated if the parent fiber’s stack frame advanced before the child read its arguments.
The new protocol:
- Evaluate all arguments on the spawning fiber’s stack
- Allocate a thunk struct on the scheduler arena
- Copy arguments into the thunk
- Pass the thunk pointer to the child fiber
This is M:N safe — fibers can be stolen by any worker thread, and the thunk remains valid for the child’s entire lifetime.
### Individual Task Await

`await handle` now targets a specific task — not “wait for all children.” This enables fine-grained result collection:
```
nursery do
  let fast = spawn quick_check()
  let slow = spawn deep_analysis()

  let check = await fast
  if not check.ok do
    // Cancel slow task early -- don't wait
    fail CheckFailed
  end

  let analysis = await slow
  publish(analysis)
end
```

The implementation uses an atomic CAS waiter protocol:
- Awaiter registers on the target task’s waiter slot (atomic CAS)
- If the task already completed, CAS fails and the awaiter reads the result immediately (fast path)
- If the task is still running, the awaiter’s fiber parks and the completing fiber wakes it
No locks. No condition variables. Pure atomic coordination.
### Per-Task Nursery Stacks

Each spawned fiber now gets its own dedicated stack, allocated from the nursery’s memory pool. Stack sizes are profile-gated with per-spawn overrides:
| Profile | Default | Rationale |
|---|---|---|
| :core | 64 KB | Compute-focused, minimal I/O |
| :service | 256 KB | Real Zig stdlib interop (sorts, directory iteration) |
| :cluster | 256 KB | Actors + supervisors |
| :sovereign | 512 KB | Crypto operations, proof chains |
Previously, all fibers in a nursery shared a single pre-allocated stack region. This was fragile — a deep call chain in one fiber could overflow into another’s memory. Now each fiber is isolated.
### CSP Channels

Typed, bounded, closeable channels for inter-fiber communication:
```
let work = channel(100)
let results = channel(100)

nursery do
  // Producer
  spawn func() do
    for item in items do
      work.send(item)
    end
    work.close()
  end

  // 4 workers
  for w in 0..4 do
    spawn func() do
      while let job = work.recv() do
        results.send(process(job))
      end
    end
  end

  // Collector
  spawn func() do
    var count = 0
    while let result = results.recv() do
      store(result)
      count = count + 1
    end
    println("Processed: ", count)
  end
end
```

Full API: `send`, `recv`, `trySend`, `tryRecv`, `close`, `isClosed`.
### Select

Multiplexed channel operations with timeout and default cases:

```
select do
  recv data_ch |msg| do
    handle(msg)
  end
  recv control_ch |cmd| do
    execute(cmd)
  end
  timeout 5000 do
    println("Heartbeat timeout")
  end
end
```

## Performance Characteristics

- Fiber creation: ~2 µs (stack allocation + thunk copy)
- Context switch: ~50 ns (register swap, no kernel transition)
- Channel send/recv (unbuffered): ~100 ns (atomic + fiber wake)
- Await (fast path): ~20 ns (single atomic CAS)
- Memory per fiber: ~4 KB overhead + configured stack size
The scheduler uses Chase-Lev work-stealing deques — idle workers steal from the tail of busy workers’ queues. This gives good load distribution without centralized locking.
## The Full Picture

```
main() ─── fiber on scheduler
 │
 └── nursery
      ├── spawn task_a() ─── fiber on worker thread 0
      ├── spawn task_b() ─── fiber on worker thread 1
      │    └── nursery (nested)
      │         ├── spawn subtask() ─── fiber (may be stolen by any worker)
      │         └── spawn subtask() ─── fiber
      ├── channel(10) ─── bounded typed message queue
      └── select ─── multiplexed channel wait
```

Every fiber belongs to a nursery. Every nursery cleans up before its scope exits. Every error propagates structurally. No orphans. No leaks. No silent failures.
## What’s Next

- Actor model — :cluster profile with virtual actors (grains) and location-transparent messaging
- Supervision trees — automatic restart policies for long-running services
- Distributed channels — cross-node channel communication via the Libertaria protocol stack
- Preemptive fairness — optional reduction-counting for CPU-bound fibers that don’t yield
## Documentation

- Tutorial: Structured Concurrency — hands-on introduction
- Reference: Concurrency Primitives — complete API documentation
- Reference: M:N Scheduler & Fibers — scheduler internals
“Structured concurrency is not a feature. It is a guarantee.”