

v2026.3.23 — :service Profile: True M:N Fiber Concurrency


Released: March 23, 2026

Janus now has production-grade structured concurrency. Not a wrapper around pthreads. Not green threads with a global lock. Real M:N fibers — lightweight cooperative tasks multiplexed onto OS worker threads with work-stealing, structured lifetimes, and typed channel communication.

main() now runs as a fiber on the CBC-MN scheduler. No boilerplate. No scheduler initialization. You write nursery do...end in main() and it works.

func main() do
    nursery do
        let h1 = spawn fetch("https://api.alpha.com/data")
        let h2 = spawn fetch("https://api.beta.com/data")
        let a = await h1
        let b = await h2
        println("Alpha: ", a.status)
        println("Beta: ", b.status)
    end
end

Before this release, main() ran on a bare OS thread. Nursery and spawn were available but required manual scheduler setup. Now the scheduler starts when the program starts and shuts down when main() returns.

spawn now heap-allocates argument thunks via the scheduler’s arena. Previously, spawn arguments were stack-captured — which meant any non-trivial argument (strings, structs, slices) could be invalidated if the parent fiber’s stack frame advanced before the child read its arguments.

The new protocol:

  1. Evaluate all arguments on the spawning fiber’s stack
  2. Allocate a thunk struct on the scheduler arena
  3. Copy arguments into the thunk
  4. Pass the thunk pointer to the child fiber

This is M:N safe — fibers can be stolen by any worker thread, and the thunk remains valid for the child’s entire lifetime.

await handle now targets a specific task — not “wait for all children.” This enables fine-grained result collection:

nursery do
    let fast = spawn quick_check()
    let slow = spawn deep_analysis()
    let check = await fast
    if not check.ok do
        // Cancel slow task early -- don't wait
        fail CheckFailed
    end
    let analysis = await slow
    publish(analysis)
end

The implementation uses an atomic CAS waiter protocol:

  • Awaiter registers on the target task’s waiter slot (atomic CAS)
  • If the task already completed, CAS fails and the awaiter reads the result immediately (fast path)
  • If the task is still running, the awaiter’s fiber parks and the completing fiber wakes it

No locks. No condition variables. Pure atomic coordination.

Each spawned fiber now gets its own dedicated stack, allocated from the nursery’s memory pool. Stack sizes are profile-gated with per-spawn overrides:

Profile      Default   Rationale
:core        64 KB     Compute-focused, minimal I/O
:service     256 KB    Real Zig stdlib interop (sorts, directory iteration)
:cluster     256 KB    Actors + supervisors
:sovereign   512 KB    Crypto operations, proof chains

Previously, all fibers in a nursery shared a single pre-allocated stack region. This was fragile — a deep call chain in one fiber could overflow into another’s memory. Now each fiber is isolated.

Typed, bounded, closeable channels for inter-fiber communication:

let work = channel(100)
let results = channel(100)
nursery do
    // Collector
    spawn func() do
        var count = 0
        while let result = results.recv() do
            store(result)
            count = count + 1
        end
        println("Processed: ", count)
    end
    // Producer + 4 workers in a nested nursery, so we know when they finish
    nursery do
        // Producer
        spawn func() do
            for item in items do
                work.send(item)
            end
            work.close()
        end
        // 4 workers
        for w in 0..4 do
            spawn func() do
                while let job = work.recv() do
                    results.send(process(job))
                end
            end
        end
    end
    // All workers have exited -- close results so the collector can finish
    results.close()
end

Full API: send, recv, trySend, tryRecv, close, isClosed.

Multiplexed channel operations with timeout and default cases:

select do
    recv data_ch |msg| do
        handle(msg)
    end
    recv control_ch |cmd| do
        execute(cmd)
    end
    timeout 5000 do
        println("Heartbeat timeout")
    end
end

  • Fiber creation: ~2 µs (stack allocation + thunk copy)
  • Context switch: ~50 ns (register swap, no kernel transition)
  • Channel send/recv (unbuffered): ~100 ns (atomic + fiber wake)
  • Await (fast path): ~20 ns (single atomic CAS)
  • Memory per fiber: ~4 KB overhead + configured stack size

The scheduler uses Chase-Lev work-stealing deques: each worker pushes and pops tasks at the bottom of its own deque (LIFO, cache-hot), while idle workers steal the oldest tasks from the top. This gives good load distribution without centralized locking.


main() ─── fiber on scheduler
└── nursery
    ├── spawn task_a() ─── fiber on worker thread 0
    ├── spawn task_b() ─── fiber on worker thread 1
    │   └── nursery (nested)
    │       ├── spawn subtask() ─── fiber (may be stolen by any worker)
    │       └── spawn subtask() ─── fiber
    ├── channel(10) ─── bounded typed message queue
    └── select ─── multiplexed channel wait

Every fiber belongs to a nursery. Every nursery cleans up before its scope exits. Every error propagates structurally. No orphans. No leaks. No silent failures.


  • Actor model — :cluster profile with virtual actors (grains) and location-transparent messaging
  • Supervision trees — automatic restart policies for long-running services
  • Distributed channels — cross-node channel communication via the Libertaria protocol stack
  • Preemptive fairness — optional reduction-counting for CPU-bound fibers that don’t yield


“Structured concurrency is not a feature. It is a guarantee.”