
Janus Profiles

Every great journey starts somewhere. Janus meets you where you are.

Janus is one language. But it’s also six different ways to use it — called profiles. Think of profiles as different lenses you can look through. The language stays the same. The lens changes what you can see and do.

Profiles are not different languages. Your :core code compiles identically under :service or :cluster. Nothing breaks. Nothing changes. Profiles only open doors — they never close them behind you.

Profiles restrict complexity, not capability. Start at :core, and every other profile is right there when you need it. You can even mix profiles within the same file with a single one-liner override.


| Profile | What it is | Perfect for |
| --- | --- | --- |
| :core | The honest foundation. Deterministic, minimal, explicit. | Learning Janus, CLI tools, embedded systems, ESP32 hobby projects |
| :script | :core with training wheels. REPL, inferred types, top-level code. | Exploration, prototyping, data science, learning |
| :service | Application engineering. Error-as-values, channels, async. | Web services, REST APIs, microservices, business logic |
| :cluster | Distributed systems. Actors, grains, supervision trees. | Game servers, chat systems, resilient backends, Metaverse infrastructure |
| :compute | Parallel compute. Tensors, GPU/NPU kernels, memory spaces. | AI/ML inference, scientific computing, physics simulations |
| :sovereign | Total control. Raw pointers, effects, capabilities. | Operating systems, device drivers, high-performance systems |

┌─────────────────────────────────────────────────────────────┐
│                         :sovereign                          │
│          Raw pointers · Full effects · Capabilities         │
│                       "The Citadel"                         │
├─────────────────────────────────────────────────────────────┤
│                          :cluster                           │
│          Actors · Grains · Supervision · Migration          │
│                       "The Sanctum"                         │
├─────────────────────────────────────────────────────────────┤
│                          :service                           │
│              Async · Channels · Error-as-values             │
│                      "The Workshop"                         │
├─────────────────────────────────────────────────────────────┤
│                           :core                             │
│              Deterministic · Explicit · Honest              │
│                      "The Monastery"                        │
└─────────────────────────────────────────────────────────────┘

You never have to leave a profile. If :core does everything you need, stay there. There’s no shame in it. The Monastery is a perfectly valid place to live.

But if you ever want to add a GPU kernel to your :core script, or make your :service API distributed — the doors are open. That’s the Janus promise.


:core — The Monastery

The foundation. Every other profile extends :core. Nothing is hidden, nothing is magical.

What you get:

  • 6 types: i64, f64, bool, String, Array, HashMap
  • 8 constructs: func, let, var, if, else, for, while, return
  • No concurrency. No async. No hidden allocations.
  • Native Janus modules first; explicit use zig only for bridge modules or generated wrappers

What it excludes:

  • No spawn, send, receive (those are :cluster)
  • No async/await (those are :service)
  • No comptime metaprogramming (that’s :sovereign)
  • No raw pointers (also :sovereign)

Perfect for:

  • Learning Janus from first principles
  • CLI tools and automation scripts
  • Embedded systems (ESP32, Raspberry Pi Pico) and other constrained environments
  • Anywhere determinism and predictability matter

Example:

func main() do
  let fibs = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
  for n in fibs do
    if n > 10 do
      print_int(n)  // prints 13, 21, 34
    end
  end
end

:script

Same capabilities as :core, but with the training wheels on.

What you get (on top of :core):

  • Implicit types (the compiler figures it out)
  • Top-level code (no main() required)
  • REPL — run janus script and type directly
  • Reflection via ASTDB access

What it excludes:

  • Not publishable — must migrate to :core for production

Perfect for:

  • Learning Janus interactively
  • Prototyping an idea in 10 minutes
  • Data exploration (Python-like feel)
  • Homework problems, algorithm competitions

Note: Use :script to explore. Use :core to ship.
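A minimal sketch of what a :script file looks like — top-level code, no type annotations, no main(). The pragma line and print follow the syntax used elsewhere on this page; everything else is illustrative, not spec:

{.profile: script.}

let fibs = [0, 1, 1, 2, 3, 5, 8]   // type inferred — no annotation needed
var total = 0
for n in fibs do
  total = total + n
end
print("sum = {total}")             // top-level code — no main() required

The same logic in :core would need an explicit func main() and explicit types — which is exactly the migration step before shipping.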


:service — The Workshop

Everything :core does — plus the tools for building real applications.

What you get (on top of :core):

  • Error-as-values: !Result, catch, try
  • Structured concurrency: async, await, nurseries
  • CSP channels: typed message passing between tasks
  • select — wait on multiple channels
  • Generics with constraints
  • using statement for resource cleanup
  • defer for guaranteed teardown

What it excludes:

  • No actors or grains (those are :cluster)
  • No tensors or GPU code (those are :compute)
  • No raw pointers or unsafe blocks (those are :sovereign)

Perfect for:

  • Web services and REST APIs
  • Background job processors
  • Database clients and ORMs
  • Business logic layer
  • Anything that needs to handle errors gracefully

Example:

async func fetch_user(id: UserId, ctx: *Context) !User do
  let row = try ctx.db.query("SELECT * FROM users WHERE id = ?", [id])
  return try User.from_row(row)
end

async func main() do
  let user = try fetch_user(user_id, ctx)
  print(user.name)
end
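:service also lists CSP channels, nurseries, and select, which the example above doesn't show. This page gives no syntax for them, so the sketch below is purely illustrative — Channel.new, recv, and the nursery block are assumed names, not spec:

async func main() do
  let ch = Channel<i64>.new(8)     // hypothetical typed-channel constructor
  nursery do                       // hypothetical structured-concurrency scope
    async produce(ch)              // assumed helper tasks
    async consume(ch)
  end                              // nursery waits for both tasks to finish
end

async func first_of(a: Channel<i64>, b: Channel<i64>) do
  select do                        // wait on whichever channel is ready first
    x = a.recv() => print("from a: {x}")
    y = b.recv() => print("from b: {y}")
  end
end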

:cluster — The Sanctum

Everything :service does — plus the actor model for distributed, fault-tolerant systems.

What you get (on top of :service):

  • actor — a pinned concurrent entity with a typed mailbox
  • grain — a migratable actor that can move between nodes
  • supervisor — OTP-style restart hierarchies (one_for_one, one_for_all, rest_for_one)
  • Typed message protocols — sealed algebraic types for exhaustive matching
  • Capability contracts — @requires declares hardware/software demands
  • Location transparency — address local and remote grains with the same syntax
  • Memory sovereignty tags — Local.Exclusive, Session.Replicated, Volatile.Ephemeral

What it excludes:

  • No tensors or GPU kernels (those are :compute)
  • No raw pointers or unsafe blocks (those are :sovereign)

Perfect for:

  • Game servers that need to handle thousands of concurrent connections
  • Chat systems and real-time messaging platforms
  • Distributed databases and key-value stores
  • Metaverse infrastructure and virtual world backends
  • Anywhere a node crash should not take down the system

Example:

message StorageMsg {
  Read { sector: u64, reply: Reply[Data] },
  Write { sector: u64, data: []const u8, reply: Reply[void] },
}

@requires(cap: [.storage_nvme])
grain StorageGrain(msg: StorageMsg) do
  receive do
    StorageMsg.Read { sector, reply } => do
      let data = try read_sectors(sector)
      reply.send(Data.ok(data))
    end
    StorageMsg.Write { sector, data, reply } => do
      try write_sectors(sector, data)
      reply.send(void.ok())
    end
  end
end
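A grain like this would typically sit under an OTP-style supervisor. The page names the restart strategies (one_for_one, one_for_all, rest_for_one) but shows no supervisor syntax, so everything in this sketch is an assumption:

supervisor StorageSupervisor do
  strategy one_for_one             // restart only the child that crashed
  child StorageGrain               // the grain defined above
  child IndexGrain                 // hypothetical sibling grain
end

With one_for_one, a crash in StorageGrain restarts only StorageGrain; one_for_all would restart both children, and rest_for_one restarts the failed child plus those declared after it.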

:compute

Everything :core does — plus first-class support for tensors, streams, and hardware acceleration.

What you get (on top of :core):

  • tensor<T, Dims> — N-dimensional arrays with shape inference
  • Device streams — async GPU/NPU operations
  • Memory spaces — on sram, on dram, on vram, on shared
  • Device targeting — on device(npu), on device(gpu), on device(auto)
  • J-IR graph extraction — optimize before hitting hardware
  • Quantization — QVL, INT8, FP16 support

What it excludes:

  • No actors or grains (those are :cluster)
  • No effects system or unsafe blocks (those are :sovereign)

Perfect for:

  • AI inference (local LLMs, image classification, voice)
  • Scientific computing (physics, chemistry, climate models)
  • Signal processing and DSP
  • GPU compute shaders
  • Real-time video/image processing

Example:

func main() do
  let weights = tensor<f32, [4096, 4096]>.load("model.bin")
  let input = tensor<f32, [1, 4096]>.on(.vram)
  let result = matmul(input, weights)
      .quantize(.qvl)
      .on(.npu)
  print("Inference done in {result.latency_ms}ms")
end

:sovereign — The Citadel

All capabilities from all profiles — plus the tools for total system control.

What you get (on top of everything):

  • Raw pointers — *T with manual memory management
  • Full comptime — compile-time execution, type-level programming
  • Complete effect system — with handler inlining
  • Multiple dispatch — full overload resolution
  • unsafe blocks — for the operations that genuinely need them
  • All capabilities from all profiles

What it excludes:

  • Nothing. You have everything.

Perfect for:

  • Operating system kernels
  • Device drivers
  • Bootloaders and firmware
  • Real-time systems with hard latency requirements
  • The core infrastructure that everything else runs on

Note: :sovereign is not “dangerous Janus.” It’s “honest Janus.” When you write unsafe {}, the compiler knows you’re doing something that could go wrong. It’s all explicit.

Example:

func write_to_device(addr: *volatile u32, value: u32) void
  requires CapDeviceWrite
do
  unsafe { addr.write(value) }
end
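:sovereign's full comptime gets no example above. As a purely illustrative sketch — the comptime keyword placement and size_of are assumptions, not spec:

comptime func ring_bytes(T: type) i64 do
  return size_of(T) * 64           // evaluated entirely at compile time
end

func main() do
  let n = ring_bytes(u32)          // n is a compile-time constant
  print_int(n)
end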

Some use cases naturally span multiple capability sets. Janus handles this with meta-profiles:

{.profile: game.}       // Expands to: cluster + compute
{.profile: science.}    // Expands to: core + compute
{.profile: cloud.}      // Expands to: service + cluster
{.profile: metaverse.}  // Expands to: cluster + compute
{.profile: embedded.}   // Strict :core only — no runtime

| Meta-Profile | Composition | Best for |
| --- | --- | --- |
| :game | :cluster + :compute | Game engines, physics, real-time rendering |
| :science | :core + :compute | Astronomy, physics, climate modeling |
| :cloud | :service + :cluster | Cloud-native microservices with failover |
| :metaverse | :cluster + :compute | Persistent virtual worlds, social platforms |
| :embedded | :core (strict only) | Bare-metal, no runtime footprint |

Capability Matrix

| Capability | :core | :script | :service | :cluster | :compute | :sovereign |
| --- | --- | --- | --- | --- | --- | --- |
| Basic types & control flow | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Generics | | | ✓ | ✓ | | ✓ |
| Error-as-values (!T, try) | | | ✓ | ✓ | | ✓ |
| Async/await + channels | | | ✓ | ✓ | | ✓ |
| Actors & grains | | | | ✓ | | ✓ |
| Supervision trees | | | | ✓ | | ✓ |
| Typed message protocols | | | | ✓ | | ✓ |
| Capability contracts | | | | ✓ | | ✓ |
| Tensors & memory spaces | | | | | ✓ | ✓ |
| GPU/NPU targeting | | | | | ✓ | ✓ |
| Raw pointers & unsafe {} | | | | | | ✓ |
| Full comptime | | | | | | ✓ |
| Complete effect system | | | | | | ✓ |
| Zig stdlib (use zig) | ✓* | ✓* | ✓ | ✓ | ✓ | ✓ |

\* In :core and :script, use zig is explicit and limited to bridge modules or generated wrappers.

“I want to…”

| Task | Start Here | Upgrade To |
| --- | --- | --- |
| Blink an LED on my ESP32 | :core | |
| Learn Janus in the REPL | :script | :core |
| Write a CLI tool | :core | |
| Scrape and analyze data | :script | :core + :compute |
| Build a REST API | :service | :cluster |
| Handle 10K concurrent WebSocket connections | :service | :cluster |
| Build a chat server that never goes down | :service | :cluster |
| Run a local LLM for inference | :core + :compute | :sovereign + :compute |
| Make a game server with physics | :service | :game (:cluster + :compute) |
| Write an operating system | :core | :sovereign |
| Build a distributed key-value store | :service | :cluster |
| Train a model on GPU | :core + :compute | :sovereign + :compute |
| Write a device driver | :core | :sovereign |
| Build a sensor fusion system on an embedded board | :core | :cluster (for resilience) |
| Prototype an AI feature for my app | :script | :service + :compute |

Profiles don’t lock you out. Every profile gate can be lifted with a one-liner:

{.profile: core.}                  // This file is :core

func gpu_kernel(data: []f32) void
  requires :compute                // ← but THIS function uses :compute
do
  let result = tensor<f32, [1024]>.on(.npu)
  ...
end

The gate goes up for that one function. The rest of your file stays in :core. This is the Janus promise: profiles are a starting point, not a prison.


Every profile has two modes: strict (the default) and fluid. Strict mode is:

  • AOT compiled — ahead-of-time, deterministic
  • Explicit types and allocators
  • No training wheels

To switch a file to fluid mode, append ! to the profile name:

{.profile: service!.} // Fluid :service — inferred types, implicit allocator
{.profile: compute!.} // Fluid :compute — JIT, maximum convenience

| Aspect | Strict (Monastery) | Fluid (Bazaar) |
| --- | --- | --- |
| Compilation | AOT | JIT/interpreted |
| Types | Explicit | Inferred |
| Allocators | Always visible | Implicit scratch arena |
| Top-level code | No | Yes |
| Publishable | Yes | No |
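To make the contrast concrete, here is the same computation as two files, one strict and one fluid — a sketch assuming only syntax already shown on this page:

{.profile: core.}                  // file A — strict: explicit entry point and types
func main() do
  let xs = [2, 4, 6]
  var total: i64 = 0
  for n in xs do
    total = total + n
  end
  print_int(total)
end

{.profile: core!.}                 // file B — fluid: same logic, top level, inferred
let xs = [2, 4, 6]
var total = 0
for n in xs do
  total = total + n
end
print_int(total)

File B is faster to write; file A is the one you can publish.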

If you are a beginner: Start at :script. Play. Explore. When you’re ready to ship, migrate to :core.

If you are building a web service: Start at :service. You get everything a backend needs. Scale to :cluster when you need resilience.

If you are building an AI/ML feature: Start at :core + :compute. Or just :compute if you know what you’re doing.

If you are building a distributed system: Start at :cluster. Virtual actors, supervision trees, and capability contracts are all there from day one.

If you are building an operating system: Start at :sovereign. This is the full power of Janus. The Citadel does not hold back.

And if you just want to blink an LED on a Saturday afternoon: :core is waiting for you. No ceremony required. No 200-page language spec to read first.


One language. Many lenses. No limits on where you go.