
std.compute.polar

Profile: :compute

Polar embedding primitives for the Janus compute stack. Phase 0: conversions only. Similarity search in angle-domain arithmetic ships in Phase 1.


Most ML frameworks store embeddings as Cartesian vectors: a list of f32 values along orthogonal axes. The vector is a location. Distance metrics – cosine similarity, L2 – are computed by comparing locations.

Polar representation inverts this. A polar embedding stores what a signal is, not where it sits:

  • Radius – signal strength. How much energy or magnitude is present.
  • Angles – signal direction. The sequence of angles encodes the relationships between dimensions.

The practical consequence is that the representation becomes invariant to scaling. Two embeddings pointing in the same direction will have identical angles regardless of their magnitudes. Cosine similarity in Cartesian space collapses to a simple angle comparison in polar space – which can be computed with integer arithmetic (see std.math.trig_int).
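The scale-invariance claim is easy to check numerically. The following Python sketch (not part of this module) works the 2-D case, where the polar form is just (radius, θ):

```python
import math

# 2-D sanity check: scaling a vector changes the radius, not the angle.
def polar2(x: float, y: float) -> tuple[float, float]:
    return math.hypot(x, y), math.atan2(y, x)

r1, t1 = polar2(3.0, 4.0)
r2, t2 = polar2(30.0, 40.0)   # same direction, 10x the magnitude
print(r1, r2)                 # 5.0 50.0
print(abs(t1 - t2) < 1e-12)   # True: identical angle
```

Cosine similarity between the two vectors is cos(t1 - t2) = 1, obtained without any dot product.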

For a deeper treatment of the philosophy behind this choice, see the article The Polar-Signal Paradigm.


use std.compute.polar

Requires the :compute profile. The :compute profile is currently WIP; this module is available in the experimental channel.


The central type. Holds one polar-form embedding of dimension N.

struct PolarEmbedding[N: usize] {
    radius: f64,        // signal strength — always non-negative
    angles: [N - 1]f64, // direction angles in radians
}

An N-dimensional Cartesian vector has N components. Its polar form has one radius and N − 1 angles. The last Cartesian dimension is captured by the final atan2 call rather than an explicit angle slot, so angles has length N − 1.

Invariants:

  • radius >= 0.0
  • angles[i] in [0, π] for each inner index i in 0..N-2 (inner angles, computed via acos)
  • angles[N-2] in (-π, π] (the terminal angle, computed via atan2)

Convert a slice of f64 values to polar form. The input slice must have length N.

func from_cartesian[N: usize](v: []const f64) -> PolarEmbedding[N]

Algorithm:

  1. Compute radius as the L2 norm of v.
  2. If radius == 0.0, return a zero embedding with all angles set to 0.0.
  3. For each inner dimension i in 0..N-2: compute angles[i] = acos(v[i] / tail_norm_i) where tail_norm_i is the L2 norm of v[i..].
  4. For the final angle: angles[N-2] = atan2(v[N-1], v[N-2]).

This is the standard hyperspherical coordinate conversion. It is numerically stable: each acos argument is v[i] / tail_norm_i, and |v[i]| can never exceed the L2 norm of the suffix that contains it, so the quotient always lies in [-1, 1]. The one edge case is a zero suffix norm (every remaining component is zero), which makes the quotient 0/0; the corresponding angle is conventionally set to 0.
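The four steps above can be sketched in plain Python (illustrative only; the Janus signature above is the real API):

```python
import math

def from_cartesian(v: list[float]) -> tuple[float, list[float]]:
    """Hyperspherical conversion following the four steps above (sketch)."""
    n = len(v)
    radius = math.sqrt(sum(x * x for x in v))       # step 1: L2 norm
    if radius == 0.0:
        return 0.0, [0.0] * (n - 1)                 # step 2: zero embedding
    angles = []
    for i in range(n - 2):                          # step 3: inner angles
        tail_norm = math.sqrt(sum(x * x for x in v[i:]))
        # zero suffix norm means every remaining component is zero; angle is 0
        angles.append(math.acos(v[i] / tail_norm) if tail_norm > 0.0 else 0.0)
    angles.append(math.atan2(v[-1], v[-2]))         # step 4: terminal angle
    return radius, angles

r, a = from_cartesian([1.0, 2.0, 3.0, 4.0])
print(round(r, 3))  # 5.477
print(len(a))       # 3 angles for N = 4
```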

Reconstruct a Cartesian vector from polar form. Returns a heap-allocated slice of length N.

func to_cartesian[N: usize](p: PolarEmbedding[N], allocator: Allocator) -> []f64

Algorithm (reconstruction loop):

Starting from v[0] = radius * cos(angles[0]), each component v[i] is the running product radius * sin(angles[0]) * … * sin(angles[i-1]) multiplied by cos(angles[i]). The final component has no cosine factor: v[N-1] = radius * sin(angles[0]) * … * sin(angles[N-2]), the product of all N − 1 sines.
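The reconstruction loop can be sketched in Python (illustrative, not the module's implementation):

```python
import math

def to_cartesian(radius: float, angles: list[float]) -> list[float]:
    """Invert the hyperspherical conversion described above (sketch)."""
    v = []
    running = radius                      # radius * sin(angles[0]) * ... so far
    for a in angles:
        v.append(running * math.cos(a))   # peel off one cos factor per component
        running *= math.sin(a)
    v.append(running)                     # final component: product of all sines
    return v

# 2-D check: radius 5 at the angle of (3, 4) reconstructs [3.0, 4.0].
x = to_cartesian(5.0, [math.atan2(4.0, 3.0)])
print([round(c, 6) for c in x])  # [3.0, 4.0]
```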

Doctrinal note: to_cartesian exists for debugging and round-trip verification. In production inference pipelines you should stay in polar space and operate on angles directly. Converting back to Cartesian defeats the purpose.


use std.compute.polar
use std.mem
func main(allocator: Allocator) do
    // A 4-dimensional Cartesian embedding
    let v: [4]f64 = [1.0, 2.0, 3.0, 4.0]

    // Convert to polar
    let p = polar.from_cartesian[4](v[..])
    println("radius = ", p.radius) // ≈ 5.477
    println("angles[0] = ", p.angles[0])
    println("angles[1] = ", p.angles[1])
    println("angles[2] = ", p.angles[2])

    // Round-trip back to Cartesian
    let reconstructed = polar.to_cartesian[4](p, allocator)
    defer allocator.free(reconstructed)

    // Should be within floating-point epsilon of the original
    println("reconstructed[0] = ", reconstructed[0]) // ≈ 1.0
    println("reconstructed[3] = ", reconstructed[3]) // ≈ 4.0
end

use std.compute.polar

func is_strong_signal[N: usize](v: []const f64, threshold: f64) -> bool do
    let p = polar.from_cartesian[N](v)
    return p.radius >= threshold
end

use std.compute.polar

/// Convert a batch of Cartesian rows to polar form.
/// Each row has dimension N. Total slice length: rows * N.
func batch_to_polar[N: usize](
    rows: []const f64,
    out: []polar.PolarEmbedding[N],
) do
    let num_rows = rows.len / N
    for 0..num_rows |i| do
        let row = rows[i * N .. (i + 1) * N]
        out[i] = polar.from_cartesian[N](row)
    end
end

Phase    Scope                                                                  Status
Phase 0  from_cartesian, to_cartesian, PolarEmbedding struct                    This document — available now
Phase 1  Angle-domain similarity (cosine_polar, dot_polar) – no sqrt required   Planned
Phase 2  Batch SIMD operations, polar_mean, polar_centroid                      Planned
Phase 3  Integer polar paths bridging into std.math.trig_int for NPU targets    Planned

Phase 1 is where the real payoff lands. Cosine similarity between two polar embeddings reduces to comparing their angle sequences – no dot product, no square root. The planned formula is 1 − Σ|Δangle_i| / ((N − 1) · π), computed entirely in angle space.
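Until cosine_polar ships, the planned formula can be prototyped in Python; the function name below is illustrative, not the Phase 1 API:

```python
import math

def cosine_polar_sketch(angles_a: list[float], angles_b: list[float]) -> float:
    """Sketch of the planned angle-domain similarity: 1 - sum|Δθ| / ((N-1)·π)."""
    n_angles = len(angles_a)      # N - 1 angles for dimension N
    total = sum(abs(a - b) for a, b in zip(angles_a, angles_b))
    return 1.0 - total / (n_angles * math.pi)

# Identical angle sequences score 1.0.
print(cosine_polar_sketch([0.5, 1.2, 0.3], [0.5, 1.2, 0.3]))  # 1.0
```

Note that the terminal angle lives in (-π, π], so a production version would also wrap Δθ for that slot before summing; the sketch skips that detail.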


Connection to PolarQuant, BitNet, and SASA Kenya


std.compute.polar is the foundational layer for three converging systems:

  • PolarQuant – Quantises activations in polar space rather than Cartesian space. The key claim is that quantisation error in the radius dimension is perceptually less harmful than error in the angle dimensions, because angles encode direction (meaning) and radius encodes strength (magnitude).

  • BitNet × trig_int – When BitNet i8 weights are converted to polar via std.math.trig_int.cartesian_to_polar_i8, the result plugs directly into the integer polar pipeline. Phase 3 of this module will provide a typed bridge between the integer polar form and PolarEmbedding.

  • SASA Kenya inference – The 6× throughput target in SASA v0.2.0 depends on the angle-domain dot product landing in Phase 1 of this module. Until then, SASA uses from_cartesian for representation and falls back to standard cosine similarity for scoring.


See also: std.math.trig · std.math.trig_int · The Polar-Signal Paradigm