A launch-ready research system for long-context reasoning. Typed. Bounded. Analysable.

Give LLMs recursion with rules.

λ-RLM turns recursive long-context reasoning from improvised agent loops into a typed functional runtime: inspectable plans, bounded leaf calls, and deterministic composition.

29/36 wins across benchmark settings
+21.9 accuracy points (reported peak gain)
4.1× lower latency (reported speedup)
typed runtime
λ-rlm plan --task long_context_reasoning
✓ SPLIT(document, k=8)
✓ MAP(leaf_solver, bounded_context=τ*)
✓ REDUCE(evidence, typed_aggregator)
✓ answer = deterministic composition + neural leaves
No arbitrary recursive programs. No mystery control flow. Just typed composition.
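The plan above, rendered as ordinary Python: a hedged sketch in which `split`, `leaf_solver`, and `reduce_evidence` are illustrative stand-ins for the runtime's combinators (not the actual λ-RLM API), and the leaf is a trivial keyword counter rather than a model call.

```python
# Hypothetical sketch of the SPLIT -> MAP -> REDUCE plan shown above.
# All names here are illustrative, not the real λ-RLM API.
from functools import reduce

def split(document: str, k: int) -> list[str]:
    """SPLIT: cut the document into roughly k equal chunks."""
    n = max(1, len(document) // k)
    return [document[i:i + n] for i in range(0, len(document), n)]

def leaf_solver(chunk: str) -> dict:
    """MAP leaf: a bounded call; here a stand-in that counts a keyword."""
    return {"hits": chunk.count("needle"), "chars": len(chunk)}

def reduce_evidence(acc: dict, item: dict) -> dict:
    """REDUCE: deterministic, typed aggregation of leaf evidence."""
    return {"hits": acc["hits"] + item["hits"],
            "chars": acc["chars"] + item["chars"]}

document = "hay " * 100 + "needle " + "hay " * 100
chunks = split(document, k=8)                          # SPLIT
evidence = [leaf_solver(c) for c in chunks]            # MAP
answer = reduce(reduce_evidence, evidence,
                {"hits": 0, "chars": 0})               # REDUCE
```

Swapping the keyword counter for a real model call changes the leaf, not the plan: the SPLIT/MAP/REDUCE skeleton stays fixed and inspectable.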
The shift

From agentic chaos to typed recursion.

λ-RLM keeps the LLM for intelligence, but refuses to let it improvise the entire execution engine.

Standard RLM

Powerful, but the control flow is generated on the fly.

1. LLM writes recursive code
2. REPL loop executes it
3. Control flow can drift
4. Cost becomes hard to predict

λ-RLM

A typed functional runtime carries the recursion.

1. Typed combinators
2. SPLIT → MAP → REDUCE
3. Bounded leaf calls
4. Inspectable execution plans
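A hedged sketch of what typed combinators can look like in practice: each combinator has a fixed signature, so a whole pipeline can be type-checked before anything runs. The names mirror the plan vocabulary, but the signatures here are assumptions, not the λ-RLM API.

```python
# Illustrative typed combinators; uppercase names mirror the plan
# vocabulary (MAP, FILTER, REDUCE), not a real library.
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def MAP(f: Callable[[A], B]) -> Callable[[list[A]], list[B]]:
    """Apply f to every element; list[A] -> list[B]."""
    return lambda xs: [f(x) for x in xs]

def FILTER(p: Callable[[A], bool]) -> Callable[[list[A]], list[A]]:
    """Keep elements satisfying p; list[A] -> list[A]."""
    return lambda xs: [x for x in xs if p(x)]

def REDUCE(f: Callable[[B, A], B], init: B) -> Callable[[list[A]], B]:
    """Fold the list deterministically; list[A] -> B."""
    def run(xs: list[A]) -> B:
        acc = init
        for x in xs:
            acc = f(acc, x)
        return acc
    return run

def compose(*stages: Callable) -> Callable:
    """Chain stages left to right into one pipeline."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

pipeline = compose(MAP(len),
                   FILTER(lambda n: n > 3),
                   REDUCE(lambda a, b: a + b, 0))
total = pipeline(["a", "long", "chunk", "hi"])  # lengths 4 and 5 survive
```

Because every stage is a plain function with a declared shape, the pipeline is a value: it can be stored, diffed, and audited before execution.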
Why it matters

Long context is not just a memory problem. It is a control-flow problem.

Standard recursive agents often bury the reasoning structure inside generated code. λ-RLM makes the structure first-class: split, solve, compose, inspect.

Old pattern
LLM as programmer of its own loop

Flexible, but difficult to bound, test, or reason about.

λ-RLM pattern
LLM as solver inside a typed runtime

The global algorithm becomes inspectable; the neural model handles bounded leaves.

Three reasons people should care

Less chaos. More structure. Better reasoning.

No arbitrary codegen

The runtime constrains recursion to a small set of typed combinators instead of asking the model to invent entire programs.

Inspectable plans

The execution plan is visible: SPLIT, MAP, FILTER, REDUCE. That makes it easier to debug and explain.
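One way to read "the execution plan is visible": the plan is plain data that can be printed and audited before a single leaf call runs. A minimal illustration, where the `Stage` type, `describe`, and `run` helpers are hypothetical:

```python
# Sketch of "plan as data": the recursion structure is a value you can
# inspect before running it. Names are illustrative, not the λ-RLM API.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Stage:
    op: str                       # one of SPLIT, MAP, FILTER, REDUCE
    fn: Callable[[Any], Any]

def describe(plan: list[Stage]) -> str:
    """The whole control flow is visible without executing anything."""
    return " -> ".join(s.op for s in plan)

def run(plan: list[Stage], doc: Any) -> Any:
    """Deterministic execution: stages fire in declared order, nothing else."""
    x = doc
    for stage in plan:
        x = stage.fn(x)
    return x

plan = [
    Stage("SPLIT",  lambda doc: doc.split(". ")),
    Stage("MAP",    lambda chunks: [len(c) for c in chunks]),
    Stage("FILTER", lambda lens: [n for n in lens if n > 2]),
    Stage("REDUCE", sum),
]

print(describe(plan))  # SPLIT -> MAP -> FILTER -> REDUCE
```

Debugging becomes reading a four-line plan instead of reverse-engineering generated code.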

Faster by design

Bounded leaf calls let the system avoid throwing the whole long-context problem at the model every time.
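A sketch of the cost argument: if every leaf sees at most a fixed budget of input, per-call cost has a hard ceiling no matter how long the document grows. `bounded_chunks` and the `budget` parameter are illustrative names, not λ-RLM's.

```python
# Illustrative bound: no leaf call ever sees more than `budget` characters,
# so total cost scales linearly and predictably with document length.
from typing import Callable, TypeVar

T = TypeVar("T")

def bounded_chunks(document: str, budget: int) -> list[str]:
    """Never hand a leaf more than `budget` characters."""
    return [document[i:i + budget] for i in range(0, len(document), budget)]

def solve(document: str, leaf: Callable[[str], T], budget: int = 4000) -> list[T]:
    # Each leaf call is O(budget); the document's total length only
    # changes how many calls are made, never how big any one call is.
    return [leaf(chunk) for chunk in bounded_chunks(document, budget)]

# A 10,000-character document becomes three bounded calls, not one huge one.
results = solve("x" * 10_000, leaf=len, budget=4000)
```

The same budget also makes latency and spend predictable up front: calls = ceil(length / budget), each capped at the budget.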

At a glance

The core shift behind λ-RLM.

A visual summary of how λ-RLM moves from free-form recursive code generation to typed recursive reasoning.

λ-RLM

Stop letting LLMs invent their own control flow.

Typed recursive long-context reasoning with SPLIT, MAP, FILTER, and REDUCE.

Before
LLM codegen → REPL loop → recursive drift → unpredictable cost
After
typed combinators → bounded leaves → deterministic composition
29/36 wins
+21.9 points
4.1× faster

Recursive reasoning, but engineered.

Star the repo, read the paper, and help make long-context agents less chaotic.