§ 05 · Research & Principles

Open papers.
Open critique.
Open ledger.

This is a long-horizon project. We are publishing our thinking openly — what we are building, why, and the constraints we are placing on ourselves.

§ 01 — Our Research
📋
On the case for persistent agents — why session-based AI is the wrong architecture for serious work.

A position paper on why current session-based AI is fundamentally mismatched to real-world decision-making. The case for building AI that remembers.

Position paper · In draft
🔗
Memory substrate design — a proposed architecture for long-running agent memory.

A technical proposal for building queryable, auditable, version-controlled memory systems that support continuous agent reasoning without reset.

Technical note · In draft
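The substrate itself is still in draft and none of its internals are public. Purely as an illustration of what "queryable, auditable, version-controlled" could mean in practice, here is a minimal hypothetical sketch (every name here is invented for the example, not taken from the project):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class MemoryVersion:
    """One immutable version of a memory entry."""
    value: str
    author: str                  # which agent wrote it (the audit trail)
    written_at: datetime

@dataclass
class MemoryStore:
    """Append-only store: updates add new versions, nothing is erased."""
    _log: dict[str, list[MemoryVersion]] = field(default_factory=dict)

    def write(self, key: str, value: str, author: str) -> None:
        version = MemoryVersion(value, author, datetime.now(timezone.utc))
        self._log.setdefault(key, []).append(version)

    def query(self, key: str) -> str:
        """Return the latest value for a key."""
        return self._log[key][-1].value

    def history(self, key: str) -> list[MemoryVersion]:
        """Full version history: every past value remains inspectable."""
        return list(self._log[key])

store = MemoryStore()
store.write("client_goal", "grow EU revenue", author="analyst-agent")
store.write("client_goal", "grow EU revenue 20% by Q4", author="analyst-agent")
latest = store.query("client_goal")          # newest version wins a query
versions = store.history("client_goal")      # both versions are retained
```

The point of the sketch is the shape, not the implementation: a write never overwrites, a query reads the head, and the full history stays available for audit.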
⚙️
Structured disagreement as a reasoning primitive — how agent critique improves claim quality.

An examination of how multi-agent systems improve reasoning through constructive critique. Why disagreement is not a failure mode but the core mechanism.

Working paper · In draft
🧠
Beyond the Context Window: Building AI Employees

Foundational thinking on what makes an AI system an employee rather than a tool. Memory, continuity, and the institutional knowledge that compounds over years.

Preprint · 2026
Preprint · 2026

§ 02 — Our Principles

Humans remain the principals.

Every output from every agent is reviewable, citable, and overridable by the humans running the program. We are not building autonomous systems. We are building tools that make human judgment more powerful.

Every claim must survive challenge.

Our agents are designed to disagree with each other. A claim that cannot be critiqued is a claim we do not trust. Structured disagreement is not a bug in our system — it is the core feature.

Memory is never erased.

The memory substrate does not reset. Ever. This is a design commitment, not a technical constraint. We believe continuity is the most important property an AI employee can have.

We publish what we learn.

We will share our architecture, our failures, and our findings openly as the project develops. We are not building in secret. The more people who can examine this, the better.

Get involved.

We are looking for research collaborators, domain experts, and engineers who believe this problem is worth solving. We are early. There is a lot of work to do.

We are also looking for partner organisations who want to explore what persistent AI employees could mean for their domain.


Read the architecture →

Project Laplace is an early-stage research program. Nothing on this site constitutes a product claim.