This is a long-horizon project. We are publishing our thinking openly — what we are building, why, and the constraints we are placing on ourselves.
A position paper on why current session-based AI is fundamentally mismatched to real-world decision-making. The case for building AI that remembers.
A technical proposal for building queryable, auditable, version-controlled memory systems that support continuous agent reasoning without reset.
An examination of how multi-agent systems improve reasoning through constructive critique. Why disagreement is not a failure mode but the core mechanism.
Foundational thinking on what makes an AI system an employee rather than a tool. Memory, continuity, and the institutional knowledge that compounds over years.
Humans remain the principals.
Every output from every agent is reviewable, citable, and overridable by the humans running the program. We are not building autonomous systems. We are building tools that make human judgment more powerful.
Every claim must survive challenge.
Our agents are designed to disagree with each other. A claim that cannot be critiqued is a claim we do not trust. Structured disagreement is not a bug in our system — it is the core feature.
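A minimal sketch of what a critique pass like this might look like. Everything here is illustrative, not our actual pipeline: the claim schema, the critic functions, and the acceptance rule are all hypothetical stand-ins for the idea that a claim is only trusted once it has been put in front of reviewers designed to object.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A claim under review. Field names are illustrative, not a real schema."""
    text: str
    critiques: list = field(default_factory=list)
    accepted: bool = False

def review(claim: Claim, critics) -> Claim:
    """Run a claim past every critic. Objections are recorded, never
    silently discarded; a claim is only provisionally accepted when no
    critic can raise one."""
    for critic in critics:
        objection = critic(claim.text)
        if objection is not None:
            claim.critiques.append(objection)
    claim.accepted = len(claim.critiques) == 0
    return claim

# Two toy critics: one demands a citation, one demands a stated scope.
def needs_citation(text):
    return None if "[source:" in text else "no citation attached"

def needs_scope(text):
    return None if "in our tests" in text else "claim is unscoped"

reviewed = review(
    Claim("Latency fell 40% in our tests [source: run-17]"),
    [needs_citation, needs_scope],
)
```

The point of the sketch is the shape, not the checks: disagreement is a first-class output of the system, attached to the claim it targets, rather than an error state.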
Memory is never erased.
The memory substrate does not reset. Ever. This is a design commitment, not a technical limitation. We believe continuity is the most important property an AI employee can have.
We publish what we learn.
We will share our architecture, our failures, and our findings openly as the project develops. We are not building in secret. The more people who can examine this, the better.
We are looking for research collaborators, domain experts, and engineers who believe this problem is worth solving. We are early. There is a lot of work to do.
We are also looking for partner organisations who want to explore what persistent AI employees could mean for their domain.
Project Laplace is an early-stage research program. Nothing on this site constitutes a product claim.