The runtime that powers the attention layer.
Memory, model routing, scheduling,
proactive execution. Open source.
Lila Core is the engine under Lila.
It exists because the surface a person uses — phone, laptop, Slack thread, any of it — is not the part that matters. The part that matters is the model of attention underneath: what is on their mind this week, who they're in the middle of a conversation with, what they captured at 11pm and meant to come back to. That model has to be persistent, it has to be portable across surfaces, and it has to be composed by something other than the person. Lila Core is the runtime that does that work.
Transport-agnostic. Memory-centered. Routed.
Surfaces read from and write to a single working-memory store in Postgres. A consolidation runtime — scheduled and on-demand — folds new events into that memory through model-routed jobs. Nightly LLM-driven consolidation produces a generative working-memory layer the surfaces draw from. Source-ID receipts on every surfaced item. Surfaces are interchangeable; the runtime is the source of truth.
surfaces           runtime                 memory

iOS                consolidator            postgres
web      ───►      model router    ───►    working_memory
slack              scheduler               + semantic recall
...                proactive ops           + source receipts
                        ▲
                        │
                 cron + on-demand
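The flow above can be sketched in miniature. Everything here is illustrative: the names (`Event`, `MemoryItem`, `consolidate`) are hypothetical, an in-memory list stands in for the Postgres store, and consolidation is a plain append where the real runtime routes the job through a model. The point is the shape of the data, especially the source-ID receipt that travels with every item.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch only. Real storage is Postgres; real consolidation
# is an LLM-routed job. Names here are invented for this example.

@dataclass
class Event:
    source_id: str    # receipt: where this item came from
    surface: str      # "ios", "web", "slack", ...
    text: str
    captured_at: datetime

@dataclass
class MemoryItem:
    summary: str
    receipts: list[str] = field(default_factory=list)  # source-IDs backing the item

def consolidate(memory: list[MemoryItem], events: list[Event]) -> list[MemoryItem]:
    """Fold new events into working memory, keeping a receipt per source."""
    for ev in events:
        memory.append(MemoryItem(summary=ev.text, receipts=[ev.source_id]))
    return memory

memory = consolidate([], [
    Event("slack:msg-123", "slack",
          "follow up with Dana re: contract",
          datetime(2024, 5, 1, 23, 0)),
])
print(memory[0].receipts)  # ['slack:msg-123']
```

Carrying the receipt through consolidation is what lets a surface show where a surfaced item came from, rather than presenting the memory layer as an unaccountable summary.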
Read it, fork it, run it.
MIT-licensed. Issues open. Architecture and contribution notes in the README.
Why this is open.
Surfaces are interchangeable. Memory is the substrate. The runtime that pays attention to a person's life shouldn't be a black box, so this one isn't — it can be inspected, forked, or replaced. The reference client that runs on top of it is lila.surf. The full argument for why this layer should exist at all is in the manifesto.