Changelog
What's new
Track every feature, improvement, and milestone in MemoryLake's journey from inception to the AI memory infrastructure powering millions of users.
OpenClaw Domain Knowledge & Platform Scale
Large-scale domain knowledge injection for OpenClaw agents: 10+ domains (academic, finance, legal, medical, and more) totaling 10PB+ of structured knowledge, plus on-premise deployment and QueryAgent v2.
- [New] Domain knowledge injection: 10+ verticals (academic, finance, legal, medical, industrial, etc.) with 10PB+ of structured knowledge
- [New] Arbitrary dimension sharing/isolation for domain knowledge across Instance, Agent, Session, and Team boundaries
- [New] QueryAgent v2: enhanced chain-of-thought memory retrieval with domain-aware reasoning
- [New] On-premise deployment with a Kubernetes operator for enterprises with strict data residency requirements
- [New] White-label option for enterprise partners and agent platform providers
- [New] WPS365, Notion, Lark, and DingTalk workspace integrations
- [Improved] Overall LoCoMo accuracy improved to 95.1% with the D1 v2 engine
- [Improved] Platform now serving 1.5M+ users across 15K+ enterprises
- [Improved] OpenClaw agent memory operations: 50M+ Memory Code runs per month
OpenClaw Integration — One-Click Memory for Agents
Deep integration with OpenClaw agent framework. One-click installation brings MemoryLake's full memory stack to any OpenClaw agent — including AgentRL, advanced conflict detection, memory provenance, and multi-granularity memory management.
- [New] OpenClaw one-click installation: add MemoryLake memory to any agent instantly
- [New] AgentRL: PPO and Actor-Critic reinforcement learning, so agents get smarter with every interaction
- [New] Advanced conflict detection: logic conflicts, implicit knowledge conflicts, and hallucination conflicts
- [New] Memory provenance: Memory Time Travel to trace any memory back through its full history
- [New] Memory source identification: know exactly where every piece of knowledge originated
- [New] Multi-granularity memory management: Instance, Agent, and Session level memory with flexible sharing/isolation policies
- [New] Foundation knowledge injection: terminology, base contracts, and Skills with arbitrary dimension sharing/isolation
- [Improved] Memory recall latency reduced to <30ms P99
- [Improved] Conflict resolution accuracy improved to 97.8% with hallucination detection
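The multi-granularity model above resolves a recall by checking the narrowest scope first: a session-level memory shadows an agent-level one, which shadows an instance-level one. MemoryLake's actual SDK is not shown in this changelog, so the sketch below is a self-contained toy illustrating that precedence; all names (`ScopedMemoryStore`, `recall`, the scope labels) are hypothetical.

```python
from dataclasses import dataclass, field

# Scope precedence, narrowest first: Session beats Agent beats Instance.
SCOPES = ("session", "agent", "instance")

@dataclass
class ScopedMemoryStore:
    """Toy model of Instance/Agent/Session-scoped memory resolution."""
    entries: dict = field(default_factory=dict)  # (scope, scope_id, key) -> value

    def write(self, scope, scope_id, key, value):
        # Writing at a broader scope (e.g. "instance") is how a memory is
        # shared; writing at a narrower scope keeps it isolated there.
        self.entries[(scope, scope_id, key)] = value

    def recall(self, key, session_id, agent_id, instance_id):
        ids = {"session": session_id, "agent": agent_id, "instance": instance_id}
        for scope in SCOPES:  # narrowest scope wins
            hit = self.entries.get((scope, ids[scope], key))
            if hit is not None:
                return hit
        return None
```

A session-local override then takes effect only inside that session, while every other session still sees the shared instance-level value.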
Built-in Open Data & Memory Code
Access 40M+ academic papers, 3M+ SEC filings, live financial data, and more directly through MemoryLake. Memory Code enables programmable memory operations.
- [New] Built-in open data: 40M+ academic papers, 3M+ SEC filings
- [New] Live financial data feeds (market data, economic indicators)
- [New] 500K+ clinical trials and 2M+ drug compound datasets
- [New] Memory Code: programmable memory operations via code runners
- [New] Distributed computing for large-scale memory operations
- [Improved] Data ingestion speed improved 10x with a parallel pipeline
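"Programmable memory operations via code runners" suggests named operations composed into pipelines that run over memory records. The changelog does not show Memory Code's real syntax, so this is a minimal hypothetical sketch of the pattern; the registry, the `dedupe`/`filter_topic` operations, and `run_pipeline` are all illustrative inventions.

```python
# Hypothetical sketch of a Memory Code-style operation registry:
# named operations are registered once, then composed into pipelines
# that transform a list of memory records left to right.

OPERATIONS = {}

def operation(name):
    def register(fn):
        OPERATIONS[name] = fn
        return fn
    return register

@operation("dedupe")
def dedupe(memories):
    """Drop records whose text has already been seen."""
    seen, out = set(), []
    for m in memories:
        if m["text"] not in seen:
            seen.add(m["text"])
            out.append(m)
    return out

@operation("filter_topic")
def filter_topic(memories, topic):
    """Keep only records tagged with the given topic."""
    return [m for m in memories if m.get("topic") == topic]

def run_pipeline(memories, steps):
    """Each step is (operation_name, kwargs); operations run in order."""
    for name, kwargs in steps:
        memories = OPERATIONS[name](memories, **kwargs)
    return memories
```

A caller would then express a memory operation as data, e.g. `run_pipeline(mems, [("dedupe", {}), ("filter_topic", {"topic": "finance"})])`, which is what makes such operations easy to distribute across runners.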
Skills Center & Multi-Agent Runtime
Skills Center enables static compilation of data, rules, and knowledge into executable skills. The Multi-Agent Runtime adds an orchestration layer for Super Plantree, Tools, Agents, and Teams.
- [New] Skills Center: compile data, rules, and knowledge into reusable skills
- [New] Multi-Agent Runtime with orchestration layer
- [New] Super Plantree: hierarchical task planning for agents
- [New] Agent memory isolation and sharing policies
- [New] AutoGPT, Manus, and OpenClaw agent integrations
- [Improved] Memory graph capacity scaled to 100M+ nodes
WorkBrain — Enterprise Knowledge Engine
Launch of WorkBrain: enterprise knowledge management and workflow intelligence that helps every individual grow faster while organizational capability continuously accumulates.
- [New] WorkBrain: enterprise knowledge and workflow engine
- [New] Team memory sharing with access controls
- [New] Office 365 integration (Outlook, Teams, SharePoint)
- [New] Google Workspace integration (Gmail, Drive, Calendar)
- [New] Meeting memory: automatic extraction from transcripts
- [New] Organizational knowledge graph with department boundaries
MemoryLake-D1 Reasoning Engine
Launch of MemoryLake-D1 — our proprietary reasoning engine purpose-built for memory retrieval, conflict resolution, and multi-hop inference.
- [New] MemoryLake-D1 reasoning engine for intelligent memory retrieval
- [New] RL-based memory optimization and ranking
- [New] Automatic memory compression for long-term storage
- [Improved] Token cost reduction: 91% average savings vs. raw context
- [Improved] Multi-hop reasoning accuracy improved to 91.2%
- [Fixed] Edge case in temporal reasoning with timezone-aware events
General Availability — MemoryLake 1.0
MemoryLake is now generally available. Complete memory infrastructure with 6 memory types, Memory Passport, conflict detection, versioning, and enterprise security.
- [New] GA release: stable API (v2), SLA-backed uptime guarantee
- [New] All 6 memory types production-ready
- [New] Qwen LLM integration
- [New] Node.js SDK (memorylake-js v1.0)
- [New] Admin dashboard with real-time memory analytics
- [Improved] P99 latency reduced to <50ms for memory recall
- [Improved] 99.9% uptime SLA for Business and Enterprise tiers
Enterprise Security & Compliance
SOC 2 Type II certification achieved. ISO 27001 compliance. Enterprise-grade security with encryption at rest and in transit, RBAC, and SSO.
- [New] SOC 2 Type II certification
- [New] ISO 27001 compliance
- [New] AES-256 encryption at rest, TLS 1.3 in transit
- [New] Role-based access control (RBAC) with fine-grained permissions
- [New] SSO integration (SAML 2.0, OIDC)
- [New] GDPR and CCPA data handling compliance
- [Improved] Row-level security for multi-tenant deployments
LakeBuilder & Data Ingestion Pipeline
LakeBuilder: the universal data ingestion engine. Extract, transform, and load memories from PDFs, Excel, audio, video, databases, and APIs.
- [New] LakeBuilder ingestion pipeline for multimodal data
- [New] PDF extractor with table and chart understanding
- [New] Excel/CSV structured data ingestion
- [New] Audio transcription → memory extraction pipeline
- [New] Database connectors (PostgreSQL, MySQL, MongoDB)
- [New] REST API and webhook-based data source connectors
- [Improved] Memory graph node capacity scaled to 10M+ per instance
LoCoMo Benchmark #1 — 94.03%
MemoryLake achieves global #1 on the LoCoMo long-conversation memory benchmark with 94.03% overall accuracy, validating our architecture across single-hop, multi-hop, temporal, and open-domain tasks.
- [New] LoCoMo benchmark integration and evaluation pipeline
- [New] Benchmark comparison dashboard in the admin console
- [Improved] Overall accuracy: 94.03% (Single-hop: 95.71%, Multi-hop: 89.38%, Temporal: 95.47%, Open-domain: 95.57%)
- [Improved] Temporal reasoning engine with calendar-aware indexing
- [Improved] Open-domain recall with hybrid vector + graph retrieval
Conflict Detection Engine
Intelligent memory conflict detection and resolution. When memories from different sources contradict each other, MemoryLake automatically detects, flags, and resolves the conflict.
- [New] Real-time conflict detection across memory sources
- [New] Configurable resolution strategies: recency, priority, confidence
- [New] Conflict audit trail with full provenance chain
- [New] Cross-source memory validation
- [Improved] Multi-hop reasoning extended to 4-hop chains
- [Improved] Memory ingestion throughput increased 5x
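The three resolution strategies named above (recency, priority, confidence) each reduce to picking a winner among contradictory candidates by a different key. MemoryLake's internal implementation is not public in this changelog, so the following is only a self-contained sketch of that idea; the record fields and function name are assumptions.

```python
from datetime import datetime

def resolve_conflict(candidates, strategy="recency"):
    """Pick one memory from a set of contradictory candidates.

    Each candidate is assumed to carry 'value', 'timestamp' (ISO 8601
    string), 'priority' (int, higher wins), and 'confidence' (0..1).
    Strategy names mirror the changelog: recency, priority, confidence.
    """
    keys = {
        "recency":    lambda m: datetime.fromisoformat(m["timestamp"]),
        "priority":   lambda m: m["priority"],
        "confidence": lambda m: m["confidence"],
    }
    return max(candidates, key=keys[strategy])
```

Making the strategy a per-instance configuration knob (rather than hard-coding one rule) is what lets, say, a compliance deployment favor high-priority authoritative sources while a personal assistant favors the most recent statement.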
Git-like Memory Versioning
Every memory change is now tracked with full version history. Roll back, diff, and audit any memory state — the Git for AI memory.
- [New] Git-like version control for all memory operations
- [New] Memory diff view between any two versions
- [New] Rollback to any previous memory state
- [New] Immutable audit log with cryptographic hashing
- [New] Reflection Memory type for meta-cognitive insights
- [Breaking] Memory API v1 → v2: new versioning endpoints
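Git-like versioning with a cryptographically hashed audit log generally means: every write appends an immutable version whose hash chains to its parent, so history is tamper-evident, any two versions can be diffed, and rollback is itself a new commit. The class below is a self-contained toy demonstrating that mechanic, not MemoryLake's actual data model; all names are illustrative.

```python
import hashlib
import json

class VersionedMemory:
    """Toy Git-like history for a single memory record."""

    def __init__(self):
        # Each entry: {"state": dict, "hash": str, "parent": str | None}
        self.versions = []

    def commit(self, state):
        """Append an immutable version whose hash chains to its parent."""
        parent = self.versions[-1]["hash"] if self.versions else None
        payload = json.dumps({"state": state, "parent": parent}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.versions.append({"state": dict(state), "hash": digest, "parent": parent})
        return digest

    def diff(self, i, j):
        """Map each changed key to its (old, new) pair between versions i and j."""
        a, b = self.versions[i]["state"], self.versions[j]["state"]
        return {k: (a.get(k), b.get(k))
                for k in set(a) | set(b) if a.get(k) != b.get(k)}

    def rollback(self, i):
        """Restore an old state by committing it as a new version,
        so the audit trail itself is never rewritten."""
        return self.commit(self.versions[i]["state"])
```

Because rollback appends rather than truncates, the hash chain stays intact: an auditor can recompute every digest from its payload and detect any retroactive edit.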
Memory Passport Alpha
First release of Memory Passport — portable, user-owned AI memory that follows you across LLMs. Initial support for ChatGPT and Claude integration.
- [New] Memory Passport: portable cross-LLM memory identity
- [New] ChatGPT integration via custom GPT actions
- [New] Claude integration via MCP (Model Context Protocol)
- [New] Privacy controls: granular memory sharing permissions
- [New] Action Memory type for behavioral pattern tracking
- [Improved] API response latency reduced from 200ms to 80ms
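A portable memory with granular sharing permissions implies an export step that filters the memory set per target LLM before anything leaves the user's store. The changelog does not specify the permission model, so the sketch below assumes a simple category-to-targets mapping; `export_passport` and the `"*"` wildcard are hypothetical.

```python
def export_passport(memories, permissions, target):
    """Build a portable memory bundle for one target LLM, keeping only
    the memories the user has shared with that target.

    `permissions` maps a memory category to the set of allowed targets;
    the "*" entry means "share with any target".
    """
    def allowed(category):
        targets = permissions.get(category, set())
        return "*" in targets or target in targets

    return [m for m in memories if allowed(m["category"])]
```

Filtering at export time (rather than trusting the receiving model to ignore restricted memories) keeps the privacy boundary on the user's side of the integration.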
Memory Graph & Multi-hop Reasoning
Introduction of the knowledge graph layer, enabling multi-hop reasoning across connected memories. A fundamental leap from flat storage to relational memory.
- [New] Knowledge graph layer for entity-relationship mapping
- [New] Multi-hop reasoning engine (initial 2-hop support)
- [New] Background Memory type for user profiles and preferences
- [New] Event Memory type with temporal indexing
- [Improved] Vector indexing performance improved 3x with HNSW
- [Fixed] Memory deduplication edge cases in concurrent writes
Project Genesis — Core Memory Engine
Initial internal release. The foundational memory storage and retrieval engine is born, supporting basic factual and conversation memory types.
- [New] Core memory storage engine with vector-based indexing
- [New] Two memory types: Factual Memory and Conversation Memory
- [New] Basic single-hop memory recall via REST API
- [New] PostgreSQL-backed persistent storage layer
- [New] Initial Python SDK (memorylake-py v0.1)
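Single-hop recall over a vector index boils down to: embed the query, embed each stored memory, and return the most similar ones. The toy below illustrates just that loop; the character-bigram `embed` is a stand-in for a real embedding model (which the original engine would use), and none of these names are MemoryLake's actual API.

```python
import math

def embed(text):
    # Stand-in embedding: character-bigram counts. A production engine
    # would use a learned embedding model; this keeps the sketch runnable.
    vec = {}
    for i in range(len(text) - 1):
        bigram = text[i:i + 2].lower()
        vec[bigram] = vec.get(bigram, 0) + 1
    return vec

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(query, memories, top_k=1):
    """Single-hop recall: rank stored memories by similarity to the query."""
    q = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:top_k]
```

Everything the later releases layer on top (graphs, multi-hop chains, conflict handling, versioning) starts from this retrieve-by-similarity primitive.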
The journey continues...