# anura-graph

Python client for the Anura Memory API.

Anura Memory provides two memory products for AI agents:

- **GraphRag** — knowledge graph with automatic triple extraction, deduplication, and hybrid retrieval
- **FilesRag** — markdown file storage with heading-based chunking and semantic search
## Installation

```bash
pip install anura-graph
```
## Quick Start

```python
from graphmem import GraphMem

mem = GraphMem(api_key="gm_your_key_here")

# --- GraphRag ---
mem.remember("Alice is VP of Engineering at Acme Corp")
ctx = mem.get_context("What does Alice do?")

# --- FilesRag ---
mem.write_file("/notes/standup.md", "# Standup\n## 2026-02-21\n- Shipped auth module")
results = mem.search_files("auth module")
```
## Configuration

```python
from graphmem import GraphMem, RetryConfig

mem = GraphMem(
    api_key="gm_your_key_here",
    base_url="https://anuramemory.com",  # default
    retry=RetryConfig(
        max_retries=3,                        # default
        base_delay=0.5,                       # seconds, default
        max_delay=10.0,                       # seconds, default
        retry_on=[429, 500, 502, 503, 504],   # default
    ),
    timeout=30.0,  # seconds, default
)
```
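With these defaults, retry delays grow exponentially from `base_delay` up to `max_delay`. A minimal sketch of that schedule, assuming plain doubling with no jitter (the client's actual backoff policy may add jitter):

```python
def backoff_delay(attempt: int, base: float = 0.5, cap: float = 10.0) -> float:
    """Delay before retry number `attempt` (0-based): base * 2**attempt, clamped to cap."""
    return min(base * (2 ** attempt), cap)

# First few retry delays with the default base_delay=0.5 and max_delay=10.0:
print([backoff_delay(n) for n in range(5)])  # [0.5, 1.0, 2.0, 4.0, 8.0]
```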
The client can also be used as a context manager:

```python
with GraphMem(api_key="gm_your_key_here") as mem:
    mem.remember("Alice works at Acme")
```
## API Reference

### GraphRag
#### `remember(text) -> RememberResult`

Extract knowledge from text and store it as triples in the graph.

```python
result = mem.remember("Einstein was born in Ulm, Germany")
print(result.extracted_count)  # 1
print(result.merged_count)     # 1
```

If a new fact contradicts an existing one on a single-valued predicate (e.g., `LIVES_IN`, `WORKS_AT`), the old value is replaced and the conflict is returned:

```python
r = mem.remember("Alice now works at Microsoft")
# r.conflicts → [ConflictResolution(subject_name="alice", predicate="WORKS_AT",
#                                   old_object="google", new_object="microsoft",
#                                   resolution="auto_recency")]
```
#### `get_context(query, options?) -> ContextResult`

Retrieve context from the knowledge graph.

```python
from graphmem import ContextOptions

# JSON format (default)
ctx = mem.get_context("Einstein")
print(ctx.entities)  # [{ "name": "Albert Einstein" }, ...]

# Markdown format (ideal for LLM system prompts)
ctx = mem.get_context("Einstein", ContextOptions(format="markdown"))
print(ctx.content)  # "- Albert Einstein -> BORN_IN -> Ulm..."

# Hybrid mode (graph + vector + communities)
ctx = mem.get_context("Einstein", ContextOptions(mode="hybrid"))
```
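The markdown format slots naturally into an LLM system prompt. A hypothetical helper showing the pattern (`build_system_prompt` is not part of the client):

```python
def build_system_prompt(context_md: str) -> str:
    """Wrap graph context (markdown triples) in an instruction block for an LLM."""
    return (
        "You have access to the following long-term memory:\n\n"
        + context_md
        + "\n\nUse it to answer the user's question when relevant."
    )

# ctx = mem.get_context("Einstein", ContextOptions(format="markdown"))
# prompt = build_system_prompt(ctx.content)
```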
#### `search(entity, type_filter=None) -> SearchResult`

Find an entity and its direct (1-hop) connections. In fuzzy mode, `type_filter` narrows results by entity type.

```python
result = mem.search("Alice")
print(result.edges)

# Search with a type filter
result = mem.search("alice", type_filter="person")
```
#### `ingest_triples(triples) -> IngestResult`

Ingest pre-formatted triples directly (no LLM extraction).

```python
from graphmem import Triple

result = mem.ingest_triples([
    Triple(subject="TypeScript", predicate="CREATED_BY", object="Microsoft"),
])
```
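Predicates throughout this README use UPPER_SNAKE_CASE (`WORKS_AT`, `CREATED_BY`). When building triples from free-text relation labels, a small normalizer keeps them consistent (this helper is illustrative, not part of the client):

```python
import re

def to_predicate(label: str) -> str:
    """Normalize a free-text relation label to UPPER_SNAKE_CASE."""
    return re.sub(r"[^A-Za-z0-9]+", "_", label.strip()).strip("_").upper()

# triples = [Triple(subject=s, predicate=to_predicate(p), object=o)
#            for s, p, o in rows]
print(to_predicate("created by"))  # CREATED_BY
```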
#### `get_graph(type_filter=None) -> GraphData`

Get the full graph (nodes, edges, communities). Optionally filter by entity type.

```python
# Filter the graph by entity type
people = mem.get_graph(type_filter="person")
orgs = mem.get_graph(type_filter="person,organization")
```
#### `delete_edge(id, blacklist=False)`

Delete an edge. Pass `blacklist=True` to prevent it from being re-created.

#### `update_edge_weight(id, weight=None, increment=None)`

Set or increment an edge's weight.

#### `delete_node(id)`

Delete a node and all of its connected edges.

#### `export_graph() -> ExportData`

Export the graph as portable JSON.

#### `import_graph(data) -> ImportResult`

Import a graph export. Imports merge into the existing graph; no existing data is deleted.

#### `list_communities() -> list[Community]`

List all detected communities.

#### `detect_communities() -> DetectCommunitiesResult`

Run Louvain community detection followed by LLM summarization.
### FilesRag
#### `write_file(path, content, name=None) -> WriteFileResult`

Create or update a markdown memory file. Files are automatically chunked by `##` headings and indexed for semantic search.

```python
result = mem.write_file(
    "/docs/architecture.md",
    "# Architecture\n\n## Backend\nNode.js with Prisma...\n\n## Frontend\nNext.js...",
)
print(result.file.id)      # "clxx..."
print(result.chunk_count)  # 3
print(result.created)      # True
```

If a file already exists at the given path, its content is replaced and re-indexed.
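The `chunk_count` of 3 above follows from heading-based chunking: the preamble plus one chunk per `##` section. A rough sketch of that split (the service's actual chunker may differ in details):

```python
def split_by_h2(markdown: str) -> list[str]:
    """Split a markdown document into chunks at each '## ' heading."""
    chunks: list[str] = []
    current: list[str] = []
    for line in markdown.splitlines():
        if line.startswith("## ") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "# Architecture\n\n## Backend\nNode.js...\n\n## Frontend\nNext.js..."
print(len(split_by_h2(doc)))  # 3
```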
#### `list_files() -> list[MemoryFile]`

List all files in the current project.

```python
files = mem.list_files()
for f in files:
    print(f"{f.path} ({f.size} bytes)")
```
#### `read_file(id) -> FileWithContent`

Get a file with its full content.

```python
file = mem.read_file("file_id")
print(file.content)  # full markdown
```
#### `update_file(id, content, name=None) -> WriteFileResult`

Update a file's content (re-chunks and re-indexes).

```python
result = mem.update_file("file_id", "# Updated content\n...")
```
#### `delete_file(id)`

Delete a file and all of its indexed chunks.
#### `search_files(query, limit=None, file_id=None) -> list[FileSearchResult]`

Semantic search across file chunks. Pass `file_id` to scope the search to a single file.

```python
results = mem.search_files("authentication flow", limit=5)

# Search within a single file:
# results = mem.search_files("auth", file_id="file_abc123")

for r in results:
    print(r.file["path"])
    for chunk in r.chunks:
        print(f"  {chunk.heading_title} ({chunk.score:.2f})")
        print(f"  {chunk.excerpt}")
```
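A common follow-up is flattening results into high-scoring excerpts for a prompt. A hypothetical helper built on the documented result shape (`r.chunks`, `chunk.score`, `chunk.excerpt`); the `0.5` threshold is an arbitrary choice:

```python
def top_excerpts(results, min_score: float = 0.5) -> list[str]:
    """Collect excerpts from all result chunks whose score clears the threshold."""
    return [
        chunk.excerpt
        for r in results
        for chunk in r.chunks
        if chunk.score >= min_score
    ]

# excerpts = top_excerpts(mem.search_files("authentication flow"), min_score=0.7)
```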
### Projects

| Method | Description |
|---|---|
| `list_projects()` | List all projects |
| `create_project(name)` | Create a new project |
| `delete_project(id)` | Delete a project |
| `select_project(id)` | Switch the active project |
### Traces

| Method | Description |
|---|---|
| `list_traces(limit?, cursor?)` | List query traces with pagination |
| `get_trace(id)` | Get details for a specific trace |
### Blacklist

| Method | Description |
|---|---|
| `list_blacklist(limit?, cursor?)` | List blacklisted triples |
| `add_to_blacklist(subject, predicate, object)` | Add a triple to the blacklist |
| `remove_from_blacklist(id)` | Remove a triple from the blacklist |
### Conflict Log

| Method | Description |
|---|---|
| `list_conflicts(limit?, cursor?)` | List conflict resolution log entries (newest first) |

```python
conflicts = mem.list_conflicts(limit=10)
for c in conflicts:
    print(f"{c.subject_name} {c.predicate}: {c.old_object} → {c.new_object}")
```
### Pending Facts

| Method | Description |
|---|---|
| `list_pending(limit?, cursor?)` | List pending facts |
| `approve_fact(id)` | Approve a pending fact |
| `reject_fact(id, blacklist?)` | Reject a pending fact |
| `approve_all()` | Approve all pending facts |
| `reject_all()` | Reject all pending facts |
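Pending facts can be triaged in a loop. The field names on a pending fact used in the commented-out loop below (`id`, `predicate`, `confidence`) are assumptions; inspect a real entry for the exact schema. The triage rule itself is pure and illustrative:

```python
TRUSTED_PREDICATES = {"WORKS_AT", "LIVES_IN", "BORN_IN"}

def should_auto_approve(predicate: str, confidence: float) -> bool:
    """Illustrative policy: auto-approve well-known predicates extracted with high confidence."""
    return predicate in TRUSTED_PREDICATES and confidence >= 0.9

# for fact in mem.list_pending(limit=50):
#     if should_auto_approve(fact.predicate, fact.confidence):  # field names assumed
#         mem.approve_fact(fact.id)
#     else:
#         mem.reject_fact(fact.id)
```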
### Usage

#### `get_usage() -> UsageInfo`

```python
usage = mem.get_usage()
print(usage.tier)                  # "FREE"
print(usage.current_facts)         # 42
print(usage.current_file_count)    # 3
print(usage.current_file_storage)  # 1024
```
### Health

#### `health() -> HealthResult`
## Error Handling

Failed requests raise `GraphMemError`:

```python
from graphmem import GraphMem, GraphMemError

try:
    mem.read_file("nonexistent")
except GraphMemError as e:
    print(f"API error {e.status}: {e}")
    print(e.body)  # raw response body
```
## Rate Limiting

After each request, rate limit info is available on the client:

```python
mem.remember("some fact")

print(mem.rate_limit.remaining)  # requests remaining
print(mem.rate_limit.limit)      # total allowed per window
print(mem.rate_limit.reset)      # unix timestamp when the window resets
```
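Those fields are enough to pause politely when a window is exhausted. A sketch using only the documented attributes (`remaining`, `reset`):

```python
import time

def wait_if_exhausted(rate_limit) -> None:
    """Sleep until the rate-limit window resets when no requests remain."""
    if rate_limit is not None and rate_limit.remaining == 0:
        time.sleep(max(0.0, rate_limit.reset - time.time()))

# mem.remember("some fact")
# wait_if_exhausted(mem.rate_limit)
```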
## Types

All types are exported from the top-level package:

```python
from graphmem import (
    # GraphRag
    RememberResult, ConflictResolution, ConflictLogEntry,
    ContextResult, SearchResult, Triple,
    GraphNode, GraphEdge, GraphData, Community,
    # FilesRag
    MemoryFile, FileWithContent, FileSearchResult, WriteFileResult,
    # Config
    RetryConfig, RateLimitInfo, UsageInfo,
)
```
## License

MIT