The knowledge graph
your LLMs deserve
Mycelium turns unstructured documents into a queryable knowledge graph -- automatically. Push documents in, get structured entities and relationships out. Self-hosted, air-gappable, with a Wikipedia-like UI for human curation.
$ curl -X POST /api/v1/documents \
    -H 'Content-Type: application/json' \
    -d '{"text": "Acme Corp appointed Jane Smith as CEO on March 1st..."}'

// Extracted automatically:
{
  "entities": ["Acme Corp", "Jane Smith"],
  "relations": [{"CEO_OF": "Jane Smith → Acme Corp"}],
  "temporal": "2024-03-01"
}
Built for sensitive environments
From document ingestion to knowledge retrieval, every component runs on your infrastructure.
Automatic Entity Extraction
Push documents in, get a structured knowledge graph out. Mycelium uses a local LLM to extract entities, relationships, and temporal metadata from unstructured text -- no manual annotation required.
Air-Gapped Deployment
The entire stack -- knowledge graph engine, LLM inference, and curation UI -- runs on your infrastructure with zero external network calls. Purpose-built for classified environments and data sovereignty requirements.
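As an illustration, a fully self-contained deployment might look like the sketch below. Service names and images are hypothetical, not Mycelium's actual compose file; the point is that every component sits on an internal network with no external egress.

```yaml
# Hypothetical layout -- actual service names and images will differ.
services:
  graph-engine:        # knowledge graph storage and query API
    image: mycelium/graph-engine
    networks: [internal]
  llm-inference:       # local model server; no outbound calls
    image: mycelium/llm-inference
    networks: [internal]
  curation-ui:         # Wikipedia-like editorial interface
    image: mycelium/curation-ui
    networks: [internal]
    ports: ["8080:8080"]
networks:
  internal:
    internal: true     # Docker blocks external traffic on this network
```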
Hybrid Retrieval
Combine graph traversal, vector similarity, and full-text search in a single query. Bi-temporal tracking lets you query what the system knew at any point in time -- essential for auditable intelligence workflows.
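A single hybrid query combining all three retrieval modes with a point-in-time constraint might be sketched like this. Field names here are illustrative assumptions, not the documented API:

```json
{
  "query": "Who led Acme Corp in early 2024?",
  "retrieval": {
    "graph":    { "start": "Acme Corp", "relation": "CEO_OF", "depth": 1 },
    "vector":   { "top_k": 5 },
    "fulltext": { "fields": ["text"] }
  },
  "as_of": "2024-06-01T00:00:00Z"
}
```

The `as_of` field is where bi-temporal tracking comes in: the same query with an earlier timestamp returns only what the system knew at that moment.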
How it works
Three steps from raw documents to structured knowledge.
Push Documents
Send unstructured text, PDFs, or HTML via a simple REST API. Documents are chunked, stored, and queued for processing.
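The chunking step can be pictured with a minimal sketch. The sizes, overlap, and strategy below are assumptions for illustration, not Mycelium's actual implementation:

```python
def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks.

    Sizes are illustrative; a real ingestion pipeline would likely
    split on sentence or paragraph boundaries instead.
    """
    chunks = []
    step = size - overlap  # each chunk starts `step` chars after the last
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # final chunk reached the end of the text
    return chunks
```

Overlapping chunks keep entity mentions that straddle a boundary visible to the extractor in at least one chunk.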
LLM Extraction
A local LLM extracts entities, relationships, and temporal metadata. Automated quality reviewers then validate and enrich the output before it enters the graph.
Query & Curate
Query via graph traversal, vector search, or full-text. Human curators refine the graph through a Wikipedia-like editorial UI.
Ready to build your
knowledge infrastructure?
Deploy Mycelium on your own infrastructure in minutes. No data leaves your network. No external API keys required.