Your research,
locally contained.
Papyrus is a self-contained environment. Once installed, the full intelligence lives on your machine: no external APIs, no "phone home" scripts.
macOS
v1.0.2 • Universal (Apple Silicon & Intel)
Windows
v1.0.2 • Windows 10/11 (64-bit)
Linux
v1.0.2 • AppImage / Deb
How we stay offline
No Network Permissions
The Papyrus binary is built without any networking code and requests no network permissions from the operating system, so it cannot reach the internet.
Local Weights Only
Your LLM models are stored locally in your application data folder (Application Support on macOS), never on our servers.
Zero Analytics
We don't track crashes, clicks, or usage. We prefer to know nothing about your work.
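You don't have to take our word for any of this. One quick way to spot-check the no-network claim is to list open sockets for the running app; the process name `Papyrus` below is an assumption, so adjust it to whatever appears in your process list:

```shell
# List open network sockets for the running Papyrus process.
# "Papyrus" is assumed to be the process name; adjust if needed.
# With no sockets open, lsof prints nothing and exits non-zero,
# which is exactly what you should see here.
lsof -i -a -p "$(pgrep -x Papyrus)"
```

Tools like Little Snitch on macOS or a firewall log will show the same thing over a longer session.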
Frequently Asked
Will it run on my laptop?
Papyrus is optimized for Apple Silicon (M1/M2/M3) and NVIDIA GPUs (RTX 30-series and newer). It also runs on CPU-only machines, but responses will be slower.
Is it open source?
The core engine is open source. You can inspect the source and the build pipeline on our GitHub and confirm for yourself that releases contain no networking code.
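A practical companion to inspecting the build is checking that the artifact you downloaded matches the one produced by that build. A minimal sketch, where the filename is a placeholder and the reference digest is whatever is published alongside the release:

```shell
# Hypothetical filename; substitute the installer you actually downloaded.
# On macOS:   shasum -a 256 <file>
# On Linux:   sha256sum <file>
sha256sum Papyrus-1.0.2.AppImage
# Compare the printed digest with the checksum published on the
# release page; the two values must match exactly.
```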
Notice: Large language models are big files. Initial setup requires a one-time download of the model weights (approx. 4–8 GB).