Your research,
locally contained.

Papyrus is a self-contained environment. When you download the app, you are downloading the full intelligence stack: no external APIs or "phone home" scripts required.

macOS

v1.0.2 · Universal (M1/M2 & Intel)

SHA-256 Checksum
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

Windows

v1.0.2 · Windows 10/11 (64-bit)

SHA-256 Checksum
7f83b1657ff1fc53b92dc18148a1d65dfc2d4b1fa3d677284addd200126d9069

Linux

v1.0.2 · AppImage / Deb

SHA-256 Checksum
62c2859c2567634898144298fc1c149afbf4c8996fb92427ae41e4649b934ca
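After downloading, you can confirm your copy is intact by hashing it and comparing the result against the checksum published above. A minimal sketch (the filename is a placeholder; substitute the file you actually downloaded):

```shell
# Compute the SHA-256 of the download.
# Linux ships sha256sum (coreutils); on macOS use `shasum -a 256` instead.
sha256sum Papyrus.AppImage
# The printed hash must match the published checksum, character for character.
```

If even one character differs, the download is corrupted or has been tampered with; delete it and download again.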

How we stay offline

No Network Permissions

The Papyrus binary is compiled without any networking code and requests no network permissions, so it cannot open a connection to the internet.
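You can spot-check this claim yourself while the app is running. A rough sketch, assuming `lsof` is installed and the process is named Papyrus:

```shell
# List every open network socket on the machine, then filter for Papyrus.
# -i: internet sockets only; -P/-n: show raw ports and addresses.
lsof -i -P -n | grep -i papyrus
# An empty result means the running app holds no open network sockets.
```

For a longer observation window, a tool like Little Snitch (macOS) or `tcpdump` will show any connection attempts over time.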

Local Weights Only

Your LLM models are stored in your Application Support folder, never on our servers.
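As a sketch, on macOS you could confirm the weights actually live on your disk. The folder name below is an assumption for illustration; check the app's settings for the real path:

```shell
# List locally stored model weights (path is illustrative, not confirmed).
ls -lh ~/Library/Application\ Support/Papyrus/models/
```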

Zero Analytics

We don't track crashes, clicks, or usage. We prefer to know nothing about your work.

Frequently Asked

Will it run on my laptop?

Papyrus is optimized for Apple Silicon (M1/M2/M3) and NVIDIA GPUs (RTX 30 series+). It will run on standard CPUs, but response times will be slower.

Is it open source?

The core engine is open source. You can inspect the build process on our GitHub and verify the offline behavior for yourself.

Notice: Large language model (LLM) weights are large files. Initial setup requires a one-time download of the model weights (approx. 4–8 GB).