0up is a zero-knowledge, open-source, encrypted file sharing service.
From Wikipedia: Zero-knowledge refers to an online service that stores, transfers or manipulates data in a way that maintains a high level of confidentiality, where the data is only accessible to the data's owner (the client), and not to the service provider.
- Files are encrypted browser-side before being uploaded to an S3 compatible storage service.
- A shareable link is generated with the decryption key as part of the anchor component (#) of the URL. (Anchor components are never sent to the server, so the decryption key always remains with the client.)
- Using the generated link, files can be downloaded and decrypted browser-side.
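The link scheme above can be sketched in TypeScript. This is illustrative only, not 0up's actual code; the function names and the key format are made up:

```typescript
// Sketch of the share-link scheme: the decryption key rides in the URL
// fragment ('#...'), which browsers never transmit to the server, so only
// people holding the full link can decrypt the file.

// Build a shareable link: file id in the path, base64url key after '#'.
function buildShareUrl(origin: string, fileId: string, keyB64: string): string {
  return `${origin}/${fileId}#${keyB64}`;
}

// Recover the key browser-side from the link (e.g. from location.hash).
function extractKey(link: string): string {
  return new URL(link).hash.slice(1); // drop the leading '#'
}

// Everything up to the '#' is all the server ever sees.
function serverVisiblePart(link: string): string {
  const u = new URL(link);
  return u.origin + u.pathname + u.search;
}

const link = buildShareUrl("https://0up.io", "abc123", "q1w2e3r4");
console.log(extractKey(link));        // "q1w2e3r4"
console.log(serverVisiblePart(link)); // "https://0up.io/abc123" (no key)
```

In the real app the key would be generated with `crypto.getRandomValues` and used for browser-side encryption before upload; the point here is only that the fragment never leaves the client.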
In addition to the hosted version of 0up (https://0up.io), you're free to clone, customize, and host 0up in your own environment.
- An account with an S3-compatible storage service. We recommend Backblaze B2.
- A Postgres database (local Postgres, Neon, Supabase, Vercel Postgres, etc.)
- Node.js 18+
Using the S3-compatible provider of your choice, create a new bucket. For the examples below, we'll be using Backblaze B2.
Create a public bucket. Adding encryption is optional, and arguably redundant, but it doesn't hurt.
We want files to automatically be deleted after 24 hours. To do that, we'll create a custom lifecycle policy.
Click on Lifecycle Settings and select Use custom lifecycle rules.
We'll create a prefix of 1/ and apply a 1-day lifecycle policy to it.
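In B2's lifecycle-rule JSON, a 1-day rule scoped to the 1/ prefix looks roughly like this (a sketch; confirm the exact field names against the B2 documentation):

```json
{
  "daysFromUploadingToHiding": 1,
  "daysFromHidingToDeleting": 1,
  "fileNamePrefix": "1/"
}
```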
CORS must be enabled on your S3 bucket to allow uploads and downloads. For B2, this requires installing the B2 CLI (the CORS Rules option in the B2 web interface will not suffice for our purposes).
Once you've installed and configured the B2 CLI, set the CORS rules with the following command. Be sure to substitute your own values for allowedOrigins and your-bucket-name:
b2 update-bucket \
--corsRules '[{"corsRuleName": "downloadFromAnyOriginWithUpload","allowedOrigins": ["http://localhost:3000","https://your-site.example"],"allowedHeaders": ["*"],"allowedOperations": ["s3_head","s3_get","s3_put"],"exposeHeaders": ["ETag"],"maxAgeSeconds": 3600}]' \
your-bucket-name allPrivate

To get started hosting your own instance of 0up, clone (or fork and clone) this repo.
git clone https://github.com/0sumcode/0up.git
cd 0up
npm i

Copy .env.example to .env, then open the .env file and configure the parameters accordingly.
cp .env.example .env

NEXT_PUBLIC_* variables are exposed to the browser (upload size limits, expire options, FAQ flags, organization name/contact). Everything else is server-only: your Postgres DATABASE_URL, S3 credentials, and CRON_SECRET.
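As an illustration only, a filled-in .env follows this general shape; the authoritative variable list lives in .env.example, and the values below are placeholders:

```
DATABASE_URL=postgres://user:password@host:5432/dbname   # server-only
CRON_SECRET=some-long-random-string                      # server-only
# S3_* credentials for your bucket (exact names as given in .env.example)
# NEXT_PUBLIC_* options (upload limits, expire choices, branding)
```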
0up uses Drizzle ORM against any Postgres database. Point DATABASE_URL at your database and run:
npm run db:migrate

This creates the upload and file tables. During development you can also browse the DB with npm run db:studio, and regenerate migrations after schema changes with npm run db:generate.
Now that all your configuration parameters have been set, you should be able to test your instance locally. The dev instance makes it easy to test and make customizations.
npm run dev

Open http://localhost:3000.
0up is a Next.js App Router project and deploys cleanly to Vercel or any Node.js host.
Before you deploy:
- Ensure CORS is enabled on your S3 bucket for your production origin (see above).
- Set your .env variables in the hosting platform's environment (all of NEXT_PUBLIC_*, DATABASE_URL, CRON_SECRET, and S3_*).
- Run npm run db:migrate against your production database.
To build for production:
npm run build

Uploads that pass expire_at or are soft-deleted should be purged from both the database and S3. 0up ships with a cron route at /api/cron/cleanup guarded by Authorization: Bearer $CRON_SECRET.
On Vercel, vercel.json already schedules this route hourly.
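For reference, the Vercel schedule is just a crons entry along these lines (a sketch; the repo's vercel.json is authoritative):

```json
{
  "crons": [{ "path": "/api/cron/cleanup", "schedule": "0 * * * *" }]
}
```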
On Netlify, a scheduled function at netlify/functions/cleanup.mts runs hourly and hits the cron route with the bearer token — nothing else to configure. Make sure CRON_SECRET is set in your Netlify environment variables.
On other hosts, set up your own cron (GitHub Actions, systemd timer, etc.) to hit the endpoint with the bearer token on whatever cadence you prefer.
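For example, a plain crontab entry could look like this (your-site.example is a placeholder for your deployment's domain):

```
# Hourly cleanup: call the cron route with the bearer token.
0 * * * * curl -fsS -H "Authorization: Bearer $CRON_SECRET" https://your-site.example/api/cron/cleanup
```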

