Publish pi coding agent sessions from one OSS project to a Hugging Face dataset.

It is an incremental pipeline for:

- collecting sessions for one project
- redacting exact secrets from your env file and `--secret`
- rejecting sessions that match user-provided deny patterns via `--deny`
- scanning redacted output with TruffleHog to detect and verify surviving secrets
- running LLM review on the remaining sessions
- uploading only sessions that pass all checks
Use it if you want to:
- publish a public dataset of your pi traces
- share real agent traces for analysis or training data
- keep project-specific sessions on Hugging Face over time without reprocessing everything on every run
It keeps state in a workspace, so repeated runs only process what changed (updated sessions, new sessions).
It operates on pi coding agent session files. The session format is documented at https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/docs/session.md.
Install the CLI and the pi coding agent:

```
npm install -g pi-share-hf
npm install -g @mariozechner/pi-coding-agent
```

Install TruffleHog:

```
brew install trufflehog
```

For Linux and Windows, use the upstream install instructions: https://github.com/trufflesecurity/trufflehog

For Hugging Face auth, create a write token and either:

```
export HF_TOKEN=hf_xxx
```

or save it to:

```
~/.cache/huggingface/token
```
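A minimal sketch of the token-file option, assuming the default path above (`hf_xxx` is a placeholder for your real write token):

```
# save a Hugging Face write token to the default token path
mkdir -p ~/.cache/huggingface
printf 'hf_xxx' > ~/.cache/huggingface/token
chmod 600 ~/.cache/huggingface/token
```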
The CLI checks startup requirements and exits with install or auth instructions if something is missing.
Use one workspace per OSS project. In your OSS project directory:

1. add `.pi/hf-sessions/` to `.gitignore`
2. run `pi-share-hf init` once
3. run `pi-share-hf collect` to gather changed and new sessions, redact known secrets, filter by `--deny`, scan with TruffleHog, and run LLM review
4. inspect what would be uploaded with `pi-share-hf list --uploadable`, `pi-share-hf grep`, and the images folder if images are enabled
5. reject anything you do not want published
6. run `pi-share-hf upload`
7. repeat from step 3 whenever you want to publish new sessions
The workspace is incremental. It keeps the collected state so repeated runs only process what changed.
You can use pi-share-hf on multiple machines for the same project.
Inside your OSS project:
```
cd /path/to/my-project
echo ".pi/hf-sessions/" >> .gitignore
```

Initialize once:

```
# personal namespace
pi-share-hf init --repo myuser/my-project-sessions

# or org namespace
pi-share-hf init --repo my-project-sessions --organization myorg
```

Collect sessions:
```
pi-share-hf collect \
  --secret secrets.txt \
  --deny deny.txt \
  --provider openai-codex --model gpt-5.4 --thinking medium \
  --parallel 4 \
  README.md AGENTS.md
```

Recommended inputs:

- `secrets.txt`: one known secret per line. Generate it just before `collect`, do not commit it, and delete it after use (one way to generate it is sketched below).
- `deny.txt`: one regex per line for names, topics, or projects that should never be published
- `README.md AGENTS.md`: project context for the LLM review so it can distinguish OSS work from unrelated work
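One possible way to generate `secrets.txt` right before a run; the `grep` pattern is an assumption about how your secret variables are named, so adapt it to your environment:

```
# illustrative only: dump values of secret-looking env vars, collect, then clean up
env | grep -E '(TOKEN|KEY|SECRET)=' | cut -d= -f2- > secrets.txt
pi-share-hf collect --secret secrets.txt --deny deny.txt README.md AGENTS.md
rm secrets.txt
```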
You can also repeat flags directly:

- `--secret <file>` or `--secret <literal>`
- `--deny <file>` or `--deny <regex>`
If you do not want a secrets file on disk, pass repeated `--secret <literal>` values instead, as in the sketch below.
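A sketch of that style, assuming the secrets are already exported as environment variables (the variable names here are placeholders):

```
pi-share-hf collect \
  --secret "$HF_TOKEN" \
  --secret "$OPENAI_API_KEY" \
  --deny deny.txt \
  README.md AGENTS.md
```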
Check what would be uploaded:

```
pi-share-hf list --uploadable
```

Search only the uploadable set:

```
pi-share-hf grep -i 'my-private-project|counterparty-name|finance'
```

Reject anything you do not want published:

```
pi-share-hf reject 2026-01-16T11-03-04-216Z_b8b30402-d134-4f0d-9e6e-e2f72ada5a2f.jsonl
```

Upload:

```
pi-share-hf upload --dry-run
pi-share-hf upload
```

The redaction step only knows exact secret values.
Sources:

- `--env-file` (default: `~/.zshrc`)
- `--secret <file>` with one secret per line
- `--secret <literal>`
This is deliberate. Exact values are high precision. Generic token regexes are noisy. TruffleHog handles generic secret detection after redaction.
TruffleHog scans the redacted output, not the original raw session.
That means:
- exact secrets should already be gone
- TruffleHog acts as a backstop for anything secret-like that survived
Any TruffleHog finding blocks the session automatically. That includes:

- `verified`
- `unverified`
- `unknown`
So you do not need to manually inspect TruffleHog hits to decide whether a session is uploadable. The reports are there for debugging, auditing, and understanding why a session was blocked.
Per-session TruffleHog reports are stored in:

```
.pi/hf-sessions/reports/<session>.trufflehog.json
```
Example:
```
{
"file": "...jsonl",
"redacted_hash": "sha256:...",
"findings": [
{
"detector": "NpmToken",
"status": "verified",
"line": 132,
"raw_sha256": "sha256:...",
"masked": "npm_tnl0***x4eE",
"verification_from_cache": false
}
],
"summary": {
"findings": 1,
"verified": 1,
"unverified": 0,
"unknown": 0,
"top_detectors": ["NpmToken"]
}
}
```

The LLM sees project context files plus a plain-text transcript of the redacted session.
It answers:
- is this about the target OSS project?
- is it fit to publish publicly?
- does it still appear to contain sensitive data?
Review output includes:
- `about_project`: `yes | no | mixed`
- `shareable`: `yes | no | manual_review`
- `missed_sensitive_data`: `yes | no | maybe`
- `flagged_parts`
- `summary`
Review files are stored in:

```
.pi/hf-sessions/review/<session>.review.json
```
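The exact sidecar schema is not documented here, but based on the fields above a review file looks roughly like this (illustrative values):

```
{
  "about_project": "yes",
  "shareable": "yes",
  "missed_sensitive_data": "no",
  "flagged_parts": [],
  "summary": "Debugging a failing test in the target project."
}
```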
Changing provider, model, or thinking level changes the review cache key. The key includes the redacted session hash, context file hashes, provider, model, thinking level, deny-pattern hash, prompt version, and chunk size. If you rerun review with different settings, existing review sidecars for those sessions are replaced.
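For example, rerunning review with a different thinking level (provider and model reuse the earlier example and are illustrative) replaces the affected sidecars:

```
pi-share-hf review --provider openai-codex --model gpt-5.4 --thinking high README.md AGENTS.md
```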
`upload` pushes only sessions that passed deterministic checks, TruffleHog, and LLM review. It skips sessions that were manually rejected, are missing review data, failed review, or are already unchanged on the remote dataset.

Use `upload --dry-run` first if you want counts without pushing anything.
Before uploading, inspect what is currently uploadable:
```
pi-share-hf list --uploadable
```

Useful checks:

- search the uploadable set with `pi-share-hf grep`
- review `deny.txt` and rerun `collect` if you discover a new never-publish topic
- inspect `.pi/hf-sessions/images/` when image preservation is enabled
- inspect `.pi/hf-sessions/reports/*.trufflehog.json` only if you want to debug or audit why a session was blocked by TruffleHog
- reject anything suspicious manually with `pi-share-hf reject`
Typical grep checks:
```
pi-share-hf grep -i 'private-project|counterparty|finance|agreement|royalt'
pi-share-hf grep -i 'gmail|calendar|drive|slack'
```

`pi-share-hf init` creates `.pi/hf-sessions/`, writes `workspace.json`, and records which project directory maps to which Hugging Face dataset repo.
By default it uses:

- the current directory as `--cwd`
- `.pi/hf-sessions` as `--workspace`
- preserved embedded images

```
pi-share-hf init --repo user/dataset
pi-share-hf init --repo dataset-name --organization myorg
```

Main options:

- `--cwd <dir>` project directory to map to pi session storage
- `--repo <id>` HF dataset repo
- `--organization <name>` optional namespace when `--repo` is a bare name
- `--workspace <dir>` workspace dir, default `.pi/hf-sessions`
- `--no-images` strip embedded images from redacted output
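For example, initializing without image preservation (an illustrative combination of the flags above):

```
pi-share-hf init --repo myuser/my-project-sessions --no-images
```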
`pi-share-hf collect` collects sessions for the configured project, redacts literal secrets, runs TruffleHog on changed redacted files, and runs the LLM review to write or update review sidecars.
By default it uses:

- `.pi/hf-sessions` as `--workspace`
- `~/.zshrc` as `--env-file`
- `README.md` and `AGENTS.md` as context files when present
- current pi settings unless you override provider, model, or thinking

```
pi-share-hf collect [context-files...]
```

Main options:

- `--workspace <dir>` workspace, default `.pi/hf-sessions`
- `--env-file <path>` secret source file, default `~/.zshrc`
- `--secret <file>|<text>` repeatable
- `--force` reprocess all sessions
- `--provider <name>` review provider override
- `--model <id>` review model override
- `--thinking <level>` review thinking override
- `--parallel <n>` concurrent LLM reviews
- `--deny <file>|<regex>` reject sessions matching this pattern
- `--session <file>` process one session only
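Two illustrative uses of these flags; the session path is a placeholder:

```
# reprocess every session from scratch
pi-share-hf collect --force README.md AGENTS.md

# reprocess a single session only
pi-share-hf collect --session <session.jsonl> README.md AGENTS.md
```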
`pi-share-hf review` reruns only the LLM review step on already-redacted sessions in the workspace.
By default it uses:

- `.pi/hf-sessions` as `--workspace`
- `README.md` and `AGENTS.md` as context files when present
- current pi settings unless you override provider, model, or thinking

```
pi-share-hf review [context-files...]
```

It uses the same review-related flags as `collect`.
`pi-share-hf reject` marks a session as never uploadable by adding it to `reject.txt`. By default it uses `.pi/hf-sessions` as `--workspace`.

```
pi-share-hf reject <session.jsonl|image.png>
```

If you pass an extracted image path, the owning session is rejected.
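For example (the path shape under `images/` is illustrative):

```
# rejecting an extracted image rejects the session it came from
pi-share-hf reject .pi/hf-sessions/images/<session>/<image>.png
```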
`pi-share-hf list` lists sessions from the workspace. By default it uses `.pi/hf-sessions` as `--workspace`.

```
pi-share-hf list --uploadable
```

`pi-share-hf grep` searches only the currently uploadable sessions. By default it uses `.pi/hf-sessions` as `--workspace`.

```
pi-share-hf grep -i 'finance|counterparty|private-project'
```

`pi-share-hf upload` uploads the current uploadable sessions and updates the remote dataset manifest. By default it uses `.pi/hf-sessions` as `--workspace`.
```
pi-share-hf upload --dry-run
pi-share-hf upload
```

It uses the built-in TypeScript Hugging Face client; no `huggingface-cli` is needed.
The workspace layout:

```
.pi/hf-sessions/
  workspace.json
  manifest.local.jsonl
  remote-manifest.jsonl
  manifest.jsonl
  redacted/          public candidate files
  reports/           private deterministic + TruffleHog reports
  review/            private LLM review sidecars
  review-chunks/     private transcript chunks
  images/            extracted preserved images for uploadable sessions
  reject.txt
```

The uploaded dataset contains:

```
manifest.jsonl
<session>.jsonl
```
Each uploaded *.jsonl file is a redacted pi session.
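Since each session is JSONL, a quick way to inspect a downloaded file (assumes `jq` is installed; the file name is a placeholder):

```
# pretty-print the first event of a session file
head -n 1 <session>.jsonl | jq .
```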
Session format docs: https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/docs/session.md
Development:

```
npm run check
```