GenAI Unplugged

I Spent 2 Weeks Reverse-Engineering Substack's API. Now Claude Runs My Analytics.

I reverse-engineered Substack's internal endpoints and built an MCP server that lets Claude answer analytics questions your dashboard can't. 18 tools, $0 hosting, full build log inside.

Apr 02, 2026
∙ Paid

Every time I open my Substack dashboard, the numbers are right there. Subscribers. Views. Open rate.

But none of them answer the question I actually care about: what’s working?

Which posts are converting free subscribers to paid? Is my traffic coming from email, search, or Notes shares? Is engagement actually improving, or did I just get lucky on one post?

The data exists. It’s just scattered across a dozen dashboard pages with no way to connect them. You can see your open rate. You can see your traffic sources. You can’t see the line between them.

So I built an MCP server that lets Claude do the analysis for me. 18 tools covering Substack analytics, subscribers, and revenue. Built on Substack’s internal API - the same endpoints that power SubflowAI in production.

This is the full build log - the decisions, the legal research, the moment something broke, and the architecture choice that made everything else simple.


New to MCP? Here’s the 60-Second Version

MCP (Model Context Protocol) is an open standard that lets AI assistants like Claude connect to external data sources - databases, APIs, files, anything. Instead of copy-pasting data into a chat window, you wire up a small server that Claude can call directly.

Think of it like this:

Without MCP, Claude can only work with what you paste into the conversation. With MCP, Claude can reach out and pull data from your tools on its own - your Substack dashboard, your database, your project files.

An MCP server exposes “tools” - small functions Claude or any MCP client can call. You might have a get_subscriber_stats tool that hits Substack’s API and returns your numbers. Claude sees the tool, calls it when relevant, and uses the result in its response.

The server runs locally on your machine (no cloud needed), communicates through stdin/stdout, and you add it to Claude’s config with a few lines of JSON.

What Is MCP? Model Context Protocol Explained Simply

If you want the full MCP foundation, here are the two must-reads from my FREE 8-lesson MCP course:

  • What is MCP? - the concept in plain English

  • Build an MCP Server in 30 Minutes - hands-on with FastMCP

You don’t need to read these to follow this build log - I’ll explain everything as we go. But if you want to go deeper after, the full course covers architecture, tools, resources, multi-agent collaboration, and shared memory.


How My Substack MCP Server Started

A few weeks ago I read a post where Karen Spinner wrote about building an MCP server to connect her newsletter archive with Claude Desktop. I followed her advice and built a similar server to search my own archive: 45 articles in a local SQLite database, complete with engagement metrics and content-pillar analysis.

But archive servers read a local database of past content. What if I could talk to Substack’s live dashboard? Real-time subscriber counts. Post-by-post traffic breakdowns. Which articles are driving paid conversions right now?

The problem: Substack has no public API.

The solution: I’d already reverse-engineered their internal endpoints two weeks earlier.


The Backstory: How I Already Had the Pieces

Two weeks before this build, I’d spent a weekend systematically mapping every API call Substack’s dashboard makes. I ended up with a comprehensive map of Substack’s internal API, every surface of the platform, and a standalone Python client (SubstackClient, 342 lines) with rate limiting, error handling, and pagination built in.

That wasn’t a weekend project for fun. I needed those APIs for SubflowAI, a Chrome extension I built for scheduling Substack Notes.

SubflowAI has been in production since January 2026: 69 users, 310+ commits across 11 releases, paying customers.

The same API endpoints that power SubflowAI’s analytics dashboard - subscriber growth tracking, post performance, engagement data - are what I wired into this MCP server.

Every endpoint in that client has been battle-tested by real users on real schedules. When authentication breaks, I know within hours. When Substack changes an endpoint, I catch it. That production history matters. When I decided to build the MCP server, I wasn’t guessing whether the APIs would work. I already knew they did.
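To make the shape of that client concrete, here's a hedged sketch of its pagination pattern. The offset/limit parameter names and the endpoint path in the comment are my illustrative assumptions, not SubstackClient's actual internals:

```python
# Sketch of offset/limit pagination, the pattern a dashboard-API client
# typically needs. Parameter names here are illustrative assumptions.
from typing import Callable, List

def fetch_all(fetch_page: Callable[[int, int], List[dict]], limit: int = 25) -> List[dict]:
    """Page through an offset/limit endpoint until a short page signals the end."""
    items: List[dict] = []
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        items.extend(page)
        if len(page) < limit:  # short or empty page means we've hit the end
            return items
        offset += limit

# Demo with a fake page source; a real fetch_page would wrap something like
# session.get(".../api/v1/post_management/published?offset=...&limit=...")
posts = [{"id": i} for i in range(60)]
print(len(fetch_all(lambda offset, limit: posts[offset:offset + limit])))  # 60
```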

Stack Your Work: Why the MCP Server Took 3 Hours

The Gap: What Existing Servers Are Missing

So when I sat down to build, my first instinct was: I’m the first person to do this. I wasn’t. A quick GitHub search turned up six existing Substack MCP servers. For about 10 minutes, I thought the project was dead.

Then I actually looked at what they built. They all solve content management. Read posts. Create drafts. Manage content.

The Gap: Content Management vs Analytics Intelligence

None of them extend into the analytics layer:

  • Post traffic sources (email vs. search vs. social vs. direct)

  • Per-post engagement (who liked, comment summaries)

  • Subscriber growth timeseries with network attribution

  • Dashboard KPIs with period-over-period comparison

  • Revenue/ARR tracking

  • Which posts drive paid conversions

These endpoints exist in Substack’s dashboard. They’re just hidden behind internal API calls that nobody had mapped until the SubflowAI work forced me to.


What You’ll Need

If you want to follow along and build your own version:

  • Python 3.10+ (basic familiarity - you’ll mostly be copying patterns)

  • Claude Code or Claude Desktop (to use the MCP server)

  • A Substack publication with dashboard access

  • Your substack.sid cookie from Chrome DevTools (I’ll show you how)

  • FastMCP library (pip install fastmcp)

  • Cost: $0 (no API keys, no hosting, everything runs locally)

Getting your session cookie (step zero):

  1. Log into your Substack dashboard in Chrome

  2. Press F12 to open DevTools

  3. Click the Application tab, then Cookies in the sidebar, then the .substack.com entry

  4. Find the cookie named substack.sid - copy its full value (starts with s%3A...)

  5. That’s your API key. It lasts about 30 days before you need to refresh it.
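Once you've copied the value, a quick sanity check saves debugging later. This is a sketch under one assumption: that the subscriber-stats endpoint (which you'll see again below) returns HTTP 200 for a live session. The format check just mirrors the s%3A prefix from step 4:

```python
import os

def looks_like_sid(value: str) -> bool:
    """Cheap format check: substack.sid values start with the s%3A prefix."""
    return value.startswith("s%3A") and len(value) > 10

def cookie_is_live(cookie: str, subdomain: str) -> bool:
    """Hit the subscriber-stats endpoint; HTTP 200 means the session works."""
    import requests  # imported here so the format check runs without it
    session = requests.Session()
    session.cookies.set("substack.sid", cookie, domain=".substack.com")
    session.headers["User-Agent"] = "Mozilla/5.0"
    resp = session.get(
        f"https://{subdomain}.substack.com/api/v1/publication/stats/subscribers",
        timeout=10,
    )
    return resp.status_code == 200

if __name__ == "__main__" and "SUBSTACK_SID" in os.environ:
    sid = os.environ["SUBSTACK_SID"]
    sub = os.environ.get("SUBSTACK_SUBDOMAIN", "yourpub")
    if not looks_like_sid(sid):
        print("warning: value doesn't look like a substack.sid cookie")
    print("cookie is live" if cookie_is_live(sid, sub) else "cookie expired or invalid")
```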


Hour 0: The Architecture Decision That Changed Everything

The first question wasn’t “what tools do I build?” It was “how does the user’s session cookie travel?”

MCP servers can run two ways:

  1. HTTP (hosted server). Users connect via a URL, like Notion’s MCP server, and the hosted server proxies their calls to Substack.

  2. stdio (local subprocess). The server runs on the user’s machine. API calls go directly from their computer to Substack.

I chose stdio. Here’s why.

The substack.sid cookie grants full account control. Post deletion. Subscriber email access. DM reading. Having that cookie transit through any third-party server - even one I control - is a trust-killer.

With stdio, the cookie never leaves the user’s machine. Zero hosting costs. Zero availability concerns. Zero security liability. Same pattern Firecrawl, Perplexity, and Notion use for their MCP servers. If you want the full picture of how MCP hosts, clients, and servers communicate, that architecture context is useful here.

What I told Claude Code:

I want to build a Substack MCP server using FastMCP in Python. It should use stdio transport, import our existing SubstackClient, and expose analytics tools that none of the existing servers have. Start with the dashboard KPI endpoint.

Claude generated the server skeleton in under 2 minutes. The architecture was right. The tool definitions needed work.

MCP Servers stdio vs HTTP: Why the Cookie Stays Local

The Legal Question I Had to Answer Before Going Further

This was the pivot point. I’d planned to open-source the server. But my next thought was: Can I legally distribute or publish a tool that uses reverse-engineered Substack APIs?

I spent an hour researching this. Here’s what I found.

In our favor:

  • hiQ Labs v. LinkedIn (Ninth Circuit): Accessing publicly available data isn’t “unauthorized access” under the Computer Fraud and Abuse Act

  • Van Buren v. United States (Supreme Court, 2021): Narrowed the CFAA - ToS violations alone don’t equal hacking

  • DMCA interoperability exception: Reverse engineering for interoperability is explicitly legal

  • Two unofficial Substack packages (substack-api, substack-sdk) already exist on npm using cookie auth. Neither has been taken down.

Against us:

  • Substack’s ToS broadly prohibits “unauthorized automated access”

  • A 2024 jury case found credential-based scraping CAN violate CFAA (but that was scraping other people’s data, not your own)

  • Substack could send a cease-and-desist

The deciding factor

With the stdio architecture, the cookie travels only between the user’s machine and Substack - it never touches a third-party server. Users access their own data on their own account. The server is read-only (no write operations in v1). And rate limiting is enforced: a 1.5-second delay between calls.
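That 1.5-second delay is easy to enforce client-side. A minimal limiter sketch (the class name is mine, not from the actual server):

```python
import time

class RateLimiter:
    """Enforce a minimum gap between outgoing API calls (1.5 s by default)."""

    def __init__(self, min_interval: float = 1.5):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        # Sleep just long enough that calls are at least min_interval apart.
        gap = self.min_interval - (time.monotonic() - self._last)
        if gap > 0:
            time.sleep(gap)
        self._last = time.monotonic()

# Usage: call limiter.wait() before every request to Substack.
limiter = RateLimiter()
```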

Verdict: Document the process, not the package

The gray area is distributing a ready-made tool that uses reverse-engineered endpoints. Teaching someone how to build one themselves? Entirely different category. That’s why this is a build log, not a GitHub release.


How to Discover Substack’s APIs Yourself

You don’t need anyone else’s endpoint map to get started. This approach works on its own. PluggedIn subscribers get 20 of mine, but the method below is what matters.

You can find the most useful ones in about 10 minutes with Chrome DevTools. Here’s how.

Open your Substack dashboard in Chrome. Now:

Step 1. Press F12 to open DevTools. Click the Network tab.

Step 2. In the filter bar, click “Fetch/XHR” - this filters out images, CSS, and scripts, showing only API calls.

Step 3. Navigate to any dashboard page. Watch the API calls appear in real-time.

Step 4. Click any call. You’ll see:

  • The URL (always starts with /api/v1/)

  • The HTTP method (GET, POST, PUT)

  • The full response body - this IS the API schema

Step 5. Try it now. Go to your Stats page. You’ll see calls to:

  • /api/v1/publish-dashboard/summary-v2 - your KPI dashboard

  • /api/v1/post_management/published - all your posts with stats

  • /api/v1/publication/stats/subscribers - subscriber counts

That’s 3 endpoints in 2 minutes. Each one can become an MCP tool. Navigate to more pages - Notes, Subscribers, individual post stats - and you’ll find more. The URL patterns are consistent. The response JSON is clean.

This manual approach will get you 10-15 solid endpoints. Enough to build a useful MCP server.
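To confirm what you’ve found, you can probe each discovered path and check the status code. A sketch, assuming the same cookie-auth setup as above - a 200 means the endpoint works for your account; a 401/403 usually means the cookie expired:

```python
import os

# The three paths discovered above, verbatim from the Network tab.
ENDPOINTS = [
    "/api/v1/publish-dashboard/summary-v2",
    "/api/v1/post_management/published",
    "/api/v1/publication/stats/subscribers",
]

def endpoint_url(subdomain: str, path: str) -> str:
    """Build the full dashboard-API URL for a publication subdomain."""
    return f"https://{subdomain}.substack.com{path}"

if __name__ == "__main__" and "SUBSTACK_SID" in os.environ:
    import requests  # only needed for the live probe
    session = requests.Session()
    session.cookies.set("substack.sid", os.environ["SUBSTACK_SID"], domain=".substack.com")
    session.headers["User-Agent"] = "Mozilla/5.0"
    sub = os.environ.get("SUBSTACK_SUBDOMAIN", "yourpub")
    for path in ENDPOINTS:
        resp = session.get(endpoint_url(sub, path), timeout=10)
        print(f"{resp.status_code}  {path}")
```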


Try It: A Minimal Working Example

Before we get into the full implementation, here’s a single-tool MCP server you can run right now. It connects to your Substack dashboard and returns your subscriber stats:

```python
# substack_mini.py
import os, requests
from fastmcp import FastMCP

mcp = FastMCP("Substack Mini")

@mcp.tool()
def get_subscriber_stats() -> str:
    """Get your subscriber counts: total, free, paid."""
    cookie = os.environ["SUBSTACK_SID"]
    subdomain = os.environ.get("SUBSTACK_SUBDOMAIN", "yourpub")
    session = requests.Session()
    session.cookies.set("substack.sid", cookie, domain=".substack.com")
    session.headers.update({"User-Agent": "Mozilla/5.0"})
    data = session.get(
        f"https://{subdomain}.substack.com/api/v1/publication/stats/subscribers"
    ).json()
    lines = ["# Subscriber Stats\n"]
    for key, val in data.items():
        if isinstance(val, int):
            lines.append(f"- **{key.replace('_', ' ').title()}:** {val:,}")
        else:
            lines.append(f"- **{key}:** {val}")
    return "\n".join(lines)

if __name__ == "__main__":
    mcp.run()
```

Add it to your Claude Code config:

```json
{
  "mcpServers": {
    "substack-mini": {
      "command": "python3",
      "args": ["substack_mini.py"],
      "env": {
        "SUBSTACK_SID": "s%3Ayour-cookie-here",
        "SUBSTACK_SUBDOMAIN": "yourpub"
      }
    }
  }
}
```

Restart Claude Code. Ask: “What are my subscriber stats?”

That’s one endpoint, one tool, working in under 5 minutes. Build from there.


Get PluggedIn

This is where the free build log ends. What follows is the 20 curated analytics endpoints (weeks of reverse-engineering, documented once), the full 18-tool server architecture, and the complete setup.

See what’s included →

© 2026 Dheeraj Sharma