Installation
Install the SDK with npm:
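A typical install command (assuming `@mendable/firecrawl-js` as the npm package name — check npm for the current package):

```shell
npm install @mendable/firecrawl-js
```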
Usage
- Get an API key from firecrawl.dev
- Set the API key as an environment variable named `FIRECRAWL_API_KEY`, or pass it as a parameter to the `FirecrawlApp` class.
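A minimal setup sketch, assuming the `@mendable/firecrawl-js` package and the `FirecrawlApp` class named above:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

// Pass the key explicitly, or omit it and let the SDK read
// FIRECRAWL_API_KEY from the environment.
const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });
```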
Scraping a URL
Scrape a single URL and get back structured page data with the `scrape` method.
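A sketch of a single-page scrape; the `formats` option and the `markdown` field on the result are assumptions based on the API's markdown/HTML output options:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

// Scrape one page and request both markdown and raw HTML output.
const doc = await app.scrape("https://firecrawl.dev", {
  formats: ["markdown", "html"],
});
console.log(doc.markdown);
```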
Crawling a Website
Crawl an entire website starting from a single URL with the `crawl` method. You can set a page limit, restrict the crawl to specific domains, and choose output formats. See Pagination for auto and manual pagination.
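A sketch of a blocking crawl; the `limit` and `scrapeOptions` option names and the `data` field on the result are assumptions:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const result = await app.crawl("https://firecrawl.dev", {
  limit: 10,                                // stop after 10 pages
  scrapeOptions: { formats: ["markdown"] }, // output format per page
});

for (const doc of result.data) {
  console.log(doc.metadata?.sourceURL);
}
```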
Sitemap-Only Crawl
Use `sitemap: "only"` to crawl sitemap URLs only (the start URL is always included, and HTML link discovery is skipped).
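A sketch of a sitemap-only crawl using the `sitemap: "only"` option named above (the `limit` option is an assumption):

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const result = await app.crawl("https://firecrawl.dev", {
  sitemap: "only", // crawl sitemap URLs only; skip HTML link discovery
  limit: 100,
});
```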
Start a Crawl
Start a crawl without waiting for it to finish using `startCrawl`. The method returns a job ID you can poll later. Use `crawl` instead when you want to block until completion. See Pagination for paging behavior and limits.
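A sketch of starting a crawl asynchronously; the `id` field on the returned job is an assumption:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const job = await app.startCrawl("https://firecrawl.dev", { limit: 50 });
console.log(job.id); // save this job ID to poll or cancel later
```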
Checking Crawl Status
Check whether a crawl is still running, completed, or failed with the `checkCrawlStatus` method. Pass the job ID returned by `startCrawl`.
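A sketch of polling a crawl's status; the `status` field and its values are assumptions:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const job = await app.startCrawl("https://firecrawl.dev", { limit: 50 });

const status = await app.checkCrawlStatus(job.id);
console.log(status.status); // e.g. "scraping" or "completed"
```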
Cancelling a Crawl
Cancel a running crawl with the `cancelCrawl` method. Pass the job ID returned by `startCrawl`.
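A sketch of cancelling a crawl by job ID; the shape of the return value is an assumption:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const job = await app.startCrawl("https://firecrawl.dev", { limit: 50 });

const cancelled = await app.cancelCrawl(job.id);
console.log(cancelled); // cancellation acknowledgement
```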
Mapping a Website
Discover all URLs on a website with the `map` method. Pass a starting URL and get back a list of discovered pages.
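A sketch of mapping a site; the `links` field holding the discovered URLs is an assumption:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const res = await app.map("https://firecrawl.dev");
console.log(res.links); // array of discovered page URLs
```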
Crawling a Website with WebSockets
Stream crawl results in real time with the `crawlUrlAndWatch` method. You receive each page as it is crawled instead of waiting for the entire job to finish.
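A sketch of a streaming crawl; the event names (`document`, `error`, `done`) and the `addEventListener`/`detail` shape are assumptions based on an EventTarget-style watcher:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const watcher = await app.crawlUrlAndWatch("https://firecrawl.dev", { limit: 10 });

watcher.addEventListener("document", (e) => {
  console.log("crawled:", e.detail); // one page at a time, as it completes
});
watcher.addEventListener("error", (e) => console.error(e.detail));
watcher.addEventListener("done", () => console.log("crawl finished"));
```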
Pagination
Firecrawl endpoints for crawl and batch return a `next` URL when more data is available. The Node SDK auto-paginates by default and aggregates all documents; in that case `next` will be `null`. You can disable auto-pagination or set limits.
Crawl
Use the waiter method `crawl` for the simplest experience, or start a job and page manually.
Simple crawl (auto-pagination, default)
- See the default flow in Crawling a Website.
Manual crawl with pagination control (single page)
- Start a job, then fetch one page at a time with `autoPaginate: false`.
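A sketch of manual paging; passing `autoPaginate` as an option to `checkCrawlStatus`, and the `data`/`next` fields on the result, are assumptions:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const job = await app.startCrawl("https://firecrawl.dev", { limit: 100 });

// With auto-pagination disabled, `next` holds a URL while more pages remain.
const page = await app.checkCrawlStatus(job.id, { autoPaginate: false });
console.log(page.data.length, page.next);
```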
Manual crawl with limits (auto-pagination + early stop)
- Keep auto-pagination on but stop early with `maxPages`, `maxResults`, or `maxWaitTime`.
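A sketch of early-stop limits, using the option names listed above; where they are passed and the time unit for `maxWaitTime` are assumptions:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const job = await app.startCrawl("https://firecrawl.dev", { limit: 500 });

// Auto-pagination stays on, but stops early at whichever limit hits first.
const status = await app.checkCrawlStatus(job.id, {
  maxPages: 5,     // stop after 5 result pages
  maxResults: 200, // or after 200 documents
  maxWaitTime: 30, // or after 30 seconds (unit assumed)
});
```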
Batch Scrape
Use the waiter method `batchScrape`, or start a job and page manually.
Simple batch scrape (auto-pagination, default)
- See the default flow in Batch Scrape.
Manual batch scrape with pagination control (single page)
- Start a job, then fetch one page at a time with `autoPaginate: false`.
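A sketch of manual batch paging; the method names `startBatchScrape` and `checkBatchScrapeStatus` are assumptions mirroring the crawl methods:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const urls = ["https://firecrawl.dev", "https://docs.firecrawl.dev"];
const job = await app.startBatchScrape(urls, { formats: ["markdown"] });

// Fetch one page of results; `next` is non-null while more remain.
const page = await app.checkBatchScrapeStatus(job.id, { autoPaginate: false });
console.log(page.data.length, page.next);
```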
Manual batch scrape with limits (auto-pagination + early stop)
- Keep auto-pagination on but stop early with `maxPages`, `maxResults`, or `maxWaitTime`.
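A sketch of early-stop limits for a batch job, under the same assumptions as the crawl version (method names and option placement):

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

const urls = ["https://firecrawl.dev", "https://docs.firecrawl.dev"];
const job = await app.startBatchScrape(urls, { formats: ["markdown"] });

const status = await app.checkBatchScrapeStatus(job.id, {
  maxPages: 2,
  maxResults: 100,
  maxWaitTime: 30, // unit assumed to be seconds
});
```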
Browser
Launch cloud browser sessions and execute code remotely.
Create a Session
Execute Code
Profiles
Save and reuse browser state (cookies, localStorage, etc.) across sessions:
Connect via CDP
For full Playwright control, connect directly using the CDP URL:
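A sketch of attaching Playwright over CDP. `connectOverCDP` is standard Playwright; where the CDP URL comes from in the Firecrawl session object is an assumption, so it is read from a placeholder environment variable here:

```typescript
import { chromium } from "playwright";

// The CDP URL comes from the browser session you created with the SDK;
// the exact property that exposes it is an assumption, so a placeholder
// environment variable stands in for it here.
const cdpUrl = process.env.FIRECRAWL_CDP_URL!;

const browser = await chromium.connectOverCDP(cdpUrl);
const context = browser.contexts()[0] ?? (await browser.newContext());
const page = context.pages()[0] ?? (await context.newPage());

await page.goto("https://example.com");
console.log(await page.title());

await browser.close();
```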
List & Close Sessions
Scrape-Bound Interactive Session
Use a scrape job ID to keep interacting with the replayed page context from that scrape:
- `interact(jobId, {...})` runs code in the scrape-bound browser session.
- The first `interact` call auto-initializes the session from the scrape context.
- Additional `interact` calls on the same job ID reuse that live browser state.
- `stopInteraction(jobId)` stops the interactive session when you are done.
Error Handling
The SDK throws descriptive exceptions for any errors returned by the Firecrawl API. Wrap calls in `try/catch` blocks as shown in the examples above.
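A minimal error-handling sketch around a scrape call:

```typescript
import FirecrawlApp from "@mendable/firecrawl-js";

const app = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });

try {
  const doc = await app.scrape("https://firecrawl.dev");
  console.log(doc.markdown);
} catch (err) {
  // The SDK throws descriptive errors for API failures
  // (rate limits, invalid URLs, auth errors, ...).
  console.error("Firecrawl request failed:", err);
}
```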
Are you an AI agent that needs a Firecrawl API key? See firecrawl.dev/agent-onboarding/SKILL.md for automated onboarding instructions.

