Skip to content

feat: add URLScan.io as passive subdomain source#1710

Merged
dogancanbakir merged 4 commits intodevfrom
add-urlscan-source
Feb 3, 2026
Merged

feat: add URLScan.io as passive subdomain source#1710
dogancanbakir merged 4 commits intodevfrom
add-urlscan-source

Conversation

@Jigardjain
Copy link
Copy Markdown
Contributor

@Jigardjain Jigardjain commented Jan 23, 2026

Add URLScan.io as Passive Subdomain Source

Description

This PR adds URLScan.io as a new high-quality passive subdomain enumeration source for Subfinder.

Changes

  • ✨ Added new source: pkg/subscraping/sources/urlscan/urlscan.go
  • 🔧 Registered URLScan source in pkg/passive/sources.go
  • ⚡ Added rate limit configuration in pkg/runner/options.go (urlscan=2/s)

Source Details

Property Value
Name urlscan
API Endpoint https://urlscan.io/api/v1/search/
API Key Required (header: api-key)
Rate Limit 2 requests/second
Free Tier 1000 requests/day
IsDefault true
HasRecursiveSupport true

Why URLScan.io?

  • High-quality, actively maintained service
  • Free tier with generous limits (1000 req/day)
  • Unique subdomain data from web scanning
  • Better alternative to deprecated sources
  • Used by other popular tools

Testing

✅ Successfully tested with uber.com (19+ subdomains found)
✅ No linter errors
✅ Follows project code conventions
✅ Proper error handling for rate limits

Configuration

Users add API key to config:
urlscan:

  • your_api_key_here

Summary by CodeRabbit

  • New Features

    • URLScan added as a passive discovery source.
    • Optional API key support for authenticated URLScan queries.
    • Default rate limit for URLScan set to 1 request/second.
  • Tests

    • Test expectations updated to include URLScan across relevant source lists.
    • Additional sources added to ignore list for key-less test runs.
  • Bug Fixes

    • Improved error reporting in certificate-log processing.

✏️ Tip: You can customize this high-level summary in your review settings.

@coderabbitai
Copy link
Copy Markdown

coderabbitai bot commented Jan 23, 2026

Warning

Rate limit exceeded

@Jigardjain has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 2 minutes and 26 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

Walkthrough

Adds a new urlscan passive source with API-key support, paginated search_after queries, retry/backoff and statistics; registers it in passive sources, updates default rate limits and tests, and adjusts error wrapping in a ctlogs file. (49 words)

Changes

Cohort / File(s) Summary
New URLScan source
pkg/subscraping/sources/urlscan/urlscan.go
Adds urlscan.Source implementing API key management, Run with paginated authenticated requests (search_after), JSON decoding, subdomain extraction, retry/backoff on 429/503, and statistics/result emission via channel.
Passive sources registration
pkg/passive/sources.go
Appended &urlscan.Source{} to AllSources to include urlscan among passive sources.
Default rate limits
pkg/runner/options.go
Added urlscan=1/s to defaultRateLimits.
Tests updated
pkg/passive/sources_test.go, pkg/passive/sources_wo_auth_test.go
Updated expected source lists to include urlscan; added leakix, reconeer, and sitedossier to ignored sources in TestSourcesWithoutKeys.
Error handling tweak
pkg/subscraping/sources/facebook/ctlogs.go
Replaced errorutil wrapping with fmt.Errorf(...) when reporting JSON unmarshal errors.

Sequence Diagram(s)

sequenceDiagram
    participant Runner as Runner/Caller
    participant Source as urlscan.Source
    participant API as urlscan.io API
    participant Consumer as Results Consumer

    rect rgba(200,230,255,0.5)
    Runner->>Source: Run(ctx, domain, session)
    end

    rect rgba(230,255,200,0.5)
    Source->>Source: select random API key
    Source->>API: GET /search/?q=... (api-key header, paginated, search_after)
    API-->>Source: JSON response (results, has_more, sort values)
    Source->>Source: parse results, extract subdomains, update stats
    Source->>API: GET next page (with search_after) [retry/backoff on 429/503]
    end

    rect rgba(255,230,200,0.5)
    Source->>Consumer: emit Subdomain results (channel)
    Consumer-->>Source: context cancel / receive
    Source->>Source: update counters, close channel
    end
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I hop through pages, sniff each hidden name,
Keys on my back, I dance around the rate,
I fetch and parse and tally every claim,
I back off when the servers hesitate,
A curious rabbit mapping every gate.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The pull request title accurately summarizes the main change: adding URLScan.io as a new passive subdomain source to the project.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch add-urlscan-source

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@Jigardjain Jigardjain linked an issue Jan 23, 2026 that may be closed by this pull request
@Jigardjain
Copy link
Copy Markdown
Contributor Author

@dogancanbakir leakix test is failing here. Let me know what needs to be done?

@dogancanbakir
Copy link
Copy Markdown
Member

@Jigardjain This is great! 🔥 We can do the following:

  • See if the URLScan supports pagination; if yes, we can add it, so we don't miss any data coming from the API
  • Create an issue to track leakix test failure

@Jigardjain
Copy link
Copy Markdown
Contributor Author

@dogancanbakir URLscan supports pagination and it is been implemented. Also, i have created an issue for the leakix test failing.

Copy link
Copy Markdown

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@pkg/subscraping/sources/urlscan/urlscan.go`:
- Around line 60-66: The Run method resets s.errors, s.results, and s.requests
but not s.skipped, causing stale stats; update the Run(ctx context.Context,
domain string, session *subscraping.Session) method to set s.skipped = false at
the start alongside the other counters so Statistics() reflects the current run
state (modify the Source.Run function where s.errors/s.results/s.requests are
initialized).
- Around line 210-244: The current request loop returns resp,nil for non-OK
statuses (when err==nil), causing callers like enumerate to decode error
payloads as success; update the logic in the retry block (the code handling
resp, attempt, maxRetries, backoff in urlscan.go) so that for any resp with
StatusCode != http.StatusOK and not a retryable status (not 429/503) you call
session.DiscardHTTPResponse(resp) and return a non-nil error (e.g.
fmt.Errorf("unexpected status %d", resp.StatusCode)) instead of returning
resp,nil; keep existing retry behavior for 429/503 using the
X-Rate-Limit-Reset-After header, ctx cancellation, and exponential backoff.

Comment thread pkg/subscraping/sources/urlscan/urlscan.go
Comment thread pkg/subscraping/sources/urlscan/urlscan.go Outdated
Copy link
Copy Markdown

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@pkg/subscraping/sources/urlscan/urlscan.go`:
- Around line 210-259: The final "return nil, fmt.Errorf(\"max retries
exceeded\")" after the retry loop is unreachable because every path inside the
for attempt := 0; attempt <= maxRetries; attempt++ loop returns; remove that
trailing return to clean up dead code (delete the return nil, fmt.Errorf("max
retries exceeded") line in pkg/subscraping/sources/urlscan/urlscan.go) or
alternatively change the loop to a strict < maxRetries form if you intend a
post-loop fallback; target the retry loop using the session.Get / attempt logic
and the trailing return statement for the fix.
🧹 Nitpick comments (1)
pkg/subscraping/sources/urlscan/urlscan.go (1)

116-122: Unnecessary DiscardHTTPResponse call when resp is nil.

When makeRequestWithRetry returns an error, it already discards the response and returns nil for resp. Line 120 calls DiscardHTTPResponse(resp) with nil, which is a no-op.

🧹 Suggested cleanup
 		resp, err := s.makeRequestWithRetry(ctx, session, searchURL, headers)
 		if err != nil {
 			results <- subscraping.Result{Source: s.Name(), Type: subscraping.Error, Error: err}
 			s.errors++
-			session.DiscardHTTPResponse(resp)
 			return
 		}

Comment thread pkg/subscraping/sources/urlscan/urlscan.go Outdated
Add URLScan.io as a new passive subdomain enumeration source with full
pagination support and robust rate limiting handling.

Features:
- Fetches subdomains from URLScan.io Search API
- Implements cursor-based pagination using search_after parameter
- Extracts domains from task.domain, task.url, page.domain, page.url fields
- Requires API key (free tier available at urlscan.io)

Rate Limiting:
- Conservative pagination delay (10s between pages) to respect strict burst limits
- Exponential backoff retry logic for 429/503 responses
- Respects X-Rate-Limit-Reset-After header for dynamic backoff
- Limited to 5 pages max (500 results) to avoid quota exhaustion

Configuration:
- Max 5 pages per enumeration (configurable via maxPages constant)
- 100 results per page (configurable via maxPerPage constant)
- 2 retry attempts for rate-limited requests
- 20 second initial backoff, doubles on each retry

Changes:
- pkg/subscraping/sources/urlscan/urlscan.go: New URLScan source implementation
- pkg/passive/sources.go: Register URLScan source
- pkg/passive/sources_test.go: Add URLScan to test lists
- pkg/runner/options.go: Add urlscan to source options
- .github/workflows/build-test.yml: Add URLSCAN_API_KEY secret

Closes: Feature request for URLScan.io integration
Replace deprecated github.com/projectdiscovery/utils/errors package
with standard Go error wrapping using fmt.Errorf to fix staticcheck
SA1019 linter errors.
Add leakix, reconeer, and sitedossier to ignored sources list:
- leakix: now requires API key (returns 401)
- reconeer: now requires API key (returns 401)
- sitedossier: flaky, returns no results in CI
Remove custom pagination delay and retry logic since the session
already handles rate limiting via MultiRateLimiter. This aligns
with how other sources (shodan, virustotal) are implemented.
@dogancanbakir dogancanbakir merged commit 8a57c4f into dev Feb 3, 2026
10 checks passed
@dogancanbakir dogancanbakir deleted the add-urlscan-source branch February 3, 2026 17:12
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Add URLScan.io as passive subdomain source

2 participants