Understanding Rate Limits: 20 Requests per Second

The Sorsa API enforces a simple, universal rate limit to keep the service fast and stable for everyone.

The rule: 20 requests per second

Every API key is limited to 20 requests per second. That is the only rate limit. There are no separate limits per endpoint, no 15-minute windows, no hourly resets, and no differences between subscription plans. If you stay under 20 req/s, you will never see a rate limit error. A few details worth knowing:
  • The limit applies per API key, not per IP address. If you have multiple keys, each has its own 20 req/s allowance.
  • Every request counts equally. A single /info call and a /tweet-info-bulk call with 100 tweets both count as one request.
  • The limit is not a sliding window. It resets every second. If you send 20 requests at T+0.00, you can send another 20 at T+1.00.
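The fixed one-second window described above can be mirrored client-side with a simple counter. This is a minimal sketch; the `SecondWindowLimiter` name and `try_acquire` interface are ours, and only the 20 req/s figure comes from this guide:

```python
import time

class SecondWindowLimiter:
    """Client-side model of a fixed per-second window: the counter resets each second."""

    def __init__(self, limit=20):
        self.limit = limit
        self.window = None  # integer second the current window belongs to
        self.count = 0

    def try_acquire(self, now=None):
        """Return True if a request may be sent this second, False if it would exceed the limit."""
        now = int(time.time() if now is None else now)
        if now != self.window:  # crossed into a new second: the counter resets
            self.window = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```

The `now` parameter exists only to make the class easy to test; in production the wall clock drives the window, matching the server-side reset at every full second.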

What happens when you exceed the limit

If you send more than 20 requests in a single second, the API returns 429 Too Many Requests for the excess requests. No data is lost and your key is not penalized. Simply wait until the next second and retry. There are no x-ratelimit-remaining or x-ratelimit-reset headers. Since the limit resets every second, tracking remaining credits in headers would add complexity without practical value.

How to stay within the limit

Most applications never come close to 20 req/s during normal operation. If you are running batch jobs or processing large datasets, here are two simple approaches.

Option 1: Fixed delay between requests

The safest approach is to add a 50ms pause (1/20th of a second) between consecutive requests. This guarantees you never exceed the limit. Python:
import time
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.sorsa.io/v3"

usernames = ["elonmusk", "naval", "paulg", "vaborsh"]

for username in usernames:
    response = requests.get(
        f"{BASE_URL}/info",
        params={"username": username},
        headers={"ApiKey": API_KEY}
    )
    print(response.json()["display_name"])
    time.sleep(0.05)  # 50ms between requests
JavaScript:
const API_KEY = "YOUR_API_KEY";
const BASE_URL = "https://api.sorsa.io/v3";

const usernames = ["elonmusk", "naval", "paulg", "vaborsh"];

for (const username of usernames) {
  const res = await fetch(`${BASE_URL}/info?username=${username}`, {
    headers: { "ApiKey": API_KEY }
  });
  const data = await res.json();
  console.log(data.display_name);
  await new Promise(r => setTimeout(r, 50)); // 50ms between requests
}
Option 2: Retry on 429

If you prefer to run at full speed and handle rate limits reactively, catch 429 responses and retry after a short pause. Python:
import time
import requests

def fetch_with_retry(url, headers, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code == 429:
            time.sleep(1)
            continue

        return response.json()

    raise Exception("Rate limit: max retries exceeded")
JavaScript:
async function fetchWithRetry(url, headers, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url, { headers });

    if (res.status === 429) {
      await new Promise(r => setTimeout(r, 1000));
      continue;
    }

    return await res.json();
  }
  throw new Error("Rate limit: max retries exceeded");
}
In practice, combining both approaches works best: use a small delay between requests to avoid 429s in the first place, and add retry logic as a safety net.
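One way to sketch that combination in Python (the `fetch_paced` helper is ours, reusing the 50ms pacing and one-second backoff shown above):

```python
import time
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.sorsa.io/v3"

def fetch_paced(url, params=None, max_retries=3, delay=0.05):
    """50ms pacing keeps us under 20 req/s; the 429 branch is a safety net."""
    for attempt in range(max_retries):
        response = requests.get(url, params=params, headers={"ApiKey": API_KEY})
        if response.status_code == 429:
            time.sleep(1)  # wait for the next one-second window, then retry
            continue
        time.sleep(delay)  # pace before the caller sends the next request
        return response.json()
    raise Exception("Rate limit: max retries exceeded")
```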

Batch endpoints reduce the need for high throughput

Before optimizing for speed, consider whether batch endpoints can reduce your total request count:
  • /info-batch fetches multiple user profiles in a single request
  • /tweet-info-bulk fetches up to 100 tweets in a single request
One batch request counts the same as one regular request against both your rate limit and your quota. If you are fetching data for many users or tweets, batching is almost always more efficient than sending individual requests in parallel. For more patterns like this, see the Optimizing API Usage guide.
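As a rough illustration of the savings, chunking tweet IDs into groups of 100 (the /tweet-info-bulk maximum) turns hundreds of individual calls into a handful of requests. The `chunk` helper below is ours; check the endpoint reference for the exact request shape:

```python
def chunk(ids, size=100):
    """Split a list of IDs into batches of at most `size` items (100 = the bulk limit)."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

# 250 tweet IDs become 3 bulk requests instead of 250 individual ones,
# which also means 3 units against the 20 req/s limit instead of 250.
tweet_ids = [str(n) for n in range(250)]
batches = chunk(tweet_ids)
```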

Need a higher limit?

If your project requires sustained throughput above 20 req/s (for example, 100+ req/s for a real-time monitoring pipeline), contact us at [email protected] or reach us on Discord to discuss dedicated infrastructure options.

Next steps

  • Pagination - Fetch large datasets efficiently within the rate limit
  • Error Codes - Full reference for 400, 403, 404, 429, and 500 responses