
Rate Limits

SheetsToJson enforces rate limits to ensure fair usage and system stability. Rate limits vary by subscription plan.

Rate Limit Tiers

| Plan | Requests/Month | Requests/Day | Requests/Hour | Burst Limit |
|------|---------------:|-------------:|--------------:|------------:|
| Free Trial | 3,000 | 100 | 10 | 5/min |
| Starter | 10,000 | 333 | 50 | 20/min |
| Professional | 100,000 | 3,333 | 500 | 100/min |
| Business | 1,000,000 | 33,333 | 5,000 | 500/min |

Burst Limits

Burst limits prevent sudden traffic spikes. You can make requests up to the burst limit in rapid succession, but sustained usage is capped by the hourly and daily limits.
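A common way to model this behavior is a token bucket: the bucket holds up to the burst limit in tokens, each request spends one, and tokens refill at a steady rate. SheetsToJson's exact algorithm isn't documented, so treat this as an illustrative sketch, not the service's implementation:

```javascript
// Illustrative token-bucket model of a burst limit.
// capacity: burst limit (e.g. 20); refillPerMs: tokens regained per millisecond.
class TokenBucket {
  constructor(capacity, refillPerMs) {
    this.capacity = capacity;
    this.tokens = capacity;       // start with a full bucket
    this.refillPerMs = refillPerMs;
    this.last = 0;                // timestamp of the last request (ms)
  }

  // Returns true if a request at time `now` (ms) is allowed.
  tryRequest(now) {
    const elapsed = now - this.last;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

With a 20/min burst limit, 20 back-to-back requests succeed, then the client must wait for tokens to refill before the 21st is accepted.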

Rate Limit Headers

Every API response includes rate limit information in the headers:

```http
X-RateLimit-Limit: 10000
X-RateLimit-Remaining: 9847
X-RateLimit-Reset: 1705795200
```
  • X-RateLimit-Limit: Total requests allowed in current period
  • X-RateLimit-Remaining: Requests remaining in current period
  • X-RateLimit-Reset: Unix timestamp when the limit resets
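Header values arrive as strings, so it helps to parse them into numbers before doing arithmetic. A small helper (the name is illustrative; it works with anything exposing a Headers-style `get()`):

```javascript
// Parse the rate limit headers above into typed values.
// `headers` can be a fetch Response's Headers object (or any object with get()).
function parseRateLimitHeaders(headers) {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit'), 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining'), 10),
    resetAt: new Date(parseInt(headers.get('X-RateLimit-Reset'), 10) * 1000),
  };
}
```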

Rate Limit Exceeded

When you exceed your rate limit, you'll receive a 429 Too Many Requests response:

```json
{
  "success": false,
  "error": "rate_limit_exceeded",
  "message": "Rate limit exceeded. Try again in 3600 seconds.",
  "retry_after": 3600,
  "limit": {
    "allowed": 10000,
    "current": 10042,
    "reset_at": 1705795200
  }
}
```

The Retry-After header tells you how many seconds to wait before retrying:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 3600
```

Handling Rate Limits

Exponential Backoff

Implement exponential backoff when you hit rate limits:

```javascript
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Prefer the server's Retry-After; fall back to exponential backoff
      const retryAfter = response.headers.get('Retry-After');
      const delay = retryAfter ? parseInt(retryAfter, 10) * 1000 : Math.pow(2, i) * 1000;

      console.log(`Rate limited. Retrying in ${delay}ms...`);
      await new Promise(resolve => setTimeout(resolve, delay));
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}
```
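A fixed `2^i` delay can cause many clients to retry in lockstep after a shared outage, re-triggering the limit together. Adding random jitter spreads retries out. A hypothetical helper sketching "full jitter" backoff (the name and defaults are illustrative):

```javascript
// Full-jitter backoff: pick a random delay in [0, base * 2^attempt),
// capped at maxDelay milliseconds.
function backoffDelay(attempt, base = 1000, maxDelay = 60000) {
  const cap = Math.min(maxDelay, base * Math.pow(2, attempt));
  return Math.floor(Math.random() * cap);
}
```

Swap this in for the `Math.pow(2, i) * 1000` fallback when no `Retry-After` header is present.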

Check Remaining Requests

Monitor your rate limit status before making requests:

```javascript
async function makeRequest(url, options) {
  const response = await fetch(url, options);

  // Header values are strings; parse before comparing
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);

  console.log(`Requests remaining: ${remaining}`);
  console.log(`Resets at: ${new Date(reset * 1000).toISOString()}`);

  if (remaining < 100) {
    console.warn('⚠️  Rate limit running low!');
  }

  return response;
}
```

Request Queuing

Implement a queue to avoid bursting the rate limit:

```javascript
class RateLimitedQueue {
  constructor(maxPerMinute) {
    this.queue = [];
    this.processing = false;
    this.maxPerMinute = maxPerMinute;
    this.interval = 60000 / maxPerMinute; // ms between requests
  }

  async enqueue(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      if (!this.processing) {
        this.process();
      }
    });
  }

  async process() {
    this.processing = true;

    while (this.queue.length > 0) {
      const { fn, resolve, reject } = this.queue.shift();

      try {
        const result = await fn();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      if (this.queue.length > 0) {
        await new Promise(r => setTimeout(r, this.interval));
      }
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(20); // 20 requests per minute

async function fetchData(id) {
  return queue.enqueue(() =>
    fetch(`https://api.sheetstojson.com/api/v1/abc123/Users/${id}`, {
      headers: { 'X-API-Key': 'your_key' }
    })
  );
}
```

Optimization Strategies

1. Caching

Cache responses to reduce API calls:

```javascript
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getCachedData(url, options) {
  const cacheKey = `${url}:${JSON.stringify(options)}`;
  const cached = cache.get(cacheKey);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    console.log('Cache hit!');
    return cached.data;
  }

  const response = await fetch(url, options);
  const data = await response.json();

  cache.set(cacheKey, {
    data,
    timestamp: Date.now()
  });

  return data;
}
```
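One caveat with TTL-only caching: a write can leave stale entries behind for up to the full TTL. One option is to invalidate affected entries after a successful POST or PUT. A minimal sketch, assuming cache keys start with the resource URL as in the example above (the helper name is illustrative):

```javascript
// Delete every cached entry whose key starts with `urlPrefix`,
// e.g. call after a successful write to that resource.
// `cache` is a Map of cacheKey -> { data, timestamp }.
function invalidatePrefix(cache, urlPrefix) {
  for (const key of cache.keys()) {
    if (key.startsWith(urlPrefix)) cache.delete(key);
  }
}
```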

2. Batch Operations

Create multiple rows in a single request:

```javascript
// ❌ Bad: 100 requests
for (const user of users) {
  await fetch('/api/v1/abc123/Users', {
    method: 'POST',
    body: JSON.stringify(user)
  });
}

// ✅ Good: 1 request
await fetch('/api/v1/abc123/Users', {
  method: 'POST',
  body: JSON.stringify(users) // Array of users
});
```

3. Pagination

Use appropriate page sizes:

```javascript
// ❌ Bad: Fetching all 10,000 rows
const response = await fetch('/api/v1/abc123/Users?limit=10000');

// ✅ Good: Paginate through data
async function* paginateUsers(pageSize = 100) {
  let offset = 0;

  while (true) {
    const response = await fetch(
      `/api/v1/abc123/Users?limit=${pageSize}&offset=${offset}`
    );
    const { data, meta } = await response.json();

    yield data;

    if (offset + pageSize >= meta.total) break;
    offset += pageSize;
  }
}

// Usage
for await (const users of paginateUsers()) {
  console.log(`Processing ${users.length} users...`);
  // Process batch
}
```

4. Webhooks (Coming Soon)

Instead of polling for changes, use webhooks to get notified:

```javascript
// ❌ Bad: Poll every minute (1,440 requests/day)
setInterval(async () => {
  const response = await fetch('/api/v1/abc123/Users');
  checkForChanges(await response.json());
}, 60000);

// ✅ Good: Use webhooks (0 requests)
// Configure webhook in dashboard to receive change notifications
```

Monitoring Usage

Dashboard Analytics

View your usage in the dashboard:

  • Requests over time (hourly, daily, monthly)
  • Endpoint usage breakdown
  • Peak usage times
  • Rate limit incidents

Usage API

Get programmatic access to your usage data:

```bash
curl -H "X-API-Key: your_api_key" \
  https://api.sheetstojson.com/billing/usage
```

Response:

```json
{
  "success": true,
  "data": {
    "current_period": {
      "start": "2025-01-01T00:00:00Z",
      "end": "2025-02-01T00:00:00Z",
      "requests_used": 8234,
      "requests_limit": 10000,
      "percentage_used": 82.34
    },
    "daily_average": 265,
    "projected_usage": 9845
  }
}
```
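These fields make it straightforward to alert before the quota runs out, rather than after. A minimal sketch that flags usage above a configurable threshold (field names mirror the `current_period` object above; the helper itself is illustrative):

```javascript
// Summarize quota consumption from the usage API's current_period object.
// warnAt is the fraction of the quota at which to raise a flag (default 80%).
function usageStatus(period, warnAt = 0.8) {
  const ratio = period.requests_used / period.requests_limit;
  return {
    ratio,
    nearLimit: ratio >= warnAt,
    remaining: period.requests_limit - period.requests_used,
  };
}
```

Run a check like this on a schedule and page yourself (or pre-emptively upgrade) when `nearLimit` turns true.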

Upgrading Your Plan

If you're consistently hitting rate limits:

  1. Review your usage - Check analytics to find optimization opportunities
  2. Implement caching - Reduce duplicate requests
  3. Upgrade your plan - Get higher limits instantly

View pricing plans

Rate Limit FAQs

Do rate limits reset immediately after the period ends?

Yes, rate limits reset exactly at the end of each period (hour, day, or month).

Are rate limits per API key or per account?

Rate limits are per account, shared across all your API keys.

Do failed requests count toward rate limits?

  • ✅ Yes: 4xx errors (bad requests, not found, etc.)
  • ❌ No: 5xx errors (server errors)
  • ❌ No: 429 rate limit errors

Can I request a temporary rate limit increase?

Business plan customers can contact support for temporary increases during expected traffic spikes.

What happens if I exceed my monthly limit?

The API will return 429 errors until your next billing cycle begins. Upgrade your plan to restore access immediately.

Best Practices

  1. Monitor Usage: Check rate limit headers with every request
  2. Implement Retry Logic: Always handle 429 responses with exponential backoff
  3. Cache Aggressively: Store responses that don't change frequently
  4. Batch Operations: Combine multiple operations into single requests
  5. Use Pagination: Don't fetch more data than you need
  6. Plan Ahead: Upgrade before hitting limits, not after
