n8n API Pagination & Rate Limits: Reliable Integrations Without Timeouts

Master API pagination and rate limits in n8n: cursor/page strategies, retries with backoff, batching, and resilient patterns for large data syncs.

12 min read
Intermediate
2025-09-20

Large API syncs fail for two reasons: you pull too much too fast (rate limits) or you don’t paginate correctly (missed/duplicate records). This guide shows how to implement cursor/page pagination, respect provider limits, and add retries with backoff in n8n so your data syncs finish reliably.

Tip: Looking for webhook hardening? Read our companion guide: n8n Webhook Best Practices.

What You’ll Build

A resilient flow that:

  • Fetches all pages using page or cursor strategies
  • Handles 429/5xx with retries and exponential backoff
  • Batches processing using Split In Batches
  • Resumes safely using checkpoints to avoid re-fetching

Pagination Strategies

1) Page/Offset Pagination

  • Params: page + limit or offset + limit
  • Risk: gaps/duplicates when data changes during iteration
// Example: GET /items?page={{$json.page}}&limit=100
// Code node: default the page counter before the HTTP Request runs
const page = $json.page ?? 1;
return [{ json: { page, limit: 100 } }];

Loop pattern:

  1. Initialize page=1
  2. HTTP Request → parse items
  3. IF items.length < limit → done
  4. Else page++ and continue
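
In n8n, the loop control for steps 3–4 fits in a Code node placed after the HTTP Request. A minimal sketch (Run Once for All Items mode; the upstream node name "Init Page" and the items field are illustrative, so adjust them to your workflow):

// Decide whether another page is needed
const limit = 100;
const items = $input.first().json.items ?? [];
const page = $('Init Page').first().json.page ?? 1;

if (items.length < limit) {
  return [{ json: { done: true } }]; // short page → last page reached
}
return [{ json: { done: false, page: page + 1 } }]; // loop back for the next page

An IF node on done then routes either back to the HTTP Request or onward to processing.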

2) Cursor/Token Pagination (Recommended)

  • Param: cursor or next token from response
  • Stable for changing datasets
// Extract the next cursor from the response (null when there are no more pages)
const next = $json.response?.next || null;
return [{ json: { next } }];

Loop pattern:

  1. Start with empty cursor
  2. HTTP Request with cursor
  3. Save next cursor; continue until null
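
If the dataset is small enough, you can also drain the cursor inside a single Code node instead of a node loop. A sketch, assuming a hypothetical https://api.example.com/items endpoint that returns { items, next }, and that your n8n version exposes this.helpers.httpRequest in the Code node:

// Follow the cursor until the API stops returning one
const all = [];
let cursor = null;

do {
  const res = await this.helpers.httpRequest({
    method: 'GET',
    url: 'https://api.example.com/items',
    qs: { limit: 100, ...(cursor ? { cursor } : {}) },
  });
  all.push(...(res.items ?? []));
  cursor = res.next ?? null; // null → no more pages
} while (cursor);

return all.map((it) => ({ json: it }));

This accumulates everything in memory, so reserve it for modest result sets; large syncs should stick with the node loop plus the checkpointing described below.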

Rate Limits & Backoff

Handle 429 and transient 5xx with exponential backoff and jitter.

// Backoff helper: exponential delay, capped, with random jitter to
// spread out retries from concurrent executions
function backoff(attempt) {
  const base = 500; // ms
  const max = 8000; // ms cap
  const jitter = Math.floor(Math.random() * 250);
  return Math.min(max, base * 2 ** attempt + jitter);
}

// In a Code node after the HTTP Request (configure the HTTP node to return
// the full response and continue on error, so the status code reaches us)
const attempt = $json.attempt ?? 0;
const status = $json.status;
if (status === 429 || (status >= 500 && status < 600)) {
  const waitMs = backoff(attempt);
  return [{ json: { retry: true, waitMs, attempt: attempt + 1 } }];
}
return [{ json: { retry: false } }];

Use a Wait node driven by waitMs, then route back to the HTTP Request node. Cap attempts (e.g., at 6) and send permanently failing items to a dead-letter queue (DLQ) for inspection.
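
When the provider sends a Retry-After header, prefer it over your computed delay (see Best Practices below). A sketch extending the Code node above; it handles the numeric-seconds form of the header and assumes your HTTP node is set to expose response headers:

// Prefer Retry-After (numeric seconds) over computed backoff
const attempt = $json.attempt ?? 0;
const status = $json.status;
const headers = $json.headers ?? {};

if (status === 429 || (status >= 500 && status < 600)) {
  const retryAfter = Number(headers['retry-after']); // NaN for the HTTP-date form
  const waitMs = Number.isFinite(retryAfter) && retryAfter > 0
    ? retryAfter * 1000
    : backoff(attempt); // fall back to the helper defined earlier
  return [{ json: { retry: true, waitMs, attempt: attempt + 1 } }];
}
return [{ json: { retry: false } }];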

Batching & Memory Safety

Use Split In Batches to process records without loading everything into memory. Recommended batch sizes: 50–500 depending on API payload size.

// Code node (Run Once for All Items): normalize each item from the fetched page
const { items } = $input.first().json;
return items.map((it) => ({
  json: {
    id: it.id,
    email: it.email,
    updatedAt: it.updated_at,
  },
}));
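
To know when the batch node has been drained, older n8n versions expose a context flag you can check in an IF node expression (newer versions instead give the node a separate "done" output):

// IF node expression: true while batches remain
{{ $node["Split In Batches"].context["noItemsLeft"] === false }}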

Checkpointing: Resume Where You Left Off

Persist progress so restarts don’t repeat work:

  • Page model: store last successful page
  • Cursor model: store last cursor/since timestamp
  • Item model: store highest updatedAt or id seen

Read and write checkpoints in a key-value store or database (an HTTP call to your own service, Notion, Supabase, etc.), and update the checkpoint only after a batch succeeds.
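
If you'd rather avoid an external store, n8n's workflow static data can hold a small checkpoint. Caveat: static data persists only across production executions of an active workflow, not manual test runs. A minimal sketch:

// Read side: resume from the saved cursor, if any
const staticData = $getWorkflowStaticData('global');
const cursor = staticData.lastCursor ?? null;
return [{ json: { cursor } }];

// Write side (separate Code node, run only after a batch upserts successfully)
const data = $getWorkflowStaticData('global');
data.lastCursor = $input.first().json.next;
data.lastRunAt = new Date().toISOString();
return $input.all();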

Reference Architecture

  • Set (init page/cursor/since)
  • HTTP Request (fetch)
  • Code (extract items, next/page)
  • IF (retry?) → Wait → HTTP again
  • Split In Batches (process items)
  • Code/HTTP (upsert to DB/CRM)
  • Update checkpoint
  • IF (has more?) → Loop

Best Practices

  1. Prefer cursor pagination when available
  2. Request the minimal fields; compress if supported
  3. Respect Retry-After header over your backoff when present
  4. Use a unique idempotency key for upserts to avoid duplicates (see the sketch after this list)
  5. Log rate-limit hit counts and average backoff per run
  6. Run long syncs in windows (e.g., last 24h) and backfill separately
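
For practice 4, derive the key deterministically from the record itself so a retried batch updates rather than duplicates. A sketch (the key format and field names are illustrative; adapt them to whatever your upsert target expects):

// Attach a stable idempotency key to each record before upserting
return $input.all().map(({ json: it }) => ({
  json: {
    ...it,
    idempotencyKey: `contact:${it.id}`, // same input → same key on retry
  },
}));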

Troubleshooting

  • Duplicates: ensure upserts by unique key and consistent checkpointing
  • Gaps: switch to cursor pagination or lock time windows
  • 429 storms: reduce concurrency, increase delay, align with provider quotas
  • Memory pressure: lower batch size; avoid building giant arrays

Deployment Considerations

  • Schedule long syncs during off-peak hours
  • Increase execution timeout limits carefully; prefer chunked runs
  • Store execution summaries (counts, last cursor, error rate)

Real-World Examples

  • CRM backfill: 2M records via cursor pagination, batch size 300, on a nightly schedule
  • Shopify orders: respect Retry-After, pause on 429, resume with cursor
  • GitHub issues: ETag caching to skip unchanged pages (see the sketch below)
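
The GitHub example relies on conditional requests: send back the last ETag and skip the page when the API answers 304 Not Modified. A sketch using the Code node's HTTP helper (OWNER/REPO is a placeholder, and the returnFullResponse/ignoreHttpStatusErrors options depend on your n8n version):

// Conditional GET using the ETag saved from the previous run
const staticData = $getWorkflowStaticData('global');

const res = await this.helpers.httpRequest({
  method: 'GET',
  url: 'https://api.github.com/repos/OWNER/REPO/issues',
  headers: staticData.etag ? { 'If-None-Match': staticData.etag } : {},
  returnFullResponse: true,
  ignoreHttpStatusErrors: true,
});

if (res.statusCode === 304) {
  return []; // unchanged since last run; skip processing
}
staticData.etag = res.headers.etag; // remember for the next run
return res.body.map((issue) => ({ json: issue }));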

Next Steps

  1. Identify the API’s pagination model (page vs cursor)
  2. Add backoff logic and Retry-After handling to your HTTP nodes
  3. Introduce batching and checkpoint persistence
  4. Monitor metrics and tune batch size/limits over time

Topics Covered

n8n Pagination · n8n Rate Limits · Retries · Backoff · Batching · API Sync
