n8n API Pagination & Rate Limits: Reliable Integrations Without Timeouts
Large API syncs fail for two reasons: you pull too much too fast (rate limits) or you don’t paginate correctly (missed/duplicate records). This guide shows how to implement cursor/page pagination, respect provider limits, and add retries with backoff in n8n so your data syncs finish reliably.
Tip: Looking for webhook hardening? Read our companion guide: n8n Webhook Best Practices.
What You’ll Build
A resilient flow that:
- Fetches all pages using page or cursor strategies
- Handles 429/5xx with retries and exponential backoff
- Batches processing using `Split In Batches`
- Resumes safely using checkpoints to avoid re-fetching
Pagination Strategies
1) Page/Offset Pagination
- Params: `page` + `limit`, or `offset` + `limit`
- Risk: gaps/duplicates when data changes during iteration
```javascript
// Example: GET /items?page={{$json.page}}&limit=100
// Code node: seed the first request
const page = $json.page ?? 1;
return [{ json: { page, limit: 100 } }];
```
Loop pattern:
- Initialize `page = 1`
- HTTP Request → parse `items`
- IF `items.length < limit` → done
- Else `page++` and continue (see the sketch below)
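A minimal Code-node sketch of that loop control, assuming the HTTP Request output exposes the page's records as `items` and the current page number rode along as `page` (conventions for this sketch, not fixed n8n names):

```javascript
// Code node: page-based loop control
const limit = 100;
const items = $json.items ?? [];
const page = $json.page ?? 1;

const done = items.length < limit; // a short page means the last page
return [{ json: { items, done, page: done ? page : page + 1 } }];
```

Route on `done` with an IF node: the false branch loops back to the HTTP Request with the incremented `page`.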
2) Cursor/Token Pagination (Recommended)
- Param: `cursor` or `next` token from the response
- Stable for changing datasets
```javascript
// Code node: extract the next cursor from the response
const next = $json.response?.next ?? null;
return [{ json: { next } }];
```
Loop pattern:
- Start with an empty `cursor`
- HTTP Request with `cursor`
- Save the `next` cursor; continue until it is null (see the sketch below)
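A sketch of the extract-and-continue step in one Code node; the response field names (`items`, `next`) are placeholders for whatever your provider actually returns:

```javascript
// Code node: pull out the page's items and the continuation cursor
const body = $json.response ?? $json;
const next = body.next ?? null;

return [{
  json: {
    items: body.items ?? [],
    cursor: next,
    hasMore: next !== null, // the IF node loops back while true
  },
}];
```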
Rate Limits & Backoff
Handle 429 and transient 5xx with exponential backoff and jitter.
```javascript
// Code node around the HTTP call status: decide whether to retry.
// The status must be present in the node's output (enable the HTTP
// Request option that includes response status/headers; the field
// may be named statusCode depending on your n8n version).

// Backoff helper: exponential with jitter, capped at 8s
function backoff(attempt) {
  const base = 500; // ms
  const max = 8000;
  const jitter = Math.floor(Math.random() * 250);
  return Math.min(max, base * 2 ** attempt) + jitter;
}

const attempt = $json.attempt ?? 0;
const status = $json.status ?? $json.statusCode;
if (status === 429 || (status >= 500 && status < 600)) {
  const waitMs = backoff(attempt);
  return [{ json: { retry: true, waitMs, attempt: attempt + 1 } }];
}
return [{ json: { retry: false } }];
```
Use a Wait node driven by `waitMs` (convert ms to the Wait node's unit, e.g., seconds, in its amount expression), then retry the HTTP node. Cap attempts (e.g., 6) and route exhausted items to a dead-letter queue (DLQ), as sketched below.
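A sketch of the attempt cap in front of the DLQ branch; the `deadLetter` flag is this guide's convention for the IF node to route on, not an n8n built-in:

```javascript
// Code node: stop retrying after maxAttempts and flag for the DLQ
const maxAttempts = 6;
const attempt = $json.attempt ?? 0;
if ($json.retry && attempt >= maxAttempts) {
  return [{ json: { ...$json, retry: false, deadLetter: true } }];
}
return [{ json: $json }];
```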
Batching & Memory Safety
Use `Split In Batches` to process records without loading everything into memory. Recommended batch sizes: 50–500, depending on API payload size.
```javascript
// Code node: normalize each item to just the fields you need
return $json.items.map((it) => ({
  json: {
    id: it.id,
    email: it.email,
    updatedAt: it.updated_at,
  },
}));
```
Checkpointing: Resume Where You Left Off
Persist progress so restarts don’t repeat work:
- Page model: store the last successful `page`
- Cursor model: store the last `cursor`/`since` timestamp
- Item model: store the highest `updatedAt` or `id` seen
Write/read checkpoints from a KV/DB (HTTP to your service, Notion, Supabase). Update only after a batch succeeds.
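For small checkpoints, n8n's workflow static data is another option: it persists between executions of an active (production) workflow, though not across manual test runs. A minimal read sketch:

```javascript
// Code node (start of run): read the cursor checkpoint.
// Static data survives between executions of an active workflow.
const staticData = $getWorkflowStaticData('global');
const cursor = staticData.lastCursor ?? null; // resume point, or start fresh
return [{ json: { cursor } }];
```

After a batch succeeds, write it back from a later Code node: `$getWorkflowStaticData('global').lastCursor = $json.cursor;`.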
Reference Architecture
- Set (init page/cursor/since)
- HTTP Request (fetch)
- Code (extract `items`, `next`/`page`)
- IF (retry?) → Wait → HTTP again
- Split In Batches (process items)
- Code/HTTP (upsert to DB/CRM)
- Update checkpoint
- IF (has more?) → Loop
Best Practices
- Prefer cursor pagination when available
- Request the minimal fields; compress if supported
- Respect the `Retry-After` header over your own backoff when present (see the sketch after this list)
- Use a unique idempotency key for upserts to avoid duplicates
- Log rate-limit hit counts and average backoff per run
- Run long syncs in windows (e.g., last 24h) and backfill separately
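`Retry-After` arrives either as delta-seconds or as an HTTP date. A sketch that prefers the header and falls back to your computed backoff, assuming headers are included in the HTTP node output (header key casing may vary):

```javascript
// Code node: honor Retry-After when the provider sends it
function retryAfterMs(headers, fallbackMs) {
  const raw = headers?.['retry-after'];
  if (!raw) return fallbackMs;
  const seconds = Number(raw);
  if (!Number.isNaN(seconds)) return seconds * 1000; // delta-seconds form
  const at = Date.parse(raw); // HTTP-date form
  return Number.isNaN(at) ? fallbackMs : Math.max(0, at - Date.now());
}

const waitMs = retryAfterMs($json.headers, $json.waitMs ?? 1000);
return [{ json: { ...$json, waitMs } }];
```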
Troubleshooting
- Duplicates: ensure upserts by unique key and consistent checkpointing (see the dedup sketch below)
- Gaps: switch to cursor pagination or lock time windows
- 429 storms: reduce concurrency, increase delay, align with provider quotas
- Memory pressure: lower batch size; avoid building giant arrays
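For the duplicates case, deduplicating each batch by its unique key before the upsert is a cheap safety net; a minimal sketch keyed on `id`:

```javascript
// Code node: drop in-batch duplicates before upserting, keyed on id
const seen = new Set();
const unique = ($json.items ?? []).filter((it) => {
  if (seen.has(it.id)) return false;
  seen.add(it.id);
  return true;
});
return unique.map((it) => ({ json: it }));
```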
Deployment Considerations
- Schedule long syncs during off-peak hours
- Increase execution timeout limits carefully; prefer chunked runs
- Store execution summaries (counts, last cursor, error rate); see the sketch below
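A sketch of the summary worth persisting per run; the counter fields are this guide's conventions, accumulated upstream however you track them:

```javascript
// Code node: build a run summary to store next to your checkpoint
return [{
  json: {
    finishedAt: new Date().toISOString(),
    fetched: $json.fetched ?? 0,        // records pulled this run
    upserted: $json.upserted ?? 0,      // records written this run
    lastCursor: $json.cursor ?? null,   // resume point for the next run
    rateLimitHits: $json.rateLimitHits ?? 0,
  },
}];
```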
Real-World Examples
- CRM backfill: 2M records via cursor + 300 batch + nightly schedule
- Shopify orders: respect `Retry-After`, pause on 429, resume with cursor
- GitHub issues: ETag caching to skip unchanged pages (see the sketch below)
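GitHub's REST API supports conditional requests: send the previous `ETag` back as the `If-None-Match` request header, and a `304 Not Modified` means the page is unchanged. A sketch of the skip decision, assuming status and headers are included in the HTTP node output (field and header names may vary by n8n version):

```javascript
// Code node: skip unchanged pages via ETag caching.
// The previous ETag comes from your checkpoint store and is sent
// as If-None-Match on the HTTP Request node.
const status = $json.statusCode ?? $json.status;
if (status === 304) {
  return [{ json: { skip: true } }]; // nothing changed since last sync
}
return [{
  json: {
    skip: false,
    etag: $json.headers?.etag ?? null, // persist for the next run
    items: $json.body ?? [],           // GitHub returns the page as an array
  },
}];
```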
Next Steps
- Identify the API’s pagination model (page vs cursor)
- Add backoff logic and `Retry-After` handling to your HTTP nodes
- Introduce batching and checkpoint persistence
- Monitor metrics and tune batch size/limits over time