The API uses rate limiting to ensure fair usage and protect system stability. Understanding these limits helps you build reliable integrations.
Rate limits are applied per API key and reset on a rolling window:

| Window | Limit |
|---|---|
| Per minute | 120 requests |
| Per hour | requests per hour |
Note: Rate limits are shared across all endpoints for a given API key. Using multiple API keys allows for independent rate limit quotas.
Every API response includes headers to help you track your rate limit status:
| Header | Description | Example |
|---|---|---|
| `X-RateLimit-Limit` | Maximum requests per minute | `120` |
| `X-RateLimit-Remaining` | Requests remaining in the current window | `115` |
| `X-RateLimit-Reset` | Unix timestamp when the window resets | `1703520000` |
| `Retry-After` | Seconds to wait before retrying (sent only on 429 responses) | `60` |
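A client can read these headers after each call and slow itself down before the limit is hit. Here's a minimal Python sketch of that idea; the `should_pause` helper and the threshold of 5 are illustrative, not part of any official SDK:

```python
import time

def should_pause(headers, threshold=5):
    """Return how many seconds to sleep, based on rate-limit headers.

    Pauses until the window resets when fewer than `threshold`
    requests remain in the current window; otherwise returns 0.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset_at = int(headers.get("X-RateLimit-Reset", 0))
    if remaining < threshold:
        return max(0, reset_at - int(time.time()))
    return 0

# Example: 3 requests left, window resets 30 seconds from now
headers = {
    "X-RateLimit-Remaining": "3",
    "X-RateLimit-Reset": str(int(time.time()) + 30),
}
print(should_pause(headers))  # about 30 (seconds until the window resets)
```

The same check works with any HTTP client, since the headers arrive on every response.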
Check these headers to monitor your usage and avoid hitting limits:
```shell
curl -i https://deadlinkradar.com/api/v1/links \
  -H "Authorization: Bearer dlr_your_api_key"

# Response headers include:
# X-RateLimit-Limit: 120
# X-RateLimit-Remaining: 119
# X-RateLimit-Reset: 1703520000
```

When you exceed the rate limit, you'll receive a `429 Too Many Requests` response:
```json
{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Rate limit exceeded. Please wait before making more requests.",
    "status": 429,
    "details": {
      "retryAfter": 60,
      "limit": 120,
      "remaining": 0
    }
  }
}
```

Implement retry logic with exponential backoff to handle rate limits gracefully:
```shell
# Retry after a 429, honoring the Retry-After header.
# -D saves the response headers so the wait time can be read back.
curl -s -D headers.txt https://deadlinkradar.com/api/v1/links \
  -H "Authorization: Bearer dlr_your_api_key"

# If the response was a 429, wait for Retry-After seconds
# (falling back to 60), then retry the request
wait=$(grep -i '^Retry-After:' headers.txt | tr -d '[:space:]' | cut -d: -f2)
sleep "${wait:-60}"
curl -s https://deadlinkradar.com/api/v1/links \
  -H "Authorization: Bearer dlr_your_api_key"
```

Following a few best practices keeps you well under the limits:

- **Monitor rate limit headers:** Track `X-RateLimit-Remaining` to proactively slow down before hitting limits.
- **Implement exponential backoff:** On 429 responses, wait with increasing delays: 1s, 2s, 4s, 8s, and so on.
- **Use batch endpoints:** Create or delete multiple links in a single request to reduce API calls.
- **Cache responses when possible:** Avoid redundant requests by caching link statuses locally.
- **Use webhooks for status updates:** Instead of polling, receive push notifications when link statuses change.
- **Spread requests over time:** Avoid bursts by queuing and spacing out requests.
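The backoff pattern above can be sketched in Python. This is an illustrative pattern, not an official client: `send` stands in for whatever HTTP call you make, and the function honors the server's `Retry-After` header when present, otherwise doubling the delay (1s, 2s, 4s, 8s, ...):

```python
import time

def request_with_backoff(send, max_retries=5):
    """Call `send()` and retry on 429 with exponential backoff.

    `send` should return an object with `.status_code` and `.headers`.
    Prefers the server's Retry-After value over the computed delay.
    """
    for attempt in range(max_retries):
        response = send()
        if response.status_code != 429:
            return response
        retry_after = response.headers.get("Retry-After")
        delay = int(retry_after) if retry_after else min(2 ** attempt, 60)
        time.sleep(delay)
    raise RuntimeError("rate limited after %d retries" % max_retries)

# Demo with a stubbed transport (no network): the first call is
# rate limited, the second succeeds
class FakeResponse:
    def __init__(self, status_code, headers=None):
        self.status_code = status_code
        self.headers = headers or {}

queue = [FakeResponse(429, {"Retry-After": "0"}), FakeResponse(200)]
resp = request_with_backoff(lambda: queue.pop(0))
print(resp.status_code)  # 200
```

With a real HTTP library, `send` might be something like `lambda: requests.get(url, headers=auth)`; the retry loop stays the same.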
If you need higher rate limits for your application, contact us to discuss enterprise options.