Rate Limits
Understanding Flameup API rate limits and quotas
Overview
Flameup implements rate limiting to ensure fair usage and platform stability. Rate limits are applied per API key and vary by endpoint type.
Rate Limit Tiers
| Tier | Limit | Description |
|---|---|---|
| Public | 75 requests/minute | Unauthenticated endpoints |
| Authenticated | 120 requests/minute | With valid API key |
| Enterprise | Custom limits | Tailored to your needs |
Current Rate Limits
Rate limits are applied globally per API key, not per endpoint:
| Authentication | Limit | Scope |
|---|---|---|
| Public (no API key) | 75/min | Per IP address |
| Authenticated (API key) | 120/min | Per API key |
Rate Limit Headers
Every API response includes rate limit information in headers:
X-RateLimit-Limit: 120
X-RateLimit-Remaining: 119
X-RateLimit-Reset: 1705320660
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the current window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp (in seconds) when the window resets |
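These header values can be turned directly into a wait time before the next request. A minimal sketch in Python (the dict below stands in for real response headers, e.g. `response.headers` from the `requests` library; the helper name is illustrative, not part of any Flameup SDK):

```python
import time

def seconds_until_reset(headers, now=None):
    """Return how long to wait (in seconds) before the window resets.

    `headers` is any mapping of response header names to string values.
    Returns 0.0 while request budget remains in the current window.
    """
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    reset = int(headers.get("X-RateLimit-Reset", "0"))
    if remaining > 0:
        return 0.0  # budget left in this window: no need to wait
    return max(reset - now, 0.0)  # wait until the window resets

# Using the header values from the example above:
headers = {
    "X-RateLimit-Limit": "120",
    "X-RateLimit-Remaining": "0",
    "X-RateLimit-Reset": "1705320660",
}
wait = seconds_until_reset(headers, now=1705320600)
# wait == 60.0: one minute until the window resets
```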
Handling Rate Limits
When you exceed the rate limit, you'll receive a 429 Too Many Requests response:
{
  "error": "Rate limit exceeded",
  "message": "Too many requests. Please try again later."
}
The response headers will contain timing information for when you can retry.
Implementing Backoff
class FlameupClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.rateLimitRemaining = 120; // matches the authenticated limit
    this.rateLimitReset = null;
  }

  async request(endpoint, options = {}) {
    // Proactively wait if the current window is exhausted
    if (this.rateLimitRemaining <= 0 && this.rateLimitReset) {
      const waitTime = this.rateLimitReset - Date.now();
      if (waitTime > 0) {
        await this.sleep(waitTime);
      }
    }

    const response = await fetch(endpoint, {
      ...options,
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        ...options.headers
      }
    });

    // Update rate limit tracking from the response headers
    this.rateLimitRemaining = parseInt(
      response.headers.get('X-RateLimit-Remaining') || '120'
    );
    this.rateLimitReset = parseInt(
      response.headers.get('X-RateLimit-Reset') || '0'
    ) * 1000; // header is in seconds; convert to milliseconds

    if (response.status === 429) {
      // Wait until X-RateLimit-Reset, or fall back to one minute
      const resetTime = parseInt(response.headers.get('X-RateLimit-Reset') || '0') * 1000;
      const waitTime = resetTime > Date.now() ? resetTime - Date.now() : 60000;
      await this.sleep(waitTime);
      return this.request(endpoint, options); // Retry once the window resets
    }

    return response;
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
import time

import requests


class FlameupClient:
    def __init__(self, api_key):
        self.api_key = api_key
        self.rate_limit_remaining = 120  # matches the authenticated limit
        self.rate_limit_reset = None

    def request(self, endpoint, method='GET', **kwargs):
        # Proactively wait if the current window is exhausted
        if self.rate_limit_remaining <= 0 and self.rate_limit_reset:
            wait_time = self.rate_limit_reset - time.time()
            if wait_time > 0:
                time.sleep(wait_time)

        headers = kwargs.pop('headers', {})
        headers['Authorization'] = f'Bearer {self.api_key}'
        response = requests.request(
            method, endpoint, headers=headers, **kwargs
        )

        # Update rate limit tracking from the response headers
        self.rate_limit_remaining = int(
            response.headers.get('X-RateLimit-Remaining', 120)
        )
        self.rate_limit_reset = int(
            response.headers.get('X-RateLimit-Reset', 0)
        )

        if response.status_code == 429:
            # Wait until X-RateLimit-Reset, or fall back to one minute
            reset_time = int(response.headers.get('X-RateLimit-Reset', 0))
            wait_time = max(reset_time - time.time(), 60) if reset_time else 60
            time.sleep(wait_time)
            # Retry, preserving any caller-supplied headers
            return self.request(endpoint, method, headers=headers, **kwargs)

        return response
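The clients above wait until the exact reset timestamp, so many clients that hit the limit together will all retry at the same instant. Adding exponential backoff with jitter spreads those retries out. A possible sketch (the function and its parameters are illustrative, not part of any Flameup SDK):

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Delay in seconds before retry number `attempt` (0-based).

    Full jitter: pick uniformly from [0, min(cap, base * 2**attempt)],
    so concurrent clients don't all retry at the same moment.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))

# Average delay doubles with each attempt but never exceeds the cap:
for attempt in range(5):
    time_to_sleep = backoff_delay(attempt)
```

Capping the delay (here at 60 seconds, one rate-limit window) keeps a long outage from producing multi-minute sleeps.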
Best Practices
Instead of making individual requests, batch when possible:
// Bad: 100 individual requests
for (const user of users) {
  await flare.createPerson(user);
}

// Good: 1 batch request
await flare.batchUpsertPeople(users);
Batch endpoints are more efficient and have separate, higher limits.
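Batch endpoints typically cap the number of records per call, so very large lists should be split into fixed-size chunks before sending. A generic sketch (the chunk size of 100 is an assumed example, not a documented Flameup limit):

```python
def chunked(items, size=100):
    """Yield successive fixed-size chunks from `items`.

    NOTE: the default size of 100 is a hypothetical per-batch cap;
    check the batch endpoint's actual limit before relying on it.
    """
    for i in range(0, len(items), size):
        yield items[i:i + size]

users = [{"id": n} for n in range(250)]
batches = list(chunked(users, size=100))
# 250 users split into batches of 100, 100, and 50
assert [len(b) for b in batches] == [100, 100, 50]
```

Each chunk can then be sent as one batch request, keeping both the rate limit and the per-batch cap satisfied.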
Quota Limits
In addition to rate limits, some operations have quota limits:
| Resource | Limit | Period |
|---|---|---|
| People | 100,000 | Per workspace |
| Events | 10,000,000 | Per month |
| Campaigns | 100 | Per workspace |
| API Keys | 20 | Per workspace |
| Device tokens | 10 | Per person |
Requesting Higher Limits
If you need higher rate limits:
- Optimize your integration - Use batching, caching, and webhooks
- Contact support - Explain your use case for limit increases
- Upgrade your plan - Higher tiers include higher limits
- Enterprise plans - Custom limits available for large-scale deployments