API Rate Limiting
Complete guide to API rate limiting in Netasampark.
Overview
Rate limiting protects the API from abuse and ensures fair usage. All endpoints have rate limits based on endpoint type and user authentication status.
Rate Limit Headers
Every API response includes rate limit headers:
X-RateLimit-Limit: 120
X-RateLimit-Remaining: 115
X-RateLimit-Reset: 1703500800
- X-RateLimit-Limit: Maximum requests allowed in the window
- X-RateLimit-Remaining: Requests remaining in the current window
- X-RateLimit-Reset: Unix timestamp (seconds) when the limit resets
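These headers can be read directly off a fetch response. A minimal sketch, assuming a standard fetch Response object; the helper name parseRateLimit is illustrative, not part of the API:

```javascript
// Parse the documented rate limit headers from a fetch Response.
// parseRateLimit is an illustrative name, not part of the API.
function parseRateLimit(response) {
  return {
    limit: Number(response.headers.get('X-RateLimit-Limit')),
    remaining: Number(response.headers.get('X-RateLimit-Remaining')),
    // X-RateLimit-Reset is a Unix timestamp in seconds.
    resetAt: new Date(Number(response.headers.get('X-RateLimit-Reset')) * 1000),
  };
}
```

Checking `remaining` before issuing a burst of requests lets a client slow down before it ever sees a 429.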
Rate Limit Tiers
Public Endpoints
- Limit: 60 requests per minute
- Window: 1 minute
- Endpoints:
/api/register, /api/login, /api/auth/*
Authenticated Endpoints
- Limit: 120 requests per minute
- Window: 1 minute
- Endpoints: All protected endpoints
Authentication Endpoints
- Limit: 5 requests per minute
- Window: 1 minute
- Endpoints:
/api/auth/login, /api/auth/register
OTP Endpoints
- Limit: 3 requests per minute
- Window: 1 minute
- Endpoints:
/api/auth/send-otp, /api/auth/verify-otp
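For client-side throttling, the tiers above can be mirrored as a constant. A sketch only: the values are copied from this page, and the object name is illustrative, not part of the API:

```javascript
// Documented rate limit tiers (requests per 1-minute window).
// RATE_LIMIT_TIERS is an illustrative name, not part of the API.
const RATE_LIMIT_TIERS = {
  public: { limit: 60, windowSeconds: 60 },
  authenticated: { limit: 120, windowSeconds: 60 },
  auth: { limit: 5, windowSeconds: 60 },
  otp: { limit: 3, windowSeconds: 60 },
};
```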
Rate Limit Exceeded
When the rate limit is exceeded, the API returns:
Status Code: 429 Too Many Requests
Response:
{
  "success": false,
  "error_code": "rate_limit_exceeded",
  "message": "Too many requests. Please try again later.",
  "retry_after": 60
}
Headers:
Retry-After: 60
X-RateLimit-Limit: 120
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1703500800
Handling Rate Limits
Exponential Backoff
Implement exponential backoff when rate limited:
async function makeRequest(url, options, retries = 3, attempt = 1) {
  const response = await fetch(url, options);
  if (response.status === 429 && retries > 0) {
    // Honor the server's Retry-After header (seconds), defaulting to 60,
    // and double the delay on each successive 429.
    const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
    const delayMs = retryAfter * 1000 * 2 ** (attempt - 1);
    await new Promise(resolve => setTimeout(resolve, delayMs));
    return makeRequest(url, options, retries - 1, attempt + 1);
  }
  return response;
}
Request Queuing
Queue requests to avoid hitting rate limits:
class RequestQueue {
  constructor(maxConcurrent = 10) {
    this.queue = [];
    this.running = 0;
    this.maxConcurrent = maxConcurrent;
  }

  // requestFn is any function returning a promise (e.g. a fetch call).
  // Resolves or rejects with that promise's result once the request runs.
  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    // Only start work when a concurrency slot is free and work is queued.
    if (this.running >= this.maxConcurrent || this.queue.length === 0) {
      return;
    }
    this.running++;
    const { requestFn, resolve, reject } = this.queue.shift();
    try {
      const result = await requestFn();
      resolve(result);
    } catch (error) {
      reject(error);
    } finally {
      this.running--;
      this.process(); // pick up the next queued request
    }
  }
}
Best Practices
- Monitor Rate Limits: Check headers on every response
- Implement Backoff: Use exponential backoff for retries
- Batch Requests: Combine multiple operations when possible
- Cache Responses: Cache data to reduce API calls
- Respect Limits: Don't try to bypass rate limits
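The "Cache Responses" practice can be as simple as an in-memory cache with a time-to-live. A minimal sketch; the class and parameter names (TtlCache, ttlMs) are illustrative, not part of the API:

```javascript
// Minimal in-memory cache with per-entry expiry, to cut repeat API calls.
// TtlCache and ttlMs are illustrative names, not part of the API.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }
  get(key) {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() - entry.storedAt > this.ttlMs) {
      this.entries.delete(key); // expired; caller should refetch
      return undefined;
    }
    return entry.value;
  }
  set(key, value) {
    this.entries.set(key, { value, storedAt: Date.now() });
  }
}
```

Keying the cache by request URL and checking it before calling the API keeps read-heavy clients well under the limits above.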
Increasing Limits
For higher rate limits:
- Contact support
- Provide use case details
- Request limit increase
- Wait for approval
Next Steps
Need help? Contact Support