# Rate Limiting
The Devdraft API implements rate limiting to ensure fair usage and optimal performance for all clients. Each API client is limited to 100 requests per minute with automatic reset windows and informative response headers to help you manage your integration effectively.
## Rate Limits
### Current Limits
| Parameter | Value | Description |
|---|---|---|
| Requests per Minute | 100 | Maximum requests allowed per minute |
| Window Type | Fixed Window | Resets every minute on the minute boundary |
| Identification | API Key | Based on your API key credentials |
| Scope | Per API Key | Each API key has its own independent limit |
### Reset Behavior

- Window Duration: 60 seconds
- Reset Time: Every minute on the minute (e.g., 12:00:00, 12:01:00, 12:02:00)
- Counter Reset: Hard reset to 0 at each window boundary
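Because windows are fixed to minute boundaries, your client can compute the time until the next reset locally; a minimal sketch (the function name is illustrative):

```javascript
// Sketch: milliseconds until the next fixed-window reset, assuming
// windows align to minute boundaries as described above.
function msUntilNextWindowReset(nowMs = Date.now()) {
  const WINDOW_MS = 60000;
  return WINDOW_MS - (nowMs % WINDOW_MS);
}
```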
## Response Headers
Every API response includes rate limiting information in the headers:
```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 73
X-RateLimit-Reset: 1640995260
```
| Header | Description | Example |
|---|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per window | `100` |
| `X-RateLimit-Remaining` | Requests remaining in current window | `73` |
| `X-RateLimit-Reset` | Unix timestamp when the window resets | `1640995260` |
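These headers can be read off any response object with a `get()` method (such as the Fetch API's `Headers`); a small helper sketch, with an illustrative name:

```javascript
// Sketch: extract the three rate-limit headers as numbers from any
// Map-like headers object exposing get() (e.g. fetch's Headers).
function readRateLimitHeaders(headers) {
  const num = (name) => parseInt(headers.get(name), 10);
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    reset: num('X-RateLimit-Reset'),
  };
}
```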
## Rate Limit Responses
### Successful Request (Within Limit)
```http
HTTP/1.1 201 Created
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 73
X-RateLimit-Reset: 1640995260
Content-Type: application/json

{
  "id": "txn_abc123",
  "status": "pending"
}
```
### Rate Limited Request (429 Too Many Requests)
```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640995260
Retry-After: 45
Content-Type: application/json

{
  "statusCode": 429,
  "message": "Rate limit exceeded",
  "retryAfter": 45
}
```
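When handling a 429, the wait time can come from either the `retryAfter` body field or, failing that, the distance to the `X-RateLimit-Reset` timestamp. A sketch of that fallback logic (function and parameter names are illustrative):

```javascript
// Sketch: prefer the explicit retryAfter from the error body; otherwise
// fall back to the seconds remaining until the reset timestamp.
function secondsToWait(errorBody, resetUnixSeconds, nowUnixSeconds) {
  if (typeof errorBody.retryAfter === 'number') return errorBody.retryAfter;
  return Math.max(0, resetUnixSeconds - nowUnixSeconds);
}
```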
### Error Response Fields
| Field | Type | Description |
|---|---|---|
| `statusCode` | number | HTTP status code (429) |
| `message` | string | Human-readable error message |
| `retryAfter` | number | Seconds until you can retry |
## Client Implementation
### 1. Monitor Rate Limit Headers
Always check the rate limit headers in your responses:
```javascript
async function makeAPIRequest(endpoint, data) {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-client-key': process.env.API_KEY,
      'x-client-secret': process.env.API_SECRET,
    },
    body: JSON.stringify(data),
  });

  // Extract rate limit information
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset'), 10);

  console.log(`Rate Limit: ${remaining}/${limit} remaining, resets at ${new Date(reset * 1000)}`);

  if (response.status === 429) {
    const retryAfter = parseInt(response.headers.get('Retry-After'), 10);
    throw new Error(`Rate limited. Retry after ${retryAfter} seconds`);
  }

  return response.json();
}
```
### 2. Implement Retry Logic with Exponential Backoff
```javascript
async function createPaymentWithRetry(paymentData, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await createPaymentIntent(paymentData);
    } catch (error) {
      if (error.message.includes('Rate limited') && attempt < maxRetries) {
        // Wait before retrying (exponential backoff, capped at 60 s)
        const delay = Math.min(Math.pow(2, attempt) * 1000, 60000);
        console.log(`Rate limited. Waiting ${delay}ms before retry ${attempt + 1}`);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}
```
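The delay calculation used above can be factored into a standalone helper, which makes the doubling growth and the 60-second cap easy to verify in isolation (the helper name is illustrative):

```javascript
// Sketch: exponential backoff delay — doubles each attempt,
// capped at maxMs so waits never exceed one full window.
function backoffDelayMs(attempt, baseMs = 1000, maxMs = 60000) {
  return Math.min(Math.pow(2, attempt) * baseMs, maxMs);
}
```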
### 3. Request Queuing for High-Volume Applications
```javascript
class APIRequestQueue {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.queue = [];
    this.processing = false;
    this.requestsThisMinute = 0;
    this.windowStart = Date.now();
  }

  async addRequest(endpoint, data) {
    return new Promise((resolve, reject) => {
      this.queue.push({ endpoint, data, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      // Reset counter if we're in a new minute
      if (Date.now() - this.windowStart >= 60000) {
        this.requestsThisMinute = 0;
        this.windowStart = Date.now();
      }

      // Wait if we've hit the limit
      if (this.requestsThisMinute >= 100) {
        const waitTime = 60000 - (Date.now() - this.windowStart);
        await this.sleep(waitTime);
        continue;
      }

      const request = this.queue.shift();
      try {
        // makeAPIRequest is the helper defined in step 1
        const result = await makeAPIRequest(request.endpoint, request.data);
        this.requestsThisMinute++;
        request.resolve(result);
      } catch (error) {
        request.reject(error);
      }

      // Small delay between requests to avoid bursts
      await this.sleep(100);
    }

    this.processing = false;
  }

  sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }
}
```
## Best Practices
### Recommended Practices
- **Always Check Headers**: Monitor `X-RateLimit-Remaining` to prevent hitting limits
- **Respect Retry-After**: When you get a 429, wait the specified time before retrying
- **Implement Exponential Backoff**: Don't retry immediately after being rate limited
- **Spread Requests**: Distribute requests evenly across the minute rather than bursting
- **Queue Requests**: Use a queue system for high-volume applications
- **Handle Gracefully**: Show user-friendly messages when rate limited
- **Monitor Usage**: Track your usage patterns to optimize request timing
### Practices to Avoid
- **Don't Ignore Headers**: Never ignore the rate limit headers in responses
- **Don't Burst Requests**: Avoid making all 100 requests at the start of each minute
- **Don't Retry Immediately**: Always wait when told to by the `Retry-After` header
- **Don't Share Keys**: Use separate API keys for different environments/applications
- **Don't Hammer When Limited**: Stop making requests when you hit the limit
## Error Handling Strategies
### 1. Graceful Degradation
```javascript
async function createPaymentWithFallback(paymentData) {
  try {
    return await createPaymentIntent(paymentData);
  } catch (error) {
    if (error.message.includes('Rate limited')) {
      // Queue for later or show a user message;
      // queuePaymentForLater is an application-specific helper
      return {
        success: false,
        message: 'System is busy. Your request has been queued.',
        queueId: await queuePaymentForLater(paymentData),
      };
    }
    throw error;
  }
}
```
### 2. User Feedback
```javascript
// showUserMessage is an application-specific UI helper; this also assumes
// the thrown error carries a retryAfter property.
function handleRateLimit(error) {
  if (error.message.includes('Rate limited')) {
    showUserMessage({
      type: 'warning',
      title: 'Request Limit Reached',
      message: 'You\'ve reached the request limit. Please wait a moment before trying again.',
      retryButton: true,
      retryAfter: error.retryAfter,
    });
  }
}
```
## Request Distribution Examples
### Good: Evenly Distributed Requests
```
Minute 1: ████████████████████ (100 requests over 60 seconds)
Timeline: ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
Result: ✅ Smooth performance, no rate limiting
```
### Bad: Burst Requests
```
Minute 1: ████████████████████ (100 requests in first 10 seconds)
Timeline: ████████████████████................................................
Result: ❌ Rate limited for remaining 50 seconds
```
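One way to get the even distribution shown above is to pace requests at a fixed interval rather than firing them as fast as possible; a minimal sketch of that arithmetic (the function name is illustrative):

```javascript
// Sketch: per-request delay that spreads `limit` requests evenly
// across a window, instead of bursting them at the start.
function pacingIntervalMs(limit = 100, windowMs = 60000) {
  return Math.floor(windowMs / limit);
}
// At 100 requests/minute this paces one request every 600 ms.
```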
## Testing Your Integration
### 1. Basic Rate Limit Test
```javascript
describe('Rate Limiting Integration', () => {
  test('should handle rate limits gracefully', async () => {
    const responses = [];

    // Make requests up to the limit (assumes the test starts on a fresh window)
    for (let i = 0; i < 100; i++) {
      try {
        const response = await makeAPIRequest('/api/v0/payment-intents/stablecoin', testData);
        responses.push({ success: true, response });
      } catch (error) {
        responses.push({ success: false, error: error.message });
      }
    }

    // Verify we can make 100 successful requests
    const successful = responses.filter(r => r.success).length;
    expect(successful).toBe(100);

    // Verify the 101st request is rate limited
    await expect(
      makeAPIRequest('/api/v0/payment-intents/stablecoin', testData)
    ).rejects.toThrow('Rate limited');
  });
});
```
### 2. Usage Monitoring
```javascript
class RateLimitMonitor {
  constructor() {
    this.stats = {
      totalRequests: 0,
      rateLimitedRequests: 0,
      averageRemaining: 0,
    };
  }

  recordRequest(response) {
    this.stats.totalRequests++;
    if (response.status === 429) {
      this.stats.rateLimitedRequests++;
    }
    const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
    // Simple exponentially weighted running estimate of headroom
    this.stats.averageRemaining = (this.stats.averageRemaining + remaining) / 2;
  }

  getUsageReport() {
    const rateLimitRate = (this.stats.rateLimitedRequests / this.stats.totalRequests) * 100;
    return {
      totalRequests: this.stats.totalRequests,
      rateLimitedRequests: this.stats.rateLimitedRequests,
      rateLimitRate: `${rateLimitRate.toFixed(2)}%`,
      averageRemaining: Math.round(this.stats.averageRemaining),
      efficiency: `${(100 - rateLimitRate).toFixed(1)}%`,
    };
  }
}

// Usage
const monitor = new RateLimitMonitor();

// Record each response
monitor.recordRequest(response);

// Get usage report
console.log(monitor.getUsageReport());
```
## Multiple Environment Setup
### Development vs Production
Use separate API keys for different environments:
```javascript
const config = {
  development: {
    apiKey: process.env.DEV_API_KEY,
    apiSecret: process.env.DEV_API_SECRET,
    baseUrl: 'https://api-dev.example.com',
  },
  production: {
    apiKey: process.env.PROD_API_KEY,
    apiSecret: process.env.PROD_API_SECRET,
    baseUrl: 'https://api.example.com',
  },
};

const currentConfig = config[process.env.NODE_ENV];
```
### Load Balancing Across Keys
For high-volume applications, consider using multiple API keys with a round-robin approach:
```javascript
class LoadBalancer {
  constructor(apiKeys) {
    this.apiKeys = apiKeys;
    this.currentIndex = 0;
  }

  getNextKey() {
    const key = this.apiKeys[this.currentIndex];
    this.currentIndex = (this.currentIndex + 1) % this.apiKeys.length;
    return key;
  }

  async makeRequest(endpoint, data) {
    const apiKey = this.getNextKey();
    // Assumes a variant of makeAPIRequest that accepts the key to use
    return makeAPIRequest(endpoint, data, apiKey);
  }
}
```
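The round-robin rotation at the heart of this pattern can be expressed as a pure function, which makes the wrap-around behavior easy to check (the function is a sketch, not part of the class above):

```javascript
// Sketch: round-robin key selection — returns the key at `index`
// and the index to use for the next request, wrapping at the end.
function rotate(keys, index) {
  return { key: keys[index], nextIndex: (index + 1) % keys.length };
}
```

Because each key carries its own independent 100 requests/minute limit, rotating across `k` keys raises the effective ceiling to `k × 100` requests per minute.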
## Troubleshooting
### Common Issues
**Q: Why am I getting rate limited when I'm not making many requests?**

- Check if you have multiple application instances using the same API key
- Verify you're not sharing API keys between development and production
- Make sure you're not making rapid bursts of requests
**Q: How can I check my current rate limit status?**

- Make any API request and check the `X-RateLimit-Remaining` header
- The headers are included in every response, including error responses
**Q: Can I get a higher rate limit?**

- Contact our support team to discuss your use case and potential limit increases
- Consider optimizing your request patterns first
### Getting Help
If you're experiencing issues with rate limiting:

- Check the response headers for current status
- Verify your retry logic handles 429 responses correctly
- Review your request patterns for optimization opportunities
- Contact support with your API key and specific error details