Overview
The Vepler API implements rate limiting to ensure fair usage and maintain service quality for all users. This guide explains how rate limiting works and how to handle it effectively.
Rate Limit Tiers
Your rate limits depend on your subscription plan:
| Plan | Requests/Second | Requests/Month | Burst Limit | Concurrent Requests |
|------|-----------------|----------------|-------------|---------------------|
| Free | 2 | 1,000 | 10 | 2 |
| Starter | 10 | 10,000 | 50 | 10 |
| Professional | 50 | 100,000 | 200 | 50 |
| Enterprise | 500+ | Unlimited | Custom | Custom |
Burst limits allow short spikes above your steady per-second rate, so brief traffic surges don't fail immediately.
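Conceptually, this works like a token bucket: tokens refill at your steady per-second rate, and the bucket's capacity is your burst limit. The sketch below illustrates the idea only; it is not the service's actual implementation, and the numbers match the Starter plan.

```typescript
// Minimal token-bucket sketch showing how a burst limit behaves.
// Illustrative only, not the service's actual implementation.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private ratePerSecond: number, private burstLimit: number) {
    this.tokens = burstLimit; // start full, so a short burst succeeds
  }

  tryConsume(): boolean {
    const now = Date.now();
    // Refill at the steady rate, capped at the burst limit
    this.tokens = Math.min(
      this.burstLimit,
      this.tokens + ((now - this.lastRefill) / 1000) * this.ratePerSecond
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // rate limited
  }
}

const bucket = new TokenBucket(10, 50); // Starter: 10 req/s, burst of 50
```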
Rate Limit Headers
Every API response includes headers with rate limit information:
X-RateLimit-Limit: 50          # Your rate limit per window
X-RateLimit-Remaining: 45      # Requests remaining in current window
X-RateLimit-Reset: 1640995200  # Unix timestamp when the window resets
X-RateLimit-Retry-After: 30    # Seconds to wait if rate limited
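You can read these headers from any response to throttle proactively before you hit the limit. A minimal sketch using fetch (the endpoint URL is a placeholder):

```typescript
// Illustrative: inspect rate limit headers on a raw HTTP response.
// The URL is a placeholder; substitute your actual endpoint.
const res = await fetch('https://api.vepler.com/health');

const remaining = res.headers.get('X-RateLimit-Remaining');
const reset = res.headers.get('X-RateLimit-Reset');

if (remaining !== null && Number(remaining) < 5) {
  const resetAt = new Date(Number(reset) * 1000);
  console.warn(`Only ${remaining} requests left; window resets at ${resetAt.toISOString()}`);
}
```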
Handling Rate Limits
SDK Auto-Retry
The Vepler SDK automatically handles rate limiting with exponential backoff:
import { SDK } from '@vepler/sdk';

const vepler = new SDK({
  apiKey: process.env.VEPLER_API_KEY,
  retries: 3,     // Automatic retry with exponential backoff
  timeout: 30000  // Request timeout in milliseconds
});

// The SDK will automatically retry if rate limited
const property = await vepler.property.get('UK-123');
Manual Retry Logic
If implementing your own retry logic:
async function makeRequestWithRetry(fn: () => Promise<any>, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error: any) {
      if (error.status === 429) {
        // Prefer the server-provided wait time; fall back to exponential backoff
        const retryAfter = Number(error.headers['x-ratelimit-retry-after']) || Math.pow(2, i);
        console.log(`Rate limited. Retrying after ${retryAfter} seconds...`);
        await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
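Usage is the same as a direct SDK call:

```typescript
// Retries automatically on 429; any other error is rethrown
const property = await makeRequestWithRetry(() => vepler.property.get('UK-123'));
```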
Best Practices
1. Implement Caching
Cache responses to reduce API calls:
import { createCache } from '@vepler/sdk/cache';

const cache = createCache({
  ttl: 300,  // 5 minutes
  max: 1000  // Maximum cached items
});

const vepler = new SDK({
  apiKey: process.env.VEPLER_API_KEY,
  cache
});
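With this configuration, repeated reads of the same resource within the five-minute TTL are served locally and never reach the API, so they don't consume your quota.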
2. Batch Requests
Combine multiple operations into single requests:
// ❌ Don't make individual requests
const prop1 = await vepler.property.get('UK-123');
const prop2 = await vepler.property.get('UK-456');
const prop3 = await vepler.property.get('UK-789');

// ✅ Batch requests instead
const properties = await vepler.property.getBulk({
  ids: ['UK-123', 'UK-456', 'UK-789']
});
3. Use Webhooks
For real-time updates, use webhooks instead of polling:
// ❌ Avoid polling
setInterval(async () => {
  const listing = await vepler.listings.get('LISTING-123');
  checkForChanges(listing);
}, 5000);

// ✅ Use webhooks
await vepler.webhooks.create({
  url: 'https://yourapp.com/webhooks',
  events: ['listing.updated'],
  filters: { listingId: 'LISTING-123' }
});
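On the receiving end, you need an HTTP endpoint at the registered URL. A minimal sketch using Express; the payload shape (event.type, event.data) is an assumption here, so check the webhook event reference for the actual format:

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical receiver for the webhook registered above.
// The payload fields are assumed; verify against the event reference.
app.post('/webhooks', (req, res) => {
  const event = req.body;
  if (event.type === 'listing.updated') {
    checkForChanges(event.data); // your change handler from the polling example
  }
  res.sendStatus(200); // acknowledge quickly so delivery isn't retried
});

app.listen(3000);
```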
4. Implement Request Queuing
Queue requests to stay within limits:
class RateLimitedQueue {
  private queue: Array<() => Promise<any>> = [];
  private processing = false;
  private requestsPerSecond: number;

  constructor(requestsPerSecond: number) {
    this.requestsPerSecond = requestsPerSecond;
  }

  // Enqueue a request; the returned promise settles when the call runs
  async add<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          const result = await fn();
          resolve(result);
        } catch (error) {
          reject(error);
        }
      });
      if (!this.processing) {
        this.process();
      }
    });
  }

  // Drain the queue, spacing requests evenly to stay under the limit
  private async process() {
    this.processing = true;
    const delay = 1000 / this.requestsPerSecond;
    while (this.queue.length > 0) {
      const fn = this.queue.shift();
      if (fn) {
        await fn();
        await new Promise(resolve => setTimeout(resolve, delay));
      }
    }
    this.processing = false;
  }
}
// Usage
const queue = new RateLimitedQueue(10); // 10 requests per second

const property = await queue.add(() =>
  vepler.property.get('UK-123')
);
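Because requests are spaced evenly at 1000 / requestsPerSecond milliseconds, this queue never draws on your burst allowance; set its rate at or just below your plan's requests-per-second limit.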
Monitoring Usage
Check Current Usage
const usage = await vepler.account.getUsage();

console.log({
  used: usage.requestsUsed,
  limit: usage.requestsLimit,
  remaining: usage.requestsRemaining,
  resetsAt: usage.resetsAt
});
Set Up Alerts
Configure alerts when approaching limits:
vepler.on('rateLimitWarning', (info) => {
  if (info.percentageUsed > 80) {
    console.warn(`Rate limit warning: ${info.percentageUsed}% used`);
    // Send a notification or scale down request volume
  }
});
Rate Limit Errors
When rate limited, you’ll receive a 429 response:
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please retry after 30 seconds.",
    "details": {
      "limit": 50,
      "window": "1m",
      "retryAfter": 30
    }
  }
}
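If you're calling the API over raw HTTP rather than through the SDK, this structured body gives you everything needed to back off. A sketch with fetch (the URL is a placeholder):

```typescript
// Illustrative 429 handling over raw HTTP; the URL is a placeholder.
const res = await fetch('https://api.vepler.com/property/UK-123', {
  headers: { Authorization: `Bearer ${process.env.VEPLER_API_KEY}` }
});

if (res.status === 429) {
  const body = await res.json();
  // Fall back to 30 seconds if the details block is absent
  const waitSeconds = body.error?.details?.retryAfter ?? 30;
  console.warn(`${body.error.code}: retrying in ${waitSeconds}s`);
  await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
}
```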
Increasing Rate Limits
If you need higher rate limits:
1. Upgrade Your Plan: Check available plans at vepler.com/pricing
2. Contact Sales: For enterprise needs, contact sales@vepler.com
3. Request a Temporary Increase: For special events or migrations
Rate Limit Exemptions
Certain endpoints are exempt from rate limiting:
- /health - Health check endpoint
- /auth/* - Authentication endpoints
- /webhooks/test - Webhook testing
FAQ
How are rate limits calculated?
Rate limits use a sliding window algorithm. Each request is tracked with a timestamp, and the window moves continuously rather than resetting at fixed intervals.
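A simplified version of the idea (the service's actual implementation may differ):

```typescript
// Simplified sliding-window log: a request is allowed only if fewer than
// `limit` requests fall inside the trailing window.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  allow(now = Date.now()): boolean {
    // Drop timestamps that have slid out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length < this.limit) {
      this.timestamps.push(now);
      return true;
    }
    return false;
  }
}

const limiter = new SlidingWindowLimiter(50, 60_000); // 50 requests per minute
```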
Do failed requests count against limits?
Yes, all requests count against your rate limit, including those that result in errors. This prevents abuse through invalid requests.
Can I check my usage programmatically?
Yes, use the /account/usage endpoint or check the rate limit headers in any API response.
What happens when I exceed the limit?
You’ll receive a 429 status code with an X-RateLimit-Retry-After header indicating when you can make requests again.