Overview

The Vepler API meters usage per request: each call is tracked and billed against your subscription plan. If you exceed your plan's limits, the API returns a 429 Too Many Requests response.

Handling 429 Responses

When you exceed your usage limits, the API returns a 429 status code:
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests. Please retry shortly."
  }
}

Retry with Exponential Backoff

Implement exponential backoff when you receive a 429 response:
async function requestWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelay = 1000
): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      // The SDK surfaces the HTTP status on the thrown error
      const status = (error as { statusCode?: number }).statusCode;
      if (status !== 429) throw error;
      if (i === maxRetries - 1) throw error;

      // Double the delay on each attempt, capped at 30 seconds
      const delay = Math.min(baseDelay * Math.pow(2, i), 30000);
      console.log(`Rate limited, retrying in ${delay}ms...`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  // Unreachable when maxRetries >= 1; satisfies the compiler's return check
  throw new Error('maxRetries must be at least 1');
}
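When many clients hit the limit at the same time, fixed exponential delays can make them all retry in lockstep. A common refinement is "full jitter": pick a random delay up to the exponential cap. This is a general pattern rather than anything the Vepler API requires, and `backoffDelay` below is an illustrative helper, not part of the SDK:

```typescript
// Full jitter: choose a random delay between 0 and the exponential cap
function backoffDelay(attempt: number, baseDelay = 1000, maxDelay = 30000): number {
  const cap = Math.min(baseDelay * Math.pow(2, attempt), maxDelay);
  return Math.floor(Math.random() * cap);
}

// Drop-in replacement for the fixed delay calculation:
// const delay = backoffDelay(i, baseDelay);
```

Jitter trades predictable retry timing for a lower chance of synchronized retry bursts across clients.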

Best Practices

1. Use Batch Endpoints

Where available, use batch endpoints to fetch multiple records in a single request:
import { SDK } from '@vepler/sdk';

const vepler = new SDK({ apiKey: process.env.VEPLER_API_KEY });

// Instead of individual requests
// const prop1 = await vepler.property.getV1PropertyLocationIds({ locationIds: 'id1' });
// const prop2 = await vepler.property.getV1PropertyLocationIds({ locationIds: 'id2' });

// Batch multiple IDs in one request
const response = await vepler.property.getV1PropertyLocationIds({
  locationIds: 'id1,id2,id3'
});
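If you have more IDs than fit comfortably in a single request, a small helper can split them into comma-separated batches. The helper name `chunkIds` and the batch size here are illustrative; check your plan's documentation for any real per-request limit:

```typescript
// Split a list of location IDs into comma-separated batches of up to `size` IDs
function chunkIds(ids: string[], size: number): string[] {
  const batches: string[] = [];
  for (let i = 0; i < ids.length; i += size) {
    batches.push(ids.slice(i, i + size).join(','));
  }
  return batches;
}

// Three IDs with a batch size of 2 yields two requests instead of three
const batches = chunkIds(['id1', 'id2', 'id3'], 2);
// batches[0] === 'id1,id2', batches[1] === 'id3'
```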

2. Cache Responses

Cache frequently accessed data to reduce API calls:
const cache = new Map<string, { data: unknown; expiry: number }>();

async function getCached(key: string, fetcher: () => Promise<unknown>, ttlMs = 300_000): Promise<unknown> {
  const cached = cache.get(key);
  if (cached && cached.expiry > Date.now()) {
    return cached.data;
  }

  const data = await fetcher();
  cache.set(key, { data, expiry: Date.now() + ttlMs });
  return data;
}
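One caveat with the Map-based cache above: expired entries are never removed, so a long-running process accumulates dead entries. A small sweep, run periodically or on cache misses, keeps memory bounded. `evictExpired` is a sketch to pair with the helper above, not an SDK feature:

```typescript
const cache = new Map<string, { data: unknown; expiry: number }>();

// Remove every entry whose TTL has already elapsed; returns the eviction count
function evictExpired(now = Date.now()): number {
  let evicted = 0;
  for (const [key, entry] of cache) {
    if (entry.expiry <= now) {
      cache.delete(key);
      evicted++;
    }
  }
  return evicted;
}

// e.g. sweep once a minute in a long-lived service
// setInterval(() => evictExpired(), 60_000);
```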

3. Use Request Queuing

Queue requests to control throughput:
class RequestQueue {
  private queue: Array<{ fn: () => Promise<unknown>; resolve: (v: unknown) => void; reject: (e: unknown) => void }> = [];
  private processing = false;
  private delayMs: number;

  constructor(requestsPerSecond: number) {
    this.delayMs = 1000 / requestsPerSecond;
  }

  async add<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve: resolve as (v: unknown) => void, reject });
      if (!this.processing) this.process();
    });
  }

  private async process(): Promise<void> {
    this.processing = true;

    while (this.queue.length > 0) {
      const item = this.queue.shift();
      if (!item) break;

      try {
        const result = await item.fn();
        item.resolve(result);
      } catch (error) {
        item.reject(error);
      }

      if (this.queue.length > 0) {
        await new Promise(resolve => setTimeout(resolve, this.delayMs));
      }
    }

    this.processing = false;
  }
}

// Usage
const queue = new RequestQueue(10);
const result = await queue.add(() =>
  vepler.property.getV1PropertyLocationIds({ locationIds: 'p_0x000123456789' })
);
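The queue above serializes requests with a fixed gap, so real throughput is at most requestsPerSecond and drops further when individual requests are slow. If strict ordering is not required, a sliding-window limiter is an alternative: allow up to `limit` calls per window and make extra callers wait. `RateLimiter` below is a sketch of that trade-off, not part of the SDK:

```typescript
// Allow at most `limit` calls per `windowMs`; callers past the budget wait
class RateLimiter {
  private timestamps: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  // Resolves once the caller is within the per-window budget
  async acquire(): Promise<void> {
    for (;;) {
      const now = Date.now();
      // Keep only timestamps still inside the window
      this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
      if (this.timestamps.length < this.limit) {
        this.timestamps.push(now);
        return;
      }
      // Sleep until the oldest call leaves the window, then re-check
      const wait = this.windowMs - (now - this.timestamps[0]);
      await new Promise(resolve => setTimeout(resolve, wait));
    }
  }
}

// Usage: acquire a slot before each API call
// const limiter = new RateLimiter(10, 1000);
// await limiter.acquire();
// const res = await vepler.property.getV1PropertyLocationIds({ locationIds: 'id1' });
```

Unlike the fixed-gap queue, this lets bursts of up to `limit` requests proceed immediately, then throttles only once the window budget is spent.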

4. Select Only the Fields You Need

Use the attributes parameter to request only the fields you need, reducing response size and improving performance:
const response = await vepler.property.getV1PropertyLocationIds({
  locationIds: 'p_0x000123456789',
  attributes: 'address,pricing,roomDetails'
});

Increasing Your Limits

If you need higher usage limits:
  1. Upgrade your plan at app.vepler.com
  2. Contact sales at sales@vepler.com for enterprise requirements

Next Steps