API Rate Limits

Understanding and working with API quotas.

To ensure the stability and performance of our platform, Open Ledger implements rate limits on API requests. This guide explains how our rate limiting works, how to monitor your usage, and best practices for optimizing your integration.

Rate Limit Structure

Open Ledger uses a tiered rate limiting system based on your plan type and specific endpoints:

| Plan            | Global Requests  | Write Operations | Read Operations  |
| --------------- | ---------------- | ---------------- | ---------------- |
| Developer       | 60 per minute    | 30 per minute    | 60 per minute    |
| Professional    | 300 per minute   | 120 per minute   | 300 per minute   |
| Enterprise      | 1,000 per minute | 500 per minute   | 1,000 per minute |
| Enterprise Plus | Custom           | Custom           | Custom           |

Rate limits vary by environment to accommodate different testing and production needs:

| Environment | Rate Limit Multiplier | Notes                                                |
| ----------- | --------------------- | ---------------------------------------------------- |
| Sandbox     | 2x your plan limits   | Higher limits to facilitate development and testing  |
| Staging     | 1.5x your plan limits | Moderately higher limits for integration testing     |
| Production  | 1x your plan limits   | Standard limits based on your plan                   |

Rate Limit Categories

Rate limits are applied in three primary categories:

Global Requests

Total API requests across all endpoints. This is the broadest limit.

Write Operations

Requests that create, update, or delete resources (POST, PUT, PATCH, DELETE).

Read Operations

Requests that read resources without modifying them (GET).

Additionally, some specific endpoints have their own dedicated rate limits:

| Endpoint             | Specific Limit | Notes                                                                   |
| -------------------- | -------------- | ----------------------------------------------------------------------- |
| /v1/reports/generate | 10 per minute  | Report generation is resource-intensive and has a lower limit           |
| /v1/bulk/import      | 5 per hour     | Bulk import operations are limited to prevent system overload           |
| /v1/ai/analyze       | 20 per hour    | AI analysis endpoints have separate limits due to compute requirements  |
| /v1/webhooks/test    | 10 per hour    | Test webhook endpoints are limited to prevent misuse                    |

Rate Limit Headers

To help you monitor your rate limit usage, Open Ledger includes special headers in all API responses:

| Header                | Description                                                                        |
| --------------------- | ---------------------------------------------------------------------------------- |
| X-RateLimit-Limit     | The maximum number of requests allowed in the current time period                  |
| X-RateLimit-Remaining | The number of requests remaining in the current time period                        |
| X-RateLimit-Reset     | The time at which the current rate limit window resets, in UTC epoch seconds       |
| X-RateLimit-Category  | The rate limit category being applied (global, write, read, or endpoint-specific)  |

Example Headers

HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 59
X-RateLimit-Reset: 1630094380
X-RateLimit-Category: global

We recommend monitoring these headers in your API integration to track your usage and avoid hitting rate limits.
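For example, if you call the API over HTTP directly rather than through the SDK, you can read these headers off every response and warn when capacity runs low. This is a minimal sketch using fetch; the base URL (https://api.openledger.com) and the bearer-token authorization shown here are assumptions for illustration, so adjust them to your setup.

// Minimal sketch: read rate limit headers from a raw HTTP response.
// The base URL and auth scheme below are assumptions for illustration.
async function getWithRateLimitInfo(path, apiKey) {
  const response = await fetch(`https://api.openledger.com${path}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });

  // Header names are case-insensitive; fetch exposes them in lowercase.
  const rateLimit = {
    limit: Number(response.headers.get("x-ratelimit-limit")),
    remaining: Number(response.headers.get("x-ratelimit-remaining")),
    reset: Number(response.headers.get("x-ratelimit-reset")), // UTC epoch seconds
    category: response.headers.get("x-ratelimit-category"),
  };

  if (rateLimit.remaining <= 5) {
    console.warn(
      `Only ${rateLimit.remaining} ${rateLimit.category} requests left until ` +
        new Date(rateLimit.reset * 1000).toISOString()
    );
  }

  return { body: await response.json(), rateLimit };
}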

Rate Limit Errors

When you exceed a rate limit, the API will respond with a 429 Too Many Requests HTTP status code and the following error response:

{
  "error": {
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Please retry after 30 seconds.",
    "retry_after": 30,
    "request_id": "req_abc123xyz456"
  }
}

The response includes a retry_after value (in seconds) indicating how long you should wait before retrying the request. The API also includes a Retry-After header with the same value.
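A simple way to honor these values is to wrap calls in a retry helper that waits for the indicated interval before trying again and gives up after a few attempts. This is a minimal sketch, assuming (as in the handler example later in this guide) that errors thrown by your client expose the HTTP status and response headers.

// Minimal sketch: retry a request after a 429, honoring Retry-After.
// Assumes the thrown error exposes `status` and a `headers` map.
async function withRetry(fn, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Only retry rate limit errors, and give up after the last attempt
      if (error.status !== 429 || attempt === maxAttempts) {
        throw error;
      }
      // Honor the Retry-After header; fall back to 30 seconds if it's missing
      const retryAfterSeconds = error.headers?.["retry-after"]
        ? parseInt(error.headers["retry-after"], 10)
        : 30;
      await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
    }
  }
}

// Usage:
// const account = await withRetry(() => openledger.accounts.retrieve("acct_12345"));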

Monitoring Your Rate Limit Usage

You can monitor your rate limit usage in several ways:

Developer Dashboard

The Open Ledger Developer Dashboard provides real-time and historical views of your API usage and rate limit status.

Response Headers

Track the rate limit headers in your API responses to monitor usage in real-time from your application.

Rate Limit API

Use the /v1/rate_limits endpoint to get comprehensive information about your current rate limits and usage.

Usage Reports

Download monthly usage reports from the Developer Dashboard to analyze your API consumption patterns.

Using the Rate Limit API

You can query your current rate limit status using the Rate Limit API:

// Check current rate limit status
const rateLimits = await openledger.rateLimits.retrieve();

console.log("Global rate limit:", rateLimits.data.global);
console.log("Write operations limit:", rateLimits.data.write);
console.log("Read operations limit:", rateLimits.data.read);

// Sample response:
// {
//   "data": {
//     "global": {
//       "limit": 60,
//       "remaining": 45,
//       "reset": 1630094380
//     },
//     "write": {
//       "limit": 30,
//       "remaining": 25,
//       "reset": 1630094380
//     },
//     "read": {
//       "limit": 60,
//       "remaining": 50,
//       "reset": 1630094380
//     },
//     "endpoints": {
//       "/v1/reports/generate": {
//         "limit": 10,
//         "remaining": 8,
//         "reset": 1630094380
//       }
//     }
//   }
// }
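Because the response reports remaining capacity per category, the Rate Limit API also works as a pre-flight check before starting a burst of work. This is a minimal sketch built on the rateLimits.retrieve() call above; waiting until the write window resets is one illustrative strategy, not a requirement.

// Minimal sketch: wait for the write window to reset if the remaining
// write budget is smaller than a planned batch of operations.
async function ensureWriteBudget(openledger, needed) {
  const { data } = await openledger.rateLimits.retrieve();

  if (data.write.remaining >= needed) {
    return; // Enough budget, proceed immediately
  }

  const now = Math.floor(Date.now() / 1000);
  const waitMs = Math.max(0, data.write.reset - now) * 1000;
  console.log(`Only ${data.write.remaining} write requests left; waiting ${waitMs}ms for reset`);
  await new Promise((resolve) => setTimeout(resolve, waitMs));
}

// Usage: make sure we can create 20 transactions before starting the batch
// await ensureWriteBudget(openledger, 20);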

Example: Implementing a Robust Rate Limit Handler

Here’s a complete example of a robust rate limit handler using our JavaScript SDK:

class OpenLedgerClient {
  constructor(apiKey, options = {}) {
    this.openledger = new OpenLedger({
      apiKey,
      environment: options.environment || "production",
    });

    this.requestQueue = new Map();
    this.rateLimits = {
      global: { limit: 60, remaining: 60, reset: 0 },
      write: { limit: 30, remaining: 30, reset: 0 },
      read: { limit: 60, remaining: 60, reset: 0 },
    };
  }

  async makeRequest(category, endpoint, method, ...args) {
    // Resolve the target method on the SDK (e.g. "accounts.retrieve"),
    // keeping track of its parent object so `this` is bound correctly.
    const methodParts = endpoint.split(".");
    let parent = this.openledger;
    let target = this.openledger;
    for (const part of methodParts) {
      parent = target;
      target = target[part];
    }

    if (typeof target !== "function") {
      throw new Error(`Invalid endpoint: ${endpoint}`);
    }

    // Check if we're near the rate limit and should throttle
    const rateLimit = this.rateLimits[category];
    const now = Math.floor(Date.now() / 1000);

    if (rateLimit.remaining < 5 && rateLimit.reset > now) {
      const delay = (rateLimit.reset - now) * 1000;
      console.log(
        `Approaching rate limit for ${category}. Delaying request for ${delay}ms`
      );
      await new Promise((resolve) => setTimeout(resolve, delay));
    }

    try {
      // Make the actual request
      const response = await target.apply(parent, args);

      // Update rate limit info from headers
      this.updateRateLimits(response.headers);

      return response;
    } catch (error) {
      if (error.status === 429) {
        const retryAfter = error.headers["retry-after"]
          ? parseInt(error.headers["retry-after"], 10) * 1000
          : 30000;

        console.log(
          `Rate limit exceeded. Retrying after ${retryAfter / 1000} seconds...`
        );

        // Wait and retry
        await new Promise((resolve) => setTimeout(resolve, retryAfter));
        return this.makeRequest(category, endpoint, method, ...args);
      }

      throw error;
    }
  }

  updateRateLimits(headers) {
    if (headers["x-ratelimit-limit"]) {
      const category = headers["x-ratelimit-category"] || "global";

      this.rateLimits[category] = {
        limit: parseInt(headers["x-ratelimit-limit"], 10),
        remaining: parseInt(headers["x-ratelimit-remaining"], 10),
        reset: parseInt(headers["x-ratelimit-reset"], 10),
      };
    }
  }

  // Helper methods for common operations
  async getAccount(id) {
    return this.makeRequest("read", "accounts.retrieve", "GET", id);
  }

  async createTransaction(data) {
    return this.makeRequest("write", "transactions.create", "POST", data);
  }

  async listTransactions(params) {
    return this.makeRequest("read", "transactions.list", "GET", params);
  }
}

// Usage:
const client = new OpenLedgerClient("YOUR_API_KEY", {
  environment: "production",
});

async function main() {
  // These requests will be automatically rate-limited if needed
  const account = await client.getAccount("acct_12345");
  console.log("Account:", account.data);

  const transactions = await client.listTransactions({ limit: 10 });
  console.log("Transactions:", transactions.data);
}

main().catch(console.error);

Concurrency Limits

In addition to rate limits, Open Ledger also enforces concurrency limits on certain operations. These limits cap how many requests of a given type can be in flight at the same time:

| Operation           | Concurrency Limit | Notes                                                  |
| ------------------- | ----------------- | ------------------------------------------------------ |
| Report Generation   | 2 per account     | Maximum of 2 reports can be generated simultaneously   |
| Bulk Data Import    | 1 per account     | Only one bulk import can run at a time                 |
| AI Analysis         | 3 per account     | Maximum of 3 simultaneous AI analysis operations       |
| Bank Reconciliation | 2 per account     | Maximum of 2 simultaneous reconciliation processes     |

When you exceed a concurrency limit, the API will respond with the same 429 Too Many Requests status code, but with a different error code:

{
  "error": {
    "type": "rate_limit_error",
    "code": "concurrent_request_limit",
    "message": "Too many concurrent requests for this operation. Please wait for existing operations to complete.",
    "request_id": "req_abc123xyz456"
  }
}
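You can avoid most concurrent_request_limit errors by gating these operations on the client side. This is a minimal sketch of a promise-based semaphore; the limit of 2 mirrors the report generation concurrency limit above, and the openledger.reports.generate call in the usage comment is a hypothetical SDK method shown only for illustration.

// Minimal sketch: a promise-based semaphore that caps in-flight operations.
class Semaphore {
  constructor(max) {
    this.max = max;     // Maximum concurrent operations
    this.active = 0;    // Currently running operations
    this.waiting = [];  // Callers waiting for a free slot
  }

  async run(task) {
    if (this.active >= this.max) {
      // Wait until a running task releases its slot
      await new Promise((resolve) => this.waiting.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      const next = this.waiting.shift();
      if (next) next(); // Hand the freed slot to the next waiter
    }
  }
}

// Usage: keep report generation at or below its concurrency limit of 2.
// `openledger.reports.generate` is a hypothetical SDK call used for illustration.
// const reportSemaphore = new Semaphore(2);
// const reports = await Promise.all(
//   ["profit_loss", "balance_sheet", "cash_flow"].map((type) =>
//     reportSemaphore.run(() => openledger.reports.generate({ type }))
//   )
// );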