Rate Limiting and Backoff Guide

What is Rate Limiting?

Rate limiting is the practice of limiting how many API calls a user can make to a server in a given period of time. This helps protect against attacks and ensures that a single user or group of users cannot monopolize all of the bandwidth. EasyPost also rate limits to protect our carrier partners, ensuring we don't flood their APIs with too much traffic in a short period of time. Let's assume EasyPost had an arbitrary rate limit of 10 requests per minute; this would mean that any given user could send only 10 requests per minute before the API would start returning 429 HTTP errors ("Too Many Requests").

Our API dynamically adjusts users' rate limits based on system load, the action taken, and other variables. As such, it's important to implement retry and backoff logic to handle rate limiting, since the exact limit can change day-to-day and is not guaranteed to be a single hard limit.

I Got Rate Limited, Now What?

If you receive a 429 HTTP error ("Too Many Requests"), you have been rate limited. This could mean that you sent us too many requests in a short period of time, that we are experiencing higher load (e.g., during peak season) and are asking you to slow your requests, or that one or more of your connections has been open for too long. So what can you do about this?

  • Implement retry logic so that failed requests are attempted again (our Python client library will automatically retry on your behalf; other libraries may gain this feature in the future)
  • Temporarily slow down your requests
    • Implement backoff logic that pairs with the above retry logic: if requests are still unsuccessful, grow the interval between attempts until the rate limiting is over
  • Use TLS session resumption and connection pooling when possible
  • Implement timeout logic so that connections do not stay open indefinitely or for long periods of time (all of our client libraries have configurable timeouts built-in)
  • Optimize your workflows to reduce the API requests you make to EasyPost
    • When creating dozens or more Shipments in a short period of time, use the Batches endpoint. Batches are also ideal when generating ScanForms for dozens or more shipments at once. NOTE: Batches use an asynchronous workflow; each shipment must be processed, and labels or data may not be immediately available after submission. Use webhooks to get notified when batches have been processed
    • When sending multiple shipments from a single location to another single location, use the Orders endpoint
    • When creating a Shipment, send the Address, Parcel, and Customs Info data along in the same request instead of creating each object individually (see the sketch after this list)
    • When retrieving objects, use filtering to shorten the time window or the number of results returned. Pair this with smart use of pagination to reach the results you need
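
Below is a minimal sketch of the combined-object approach mentioned above, using the requests library. The payload shape and HTTP basic auth reflect the Shipments endpoint, but the field values are purely illustrative and Customs Info (for international shipments) can be nested the same way; consult the Shipments API reference for the full set of supported attributes.

import requests

# Create a Shipment and its nested Address and Parcel objects in a single
# request instead of creating each object with its own API call
payload = {
    "shipment": {
        "to_address": {
            "name": "Dr. Steve Brule",
            "street1": "179 N Harbor Dr",
            "city": "Redondo Beach",
            "state": "CA",
            "zip": "90277",
            "country": "US",
        },
        "from_address": {
            "company": "EasyPost",
            "street1": "417 Montgomery Street",
            "city": "San Francisco",
            "state": "CA",
            "zip": "94104",
            "country": "US",
        },
        "parcel": {"length": 20.2, "width": 10.9, "height": 5, "weight": 65.9},
    }
}

response = requests.post(
    "https://api.easypost.com/v2/shipments",
    json=payload,
    auth=("YOUR_API_KEY", ""),  # EasyPost authenticates with your API key via HTTP basic auth
    timeout=(5, 30),  # connect and read timeouts so the connection never hangs indefinitely
)
shipment = response.json()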

Retry and Backoff Example

A simple example of how one might implement retry and backoff logic using the 'requests' library in Python:

import requests
from urllib3.util.retry import Retry

# Retry up to 3 times with increasingly long delays between attempts,
# but only for the listed status codes and only for idempotent methods
# (allowed_methods requires urllib3 >= 1.26; older versions use method_whitelist)
retry_strategy = Retry(
    total=3,
    backoff_factor=1,
    status_forcelist=[
        429,
        500,
        502,
        503,
        504,
    ],
    allowed_methods=[
        "DELETE",
        "GET",
    ],
)
# Mount an adapter onto a session so the retry strategy applies to every
# HTTPS request made through that session
requests_session = requests.Session()
requests_http_adapter = requests.adapters.HTTPAdapter(max_retries=retry_strategy)
requests_session.mount(prefix="https://", adapter=requests_http_adapter)
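
Once the session is configured, any HTTPS request made through it is retried automatically according to the strategy above. A hypothetical call (the URL, credentials, and timeout values are illustrative) might look like:

response = requests_session.get(
    "https://api.easypost.com/v2/shipments",
    auth=("YOUR_API_KEY", ""),
    timeout=(5, 30),  # connect and read timeouts, per the guidance above
)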

If you are using one of our client libraries, you can wrap each EasyPost function call in a try/catch block, parse the error status and message, and retry on failure.
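
Below is a minimal sketch of that pattern with exponential backoff. The generic Exception handler and the http_status attribute are assumptions; substitute the error class and status lookup that your client library and version actually expose.

import random
import time


def call_with_backoff(fn, *args, max_attempts=4, base_delay=1.0, **kwargs):
    # Call fn, retrying with exponential backoff if it raises a 429 error
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(*args, **kwargs)
        except Exception as error:  # narrow this to your client library's error class
            status = getattr(error, "http_status", None)  # attribute name varies by library
            if status != 429 or attempt == max_attempts:
                raise
            # Wait longer after each failed attempt, with a little jitter
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5))


# Hypothetical usage with the EasyPost Python client:
# client = easypost.EasyPostClient("YOUR_API_KEY")
# shipment = call_with_backoff(client.shipment.create, to_address={...}, parcel={...})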