Suggestion: Multiple req_throttle() limits in one request's policies #555

Open

pbulsink opened this issue Sep 26, 2024 · 1 comment

@pbulsink
Some APIs have multiple throttle limits, e.g. no more than 4 requests per second, no more than 20 requests per minute, and no more than 200 requests per hour.

Current behavior (as far as I've been able to verify) is that placing multiple req_throttle() policies on a request results in only the last policy being enforced.

The desired behavior (if feasible) is a pool of throttles, where the delay before each request must satisfy all of the throttles. For the above scenario:

req <- request(url) %>%
  req_throttle(4 / 1) %>%       # no more than 4 requests per second
  req_throttle(20 / 60) %>%     # no more than 20 requests per minute
  req_throttle(200 / 3600)      # no more than 200 requests per hour

would result in a stepwise increase in throttle delays: 20 requests could happen in as little as 5 seconds, but once 20 requests have been sent the throttle engages the next limit and holds until a full minute has passed before sending more requests. The same applies to the 200 requests/hour limit.
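
For illustration only, a rough sketch of the "delay must satisfy all limits" rule, written outside httr2 with hypothetical names (request_times is assumed to be a POSIXct vector of previous send times):

# Hypothetical helper, not part of httr2: the delay before the next request is
# the largest delay demanded by any single limit.
required_delay <- function(request_times, limits, now = Sys.time()) {
  delays <- vapply(limits, function(limit) {
    # requests still inside this limit's window
    recent <- request_times[request_times > now - limit$period]
    if (length(recent) < limit$capacity) {
      0
    } else {
      # wait until the oldest in-window request ages out of the window
      as.numeric(min(recent) + limit$period - now, units = "secs")
    }
  }, numeric(1))
  max(0, delays)
}

limits <- list(
  list(capacity = 4,   period = 1),     # 4 requests / second
  list(capacity = 20,  period = 60),    # 20 requests / minute
  list(capacity = 200, period = 3600)   # 200 requests / hour
)

Each req_throttle() call would contribute one entry to such a pool, and the realized delay is whatever the currently binding limit requires.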

The benefit of this is that it permits bursts of activity to happen quickly, while still respecting the larger-scale limit(s) when users perform more substantial API access tasks.

@hadley
Member

hadley commented Sep 26, 2024

This would require splitting the parameters in two (i.e. number of requests and time limit). But are you sure you need this? Most modern APIs will return a rate-limit header that you can respond to dynamically with req_retry().
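
For reference, a minimal sketch of that approach, assuming a hypothetical API that returns a 429 status when throttled along with an "X-RateLimit-Reset" header giving the number of seconds to wait (the header name and semantics are assumptions; httr2's defaults already handle the standard Retry-After header):

library(httr2)

req <- request("https://api.example.com/data") %>%
  req_retry(
    max_tries = 5,
    is_transient = function(resp) resp_status(resp) == 429,
    after = function(resp) {
      # seconds to wait, taken from the assumed rate-limit header if present,
      # otherwise falling back to the standard Retry-After handling
      reset <- resp_header(resp, "X-RateLimit-Reset")
      if (!is.null(reset)) as.numeric(reset) else resp_retry_after(resp)
    }
  )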
