I’ve come back to the same project a few times now. It’s essentially just a program that interacts with an API. The only problem is that whenever I get back to it, I realize how annoying it is to debug through all the “too many requests” responses I get back from the API, because it has a max of 200 requests per second.

One solution would be to filter out those responses, but that just feels like the wrong move, so I’m guessing the better solution would be to put some sort of rate limiter on my program. My two questions are: does that seem like a good solution, and if it is, do I embed the rate limiter in my program, i.e. using the ratelimit crate, or would a better solution be to run my program in a container, connect it to a reverse proxy (I think) container, and control rate limiting from there?

  • nous@programming.dev · 15 points · 1 day ago

    Don’t ignore the responses. If you abuse the API too much there is a chance that it will just block you permanently, and hammering it is generally seen as not very nice; it takes resources on both ends to process even that response.

    The ratelimit crate is an OK solution for this and simple enough to implement/include in your code, but it can create a mismatch between your code and the API. If they ever change the limits you will need to adjust your program.

    A proxy solution seems overly complex in terms of infra to set up and maintain, so I would avoid that.

    A better solution can depend on the API. Quite often they send back the request quota you have left, either on every request or when you exceed the rate limit. You can build logic into your client for the API (or into a wrapper, if you have not created one already) that understands these values and backs off when the limits are reached or nearly reached.
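
    A minimal sketch of that kind of back-off decision, using only the standard library. The header names here (x-ratelimit-remaining, retry-after) and the thresholds are just common conventions, not something this particular API is known to use:

    ```rust
    use std::time::Duration;

    /// Decide how long to pause before the next request, given two optional
    /// header values from the last response. Header names vary by API;
    /// "x-ratelimit-remaining" and "retry-after" are only common conventions.
    fn backoff_for(remaining: Option<&str>, retry_after_secs: Option<&str>) -> Option<Duration> {
        // The API told us exactly how long to wait (e.g. on a 429 response).
        if let Some(secs) = retry_after_secs.and_then(|s| s.parse::<u64>().ok()) {
            return Some(Duration::from_secs(secs));
        }
        // We're close to the quota: slow down before we hit it.
        if let Some(left) = remaining.and_then(|s| s.parse::<u64>().ok()) {
            if left == 0 {
                return Some(Duration::from_secs(1));
            }
            if left < 10 {
                return Some(Duration::from_millis(100));
            }
        }
        None
    }

    fn main() {
        // Values as they might come out of response headers.
        assert_eq!(backoff_for(Some("5"), None), Some(Duration::from_millis(100)));
        assert_eq!(backoff_for(None, Some("2")), Some(Duration::from_secs(2)));
        assert_eq!(backoff_for(Some("150"), None), None);
    }
    ```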

    Otherwise there are various things you can do depending on how complex their rate-limit rules are. The ratelimit crate is probably good for more complex cases, but you can just delay all requests for a while if the rate limiting on the API is quite simple.
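
    In the simplest case that just means spacing requests evenly so you stay under the 200-per-second figure from the question; a std-only sketch:

    ```rust
    use std::time::Duration;

    fn main() {
        // 200 requests per second -> at most one request every 5 ms.
        let min_gap = Duration::from_secs(1) / 200;

        for _ in 0..1000 {
            // make_request() would go here
            std::thread::sleep(min_gap);
        }
    }
    ```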

    You can also use an exponential backoff algorithm if you are not sure at all what the rules are (basically, retry with an exponentially increasing delay between attempts until you get a successful response, with an upper limit on the delay). This is also a great all-round solution for other types of failures, as it stops your systems from hammering theirs if they ever encounter a different problem or go down for some reason. Though it is not the best choice if you have more info about how long you should be waiting.
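
    A minimal sketch of that backoff loop (std only; the retry limit and the delays are arbitrary placeholder values, not anything the API specifies):

    ```rust
    use std::time::Duration;

    /// Retry `attempt` until it succeeds, doubling the delay after every
    /// failure up to a ceiling. The specific numbers here are placeholders.
    fn retry_with_backoff<T, E>(mut attempt: impl FnMut() -> Result<T, E>) -> Result<T, E> {
        let mut delay = Duration::from_millis(100);
        let max_delay = Duration::from_secs(30);
        let max_tries = 8;

        for try_number in 1..=max_tries {
            match attempt() {
                Ok(value) => return Ok(value),
                Err(err) if try_number == max_tries => return Err(err),
                Err(_) => {
                    std::thread::sleep(delay);
                    delay = (delay * 2).min(max_delay);
                }
            }
        }
        unreachable!("the loop always returns on the final try")
    }

    fn main() {
        // Toy example: fails the first three times, then succeeds.
        let mut calls = 0;
        let result = retry_with_backoff(|| {
            calls += 1;
            if calls < 4 { Err("too many requests") } else { Ok(calls) }
        });
        assert_eq!(result, Ok(4));
    }
    ```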

  • orclev@lemmy.world · 13 points · 1 day ago

    You absolutely should not be just ignoring “too many requests” responses. The entire point of putting rate limits on APIs is to reduce resource usage, and while it doesn’t take many resources to serve up a request-denied message, that amount isn’t zero. If you continue to hammer an API that has rate limited you, at some point they will decide your traffic is malicious and just start blackholing all your requests.

    I’m honestly not sure what the best way to do rate limiting would be; I suspect that might depend on a number of factors, such as which web client and async framework you’re using. But I would recommend, if at all possible, using a library rather than rolling your own. The library you found so far seems reasonable enough, at least as a first attempt.

  • kn33@lemmy.world · 2 points · 1 day ago (edited)

    I don’t know Rust; I’m just here to chill. I can tell you what I would do, and have done, in PowerShell to solve this. From there you can translate it to Rust.

    Let’s go with your limit of 200 requests per second. At the start of the script, I create a stopwatch. Literally, a stopwatch that’s tied to real-world time and can be reset. Then I have a variable that counts my requests. Every time I make a request, I increment it. Before every request, I check whether the variable is 200. If it is, I check the timer to see if a second has passed. If not, I calculate how much time is left until a second has passed and sleep for that amount of time. After doing that check/sleep, I reset the request counter and the stopwatch. From there, continue on.
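
    A rough Rust translation of that fixed-window approach might look something like this (just a sketch, standard library only; the 200-per-second limit comes from the question):

    ```rust
    use std::time::{Duration, Instant};

    const MAX_REQUESTS: u32 = 200;          // API limit from the question
    const WINDOW: Duration = Duration::from_secs(1);

    /// Fixed-window limiter: count requests, and once the count hits the
    /// limit, sleep for whatever is left of the current one-second window.
    struct FixedWindow {
        window_start: Instant, // the "stopwatch"
        count: u32,            // requests made in the current window
    }

    impl FixedWindow {
        fn new() -> Self {
            Self { window_start: Instant::now(), count: 0 }
        }

        /// Call this before every request.
        fn wait_if_needed(&mut self) {
            if self.count >= MAX_REQUESTS {
                let elapsed = self.window_start.elapsed();
                if elapsed < WINDOW {
                    // Sleep for the remainder of the window.
                    std::thread::sleep(WINDOW - elapsed);
                }
                // Reset the counter and the stopwatch.
                self.count = 0;
                self.window_start = Instant::now();
            }
            self.count += 1;
        }
    }

    fn main() {
        let mut limiter = FixedWindow::new();
        for _ in 0..1000 {
            limiter.wait_if_needed();
            // make_request() would go here
        }
    }
    ```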

  • taaz@biglemmowski.win · 1 point · 1 day ago (edited)

    Dunno if it’s applicable to your Rust HTTP client, but in Python’s aiohttp you can set a maximum number of active (TCP) connections per host for the connection pool.
    It does not ensure you won’t ever go over the rate (especially if your requests are small), it just does not let the program burst the server down. It’s usually my first choice: dumb in the long run, but quick and cheap.
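
    A rough Rust analog of that idea, assuming a tokio + reqwest setup, is to cap the number of in-flight requests with a semaphore. The limit of 10 and the URL below are placeholders, not values from the thread, and as noted above this caps concurrency rather than requests per second:

    ```rust
    use std::sync::Arc;
    use tokio::sync::Semaphore;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        let client = reqwest::Client::new();
        // At most 10 requests in flight at once (placeholder value).
        let permits = Arc::new(Semaphore::new(10));

        let mut handles = Vec::new();
        for i in 0..100 {
            let client = client.clone();
            let permits = permits.clone();
            handles.push(tokio::spawn(async move {
                // Held for the duration of the request; released when dropped.
                let _permit = permits.acquire_owned().await.unwrap();
                // Hypothetical endpoint, just for illustration.
                let resp = client
                    .get(format!("https://api.example.com/items/{i}"))
                    .send()
                    .await?;
                resp.error_for_status()?;
                Ok::<_, reqwest::Error>(())
            }));
        }

        for handle in handles {
            handle.await??;
        }
        Ok(())
    }
    ```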