Thursday, September 9, 2021

Rate Limiting Algorithms

Introduction
The algorithms are:
1. Token Bucket
2. Leaky Bucket
3. Fixed Window Counter
4. Sliding Logs
5. Sliding Window Counter

Counter algorithm aka Fixed window counter
The explanation is as follows. In my opinion this algorithm is essentially the same idea as Token Bucket.
The counter algorithm limits traffic with a simple counter. Typically we limit the number of requests that can pass within one second. For example, say the rate limit is 100 QPS. The algorithm starts timing from the first request; within that 1 s window, each incoming request increments the count by 1. Once the count reaches 100, all subsequent requests are rejected. When the 1 s window ends, the count is reset to 0 and counting restarts.

A concrete implementation: for each service call, use the AtomicLong#incrementAndGet() method to add 1 to the counter and return the latest value, then compare that value with the threshold.
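The idea above can be sketched as a minimal fixed window counter. This is my own sketch, not code from a specific library; the class and field names are assumptions:

import java.util.concurrent.atomic.AtomicLong;

// Minimal fixed window counter sketch: allows at most `limit` requests
// per `windowMillis`, then resets the counter when a new window starts.
public class FixedWindowCounter {
  private final long limit;
  private final long windowMillis;
  private final AtomicLong counter = new AtomicLong();
  private long windowStart = System.currentTimeMillis();

  public FixedWindowCounter(long limit, long windowMillis) {
    this.limit = limit;
    this.windowMillis = windowMillis;
  }

  public synchronized boolean tryAcquire() {
    long now = System.currentTimeMillis();
    if (now - windowStart >= windowMillis) {
      windowStart = now; // a new window begins
      counter.set(0);    // restore the count to 0
    }
    // incrementAndGet() adds 1 and returns the latest value,
    // which is then compared with the threshold
    return counter.incrementAndGet() <= limit;
  }
}

Usage would look like new FixedWindowCounter(100, 1000) for 100 requests per second, calling tryAcquire() on each request.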

This implementation has a well-known drawback: if 100 requests pass in the first 10 ms of the 1 s window, then for the remaining 990 ms every request must be rejected. We call this the “spike phenomenon”.

Leaky Bucket aka Spike Control Policy - Runs at a Fixed Rate
Moved to the Leaky Bucket post

Token Bucket - Supports Bursts
Moved to the Token Bucket post

Sliding window log
The explanation is as follows. Suppose we allow 2 requests per minute. We go back one minute from the most recent request and count how many requests there are in the sliding window log. If this count is 2 or more, the new request is rejected. If it is smaller, that is, if the new request will be accepted, the requests older than one minute are also deleted from the log.
Sliding window log algorithm keeps a log of request timestamps for each user (Generally redis sorted sets are used for this). When a request comes, we first remove all outdated timestamps before inserting the new request time to the log. Then we decide whether this request should be processed depending on whether the log size has exceeded the limit.
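The description above uses Redis sorted sets; a minimal in-memory sketch of the same idea can use a Deque of timestamps as the log. All names here are my own assumptions:

import java.util.ArrayDeque;
import java.util.Deque;

// Sliding window log sketch: keeps the timestamp of every accepted request,
// drops timestamps older than the window, and rejects a request when the
// log already holds `limit` entries.
public class SlidingWindowLog {
  private final int limit;
  private final long windowMillis;
  private final Deque<Long> log = new ArrayDeque<>();

  public SlidingWindowLog(int limit, long windowMillis) {
    this.limit = limit;
    this.windowMillis = windowMillis;
  }

  public synchronized boolean tryAcquire(long nowMillis) {
    // first remove all outdated timestamps from the log
    while (!log.isEmpty() && nowMillis - log.peekFirst() >= windowMillis) {
      log.pollFirst();
    }
    if (log.size() >= limit) {
      return false; // the log size has reached the limit
    }
    log.addLast(nowMillis); // insert the new request time
    return true;
  }
}

Passing the current time as a parameter instead of calling System.currentTimeMillis() inside makes the sketch easy to test; for the 2-requests-per-minute example above, new SlidingWindowLog(2, 60000) would be used.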
Example
We can do it as follows. Using a HashMap as the window has no particular benefit here; a List would have worked too.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.stream.Collectors;

public class SlidingWindowExample {
  public static void main(String[] args) {
    int[] data = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 21}; // input data
    int windowSize = 4;    // size of the sliding window
    int slideInterval = 4; // how many elements to drop when sliding

    Map<Integer, Integer> window = new HashMap<>(); // the sliding window

    for (int i = 0; i < data.length; i++) {
      window.put(i, data[i]); // add the current element to the window

      if (window.size() == windowSize) { // check if the window is full
        String joined = window.values().stream()
                              .map(Object::toString)
                              .collect(Collectors.joining(", "));
        System.out.println("Window : " + joined);

        // slide the window: drop the oldest slideInterval entries
        Iterator<Map.Entry<Integer, Integer>> iterator = window.entrySet().iterator();
        for (int removed = 0; removed < slideInterval && iterator.hasNext(); removed++) {
          iterator.next();
          iterator.remove();
        }
      }
    }
  }
}
The output is:
Window : 2, 4, 6, 8
Window : 10, 12, 14, 16

