Tuesday, March 28, 2023

AWS ElastiCache - Redis

Introduction
The explanation is as follows. In other words, it can be Memcached or Redis. Since Memcached has started to fade away, it is reasonable to read this as Redis.
There are two types of in-memory caching engines:
1. Memcached — designed for simplicity, so use it when you need the simplest model possible.
2. Redis — works for a wide range of use cases and has Multi-AZ support. You can also perform backups/restores of Redis.
Redis
Redis is said to be expensive. Before using ElastiCache, it can be tried out on Amazon Elastic Compute Cloud (EC2). The explanation is as follows
Run a Docker redis in ec2 for a while first. Make sure it serves your purposes. Figure out how the sizing works for you. Remember redis is meant to hold ephemeral data in memory, so please don’t treat it as a DB. It’s best to design your system assuming that redis may or may not lose whatever is inside. Clustering or HA via Sentinel, can all come later when you know you need it!
Using Redis
It can be used in two ways: with cluster mode enabled or disabled. In short:
1. Redis-CME
2. Redis-CMD

The explanation is as follows
Elasticache provides two flavors of Redis, Cluster Mode Enabled (Redis-CME) and Cluster Mode Disabled (Redis-CMD).
Redis-CMD
The explanation is as follows. There is a single cluster and a single shard. The cluster contains primary and replica nodes.
Broadly speaking, for Redis-CMD, there is only one shard. The shard comprises the primary node along with the read replicas. Read replicas maintain a copy of the data from the cluster’s primary node.
Redis-CME
The explanation is as follows. There is a single cluster, but this time there can be up to 250 shards. Each shard has a primary node and replica nodes.
For Redis-CME, Elasticache allows up to 250 shards for a Redis cluster. Each shard has a primary node and read replicas. Elasticache distributes the keyspace evenly across all the shards. When reading or writing data to the cluster, the client itself determines which shard to use based on the keyspace. This design pushes the work to the client and avoids any potential single points of failure.
The Redis-CME Problem
The explanation is as follows. In other words, if you need something like an object graph, CME may not be a good fit.
When we started using Redis, we soon realized use-cases where we wanted to fetch multiple Redis keys at once (think Friends of Friends) and to avoid multiple network round trips, we thought of using mget call in Redis which allows us to do so.
...
The caveat here was the sharded nature of our setup. This meant that essentially the keys that we were trying to fetch in one go could be distributed across multiple shards, i.e. across multiple nodes, defeating our fetch all-at-once requirement.
Redis hashtags can be used for this problem. The explanation is as follows. However, once you do this, the data may no longer be distributed evenly across all the shards.
To solve this, we started using Redis Hashtags which helped us localize all related data on a single Redis node. Redis Cluster provides the concept of hashtags, according to which, if there is a substring between {} brackets in a key, only what is inside the string is hashed. An example of this in our application would be {RELATED_STOCKS}_AAPL. By surrounding the RELATED_STOCKS section of each key in curly braces, we are effectively storing them in the same slot. This means that we can store an entire group by calling MSET once and fetch related stocks for multiple stocks at once using MGET. This usage pattern became more and more prevalent across our application. For example, all related stocks for all the stocks were now mapped to the same hash key in Redis because of {RELATED_STOCKS} hashtag, and hence same hash slot and shard on Redis, making keys distributed less evenly across nodes, which can cause certain nodes to use more memory than others.
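The same pattern as a minimal sketch, assuming the Jedis client is available; the key names and node address are illustrative. Because every key shares the {RELATED_STOCKS} hashtag, all of them hash to the same slot, so a single MSET/MGET round trip works against the cluster:
import java.util.List;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class HashtagExample {
  public static void main(String[] args) {
    JedisCluster cluster = new JedisCluster(new HostAndPort("localhost", 7000));
    // Both keys hash on "RELATED_STOCKS", i.e. the substring inside {}
    cluster.mset("{RELATED_STOCKS}_AAPL", "MSFT,GOOG",
                 "{RELATED_STOCKS}_MSFT", "AAPL,AMZN");
    // One round trip; without the hashtag a cluster rejects multi-key ops with CROSSSLOT
    List<String> related = cluster.mget("{RELATED_STOCKS}_AAPL", "{RELATED_STOCKS}_MSFT");
    System.out.println(related);
    cluster.close();
  }
}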


Monday, March 27, 2023

Distributed Snapshot

Problem Definition
The explanation is as follows
Point in Time Snapshots is critical for capturing the “consistent” state of systems, which can be restored in case of any loss of system state, making your system fault tolerant.

Taking a snapshot of one particular server is easy. You define a cut-off time and at that time, the state of the server(local state) at that exact time can be captured for the snapshot.

However, a snapshot in distributed systems, i.e. on all the nodes in a cluster, is a challenging problem because nodes in a cluster don't have a common/global clock. Hence, it cannot be guaranteed that all the nodes in the cluster will capture their local state at the same “instant”.

In addition to the “local” state, there could be additional state associated with the distributed system which is in transit, i.e. messages sent from node 1 to node 2 that haven't arrived at node 2 yet.

The other constraint during snapshots is that it should not be a “stop the world” process and it should not alter the actual computations!!


In short, we need the distributed snapshot to create a “Consistent” snapshot of the global state of the distributed system, without impacting actual computations on that system.
The Algorithm
The explanation is as follows
The algorithm used for capturing distributed snapshots is the Chandy-Lamport algorithm (yes, Leslie Lamport is also behind Lamport Clocks).
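A compact sketch of the marker logic for a single node follows, under the algorithm's usual assumptions (reliable FIFO channels). The types, channel ids, and network plumbing are illustrative, not a production implementation:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SnapshotNode {
  private final Set<String> inboundChannels;
  private final Set<String> recordingOn = new HashSet<>();    // channels still being recorded
  private final Map<String, List<Object>> channelState = new HashMap<>();
  private Object localState;                                  // this node's recorded state
  private boolean recorded = false;

  public SnapshotNode(Set<String> inboundChannels) {
    this.inboundChannels = inboundChannels;
  }

  // Called by the initiator, or on the first marker arrival
  public void startSnapshot() {
    localState = captureLocalState();    // record the local state at this instant
    recorded = true;
    recordingOn.addAll(inboundChannels);
    inboundChannels.forEach(ch -> channelState.put(ch, new ArrayList<>()));
    sendMarkerOnAllOutgoingChannels();   // tell every neighbor to snapshot too
  }

  public void onMarker(String channel) {
    if (!recorded) {
      startSnapshot();                   // first marker: record state, forward markers
    }
    recordingOn.remove(channel);         // this channel's in-transit state is now complete
  }

  public void onMessage(String channel, Object message) {
    if (recorded && recordingOn.contains(channel)) {
      channelState.get(channel).add(message); // in-transit message belongs to the snapshot
    }
    // normal processing continues here; the snapshot never stops the world
  }

  private Object captureLocalState() { return null; /* application-specific */ }
  private void sendMarkerOnAllOutgoingChannels() { /* network-specific */ }
}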

Rate Limiting Algorithms - Leaky/Leaking Bucket aka Spike Control Policy - Fixed Rate

Introduction
The explanation is as follows. In this approach, when the upper limit is exceeded a request is not dropped; it is queued for later processing. A class that implements this approach is here. If the queue also fills up, new requests are ignored.
The Spike Control Policy is provided for smoothing API traffic. The policy ensures that within any given period of time, no more than the maximum configured requests are processed.
If there is no request quota in the current window, the policy allows requests to be queued for later reprocessing without closing the connection to the client.
The explanation is as follows
The leaky bucket limits the constant outflow rate, which is set to a fixed value. For example, if the outflow rate is set to one request per second, it cannot process two requests per second. This ensures the outflow rate is always stable, regardless of the inflow rate.
The figure is as follows
Another figure is as follows

The implementation is as follows
In terms of algorithm implementation, a queue can be prepared to save requests, and a thread pool (ScheduledExecutorService) can be used to periodically obtain requests from the queue and execute them, and multiple concurrent executions can be obtained at one time.
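A minimal sketch of that idea, assuming one request is drained per tick; the class and parameter names are illustrative:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LeakyBucket {
  private final BlockingQueue<Runnable> queue;
  private final ScheduledExecutorService leaker = Executors.newSingleThreadScheduledExecutor();

  public LeakyBucket(int capacity, long leakIntervalMillis) {
    this.queue = new LinkedBlockingQueue<>(capacity);
    // Drain one queued request per interval: a constant outflow rate,
    // regardless of how bursty the inflow is
    leaker.scheduleAtFixedRate(() -> {
      Runnable request = queue.poll();
      if (request != null) {
        request.run();
      }
    }, leakIntervalMillis, leakIntervalMillis, TimeUnit.MILLISECONDS);
  }

  // Queue the request for later processing; false means the queue is full
  // and the request is ignored, as described above
  public boolean tryAccept(Runnable request) {
    return queue.offer(request);
  }
}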

A disadvantage of this algorithm is as follows
This algorithm also has drawbacks after use: it cannot cope with short-term burst traffic.

Friday, March 24, 2023

Rate Limiting Algorithms - Token Bucket - Supports Bursts

Introduction
The explanation is as follows
The token-bucket algorithm is explained with the analogy of a bucket with finite capacity, into which tokens are added at a fixed rate. But it can't fill up infinitely: if a token arrives when the bucket is full, it's discarded. On every request, some tokens are removed from the bucket. The request is rejected if there are not enough tokens in the bucket.
Within a given period, e.g. 1 minute, 100 requests are processed; the spacing between those 100 requests does not matter. Once the minute is over, another 100 tokens are added to the bucket. The figure is as follows

As a result, the tokens can all be consumed at once. The explanation is as follows
The token bucket allows for sudden increase in traffic to some extent, while the leaky bucket is mainly used to ensure the smooth outflow rate.
A problem with this algorithm is as follows. If the bucket is full both right at the end of one window and right at the start of the next, two batches of 100 requests can arrive back to back and exceed the upper limit. We can suddenly find ourselves processing 200 requests: a system that should handle at most 100 requests per second is forced to handle 200.
I'll show the problem with a perfect example to briefly explain the idea:

1. At some moment, our bucket contains 100 tokens.
2. At the same time, we consume 100 tokens.
3. After one second, the refiller again fills 100 tokens.
4. At the same time, we consume 100 tokens.
The explanation is as follows
Implementation idea: You can prepare a queue to save tokens, and periodically generate tokens through a thread pool and put them in the queue. Every time a request comes, get a token from the queue and continue to execute.
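A minimal sketch of the same idea, using a Semaphore as the bucket instead of a queue of token objects (an implementation choice of this sketch, not from the quote); a scheduler tops the bucket back up once per period:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class TokenBucket {
  private final Semaphore tokens;
  private final ScheduledExecutorService refiller = Executors.newSingleThreadScheduledExecutor();

  public TokenBucket(int capacity, long refillPeriodSeconds) {
    this.tokens = new Semaphore(capacity);
    // Refill up to capacity once per period; tokens that would overflow
    // the bucket are simply never created
    refiller.scheduleAtFixedRate(
        () -> tokens.release(capacity - tokens.availablePermits()),
        refillPeriodSeconds, refillPeriodSeconds, TimeUnit.SECONDS);
  }

  // Each request consumes one token; reject the request when the bucket is empty
  public boolean tryAcquire() {
    return tokens.tryAcquire();
  }
}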

Wednesday, March 22, 2023

AWS Lambda Cold Start

Introduction
The cause of the cold start problem is Scale to Zero.

Scale To Zero
The explanation is as follows. In other words, when there is no traffic, the environment is torn down and everything starts from scratch.
What is “Scale to Zero”?
Simply put, “Scale to Zero” is a regular container deployment that automatically scales to zero instances when there is no incoming traffic. The moment a request hits the service, an instance is started to handle it. This is also commonly referred to as “cold start,” similar to e.g., AWS Lambdas.

Nearly all major cloud providers support “Scale to Zero” in some form or the other.
The Cold Start Problem
The explanation is as follows
When Lambda receives a request to execute a task, it starts by downloading the code from S3 buckets and creating an execution environment based on the predefined memory and its corresponding compute resources. If there is any initialization code, Lambda runs it outside the environment and then runs the handler code. The time required for downloading the code and preparing the execution environment is counted as the cold start duration. After executing the code, Lambda freezes the environment so that the same function can run quickly if invoked again. If you run the function concurrently, each invocation gets a cold start. There will also be a cold start if the code is updated. The typical time for cold starts falls between 100 ms and 1 second. In light of the foregoing, Lambda falls short in the Lambda vs Fargate race regarding cold starts. However, Provisioned Concurrency is a solution to reduce cold starts.

The runtime choice will also have an impact on Lambda cold starts. For instance, Java runtime involves multiple resources to run the JVM environment, which delays the start. On the other hand, C# or Node.js runtime environments offer lower latencies.
The solutions are as follows
1. Lambda Provisioned Concurrency
2. SnapStart

1. Lambda Provisioned Concurrency
The explanation is as follows. In other words, this means a Lambda that is always running.
In other words, this facilitates the creation of pre-warmed Lambdas waiting to serve incoming requests. As this is pre-provisioned, the configured number of provisioned environments will be up and running all the time even if there are no requests to cater to. Therefore, this contradicts the very essence of serverless environments. Also, since environments are provisioned upfront, this feature is not free and comes with a considerable price.
2. SnapStart
The explanation is as follows. In other words, a Lambda that is always ready to run. It must be enabled via “AWS console” -> “Configuration” -> “General Configuration” -> “Edit”.
With SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker micro VM snapshot of the memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access. When you invoke the function version for the first time, and as the invocations scale up, Lambda resumes new execution environments from the cached snapshot instead of initializing them from scratch, improving startup latency. The best part is that, unlike provisioned concurrency, there is no additional cost for SnapStart. SnapStart is currently only available for Java 11 (Corretto) runtime.
The explanation is as follows
Lambda SnapStart for Java is a new feature that enables you to resume new execution environments from cached snapshots without initializing them from scratch. It helps improve startup latency. However, this feature is only available for Java 11-managed runtime environments. There are other limitations as well. It does not support provisioned concurrency, Amazon X-Ray, Amazon EFS, or arm64 architecture. Moreover, you cannot use ephemeral storage of more than 512 MB.



Tuesday, March 21, 2023

What is Hard Real-time?

Introduction
There is no consensus on how to translate the term hard real-time into Turkish. It is sometimes also called critical real-time. Translations in use include:
1. Sıkı Gerçek Zaman
2. Katı Gerçek-Zaman
3. Sert Gerçek-Zaman
 
The explanation is as follows.
Hard real-time: Missing a deadline is a total system failure. Delays or spikes are not accepted. Hence, the goal is to ensure that all deadlines are met.
The explanation is as follows
Hard real-time is a deterministic network with zero spikes and zero latency. That’s a requirement for embedded systems using programming languages like C, C++, or Rust to implement safety-critical software like flight control systems or collaborative robots (cobots).... the right technology for safety-critical latency requirements.

Monday, March 20, 2023

Cache Strategies - Cache Access Patterns - Write-Around

Introduction
The explanation is as follows
In the write-around approach, DML commands on the data are applied to the database first, and the database then makes asynchronous calls to the cache to update the key.
The explanation is as follows
The write request goes around the cache, straight to the DB, and an acknowledgement is sent back; the data is not sent to the cache. Data is written to the cache on the first cache miss.
The explanation is as follows
In this design, a cache entry only expires when it exceeds the pre-set TTL. There is no cache invalidation nor cache update in the write path. The advantage is that the implementation is very simple, but at the cost of even more cache staleness — as long as the TTL window.
Disadvantage
Written data won't immediately be read back from the cache.
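A minimal sketch of the pattern; the Cache and Db interfaces are illustrative stand-ins, e.g. for a Redis client and a DAO:
public class WriteAroundStore {
  private final Cache cache;
  private final Db db;

  public WriteAroundStore(Cache cache, Db db) {
    this.cache = cache;
    this.db = db;
  }

  // The write goes around the cache, straight to the DB
  public void write(String key, String value) {
    db.put(key, value);
  }

  // The cache is populated on the first cache miss
  public String read(String key) {
    String value = cache.get(key);
    if (value == null) {
      value = db.get(key);
      cache.put(key, value);
    }
    return value;
  }

  interface Cache { String get(String key); void put(String key, String value); }
  interface Db { String get(String key); void put(String key, String value); }
}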

Cache Strategies - Cache Access Patterns - Write-Behind aka Write-Back

Introduction
The figure is as follows

It is essentially the same as Write-Through; the only difference is that the database update is done asynchronously rather than synchronously.
The explanation is as follows
The write-behind approach is very similar to write-through, except that the database write calls are asynchronous.
The explanation is as follows
In a write-behind cache, a write request only updates the cache. Then another background process asynchronously updates the DB with the new entries in the cache. The asynchronous DB update can be implemented as periodic batch update, and the workload can be scheduled to run during mid-night, i.e. when the DB load is low.
The explanation is as follows
first write into the cache and then into the database.
Advantage
The explanation is as follows
Useful in write-heavy environments where slight data loss is tolerable.
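A minimal sketch, with a background flusher standing in for the asynchronous DB writer described above; names are illustrative, and a real implementation would batch updates and handle failures:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WriteBehindCache {
  private final Map<String, String> cache = new ConcurrentHashMap<>();
  private final Map<String, String> dirty = new ConcurrentHashMap<>();
  private final ScheduledExecutorService flusher = Executors.newSingleThreadScheduledExecutor();

  public WriteBehindCache(Db db, long flushPeriodSeconds) {
    // Periodically push everything written since the last flush to the DB,
    // e.g. scheduled for when the DB load is low
    flusher.scheduleAtFixedRate(() -> dirty.forEach((key, value) -> {
      db.put(key, value);
      dirty.remove(key, value);  // keep the entry if it changed in the meantime
    }), flushPeriodSeconds, flushPeriodSeconds, TimeUnit.SECONDS);
  }

  // A write only updates the cache; the DB write happens later, asynchronously
  public void write(String key, String value) {
    cache.put(key, value);
    dirty.put(key, value);
  }

  public String read(String key) {
    return cache.get(key);
  }

  interface Db { void put(String key, String value); }
}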


Cache Strategies - Cache Access Patterns - Write-Through

The figure is as follows

The explanation is as follows. In other words, the cache and the database are updated together synchronously.
1. The application writes the data directly to the cache.
2. The cache updates the data in the main database. When the write is complete, both the cache and the database have the same value and the cache always remains consistent.
A MapWriter is used. The explanation is as follows
whenever any “write” request comes, it will go through the cache to the DB. The write is considered successful only if the data is written successfully both in the cache and in the DB.
The explanation is as follows. In other words, the data stays locked until the database has been updated.
The write-through strategy means that a write request first updates the DB, then updates the cache
...
Distributed lock is a critical component to guarantee atomic update to both the cache layer and the DB layer. 
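A minimal sketch: the write is acknowledged only after both the cache and the DB have been updated, with a plain synchronized block standing in for the distributed lock mentioned above:
public class WriteThroughStore {
  private final Cache cache;
  private final Db db;

  public WriteThroughStore(Cache cache, Db db) {
    this.cache = cache;
    this.db = db;
  }

  // Successful only if both the cache and the DB are updated; in a
  // distributed setup this critical section would be a distributed lock
  public synchronized void write(String key, String value) {
    cache.put(key, value);
    db.put(key, value);   // if this throws, the write is not acknowledged
  }

  interface Cache { void put(String key, String value); }
  interface Db { void put(String key, String value); }
}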


Cache Strategies - Cache Access Patterns - Read-Through

Introduction
The figure is as follows


A MapLoader is used. The explanation is as follows. The application only accesses the cache; when needed, the cache queries the database.
1. The App never interacts with DB directly but always via Cache.
2. On a cache miss, the cache will read from DB and enrich the cache storage.
3. On a cache hit, data is served from the cache.

You can see, the DB is reached very infrequently and the response is fast since the caches are mostly in-memory (Redis/ Memcached). 
Advantage
The explanation is as follows
Keeps cache consistently populated by handling misses automatically
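A minimal sketch of the pattern, where computeIfAbsent plays the role a MapLoader plays in a real cache:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReadThroughCache {
  private final Map<String, String> cache = new ConcurrentHashMap<>();
  private final Db db;

  public ReadThroughCache(Db db) {
    this.db = db;
  }

  // On a hit, serve from the cache; on a miss, read from the DB and
  // enrich the cache, all behind a single call for the application
  public String get(String key) {
    return cache.computeIfAbsent(key, db::get);
  }

  interface Db { String get(String key); }
}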

Read-Through and the Request Collapsing Concept
If a very large number of requests arrive for the same object at the same time, the cache sends the database a correspondingly large number of requests. Merging these requests is called request collapsing.
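A sketch of the idea (illustrative, not from a specific library): concurrent misses for the same key share one in-flight DB load instead of each issuing its own:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CollapsingLoader {
  private final ConcurrentMap<String, CompletableFuture<String>> inFlight = new ConcurrentHashMap<>();
  private final Db db;

  public CollapsingLoader(Db db) {
    this.db = db;
  }

  // Only the first caller for a key triggers the DB read; concurrent
  // callers get the same future, collapsing N requests into one
  public CompletableFuture<String> load(String key) {
    return inFlight.computeIfAbsent(key, k ->
        CompletableFuture.supplyAsync(() -> db.get(k))
            .whenComplete((value, error) -> inFlight.remove(k)));
  }

  interface Db { String get(String key); }
}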

Cache-aside Reads vs Read-Through
The figure is as follows

Tuesday, March 14, 2023

Software Architecture - Repository with Hexagonal Architecture

Introduction
The domain only knows about an interface, i.e. a port.

Example
Suppose we have an interface like this
public interface StudentRepository {

    Student save(Student student);
    Optional<Student> retrieveStudentWithEmail(ContactInfo contactInfo);
    Publisher<Student> saveReactive(Student student);

}
Let's write a test for this interface on the domain side
public class StudentRepositoryTest {

  StudentRepository studentRepository;

  @Test
  public void shouldCreateStudent() {
    Student expected = ...;
    Student actual = studentRepository.save(expected);
    ...
  }

  @Test
  public void shouldUpdateExistingStudent() {
    Student expected = randomExistingStudent();
    Student actual = studentRepository.save(expected);
    ...
  }
}
Now let's write a test for the in-memory repository implementation, i.e. the adapter
public class StudentRepositoryInMemoryIT extends StudentRepositoryTest {

  @BeforeEach
  public void setup() {
    super.studentRepository = new StudentRepositoryInMemory();
  }
}
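For completeness, a minimal sketch of what StudentRepositoryInMemory might look like; it assumes Student exposes its ContactInfo and that Project Reactor is on the classpath for the Publisher (both assumptions, not from the original post):
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

import org.reactivestreams.Publisher;

import reactor.core.publisher.Mono;

public class StudentRepositoryInMemory implements StudentRepository {

  private final Map<ContactInfo, Student> students = new ConcurrentHashMap<>();

  @Override
  public Student save(Student student) {
    // Assumes a getContactInfo() accessor on Student; illustrative only
    students.put(student.getContactInfo(), student);
    return student;
  }

  @Override
  public Optional<Student> retrieveStudentWithEmail(ContactInfo contactInfo) {
    return Optional.ofNullable(students.get(contactInfo));
  }

  @Override
  public Publisher<Student> saveReactive(Student student) {
    return Mono.fromCallable(() -> save(student));
  }
}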
Now let's write a test for the real database implementation, i.e. the adapter. Spring Boot is used here
@Testcontainers
@ContextConfiguration(classes = {PersistenceConfig.class})
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class StudentRepositoryJpaIT extends StudentRepositoryTest {

  @Autowired
  public StudentRepository studentRepository;

  @Container
  public static PostgreSQLContainer container = new PostgreSQLContainer("postgres:latest")
    .withDatabaseName("students_db")
    .withUsername("sa")
    .withPassword("sa");


  @DynamicPropertySource
  public static void overrideProperties(DynamicPropertyRegistry registry){
    registry.add("spring.datasource.url", container::getJdbcUrl);
    registry.add("spring.datasource.username", container::getUsername);
    registry.add("spring.datasource.password", container::getPassword);
    registry.add("spring.datasource.driver-class-name", container::getDriverClassName);
  }

  @BeforeEach
  public void setup() {
    super.studentRepository = studentRepository;
  }
}



Monday, March 13, 2023

What is a Quorum? - The Number That Forms a Majority

Introduction
Note: You can also see the Yazılım Mimarisi - Replica post.

The explanation is as follows. Quorum means a majority of votes.
Distributed consensus based on a quorum model
How much is Enough?
The explanation is as follows. In other words, the smallest number that forms a majority is enough for a quorum.
We defined Quorum as the minimum number of servers that need to acknowledge an operation before it can be considered successful. But what's a good number, such that we get both good application performance and consistency?
We generally prefer a majority of nodes in the cluster to acknowledge any operation for it to be considered successful. Thus, for an N node cluster, the Quorum should be N/2 + 1 nodes.

What if I choose Quorum to be > N/2 + 1?
Well, more nodes will have to acknowledge your changes, so you'll take a performance hit compared to choosing N/2 + 1 as your Quorum. Refer to Scenario 1 for understanding the impact.

What if I choose Quorum to be < N/2 + 1?
In this case, only a minority of the nodes in the cluster are guaranteed to have the changes. In case those nodes go down or are network partitioned, the changes wouldn't be visible to the end users and there would be consistency issues. Refer to Scenario 2 for understanding the impact.
Another explanation is as follows
The basis of the algorithm is that for any action to take place there must be quorum which is decided by (N/2) +1, with N voting members of the distributed system. When an action is requested, it must be voted on by all voting members. If it receives greater than 50% of the votes, then the action can take place. In this case, the action is written to a log on each node which is the source of truth. By having distributed consensus, you get the security of knowing anything written to the log is an allowed action as well as having log replication and leader elections.
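The rule as a one-line helper (illustrative): for a 5-node cluster it returns 3, so the cluster can lose 2 nodes and still reach quorum.
// Majority quorum: the smallest number of nodes that is a majority of n
static int quorum(int n) {
  return n / 2 + 1;   // quorum(5) == 3, quorum(4) == 3
}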
Real Life Usage
The explanation is as follows
1) Consensus algorithms like Paxos and Raft are all quorum based.
2) Cassandra uses a write quorum to ensure data consistency, where a write is considered successful only after it's replicated to at least a quorum of replica nodes.
3) Leader election happens only if a leader gets votes from the majority of the servers, i.e. a Quorum.

Thursday, March 9, 2023

Docker and Kafka Connect

Example
We do it like this. There is an ojdbc8.jar file under the jars directory.
FROM confluentinc/cp-kafka-connect:5.3.0

ENV KAFKA_HEAP_OPTS "-Xms1G -Xmx3G"
EXPOSE 8083 8083

ADD jars/* /etc/kafka-connect/jars/


Tuesday, March 7, 2023

Apache Beam - Batch and Streaming Data Processing

Introduction
The figure is as follows. Apache Beam can be coded in different languages and can use different runners.


Gradle
We do it like this
implementation("org.apache.beam:beam-sdks-java-core:2.45.0")
runtimeOnly("org.apache.beam:beam-runners-direct-java:2.45.0")
Example
We do it like this
public class App {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.create();
    // Create pipeline
    Pipeline p = Pipeline.create(options);
    // Read text data from Sample.txt
    PCollection<String> textData = p.apply(TextIO.read().from("Sample.txt"));
    // Write to the output file with wordcounts as a prefix
    textData.apply(TextIO.write().to("wordcounts"));
    // Run the pipeline
    p.run().waitUntilFinish();
  }
}
The output is in the wordcounts-00000-of-00001 file. The explanation is as follows
1. Create a PipelineOption.
2. Create a Pipeline with the option.
3. Add the logic to read data from Sample.txt to the pipeline and get the return value as PCollection, which is an abstraction of dataset in Apache Beam.
4. Add another step to write the return value in the previous step to output file with name starting with wordcounts.
5. Lastly, run and finish the pipeline.



Prometheus Blackbox Exporter

Introduction
The explanation is as follows
The blackbox-exporter is an exporter that can monitor various endpoints — URLs on the Internet, your LoadBalancers in AWS, or Services in a Kubernetes cluster, such as MySQL or PostgreSQL databases.

Blackbox Exporter can give you HTTP response time statistics, response codes, information on SSL certificates, etc.
The explanation is as follows
The Prometheus Blackbox Exporter is an open-source tool developed under the Prometheus umbrella. Once installed, it opens an HTTP port 9115 and exposes two metric paths
1. The /metrics endpoint returns metrics about the running Blackbox Exporter itself
2. The /probe endpoint retrieves metrics from a target supplied as a request parameter.

The latter is the more interesting endpoint, allowing us to check multiple targets. Targets can be HTTP endpoints, but ICMP (Ping), DNS, or raw TCP is also permitted (see the documentation for details).
probe Endpoint
The explanation is as follows
The /probe endpoint accepts the following parameters :
target - the HTTP endpoint to check (e.g. https://ping7.io)
module - the response check we want to conduct. OOTB, the Blackbox exporter comes with the http_2xx and the http_post_2xx module. Both check for a 2xx HTTP response code and query the target via HTTP GET or POST request.
debug - can be set to true to retrieve debug information for the current probe.

You can verify your Blackbox Exporter installation by calling it via curl:
curl "http://localhost:9115/probe?target=https://ping7.io&module=http_2xx"
relabel_configs 
The explanation is as follows
Using the magic of relabel_configs in Prometheus, you can query multiple search terms in a single scrape job definition.
Example
We do it like this
scrape_configs:
- job_name: 'blackbox'
  metrics_path: /blackbox/probe
  scheme: https
  authorization:
    # Your ping7.io API token stored in this file
    credentials_file: ping7io-api-token
  params:
    module: [http_2xx]
    target: ["https://www.zalando.de/katalog/"]
  static_configs:
  - targets:
    - schuhe
    - hose
    - schwarz
    - sale
  relabel_configs:
  # store target as search_term
  - source_labels: [__address__]
    regex: '(.*)'
    target_label: search_term

  # build new target by concatenating
  # target param and static target config
  - source_labels: [__param_target, __address__]
    separator: "?q="
    target_label: __param_target

  # store new target as instance
  - source_labels: [__param_target]
    target_label: instance

  # use the ping7.io exporter endpoint
  - target_label: __address__
    replacement: check.ping7.io
The explanation is as follows
... you list the search terms to query as targets. In the relabel_configs we append them to the target parameter and query the Blackbox Exporter for response time metrics for this specific search term.

In the example below, we query https://www.zalando.de/katalog/?q=schuhe as a first target for response times. After relabeling, Prometheus is scraping the URL https://check.ping7.io/blackbox/probe?target=https://www.zalando.de/katalog/?q=schuhe&module=http_2xx for metrics. And the best thing is: you can add as many search terms as you like.





Monday, March 6, 2023

Kafka OAuth

Introduction
The explanation is as follows
KIP-255 introduced OAuth Authentication via SASL/OAUTHBEARER, and KIP-768 added default implementation for Login/Validator Callback Handlers(aka OIDC) that use client_credentials Oauth2.0 grant type.
There is an example here of using OAuth on both the broker and the client side
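As a hedged sketch, the client-side settings those KIPs enable look roughly like the fragment below; the token endpoint URL, client id, and secret are placeholders, and the callback handler package matches Kafka 3.1 and may differ in other versions:
Properties props = new Properties();
props.put("security.protocol", "SASL_SSL");
props.put("sasl.mechanism", "OAUTHBEARER");
// Token endpoint of your OAuth2 provider (placeholder URL)
props.put("sasl.oauthbearer.token.endpoint.url", "https://idp.example.com/oauth2/token");
// Default OIDC login callback handler added by KIP-768
props.put("sasl.login.callback.handler.class",
    "org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler");
// client_credentials grant; clientId/clientSecret are placeholders
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required"
        + " clientId=\"my-client\" clientSecret=\"my-secret\";");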

PASETO - Platform-Agnostic Security Token

Introduction
The explanation is as follows
PASETO (Platform-Agnostic SEcurity TOken) is a specification and reference implementation for secure stateless tokens. It is pronounced paw-set-oh (pɔːsɛtəʊ).

PASETO encodes claims to be transmitted in a JSON (RFC8259) object and is either encrypted symmetrically or signed using public-key cryptography.
There is an article about JPaseto here

PASETO Vs JOSE (JWS, JWE and JWT)
The explanation is as follows. In other words, only the defined protocol versions and their fixed cryptographic algorithms can be used.
The key difference between PASETO and the JOSE family of standards (JWS [RFC7516], JWE [RFC7517], JWK [RFC7518], JWA [RFC7518], and JWT [RFC7519]) is that JOSE allows implementors and users to mix and match their own choice of cryptographic algorithms (specified by the “alg” header in JWT), while PASETO has clearly defined protocol versions to prevent unsafe configurations from being selected.
PASETO token format
version.purpose.payload
or
version.purpose.payload.footer



Redis - Key-Value Data Structure

SET
We do it like this
SET lock_name arbitrary_lock_value NX EX 10 # acquire the lock
# ... do something to the shared resource
DEL lock_name # release the lock
SET NX
It runs atomically. The old form was SETNX; it later became the NX parameter of the SET command. The explanation is as follows
Because Redis has atomic writes, we can use a SET NX operation to only set the key if it's not there. This means that Redis guarantees that only one worker will win the race for a particular key. The official documentation says that the Redlock algorithm is preferred over SET NX because it “offers better guarantees and is fault tolerant.” We are not looking to use a lock with Redis, however, just a one-way gate that prevents multiple execution. As long as Redis can prevent the race condition where two workers read the same value for the gate and both pass through it, that is all we need.
Example
We do it like this
SETNX lock_name true

DEL lock_name
EXPIRE
Example
We do it like this. Note that SETNX and EXPIRE are two separate commands, so this sequence is not atomic.
SETNX lock_name arbitrary_lock_value
EXPIRE lock_name 10
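The same gate through the Jedis client (an assumed dependency), shown as a sketch; SET with NX and EX is a single atomic command, so there is no window where the lock exists without an expiry:
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class LockExample {
  public static void main(String[] args) {
    try (Jedis jedis = new Jedis("localhost", 6379)) {
      // "OK" if the lock was acquired, null if someone else holds it
      String reply = jedis.set("lock_name", "arbitrary_lock_value",
          SetParams.setParams().nx().ex(10));
      if ("OK".equals(reply)) {
        try {
          // ... do something to the shared resource
        } finally {
          jedis.del("lock_name"); // release the lock
        }
      }
    }
  }
}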