Wednesday, June 16, 2021

Software Architecture - Replica/Replication

Introduction
The Turkish translation of the word replication is çoğaltma.

Replication vs. Cache
The explanation is below. A cache is for the latency problem. The post Yazılım Mimarisi - Cache covers caching, the other technique used for scaling.
From the perspective of scalability in distributed system design, cache and replication serve different goals. A cache lives in memory and is used to improve latency. Replication stays on disk and is used to scale out read throughput and enhance durability.
Replication and Scaling
Replication is one of the techniques used for scaling. The explanation is as follows:
Caching is one of the two ways (the other is replication) to scale read-heavy applications.
Another explanation:
There are many techniques to scale a relational database: master-slave replication, master-master replication, federation, sharding, denormalization, and SQL tuning.
- Replication usually refers to a technique that allows us to have multiple copies of the same data stored on different machines (see the sketch after this list).
- Federation (or functional partitioning) splits up databases by function.
- Sharding is a database architecture pattern related to partitioning: different parts of the data are put onto different servers, and different users access different parts of the dataset.
- Denormalization attempts to improve read performance at the expense of some write performance: copies of the data are written to multiple tables to avoid expensive joins.
- SQL tuning.
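To make the replication item above concrete, here is a minimal application-level sketch of read/write splitting over a master-slave setup. The class name, the round-robin policy, and the DataSource wiring are illustrative assumptions, not part of the quoted text:

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Routes writes to the primary and round-robins reads across replicas.
public class ReplicationAwareRouter {

    private final DataSource primary;
    private final List<DataSource> replicas;
    private final AtomicInteger nextReplica = new AtomicInteger();

    public ReplicationAwareRouter(DataSource primary, List<DataSource> replicas) {
        this.primary = primary;
        this.replicas = replicas;
    }

    // All writes must go to the primary; it is the only node accepting updates.
    public Connection connectionForWrite() throws SQLException {
        return primary.getConnection();
    }

    // Reads are load-balanced over the replicas to scale out read throughput.
    public Connection connectionForRead() throws SQLException {
        int i = Math.floorMod(nextReplica.getAndIncrement(), replicas.size());
        return replicas.get(i).getConnection();
    }
}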
Data Replication vs. Data Synchronization
The explanation is below. In short, data replication happens within the same database system, while data synchronization brings different databases into a consistent state.
Data Replication:
Data replication involves creating multiple copies of data and distributing them across different systems or nodes (usually called standbys).

Data Synchronization:
Data synchronization, on the other hand, focuses on maintaining consistency and accuracy between the source of truth and other data sources.
Naive Methods of Data Replication
1. The source database keeps the changes to be sent in memory. If the connection to the target database is lost, the source system eventually runs out of memory and replication breaks down.
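A minimal sketch of why this naive approach breaks, assuming a bounded in-memory buffer on the source (the class and capacity are hypothetical):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Naive replication: pending changes live only in source memory.
// If the target stays unreachable, the bounded buffer fills up and
// changes are lost, which is exactly why this approach breaks down.
public class InMemoryChangeBuffer {

    private final BlockingQueue<String> pending = new ArrayBlockingQueue<>(10_000);

    // Called by the source database for every committed change.
    public void onChange(String changeRecord) {
        if (!pending.offer(changeRecord)) {
            // Buffer full because the target is down: replication is now broken.
            throw new IllegalStateException("change buffer overflow, replica diverged");
        }
    }

    // Called by the shipping thread while the target is reachable.
    public String nextChange() throws InterruptedException {
        return pending.take();
    }
}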

1. Primary Replica - Master-slave replication
(figure: a primary database replicating to read replicas behind a load balancer)
The explanation is as follows:
Only the primary DB host handles DB updates. The update on the primary is synced to the replicas via bin log replay. Most mainstream databases like MySQL have built-in support for this setup. Read requests are load-balanced (LB) to the replicas.
2. Primary Replica Weaknesses
2.1 Primary Failure
The explanation is as follows:
GitHub has shared their solution (here and here). The idea is to have a separate system that constantly monitors the status of the master and the lag on each replica. The monitor detects the primary's failure and adjusts the network topology to promote one replica as the new primary. This requires being exposed to many low-level network details. I find it intimidating to depend on unfamiliar open source projects doing tricky stuff on the network.

Many NoSQL databases have symmetric hosts and thus have good support for node failures. I believe the main benefit today from a NoSQL database like Cassandra is the ease of operation.
2.2 Consistency
The explanation is as follows:
The primary replica setup will result in update delay on the replicas and is a classic eventual consistency model. Essentially we trade strong consistency for read scalability. Eventual consistency is enough for most applications, except for ones requiring 'read your write' consistency.

'Read your write' consistency can be improved by forcing a read request to the primary if it follows a write. Or, naively, force the read to wait for several seconds so that all replicas have caught up. When some replicas are not in the same datacenter (DC), the read also needs to be restricted to the same DC.
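A minimal sketch of the first idea, forcing reads to the primary for a while after a write; the pin window of 5 seconds is an assumed upper bound on replication lag:

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Routes a user's reads to the primary for a short window after that
// user's last write, approximating 'read your write' consistency.
public class ReadYourWritesRouter {

    private final Duration pinWindow = Duration.ofSeconds(5); // assumed max replication lag
    private final Map<String, Instant> lastWrite = new ConcurrentHashMap<>();

    public void recordWrite(String userId) {
        lastWrite.put(userId, Instant.now());
    }

    // True -> read from the primary; false -> any replica is safe enough.
    public boolean readFromPrimary(String userId) {
        Instant t = lastWrite.get(userId);
        return t != null && Instant.now().isBefore(t.plus(pinWindow));
    }
}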
2.3 High Watermark
The explanation is below. Here both writes and reads go to the master, but the master crashes after processing a request, before it can send it to the replicas. The newly elected master then knows nothing about that operation.
Let's assume the leader received a write operation. The leader wrote the transaction to the WAL. Let's also say a consumer read the operation immediately after it was written, and before the operation could be propagated to all the followers, the leader crashed.

After the leader crash, the cluster undergoes leader election, and one of the followers becomes the new leader for that partition. However, the latest changes from the previous leader were not replicated to the new leader, i.e. the new leader is behind the old leader.

Now let's assume another consumer tries to read the latest record. Since the new leader doesn't have the latest write, this consumer doesn't know about that record. This leads to data inconsistency/data loss, which is exactly what we didn't want!

Note: We do have these transactions in the WAL on the old leader, but those log entries cannot be recovered until the old leader comes back online.
The solution is described as follows:
To overcome the problem, we use the concept of High Watermark.

The leader keeps track of the indexes of the entries that have been successfully replicated on each follower. The high-water mark index is the highest index, which has been replicated on the quorum of the followers.

The leader can push the high-water mark index to all followers as part of a heartbeat message (in a push-based model), or respond to pull requests from the followers with the high watermark index.
The explanation continues below. In other words, the master takes the minimum replicated offset among a quorum of replicas as the high watermark. When this value changes, it announces it to the replicas.
The leader gets pull requests from the followers, with the latest offset they are in sync with. Hence the leader can easily make a call on when to update the high watermark. Once the high watermark is updated on the leader, with the next fetch, the leader will propagate the updated high watermark to the followers.
...
This guarantees that even if the leader fails and another leader is elected, the client will not see any data inconsistencies, as no client would have read anything beyond the high watermark. This is how we can prevent inconsistent reads while ensuring high availability and resiliency.
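A minimal leader-side sketch of this bookkeeping; the class and method names are hypothetical, and the quorum is taken as a simple majority of the cluster:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Leader-side bookkeeping: the high watermark is the highest log index
// known to be replicated on a quorum (majority) of the cluster.
public class HighWatermarkTracker {

    private final int clusterSize;                  // leader + followers
    private final Map<String, Long> matchIndex = new ConcurrentHashMap<>();
    private volatile long leaderLogEnd;             // last index appended locally
    private volatile long highWatermark;

    public HighWatermarkTracker(int clusterSize) {
        this.clusterSize = clusterSize;
    }

    public void onLocalAppend(long index) {
        leaderLogEnd = index;
        recompute();
    }

    // Called when a follower's pull request reports the offset it is in sync with.
    public void onFollowerAck(String followerId, long replicatedUpTo) {
        matchIndex.put(followerId, replicatedUpTo);
        recompute();
    }

    // Consumers must never read past this index; it is piggybacked on the
    // next fetch response (pull) or heartbeat (push) to the followers.
    public long highWatermark() {
        return highWatermark;
    }

    private synchronized void recompute() {
        List<Long> indexes = new ArrayList<>(matchIndex.values());
        indexes.add(leaderLogEnd);                  // the leader counts toward the quorum
        Collections.sort(indexes);
        int quorum = clusterSize / 2 + 1;
        if (indexes.size() >= quorum) {
            // The (size - quorum)-th entry is replicated on at least 'quorum' nodes;
            // the watermark only ever moves forward.
            highWatermark = Math.max(highWatermark, indexes.get(indexes.size() - quorum));
        }
    }
}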
2.4 Hazelcast EntryProcessor
There is a question about this here.

With an EntryProcessor it is possible to modify data directly on the member. But here we face two different approaches:
1. The update is applied only on the Primary, and the data is sent to the Replica asynchronously.

2. The Primary and the Replica run the same code and apply the update independently of each other. This lets the data be updated everywhere faster.
If something goes wrong on the Replica, the latest data still reaches it, with some delay, because the Primary periodically synchronizes with the Replica.
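A minimal sketch, assuming the Hazelcast 4.x API, showing approach 2: the backup processor returned by getBackupProcessor() makes the replica run the same update code as the primary:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.EntryProcessor;
import com.hazelcast.map.IMap;

import java.util.Map;

// Increments a counter directly on the member that owns the key.
public class IncrementProcessor implements EntryProcessor<String, Integer, Integer> {

    @Override
    public Integer process(Map.Entry<String, Integer> entry) {
        int next = (entry.getValue() == null ? 0 : entry.getValue()) + 1;
        entry.setValue(next);   // setValue marks the entry dirty so the change is stored
        return next;
    }

    // Approach 2 from the text: the backup member runs this same code
    // independently. Returning 'this' is the Hazelcast 4.x default; returning
    // null skips backup processing, which is only safe for read-only processors.
    @Override
    public EntryProcessor<String, Integer, Integer> getBackupProcessor() {
        return this;
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> counters = hz.getMap("counters");
        System.out.println("new value = " + counters.executeOnKey("page-views", new IncrementProcessor()));
        hz.shutdown();
    }
}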

3. Master-master replication
The explanation is as follows:
Each database server can act as a master at the same time as the other servers are treated as masters. At some point in time, all of the masters sync up to make sure that they all have correct and up-to-date data.

Here are some advantages of master-master replication.
- If one master fails, the other database servers can operate normally and pick up the slack. When the database server is back online, it will catch up using replication.
- Masters can be located in several physical sites and can be distributed across the network.
- One caveat: write throughput is still limited by the ability of each master to process updates.
3.1 Conflict Resolution
Some approaches are as follows:
3.1.1 Conflict avoidance
The explanation is as follows:
It is the simplest strategy to avoid conflicts. We just need to ensure that all writes for a particular record go to the same leader, or more aptly to the same data center. It might look simple, but edge cases, such as the entire data center going down, may hamper the entire application.
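A minimal sketch of this strategy; the deterministic key-to-datacenter mapping is an illustrative assumption:

import java.util.List;

// Conflict avoidance: every record has a single "home" datacenter, so two
// datacenters never accept concurrent writes for the same key.
public class HomeDatacenterRouter {

    private final List<String> datacenters; // e.g. ["dc-eu", "dc-us"]

    public HomeDatacenterRouter(List<String> datacenters) {
        this.datacenters = datacenters;
    }

    // Deterministic mapping: all writes for the same key land in the same DC.
    public String homeFor(String recordKey) {
        return datacenters.get(Math.floorMod(recordKey.hashCode(), datacenters.size()));
    }
}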
3.1.2 Convergent Conflict Resolution
The explanation is as follows:
In multi-leader replication, there is no defined ordering of writes, thus making it unclear what the final value should be. This inconsistency questions the durability of the data, and every replica must ensure that the data ends up the same at all places. This method of handling conflicts can be done in various ways:
- LWW (Last Write Wins): each write is given a unique ID and the write with the highest ID is chosen as the winner (see the sketch after this list).
- Give each replica a unique ID and let writes originating at higher-numbered replicas take precedence over those from lower-numbered ones.
- Merge the values.
- Record the conflict in an explicit data structure that preserves all the information, and write application code that resolves the conflict later, for example by notifying the user.
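A minimal sketch combining the first two bullets (LWW with the replica ID as a tie-breaker); names and types are illustrative:

// Last-Write-Wins: each write carries a (timestamp, replicaId) pair and the
// highest pair wins, so all replicas converge on the same value. Note that
// the "losing" concurrent write is silently discarded.
public final class LwwRegister {

    private long timestamp;     // unique write ID, e.g. a wall-clock timestamp
    private int replicaId;      // tie-breaker: higher-numbered replica wins
    private String value;

    public synchronized void merge(long ts, int replica, String newValue) {
        if (ts > timestamp || (ts == timestamp && replica > replicaId)) {
            timestamp = ts;
            replicaId = replica;
            value = newValue;
        }
    }

    public synchronized String value() {
        return value;
    }
}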
3.1.3 Custom conflict resolution logic
The explanation is as follows:
Most multi-leader replication tools provide the option to define your own conflict resolution in application code. On write, as soon as a conflict is detected, the conflict handler is called and runs in the background to resolve it. On read, if a conflict is detected, all conflicting writes are stored; the next time the data is read, these multiple versions are returned to the application, which either prompts the user or automatically resolves the conflict and writes the result back to the database.
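A minimal sketch of the on-read flavor, using the classic shopping-cart example; the store that collects the conflicting versions (siblings) is assumed to exist elsewhere:

import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// On-read resolution: the store hands all conflicting versions (siblings)
// to application code, which merges them, here by taking the union of carts.
public class CartConflictHandler {

    public static Set<String> resolve(List<Set<String>> siblings) {
        Set<String> merged = new TreeSet<>();
        siblings.forEach(merged::addAll);   // a union never loses an added item
        return merged;
    }
}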
3.1.4 Automatic conflict resolution
The explanation is as follows:
There has been a lot of research on building automatic conflict resolution that would be intelligent enough to resolve conflicts caused by concurrent data modifications.

- Conflict-free replicated data types (CRDTs) are a family of data structures for sets, maps, ordered lists, counters, etc. that can be concurrently edited by multiple users. They use two-way merges (see the sketch after this list).
- Mergeable persistent data structures track history explicitly, similarly to Git, and use a three-way merge function.
- Operational transformation is the algorithm behind collaborative editing applications such as Google Docs. It is a whole big topic that is very interesting to study.
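A minimal sketch of the simplest CRDT, a grow-only counter with a two-way merge; the class is illustrative:

import java.util.HashMap;
import java.util.Map;

// G-Counter: each replica only increments its own slot, and the two-way
// merge takes the per-replica maximum, so merges commute and all replicas
// converge without coordination.
public class GCounter {

    private final String replicaId;
    private final Map<String, Long> counts = new HashMap<>();

    public GCounter(String replicaId) {
        this.replicaId = replicaId;
    }

    public void increment() {
        counts.merge(replicaId, 1L, Long::sum);
    }

    public long value() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    // Two-way merge: element-wise maximum of the two count vectors.
    public void merge(GCounter other) {
        other.counts.forEach((id, c) -> counts.merge(id, c, Math::max));
    }
}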

4. Replication and Consistency
Some solutions used to provide both replication and consistency are as follows:
1. Read-Impose Write-Consult-Majority

2. Leader-based Replication 
All write operations are routed to the Leader. The Leader performs the write and distributes the data to the others.

3. Leased-Leader-based Replication



