The explanation is as follows. In short: too many requests target the same row, and those requests are forced to wait.
At its core, hot row contention arises from how databases manage concurrent data modifications. In a typical relational database (like MySQL, PostgreSQL, or SQL Server), when a transaction needs to update a row, it acquires an exclusive lock on that row. This lock prevents other transactions from modifying the same row simultaneously, ensuring data integrity and consistency (the “I” and “C” in ACID). When many concurrent transactions converge on the exact same row (our “hot row”), they are forced to queue up, waiting for the current lock holder to finish.
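The queuing behavior above can be sketched as a toy model (not a real database; a per-row `threading.Lock` stands in for the row-level exclusive lock, and all names are illustrative):

```python
import threading

# Toy model of row-level locking: one exclusive lock per row.
row_locks = {}          # row_id -> threading.Lock
row_values = {42: 0}    # a single "hot" row
lock_registry = threading.Lock()

def update_row(row_id, delta):
    with lock_registry:
        lock = row_locks.setdefault(row_id, threading.Lock())
    with lock:                      # writers on the same row queue here
        row_values[row_id] += delta

threads = [threading.Thread(target=update_row, args=(42, 1)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(row_values[42])  # 100
```

The result is correct, but every one of the 100 updates had to pass through the same lock one at a time, which is exactly the serialization that makes a hot row a bottleneck.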
The explanation is as follows:
The problem isn't how many keys each shard has. It's how much traffic each key attracts.
Some Solutions
1. Append-Only Ledger Model: Prioritizing Writes
An example: rather than updating a single balance row in place, each transaction is appended as a new, immutable ledger row, and the current balance is derived by summing the entries. Appends never contend on an existing row.
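A minimal sketch of the append-only ledger idea, using SQLite for illustration (the table and function names are assumptions, not from the original):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (account_id TEXT, amount INTEGER)")

def record(account_id, amount):
    # An INSERT appends a new row; it never locks an existing one.
    conn.execute("INSERT INTO ledger VALUES (?, ?)", (account_id, amount))

def balance(account_id):
    # The balance is derived from the ledger, not stored in one hot row.
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM ledger WHERE account_id = ?",
        (account_id,),
    ).fetchone()
    return row[0]

record("acct-1", 100)
record("acct-1", -30)
print(balance("acct-1"))  # 70
```

In practice the derived balance is usually cached or periodically snapshotted so reads don’t re-scan the whole ledger.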
2. Internal Sharding of Hot Accounts: Divide and Conquer
We can think about this for both read-heavy and write-heavy workloads.
Here is an example for the read side:
```python
# Single copy: one node handles all reads for taylorswift
cache.get("user:taylorswift")  # Always hits shard_1

# Replicated: spread reads across N copies
def get_hot_key(key):
    replica_id = random.randint(0, NUM_REPLICAS - 1)
    replica_key = f"{key}:replica:{replica_id}"
    result = cache.get(replica_key)
    if result:
        return result
    # Fallback to primary
    result = cache.get(key)
    return result

def set_hot_key(key, value):
    # Write to primary
    cache.set(key, value)
    # Fan out to all replicas
    for i in range(NUM_REPLICAS):
        cache.set(f"{key}:replica:{i}", value)
```
Here is an example for the write side:
```python
# Single counter: all writes hit one key
redis.incr("post:viral:likes")  # 100K writes/sec on ONE node

# Sharded counter: spread writes across N sub-keys
NUM_COUNTER_SHARDS = 100

def increment_like(post_id):
    shard = random.randint(0, NUM_COUNTER_SHARDS - 1)
    redis.incr(f"post:{post_id}:likes:shard:{shard}")

def get_like_count(post_id):
    pipe = redis.pipeline()
    for shard in range(NUM_COUNTER_SHARDS):
        pipe.get(f"post:{post_id}:likes:shard:{shard}")
    results = pipe.execute()
    return sum(int(r or 0) for r in results)
```
3. (In-Memory) Buffers and Batching: Absorbing the Spikes
The explanation is as follows:
This technique involves intercepting incoming transactions and temporarily holding them in a fast in-memory buffer or a dedicated caching system (like Redis) instead of writing each one directly to the main database. These buffered transactions are then flushed to the persistent database in larger, consolidated batches.
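A hedged sketch of the buffering idea (class and function names are illustrative; `db_apply` stands in for the real persistence call, and a real implementation would flush on a timer or size threshold):

```python
import threading
from collections import defaultdict

class WriteBuffer:
    """Absorb per-row deltas in memory; flush consolidated batches."""

    def __init__(self, db_apply):
        self.pending = defaultdict(int)
        self.lock = threading.Lock()
        self.db_apply = db_apply

    def add(self, row_id, delta):
        with self.lock:               # cheap in-memory mutation
            self.pending[row_id] += delta

    def flush(self):
        with self.lock:
            batch, self.pending = self.pending, defaultdict(int)
        for row_id, delta in batch.items():
            self.db_apply(row_id, delta)   # ONE write per hot row per flush

applied = {}
buf = WriteBuffer(lambda r, d: applied.__setitem__(r, applied.get(r, 0) + d))
for _ in range(10_000):
    buf.add("post:viral:likes", 1)
buf.flush()
print(applied["post:viral:likes"])  # 10000
```

Ten thousand increments collapse into a single database write, at the cost of a small durability window between flushes.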
4. Event-Driven Architecture (CQRS): Ultimate Separation of Concerns
The explanation is as follows:
This architectural pattern addresses contention by making the write path highly optimized for appending events, which is inherently less contentious. Read paths query dedicated data models that don’t compete with write operations. This separation allows write and read workloads to be scaled independently to a very high degree.
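The split can be sketched in miniature (an assumption-laden toy, with an in-process list standing in for the event store and a dict for the read model):

```python
from collections import defaultdict

event_log = []                      # write path: append-only, low contention

def handle_like(post_id):
    event_log.append({"type": "liked", "post_id": post_id})

read_model = defaultdict(int)       # read path: denormalized, query-optimized

def project(events):
    # A projector folds events into the read model, off the write path.
    for e in events:
        if e["type"] == "liked":
            read_model[e["post_id"]] += 1

for _ in range(3):
    handle_like("post-1")
project(event_log)
print(read_model["post-1"])  # 3
```

Because the projector runs asynchronously in a real system, reads are eventually consistent; that trade-off is the price of the independent scaling.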
5. Optimistic Locking (OCC): Before Overhauling Your Architecture
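A minimal sketch of optimistic locking with a version column, using SQLite for illustration (the schema and `withdraw` helper are assumptions): instead of holding a lock, the transaction reads the row, then issues an UPDATE that only succeeds if the version has not moved in the meantime.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO accounts VALUES ('acct-1', 100, 1)")

def withdraw(account_id, amount):
    balance, version = conn.execute(
        "SELECT balance, version FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    cur = conn.execute(
        "UPDATE accounts SET balance = ?, version = ? "
        "WHERE id = ? AND version = ?",   # matches 0 rows if version moved
        (balance - amount, version + 1, account_id, version),
    )
    return cur.rowcount == 1   # False means a concurrent write won; retry

print(withdraw("acct-1", 30))  # True
```

No lock is held between read and write, so nothing queues; losers simply detect the conflict and retry. That makes OCC a good first move when contention is moderate, before reaching for the heavier patterns above.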