26 Mart 2026 Perşembe

Software Architecture - Idempotency and Phantom Write

Introduction
The explanation is as follows:
You typically implement idempotency like this:
  1. Check if request already processed (via key / timestamp / PK)
  2. If not → write data
  3. If yes → skip
If the check step is not atomic, problems arise.

Failure Mode 1: The TTL Expiry Trap
The explanation is as follows:
The most common idempotency implementation stores a request key with a time-to-live (TTL) — typically 24 or 48 hours. The assumption is that any duplicate will arrive within that window. In practice, this assumption frequently breaks.
The fix is described as follows:
The fix: Never use TTL-only idempotency for operations with unbounded retry windows. Instead, use a database-backed idempotency store with a three-state model (IN_PROGRESS, COMPLETED, FAILED) where the expires_at column drives a cleanup job for storage management — not correctness. The cleanup window should be set significantly longer than your worst-case replay window (7 days minimum for Kafka-based systems).
Failure Mode 2: The Partial Execution Ghost
The explanation is as follows:
A request arrives, the system writes the idempotency key with status IN_PROGRESS, begins processing, writes half the data, and crashes — JVM OOM, container eviction, network partition. The idempotency key is now in IN_PROGRESS state. When the retry arrives, the system faces an impossible decision: did the original operation complete or not?
The fix is described as follows:
The fix: Wrap both the business logic and the idempotency state transition in a single database transaction. If the transaction rolls back, both the business data and the idempotency status roll back together. For stale IN_PROGRESS keys (where the original processor is likely dead), use a configurable timeout threshold to reclaim and re-execute safely.
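The reclaim rule for stale IN_PROGRESS keys can be sketched in plain Java. This is only an illustration of the decision logic: in a real system the status and claim timestamp live in the idempotency table, and the reclaim itself would be a conditional UPDATE. The names `Claim`, `canReclaim`, and `staleAfter` are assumptions, not from the original article.

```java
import java.time.Duration;
import java.time.Instant;

public class StaleClaimReclaimer {
    // Mirrors a row in the idempotency table: the status plus when it was claimed.
    public record Claim(String status, Instant claimedAt) {}

    // A stale IN_PROGRESS claim (the original processor is likely dead) may be
    // reclaimed and re-executed; COMPLETED/FAILED keys and fresh IN_PROGRESS
    // keys may not.
    public static boolean canReclaim(Claim claim, Instant now, Duration staleAfter) {
        return "IN_PROGRESS".equals(claim.status())
                && claim.claimedAt().plus(staleAfter).isBefore(now);
    }
}
```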
Failure Mode 3: The Concurrent Check Race
Here the check condition is not atomic. The fix is described as follows:
The fix: Use INSERT ... ON CONFLICT DO NOTHING (PostgreSQL 9.5+) to make the check-and-claim atomic. If the RETURNING clause yields no rows, the key already existed — fetch its status with SELECT ... FOR UPDATE. For non-blocking behavior, SELECT ... FOR UPDATE SKIP LOCKED lets the second instance return 409 Conflict immediately rather than waiting.
Failure Mode 4: The Layer Mismatch
The fix is described as follows:
The fix: Propagate a correlation ID from the original request as a Kafka header, and have every downstream consumer enforce its own idempotency barrier using that ID as the deduplication key.
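The consumer-side barrier can be sketched like this. Pure Java for illustration only: in a real deployment the correlation ID would come from a Kafka header and the seen-set would be a database table, not an in-memory set. The class and method names are assumptions.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class DedupBarrier {
    // Stand-in for a persistent dedup store keyed by correlation ID.
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    // Runs the handler only the first time a correlation ID is seen.
    // Returns true if the event was processed, false if it was a duplicate.
    public boolean process(String correlationId, Consumer<String> handler) {
        if (!processed.add(correlationId)) {
            return false; // duplicate → skip
        }
        handler.accept(correlationId);
        return true;
    }
}
```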
Spring Boot + SQL Server
The code is as follows. In this code:
- Partial Execution is solved with a single transaction.
- The Concurrent Check Race is solved via DuplicateKeyException. If we were using Postgres, instead of relying on the exception we would check how many rows the SQL statement affected.
- The Layer Mismatch problem is solved with the outbox pattern.
@Service
@RequiredArgsConstructor
public class IdempotentService {
  private final JdbcTemplate jdbc;
  public record Response(String result) {}

  @Transactional
  public Response handleRequest(String idempotencyKey, String payload) {
    try {
      // Attempt barrier insert (atomic)
      // SQL Server:
      // INSERT INTO idempotency_table (idempotency_key, status)
      // VALUES (?, 'IN_PROGRESS')
      jdbc.update(
        "INSERT INTO idempotency_table (idempotency_key, status) VALUES (?, 'IN_PROGRESS')",
        idempotencyKey
      );

      // First request owns the key → perform business logic
      String result = doBusinessLogic(payload);

      // Insert into outbox for async processing
      // SQL Server:
      // INSERT INTO outbox_table (idempotency_key, payload) VALUES (?, ?)
      jdbc.update(
        "INSERT INTO outbox_table (idempotency_key, payload) VALUES (?, ?)",
        idempotencyKey, result
      );

      // Mark barrier as completed and store result
      // SQL Server:
      // UPDATE idempotency_table SET status='COMPLETED', response=? WHERE idempotency_key=?
      jdbc.update(
        "UPDATE idempotency_table SET status='COMPLETED', response=? WHERE idempotency_key=?",
        result, idempotencyKey
      );
      return new Response(result);
    } catch (DuplicateKeyException ex) {
      // Barrier row already exists → handle duplicate
      // SQL Server:
      // SELECT * FROM idempotency_table WITH (UPDLOCK, ROWLOCK) WHERE idempotency_key=?
      IdempotencyRecord record = jdbc.queryForObject(
        "SELECT status, response FROM idempotency_table WITH (UPDLOCK, ROWLOCK) WHERE idempotency_key=?",
        (rs, rowNum) -> new IdempotencyRecord(rs.getString("status"), rs.getString("response")),
        idempotencyKey
      );

      switch (record.status) {
        case "COMPLETED":
          // Return cached result
          return new Response(record.response);
        case "IN_PROGRESS":
          // Someone else is working → can wait or throw 409
          throw new IllegalStateException("Request is already in progress");
        case "FAILED":
          // Previous attempt failed → allow retry
          throw new IllegalStateException("Previous attempt failed, safe to retry");
        default:
          throw new IllegalStateException("Unknown barrier state: " + record.status);
      }
    }
  }

  private String doBusinessLogic(String payload) {
    // your domain logic here
    return "processed:" + payload;
  }

  private static class IdempotencyRecord {
      final String status;
      final String response;
      IdempotencyRecord(String status, String response) {
        this.status = status;
        this.response = response;
      }
  }
}
If we want it to work with both SQL Server and Postgres, we do the following:
@Service
@RequiredArgsConstructor
public class IdempotentService {

    private final JdbcTemplate jdbc;

    public record Response(String result) {}

    @Transactional
    public Response handleRequest(String idempotencyKey, String payload) {
        boolean isWinner = false;

        try {
            // --------------------------
            // Attempt atomic barrier insert
            // --------------------------
            // Postgres:
            // INSERT INTO idempotency_table (idempotency_key, status)
            // VALUES (?, 'IN_PROGRESS')
            // ON CONFLICT DO NOTHING
            //
            // SQL Server:
            // INSERT INTO idempotency_table (idempotency_key, status)
            // VALUES (?, 'IN_PROGRESS')
            // Postgres needs ON CONFLICT DO NOTHING here: a duplicate-key error
            // would abort the surrounding transaction, so the exception path in
            // the catch block below only works on SQL Server.
            int rows = jdbc.update(
                    isPostgres()
                            ? "INSERT INTO idempotency_table (idempotency_key, status) VALUES (?, 'IN_PROGRESS') ON CONFLICT (idempotency_key) DO NOTHING"
                            : "INSERT INTO idempotency_table (idempotency_key, status) VALUES (?, 'IN_PROGRESS')",
                    idempotencyKey
            );

            // Postgres: rows == 1 → winner
            // SQL Server: INSERT succeeded → winner
            isWinner = rows == 1;

        } catch (DuplicateKeyException ex) {
            // SQL Server only: duplicate → loser
            isWinner = false;
        }

        if (isWinner) {
            // --------------------------
            // Winner executes business logic
            // --------------------------
            String result = doBusinessLogic(payload);

            // Insert into outbox (side effect)
            // INSERT INTO outbox_table (idempotency_key, payload) VALUES (?, ?)
            jdbc.update(
                    "INSERT INTO outbox_table (idempotency_key, payload) VALUES (?, ?)",
                    idempotencyKey, result
            );

            // Mark barrier as completed + store response
            // UPDATE idempotency_table SET status='COMPLETED', response=? WHERE idempotency_key=?
            jdbc.update(
                    "UPDATE idempotency_table SET status='COMPLETED', response=? WHERE idempotency_key=?",
                    result, idempotencyKey
            );

            return new Response(result);
        } else {
            // --------------------------
            // Loser reads existing row safely
            // --------------------------
            // SQL Server: SELECT ... WITH (UPDLOCK, ROWLOCK) WHERE idempotency_key=?
            // Postgres:   SELECT ... WHERE idempotency_key=? FOR UPDATE
            IdempotencyRecord record = jdbc.queryForObject(
                    "SELECT status, response FROM idempotency_table " +
                            (isPostgres() ? "" : "WITH (UPDLOCK, ROWLOCK) ") +
                            "WHERE idempotency_key=?" +
                            (isPostgres() ? " FOR UPDATE" : ""),
                    (rs, rowNum) -> new IdempotencyRecord(rs.getString("status"), rs.getString("response")),
                    idempotencyKey
            );

            switch (record.status) {
                case "COMPLETED":
                    return new Response(record.response);
                case "IN_PROGRESS":
                    throw new IllegalStateException("Request already in progress");
                case "FAILED":
                    throw new IllegalStateException("Previous attempt failed, safe to retry");
                default:
                    throw new IllegalStateException("Unknown barrier state: " + record.status);
            }
        }
    }

    private boolean isPostgres() {
        // Detect DB type from DataSource or JdbcTemplate if needed
        return true; // placeholder, implement detection
    }

    private String doBusinessLogic(String payload) {
        return "processed:" + payload;
    }

    private static class IdempotencyRecord {
        final String status;
        final String response;

        IdempotencyRecord(String status, String response) {
            this.status = status;
            this.response = response;
        }
    }
}


25 Mart 2026 Çarşamba

23 Mart 2026 Pazartesi

Cache Strategies Presentation

Summary

  • In real systems:
    • 80% → Cache-Aside + Eviction
    • High-scale → Add these:
      • Stampede protection
      • Two-level cache
      • Event invalidation
  • Spring mainly supports:
    • Cache-Aside (natively)
    • Partial Write-Through
    • Eviction patterns
  • @Cacheable, @CachePut, @CacheEvict are mainly Cache-Aside tools
  • Advanced patterns require custom logic or cache provider features
  • High-scale systems often combine:
    • Cache-Aside + Eviction
    • Two-Level Cache
    • Stampede Protection
    • Event-Driven Invalidation
  • Spring annotations alone are not enough for advanced caching—you end up:
    • Using Caffeine / Redis features directly
    • Or writing custom cache layers

Read-Heavy Strategies

  • Cache-Aside - Implemented by App
  • Read-Through - Implemented by Cache Provider
  • Refresh-Ahead - Implemented by Cache Provider

Write-Heavy Strategies

  • Write-Through - Implemented by Cache Provider
  • Write-Behind (aka Write-Back) - Implemented by Cache Provider
  • Write-Around - Implemented by App

1. Cache-Aside (Lazy Loading)

App reads from cache → if miss → load from DB → put in cache. Cache is not responsible for loading; application does it.

@Service
public class UserService {
    @Cacheable(value = "users", key = "#id")
    public User getUser(Long id) {
        return userRepository.findById(id)
                .orElseThrow();
    }
}

2. Write-Through

Write goes to cache and DB synchronously. Cache always up-to-date.

@CachePut(value = "users", key = "#user.id")
public User saveUser(User user) {
    return userRepository.save(user);
}

3. Read-Through

Cache itself loads data (app doesn’t call DB directly). App only talks to cache provider. Cache abstracts loading logic. Provider like Hazelcast / Redis with loader.
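The idea can be sketched with a loader-backed cache. This is a hand-rolled stand-in for illustration; in practice the provider (Caffeine's LoadingCache, Hazelcast's MapLoader, Redis-side loaders) supplies this behavior. The class name is an assumption.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ReadThroughCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a DB lookup, configured once

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // The cache itself loads on a miss — the caller never talks to the DB.
    public V get(K key) {
        return store.computeIfAbsent(key, loader);
    }
}
```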

4. Write-Behind

Write goes to cache → DB updated asynchronously later. Very fast writes.

public void saveUser(User user) {
    cache.put(user.getId(), user);

    asyncExecutor.submit(() -> {
        userRepository.save(user);
    });
}

5. Refresh-Ahead

Cache refreshes entries before expiration to avoid cache miss spikes. Not supported via Spring annotations.

Caffeine.newBuilder()
    .refreshAfterWrite(Duration.ofMinutes(5))
    .build(key -> loadFromDb(key));

6. Cache Eviction / Invalidation

Explicitly remove/update cache when data changes.

@CacheEvict(value = "users", key = "#id")
public void deleteUser(Long id) {
    userRepository.deleteById(id);
}

7. Write-Around

Writes go directly to DB, cache updated only on read. Prevents cache from being updated on writes. Cache becomes stale by design. Relies on future reads to populate.

@Service
public class OrderService {

    @Autowired
    private OrderRepository orderRepository;

    @Autowired
    private CacheManager cacheManager;

    public void createOrder(Order order) {
        orderRepository.save(order); // cache not updated
    }

    @Cacheable(value = "userOrders", key = "#userId")
    public List<Order> getOrdersForUser(Long userId) {
        return orderRepository.findByUserId(userId);
    }
}

8. Negative Caching Control

Cache “not found” results. Example: user not found → cache the null result. Prevents repeated DB hits for missing keys. Spring lets you control this: unless="#result == null" avoids caching null values; omit it if you do want negative caching.

@Cacheable(value = "users", key = "#id", unless = "#result == null")
public User getUser(Long id) {
    return userRepository.findById(id).orElse(null);
}

9. Two-Level Cache

L1 (in-memory) + L2 (distributed like Redis). L1: Caffeine, L2: Redis. Must combine manually.
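A minimal sketch of the manual combination. Plain maps stand in for Caffeine (L1) and Redis (L2) so the lookup/backfill order is visible; the class and field names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TwoLevelCache<K, V> {
    private final Map<K, V> l1 = new ConcurrentHashMap<>(); // stand-in for Caffeine
    private final Map<K, V> l2;                             // stand-in for Redis
    private final Function<K, V> dbLoader;

    public TwoLevelCache(Map<K, V> l2, Function<K, V> dbLoader) {
        this.l2 = l2;
        this.dbLoader = dbLoader;
    }

    public V get(K key) {
        V value = l1.get(key);           // 1. local in-memory, fastest
        if (value != null) return value;

        value = l2.get(key);             // 2. distributed cache
        if (value == null) {
            value = dbLoader.apply(key); // 3. database
            l2.put(key, value);          // backfill L2
        }
        l1.put(key, value);              // backfill L1
        return value;
    }
}
```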

10. Cache Stampede Protection

Prevent many threads from hitting DB on same miss. Only one thread fetches DB; others wait or use cache.
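One way to sketch this is per-key locking with a double-check: the first thread to miss takes the key's lock and loads from the DB, while concurrent callers for the same key block and then find the cached value. This is an illustration only (class name `StampedeGuard` is an assumption); Caffeine's loading cache gives equivalent behavior out of the box.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

public class StampedeGuard<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Map<K, ReentrantLock> locks = new ConcurrentHashMap<>();
    private final Function<K, V> dbLoader;

    public StampedeGuard(Function<K, V> dbLoader) {
        this.dbLoader = dbLoader;
    }

    public V get(K key) {
        V hit = cache.get(key);
        if (hit != null) return hit;

        ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();                       // only one thread per key proceeds
        try {
            hit = cache.get(key);          // re-check: another thread may have loaded
            if (hit == null) {
                hit = dbLoader.apply(key); // single DB fetch for the whole herd
                cache.put(key, hit);
            }
            return hit;
        } finally {
            lock.unlock();
        }
    }
}
```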

11. Read-Repair

If stale data detected → fix cache during read. Not supported via @Cacheable.

public User getUser(Long id) {
    User cached = cache.get(id);

    if (cached != null && isStale(cached)) {
        User fresh = userRepository.findById(id).orElse(null);
        if (fresh != null) {
            cache.put(id, fresh); // repair
        } else {
            cache.remove(id);     // source row gone → drop the stale entry
        }
        return fresh;
    }

    if (cached != null) {
        return cached;
    }

    User fresh = userRepository.findById(id).orElse(null);
    if (fresh != null) {
        cache.put(id, fresh);     // guard: many cache impls reject null values
    }
    return fresh;
}

12. Event-Driven Cache Invalidation

Use events (Kafka, etc.) to invalidate/update cache entries.
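A sketch of the consumer side. An in-memory map stands in for the real cache; in a Spring setup this would be a @KafkaListener method calling CacheManager.getCache(...).evict(...). Names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheInvalidator {
    // Stand-in for the real cache (e.g. a Caffeine or Redis-backed store).
    private final Map<Long, String> userCache = new ConcurrentHashMap<>();

    public void put(Long id, String user) { userCache.put(id, user); }
    public String get(Long id) { return userCache.get(id); }

    // Called for each "user changed" event from the bus. Evicting
    // (rather than updating in place) avoids ordering races between events.
    public void onUserChanged(Long userId) {
        userCache.remove(userId);
    }
}
```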

19 Mart 2026 Perşembe

Amazon Web Service (AWS) EventBridge - “Kafka-lite, fully managed, rule-based event routing”

Introduction
The flow is as follows:
Webhooks → simple HTTP push (external trigger mechanism)
Amazon EventBridge → event router (central nervous system)
AWS Lambda → code runner (brain doing the work)
Here is an example architecture. In this architecture, instead of webhook calls triggering the microservices directly, they first go to AWS EventBridge and are routed onward from there. The important points of this architecture are:
The Patterns Nobody Documents
Here’s what I learned building this that you won’t find in AWS documentation.

Pattern 1: Event Normalization at the Edge
Don’t let raw external events onto your bus. Ever. Your webhook handler should transform vendor-specific payloads into domain events. When we integrated PayPal, our services didn’t care. They still received payment.completed events with the same schema.
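The normalization step can be sketched like this. The vendor payload field names below are purely hypothetical (they are not Stripe's or PayPal's real webhook schemas); the point is that each handler maps its vendor shape to one shared domain event before anything reaches the bus.

```java
import java.util.Map;

public class PaymentEventNormalizer {
    // The single domain event shape every downstream service sees.
    public record PaymentCompleted(String orderId, long amountCents, String vendor) {}

    // Hypothetical vendor payloads, reduced to maps for illustration.
    public static PaymentCompleted fromStripe(Map<String, Object> payload) {
        return new PaymentCompleted(
                (String) payload.get("client_reference_id"),
                ((Number) payload.get("amount_total")).longValue(),
                "stripe");
    }

    public static PaymentCompleted fromPaypal(Map<String, Object> payload) {
        return new PaymentCompleted(
                (String) payload.get("invoice_id"),
                ((Number) payload.get("amount")).longValue(),
                "paypal");
    }
}
```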

Pattern 2: Event Versioning from Day One
We screwed this up initially. Six months in, we needed to change the event schema. Half our services were still consuming v1 events. Now every event includes a version field, and EventBridge rules route based on version. Services can migrate on their own schedule.

Pattern 3: Dead Letter Queues for Everything
This saved us during Black Friday. A bug in the inventory service caused it to reject 15% of order.created events. Because we had DLQs configured, those events sat safely in a queue while we fixed the bug, then we replayed them. Zero lost orders.

Pattern 4: Archive Anything That Touches Money
EventBridge archiving is criminally underused. We archive every payment-related event for 90 days. When customers dispute charges, we have perfect audit trails. When the finance team needs transaction reports, we replay archived events. Cost? $47/month for 2.1M archived events.

17 Mart 2026 Salı

MIT license

MIT vs LGPL
The explanation is as follows:
LGPL says, “you can use this code, but if you change it, you must share your changes under the same terms.” MIT says, “Do whatever you want.” One protects the community. The other lets corporations take without giving back.


LGPL - GNU Lesser General Public License

LGPL
GPL was considered too restrictive, so LGPL (GNU Lesser General Public License) was created.
The explanation is as follows:
LGPL says, “you can use this code, but if you change it, you must share your changes under the same terms.”
If We Use LGPL Code and Distribute Our Application (Distribution)
In my opinion this is the most important point where GPL and LGPL diverge. If we use LGPL software and sell our own product, we do not have to open our source code. The explanation is as follows. If we do want to open our source code, our own code must also be LGPL-licensed.
Yes, you can distribute your software without making the source code public and without giving recipients the right to make changes to your software.

The LGPL license explicitly allows such usages of libraries/packages released under that license.

10 Mart 2026 Salı

Medallion Architecture

Introduction
The explanation is as follows:
Medallion architecture is a data design pattern that organizes data into three layers:

Bronze Layer (Raw):
  • Data ingested in its original format
  • Minimal transformation
  • Append-only historical record
  • No data quality enforcement
Silver Layer (Refined):
  • Cleaned and conformed data
  • Schema enforced
  • Deduplicated
  • Validated
  • Still fairly granular
Gold Layer (Curated):
  • Business-level aggregations
  • Denormalized for consumption
  • Optimized for specific use cases
  • Analytics-ready
Origin: Popularized by Databricks around 2019-2020 as part of the lakehouse pattern.