Thursday, July 25, 2019

Apache Cassandra - Column-Oriented Database - Availability Matters

Introduction
Seven Vs matter for Big Data. The explanation is as follows:
The five Vs of Big Data have expanded to seven: Volume, Velocity, Variety, Variability, Veracity, Visualization, and Value.
The methods used to store big data are:
Normalization
Partitioning
Horizontal Sharding
Vertical Sharding
Data Replication
ScyllaDB vs Apache Cassandra
The explanation is as follows:
It is also a wide-column-store NoSQL database like Cassandra, and it is API compatible with both Amazon DynamoDB and Cassandra. The major difference is that ScyllaDB is written in C++ while Cassandra is written in Java, and instead of relying on the page cache it maintains its own row cache, making it more optimized and faster than Cassandra.

What Does Column-Oriented DB Mean?

Wide-Column Store
The explanation is as follows. Cassandra supports this; each row can have a different set of columns.
A wide-column store like Apache Cassandra or ScyllaDB allows rows (as minimal units of replication, analogous to rows as records in Postgres) to store a great, but most importantly, variable number of columns.

These are great for the sort of data later to be used in aggregations and statistical analysis where events come in large, occasionally inconsistent batches: a fairly fixed schema but of variable width.

The variable width of rows concept is what some might argue, allows flexibility in terms of the events it can store: one event (row) can have fields name(string), address(string), and phone(string), with the next event having name(string), shoe_size(int), and favorite_color(string). Both events can be stored as rows in the same column family (analogous to a table in PostgreSQL or MySQL).
The Name Cassandra
The explanation is as follows. I have no idea why the name of a prophet nobody believed was chosen :)
In Greek Mythology, Cassandra was a priestess of Apollo. Cassandra was cursed to have prophecies that were never to be believed.
Versions
In 2011 the current release was 0.7
In 2016 it was 3.0
In 2021 it is 4.0

Why It Was Created
The explanation is as follows:
Initially developed as an open source alternative to Amazon DynamoDB and Google Cloud Bigtable, Cassandra has had a major impact on our industry.
Cassandra's Best Use Case
The explanation is as follows:
When does Cassandra work best? In append-only scenarios, like time-series data or an Event Sourcing architecture (e.g. based on Akka Persistence). Be careful, it's not a general-purpose database.
It is also fast for certain reads once the data has been appended. The explanation is as follows:
It’s a great tool and we like it, but too often we see teams run into trouble using it. We recommend using Cassandra carefully. Teams often misunderstand the use case for Cassandra, attempting to use it as a general-purpose data store when in fact it is optimized for fast reads on large data sets based on predefined keys or indexes. (…)
Writes are fast because of the storage engine used. The explanation is as follows:
Storage Engine Type:
LSM Tree. As a rule of thumb, LSM Trees are faster for writes than B-Trees (so if your application is write heavy you could consider this option), though for reads it is always recommended to benchmark with your particular workload. Old benchmarks have been inconclusive.
Read Speed
The explanation is as follows:
In Cassandra, reads can be more expensive than writes due to the distributed nature of the database.

When data is written to Cassandra, it is stored in a partition.

Each partition is replicated across multiple nodes in the cluster to ensure fault tolerance and high availability.

When a write occurs, the data is written to the node responsible for that partition and then propagated to the replicas.

Reads, on the other hand, require coordination between multiple nodes in the cluster.

When a read request is made, Cassandra must first determine which nodes are responsible for the data being requested, and then retrieve the data from those nodes.

This coordination and data retrieval process can be slower and more complex than a write operation, especially if the data being requested is stored across many different nodes.

Write Speed
A question and answer on this:
Q : Why Writes in Cassandra Are So Fast?
A : To achieve this high performance, Cassandra has a unique write pattern. Cassandra has a few different data structures that it uses:

- Commit Log (Disk)
- Memtable (Memory)
- SSTable (Disk)
All three of these data structures are involved in every write process.
1. The Commit Log is like a WAL. If a write is interrupted, it can be recovered from this file.
2. Once the write to the memtable, i.e. to memory, also succeeds, the node acknowledges the request.

Update Scenario
Some questions and answers follow. In short, two clients can concurrently update different columns of the same row.
Q : What if two different clients want to update mobile and email separately in a concurrent fashion?
A : When you create the above object in Cassandra, you issue the following command:

CREATE TABLE user (
    name text PRIMARY KEY,
    mobile text,
    email text,
    address text
);
INSERT and UPDATE requests happen as below:

INSERT INTO user (name, mobile, email, address)
VALUES ('kousik', '9090909090', 'knath@test.com', 'xyz abc');

UPDATE user SET email = 'knath222@test.com' WHERE name = 'kousik';

UPDATE user SET mobile = '7893839393' WHERE name = 'kousik';

Cassandra is designed to handle each column separately. You can issue updates to individual columns, as you do in traditional relational databases. Metadata such as the last update time is maintained per column. An update to a particular column for matching data affects only that column. Thus updates to columns are finer-grained.
Another question:
Q: What happens if two different clients update the same column for the same key?
A: Cassandra applies Last Write Wins ( LWW ) strategy to resolve the conflicting updates. Since fine-grained granular updates are on individual columns, it’s not practically possible that all clients end up updating the same column concurrently - their updates would be distributed across columns. Thus Cassandra survives conflicting updates even though clocks are coarsely synchronized to NTP, although it’s a good practice to always keep the clocks synchronized to NTP with the highest possible accuracy, a good such article can be found here.

Q. Still there is a chance that Cassandra loses data due to conflicting updates in the same column, right?
A. Yes, technically it’s possible, however, since updates are spread across columns, the effect should be less, better if you have clocks synced to NTP through the appropriate daemon, that should help as well.
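The per-column last-write-wins behavior discussed in this Q&A can be sketched as follows (illustrative only, not Cassandra's actual internals): each column value carries its own write timestamp, and a merge keeps the newest value per column independently, so concurrent updates to different columns both survive.

```python
# Toy sketch of last-write-wins (LWW) resolution at column granularity.

def merge_lww(a, b):
    """Merge two row versions; per column, the higher timestamp wins."""
    merged = dict(a)
    for col, (value, ts) in b.items():
        if col not in merged or ts > merged[col][1]:
            merged[col] = (value, ts)
    return merged

row = {"email": ("knath@test.com", 1), "mobile": ("9090909090", 1)}
# Client 1 updates email at t=10; client 2 updates mobile at t=11.
row = merge_lww(row, {"email": ("knath222@test.com", 10)})
row = merge_lww(row, {"mobile": ("7893839393", 11)})
print(row["email"][0], row["mobile"][0])  # both concurrent updates survive
```

If both clients had written the same column, only the value with the later timestamp would remain, which is exactly the data-loss risk (and the reason for NTP synchronization) described above.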

Replication
Note: see the Consistent Hashing post.
Besides the node located via consistent hashing, copies are also written to some other nodes. Replication always bends the consistency rule; what is offered is Eventual Consistency. On this topic, see the BASE Properties For Distributed Database Transactions post. The explanation is as follows:
The disadvantage of non-transactional platforms (Cassandra, for example) is that although one could avoid the locking of shared resources by linearly scaling out the data across the cluster (assuring availability), it comes at the cost of consistency.
Leaderless replication is used for replication. Where there is replication, conflict resolution is always needed too. The explanation is as follows:
Replication:

Replication Type: Leaderless Replication.

Considering you have n replicas, every write will eventually go to all the nodes, but you can decide a number w, which is the number of replicas it should synchronously update for every write request. Now if your application is write heavy, you could make w=1; then, to offset the inconsistency this creates until eventual consistency is reached, every read request will read from a larger number of nodes (call this number r). Generally it's suggested to keep w+r>n (and even this in no way ensures strong consistency), but you can go below n if availability is more important for your application. The point is that you can move across a spectrum here, from very high availability/very low consistency to very low availability/high consistency, by tweaking w and r.

Consistency: Eventual Consistency. It's imperative to note here that irrespective of what database configuration parameters you set, strong consistency is just not guaranteed in Cassandra. This is another characteristic that can rule out this database.

Conflict Resolution: A leaderless model calls for conflict resolution. The supported strategy is LWW (Last Write Wins). Will this do?
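The w/r/n trade-off described above can be illustrated with a toy sketch (not a real replication protocol): with n=3 replicas, writing to w=2 and reading from r=2 guarantees that every read set overlaps every write set in at least one replica, so the freshest value is always seen.

```python
# Toy illustration of tunable consistency with n replicas, write quorum w,
# read quorum r. Illustrative only; real quorum protocols are more involved.
import random

N = 3
replicas = [("old", 0)] * N          # (value, timestamp) per replica

def write(value, ts, w):
    # synchronously update only w replicas; the others lag behind
    for i in random.sample(range(N), w):
        replicas[i] = (value, ts)

def read(r):
    # query r replicas and keep the answer with the newest timestamp
    return max(random.sample(replicas, r), key=lambda vt: vt[1])

write("new", ts=1, w=2)
value, _ = read(r=2)
print(value)  # -> new: with w + r > n, the overlap is guaranteed
```

With w=1 and r=1 the same read could return the stale value, which is exactly the availability-for-consistency trade described in the quote.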
Single Point of Failure
The explanation is as follows:
“There isn’t any central node in Cassandra. Every node is a peer, there is no master – there is no single point of failure.”
Another explanation:
It is easy to achieve strong consistency in master based distributed systems. However, it also means that there is a compromise on the system's availability if the master is down. Cassandra is a master-less system and trades-off availability over consistency. It falls under the AP category of the CAP theorem, and hence is highly available and eventually consistent by default.
What Is Sharding?
There are two kinds of sharding:
1. Vertical Sharding
Think of it as storing tables in separate databases so each is easier to manage.

2. Horizontal Sharding
Think of it as splitting the same table into N pieces stored on different machines.

What Is Horizontal Sharding?
Cassandra supports this. The table is split and stored across different databases. The primary key or an indexed field is used as the shard key. The explanation is as follows:
For sharding the data, a key is required and is known as a shard key. This shard key is either an indexed field or an indexed compound field that exists in every document in the collection.
What to Watch Out For with Sharding
Redistributing the data yet again is costly. The explanation is as follows:
Once sharding is employed, redistributing data is an important problem. Once your database is sharded, it is likely that the data is growing rapidly. Adding an additional node becomes a regular routine. It may require changes in configuration and moving large amounts of data between nodes. It adds both performance and operational burden.
Cassandra does not worry about this. The explanation is as follows:
Rebalancing: Cassandra uses a strategy that makes the number of partitions proportional to the number of nodes. If a new node is added, some partitions are chosen to be split in half and transferred to this new node. This very closely resembles 'consistent hashing'.
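The rebalancing property above follows from consistent hashing: when nodes and keys hash onto the same ring and each key belongs to the first node clockwise, adding a node only claims the keys between it and its predecessor. A minimal sketch (illustrative; Cassandra's vnode scheme is more elaborate):

```python
# Minimal consistent-hashing ring. Adding a node moves only the keys the
# new node claims; every other key keeps its owner.
import bisect
import hashlib

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)
    def owner(self, key):
        hashes = [p for p, _ in self.points]
        i = bisect.bisect_right(hashes, h(key)) % len(self.points)
        return self.points[i][1]

keys = [f"user-{i}" for i in range(1000)]
before = Ring(["node-a", "node-b", "node-c"])
after = Ring(["node-a", "node-b", "node-c", "node-d"])
moved = [k for k in keys if before.owner(k) != after.owner(k)]
# every key that changed owner was claimed by the new node
print(len(moved), all(after.owner(k) == "node-d" for k in moved))
```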
Keyspace
Pictured as follows


Primary Key - Present
Moved to the Primary Key post.

Foreign Key - Absent
The explanation is as follows:
There is no foreign key in Cassandra. As a result, it does not provide the concept of Referential Integrity.
Secondary Index
The explanation is as follows:
Secondary Indexes: Partitioning of secondary indexes is by document. Each partition maintains its own secondary index. Write only needs to deal with the partition in which you are writing the document. Also called local index. Reading requires scatter-gather. Read queries need to be made to all secondary indexes. Thus the read queries are quite expensive. Even parallel queries are prone to tail latency amplification.
Another explanation:
If the query needs to filter on a column that is not a partition or clustering key, we can create a secondary index.

A secondary index is stored in a separate column family. It works best for columns with a medium number of distinct values, and it is not replicated to other nodes. Keep in mind that as the volume of data increases, secondary index queries become slower, and they should not be used on frequently updated columns.
Example
We do it like this:
CREATE INDEX latitude_index ON sensor_events(latitude);

SELECT * FROM sensor_events WHERE latitude=48.95562;
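The "local index plus scatter-gather" behavior described above can be sketched with a toy 3-partition cluster (all names hypothetical): each partition indexes only its own rows, so a write touches one partition, while a query on the indexed column must consult every partition.

```python
# Document-partitioned ("local") secondary index sketch. Writes are cheap
# (one partition); indexed reads scatter to all partitions and gather.
from collections import defaultdict

class Partition:
    def __init__(self):
        self.rows = {}                  # primary key -> row
        self.index = defaultdict(list)  # latitude -> primary keys (local!)
    def insert(self, key, row):
        self.rows[key] = row
        self.index[row["latitude"]].append(key)

partitions = [Partition() for _ in range(3)]

def insert(key, row):
    # deterministic toy placement: the write lands on exactly one partition
    partitions[sum(key.encode()) % 3].insert(key, row)

def query_by_latitude(lat):
    # scatter-gather: the local index of *every* partition is consulted
    return [p.rows[k] for p in partitions for k in p.index.get(lat, [])]

insert("e1", {"latitude": 48.95562, "sensor": "a"})
insert("e2", {"latitude": 48.95562, "sensor": "b"})
insert("e3", {"latitude": 50.0, "sensor": "c"})
print(len(query_by_latitude(48.95562)))  # -> 2, wherever the rows landed
```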

Cassandra File Locking
Cassandra locks files only for writing.

Lightweight Transactions
The explanation is as follows:
Paxos has been a long-established consensus protocol and was adopted by Cassandra in 2013 for what was called “lightweight transactions.” Lightweight because it ensures that a single partition data change is isolated in a transaction, but more than one table or partition is not an option. In addition, Paxos requires multiple round trips to gain a consensus, which creates a lot of extra latency and fine print about when to use lightweight transactions in your application.

The Raft protocol was developed as the next generation to replace Paxos and several systems such as Etcd, CockroachDB and DynamoDB adopted it. It reduced round trips by creating an elected leader.

The downside for Cassandra in this approach is that leaders won’t span data centers, so multiple leaders are required (see Spanner). Having an elected leader also violates the “shared-nothing” principles of Cassandra and would layer new requirements on handling failure. If a node goes down, a new leader has to be elected.
ACID Transactions - New
The explanation is as follows. See the Accord Consensus Algorithm post:
ACID transactions are coming to Apache Cassandra. Globally available, general-purpose transactions that work the way Cassandra works. This isn’t some trick with fine print or application of some old technique.

It’s due to an extraordinary computer science breakthrough called Accord (pdf) from a team at Apple and the University of Michigan. 
ACID Transactions - Old
Apache Cassandra did not use to support ACID transactions. The explanation is as follows:
Cassandra does not provide ACID properties. It only provides AID property. 
Another explanation:
Cassandra provides weak transactions.

Atomicity: Provided on a single node. Not provided when several statements execute across multiple nodes. This means 'all or none' doesn't really exist if a transaction spans multiple nodes.

Consistency: Implements Paxos, which is an implementation of total order broadcast. TOB guarantees the same order of operations across replicas but does not guarantee when messages are delivered, so you can see stale values on some replicas.

Isolation: Paxos provides isolation in Compare and set operations.

Durability: Provides durability using multiple replicas.
Write Operation
The explanation is as follows:
Before delving into the various steps employed in writing data in Cassandra, let us first learn some of the key terms. They are:

Commit log: The commit log is basically a transactional log. It’s an append-only file. We use it when we encounter any system failure, for transactional recovery. Commit log offers durability.

Memtable : Memtable is a memory cache that stores the copy of data in memory. It collects writes and provides the read for the data which are yet to be stored to the disk. Generally, Each node has a memtable for each CQL table.

SSTable (Sorted Strings Table): These are the immutable, actual files on the disk. This is a persistent file format used by various databases to take the in-memory data stored in memtables.

It then orders it for fast access and stores it on disk in a persistent, ordered, immutable set of files.

Immutable means SSTables are never modified. It is the final destination of the data in memtable.
Another explanation:
1. First of all, a write can go to any node in the cluster (called the Coordinator Node).
2. The write goes to the commit log first, and then the data is written to an in-memory structure called the memtable. The memtable keeps writes sorted until it reaches a configurable limit, then flushes.
3. Every write includes a timestamp.
4. When the memtable content exceeds the configurable threshold or the commit log space, the memtable is put in a queue and flushed to disk (an SSTable).
5. The commit log is shared among tables. SSTables are immutable and cannot be written again after the memtable is flushed. Thus a partition is typically stored across multiple SSTable files.
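The steps above can be modeled with a toy node (purely illustrative; the structure names follow the text, not real Cassandra code): append to the commit log for durability, write to the memtable, and flush the memtable as an immutable sorted SSTable once a threshold is reached.

```python
# Toy model of the write path: commit log -> memtable -> SSTable flush.

class Node:
    def __init__(self, flush_threshold=3):
        self.commit_log = []   # append-only log on "disk", for recovery
        self.memtable = {}     # in-memory writes, sorted when flushed
        self.sstables = []     # immutable, sorted "files"
        self.flush_threshold = flush_threshold

    def write(self, key, value, ts):
        self.commit_log.append((key, value, ts))  # durability first
        self.memtable[key] = (value, ts)          # then memory; ack here
        if len(self.memtable) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # the memtable becomes an immutable, sorted SSTable
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

node = Node()
for ts, key in enumerate(["c", "a", "b", "e", "d"]):
    node.write(key, f"v-{key}", ts)
print(len(node.sstables), sorted(node.memtable))  # -> 1 ['d', 'e']
```

After three writes the memtable is flushed as one sorted SSTable; the last two writes are still in memory, which also shows why a partition can end up spread over several SSTables.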
Pictured as follows


Read Operation
The explanation is as follows:
Read operation in Cassandra takes O(1) complexity.
Another explanation:
1. First of all, Cassandra checks whether the data is present in the memtable. If it is, Cassandra combines it with the SSTable data and returns the result.
2. If the data is not in the memtable, Cassandra tries to read it from the SSTables, using several optimizations along the way.
3. Cassandra first checks the row cache. The row cache, if enabled, keeps in memory a subset of the partition data stored on disk in the SSTables.
4. It then uses a Bloom filter (which indicates whether a partition key may exist in an SSTable) to determine whether a particular SSTable contains the key.
5. If the Bloom filter says the key may exist in an SSTable, Cassandra next checks the key cache, an off-heap memory structure that stores the partition index.
6. If the partition key is present in the key cache, the read skips the partition summary and partition index and goes directly to the compression offset map.
7. Once the compression offset map identifies the key, the desired data is fetched from the correct SSTable.
8. If the data is still not found, the coordinator node requests a read repair.
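The Bloom filter in step 4 can be sketched as follows (sizes and hashing are illustrative; real filters are sized per SSTable). The key property is that a negative answer is definitive, so the read can skip that SSTable entirely, while a positive answer only means "maybe present".

```python
# Toy Bloom filter: k hash positions per key set in a bit field.
import hashlib

class BloomFilter:
    def __init__(self, size=64, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0
    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size
    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos
    def might_contain(self, key):
        # False means "definitely absent"; True means "maybe present"
        return all(self.bits & (1 << pos) for pos in self._positions(key))

bf = BloomFilter()
for key in ("user-1", "user-2", "user-3"):
    bf.add(key)
print(bf.might_contain("user-2"))    # always True: no false negatives
print(bf.might_contain("user-999"))  # a False here lets the read skip this SSTable
```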
Pictured as follows


Cassandra Query Language - CQL
Moved to the Cassandra Query Language - CQL post.

The git commit Command - Commits to the Local Repository

Introduction
This command commits our files to the local repository. The commit message is actually quite important and should follow a convention, yet most people just write a quick one-liner.

The --amend option
The explanation is as follows. It folds any files changed since the last commit back into that commit:
Git will fix the very last commit with your new message and any changes you might have staged.

There’s only one thing to keep in mind: you should only use --amend on local commits that you haven’t yet pushed to a remote repository.
Example
We do it like this:
Say you had the following uncommitted files at the start:

FileA.txt
FileB.txt
You make some changes to them and then commit with:

git commit -m 'First Commit'

Say that after committing you realize you need to make further changes in FileB.txt. In that case, instead of making a new commit after changing FileB.txt, you can use the --amend flag to update the previous commit:

git commit --amend

If some time later you realize you also need to modify the message of that last commit, you can do that with the --amend flag as well:

git commit --amend -m 'First Commit Modified'
Example
To change only the commit message, we do:
git commit --amend -m "A useful message"
The -m option
Specifies the commit message.
Example
We do it like this:
$ git commit -m "Your message"
Example
We do it like this:
$ git commit -m <title> -m <description>
Example
We do it like this:
git commit -m "init "; git push; git status
Commit Message
Example
The title uses the verb "Fix", not "Fixed". The Jira issue number goes at the end of the title, and the message body holds the necessary explanation:
Fix foo test [jira-1920]

Changed blah blah


ISTQB Chapter 1 - Fundamentals of Testing (On the Purpose of Testing)

Introduction
My notes on International Software Testing Qualifications Board (ISTQB) Chapter 1 - Fundamentals of Testing follow.

A test engineer can earn these certifications, in order:
Foundation Level -> Advanced Level -> Expert Level
The exam's learning objectives are coded K1, K2, K3, K4:
K1: objectives to remember
K2: objectives to understand
K3: objectives to apply
K4: objectives to analyze

1.1 Why Do Errors Occur?
A human makes a mistake -> the code ends up with a defect/fault/bug -> when that code runs, a failure occurs, and that is how we notice the defect.
In short:
mistake -> defect/fault/bug -> failure
Mistake: the human error
Bug/Defect/Fault: the resulting flaw in the code
Failure: the observable malfunction when the flawed code runs

What Is Testing?
Testing is an activity carried out to find failures and defects. That a system has been tested does not guarantee it is defect/bug free or that it works correctly. So the following statement is wrong:
The purpose of testing is to demonstrate absence of defects.
The test team's goal is to provoke failures and thereby find the faults/defects/bugs behind them.

1.3 Testing Principles
There are seven:
1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early testing.
If the requirements are wrong, then the design is wrong, the code is wrong, the tests are wrong, and the documentation is wrong. Early detection saves lives :)
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence-of-errors fallacy
This does not mean "Testing Ensures a Bug-Free Product". The explanation is as follows:
While QA testing can reveal bugs in the software, it cannot guarantee the absence of bugs. A software QA tester will implement rigorous testing procedures to catch as many system errors as is feasible.

However, it is simply not possible to ensure that a product is 100 percent free of bugs, even with an unlimited budget and zero time constraints. Many final software products will have some bugs, while still meeting the requirements of the project and being functional. The most skilled tester will not be able to guarantee that the software is defect-free. What he or she can do is reduce bugs to a minimum and determine that the software is functional and meets the requirements or usability.
Even the most critical software can contain bugs.

Example - Space Shuttle Software
The explanation is as follows:
Although the Space Shuttle flight software was of outstanding quality, it's completely incorrect to think that there was only one bug. There were many known bugs in the flight software (FSW).
1.5 The Psychology of Testing
Testing is destructive. The explanation is as follows:
Start testing the application by intent of finding defects/errors. Don't think beforehand that there will not be any bugs in the application. If you test the application by intention of finding defects you will definitely succeed to find those subtle defects also.
This, I think, is why a dedicated test team is needed; the people who wrote the code can be blind to it. The explanation is as follows:
The person who wrote the code is often blind to its limitations because they are too familiar with it. Skilled testers know how to work to avoid allowing familiarity to blind them to problems.

Monday, July 22, 2019

GoF - The Decorator Pattern

The Name Decorator
Choosing a name for a decorator is always the biggest problem :)
- Some argue that names like FilteredCollectionDelegationWork are not good.

Decorator - A Structural Pattern
Decorator is a structural pattern. Its most important benefits are composing objects into larger structures, preventing an explosion of subclasses in the inheritance hierarchy, and making it easier to change legacy code without touching the inheritance tree.

This pattern can be implemented in two ways:
1. Using inheritance
2. Without inheritance, by wrapping only. In this usage, simply calling it a Wrapper may be more accurate.

In both usages the goal is to wrap the real object and change its behavior. This yields behavior that did not exist in the class hierarchy until then. The explanation is as follows:
Decorators dynamically alter the functionality of a function, method, or class without having to directly use subclasses or change the source code of the function being decorated.
There are differences between Decorator and Attribute, Aspect, and Trait.

Decorator via Interface Inheritance - First Usage
Moved to the Decorator Pattern - Using Inheritance post.

Wrapping - Second Usage
When wrapping, calling the class a Wrapper is better. One comment goes:
Trust me, "Decorator" doesn't make any sense in English either. The reason you see this term is due to the influence of a software pattern book written 20 years ago by the so-called "Gang of Four." They described patterns in common use but decided to give them their own, sometimes odd names. "Decorator" is one example. Everyone I knew called this a "Wrapper" class because you "wrap" other classes inside another. Much more descriptive, don't you think?
The reply to this:
"Decorator" refers to the pattern's purpose, whereas "Wrapper" refers to the pattern's form.
Example
We wrap like this:
TextEmail txtEmail = new TextEmail();
SecuredEmail securedEmail = new SecuredEmail (txtEmail);
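The same wrapping can be sketched in Python. The Email/TextEmail/SecuredEmail names mirror the snippet above; the encrypt() function is a toy stand-in so the sketch stays self-contained.

```python
# Wrapper-style decorator: SecuredEmail wraps any Email and changes its
# behavior without subclassing the wrapped object.

class Email:
    def content(self) -> str:
        raise NotImplementedError

class TextEmail(Email):
    def __init__(self, body: str):
        self.body = body
    def content(self) -> str:
        return self.body

class SecuredEmail(Email):
    """Wraps an Email and alters its content() behavior."""
    def __init__(self, inner: Email):
        self.inner = inner
    def content(self) -> str:
        return encrypt(self.inner.content())

def encrypt(text: str) -> str:
    return text[::-1]  # toy "encryption" for the sketch

email = SecuredEmail(TextEmail("hello"))
print(email.content())  # -> olleh
```

Because SecuredEmail is itself an Email, it can be passed anywhere a plain email is expected, which is the point of the pattern.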
Decorator and Proxy
Decorator and Proxy resemble each other closely. Proxy looks like this:

Decorator and Private Fields
A decorator can have trouble accessing the private fields of the object whose behavior it wants to change. If we don't want to make those fields public, reflection may be needed.

Adding and Removing Properties - Easily Confused with Decorator
Sometimes objects keep certain properties in lists, roles for example. If adding and removing roles only adds or removes data without changing behavior, no decorator is needed.

Decorator and Mock
Decorator and Mock objects are similar in one respect: both imitate the object they contain, acting as if they were it. But their uses are entirely different. Decorators live in production code, while mocks are used to test code.

Domain Driven Design - Bounded Context

Introduction
My notes on Bounded Context, one of the most important concepts in Domain Driven Design.

What Is a Bounded Context?
A bounded context is for splitting software into smaller sub-components. Speaking in MIL-STD-498 terminology, if the software is the CSCI (Computer Software Configuration Item), a bounded context can be thought of as a CSC (Computer Software Component). In a sense, of course :) The explanation is as follows:
Building just one domain model for entire e-commerce will be tough to comprehend and implement in the code. Bounded context helps split the e-commerce domain into smaller subdomains: E.g. Inventory, Shopping Cart, Product Catalog, Fulfilment & Shipment, and Payment. We can use technics like event-storming to identify such subdomains and bounded contexts. So we now have Inventory bounded context, Product Catalog bounded Context, and so on…
What Is the Relationship Between Bounded Context and Ubiquitous Language?
Some concepts in the shared Ubiquitous Language can mean different things depending on the context. The bounded context states in which context a concept is being used. For example, the Product class can represent something different per context. The explanation is as follows:
It is important to note that the Product in each bounded context has very different behavior. In Inventory, Context Product is concerned about weight, expiry date, and supplier, whereas in Shopping Cart bounded context, the expiry, the Supplier of the Product, is not in the picture. So it is better to model different Product classes in each bounded context instead of having a common Product class across the bounded context.
Another example:
Let us consider an enterprise application in the telecom domain. There will be more than 70 applications in the system. Imagine 70 applications that have to integrate successfully to run the business. It starts with a person approaching the service provider like Airtel, Jio, etc., for a new connection. The moment he approaches the service provider, he will be considered as a Lead. He is not the customer yet. If he shows interest in any plan, he will be considered as an opportunity. His details will go to application verification systems.

If everything is fine, the helpline guy or the sales guy will call and confirm his plan. Once he confirms a plan, he will be provisioned into the system. Then he becomes a customer. His account will be created in a profile application. Note that the same person is identified as lead, opportunity, and customer in different applications.

His details are captured in CRM — customer relationship management system, billing system, sales and marketing system, package management system, fraud management system, inventory systems, analytics tools, dealer management systems, secondary sales systems, revenue leakage tools, debt management systems, etc.

Now imagine if you want a single model — customer in this enterprise application. The leads and opportunities system will be interested in details like what is his existing service provider, through which channel did he get to know about us, etc. Profiling and the account creation systems will be interested in other details like name, address, age, profession, etc. The CRM system might be interested if there are any previous service tickets raised by the person. The sales and marketing team will be interested in his profile and usage details to get an idea of what packages could be recommended to him down the line. The billing system will be focused on the billing address and payment mode, etc. Similarly, the fraud management system, inventory systems, analytics tools, etc., will be interested in other details.

Imagine how confusing your model will be if it includes all these details. The address required for the billing application is the billing address. For the profiling system or CRM application, it will be the current address and permanent address. If the same model is used across the system, at some point in time, the billing team could feel that naming the address field as billing address is more appropriate than the current address and rename the current address field to the billing address. The model integrity is compromised which in turn breaks the system.

So, when the billing team says address, it might be billing address and when the CRM team says address it might be mailing address. The CRM team is not aware of the billing address and the billing team might not be aware of the other addresses. If these two teams discuss with each other the model you could guess the confusion it creates because of the conceptual differences.

Even if both the addresses are saved as different attributes, the billing address will be redundant and irrelevant to profiling and other applications.

The same person is a Lead in leads application and opportunity in the opportunities application. He is a customer in the provisions and accounting systems. In case he did not pay the bills, he will be a defaulter in another system. If you observe, the details of the same person are interpreted differently in different applications based on the context. This is where the bounded contexts come into the picture.

The sales team cannot go to the leads/opportunities team and ask for the customer details because their model is not supposed to have customers but instead have leads /opportunities. They will understand only if you ask for the leads or opportunities details.

To get rid of these, you need to define the boundaries of your model and confine it to a context. Otherwise, there will be a lot of confusion and chaos that creeps into your system and makes the model unmanageable and impure.
Bounded Context and Noun-Based Models
The explanation is as follows:
Noun based domain models have the downside of introducing high functional coupling in a system.

Which is contrary to the recommendation to keep coupling low and cohesion high.

When data is grouped in noun based models, subsequent steps in a business workflow need access to show, or amend, some of the data embedded in that model.

This couples each subsequent step to that functional model.

As more nouns are introduced to the model, the functional coupling tends to increase.

A better way to divide your domain model is by business capability.

Only store the data required by the respective capability in a local model. Individual capabilities usually only care about a subset of the data.

As the workflow progresses, pass data from one capability to the next.

This can be done either via the frontend, or via event publishing in the backend (depends on how much time there is between the steps).
Pictured as follows


How Do We Recognize a Bounded Context?
The explanation is as follows. Whenever we see a concept from the shared language start to behave differently, there may be a bounded context at play:
So bounded context is a linguistic boundary! Any time you see that the Product is acting differently, it is a clue that there are different bounded contexts at play.

One bounded context typically has a few (or only one) microservices
Illustrated as follows. Here there are two bounded contexts. Both contain the Customer and Product concepts, but these differ within their own contexts. Note that this figure does not actually conform to the Noun-Based Model above.


A Bounded Context Also Has Its Own Domain Model and Its Own Ubiquitous Language
Since a Bounded Context is a self-contained unit, it has a domain model and a language of its own. The explanation is as follows
The ubiquitous language applies within a bounded context. Each bounded context has its own ubiquitous language. It is a language that is spoken both by the Business Teams and the Development teams. It helps them to communicate better.

If your Business team is talking in terms of Database tables, then as the Development Team, you have influenced them incorrectly.
What Is a Context Map?
The explanation is as follows
Context Map defines the relationship between different bounded contexts.

How different bounded contexts communicate with each other and how they influence each other.
Bounded Context and Reusability
The explanation is as follows. The goal is not to produce a reusable structure; the goal is to create logically partitioned and isolated structures.
The promise of reusable components, just like the idea of reusable business logic across applications, didn't turn out to be practical. Modern trends reflect this insight well. The microservices approach suggests that instead of reusing code, we should separate things and make them easily replaceable. Domain-Driven Design's Bounded Context concept says the same: that there should be clearly separated contexts that create semantic boundaries, which, in turn, make sharing "business code" among contexts unnecessary and unwelcome by definition.
The explanation for vertical partitioning is as follows.
New trends like Domain-Driven Design and microservices advocate splitting applications vertically instead of horizontally. And, new types of development processes and organization, like cross-functional and DevOps teams, support this vertical slicing and scaling much more efficiently.
Isolation Layer
The explanation is as follows.
Code reuse between Bounded Contexts should be avoided. The integration of functionality and data must go through a translation. The translation logic is provided by the Isolation Layer.
There are three basic approaches for the Isolation Layer. They are:
Customer/Supplier
Conformist
Anticorruption Layer (ACL)
What Is the Anticorruption Layer (ACL)?
It could actually have been called the Translation Layer. I moved this to the Anti Corruption Layer post.

Bounded Context and Collaboration
The eye-opening article for me is this one. Each bounded context manages its relationship with the others through "Foreign Key"-like structures. This usage actually shows that each bounded context is part of a larger aggregate; that larger aggregate has been broken down into smaller, more manageable pieces.

Example
Suppose we have the following message. By sending the foreign keys, we also give the other pieces of code that process the message a way to access the information.
{
  "Status": "Closed",
  "RentalAgreementID": 1234,
  "CustomerID": 8965,
  "VehicleID": 98263,
  "RentalAgent": 24352,
  "Broker": 6723
}
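One way to sketch a consumer of such a message, under assumed names (RentalClosedEvent, BillingContext), is a context that receives only the foreign keys and resolves the details against its own local model:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical event carrying only the foreign keys from the message above
record RentalClosedEvent(String status, long rentalAgreementId,
                         long customerId, long vehicleId) {}

// The billing context resolves CustomerID against its OWN customer model;
// it never asks the rental context for customer details.
class BillingContext {
    private final Map<Long, String> localCustomers = new HashMap<>();

    void registerCustomer(long id, String name) { localCustomers.put(id, name); }

    String handle(RentalClosedEvent event) {
        String name = localCustomers.getOrDefault(event.customerId(), "unknown");
        return "Invoice for " + name + " (agreement " + event.rentalAgreementId() + ")";
    }
}
```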
Communication Between Bounded Contexts - Domain Events
I moved this to the Domain Events post.

Plugin Structure and Bounded Context
I came across a structure like this. A bounded context had been created for each plugin; each plugin stored its own information there.
AbstractBoundedContext <-(extends)- AppBoundedContext
                ^-(extends)-- PluginABoundedContext, PluginBBoundedContext
The main application used AppBoundedContext. Each plugin knew only its own bounded context.

While running, the main application behaved differently if certain information existed inside a plugin. Therefore, collaboration between the two bounded contexts was required.

To get out of this situation, the developers passed the information inside PluginABoundedContext over to AppBoundedContext.
PluginABoundedContext --(calls)--> AppBoundedContext
To notify the plugin of events occurring in the main application, there was also a listener mechanism.
AppBoundedContext --(notifies)--> PluginABoundedContext
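The notify flow above could be sketched as follows; AppEventListener and the method names are assumptions, not the actual code from that system. The main application's context publishes events to registered plugin contexts:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener interface the plugin contexts implement
interface AppEventListener {
    void onAppEvent(String event);
}

// The main application's context notifies registered plugin contexts
class AppBoundedContext {
    private final List<AppEventListener> listeners = new ArrayList<>();

    void register(AppEventListener listener) { listeners.add(listener); }

    void publish(String event) {
        for (AppEventListener l : listeners) l.onAppEvent(event);
    }
}

// A plugin context reacting to main-application events
class PluginABoundedContext implements AppEventListener {
    final List<String> received = new ArrayList<>();

    @Override
    public void onAppEvent(String event) { received.add(event); }
}
```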

Software Architecture - Hexagonal Architecture

Introduction
This architecture is not that old. The explanation is as follows
In 2005, Alistair Cockburn realized that there wasn’t much difference between how the user interface and the database interact with an application, since they are both external actors which are interchangeable with similar components that would, in equivalent ways, interact with an application. By seeing things this way, one could focus on keeping the application agnostic of these “external” actors, allowing them to interact via Ports and Adapters, thus avoiding entanglement and logic leakage between business logic and external components.
I first saw the Hexagonal Architecture concept here. The main goal of this architecture is as follows: it should be easy to plug a different interface into our application. If the application runs from the console, a Web interface can easily be plugged in as well.
Hexagonal Architecture is a software architecture that allows an application to be equally driven by users, programs, automated tests, or batch scripts and to be developed in isolation from its run-time target system. The intent is to create an application that works without either a User Interface or a database, so that we can run an automated regression test against the application, work with an application when a run-time system, such as a database, is not available, or integrate applications without a user interface.
In my opinion, this architecture is most useful when developing microservices.

1. Software Parts
It divides the software into two parts.
1. Inside elements - contains the Core Business Logic / Application Business Logic / Domain layer
2. Outside elements - DBs, external APIs, UI, and so on

- An example using Kafka, Avro and Spring-Boot is here

Core Business Logic / Domain
As far as possible, this layer should have no infrastructure or framework dependencies.

Example
The code with and without a Spring dependency looks like this. The version without the dependency should be preferred.
// Bad Practice
@Component
public class ExampleClass {
  @Autowired
  private SomeClass someClass;

  // some business code here
}
//Good Practice
public class ExampleClass {
  private final SomeClass someClass;

  public ExampleClass(SomeClass someClass) {
    this.someClass = someClass;
  }
 // some business code here
}

2. Port and Adapter
It uses the port and adapter concepts for communication between these layers. Illustrated as follows

The "Driving Side" can be thought of as inbound, the "Driven Side" as outbound. The explanation is as follows
Ports in hexagonal architecture refer to the interfaces that allow inbound or outbound flow. An inbound port exposes the core application functionality to the outside world. 
3. What Is a Port? - An Interface Working Inbound or Outbound
The explanation is as follows. A port means an interface. An adapter is the piece of code that implements or uses a port.
The connection between the inside and the outside part of our application is realized via abstractions called ports and their implementation counterparts called adapters. For this reason, this architectural style is often called Ports and Adapters.
The Core Business Logic knows only the Port concept. It reads and writes through the port, that is, the interface. The Adapter implements the Port interface. Illustrated as follows. Here you can see that the Domain code uses only the ports.

Ports and Adapters should be easily replaceable. The explanation is as follows.
We simply provide abstract ports and implement adapters to them, regardless of the type of actor the inside is communicating with. This means we can swap out the UI, the same way we swap out the database. We can easily swap out both for the testing purposes and there won’t be any significant implementation differences.
Port Examples
Some port examples are:
-userInterfacePort
-playerInputPort
-fooRepositoryPort

You can look at the Hexagonal Architecture and Repository post

userInterfacePort can call a service among the "Inside elements". Our service can in turn call a repository port among the "Outside elements".

Example
Suppose we have two Ports
//Service provided to the outside world
public interface PizzaService {
  public void createPizza(Pizza pizza);
  public Pizza getPizza(String name);
  public List<Pizza> loadPizza();
}

//Service provided by the outside world
public interface PizzaRepo {
  public void createPizza(Pizza pizza);
  public Pizza getPizza(String name);
  public List<Pizza> getAllPizza();
}
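To complete the sketch, a secondary adapter implementing the PizzaRepo port might look like this; InMemoryPizzaRepo and the minimal Pizza class are assumptions for illustration. An in-memory adapter like this can also be swapped in during tests in place of a database-backed one:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal Pizza type and the PizzaRepo port, repeated here so the sketch compiles
class Pizza {
    final String name;
    Pizza(String name) { this.name = name; }
}

interface PizzaRepo {
    void createPizza(Pizza pizza);
    Pizza getPizza(String name);
    List<Pizza> getAllPizza();
}

// Secondary (driven) adapter implementing the PizzaRepo port in memory.
// A JDBC- or JPA-backed adapter could replace it without touching the core.
class InMemoryPizzaRepo implements PizzaRepo {
    private final List<Pizza> pizzas = new ArrayList<>();

    public void createPizza(Pizza pizza) { pizzas.add(pizza); }

    public Pizza getPizza(String name) {
        for (Pizza p : pizzas) {
            if (p.name.equals(name)) return p;
        }
        return null;
    }

    public List<Pizza> getAllPizza() { return new ArrayList<>(pizzas); }
}
```

Because the core only depends on the PizzaRepo interface, replacing this adapter requires no change to the business logic.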

Criticism
There is criticism that the hexagonal notation is hard to read; the layered notation is said to be much easier to follow. Illustrated as follows. This criticism is actually fair regarding readability. It may also be right because the layered view shows the Domain packages much more clearly.


4. What Is an Adapter? - Code That Implements or Uses a Port
Adapters can be divided into two kinds.

4.1. Primary Adapter - Calls Made From Outside
It can be thought of as an inbound port: a call made toward the core of the application. Here the Adapter object uses the Port interface. Inside the core there is an "Application Service" that implements the Port interface.
Example
Suppose we have an adapter like this. Here you can see that the adapter has a technology dependency.
/**
 * Driving Adapter
 * Location: src/user-interface/adapter/OrderAdapter.ts
 */
class OrderAdapter extends HttpRequestHandler {

    private orderService: OrderService;

    constructor(orderService: OrderService) {
        super();
        this.orderService = orderService;
    }

    public async createOrder(req: Request): Promise<Response> {
        const createOrderCommand = new CreateOrderCommand(req);
        const orderResult = await this.orderService.handle(createOrderCommand);

        return this.createResponse(orderResult);
    }
}
The port it uses is as follows. The adapter above is not typed against the Port interface directly, but it could have been.
/**
 * Driving Port
 * Location: src/application/port/DrivingPort.ts
 */
interface DrivingPort {

    handle(command: OrderCommand): Promise<boolean>;
}

/**
 * Application service implementing Driving Port
 * Location: src/application/service/OrderService.ts
 */
class OrderService implements DrivingPort {

    private order: Order;
    private orderRepository: DatabasePort;

    constructor(orderRepository: DatabasePort) {
        this.orderRepository = orderRepository;
    }

    public async handle(command: CreateOrderCommand): Promise<boolean> {
        this.order = Order.create(command);

        try {
            return await this.orderRepository.save(this.order);
        } catch (error) {
            return false;
        }
    }
}

4.2. Secondary Adapter - Calls Made to the Outside
It can be thought of as an outbound port: a call from the application to the database or to 3rd-party APIs. Here the Adapter object implements the Port interface.

Example
Suppose we have a microservice that takes two Strings and returns whether or not they are anagrams. Suppose we want to use this service both from the console and via a Rest call. We do it as follows.
1. Domain Layer
IAnagramServicePort (Inbound Port): provides a method for the query operation.
AnagramService: implements IAnagramServicePort. We can think of this as being like a Spring service

IAnagramMetricPort (Outbound Port): writes the metrics to the external repository.

2. Outside Layer
ConsoleAnagramAdaptor: uses IAnagramServicePort. The console interface interacts with our application through this adapter.

RestAnagramController: uses IAnagramServicePort. It converts the Http request into Java objects and interacts with our application. In other words, it answers Rest calls.
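A minimal sketch of this example, using the port and service names from the text but with an assumed method signature, might look like this:

```java
import java.util.Arrays;

// Inbound port (name from the text; the method signature is an assumption)
interface IAnagramServicePort {
    boolean isAnagram(String a, String b);
}

// Core application service implementing the inbound port
class AnagramService implements IAnagramServicePort {
    public boolean isAnagram(String a, String b) {
        char[] x = a.toCharArray();
        char[] y = b.toCharArray();
        Arrays.sort(x);
        Arrays.sort(y);
        return Arrays.equals(x, y);
    }
}

// Primary adapter: the console front end knows only the port
class ConsoleAnagramAdaptor {
    private final IAnagramServicePort service;
    ConsoleAnagramAdaptor(IAnagramServicePort service) { this.service = service; }

    String run(String a, String b) {
        return service.isAnagram(a, b) ? "anagram" : "not an anagram";
    }
}
```

A RestAnagramController would depend on the same IAnagramServicePort, so both front ends drive the identical core logic.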
5. Where Can It Be Used?
Hexagonal Architecture is used mostly for services in a microservice architecture, that is, for smaller pieces of software. However, there are also places where it is not a good fit. The explanation is as follows
Hexagonal Architecture is not suitable for all situations. Like Domain-Driven Design, it is really applicable if you have a real business domain. For an application which transforms data into another format, it might be overkill.

6. Shortcomings

6.1. Entanglement of the Layers
Hexagonal architecture says nothing about the entanglement of layers caused by the DTOs used between the BL and UI layers.

Actually, the same problem also exists when using a Layered Architecture. Even though Ports and Adapters are easily replaceable, at some point one of the Data-Oriented or Object-Oriented messaging approaches has to be chosen. That choice is most likely Data-Oriented DTO objects. Consequently, information that both layers must interpret the same way, such as the colors/fonts to show on screen, starts to leak across the boundary.

6.2. Interaction Inside the Core Business Logic
Interaction and communication inside the Core Business Logic is one of the hardest topics. If it is not set up correctly, spaghetti structures emerge. Here again, hexagonal architecture offers no recommendation.