Monday, January 30, 2023

Google Authenticator Plugin

Go to the Google Account page
Click the Security menu
Under the "Signing in to Google" heading, enable 2-Step Verification. Under the 2-Step Verification menu, configure the Authenticator App settings; a QR code appears. Scan this QR code with the Authenticator plugin installed in Chrome.

Docker Compose and Redis Sentinel Mode

Example
We do the following. Here there is one Redis master and one Redis sentinel
version: '2'

networks:
  app-tier:
    driver: bridge

services:
  redis:
    image: 'bitnami/redis:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    networks:
      - app-tier
  redis-sentinel:
    image: 'bitnami/redis-sentinel:latest'
    environment:
      - REDIS_MASTER_HOST=localhost
      - REDIS_SENTINEL_RESOLVE_HOSTNAMES=yes
    ports:
      - '26379:26379'
    networks:
      - app-tier
Example
We do the following. REDIS_MASTER_SET specifies which master set the sentinel monitors
version: '3.8'
services:
  redis-master:
    container_name: redis-master
    image: 'bitnami/redis:latest'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=redispassword
    ports:
      - "6379:6379"
  redis-slave:
    container_name: slave-redis
    image: 'bitnami/redis:latest'
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PASSWORD=redispassword
      - REDIS_PASSWORD=redispassword
    ports:
      - "7000:6379"
    depends_on:
      - redis-master
  redis-sentinel-1:
    image: 'bitnami/redis-sentinel:latest'
    container_name: sentinel-1
    environment:
      - REDIS_MASTER_SET=mymaster
      - REDIS_MASTER_HOST=127.0.0.1
      - REDIS_MASTER_PASSWORD=redispassword
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=10000
    depends_on:
      - redis-master
      - redis-slave
    ports: 
       - "26379:26379"
  redis-sentinel-2:
    image: 'bitnami/redis-sentinel:latest'
    container_name: sentinel-2
    environment:
      - REDIS_MASTER_SET=mymaster
      - REDIS_MASTER_HOST=127.0.0.1
      - REDIS_MASTER_PASSWORD=redispassword
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=10000
    depends_on:
      - redis-master
      - redis-slave
    ports: 
      - "26380:26379"
  redis-sentinel-3:
    image: 'bitnami/redis-sentinel:latest'
    container_name: sentinel-3
    environment:
      - REDIS_MASTER_SET=mymaster
      - REDIS_MASTER_HOST=127.0.0.1
      - REDIS_MASTER_PASSWORD=redispassword
      - REDIS_SENTINEL_DOWN_AFTER_MILLISECONDS=10000
    depends_on:
      - redis-master
      - redis-slave
    ports: 
      - "26381:26379"


Fault Tolerance vs High Availability

Introduction
The explanation is as follows
1. Highly available systems can crash.
2. Highly available systems may answer with "I could not do it" or "an error occurred".
3. Fault tolerant systems find a way to work around the failure.
In other words, if, for instance, a web request is being processed by your highly available platform, and one of the nodes crashes, that user will probably get a 500 error back from the API, but the system will still be responsive for following requests. In the case of a fault-tolerant platform, the failure will somehow (more on this in a minute) be worked-around and the request will finish correctly, so the user can get a valid response. The second case will most likely take longer, due to the extra steps.
Another, more detailed explanation is as follows
Fault tolerance implies zero service interruptions. If there is a failure somewhere the system will instantly switch to the backup solution and service will continue without interruption. High availability, on the other hand, implies that services are, well, highly available but not always available. A system can be highly available but not fault tolerant. I generally consider high availability to be an aspect of fault tolerance. In that, it addresses a certain type of “fault” (availability), but doesn’t necessarily talk about other aspects.

This is somewhat of a contrived example, but basically everyone watches streaming content, so let’s consider a digital rights management service that determines whether a viewer can watch a particular video. The service could be configured to be highly available, in that it will always serve and return queries. However, it may not handle certain backend data correctly and get into a state where it returns errors, or denies all requests. In this case, it would be highly available (it is reachable and is returning an answer), but it is not fault tolerant, because something in the system has caused it to misbehave.

The caveat with this example is that there’s a fine line between a “bug” and fault tolerance. But the idea of fault tolerance is that the system can handle unexpected events gracefully and continue providing an excellent user experience. (If this example is interesting to you, I recommend taking a look at how Netflix’s chaos monkey randomly terminates instances in production to ensure that engineers implement their services to be resilient to instance failures).
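As a rough code-level illustration of the difference, a highly available client simply surfaces the first failure, while a fault tolerant client works around it by trying another replica at the cost of extra latency. This is only a sketch; the node names and the fetchFrom helper are made up
import java.util.List;

public class FaultToleranceSketch {

  // Hypothetical low-level call that fails when a node is down
  static String fetchFrom(String node) throws Exception {
    if (node.equals("node-1")) throw new Exception(node + " crashed");
    return "response from " + node;
  }

  // Highly available but not fault tolerant:
  // the first failure surfaces to the caller (e.g. as a 500)
  static String fetchHA() throws Exception {
    return fetchFrom("node-1");
  }

  // Fault tolerant: the failure is worked around by trying other replicas,
  // which usually takes longer, as the quote above notes
  static String fetchFT() {
    for (String node : List.of("node-1", "node-2", "node-3")) {
      try {
        return fetchFrom(node);
      } catch (Exception e) {
        // swallow and try the next replica
      }
    }
    throw new IllegalStateException("all replicas failed");
  }

  public static void main(String[] args) {
    System.out.println(fetchFT()); // prints "response from node-2"
  }
}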
Cost
The explanation is as follows. Highly available systems cost less than fault tolerant systems
High Availability: Similar to fault tolerance, but more cost effective at the expense of comparatively more, but acceptable downtime. This works on the software side, and uses redundant systems and smart fault detection and correction strategies for it to function.

Thursday, January 26, 2023

Apache License

As with any license, in order to grant this license the code must first belong to the person who wrote it

Apache License and GPL
The Apache License tends not to cause problems when used together with the GPL. The explanation is as follows.
Apache license tends to not cause issues/concerns because it does not impose requirements that GPL does on the software that uses Apache-licensed components.
There is an example here: PostgreSQL-licensed code was reused unchanged in an Apache-licensed product (CockroachDB).
Meanwhile, the PostgreSQL License is compatible with CockroachDB’s own Apache License, which enables reuse of (some of) PostgreSQL’s own source code in CockroachDB unchanged. In contrast, MySQL (and its successor MariaDB) is released under the GNU GPL, which prevents direct reuse of MySQL code in CockroachDB.
Apache License and LGPL
The Apache license is even more permissive than the LGPL. The explanation is as follows
GNU Lesser General Public License (LGPL), also known as a copyleft license. Teams that use LGPL must distribute derivatives of LGPL software with the same license.

Apache license, which lets users distribute or modify the software without restriction. The Apache license is not copyleft.
Using Apache Code Inside GPLv3
Apache-licensed code can be used in GPLv3 projects. The explanation is as follows.
Apache 2 software can therefore be included in GPLv3 projects, because the GPLv3 license accepts our software into GPLv3 works. However, GPLv3 software cannot be included in Apache projects. The licenses are incompatible in one direction only, and it is a result of ASF's licensing philosophy and the GPLv3 authors' interpretation of copyright law.
What Is a Derivative Work
The explanation is as follows
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

Wednesday, January 18, 2023

Cache Strategies - Cache Access Patterns - Cache-aside

Introduction
We perform the read and write operations ourselves in application code

1. Reads
The read flow figure is as follows
The explanation is as follows. In fact, if we let the cache infrastructure do this step itself, we get Read-Through.
1. Whenever a requests comes to the application, it firsts checks the requested data in the cache.
2. If yes, the cache returns the data.
3. Otherwise, the application queries the data from the database, updates the cache on the way back and then returns the data.
In code, it looks like this
String cacheKey = "hello world";
String cacheValue = redisCache.get(cacheKey);
// got cache
if (cacheValue != null) {
    return cacheValue;
} else {
    //no cache, read from database
    cacheValue = getDataFromDB();
    // write data to cache
    redisCache.put(cacheKey, cacheValue);
    return cacheValue;
}
2. Writes
The write path is as follows. There are several options here: the database can be updated first, or the cache can be updated, or the cache can be deleted. The options are
1. Update the cache first, then update the database.
2. Update the database first, then update the cache.
3. Delete the cache first, then update the database.
4. Update the database first, then delete the cache.
Whichever option is used, since there are two separate operations, there is a chance that one of them fails or an inconsistent result is returned. For this reason cache entries are usually given a timeout / staleness period, so that eventual consistency is reached after a while.

Solutions and Their Effects
1. Update the database first, then delete the cache
The explanation is as follows
After updating the database, the corresponding records of the cache should be cleared immediately. When the same request comes in next time, it will be taken from the database first and the latest result will be written back to the cache.
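A minimal sketch of this write path, reusing the hypothetical redisCache from the read example above plus an assumed updateDB helper
// Write path: update the database first, then delete the cache
void write(String cacheKey, String newValue) {
    // 1. persist the new value
    updateDB(cacheKey, newValue);
    // 2. invalidate the cached copy; the next read repopulates it from the database
    redisCache.delete(cacheKey);
}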
Problems
1. Eventual Consistency
The figure is as follows. Here, until A finishes its operation, B still reads the old value


2. The application updates the database but dies before it can invalidate the cache.
The explanation is as follows
..  when A wants to update the data, A is killed after finishing the database update, probably due to bugs or application upgrade and so on. Then the data in the cache will remain inconsistent for a long time, until the next update or timeout.
3. Lost Update
The figure is as follows
This is essentially the same as the Lost Update problem: A reads the old value and writes it back to the cache

4. Double Delete
Another solution that improves consistency is the following; it is called double delete. A sketch follows the steps below
Delete the cache first.
Write database.
Sleep for 500 milliseconds, then delete the cache.
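A sketch of the double-delete sequence with the same assumed helpers; the 500 millisecond sleep gives in-flight readers time to finish before the second invalidation
void writeWithDoubleDelete(String cacheKey, String newValue) throws InterruptedException {
    // 1. delete the cache first
    redisCache.delete(cacheKey);
    // 2. write the database
    updateDB(cacheKey, newValue);
    // 3. sleep for ~500 ms, then delete the cache again to drop any stale value
    //    written back by a concurrent reader in the meantime
    Thread.sleep(500);
    redisCache.delete(cacheKey);
}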


gcloud container fleet Option

Example
We do the following
# Register our clusters to Fleet
gcloud container fleet memberships register gke-pri-fleet \
    --gke-cluster europe-west4/<<kubernetes_cluster_name_output_from_gke_primary>> \
    --enable-workload-identity \
    --project=<<your_gcp_project_id>>

gcloud container fleet memberships register gke-sec-fleet \
    --gke-cluster europe-west2/<<kubernetes_cluster_name_output_from_gke_secondary>> \
    --enable-workload-identity \
    --project=<<your_gcp_project_id>>

# Enable Ingress on Primary Fleet
gcloud container fleet ingress enable \
   --config-membership=gke-pri-fleet
Here there are two clusters, and both are registered to the fleet. The figure is as follows




gcloud services Option - Enables Google APIs

Introduction
The Google APIs include the following

cloudresourcemanager.googleapis.com
container.googleapis.com
dns.googleapis.com
gkehub.googleapis.com
multiclusteringress.googleapis.com
multiclusterservicediscovery.googleapis.com
trafficdirector.googleapis.com

enable Option
Some APIs may not be enabled. This option is used to enable them

Example
We do the following
gcloud services enable \
    multiclusteringress.googleapis.com \
    gkehub.googleapis.com \
    container.googleapis.com \
    --project=<<your_gcp_project_id>>
Example
We do the following
gcloud services enable gkehub.googleapis.com --project $1
gcloud services enable dns.googleapis.com --project $1
gcloud services enable trafficdirector.googleapis.com --project $1
gcloud services enable cloudresourcemanager.googleapis.com --project $1
gcloud services enable multiclusterservicediscovery.googleapis.com --project $1
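To verify which APIs are already enabled on the project, the list subcommand can be used
gcloud services list --enabled --project=<<your_gcp_project_id>>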

Google Cloud Multi Region Kubernetes Cluster

An example is here

Software Architecture Deployment - Going Live - Rolling Deployment - The Old System Is Shut Down Gradually

Introduction
The explanation is as follows
The rolling deployment strategy is the default strategy by Kubernetes that slowly replaces the old pods of the previous version with the pods of the new version. 
The figure is as follows


The old instances are shut down one by one
As many green services are started as there are blue ones. Blue services that finish their work are shut down one by one. The steps are as follows. The important thing here is being able to tell that a blue service has finished its in-flight work. If there is a problem with green, i.e. the new system, a rollback is possible because the blue system has not been shut down yet. This is also called a "Rolling Upgrade" or "Rolled Updates".
- Standing up a matching number of “green” instances of your microservice that contain the change that you wish to deploy.
- Once you’re happy that those green instances are healthy, adding them into the load balancer so that they receive traffic.
- Removing the blue instances from the load balancer, and once they have finished processing any inflight requests, throwing them away.
The explanation is as follows
1. Start a new already upgraded server (server 3).
2. move server 1 clients to server 3.
3. Upgrade server 1
4. Move server 2 clients to server 1.
5. Delete server 2 as you now have servers 1 and 3 running the upgraded software?
The explanation for Rolling Deployment is as follows
 A Rolling deployment deploys progressively with automated validation and health checks in each step. In a rolling deployment model, the new application services are added to the shared load balancer and the traffic will start to be shared between the old and new applications. Automated validation in each new replica helps determine if the release is still on track or if rollback is necessary.
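For reference, this is roughly how the strategy is expressed on a Kubernetes Deployment; the service name, image and probe path below are made-up placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired count during the rollout
      maxUnavailable: 0    # never drop below the desired count of ready pods
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-registry/my-service:2.0.0
          readinessProbe:  # health check used to validate each new replica before it receives traffic
            httpGet:
              path: /health
              port: 8080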

Monday, January 16, 2023

Using the H2 Database

Docker
We do the following
docker pull jesperdj/h2

# This command starts a new H2 container in detached mode (-d),
# maps the container's 8082 and 9092 ports to the same ports on the host machine (-p),
# and gives the container a name (--name).
docker run -d -p 8082:8082 -p 9092:9092 --name h2 jesperdj/h2

# Open browser http://localhost:8082
# To log in to the H2 database, enter the following information on the login page:
#   Driver Class: org.h2.Driver
#   JDBC URL: jdbc:h2:mem:testdb
#   User Name: sa
#   Password: (leave this field blank)
Another example is here

Column Types
They are as follows
BOOLEAN: A Boolean column can store true or false values.

TINYINT: A TINYINT column can store a small integer value between -128 and 127.

SMALLINT: A SMALLINT column can store an integer value between -32,768 and 32,767.

INT: An INT column can store a larger integer value between -2,147,483,648 and 2,147,483,647.

BIGINT: A BIGINT column can store a very large integer value between -9,223,372,036,854,775,808 and 9,223,372,036,854,775,807.

DECIMAL: A DECIMAL column can store a fixed-point decimal value with a specified precision and scale.

FLOAT: A FLOAT column can store a floating-point value with single precision.

DOUBLE: A DOUBLE column can store a floating-point value with double precision.

DATE: A DATE column can store a date value.

TIME: A TIME column can store a time value.

TIMESTAMP: A TIMESTAMP column can store a date and time value.

BLOB: A BLOB column can store binary data, such as images or documents.

CLOB: A CLOB column can store large text data, such as XML or HTML.

TEXT 
The explanation is as follows
H2 supports a TEXT column type, which can be used to store large amounts of character data in a table. The TEXT type is used to store strings of up to 2^31-1 bytes (approximately 2GB) in size.
Example
We do the following
CREATE TABLE my_table (
    id INT PRIMARY KEY,
    my_text_column TEXT
);
INSERT INTO my_table (id, my_text_column) VALUES (1, 'This is some text data.');
SELECT * FROM my_table;

Web Console
We do the following
http://localhost:8082/
Connection information
Saved Settings:	Generic H2 (Embedded)
Setting Name:	Generic H2 (Embedded)
  
Driver Class:	org.h2.Driver
JDBC URL:	 jdbc:h2:my-db-name
User Name:	sa
Password:
CREATE DATABASE Statement
H2 does not support this. It automatically creates the database given in the JDBC URL, but it does not allow creating a new database from inside an existing one.
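A minimal sketch of the auto-create behaviour; the file path, table name and credentials below are assumptions
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2AutoCreateDemo {
  public static void main(String[] args) throws Exception {
    // Connecting is enough: H2 creates ./my-db-name.mv.db if it does not exist yet.
    // An in-memory URL such as jdbc:h2:mem:testdb works the same way.
    try (Connection conn = DriverManager.getConnection("jdbc:h2:./my-db-name", "sa", "");
         Statement st = conn.createStatement()) {
      st.execute("CREATE TABLE IF NOT EXISTS demo (id INT PRIMARY KEY)");
    }
  }
}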

MERGE Statement
The syntax is as follows
MERGE INTO tableName [ ( columnName [,...] ) ] 
[ KEY ( columnName [,...] ) ] 
{ VALUES { ( { DEFAULT | expression } [,...] ) } [,...] | select } 
Example
We do the following
CREATE TABLE customer (
  id NUMBER, 
  name VARCHAR(20), 
  age NUMBER, 
  address VARCHAR(20), 
  salary NUMBER,
  PRIMARY KEY (id)
);

INSERT INTO customer (id,name,age,address,salary)
VALUES (1, 'orcun', 47, 'ankara', 2000);

MERGE INTO customer (id,name,age,address,salary)
KEY (id) VALUES (1, 'Colak', 46, 'istanbul', 2500);




Friday, January 13, 2023

Web Assembly - WASM

Introduction
Note: See the Embedding Wasm in Hazelcast article. Its code is here
An article about running WASM on Docker is here

What Is WebAssembly (Wasm)?
The explanation is as follows. It runs in the browser; a higher-level programming language is compiled down to another, lower-level target.
WebAssembly, or Wasm for short, is a new type of code that can be run in modern web browsers and provides new features and major gains in performance. It is not primarily intended to be written by hand; rather, it is designed to be an effective compilation target for source languages like C, C++, Rust, etc.

That means developers can code web client applications in a programming language of their choice, compile them down to Wasm, and run them inside the browser at near-native speed. Additionally, Wasm brings other advantages, such as portability, security (via sandboxed execution in the browser), and debugability.

The figure is as follows



Thursday, January 12, 2023

Apache Kafka ksqlDB Streams and Tables

Introduction
1. CREATE STREAM creates a stream that reads from a Kafka topic
2. CREATE TABLE turns the stream into a materialized view
The figure is as follows. Aggregations are done over streams; joins are done with tables.


CREATE TABLE
Example - Primary Key
We do the following
CREATE TABLE MyTable (
  sensorId VARCHAR PRIMARY KEY,
  timestamp VARCHAR,
  value DECIMAL(19,5)
) WITH (
  KAFKA_TOPIC = '<x>-telemetry',
  VALUE_FORMAT = 'JSON'
);

SELECT * FROM MyTable;

SELECT * FROM MyTable WHERE sensorId = '127';
Example - ETL
To define a Debezium source, we do the following
-- Create customers, products and orders topics
CREATE SOURCE CONNECTOR `mysql-connector` WITH(
  "connector.class" = 'io.debezium.connector.mysql.MySqlConnector',
  "tasks.max" = '1',
  "database.hostname" = 'mysql',
  "database.port" = '3306',
  "database.user" = 'root',
  "database.password" = 'debezium',
  "database.server.id" = '184054',
  "database.server.name" = 'dbserver1',
  "database.whitelist" = 'inventory',
  "table.whitelist" = 'inventory.customers,inventory.products,inventory.orders',
  "database.history.kafka.bootstrap.servers" = 'kafka:9092',
  "database.history.kafka.topic" = 'schema-changes.inventory',
  "transforms" = 'unwrap',
  "transforms.unwrap.type" = 'io.debezium.transforms.ExtractNewRecordState',
  "key.converter" = 'org.apache.kafka.connect.json.JsonConverter',
  "key.converter.schemas.enable" = 'false',
  "value.converter" = 'org.apache.kafka.connect.json.JsonConverter',
  "value.converter.schemas.enable" = 'false');
To join these 3 tables, we do the following
-- Join all streams
CREATE STREAM S_CUSTOMER (ID INT, FIRST_NAME string, LAST_NAME string, EMAIL string)
  WITH (KAFKA_TOPIC='dbserver1.inventory.customers', VALUE_FORMAT='json');

CREATE TABLE T_CUSTOMER AS
  SELECT id,
         latest_by_offset(first_name) as first_name,
         latest_by_offset(last_name) as last_name,
         latest_by_offset(email) as email
  FROM s_customer
  GROUP BY id
  EMIT CHANGES;

CREATE STREAM S_PRODUCT (ID INT, NAME string, description string, weight DOUBLE)
  WITH (KAFKA_TOPIC='dbserver1.inventory.products', VALUE_FORMAT='json');

CREATE TABLE T_PRODUCT AS
  SELECT id,
         latest_by_offset(name) as name,
         latest_by_offset(description) as description,
         latest_by_offset(weight) as weight
  FROM s_product
  GROUP BY id
  EMIT CHANGES;

CREATE STREAM s_order (
  order_number integer,
  order_date timestamp,
  purchaser integer,
  quantity integer,
  product_id integer)
  WITH (KAFKA_TOPIC='dbserver1.inventory.orders', VALUE_FORMAT='json');

CREATE STREAM SA_ENRICHED_ORDER WITH (VALUE_FORMAT='AVRO') AS
  select o.order_number, o.quantity,
         p.name as product,
         c.email as customer,
         p.id as product_id,
         c.id as customer_id
  from s_order as o
  left join t_product as p on o.product_id = p.id
  left join t_customer as c on o.purchaser = c.id
  partition by o.order_number
  emit changes;
To write the result to PostgreSQL, we do the following
CREATE SINK CONNECTOR `postgres-sink` WITH(
  "connector.class" = 'io.confluent.connect.jdbc.JdbcSinkConnector',
  "tasks.max" = '1',
  "dialect.name" = 'PostgreSqlDatabaseDialect',
  "table.name.format" = 'ENRICHED_ORDER',
  "topics" = 'SA_ENRICHED_ORDER',
  "connection.url" = 'jdbc:postgresql://postgres:5432/inventory?user=postgresuser&password=postgrespw',
  "auto.create" = 'true',
  "insert.mode" = 'upsert',
  "pk.fields" = 'ORDER_NUMBER',
  "pk.mode" = 'record_key',
  "key.converter" = 'org.apache.kafka.connect.converters.IntegerConverter',
  "key.converter.schemas.enable" = 'false',
  "value.converter" = 'io.confluent.connect.avro.AvroConverter',
  "value.converter.schemas.enable" = 'true',
  "value.converter.schema.registry.url" = 'http://schema-registry:8081'
);
CREATE SOURCE TABLE
Used for repartitioning

Example
We do the following. Here the stream is repartitioned by sensor and timestamp
-- Step 1
CREATE STREAM MyStream (
  sensorId VARCHAR KEY,
  timestamp VARCHAR,
  value DECIMAL(19,5)
) WITH (
  KAFKA_TOPIC = '<x>-telemetry',
  VALUE_FORMAT = 'JSON'
);

-- Step 2
CREATE STREAM MyStreamRepartitioned WITH (key_format='json') AS
  SELECT STRUCT(sensorId:=sensorId, timestamp:=timestamp) AS myStruct, VALUE
  FROM MyStream
  PARTITION BY STRUCT(sensorId:=sensorId, timestamp:=timestamp);

-- Step 3
CREATE SOURCE TABLE RepartitionedTable (
  myStruct struct<sensorId VARCHAR, timestamp VARCHAR> PRIMARY KEY,
  value VARCHAR
) WITH (
  KAFKA_TOPIC='<abc>REPARTITIONED',
  VALUE_FORMAT='json',
  KEY_FORMAT='json');
The explanation is as follows
Note the <abc>REPARTITIONED topic name. This topic name is dynamically created by KSQL DB for the repartitioned stream in step 2.
It is visible in the list of topics in your Confluent cloud environment, so you can take it from there.

Now you can query according to the timestamp field too. Note that the query extracts the values from the JSON serving as the primary key in our source table:
We do the following
SELECT EXTRACTJSONFIELD(myStruct -> sensorId, '$.sensorId') AS sensor,
       myStruct -> timestamp AS time,
       VALUE AS val
FROM RepartitionedTable
WHERE EXTRACTJSONFIELD(myStruct -> sensorId, '$.sensorId') = '127'
  AND EXTRACTJSONFIELD(myStruct -> timestamp, '$.timestamp') >= '12:00'
EMIT CHANGES;

Docker Compose and Kafka ksqlDB

Introduction
The following images are used
confluentinc/ksqldb-server
confluentinc/ksqldb-cli

Example
We do the following
version: '2'

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:7.0.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1

  ksqldb-server:
    image: confluentinc/ksqldb-server:0.25.1
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: broker:9092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"

  ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.25.1
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
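To get a ksql prompt against this stack, the usual approach is to start the containers and exec into the CLI container; the container and service names are the ones from the compose file above
docker compose up -d
docker exec -it ksqldb-cli ksql http://ksqldb-server:8088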

Monday, January 9, 2023

The Apache Kafka kafka-avro-console-producer Command

Example
We do the following
kafka-avro-console-producer \
  --broker-list localhost:9092 \
  --topic atm-fraud-accounts-topic \
  --property value.schema="$(cat src/main/avro/Account.avsc | tr -d '\040\011\012\015')" \
  --property schema.registry.url=http://localhost:8081 \
  < test-data/accounts.txt


Docker and Redis

Example
We do the following
$ docker run --name redisDemo -d redis
$ docker exec -it redisDemo redis-cli
Example
We do the following
$ docker run --name my-redis -p 6379:6379 -d redis


Thursday, January 5, 2023

Helm and Prometheus

Example
We do the following
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack
Example
We do the following
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update

$ kubectl create ns monitoring
$ helm -n monitoring install prometheus prometheus-community/kube-prometheus-stack
Exporter
Each exporter also creates a service. If we go to that service's metrics endpoint, we can see the information exposed by the exporter.
127.0.0.1:9108/metrics
Example - blackbox-exporter
See the Blackbox Exporter article. We do the following
helm -n monitoring upgrade --install \
  prometheus-blackbox prometheus-community/prometheus-blackbox-exporter
Example - elasticsearch-exporter
We do the following
helm install prometheus-elasticsearch-exporter \ 
  prometheus-community/prometheus-elasticsearch-exporter

Google Cloud Firestore - NoSQL Document Database

Cloud Datastore and Firestore
The explanation is as follows
Datastore is a highly scalable NoSQL Document Database. It automatically scales and partitions data as it grows. Recommended for use cases needing flexible schema with transactions. Examples: User Profile and Product Catalogs. Datastore can handle upto a few terabytes of data.

Here are some of the important features:

1. Supports Transactions, Indexes and SQL-like queries (GQL)
2. Does NOT support Joins or Aggregate (sum or count) operations

Firestore is the new version of Datastore. I call it Datastore++. It is optimized for multi-device access. It provides an offline mode and data synchronization across multiple devices - mobile, IoT etc.
Choosing Between Cloud Firestore/Datastore and Cloud BigTable
The explanation is as follows
Cloud Datastore is managed serverless NoSQL document database. It provides ACID transactions, SQL-like queries, and indexes. It is designed for transactional mobile and web applications.

Firestore is the next version of Datastore with additional capabilities like Strong consistency and Mobile and Web client libraries.

Firestore and Datastore are recommended for small to medium databases (0 to a few Terabytes).

Cloud BigTable on the other hand, is a managed, scalable NoSQL-wide column database. It is NOT serverless (You need to create instances).

BigTable is recommended for data sizes greater than 10 terabytes. It is usually used for large analytical and operational workloads.

BigTable is NOT recommended for transactional workloads. It does NOT support multi-row transactions - it supports ONLY single-row transactions.

Google Cloud Bigtable - Wide Column NoSQL Database

Introduction
The explanation is as follows
Bigtable is a distributed key/value storage system built at Google in 2005. It is designed to scale to store billions of rows and handle millions of reads/writes per second.
Another explanation is as follows. That is, it only provides transaction support for a single row
BigTable is recommended for data sizes greater than 10 terabytes. It is usually used for large analytical and operational workloads.

BigTable is NOT recommended for transactional workloads. It does NOT support multi-row transactions - it supports ONLY single-row transactions.
Another important point is this
It is NOT serverless (You need to create instances).
Where Is It Used
The explanation is as follows
A classic use case for Bigtable is storing sensor data and then running MapReduce jobs over it. This type of use case is a perfect fit for Bigtable because it typically requires high throughput and low latency. Additionally the nature of this data is likely sparse with high cardinality.
Consistency Primitives
The explanation is as follows
Guarantees: Bigtable supports strong consistency within a single region and supports eventual consistency in cross regional deployments. Strong consistency means that all readers and writers get the same consistent view of the data.

Transactions: Bigtable does not support general purpose transactions. However, it does support single row transactions. A single row transaction enables reading and updating a single row as an atomic operation.

Intra-Row Guarantees: All updates to a single row are atomic. This is impressive and useful given the high cardinality of columns within a row.
An explanation about its usage is as follows
- A table is lexicographically sorted by row key. This enables schema designers to control the relative locality of their data by carefully selecting row key.

- A single table is designed to have on the order of 100 column families. Within each column family an arbitrary number of qualifiers can be used.

- Bigtable is great at modeling sparse data because if a column qualifier is not specified it does not take up any space in the row. Therefore a typical use case of Bigtable will involve having million of unique qualifiers within a table but each individual row will be smallish because it will be sparse relative to the set of all column qualifiers in the table.

All data is immutable in Bigtable. When a new record is written either a new qualifier is added to a family or a new timestamp is added to a cell — data is never modified.

- The timestamps in the cells can either be assigned by the user or assigned by Bigtable. If the user assigns timestamps it is the responsibility of the user to ensure the timestamps are unique.

- All data in Bigtable (with one small exception we will ignore) are simply strings.
There can be many tables inside Bigtable, but the notion of a row is a bit different. The figure is as follows
The explanation is as follows
Row Key
As part of a table’s schema a row key must be defined. A row key uniquely identifies a single row within a table. In our example employeeID was selected as the row key and we are looking at the row where employeeID=25.

Column Family - Essentially the Name of a Map
As part of a table’s schema column families must be defined. Column families are used to store buckets of related entities. Our example shows two column families Contact Information and Manager Rating.

Column Qualifier - The Keys Inside the Map; the Values Are Cells
Within a family there can be arbitrary qualifiers. The qualifiers within a family should be related to each other. Qualifiers should be thought of as data rather than as part of the schema.

Cell and Timestamped Value - A Cell Contains a Map Whose Keys Are Timestamps
A row key, column family and column qualifier uniquely identify a single cell. A cell holds a collection of values. These values are organized into a map where the key is a timestamp and the value is a piece of user data.
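To make the row key / column family / qualifier / cell terminology concrete, reading the example row with the Java client looks roughly like this. The project, instance, table and family names are made up (real family names cannot contain spaces, so the "Contact Information" family from the figure becomes contact_info here)
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.Row;
import com.google.cloud.bigtable.data.v2.models.RowCell;

public class BigtableReadSketch {
  public static void main(String[] args) throws Exception {
    // Bigtable is not serverless: the instance below must already exist
    try (BigtableDataClient client = BigtableDataClient.create("my-project", "my-instance")) {
      // The row key uniquely identifies a single row, e.g. employeeID=25
      Row row = client.readRow("employees", "25");
      // Walk every cell in the contact_info column family
      for (RowCell cell : row.getCells("contact_info")) {
        System.out.printf("%s @ %d = %s%n",
            cell.getQualifier().toStringUtf8(), // column qualifier (data, not schema)
            cell.getTimestamp(),                // timestamp key of the cell's value map
            cell.getValue().toStringUtf8());    // the stored value
      }
    }
  }
}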

Google Cloud SQL - Relational Database

What Is Cloud SQL?
The explanation is as follows
Cloud SQL is a Fully Managed Relational Database service.

Here are some of the important features:

1. Supports MySQL, PostgreSQL, and SQL Server
2. Regional Service providing High Availability (99.95%)
3. Option to use SSDs or HDDs (For best performance: use SSDs)
4. Automatic encryption (tables/backups), maintenance and updates
5. High availability and failover: Create a Standby with automatic failover
6. Read replicas for reading workloads - Options: Cross-zone, Cross-region and External (NON-Cloud SQL DB)
7. Automatic storage increase without downtime (for newer versions)
8. Point-in-time recovery: Enable binary logging
9. Backups (Automated and on-demand backups)
Cloud SQL vs Cloud Spanner
See the Cloud Spanner article. The explanation is as follows.
Use Cloud Spanner(Expensive) instead of Cloud SQL for relational transactional applications if:

1. You have huge volumes of relational data (TBs) OR
2. You need infinite scaling for a growing application (to TBs) OR
3. Do you need a Global (distributed across multiple regions) Database OR
4. You need higher availability (99.999%)

Wednesday, January 4, 2023

aws configure Option

Introduction
Imports the generated "Access Key" values. As a result of this command, two files are created:
1. ~/.aws/credentials
2. ~/.aws/config

The ~/.aws/credentials File - Contains Sensitive Information
Example
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
The ~/.aws/config File - Contains Profile Settings
For the [default] profile there is no need to pass the "--profile ..." parameter on the command line

Example
[default]
region=us-west-2
output=json
Usage
Example
We do the following
$ aws configure
(Replace your access key when prompted)
AWS Access Key ID [None]: ABCDEFGHIAZBERTUCNGG
(Replace your secret key when prompted)
AWS Secret Access Key [None]: uMe7fumK1IdDB094q2sGFhM5Bqt3HQRw3IHZzBDTm
(You can put your own availability zone here)
Default region name [None]: ap-south-1
Default output format [None]: json
Example
We do the following
# to setup aws configuration for terraform
aws configure
AWS Access Key ID [None]: <PASTE>
AWS Secret Access Key [None]: <PASTE>

# to see, where are your credentials saved.
cat $HOME/.aws/credentials
configure sso Option
The details of the command are here. It is for single sign-on.

Example - Usage With a Profile
We do the following
> aws configure sso
SSO start URL [None]: https://foo.awsapps.com/start
SSO Region [None]: us-east-1
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this 
request, open the following URL:

https://device.sso.us-east-1.amazonaws.com/

Then enter the code:

ABC-DEFG
The only AWS account available to you is: 123456789012
Using the account ID 123456789012
The only role available to you is: PowerUserAccess
Using the role name "PowerUserAccess"
CLI default client Region [None]: us-east-1
CLI default output format [None]: json
CLI profile name [PowerUserAccess-123456789012]: poweruser

To use this profile, specify the profile name using --profile, as shown:

aws s3 ls --profile poweruser
When the credentials expire, we do the following
$ aws sso login 
The full output is as follows
> aws s3 ls --profile poweruser

The SSO session associated with this profile has expired or is otherwise invalid. 
To refresh this SSO session run aws sso login with the corresponding profile.

> aws sso login --profile poweruser
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this
request, open the following URL:

https://device.sso.us-east-1.amazonaws.com/

Then enter the code:

ABCD-EFGH
Successfully logged into Start URL: https://fooo.awsapps.com/start

> aws s3 ls --profile poweruser
...

Tuesday, January 3, 2023

git worktree Option

Introduction
The git worktree option saves us from constantly switching branches in IntelliJ, because working gets difficult when IntelliJ keeps re-indexing things.

Example
We do the following
# Add a new worktree
git worktree add ../new-dir existing-branch

# Remove a worktree
git worktree remove ../new-dir
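To see the worktrees that currently exist, the standard listing command can be used
# List all worktrees and the branch each one has checked out
git worktree list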