Friday, December 30, 2022

OpenTelemetry Java Agent

Introduction
The agent instruments the application and sends the collected telemetry (logs, traces, metrics) directly to a backend server.


Example
To run the application we do it like this:
java -javaagent:opentelemetry-javaagent.jar -jar catalog.jar 
Example
We do it like this:
java -javaagent:opentelemetry-javaagent.jar -jar target/*.jar
Example
We do it like this:
# 1. Add them to the startup commands

java -javaagent:path/to/opentelemetry-javaagent.jar \
  -Dotel.service.name=your-service-name -jar myApplication.jar

# 2. Use JAVA_TOOL_OPTIONS and other environment variables

export JAVA_TOOL_OPTIONS="-javaagent:path/to/opentelemetry-javaagent.jar"
export OTEL_SERVICE_NAME="your-service-name"
java -jar myApplication.jar
docker-compose.override.otel.yml
Example
We do it like this:
curl --create-dirs -O -L --output-dir ./otel \
https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar
docker-compose.override.otel.yml is as follows:
#docker-compose.override.otel.yml
version: '3'

services:
  [your-service]:
  volumes:
      - "./otel/opentelemetry-javaagent.jar:/otel/opentelemetry-javaagent.jar"
  environment:
   - JAVA_TOOL_OPTIONS=-javaagent:/otel/opentelemetry-javaagent.jar
   - OTEL_SERVICE_NAME=[your-service]
   - DEPLOYMENT_ENV=DOCKER_LOCAL
  extra_hosts:
        - "host.docker.internal:host-gateway"
To run it we do it like this:
docker compose -f docker-compose.yml -f docker-compose.override.otel.yml up -d

SpringBoot
Example
The explanation is as follows:
The OpenTelemetry java agent uses the Mapped Diagnostic Context (MDC) to propagate the TraceID and SpanID within the service. You can print the TraceID and SpanID in the log line by extracting the values from MDC in the console pattern. Just add this to your application.yml file:
We do it like this:
logging:
  level:
    root: INFO
  pattern:
    console: '[%d{yyyy-MM-dd HH:mm:ss.SSS}] [%mdc{trace_id}/%mdc{span_id}] [%thread] %-5level %C:%M:%L - %msg%n'
Then the output looks like this:
[2023-04-14 06:53:02.166] [0eef6546864ff6a0129a4b6ce06f7cbf/c3e97a8c1174afd2] [http-nio-8080-exec-7] INFO ...

Exporter Selection
Three variables can be defined:
1. OTEL_TRACES_EXPORTER     Trace exporter to be used. Default “otlp”
2. OTEL_METRICS_EXPORTER    Metrics exporter to be used. Default “otlp”
3. OTEL_LOGS_EXPORTER     Logs exporter to be used. Default “otlp”

OTEL_TRACES_EXPORTER
Specifies the exporter used to send traces. It can take values such as otlp, jaeger, zipkin, none. The default value is otlp.
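For example, the exporters can be selected with environment variables when the agent is attached. A minimal sketch; the service name and the OTLP endpoint are placeholders
export OTEL_SERVICE_NAME=my-service
export OTEL_TRACES_EXPORTER=otlp      # send traces in OTLP format (the default)
export OTEL_METRICS_EXPORTER=none     # disable metric export
export OTEL_LOGS_EXPORTER=none        # disable log export
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

java -javaagent:./opentelemetry-javaagent.jar -jar target/*.jar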

Console Backend
Example
We do it like this. opentelemetry-javaagent.jar is the jar for the OpenTelemetry agent, and our service's name is my-service.
OTEL_SERVICE_NAME=my-service \
OTEL_TRACES_EXPORTER=logging \
java -javaagent:./opentelemetry-javaagent.jar \
     -jar target/*.jar
The output is as follows:
//Console output
INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - '/owners/{ownerId}' :
99af87eeaa19b83d014463b046884e56 632e2d2a5931bd17 SERVER [tracer: io.opentelemetry.tomcat-7.0:1.13.1-
alpha] AttributesMap{data={net.transport=ip_tcp, http.target=/owners/11, thread.id=120, http.flavor=1.1,
http.status_code=200, net.peer.ip=0:0:0:0:0:0:0:1, thread.name=http-nio-8080-exec-10,
http.host=localhost:8080, http.route=/owners/{ownerId}, http.user_agent=Mozilla/5.0 (Macintosh; Intel
Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36,
http.method=GET, net.peer.port=50778, http.scheme=http}, capacity=128, totalAddedValues=13}
Jaeger Backend
The Jaeger-specific settings are explained as follows:
OTEL_EXPORTER_JAEGER_ENDPOINT : Full URL of the Jaeger HTTP endpoint
OTEL_EXPORTER_JAEGER_TIMEOUT : Maximum time (in milliseconds) the Jaeger exporter will wait for each batch export
OTEL_EXPORTER_JAEGER_USER : Username to be used for HTTP basic authentication
OTEL_EXPORTER_JAEGER_PASSWORD : Password to be used for HTTP basic authentication
Example
We do it like this:
OTEL_SERVICE_NAME=my-service \
OTEL_TRACES_EXPORTER=jaeger \
OTEL_EXPORTER_JAEGER_ENDPOINT=http://localhost:14250 \
  java -javaagent:./opentelemetry-javaagent.jar -jar target/*.jar
Aspecto Backend
Example
We do it like this. This time there is no need to define OTEL_TRACES_EXPORTER; the default value is otlp and Aspecto also supports this format. Setting OTEL_EXPORTER_OTLP_TRACES_ENDPOINT is enough.
OTEL_SERVICE_NAME=my-service \
OTEL_EXPORTER_OTLP_HEADERS=Authorization={ASPECTO_AUTH} \
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://otelcol.aspecto.io:4317 \
  java -javaagent:./opentelemetry-javaagent.jar \
       -jar target/*.jar
Lightstep Backend
Lightstep provides another jar, lightstep-opentelemetry-javaagent.jar, that wraps the regular opentelemetry-javaagent.jar.
Example
We do it like this. This time there is no need to define OTEL_TRACES_EXPORTER; the default value is otlp and Lightstep also supports this format. Setting OTEL_EXPORTER_OTLP_ENDPOINT is enough.
export LS_ACCESS_TOKEN=your-token
export OTEL_SERVICE_NAME=springboot-demo
export OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.lightstep.com:443

java -javaagent:lightstep-opentelemetry-javaagent.jar -jar target/*.jar
Example
We do it like this. This time there is no need to define OTEL_TRACES_EXPORTER; the default value is otlp and Lightstep also supports this format. The OTEL_EXPORTER_OTLP_ENDPOINT value is not specified either, because it has a default, http://localhost:4318. In other words, Lightstep is running on the local machine.
export OTEL_SERVICE_NAME=springboot-demo_jaeger
export LIGHTSTEP_ACCESS_TOKEN="<your_ls_access_token>"

java -javaagent:opentelemetry-javaagent.jar -jar target/*.jar

Monday, December 26, 2022

Protobuf protoc Command

Introduction
The syntax is as follows:
protoc <options> <filename>
-I option
Specifies the directory (import path) in which imported .proto files are searched.
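For example, a minimal sketch; the directory names are hypothetical
# resolve imports under ./protos and write the generated Java sources into ./generated
protoc -I=./protos --java_out=./generated ./protos/employee.proto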

--java_out option
Example
We do it like this:
protos=...
javaOut=...

for proto in $(find "$protos" -name "*.proto"); do
  echo "Generating Java code for $proto"

  protoc \
    -I "$ROOT"/protos \
    -I "$ROOT" \
    --java_out="$javaOut" \
    "$proto"

  protoc \
    -I "$ROOT"/protos \
    -I "$ROOT" \
    --grpc-java_out="$javaOut" \
    --plugin=protoc-gen-grpc-java=/usr/local/bin/protoc-gen-grpc-java \
    "$proto"
done
--python_out option
Example
We do it like this. It compiles the employee.proto file in the current directory to Python and writes the output to the current directory as well.
protoc -I=. --python_out=. employee.proto

Docker Compose and Confluent Schema Registry

The following images are used:
confluentinc/cp-zookeeper:7.2.1
confluentinc/cp-kafka:7.2.1
confluentinc/cp-schema-registry:7.2.1
confluentinc/cp-enterprise-control-center:7.2.1

cp-server or cp-kafka
The Kafka broker

cp-schema-registry
The Schema Registry server

cp-enterprise-control-center
The explanation is as follows:
Confluent Control Center

The Confluent Control Center is a web based tool for monitoring the health of the cluster and observing the metrics exported from the broker, as well as metrics gathered from the application consumers and producers. 
Schema creation and management can also be done visually with it. Go to http://localhost:9021.
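Schemas can also be managed over the Schema Registry REST API. A minimal sketch with curl; the subject name and the schema itself are made up
# register a schema under the subject orders-value
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\":\"string\"}"}' \
  http://localhost:8081/subjects/orders-value/versions

# list all registered subjects
curl http://localhost:8081/subjects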

Example
We do it like this:
version: "3"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-server:5.4.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: "true"
      CONFLUENT_SUPPORT_CUSTOMER_ID: "anonymous"

  kafka-tools:
    image: confluentinc/cp-kafka:5.4.0
    hostname: kafka-tools
    container_name: kafka-tools
    command: ["tail", "-f", "/dev/null"]
    network_mode: "host"

  schema-registry:
    image: confluentinc/cp-schema-registry:5.4.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"

  control-center:
    image: confluentinc/cp-enterprise-control-center:5.4.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021
Example
We do it like this:
services:
    zookeeper:
        image: confluentinc/cp-zookeeper:7.2.1
        hostname: zookeeper
        container_name: zookeeper
        ports:
            - "2181:2181"
        environment:
            ZOOKEEPER_CLIENT_PORT: 2181
            ZOOKEEPER_TICK_TIME: 2000

    kafka:
        image: confluentinc/cp-kafka:7.2.1
        hostname: kafka
        container_name: kafka
        depends_on:
            - zookeeper
        ports:
            - "9092:9092"
        environment:
            KAFKA_BROKER_ID: 1
            KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
            KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
            KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
            KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
            KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
            KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
            KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
            KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
            KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
            KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081

    schema-registry:
        image: confluentinc/cp-schema-registry:7.2.1
        hostname: schema-registry
        container_name: schema-registry
        depends_on:
            - zookeeper
            - kafka
        ports:
            - "8081:8081"
        environment:
            SCHEMA_REGISTRY_HOST_NAME: schema-registry
            SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'kafka:29092'
            SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

    control-center:
        image: confluentinc/cp-enterprise-control-center:7.2.1
        hostname: control-center
        container_name: control-center
        depends_on:
            - kafka
            - schema-registry
        ports:
            - "9021:9021"
        environment:
            CONTROL_CENTER_BOOTSTRAP_SERVERS: 'kafka:29092'
            CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
            CONTROL_CENTER_REPLICATION_FACTOR: 1
            CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
            CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
            CONFLUENT_METRICS_TOPIC_REPLICATION: 1
            PORT: 9021


Thursday, December 22, 2022

git diff Option - Show Changes Not Yet Staged

Introduction
The explanation is as follows:
Show Changes Not Yet Staged
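A minimal sketch of the basic usage
# show changes in the working tree that are not staged yet
git diff

# for comparison, show the changes that are already staged
git diff --staged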
Example
The explanation is as follows:
If you face a situation where you need to share some changes in the code base that you have made with your co-worker but you cannot yet push your changes to the remote repository, you can use the patch command of git. Here is how it works:

After you have made all the changes you can run the following command:

git diff > filename.patch

git diff > filename.patch will create a file that will contain all the uncommitted changes you have made so far in the code base. Then you can simply share this file with your coworker via Slack, email, USB etc.

Your coworker will simply download your patch file and run the following command:

git apply filename.patch

This would apply all the code changes you have made in the code base on your coworker’s copy of the code.

Example
To see the difference between a branch and master we do it like this.
git diff featureXYZ_branch master
Example
We do it like this.
git diff remotes/origin/featureXYZ_branch remotes/origin/master

Friday, December 16, 2022

Debezium JSON Examples

Introduction
Fields are defined for schema and payload in the event. The explanation is as follows:
before: This field contains the state of the record before the operation. It is optional because it may not be present for all events.
after: This field contains the state of the record after the operation. It is always present for INSERT and UPDATE events.
source: This field contains information about the Debezium connector that generated the event.
op: This field specifies the type of operation performed on the record.
ts_ms: This field specifies the timestamp of the event in milliseconds since the Unix epoch.
transaction: This field contains information about the transaction in which the event occurred. It is optional because it can be null for non-transactional operations.

Read
Example
{
  "schema": {
    "type": "struct",
    "fields": [ ... ],
    "optional": false,
    "name": "dbserver1.inventory.customers.Envelope"
  },
  "payload": {
    "before": null,
    "after": {
      "id": 1004,
      "first_name": "Anne",
      "last_name": "Kretchmar",
      "email": "annek@noanswer.org"
    },
    "source": {
      "version": "1.6.1.Final",
      "connector": "mysql",
      "name": "dbserver1",
      "ts_ms": 1630246982521,
      "snapshot": "true",
      "db": "inventory",
      "sequence": null,
      "table": "customers",
      "server_id": 0,
      "gtid": null,
      "file": "mysql-bin.000008",
      "pos": 154,
      "row": 0,
      "thread": null,
      "query": null
    },
    "op": "r",
    "ts_ms": 1630246982521,
    "transaction": null
  }
}
New Row With Insert - Create
The op field is c, i.e. CREATE. The payload section has before and after parts, which contain the column names. For a newly inserted row the before field is null.
Example
   "payload":{ 
      "before":null,
      "after":{ 
         "id":1005,
         "first_name":"Vlad",
         "last_name":"Mihalcea",
         "email":"vlad@acme.org"
      },
      "source":{ 
         "name":"dbserver1",
         "server_id":223344,
         "ts_sec":1500369632,
         "gtid":null,
         "file":"mysql-bin.000003",
         "pos":364,
         "row":0,
         "snapshot":null,
         "thread":13,
         "db":"inventory",
         "table":"customers"
      },
      "op":"c",
      "ts_ms":1500369632095
   }
}
Update
The op field is u, i.e. UPDATE. Both the before and after parts are populated.
Example
{
   "payload":{ 
      "before":{ 
         "id":1005,
         "first_name":"Vlad",
         "last_name":"Mihalcea",
         "email":"vlad@acme.org"
      },
      "after":{ 
         "id":1005,
         "first_name":"Vlad",
         "last_name":"Mihalcea",
         "email":"vlad.mihalcea@acme.org"
      },
      "source":{ 
         "name":"dbserver1",
         "server_id":223344,
         "ts_sec":1500369929,
         "gtid":null,
         "file":"mysql-bin.000003",
         "pos":673,
         "row":0,
         "snapshot":null,
         "thread":13,
         "db":"inventory",
         "table":"customers"
      },
      "op":"u",
      "ts_ms":1500369929464
   }
}
Delete
The op field is d, i.e. DELETE. The after part is null.
Example
{
    "payload":{ 
      "before":{ 
         "id":1005,
         "first_name":"Vlad",
         "last_name":"Mihalcea",
         "email":"vlad.mihalcea@acme.org"
      },
      "after":null,
      "source":{ 
         "name":"dbserver1",
         "server_id":223344,
         "ts_sec":1500370394,
         "gtid":null,
         "file":"mysql-bin.000003",
         "pos":1025,
         "row":0,
         "snapshot":null,
         "thread":13,
         "db":"inventory",
         "table":"customers"
      },
      "op":"d",
      "ts_ms":1500370394589
   }
}

Docker and Debezium

Example
We do it like this:
> docker run -it \
--name zookeeper \
-p 2181:2181 \
-p 2888:2888 \
-p 3888:3888 \
debezium/zookeeper:0.5
 
> docker run -it \
--name kafka \
-p 9092:9092 \
--link zookeeper:zookeeper \
debezium/kafka:0.5
 
> docker run -it \
--name mysql \
-p 3306:3306 \
-e MYSQL_ROOT_PASSWORD=debezium \
-e MYSQL_USER=mysqluser \
-e MYSQL_PASSWORD=mysqlpw \
debezium/example-mysql:0.5
 
> docker run -it \
--name kafka-connect \
-p 8083:8083 \
-e GROUP_ID=1 \
-e CONFIG_STORAGE_TOPIC=my_connect_configs \
-e OFFSET_STORAGE_TOPIC=my_connect_offsets \
--link zookeeper:zookeeper \
--link kafka:kafka \
--link mysql:mysql \
debezium/connect:0.5
 
> docker run -it \
--name kafka-watcher \
--link zookeeper:zookeeper \
debezium/kafka:0.5 watch-topic -a -k dbserver1.inventory.customers


Wednesday, December 14, 2022

HashiCorp Vault

Installation
We do it like this:
brew tap hashicorp/tap
brew install hashicorp/tap/vault
brew upgrade hashicorp/tap/vault
Vault Dev Mode
The explanation is as follows:
A common scenario when starting with Vault is to use the dev mode. This doesn't require any setup and it works directly with your local vault installation. The dev mode is insecure and will lose data on every restart, since it stores data in-memory.

The vault server can be started with vault server -dev
Seal/Unseal
The explanation is as follows:
When you run Vault as a dev server, it will automatically unseal Vault.

When you run it on a production server, then every initialized Vault server is started in the sealed state.

This means that Vault can access the physical storage, but it can’t read any of it because it doesn’t know how to decrypt it.

The process of teaching Vault how to decrypt the data is known as unsealing the Vault.

Unsealing has to happen every time Vault starts and can be done via the API and via the CLI.
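A minimal sketch of this flow with the CLI; the key values are placeholders
# initialize the server once; this prints the unseal key shares and the initial root token
vault operator init

# unseal by providing the required number of key shares (by default 3 of 5)
vault operator unseal <unseal-key-1>
vault operator unseal <unseal-key-2>
vault operator unseal <unseal-key-3>

# check that Sealed is now false
vault status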
Secret Engines
The explanation is as follows:
Secrets engines are components which store, generate, or encrypt data.

Some secrets engines like the key/value secrets engine (like the one we used earlier) simply store and read data. Other secrets engines connect to other services and generate dynamic credentials on demand. Other secrets engines provide encryption as a service.
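For example, a key/value secrets engine can be enabled and used like this; the path and the secret values are made up for illustration
# enable a KV v2 secrets engine at the path secret/
vault secrets enable -path=secret kv-v2

# store and read a secret
vault kv put secret/myapp db.username=admin db.password=s3cr3t
vault kv get secret/myapp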
Database Secrets Engine
The explanation is as follows:
When an authenticated entity, say an instance of your backend application, requests database access and is authorised to do so, this secrets engine creates a database user and password with the corresponding lifetimes and access rights.
(Figure: dynamic database credential flow)
The explanation is as follows:
If configured, the credentials have a limited lifetime and expire quickly if not renewed. This renewal process exists for the tokens used to communicate with vault as well as the credentials for the database. These refresh mechanisms are shown below.
(Figure: token and credential renewal flow)
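A rough sketch of configuring the database secrets engine with the CLI; the connection details, role name and creation statement are illustrative only
# enable the database secrets engine
vault secrets enable database

# configure a PostgreSQL connection
vault write database/config/my-postgres \
    plugin_name=postgresql-database-plugin \
    allowed_roles=my-role \
    connection_url="postgresql://{{username}}:{{password}}@127.0.0.1:5432/postgres?sslmode=disable" \
    username=vaultadmin \
    password=supersecure

# define a role that creates short-lived database users
vault write database/roles/my-role \
    db_name=my-postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=1h \
    max_ttl=24h

# request dynamic credentials for the role
vault read database/creds/my-role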

token hierarchy
The explanation is as follows:
Another nice concept worth mentioning here is the token hierarchy. If a token is considered compromised, it can be revoked. This also revokes all other tokens it spawned.
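A small illustration with the CLI; the token value is a placeholder
# create a child token from the current (parent) token
vault token create -policy=default

# revoking the parent token also revokes the child tokens it spawned
vault token revoke <parent-token>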

Docker
To run in Dev Mode, two environment variables must be defined:
1. VAULT_DEV_ROOT_TOKEN_ID and
2. VAULT_DEV_LISTEN_ADDRESS
Example
We do it like this:
docker run \
  -d \
  -p 8200:8200 \
  --cap-add=IPC_LOCK \
  -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' \
  -e 'VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200' \
  vault
The Unseal Key and Root Token values printed to the screen must be saved. The Root Token value is required to interact with Vault.

Go to http://0.0.0.0:8200/ui/ or http://127.0.0.1:8200/ and log in with the Root Token.

CLI
login
We do it like this:
export VAULT_ADDR='http://127.0.0.1:8200'
vault login <roottoken>
status option
We do it like this:
$ vault status 

Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.11.3
Build Date      2022-08-26T10:27:10Z
Storage Type    inmem
Cluster Name    vault-cluster-f1a3049f
Cluster ID      50bd8f06-9533-5da1-e36a-fce85433822a
HA Enabled      false
Docker Compose
Example
We do it like this:
version: '3.6'
services:
  vault:
    image: vault:latest
    container_name: vault
    restart: on-failure:10
    ports:
      - "8200:8200"
    cap_add:
      - IPC_LOCK

  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: supersecure
    ports:
      - "5432:5432"




What Does Serverless Mean

Introduction
The explanation is as follows:
The Litmus Test for Serverless
1. Nothing to provision, nothing to manage
2. Usage-based pricing with no minimums
3. Ready with a single API call
4. No planned downtime
5. No instances
Some examples are as follows:
Examples are AWS Lambda, EventBridge, DynamoDB, and Step Functions. But what makes these serverless compared to services like Amazon Aurora or ECS?


Tuesday, December 13, 2022

Filebeat

Filebeat
First, let's look at why Filebeat is needed. The explanation is as follows. In short, it emerged because of Logstash's performance problems.
The original task of Logstash is monitoring logs and transforming them into a meaningful set of fields and eventually streaming the output to a defined destination. However, it has an issue with performance.

So, Elastic has launched Filebeat that use for monitoring logs and streaming the output to a defined destination.

And Logstash acts as an aggregator that ingests data from a multitude of sources, transforms it, and then sends it to your favorite “stash.”
The explanation is as follows:
Filebeat: Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.
As shown in the diagram, it listens to all containers and sends the logs to Logstash or directly to Elasticsearch.

1. Docker Compose
A filebeat.yml file must be specified for Filebeat. We do it like this:
filebeat:
    image: elastic/filebeat:8.0.1
    user: root
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - type: bind
        source: /var/lib/docker/containers
        target: /var/lib/docker/containers    # Listen to running container files
        read_only: true
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock          # The main entry point for the Docker API
        read_only: true
    depends_on:
      - logstash
filebeat.yml File
Example - logstash
We do it like this. It sends the Docker logs to Logstash.
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

output.logstash: 
  hosts: ["logstash:5000"]

logging.level: error
Example - logstash
We do it like this. It sends the application's log file to Logstash.
filebeat.inputs:
- type: log
  paths:
    - "myapp.log"

output.logstash:
  hosts: ["192.168.1.1:4561"]
Example - Elasticsearch
We do it like this. It sends the Docker logs to Elasticsearch.
output.elasticsearch: 
  hosts: ["elasticsearch:5000"]
Example - Elasticsearch
We do it like this. It sends the application's log file to Elasticsearch.
filebeat.inputs:
- type: log
  paths:
    - "myapp.log"

output.elasticsearch:
  hosts: ["http://192.168.1.1:9200"]
Example
We do it like this. It sends our JSON log files to Elasticsearch.
filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /metrics/query-output.log
  json.add_error_key: true
  fields:
    type: "db.metrics"

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]

2. Kubernetes
Example
Suppose we have the following ConfigMap. It reads from the /var/log/*.log files and sends them to Logstash.
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    component: filebeat
data:
  conf.yaml: |
    filebeat.inputs:
    - type: log
      paths:
        - '/var/log/*.log'
    output:
      logstash:
        hosts: [ "logstash:5044" ]
We do it like this:
apiVersion: apps/v1
kind: Deployment
...
spec:
  containers:
  ...
  - name: filebeat
    image: elastic/filebeat:7.16.3
    args:
      - -c
      - /etc/filebeat/conf.yaml
      - -e
    volumeMounts:
      - name: filebeat-config
        mountPath: /etc/filebeat
      - name: log
        mountPath: /var/log
  volumes:
    - name: log
      emptyDir: {}
    - name: filebeat-config
      configMap:
        name: filebeat-config

Monday, December 12, 2022

My Python Notes

apt vs pip3
Suppose we have the following two options. Installation with apt should be preferred.
pip3 install setuptools

apt install python3-setuptools
pip3 command
install option
Example
Let requirements.txt be as follows:
PyYAML
jsonschema
jinja2
We do it like this:
pip3 install -r requirements.txt

Friday, December 9, 2022

Cache Strategies - Cache Access Patterns

Introduction
The methods are as follows. Each method has its own consistency trade-offs.

Read-Heavy Methods
2. Read-Through : Behaves like a single transaction. The database operation is synchronous.

Write-Heavy Methods
1. Write-Through : Behaves like a single transaction. The database operation is synchronous.
2. Write-Behind aka Write-Back : The cache is written first; the database operation happens asynchronously afterwards.


Wednesday, December 7, 2022

Docker Compose and Elasticsearch

Introduction
For an Elasticsearch UI, the explanation is as follows:
Install this free browser plugin Elasticvue for the access to Elasticsearch with UI. The plugin connects to http://localhost:9200 by default. Otherwise, you will need to configure the connection.

In our application's log4j2.xml file we do it like this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Properties>
    <Property name="defaultPattern">[%highlight{%-5level}] %d{DEFAULT} %c{1}.%M() 
      - %msg%n%throwable{short.lineNumber}</Property>
  </Properties>
  <Appenders>
    <Socket name="socket" host="${sys:logstash.host.name:-localhost}" 
      port="${sys:logstash.port.number:-9999}" reconnectionDelayMillis="5000">
      <PatternLayout pattern="${defaultPattern}" />
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="info">
<AppenderRef ref="socket"/>
    </Root>
  </Loggers>
</Configuration>
Example
We do it like this:
elasticsearch:
    image: elasticsearch:8.7.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      discovery.type: single-node
      xpack.security.enabled: false
      ES_JAVA_OPTS: "-Xms1g -Xmx1g"

Example - elasticsearch kubernetes
For the PersistentVolumeClaim we do it like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elastic-pvc
  namespace: default
  labels:
    app: elastic-pvc
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
We do it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic
  namespace: default
  labels:
    app: elastic
spec:
  selector:
    matchLabels:
      app: elastic
  replicas: 1
  template:
    metadata:
      labels:
        app: elastic
    spec:
      containers:
      - name: elastic
        image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 1000m
            memory: 1024Mi
          limits:
            cpu: 1000m
            memory: 2048Mi
        env:
        - name: discovery.type
          value: "single-node"
        ports:
        - containerPort: 9200
          name: elastic-port
        - containerPort: 9300
          name: elastic-intra
        volumeMounts:
        - name: elastic-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: elastic-data
          persistentVolumeClaim:
            claimName: elastic-pvc 
      restartPolicy: Always
For the Service we do it like this:
apiVersion: v1
kind: Service
metadata:
  name: elastic-svc
  namespace: default
spec:
  selector:
    app: elastic
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: intra
Example - elasticsearch + logstash + kibana
We do it like this:
version: '3'

services:
  elasticsearch:
    image: elasticsearch:7.10.1
    container_name: elasticsearch
    volumes:
      - ./volumes/es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
  logstash:
    image: logstash:7.10.1
    container_name: logstash
    command: -f /etc/logstash/conf.d/
    volumes:
      - ./volumes/logstash/:/etc/logstash/conf.d/
    ports:
      - "9999:9999"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    depends_on:
      - elasticsearch
  kibana:
    image: kibana:7.10.1
    container_name: kibana
    volumes:
      - ./volumes/kibana/:/usr/share/kibana/config/
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
Example
We do it like this. Filebeat is not here because it was installed later.
version: '2.2'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.2
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

volumes:
  esdata1:
    driver: local


Thursday, December 1, 2022

git show Option - Gives Information About a Commit

Introduction
The explanation is as follows:
git show can show you a high-level view of changes in a commit, but it also lets you see changes to specific files.
To get information about a changed file we do it like this:
git show <commit> -- <filepath>
--stat option
Gives summary information
Example
We do it like this:
git show <commit> --stat




Tuesday, November 22, 2022

Docker Compose and Localstack

Example
We do it like this:
version: '3.9'
services:
  aws-local:
    container_name: aws-local
    image: localstack/localstack:1.3
    ports:
      - "4566:4566"
      - "8283:8080"
    environment:
      - "SERVICES=sqs,sns,secretsmanager"
Example - volume
We do it like this:
version: "3.8"

services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack:0.14.2
    network_mode: bridge
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:53:53"                #
      - "127.0.0.1:53:53/udp"            #
      - "127.0.0.1:443:443"              #
      - "127.0.0.1:4510-4530:4510-4530"  # ext services port range
      - "127.0.0.1:4571:4571"            #
    environment:
      - DEBUG=${DEBUG-}
      - SERVICES=${SERVICES-}
      - DATA_DIR=${DATA_DIR-}
      - LAMBDA_EXECUTOR=local
      - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-}
      - HOST_TMP_FOLDER=${TMPDIR:-/tmp/}localstack
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DISABLE_CORS_CHECKS=1
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
To see which services are running we do it like this:
http://localhost:4566/health 
The output is as follows. For example, sqs is in the running state and s3 is available.
{
  "features": {
    "initScripts": "initialized"
  },
  "services": {
    "acm": "available",
    "apigateway": "available",
    "cloudformation": "available",
    "cloudwatch": "available",
    "config": "available",
    "dynamodb": "available",
    "dynamodbstreams": "available",
    "ec2": "available",
    "es": "available",
    "events": "available",
    "firehose": "available",
    "iam": "available",
    "kinesis": "available",
    "kms": "available",
    "lambda": "available",
    "logs": "available",
    "opensearch": "available",
    "redshift": "available",
    "resource-groups": "available",
    "resourcegroupstaggingapi": "available",
    "route53": "available",
    "route53resolver": "available",
    "s3": "available",
    "s3control": "available",
    "secretsmanager": "available",
    "ses": "available",
    "sns": "available",
    "sqs": "running",
    "ssm": "available",
    "stepfunctions": "available",
    "sts": "available",
    "support": "available",
    "swf": "available",
    "transcribe": "available"
  },
  "version": "1.1.1.dev"
}
Example
We do it like this. Here some startup scripts are provided for DynamoDB.
version: '3.9'

networks:
  tasks-network:
    driver: bridge

services:
  ...
  tasks-localstack:
    image: localstack/localstack:latest
    container_name: tasks-localstack
    environment:
      - DEBUG=0
      - SERVICES=dynamodb
      - EAGER_SERVICE_LOADING=1
      - DYNAMODB_SHARE_DB=1
      - AWS_DEFAULT_REGION=ap-southeast-2
      - AWS_ACCESS_KEY_ID=DUMMY
      - AWS_SECRET_ACCESS_KEY=DUMMY
      - DOCKER_HOST=unix:///var/run/docker.sock
    ports:
      - "4566:4566"
    volumes:
      - ./utils/docker-volume/localstack:/var/lib/localstack
      - ./utils/docker-volume/dynamodb/items/devices.json:/var/lib/localstack/devices.json
      - ./utils/docker-volume/dynamodb/scripts/create-resources.sh:/etc/localstack/init/ready.d/create-resources.sh
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - tasks-network
devices.json is as follows:
{
  "id": {"S": "123"},
  "name": {"S": "Device name"},
  "description": {"S": "Device description"},
  "status": {"S": "OFF"}
}
create-resources.sh is as follows:
#!/bin/bash

echo "CREATING DEVICES TABLE..."
awslocal dynamodb create-table                                \
  --table-name Devices                                        \
  --attribute-definitions AttributeName=id,AttributeType=S    \
  --key-schema AttributeName=id,KeyType=HASH                  \
  --billing-mode PAY_PER_REQUEST
echo "DONE!"

echo ""
echo "PUTTING DEVICE ITEM..."
awslocal dynamodb put-item                                    \
    --table-name Devices                                      \
    --item file:///var/lib/localstack/devices.json
echo "DONE!"
init script
The explanation is as follows:
The volume section specifies a directory on a PC mapped to a directory inside the container. On the container startup the Localstack checks this directory for bash files, and if it finds executes them. It is useful to create resources, configs, etc. This way you write commands once in the bash file and Localstack executes them automatically on a startup, so you don’t need to type the command manually each time you spin up a container.
Example
We do it like this:
version: '3.8'

services:
  localstack:
    image: localstack/localstack
    ports:
      - '4566:4566' # LocalStack endpoint

    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - ./localstack-script:/etc/localstack/init/ready.d
      - '/var/run/docker.sock:/var/run/docker.sock'
Example
We do it like this. Here an S3 bucket is created.
version: "3.8"

services:
  localstack:
    container_name: localstack_main
    image: localstack/localstack:latest
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"  # external services port range
    environment:
      - DEBUG=1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test          
      - AWS_DEFAULT_REGION=eu-west-1 # Region where your localstack mocks to be running
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - ./aws/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh
The aws/init-aws.sh file, located in the same directory as the Docker Compose file, is as follows:
#!/bin/bash
awslocal s3 mb s3://my-test-bucket
Example
We do it like this. Here the init-scripts directory is mounted into localstack.
version: '3.8'
services:
  localstack:
    container_name: localstack
    image: localstack/localstack:0.11.6
    ports:
      - "4566-4599:4566-4599"
    environment:
      - SERVICES=sqs
    volumes:
      - ./init-scripts:/docker-entrypoint-initaws.d
Let a file in the init-scripts directory be as follows:
#!/bin/bash
echo "########### Setting up localstack profile ###########"
aws configure set aws_access_key_id access_key --profile=localstack
aws configure set aws_secret_access_key secret_key --profile=localstack
aws configure set region sa-east-1 --profile=localstack

echo "########### Setting default profile ###########"
export AWS_DEFAULT_PROFILE=localstack

echo "########### Setting SQS names as env variables ###########"
export SOURCE_SQS=source-sqs
export DLQ_SQS=dlq-sqs

echo "########### Creating DLQ ###########"
aws --endpoint-url=http://localstack:4566 sqs create-queue --queue-name $DLQ_SQS

echo "########### ARN for DLQ ###########"
DLQ_SQS_ARN=$(aws --endpoint-url=http://localstack:4566 sqs get-queue-attributes\
                  --attribute-name QueueArn --queue-url=http://localhost:4566/000000000000/"$DLQ_SQS"\
                  |  sed 's/"QueueArn"/\n"QueueArn"/g' | grep '"QueueArn"' | awk -F '"QueueArn":' '{print $2}' | tr -d '"' | xargs)

echo "########### Creating Source queue ###########"
aws --profile=localstack --endpoint-url=http://localstack:4566 sqs create-queue --queue-name $SOURCE_SQS \
     --attributes '{
                   "RedrivePolicy": "{\"deadLetterTargetArn\":\"'"$DLQ_SQS_ARN"'\",\"maxReceiveCount\":\"2\"}",
                   "VisibilityTimeout": "10"
                   }'

echo "########### Listing queues ###########"
aws --endpoint-url=http://localhost:4566 sqs list-queues

echo "########### Listing Source SQS Attributes ###########"
aws --endpoint-url=http://localstack:4566 sqs get-queue-attributes\
                  --attribute-name All --queue-url=http://localhost:4566/000000000000/"$SOURCE_SQS"
The explanation is as follows:
This file has a couple of commands, that will be executed sequentially.

1. Localstack profile is created
2. DLQ is created
3. ARN for DLQ is obtained
4. Source SQS is created with a redrive policy. In the redrive policy the ARN for the DLQ is specified, and maxReceiveCount, which tells the Source SQS how many times a client can receive a message before it is transferred to the DLQ. A visibility timeout is set to 10 seconds. More options with explanations can be found here.
5. A list of the created queues is returned.
6. A list of attributes of a Source queue is returned. It confirms that Source SQS has attributes specified in the creation command.