Monday, October 31, 2022

git restore command

Introduction
It comes with Git 2.23. git checkout does two jobs at once:
1. Switch branches, or
2. Restore working tree files

These two jobs were split into two separate commands.
Example
We do it like this:
git restore README.md
# same as 'git checkout -- README.md'
git restore --staged README.md
# same as 'git reset HEAD README.md'

git switch command

Introduction
It comes with Git 2.23. git checkout does two jobs at once:
1. Switch branches, or
2. Restore working tree files

These two jobs were split into two separate commands.

Example
We do it like this:
git switch develop
# same as 'git checkout develop'

git switch -c new-branch
# same as 'git checkout -b new-branch'

What Is OpenTelemetry?

What Is OpenTelemetry?
The explanation is as follows. In other words, it emerged from the merger of the OpenTracing and OpenCensus projects.
The OpenTelemetry website states that:

"OpenTelemetry is a collection of tools, APIs, and SDKs. Use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software's performance and behavior."

OpenTelemetry was created by merging the popular OpenTracing and OpenCensus projects. It is a standard that integrates with many open source and commercial products written in many programming languages. Implementations of OpenTelemetry are in varying stages of maturity.

At its core, OpenTelemetry contains the Collector, a vendor-agnostic way to receive, process, and export telemetry data.
How OpenTelemetry Works
- The application is instrumented with the OpenTelemetry SDK.
- The logs are then sent to an OpenTelemetry Collector.
- The necessary filtering is done on the Collector, and
- the Collector sends the logs to a backend.
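As a minimal sketch of this pipeline, a Collector configuration might look roughly like the following; the OTLP receiver, the batch processor, and the backend endpoint are assumptions for illustration.
receivers:
  otlp:
    protocols:
      grpc:              # the instrumented application pushes OTLP over gRPC (port 4317)

processors:
  batch: {}              # batching; filtering steps would also be configured here

exporters:
  otlphttp:
    endpoint: http://example-backend:4318   # hypothetical backend address

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]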
The diagram is as follows:


instrumentation-annotations
Maven
We include this dependency:
<dependency>
  <groupId>io.opentelemetry.instrumentation</groupId>
  <artifactId>opentelemetry-instrumentation-annotations</artifactId>
  <version>1.17.0-alpha</version>
</dependency>
Example
We do it like this:
import io.opentelemetry.instrumentation.annotations.SpanAttribute
import io.opentelemetry.instrumentation.annotations.WithSpan

@WithSpan("ProductHandler.fetch")
suspend fun fetch(@SpanAttribute("id") id: Long): Result<Product> {
  ...
}
What Is an OpenTelemetry Backend?
I moved this to the OpenTelemetry Backend post.

OpenTelemetry Operator
We do it like this:
kubectl apply \
-f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
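Once the operator is installed, a Collector instance is described with an OpenTelemetryCollector custom resource. A minimal sketch, assuming the v1alpha1 API and a logging exporter that just prints telemetry to stdout:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simple-collector      # hypothetical name
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    processors:
      batch:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging]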



Sunday, October 30, 2022

Jenkins Credentials

Example - Token
The explanation is as follows:
Go to Manage Jenkins-> Manage Credentials-> Choose Secret Text type credential and paste the token which is generated in Jfrog Platform and add ID and Description as artifactory-access-token

Example - Username and Password
The explanation is as follows:
To publish an image to your repository you will need to add the credentials of your Docker Hub account to Jenkins, do so by navigating to Manage Jenkins > Manage Credentials > System > Global Credentials > Add Credentials. Add your credentials and give them an id that you will use later.
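As a sketch of how such a credential could later be consumed in a pipeline (the credential id dockerhub-credentials and the image name are assumptions for illustration):
pipeline {
  agent any
  stages {
    stage('Push image') {
      steps {
        // 'dockerhub-credentials' is the id given when the credentials were added
        withCredentials([usernamePassword(credentialsId: 'dockerhub-credentials',
                                          usernameVariable: 'DOCKER_USER',
                                          passwordVariable: 'DOCKER_PASS')]) {
          sh 'echo $DOCKER_PASS | docker login -u $DOCKER_USER --password-stdin'
          sh 'docker push myuser/myapp:latest'   // hypothetical image name
        }
      }
    }
  }
}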
The diagram is as follows:




Saturday, October 29, 2022

Jenkins Global Tool Configuration

Introduction
The menu is:
Manage Jenkins > Global Tool Configuration
Node.js and Docker Installation
The explanation is as follows:
Go back to the dashboard and then to Manage Jenkins > Global Tool Configuration. For both Docker and NodeJS, give it a name and select Install automatically, then select install from nodejs.org and from docker.com. Click Save/Apply.
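A sketch of how these tools could then be referenced from a pipeline, assuming the NodeJS installation was named node-18 in Global Tool Configuration:
pipeline {
  agent any
  tools {
    nodejs 'node-18'        // name given in Global Tool Configuration (assumed)
  }
  stages {
    stage('Build') {
      steps {
        sh 'node --version'
        sh 'npm ci && npm run build'
      }
    }
  }
}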
Node.js looks like this:

Docker looks like this:


Thursday, October 27, 2022

sqlline command

Introduction
For MySQL, mysqlsh can be used instead.

-d option
Specifies the JDBC driver to use.
Example
We do it like this:
$ ./sqlline -d com.hazelcast.jdbc.Driver -u jdbc:hazelcast://localhost:5701

Tuesday, October 25, 2022

Docker Compose and Prometheus

Example
We do it like this:
version: "3"
services:
  app:
    image: quiz-server:latest
    container_name: 'quiz-server'
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
    - '8080:8080'
  prometheus:
    image: prom/prometheus
    container_name: 'prometheus'
    volumes:
    - ./monitor/:/etc/prometheus/
    ports:
    - '9090:9090'
  grafana:
    image: grafana/grafana
    container_name: 'grafana'
    ports:
    - '3000:3000'
    depends_on:
    - prometheus
Example
We do it like this:
services:
  prometheus:
      image: prom/prometheus:v2.35.0
      network_mode: host
      container_name: prometheus
      restart: unless-stopped
      volumes:
        - ./data/prometheus/config:/etc/prometheus/
      command:
        - "--config.file=/etc/prometheus/prometheus.yaml"
Example
We do it like this:
version: "3.8"

services:
  prometheus:
    container_name: prometheus
    image: prom/prometheus:v2.44.0
    user: root
    volumes:
      - ./prometheus:/etc/prometheus
    command:
      - --config.file=/etc/prometheus/prometheus-config.yaml
      - --log.level=debug
    ports:
      - "9090:9090"
    networks:
      - prometheus-network

networks:
 prometheus-network:
The ./prometheus/prometheus-config.yaml file is seen by Prometheus as /etc/prometheus/prometheus-config.yaml.
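A minimal sketch of what that prometheus-config.yaml might contain; scraping Prometheus itself plus a hypothetical application container on the same network:
# ./prometheus/prometheus-config.yaml (illustrative)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'app'
    metrics_path: /actuator/prometheus    # assumes a Spring Boot Actuator endpoint
    static_configs:
      - targets: ['app:8080']             # hypothetical service name on the compose network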

node_exporter
With node_exporter it is also possible to monitor the operating system.
Example
We do it like this:
version: "3.9"

services:
  microservices_postgresql:
    image: postgres:latest
    container_name: microservices_postgresql
    expose:
      - "5432"
    ports:
      - "5432:5432"
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bank_accounts
      - POSTGRES_HOST=5432
    command: -p 5432
    volumes:
      - ./docker_data/microservices_pgdata:/var/lib/postgresql/data
    networks: [ "microservices" ]

  redis:
    image: redis:latest
    container_name: microservices_redis
    ports:
      - "6379:6379"
    restart: always
    networks: [ "microservices" ]

  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro
    networks: [ "microservices" ]

  node_exporter:
    container_name: microservices_node_exporter
    restart: always
    image: prom/node-exporter
    ports:
      - '9101:9100'
    networks: [ "microservices" ]

  grafana:
    container_name: microservices_grafana
    restart: always
    image: grafana/grafana
    ports:
      - '3000:3000'
    networks: [ "microservices" ]

  zipkin:
    image: openzipkin/zipkin:latest
    restart: always
    container_name: microservices_zipkin
    ports:
      - "9411:9411"
    networks: [ "microservices" ]


networks:
  microservices:
    name: microservices
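For Prometheus to actually collect operating-system metrics from this node_exporter container, the monitoring/prometheus.yml mounted above would need a scrape job pointing at it. A sketch (the job name is an assumption):
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['node_exporter:9100']   # service name on the "microservices" network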

Friday, October 21, 2022

Google Cloud - Google Cloud Storage

Testcontainers can be used to test software that uses GCS. An example is here. The explanation is as follows:
Google Cloud offers a series of emulators for Testcontainers, such as an emulator for BigQuery or PubSub. Unfortunately, an emulator for Google Cloud Storage is not available.

However, fsouza, a GitHub user, generously made an Open Source library for emulating Google Cloud Storage API.
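A sketch of running that emulator directly with Docker; the image name is fsouza/fake-gcs-server, and the -scheme http / -port flags shown here are assumptions about its typical defaults:
docker run -d --name fake-gcs -p 4443:4443 \
  fsouza/fake-gcs-server -scheme http -port 4443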
Create a Bucket
The explanation is as follows:
From the left menu, go to Cloud Storage and then you can create a bucket.
The diagram is as follows:


Service account
A Service Account is needed to access Google Cloud Storage. We do it like this:
- Click on the left menu and IAM & Admin -> Service Accounts

- Click on the CREATE SERVICE ACCOUNT


- Now you can fill in the required details.
- Then click on the created service account email.
- Then click on KEYS -> ADD KEY -> Create new key as below.


- Then it will ask about the key type, choose JSON and the key will be downloaded. This is required for the Spring Boot service.




Apache Kafka Rebalancing Protocol

What Is a Rebalance?
The explanation is as follows:
But you can ask, what will happen when we add a new consumer to the existing and running group? The process of switching ownership from one consumer to another is called rebalance. It is a small break from receiving messages for the whole group. The idea of choosing which partition goes to which consumer is based on the coordinator election problem.
The rebalance operation takes a "stop the world" approach. The explanation is as follows:
There is, however, a drawback of the default rebalancing approach. Each consumer gives up its entire assignment of topic partitions, and no processing takes place until the topic partitions are reassigned — sometimes referred to as a “stop-the-world” rebalance. To compound the issue, depending on the instance of the ConsumerPartitionAssignor used, consumers are simply reassigned the same topic partitions that they owned prior to the rebalance, the net effect being that there is no need to pause work on those partitions. This implementation of the rebalance protocol is called eager rebalancing because it prioritizes the importance of ensuring that no consumers in the same group claim ownership over the same topic partitions. 
Rebalance Problems
The explanation is as follows:
Kafka tries to rebalance partitions every time rolling new code on each machine. Unsurprisingly, this problem is notorious in Kafka / Kafka’s stream applications world. Here is how the protocol works when rolling updates:

- When we roll out new code, let us say on an instance A, which hosts a stream application, it will close the current running stream application. As the consequence, which will send a “leave group” request to the group coordinator.

- After instance A finishes rolling out a new code, instance A joins again the old consumer group by sending a “join group” request.

- Kafka membership protocol will resolve again and divide partitions again for each node. Instance A might receive a different partition, which needs to rebuild again the state store from the changelog topic respected to the assigned partition.

- Even worse, when rebalancing is happening, the whole consumer group will stop-the-world, all operations, including the cluster-internal one, cannot work normally and wait for rebalancing to finish. After that, some consumers will stop and restore the original state from the changelog topics. If changelog topics are huge, the service could be slowed down/unresponsive for a long time before coming back to the normal state.
The solutions are:
1. Optimize consumer configurations
2. Optimize state store configurations
3. Change to Static Membership
4. Maintain active and inactive clusters
5. Incremental Cooperative Rebalancing
6. Moving from stateful consumers to stateless consumers
7. Upgrading to Kafka’s stream 2.6
8. Accept downtime

1. Optimize consumer configurations
There is an article here.
There is another article here.
A timeout can be assigned to the consumer using the session.timeout.ms field. The explanation is as follows. In other words, because we increase the heartbeat timeout, Kafka does not notice while the new version is starting up.
Default value: 3000 (3 seconds)

Timeout for the heartbeat thread. Prevent unnecessary rebalancing due to network issues or the application doing a GC that causes heartbeats not to be sent to the broker coordinator.

It’s also good to understand the differences between session timeout and polling timeout. See https://stackoverflow.com/questions/39730126/difference-between-session-timeout-ms-and-max-poll-interval-ms-for-kafka-0-10
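A sketch of the related consumer settings (the values are illustrative, not recommendations):
# consumer.properties
session.timeout.ms=30000        # how long the coordinator waits for heartbeats before triggering a rebalance
heartbeat.interval.ms=3000      # how often the heartbeat thread pings the group coordinator
max.poll.interval.ms=300000     # maximum time between poll() calls before the consumer is removed from the group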

5. What Is Incremental Cooperative Rebalancing?
The explanation is as follows. Instead of the "stop the world" approach, the group leader is expected to publish only the assignments that change, so consumers do not have to stop.
First introduced to Kafka Connect in Apache Kafka 2.3, incremental cooperative rebalancing has now been implemented for the consumer group protocol too. With the cooperative approach, consumers don’t automatically give up ownership of all topic partitions at the start of the rebalance. Instead, all members encode their current assignment and forward the information to the group leader. The group leader then determines which partitions need to change ownership — instead of producing an entirely new assignment from scratch. Now a second rebalance is issued, but this time, only the topic partitions that need to change ownership are involved. It could be revoking topic partitions that are no longer assigned or adding new topic partitions. For the topic partitions that are in both the new and old assignment, nothing has to change, which means continued processing for topic partitions that aren’t moving. The bottom line is that eliminating the "stop-the-world" approach to rebalancing and only stopping the topic partitions involved means less costly rebalances, thus reducing the total time to complete the rebalance. Even long rebalances are less painful now that processing can continue throughout them. 

This positive change in rebalancing is made possible by using the CooperativeStickyAssignor. The CooperativeStickyAssignor makes the trade-off of having a second rebalance but with the benefit of a faster return to normal operations. To enable this new rebalance protocol, you need to set the partition.assignment.strategy to use the new CooperativeStickyAssignor. Also, note that this change is entirely on the client-side. To take advantage of the new rebalance protocol, you only need to update your client version. If you’re a Kafka Streams user, there is even better news. Kafka Streams enables the cooperative rebalance protocol by default, so there is nothing else to do.
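For a plain Java consumer this is a single client-side setting (Kafka Streams already enables it by default, as noted above):
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor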

Friday, October 14, 2022

Prometheus prometheus.yml File

Introduction
The explanation is as follows:
The default configuration file has four main configuration sections: global, alerting, rule_files, and scrape_configs.
1. The global Section
The explanation is as follows:
The global section contains global configurations for the entire Prometheus.

The field scrape_interval defines how long Prometheus will pull data once, we specify 15 seconds above.

The field evaluation_interval defines how long Prometheus will re-evaluate the rule once, temporarily we do not need to care about this configuration.
Example
We do it like this:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
Example
We do it like this:
# my global config
global:
  # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  scrape_interval: 15s 
  # Evaluate rules every 15 seconds. The default is every 1 minute.
  evaluation_interval: 15s 
  # scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 
#'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this 
  # config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
2. The alerting Section
The explanation is as follows:
The alerting section contains the configuration of the tool that we will send an alert to if there is a problem with our system, as mentioned above for Prometheus, we will use Alertmanager. Right now we don’t need to use an alert, so we close it with a #.
3. The rule_files Section
The explanation is as follows:
The rule_files section will contain a configuration that defines the rules for when Prometheus will need to fire an alert via the Alertmanager, and the rules about recording, which we will learn about later.
The explanation is as follows:
Alerting rules are used to define conditions under which alerts are triggered. As an essential part of monitoring and reliability engineering, you can set up notifications via various channels such as email, Slack, or Squadcast to help detect and resolve issues before they become critical.

In this case, the rule_files field points to a directory containing alert rules, which define the conditions under which alerts are triggered. Triggered alerts get sent to the specified Alertmanager targets, which you can further configure to send notifications to various channels, such as email or the Squadcast platform.
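A sketch of what a file matched by a rule_files pattern such as /etc/prometheus/rules/*.rules could contain; the alert name, expression, and threshold below are assumptions:
# /etc/prometheus/rules/example.rules
groups:
  - name: example-alerts
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} has been down for more than 5 minutes"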

Example
We do it like this:
global:
  scrape_interval: 15s
  evaluation_interval: 1m

rule_files:
  - /etc/prometheus/rules/*.rules

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

scrape_configs:
  - job_name: 'darwin-service-1'
    scrape_interval: 5s
    static_configs:
      - targets: ['darwin-service-1:80']
    relabel_configs:
      - source_labels: [job]
        target_label: job
        replacement: 'darwin-new-service'

  - job_name: 'darwin-service-2'
    scrape_interval: 10s
    static_configs:
      - targets: ['darwin-service-2:80']
4. The scrape_configs Section

Consul
Example
We do it like this. Here consul_sd_configs is used with a service named my-service.
# my global config
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # By default, scrape targets every 15 seconds.
  # scrape_timeout is set to the global default (10s).

scrape_configs:
# consul job
- job_name: 'consul'
  consul_sd_configs:
  - server: 'apm-showroom-consul-server:8500'
    services:
    - my-service
  metrics_path: '/actuator/prometheus'
  relabel_configs:
  - source_labels: [__meta_consul_service_id]
    regex: 'my-service-(.*)-(.*)-(.*)'
    replacement: 'my-service-$1'
    target_label: node
  - source_labels: [__meta_consul_service_id]
    target_label: instance
# prometheus job
- job_name: 'prometheus'
  static_configs:
  - targets: 
    - 'apm-showroom-prometheus:9090'
The explanation is as follows:
Prometheus scrape all the instances related to the service my-service by resolving them dynamically.

No need to know the ip address nor the hostname nor the port.

You can see in the prometheus.yml configuration that we create a new label node thanks to the relabel_config this label will have the node name without the reference to the metrics type.



Docker and Prometheus

Example
We do it like this:
docker pull prom/prometheus
To run it, we do it like this:
docker run -d -p 9090:9090 -v $PWD/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus
Example
We do it like this. In this case a directory is given as the volume.
docker network create local

docker run \
  --name prometheus \
  --network local \
  -p 9090:9090 \
  -v /etc/prometheus:/etc/prometheus \
  -d \
  prom/prometheus
Example - node_exporter
We do it like this:
wget https://github.com/prometheus/node_exporter/releases/download/v1.4.0/node_exporter-1.4.0.linux-amd64.tar.gz
sudo docker run -it -p 9100:9100 --name node ubuntu:18.04

sudo docker cp node_exporter-1.4.0.linux-amd64.tar.gz node:/node_exporter-1.4.0.linux-amd64.tar.gz

#In docker container
tar xvfz node_exporter-1.4.0.linux-amd64.tar.gz
cd node_exporter-1.4.0.linux-amd64
./node_exporter &

Prometheus PromQL

Introduction
It is the query language used by Prometheus. The syntax is:
<metric name>{<label name>=<label value>, ...} <samples>
A PromQL Introduction article is here.

Example
We do it like this:
up{job="prometheus"}
Example
We do it like this:
container_cpu_load_average_10s{id="/docker", instance="10.0.2.15:8080", job="docker"} 0
Functions
Some PromQL functions are:
sum()
avg()
topk()
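A few illustrative queries using these functions; the metric names are assumptions:
# total CPU usage rate summed over all containers of a job
sum(rate(container_cpu_usage_seconds_total{job="docker"}[5m]))

# average memory usage per instance
avg(container_memory_usage_bytes) by (instance)

# the 3 series with the highest request rate
topk(3, rate(http_requests_total[5m]))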


Tuesday, October 11, 2022

Elastic Stack - Log Management

Introduction
The explanation is as follows:
Elasticsearch is an open-source search engine and analytics store used by a variety of applications from search in e-commerce stores, to internal log management tools using the ELK stack (short for “Elasticsearch, Logstash, Kibana”).
An article on installation and configuration is here.

Other Options
1. Jaeger + ElasticSearch.
2. FluentD + ElasticSearch. An example is here.

Elastic Stack - Log Management Solutions
The solutions are as follows, in the form of with/without Logstash and with/without Filebeat:
1. Application -> Filebeat -> Logstash -> Elasticsearch
2. Application -> Filebeat -> Elasticsearch
3. Application (Java) + Logstash-logback-encoder -> Logstash -> Elasticsearch
Logstash-logback-encoder
Filebeat is not needed. The application attaches a new appender to logback and sends the logs directly to Logstash.
Maven
We do it like this:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.3</version>
    <scope>runtime</scope>
</dependency>
Gradle
We do it like this:
implementation("net.logstash.logback:logstash-logback-encoder:7.3")
Example
logback.xml is as follows. Here LogstashTcpSocketAppender is used as the appender.
<property name="STACK_TRACE_COUNT" value="15"/>
<property name="CLASS_NAME_LENGTH" value="40"/>

<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <destination>192.168.1.1:4560</destination>
  <addDefaultStatusListener>false</addDefaultStatusListener>

  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <pattern>
        <pattern>{"app_name": "myapp", "app_version":"1.0.0", "hostname": "${HOSTNAME}"}</pattern>
      </pattern>
      <mdc/>
      <timestamp/>
      <message/>
      <threadName/>
      <logLevel/>
      <callerData/>
      <stackTrace>
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
          <maxDepthPerThrowable>${STACK_TRACE_COUNT}</maxDepthPerThrowable>
          <shortenedClassNameLength>${CLASS_NAME_LENGTH}</shortenedClassNameLength>
          <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
      </stackTrace>
    </providers>
  </encoder>
</appender>


<root level="${ROOT_LEVEL}">
   <appender-ref ref="CONSOLE"/>
   <appender-ref ref="LOGSTASH"/>
</root>
Example
In logback.xml we do it like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <includeMdc>true</includeMdc>
    </encoder>
  </appender>


  <root level="INFO">
    <appender-ref ref="JSON"/>
  </root>
</configuration>
The log output is now:
{
  "@timestamp": "2023-06-17T13:41:01.134+01:00",
  "@version": "1",
  "message": "Hello World",
  "logger_name": "no.f12.application",
  "userId": "user-id-something",
  "documentId": "document-id",
  "documentType": "legal",
  "thread_name": "somethread",
  "level": "INFO",
  "level_value": 20000
}

The Beats + Logstash + Elasticsearch + Kibana Option
The diagram is as follows. It consists of Beats + Logstash + Elasticsearch + Kibana.


Another diagram is as follows:
The explanation is as follows. Although it is shown as Beat in the diagrams, the container name is Filebeat.
Elasticsearch: Elasticsearch is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.

Logstash: Logstash is a light-weight, open-source, server-side data processing pipeline that allows you to collect data from a variety of sources, transform it on the fly, and send it to your desired destination.

Kibana: Kibana is an free and open frontend application that sits on top of the Elastic Stack, providing search and data visualization capabilities for data indexed in Elasticsearch.

Filebeat: Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.
Filebeat
I moved this to the Filebeat post.

Kibana
I moved this to the Kibana post.

Example - Using Everything Together
Let filebeat.cm.yaml be as follows. It reads from the /var/log/*.log files and sends them to Logstash.
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  labels:
    component: filebeat
data:
  conf.yaml: |
    filebeat.inputs:
    - type: log
      paths:
        - '/var/log/*.log'
    output:
      logstash:
        hosts: [ "logstash:5044" ]
Let our application and Filebeat be as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    component: busybox
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: busybox
  template:
    metadata:
      labels:
        component: busybox
    spec:
      containers:
        - name: busybox
          image: busybox
          args:
            - sh
            - -c
            - >
              while true;
              do echo $(date) - filebeat log >> /var/log/access.log;
              sleep 10;
              done
          volumeMounts:
            - name: log
              mountPath: /var/log
        - name: filebeat
          image: elastic/filebeat:7.16.3
          args:
            - -c
            - /etc/filebeat/conf.yaml
            - -e
          volumeMounts:
            - name: filebeat-config
              mountPath: /etc/filebeat
            - name: log
              mountPath: /var/log
      volumes:
        - name: log
          emptyDir: {}
        - name: filebeat-config
          configMap:
            name: filebeat-config
The explanation is as follows:
In the Pod above we mount the Filebeat configuration file into the /etc/filebeat/conf.yaml file and use the args to specify that configuration file for Filebeat.

Our application container writes a log to the file /var/log/access.log every 10s. We use emptyDir volumes to share storage between two containers.

Let logstash.cm.yaml be as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash
  labels:
    component: logstash
data:
  access-log.conf: |
    input {
      beats {
        port => "5044"
      }
    }
    output {
      elasticsearch {
        hosts => [ "elasticsearch:9200" ]
      }
    }
Let logstash.yaml be as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    component: logstash
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: logstash
  template:
    metadata:
      labels:
        component: logstash
    spec:
      containers:
        - name: logstash
          image: logstash:7.16.3
          ports:
            - containerPort: 5044
          volumeMounts:
            - name: logstash-config
              mountPath: /usr/share/logstash/pipeline
      volumes:
        - name: logstash-config
          configMap:
            name: logstash
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  labels:
    component: logstash
spec:
  ports:
  - port: 5044
  selector:
    component: logstash
Let Elasticsearch be as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: elasticsearch
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: elasticsearch:7.16.3
          ports:
            - containerPort: 9200
              name: client
            - containerPort: 9300
              name: nodes
          env:
            - name: JAVA_TOOL_OPTIONS
              value: -Xmx256m -Xms256m
            - name: discovery.type
              value: single-node
          resources:
            requests:
              memory: 500Mi
              cpu: 0.5
            limits:
              memory: 500Mi
              cpu: 0.5
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    component: elasticsearch
spec:
  ports:
  - port: 9200
    name: client
  - port: 9300
    name: nodes
  selector:
    component: elasticsearch
Let Kibana be as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
        - name: kibana
          image: kibana:7.16.3
          ports:
            - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  ports:
  - port: 5601
  selector:
    component: kibana
The explanation is as follows:
Now, go to menu Stack Management > Index patterns and create an index pattern, then go to menu Discover and you’ll see the logs we collected from the busybox container.
Example - Kibana
We do it like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: default
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  replicas: 1
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.5.2
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 1024m
            memory: 512Mi
        env:
          - name: ELASTICSEARCH_HOSTS
            value: '["http://elastic-svc:9200"]'
          - name: SERVER_NAME
            value: 'https://kibana.example.com'
        ports:
        - containerPort: 5601
          name: kibana
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-svc
  namespace: default
spec:
  selector:
    app: kibana
  type: ClusterIP
  ports:
  - port: 5601
    targetPort: 5601
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: kibana-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
spec:
  tls:
    - hosts:
      - kibana.example.com
      secretName: kibana-tls
  rules:
  - host: kibana.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana-svc
            port:
              number: 5601




GitHub CLI

Pull Request Checkout
When GitHub is used with the "fork model", IntelliJ for some reason was not showing Pull Requests. So I installed the GitHub CLI. To check out a Pull Request, we go to the directory containing the master branch and do this:
gh pr checkout 22440
-R option
One time, I am not sure how, but when I ran the gh pr checkout ... command it asked me which repository we wanted to use:
? Which should be the base repository (used for e.g. querying issues) for this directory? 
orcunc/hazelcast-code-samples
hazelcast/hazelcast-code-samples
I mistakenly pointed at my own repo, but it later gave this error:
GraphQL: Could not resolve to a PullRequest with the number of 576. (repository.pullRequest
So I used the -R option to point at the other repo. I did this:
gh pr checkout 576 -R hazelcast/hazelcast-code-samples
Jenkins
If I want to run my own Pull Request on our Jenkins machine for testing, I do the following:
1. I copy a project.
2. Under Configure / Source Code Management,
Repository URL : git@github.com:orcunc/hazelcast-enterprise.git
Branch Specifier : fix/5.4/hz-2488_wan_compact_schema
are set, and the build is triggered.



Monday, October 10, 2022

Product Manager

Introduction
A Product Manager may do different things depending on the software development methodology used.

1. From a Scrum Perspective
Who Creates the Epics?
The explanation is as follows. The Product Owner creates the Epics. The Epics are later split into Stories.
Product Managers had provided the requirements which Product Owner had broken into Epics
So the Product Manager provides the requirements.

2. From a Scaled Agile Framework (SAFe) Perspective
The explanation is as follows. It is similar, but providing the requirements is not stated explicitly.
The Safe Agilist and Product Manager define and break up larger pieces of work (often inherited from the portfolio process above) and then pass the pieces into the teams.
3. What Do They Actually Do?
The Product Manager has a long-term view. The explanation is as follows.
Contrary to a project, that can comprise a temporary slot in time, a product itself is more long term. In essence, product management revolves around the product, i.e. anything that can be offered to a market to solve an issue or cater to a need (or want).

The Product Manager is responsible for a product’s success through the whole product lifecycle. Focusing on the “what” more than the “how,” a Product Manager holds a high-level view and steers the growth and progress of the product.
Another explanation is as follows:
Product management is not about:
 
- Asking customers about the requirements
- Writing detailed specifications
- Creating prototypes instead of designers
- Instructing developers on what to do
- Verifying and accepting the work of others
- Obsessing over velocity, deadlines, and roadmaps
- Mastering the role of the Scrum PO to perfection
- Acting like the CEO of the product

Anyone can do that.

It's about:
 
- Understanding the market and the business in depth
- Focusing on customer's problems, needs, and desires
- Collaborating closely with engineers and designers
- Identifying opportunities, ideating solutions, and tackling the risks together
- Marrying customer goals and business goals
- Influencing others to work toward the common goal
- Being humble (it's ok not to be the smartest person in the room)
- Leading without authority
 
Start with those questions:
 
- Why are we building this thing?
- Why are we building it now?
- For whom are we building it?
- What's the unique value of our product?
- How is it aligned with the company's vision?
- How is it aligned with the business strategy?
- What does success look like? How can we measure it?
- What are the critical customer jobs (functional, emotional, social)?
- How will it affect our customers and users?
- How will it create value for the business?
- Can we buy it instead of building it?
- How can we make sure that our customers would love it?
- Can our business support it (e.g., legal, finances)?
- How can we bring it to the market? Do we have the required channels?
- Is it feasible? Can our engineers implement it?
- Should we do it at all? Are there any ethical considerations?
- What are the other risks? How can we mitigate them?
- What are the riskiest assumptions? How can we validate them?
What Is a Roadmap?
The explanation is as follows. It is a list of ideas. The ideas are discovered from many different sources.
Roadmap is not a backlog of tasks.
It is a list of ideas that need testing to enhance your product to delight your customers.

Ideas are discovered via data analysis, customer feedback, research, or plain ideation.

Great ideas can not be predicted :) Assumptions need to be tested.

After discovery, Continuous Delivery puts minimum viable product improvements to production.

Tests bring new data, which is used to discover new ideas, and this product enhancement loop continues forever.




Sunday, October 9, 2022

Docker and NGINX

Introduction
The hand-written conf file is copied into the /etc/nginx/conf.d/ directory. Its new name can be default.conf or another .conf name.

Example
We do it like this:
FROM nginx
LABEL "Project"="Vproject"
LABEL "Author"="Onumaku chibuike"

RUN rm -rf /etc/nginx/conf.d/default.conf
COPY nginvproapp.conf /etc/nginx/conf.d/vproapp.conf
The nginvproapp.conf file is as follows:
upstream vproapp {
 server vproapp:8080;
}
server {
  listen 80;
  location / {
    proxy_pass http://vproapp;
  }
}
Example
We do it like this:
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/default.conf
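To build and run an image from such a Dockerfile, something like the following could be used; the image name is an assumption:
docker build -t my-nginx .
docker run -d -p 80:80 my-nginx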