Thursday, June 22, 2023

Prometheus prometheus.yml File - Scrape Configurations

Introduction
The servers to be scraped are defined here. Explained as follows:
The attribute job_name is used to define the name of the target we monitor, here we set it as “prometheus”.

The targets attribute is used to define the address of the target, which will be an array of addresses.

The default configuration of Prometheus is to monitor itself, when you run Prometheus it will listen on port 9090 and provide a path /metrics for us to get its metrics.
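A minimal configuration that does exactly this self-scrape looks like the following; this is essentially the default prometheus.yml that ships with Prometheus:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  # Prometheus scrapes its own /metrics endpoint on port 9090
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
```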
The static_configs Field
Specifies the addresses of the targets to be scraped.

Example
We do it like this:
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'darwin-service-1'
    scrape_interval: 5s
    static_configs:
      - targets: ['darwin-service-1:80']
    relabel_configs:
      - source_labels: [job]
        target_label: job
        replacement: 'darwin-new-service'

  - job_name: 'darwin-service-2'
    scrape_interval: 10s
    static_configs:
      - targets: ['darwin-service-2:80']
Explained as follows:
In this example, two separate static_configs blocks are defined, each with a different job_name. The first block scrapes a single target, darwin-service-1:80, every 5 seconds, while the second block scrapes a single target, darwin-service-2:80, every 10 seconds.
Example - Jenkins
Install the Prometheus plugin on Jenkins. After installing it, the metrics address is "<Public-IP>:8080/prometheus".
We do it like this:
- job_name: "Jenkins Job"
  static_configs:
    - targets: ["<Public IP of Jenkins Node>:8080"]
Example
We do it like this:
global:
  scrape_interval: 5s
scrape_configs:
  - job_name: prometheus
    honor_labels: true
    static_configs:
      - targets: [ "localhost:9090" ]
  - job_name: 'spring_micrometer'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['your_host_ip:8080']
The metrics_path Field
Example
We do it like this:
scrape_configs:
  - job_name: 'Spring Boot Application input'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 2s
    static_configs:
      - targets: ['localhost:8000']
        labels:
          application: "My Spring Boot Application"
Example
We do it like this. This scrapes a Spring application:
scrape_configs:
    - job_name: 'spring-actuator'
      metrics_path: '/actuator/prometheus'
      scrape_interval: 5s
      static_configs:
        - targets: ['127.0.0.1:8080']
Example
We do it like this. This scrapes a Spring application:
global:
  scrape_interval:   15s

  external_labels:
    monitor: 'quiz-server-monitoring'

scrape_configs:
- job_name:       'quiz-server'
  scrape_interval: 10s
  metrics_path: '/actuator/prometheus'
  static_configs:
  - targets: ['app:8080']
    labels:
      application: 'quiz-server'
Example
We do it like this. This scrapes a Spring application:
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'spring_micrometer'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['192.168.2.8:8080']
The relabel_configs Field - Relabeling
Explained as follows:
Relabeling allows you to transform or modify the scraped data labels before storing them in the time-series database. This is useful for modifying labels to match your naming conventions or adding additional metadata to the scraped data.
Example
Explained as follows:
For instance, if you want to modify the job label of darwin-service-1 and save the scraped metrics to a new value, say darwin-new-service, you relabel the prometheus.yml configuration file as follows.
We do it like this:
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'darwin-service-1'
    scrape_interval: 5s
    static_configs:
      - targets: ['darwin-service-1:80']
    relabel_configs:
      - source_labels: [job]
        target_label: job
        replacement: 'darwin-new-service'

  - job_name: 'darwin-service-2'
    scrape_interval: 10s
    static_configs:
      - targets: ['darwin-service-2:80']
The resources Field
Explained as follows:
In a default Prometheus configuration, you deploy containers without resource limits, consequently leading to suboptimal performance of the operating cluster. Instead, you can configure resource consumption at the job, instance, or global level by explicitly defining the limit in the prometheus.yml configuration file.
We do it like this. (Note: resources is not part of the official prometheus.yml scrape_config schema; requests/limits blocks like these are normally set on the container itself, e.g. in a Kubernetes pod spec.)
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'darwin-service-1'
    scrape_interval: 5s
    static_configs:
      - targets: ['darwin-service-1:80']
    relabel_configs:
      - source_labels: [job]
        target_label: job
        replacement: 'darwin-new-service'
    resources:
      requests:
        memory: 2Gi
        cpu: 1
      limits:
        memory: 4Gi
        cpu: 2

  - job_name: 'darwin-service-2'
    scrape_interval: 10s
    static_configs:
      - targets: ['darwin-service-2:80']
    resources:
      requests:
        memory: 1Gi
        cpu: 0.5
      limits:
        memory: 2Gi
        cpu: 1
The alerting Field
Explained as follows. In other words, this defines which channel an alert is sent to if something goes wrong:
The alerting section contains the configuration of the tool that we will send an alert to if there is a problem with our system
Explained as follows:
Alerting rules are used to define conditions under which alerts are triggered. As an essential part of monitoring and reliability engineering, you can set up notifications via various channels such as email, Slack, or Squadcast to help detect and resolve issues before they become critical.

In this case, the rule_files field points to a directory containing alert rules, which define the conditions under which alerts are triggered. Triggered alerts get sent to the specified Alertmanager targets, which you can further configure to send notifications to various channels, such as email or the Squadcast platform.
Example
We do it like this. (Note: in the official schema, alerting is a top-level section of prometheus.yml, not a per-job field.)
global:
  scrape_interval: 15s
  evaluation_interval: 1m

rule_files:
  - /etc/prometheus/rules/*.rules

scrape_configs:
  - job_name: 'darwin-service-1'
    scrape_interval: 5s
    static_configs:
      - targets: ['darwin-service-1:80']
    relabel_configs:
      - source_labels: [job]
        target_label: job
        replacement: 'darwin-new-service'
    resources:
      requests:
        memory: 2Gi
        cpu: 1
      limits:
        memory: 4Gi
        cpu: 2
    alerting:
      alertmanagers:
      - static_configs:
        - targets:
          - alertmanager:9093
  - job_name: 'darwin-service-2'
    scrape_interval: 10s
    static_configs:
      - targets: ['darwin-service-2:80']
    resources:
      requests:
        memory: 1Gi
        cpu: 0.5
      limits:
        memory: 2Gi
        cpu: 1

    alerting:
      alertmanagers:
      - static_configs:
        - targets:
          - alertmanager:9093
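The files matched by rule_files are separate YAML rule files. A minimal sketch of such a file; the alert name, threshold, and file path here are illustrative, not taken from the setup above:

```yaml
# /etc/prometheus/rules/example.rules
groups:
  - name: darwin-alerts
    rules:
      - alert: InstanceDown
        # 'up' is the built-in metric Prometheus sets to 0 when a scrape fails
        expr: up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```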


Monday, June 19, 2023

The aws sso Option

The login Option
Example
We do it like this. It launches the browser and redirects to the AWS SSO login page, where we enter the username and password:
$ aws sso login 
The full output looks like this:
> aws s3 ls --profile poweruser

The SSO session associated with this profile has expired or is otherwise invalid. 
To refresh this SSO session run aws sso login with the corresponding profile.

> aws sso login --profile poweruser
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this
request, open the following URL:

https://device.sso.us-east-1.amazonaws.com/

Then enter the code:

ABCD-EFGH
Successfully logged into Start URL: https://fooo.awsapps.com/start

> aws s3 ls --profile poweruser
...

Monday, June 12, 2023

Supervisory Control and Data Acquisition - SCADA

Introduction
It has become even more important with Industrial IoT and Industry 4.0. Shown in the figure.

Its evolution:
First generation: Monolithic
Second generation: Distributed
Third generation: Networked
Fourth generation: Web-based
Fifth generation: Cloud-native and open SCADA systems.


Friday, June 9, 2023

Quotient Filter - Probabilistic Data Structure - Has False Positives

Introduction
Explained as follows:
A quotient filter is a space-efficient probabilistic data structure that is used to test whether an item is a member of a set. The quotient filter will always say yes if an item is a set member. However, the quotient filter might still say yes although an item is not a member of the set (false positive). The quotient filter stores only a part of the item’s hash fingerprint along with additional metadata bits.
The Effect of False Positives
Shown in the figure: on the far right, a false positive causes an unnecessary database access.
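The quotient/remainder idea can be sketched in a few lines. This is a deliberately simplified illustration, not a real implementation: an actual quotient filter stores slots in a flat array with three metadata bits per slot (is_occupied, is_continuation, is_shifted) to resolve collisions compactly; here each bucket simply keeps a set of remainders.

```python
import hashlib

class SimpleQuotientFilter:
    """Simplified sketch: split an item's hash fingerprint into a
    quotient (bucket index) and a remainder (the stored part)."""

    def __init__(self, q_bits=8, r_bits=8):
        self.q_bits = q_bits
        self.r_bits = r_bits
        self.buckets = [set() for _ in range(1 << q_bits)]

    def _fingerprint(self, item):
        # Take (q_bits + r_bits) bits of the hash as the fingerprint
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:4], "big")
        f = h & ((1 << (self.q_bits + self.r_bits)) - 1)
        return f >> self.r_bits, f & ((1 << self.r_bits) - 1)  # (quotient, remainder)

    def add(self, item):
        q, r = self._fingerprint(item)
        self.buckets[q].add(r)

    def might_contain(self, item):
        # Always True for an added item; can also be True for an item
        # never added, if it shares a (q, r) pair -> false positive
        q, r = self._fingerprint(item)
        return r in self.buckets[q]
```

Since only part of the hash is stored, two different items can collide on the same (quotient, remainder) pair, which is exactly where the false positives come from.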




Wednesday, June 7, 2023

What Is Polyglot Persistence?

Introduction
Explained as follows:
Polyglot persistence is a term that refers to using multiple data storage technologies within a single system, in order to meet varying data storage needs.
Where Did the Term First Appear?
The term was coined by Martin Fowler around 2011. The original article is here.

Explained as follows. The starting point is the shift toward microservice architecture, where each service stores its data according to its own needs:
This concept of platform diversity was extended by Martin Fowler to databases in approx. 2011 when he coined (in what he calls his "bliki" - blog/wiki) the term polyglot persistence. His point was that different functionalities within a system can be best served by different databases
...
With the current move away from software monoliths and the trend for microservices where each area of a system is a different programme, possibly/probably living in a different container and/or server, this model is becoming more and more pertinent since the universal data interchange language of JSON and RESTful web services means that the impedance mismatch between different database systems is mitigated and not as important as it perhaps once was.

Google Cloud Datastream

Introduction
Explained as follows. For CDC it supports only the Oracle, MySQL, and PostgreSQL databases:
If you use GCP and want to CDC, the limitation of Datastream is that it only supports a small number of databases at the moment.

Tuesday, June 6, 2023

What Is a Vector Database?

Introduction
Explained as follows:
A vector database is specifically engineered to efficiently deal with vector data. So, what’s vector data? It represents data points in multi-dimensional space, a mathematical approach to defining real-world information.

Consider this, you have an assortment of images. Each of these images can be represented as a vector in a high-dimensional space where each dimension relates to some feature of the image (like color, shape, or texture). By comparing these vectors, we can find similar images. Sounds neat, right?

This capability is pivotal because it enables similarity search — a type of search where you’re fishing for things that are similar, not necessarily exact replicas. This is a game-changer in many domains, like recommendation systems and machine learning.
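The similarity-search idea in the image example above can be sketched with plain cosine similarity. The vectors and filenames below are made-up toy values; a real vector database answers this query with an index rather than a full scan:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(query, items):
    """Brute-force similarity search: return the key whose vector is
    closest to the query vector."""
    return max(items, key=lambda k: cosine_similarity(query, items[k]))

# Toy 3-D "image feature" vectors (made-up values)
images = {
    "sunset.jpg": [0.9, 0.1, 0.0],
    "beach.jpg":  [0.8, 0.2, 0.1],
    "forest.jpg": [0.0, 0.9, 0.4],
}
```

Note that the result is the *most similar* item, not an exact match, which is the point of similarity search.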
Technologies behind vector databases
1. Indexing
Explained as follows:
The foundation of vector databases lies in data indexing. Through techniques like inverted indexing, vector databases can efficiently conduct similarity searches by grouping and indexing vector features. By leveraging indexing techniques, vector databases enable efficient searching of vectors using various operations such as vector addition, similarity calculation, and clustering analysis.
2. Vector Quantization
Explained as follows:
Furthermore, vector quantization techniques aid in mapping high-dimensional vectors to lower-dimensional spaces, resulting in reduced storage and computational requirements.
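One simple form of this idea (an illustration, not any particular database's implementation) is nearest-centroid quantization: each vector is replaced by the index of its closest centroid, so only a small integer code has to be stored per vector. The centroid values below are made-up examples:

```python
def quantize(vec, centroids):
    """Map a vector to the index of its nearest centroid (its 'code').
    Storing codes instead of full vectors cuts memory: one small
    integer per vector instead of d floats."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: dist2(vec, centroids[i]))

# Two centroids in 3-D space (made-up example values)
centroids = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
```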
3. Storage
Explained as follows:
Regarding the storage aspect of vector databases, it is noteworthy that indexing techniques take precedence over the choice of underlying storage.


What Is Low Code?

Introduction
Explained as follows. In other words, it is like the old drag-and-drop approach:
Low code development refers to a visual development approach that empowers you to create applications with minimal manual coding. It provides a graphical interface and pre-built components that allow developers and non-technical users to rapidly build and deploy applications. This approach significantly reduces the time and effort required to create software solutions.

Thursday, June 1, 2023

Staged Event-Driven Architecture - SEDA

Introduction
Explained as follows:
In SEDA architecture, the flow of events through the system is divided into stages, and each stage has a bounded event queue. The processing of events is handled asynchronously and in a pipelined manner, allowing for better control over system resources and improved throughput.
In other words, it can be thought of as a kind of pipeline. Each stage is run by a separate thread or thread pool.
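A minimal sketch of this structure: each stage has its own thread and a bounded event queue, and events flow through the stages asynchronously. The stage names and handlers are made-up examples:

```python
import queue
import threading

def stage(name, handler, in_q, out_q):
    """One SEDA stage: its own thread pulls events from a bounded
    queue, processes them, and pushes results to the next stage."""
    def run():
        while True:
            event = in_q.get()
            if event is None:          # shutdown signal, pass it along
                if out_q is not None:
                    out_q.put(None)
                break
            result = handler(event)
            if out_q is not None:
                out_q.put(result)
    t = threading.Thread(target=run, name=name)
    t.start()
    return t

# Pipeline: parse -> enrich -> collect, each with a bounded event queue
q1, q2, q3 = (queue.Queue(maxsize=10) for _ in range(3))
t1 = stage("parse",  lambda e: e.strip(), q1, q2)
t2 = stage("enrich", lambda e: e.upper(), q2, q3)

for msg in ["  hello ", " seda "]:
    q1.put(msg)
q1.put(None)  # start the shutdown cascade
t1.join(); t2.join()

results = []
while True:
    item = q3.get()
    if item is None:
        break
    results.append(item)
```

The bounded maxsize on each queue is what gives the architecture its back-pressure: a slow stage makes producers block instead of letting events pile up without limit.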