Tuesday, November 22, 2022

Docker Compose and Localstack

Example
We do the following
version: '3.9'
services:
  aws-local:
    container_name: aws-local
    image: localstack/localstack:1.3
    ports:
      - "4566:4566"
      - "8283:8080"
    environment:
      - "SERVICES=sqs,sns,secretsmanager"
Example - volume
We do the following
version: "3.8"

services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack:0.14.2
    network_mode: bridge
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:53:53"                #
      - "127.0.0.1:53:53/udp"            #
      - "127.0.0.1:443:443"              #
      - "127.0.0.1:4510-4530:4510-4530"  # ext services port range
      - "127.0.0.1:4571:4571"            #
    environment:
      - DEBUG=${DEBUG-}
      - SERVICES=${SERVICES-}
      - DATA_DIR=${DATA_DIR-}
      - LAMBDA_EXECUTOR=local
      - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY-}
      - HOST_TMP_FOLDER=${TMPDIR:-/tmp/}localstack
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DISABLE_CORS_CHECKS=1
    volumes:
      - "${TMPDIR:-/tmp}/localstack:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
To see which services are running, we do the following
http://localhost:4566/health 
The output is as follows. For example, SQS is running and the other services are available
{
  "features": {
    "initScripts": "initialized"
  },
  "services": {
    "acm": "available",
    "apigateway": "available",
    "cloudformation": "available",
    "cloudwatch": "available",
    "config": "available",
    "dynamodb": "available",
    "dynamodbstreams": "available",
    "ec2": "available",
    "es": "available",
    "events": "available",
    "firehose": "available",
    "iam": "available",
    "kinesis": "available",
    "kms": "available",
    "lambda": "available",
    "logs": "available",
    "opensearch": "available",
    "redshift": "available",
    "resource-groups": "available",
    "resourcegroupstaggingapi": "available",
    "route53": "available",
    "route53resolver": "available",
    "s3": "available",
    "s3control": "available",
    "secretsmanager": "available",
    "ses": "available",
    "sns": "available",
    "sqs": "running",
    "ssm": "available",
    "stepfunctions": "available",
    "sts": "available",
    "support": "available",
    "swf": "available",
    "transcribe": "available"
  },
  "version": "1.1.1.dev"
}
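To check a single service from the command line, something like this can be used (a quick sketch, assuming curl and jq are installed on the host):
curl -s http://localhost:4566/health | jq '.services.sqs'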
Example
We do the following. Here, some init scripts for DynamoDB are provided
version: '3.9'

networks:
  tasks-network:
    driver: bridge

services:
  ...
  tasks-localstack:
    image: localstack/localstack:latest
    container_name: tasks-localstack
    environment:
      - DEBUG=0
      - SERVICES=dynamodb
      - EAGER_SERVICE_LOADING=1
      - DYNAMODB_SHARE_DB=1
      - AWS_DEFAULT_REGION=ap-southeast-2
      - AWS_ACCESS_KEY_ID=DUMMY
      - AWS_SECRET_ACCESS_KEY=DUMMY
      - DOCKER_HOST=unix:///var/run/docker.sock
    ports:
      - "4566:4566"
    volumes:
      - ./utils/docker-volume/localstack:/var/lib/localstack
      - ./utils/docker-volume/dynamodb/items/devices.json:/var/lib/localstack/devices.json
      - ./utils/docker-volume/dynamodb/scripts/create-resources.sh:/etc/localstack/init/ready.d/create-resources.sh
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - tasks-network
devices.json is as follows
{
  "id": {"S": "123"},
  "name": {"S": "Device name"},
  "description": {"S": "Device description"},
  "status": {"S": "OFF"}
}
create-resources.sh is as follows
#!/bin/bash

echo "CREATING DEVICES TABLE..."
awslocal dynamodb create-table                                \
  --table-name Devices                                        \
  --attribute-definitions AttributeName=id,AttributeType=S    \
  --key-schema AttributeName=id,KeyType=HASH                  \
  --billing-mode PAY_PER_REQUEST
echo "DONE!"

echo ""
echo "PUTTING DEVICE ITEM..."
awslocal dynamodb put-item                                    \
    --table-name Devices                                      \
    --item file:///var/lib/localstack/devices.json
echo "DONE!"
init script
The explanation is as follows
The volumes section maps a directory on the host machine to a directory inside the container. On container startup, LocalStack checks this directory for bash files and, if it finds any, executes them. This is useful for creating resources, configs, etc. This way you write the commands once in a bash file and LocalStack executes them automatically on startup, so you don't need to type them manually each time you spin up a container.
Example
We do the following
version: '3.8'

services:
  localstack:
    image: localstack/localstack
    ports:
      - '4566:4566' # LocalStack endpoint

    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - ./localstack-script:/etc/localstack/init/ready.d
      - '/var/run/docker.sock:/var/run/docker.sock'
Example
We do the following. Here an S3 bucket is created
version: "3.8"

services:
  localstack:
    container_name: localstack_main
    image: localstack/localstack:latest
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"  # external services port range
    environment:
      - DEBUG=1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test          
      - AWS_DEFAULT_REGION=eu-west-1 # Region where your localstack mocks to be running
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - ./aws/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh
The aws/init-aws.sh file, located in the same directory as the Docker Compose file, is as follows
#!/bin/bash
awslocal s3 mb s3://my-test-bucket
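Whether the bucket was actually created can be checked with a quick sketch like this (run inside the container, or against http://localhost:4566 from the host):
awslocal s3 ls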
Example
We do the following. Here the init-scripts directory is mounted into LocalStack.
version: '3.8'
services:
  localstack:
    container_name: localstack
    image: localstack/localstack:0.11.6
    ports:
      - "4566-4599:4566-4599"
    environment:
      - SERVICES=sqs
    volumes:
      - ./init-scripts:/docker-entrypoint-initaws.d
Let one of the files in the init-scripts directory be as follows
#!/bin/bash
echo "########### Setting up localstack profile ###########"
aws configure set aws_access_key_id access_key --profile=localstack
aws configure set aws_secret_access_key secret_key --profile=localstack
aws configure set region sa-east-1 --profile=localstack

echo "########### Setting default profile ###########"
export AWS_DEFAULT_PROFILE=localstack

echo "########### Setting SQS names as env variables ###########"
export SOURCE_SQS=source-sqs
export DLQ_SQS=dlq-sqs

echo "########### Creating DLQ ###########"
aws --endpoint-url=http://localstack:4566 sqs create-queue --queue-name $DLQ_SQS

echo "########### ARN for DLQ ###########"
DLQ_SQS_ARN=$(aws --endpoint-url=http://localstack:4566 sqs get-queue-attributes\
                  --attribute-name QueueArn --queue-url=http://localhost:4566/000000000000/"$DLQ_SQS"\
                  |  sed 's/"QueueArn"/\n"QueueArn"/g' | grep '"QueueArn"' | awk -F '"QueueArn":' '{print $2}' | tr -d '"' | xargs)

echo "########### Creating Source queue ###########"
aws --profile=localstack --endpoint-url=http://localstack:4566 sqs create-queue --queue-name $SOURCE_SQS \
     --attributes '{
                   "RedrivePolicy": "{\"deadLetterTargetArn\":\"'"$DLQ_SQS_ARN"'\",\"maxReceiveCount\":\"2\"}",
                   "VisibilityTimeout": "10"
                   }'

echo "########### Listing queues ###########"
aws --endpoint-url=http://localhost:4566 sqs list-queues

echo "########### Listing Source SQS Attributes ###########"
aws --endpoint-url=http://localstack:4566 sqs get-queue-attributes\
                  --attribute-name All --queue-url=http://localhost:4566/000000000000/"$SOURCE_SQS"
The explanation is as follows
This file has a couple of commands, that will be executed sequentially.

1. Localstack profile is created
2. DLQ is created
3. ARN for DLQ is obtained
4. Source SQS is created with a redrive policy. In the redrive policy, the ARN for the DLQ is specified together with maxReceiveCount, which tells the Source SQS how many times a client can receive a message before it is transferred to the DLQ. A visibility timeout is set to 10 seconds. More options with explanations can be found here.
5. A list of the created queues is returned.
6. A list of attributes of a Source queue is returned. It confirms that Source SQS has attributes specified in the creation command.
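As a side note, the sed/grep pipeline used above to extract the queue ARN can usually be replaced by the AWS CLI's built-in --query option; a sketch using the same queue URL:
DLQ_SQS_ARN=$(aws --endpoint-url=http://localstack:4566 sqs get-queue-attributes \
                  --queue-url http://localhost:4566/000000000000/"$DLQ_SQS" \
                  --attribute-names QueueArn \
                  --query 'Attributes.QueueArn' --output text)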



Thursday, November 17, 2022

Slack API

Introduction
The explanation is as follows
The Slack API base URL is https://slack.com/api and it uses token authentication. You can find your token in the Slack application that you need to create to start using the API. The token is visible on the configuration page of your application at https://api.slack.com/apps/YOUR_APP_ID.
So we send a POST request to "https://slack.com/api/chat.postMessage" with token authentication
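For example, a message can be posted with curl roughly like this (the token and channel values are placeholders):
curl -X POST https://slack.com/api/chat.postMessage \
  -H "Authorization: Bearer xoxb-your-token" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d '{"channel": "#general", "text": "Hello from the Slack API"}'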

Amazon Simple Notification Service (AWS SNS) - Push Approach

Introduction
The explanation is as follows
Simple Notification Service allows users to publish messages to a topic. A user subscribes to one or more topics. Whenever a message is published to the topic by a publisher, the subscribers receive the message published in the topic. Both publisher and consumer are unaware of each other. They do not communicate directly.
The explanation is as follows
Amazon Simple Notification Service (SNS) is an API for sending notifications to applications and people. For many developers, the key to SNS is the “people” part—the ability to send push and SMS messages to customers. SNS’s API endpoints allow you to send individual messages, but most of the service’s functionality is built around SNS topics for sending batches of notifications over time.
The steps are as follows (a CLI equivalent is sketched after the list)
- From Services, look for SNS and click on it.
- Open SNS console and from the left panel, select topics.
- Click on create a topic.
- Fill in the name for the topic and keep the default values the same.
- We are done with SNS!
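The same topic can also be created and subscribed to from the AWS CLI; a minimal sketch (the topic name, region, account id, and e-mail address are illustrative):
aws sns create-topic --name my-demo-topic
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:my-demo-topic \
  --protocol email \
  --notification-endpoint user@example.com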
Gradle
We include the following line
implementation 'io.awspring.cloud:spring-cloud-starter-aws-messaging:2.3.5'
SNS API
AmazonSNSClient Class
constructor
The AmazonSNSClientBuilder.standard() method is used
Example
We do the following
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

@Configuration
public class AwsSNSConfig {
  @Value("${cloud.aws.region.static}")
  private String region;

  @Value("${cloud.aws.credentials.access-key}")
  private String awsAccessKey;

  @Value("${cloud.aws.credentials.secret-key}")
  private String awsSecretKey;

  
  @Bean
  public AmazonSNSClient getAWSSNSClient() {
    return (AmazonSNSClient) AmazonSNSClientBuilder.standard()
      .withRegion(region)
      .withCredentials(
        new AWSStaticCredentialsProvider(new BasicAWSCredentials(awsAccessKey, 
                                                                 awsSecretKey)))
      .build();
  }
}
publish method
Example
We do the following
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.PublishRequest;
import com.amazonaws.services.sns.model.SubscribeRequest;

@RestController
public class SNSController {
  private final static String TOPIC_ARN = "---your sns topic arn---";

  @Autowired
  AmazonSNSClient amazonSNSClient;

  @PostMapping("/publish")
  public String publishMessageToSNSTopic() {
    PublishRequest publishRequest = 
      new PublishRequest(TOPIC_ARN, "Hi", "demo from sns");
    amazonSNSClient.publish(publishRequest);
    return "notification send successfully.";
  }
}


Monday, November 14, 2022

Google Cloud Spanner - Relational Database

What Is Cloud Spanner
The explanation is as follows. It makes it possible to work with very large amounts of data at global scale with strong transactional consistency.
Cloud Spanner is a fully managed, mission-critical, relational(SQL), globally distributed database with VERY high availability. It provides strong transactional consistency at the global scale. It can scale to petabytes of data with automatic sharding.

Here are some of the important features:
1. Scales horizontally for reads and writes: In comparison, Cloud SQL provides read replicas BUT you cannot horizontally scale write operations with Cloud SQL!
2. Regional and Multi-Regional configurations
3. Expensive (compared to Cloud SQL): You pay for nodes & storage
Spanner SQL
The explanation is as follows
Although Google’s Spanner’s SQL dialect is inspired by MySQL, it is not fully compatible with it either.
Cloud SQL vs Cloud Spanner
You can refer to the Cloud SQL post. The explanation is as follows.
Use Cloud Spanner(Expensive) instead of Cloud SQL for relational transactional applications if:

1. You have huge volumes of relational data (TBs) OR
2. You need infinite scaling for a growing application (to TBs) OR
3. You need a global (distributed across multiple regions) database OR
4. You need higher availability (99.999%)
Google TrueTime
Google Cloud Spanner uses Google TrueTime for time synchronization

Distributed Joins
Distributed joins are commonly considered too expensive to use for real-time transaction processing. That is because, besides joining data, they also frequently require moving or shuffling data between nodes in a cluster, which can significantly affect query response times and database throughput. However, there are certain optimizations that can completely eliminate the need to move data to enable faster joins.
Still, there are four optimizations that can make distributed joins feasible. They are as follows
1. Shuffle join
2. Broadcast join
3. Co-located join 
4. Pre-computed join
The explanation is as follows
Shuffle and broadcast joins are more suitable for batch or near real-time analytics. For example, they are used in Apache Spark as the main join strategies. Co-located and pre-computed joins are faster and can be used for online transaction processing with real-time applications. They frequently rely on organizing data based on unique storage schemes supported by a database.

Join Steps
A join operation consists of three steps. The explanation is as follows
- The first step is to move data between nodes in the cluster, such that rows that can potentially be combined based on a join condition end up on the same nodes. Data movement is usually achieved by shuffling or broadcasting data. 
- The second step is to compute a join result locally on each node. This usually involves one of the fundamental join algorithms, such as a nested-loop, sort-merge, or hash join algorithm. 
- The last step is to merge or union local join results and return the final result. In many cases, it is possible to optimize a distributed join by eliminating one or even two steps from this process.
Shuffle join
The explanation is as follows
A shuffle join re-distributes rows from both tables among nodes based on join key values, such that all rows with the same join key value are moved to the same node. Depending on a particular algorithm used to compute joins, a shuffle join can be a shuffle hash join, shuffle sort-merge join, and so forth.
Broadcast join
The explanation is as follows
A broadcast join moves data stored in only one table, such that all rows from the smallest table are available on every node. Depending on a particular algorithm used to compute joins, a broadcast join can be a broadcast hash join, broadcast nested-loop join, and so forth.
Co-located join 
The explanation is as follows
A co-located join does not need to move data at all because data is already stored such that all rows with the same join key value reside on the same node. Data still needs to be joined using a nested-loop, sort-merge, or hash join algorithm.
Pre-computed join
The explanation is as follows
A pre-computed join does not need to move data or compute joins locally on each node because data is already stored in a joined form. This type of join skips data movement and join computation and goes directly to merging and returning results.
All of Them
The figure is as follows. Here, the co-located join and pre-computed join types do not move data between nodes.


Co-located Join with Google Cloud Spanner
The explanation is as follows
Co-located joins can perform significantly faster than shuffle and broadcast joins because they avoid moving data between nodes in a cluster. To use co-located joins, a distributed database needs to have a mechanism to specify which related data entities must be stored together on the same node. In Google Cloud Spanner, this mechanism is called table interleaving.

Logically independent tables can be organized into parent-child hierarchies by interleaving tables. This results in a data locality relationship between parent and child tables, such that one or more rows from a child table are physically stored together with one row from a parent table. For two tables to be interleaved, the parent table primary key must also be included as the prefix of the child table primary key. In other words, the child table primary key must consist of the parent table primary key followed by additional columns.
Suppose we have the following tables

We do the following. Here you can see that the tables are linked with "INTERLEAVE IN PARENT".
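Since interleaving is declared in DDL, a minimal sketch with hypothetical Singers and Albums tables might look like this, applied through the gcloud CLI (the instance, database, and table names are illustrative):
gcloud spanner databases ddl update example-db --instance=test-instance --ddl='
CREATE TABLE Singers (
  SingerId  INT64 NOT NULL,
  FirstName STRING(1024),
  LastName  STRING(1024)
) PRIMARY KEY (SingerId);
CREATE TABLE Albums (
  SingerId   INT64 NOT NULL,
  AlbumId    INT64 NOT NULL,
  AlbumTitle STRING(MAX)
) PRIMARY KEY (SingerId, AlbumId),
  INTERLEAVE IN PARENT Singers ON DELETE CASCADE'
Here the primary key of Albums starts with SingerId, the parent table's primary key, so each album row is physically stored together with its singer row.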






Thursday, November 10, 2022

The Dead-Letter Queue Pattern for Fault Tolerance and Resiliency

Introduction
If a message still cannot be processed after a few attempts (the Retry pattern), human intervention is probably required. The Dead Letter Channel is also known as the Dead Letter Queue. The figure is as follows
The explanation is as follows
1. Under normal circumstances, the application processes each event in the source topic and publishes the result to the target topic
2. Events that cannot be processed, for example, those that don’t have the expected format or are missing required attributes, are routed to the error topic
3. Events for which dependent data is not available are routed to a retry topic where a retry instance of your application periodically attempts to process the events
If there are messages that depend on each other, care must be taken to send them to the same queue so that their ordering is preserved

What Does Dead-Letter Mean?
The explanation is as follows.
What Is a Dead Letter Queue?
In English vocabulary, dead letter mail is undeliverable mail that cannot be delivered to the addressee. A dead-letter queue (DLQ), sometimes known as an undelivered-message queue, is a holding queue for messages that cannot be delivered to their destinations for some reason.

According to Wikipedia — In message queueing the dead letter queue is a service implementation to store messages that meet one or more of the following failure criteria:

- Message that is sent to a queue that does not exist
- Queue length limit exceeded
- Message length limit exceeded
- Message is rejected by another queue exchange
- Message reaches a threshold read counter number because it is not consumed. Sometimes this is called a “back out queue”
Some Examples of Human Intervention
Example
One example is as follows
A message arriving in the error queue can trigger an alert and the support team can decide what to do. And this is important: You don't need to automate all edge cases in your business process. What's the point in spending a sprint to automate this case, if it only happens once every two years? The costs will definitely outweigh the benefits. Instead, we can define a manual business process for handling these edge cases.

In our example, if Bob from IT sees a message in the error queue, he can inspect it and see that it failed with a CannotShipOrderException. In this case, he can notify the Shipping department and they can use another shipment provider. But all of this happens outside of the system, so the system is less complex and easier to build.
Example
An example involving a corrupt field is as follows
However, if the error can never be solved via a retry process (such as an unhandled case or a corrupt field value), you should create an "error topic", which is called a "dead-letter queue".

Wednesday, November 9, 2022

git diff-tree - Shows the Files Changed in a Commit

Introduction
The explanation is as follows
Q: How will you find a list of files that have been modified in a particular commit?
A: The command to get a list of files that have been changed in a particular commit is:
git diff-tree -r {commit hash}

The -r flag allows the command to list individual files
The commit hash identifies the commit whose changed or added files are listed.
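A commonly used variant that prints only the file names, without the commit line, is:
git diff-tree --no-commit-id --name-only -r <commit hash>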

git revert - Used to Undo a Commit

Introduction
The syntax is as follows
git revert <commit>
The explanation is as follows
Creates a new commit to revert the specified commit
For example, git revert HEAD creates a new commit that undoes the changes introduced by the commit at HEAD.
There are two options for undoing a change. The explanation is as follows
There are two processes through which you can revert a commit:
1. Remove or fix the bad file in a new commit and push it to the remote repository. Then commit it to the remote repository using:
git commit -m "commit message"
2. Create a new commit to undo all the changes that were made in the bad commit. Use the following command:
git revert <commit id>
In other words,
1. Either we modify the file and commit it again,
2. Or we restore the file to its previous state, modify it, and commit it again

Example
Suppose our history is as follows.
a -> b -> c -> d(HEAD)
We do the following
git revert HEAD
We end up with the following. The new commit e undoes the changes made in commit d.
a -> b-> c -> d -> e(HEAD)


Monday, November 7, 2022

Docker Compose and Debezium

Introduction
A Debezium setup actually consists of four pieces running together. These are
1. zookeeper
2. kafka
3. kafka-connect
4. debezium

The following images can be used
debezium/server
debezium/connect
outbox-transformer

The following variables are specified
BOOTSTRAP_SERVERS : the Kafka address
GROUP_ID
CONFIG_STORAGE_TOPIC
OFFSET_STORAGE_TOPIC
STATUS_STORAGE_TOPIC

In addition, the Avro schema registry is specified with the following variables
KEY_CONVERTER
VALUE_CONVERTER
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL

Example - postgres + debezium + kafka
We do the following
version: '3.1'
services:
    postgres:
        ...
    zookeeper:
        ...
    kafka:
        ...
    connector:
        image: debezium/connect:latest
        ports:
          - "8083:8083"
        environment:
          GROUP_ID: 1
          CONFIG_STORAGE_TOPIC: my_connect_configs
          OFFSET_STORAGE_TOPIC: my_connect_offsets
          BOOTSTRAP_SERVERS: kafka:9092
        depends_on:
          - zookeeper
          - postgres
          - kafka
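Starting the connect container is only half of the work; a source connector still has to be registered through the Kafka Connect REST API on port 8083. A rough sketch for the postgres service above (the connector name and database settings are illustrative; older Debezium 1.x releases use database.server.name instead of topic.prefix):
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
        "name": "postgres-connector",
        "config": {
          "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
          "database.hostname": "postgres",
          "database.port": "5432",
          "database.user": "postgres",
          "database.password": "postgres",
          "database.dbname": "postgres",
          "topic.prefix": "my-server"
        }
      }'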
Example - postgres + debezium + kafka
We do the following
services:
  db:
    ...

  zookeeper:
    ...

  kafka:
    ...

  connect:
    image: debezium/connect
    ports:
      - "8083:8083"
    environment:
      - BOOTSTRAP_SERVERS=kafka:9092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_connect_statuses
    depends_on:
      - zookeeper
      - kafka
Example - postgres + debezium + kafka
We do the following
version: "3.5"

services:
  # Install postgres and setup the user service database
  postgres:
    ...

  # Install zookeeper.
  zookeeper:
   ...

  # Install kafka and create needed topics.
  kafka:
    ...

  # Install debezium-connect and add outbox-transformer here.
  debezium-connect:
    container_name: custom-debezium-connect
    image: outbox-transformer
    hostname: debezium-connect
    ports:
      - '8083:8083'
    environment:
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: debezium_connect_config
      OFFSET_STORAGE_TOPIC: debezium_connect_offsets
      STATUS_STORAGE_TOPIC: debezium_connect_status
      BOOTSTRAP_SERVERS: kafka:29092
    depends_on:
      - kafka
      - postgres
Example - postgres + debezium + kafka + avro
We do the following
version: "3.7"
services:
  postgres:
   ...
  zookeeper:
   ...
  kafka:
    ...
  kafka-ui:
   ...
  debezium:
    image: debezium/connect:1.4
    environment:
      BOOTSTRAP_SERVERS: kafka:9092
      GROUP_ID: 1
      CONFIG_STORAGE_TOPIC: connect_configs
      OFFSET_STORAGE_TOPIC: connect_offsets
      KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    depends_on: [kafka]
    ports:
      - "8083:8083"
  schema-registry:
   ...
Example
We do the following. Here Debezium writes to Redis instead of Kafka. The connector settings are in the application.properties file in the conf directory
version: '3.1'
services:
  redis:
    image: redis
    ports:
      - 6379:6379
    depends_on:
      - postgres
  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
      - ./init:/docker-entrypoint-initdb.d
    command:
      - "-c"
      - "config_file=/etc/postgresql/postgresql.conf"
    ports:
      - 5432:5432
  debezium:
    image: debezium/server
    volumes:
      - ./conf:/debezium/conf
      - ./data:/debezium/data
    depends_on:
      - redis
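The application.properties under conf might look roughly like the following. This is a sketch based on the Debezium Server documentation; the property values (database credentials, topic prefix, offset file) are illustrative, and older 1.x releases use debezium.source.database.server.name instead of debezium.source.topic.prefix:
debezium.sink.type=redis
debezium.sink.redis.address=redis:6379
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.database.hostname=postgres
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=postgres
debezium.source.database.dbname=postgres
debezium.source.topic.prefix=tutorial
debezium.source.plugin.name=pgoutput
debezium.source.offset.storage.file.filename=data/offsets.dat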