Thursday, October 26, 2023

MessagePack

Introduction
The explanation is as follows
MessagePack is a great choice when you need a balance between speed and cross-language compatibility. It’s suitable for real-time applications and situations where data size reduction is crucial.
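A minimal round-trip sketch with the msgpack-java library (org.msgpack:msgpack-core); the packed values here are made up for illustration:
import org.msgpack.core.MessageBufferPacker;
import org.msgpack.core.MessagePack;
import org.msgpack.core.MessageUnpacker;

public class MessagePackExample {
  public static void main(String[] args) throws Exception {
    // Pack a couple of values into MessagePack's compact binary format
    MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
    packer.packString("order-42");
    packer.packInt(3);
    packer.close();
    byte[] bytes = packer.toByteArray();

    // Unpack them back in the same order they were written
    MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(bytes);
    String id = unpacker.unpackString();
    int quantity = unpacker.unpackInt();
    unpacker.close();
    System.out.println(id + " x" + quantity + " (" + bytes.length + " bytes)");
  }
}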

Friday, October 20, 2023

Google Cloud - Google Kubernetes Engine (GKE)

Introduction
The explanation is as follows. Looking at the first three, each option provides a progressively more Google-managed environment
Google has five options for running containers which are:

1. GKE Standard
2. GKE Autopilot
3. Cloud Run
4. App Engine Flex : App Engine Flex has been more or less completely superseded by Cloud Run.
5. GCE with Containers : Only really appropriate for very small deployments.
GKE
The explanation is as follows
if you have an Azure-based deployment, you can assign specific zones to Azure Kubernetes Service (AKS). If you use Google Cloud, you can leverage Google Kubernetes Engine (GKE) to select multi-zone or region clusters (each option offers different benefits and drawbacks in terms of redundancy, cost, and proximity to the end-user).
GKE Standard vs Autopilot
The explanation is as follows
The main difference between these is that Autopilot applies a level of Google opinionation to the cluster and makes node management their responsibility.... Interestingly Google has recently made autopilot the default option when provisioning new clusters, recommending it for the majority of workloads with a potentially lower TCO as per the diagram below.
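As a rough sketch of the difference from the command line (the cluster names and locations below are placeholders, and a configured project is assumed), the two modes are created with different subcommands:
# Standard: you size and manage the node pools yourself
gcloud container clusters create my-standard-cluster --zone europe-west1-b --num-nodes 3

# Autopilot: Google manages the nodes; you only pick a region
gcloud container clusters create-auto my-autopilot-cluster --region europe-west1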
Cloud Run
The explanation is as follows
Cloud Run is Google’s ‘serverless’ container offering where all you need to do is to deploy your container image to the service and Google takes care of everything else. 
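A minimal deployment sketch (the service name and image path below are placeholders):
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image:latest \
  --region europe-west1 \
  --allow-unauthenticated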


Wednesday, October 18, 2023

aws s3 Option

Introduction
The explanation is as follows
The main difference between aws s3 command and aws s3api command is that aws s3 command is a higher-level abstraction that provides a more simplified and easier-to-use interface, while the aws s3api command provides a more direct and granular interface to the underlying S3 API.
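For example, the same listing task looks like this with the two commands (the bucket name is a placeholder); s3 gives the short human-oriented form, while s3api exposes the raw API operation and returns JSON:
# high-level
aws s3 ls s3://my-bucket/

# low-level
aws s3api list-objects-v2 --bucket my-bucket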

1. The s3 command

s3 cp
The syntax is as follows
aws s3 cp <source file or directory> <s3://bucket-name> [options]
Example - copying with cp
We do it like this
aws --endpoint-url=http://localhost:4566 \
  s3 cp cafezin.png \
  s3://bucket-example
Example
We do it like this
aws s3api create-bucket --bucket your-unique-bucket-name --region us-east-1

echo "Hello, S3!" > hello.txt
aws s3 cp hello.txt s3://your-unique-bucket-name/

Example - recursive copying with cp
We do it like this
aws s3 cp /path/to/batch_input s3://my-bucket/batch-input/ --recursive
s3 ls - listing with ls
Example
We do it like this
aws --endpoint-url=http://localhost:4566 \
 s3 ls \
 s3://bucket-example/
s3 mb
Example
To create an s3 bucket we do it like this
aws s3 mb s3://my.private.maven
s3 rb
Example
We do it like this
aws s3 rb s3://your-unique-bucket-name --force
aws iam delete-role --role-name CrossRegionReplicationRole
s3 presign - Shareable Link
Example
We do it like this
aws s3 presign s3://your-unique-bucket-name/hello.txt --expires-in 3600
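The command prints a time-limited URL; a small usage sketch (capturing the URL in a shell variable is my addition, not part of the original example):
url=$(aws s3 presign s3://your-unique-bucket-name/hello.txt --expires-in 3600)
curl -s "$url"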

s3 website
Example
In bucket_policy.json we do the following
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::testbucket/*"
    }
  ]
}
We do it like this
# sync the website folder that contains our files to the S3 bucket 
aws --endpoint-url=http://localhost:4566 s3 sync .\website\ s3://testbucket

# enable static website hosting on the bucket and configure the index and error documents:
aws --endpoint-url=http://localhost:4566 s3 website s3://testbucket/ \
  --index-document index.html \
  --error-document error.html
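The commands above do not actually attach bucket_policy.json to the bucket; one way to do that, assuming the same LocalStack endpoint, would be:
# attach the public-read policy defined in bucket_policy.json
aws --endpoint-url=http://localhost:4566 s3api put-bucket-policy \
  --bucket testbucket \
  --policy file://bucket_policy.json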

2. The s3api command

s3api create-bucket
Example
If we are using Localstack, we create the bucket on it like this
aws --endpoint-url=http://127.0.0.1:4566 \
  s3api create-bucket \
  --bucket bucket-example
s3api list-objects
Example
We do it like this
# create s3 bucket
aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket testbucket

# list s3 buckets
aws --endpoint-url=http://localhost:4566 s3api list-buckets

# copy test file to the created bucket.
aws --endpoint-url=http://localhost:4566 s3 cp test.txt s3://testbucket

# check files
aws --endpoint-url=http://localhost:4566 s3api list-objects --bucket testbucket
s3api put-bucket-versioning
Example
We do it like this
aws s3api put-bucket-versioning --bucket your-unique-bucket-name \
  --versioning-configuration Status=Enabled
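To check that versioning really is enabled, the matching read operation can be used:
aws s3api get-bucket-versioning --bucket your-unique-bucket-name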



Localstack awslocal Command

Introduction
The explanation is as follows
When interacting with LocalStack to emulate AWS services it’s important to configure your AWS CLI or SDK to point to the LocalStack endpoint URL. This allows you to interact with LocalStack easily without having to specify the --endpoint-url option every time you run a command.

Another option is installing a tool called “awslocal” which is a wrapper around the AWS CLI for LocalStack. It automatically configures the CLI to use the LocalStack endpoint URL, saving you from the manual step of specifying the --endpoint-url option.
The explanation is as follows
awslocal is a thin wrapper and a drop-in replacement for the aws command that runs commands directly against LocalStack
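For example, awslocal can be installed with pip (awscli-local is the package name in the LocalStack docs) and then used without the endpoint flag:
pip install awscli-local

awslocal s3api list-buckets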
1. Without awslocal
Example
We do it like this
# create s3 bucket
aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket testbucket

# list s3 buckets
aws --endpoint-url=http://localhost:4566 s3api list-buckets

# copy test file to the created bucket.
aws --endpoint-url=http://localhost:4566 s3 cp test.txt s3://testbucket

# check files
aws --endpoint-url=http://localhost:4566 s3api list-objects --bucket testbucket
Example
In bucket_policy.json we do the following
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::testbucket/*"
    }
  ]
}
We do it like this
# sync the website folder that contains our files to the S3 bucket 
aws --endpoint-url=http://localhost:4566 s3 sync .\website\ s3://testbucket

# enable static website hosting on the bucket and configure the index and error documents:
aws --endpoint-url=http://localhost:4566 s3 website s3://testbucket/ \
  --index-document index.html \
  --error-document error.html
2. Starting Bash
awslocal ships inside the Localstack container. Opening a bash shell into the container is enough. We do it like this
docker exec -it <container_id> /bin/bash
sqs option
Example
We do it like this
awslocal sqs create-queue --queue-name test-queue

awslocal sqs list-queues

{
  "QueueUrls" : [
    "http://localhost:4566/000000/test-queue"
  ]
}
s3 option
Example - High Level Command
We do it like this. It creates a new bucket
awslocal s3 mb s3://my-test-bucket
Example - Low Level Command
We do it like this. It creates a new bucket
awslocal s3api create-bucket \
--bucket mybucket \
--create-bucket-configuration LocationConstraint=eu-central-1

What Is Localstack

Introduction
The explanation is as follows. In other words, it is an emulator for AWS services
LocalStack is a cloud service emulator that runs in a single container on your laptop or in your CI environment. With LocalStack, you can run your AWS applications or Lambdas entirely on your local machine without connecting to a remote cloud provider! Whether you are testing complex CDK applications or Terraform configurations, or just beginning to learn about AWS services, LocalStack helps speed up and simplify your testing and development workflow.
The explanation is as follows
LocalStack is a cloud service emulator that runs AWS services solely on your laptop without connecting to a remote cloud provider.
Installation
The explanation is as follows
There are several ways to install LocalStack (LocalStack CLI, LocalStack Cockpit, Docker, Docker-Compose, Helm).
Docker
Example
We do it like this
docker run --rm -it \
  -p 4566:4566 \
  -p 4510-4559:4510-4559 \
  localstack/localstack

Monday, October 16, 2023

Radix Tree

Introduction
The explanation is as follows
Radix Tree is a compressed prefix tree (trie) that works really well for fast lookups.
You can also see the Trie post

Example
Suppose we have the following words
romane
romanus
romulus
rubens
ruber
rubicon
rubicundus
The trie form is as follows


The radix tree form is as follows
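The original post shows both trees as images; sketched roughly in text (my reconstruction from the word list above, not the original figure), the radix tree merges chains of single-child nodes into edges labeled with whole substrings, whereas in the plain trie every character is a separate node:
r
├── om
│   ├── an
│   │   ├── e          (romane)
│   │   └── us         (romanus)
│   └── ulus           (romulus)
└── ub
    ├── e
    │   ├── ns         (rubens)
    │   └── r          (ruber)
    └── ic
        ├── on         (rubicon)
        └── undus      (rubicundus)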

Friday, October 13, 2023

SQL Bomb

Example
Each SELECT multiplies the string length by four, so the idea is that nesting a few of these queries makes the result grow exponentially, much like a zip bomb; the rand() call is presumably there to keep the expression from being constant-folded or cached. We do it like this
SELECT a || a || a || a as a from (SELECT 'aaaa' || rand() as a) t
SELECT a || a || a || a as a from (SELECT 'aaaa' as a) t

Tuesday, October 3, 2023

GoF - Abstract Factory Pattern

Example
Suppose we have a hierarchy like this
// Abstract Product - Button
public interface Button {
  void render();
}

// Concrete Product - WindowsButton
public class WindowsButton implements Button {
  @Override
  public void render() {
    System.out.println("Rendering a Windows button");
  }
}

// Concrete Product - MacButton
public class MacButton implements Button {
  @Override
  public void render() {
    System.out.println("Rendering a Mac button");
  }
}
We do it like this
// Abstract Factory
public interface GUIFactory {
  Button createButton();
}

// Concrete Factory - WindowsFactory
public class WindowsFactory implements GUIFactory {
  @Override
  public Button createButton() {
    return new WindowsButton();
  }
}

// Concrete Factory - MacFactory
public class MacFactory implements GUIFactory {
  @Override
  public Button createButton() {
    return new MacButton();
  }
}
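A possible client that picks a concrete factory and works only with the abstract types (the Application class below is my own illustration, not part of the original example):
// Client code - depends only on GUIFactory and Button
public class Application {
  private final Button button;

  public Application(GUIFactory factory) {
    this.button = factory.createButton();
  }

  public void run() {
    button.render();
  }

  public static void main(String[] args) {
    new Application(new WindowsFactory()).run(); // Rendering a Windows button
    new Application(new MacFactory()).run();     // Rendering a Mac button
  }
}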


Amazon Web Services EC2 Instance Types

The predefined EC2 instance families are as follows.
They are grouped into families that emphasize some possibilities for your workloads:

General Purpose – also known as balanced instances, best for web servers, microservices, small and medium databases, development environments, and code repositories.

Compute Optimized – designed for compute-intensive workloads, like batch processing, data analytics, scientific modeling, dedicated gaming servers, machine learning, and high-performance computing.

Memory-Optimized – memory-intensive applications that process large data sets in memory, such as databases and real-time streaming.

Accelerated Computing – used for graphics processing, machine learning, data pattern matching, and other numerically intensive workloads.

Storage Optimized – designed for high, sequential read and write access to very large data sets on local storage. Best for NoSQL databases, in-memory databases, data warehousing, Elasticsearch, analytics workloads.
General Purpose - Burstable Instances
The explanation is as follows
In AWS, there are many Instance Families. One of them is burstable general-purpose instances, which are basically T Instance Family.

The T Instance Family offers a baseline CPU performance but it also has the ability to burst above the baseline at any time as long as required. Which is essential for business-critical or unknown behavior of the workloads.
What Does Burstable Mean
The explanation is as follows
Burstable Instances earn CPU credits while running below the baseline and spending them when bursting.
Some concepts are as follows
Earned Credits: The amount of credits an instance earns while running
Used Credits: When a burstable instance is in the running state, it will continuously use CPU credits. 
Accrued Credits: Difference between the earned credits and used credits is called accrued credits.
Example - General Purpose T2.medium
The explanation is as follows
For a typical, simple microservice application, a minimum configuration of t2.medium instance type should do the work. T2 instances are the lowest-cost general purpose instance type. You can easily change your instance types if after a while your needs change.
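As a sketch of launching such an instance (the AMI ID below is a placeholder, and the credit-specification option is how the T-family credit mode is normally chosen):
aws ec2 run-instances \
  --image-id ami-12345678 \
  --instance-type t2.medium \
  --count 1 \
  --credit-specification CpuCredits=standard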