Introduction
A cluster created with gcloud container clusters create can be deleted.
We do it like this:
gcloud container clusters delete <my-cluster>
kubectl config get-contexts
CURRENT   NAME                                        CLUSTER                                     AUTHINFO                                    NAMESPACE
          docker-desktop                              docker-desktop                              docker-desktop
*         gke_hazelcast-33_us-central1-c_orcun-test   gke_hazelcast-33_us-central1-c_orcun-test   gke_hazelcast-33_us-central1-c_orcun-test
kubectl config use-context my-context-name
gcloud container clusters get-credentials <my-cluster>
gcloud container clusters get-credentials orcun-test --zone us-central1-c --project hazelcast-33
gcloud container clusters get-credentials mycluster --zone us-central1-a
The minimum spanning tree ensures that your internet traffic gets delivered even when cables break.

For topological sort the graph must not contain cycles; in other words, it must be directed acyclic. The explanation is as follows:
Topological sort is used in project planning to decide which tasks should be executed first.
Disjoint sets help you efficiently calculate currency conversions between NxN currencies in linear time
Graph coloring can in theory be used to decide which seats in a cinema should remain free during an infectious disease outbreak.
Detecting strongly connected components helps uncover bot networks spreading misinformation on Facebook and Twitter.
DAGs are used to perform very large computations distributed over thousands of machines in software like Apache Spark and TensorFlow.
For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this application, a topological ordering is just a valid sequence for the tasks. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph (DAG).
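To make the "no directed cycles" condition concrete, here is a minimal sketch of topological sorting with Kahn's algorithm on an integer adjacency list (the representation is an assumption, not taken from the text):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class TopologicalSort {
    // Kahn's algorithm: repeatedly emit a vertex with in-degree 0
    // and "remove" its outgoing edges by decrementing in-degrees.
    public static List<Integer> sort(List<List<Integer>> adj) {
        int n = adj.size();
        int[] inDegree = new int[n];
        for (List<Integer> neighbors : adj)
            for (int v : neighbors) inDegree[v]++;

        Deque<Integer> ready = new ArrayDeque<>();
        for (int v = 0; v < n; v++)
            if (inDegree[v] == 0) ready.add(v);

        List<Integer> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            int u = ready.poll();
            order.add(u);
            for (int v : adj.get(u))
                if (--inDegree[v] == 0) ready.add(v);
        }
        // If order.size() < n, the graph contains a cycle and no topological order exists.
        return order;
    }
}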
Every node except the root must have 3 pointers, and there must be at least 1 leaf node.
Algorithms like Dijkstra's enable your navigation system / GPS to decide which roads you should drive on to reach a destination.
The Hungarian Algorithm can assign each Uber car to people looking for a ride (an assignment problem)
Chess, Checkers, Go and Tic-Tac-Toe are formulated as a game tree (a degenerate graph) and can be "solved" using brute-force depth or breadth first search, or using heuristics with minimax or A*
Flow networks and algorithms like maximum flow can be used in modelling utilities networks (water, gas, electricity), roads, flight scheduling, supply chains.
Bayesian networks were used by NASA to select an operating system for the space shuttle
Neural networks are used in language translation, image synthesis (such as fake face generation), color recovery of black-and-white images, speech synthesis
A --> B <--> C
      |
      v
      E <-- F --> D --> G

X <--> Y
node : neighbors
A : [B]
B : [C, E]
C : [B]
D : [G]
E : []
F : [E, D]
G : []
X : [Y]
Y : [X]
A--5--B
|    /
2   3
|  /
C
4. Basic Data Structures Used for Graphs
* 1 -> 2 4
* 2 -> 3 1
* 3 -> 2 4
* 4 -> 3 1
Any data structure such as a vector, list, or hashmap can be used to store a node's neighbors. A simple Java example:
public class SimpleAdjacencyList {
private Map<Integer, List<Vertex>> adjacencyList = new HashMap<>();
}
In the code above, it might be better to define a class like the following for the Map value:
public class DirectedGraphNode {
String val;
List<DirectedGraphNode> neighbors;
}
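For completeness, a minimal depth-first traversal sketch over the DirectedGraphNode class above; the visited set guards against the cycles discussed earlier:

import java.util.HashSet;
import java.util.Set;

public class GraphTraversal {
    // Depth-first traversal; the visited set prevents infinite loops
    // on cyclic graphs (e.g. the B <--> C cycle in the diagram above).
    static void dfs(DirectedGraphNode node, Set<DirectedGraphNode> visited) {
        if (node == null || !visited.add(node)) {
            return; // null or already visited: stop
        }
        System.out.println(node.val);
        for (DirectedGraphNode neighbor : node.neighbors) {
            dfs(neighbor, visited);
        }
    }

    static void dfs(DirectedGraphNode start) {
        dfs(start, new HashSet<>());
    }
}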
What is Session Fixation?
In Session Fixation attacks, the attacker hijacks a valid user session. We said that we sign the cookie in order to be sure that no one can hijack another user's valid session. But what if the attacker has his own valid session and tries to associate it with another user? In this case he can perform actions on behalf of the victim. The problem occurs when we do not generate new session IDs (unique identifiers) on actions like login.

How can the attacker do this?
One of the cases is when the attacker has physical access to the computer. As an attacker, I go to the university and choose one of the shared computers, then I sign into my account on vulnerablewebsite.com and, without logging out (which normally destroys the session in the server store), I leave an open login page on vulnerablewebsite.com, having first copied my valid sessionId. Now the victim is using this computer, and if the victim signs in, the attacker's sessionId is associated with the victim's account.
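A minimal countermeasure sketch, assuming the Java Servlet API: generate a new session id on successful login, so that a session id planted by the attacker can never be bound to the victim.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class LoginHandler {
    public void onLoginSuccess(HttpServletRequest request) {
        // Drop any pre-existing (possibly fixated) session ...
        HttpSession existing = request.getSession(false);
        if (existing != null) {
            existing.invalidate();
        }
        // ... and start a fresh one, which gets a brand-new sessionId.
        request.getSession(true);
        // Servlet 3.1+ alternative that keeps session attributes:
        // request.changeSessionId();
    }
}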
Redis Changes its License: From version 7.4, Redis will use the RSALv2 and SSPLv1 licenses instead of fulfilling the OSI’s definition of “open-source software.”
Redis is no longer open source. In March 2024 the project was relicensed, leaving its vast community confused.
Following the announcement on Redis relicensing, several forks of the project started to pop up, such as Redict and Garnet.
Valkey was established under The Linux Foundation by former Redis maintainers, and brought together important figures from the Redis community, as well as leading industry giants including AWS, Google Cloud, Oracle and others. Valkey has rapidly gained momentum and just reached General Availability (GA).
Valkey keeps Redis’ existing open source license, namely BSD 3-clause.
Valkey’s Technical Steering Committee currently has six members: Madelyn Olson of Amazon, Zhao Zhao of Alibaba, Ping Xie of Google, Viktor Söderqvist of Ericsson, Wen Hui of Huawei and Zhu Binbin of Tencent. They duly deserve the credit for initiating and driving the fork.

Linux distributions have also started moving to Valkey, among them AlmaLinux, Fedora, Alpine, and Ubuntu.
MessagePack is a great choice when you need a balance between speed and cross-language compatibility. It’s suitable for real-time applications and situations where data size reduction is crucial.
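To make this concrete, a small round-trip sketch assuming the msgpack-java library (org.msgpack:msgpack-core); the field values are made up:

import org.msgpack.core.MessageBufferPacker;
import org.msgpack.core.MessagePack;
import org.msgpack.core.MessageUnpacker;

public class MessagePackExample {
    public static void main(String[] args) throws Exception {
        // Serialize two fields into a compact binary buffer.
        MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
        packer.packString("order-42");
        packer.packInt(1999); // e.g. a price in cents
        packer.close();
        byte[] bytes = packer.toByteArray();

        // Deserialize in the same field order.
        MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(bytes);
        System.out.println(unpacker.unpackString()); // order-42
        System.out.println(unpacker.unpackInt());    // 1999
        unpacker.close();
    }
}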
Google has five options for running containers:
1. GKE Standard
2. GKE Autopilot
3. Cloud Run
4. App Engine Flex: has been more or less completely superseded by Cloud Run.
5. GCE with Containers: only really appropriate for very small deployments.
If you have an Azure-based deployment, you can assign specific zones to Azure Kubernetes Service (AKS). If you use Google Cloud, you can leverage Google Kubernetes Engine (GKE) to select multi-zone or regional clusters (each option offers different benefits and drawbacks in terms of redundancy, cost, and proximity to the end-user).

GKE Standard vs Autopilot
The main difference between these is that Autopilot applies a level of Google opinionation to the cluster and makes node management their responsibility. ... Interestingly, Google has recently made Autopilot the default option when provisioning new clusters, recommending it for the majority of workloads with a potentially lower TCO as per the diagram below.
Cloud Run is Google’s ‘serverless’ container offering where all you need to do is to deploy your container image to the service and Google takes care of everything else.
The main difference between the aws s3 command and the aws s3api command is that aws s3 is a higher-level abstraction that provides a more simplified and easier-to-use interface, while aws s3api provides a more direct and granular interface to the underlying S3 API.
aws s3 cp <source file or directory> <s3://bucket-name> [options]
aws --endpoint-url=http://localhost:4566 \
  s3 cp cafezin.png \
  s3://bucket-example
aws --endpoint-url=http://localhost:4566 \
  s3 ls \
  s3://bucket-example/
aws s3 mb s3://my.private.maven
aws s3 rb s3://your-unique-bucket-name --force
aws iam delete-role --role-name CrossRegionReplicationRole
aws s3 presign s3://your-unique-bucket-name/hello.txt --expires-in 3600
{ "Version": "2012-10-17", "Statement": [ { "Sid": "PublicReadGetObject", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::testbucket/*" } ] }
# sync the website folder that contains our files to the S3 bucket
aws --endpoint-url=http://localhost:4566 s3 sync .\website\ s3://testbucket
# enable static website hosting on the bucket and configure the index and error documents:
aws --endpoint-url=http://localhost:4566 s3 website s3://testbucket/ \
  --index-document index.html --error-document error.html
aws --endpoint-url=http://127.0.0.1:4566 \
  s3api create-bucket \
  --bucket bucket-example
# create s3 bucket
aws --endpoint-url=http://localhost:4566 s3api create-bucket --bucket testbucket
# list s3 buckets
aws --endpoint-url=http://localhost:4566 s3api list-buckets
# copy test file to the created bucket
aws --endpoint-url=http://localhost:4566 s3 cp test.txt s3://testbucket
# check files
aws --endpoint-url=http://localhost:4566 s3api list-objects --bucket testbucket
aws s3api put-bucket-versioning --bucket your-unique-bucket-name --versioning-configuration Status=Enabled
When interacting with LocalStack to emulate AWS services, it’s important to configure your AWS CLI or SDK to point to the LocalStack endpoint URL. This allows you to interact with LocalStack easily without having to specify the --endpoint-url option every time you run a command.

Another option is installing a tool called “awslocal”, which is a wrapper around the AWS CLI for LocalStack. It automatically configures the CLI to use the LocalStack endpoint URL, saving you from the manual step of specifying the --endpoint-url option.
awslocal is a thin wrapper and a drop-in replacement for the aws command that runs commands directly against LocalStack
docker exec -it <container_id> /bin/bash
$ awslocal sqs create-queue --queue-name test-queue
$ awslocal sqs list-queues
{
  "QueueUrls": [
    "http://localhost:4566/000000/test-queue"
  ]
}
awslocal s3 mb s3://my-test-bucket
awslocal s3api create-bucket \
  --bucket mybucket \
  --create-bucket-configuration LocationConstraint=eu-central-1
LocalStack is a cloud service emulator that runs in a single container on your laptop or in your CI environment. With LocalStack, you can run your AWS applications or Lambdas entirely on your local machine without connecting to a remote cloud provider! Whether you are testing complex CDK applications or Terraform configurations, or just beginning to learn about AWS services, LocalStack helps speed up and simplify your testing and development workflow.
LocalStack is a cloud service emulator that runs AWS services solely on your laptop without connecting to a remote cloud provider.
SELECT a || a || a || a as a from (SELECT 'aaaa' || rand() as a) t
SELECT a || a || a || a as a from (SELECT 'aaaa' as a) t
// Abstract Product - Button
public interface Button {
    void render();
}

// Concrete Product - WindowsButton
public class WindowsButton implements Button {
    @Override
    public void render() {
        System.out.println("Rendering a Windows button");
    }
}

// Concrete Product - MacButton
public class MacButton implements Button {
    @Override
    public void render() {
        System.out.println("Rendering a Mac button");
    }
}
// Abstract Factory
public interface GUIFactory {
    Button createButton();
}

// Concrete Factory - WindowsFactory
public class WindowsFactory implements GUIFactory {
    @Override
    public Button createButton() {
        return new WindowsButton();
    }
}

// Concrete Factory - MacFactory
public class MacFactory implements GUIFactory {
    @Override
    public Button createButton() {
        return new MacButton();
    }
}
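A short usage sketch for the classes above; selecting the factory from os.name is just one possible wiring:

public class Application {
    public static void main(String[] args) {
        // The client depends only on the GUIFactory and Button abstractions.
        GUIFactory factory = System.getProperty("os.name").startsWith("Windows")
                ? new WindowsFactory()
                : new MacFactory();
        Button button = factory.createButton();
        button.render(); // "Rendering a Windows button" or "Rendering a Mac button"
    }
}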
They are grouped into families that emphasize some possibilities for your workloads:
- General Purpose – also known as balanced instances; best for web servers, microservices, small and medium databases, development environments, and code repositories.
- Compute Optimized – designed for compute-intensive workloads, like batch processing, data analytics, scientific modeling, dedicated gaming servers, machine learning, and high-performance computing.
- Memory Optimized – for memory-intensive applications that process large data sets in memory, such as databases and real-time streaming.
- Accelerated Computing – used for graphics processing, machine learning, data pattern matching, and other numerically intensive workloads.
- Storage Optimized – designed for high, sequential read and write access to very large data sets on local storage. Best for NoSQL databases, in-memory databases, data warehousing, Elasticsearch, and analytics workloads.
In AWS, there are many instance families. One of them is the burstable general-purpose family, basically the T instance family. The T instance family offers a baseline CPU performance, but it also has the ability to burst above the baseline at any time, for as long as required. This is essential for business-critical workloads or workloads with unknown behavior.
Burstable instances earn CPU credits while running below the baseline and spend them when bursting.
- Earned Credits: the amount of credits an instance earns while running.
- Used Credits: when a burstable instance is in the running state, it continuously uses CPU credits.
- Accrued Credits: the difference between earned credits and used credits.
For a typical, simple microservice application, a minimum configuration of the t2.medium instance type should do the job. T2 instances are the lowest-cost general purpose instance type. You can easily change your instance type if your needs change after a while.
This pattern can be used whenever a message does not fit within the message size limit supported by the chosen message bus technology.
For example, a message may contain a set of data items that may be needed later in the message flow, but that are not necessary for all intermediate processing steps. We may not want to carry all this information through each processing step, because it may cause performance degradation and make debugging harder. Sending such large messages to the message bus directly is not recommended, because they require more resources and bandwidth to be consumed. Also, most messaging platforms have limits on message size, so you may need to work around these limits for large messages.
After the message has been processed, it can be deleted from the database.
Store the entire message payload in an external service, such as a database. Get the reference to the stored payload and send just that reference to the message bus. The reference acts like a claim check used to retrieve a piece of luggage, hence the name of the pattern. Clients interested in processing that specific message can use the obtained reference to retrieve the payload, if needed.
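A minimal sketch of the pattern; the in-memory map stands in for the external store (database, blob storage), and all names are hypothetical:

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class ClaimCheckStore {
    private final Map<String, byte[]> payloadStore = new HashMap<>();

    // Producer side: persist the large payload, send only the small reference.
    public String checkIn(byte[] largePayload) {
        String claimCheckId = UUID.randomUUID().toString();
        payloadStore.put(claimCheckId, largePayload);
        return claimCheckId; // this reference goes onto the message bus
    }

    // Consumer side: redeem the reference to get the payload back.
    // remove() also deletes the payload once the message has been processed.
    public byte[] checkOut(String claimCheckId) {
        return payloadStore.remove(claimCheckId);
    }
}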
mTLS helps ensure that the traffic is secure and trusted in both directions between a client and server. This provides an additional layer of security for users who log in to an organization’s network or applications. It also verifies connections with client devices that do not follow a login process, such as Internet of Things (IoT) devices.

Nowadays, mTLS is commonly used by microservices or distributed systems in a zero trust security model to verify each other.
TLS client authentication (requiring clients to present certs) is something you usually see on VPN servers, enterprise WPA2 WiFi access points, and corporate intranets. These are all closed systems where the sysadmin has full control over issuing certs to users, and they use this to control which users have access to which resources. This makes no sense in a public website setting, and is definitely a non-standard config for an HTTPS webserver.
The certificate of the client is only used to authenticate the client. It is not used in the key exchange, which happens before the client even sends the certificate and proves ownership of the private key. The client certificate is thus neither directly nor indirectly included in the traffic encryption or MAC.
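As an illustration of the client side of mTLS, a hedged Java sketch: it loads a client certificate for client authentication and a truststore for verifying the server. File names and passwords are hypothetical.

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class MutualTlsContext {
    public static SSLContext build() throws Exception {
        // Keystore with the client certificate and private key (client authentication).
        KeyStore clientKeys = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client.p12")) {
            clientKeys.load(in, "changeit".toCharArray());
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientKeys, "changeit".toCharArray());

        // Truststore with the CA the server certificate must chain to (server authentication).
        KeyStore trusted = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("truststore.p12")) {
            trusted.load(in, "changeit".toCharArray());
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trusted);

        // Both directions are now verified: client cert + trusted server cert.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}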
In this type of pagination, the client sends a request specifying the page number and the number of items per page. Ultimately, this translates into an SQL query using limit and offset.
GET /api/v1/bookings?page=1&size=3
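A minimal sketch of how page and size map onto LIMIT/OFFSET; the bookings table and id column are assumptions based on the URL above:

public class OffsetPagination {
    public static String toSql(int page, int size) {
        // page is 1-based: page 1 starts at offset 0
        int offset = (page - 1) * size;
        return "SELECT * FROM bookings ORDER BY id LIMIT " + size + " OFFSET " + offset;
    }

    public static void main(String[] args) {
        // GET /api/v1/bookings?page=1&size=3
        System.out.println(toSql(1, 3)); // ... LIMIT 3 OFFSET 0
    }
}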
Token pagination can be implemented in different ways. I’ll show you one of them. In the initial request, we send a typical search query. The response contains a token for retrieving the next portion of data. In subsequent requests, we send only the next token. To avoid storing unnecessary state on the backend, we can embed the search query and the ID of the last returned item into the token. We can then compress the data using gzip or snappy and convert it, for instance, to base62.
Request
/api/v1/logs?categories=bookings,orders&size=100&sort=createdAt:desc

Response
{ data: [...], next: "2H83GdysPu" }
Request
/api/v1/logs?next=2H83GdysPu

Response
{ data: [...], next: "v95Gdkta3d" }
Let’s say we have multiple sources: booking_logs and order_logs, and we need to return combined data from both sources.With token pagination, we can embed the last returned ID from each source into the token. When executing a request for 3 items, we select 3 items from each source, and then sort them on the backend to determine which specific items to return.
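A minimal sketch of such a token; the text mentions gzip/snappy and base62, but plain Base64 is used here for brevity, and the encoding scheme is an assumption:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PageToken {
    // Embed the search query and the id of the last returned item into one opaque token.
    public static String encode(String query, long lastId) {
        String state = query + "|" + lastId;
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(state.getBytes(StandardCharsets.UTF_8));
    }

    // Returns [query, lastId]; assumes the query itself contains no '|'.
    public static String[] decode(String token) {
        String state = new String(Base64.getUrlDecoder().decode(token), StandardCharsets.UTF_8);
        return state.split("\\|", 2);
    }

    public static void main(String[] args) {
        String next = encode("categories=bookings,orders&size=100&sort=createdAt:desc", 12345L);
        System.out.println(next);            // the opaque "next" token
        System.out.println(decode(next)[1]); // 12345, where the next page starts
    }
}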
The authorization server will respond with a JSON object containing the following properties:
- token_type with the value Bearer
- expires_in with an integer representing the TTL of the access token
- access_token the access token itself
- refresh_token a refresh token that can be used to acquire a new access token when the original expires
Intended for server-to-server authentication, this flow describes an approach where the client application acts on its own behalf rather than on behalf of any individual user. In most scenarios, this flow provides the means to allow users to specify their credentials in the client application, so it can access the resources under the client’s control.
The OAuth 2.0 Client Credentials Grant type is exclusively used for scenarios in which no user exists (CRON jobs, scheduled tasks, other data workloads, etc.).
...
The goal of the Client Credentials Grant is to allow two machines to communicate securely. In this grant type, you have a client (think of this as your application) making API requests to another service (this is your resource server).
Before OAuth 2.0, the way developers handled server-to-server authentication was with HTTP Basic Auth. Essentially, this boiled down to a developer that would send over a server’s unique username and password (often referred to as an ID and secret) on each request. The API service would then validate this username and password on every request by connecting to a user store (database, LDAP, etc.) in order to validate the credentials.
+---------+                                  +---------------+
|         |                                  |               |
|         |>--(A)- Client Authentication --->| Authorization |
| Client  |                                  |     Server    |
|         |<--(B)---- Access Token ---------<|               |
|         |                                  |               |
+---------+                                  +---------------+
localhost:8080/oauth/token?grant_type=client_credentials&scope=any
Please make sure you've added your clientId and client secret in the Basic Auth header of the Authorization tab in Postman; you should get a successful response like this:
{"access_token": "qbE0ipKzzX5FNj3OVe8LWu40T_s","token_type": "bearer","expires_in": 43199,"scope": "any"}