Thursday, March 31, 2022

GitHub

1. Linking to GitHub
To link to specific lines in a file, append #L<StartLineNo>-L<EndLineNo> to the end of the file's URL.

Example
We do it like this
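For example, a link to lines 10 through 20 of a file would look like the URL below (the repository and file here are made up; only the #L10-L20 suffix matters)
https://github.com/OrcunColak/some-repo/blob/main/README.md#L10-L20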

2. Fork
Forking now only brings over the "default" branch. To get another branch as well, I go to my own repo, click "View all branches", click the "New branch" button, and give the original GitHub repository as the source.

3. Notifications
To see the Pull Requests on GitHub where my review is requested, we do it like this
https://github.com/notifications?query=reason%3Areview-requested

4. Search
Example - file name
To search by file name with filename:XX, we do it like this
https://github.com/search?q=org%3AOptivaInc+filename:Cd_fi.xml

5. Searching Inside Files
The in:file search qualifier is used

Example - keyword
Searches for the word "octocat"
https://github.com/search?q=octocat+in%3Afile&type=Code
Example
We do it like this
SpringJUnitConfig  lang:Java owner:OrcunColak 

6. Searching Inside the Files of a Specific Repository

Example
Searches for the word "octocat" in files across repositories belonging to the OptivaInc organization
https://github.com/search?q=org%3AOptivaInc+octocat+in%3Afile&type=Code
We do it like this
https://github.com/search?q=org%3AOptivaInc+INTPcrbdb%3A+in%3Afile&type=Code
Example
Searches for the word "octocat" only in files with the gradle extension, across repositories belonging to the OptivaInc organization
https://github.com/search?p=1&q=org%3AOptivaInc+octocat+in%3Afile+extension%3Agradle&type=Code
Issue
Searching by Created By
We do it like this
https://github.com/hazelcast/hazelcast/issues/created_by/ocolak
We can type the following in the filter box
is:open is:issue archived:false author:ocolak
To look at labels across two repos, we do it like this
is:issue repo:hazelcast/hazelcast repo:hazelcast/hazelcast-enterprise is:open label:"Type: Test-Failure" sort:created-desc 
GitHub Pull Request
If a comment was made on the review opened for a Pull Request, the person who raised the question performs Resolve Conversation.

GitHub Review
1. Enter comments on the code. Comments can be entered with Add single comment or Start a review. The explanation is as follows
There are two options:
1. Add a single comment
2. Start a review

If you click on add single comment, the comment will be made immediately.

If you click on start a review, you’ll have the chance to write more comments before sending them at once. To end the review, you need to click on the Review changes button and select submit review.
If we chose Start a review, every comment we enter is shown as Pending. When the review is finished, the Review changes button is clicked and Request changes is selected.

Git Hook
The explanation is as follows
Git hooks are scripts that run automatically every time a particular event occurs in a Git repository. They let you customize Git’s internal behavior and trigger customizable actions at key points in the development life cycle.
For Maven, the git-build-hook plugin can be used

Web Hook
The explanation is as follows
Go to the settings of your GitHub repository and go to Webhooks. Add a new webhook, add the previously generated public URL + “/github-webhook/” to the Payload URL and change the content type to application/json. Select Just the push event to trigger the webhook, and finally click Add webhook.
It looks like this




Thursday, March 24, 2022

Jaro-Winkler Similarity

Jaro Similarity
The explanation is as follows
Created by Matthew A. Jaro in 1989, the Jaro Similarity metric compares two strings and gives us a score that represents how similar they are.

The result is a number between 0 and 1, where 0 means the strings are completely different and 1 means they match exactly.

The first step to calculating the Jaro similarity is to count the characters that match between the two strings.

But, to be considered a 'match', the characters do not need to be in the same place in both strings - they just need to be near to each other.

This accounts for the common typing mistake where you accidentally enter some characters in the wrong order.
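For reference, the textbook definition (not quoted from the article above) is: with m matching characters, t transpositions (half the number of matched characters that appear in a different order) and string lengths |s1| and |s2|,
sim_jaro = 0 if m = 0, otherwise sim_jaro = (1/3) * (m/|s1| + m/|s2| + (m - t)/m)
Two characters count as a match only if they are identical and at most floor(max(|s1|, |s2|) / 2) - 1 positions apart.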
Jaro-Winkler Similarity
The explanation is as follows
This modification of Jaro Similarity was proposed in 1990 by William E. Winkler.

The 'Jaro-Winkler' metric takes the Jaro Similarity above, and increases the score if the characters at the start of both strings are the same.

In other words, Jaro-Winkler favours two strings that have the same beginning.
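Again as a reference definition (not quoted from the article), the score becomes
sim_jw = sim_jaro + l * p * (1 - sim_jaro)
where l is the length of the common prefix, capped at 4, and p is the prefix scaling factor, typically 0.1.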

Monday, March 21, 2022

DevOps Maturity Assessment Models

Introduction
The DevOps maturity phases are as follows
Phase 0: Disorganized 
No DevOps process is in place, or the management has no idea how beneficial automation and integration are. Development and operations teams work independently, and the software is tested manually. Desired changes take a long time to go into production.

Phase 1: Structured
Some processes are put in place, but they are very loosely defined, and there is little or no automation. Companies in this phase experiment with DevOps practices on small teams before scaling it to larger IT projects.

Phase 2: Managed 
A more mature process is defined, including automation for some essential tasks. Agile practices are widely adopted in the development and operations sectors.

Phase 3: Measured
Teams have a strong understanding of DevOps practices, and automation replaces most manual processes. Agile performance metrics are defined and incorporated into the process. Performance information is gathered via automation and fed back into the process to drive improvements.

Phase 4: Optimized
The focus in this phase is continuous improvement, and DevOps processes are entrenched across teams. You are running experiments across different parts of your architecture and using insights gained from your data to make changes and improve performance.

Tuesday, March 15, 2022

Apache Kafka Message Delivery Semantics - Exactly Once, i.e. Kafka Transactions

Introduction
There is an article here showing how to do this with Spring, and another article here that explains the Kafka side in detail.

What Is Exactly Once?
The explanation is as follows
So what about exactly once semantics (i.e. the thing you actually want)? When consuming from a Kafka topic and producing to another topic (as in a Kafka Streams application), we can leverage the new transactional producer capabilities in 0.11.0.0 that were mentioned above. The consumer's position is stored as a message in a topic, so we can write the offset to Kafka in the same transaction as the output topics receiving the processed data. If the transaction is aborted, the consumer's position will revert to its old value and the produced data on the output topics will not be visible to other consumers, depending on their "isolation level." In the default "read_uncommitted" isolation level, all messages are visible to consumers even if they were part of an aborted transaction, but in "read_committed," the consumer will only return messages from transactions which were committed (and any messages which were not part of a transaction).
...
So effectively Kafka supports exactly-once delivery in Kafka Streams, and the transactional producer/consumer can be used generally to provide exactly-once delivery when transferring and processing data between Kafka topics. Exactly-once delivery for other destination systems generally requires cooperation with such systems, but Kafka provides the offset which makes implementing this feasible (see also Kafka Connect). Otherwise, Kafka guarantees at-least-once delivery by default, and allows the user to implement at-most-once delivery by disabling retries on the producer and committing offsets in the consumer prior to processing a batch of messages.
Exactly Once Only Applies to Kafka Streams
The explanation is as follows
One very important and often missed detail is that Kafka supports exactly-once delivery only in Kafka Streams. To turn it on just change a config option processing.guarantee from at_least_once (default option) to exactly_once_v2.

But even Streams applications have limitations. If your consumer reads events from Kafka and makes changes in the relational database, Kafka won’t revert it. And if your consumer sends SMS-notifications Kafka can’t revert them either, even when Kafka Streams library is used. These are limitations the developer should always keep in mind.

Why do we talk about “reverting” changes? It’s because the only way to handle the message exactly once is to do it in one transaction.
What Is Needed for Exactly Once
Exactly once requires two things
1. Idempotent Writes for the Producer
2. Transactions for the Producer and Consumer

Idempotent Writes for the Producer
There are two general approaches to idempotent writes. The explanation is as follows. Kafka implements the second one
There are two approaches to getting exactly-once semantics during data production:

1. Use a single writer per partition and every time you get a network error check the last message in that partition to see if your last write succeeded
2. Include a primary key (UUID or something) in the message and deduplicate it to the consumer.
From the producer's point of view there are three failure cases. The explanation is as follows
... three cases, when the producer doesn’t receive the acknowledgement from the broker and decides to send the message again:

1. The broker didn’t receive the message, so obviously there is no ack
2. The broker received the message, but sending an ack failed
3. The broker received the message and also successfully sent the ack, but it took more than the producer’s waiting timeout

The producer will retry in all the cases, but in two of them(2 and 3) it will lead to a duplicate.

Nor I, nor probably Kafka developers know the way to solve this problem on the producer’s side. Thus all the work for deduplication lies on the broker, who guarantees that the message will be written to the log only once. To achieve this, there is a sequence number assigned to messages (I described a similar approach in the article about the Idempotent Consumer pattern [6]). So, to be exact, it’s not the idempotent producer, but the smart broker, that deduplicates messages.

To enable this functionality in Kafka it’s enough to configure the producer with the enable.idempotence=true option.
Kafka supports idempotent writes with up to 5 in-flight requests per producer. The explanation is as follows
One of the at least once guarantee scenarios given above covered the case of a producer that is unable to determine if a previous publish call succeeded, so pushes the batch of messages again. In previous versions of Kafka, the broker had no means of determining if the second batch is a retry of the previous batch. From Kafka 0.11 onwards, producers can opt-in to idempotent writes (it’s disabled by default), by setting the configuration flag enable.idempotence to true. This causes the client to request a producer id (pid) from a broker. The pid helps the Kafka cluster identify the producer. With idempotence enabled, the producer sends the pid along with a sequence number with each batch of records. The sequence number logically increases by one for each record sent by the same producer. Given the sequence number of the first record in the batch along with the batch record count, the broker can figure out all the sequence numbers for a batch. With idempotence enabled, when the broker receives a new batch of records, if the sequence numbers provided are ones it has already committed, the batch is treated as a retry and ignored (a ‘duplicate’ acknowledgement is sent back to the client).

When idempotent writes first came out in v0.11 the brokers could only deal with one inflight batch at a time per producer in order to guarantee ordering of messages from the same producer. From Kafka 1.0.0, support for idempotent writes with up to 5 concurrent requests (max.in.flight.requests.per.connection=5) from the same producer are now supported. This means you can have up to 5 inflight requests and still be sure they will be written to the log in the correct order. This works even in the face of batch retries or Kafka partition leader changes since in these cases the cluster will reorder them for you.
An explanation arguing that the default should have been 1 instead of 5 is as follows
max.in.flight.requests.per.connection — defaults to 5, which may result in messages being published out-of-order if one (or more) of the enqueued messages times out and is retried. This should have been defaulted to 1.
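A minimal Java sketch of a producer configured with these settings; the broker address and serializers below are placeholders, not taken from the quotes
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    // The broker deduplicates retried batches using the producer id + sequence numbers
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
    // acks=all is required when idempotence is enabled
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    // Up to 5 in-flight batches remain safe with idempotence (Kafka 1.0.0+)
    props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      // send records as usual; retried batches no longer produce duplicates in the log
    }
  }
}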

Transactions for the Producer and Consumer
This needs to be examined under two headings

1. Transactions for the Producer
The explanation is as follows
Transactions give us the ability to atomically update data in multiple topic partitions. All the records included in a transaction will be successfully saved, or none of them will be.

Transactions are enabled through producer configuration. Clients need to first enable idempotent writes (enable.idempotence=true) and provide a transactional id (transactional.id=my-tx-id). The producer then needs to register itself with the Kafka cluster by calling initTransactions. The transactional id is used to identify the same producer across process restarts. When reconnecting with the same transactional id, a producer will be assigned the same pid and an epoch number associated with that pid will be incremented. Kafka will then guarantee that any pending transactions from previous sessions for that pid will either be committed or aborted before the producer can send any new data. Any attempt by an old zombie instance of the producer with an older epoch number to perform operations will now fail.

Once registered, a producer can send data as normal (outside a transaction) or initiate a new transaction by calling beginTransaction. Only one transaction can be active at a time per producer. From within the transaction, the standard send method can be called to add data to the transaction. Additionally, if a producer is sourcing data from Kafka itself, it can include the progress it is making reading from the source in the transaction by calling sendOffsetsToTransaction. In Kafka the default method for saving consumer progress is to save offsets back to an internal topic in Kafka and hence this action can be included in a transaction.

Once all required messages and offsets are added to the transaction, the client calls commitTransaction to attempt to commit the changes atomically to the Kafka cluster. The client is also able to call abortTransaction if they no longer wish to go ahead with the transaction, likely due to some error.

The producer and brokers do not wait until the transaction is committed before writing the data to the data logs. Instead the brokers write records to the logs as they arrive. Transactional messages are also bracketed with special control messages that indicate where a transaction has started and either committed or aborted. Consumers now have an additional configuration parameter called isolation.level that must be set to either read_uncommitted (the default) or read_committed. When set to read_uncommitted, all messages are consumed as they become available in offset ordering. When set to read_committed, only messages not in a transaction, or messages from committed transactions are read in offset ordering. If a consumer with isolation.level=read_committed reaches a control message for a transaction that has not completed, it will not deliver any more messages from this partition until the producer commits or aborts the transaction or a transaction timeout occurs. The transaction timeout is determined by the producer using the configuration transaction.timeout.ms (default 1 minute).
Another explanation is as follows
How Kafka transactions work

After the message is written to the Kafka log and the broker guarantees that it was done without duplicates, it should be just handled and written to the next topic in one transaction. But how to do it?

A Kafka transaction is a set of changes written in the log, which itself is stored in the internal Kafka topic. This log is managed by a special entity called Transaction Coordinator. In order to invoke a transaction several steps should be completed:

The consumer finds the Transaction Coordinator. This happens when the application starts. It sends its configured transactionalID (if it exists) to the coordinator and receives the producerID. This is needed in case the application restarts and tries to register itself again with the same transactionalID. When the restarted application starts a new transaction, the Transaction Coordinator aborts all the pending transactions started by the previous instance.
When the application consumes new messages it starts the transaction
When the application writes messages to any other topics it sends this information to its Transaction Coordinator. The coordinator stores information about all the changed partitions in its internal topic.
This is an important detail. Using Kafka Streams API you don’t have to send these messages to the coordinator manually, Streams library will do it for you. But if you write messages to the topic directly, it won’t be written into the transaction log even if this topic is in the same cluster.

Another important thing about transactions is that all the messages written during the transaction will not be exposed to the consumers until this transaction is committed.

4. The transaction commits or fails. If it’s aborted, the coordinator adds an “Abort” mark to the transaction in the internal topic and the same mark to all the messages written during the transaction.

5. When the transaction commits, the process is almost the same. The coordinator adds a “Commit” mark to the transaction and to all the messages. That mark will make these messages available for the consumers.

Don’t you forget that consumer offsets are also stored in their own topic? It means that committing offsets is the same as writing a message to the output topic. And this message can also be marked “Abort” or “Commit” which affects whether the same message will be consumed the second time or not. Obviously, when it’s marked as “Commit”, it will not, and when it’s marked as “Abort” the whole transaction will start from the beginning — consuming messages.
The producer does the following
producer.initTransactions();
producer.beginTransaction();
sourceOffsetRecords.forEach(producer::send);
outputRecords.forEach(producer::send);
producer.commitTransaction();
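A slightly fuller Java sketch of the same flow, using the sendOffsetsToTransaction and abortTransaction calls mentioned in the quotes above; the group id, offsets and records are placeholders
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class TransactionalForwardSketch {
  // producer.initTransactions() is assumed to have been called once at startup
  static void processAndForward(KafkaProducer<String, String> producer,
                                List<ProducerRecord<String, String>> outputRecords,
                                Map<TopicPartition, OffsetAndMetadata> offsetsToCommit) {
    try {
      producer.beginTransaction();
      outputRecords.forEach(producer::send);
      // The consumer's progress is committed inside the same transaction
      producer.sendOffsetsToTransaction(offsetsToCommit, new ConsumerGroupMetadata("my-group")); // placeholder group id
      producer.commitTransaction();
    } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
      producer.close(); // fatal: this producer instance cannot be used any more
    } catch (KafkaException e) {
      producer.abortTransaction(); // read_committed consumers will never see the aborted records
    }
  }
}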
Some Problems with Kafka Transactions
The explanation is as follows
A critical point to understand, and why this pattern is often not a good fit to meet the requirements of a messaging application, is that all other actions occurring as part of the processing can still happen multiple times, on the occasions where the original message is redelivered. If for example the application performs REST calls to other applications, or performs writes to a database, these can still happen multiple times. The guarantee is that the resulting events from the processing will only be written once, so downstream transaction aware consumers will not have to cater for duplicates being written to the topic.

...
However database transactions and Kafka transactions are separate, and in order to perform them atomically would need to be done as a distributed transaction, using a ChainedTransactionManager for example in Spring. Using distributed transactions should generally be avoided as there is a significant performance penalty, increased code complexity, and failure scenarios that could leave the two resources (the Kafka broker and the database) with an inconsistent view of the data

Enabling Kafka Transactions For Producer
The explanation is as follows
To enable transactions the producer must be configured to enable transactions, which requires setting the producer transactional Id on the producer factory. With this in place, Kafka can now write messages using transactions. This setting also implicitly sets the producer to be idempotent. This means that any transient errors occurring during the message produce does not result in duplicate messages being written. 
...
Finally a transaction manager must be implemented to manage the transaction.

The producing of any outbound message must be surrounded by a transaction. The following is the transactional flow:
  1. First beginTransaction is called
  2. Messages are published by the Producer
  3. The consumer offsets are also sent to the Producer in order that these are included in the transaction.
  4. The commitTransaction is called to complete the transaction.
Doing this with Spring is easy. The explanation is as follows
When using Spring Kafka this boilerplate code is taken care of for the developer. They need only annotate the method responsible for writing the outbound events with @Transactional. Finally wire in a KafkaTransactionManager to the Spring context to make this available for managing the transaction. 
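A minimal sketch of that Spring Kafka approach; the class, topic name and types below are assumptions for illustration, not taken from the quote
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OutboundEventPublisher {

  private final KafkaTemplate<String, String> kafkaTemplate;

  public OutboundEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
    this.kafkaTemplate = kafkaTemplate;
  }

  // With a KafkaTransactionManager wired into the context and a transactional id
  // configured on the producer factory, the send below runs inside a Kafka transaction
  // that commits when this method returns and aborts if it throws.
  @Transactional
  public void publish(String key, String payload) {
    kafkaTemplate.send("outbound-topic", key, payload); // placeholder topic name
  }
}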
Enabling Kafka Transactions For Consumer
The explanation is as follows
In order to guarantee the exactly-once semantics a consumer must be configured with read isolation.level of READ_COMMITTED. This ensures it will not read transactional messages written to topic partitions until the message is marked as committed. (The consumer can however consume non-transactional messages that are written to the topic partition).
...
By default consumers are configured with a read isolation.level of READ_UNCOMMITTED. If a transactional message was written to a topic, for such a consumer this is therefore immediately available for consumption, whether or not the transaction is subsequently committed or aborted.
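A minimal Java consumer configuration sketch with this isolation level; the broker, group id and deserializers are placeholders
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadCommittedConsumerSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder group id
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    // Only messages from committed transactions (plus non-transactional messages) are returned
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      // subscribe and poll as usual
    }
  }
}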



Monday, March 14, 2022

Jenkinsfile Declarative Pipeline - The New Way

Introduction
The explanation is as follows. Stages can contain steps.
Declarative pipelines always begin with the word pipeline.
...
Declarative pipelines break down stages into individual stages that can contain multiple steps.
It does not allow arbitrary Groovy code. The explanation is as follows
In contrast to the scripted pipeline, the declarative Jenkins pipeline doesn't permit a developer to inject code. A Groovy script or a Java API reference in a declarative pipeline will cause a compilation failure.
The structure follows the order stages -> stage -> steps -> post -> failure.

Pipeline Block
The agent, stages, options, etc. are specified here
agent Block
Example - any
We do it like this
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
      }
  }
  ...
}
Example - none
We do it like this
pipeline {
    agent none
    stages {
      ...
      stage('build image') {
        agent { label "slave" }
        ...
    }

    stage('deploy to production') {
      agent { label "master" }
      ...
      }
    }
  }
}
The explanation is as follows
Let’s start with agent where certain phase will be executed. It can be the entire pipeline or certain stage. At top level agent, we set it to none so we need to set each stage what agent we want to use.
Example - master
We do it like this
pipeline {

  agent {
    node {
      label 'master'
    }
  }

  options {
    buildDiscarder logRotator( 
      daysToKeepStr: '16', 
      numToKeepStr: '10'
    )
  }

  stages {
    ...
  }   
}
Example - docker
We do it like this
agent {
  docker {
    image 'alxibra/forstok-apigateway:0.0.1'
    label 'slave'
  }
}
The explanation is as follows
It means we run our stage in docker environment with base image alxibra/forstok-apigateway:0.0.1 . label means which server with certain label you want to execute. we have 2 servers to run Jenkins, we label our server with master and slave .
tools
Under the tools section, the name of a plugin installed on Jenkins and a related setting are given.
Example
We do it like this. Here the JDK plugin is installed on Jenkins, and the pipeline is asked to use Java17, one of the JDK installations registered with that plugin.
pipeline {
  agent any
    
  tools {
    jdk 'Java17'
  }
    
  stages {
    stage('...') {
      steps {
        ...
      }
    }
  }
}
options
Example
We do it like this
pipeline {

  options {
    ansiColor('xterm')
  }
  ...
  stages {
    ...
  }
}
Stage Block
A stage block can contain other stage blocks.

environment
Things like credentials are specified here
Example - credentials
We do it like this. They are defined under "Manage Jenkins > Security > Manage Credentials"
stage ('QA') {
  parallel {
    stage('test') {
      ...

    environment {
      DATABASE_NAME_TEST = credentials('DATABASE_NAME_TEST')
      DATABASE_USERNAME_TEST = credentials('DATABASE_USERNAME_TEST')
      DATABASE_PASSWORD_TEST = credentials('DATABASE_PASSWORD_TEST')
      DATABASE_PORT_TEST = credentials('DATABASE_PORT_TEST')
      DATABASE_HOST_TEST = credentials('DATABASE_HOST_TEST')
      LAZADA_CLIENT_SECRET = credentials('LAZADA_CLIENT_SECRET')
      DISABLE_RATE_LIMIT = 'true'
    }
    steps {
      ...
    }
  }
}
parallel
More than one stage can be run in parallel.
Example
We do it like this
stage('QA'){
  parallel {
    stage('linter'){
      .......
    }
    stage('test') {
      ........
    }
  }
}
post
The explanation is as follows
post section is additional step after certain stage is done or fail. We define 2 conditions the success and failure . They both send notification to our slack the status. 
Example
We do it like this
stage('deploy to production') {
      
  steps {
   ...
  }
  post {
    success {
      slackSend message: "${env.JOB_NAME} is success, info: ${env.BUILD_URL}",
                    color: 'good'
    }
    failure {
      slackSend message: "${env.JOB_NAME} fails, info: ${env.BUILD_URL}",
                    color: 'danger'
    }
  }
}

when Directive
It is written inside a stage and decides whether the stage runs or not
Example
We do it like this
stage('deploy to production') {
  agent { label "master" }
  when {
    branch 'master'
  }
  ...
}
Example - not
We do it like this
when {
  not {
    branch 'master'
  }
}
Steps Block
Contains the commands to run
Example
We do it like this
steps('test') {
  sh 'bundle exec rspec'
}
withCredentials
Example
We do it like this
steps {
  withCredentials([file(credentialsId: 'APIGATEWAY_MASTER_KEY', variable: 'master_key')]){
    sh 'sudo cp /$master_key config/master.key'
    sh 'bin/production'
  }
}

Usage Examples
git
Example
We do it like this
pipeline {
  agent { label 'node-1' }
  stages {
    stage('Source') {
      steps {
        git 'https://github.com/digitalvarys/jenkins-tutorials.git'
      }
    }
    stage('Compile') {
      tools {
        gradle 'gradle4'
      }
      steps {
        sh 'gradle clean compileJava test'
      }
    }
  }
}
junit
Example
We do it like this. Here the make build system is used.
pipeline { 
  agent any 
  options {
    skipStagesAfterUnstable()
  }
  stages {
    stage('Build') { 
      steps { 
        sh 'make' 
      }
    }
    stage('Test'){
      steps {
        sh 'make check'
        junit 'reports/**/*.xml' 
      }
    }
    stage('Deploy') {
      steps {
        sh 'make publish'
      }
    }
  }
}
maven
Maven is defined under tools
Example
We do it like this
CODE_CHANGES = getGitCodeChanges()
pipeline {
  agent any // which Jenkins agent this runs on
  tools {
    maven 'Maven'
  }
  parameters {
    string(name: 'VERSION', defaultValue: '', description: '')
    choice(name: 'VERSIONS', choices: ['1.1.0', '1.2.0'], description: '')
    booleanParam(name: 'executeTests', defaultValue: true, description: '')
  }
  environment {
    NEW_VERSION = '1.3.0'
  }
  stages {
    stage("build") { // e.g. checkout, build, test, deploy, cleanup
      when {
        expression { BRANCH_NAME == 'dev' && CODE_CHANGES == true }
      }
      steps {
        echo "Building version ${NEW_VERSION}" // double quotes because of the variable
        sh "mvn install"
      }
    }
    stage("test") {
      when {
        expression { params.executeTests }
      }
      steps {
        echo 'Testing...'
      }
    }
  }
}
We can also do it like this
pipeline {
  agent any
  tools {
	maven 'mvn'
	//version 3.0.5
  }
  stages {
    stage('unit test') {
      steps {
      echo "Environment selected: ${params.envSelected}"
      sh 'mvn test -Punit-tests'
      }
      post {
        failure {
          mail to: 'vivek.sinless@gmail.com',
          subject: 'Dude your Azuga-RUC Pipeline failed. Check your Unit Tests',
          body: 'Unit Test Cases Failure'
        }
      }
    }
    stage('integration test') {
      steps {
        echo "Environment selected: ${params.envSelected}"
        sh 'mvn test -Pintegration-tests'
      }
      post {
        failure {
          mail to: 'vivek.sinless@gmail.com',
          subject: 'Dude your Azuga-RUC Pipeline failed. Check your integration tests',
          body: 'Integration Test Cases Failure'
        }
      }
    }
    stage('SonarQube Analysis') {
      steps {
        //def mvn = tool 'mvn';
        withSonarQubeEnv('sonar') {
          sh "mvn sonar:sonar"
        }
      }
    }
    stage('Build Jars') {
      steps {
        sh 'mvn clean package'
      }
    }
  } //stages
} //pipeline
Example
We do it like this. This way an mvn command is run
withMaven() {
  sh 'mvn dockerfile:build dockerfile:push exec:exec'
}
Example - maven + docker
We do it like this
pipeline {
  agent any  
  environment {
    MAVEN_ARGS=" -e clean install"
    registry = ""
    dockerContainerName = 'bookapi'
    dockerImageName = 'bookapi-api'
  }
  stages {
    stage('Build') {
      steps {
         withMaven(maven: 'MAVEN_ENV') {
           sh "mvn ${MAVEN_ARGS}"
        }
      }
    }
     
    stage('clean container') {
      steps {
        sh 'docker ps -f name=${dockerContainerName} -q | xargs --no-run-if-empty docker container stop'
        sh 'docker container ls -a -f name=${dockerContainerName} -q | xargs -r docker container rm'
        sh 'docker images -q --filter=reference=${dockerImageName} | xargs --no-run-if-empty docker rmi -f'
      }
    }
    stage('docker-compose start') {
      steps {
       sh 'docker compose up -d'
      }
    }
  }
}

parameters 
Example
We do it like this
pipeline {
  agent any
  tools {
	maven 'mvn'
	//version 3.0.5
  }
  parameters {
    choice(
	  name: 'envSelected',
	  choices: ['dev', 'test', 'prod'],
	  description: 'Please choose en environment where you want to run?'
    )
  }
  
} //pipeline
It looks like this
It is used like this
stage('Run Spring Boot App') {
  steps {
    script {
      if (env.envSelected == "dev" || env.envSelected == "test") {
        ...
      } else {
        ...
      }
    }
  }
}

sonar
Example - gradle
We do it like this
pipeline {

  agent any

  options {
    timeout(time: 20, unit: 'MINUTES')
    buildDiscarder(logRotator(numToKeepStr: '5'))
  }
  

  environment {
    JAVA_HOME='/home/ajanthan/applications/jdk-11.0.2'

    SOURCE_REPOSITORY_URL = '<GIT_URL>'
    BRANCH = 'develop'
    GIT_CREDINTIALS_ID='<GIT_CRED>'
    
    SONAR_URL='http://localhost:9000'
    SONAR_LOGIN='<SONAR_QUBE_TOKEN>'

  }

  stages {

    stage ('Checkout source') {
      steps {
        echo "Checkout SCM"
        git branch: "${BRANCH}",credentialsId: "${GIT_CREDINTIALS_ID}",url: "${SOURCE_REPOSITORY_URL}"
      }
    }
    
    stage ('Gather facts') {
      steps {
          script {
    
          version = sh (script: "./gradlew properties -q | grep \"version:\" | awk '{print \$2}'",returnStdout: true).trim();
          groupid = sh (script: "./gradlew properties -q | grep \"group:\" | awk '{print \$2}'",returnStdout: true).trim();
          artifactId = sh (script: "./gradlew properties -q | grep \"name:\" | awk '{print \$2}'",returnStdout: true).trim();
    
          }
          echo "Building project in version: $version , groupid: $groupid , artifactId : $artifactId";
        }
    }

    stage ('Build JAR') {
       steps {
        echo "Building version ${version}"
        sh (script: "./gradlew clean build -x test",returnStdout: true)
       }
    }

     stage ('Unit Tests') {
       steps {
         echo "Running Unit Tests"
         sh (script: "./gradlew test",returnStdout: true)
       }
     }
     
     stage ('Code Analysis') {
       steps {
        echo "Running Code Analysis"

        sh (script: "./gradlew sonarqube -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN -Dsonar.sources=./src/main -Dsonar.projectKey=$artifactId  -Dsonar.projectVersion=$version",returnStdout: true)

       }
     }
     

    }
}

Jenkinsfile Scripted Pipeline - The Old Way

Introduction
A Groovy DSL is used. The explanation is as follows
Jenkinsfiles, using a domain specific language based on the Groovy programming language, are persistent files that model delivery pipelines “as code”, containing the complete set of encoded steps (steps, nodes, and stages) necessary to define the entire application life-cycle
The explanation is as follows. It consists only of stages
Scripted pipelines, on the other hand, always begin with the word node
...
Scripted pipelines use Groovy code and references to the Jenkins pipeline DSL within the stage elements without the need for steps.
It is said that scripted pipelines should not be preferred. The explanation is as follows
The development community found that pipeline builders with strong Java and Groovy skills, but little experience with Jenkins, would often write complicated Groovy code to add functionality that's already available through the Jenkins DSL.
Script Syntax
1. Defining Parameters
Example
We do it like this
properties([
  parameters([
    booleanParam(
      name: 'ARTIFACTORY_PUBLISH',
      defaultValue: false,
      description: 'If checked, the publish step is executed'
    ),
    stringParam(
      name: 'CUSTOMER',
      defaultValue: "",
      description: 'the customer\'s specific packages to deploy'
    )
  ])
])
We access the boolean parameter like this
def shouldDeployToArtifactory() {
  return "${params.ARTIFACTORY_PUBLISH}" == "true"
}
We access the string parameter like this
if ("${params.CUSTOMER}" != "") {
  ...
}
2. Defining a Node
Example
The simplest form is as follows
node {
  // groovy script
}
Example
We do it like this
node {
  stage('Build') {
  }
  stage('Test') {
  }
  stage('Deploy') {
  }
}
Example
We do it like this
node ('node-1') {
  stage('Source') {
    git 'https://github.com/digitalvarys/jenkins-tutorials.git'
  }
  stage('Compile') {
    def gradle_home = tool 'gradle4'
    sh "'${gradle_home}/bin/gradle' clean compileJava test"
  }
}
checkout
Many different options can be given via the class attribute. A list is here

Git Notes
There are two usage forms
git branch: "${branch}",credentialsId: '...',url: 'git@github.com:...'
or
checkout([...])
The explanation of extensions is as follows
Extensions add new behavior or modify existing plugin behavior for different uses. Extensions help users more precisely tune plugin behavior to meet their needs.

Extensions include:
- Clone extensions modify the git operations that retrieve remote changes into the agent workspace. The extensions can adjust the amount of history retrieved, how long the retrieval is allowed to run, and other retrieval details.
- Checkout extensions modify the git operations that place files in the workspace from the git repository on the agent. The extensions can adjust the maximum duration of the checkout operation, the use and behavior of git submodules, the location of the workspace on the disc, and more.
- Changelog extensions adapt the source code difference calculations for different cases.
- Tagging extensions allow the plugin to apply tags in the current workspace.
- Build initiation extensions control the conditions that start a build. They can ignore notifications of a change or force a deeper evaluation of the commits when polling.
- Merge extensions can optionally merge changes from other branches into the current branch of the agent workspace. They control the source branch for the merge and the options applied to the merge.
Example
We do it like this. Here Git is used and a non-shallow clone is requested
stage("SCM checkout") {
  sendSlackGreenNotification("Build ${env.BUILD_NUMBER} started for ${env.JOB_NAME}. Details: ${env.BUILD_URL}")
  checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    doGenerateSubmoduleConfigurations: scm.doGenerateSubmoduleConfigurations,
    extensions: scm.extensions + [
      [$class: 'WipeWorkspace'],
      [$class: 'PruneStaleBranch'],
      [$class: 'CloneOption', noTags: false, reference: '', shallow: false], 
      [$class: 'GitLFSPull'],
      [$class: 'LocalBranch']
    ],
    submoduleCfg: [],
    userRemoteConfigs: scm.userRemoteConfigs
  ])
}

def sendSlackGreenNotification(message) {
  notifySlack(message, "good")
}
Example
We do it like this. Here a shallow clone is requested and the depth is specified.
checkout([
  $class: 'GitSCM',
  branches: [[name: "*/${params.BRANCH}"]],
  extensions: [[
    $class: 'CloneOption',
    shallow: true,
    depth:   1,
    timeout: 30
  ]],
  userRemoteConfigs: [[
    url:           params.SCM_URL,
    credentialsId: 'MY_GIT_CREDENTIALS'
  ]]
])
Example
We do it like this
String branch = "master"
String Registry = "alicanuzun/springboot"
String currentContext = "minikube"
String serviceName = "springboot"
String environment = "default"
String chartPath = "apps/springboot"

timestamps() {
  node(label: 'master') {
    stage('Git Checkout') {
      git branch: "${branch}",
        credentialsId: 'SpringBoot-GitHub',
        url: 'git@github.com:alican-uzun/springboot'
    }
    stage("Docker Build") {
      sh "docker build -t alicanuzun/springboot ."
    }
    stage("Login to DockerHub"){
      withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
       usernameVariable: 'DOCKERHUB_USERNAME', passwordVariable: 'DOCKERHUB_PASSWORD')]){
       sh('docker login --username $DOCKERHUB_USERNAME --password $DOCKERHUB_PASSWORD')
      }
    }
    stage("Push Image to DockerHub"){
      sh """
        docker tag ${Registry}:latest ${Registry}:prod-${currentBuild.number}
        docker push ${Registry}:prod-${currentBuild.number}
        """
    }
    stage('Helm Chart Git Checkout'){
      git branch: "${branch}",
        credentialsId: 'Helm-Charts-GitHub',
        url: 'git@github.com:alican-uzun/helm-charts'
    }
    stage("Deploy on Minikube Cluster"){
      sh  "helm upgrade --install " +
        "${serviceName} ${chartPath} -f ${chartPath}/values.yaml -n ${environment} " + 
        "--set image.tag=prod-${currentBuild.number} " +
        "--set image.repository=${Registry} " +
        "--namespace=${environment} "
        "--kube-context=${currentContext}"
    }
  }
}
env
The BRANCH_NAME Field
Example
We do it like this
def isDevelopBranch() {
  def current_branch = env.BRANCH_NAME
  echo 'Current branch is: ' + current_branch

  return current_branch != null && current_branch.startsWith("develop")
}

def isReleaseBranch() {
  def current_branch = env.BRANCH_NAME
  echo 'Current branch is: ' + current_branch

  return current_branch != null && current_branch.startsWith("release/")
}

def shouldDeployToArtifactory() {
  return isDevelopOrReleaseBranch() || "${params.ARTIFACTORY_PUBLISH}" == "true"
}
stage('Build') {
  ...
  if (shouldDeployToArtifactory()) {
    stage("Push to Artifactory") {
    echo "Publishing to Artifactory..."
    withCredentials([usernamePassword(credentialsId: 'ARTIFACTORY_PUBLISH',
                                      passwordVariable: 'ARTIFACTORY_PASSWORD', 
                                      usernameVariable: 'ARTIFACTORY_USER')]) {
      runGradleSteps("publish")
    }
  }
}
Using curl
Example
We do it like this
def healthTest(deploymentIp, envName) {
  STATUS_CODE = sh (
    script: "curl -sL -x snort-us-central1-a.gcp.foo.com:3128 -w '%{http_code}' $deploymentIp:10509 -o /dev/null --max-time 15",
    returnStdout: true
  )
  echo "Status code $STATUS_CODE"
  if(STATUS_CODE == "200") {
    echo "$envName environment has come up and ready to use"
  } else {
    currentBuild.result = 'ABORTED'
    error("$envName hasn\'t come up. Health test failed")
  }
}
Using Slack
Example
Suppose we have code like this
def sendSlackGreenNotification(message) {
  notifySlack(message, "good")
}

def sendSlackRedNotification(message) {
  notifySlack(message, "danger")
}

def notifySlack(text, color) {
  def slackURL = "https://foo.slack.com/services/hooks/jenkins-ci/12345"
  def slack_channel = "foo-jenkins"
  def payload = JsonOutput.toJson([text      : text,
                                   channel   : slack_channel,
                                   username  : "jenkins",
                                   icon_emoji: ":jenkins:",
                     		   color  : color])
    sh "curl -X POST -x snort-us-central1-a.gcp.foo.com:3128 --data-urlencode \'payload=${payload}\' ${slackURL}"
}
We do it like this
stage("My Stage") {
  sendSlackGreenNotification("Build ${env.BUILD_NUMBER} started for ${env.JOB_NAME} at ${env.BUILD_URL}")
  ...
  sendSlackGreenNotification("Completed for ${env.JOB_NAME}")
}
Using sh
Example
We do it like this
def execute_on_gcp(envName) {
  withEnv(['http_proxy=', 'https_proxy=']) {
    def CLUSTER_NAME = "mycluster"
     def PROJECT_ID = "product-foo"

    sh "gcloud auth activate-service-account --key-file=.gcp/product-foo.json"
    sh "gcloud container clusters get-credentials ${CLUSTER_NAME} --zone us-central1-a --project ${PROJECT_ID}"
    sh "kubectl apply -f .gcp/foo-db-${envName}.yml"
    sh "kubectl apply -f .gcp/foo-farm-${envName}.yml"
    DB_POD_NAME = sh(
      script: "kubectl get pods --all-namespaces | grep rw60-db-${envName} | awk '{print \$2}'",
      returnStdout: true
    ).trim();
    DB_POD_IP = sh(
      script: "kubectl exec ${DB_POD_NAME} -- cat /etc/hosts | grep rw60-db-${envName} | awk '{print \$1}'",
      returnStdout: true
    ).trim();
    try {
      sh "kubectl exec ${DB_POD_NAME} -- su -c \"nohup /oracle-data/data-import.sh\""
    } catch(err) {
      echo "Errors are expected on data import"
    }
  }
}
withEnv
The explanation is as follows
You can use withEnv to set environment variables inside of your pipeline. The pipeline then makes the variables accessible to external processes spawned inside it.
Example
We do it like this
node {
  withEnv(['MY_NAME_IS=Eric']) {
    sh 'echo "My name is $MY_NAME_IS"'
  }
}