Friday, February 26, 2021

OpenCV Scharr Edge Detection

Introduction
OpenCV provides several methods for edge detection.

Example
The explanation is as follows
OpenCV has several ways to compute edge detection, and we are going to use Scharr, as it performs quite well. Scharr computes a derivative, so it detects the difference in colors in the image. We are interested in the X axis, and we want the result to be a 64-bit float, so our call would be like this:
We do it like this
edge_x = cv2.Scharr(channel, cv2.CV_64F, 1, 0)
The explanation is as follows
As Scharr computes a derivative, the values can be both positive and negative. We are not interested in the sign, but only in the fact that there is an edge. So, we will take the absolute value: 
We do it like this
edge_x = np.absolute(edge_x)
The explanation is as follows
 Another issue is that the values are not bounded on the 0–255 value range that we expect on a single channel image, and the values are floating points, while we need an 8-bit integer. We can fix both the issues with the following line: 
We do it like this
edge_x = np.uint8(255 * edge_x / np.max(edge_x))
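The three steps can be combined into one small script. A minimal sketch, assuming a grayscale image read from a hypothetical file name "input.png" used as the single channel:
import cv2
import numpy as np

# Assumed input: a single-channel (grayscale) image used as the channel
channel = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

edge_x = cv2.Scharr(channel, cv2.CV_64F, 1, 0)    # derivative along X, 64-bit float
edge_x = np.absolute(edge_x)                      # keep the magnitude, drop the sign
edge_x = np.uint8(255 * edge_x / np.max(edge_x))  # scale to 0-255, convert to 8-bit

cv2.imwrite("edges_x.png", edge_x)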


Wednesday, February 24, 2021

Ingress Filtering

Introduction
The explanation is as follows
a technique used to ensure that incoming packets are actually from the networks from which they claim to originate.
Example
Suppose we have a network like this
                                                       | 192.1.1.1
                                 +------------------- GW4
                                 | 192.168.1.1
             +----------------- GW2------+
             | 192.168.16.1              | 192.168.33.1
      +---- GW1 -----+             ---- GW3 ----
      |              |
192.168.16.15   192.168.16.243
     YOU           ALICE
Even if we send an IP packet with a source address different from our own, "Ingress Filtering" drops the packet. The explanation is as follows
You send out a packet with a source address of 10.0.0.36 and directed to, say, 172.16.16.172. GW1, serving your network which is 192.168.16.0/24, only expected to receive packets matching 192.168.16.0/24, and your packet doesn't match, so it is dropped. You could spoof a connection coming from your "neighbour" Alice, but no more.

Even if GW1 did not complain, it would forward the packet to the next hop, which serves the whole 192.168.0.0/16 branch and also would ignore your packet.

And so on and so forth (the IP I used are actually not all that routable, but it's an example).
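A toy sketch of the check GW1 performs, using Python's standard ipaddress module; the /24 prefix and the sample addresses come from the diagram above, and the function name is made up for illustration:
import ipaddress

# The network GW1 serves, taken from the diagram above
EXPECTED_PREFIX = ipaddress.ip_network("192.168.16.0/24")

def should_forward(source_ip: str) -> bool:
    # Drop any packet whose source address is not from the served network
    return ipaddress.ip_address(source_ip) in EXPECTED_PREFIX

print(should_forward("192.168.16.243"))  # True  - Alice, same subnet
print(should_forward("10.0.0.36"))       # False - spoofed source, dropped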

MongoDB bulkWrite

Introduction
The explanation is as follows
Use bulk write to write multiple database changes with a single batch query
Example
We do it like this
//Don'ts
db.collection.updateOne({itemId: 1}, { $set: { "stock" : 3} })
db.collection.updateOne({itemId: 2}, { $set: { "stock" : 1} })
db.collection.updateOne({itemId: 3}, { $set: { "stock" : 4} })
//Do's
db.collection.bulkWrite(
   [
     { updateOne :
       {
         "filter": {itemId: 1},
         "update": {$set: {"stock" : 3}
       }
     },
     { updateOne :
       {
         "filter": {itemId: 2},
         "update": {$set: {"stock" : 1}
       }
     },
     { updateOne :
       {
         "filter": {itemId: 3},
         "update": {$set: {"stock" : 4}
       }
     }      
   ]
);
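The same batch can be issued from Python with PyMongo's bulk_write. A minimal sketch; the connection string, database, and collection names are placeholders:
from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")
collection = client["shop"]["items"]

# One round trip instead of three separate updateOne calls
result = collection.bulk_write([
    UpdateOne({"itemId": 1}, {"$set": {"stock": 3}}),
    UpdateOne({"itemId": 2}, {"$set": {"stock": 1}}),
    UpdateOne({"itemId": 3}, {"$set": {"stock": 4}}),
])
print(result.modified_count)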

Deep Learning

Introduction
Deep Learning is actually a subset of Machine Learning. The explanation is as follows
A part of machine learning, DL can process and use data to better understand the context in the unstructured data, thereby improving the accuracy of automated analysis of the text. 
Concepts
The following concepts are used
1. Cost Function
2. Activation Functions
3. Recurrent Neural Networks (RNN)
4. Backpropagation
5. Long Short-Term Memory Networks (LSTM)
6. Convolutional Neural Networks (CNNs)
7. Hyper-parameters
8. Batch and Stochastic Gradient Descent
Deep Learning Uses Artificial Neural Networks Under the Hood
The explanation is as follows.
Deep learning can be termed as an approach to machine learning where learning from past data happens based on artificial neural networks (a mathematical model mimicking the human brain).
What Is an Artificial Neural Network
The explanation is as follows. It consists of neurons. Each neuron has certain weights and performs one task. For example, suppose we have neurons that detect whether a picture contains a man with a moustache: one neuron looks for a moustache, another looks for a man. If the combined output of the two is above a certain threshold, the answer is yes; otherwise it is no.
An artificial neural network is a bunch of computation units called neurons laid out in one or more layers while the neurons being connected with each other. Neuron as a computation unit can be expressed as a weighted sum of inputs and looks like the following:

\(w_0 + w_1x_1 + w_2x_2 + w_3x_3 + ... + w_nx_n\)

In the above equation, the \(w_n\) represents the weight and \(x_n\) represents the corresponding input. Each neuron is associated with what is called an activation function, which decides on the output of the neuron. When all the neurons across different layers are connected with each other, the neural network is also called a fully-connected neural network.

A neural network having just one neuron can be called a single-layer neural network; it is called a perceptron. A neural network having one input layer, one hidden layer, and one output layer is called a multi-layer perceptron (MLP) network.
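A minimal NumPy sketch of the weighted-sum formula above; the weights, bias, and inputs are made-up numbers, and a sigmoid is used as the activation function:
import numpy as np

w = np.array([0.4, -0.2, 0.7])  # weights w_1 .. w_n
b = 0.1                         # bias term w_0
x = np.array([1.0, 2.0, 3.0])   # inputs x_1 .. x_n

z = b + np.dot(w, x)            # w_0 + w_1*x_1 + ... + w_n*x_n

output = 1.0 / (1.0 + np.exp(-z))  # sigmoid activation decides the neuron's output
print(output)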
What Is a Dense Network
The explanation is as follows. In other words, each neuron in a layer is connected to every neuron in the next layer
A layer that is densely connected to its preceding layer means that every neuron in the layer is connected to every neuron in the layer above it. In artificial neural networks, this layer is the one that is most frequently utilized.
What Is a Deep Neural Network
A neural network with two or more hidden layers is called a deep neural network. It looks like this


Each neuron in a layer is connected to the neurons in the next layer.
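A minimal Keras sketch of such a fully connected deep network with two hidden layers; the layer sizes, activations, and input shape are arbitrary choices for illustration:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=[4]),  # hidden layer 1
    tf.keras.layers.Dense(8, activation="relu"),                    # hidden layer 2
    tf.keras.layers.Dense(1)                                        # output layer
])
model.summary()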

TensorFlow
The explanation is as follows
TensorFlow is an open-source software library for machine and deep learning. It was created by the Google Brain Team and has a wide range of uses, such as neural networks for computer vision, natural language processing, and self-driving vehicles.

With high performance and efficiency, TensorFlow enables users to construct, optimize, and assess mathematical expressions involving multi-dimensional arrays. Researchers and developers choose it because it offers a versatile and high-level API for building, honing, and deploying machine learning models. Additionally, TensorFlow has a sizable and expanding contributor community that offers a plethora of tools, tutorials, and resources for users to employ.
Example
We do it like this
layer_0 = tf.keras.layers.Dense(units=1, input_shape=[1])
model = tf.keras.Sequential([layer_0])
The explanation is as follows
To create the first layer and assemble it into a model, you can start with the tf.keras.layers.Dense method, providing two parameters as input for a basic Dense model:

- units=1 is the number of neurons in the layer, and it will represent internal variables the layer must use to try to figure out how to solve the problem.
- input_shape=[1] is the other parameter which indicates the input to this layer is a single value.
Then the model is compiled. We do it like this
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
The explanation is as follows
Now it is time to compile our model before starting to fit our training input into it. Compiling the model is pretty simple; you just need to call the compile method on your model object. But to do so, you also need to provide the loss and optimizer functions.

In this case, we are using the mean squared error as our loss function. The degree of inaccuracy in statistical models is gauged by the mean squared error, or MSE. Between the observed and projected values, it evaluates the average squared difference. The MSE is equal to 0 when a model is error-free, and its value increases when the model error does as well.

To do the optimization, we are using the Adam optimization algorithm. In place of the conventional stochastic gradient descent method, Adam is an optimization technique that may be used to iteratively update network weights depending on training data.
Then the model is fed with the training data. We do it like this
history = model.fit(inputs, outputs, epochs=400, verbose=0)
The explanation is as follows
After compiling the model, now it is time to fit our training data into the model. Using the fit method and providing the input samples and corresponding outputs, we can train our model.

- epochs: An epoch is an iteration over the entire inputs and outputs data provided which in our case we have set to 400.
- verbose: This argument controls how much output the method produces and can be set to ‘auto’, 0, 1, or 2. 0 is the silent mode.
And the output is obtained
print(model.predict([39])) # [[243.605]]
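Putting the pieces together, a runnable sketch; the training points below are made up from a simple linear relation (the original post's data is not shown), so the printed prediction will not match the value above exactly:
import numpy as np
import tensorflow as tf

# Made-up training data roughly following y = 6x + 10
inputs  = np.array([1, 2, 3, 4, 5, 10, 20], dtype=float)
outputs = np.array([16, 22, 28, 34, 40, 70, 130], dtype=float)

layer_0 = tf.keras.layers.Dense(units=1, input_shape=[1])
model = tf.keras.Sequential([layer_0])

model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))

history = model.fit(inputs, outputs, epochs=400, verbose=0)

print(model.predict(np.array([39.0])))  # close to 6*39 + 10 = 244 for this data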

Tuesday, February 23, 2021

OpenCV cvtColor Method

Introduction
Its signature is as follows
void cvtColor (InputArray src, OutputArray dst, int code )
It converts the color space of an image. OpenCV loads images in BGR format.
With GRAY the source image has 1 channel
With BGR the source image has 3 channels
With BGRA the source image has 4 channels

Before converting the source image, make sure it has the correct number of channels.

Example
One reason to convert an image's color space is to find contours. We do it like this
cv::Mat input = cv::imread("../inputData/RotatedRect.png");

// convert to grayscale (you could load as grayscale instead)
cv::Mat gray;
cv::cvtColor(input,gray, CV_BGR2GRAY);

// compute mask (you could use a simple threshold if the image is always as good as
// the one you provided)
cv::Mat mask;
cv::threshold(gray, mask, 0, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU);

// find contours (if always so easy to segment as your image, you could just
// add the black/rect pixels to a vector)
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(mask, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Example - 4 channels
We do it like this.
Mat srcColor = ...
Mat dstGray;
cvtColor(srcColor, dstGray, CV_BGRA2GRAY);
Example
For a more general solution, we do it like this.
const cv::Mat img = ...;

//Convert the input image to the input image format
cv::Mat sample;
if (img.channels() == 3 && num_channels_ == 1)
  cv::cvtColor(img, sample, cv::COLOR_BGR2GRAY);
else if (img.channels() == 4 && num_channels_ == 1)
  cv::cvtColor(img, sample, cv::COLOR_BGRA2GRAY);
else if (img.channels() == 4 && num_channels_ == 3)
  cv::cvtColor(img, sample, cv::COLOR_BGRA2BGR);
else if (img.channels() == 1 && num_channels_ == 3)
  cv::cvtColor(img, sample, cv::COLOR_GRAY2BGR);
else
  sample = img;
Example
If we want to modify the same image in place, we do it like this. In newer OpenCV versions the constants starting with CV_ were replaced by constants starting with COLOR_.
cvtColor (image, image, COLOR_BGR2GRAY);
Example
To convert an image that was not loaded by OpenCV (so it is in RGB order) to grayscale, we do it like this.
Mat srcColor = ...;
Mat srcGray;
cvtColor(srcColor,srcGray, CV_RGB2GRAY);
Example
To convert between other color spaces, we do it like this
cvtColor(img, dstimg, cv::COLOR_YUV2BGR);
cvtColor(img, dstimg, cv::COLOR_BGR2RGB);
cvtColor(img, dstimg, CV_BGR2HSV);

Monday, February 22, 2021

MongoDB aggregate and avg

Example
We do it like this
> db.car.aggregate([
... { $group: { _id: "$make", avg_price: { $avg: "$price" }}}
... ])
{ "_id" : "hyundai", "avg_price" : 36333.333333333336 }
{ "_id" : "BMW", "avg_price" : 47400 }
{ "_id" : "ford", "avg_price" : 35333.333333333336 }
In SQL it is as follows
mysql> select make, avg(price)
    -> from car
    -> group by make;
+---------+------------+
| make    | avg(price) |
+---------+------------+
| BMW     | 47400.0000 |
| ford    | 35333.3333 |
| hyundai | 36333.3333 |
+---------+------------+
Example
We do it like this. It is the same as the example above; only the 2019 models are selected.
> db.car.aggregate([
... { $match: { year: "2019" }},
... { $group: { _id: "$make", avg_price: { $avg: "$price" }}}
... ])
{ "_id" : "BMW", "avg_price" : 53000 }
{ "_id" : "ford", "avg_price" : 42000 }
{ "_id" : "hyundai", "avg_price" : 41000 }
In SQL it is as follows
mysql> select make, avg(price)
    -> from car
    -> where year = "2019"
    -> group by make;
+---------+------------+
| make    | avg(price) |
+---------+------------+
| BMW     | 53000.0000 |
| ford    | 42000.0000 |
| hyundai | 41000.0000 |
+---------+------------+

MongoDB update Method

Example
We do it like this
> db.car.update(
... { make: "bmw" },
... { $set: { make: "BMW" }},
... { multi: true }
... )
WriteResult({ "nMatched" : 5, "nUpserted" : 0, "nModified" : 5 })
In SQL it is as follows
mysql> update car
    -> set make = "BMW"
    -> where make = "bmw";
Query OK, 5 rows affected (0.05 sec)
Rows matched: 5  Changed: 5  Warnings: 0

Machine Learning - Support Vector Machines (SVM) Method

Introduction
This method is popular. The explanation is as follows
Support Vector Machines (SVM) are one of the most popular supervised learning methods in Machine Learning(ML). Many researchers have reported superior results compared with older ML techniques.

SVM can be applied on regression problems as well as classification problems, ....
SVM can be used for both linear and nonlinear applications.

Linear regression means finding a linear equation, i.e. the equation of a line, from the points at hand.

SVM Linear Applications
The explanation is as follows
A popular classifier for linear applications because SVM’s have yielded excellent generalization performance on many statistical problems with minimal prior knowledge and also when the dimension of the input space(features) is very high.
SVM Nonlinear Applications
For nonlinear applications, SVM relies on the kernel trick described below, which maps the inputs into a higher-dimensional space where a linear separator can be found.
It looks like this
Maximum margin hyperplane
The explanation is as follows
The objective is to find the line passing as far as possible from all points
Kernel Trick
The explanation is as follows
SVM uses a Kernel trick to transform to a higher nonlinear dimension where an optimal hyperplane can more easily be defined.
The kernel types are as follows
- Linear kernel
- Polynomial kernel
- RBF - Radial Basis Function kernel
- Gaussian kernel
- Hyperbolic tangent kernel
Neural Networks vs SVMs
The optimization problem behind SVM is convex. The explanation is as follows
One important argument is SVM is convex but NN is generally not. Having a convex problem is desirable because we have more tools to solve it more reliably.

If we know our data, we can pick a better model to fit data better. For example, if we have some data like donut shape. Like this
Choosing the right kernel is also important. The explanation is as follows
using SVM with right kernel is better than using NN and NN may overfit data in this case.
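A small scikit-learn sketch of this point, comparing a linear kernel with an RBF kernel on a toy donut-shaped dataset; the dataset and parameters are arbitrary choices for illustration:
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Toy donut-shaped data: an inner circle of one class inside a ring of the other
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf").fit(X, y)  # kernel trick: implicit higher-dimensional mapping

print("linear kernel accuracy:", linear_svm.score(X, y))  # poor, data is not linearly separable
print("RBF kernel accuracy:", rbf_svm.score(X, y))        # near perfect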
Neural networks are actually older than SVMs. The explanation is as follows.
Historically, neural networks are older than SVMs and SVMs were initially developed as a method of efficiently training the neural networks. So, when SVMs matured in 1990s, there was a reason why people switched from neural networks to SVMs. Later, as data sets grew larger and more complex, so that feature selection became a (even bigger) problem, while, at the same time, computational power rose, people switched back again.

This development already suggests that both have their strengths and weaknesses and that there is, as Haitao says, no free lunch.

Essentially, both methods do some kind of data transformation to "send" them into a higher dimensional space. What the kernel function does for the SVMs, the hidden layers do for neural networks. The last, output layer in the network also performs a linear separation of the so transformed data. So this is not the core difference.
Her iki yöntem de verinin boyutunu (dimension) artırır. Açıklaması şöyle.
As you can see below, a two-layer neural network, with 5 neurons in the hidden layer, can perfectly separate the two classes. The blue class can be fully enclosed in a pentagon (pale blue) area. Each neuron in the hidden layer determines a linear boundary---a side of the pentagon, producing, say, +1 when its input is a point on the "blue" side of the line and -1 otherwise (it could also produce 0, it doesn't really matter).

I have used different colours to highlight which neuron is responsible for which boundary. The output neuron (black) simply checks (performs a logical AND, which is again a linearly separable function) whether all hidden neurons give the same, "positive" answer. Observe that this last neuron has five inputs. I.e. its input is a 5-dimensional vector. So the hidden layers have transformed 2D data into 5D data.

The following explains how SVMs draw the boundary, and why they struggle as the data set grows
Notice, however, that the boundaries drawn by the neural network are somewhat arbitrary. You can shift and rotate them slightly without really affecting the result. How the network draws the boundary is somewhat random; it depends on the initialisation of the weights and on the order you present the training set to it. This is where SVMs differ: They are guaranteed to draw the boundary mid-way between the closest points of the two classes! It can be (has been) shown that this boundary is the optimal one. Finding the boundary is a convex (quadratic) optimisation problem for which fast algorithms exist. Also, the kernel trick has the computational advantage that it's usually much faster to compute a single non-linear function than to pass the vector through many hidden layers.

However, since SVMs never compute the boundary explicitly, but through the weighted sum of the kernel functions over the pairs of the input data, the computational effort scales quadratically with the data set size. For large data sets this quickly becomes impractical.

Also, when the data are high-dimensional (think of images, with millions of pixels) the SVMs might become overwhelmed by the curse of dimensionality: It becomes too easy to draw a good boundary on the training set, but which has poor generalisation properties. Convolutional neural networks, on the other hand, are capable of learning the relevant features from the data.
So the basic advice is as follows
In summary, my suggestion is to use SVMs for low-dimensional, small data sets and neural networks for high-dimensional large data sets.