It took me a long time to decide on the best intro for this article. I should have published it more than a year ago, so I will just say:
This article is a continuation of the first one. Please go back and check it out if you have not already, since this one builds heavily on it.
We are going to cover the following in this article:
1. Implementing the Sobel filter along the X axis (detecting vertical edges).
2. Implementing the Sobel filter along the Y axis (detecting horizontal edges).
3. Implementing the Sobel filter along the X and Y axes together.
The main use of the Sobel filter is edge detection. It works by approximating the gradient of the image intensity at each pixel, which gives both the direction of the largest increase from light to dark and the rate of change in that direction. An edge can be defined as a set of contiguous pixel positions where a sudden change in intensity values occurs. So:
Sobel Filter -> Edge Detection -> Object Boundaries.
The edges can be found by convolving a 3x3 Sobel kernel over the image. Convolution is performed for every pixel: each kernel value is multiplied by the corresponding pixel value from the image, the products are summed, and the source pixel is replaced with the result, as shown in the example figure below.
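The convolution step above can be sketched as follows. This is a minimal NumPy example, not the article's own implementation: the kernel shown is the standard Sobel-X kernel, and the tiny 5x5 test image with a vertical edge is made up for illustration.

```python
import numpy as np

# Standard 3x3 Sobel kernel for the X direction.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def convolve_pixel(img, row, col, kernel):
    """One convolution step: multiply the 3x3 neighborhood of
    (row, col) by the kernel element-wise and sum the products."""
    region = img[row - 1:row + 2, col - 1:col + 2]
    return int(np.sum(region * kernel))

# Hypothetical 5x5 grayscale image with a vertical edge in the middle.
img = np.array([[0, 0, 0, 255, 255]] * 5)

# Convolve every interior pixel; boundary pixels keep their values.
out = img.copy()
for r in range(1, img.shape[0] - 1):
    for c in range(1, img.shape[1] - 1):
        out[r, c] = convolve_pixel(img, r, c, SOBEL_X)
```

The response is large only where the intensity changes from one column to the next, which is exactly the vertical edge the Sobel-X kernel is designed to highlight.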
For boundary pixels, where the 3x3 kernel would extend outside the image, you have either of the two following options:
1. Expand the image by padding the boundaries with zeros before applying the filter.
2. Keep the original values for the boundary pixels.
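Both boundary strategies can be sketched in a few lines of NumPy. The 3x3 test image here is a made-up example; the article itself does not prescribe specific values.

```python
import numpy as np

# A small hypothetical 3x3 grayscale image.
img = np.arange(9).reshape(3, 3)

# Option 1: zero-pad the image by one pixel on every side, so the
# 3x3 kernel can be centered on every original pixel.
padded = np.pad(img, pad_width=1, mode='constant', constant_values=0)
# padded is now 5x5, and padded[1:-1, 1:-1] is the original image.

# Option 2: start the output as a copy of the source, convolve only
# the interior, and leave the boundary pixels unchanged.
out = img.copy()
```

Zero-padding produces gradient values at the borders too (at the cost of artificial edges against the zeros), while copying the boundary pixels keeps the output the same size without inventing data.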
The image is stored in memory as a contiguous array. For example, if the image resolution is 25x25, it has 3 channels (RGB), and each channel value is one byte (an integer from 0 to 255), then the total image size in memory is:
img_size_in_bytes = number_of_channels * img_width * img_height = 3 * 25 * 25 = 1875 bytes.
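The size formula, and the row-major indexing it implies, can be checked with a short sketch. The helper `pixel_offset` is a hypothetical name introduced here to show how a pixel position maps into the flat interleaved-RGB buffer; it is not part of the article's code.

```python
# Worked example of the size formula: a 25x25 RGB image with one
# byte per channel (values 0-255).
width, height, channels = 25, 25, 3
img_size_in_bytes = channels * width * height  # 3 * 25 * 25

def pixel_offset(row, col, width=25, channels=3):
    """Byte offset of pixel (row, col) in a contiguous, row-major,
    interleaved-RGB buffer (hypothetical helper for illustration)."""
    return (row * width + col) * channels
```

Moving one pixel to the right advances the offset by `channels` bytes, and moving one row down advances it by `width * channels` bytes, which is what "contiguous array" means in practice here.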