A few months ago, I wrote an article about a highpass filter and how it can be used for edge detection in image processing applications. Today, I present an improved method for extracting the edges of an image, along with a technique to modify the outcome so that it becomes more visually appealing.

When you look at a digital image, you see many little “pieces” of light close together, which we call pixels. From a computer’s point of view, a pixel is nothing but 0s and 1s. The computer sends these binary numbers to the monitor, where they are transformed into light waves of different wavelengths. In general, each pixel is the superposition of three binary numbers, and each binary number is a gray level of Red, Green or Blue, because we can create *almost* any color from just these three colors. A gray level is the amount that each color contributes to the sum. In this post we transform the image from RGB to grayscale, so when we talk about a “gray level” we mean that each pixel has a single binary number associated with it, and the smaller this number, the darker the pixel. In addition, we will use 8-bit binary numbers, which means that our image matrices will have a maximum value of 255 and a minimum of 0.
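To make this concrete, here is a minimal sketch in Python/NumPy (a stand-in for the MATLAB used later in the post). The tiny 2×2 array and the luminance weights are illustrative assumptions; the weights shown are the common ITU-R 601 ones, approximately what MATLAB's `rgb2gray` uses:

```python
import numpy as np

# A hypothetical 2x2 "image": each pixel is three 8-bit numbers (R, G, B), 0..255.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Collapse the three gray levels into one: gray = 0.299 R + 0.587 G + 0.114 B.
weights = np.array([0.299, 0.587, 0.114])
gray = (rgb.astype(float) @ weights).round().astype(np.uint8)

print(gray)                    # smaller number = darker pixel
print(gray.min(), gray.max())  # everything stays within 0..255
```

Each pixel now carries a single 8-bit value, which is exactly the form the edge-detection code below expects.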

I am sure that by now you have come up with many algorithms to detect the edges of an image (I know I had). All you have to do is find the areas where neighboring pixels have very different values. This is why the highpass filter of the previous post worked. Let me remind you of the basic idea.

An image in MATLAB is a two-dimensional matrix with values ranging from 0 to 255. We created another two-dimensional matrix and set each element equal to the difference of two adjacent elements along a row. To be more precise, let’s say our original image matrix is A. The final image matrix B is given by B(i,j) = A(i,j+1) - A(i,j). Obviously, the image matrix B has one column fewer than A. This is a very simple method to detect the edges of an image (just eight lines of code), but it’s far from perfect. Its main disadvantage is that it takes into account only one direction. The method we are going to analyze next uses four direction vectors.
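The one-direction filter can be sketched in a couple of lines of Python/NumPy; the small test image with a vertical edge is a made-up example:

```python
import numpy as np

# Hypothetical tiny grayscale image with a vertical edge down the middle.
A = np.array([[10, 10, 200, 200],
              [10, 10, 200, 200],
              [10, 10, 200, 200]], dtype=float)

# B(i, j) = A(i, j+1) - A(i, j): one column fewer than A.
B = A[:, 1:] - A[:, :-1]

print(np.abs(B))  # large values only where adjacent pixels differ
```

Note how the response lights up only across the vertical edge; a purely horizontal edge would produce row-wise differences of zero, which is exactly the weakness described above.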

The Sobel Operator uses the eight closest neighbors of a pixel and averages the direction vectors. Take a look at the image above: a 3×3 neighborhood of pixels labeled **a** through **i**, row by row, with **e** at the center. If **e** were the center of your coordinate system and you wanted to go from **i** to **a**, you would have to follow a vector of magnitude 2√2 and direction (-1, 1)/√2, so the rate of change along it is (a - i)/(2√2). For the case where your starting point is **h** and your destination **b**, you have a vector of magnitude 2 and direction (0, 1), giving a rate of change of (b - h)/2, and so on for the pairs **g**, **c** and **d**, **f**. These vectors are the directional derivatives. They show us how much our function is going to change if we move along these directions. Now, if we sum these four vectors and divide them by four, we get the average gradient vector G. This is the Sobel Operator. The only thing left to do right now is to apply it to our image and take the norm. Our new image matrix will have two rows and two columns fewer. It is now apparent why this method is superior to the one we described earlier: it takes into account every change that takes place in the neighborhood of a pixel.
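The averaged-gradient formula can be written compactly with array slicing; this is a Python/NumPy translation of the per-pixel MATLAB computation shown later in the post, applied to a made-up image with a vertical edge:

```python
import numpy as np

def avg_gradient(img):
    """Norm of the average of the four directional derivatives at each pixel."""
    A = img.astype(float)
    # The 3x3 neighborhood a..i around each interior pixel, as shifted views.
    a = A[:-2, :-2]; b = A[:-2, 1:-1]; c = A[:-2, 2:]
    d = A[1:-1, :-2];                  f = A[1:-1, 2:]
    g = A[2:,  :-2]; h = A[2:,  1:-1]; i = A[2:,  2:]
    # Sum the diagonal, horizontal and vertical contributions, divide by 4.
    Gx = 0.25 * ((c - g - a + i) / 4 + (f - d) / 2)
    Gy = 0.25 * ((c - g + a - i) / 4 + (b - h) / 2)
    return np.hypot(Gx, Gy)   # norm of the average gradient vector G

# A hypothetical image with a vertical edge: left half dark, right half bright.
img = np.array([[0, 0, 100, 100]] * 4)
G = avg_gradient(img)
print(G)   # responds along the edge, regardless of its orientation
```

The result has two rows and two columns fewer than the input, just as described above, because the neighborhood is undefined on the border.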

The extra technique that can improve the quality of our results is called Thresholding. When we apply the Sobel Operator to the image of interest, we get another grayscale image S. We now proceed to enhance the differences of our image in a binary way. We find the element of our matrix which partitions the matrix at a certain percentage; this is our threshold value. I built a MATLAB function that does just that. It takes a matrix A and a value k and outputs the element of A such that k% of A’s elements are smaller than it. The rest is straightforward. Every element of the S matrix that is smaller than the threshold is assigned the value 0; otherwise, it becomes 255. You can see the code and the new image below.
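The same percentile-based threshold can be sketched in Python/NumPy; the small matrix S and the choice k = 80 are illustrative assumptions, and the index arithmetic mirrors the MATLAB `percentage_intermedian` function below:

```python
import numpy as np

# Hypothetical gradient-magnitude image S (any 2-D array of values).
S = np.array([[ 1,  5,  9],
              [ 2, 40,  7],
              [ 3,  8, 60]], dtype=float)

# The element below which k% of all entries fall (floor(0.01*k*n), 1-based,
# as in the MATLAB function; -1 converts to Python's 0-based indexing).
k = 80
threshold = np.sort(S.ravel())[int(0.01 * k * S.size) - 1]

# Binarize: everything above the threshold becomes 255, the rest 0.
out = np.where(S > threshold, 255, 0)
print(threshold)
print(out)
```

Only the strongest k% tail of gradient responses survives, which is what makes the final edge map look clean.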

```matlab
function [out] = percentage_intermedian(A,k)
    % Returns the element of A such that k% of A's elements are smaller.
    [x,y] = size(A);
    A = reshape(A,[1,x*y]);   % flatten to a row vector
    [z,w] = sort(A);          % z: sorted values, w: original indices
    out = A(w(floor(0.01*k*x*y)));
end
```

```matlab
clear
A = imread('superman.jpg');
% Cast to double so differences can go negative
% (uint8 arithmetic would saturate at 0).
B = double(rgb2gray(A));
[x,y] = size(B);
for k = 2:1:x-1
    for l = 2:1:y-1
        % The 3x3 neighborhood a..i around the pixel (k,l)
        a = B(k-1,l-1); b = B(k-1,l); c = B(k-1,l+1);
        d = B(k,l-1);                 f = B(k,l+1);
        g = B(k+1,l-1); h = B(k+1,l); i = B(k+1,l+1);
        % Norm of the average gradient vector (the Sobel Operator)
        C(k,l) = norm([0.25*((c-g-a+i)/4+(f-d)/2), 0.25*((c-g+a-i)/4+(b-h)/2)]);
    end
end
threshold = percentage_intermedian(C,97);
[x,y] = size(C);
for k = 1:1:x
    for l = 1:1:y
        if C(k,l) > threshold
            C(k,l) = 255;
        else
            C(k,l) = 0;
        end
    end
end
image(C)
```

The old image…

… and the new one. Much better, isn’t it?