
Edge detection: The Scale Space Theory

January 21, 2011

Consider an image as a bounded function f \colon [0,1]^2 \to \mathbb{R} with no smoothness or structure assumptions a priori. Most of the relevant information in a given image is contained in the contours of the mapped objects: think, for example, of a bright object against a dark background; the area where the two meet presents a curve along which the intensity f(\boldsymbol{x}) varies strongly. This is what we refer to as an “edge.”

As a first attempt, we may try to detect an edge by simply computing the gradient \nabla f(\boldsymbol{x}) = \big( \tfrac{\partial f}{\partial x_1}, \tfrac{\partial f}{\partial x_2} \big): at an edge, this gradient should have a large magnitude \lvert \nabla f(\boldsymbol{x}) \rvert, and its direction \tfrac{\nabla f(\boldsymbol{x})}{\lvert \nabla f(\boldsymbol{x}) \rvert} indicates the perpendicular to the curve. It therefore looks sound to compute the gradient of f and select the points where its magnitude is large. This approach is unrealistic for two reasons:

  1. The points where the gradient is larger than a given threshold form open sets, and thus do not have the structure of curves.
  2. Large gradients may arise at certain locations of the image due to tiny oscillations or noise, completely unrelated to the objects being mapped. As a matter of fact, there is no reason to assume the existence or computability of any gradient at all in a given digital image. (Both objections are illustrated in the sketch after this list.)
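
To make these two objections concrete, here is a minimal sketch in Python with NumPy and SciPy. The synthetic disk image, the noise level, the threshold, and the smoothing scale sigma are all assumptions chosen for illustration; the Gaussian smoothing in the last step is the seed of the scale space idea that gives this post its title.

```python
import numpy as np
from scipy import ndimage

# Synthetic test image: a bright disk against a dark background,
# contaminated with additive noise (all parameters are illustrative).
x, y = np.mgrid[-1:1:256j, -1:1:256j]
f = (x**2 + y**2 < 0.25).astype(float)
f += 0.1 * np.random.default_rng(0).standard_normal(f.shape)

# Naive detector: finite-difference gradient, then threshold |grad f|.
gx, gy = np.gradient(f)
grad_norm = np.hypot(gx, gy)
edges_naive = grad_norm > 0.5 * grad_norm.max()  # arbitrary threshold

# The selected pixels form a thick region rather than a curve
# (objection 1), and the noise triggers spurious responses far from
# the disk (objection 2). Differentiating after convolving with a
# Gaussian of scale sigma suppresses the noise:
grad_sigma = ndimage.gaussian_gradient_magnitude(f, sigma=2.0)
edges_sigma = grad_sigma > 0.5 * grad_sigma.max()
```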

