
## Edge detection: The Scale Space Theory

Consider an image as a bounded function $f: \square_2 \to \mathbb{R}$, with no a priori smoothness or structure assumptions. Most of the relevant information in an image is contained in the contours of the mapped objects: think, for example, of a bright object against a dark background. The area where the two meet traces a curve along which the intensity $f(\boldsymbol{x})$ varies strongly. This is what we refer to as an “edge.”
As a first attempt, we might detect edges simply by computing the gradient $\nabla f(\boldsymbol{x}) = \big( \tfrac{\partial f}{\partial x_1}, \tfrac{\partial f}{\partial x_2} \big)$: at an edge, this gradient should have a large magnitude $\lvert \nabla f(\boldsymbol{x}) \rvert$, and its direction $\tfrac{\nabla f(\boldsymbol{x})}{\lvert \nabla f(\boldsymbol{x}) \rvert}$ is perpendicular to the edge curve. It therefore seems sound to compute the gradient of $f$ and select the points where its magnitude is large. This approach is unrealistic for two reasons:
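Before discussing why the naive approach falls short, here is a minimal sketch of it in Python. The function name, the threshold value, and the use of `numpy.gradient` (central finite differences) to approximate $\nabla f$ are my own illustrative choices, not part of the text:

```python
import numpy as np

def gradient_edges(f, threshold):
    """Naive edge detector: flag pixels where |grad f| exceeds a threshold.

    f is a 2-D array of intensities; the partial derivatives are
    approximated with central finite differences via np.gradient.
    """
    gy, gx = np.gradient(f.astype(float))   # d f / d x_2, d f / d x_1
    magnitude = np.hypot(gx, gy)            # |grad f| at each pixel
    return magnitude > threshold            # boolean edge map

# A bright square on a dark background: the detector should fire
# along the square's boundary and nowhere in the flat regions.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = gradient_edges(img, threshold=0.25)
```

On this synthetic image the boolean map `edges` is true only on the boundary pixels of the bright square, where the intensity jumps; the flat interior and background produce zero gradient and stay false.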