Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing and signal processing communities, with complementary motivations from physics and biological vision. It is a formal theory for handling image structures at different scales, by representing an image as a one-parameter family of smoothed images, the scale-space representation, parametrized by the size of the smoothing kernel used for suppressing fine-scale structures. The parameter t in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about √t have largely been smoothed away in the scale-space level at scale t.
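
As a minimal illustrative sketch (not part of the original text), the one-parameter family can be built by convolving a signal with sampled Gaussian kernels of variance t, i.e. standard deviation σ = √t; the helper names below (`gaussian_kernel`, `scale_space`) are hypothetical, chosen for clarity:

```python
import numpy as np

def gaussian_kernel(t):
    # 1-D sampled Gaussian with variance t (sigma = sqrt(t)), normalised to sum 1.
    sigma = np.sqrt(t)
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * t))
    return g / g.sum()

def scale_space(signal, scales):
    # One-parameter family of smoothed signals, indexed by the scale parameter t.
    return {t: np.convolve(signal, gaussian_kernel(t), mode="same") for t in scales}

# Impulse signal: the finest-scale structure possible.
sig = np.zeros(101)
sig[50] = 1.0
family = scale_space(sig, scales=[1.0, 4.0, 16.0])

# The peak amplitude shrinks as t grows: structure of size smaller than
# about sqrt(t) is progressively smoothed away at scale t.
peaks = [family[t].max() for t in (1.0, 4.0, 16.0)]
```

In two dimensions the same construction applies with a 2-D Gaussian kernel; the scale parameter t plays the same role along both axes.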