Scale-space theory is a framework for
multi-scale signal representation developed by the
computer vision,
image processing and
signal processing communities with complementary motivations from
physics and
biological vision. It is a formal theory for handling image structures at different
scales, by representing an image as a one-parameter family of smoothed images, the
scale-space representation, parametrized by the size of the smoothing
kernel used for suppressing fine-scale structures. The parameter t in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about √t have largely been smoothed away in the scale-space level at scale t.
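
The one-parameter family described above is conventionally generated by convolving the image with Gaussian kernels of increasing width, with the kernel's standard deviation related to the scale parameter by σ = √t. A minimal sketch in Python (assuming NumPy and SciPy are available; the function name `scale_space` and the toy image are illustrative, not part of any standard library):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, ts):
    """Return the Gaussian scale-space family L(.; t) for the given
    scale parameters ts, using the convention sigma = sqrt(t)."""
    return [gaussian_filter(image.astype(float), sigma=np.sqrt(t)) for t in ts]

# Toy image: a single fine-scale spike on a flat background.
img = np.zeros((32, 32))
img[16, 16] = 1.0

levels = scale_space(img, ts=[1.0, 4.0, 16.0])

# As t grows, the spike (spatial extent well below sqrt(t)) is
# progressively smoothed away: its peak amplitude decays while the
# total intensity is preserved by the normalized Gaussian kernel.
for L in levels:
    print(L.max(), L.sum())
```

Running this shows the peak value shrinking monotonically across the three scale levels while the summed intensity stays constant, which is the sense in which fine-scale structure is "smoothed away" as t increases.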