Noise Reduction in Images: Some Recent Edge-Preserving Methods
We introduce some recent and very recent smoothing methods which
focus on the preservation of boundaries, spikes, and canyons in the
presence of noise. We try to point out the basic principles they
have in common; the most important one is robustness. It is
reflected in the use of `cup functions' instead of squares in the
statistical loss functions; such cup functions were introduced
early in robust statistics to down-weight outliers. Basically, they
are variants of truncated squares.
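As an illustration (in our own notation, not fixed by the talk), a
prototypical cup function is the truncated square

\[ \rho_\tau(u) = \min(u^2, \tau^2), \]

which grows like the ordinary square for small residuals u but stays
bounded by \tau^2 beyond the cutoff \tau, so that gross outliers and
large jumps across edges no longer dominate the loss.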
We discuss all the methods in the common framework of `energy
functions'; i.e., we associate with (most of) the algorithms a
`loss function' in such a fashion that the output of the algorithm,
the `estimate', is a global or local minimum of this loss function.
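Schematically, and again in our own notation rather than the talk's,
such an energy for noisy data y takes the form

\[ \hat{x} = \arg\min_x \sum_s (x_s - y_s)^2
   + \lambda \sum_{s \sim t} \rho_\tau(x_s - x_t), \]

where the first sum ties the estimate to the data, the second
penalizes differences between neighbouring pixels s \sim t through
the cup function \rho_\tau, and \lambda balances the two terms.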
The third aspect we pursue is the correspondence between loss
functions, together with their local minima, and nonlinear filters.
We shall argue that the nonlinear filters can be interpreted as
variants of gradient descent on the loss functions. In this way we
can show that some (robust) M-estimators and some nonlinear filters
produce almost the same result.
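To make the gradient-descent reading concrete, here is a minimal
sketch in Python/NumPy of descent on the schematic energy above,
using the truncated-square cup function on neighbouring pixel
differences. All names, the step size, and the parameter values are
our own illustrative choices; this is not the implementation
discussed in the talk. Because \rho_\tau is flat beyond the cutoff,
the descent step leaves large jumps untouched, which is exactly the
edge-preserving behaviour described above.

import numpy as np

def cup_grad(u, tau):
    # Derivative of the truncated square rho_tau(u) = min(u**2, tau**2):
    # 2u inside the cup, 0 beyond the cutoff, so large differences
    # across edges exert no smoothing pull.
    return np.where(np.abs(u) < tau, 2.0 * u, 0.0)

def robust_smooth(y, lam=1.0, tau=0.5, step=0.1, iters=200):
    # Gradient descent on
    #   E(x) = sum_s (x_s - y_s)**2 + lam * sum_{s~t} rho_tau(x_s - x_t),
    # with s~t ranging over horizontal and vertical neighbour pairs.
    y = np.asarray(y, dtype=float)
    x = y.copy()
    for _ in range(iters):
        grad = 2.0 * (x - y)                  # data-fidelity term
        for axis in (0, 1):
            d = np.diff(x, axis=axis)         # neighbour differences
            g = cup_grad(d, tau)
            pad = [(0, 0), (0, 0)]
            pad[axis] = (1, 0)
            grad += lam * np.pad(g, pad)      # + rho'(d_{i-1}) at pixel i
            pad[axis] = (0, 1)
            grad -= lam * np.pad(g, pad)      # - rho'(d_i) at pixel i
        x -= step * grad
    return x

# Example: a noisy step edge; the interior is smoothed, the jump survives.
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
denoised = robust_smooth(img + 0.1 * rng.standard_normal(img.shape))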