\documentstyle[twoside,amssymb]{article}
% amssymb is used for R in Real numbers
\pagestyle{myheadings}
\markboth{\hfil Stable evaluation of differential operators \hfil EJDE--1997/15}%
{EJDE--1997/15\hfil Otmar Scherzer\hfil}
\begin{document}
\title{\vspace{-1in}\parbox{\linewidth}{\footnotesize\noindent
{\sc Electronic Journal of Differential Equations},
Vol.\ {\bf 1997}(1997), No.\ 15, pp. 1--12. \newline
ISSN: 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu
\newline ftp (login: ftp) 147.26.103.110 or 129.120.3.113}
\vspace{\bigskipamount} \\
Stable evaluation of differential operators and linear and
nonlinear multi--scale filtering
\thanks{ {\em 1991 Mathematics Subject Classifications:}
65J10, 65J15, 65K10, 35J60, 26A45.\hfil\break\indent
{\em Key words and phrases:} Nondifferentiable optimization problems,
regularization, \hfil\break\indent
inverse problems, image reconstruction, bounded variation norm.
\hfil\break\indent
\copyright 1997 Southwest Texas State University and University of
North Texas. \hfil\break\indent
Submitted July 8, 1997. Published September 10, 1997.\hfil\break\indent
Partly supported by the Christian--Doppler--Society, Austria}}
\date{}
\author{Otmar Scherzer}
\maketitle
\begin{abstract}
Diffusion processes generate multi--scale analyses, which produce
simplified pictures that become sketchier as the
scale increases. In many practical
applications the ``scaled image'' can be characterized via a
variational formulation as the solution of a minimization problem
involving unbounded operators. These unbounded operators can be
evaluated by regularization techniques. We show that the
theory of stable evaluation of unbounded operators can be applied
to efficiently solve these minimization problems.
\end{abstract}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{example}[theorem]{Example}
\def\bea{\begin{eqnarray}}
\def\eea{\end{eqnarray}}
\def\beas{\begin{eqnarray*}}
\def\eeas{\end{eqnarray*}}
\def\be{\begin{equation}}
\def\ee{\end{equation}}
\def\aa{A_{{\lambda^2}\beta\gamma}}
\def\bv{{\mbox{BV}}}
\def\dif{\nabla .}
\def\lgs{L_{\gamma}^*}
\def\jbg{J_{\beta\gamma}}
\section{Introduction}
\label{sec1}
Morel and Solimini describe in their book \cite{MorSol95} a
{\it{multi--scale analysis}}:
\begin{quotation}
Multi--scale analysis is concerned with the generation of a sequence of
simplified pictures $u_\lambda$ from a single initial picture $u_0^\delta$,
where $u_\lambda$ appears to be a rougher and sketchier version of
$u_0^\delta$ as $\lambda$ increases.
\end{quotation}
In the literature various possibilities for generating multi--scale
analyses have been proposed.
One common method to generate a multi--scale analysis is by a
{\it{diffusion process}}, such as
\bea
\label{1.1}
\begin{array}{rcl}
\frac{\partial u}{\partial t}(x,t) &=& \Delta u(x,t)\,\\
u(x,0) &=& u_0^\delta(x)\;.
\end{array}
\eea
It is easy to see that in ${\Bbb R}^2$ convolution of the signal $u_0^\delta$
with a Gaussian kernel at each
scale is equivalent to solving the heat equation with
the signal as initial datum \cite{Koe84}.
The solution of (\ref{1.1}) for the initial datum $u_0^\delta$ with
bounded quadratic norm is
$$
u(x,t) = G_t \star u_0^\delta(x) = \int G_t(x-y)u_0^\delta(y)\,dy\,,$$
where $G_\sigma(x) = \frac{1}{4\pi\sigma} \exp (-\|x\|^2/4\sigma)$
($x \in {\Bbb R}^2$).
The sequence $u_\lambda(.)=u(.,\lambda)$ defines a multi--scale
analysis which satisfies the following properties. Let $S_\lambda$
denote the mapping $u_0^\delta \mapsto S_\lambda(u_0^\delta) := u_\lambda$:
\begin{enumerate}
\item $u_\lambda \to u_0^\delta$ as $\lambda \to 0$
\item $S_\lambda(u_0^\delta)$ only depends on $S_{\lambda'}(u_0^\delta)$ if
$\lambda > \lambda'$
\item If $A$ is an isometry, then
$S_\lambda(u_0^\delta \circ A) = (S_\lambda(u_0^\delta)) \circ A$.
\end{enumerate}
In image processing and image enhancement techniques one easily
understands the necessity of a preliminary low--pass filtering,
e.g., by convolution of the signal with a Gaussian kernel:
if the signal $u_0^\delta$ is noisy, its gradient has a lot of irrelevant
spikes, which have to be eliminated.
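For illustration, this Gaussian filtering can be sketched in a few lines of Python. This is a 1-D sketch (the discussion above is in ${\Bbb R}^2$, but the structure is identical), and the signal, noise level, and scales are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Sketch (hypothetical data): heat-equation smoothing as convolution with
# the Gaussian heat kernel G_t, in one dimension for brevity.
def heat_smooth(u0, t, dx):
    """Return G_t * u0 on a uniform grid with spacing dx."""
    n = u0.size
    x = (np.arange(n) - n // 2) * dx
    G = np.exp(-x**2 / (4.0 * t))      # 1-D heat kernel, up to normalization
    G /= G.sum()                       # discrete unit mass
    return np.convolve(u0, G, mode="same")

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)
dx = x[1] - x[0]
u0 = np.sign(np.sin(2.0 * np.pi * x)) + 0.3 * rng.standard_normal(x.size)

u_small = heat_smooth(u0, t=1e-5, dx=dx)   # small scale: close to the data
u_large = heat_smooth(u0, t=1e-3, dx=dx)   # larger scale: sketchier version
```

At the small scale the smoothed signal stays close to the data (property 1 above), while at the larger scale it is a sketchier, less oscillatory version.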
As we will see in Section \ref{sec3}, the linear multi--scale analysis
based on (\ref{1.1}) has a tendency to oversmooth data.
Many attempts have been made in the literature to construct diffusion
processes which satisfy a {\it{strong causality}} principle, that
is, the associated multi--scale analysis reconstructs
edges and corners of images, while simultaneously eliminating
highly oscillating noise in the image. Diffusion processes where
the associated multi--scale analysis satisfies a strong causality
principle have been found to be mostly nonlinear.
In Section \ref{sec2} we give an outline of linear and nonlinear
diffusion processes and motivate the associated multi--scale analyses.
The multi--scale analysis can be the basis for efficient algorithms
for denoising images and edge--detection. In Section \ref{sec3}
we will outline the concept of stable evaluation of unbounded
operators and we show in Sections \ref{sec3}, \ref{sec4} that
this concept is relevant for the multi--scale analysis associated
with diffusion processes.
In Section \ref{sec5} we will discuss the numerical
implementation of a multi--scale analysis associated with a nonlinear
diffusion process and we will show some numerical results.
\section{Variational formulations of diffusion processes}
\label{sec2}
\subsection{The Hildreth--Marr theory}
The {\it energy functional} associated with the differential equation
(\ref{1.1}) is
$$ E(u) = \int | \nabla u |^2\,.$$
This functional measures the amount of information in $u$.
The energy semi norm is the natural functional to associate with
(\ref{1.1}), since the heat equation is the descent method
for this energy. If we discretize (\ref{1.1}) we end up with
$$u(x,t) - u_0^\delta(x) = t \Delta u(x,t)\,,$$
which is the Euler--Lagrange equation of a variational problem: roughly speaking,
numerically solving the heat equation is therefore equivalent to
minimizing the following functional iteratively with
$t=\lambda^2$ small:
$$E_\lambda(u) = \lambda^2 \int_\Omega |\nabla u|^2 + \int_\Omega (u-u_0^\delta)^2.$$
In this formulation the scaling parameter appears explicitly as the
multiplier of the smoothing term. The solution $u_\lambda$ of this
problem can be defined as the ``analysis of $u$ at scale $\lambda$''
as well as the solution of the heat equation at time $t=\lambda^2$.
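After discretization, minimizing $E_\lambda$ reduces to one symmetric positive definite linear solve; a minimal 1-D sketch (grid, noise level, and $\lambda^2$ are hypothetical choices):

```python
import numpy as np

# Sketch (hypothetical 1-D discretization): the minimizer of
#   E_lambda(u) = lambda^2 * sum |(Du)_i|^2 + sum (u_i - u0_i)^2
# solves the linear system (I + lambda^2 D^T D) u = u0, the discrete
# analogue of u - lambda^2 * Laplace(u) = u0.
def hildreth_marr(u0, lam2, dx):
    n = u0.size
    D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / dx   # forward differences
    A = np.eye(n) + lam2 * D.T @ D
    return np.linalg.solve(A, u0)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 128)
u0 = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
u_scale = hildreth_marr(u0, lam2=1e-4, dx=x[1] - x[0])   # analysis at one scale
```

As $\lambda^2 \to 0$ the solution tends to the data $u_0^\delta$, in line with property 1 of the multi--scale analysis.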
\subsection{The Perona and Malik theory}
For an edge--detection filtering technique Perona and Malik
(see e.g. \cite{MorSol95}) suggested
the {\it nonlinear} diffusion process
\bea
\label{2.1}
\begin{array}{rcl}
\frac{\partial u}{\partial t}(x,t) &=&
\nabla. \left( f(|\nabla u|) \nabla u \right)(x,t)\,\\
u(x,0) &=& u_0^\delta(x)\;,
\end{array}
\eea
where $f$ is a smooth non-increasing function satisfying
$$f(0)=1, \quad f(s) \geq 0\,, \quad \mbox{ and } \quad
\lim_{s \to \infty} f(s) = 0\,.$$
The idea behind this type of nonlinear filtering technique is that
the smoothing process is conditional: if $\nabla u(x)$ is large, then
the diffusion will be low and therefore the exact location of the edges
in the image is preserved. If $\nabla u(x)$ is small, then the diffusion will tend
to smooth around $x$. As in the Hildreth--Marr theory,
one can derive a variational formulation of the Perona and Malik model.
In fact, if we set $\tilde{f}(a) = f (\sqrt{a})$ then equation (\ref{2.1}) can be
rewritten as
\beas
\begin{array}{rcl}
\frac{\partial u}{\partial t}(x,t) &=&
\nabla. \left( \tilde{f}(|\nabla u|^2) \nabla u \right)(x,t)\,\\
u(x,0) &=& u_0^\delta(x)\;,
\end{array}
\eeas
which yields a multi--scale analysis
\be
\label{2.2}
E_\lambda(u) = \lambda^2 \int_\Omega \tilde{F}(|\nabla u|^2)\, +
\int_\Omega (u-u_0^\delta)^2\,,
\ee
where $\tilde{F}(t) = \int_0^t \tilde{f}(s)\,ds\,.$
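A minimal numerical sketch of (\ref{2.1}) in one dimension follows. The diffusivity $f(s)=1/(1+(s/K)^2)$ is one of the edge-stopping functions proposed by Perona and Malik (the text above leaves $f$ unspecified), and all grid and parameter values are hypothetical.

```python
import numpy as np

# Sketch: one explicit time step of the 1-D Perona--Malik equation
#   u_t = ( f(|u_x|) u_x )_x ,   f(s) = 1/(1 + (s/K)^2),
# with homogeneous Neumann (zero flux) boundary conditions.
def perona_malik_step(u, dt, dx, K):
    du = np.diff(u) / dx                          # u_x at the cell interfaces
    flux = du / (1.0 + (np.abs(du) / K)**2)       # f(|u_x|) * u_x
    flux = np.concatenate(([0.0], flux, [0.0]))   # zero flux at the boundary
    return u + dt * np.diff(flux) / dx

rng = np.random.default_rng(2)
n = 128
dx = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
u0 = (x > 0.5).astype(float) + 0.05 * rng.standard_normal(n)
u1 = perona_malik_step(u0, dt=0.3 * dx * dx, dx=dx, K=1.0)
```

Since $f \leq 1$, the time step restriction $dt \leq dx^2/2$ of the linear heat equation keeps this explicit scheme stable; the zero-flux boundary conditions conserve the mean of the signal.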
\subsection{A nonlinear filtering technique studied by Osher and Rudin}
Based on the Perona and Malik theory Osher and Rudin \cite{OshRud90}
considered a nonlinear diffusion process for edge detection:
\bea
\label{2.3}
\begin{array}{rcl}
\frac{\partial u}{\partial t}(x,t) &=&
\nabla. \left( \frac{ \nabla u } {|\nabla u|} \right)(x,t)\\
u(x,0) &=& u_0^\delta(x)\;.
\end{array}
\eea
The energy functional associated with this diffusion process is
$$E(u) = \int_\Omega |\nabla u|\;.$$
As above an associated multi--scale analysis can be motivated in terms of the functional:
\be
\label{2.4}
E_\lambda(u) = \lambda^2 \int_\Omega |\nabla u|\; + \;
\int_\Omega (u-u_0^\delta)^2\;.
\ee
\subsection{Higher order nonlinear filtering techniques}
Diffusion filters of higher order play an important role in parameter
estimation problems \cite{Sch98}. An appropriate filtering
technique for enhancing jumps in derivatives of signals is
\be
\label{2.5}
\frac{\partial u}{\partial t}(x,t) = H. \left( \frac{Hu}{|Hu|} \right)(x,t)\;.
\ee
Here $Hu$ denotes the Hessian of $u$ and $H. = (\dif \dif)$.
The multi--scale analysis associated with this diffusion process is
$$E_\lambda(u) = \lambda^2 \int_\Omega |H u|\; + \;
\int_\Omega (u-u_0^\delta)^2\;.$$
\section{Unbounded operators}
\label{sec3}
In this section we consider the problem of computing values of an
unbounded operator $L$. Here and in the remainder of this section,
we will always denote by $L:{\cal{D}}(L) \subseteq H_1 \to H_2$ a
closed densely defined unbounded linear operator between two Hilbert
spaces $H_1$ and $H_2$. The problem of computing values $y = Lu_0$,
for $u_0 \in {\cal{D}}(L)$, is then ill--posed in the sense that small
perturbations in $u_0$ may lead to data $u_0^\delta$ satisfying
$\|u_0-u_0^\delta\| \leq \delta$, but $u_0^\delta \notin {\cal{D}}(L)$, or
even if $u_0^\delta \in {\cal{D}}(L)$, it may happen that $Lu_0^\delta
\not\to Lu_0$ as $\delta \to 0$.
Morozov has studied a stable method for approximating the value $Lu_0$
when only approximate data $u_0^\delta$ is available (see \cite{Mor84}
for information on Morozov's work). This method takes as an
approximation to $y = Lu_0$ the vector
$$y_{\lambda^2}^\delta = Lz_{\lambda^2}^\delta,$$
where $z_{\lambda^2}^\delta$ minimizes the functional
$$\|z - u_0^\delta\|^2 + {\lambda^2} \|Lz\|^2 \quad ({\lambda^2} > 0)$$
over ${\cal{D}}(L)$. For $L = \nabla$ the approximation $z_{\lambda^2}^\delta$ of $u_0$
is the solution of the Hildreth--Marr problem at level ${\lambda^2}$.
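For $L = d/dx$ Morozov's method amounts to stable numerical differentiation; the following discrete sketch uses hypothetical grid, noise, and parameter choices.

```python
import numpy as np

# Sketch: Morozov's stable evaluation of L = d/dx. Minimize
#   ||z - u0_delta||^2 + lam2 ||Dz||^2
# over grid functions z, then take y = Dz as the stable derivative.
def stable_derivative(u0_delta, lam2, dx):
    n = u0_delta.size
    D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / dx   # forward differences
    z = np.linalg.solve(np.eye(n) + lam2 * D.T @ D, u0_delta)
    return D @ z

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 256)
dx = x[1] - x[0]
u0_delta = np.sin(2.0 * np.pi * x) + 0.01 * rng.standard_normal(x.size)

y_true = 2.0 * np.pi * np.cos(np.pi * (x[:-1] + x[1:]))   # derivative at midpoints
y_raw = np.diff(u0_delta) / dx                            # naive differencing
y_reg = stable_derivative(u0_delta, lam2=1e-4, dx=dx)
```

Naive differencing amplifies the data noise by a factor of the order of the reciprocal grid size, while the regularized derivative remains close to the true derivative.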
In \cite{GroSch93} we discussed the {\it{optimal}} choice of the
scaling parameter ${\lambda^2}$ in relation to the noise level $\delta$,
smoothness assumptions on the solution, and the numerical realization.
Of particular interest for image reconstruction and image enhancement
techniques is the choice of the scaling parameter. The theory developed
for the stable evaluation of unbounded operators to optimally choose
the {\it{regularization}} parameter is {\bf directly} applicable to
optimally choosing the {\it{scaling}} parameter in the Hildreth--Marr theory.
In order to clarify the choice of the scaling parameter we cite a result
developed in \cite{GroSch93}.
\begin{theorem}
If $u_0 \in {\cal{D}}(L^*L)$ and $u_0 \not\in {\cal{N}}(L)$, then
$\|y_{{\lambda^2}(\delta)}^\delta - Lu_0\| = O(\sqrt{\delta})\,,$
when ${\lambda^2}(\delta)$ is chosen such that
\be
\label{3.1}
\|z_{{\lambda^2}(\delta)}^\delta - u_0\| = O(\sqrt{\delta})\;.
\ee
\end{theorem}
Roughly speaking, in the particular case $L = \nabla$ the assumption
$u_0 \in {\cal{D}}(L^*L)$ means $u_0 \in H^2$, and the derivative of $u_0$
can be reconstructed with a rate $O(\sqrt{\delta})$ if ${\lambda^2}(\delta)$
is chosen according to the discrepancy principle (\ref{3.1}).
Other strategies for optimally choosing the regularization parameters
when calculating unbounded operators can be found in \cite{GroSch93}.
For more background on the stable evaluation of unbounded operators we
refer to \cite{Gro92}.
A simple example shows this effect: taking $L=\nabla$ and ${\cal{D}}(L) = H^1({\Bbb R})$,
the Sobolev space of functions possessing a weak derivative in $L^2({\Bbb R})$,
it follows that $z_{\lambda^2}^\delta \in H^1({\Bbb R})$.
This shows that at every scale discontinuities of the original image are
smeared out, and thus the method is not strongly causal. Nonlinear filtering
techniques have turned out to be strongly causal and therefore, we study
below numerical implementation of nonlinear filtering techniques based on
the nonlinear diffusion process developed by Rudin, Osher, and Fatemi
\cite{RudOshFat92}.
\section{Unbounded operators in nonlinear multi--scale analysis}
\label{sec4}
The multi--scale analysis associated with the nonlinear diffusion process
developed by Osher and Rudin is sometimes called
{\it{regularization with functions of bounded variation}},
since the scale--energy functional (\ref{2.4}) is finite for any
function $u \in \bv$, that is, the space of all functions which satisfy
$$\int\limits_\Omega |\nabla u| :=
\sup \left\{ \int\limits_\Omega u \nabla. v dx\,:\,
v \in C_0^\infty(\Omega,{\Bbb R}^d), |v(x)| \leq 1,
x \in \Omega \right\} < \infty\;.$$
In practical applications, since the functional (\ref{2.4}) is not
differentiable, the minimizer of (\ref{2.4}) is frequently approximated
by the solution of the
(nonlinear) {\it{least squares problem}} (see e.g.
\cite{DobVog95,DobSan94,DobSch96})
\be
\label{4.1}
\min_{u \in L^2(\Omega)} \left\{ \frac{1}{2} \|u-u_0^\delta\|^2_{L^2(\Omega)} +
{\lambda^2} J_\beta(u)
\right\}\,,
\ee
where
\be
\label{4.2}
J_\beta(u):=\int_{\Omega} \sqrt{ |\nabla u|^2 + \beta^2}\;;
\ee
here $\Omega \subseteq {\Bbb R}^d$. The term $J_\beta(u)$ is used
for stabilization and is therefore called the {\it{regularizing term}}.
Vogel and Oman \cite{VogOma96} introduced a fixed point iteration method
to minimize (\ref{4.1}), (\ref{4.2}):
The corresponding (formally derived) {\it{Euler--Lagrange}} equation is
$$
u - {\lambda^2} \nabla .
\left( \frac{1}{\sqrt{|\nabla u|^2 + \beta^2}} \nabla u\right)
= u_0^\delta\;,
$$
which can be expressed in operator notation as
\be
\label{4.3}
(I + A_{{\lambda^2}\beta}(u)) u = u_0^\delta,
\ee
where
$$
A_{{\lambda^2}\beta}(u)v := - {\lambda^2} \nabla .
\left( \frac{1}{\sqrt{|\nabla u|^2 + \beta^2}} \nabla v\right)\;.$$
A fixed point iteration for (\ref{4.3}) can be defined by
$$
u^{n+1} = (I + A_{{\lambda^2}\beta}(u^n))^{-1} u_0^\delta =: F(u^n),
\quad n=0,1,...$$
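In one space dimension this fixed point iteration takes the following form; this is a sketch with a forward--difference discretization, and all parameter values are hypothetical choices, not those of the cited papers.

```python
import numpy as np

# Sketch of the fixed point iteration u^{n+1} = (I + A(u^n))^{-1} u0_delta
# in one dimension, with
#   A(u) v = -lam2 * d/dx( v_x / sqrt(u_x^2 + beta^2) )
# discretized as lam2 * D^T diag(1/sqrt((Du)^2 + beta^2)) D.
def tv_fixed_point(u0_delta, lam2, beta, dx, n_iter=30):
    n = u0_delta.size
    D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / dx
    u = u0_delta.copy()
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ u)**2 + beta**2)   # lagged diffusion coefficient
        u = np.linalg.solve(np.eye(n) + lam2 * D.T @ (w[:, None] * D), u0_delta)
    return u

rng = np.random.default_rng(4)
n = 100
x = np.linspace(0.0, 1.0, n)
u0_delta = (x > 0.5).astype(float) + 0.1 * rng.standard_normal(n)
u_den = tv_fixed_point(u0_delta, lam2=1e-3, beta=1e-3, dx=x[1] - x[0])
```

Note how the iteration removes the oscillations of the noise while essentially preserving the jump at $x=1/2$, in contrast to the linear smoothing of Section \ref{sec3}.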
In actual computations the minimization of (\ref{4.1}) is difficult
to handle, since it involves evaluations of the {\it{unbounded}}
operator $\nabla$. To overcome this difficulty we proposed in
\cite{DobSch96} to approximate the values of the unbounded operator
$\nabla$ and use the minimizing element of
\be
\label{4.4}
\min_{u \in L^2(\Omega)} \left\{ \frac{1}{2} \|u-u_0^\delta\|^2_{L^2(\Omega)}
+ {\lambda^2} \jbg(u)
\right\}\;,
\ee
where
\be
\label{4.5}
\jbg(u):=\int_{\Omega} \sqrt{ |L_\gamma u|^2 + \beta^2},
\ee
as an approximation for the desired image. In practical applications
the operator $L_\gamma$ might be, for instance, a finite difference
scheme, where the parameter $\gamma$ represents the grid-size.
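For a concrete (hypothetical) instance: if $L_\gamma$ is a forward--difference matrix on a grid with spacing $\gamma$, its adjoint $L_\gamma^*$ with respect to the Euclidean inner product is simply the transpose, a discrete negative divergence.

```python
import numpy as np

# A concrete (hypothetical) choice of L_gamma: forward differences on a
# grid with spacing gamma. Its adjoint w.r.t. the Euclidean inner product
# is the transpose matrix.
def forward_difference(n, gamma):
    return (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / gamma

L = forward_difference(8, gamma=1.0 / 7.0)
rng = np.random.default_rng(5)
u = rng.standard_normal(8)
v = rng.standard_normal(7)
lhs = (L @ u) @ v        # <L_gamma u, v>
rhs = u @ (L.T @ v)      # <u, L_gamma^* v>
```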
Denoting by $L_\gamma^*$ the adjoint of $L_\gamma$, the Euler--Lagrange
equation for a minimizer of (\ref{4.4}) is
$$
u + {\lambda^2} \lgs
\left( \frac{1}{\sqrt{|L_\gamma u |^2 + \beta^2}} L_\gamma u \right) =
u_0^\delta\;.
$$
Extending the ideas of Vogel and Oman \cite{VogOma96} and Dobson and Vogel
\cite{DobVog95} we studied (in \cite{DobSch96}) the iterative scheme
\be
\label{4.6}
u^{n+1} = (I+\aa(u^n))^{-1}u_0^\delta =: F(u^n),
\ee
where
\be
\label{4.7}
\aa(u)v:= \frac{{\lambda^2}}{\beta} \lgs
\left( \frac{1}{\sqrt{\frac{|L_\gamma u |^2}{\beta^2} + 1}} L_\gamma v\right).
\ee
As long as the parameters ${\lambda^2}$, $\beta$, $\gamma$ are chosen
appropriately, the iteration (\ref{4.6}), (\ref{4.7}) is convergent. This
setting is of computational relevance since in numerical
simulations the operator $\nabla$ is
(almost) never evaluated exactly, but only approximately,
e.g. by some finite difference scheme.
In numerical calculations (see e.g. \cite{DobSan94,ChaLio95})
regularization with functions of bounded variation has turned out
to be very efficient at determining discontinuities in signals
and images. This can be motivated either by arguing that the
associated diffusion process (\ref{2.3}) is designed to optimally
enhance edges in images (see e.g. \cite{OshRud90,MorSol95}),
or by arguing from the least squares functional (\ref{4.1})
(respectively (\ref{2.4})) directly. Namely,
efficient regularizing schemes have to satisfy the following
two properties:
\begin{enumerate}
\item The regularizing term evaluated at the function to be
recovered is finite;
in signal and image processing, the images of most interest
are piecewise constant and thus of bounded variation
(see e.g. \cite{EvaGar92,Zie80}), i.e.,
$$\int\limits_\Omega |\nabla u_0| :=
\sup \left\{ \int\limits_\Omega u_0 \nabla. v dx\,:\,
v \in C_0^\infty(\Omega,{\Bbb R}^d), |v(x)| \leq 1,
x \in \Omega \right\} < \infty\;.$$
\item The recovery of images from noisy measurements is ill--posed,
in the sense that small perturbations in the data $u_0^\delta$ may
lead to significant distortions of the picture;
mathematically, the ill--posedness is due to the fact that the
bounded variation semi norm, which is a natural norm for
measuring the distortion of the picture, does not depend
continuously on the $L^2$--norm, which is the natural
norm for the noise in the data. Therefore the regularizing term has
to have appropriate stabilizing properties.
\end{enumerate}
In practical applications (such as in non--destructive testing)
frequently discontinuities in derivatives have to be recovered
instead of the function itself. Examples include electrical
conductivity problems in ${\Bbb R}^1$ and ${\Bbb R}^2$, where the idealized model
is the following: given a domain $\Omega$ in ${\Bbb R}^1$ or ${\Bbb R}^2$,
the voltage potential $u_0$ inside $\Omega$ satisfies
\be
\label{4.8}
\nabla .(\sigma \nabla u_0) = 0 \quad \mbox{ in } \Omega.
\ee
If the electrical conductivity $\sigma$ is discontinuous, then by
the jump relations (see e.g. \cite{Isa90}) the voltage potential $u_0$
solving (\ref{4.8}) has discontinuities in its derivatives.
Therefore, discontinuities in the
material (i.e., discontinuities in $\sigma$)
can be detected from measurement data $u_0^\delta$ of $u_0$
by {\it{denoising}} $u_0^\delta$ and detecting discontinuities
in the derivatives of $u_0$.
For estimating discontinuities in electrical conductivity problems by
denoising the data $u_0^\delta$, the two demands on an efficient regularization
scheme suggest the use of a regularizing (semi) norm which satisfies the
following two properties:
\begin{enumerate}
\item For functions with jumps in derivatives the regularizing norm is finite
\item The stabilizing term has to cope with the ill--posedness.
\end{enumerate}
A reasonable semi norm satisfying both properties is $\int_\Omega |Hu|$, where
$$
|Hu| = \sum_{i=1}^d \sum_{j=1}^d
\left| \frac{\partial^2 u}{\partial x_i \partial x_j} \right|;
$$
here $Hu$ denotes the {\it{Hessian}} of $u$, i.e.,
\beas
Hu = \left(
\begin{array}{ccc}
\frac{\partial^2 u}{\partial x_1^2} & \cdots &
\frac{\partial^2 u}{\partial x_1 \partial x_d} \\
\vdots & \ddots & \vdots\\
\frac{\partial^2 u}{\partial x_d \partial x_1} & \cdots &
\frac{\partial^2 u}{\partial x_d^2}
\end{array}
\right).
\eeas
For the numerical solution of the parameter estimation
problem described in (\ref{4.8}) we propose to use a
{\it{higher order bounded variation denoising}} algorithm
\be
\label{4.9}
\min_{u \in L^2(\Omega)} \frac{1}{2}
\|u-u_0^\delta\|^2_{L^2(\Omega)} + {\lambda^2} \jbg(u),
\ee
where
$$\jbg(u):=\int_{\Omega} \sqrt{ |H_\gamma u|^2 + \beta^2}\,;$$
here $H_\gamma$ denotes a stable approximation of the Hessian
in ${\Bbb R}^d$.
Formally (\ref{4.9}) looks like (\ref{4.4}), (\ref{4.5}), and actually
the analysis of the fixed point iteration is formally the same (see
\cite{Sch98}).
\section{Numerical results: Higher order bounded variation regularization
in comparison to a linear multi--scale algorithm}
\label{sec5}
In this section we present some numerical experiments to
estimate discontinuities in derivatives of a one-dimensional
signal from its noisy measurements.
We solved the denoising problem both with {\it{higher order bounded
variation}} regularization and with a
{\it linear multi--scale algorithm} and compared the numerical
results.
In the numerical experiments presented below, when functions
with derivatives of bounded variation were used for regularization,
we solved (\ref{4.9}) by the corresponding fixed point iteration
in a discrete setting.
\begin{example}
\label{ex5.1}
{\rm
{\bf{Regularization with functions with derivatives of higher order
bounded variation:}}
The first example is to determine the discontinuity
in the derivative of the piecewise continuously differentiable function
\bea
\label{5.1}
\begin{array}{rcl}
f : [0,1] &\to& {\Bbb R}\\
x &\mapsto& \frac{1}{2}-\left| \frac{1}{2} - x \right|\,.
\end{array}
\eea
In Figure \ref{fig5.1} the unperturbed signal and its derivative
are plotted.
We added random noise to the data at $97$ uniformly distributed
measurement points (see Figure \ref{fig5.2}) to simulate noisy
measurements of the signal.
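The setup of this example can be reproduced as follows; the noise amplitude below is a hypothetical choice, since the text does not specify it.

```python
import numpy as np

# The test signal (5.1): f(x) = 1/2 - |1/2 - x| on [0,1]; its derivative
# jumps from +1 to -1 at x = 1/2. Noise amplitude is a hypothetical choice.
m = 97                                   # measurement points, as in the text
x = np.linspace(0.0, 1.0, m)
f = 0.5 - np.abs(0.5 - x)
rng = np.random.default_rng(6)
f_delta = f + 0.01 * rng.standard_normal(m)

# Naive forward differencing amplifies the noise in the derivative:
df_true = np.diff(f) / np.diff(x)
df_raw = np.diff(f_delta) / np.diff(x)
```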
\begin{figure}[ht]
\unitlength1cm
\begin{minipage}[b]{6.0cm}
\begin{picture}(4.0,4.5)
\put(-0.75,-3.0){\special{psfile=st_01.eps hscale=35 vscale=35}}
\end{picture}
\par
{\caption{\label{fig5.1}}}
\end{minipage}
\hfill
\begin{minipage}[b]{6.0cm}
\begin{picture}(4.0,4.5)
\put(-0.75,-3.0){\special{psfile=st_02.eps hscale=35 vscale=35}}
\end{picture}
\par
{\caption{\label{fig5.2}}}
\end{minipage}
\end{figure}
Figures \ref{fig5.3}, \ref{fig5.4} show reconstructions with
regularization with {\it{higher order derivatives of bounded
variation}}.
\begin{figure}[ht]
\unitlength1cm
\begin{minipage}[b]{6.0cm}
\begin{picture}(4.0,4.5)
\put(-0.75,-3.0){\special{psfile=st_03.eps hscale=35 vscale=35}}
\end{picture}
\par
{\tiny{\caption{\label{fig5.3} ${\lambda^2} = 10^{-4}$,
$\beta = 10^{-2}$,
$\gamma = 1/m$, $m=96$}}}
\end{minipage}
\hfill
\begin{minipage}[b]{6.0cm}
\begin{picture}(4.0,4.5)
\put(-0.75,-3.0){\special{psfile=st_04.eps hscale=35 vscale=35}}
\end{picture}
\par
{\tiny{\caption{\label{fig5.4} ${\lambda^2} = 10^{-3}$,
$\beta = 10^{-2}$, $\gamma = 1/m$, $m=96$}}}
\end{minipage}
\end{figure}
In each of the Figures \ref{fig5.3}, \ref{fig5.4}
the dotted line represents the reconstructed function, while the
dash--dotted line represents its derivative, which was calculated by a forward
difference scheme.
The captions of these figures state the actual parameter settings of
$\lambda^2$, $\beta$, $\gamma$.
}
\end{example}
\begin{example}
\label{ex5.2}
{\rm {\bf{A linear multi--scale algorithm:}}
Here the same denoising problem as in Example \ref{ex5.1} is studied,
but as resolving algorithm we used a linear multi--scale algorithm.
In Figures \ref{fig5.5}, \ref{fig5.6} we have plotted the Tikhonov
regularized solutions, i.e., the minimizing elements of
\be
\label{5.2}
\min \|u - u_0^\delta\|^2_{L^2} + {\lambda^2} \|u_{xx}\|^2_{L^2}
\ee
for different values of the regularization parameter ${\lambda^2}$.
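A discrete sketch of (\ref{5.2}): the minimizer solves a linear system built from a central second--difference matrix. The grid matches the captions below; the noise amplitude is a hypothetical choice.

```python
import numpy as np

# Sketch of the Tikhonov problem (5.2): the minimizer of
#   ||u - u0_delta||^2 + lam2 ||u_xx||^2
# on a grid solves (I + lam2 * D2^T D2) u = u0_delta, with D2 a central
# second-difference matrix.
def tikhonov_second_order(u0_delta, lam2, dx):
    n = u0_delta.size
    D2 = (np.eye(n - 2, n) - 2.0 * np.eye(n - 2, n, k=1)
          + np.eye(n - 2, n, k=2)) / dx**2
    return np.linalg.solve(np.eye(n) + lam2 * D2.T @ D2, u0_delta)

m = 97
x = np.linspace(0.0, 1.0, m)
rng = np.random.default_rng(7)
u0_delta = 0.5 - np.abs(0.5 - x) + 0.01 * rng.standard_normal(m)
u_rec = tikhonov_second_order(u0_delta, lam2=1e-4, dx=1.0 / (m - 1))
```

The quadratic penalty on $u_{xx}$ smooths the corner of the hat function at $x=1/2$, which is precisely the oversmoothing that the comparison with higher order bounded variation regularization exhibits.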
In the reconstructions shown in Figures \ref{fig5.5} -- \ref{fig5.6} the
$\star$--line shows the reconstruction with the linear multi--scale
algorithm implemented with noisy boundary data, and the ``$\times$''--line shows its
derivative; the lines $-\cdot$ and $--$ denote the reconstruction and its
derivative if ``a--priori'' information on exact boundary data is
available and implemented.
\begin{figure}[ht]
\unitlength1cm
\begin{minipage}[b]{6.0cm}
\begin{picture}(4.0,4.5)
\put(-0.75,-3.0){\special{psfile=st_05.eps hscale=35 vscale=35}}
\end{picture}
\par
{\tiny{\caption{\label{fig5.5} ${\lambda^2} = 10^{-4}$, $m=96$}}}
\end{minipage}
\hfill
\begin{minipage}[b]{6.0cm}
\begin{picture}(4.0,4.5)
\put(-0.75,-3.0){\special{psfile=st_06.eps hscale=35 vscale=35}}
\end{picture}
\par
{\tiny{\caption{\label{fig5.6} ${\lambda^2} = 10^{-3}$, $m=96$}}}
\end{minipage}
\end{figure}
}
\end{example}
It is apparent from the numerical experiments that regularization
with higher order bounded variation resolves discontinuities in derivatives
efficiently; in comparison with a linear multi--scale algorithm the location
of the discontinuities can be estimated sharply.
Of course there is a price to pay: a linear multi--scale algorithm requires
only the solution of one sparse matrix problem, whereas for regularization
with functions of higher order bounded variation
a sparse matrix problem (of the same dimension as in the linear
multi--scale algorithm) has to be solved in each iteration.
\begin{thebibliography}{10}
\bibitem{ChaLio95}
A.~Chambolle and P.-L. Lions.
\newblock Image recovery via total variation minimization and related problems.
\newblock {\em Numer. Math.}, 72:168--188, 1997.
\bibitem{DobSan94}
D.~Dobson and F.~Santosa.
\newblock An image enhancement technique for electrical impedance tomography.
\newblock {\em Inverse Problems}, 10:317--334, 1994.
\bibitem{DobSch96}
D.~Dobson and O.~Scherzer.
\newblock Analysis of regularized total variation penalty methods for
denoising.
\newblock {\em Inverse Problems}, 12:601--617, 1996.
\bibitem{DobVog95}
D.~Dobson and C.R. Vogel.
\newblock Convergence of an iterative method for total variation denoising.
\newblock {\em SIAM J. on Numerical Analysis}.
\newblock To appear.
\bibitem{EvaGar92}
L.~C. Evans and R.~F. Gariepy.
\newblock {\em Measure Theory and Fine Properties of Functions}.
\newblock CRC Press, Ann Arbor, 1995.
\bibitem{Gro92}
C.~W. Groetsch.
\newblock Spectral methods for linear inverse problems with unbounded
operators.
\newblock {\em J. Approx. Th.}, 70:16--28, 1992.
\bibitem{GroSch93}
C.~W. Groetsch and O.~Scherzer.
\newblock Optimal order of convergence for stable evaluation of differential
operators.
\newblock {\em Electronic Journal of Differential Equations}, 1993 No.4:1--10,
1993.
\bibitem{Isa90}
V.~Isakov.
\newblock {\em Inverse Source Problems}.
\newblock Amer. Math. Soc., Providence, 1990.
\bibitem{Koe84}
J.~J. Koenderink.
\newblock The structure of images.
\newblock {\em Biol. Cybern.}, 50:363--370, 1984.
\bibitem{MorSol95}
J.-M. Morel and S.~Solimini.
\newblock {\em Variational Methods in Image Segmentation}.
\newblock Birkh\"auser, Basel, 1995.
\bibitem{Mor84}
V.~A. Morozov.
\newblock {\em Methods for solving incorrectly posed problems}.
\newblock Springer Verlag, New York, Berlin, Heidelberg, 1984.
\bibitem{OshRud90}
S.~Osher and L.~Rudin.
\newblock Feature oriented image enhancement using shock filters.
\newblock {\em SIAM J. on Numerical Analysis}, 27:919--940, 1990.
\bibitem{RudOshFat92}
L.I. Rudin, S.~Osher, and E.~Fatemi.
\newblock Nonlinear total variation based noise removal algorithms.
\newblock {\em Physica D}, 60:259--268, 1992.
\bibitem{Sch98}
O.~Scherzer.
\newblock Denoising with higher order derivatives of bounded variation and an
application to parameter estimation.
\newblock Accepted for publication in Computing.
\bibitem{VogOma96}
C.~Vogel and M.~Oman.
\newblock Iterative methods for total variation denoising.
\newblock {\em SIAM J. Stat. Sci. Comput}, 17:227--238, 1996.
\bibitem{Zie80}
W.~P. Ziemer.
\newblock {\em Weakly Differentiable Functions}.
\newblock Springer Verlag, New York, Berlin, Heidelberg, 1980.
\end{thebibliography}
{\sc Otmar Scherzer\newline
Institut f\"ur Industriemathematik\newline
Universit\"at Linz\newline
Altenberger Str. 69,
A--4040 Linz, Austria}\newline
E-Mail address: scherzer@indmath.uni-linz.ac.at
\end{document}