\documentstyle[twoside]{article} \pagestyle{myheadings} \begin{document}
\markboth{\hfil OPTIMAL ORDER OF CONVERGENCE\hfil EJDE--1993/04}%
{EJDE--1993/04\hfil C.W. Groetsch and O. Scherzer\hfil}
\ifx\Box\undefined \newcommand{\Box}{\diamondsuit}\fi
\title{\vspace{-1in}\parbox{\linewidth}{\footnotesize\noindent {\sc Electronic Journal of Differential Equations}\newline Vol. 1993(1993), No. 04, pp. 1-10. Published October 14, 1993.\newline ISSN 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu \newline ftp (login: ftp) 147.26.103.110 or 129.120.3.113 } \vspace{\bigskipamount} \\ The Optimal Order of Convergence for Stable Evaluation of Differential Operators \thanks{{\em 1991 Mathematics Subject Classifications:} Primary 47A58, Secondary 65J70.\newline\indent {\em Key words and phrases:} Regularization, unbounded operator, optimal convergence, stable.\newline\indent \copyright 1993 Southwest Texas State University and University of North Texas\newline\indent Submitted: June 14, 1993.\newline\indent Supported by the Austrian Fonds zur F\"orderung der wissenschaftlichen Forschung,\newline\indent project P7869-PHY, the Christian Doppler Society, Austria (O.S.), and NATO\newline\indent project CRG-930044 (C.W.G.) } }
\date{} \author{C.W. Groetsch and O. Scherzer} \maketitle
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\def\eps{\epsilon} \def\Lra{\Leftrightarrow} \def\notto{\to\!\!\!\!\!\!/}
\newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{remark}{Remark}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{example}{Example}[section]

\begin{abstract} An optimal order of convergence result, with respect to the error level in the data, is given for a Tikhonov-like method for approximating values of an unbounded operator. It is also shown that if the choice of parameter in the method is made by the discrepancy principle, then the order of convergence of the resulting method is suboptimal. Finally, a modified discrepancy principle leading to an optimal order of convergence is developed. \end{abstract}

\section{Introduction} \setcounter{equation}{0}

Suppose that $L:{\cal{D}}(L) \subseteq H_1 \to H_2$ is a closed densely defined unbounded linear operator from a Hilbert space $H_1$ into a Hilbert space $H_2$. The problem of computing values $y=Lx$, for $x \in {\cal{D}}(L)$, is then ill--posed in the sense that small perturbations in $x$ may lead to data $x^\delta$ satisfying $\|x-x^\delta\| \leq \delta$, but $x^\delta \notin {\cal{D}}(L)$, or, even if $x^\delta \in {\cal{D}}(L)$, it may happen that $Lx^\delta \notto \;Lx$ as $\delta \to 0$, since the operator $L$ is unbounded. Morozov has studied a stable method for approximating the value $Lx$ when only approximate data $x^\delta$ is available (see \cite{7} for information on Morozov's work). This method takes as an approximation to $y=Lx$ the vector $y_\alpha^\delta = L z_\alpha^\delta$, where $z_\alpha^\delta$ minimizes the functional \begin{equation} \label{eq1} \|z-x^\delta\|^2+\alpha\|Lz\|^2\;\;\;\;\;(\alpha > 0) \end{equation} over ${\cal{D}}(L).$ This is equivalent to \begin{equation} \label{eq2} y_\alpha^\delta = L(I + \alpha L^*L)^{-1}x^\delta. \end{equation} Morozov shows that if $\alpha=\alpha(\delta) \to 0$ as $\delta \to 0$, in such a way that $\frac{\delta}{\sqrt{\alpha}} \to 0$, then $y_\alpha^\delta \to Lx$ as $\delta \to 0$.
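
We recall, for the reader's convenience, why minimizing (\ref{eq1}) leads to (\ref{eq2}); the following is only a standard first--variation sketch. If $z_\alpha^\delta$ minimizes (\ref{eq1}) over ${\cal{D}}(L)$, then for every $h \in {\cal{D}}(L)$
\[
(z_\alpha^\delta - x^\delta, h) + \alpha(Lz_\alpha^\delta, Lh) = 0,
\]
so that $Lz_\alpha^\delta \in {\cal{D}}(L^*)$ and $(I + \alpha L^*L)z_\alpha^\delta = x^\delta$. Since $L$ is closed and densely defined, $L^*L$ is self--adjoint and nonnegative, hence $I + \alpha L^*L$ has a bounded, everywhere defined inverse, and (\ref{eq2}) follows.
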
Morozov also develops an a posteriori method, the {\it discrepancy principle}, for choosing the parameter $\alpha$, depending on the data $x^\delta$, that leads to a stable convergent approximation scheme for $Lx$.

As a simple concrete example of this type of approximation, consider differentiation in $L^2({\bf R})$. That is, the operator $L$ is defined on $H^1({\bf R})$, the Sobolev space of functions possessing a weak derivative in $L^2({\bf R})$, by $Lx = x^\prime$. For a given data function $x^\delta \in L^2({\bf R})$ satisfying $\| x-x^\delta \| \leq \delta$, the stabilized approximate derivative (1.2) is easily seen (using Fourier transform analysis) to be given by \[ y^\delta_\alpha(s) = \int^\infty_{-\infty} \sigma_\alpha (s-t)x^\delta(t) \,dt\, \] where the kernel $\sigma_\alpha$ is given by \[ \sigma_\alpha(t) = -\frac{{\mbox{sign}}\,(t)}{2\alpha} \exp(-|t|/\sqrt{\alpha}). \]

Another concrete example of this stable evaluation method is provided by the Dirichlet--to--Neumann map. Consider for simplicity the unit disk $D$ and unit circle $\partial D$. For a given function $g$ on $\partial D$ we denote by $u$ the function which is harmonic in $D$ and takes boundary values $g$. The operator $L$ is then defined by $Lg = \frac{\partial u}{\partial n}$. To be more specific, $L$ is the closed operator defined on the dense subspace \[ {\cal D}(L) = \left\{ g\in L^2(\partial D): \sum_{n\in {\bf Z}} |n|^2 |\hat{g}(n)|^2 < \infty \right\} \] of $L^2(\partial D)$ by \[ (Lg)(e^{i\theta}) = \sum_{n\in {\bf Z}} |n| \hat{g}(n) \exp(in\theta) \] where \[ \hat{g}(n) = \frac{1}{2\pi} \int^{2\pi}_0 g(t) e^{-int} \,dt\,. \] The stable approximation (1.2) for $Lx$, given approximate data $x^\delta$, is then \[ y^\delta_\alpha (e^{i\theta}) = \sum_{n\in {\bf Z}} \left( \frac{|n|}{1+\alpha n^2} \right) \hat{x^\delta} (n) \exp (in \theta). \]

Our aim is to provide an order of convergence result for $\{y_\alpha^\delta\}$ and to show that this order of convergence is essentially best possible. Our approach, which is inspired by work of Lardy \cite{6} on generalized inverses of unbounded operators, is based on spectral analysis of certain {\it bounded} operators associated with $L$ (see also \cite{5}, where other consequences of this approach are investigated). We also determine the best possible order of convergence when the discrepancy principle is used to determine $\alpha$. This order of convergence turns out to be suboptimal. For results of a similar nature pertaining to Tikhonov regularization for solving first kind equations involving bounded operators, see \cite[Chapter 3]{4}. Finally, we propose a modification of the discrepancy principle for approximating values of an unbounded operator that leads to an optimal convergence rate.

\section{Order of Convergence} \setcounter{equation}{0}

To establish the order of convergence of (\ref{eq2}) it will be convenient to reformulate (\ref{eq2}) as \begin{equation} \label{eq3} y_\alpha^\delta = L\check{L} [ \alpha I + (1 - \alpha)\check{L}]^{-1} x^\delta \end{equation} where $\check{L} = (I + L^* L)^{-1}$. Both $\check{L}$ and $L \check{L}$ are known to be bounded, everywhere defined linear operators, and $\check{L}$ is self--adjoint with spectrum $\sigma(\check{L}) \subseteq [0,1]$ (see, e.g., \cite[p.307]{8}). Because $x^\delta$ in (\ref{eq3}) is operated upon by a product of bounded operators, we see that for fixed $\alpha > 0$, $y_\alpha^\delta$ depends continuously on $x^\delta$, that is, the approximations $\{y_\alpha^\delta\}$ are stable.
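
As a purely illustrative numerical aside, the differentiation example of the introduction can be carried out by discretizing the convolution with the kernel $\sigma_\alpha$. The following sketch (in Python) is not part of the analysis; the grid, test function, noise level, and the coupling $\alpha = \delta^{2/3}$ (cf. Theorem \ref{th1} below) are our own illustrative choices.

\begin{verbatim}
import numpy as np

def stabilized_derivative(x_delta, t, alpha):
    # Discretization of y(s) = int sigma_alpha(s - t) x^delta(t) dt,
    # with sigma_alpha(t) = -sign(t)/(2*alpha) * exp(-|t|/sqrt(alpha)).
    dt = t[1] - t[0]
    s = t - t[t.size // 2]                   # kernel grid centered at 0
    kernel = -np.sign(s) / (2 * alpha) * np.exp(-np.abs(s) / np.sqrt(alpha))
    return np.convolve(x_delta, kernel, mode="same") * dt

t = np.linspace(-10.0, 10.0, 2001)
delta = 1e-2                                  # illustrative noise level
rng = np.random.default_rng(0)
x_delta = np.sin(t) + delta * rng.standard_normal(t.size)
alpha = delta ** (2.0 / 3.0)                  # alpha^3 / delta^2 = 1
y = stabilized_derivative(x_delta, t, alpha)  # approximates cos(t)
\end{verbatim}

For fixed $\alpha > 0$ the output depends continuously on the data, in accordance with the stability just observed, whereas a raw difference quotient applied to $x^\delta$ would amplify the noise without bound as the mesh is refined.
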
The representation (\ref{eq3}) has certain advantages in that the dependence of $y_\alpha^\delta$ on bounded operators ($\check{L}$ and $L\check{L}$), which are independent of $\alpha$, is explicit. To further simplify the presentation, we introduce the functions $$T_\alpha(t) = (\alpha + (1-\alpha)t)^{-1},\;\;\;\alpha>0,\;t \in [0,1].$$ We then have $$y_\alpha^\delta = L\check{L} T_\alpha(\check{L}) x^\delta.$$ The approximation with no data error will be denoted by $y_\alpha$: $$y_\alpha = L\check{L} T_\alpha(\check{L}) x.$$ \begin{theorem} \label{th1} {\rm If $x \in {\cal{D}}(LL^*L)$ and $\alpha=\alpha(\delta)$ satisfies $\frac{\alpha^3}{\delta^2} \to C > 0$ as $\delta \to 0$, then $\|y_\alpha^\delta - Lx\| = O \bigl( \delta^{\frac{2}{3}} \bigr).$ } \end{theorem} {\bf{Proof:}} Let $w=(I+LL^*)Lx.$ Then $Lx=\hat{L}w$, where $\hat{L} = (I+LL^*)^{-1}$. Since $$ L\check{L} = L(I + L^*L)^{-1} = (I + LL^*)^{-1}L=\hat{L}L $$ and $ Lx = (I+LL^*)^{-1}w = \hat{L}w$, we obtain from (\ref{eq3}) \begin{eqnarray*} y_\alpha-Lx &=& L \left( \check{L} - [\alpha I + (1-\alpha) \check{L} ] \right) [\alpha I + (1-\alpha) \check{L} ]^{-1} x \\ &=& \alpha L ( \check{L} - I ) [\alpha I + (1-\alpha) \check{L} ]^{-1} x \\ &=& \alpha ( \hat{L} - I ) [\alpha I + (1-\alpha) \hat{L} ]^{-1} Lx \\ &=& \alpha(\hat{L}-I)T_\alpha(\hat{L})\hat{L}w. \end{eqnarray*} Since $\|T_\alpha(\hat{L})\hat{L}\| \leq 1$, we find that \begin{equation} \label{eq4} \|y_\alpha - Lx \| = O(\alpha). \end{equation} Also, \begin{eqnarray} \label{eq5} \begin{array}{rcl} \|y_\alpha^\delta - y_\alpha\|^2 & = & (L^*L \check{L} T_\alpha(\check{L})(x^\delta -x), \check{L}T_\alpha(\check{L})(x^\delta-x)) \\ & = & ((I-\check{L})T_\alpha(\check{L})(x^\delta-x), \check{L}T_\alpha(\check{L})(x^\delta-x))\\ & \leq & \|I - \check{L}\|\frac{\delta^2}{\alpha} \end{array} \end{eqnarray} since $\|T_\alpha(\check{L})\| \leq \frac{1}{\alpha}.$ Therefore, $$\|y_\alpha^\delta - y_\alpha\| = O \left( \frac{\delta}{\sqrt{\alpha}} \right).$$ We then have $$\|y_\alpha^\delta - Lx\| = O(\alpha) + O\left( \frac{\delta}{\sqrt{\alpha}} \right) = O\bigl( \delta^{\frac{2}{3}} \bigr),$$ since $\frac{\alpha^3}{\delta^2} \to C > 0.$ \hfill$\Box$ This theorem shows that under the regularity condition $x \in {\cal{D}}(LL^*L)$ on the exact data the order of convergence $O\left(\delta^\frac{2}{3}\right)$ is attainable by the approximation (\ref{eq3}) using approximate data with error level $\delta$. In the next section we show that this order is best possible, except for the trivial case when $x \in N(L)$, i.e., when $Lx=0$. \section{Optimality} \setcounter{equation}{0} We begin by showing that any improvement in the order $ O\bigl( \delta^{\frac{2}{3}} \bigr)$ entails a certain convergence rate for the parameter $\alpha$. \begin{lemma} \label{le1} {\rm If $ x \notin N(L)$ and $\|y_\alpha^\delta-Lx\| = o \bigl( \delta^{\frac{2}{3}} \bigr)$ for all $x^\delta$ satisfying $\|x-x^\delta\| \leq \delta$, then $\alpha = o \bigl( \delta^\frac{2}{3} \bigr).$ } \end{lemma} {\bf Proof:} Let $x^\delta = x - \delta u$, where $u$ is a unit vector and let $e_\alpha^\delta = y_\alpha^\delta - Lx$. Then \begin{eqnarray*} [\alpha I + (1-\alpha)\hat{L}] e_\alpha^\delta &=& [\alpha I + (1-\alpha)\hat{L}] \left( L\check{L} (\alpha I + (1-\alpha)\check{L})^{-1}x - Lx \right) - \\ & &\quad \delta [\alpha I + (1-\alpha)\hat{L}] L\check{L} (\alpha I + (1-\alpha)\check{L})^{-1}u \\ &=& \alpha (\hat{L} - I)Lx - \delta L\check{L} u. 
\end{eqnarray*} Since $\|e_\alpha^\delta\| = o \bigl( \delta^{\frac{2}{3}} \bigr)$, by assumption, and since $$ \|\delta L \check{L} u\| \leq \delta \|L \check{L}\| = o\bigl( \delta^{\frac{2}{3}} \bigr),$$ we find that $$ \frac{\alpha}{\delta^\frac{2}{3}} \|(\hat{L}-I)Lx\| \to 0 \mbox{ as } \delta \to 0.$$ But $x \notin N(L)=N((\hat{L}-I)L)$ and hence $\alpha = o \left( \delta^\frac{2}{3} \right).$ \hfill$\Box$

We now show that for a wide class of operators the order of convergence $O\left( \delta^\frac{2}{3} \right)$ cannot be improved. We will consider the important class of operators $L^*L$ which have a divergent sequence of eigenvalues. Such is the case if $L$ is the derivative operator, in which case $-L^*L$ is the Laplacian, or more generally whenever $L$ is a differential operator for which $\check{L}$ is compact.

\begin{theorem} \label{th2} {\rm If $L^*L$ has eigenvalues $\mu_n \to \infty$ and $\|y_\alpha^\delta-Lx\| = o\left( \delta^\frac{2}{3} \right)$ for all $x^\delta$ with $\|x-x^\delta\| \leq \delta$, then $x \in N(L)$. } \end{theorem}

{\bf{Proof:}} If $x \notin N(L)$, then $\alpha = o \bigl( \delta^{\frac{2}{3}} \bigr)$, by Lemma \ref{le1}. Let $e_\alpha^\delta = y_\alpha^\delta - Lx$; then $$\|e_\alpha^\delta\|^2 = \|y_\alpha - Lx\|^2+2(y_\alpha-Lx,y_\alpha^\delta-y_\alpha) + \|y_\alpha^\delta-y_\alpha\|^2$$ and by hypothesis $\frac{\|y_\alpha-Lx\|^2}{\delta^{\frac{4}{3}}} \to 0$ as $\delta \to 0$ (since $x^\delta=x$ satisfies $\|x-x^\delta\| \leq \delta$). Therefore we must have \begin{equation} \label{eq6} \frac{2(y_\alpha - Lx,y_\alpha^\delta-y_\alpha)+\|y_\alpha^\delta-y_\alpha\|^2} {\delta^\frac{4}{3}} \to 0 \mbox{ as } \delta \to 0. \end{equation} Suppose that $\{u_n\}$ are orthonormal eigenvectors of $L^*L$ associated with $\{\mu_n\}$. Then $\{u_n\}$ are eigenvectors of $\check{L}$ associated with the eigenvalues $\lambda_n=\frac{1}{1+\mu_n}$ and $\lambda_n \to 0$ as $n \to \infty$. Now let $x^\delta=x+\delta u_n$. Then \begin{eqnarray*} \|y_\alpha^\delta - y_\alpha\|^2 &=& \delta^2 \left( \check{L} (\alpha I + (1-\alpha)\check{L})^{-1} u_n, L^*L\check{L} (\alpha I + (1-\alpha)\check{L})^{-1} u_n \right)\\ &=& \delta^2 \lambda_n^2 \mu_n (\alpha+(1-\alpha)\lambda_n)^{-2}\\ &=& \delta^2 \lambda_n(1-\lambda_n)(\alpha+(1-\alpha)\lambda_n)^{-2}. \end{eqnarray*} Therefore, if $\delta=\delta_n=\lambda_n^{\frac{3}{2}}$, then $\delta_n \to 0$ as $n \to \infty$ and \begin{equation} \label{eq7} \frac{\|y_\alpha^{\delta_n} - y_\alpha\|^2}{\delta_n^{\frac{4}{3}}} = (1 - \lambda_n) \left( \frac{\alpha}{\delta_n^{\frac{2}{3}}} + 1 -\alpha \right)^{-2} \to 1 \mbox{ as } n \to \infty. \end{equation} Finally, we have $$\frac{|(y_\alpha - Lx,y_\alpha^{\delta_n} - y_\alpha)|}{\delta_n^\frac{4}{3}} \leq \frac{\|y_\alpha - Lx\|}{\delta_n^\frac{2}{3}} \frac{\|y_\alpha^{\delta_n} - y_\alpha\|}{\delta_n^\frac{2}{3}} \to 0.$$ This, along with (\ref{eq7}), contradicts (\ref{eq6}) and hence $x \in N(L)$. \hfill$\Box$

\section{The Discrepancy Principle} \setcounter{equation}{0}

We may write the approximation $y_\alpha^\delta$ to $Lx$ as \begin{equation} \label{eq8} y_\alpha^\delta = Lz_\alpha^\delta \; \; \mbox{where} \; \; z_\alpha^\delta = \check{L} T_\alpha(\check{L})x^\delta. \end{equation} Morozov \cite[p.125]{7} has shown that if $\|x^\delta\| > \delta$ (i.e., the signal--to--noise ratio is greater than one), then there is a unique $\alpha=\alpha(\delta)>0$ such that \begin{equation} \label{eq9} \|z_{\alpha(\delta)}^\delta - x^\delta\| = \delta.
\end{equation} Moreover, he showed that $y_{\alpha(\delta)}^\delta \to Lx$ as $\delta \to 0$. We now provide an order of convergence result for this method and show that, in general, it cannot be improved.

\begin{theorem} \label{th3} {\rm If $x \in {\cal{D}}(L^*L)$ and $x \notin N(L)$, then $\|y_{\alpha(\delta)}^\delta - Lx\| = O(\sqrt{\delta})$. } \end{theorem}

{\bf{Proof:}} First note that $$[\alpha I + (1 - \alpha) \check{L}] (z_\alpha^\delta - x^\delta) = \alpha (\check{L} - I)x^\delta.$$ Moreover, note that (cf. (\ref{eq3})) $ \| \alpha I + (1-\alpha) \check{L} \| \leq 1.$ Therefore, if $\alpha$ is chosen by (\ref{eq9}), then $$\alpha\|(\check{L} - I)x^\delta\| \leq \|z_\alpha^\delta - x^\delta\|=\delta$$ and hence $$\|(\check{L} - I) x^\delta\| \leq \frac{\delta}{\alpha(\delta)}.$$ Since $x \notin N(L)$, we have $x \notin N(\check{L}-I)$ and hence $$0 < \|(\check{L}-I)x\| \leq \liminf_{\delta \to 0} \frac{\delta}{\alpha(\delta)}.$$ We therefore have \begin{equation} \label{eq10} \alpha=O(\delta). \end{equation} Since $z_\alpha^\delta$ minimizes (\ref{eq1}) over ${\cal{D}}(L)$ and $\alpha(\delta)$ satisfies (\ref{eq9}), it follows that \begin{eqnarray*} \delta^2 + \alpha(\delta) \|L z_{\alpha(\delta)}^\delta\|^2 &=& \|z_{\alpha(\delta)}^\delta - x^\delta\|^2 + \alpha(\delta)\|Lz_{\alpha(\delta)}^\delta\|^2 \\ &\leq& \|x-x^\delta\|^2+\alpha(\delta)\|Lx\|^2\\ &\leq& \delta^2 + \alpha(\delta)\|Lx\|^2 \end{eqnarray*} and hence $\|Lz_{\alpha(\delta)}^\delta\| \leq \|Lx\|$. We then have \begin{eqnarray*} \|y_{\alpha(\delta)}^\delta - Lx\|^2 &=& \|y_{\alpha(\delta)}^\delta\|^2 - 2(y_{\alpha(\delta)}^\delta,Lx) + \|Lx\|^2\\ &\leq& 2(Lx - y_{\alpha(\delta)}^\delta,Lx) = 2 (x-z_{\alpha(\delta)}^\delta,L^*Lx)\\ &\leq& 4\delta \|L^*Lx\| \end{eqnarray*} and hence $\|y_{\alpha(\delta)}^\delta - Lx\| = O(\sqrt{\delta}).$ \hfill$\Box$

It turns out that if the parameter is chosen by the discrepancy method (\ref{eq9}), then the order of convergence derived in Theorem \ref{th3} cannot be improved in general. To see this, suppose that $\check{L}$ has a sequence of eigenvalues $\lambda_n \to 0$ and that $\{u_n\}$ is a corresponding sequence of orthonormal eigenvectors. Furthermore, let $\lambda_n=\frac{1}{1+\mu_n}$, $x=u_1$, and $x^{\delta_n}=u_1+\delta_n u_n.$ An easy calculation then gives \begin{equation} \label{eq11} \|y_{\alpha}^{\delta_n} - Lx\|^2 \geq \frac{\lambda_n^2}{(\alpha+(1-\alpha)\lambda_n)^2}\delta_n^2 \mu_n. \end{equation} Now set $\delta_n = \frac{\mu_n}{(1+\mu_n)^2}$; then $\delta_n \to 0$ as $n \to \infty$. We will show that if $\alpha$ satisfies (\ref{eq10}), then $\|y_\alpha^{\delta_n} - Lx\| = o(\sqrt{\delta_n})$ is not possible. Indeed, if this were the case, then by (\ref{eq11}) we would have $$ \left( \frac{\alpha}{\delta_n} +(1-\alpha)\frac{\lambda_n}{\delta_n} \right)^{-2} = \mu_n \lambda_n^2 \delta_n (\alpha + (1-\alpha)\lambda_n)^{-2} \to 0$$ and hence $\frac{\alpha}{\delta_n} + (1-\alpha)\frac{\lambda_n}{\delta_n} \to \infty$. But if $\alpha$ is chosen by (\ref{eq9}), then by (\ref{eq10}), $\frac{\alpha}{\delta_n}$ is bounded and hence $\frac{\lambda_n}{\delta_n} \to\infty$. But $\frac{\lambda_n}{\delta_n} = \frac{1}{\mu_n} +1 \to 1,$ a contradiction. In the next section we show how the discrepancy principle can be modified to recover the optimal order of convergence.
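
As a computational remark, in a discretized setting the parameter determined by (\ref{eq9}) can be found by elementary root--finding, since $\alpha \to \|z_\alpha^\delta - x^\delta\|$ is continuous and increasing. The following sketch (in Python) is one simple possibility and not a prescription from the literature; the bracketing interval and iteration count are our own illustrative choices, and a root exists in the bracket when $\|x^\delta\| > \delta$, as in Morozov's condition.

\begin{verbatim}
import numpy as np

def residual(alpha, L, x_delta):
    # ||z_alpha^delta - x^delta||, with z_alpha^delta computed from the
    # normal equation z = (I + alpha * L^T L)^{-1} x^delta, cf. (1.2).
    n = L.shape[1]
    z = np.linalg.solve(np.eye(n) + alpha * (L.T @ L), x_delta)
    return float(np.linalg.norm(z - x_delta))

def discrepancy_alpha(L, x_delta, delta, lo=1e-14, hi=1e8):
    # Bisection on a logarithmic scale for ||z_alpha - x^delta|| = delta;
    # the residual is an increasing function of alpha.
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if residual(mid, L, x_delta) < delta:
            lo = mid        # residual too small: alpha must increase
        else:
            hi = mid
    return np.sqrt(lo * hi)
\end{verbatim}

The modified principle (\ref{eq12}) of the next section can be treated by the same bisection, with the test $\alpha^2\|z_\alpha^\delta - x^\delta\| < \delta^2$ in place of $\|z_\alpha^\delta - x^\delta\| < \delta$, since $\alpha^2\|z_\alpha^\delta - x^\delta\|$ is again increasing in $\alpha$.
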
\section{Optimal Discrepancy Methods} \setcounter{equation}{0}

Engl and Gfrerer \cite{1}, \cite{2}, \cite{3} have developed discrepancy principles of optimal order for approximating solutions of bounded linear operator equations of the first kind by Tikhonov regularization. In this section we investigate similar principles for approximating values of unbounded linear operators. We begin by considering the function $$ \rho(\alpha)=\alpha^2\|z_\alpha^\delta - x^\delta\|.$$ By using a spectral representation of the operator $\check{L}T_\alpha(\check{L})$ which defines $z_\alpha^\delta$ via (\ref{eq8}), it is easy to see that the function $\alpha \to \rho(\alpha)$ is continuous, strictly increasing, and satisfies $$\lim_{\alpha \to 0+} \rho(\alpha) = 0 \mbox{ and } \lim_{\alpha \to \infty} \rho(\alpha) = \infty$$ (we assume that $x^{\delta} \not\in N(L)$, for otherwise the approximations are trivial). Therefore, there is a unique $\alpha=\alpha(\delta)>0$ satisfying \begin{equation} \label{eq12} \|z_\alpha^\delta - x^\delta\|= \frac{\delta^2}{\alpha^2}. \end{equation} We will show that the modified discrepancy principle (\ref{eq12}) leads, under suitable conditions, to an optimal order of convergence for the approximations $y_\alpha^\delta$ to $Lx$.

\begin{theorem} \label{th4} {\rm Suppose $x \in {\cal{D}}(L^*L)$ and $x \notin N(L)$. If $\alpha=\alpha(\delta)$ is chosen by condition (\ref{eq12}), then $\frac{\delta^2}{\alpha(\delta)^3} \to \|L^*Lx\|>0$ as $\delta \to 0$. } \end{theorem}

{\bf{Proof:}} To simplify notation we set $\alpha=\alpha(\delta)$ in the proof. First we show that $\alpha \to 0$ as $\delta \to 0$. Since $$[\alpha I + (1 - \alpha)\check{L}](z_\alpha^\delta-x^\delta) = \alpha(\check{L} - I)x^\delta \mbox{ and } \|\check{L}\| \leq 1,$$ we have \begin{equation} \label{eq13} \alpha \|(\check{L} - I)x^\delta\| \leq \|z_\alpha^\delta - x^\delta\| = \frac{\delta^2}{\alpha^2}. \end{equation} Also, $(\check{L}-I)x^\delta \to (\check{L}-I)x \neq 0$ as $\delta \to 0$, since $\check{L}x=x$ would imply $L^*Lx=0$, i.e., $x \in N(L)$, contrary to assumption.\\ Therefore, from (\ref{eq13}), we find that $\alpha \to 0$ as $\delta \to 0$. Next we show that $\frac{\delta}{\alpha} \to 0$ as $\delta \to 0$. In fact, $$\|z_\alpha-z_\alpha^\delta\| = \| \check{L}T_\alpha(\check{L})(x-x^\delta)\| \leq \delta $$ and $$x-z_\alpha = x - \check{L}T_\alpha(\check{L}) x \to 0 \; \; \mbox{as} \; \; \alpha \to 0$$ since $N(\check{L})=\{0\}$ and $tT_\alpha(t) \to 1$ as $\alpha \to 0$ for each $t \neq 0$. Therefore $\|x-z_\alpha^\delta\| \to 0$ as $\delta \to 0$. We then have $$ \frac{\delta^2}{\alpha^2} = \|z_\alpha^\delta - x^\delta\| \leq \|z_\alpha^\delta - x\| + \delta \to 0 \mbox{ as } \delta \to 0$$ and hence $\frac{\delta}{\alpha} \to 0$ as $\delta \to 0.$\\ We can now show that $L^*Lz_\alpha^\delta \to L^*Lx$ as $\delta \to 0$.\\ Indeed, \begin{equation} \label{eq14} L^*Lz_\alpha - L^*Lx = (\check{L}T_\alpha(\check{L}) - I)L^*Lx \to 0 \; \; \mbox{as} \; \; \delta \to 0 \end{equation} and $$ L^*L(z_\alpha^\delta - z_\alpha) = (I - \check{L})T_\alpha(\check{L})(x^\delta - x),$$ therefore, \begin{equation} \label{eq15} \|L^*L(z_\alpha^\delta-z_\alpha)\| \leq \|I-\check{L}\|\frac{\delta}{\alpha}, \end{equation} since $\|T_\alpha(\check{L})\| \leq \frac{1}{\alpha}$.
But, since $\frac{\delta}{\alpha} \to 0,$ we find from (\ref{eq14}) and (\ref{eq15}) that $$L^*Lz_\alpha^\delta \to L^*Lx \mbox{ as } \delta \to 0.$$ Finally, we have $$x^\delta - z_\alpha^\delta = \alpha L^*Lz_\alpha^\delta$$ and hence, by (\ref{eq12}), $$\frac{\delta^2}{\alpha^3} = \|L^*L z_\alpha^\delta \| \to \|L^*L x\| \mbox{ as } \delta \to 0.$$ \hfill$\Box$

From Theorems \ref{th1} and \ref{th4} we immediately obtain

\begin{corollary} \label{cor1} {\rm If $x \in {\cal{D}}(LL^*L)$, $x \notin N(L)$, and $\alpha=\alpha(\delta)$ is chosen by (\ref{eq12}), then $\|y_{\alpha(\delta)}^\delta - Lx\| = O\bigl( \delta^{\frac{2}{3}} \bigr).$ } \end{corollary}

The Corollary requires the ``smoothness'' condition $x \in {\cal{D}}(LL^*L)$ in order to guarantee the optimal convergence rate, but it is possible to obtain a ``quasi--optimal'' rate without any additional smoothness assumptions on the data $x$. It follows from the proof of Theorem \ref{th1} (specifically, from (\ref{eq5})) that \begin{equation} \label{eq16} \frac{1}{2}\| y_\alpha^\delta - Lx \|^2 \leq \| y_\alpha - Lx \|^2 + C\frac{\delta^2}{\alpha}. \end{equation} Let $m(x,\delta)$ be the infimum, over $\alpha > 0$, of the right hand side of (\ref{eq16}). It is possible, following ideas of Engl and Gfrerer \cite{2}, to choose a parameter $\alpha = \alpha(\delta)$ such that $\|y_{\alpha(\delta)}^\delta - Lx \|^2$ has the same order as $m(x,\delta)$, which we call the quasi--optimal rate. In fact, minimizing the right hand side of (\ref{eq16}) leads to a condition of the form $$f(\alpha,x):=\left( [\alpha(I-\check{L})T_\alpha(\check{L})]^3x,x \right) = C\delta^2.$$ If we denote the spectral resolution of the identity generated by the operator $\check{L}$ by $\{ E_\lambda : \lambda \in [0,1]\}$, then \[ f(\alpha,z) = \int^1_0 \left[ \frac{\alpha(1-\lambda)}{\alpha(1-\lambda) + \lambda} \right]^3 \,d\|E_\lambda z\|^2\,. \] From this it follows that for any $z \not\in N(L)$, $f(\cdot,z)$ is a monotonically increasing continuous function satisfying \[ \lim_{\alpha\rightarrow 0} f(\alpha,z) = 0 \; \; \mbox{and} \; \; \lim_{\alpha\rightarrow\infty} f(\alpha,z) = \|Pz\|^2, \] where $P$ is the orthogonal projector of $H_1$ onto $N(L)^\perp$. Therefore, for any $\delta > 0$, any $x^\delta \not\in N(L)$, and any positive constant $\gamma$ which is dominated by the signal--to--noise ratio of the data $x^\delta$, that is, satisfying \[ 0 < \gamma < \|Px^\delta\|/\delta, \] there is a unique choice of the parameter $\alpha = \alpha(\delta)$ satisfying \[ f(\alpha(\delta),x^\delta) = (\gamma\delta)^2. \] It can be shown, but we will not provide the details here, that this a posteriori choice of the parameter always leads to the quasi--optimal rate $\|y_\alpha^\delta - Lx\|^2=O(m(x,\delta))$, without any additional smoothness assumptions on the data $x$.

\begin{thebibliography}{99}

\bibitem{1} H.W.~Engl,~~Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates, J. Optimiz. Theory Appl. 49(1987), 209--215. MR 88b:49045.

\bibitem{2} H.W.~Engl and H.~Gfrerer,~~A posteriori parameter choice for general regularization methods for solving linear ill--posed problems, Appl. Numer. Math. 4(1988), 395--417. MR 89i:65060.

\bibitem{3} H.~Gfrerer,~~An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill--posed problems leading to optimal convergence rates, Math. Comp. 49(1987), 507--522. MR 88k:65049.
\bibitem{4} C.W.~Groetsch,~~The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, London, 1984.

\bibitem{5} C.W.~Groetsch,~~Spectral methods for linear inverse problems with unbounded operators, J. Approx. Th. 70(1992), 16--28.

\bibitem{6} L.J.~Lardy,~~A series representation of the generalized inverse of a closed linear operator, Atti Accad. Naz. Lincei Cl. Sci. Mat. Natur., Ser. VIII, 58(1975), 152--157. MR 48 \#13540.

\bibitem{7} V.A.~Morozov,~~Methods for Solving Incorrectly Posed Problems, Springer--Verlag, New York, 1984.

\bibitem{8} F.~Riesz and B.~Sz.-Nagy,~~Functional Analysis, Ungar, New York, 1955.

\end{thebibliography}

\vspace{0.1in} \noindent \begin{tabular}{ll} Department of Mathematical Sciences \qquad & Institut f\"ur Mathematik \\ University of Cincinnati & Universit\"at Linz\\ Cincinnati, OH 45221-0025 & A--4040 Linz\\ USA & Austria \\ E-mail: groetsch@ucbeh.san.uc.edu & scherzer@indmath.uni-linz.ac.at \end{tabular} \end{document}