\documentclass[reqno]{amsart}
\AtBeginDocument{{\noindent\small
{\em Electronic Journal of Differential Equations},
Vol. 2005(2005), No. 02, pp. 1--18.\newline
ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu
\newline ftp ejde.math.txstate.edu (login: ftp)}
\thanks{\copyright 2005 Texas State University - San Marcos.}
\vspace{9mm}}
\begin{document}
\title[\hfilneg EJDE-2005/02\hfil Solution dependence \dots ]
{Solution dependence on problem parameters for initial-value
problems associated with the Stieltjes Sturm-Liouville equations}
\author[L. Battle\hfil EJDE-2005/02\hfilneg]
{Laurie Battle}
\address{Laurie Battle\hfill\break
Department of Mathematics and Computer Science \\
Campus Box 017 \\
Georgia College and State University \\
Milledgeville, GA, 31061, USA}
\email{laurie.battle@gcsu.edu}
\date{}
\thanks{Submitted September 10, 2004. Published January 2, 2005.}
\subjclass[2000]{34A12, 34A30}
\keywords{Initial value problems; continuous dependence; linear systems}
\begin{abstract}
We examine properties of solutions to a $2n$-dimensional
Stieltjes Sturm-Liouville initial-value problem.
Existence and uniqueness of a solution have been previously
proven, but we present a proof in order to establish properties
of boundedness, bounded variation, and continuity.
These properties are then used to prove that the solutions
depend continuously on the coefficients and on the initial
conditions under certain hypotheses. In a future paper,
these results will be extended to eigenvalue problems, and we will
examine dependence on the endpoints and boundary data in addition
to the coefficients. We will find conditions under which the eigenvalues
depend continuously and differentiably on these parameters.
\end{abstract}
\maketitle
\numberwithin{equation}{section}
\newtheorem{theo}{Theorem}[section]
\newtheorem{lemma}[theo]{Lemma}
\newtheorem{cor}[theo]{Corollary}
\newtheorem{rem}[theo]{Remark}
\section{Introduction}\label{intro}
In this work, we examine properties of solutions of generalized
$2n$-dimensional Sturm-Liouville initial value problems of the form
\begin{equation}\label{slprob}
\begin{gathered}
dy=Ay\,dt+dP\,z \\
dz=(dQ-\lambda dW)y+Dz\,dt
\end{gathered}
\end{equation}
on an interval $[a,b]$.
Existence and uniqueness of a solution over the class of quasi-continuous
functions has already been established \cite{Hinton}, but we include part of
the proof in section \ref{IVP} to establish certain bounds and continuity
properties of the solution. In section \ref{seqIVP}, we determine conditions
under which the solution depends continuously on the coefficients.
This is accomplished by taking a sequence of initial value problems and
finding conditions under which the sequence of solutions converges to the
solution of the limit problem.
This work generalizes some earlier results. Kong and Zettl \cite{Kong1},
\cite{Kong3}, and Kong, Wu, and Zettl \cite{Kong1.1}, \cite{Kong2},
consider scalar, self-adjoint equations, whereas this work allows for Stieltjes
Hamiltonian systems. Thus difference equations are included in our formulation.
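To illustrate how difference equations are included in the formulation,
suppose $Q$ and $W$ have jumps $\Delta Q_k=Q(t_k^+)-Q(t_k^-)$ and
$\Delta W_k=W(t_k^+)-W(t_k^-)$ at a point $t_k$. Integrating the second
equation of \eqref{slprob} across $t_k$ yields the jump condition
\[
z(t_k^+)-z(t_k^-)=(\Delta Q_k-\lambda\,\Delta W_k)\,y(t_k),
\]
so that when $Q$ and $W$ are step functions, the system reduces to a
recurrence relating the values of $y$ and $z$ at successive jump points.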
These authors take sequences of initial value problems to examine dependence
of the solution on the problem data, and they require $L^1$ convergence of the
coefficients. For example, in the second order equation $-(py')'+qy=\lambda wy$,
the approximate equation $-(p_1y')'+q_1y=\lambda w_1y$ is considered to be close
to the original equation if $\int_a^b |\frac{1}{p}-\frac{1}{p_1} |+\int_a^b
|q-q_1|+ \int_a^b |w-w_1|$ is small. We take the same approach of using
sequences of initial value problems but allow for more general modes of
convergence on the coefficients, with two sequences of coefficients converging
weakly in $L^1$, one sequence converging uniformly, and two sequences
converging pointwise. The $L^1$ convergence used in \cite{Kong1}-\cite{Kong3}
is a special case of our convergence conditions. In another related work,
Reid \cite{Reid} addresses this problem for Hamiltonian systems, but we relax
his hypotheses on the data and on the modes of convergence.
Knotts-Zides \cite{Knotts} extends Reid's results to more general conditions,
but her problem is only $2$-dimensional and requires $A=D=0$ in (\ref{slprob}).
The references mentioned above actually apply the results to eigenvalue problems
rather than initial value problems. Likewise, the results we find for initial
value problems will be extended to eigenvalue problems in a later paper.
We will find conditions under which not only the solutions, but also the
eigenvalues, depend continuously on the problem parameters. In addition,
conditions will be found under which the eigenvalues depend differentiably on
the problem data.
\section{Preliminaries}\label{pre}
Here we give a preliminary discussion of Stieltjes integrals and
some previous results by Hinton \cite{Hinton}.
In this work, we take $N$ to be the ring of $2n \times 2n$ matrices and
we define the norm as follows: First we define the vector norm by
$\|\bar{x}\| := \sum_i |x_i|$ for $\bar{x}= (x_1, \dotsc, x_{2n})^T$.
Then define the norm on $N$ by $\|A\|:=\sum_{i,j} |a_{ij}|$ for
$A=\{a_{ij}\}$.
It follows that for $A$ in $N$ and $\bar{x}$ a $2n$-vector,
$\|A\bar{x}\|\le \|A\| \|\bar{x}\|$. Let $\mathbb{R}^m$ be the
space of $m$-vectors.
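The inequality $\|A\bar{x}\|\le \|A\|\,\|\bar{x}\|$ follows directly
from these definitions:
\[
\|A\bar{x}\|=\sum_i \Big|\sum_j a_{ij}x_j\Big|
\le \sum_{i,j}|a_{ij}|\,|x_j|
\le \Big(\sum_{i,j}|a_{ij}|\Big)\Big(\sum_j |x_j|\Big)
=\|A\|\,\|\bar{x}\|.
\]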
For some interval $[a,b]$ with $a<b$, a function defined on $[a,b]$ is
called {\it quasi-continuous} if it has finite left- and right-hand
limits at each point of $[a,b]$. For a function $g$ of bounded variation
on $[a,b]$, let $v_g(t)$ denote the total variation of $g$ on $[a,t]$.
The left and right Stieltjes integrals, denoted $(L)\int$ and $(R)\int$,
are the refinement limits of the approximating sums in which the
integrand is evaluated at the left and right endpoints of the
subintervals, respectively \cite{Hinton}. The following two theorems
are used throughout.
\begin{theo}\label{intbd}
If $f$ is quasi-continuous and $g$ is of bounded variation on $[a,b]$,
then
\[
\Big\|\int_a^b f\,dg\Big\|\le \int_a^b \|f\|\,dv_g.
\]
\end{theo}
\begin{theo}[Gronwall inequality]\label{Gronwall}
If $m$ is a nonnegative quasi-continuous function on $[a,b]$, $h$ is a
real nondecreasing function on $[a,b]$, and $K>0$ is a constant
such that for each $x$,
\[
m(x)\le K+ (L)\int_a^x m(s)\,dh(s),
\]
then
\[
m(x)\le Ke^{[h(x)-h(a)]}
\]
for each $x$.
\end{theo}
Let $\mathcal{F}$ be the set of all functions $F:[a,b]\times [a,b]
\to N$ such that
\begin{enumerate}
\item $F(x,x)=I$ for all $x$,
\item $F$ is quasi-continuous with respect to its first variable, and
\item there is a real nondecreasing function $g$ on $[a,b]$ such
that $g(a)=0$ and
$$\|F(t,x)-F(t,y)\|\le |g(x)-g(y)|$$
for all $t$, $x$, and $y$. Such a function $g$ is called a
{\it super function} for $F$.
\end{enumerate}
In this paper, we consider a problem that involves a function $F(t)$,
which is not an element of $\mathcal{F}$ since it is a function of
only one variable. However, we can define $\tilde{F}(t,x):= I+F(x)-F(t)$,
which can be shown to be an element of $\mathcal{F}$.
This allows the following two theorems to apply to our case.
\begin{theo}\label{qcint}
If $F\in \mathcal{F}$, $Q:[a,b]\to N$ is quasi-continuous,
$X=L$ or $X=R$, and $P$ is defined on $[a,b]$ by
\[
P(t)=(X)\int_a^t d_sF(t,s) \,Q(s),
\]
then $P$ is quasi-continuous. Moreover, if $F$ is continuous
with respect to its first variable, then
$P$ is continuous.
\end{theo}
\begin{theo}\label{homog}
Given $F\in \mathcal{F}$, there is a unique $M \in \mathcal{F}$ that is
a solution of
\[
M(t,x)=I+ (L)\int_x^t d_sF(t,s) \,M(s,x)
\]
for all $t$ and $x$. Moreover, if $F$ is continuous with respect
to its first variable, then so is $M$.
\end{theo}
\begin{theo}\label{inhomog}
If $F\in \mathcal{F}$ and $G$ is a quasi-continuous function from
$[a,b]$ to $N$ or to $\mathbb{R}^{2n}$, then there is a unique
quasi-continuous function $Y$ on $[a,b]$ such that
\[
Y(t)=G(t)+ (L)\int_a^t d_sF(t,s) \,Y(s).
\]
\end{theo}
Now we state two convergence theorems for Stieltjes integrals
\cite{Hilde}, followed by a convergence theorem for a sequence
of functions.
\begin{theo}[Helly's Integral Convergence Theorem]\label{Helly}
Let $f$ be a continuous function on $[a,b]$ and let $\{g_n\}$ be a
sequence of functions, uniformly of bounded variation on $[a,b]$, converging
to a function $g$ at every point of $[a,b]$.
Then
\[
\lim_{n\to \infty} \int_a^b f\,dg_n = \int_a^b f\,dg.
\]
\end{theo}
\begin{theo}[Osgood's Theorem] \label{Osgood}
Let $g$ be a function of bounded variation on $[a,b]$ and let
$\{f_n\}$ be a sequence of functions which is uniformly bounded
and converges pointwise to a function $f$ on $[a,b]$. If
$\int_a^b f_n \,dg$ and $\int_a^b f \,dg$ exist, then
\[
\lim_{n\to \infty} \int_a^b f_n \,dg = \int_a^b f\,dg.
\]
\end{theo}
\begin{theo}[Helly's Pointwise Convergence Theorem]\label{Helly2}
If $f_n$ is a sequence of functions, uniformly of bounded variation
on $[a,b]$ such that $f_n(a)$ is bounded in $n$, then there exists a
subsequence $f_{n_m}$ and a function $f$ of bounded variation such
that $\lim_{m\to \infty} f_{n_m} =f$ at every point of $[a,b]$.
\end{theo}
Here we introduce some notation that will be used throughout this
paper. Let $\mathcal{P}[a,b]$ denote the set of all partitions of the
interval $[a,b]$. Also, define, for $f$ integrable, $I_f(t) := \int_a^t f$.
\section{The Initial Value Problem}\label{IVP}
We consider the system of $2n$ equations
\begin{equation}\label{genprob}
\begin{gathered}
dy=Ay\,dt+dP\,z \\
dz=(dQ-\lambda dW)y+Dz\,dt
\end{gathered}
\end{equation}
which can be written as the Stieltjes integral equation
\begin{equation} \label{SIE}
\begin{bmatrix}
y(t,\lambda) \\
z(t,\lambda)
\end{bmatrix}
= \begin{bmatrix}
y(a,\lambda) \\
z(a,\lambda)
\end{bmatrix}
+\int_a^t
\begin{bmatrix}
A(s)ds & dP(s) \\
dM(s,\lambda) & D(s)ds
\end{bmatrix}
\begin{bmatrix}
y(s,\lambda) \\
z(s,\lambda)
\end{bmatrix}
\end{equation}
on $[a,b] \times K$, where $K$ is a compact set in $\mathbb{C}$, and
$M(t,\lambda)=Q(t)- \lambda W(t)$.
Here, $y$ and $z$ are $n$-vectors and $A$, $P$, $Q$, $W$, and $D$ are $n
\times n$ real matrices. We require the following
conditions on the coefficients:
\begin{equation}\label{hyp}
\begin{aligned}
& A,D \in L_1([a,b]); \\
& P=P^T \text{ is continuous and nondecreasing with } P(a)=0; \\
& Q=Q^T \text{ is of bounded variation on } [a,b]; \\
& W=W^T \text{ is nondecreasing with } W(a)=0.
\end{aligned}
\end{equation}
\begin{rem} \rm
\begin{enumerate}
\item It is a consequence of a matrix function being of bounded variation that
each component is of bounded variation.
\item When we say a real symmetric matrix $A$ is positive, we mean
that all of the eigenvalues of $A$ are positive. This is
equivalent to the condition that $\langle A \bar{x},\bar{x} \rangle >0$
for all $\bar{x}\ne \bar{0}$. The meaning of a matrix function
$A(t)$ being nondecreasing follows accordingly, i.e., $A(t)$ is
nondecreasing if $A(t_2)-A(t_1)$ is nonnegative for $t_2\ge t_1$.
\item The symmetry condition on $Q$, $P$, and $W$ is needed later
for self-adjoint problems, but it is not needed to prove
existence of a solution.
\end{enumerate}
\end{rem}
In Reid's paper \cite{Reid}, the derivatives of $P$, $Q$, and $W$ are required
to be $L_1$ functions. We allow for more generality in the
coefficients, without requiring that they lie in a Banach space.
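For example, the scalar equation $-(py')'+qy=\lambda wy$ from section
\ref{intro} fits this framework: assuming $p>0$ with $1/p$ integrable,
and $q$, $w$ integrable with $w\ge 0$, take $n=1$, $A=D=0$, $z=py'$, and
\[
P(t)=\int_a^t \frac{ds}{p(s)}, \quad
Q(t)=\int_a^t q(s)\,ds, \quad
W(t)=\int_a^t w(s)\,ds,
\]
so that \eqref{genprob} becomes $dy=(1/p)z\,dt$ and
$dz=(q-\lambda w)y\,dt$, and the conditions \eqref{hyp} are satisfied.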
We define the following terms:
$$L :=
\Big\|
\begin{bmatrix}
y(a,\lambda) \\
z(a,\lambda)
\end{bmatrix}
\Big\| \quad
F(t,\lambda):=
\begin{bmatrix}
\int_a^t A(s)\,ds & P(t) \\
M(t,\lambda) & \int_a^t D(s)\,ds
\end{bmatrix}
$$
$$
f(t,\lambda):= \int_a^t \| A(s) \| \,ds + \int_a^t \| D(s) \| \,ds +
\bigvee_a^t P+ \bigvee_a^t M(\cdot,\lambda).
$$
Note that $\bigvee_a^t M(\cdot,\lambda)\le \bigvee_a^t Q +
|\lambda| \bigvee_a^t W$.
Suppose we fix $\lambda$ and let
$Y(t):=
\begin{bmatrix}
y(t,\lambda) \\
z(t,\lambda)
\end{bmatrix}$.
Then (\ref{SIE}) can be written in the form
$Y(t)=Y(a)+ \int_a^t dF(s)\,Y(s)$. We now show that for fixed $\lambda$,
$\tilde{F}(t,x):= I+F(x,\lambda)-F(t,\lambda)$ is an element of the set
$\mathcal{F}$ which was defined in section \ref{pre}.
The first condition, $\tilde{F}(x,x)=I$, is clearly satisfied.
For the second condition, it suffices to show that $F(t,\lambda)$ is
quasi-continuous in its first variable. We know $P$ is continuous by
assumption. The remaining elements of $F$ are now shown to be of
bounded variation, which implies they are quasi-continuous. We know
$\int_a^t A$ and $\int_a^t D$ are of bounded
variation since $A$, $D\in L_1$, and $Q$ is of bounded variation by
assumption. $P$ and $W$ are of bounded variation because they are
nondecreasing.
The third condition is satisfied for $g(t):=f(t,\lambda)$.
Since $\tilde{F}(t,x) \in \mathcal{F}$, Theorems \ref{qcint},
\ref{homog}, and \ref{inhomog} apply to our problem, meaning that
equation \eqref{genprob} has a unique quasi-continuous solution.
We repeat a proof of existence and uniqueness to establish certain uniform
bounds, to determine the continuity properties of the solution, and to
establish a Lipschitz condition with respect to a spectral parameter.
Define successive approximations for $k=1,2,3,\dots $ as follows:
Given initial conditions $y(a,\lambda)$ and $z(a,\lambda)$, let
\begin{gather}\label{SA1}
\begin{bmatrix}
y^{(0)}(t,\lambda) \\
z^{(0)}(t,\lambda)
\end{bmatrix}
= \begin{bmatrix}
y(a,\lambda) \\
z(a,\lambda)
\end{bmatrix} \\
\label{SA}
\begin{bmatrix}
y^{(k)}(t,\lambda) \\
z^{(k)}(t,\lambda)
\end{bmatrix}
= \begin{bmatrix}
y^{(0)}(t,\lambda) \\
z^{(0)}(t,\lambda)
\end{bmatrix}
+\int_a^t
\begin{bmatrix}
A(s)\,ds & dP(s) \\
dM(s,\lambda) & D(s)\,ds
\end{bmatrix}
\begin{bmatrix}
y^{(k-1)}(s,\lambda) \\
z^{(k-1)}(s,\lambda)
\end{bmatrix}
\end{gather}
First we examine some properties of the successive approximations.
The proof of the following lemma follows by a standard argument.
\begin{lemma}\label{SAprop1}
If successive approximations are defined as in \eqref{SA1},
\eqref{SA} subject to the conditions in \eqref{hyp}, then
$$
\Big\|
\begin{bmatrix}
y^{(k)}(t,\lambda) \\
z^{(k)}(t,\lambda)
\end{bmatrix}
- \begin{bmatrix}
y^{(k-1)}(t,\lambda) \\
z^{(k-1)}(t,\lambda)
\end{bmatrix}\Big\| \le L\dfrac{f(t,\lambda)^k}{k!}
$$
for $k=1,2,3,\dots$
\end{lemma}
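The induction step of that standard argument may be sketched as follows.
Subtracting consecutive equations in \eqref{SA}, applying Theorem
\ref{intbd}, and using the fact that the increments of $v_F$ are
bounded by those of $f(\cdot,\lambda)$, we have for $k\ge 2$,
\begin{align*}
\Big\|
\begin{bmatrix}
y^{(k)}(t,\lambda) \\
z^{(k)}(t,\lambda)
\end{bmatrix}
- \begin{bmatrix}
y^{(k-1)}(t,\lambda) \\
z^{(k-1)}(t,\lambda)
\end{bmatrix}\Big\|
&\le \int_a^t
\Big\|
\begin{bmatrix}
y^{(k-1)}(s,\lambda) \\
z^{(k-1)}(s,\lambda)
\end{bmatrix}
- \begin{bmatrix}
y^{(k-2)}(s,\lambda) \\
z^{(k-2)}(s,\lambda)
\end{bmatrix}\Big\|\,df(s,\lambda) \\
&\le \int_a^t L\,\frac{f(s,\lambda)^{k-1}}{(k-1)!}\,df(s,\lambda)
\le L\,\frac{f(t,\lambda)^k}{k!},
\end{align*}
with the induction hypothesis used in the second inequality; the case
$k=1$ follows from \eqref{SA1} and the definition of $L$.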
\begin{lemma}\label{SAbounds}
If successive approximations are defined as in \eqref{SA1},
\eqref{SA} subject to the conditions in \eqref{hyp}, then
$ \Big\|
\begin{bmatrix}
y^{(k)}(t,\lambda) \\
z^{(k)}(t,\lambda)
\end{bmatrix}
\Big\|$ and
$\bigvee_a^t
\begin{bmatrix}
y^{(k)}(\cdot,\lambda) \\
z^{(k)}(\cdot,\lambda)
\end{bmatrix}$
are bounded independently of $k$ and $(t,\lambda)\in [a,b] \times K$.
\end{lemma}
\begin{proof}
Using Lemma \ref{SAprop1},
\begin{align*}
\Big\|
\begin{bmatrix}
y^{(k)}(t,\lambda) \\
z^{(k)}(t,\lambda)
\end{bmatrix}
\Big\|
&\le
\Big\|
\begin{bmatrix}
y^{(0)}(t,\lambda) \\
z^{(0)}(t,\lambda)
\end{bmatrix}
\Big\| +
\sum_{i=1}^k
\Big\|
\begin{bmatrix}
y^{(i)}(t,\lambda) \\
z^{(i)}(t,\lambda)
\end{bmatrix}
- \begin{bmatrix}
y^{(i-1)}(t,\lambda) \\
z^{(i-1)}(t,\lambda)
\end{bmatrix}
\Big\| \\
&\le L+ L \sum_{i=1}^k \frac{f(t,\lambda)^i}{i!} \le Le^{f(t,\lambda)}.
\end{align*}
Recall also that
\begin{equation}\label{indep}
f(t,\lambda)\le \int_a^b\|A\| +\int_a^b\|D\|+\bigvee_a^b
P+\bigvee_a^b Q +|\lambda| \bigvee_a^b W,
\end{equation}
which is bounded independently of $(t,\lambda)\in [a,b] \times K$.
We now turn to the bound on the total variation.
Let $T=\{t_i\}_0^m \in \mathcal{P}[a,t]$.
Using Theorem \ref{intbd} and letting $C$ be a bound on
$\big\|
\begin{bmatrix}
y^{(k)}(\cdot,\lambda) \\
z^{(k)}(\cdot,\lambda)
\end{bmatrix}
\big\|$,
\begin{align*}
\bigvee_a^t
\begin{bmatrix}
y^{(k)}(\cdot,\lambda) \\
z^{(k)}(\cdot,\lambda)
\end{bmatrix}
&= \sup_{T\in \mathcal{P}[a,t]} \sum_{i=1}^{m}
\Big\|
\begin{bmatrix}
y^{(k)}(t_i) \\
z^{(k)}(t_i)
\end{bmatrix}
- \begin{bmatrix}
y^{(k)}(t_{i-1}) \\
z^{(k)}(t_{i-1})
\end{bmatrix}
\Big\| \\
&= \sup_{T\in \mathcal{P}[a,t]} \sum_{i=1}^{m}
\Big\|
\int_{t_{i-1}}^{t_i}
\begin{bmatrix}
A(s)\,ds & dP(s) \\
dM(s,\lambda) & D(s)\,ds
\end{bmatrix}
\begin{bmatrix}
y^{(k-1)}(s,\lambda) \\
z^{(k-1)}(s,\lambda)
\end{bmatrix}
\Big\| \\
& \le \sup_{T\in \mathcal{P}[a,t]} C \sum_{i=1}^{m}
\int_{t_{i-1}}^{t_i} dv_F(s) \\
&=
C \int_a^t dv_F(s) \le C\int_a^t df(s,\lambda) = Cf(t,\lambda),
\end{align*}
since each increment of $v_F$ is bounded by the corresponding increment
of $f(\cdot,\lambda)$. Independence of $(t,\lambda)$ follows from
\eqref{indep}.
\end{proof}
Now we examine the continuity of solutions, starting by proving
that each successive approximation is quasi-continuous in the
first variable. This will be used in the subsequent theorem to prove
the solution is quasi-continuous.
\begin{lemma}\label{cont1}
As a function of the first variable, $y^{(k)}(t,\lambda)$ is
continuous and $z^{(k)}(t,\lambda)$ is quasi-continuous on
$[a,b]$, for $k=0,1,2,\dotsc$. If $Q$ and $W$ are continuous, then
$z^{(k)}(t,\lambda)$ is continuous on $[a,b]$, for $k=0,1,2,\dotsc$.
\end{lemma}
\begin{proof}
The quasi-continuity of $y^{(k)}(\cdot,\lambda)$ and
$z^{(k)}(\cdot,\lambda)$ follows from the fact
that they are of bounded variation (Lemma \ref{SAbounds}). Now we
prove that $y^{(k)}(\cdot,\lambda)$ is actually continuous.
Fix $\lambda \in K$. By definition of successive approximations,
for $k=1,2,3,\dotsc$
\[
y^{(k)}(t,\lambda)=y(a,\lambda)+\int_a^t A(s)\,y^{(k-1)}(s,\lambda)\,ds
+\int_a^t dP(s)\,z^{(k-1)}(s,\lambda).
\]
Let $g(t)= \int_a^tA(s)\,y^{(k-1)}(s,\lambda)\,ds$, and $h(t)=\int_a^t
dP(s)\,z^{(k-1)}(s,\lambda)$. To show that $g$ is continuous, note that
$$
\|g(t_2)-g(t_1)\|=
\Big\|\int_{t_1}^{t_2} A(s)\,y^{(k-1)}(s,\lambda)\,ds\Big\|
\le C \int_{t_1}^{t_2}\|A(s)\|\,ds,
$$
where $C$ is a uniform bound on
$ \big\| \begin{bmatrix}
y^{(k)}(t,\lambda) \\
z^{(k)}(t,\lambda)
\end{bmatrix}\big\| $.
Since $A \in L_1([a,b])$, $I_{\|A\|}(t) :=\int_a^t
\|A(s)\|\,ds$ is absolutely continuous, which implies $g$ is continuous.
Now for $h$, we have
\begin{align*}
\|h(t_2)-h(t_1)\| &=\Big\|\int_{t_1}^{t_2}
dP(s)\,z^{(k-1)}(s,\lambda)\Big\| \\
&\le \int_{t_1}^{t_2} dv_P(s) \,\|z^{(k-1)}(s,\lambda)\| \\
&\le C (v_P(t_2)-v_P(t_1)).
\end{align*}
Since $P$ is continuous, $v_P$ is also continuous; hence $h$ is continuous.
Now both $g$ and $h$ are continuous, so
$y^{(k)}(t,\lambda)$ is continuous as a function of $t$.
If $Q$ and $W$ are continuous, then $z^{(k)}(t,\lambda)$ can be shown to
be continuous using a similar argument.
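Briefly, the Stieltjes term in $z^{(k)}$ is
$\tilde{h}(t)=\int_a^t dM(s,\lambda)\,y^{(k-1)}(s,\lambda)$, and as in
the estimate for $h$,
\[
\|\tilde{h}(t_2)-\tilde{h}(t_1)\| \le C\,\big(v_M(t_2)-v_M(t_1)\big),
\]
where $v_M$ denotes the total variation function of $M(\cdot,\lambda)$.
If $Q$ and $W$ are continuous, then $M(\cdot,\lambda)=Q-\lambda W$ and
hence $v_M$ are continuous, so $\tilde{h}$ is continuous; the term
involving $D$ is handled exactly as $g$ above.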
\end{proof}
\begin{theo}\label{existence}
The initial value problem \eqref{SIE} has a
unique solution in the space of quasi-continuous functions. This
solution is bounded in norm and in total variation independently of
$t$ and $\lambda$ on $[a,b] \times K$.
\end{theo}
\begin{proof}
We prove the existence of a solution by showing it is
the limit of the sequence of successive approximations.
By Lemma \ref{SAprop1}, we have for $p>k$,
\begin{align*}
\Big\|
\begin{bmatrix}
y^{(p)}(t,\lambda) \\
z^{(p)}(t,\lambda)
\end{bmatrix}
- \begin{bmatrix}
y^{(k)}(t,\lambda) \\
z^{(k)}(t,\lambda)
\end{bmatrix} \Big\|
&\le \sum_{i=k+1}^p
\Big\|
\begin{bmatrix}
y^{(i)}(t,\lambda) \\
z^{(i)}(t,\lambda)
\end{bmatrix}
- \begin{bmatrix}
y^{(i-1)}(t,\lambda) \\
z^{(i-1)}(t,\lambda)
\end{bmatrix}
\Big\| \\
&\le L\sum_{i=k+1}^p \frac{f(t,\lambda)^i}{i!}
\le L\frac{f(t,\lambda)^{k+1}}{(k+1)!}e^{f(t,\lambda)} \\
&\le L\frac{f(b,\lambda)^{k+1}}{(k+1)!}e^{f(b,\lambda)}
\to 0 \quad \text{as } k\to \infty.
\end{align*}
Thus $\{\left[ y^{(k)}, z^{(k)}\right]\}$
is uniformly Cauchy on $[a,b]$.
Taking the limit as $k\to \infty$ in equation (\ref{SA})
and using the fact that the convergence of the successive
approximations is uniform, we get that
$$
\begin{bmatrix}
y(t,\lambda) \\
z(t,\lambda)
\end{bmatrix}
:= \lim_{k\to \infty}
\begin{bmatrix}
y^{(k)}(t,\lambda) \\
z^{(k)}(t,\lambda)
\end{bmatrix}
$$
is a solution. We know that
each successive approximation is quasi-continuous in $t$ and uniformly
bounded in norm and in total variation by Lemma \ref{SAbounds}, so this
solution has these same properties.
The proof of uniqueness is a standard argument, using the Gronwall inequality
from Theorem \ref{Gronwall}.
\end{proof}
The first part of the following corollary follows from the fact that the
functions $y^{(k)}(t,\lambda)$ are continuous with respect to $t$ and that they
converge uniformly to $y(t,\lambda)$ as $k\to \infty$. The
second part follows from Lemma \ref{cont1} and the uniform convergence
of the successive approximations.
\begin{cor}\label{ycont}
\begin{enumerate}
\item
$y(t,\lambda)$ is continuous with respect to $t$.
\item
If $Q$ and $W$ are continuous on $[a,b]$, then
$z(t,\lambda)$ is continuous for $t\in [a,b]$.
\end{enumerate}
\end{cor}
We conclude this section with two additional properties of solutions:
first a Lipschitz condition in $\lambda$, and then a bound on the
total variation of the solution.
\begin{lemma}\label{Lipschitz}
For any $\lambda_1$ and $\lambda_2$ in a compact set $K\subseteq \mathbb{C}$, we
have the Lipschitz condition
\[
\Big\|
\begin{bmatrix}
y(t,\lambda_2) \\
z(t,\lambda_2)
\end{bmatrix}
- \begin{bmatrix}
y(t,\lambda_1) \\
z(t,\lambda_1)
\end{bmatrix}
\Big\| \le c|\lambda_2- \lambda_1 |,
\]
where $c$ is a constant independent of $t\in [a,b]$ and
$\lambda_1, \lambda_2 \in K$.
\end{lemma}
\begin{proof}
Let
\begin{align*}
\phi (t) &:=
\Big\|
\begin{bmatrix}
y(t,\lambda_2) \\
z(t,\lambda_2)
\end{bmatrix}
- \begin{bmatrix}
y(t,\lambda_1) \\
z(t,\lambda_1)
\end{bmatrix}
\Big\| \\
&=\| y(t,\lambda_2)-y(t,\lambda_1)\|+\| z(t,\lambda_2)-
z(t,\lambda_1) \|.
\end{align*}
Using the equation $y(t,\lambda_i)=y(a,\lambda_i)+\int_a^t
A(s)\,y(s,\lambda_i)\,ds+\int_a^t dP(s)\,z(s,\lambda_i)$ for $i=1,2$ and
the fact that $y(a,\lambda_1)=y(a,\lambda_2)$, we have
\[
y(t,\lambda_2)-y(t,\lambda_1)=
\int_a^t A(s)\,[y(s,\lambda_2)-y(s,\lambda_1)]\,ds +\int_a^t dP(s)
\,[z(s,\lambda_2)-z(s,\lambda_1)].
\]
Similarly, we have
\begin{align*}
&z(t,\lambda_2)-z(t,\lambda_1)\\
&= \int_a^t dQ(s)\,[y(s,\lambda_2)-y(s,\lambda_1)] \\
&\quad -\int_a^t dW(s)\,[\lambda_2 y(s,\lambda_2)
- \lambda_1\,y(s,\lambda_1)]
+\int_a^t D(s)\,[z(s,\lambda_2)-z(s,\lambda_1)]\,ds.
\end{align*}
By adding and subtracting the term
$\int_a^t dW(s) \,\lambda_1 \,y(s,\lambda_2)$, we get
\begin{align*}
&z(t,\lambda_2)-z(t,\lambda_1)\\
&= \int_a^t dQ(s)\,[y(s,\lambda_2)-y(s,\lambda_1)]
-\int_a^t dW(s)\,(\lambda_2 - \lambda_1)\,y(s,\lambda_2)\\
&-\int_a^t dW(s) \lambda_1 \,[y(s,\lambda_2)-y(s,\lambda_1)]
+\int_a^t D(s)\,[z(s,\lambda_2)-z(s,\lambda_1)]\,ds.
\end{align*}
Let $C>0$ be a bound on $
\big\| \begin{bmatrix}
y(t,\lambda) \\
z(t,\lambda)
\end{bmatrix}
\big\|$, from Theorem \ref{existence}. Then
\begin{align*}
\phi (t) &\le \int_a^t \|A(s)\| \,\| y(s,\lambda_2)-y(s,\lambda_1)\|\,ds
+ \int_a^t dv_P(s) \,\| z(s,\lambda_2)-z(s,\lambda_1)\| \\
& \quad + \int_a^t dv_Q(s) \,\| y(s,\lambda_2)-y(s,\lambda_1)\|
+ |\lambda_2 - \lambda_1| \int_a^t dv_W(s) \,\|y(s,\lambda_2)\| \\
& \quad + |\lambda_1|\int_a^t dv_W(s)
\,\| y(s,\lambda_2)-y(s,\lambda_1)\|
+ \int_a^t \|D(s)\| \,\|z(s,\lambda_2)-z(s,\lambda_1)\| \,ds \\
&\le \int_a^t dv_A(s)\,\phi (s) + \int_a^t dv_P(s)\,\phi (s)
+ (L)\int_a^t dv_Q(s)\,\phi (s) + C \,| \lambda_2
- \lambda_1|\,\bigvee_a^b W \\
& \quad + |\lambda_1|\,(L)\int_a^t dv_W(s) \,\phi (s) +
\int_a^t dv_D(s)\,\phi (s).
\end{align*}
By Theorem \ref{Gronwall}, it follows that
$$
\phi (t) \le \big(C\,|\lambda_2 - \lambda_1|\,\bigvee_a^b W \big) \,e^{m(t)},
$$
where $m(t)=v_A(t)+v_P(t)+v_Q(t)+kv_W(t)+v_D(t)$ and
$k=\sup_{\lambda \in K} |\lambda|$.
Then $\phi (t) \le c\,|\lambda_2 - \lambda_1|$, where
$c=C\bigvee_a^b W\,e^{m(b)}$.
\end{proof}
\begin{theo}\label{bv}
There is a nondecreasing function $q$ on $[a,b]$ such that
$\bigvee_{x_1}^{x_2} y(x,\lambda)\le q(x_2)-q(x_1)$ and
$\bigvee_{x_1}^{x_2} z(x,\lambda)\le q(x_2)-q(x_1)$ for $x_1<x_2$.
\end{theo}

\begin{proof}[Proof of Theorem \ref{conv}]
\textbf{(1)} Given $\epsilon>0$, there exists a $k_0>0$ such that if
$k\ge k_0$, then
$$\|y_n^{(k)}(t) - y_n(t) \|<\frac{\epsilon}{3}$$
for all $n=1,2,3,\dots$ and $t\in [a,b]$.
Also, by the convergence of successive approximations to the solution
as shown in the proof of Theorem \ref{existence}, we can assume for
the same $k_0$ that $k\ge k_0$ implies
$$
\|y(t)-y^{(k)}(t)\|<\frac{\epsilon}{3}
$$
for all $t\in [a,b]$. By Lemma \ref{SAconv}, we can assume for
the same $k_0$ that $k\ge k_0$ implies
$$\|y_n^{(k)}(t) - y^{(k)}(t) \|<\frac{\epsilon}{3}.$$
Therefore, for $k\ge k_0$,
$\|y(t)-y_n(t)\|<\epsilon$
for $n=1,2,3,\dots$ and $t\in [a,b]$.
\noindent\textbf{(2)} Write
\[
\|z(t)-z_n(t)\|\le \|z(t)-z^{(k)}(t)\|+
\|z^{(k)}(t)-z_n^{(k)}(t)\| + \|z_n^{(k)}(t)- z_n(t)\|,
\]
and in a similar manner obtain
$\|z(t)-z_n(t)\|< \epsilon$,
but this time the convergence of the middle term is pointwise rather than
uniform (see Lemma \ref{SAconv}).
\end{proof}
The next corollary follows directly from Corollary \ref{zunif} and
part (2) of the proof of Theorem \ref{conv}.
\begin{cor}
The convergence $z_n(t) \to z(t)$ is uniform if
$M_n \to M$ is uniform.
\end{cor}
Theorem \ref{conv} gives convergence of solutions to initial value problems
under weak conditions of convergence of the coefficients. In a second work,
we will show how this gives the continuous and differentiable dependence of
eigenvalues and eigenfunctions on the data: coefficients, boundary
conditions, and endpoints.
\begin{thebibliography}{99}
\bibitem{Battle} L. E. Battle, Eigenvalue dependence on problem parameters
for Stieltjes Sturm-Liouville problems, Ph.D. thesis, University of
Tennessee, 2003.
\bibitem{Folland} G. B. Folland, {\it Real analysis: modern techniques and
their applications,} 2nd ed., Wiley, New York, 1999.
\bibitem{Hilde} T. H. Hildebrandt, {\it Introduction to the theory of
integration,} Academic Press, New York, 1963.
\bibitem{Hinton} D. B. Hinton, A Stieltjes-Volterra integral equation
theory, {\it Canadian Journal of Mathematics,} {\bf 18} (1966), 314-331.
\bibitem{Knotts} C. Knotts-Zides, Eigenvalue extremal properties, Ph.D.
thesis, University of Tennessee, 1999.
\bibitem{Kong1} Q. Kong and A. Zettl, Dependence of eigenvalues of
Sturm-Liouville problems on the boundary, {\it J. Differential Equations,}
{\bf 126} (1996), 389-407.
\bibitem{Kong1.1} Q. Kong, H. Wu, and A. Zettl, Dependence of the $n$th
Sturm-Liouville eigenvalue on the problem, {\it J. Differential Equations,}
{\bf 156} (1999), 328-354.
\bibitem{Kong2} Q. Kong, H. Wu, and A. Zettl, Dependence of eigenvalues on the
problem, {\it Math. Nachr.,} {\bf 188} (1997), 173-201.
\bibitem{Kong3} Q. Kong and A. Zettl, Eigenvalues of regular Sturm-Liouville
problems, {\it J. Differential Equations,} {\bf 131} (1996), 1-19.
\bibitem{Reid} W. T. Reid, Some limit theorems for ordinary differential
systems, {\it J. Differential Equations,} {\bf 3} (1967), 423-439.
\bibitem{Rudin} W. Rudin, {\it Principles of mathematical analysis,} 2nd ed.,
McGraw-Hill, New York, 1964.
\end{thebibliography}
\end{document}