\documentclass[twoside]{article}
\usepackage{amssymb, amsmath}
\pagestyle{myheadings}
\setcounter{page}{89}
\markboth{\hfil Properties of the solution map \hfil}%
{\hfil James L. Moseley \hfil}
\begin{document}
\title{\vspace{-1in}\parbox{\linewidth}{\footnotesize\noindent
{\sc 16th Conference on Applied Mathematics, Univ. of Central Oklahoma},
\newline
Electronic Journal of Differential Equations, Conf. 07, 2001, pp. 89--97.
\newline
ISSN: 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu
\newline ftp ejde.math.swt.edu (login: ftp)}
\vspace{\bigskipamount} \\
%
Properties of the solution map for a first order linear problem
%
\thanks{ {\em Mathematics Subject Classifications:} 34A99.
\hfil\break\indent
{\em Key words:} First order linear ordinary differential equation.
\hfil\break\indent
\copyright 2001 Southwest Texas State University and University of
North Texas. \hfil\break\indent
Published July 20, 2001.} }
\date{}
\author{James L. Moseley}
\maketitle
\begin{abstract}
We are interested in establishing properties of the general mathematical
model
$$\frac{d\vec{u}}{dt}=T(t,\vec{u})+\vec{b}+\vec{g}(t),\quad
\vec{u}(t_0)=\vec{u}_0
$$
for the dynamical system defined by the (possibly nonlinear) operator
$T(t,\cdot):V\to V$ with state space $V$. For one state
variable where $V=\mathbb{R}$ this may be written as $dy/dx=f(x,y)$,
$y(x_0)=y_0$. This paper establishes some mapping properties
for the operator $L[y]=dy/dx+p(x)y$ with $y(x_0)=y_0$ where
$f(x,y)=-p(x)y+g(x)$ and $T(x,y)=-p(x)y$ is linear. The conditions
for the one-to-one property of the solution map as a function of
$p(x)$ appear to be new or at least undocumented. This property is
needed in the development of a solution technique for a nonlinear model
for the agglomeration of point particles in a confined space (reactor).
\end{abstract}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}{Corollary}
\section{Introduction}
We begin with a family of initial value problems (IVP's) each
consisting of a (possibly nonlinear) first order ordinary
differential equation (ODE)
\begin{equation}
\frac{dy}{dx}=f(x,y)\,, \tag{1}
\end{equation}
and an arbitrary initial condition (IC) at an arbitrary point:
\begin{equation}
y(x_0)=y_0\,. \tag{2}
\end{equation}
To specify a problem in this family, we must give the point $\left(
x_0,y_0\right) $ in the plane $\mathbb{R}^2=\{(x,y):x,y\in \mathbb{R}\}$
where $\mathbb{R}$ is the set of real numbers and the function
$f:\Omega \to \mathbb{R}$ where $\Omega $ is an open connected region
in $\mathbb{R}^2$ containing $(x_0,y_0)$. Constraints on $f$ are
needed to make the problem reasonable and more likely to have a physical
application. We assume immediately that $\Omega \supseteq R$, where $R$ is
a closed rectangle containing $(x_0,y_0)$ in its interior and that $f$
is continuous on $R$. Thus to specify a problem, we choose, in order,
$(x_0,y_0)\in \mathbb{R}^2$, $R\subseteq \Omega $ (the exact
definition of $\Omega $ really does not matter) as a closed rectangle in
$\mathbb{R}^2$ containing $(x_0,y_0)$ in its interior, and $f\in C(R)$,
the set of functions $f: R\to \mathbb{R}$ that are continuous.
We let $\Omega_{CR}(x_0,y_0)$ be the set of all closed rectangles in
$\mathbb{R}^2$ that contain $(x_0,y_0)$ in their interior, and
$C(\Omega_{CR}(x_0,y_0))=\{f\in C(R):R\in \Omega_{CR}(x_0,y_0)\}$. Then
$\mathbb{R}^2\times \Omega_{CR}(x_0,y_0)\times C(\Omega_{CR}(x_0,y_0))$
is in a one-to-one correspondence with the set of problems of interest.
If (1) is nonlinear, the \textbf{interval of validity} (i.e., the open
interval $I$ containing $x_0$ where (1) and (2) are satisfied) is part of
the problem which is therefore impredicative. However, we state the problem
predicatively by assuming that $I$ is given and look for solutions to (1)
on $I$. That is, we look for solutions to (1) in a set
$\Sigma (I)$ of functions whose common domain is $I$
(i.e., a subset of $F(I)=\{f:I\to \mathbb{R}\})$.
Let $I_{IV}(x_0,y_0,R)$ be the set of intervals $I$ containing $x_0$
where $I\times\{y_0\}\subseteq R\subseteq \mathbb{R}^2$ and
Prob($(x_0,y_0), R, f, I)$ denote the initial value problem (1) and (2)
associated with $(x_0,y_0)\in \mathbb{R}^2$, $R\in \Omega_{CR}(x_0,y_0)$,
$f\in C(R)$, and $I\in I_{IV}(x_0,y_0,R)$.
A minimum requirement for this IVP to be \textbf{well-posed} is that there
is exactly one solution in $\Sigma (I)$ that
satisfies both (1) and (2). Since we elect to specify the interval $I$, we
denote by Prob($\mathbb{R}^2\times \Omega_{CR}(x_0,y_0)\times
C(\Omega_{CR}(x_0,y_0))\times I_{IV}(x_0,y_0,R)$) the set of all
initial value problems of interest.
There are at least four problem solving contexts for (1)--(2).
\paragraph{Traditional:} If $f(x,y)$ is given as an elementary function and has
one of several specific forms, the solution process starts with the
ODE and uses calculus to obtain the ``general'' solution (i.e., a
formula for all or at least almost all solutions) to the ODE as a
parameterized family of functions (or curves). The IC is then applied to
obtain the parameter and hence the (name of the) unique solution
function. The interval of validity is then obtained as the largest (open)
interval where the solution is valid. The solution algorithm (usually)
establishes uniqueness and, if all steps are reversible, existence.
If not all steps are reversible, existence can be established by
substituting the proposed solution back into the ODE and the IC. (Or
this can be used simply as a check.) Often a formula can be found
for the (name of the) solution function for a whole class of problems by
allowing parameters such as $x_0$ and $y_0$ to be arbitrary.
\paragraph{Classical:} For a class of problems, existence and uniqueness
of a solution in $\Sigma (I)$ is established using properties of $f(x,y)$,
without necessarily obtaining a solution algorithm to obtain the (name of
the) solution function.
\\
\textbf{Classical I}: $\Sigma (I)=A(I)=\{y:I\to \mathbb{R}:
\hbox{$y$ is analytic on $I$}\}$ (e.g., if $f$ is analytic).
\\
\textbf{Classical II}: $\Sigma (I)=C^1(I)=\{y:I\to \mathbb{R}:\mbox{$y'$
exists and is continuous on $I$}\}$.
\paragraph{Modern:} A weak form of the problem is developed which allows weak
solutions; that is, things that need not be functions (e.g., equivalence
classes of functions and distributions).
In the Classical II context, the standard condition that $f$, $\partial
f/\partial y\in C(R)$ assures local uniqueness (i.e., that for any
$I\in I_{IV}(x_0,y_0,R), \Sigma (I)=C^1(I)$ contains at most one
solution), but only local (and not global) existence
(i.e., there exists an $I\in I_{IV}(x_0,y_0,R)$ such that $\Sigma
(I)=C^1(I)$ contains a solution). Thus, in this context, the problem
focuses on finding the extent of the interval of validity for a
class of problems rather than on finding the (name of the) solution function
for a specific problem.
\section{The linear solution map}
Even though different contexts may define the problem differently,
Traditional, Classical, and Modern all come together with the assumption of
linearity, that is, when $f(x,y)=-p(x)y+g(x)$. In this case we have the
ordinary differential
equation (ODE)
\begin{equation}
\frac{dy}{dx}+p(x)y=g(x) \tag{3}
\end{equation}
with the initial condition (IC)
\begin{equation}
y(x_0)=y_0\,. \tag{4}
\end{equation}
We switch from viewing (1) as an equation to viewing (3) and (4) as a
mapping problem. Thus we keep $x_0,y_0,I$, and $p$ fixed and
only vary $g$. To keep solutions as functions, we continue with the
Classical II context, let $\Sigma (I)=C^1(I)$, and assume $p,g\in
C(I)=\{f:I\to \mathbb{R}$ such that $f$ is continuous$\}$ where $x_0\in I$ so that
$f,\partial f/\partial y\in C(R)$ where $R$ is the strip
$R=I\times \mathbb{R}=\{(x,y):x\in I,\ y\in \mathbb{R}\}$.
Now let $L_{p}:C^1(I)\to C(I)$ be defined
by $L_{p}[y]=dy/dx+p(x)y$ and $N_{p,y_0}:D_{y_0}(I)\to C(I)$ be
the restriction of $L_{p}$ to the hyper-plane $D_{y_0}(I)=\{y\in
C^1(I):y(x_0)=y_0\}$. Not only are we assured that for any $g\in C(I)$,
a unique (global) solution to the IVP (3) and (4) exists in $\Sigma
(I)=C^1(I)$ so that the inverse mapping exists, but, using the integrating
factor $\mu_{p}(x)=\exp \{\int_{t=x_0}^{t=x}p(t)dt\}$, we have a calculus
formula for $y(x)=N_{p,y_0}^{-1}[g](x)$:
\begin{align}
y(x)&=\Big( y_0+\int_{t=x_0}^{t=x}g(t)\exp
\{\int_{s=x_0}^{s=t}p(s)ds\}dt\Big) \exp \{-\int_{t=x_0}^{t=x}p(t)dt\}
\nonumber \\
&=y_0\exp
\{-\int_{t=x_0}^{t=x}p(t)dt\}+\int_{t=x_0}^{t=x}g(t)\exp
\{\int_{s=x}^{s=t}p(s)ds\}dt \tag{5}
\end{align}
If $y_0\neq 0$, $D_{y_0}(I)$ will not pass through the origin and hence is
not a subspace of $C^1(I)$, so that $N_{p,y_0}$ and $N_{p,y_0}^{-1}$ are
not linear operators. However,
\begin{equation}
y(x)=N_{p,y_0}^{-1}[g](x)=y_0\mu_{-p}(x)+L_{p,0}^{-1}[g](x) \tag{6}
\end{equation}
where the (linear) Volterra operator $L_{p,0}^{-1}[g](x)=
\int_{t=x_0}^{t=x}G(x,t)g(t)dt$ with kernel (Green's function)
$G(x,t)=\exp \{\int_{s=x}^{s=t}p(s)ds\}$ is the inverse of $N_{p,0}$, which we
might also call $L_{p,0}$ since, when $y_0=0$, $N_{p,y_0}$ is linear.
In a traditional context, $p$ and $g$ are given elementary functions in
$C(I)$. If possible, the Riemann integrals in (5) are computed explicitly.
In a classical context, the
interval of validity $I=(a,b)$ can be extended to closed (and half-open)
intervals by requiring $p$ and $g$ to be analytic in a neighborhood of, or
continuous at, the end points. In one modern context, the Riemann integrals
become Lebesgue integrals and act on equivalence classes of functions, for
example, piecewise continuous functions which need not be defined at the
points of discontinuity (as these points form a set of measure zero). We
continue to keep $x_0$ and $I$ fixed, but now allow $y_0$ and $p$ as
well as $g$ to vary. To simplify our notation, we write $y(x)=y(x;y_0,p,g)$
instead of $y(x)=N_{p,y_0}^{-1}[g](x)=y_0\mu_{-p}(x)+L_{p,0}^{-1}[g](x)$.
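\paragraph{Example.}
As a concrete illustration of (5) (not needed for the sequel), take
$I=\mathbb{R}$, $x_0=0$, $p(x)=2$, and $g(x)=e^{x}$. Then
$\mu_{p}(x)=e^{2x}$ and (5) gives
\[
y(x)=\Big( y_0+\int_{t=0}^{t=x}e^{t}e^{2t}dt\Big) e^{-2x}
=y_0e^{-2x}+\frac{e^{x}-e^{-2x}}{3}\,.
\]
Substitution confirms that $dy/dx+2y=e^{x}$ and $y(0)=y_0$, illustrating
how the traditional context recovers the solution function explicitly.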
\section{One-to-one properties}
To understand how $y(x;y_0,p,g)$ given by (5) depends on each of the
parameters $y_0$, $p$, and $g$, we need several relations. For $i=1,2$, let
$y_{i}$ be the solution to the IVP (3) and (4) (on $I$ where $I$ may be
open or closed) when $p=p_{i}, g=g_{i}$, and $y_0=y_{i}^{0}$. If
$p_1=p_2=p, g_1=g_2=g$ and $p,g\in C(I)$, then for all $x$ in $I$
using (5) we obtain
\begin{equation}\begin{aligned}
\left| y_1(x)-y_2(x)\right| =&\left| y_1^{0}-y_2^{0}\right|
\exp \{-\int_{t=x_0}^{t=x}p(t)dt\}\\
\leq& \left| y_1^{0}-y_2^{0}\right| \exp \{\int_{t=x_0}^{t=x}\left|
p(t)\right| dt\} \end{aligned} \tag{7}
\end{equation}
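The equality in (7) follows directly from (5): with common $p$ and $g$ the
integral terms in (5) are identical for $y_1$ and $y_2$ and cancel upon
subtraction, leaving
\[
y_1(x)-y_2(x)=\left( y_1^{0}-y_2^{0}\right)
\exp \{-\int_{t=x_0}^{t=x}p(t)dt\}\,,
\]
and taking absolute values gives (7).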
If $y_1^{0}=y_2^{0}=y_0, p_1=p_2=p$ and $p,g_1,g_2\in C(I)$,
then for all $x$ in $I$ using (3) we obtain
\begin{equation}
d(y_1-y_2)/dx+p(x)(y_1(x)-y_2(x))=g_1(x)-g_2(x) \tag{8}
\end{equation}
and using (5)
\begin{align}
\left| y_1(x)-y_2(x)\right| =&\left| \int_{t=x_0}^{t=x}\left[
g_1(t)-g_2(t)\right] \exp \{\int_{s=x}^{s=t}p(s)ds\}dt\right| \tag{9}\\
\leq & \int_{t=x_0}^{t=x}\left| g_1(t)-g_2(t)\right| \exp
\{\int_{s=x}^{s=t}\left| p(s)\right| ds\}dt \tag{10}
\end{align}
If $y_1^{0}=y_2^{0}=y_0$, $g_1=g_2=g$, and
$p_1,p_2,g\in C(I)$, then for all $x$ in $I$ from (3) we obtain
\begin{equation}
d(y_1-y_2)/dx+p_1(x)y_1(x)-p_2(x)y_2(x)=0 \tag{11}
\end{equation}
and using (5)
\begin{align}
| y_1&(x)-y_2(x)| \nonumber\\
=&\Big|y_0\exp\{-\int_{t=x_0}^{t=x}p_1(t)dt\}-y_0\exp
\{-\int_{t=x_0}^{t=x}p_2(t)dt\} \nonumber\\
&+\int_{t=x_0}^{t=x}g(t)\exp
\{\int_{s=x}^{s=t}p_1(s)ds\}dt-\int_{t=x_0}^{t=x}g(t)\exp
\{\int_{s=x}^{s=t}p_2(s)ds\}dt\Big| \tag{12}\\
=&\Big| y_0 \exp \{-\int_{t=x_0}^{t=x}p_1(t)dt\} \big[
1-\exp \{\int_{t=x_0}^{t=x}[ -p_2(t)+p_1(t)] dt\}\big] \nonumber\\
&+\int_{t=x_0}^{t=x}g(t)\big[ \exp
\{\int_{s=x}^{s=t}p_1(s)ds\}\big] \big[ 1-\exp \{\int_{s=x}^{s=t}[
p_2(s)-p_1(s)] ds\}\big] dt\Big|\nonumber\\
\leq &|y_0| \exp \{\int_{t=x_0}^{t=x}\left| p_1(t)\right|
dt\} \big[ \exp \{\int_{t=x_0}^{t=x}|p_2(t)-p_1(t)| dt\}-1\big] \nonumber\\
&+\int_{t=x_0}^{t=x}\left| g(t)\right| \big[ \exp \{\int_{s=x}^{s=t}\left|
p_1(s)\right| ds\}\big]
\big[ \exp \{\int_{s=x}^{s=t}|
p_2(s)-p_1(s)| ds\}-1\big] dt \tag{13}
\end{align}
where we have used the inequality $\left| 1-e^{b}\right| =\left|
e^{b}-1\right| \leq \left| e^{\left| b\right| }-1\right| =e^{\left| b\right|
}-1$.
Standard theory \cite{b1} implies that for each $y_0\in \mathbb{R}$ and $p\in
C(I)$, $N_{p,y_0}^{-1}[g](x)$ provides a one-to-one correspondence between
$g\in C(I)$ and $y\in D_{y_0}(I)$ as well as establishing that for fixed
$p $ and $g$, the solutions to (3) (parameterized by $y_0$) do not cross
each other. Interestingly, with some restrictions, the solution map from
$(y_0,p,g)\in \mathbb{R\times }C(I)\times C(I)$ to $y\in C^1(I)$ is
one-to-one if any two of these three variables are held constant.
\begin{theorem}
If $p,g\in C(I)$, then the solution map defined by (5) from
$y_0\in \mathbb{R}$ to $y\in C^1(I)$ is one-to-one. Also, the solutions
to (3) in $C^1(I)$ do not cross each other.
\end{theorem}
\paragraph{Proof.}
If $y_1^{0}$ and $y_2^{0}$ are different initial conditions and $p,g\in
C(I)$, then for all $x$ in $I$ we have from (7) that
$| y_1(x)-y_2(x)| =| y_1^{0}-y_2^{0}| \exp \{-\int_{t=x_0}^{t=x}p(t)dt\}$.
Since the exponential factor is never zero, if $y_1(x)=y_2(x)$ for
any $x \in I$, then we must have $y_1^{0}=y_2^{0}$. That is, if we
change the initial condition, the solution changes everywhere. If $p$ and $g$
are fixed, not only is the mapping from $y_0\in \mathbb{R}$ to $y\in C^1(I)$
one-to-one, but the family of solutions to (3) parameterized by $y_0$ do
not cross each other.
\medskip
\begin{theorem}
If $y_0\in \mathbb{R}$ and $p\in C(I)$, then the solution map $y(x) = N_{p,y_0}^{-1}[g](x)$
defined by (5) from $g\in C(I)$
to $y\in C^1(I)$ is one-to-one. Also $N_{p,y_0}^{-1}[g](x)$ provides a
one-to-one correspondence between
$C(I)$ and $D_{y_0}(I)$.
\end{theorem}
\paragraph{Proof.}
Clearly $L_{p}$ maps $C^1(I)$ into $C(I)$. Formula (6) for
$N_{p,y_0}^{-1}$ shows that $N_{p,y_0}$ is one-to-one and that the domain of
$N_{p,y_0}^{-1}$ is all of $C(I)$. Hence $N_{p,y_0}^{-1}$ is
one-to-one. Alternately, this follows directly from (8) and is just the
statement that $N_{p,y_0}=\left( N_{p,y_0}^{-1}\right)^{-1}$ is a well
defined operator.
Hence $N_{p,y_0}^{-1}$ provides a one-to-one correspondence between $C(I)$
and $D_{y_0}(I)$. (And $N_{p,y_0}$ provides a one-to-one correspondence
between $D_{y_0}(I)$ and $C(I)$.)
Restrictions on $y_0$ and $g$ are needed for the mapping from $p\in C(I)$
to $y\in C^1(I)$ to be one-to-one. To see why, note from (5) that if
$y_0=0$ and $g$ is the zero function, then for any $p$ in $C(I)$, $y$ is
identically zero.
\paragraph{Definition}
$f\in C(I)$ is said to have only dispersed zeros on an interval $I$
(either open, closed, or half-open) if there exists no open interval
$J\subseteq I$ where $f$ is identically zero; that is, if for any open
interval $J\subseteq I$ there exists $x\in J$ such that $f(x)\neq 0$
(i.e., $Z_{f}=\{x\in I:f(x)=0\}$ has no interior points). \smallskip
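For example, on $I=[-1,1]$ the function $f(x)=x$ has only dispersed zeros
(here $Z_{f}=\{0\}$), as does $f(x)=x\sin (1/x)$ (with $f(0)=0$), whose
zeros accumulate at $0$ but fill no open interval. By contrast,
$f(x)=\max \{x,0\}$ vanishes identically on $(-1,0)$ and so does not have
only dispersed zeros on $[-1,1]$.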
We show that for any $y_0$ and $p$, if $g$ has only dispersed zeros, then
the solution $y$ also has only dispersed zeros. On the other hand, if $g$ is
identically zero, then $y$ is either never zero or always zero, depending on
$y_0$.
\begin{theorem}
If $g\in C(I)$ and has only dispersed zeros on an interval $I$, then the
solution $y$ has only dispersed zeros on $I$.
\end{theorem}
\paragraph{Proof.}
We prove the contrapositive. Assume $y$ does not have only dispersed zeros on
$I$. By definition there exists an open interval $J\subseteq I$ such that
$y(x)=0$ for all $x$ in $J$. Then $dy/dx=0$ on $J$ and by (3), $g(x)=0$
on $J$. Hence $g$ does not have only dispersed zeros on $I$.
\begin{theorem}
Let $I$ be an interval. If $g(x)=0$ on an interval $J\subseteq I$, then
either $y$ is identically zero on $J$ or is never zero on $J$. In
particular, if $g(x)=0$ on $I$, then either $y_0=0$ and $y$ is identically
zero or $y_0\neq 0$ and $y$ is never zero on $I$.
\end{theorem}
\paragraph{Proof.}
Suppose $g(x)=0$ on an interval $J\subseteq I$. Choose $x_1\in J$. Applying
(5) at $x_1$ with $g(x)=0$ on $J$, we have that
$y(x) = y(x_1)\exp \{-\int_{t=x_1}^{t=x}p(t)dt\}$ on $J$. If $y(x_1)=0$,
then $y$ is identically zero on $J$. If $y(x_1)\neq 0$, then $y$ is never
zero on $J$. Similarly for $I$.
\begin{corollary}
If $y$ has only dispersed zeros on an interval $I$, then on any interval
$J\subseteq I$ where $g(x)=0$, $y$ is never zero.
\end{corollary}
\paragraph{Proof.}
Assume $y$ has only dispersed zeros and that $g(x)=0$ on an interval
$J\subseteq I$. Since $y$ has only dispersed zeros, there exists $x_1\in J$
such that $y(x_1)\neq 0$. Then by Theorem 4, we have that $y(x)\neq 0$
for all $x$ in $J$.
\begin{theorem}
If $g$ is identically zero on an interval $I$ and $y_0\neq 0$, then the
solution map defined by (5) from $p\in C(I)$ to $y\in C^1(I)$ is one-to-one.
\end{theorem}
\paragraph{Proof.}
Suppose $y_0\neq 0$, $g=0$ and $p_1,p_2 \in C(I)$. If $y_1=y_2=y$, then
for all $x \in I$ we have from (11) that $(p_1(x)-p_2(x))y(x)=0$. From
Theorem 4, $y(x)$ is never zero, so that for all $x\in I$, $p_1(x)=p_2(x)$.
Hence $p_1=p_2$ so that the solution map is one-to-one.
\medskip
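The one-to-one property in Theorem 5 can also be seen by explicit recovery
of $p$: when $g$ is identically zero and $y_0\neq 0$, (5) reduces to
$y(x)=y_0\exp \{-\int_{t=x_0}^{t=x}p(t)dt\}$, which is never zero, and
differentiating gives
\[
p(x)=-\frac{y'(x)}{y(x)}\quad \mbox{for all } x\in I\,,
\]
so $y$ determines $p$ uniquely.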
\begin{theorem}
If $g$ has only dispersed zeros on $I$, then the solution map defined by (5)
from $p\in C(I)$ to $y\in C^1(I)$ is one-to-one.
\end{theorem}
\paragraph{Proof.}
Suppose $g$ has only dispersed zeros on $I$, and $p_1,p_2,g\in C(I)$. If
$y_1=y_2= y$, then for all $x$ in $I$ we have from (11) that
$(p_1(x)-p_2(x))y(x)=0$. Hence if $y(x)\neq 0$, then $p_1(x) = p_2(x)$.
Given $x\in I$, if there exists a sequence
$\lbrace x_{n}\rbrace_{n=1}^{\infty}$ such that $y(x_{n})\neq 0$ and
$\lim_{n \to \infty } x_{n} = x$, then by continuity $p_1(x)=p_2(x)$.
If the zeros of $y$ are dispersed, the set where $y(x)\neq 0$ is dense
in $I$, so such a sequence exists for every $x\in I$ and hence $p_1=p_2$.
It remains to show that if $g$ has only dispersed zeros on $I$, then $y$
has only dispersed zeros on $I$. But this is just Theorem 3.
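Equivalently, wherever $y(x)\neq 0$, (3) can be solved for the coefficient:
\[
p(x)=\frac{g(x)-y'(x)}{y(x)}\,,
\]
and since the zeros of $y$ are dispersed, this determines $p$ on a dense
subset of $I$ and hence, by continuity, on all of $I$.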
\section{Continuity properties}
Finally, for the IVP (3) and (4) to be a well-posed problem, the solution
map should be continuous with respect to $y_0$, $p$, and $g$. From (7),
we see that $y(x;y_1^{0},p(x),g(x))$ will be pointwise close to
$y(x;y_2^{0},p(x),g(x))$ if $y_1^{0}$ is close to $y_2^{0}$. From
(10), we see that $y(x; y_0, p(x), g_1(x)$) will be pointwise close
to $y(x;y_0, p(x), g_2(x))$ if $g_1(x)$ is everywhere pointwise
close to $g_2(x)$. Finally, from (13), we see that
$y(x; y_0, p_1(x), g(x))$ will be pointwise close to $y(x; y_0, p_2(x),g(x))$
if $p_1(x)$ is everywhere pointwise close to $p_2(x)$.
To obtain a global notion of closeness, we require the domain of $y$ to be
the closed interval $\overline{I}=[a,b]$ where $I=(a,b)$ and redefine $C(I)$
and $C^1(I)$ as $C(I)=\{f:\overline{I}\to \mathbb{R}$ such that $f$
is continuous on $I=(a,b)\}$ and $C^1(I)=\{f:\overline{I}\to
\mathbb{R}$ such that $f'$ exists and is continuous on
$I=(a,b)\}\subseteq C(I)$. Then $C(\overline{I})=\{f:\overline{I}\to \mathbb{R}
$ such that $f$ is continuous on $\overline{I}=[a,b]\}\subseteq C(I)$, and
$C^1(\overline{I})=\{f:\overline{I}\to \mathbb{R}$ such that, using one-sided
limits, $f'$ exists and is continuous on $\overline{I}=[a,b]\}
\subseteq C^1(I)$. As we have said, if $p,g\in C(\overline{I})$, then,
using one-sided limits, $y$ given by (5) can be considered to solve (3) on
$\overline{I}$ so that $\Sigma (\overline{I})=C^1(\overline{I})$. Now let
$D_{y_0}(\overline{I})=\{y\in C^1(\overline{I}):y(x_0)=y_0\}$. Then
$N_{p,y_0}$ maps $D_{y_0}(\overline{I})$ to $C(\overline{I})$, the
solution maps $(y_0,p,g)\in \mathbb{R}\times C(\overline{I})\times C(%
\overline{I})$ to $y\in C^1(\overline{I})$ and Theorems 1, 2, 6, and 7
remain valid with $C(I)$ replaced by $C(\overline{I})$, $C^1(I)$ replaced
by $C^1(\overline{I})$ and $D_{y_0}(I)$ replaced by $D_{y_0}(\overline{%
I})$. We have $D_{y_0}(\overline{I})\subseteq C^1(\overline{I})\subseteq
C(\overline{I})\cap C^1(I)\subseteq C(I)$.
Now recall that $L^{\infty }(\overline{I})$ is the set of equivalence classes
of functions that are equal except on a set of Lebesgue measure zero such
that $\mathop{\rm ess\,sup}_{x\in \overline{I}}\left| f(x)\right| <\infty $.
Since $C(\overline{I})$ can be considered as a subset of $L^{\infty }(%
\overline{I})$ we can use the $L^{\infty }(\overline{I})$ norm, $\left\|
f\right\|_{\infty }=\mathop{\rm ess\,sup}_{x\in \overline{I}}\left|
f(x)\right| $ for functions in $C(\overline{I})$ where $\left\| f\right\|
_{\infty }= \max_{x\in \overline{I}}\left| f(x)\right| $ as
well as for those in all of its subsets: $D_{y_0}(\overline{I})\subseteq
C^1(\overline{I})\subseteq C(\overline{I})\cap C^1(I)\subseteq
C(\overline{I})\subseteq L^{\infty }(\overline{I})$. In this restricted
context, we can obtain the following global inequalities. If
$p_1= p_2=p, g_1=g_2=g$ and $p,g\in C(\overline{I})$, then for all
$x$ in $\overline{I}=[a,b]$ we have using (7) that
\[
\left| y_1(x)-y_2(x)\right| \leq \left| y_1^{0}-y_2^{0}\right| \exp
\{\left\| p\right\|_{\infty }(b-a)\}
\]
so that
\begin{equation}
\left\| y_1-y_2\right\|_{\infty }\leq \left|
y_1^{0}-y_2^{0}\right| \exp \{\left\| p\right\|_{\infty }(b-a)\}
\tag{14}
\end{equation}
If $y_1^{0}=y_2^{0}=y_0$, $p_1=p_2=p$, and
$p,g_1,g_2\in C(\overline{I})$, then for all $x\in \overline{I}=[a,b]$
we have from (9) that
\[
\left| y_1(x)-y_2(x)\right| \leq \left\| g_1-g_2\right\|_{\infty
}(b-a)\exp \{\left\| p\right\|_{\infty }(b-a)\}
\]
so that
\begin{equation}
\left\| y_1-y_2\right\|_{\infty }\leq \left\| g_1-g_2\right\|
_{\infty }(b-a)\exp \{\left\| p\right\|_{\infty }(b-a)\} \tag{15}
\end{equation}
If $y_1^{0}=y_2^{0}=y_0$, $g_1=g_2=g$, and $p_1,p_2,g\in C(\overline{I})$,
then for all $x\in \overline{I}=[a,b]$ we have from (13) that
\[\begin{aligned}
\left| y_1(x)-y_2(x)\right| \leq& \left| y_0\right| \exp \{\left\|
p_1\right\|_{\infty }(b-a)\}[\exp \{\left\| p_1-p_2\right\|_{\infty }
(b-a)\}-1]\\
&+\left\| g\right\|_{\infty } \exp
\{\left\| p_1\right\|_{\infty }(b-a)\}[\exp \{\left\| p_1-p_2\right\|
_{\infty }(b-a)\}-1](b-a)
\end{aligned}\]
so that
\begin{align}
\| y_1-y_2\|_{\infty }\leq& | y_0| \exp \{\left\|
p_1\right\|_{\infty }(b-a)\}[\exp \{\left\| p_1-p_2\right\|_{\infty }
(b-a)\}-1] \tag{16}\\
&+\| g\|_{\infty } \exp \{\| p_1\|
_{\infty }(b-a)\}[\exp \{\| p_1-p_2\|_{\infty
}(b-a)\}-1](b-a) \nonumber
\end{align}
To show that $y(x;y_0,p,g)$ depends continuously on $y_0$, $p$, and $g$ when
two are fixed, we use the norm (and hence metric) topologies for $\mathbb{R}$
and that induced on $C(\overline{I})$ from $L^{\infty }(\overline{I})$.
\begin{theorem}
If $p, g\in C(\overline{I})$, then the solution map defined by (5) from
$y_0\in \mathbb{R}$ to $y\in C^1(\overline{I})$
is continuous.
\end{theorem}
\paragraph{Proof.}
Let $p_1 = p_2 = p$, $g_1 = g_2 = g$, $p, g\in C(\overline{I})$, and
$\epsilon > 0$. Then from (14), there
exists $\delta >0$ such that $| y_1^{0}-y_2^{0}| < \delta$
implies $\Vert y_1-y_2\Vert_{\infty } < \epsilon$. Hence the mapping from $y_0\in
\mathbb{R}$ to $y\in C^1(\overline {I})$ is continuous.
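In fact, (14) supplies $\delta $ explicitly: given $\epsilon >0$, the choice
\[
\delta =\epsilon \exp \{-\left\| p\right\|_{\infty }(b-a)\}
\]
works, since $| y_1^{0}-y_2^{0}| <\delta $ then gives
$\left\| y_1-y_2\right\|_{\infty }<\delta \exp \{\left\| p\right\|
_{\infty }(b-a)\}=\epsilon $. Similar explicit choices for Theorems 8 and 9
can be read off from (15) and (16).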
\begin{theorem}
If $y_0\in \mathbb{R}$ and $p\in C(\overline{I})$, then the solution map defined
by (5) from $g\in C(\overline{I})$ to $y\in C^1(\overline{I})$ is continuous.
\end{theorem}
\paragraph{Proof.}
Let $y_1^0=y_2^0=y_0$, $p_1=p_2=p$, $p,g_1,g_2 \in C( \overline {I})$, and
$\epsilon >0$. Then from (15), there exists $ \delta >0$ such that
$\Vert g_1-g_2 \Vert_{\infty } < \delta $ implies $\Vert y_1-y_2\Vert_{\infty } < \epsilon $. Hence
the mapping is continuous.
\begin{theorem}
If $y_0 \in \mathbb{R}$ and $g \in C( \overline {I})$, then the solution map defined by (5)
from $p \in C( \overline {I})$ to $y \in C^1( \overline {I})$
is continuous.
\end{theorem}
\paragraph{Proof.}
Let $y_1^0=y_2^0=y_0$, $g_1=g_2=g$, $p_1,p_2,g \in C( \overline {I})$, and
$\epsilon >0$. Then from (16) there exists $\delta >0$ such that
$\Vert p_1-p_2 \Vert_{\infty } <\delta$ implies $\Vert y_1-y_2 \Vert_{\infty } <\epsilon$. Hence
the mapping is continuous.
\paragraph{Summary.}
Let $y_0\in \mathbb{R}$ and $p,g\in C(I)$ where $I$ is an interval and
consider the solution map for the IVP (3) and (4) given by (5) that maps
$(y_0,p,g)\in \mathbb{R}\times C(I)\times C(I)$ to $y\in C^1(I)$.
\begin{enumerate}
\item If $p$ and $g$ are fixed, the mapping is one-to-one and solutions do not
cross.
\item If $y_0$ and $p$ are fixed, the mapping is one-to-one and has an
inverse mapping that is linear if and only if $y_0=0$.
\item If $y_0$ and $g$ are fixed and either $y_0\neq 0$ and $g$ is
identically zero or $g$ has only dispersed zeros, then the mapping is
one-to-one.
\item If $I$ is a closed interval, and two of $y_0$, $p$ and $g$ are fixed,
then the solution map is continuous using the usual topology for
$\mathbb{R}$ and the norm (metric) topologies for $C(\overline{I})$ and
$C^1(\overline{I})$ as subspaces of $L^{\infty }(\overline{I})$.
\end{enumerate}
\begin{thebibliography}{00} \frenchspacing
\bibitem{b1} Boyce, W. E. and R. C. DiPrima, \textit{Elementary Differential
Equations and Boundary Value Problems} (sixth edition), John Wiley \& Sons,
Inc., New York, 1997.
\bibitem{d1} Douglass, S. A., \textit{Introduction to Mathematical Analysis},
Addison-Wesley Publishing Company, New York, 1996.
\bibitem{m1} Moseley, J. L., \textit{Properties of the Solution Map for a First
Order Linear Problem with One State Variable}. Applied Math Report \#16, AMR\#16,
West Virginia University, Morgantown, West Virginia, March, 2000.
\end{thebibliography}
\noindent\textsc{James L. Moseley}\\
West Virginia University \\
Morgantown, West Virginia 26506-6310 USA\\
e-mail: moseley@math.wvu.edu \\
Telephone: 304-293-2011
\end{document}