\leq - \eta. \end{equation} Let $B_R=\left\{x \in \mathbb{R}^d:\|x\| \leq R \right\}$ be the closed ball in $\mathbb{R}^d$ centered at the origin with radius $R$. We make some remarks about the pullback attractor $A_p$ and the global pullback attractor ${\bf A}$, first when $P$ is held fixed and then when it is varied in some systematic way. First of all, it follows from (\ref{eq9}) that, if $t>s$, then $\phi\left(t,B_R,\theta_{-t}(p)\right) \subset \phi\left(s,B_R,\theta_{-s}(p)\right)$. Thus the intersection in (\ref{eq7}) is over a decreasing collection of sets. Using the continuity property of the reduced \v{C}ech homology functor $\check{H}$ in the category of compact spaces together with the fact that each set $\phi\left(t,B_R,\theta_{-t}(p)\right)$ is homeomorphic to a ball, we have $\check{H}(A_p)=0$. In particular, we have: \begin{proposition} \label{prop1} For each $p\in P$, the space $A_p$ is connected; in fact $A_p$ is $\infty$-proximally connected in the sense of \cite{Gorn}. \end{proposition} We record a second fact, which also follows quickly from (\ref{eq9}) and from the definition of $A_p$. \begin{proposition} \label{prop2} For each $p\in P$, one has $A_p=\{x_0 \in \mathbb{R}^d$: the solution $x(\cdot)$ of (\ref{eq2}) with $x(0)=x_0$ is defined on the entire real axis and satisfies $\|x(t)\|\leq R$ for all $t$ in $\mathbb{R}\}$. \end{proposition} \paragraph{Proof.} Let $x_0\in A_p$. Then for each $t>0$, there exists $\bar{x}\in B_R$ such that $\phi\left(t,\bar{x},\theta_{-t}(p)\right)=x_0$, and one has $x(-t)=\bar{x}$. It follows that $x(t)$ is defined and satisfies $\|x(t)\|\leq R$ for all $t\leq 0$; since (\ref{eq9}) implies that $B_R$ is positively invariant, the same is true for all $t\geq 0$. It is equally easy to see that, if the solution $x(t)$ of (\ref{eq2}) satisfying $x(0)=x_0$ exists and is bounded on $\mathbb{R}$, then $x_0\in A_p$. 
\hfil$\Box$ \medskip It follows from Proposition \ref{prop2} that ${\bf A}\subset \mathbb{R}^d \times P$ is compact, and from this one sees that $\check{H}({\bf A})= \check{H}(P)$ because of the continuity of the \v{C}ech homology functor on compact spaces. One also sees that, if $\mathcal{K}(\mathbb{R}^d)$ is the space of all nonempty compact subsets of $\mathbb{R}^d$, then the mapping $p\mapsto A_p:P\to\mathcal{K}(\mathbb{R}^d)$ is upper semicontinuous in the sense that (using the notation of Section \ref{prel}): $$ H^{*}\left(A_{p_n},A_p\right) \to 0 \quad \mbox{whenever} \quad p_n \to p \mbox{ in } P. $$ Next let $f\in\mathcal{F}$ be a vector field satisfying condition (\ref{eq6}). We want to consider the upper semicontinuity properties of the pullback attractor $A_f$ when $f$ is digitized. It will be convenient and informative to do so using the language of skew-product flows. First we introduce some terminology. By a \textbf{digitization} we mean a procedure which, to each $f\in\mathcal{F}$ and each real number $\delta>0$, assigns the following data with the indicated properties: \begin{enumerate} \item[I)] There is a collection $\mathcal{I}^{\delta}$ $=$ $\{I^{\delta}_j \ : \ j \in \mathbb{Z}\}$ of nonempty half-open intervals in $\mathbb{R}$ such that $\cup_{j=-\infty}^{\infty} I^{\delta}_j=\mathbb{R}$, and such that each interval $I^{\delta}_j$ has length $\leq$ $\delta$ and (say) $\geq$ $\delta/2$. \item[II)] To each $f\in\mathcal{F}$ there is associated a collection $\{f_{\delta}^j : \delta > 0, j \in \mathbb{Z}\}$ of autonomous vector fields. 
There is a positive function $\omega=\omega(\epsilon)$, defined for positive values of $\epsilon$ and tending to zero as $\epsilon\to0+$, such that for each interval $I^{\delta}_j \in \mathcal{I}^{\delta}$ and each $x\in\mathbb{R}^d$ the following property holds: if $\epsilon_x=\sup \left\{ \|f(r,x) - f(s,x)\| : r,s \in I^{\delta}_j \right\}$, then $$ \left\|f_{\delta}^j (x) - f(t,x)\right\| \leq \omega(\epsilon_x), \quad t \in I^{\delta}_j. $$ \item[III)] There is a positive function $\omega_1=\omega_1(M)$, which is defined for positive values of $M$ and which depends only on $M$, such that, if $x$, $y$ in $\mathbb{R}^d$ satisfy $\|f(t,x)-f(t,y)\|\leq M\,\|x-y\|$ for all $t$ in some interval $I^{\delta}_j$, then $$ \left\|f_{\delta}^j (x) -f_{\delta}^j (y)\right\| \leq \omega_1(M) \|x-y\| $$ for all $\delta>0$. \item[IV)] There is a positive function $\omega_2=\omega_2(\eta)$, defined for positive values of $\eta$ and tending to zero as $\eta\to0+$, such that, if $J\subset\mathbb{R}$ is an interval and if $x\in\mathbb{R}^d$ is a point, and if $f$, $\tilde{f}\in \mathcal{F}$ satisfy $\|f(t,x)-\tilde{f}(t,x)\|\leq \eta$ for all $t\in J$, then $$ \left\|f_{\delta}^j (x) -\tilde{f}_{\delta}^j (x)\right\| \leq \omega_2(\eta) $$ for all $\delta>0$ and all $j$ such that $I^{\delta}_j\subset J$. Although these properties are cumbersome to state, they are reasonable requirements on a digitization scheme. Now let $f\in\mathcal{F}$ be a vector field satisfying (\ref{eq6}). Put $f_{\delta}(t,x)=f_{\delta}^j(x)$ for $t\in I^{\delta}_j$, $j\in\mathbb{Z}$. Abusing language slightly, we call $\{f_{\delta}:\delta >0\}$ a digitization of $f$. The vector fields $f_{\delta}$ discussed in the Introduction are obtained by procedures for which I)--IV) are satisfied, so these $f_{\delta}$ are digitizations in our sense. 
In fact, for each fixed $\delta>0$, the collection of subintervals $\mathcal{I}^{\delta}$ in I) of such a digitization often also satisfies the following recurrence condition; when it does, we speak of a \textbf{recurrent digitization}. \item[V)] Fix $\delta>0$. To each $\eta>0$ there corresponds a number $T$ (which may depend on $\delta$ as well as on $\eta$) so that each interval $[a,a+T] \subset\mathbb{R}$ contains a number $s$ such that $\mathop{\rm dist} (\mathcal{I}^{\delta},\mathcal{I}^{\delta}+s)<\eta$. Here $\mathcal{I}^{\delta}$ $+$ $s$ is the $s$-translate of $\mathcal{I}^{\delta}$ and $\mathop{\rm dist}$ is the Hausdorff distance on $\mathbb{R}$. \end{enumerate} % Now consider the differential equation \begin{equation}\label{eq10} x' = f_{\delta}(t,x) \end{equation} for each $\delta>0$. Though $f_{\delta}$ is only piecewise continuous in $t$, equation (\ref{eq10}) nevertheless admits a unique local solution $x(t,x_0)$ for each initial condition $x(0,x_0)=x_0\in \mathbb{R}^d$; moreover $x(t,x_0)$ is jointly continuous on its domain of definition. Using property II) and condition (\ref{eq6}) on $f$, we see that $f_{\delta}$ also satisfies condition (\ref{eq6}) for small $\delta>0$. It follows that the pullback attractor $A_{f_{\delta}} \subset \mathbb{R}^d$ of the equation (\ref{eq10}), which is defined by the formula (\ref{eq7}), is contained in $B_R$ and is compact for small $\delta>0$. For each $\delta>0$ and $t\in\mathbb{R}$, let $\left(\theta_t(f)\right)_{\delta}$ be the digitization of the $t$-translate of $f$ (we remark parenthetically that $\theta_t(f_{\delta})\neq\left(\theta_t(f)\right)_{\delta}$ in general). We want to prove that $H^{*}\left(A_{\left(\theta_t(f)\right)_{\delta}}, A_{\theta_t(f)}\right)$ converges to zero as $\delta \to 0$, uniformly in $t\in\mathbb{R}$. That is, we want to prove that $A_{\left(\theta_t(f)\right)_{\delta}}$ tends to $A_{\theta_t(f)}$ upper semicontinuously, uniformly in $t\in\mathbb{R}$. Actually we will prove more. 
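By way of illustration (the following example is ours, not part of the text): the simplest digitization is sample-and-hold on the uniform grid $I^{\delta}_j=[j\delta,(j+1)\delta)$ with $f^{j}_{\delta}(x)=f(j\delta,x)$; it satisfies I)--IV) with $\omega(\epsilon)=\epsilon$, $\omega_1(M)=M$, $\omega_2(\eta)=\eta$, and V) holds trivially since $\mathcal{I}^{\delta}$ is $\delta$-periodic. A minimal numerical sketch, with a hypothetical scalar field chosen only for illustration:

```python
import math

def digitize(f, delta):
    """Sample-and-hold digitization: on I_j = [j*delta, (j+1)*delta),
    use the autonomous field f_delta^j(x) = f(j*delta, x)."""
    def f_delta(t, x):
        j = math.floor(t / delta)
        return f(j * delta, x)
    return f_delta

# Hypothetical field f(t, x) = -x + sin t (illustration only).
f = lambda t, x: -x + math.sin(t)
f_d = digitize(f, delta=0.1)

# Property II with omega(eps) = eps: on each I_j the error
# |f_delta(t, x) - f(t, x)| is at most the oscillation eps_x of f on I_j.
delta, t, x = 0.1, 0.03, 2.0
eps_x = max(abs(f(r, x) - f(s, x))
            for r in (k * delta / 100 for k in range(101))
            for s in (k * delta / 100 for k in range(101)))
assert abs(f_d(t, x) - f(t, x)) <= eps_x + 1e-12
```

The same construction applied to each $p$ in the hull of $f$ yields the fields $p_{\delta}$ considered below.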
Let $P\subset\mathcal{F}$ be the hull of $f$ and let $p_{\delta}$ be the digitization of $p$ for each $p\in P$; then $H^{*}\left(A_{p_{\delta}},A_{p}\right)$ tends to zero as $\delta\to0$, uniformly in $p\in P$. To prove this, it will be convenient to work in an enlarged topological vector space $\mathcal{G}$ which contains $\mathcal{F}$ together with the (in general, temporally discontinuous) vector fields $p_{\delta}$. We define $\mathcal{G}$ to be the class of jointly Lebesgue measurable mappings $g:\mathbb{R} \times \mathbb{R}^d\to \mathbb{R}^d$ which satisfy the following conditions: \begin{enumerate} \item[a)] For each compact set $K\subset\mathbb{R}^d$, one has $$ \sup_{x \in K} \sup_{t \in \mathbb{R} } \int_t^{t+1} \|g(s,x)\| \, ds < \infty ; $$ \item[b)] For each compact set $K\subset\mathbb{R}^d$ there is a constant $L_K$ (depending on $g$) so that, for almost all $t$ $\in$ $\mathbb{R}$: $$ \|g(t,x)-g(t,y)\| \leq L_K \, \|x-y\|, \quad x,y \in K. $$ \end{enumerate} Now, for each $r=1$, $2$, $3$, $\ldots$ and each $N=1$, $2$, $3$, $\ldots$ introduce a pseudo-metric $d_{r,N}$ on $\mathcal{G}$: $$ d_{r,N}(g_1,g_2) = \sup_{\|x\| \leq r} \int_{-N}^{N} \|g_1(s,x)-g_2(s,x)\| \, ds. $$ Then put $$ d_r(g_1,g_2) = \sum_{N=1}^{\infty} \frac{1}{2^N} \frac{d_{r,N}(g_1,g_2)}{1+d_{r,N}(g_1,g_2)}, $$ and finally set $$ d(g_1,g_2) = \sum_{r=1}^{\infty} \frac{1}{2^r} \frac{d_r(g_1,g_2)}{1+d_r(g_1,g_2)}. $$ We identify two elements of $\mathcal{G}$ if their $d$-distance is zero, thereby obtaining a metric space which we also call $\mathcal{G}$. Observe that, if $g\in\mathcal{G}$, then the Cauchy problem \begin{equation}\label{eq11} x' = g(t,x), \quad x(0) = x_0 \end{equation} admits a unique, maximally-defined local solution $x(t,x_0)$ for each $x_0\in\mathbb{R}^d$; moreover, $x(t,x_0)$ depends continuously on $(t,x_0)$ on its domain of definition. This can be proved using the standard Picard iteration method to solve (\ref{eq11}). 
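The Picard scheme just mentioned is easy to sketch numerically. The following toy version (ours, not from the text) iterates $x_{n+1}(t) = x_0 + \int_0^t g(s,x_n(s))\,ds$ on a grid, with the trapezoidal rule standing in for the exact integrals:

```python
import math

def picard_solve(g, x0, t_grid, iterations=30):
    """Picard iteration for x' = g(t, x), x(t_grid[0]) = x0; the integral
    in x_{n+1}(t) = x0 + int_0^t g(s, x_n(s)) ds is approximated by the
    trapezoidal rule on the grid."""
    xs = [x0 for _ in t_grid]            # initial guess: the constant x0
    for _ in range(iterations):
        new, acc = [x0], 0.0
        for k in range(1, len(t_grid)):
            h = t_grid[k] - t_grid[k - 1]
            acc += 0.5 * h * (g(t_grid[k - 1], xs[k - 1]) + g(t_grid[k], xs[k]))
            new.append(x0 + acc)
        xs = new
    return xs

# Toy check: x' = x, x(0) = 1 has solution e^t.
grid = [k / 200 for k in range(201)]     # [0, 1] with step 0.005
sol = picard_solve(lambda t, x: x, 1.0, grid)
assert abs(sol[-1] - math.e) < 1e-3
```

Note that nothing here requires $g$ to be continuous in $t$: measurability with the local integrability a) and the Lipschitz condition b) suffice, which is why the iteration applies to the piecewise constant fields $p_{\delta}$.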
We will write $$ x(t,x_0) = \phi(t,x_0,g) $$ to maintain consistency with notation used previously. Observe further that, for each $t\in\mathbb{R}$, the translation $\theta_t:\mathcal{G}\to\mathcal{G}$, i.e., $\theta_t(g)(s,x)=g(t+s,x)$ is well-defined. Let $\mathcal{G}_1\subset\mathcal{G}$ be a translation invariant subset such that the supremum in a) and the constants $L_K$ in b) of the definition of $\mathcal{G}$ are uniform in $g$ $\in$ $\mathcal{G}_1$, for each compact $K\subset\mathbb{R}^d$. Then $(t,g)\to\theta_t(g): \mathbb{R} \times \mathcal{G}_1\to\mathcal{G}_1$ is continuous. Next let $\delta_0>0$. We will show that the set $\mathcal{U}=P \cup \{ p_{\delta}:p \in P, 0 < \delta \leq\delta_0\}\subset \mathcal{G}$ is equi-uniformly continuous in the sense that, to each $\epsilon>0$, there corresponds $\eta>0$ such that, if $|t-s|<\eta$, then $d(\theta_t(p),\theta_s(p))<2\epsilon$ and $d(\theta_t(p_{\delta}),\theta_s(p_{\delta}))<2\epsilon$ for all $p$, $p_{\delta}\in\mathcal{U}$ and for all $t$, $s\in\mathbb{R}$. To do this, fix $\epsilon>0$. Recall that $P\subset\mathcal{F}\subset\mathcal{G}$ is the hull of the uniformly continuous function $f$. Hence if $B\subset\mathbb{R}^d$ is a ball centered at the origin and if $N\geq1$, then we can find $\eta_1>0$ such that, if $|t-s|<\eta_1$, then $$ \int_{-N}^{N} \|\theta_t(p)(v,x)-\theta_s(p)(v,x)\| \, dv < \epsilon/3 $$ for all $x\in B$. Then, taking account of the definition of the distance $d$, we see that it is sufficient to prove that, for some sufficiently large ball $B\subset\mathbb{R}^d$ and some sufficiently large $N$, there exists $\eta_2\in(0,\eta_1]$ such that \begin{equation}\label{eq12} \sup_{x\in B} \int_{-N}^{N} \|\theta_t(p_{\delta})(v,x)-\theta_s(p_{\delta})(v,x)\| \,dv < \epsilon \end{equation} whenever $|t-s|<\eta_2$, $0<\delta\leq \delta_0$. Let us write $d_B(\theta_t(p_{\delta}),\theta_s(p_{\delta}))$ for the quantity on the left hand side of (\ref{eq12}). 
To prove (\ref{eq12}), we use the properties I)--IV) of a recurrent digitization. Choose $\epsilon_1>0$ so that $\omega(\epsilon_1)<\epsilon/3$, and then choose $\delta_1$ so that, if $0<\delta\leq \delta_1$ then $\epsilon_x\leq \epsilon_1/(2N)$ for all $x\in B$. Using property II), we see that, if $0<\delta\leq \delta_1$ and $p\in P$, then \begin{eqnarray*} d_B(\theta_t(p_{\delta}),\theta_s(p_{\delta})) & \leq & d_B(\theta_t(p_{\delta}),\theta_t(p)) + d_B(\theta_t(p),\theta_s(p)) + d_B(\theta_s(p),\theta_s(p_{\delta})) \\ & < & 3 \cdot \epsilon/3 = \epsilon \end{eqnarray*} whenever $|t-s|<\eta_1$. If $\delta_1\geq\delta_0$, we set $\eta=\eta_1$ and stop. If $\delta_1<\delta_0$ and if $\delta_1<\delta\leq \delta_0$, we first choose $\eta_2\leq \delta/100$, then note that on each interval $[u,u+N]\subset\mathbb{R}$ of length $N$, the difference $p_{\delta}(t,x)-p_{\delta}(s,x)$ is zero except on at most $\left[2N/\delta\right]+1$ subintervals of length $2\eta_2$, where $[\cdot]$ denotes the integer part of a positive number. Using the uniform boundedness of the vector fields $p_{\delta}$ $\in$ $\mathcal{U}$ on $\mathbb{R}\times B$, we can determine $\eta_3\leq \eta_2$ so that, if $|t-s|<\eta_3$, then $d_B(\theta_t(p_{\delta}),\theta_s(p_{\delta}))<\epsilon$. So if $\eta=\min \{\eta_1,\eta_3\}$, we obtain (\ref{eq12}) for all $p$, $p_{\delta}\in\mathcal{U}$. Now let $\delta\in(0,\delta_0]$. For each $p\in P$, let $P_{\delta}(p)=\mathop{\rm cls} \{\theta_t(p_{\delta}): t \in \mathbb{R} \}$. Then $P_{\delta}(p)$ is compact (this uses the recurrence condition V) of a recurrent digitization) and translation invariant in $\mathcal{G}$. Moreover, using property III) of a digitization and a Gronwall-type argument, one shows that the map $(t,x_0,g)$ $\mapsto$ $(\phi(t,x_0,g), \theta_t(g))$ defines a (continuous) skew-product flow on $\mathbb{R}^d \times P_{\delta}(p)$. Choose $\delta_0$ so that each $p_{\delta}$ satisfies (\ref{eq6}) for all $p\in P$ and $0<\delta\leq \delta_0$. 
Then the pullback attractor $A_{p_{\delta}}$ exists and equals $\{x_0 \in \mathbb{R}^d$ $:$ $\phi(t,x_0,p_{\delta})$ is defined on all of $\mathbb{R}$ and satisfies $\|\phi(t,x_0,p_{\delta})\|\leq R$ $\}$; see Proposition \ref{prop2}. In fact, $A_{p_{\delta}}$ is then the $p_{\delta}$-fiber of a global pullback attractor ${\bf A}^{\delta}\subset\mathbb{R}^d\times P_{\delta}(p)$. Now $p_{\delta} \to p$ in $\mathcal{G}$ as $\delta\to0$, so using property III) of a digitization and a Gronwall argument, together with the characterization of $A_{p_{\delta}}$ in terms of bounded solutions of $x'=p_{\delta}(t,x)$, one shows that $H^{*}(A_{p_{\delta}},A_p)\to0$ as $\delta\to0$. However, more is true. Using property IV) of a digitization one has the following: if $p_n\to p$ in $P$ and if $\delta_n\to 0$, then $d((p_n)_{\delta_n},p)\to0$. Again using II) together with a Gronwall argument and the above characterization of $A_{(p_n)_{\delta_n}}$, one sees that $H^{*}(A_{(p_n)_{\delta_n}},A_p)\to0$ as $n\to \infty$. This is a strong uniformity statement, and implies \begin{proposition}\label{prop3} $H^{*}(A_{p_{\delta}},A_p)\to0$ as $\delta\to0$, uniformly in $p\in P$. In particular, $$ H^{*}(A_{(\theta_t(f))_{\delta}},A_{\theta_t(f)}) \to 0 \quad \mbox{as } \delta \to 0, $$ uniformly in $t\in\mathbb{R}$. \end{proposition} We use the recurrence condition V) to prove that the sets $P_{\delta}(p)$ are compact. This would seem to be a basic requirement to be satisfied when one sets about computing the pullback attractor, because otherwise the convergence in (\ref{eq7}) of the intersection to $A_{p_{\delta}}$ cannot be expected to have any uniformity properties. 
We note, however, that Proposition \ref{prop3} could be proved without assumptions ensuring that the sets $P_{\delta}(p)$ are compact; one only needs the uniform continuity in $t$ on $\mathbb{R}$ (uniformly with respect to $x$ in compact subsets of $\mathbb{R}^d$) of the vector field $f(t,x)$ and the continuity of the Bebutov flow on $(\mathcal{G},d)$. \section{Continuity results} In this section, we continue to investigate the perturbation properties of pullback attractors, this time with the goal of giving a sufficient condition for the Hausdorff continuity (and not just upper semicontinuity) of the sets $A_p$, resp. ${\bf A}$, as the base space $P$ is varied in some functional space. To be specific, let $\mathcal{F}$ be the topological space introduced in Section \ref{prel}. Let $P$ be a compact, translation-invariant subset of $\mathcal{F}$ (which need not be the hull of any one element $f\in\mathcal{F}$). Let us assume that each $p\in P$ can be written in the form $$ p(t,x) = L_p(t) x + h_p(t,x), $$ where $L_p(\cdot)$ is a uniformly continuous function with values in the set $\mathcal{M}_d$ of real $d\times d$ matrices. We assume that the mappings $(p,t,x)\mapsto L_p(t) x$ and $(p,t,x)\mapsto h_p(t,x)$ are uniformly continuous on compact subsets of $P\times \mathbb{R} \times \mathbb{R}^d$. A sufficient condition for this to be the case is the following. Consider the metric space $P$; put $F(p,x)=p(0,x)$ for each $p\in P$ and $x\in\mathbb{R}^d$. Note that $F(\theta_t(p),x)=p(t,x)$ for all $t\in\mathbb{R}$, $p\in P$. Suppose that the Jacobian $\frac{\partial F}{\partial x}(p,0)$ is continuous as a function of $p$. Then, setting $$ L_p(t) x = \frac{\partial F}{\partial x}(\theta_t(p),0) x, \quad h_p(t,x) = p(t,x) - L_p(t) x, $$ we obtain the desired decomposition. We now impose the following hypothesis. 
\begin{enumerate} \item[(H1)] The family of linear systems $$ x' = L_p(t) x, \quad p \in P, $$ admits an exponential dichotomy over $P$ with constants $L>0$, $\gamma>0$ and a continuous family of projections $\{Q_p:p \in P\}$. \end{enumerate} Let $C_b=C_b(\mathbb{R},\mathbb{R}^d)$ be the Banach space of bounded continuous functions $x:\mathbb{R}\to \mathbb{R}^d$ with the norm $\|x\|_{\infty}=\sup_{t \in \mathbb{R}}\|x(t)\|$. For each $p\in P$, define a nonlinear operator $T_p:C_b\to C_b$ as follows: \begin{eqnarray*} T_p[x](t) &=& \int_t^{\infty} \Psi_p(t) Q_p \Psi_p(s)^{-1} h_p(s,x(s)) \, ds \\ &&+ \int_{-\infty}^t \Psi_p(t) (Q_p-I) \Psi_p(s)^{-1} h_p(s,x(s)) \, ds, \end{eqnarray*} where $\Psi_p(t)$ is the fundamental matrix with initial value $\Psi_p(0)=I$ (identity matrix) of the linear equation $x'=L_p(t) x$. Assume from now on that condition (\ref{eq6}) holds for all $p\in P$. We further impose a condition of uniform contractivity. \begin{enumerate} \item[(H2)] For all $p\in P$, all $t\in\mathbb{R}$, and all $x$, $y$ in $B_R$, one has $$ \left\|h_p(t,x) -h_p(t,y) \right\| \leq k \|x-y\|, $$ where $k<\gamma/(2L)$. \end{enumerate} To simplify the analysis, we now modify each $h_p(t,x)$ outside the ball $B_R$ so that $h_p$ satisfies Hypothesis (H2) for all $x \in \mathbb{R}^d$ and so that $h_p(t,x)=0$ whenever $\|x\|\geq R+1$. This means that condition (\ref{eq6}) does not hold if $\|x\|$ $\geq$ $R+1$, but it will be clear that this has no effect on our analysis of the pullback attractors of the equations (\ref{eq13}). \begin{proposition} \label{prop4} For each $p\in P$, the equation \begin{equation}\label{eq13} x' = p(t,x), \end{equation} admits a unique solution $x_p(t)$ which is bounded on all of $\mathbb{R}$. \end{proposition} \paragraph{Proof.} The argument is standard (see, e.g., Fink \cite{Fink}). The operator $T_p$ is a contraction on $C_b$ and hence admits a unique fixed point $x_p(\cdot)$, which is a bounded solution of (\ref{eq13}). 
Conversely, each bounded solution of (\ref{eq13}) is a fixed point of $T_p$ in $C_b$, so the proposition is proved. \hfil$\Box$ \smallskip Using Propositions \ref{prop2} and \ref{prop4}, we see that, for each $p\in P$, the pullback attractor $A_p$ $=$ $\{x_p(0)\}$, i.e., each $A_p$ contains exactly one point. Then from continuity with respect to parameters of the fixed point of a contractive mapping, we see that ${\bf A}=\{(x_p(0),p): p\in P\}\subset\mathbb{R}^d \times P$ is compact. One verifies that ${\bf A}\subset\mathbb{R}^d \times P$ is the global pullback attractor for the family (\ref{eq13}). Now, if $\{x_0\}$ is a singleton subset of a metric space $\mathcal{X}$, and if $B\subset\mathcal{X}$ is compact, then $H^{*}(B,\{x_0\})$ coincides with the Hausdorff distance $H(B,\{x_0\})$. This fact will allow us to prove that, if $p\in P$, then $A_p$ is a point of Hausdorff continuity for the pullback attractors $A_{\tilde{p}}$ of equations $x'=\tilde{p}(t,x)$ obtained by appropriate perturbations $\tilde{p}$ of $p$. We will formulate a fairly general continuity result whose hypotheses are satisfied in particular by the digitizations of Section \ref{USC}. We view the compact metric space $P$ as a subset of $\mathcal{G}$. Choose fixed values for the suprema in a) and for the constants $L_K$ in b) of the definition of $\mathcal{G}$, and let $\mathcal{G}_1\subset\mathcal{G}$ be the set of all $g\in\mathcal{G}$ for which a) and b) hold with these fixed values. As in Section \ref{USC}, write $\theta_t(g)(s,x)= g(t+s,x)$, and let $\phi(t,x_0,g)$ denote the solution of the Cauchy problem (\ref{eq11}) for each $g$ $\in$ $\mathcal{G}_1$. Using a Gronwall-type inequality, one verifies that the mapping $(t,x_0,g)$ $\mapsto$ $(\phi(t,x_0,g), \theta_t(g))$ is continuous on its domain of definition $V\subset\mathbb{R} \times \mathbb{R}^d \times \mathcal{G}_1$, and defines a continuous local skew-product flow on $\mathbb{R}^d \times\mathcal{G}_1$. 
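For the reader's convenience, we indicate the standard estimate behind the proof of Proposition \ref{prop4}, assuming the dichotomy bounds of (H1) take their usual form $\|\Psi_p(t) Q_p \Psi_p(s)^{-1}\| \leq L e^{-\gamma(s-t)}$ for $t \leq s$ and $\|\Psi_p(t)(Q_p-I)\Psi_p(s)^{-1}\| \leq L e^{-\gamma(t-s)}$ for $s \leq t$. By (H2),

```latex
\begin{aligned}
\left\| T_p[x](t) - T_p[y](t) \right\|
&\leq \int_t^{\infty} L e^{-\gamma(s-t)}\, k\,\|x-y\|_{\infty}\, ds
     + \int_{-\infty}^{t} L e^{-\gamma(t-s)}\, k\,\|x-y\|_{\infty}\, ds \\
&= \frac{2Lk}{\gamma}\, \|x-y\|_{\infty},
\end{aligned}
```

so $T_p$ is a contraction with constant $2Lk/\gamma$, which is $<1$ precisely because (H2) requires $k<\gamma/(2L)$.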
Next let $\tilde{P}\subset\mathcal{G}_1$ be a \textbf{compact} translation-invariant set. The Hausdorff semi-metric $H^{*}(\tilde{P},P)$ is defined relative to the metric $d$ in $\mathcal{G}_1$. Let $\eta_{*}>0$ be a constant so that
\begin{equation}\label{eq14}
\left\langle x \,,\, p(t,x) \right\rangle \leq - \eta_{*}
\end{equation}
for all $p\in P$, $t\in\mathbb{R}$, and
$x\in\mathbb{R}^d$ with $\|x\|=R$. One can show
that there exists $\delta>0$ so that, if $H^{*}(\tilde{P},P)<\delta$,
and if $\tilde{p}\in\tilde{P}$, then for each
$x_0\in\mathbb{R}^d$ with $\|x_0\|=R$, the
solution $x(t)$ of the Cauchy problem
$$
x' = \tilde{p}(t,x), \quad x(0) = x_0,
$$
satisfies $\|x(t)\|