\documentclass[reqno]{amsart} \AtBeginDocument{{\noindent\small {\em Electronic Journal of Differential Equations}, Vol. 2001(2001), No. 05, pp. 1--17.\newline ISSN: 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu \newline ftp ejde.math.swt.edu \quad ejde.math.unt.edu (login: ftp)} \thanks{\copyright 2001 Southwest Texas State University.} \vspace{1cm}} \begin{document} \title[\hfilneg EJDE--2001/05\hfil On the decay rate of solutions] {On the decay rate of solutions of non-autonomous differential systems} \author[Tom\'{a}s Caraballo \hfil EJDE--2001/05\hfilneg] { Tom\'{a}s Caraballo } \address{Tom\'{a}s Caraballo \hfill\break Departamento de Ecuaciones Diferenciales y An\'{a}lisis Num\'{e}rico \hfill\break Universidad de Sevilla, Apdo. de Correos 1160, 41080-Sevilla, Spain} \email{caraball@cica.es} \date{} \thanks{Submitted December 7, 2000. Published January 3, 2001.} \subjclass{34D05, 34D10, 34D20} \keywords{Asymptotic behaviour, exponential and polynomial stability, \hfill\break\indent rate of decay} \thanks{supported by DGICYT Project PB98-1134}
\begin{abstract} Some results on the asymptotic behaviour of solutions of differential equations concerning general decay rates are proved. We establish general criteria for the exponential, polynomial, and more general decay properties of solutions by using suitable Lyapunov functions. We also present a detailed analysis of perturbed linear and nonlinear differential systems. The theory is illustrated with several examples. \end{abstract} \maketitle \newtheorem{theorem}{Theorem}[section] % theorems numbered with section # \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \makeatletter \numberwithin{equation}{section} \makeatother
\section{Introduction} The asymptotic behaviour of systems described by differential equations is a very important topic, as the vast literature on this field shows. To study the stability of a nonlinear system one can, on the one hand, analyze its linear approximation (see Brauer and Nohel \cite{brauer-nohel}, Yoshizawa \cite{yoshizawa66}, among others); on the other hand, one can use another method, which relies on the technique discovered by Lyapunov (see Yoshizawa \cite{yoshizawa66}). This is called the direct method (or Lyapunov's Second Method) because it can be applied directly to the differential equation without any knowledge of its solutions, provided one is clever enough to construct the suitable auxiliary functions (called Lyapunov functions). However, a major limitation of this procedure is that there are no general methods to construct such auxiliary functions, even more so in the nonautonomous case, which is the one we are most interested in. In this respect, there exist some interesting results due to Yoshizawa (see \cite{yoshizawa66}-\cite{yoshizawa82}) and LaSalle (see \cite{lasalle}-\cite{lasalle76}), among others, which ensure the asymptotic approach of trajectories to some closed attracting sets for the differential system (see also Kloeden \cite{kloeden} for another approach). However, apart from the usual exponential stability results obtained by the first approximation technique, in general, almost nothing is said about how fast solutions converge when one deals with the Lyapunov Second Method.
Motivated by this fact, we shall first establish a sufficient condition for the exponential decay of solutions which allows the derivative of the Lyapunov function along the trajectories of the system to be bounded by a negative definite function plus an additional nonnegative function with exponential decay. Another interesting problem arises when one is not able to prove exponential stability but knows that the null solution is asymptotically stable. In this case, an interesting question concerns the possibility of determining the decay rate of solutions (to zero or to another solution). As far as we know, most stability results related to the Lyapunov method are devoted to providing criteria that ensure stability, asymptotic stability, etc., but, in general, they do not give any further information about the decay rate of solutions (see Haraux \cite[pp. 45-47]{haraux} for a study of the energy decay of a particular second order equation). We shall partially fill this gap by providing some conditions which permit us to estimate the decay rates relative to certain general functions (e.g. polynomial, logarithmic, etc.), by introducing a generalization of the concept of Lyapunov exponents. Another interesting fact is that, although our main interest concerns sub-exponential decay of solutions, our treatment also includes the case of super-exponential decay.

This paper is organized as follows. In Section 2, we prove a sufficient condition ensuring exponential decay of solutions, and another one concerning asymptotic polynomial behaviour. Next, we introduce in Section 3 the concepts of generalized Lyapunov exponent with respect to a positive general function and the general decay rate of solutions, give some criteria for the asymptotic decay of solutions, and illustrate the results with some examples. Section 4 is devoted to the analysis of perturbed systems; in fact, we analyze perturbations of linear and nonlinear differential systems. Finally, we include some remarks and ideas concerning the possibility of extending the results to the infinite dimensional framework and the functional one.

\section{Exponential and polynomial asymptotic behaviour} Consider the following initial-value problem for a system of differential equations in $\mathbb{R}^{n}$:
\begin{equation}
\begin{gathered}
\dfrac{\mathrm{d}}{\mathrm{d}t}X(t)=f(t,X(t)), \quad t>t_{0} \\
X(t_{0})=X_{0}\in \mathbb{R}^{n},
\end{gathered} \label{P}
\end{equation}
where $f:\mathbb{R}\times D\to \mathbb{R}^{n}$ is a continuous function, and $D\subset \mathbb{R}^{n}$ is an open set such that $0\in D$. It is well known (see, e.g., Coddington and Levinson \cite{coddington-levinson}) that, given $t_{0}\in \mathbb{R}$ and $X_{0}\in \mathbb{R}^{n}$, there exists at least one solution to this problem, defined on a maximal open interval. As we are interested in the stability or asymptotic behaviour of solutions, we assume that every solution to (\ref{P}) is defined for $t\geq t_{0}$. When we deal with stability analysis, we will also assume that $f(t,0)=0$, so that we can consider the stability of the zero solution. Otherwise, we will not assume this, and we will simply analyze the asymptotic behaviour of solutions.
Associated to the differential system in (\ref{P}), we consider the derivative of a function along the system; i.e., for a continuously differentiable function $V(\cdot,\cdot):\mathbb{R}\times D\to \mathbb{R}$ we define the function $\dot{V}(\cdot,\cdot):\mathbb{R}\times D\to \mathbb{R}$ as follows:
\begin{equation*}
\dot{V}(t,x)=\frac{\partial V(t,x)}{\partial t}+\sum_{i=1}^{n}\frac{\partial V(t,x)}{\partial x_{i}}f_{i}(t,x).
\end{equation*}
\noindent\textbf{Remark.} Observe that if $X(t)$ is a solution to (\ref{P}), then
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t}V(t,X(t))=\dot{V}(t,X(t)).
\end{equation*}
Now we state a result which, in particular, ensures exponential decay to zero of solutions to (\ref{P}). It is worth mentioning that $\dot{V}$ does not need to be negative definite.
\begin{theorem} Assume $V:\mathbb{R}\times D\to \mathbb{R}$ is a continuously differentiable function satisfying:
\begin{align*}
\exists c_{1} & >0\text{ \ and }p>0\text{\ \ such that \ }c_{1}|x|^{p}\leq V(t,x),\text{ for all }(t,x)\in\mathbb{R}\times D,\\
\exists c_{2} & >0\text{ \ such that \ }\dot{V}(t,x)\leq-c_{2} V(t,x)+\lambda(t),\text{ \ for all }(t,x)\in\mathbb{R}\times D,
\end{align*}
where $\lambda(\cdot)$ is a nonnegative continuous function such that there exist $M\geq0$, $\gamma>0$ satisfying
\[
\lambda(t)\leq M\mathrm{e}^{-\gamma t},\text{ \ for all }t\in\mathbb{R}^{+}.
\]
Then, there exists $\varepsilon>0$ such that for any solution $X(t)$ to (\ref{P}) defined for $t\geq t_{0}\geq 0$, there exists a constant $C=C(X_{0})$ (which may depend on $X_{0}$) such that
\[
|X(t)|\leq C(X_{0})\mathrm{e}^{-\varepsilon (t-t_0)/p},\text{ \ for all }t\geq t_{0}.
\]
\end{theorem}
\begin{proof} Let us fix a positive number $\varepsilon$ satisfying $0<\varepsilon<\min\{c_{2},\gamma\}$, and estimate the following derivative for $X(t)$, a solution to (\ref{P}) defined for $t\geq t_{0}$:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}t}\left[ \mathrm{e}^{\varepsilon t}V(t,X(t))\right] & =\varepsilon\mathrm{e}^{\varepsilon t}V(t,X(t))+\mathrm{e}^{\varepsilon t}\dot{V}(t,X(t))\\
& \leq\mathrm{e}^{\varepsilon t}\left( \varepsilon V(t,X(t))-c_{2}V(t,X(t))+\lambda(t)\right) \\
& \leq\mathrm{e}^{\varepsilon t}\lambda(t),
\end{align*}
and thus
\begin{align*}
\mathrm{e}^{\varepsilon t}V(t,X(t)) & \leq \mathrm{e}^{\varepsilon t_0}V(t_0,X_{0})+\int_{t_0}^{t}\mathrm{e}^{\varepsilon s}\lambda(s)\,ds\\
& \leq \mathrm{e}^{\varepsilon t_0}V(t_0,X_{0})+\frac{M\mathrm{e}^{(\varepsilon-\gamma) t_0}}{\gamma-\varepsilon}\\
& \leq \mathrm{e}^{\varepsilon t_0}\left(V(t_0,X_{0})+\frac{M}{\gamma-\varepsilon}\right).
\end{align*}
Therefore
\[
|X(t)|^{p}\leq\frac{1}{c_{1}}\left( V(t_0,X_{0})+\frac{M}{\gamma-\varepsilon}\right) \mathrm{e}^{-\varepsilon (t-t_0)},\text{ \ for all \ }t\geq t_{0},
\]
and the proof is complete.
\end{proof}
\noindent \textbf{Example 1. }Let us exhibit a simple example to illustrate this result. Consider the differential equation
\begin{equation}
\frac{\mathrm{d}X}{\mathrm{d}t}=-4X+\mathrm{e}^{-t}X^{1/3}, \label{ejemplo1}
\end{equation}
and take the usual auxiliary function $V(x)=\frac{1}{2}x^{2}$. Then
\begin{equation}
\dot{V}(x)=\frac{\mathrm{d}V(x)}{\mathrm{d}x}\cdot \left( -4x+\mathrm{e}^{-t}x^{1/3}\right) =-4x^{2}+\mathrm{e}^{-t}x^{4/3}, \label{vejemplo1}
\end{equation}
which is not negative definite.
However, by Young's inequality ($ab\leq l\frac{a^{p}}{p}+\frac{1}{ql^{q/p}}b^{q}$ with $\frac{1}{p}+\frac{1}{q}=1$), applied with $a=x^{4/3}$, $b=\mathrm{e}^{-t}$, $p=3/2$, $q=3$ and a suitable $l>0$, it follows that
$$
\dot{V}(x) =-4x^{2}+\mathrm{e}^{-t}x^{4/3} \leq (-4+\frac{2}{3}l)x^{2}+\frac{1}{3l^{2}}\mathrm{e}^{-3t},
$$
and, for $l=3/2$, we have $-4+\frac{2}{3}l=-3$, and therefore
\begin{equation*}
\dot{V}(x)\leq -3x^{2}+\lambda (t),
\end{equation*}
where $\lambda (t)=\frac{4}{27}\mathrm{e}^{-3t}$. Now, the theorem ensures that solutions decrease towards zero with exponential decay. \smallskip

\noindent\textbf{Remark. }The exponential decay of $\lambda$ is essential to guarantee the same decay of solutions. Indeed, consider the following one dimensional equation
\begin{equation*}
\frac{\mathrm{d}X}{\mathrm{d}t}=-X+\frac{1}{1+t}.
\end{equation*}
It is clear that the null solution to the autonomous equation $\dot{X}=-X$ is exponentially stable. Moreover, every solution to this equation converges exponentially to zero (i.e. the global attractor for this equation is the set $\{0\}$). However, as soon as we consider the perturbed nonautonomous version, the solutions do not converge to zero, in general, with the same rate. To see this, notice that the solution to the problem
$$\begin{gathered}
\dfrac{\mathrm{d}X}{\mathrm{d}t}=-X+\dfrac{1}{1+t}\\
X(t_{0})=X_{0},
\end{gathered} $$
is given by
\begin{equation*}
X(t)=X(t;t_{0},X_{0})=\mathrm{e}^{-(t-t_{0})}X_{0}+\int_{t_{0}}^{t}\mathrm{e}^{-(t-s)}(1+s)^{-1}\,ds\,.
\end{equation*}
One can easily check that the integral term behaves as $(1+t)^{-1}$ when $t\to+\infty$, so that
\begin{equation*}
\lim_{t\to +\infty}\frac{\log\left| X(t)\right| }{t}=0,
\end{equation*}
and we do not have exponential decay to zero. However, as a consequence of the theory we shall develop, we will be able to ensure that the solutions decay to zero with polynomial rate (see Example 3 below).\medskip

This fact motivates our interest in analyzing the decay rate of solutions; that is, if we cannot prove exponential convergence of solutions but know that they are asymptotically stable, is it possible to ensure at least polynomial decay? The typical example of nonexponential convergence of solutions to an equilibrium is given by the following simple ordinary differential equation (see Haraux \cite[pp. 45-46]{haraux}):
\begin{equation*}
\dot{X}(t)=-X(t)\left| X(t)\right| ^{p-1},\quad t\geq0,\; p>1.
\end{equation*}
The solution starting at $X_{0}$ at time $t=0$ is given by
\begin{equation*}
X(t)=\frac{\mathop{\rm sgn}(X_{0})}{\left\{ \left( p-1\right) t+\left| X_{0}\right| ^{1-p}\right\} ^{1/\left( p-1\right) }},
\end{equation*}
so that $\left| X(t)\right| $ behaves as $\left\{ 1/\left[ \left( p-1\right) t\right] \right\} ^{1/\left( p-1\right) }$ as $t$ goes to $\infty$, and therefore it decreases polynomially to the equilibrium.\medskip
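\noindent\textbf{Remark. }Let us sketch, only for the reader's convenience, the elementary computation (a standard separation of variables) behind this explicit formula. Solutions preserve the sign of $X_{0}$ and, for $X_{0}\neq0$, do not vanish in finite time, so we may compute
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t}\left| X(t)\right| ^{1-p}=(1-p)\left| X(t)\right| ^{-p}\mathop{\rm sgn}(X(t))\,\dot{X}(t)=(1-p)\left| X(t)\right| ^{-p}\left( -\left| X(t)\right| ^{p}\right) =p-1,
\end{equation*}
whence $\left| X(t)\right| ^{1-p}=\left| X_{0}\right| ^{1-p}+(p-1)t$, which is exactly the expression above.\medskip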
Owing to this fact, in the following result we provide a sufficient condition guaranteeing polynomial convergence of solutions and, in the next Section, we will state a more general result concerning more general decay rates.
\begin{theorem} \label{theorem-polynomial} Assume that there exists a continuously differentiable function $V:\mathbb{R}\times D\to \mathbb{R}$ satisfying
\begin{align*}
\exists c_{1} & >0\text{ \ and }p>0\text{\ \ such that \ }c_{1}|x|^{p}\leq V(t,x),\text{ for all }(t,x)\in\mathbb{R}\times D,\\
\exists q & >1\text{ such that }\dot{V}(t,x)\leq-\alpha(t)\left[ V(t,x)\right] ^{q},\text{ \ for all }(t,x)\in\mathbb{R}\times D,
\end{align*}
where $\alpha(\cdot)$ is a nonnegative continuous function such that
\begin{equation}
\liminf_{t\to \infty}\frac{1}{t}\int_{t_{0}}^{t}\alpha(s)\,ds \geq \nu>0. \label{limit-inf}
\end{equation}
Then, there exists $\delta>0$ such that for any solution $X(t)$ to (\ref{P}) defined for $t\geq t_{0}$, there exists a constant $C=C(X_{0})$ (which may depend on $X_{0}$) such that
\[
|X(t)|\leq C(X_{0})t^{-\delta},\text{ \ for all }t\geq t_{0}.
\]
\end{theorem}
\begin{proof} Let us consider $X(t)$, a solution to (\ref{P}) defined for $t\geq t_{0}$. Then
$$
\frac{\mathrm{d}}{\mathrm{d}t}\left[ V(t,X(t))\right] =\dot{V}(t,X(t)) \leq-\alpha(t)\left[ V(t,X(t))\right] ^{q}.
$$
Denoting $u(t)=V(t,X(t))$, we have that this function satisfies the differential inequality
\[
\dot{u}(t)\leq-\alpha(t)\left[ u(t)\right] ^{q},
\]
and therefore its positive solutions satisfy
\[
\frac{\dot{u}(t)}{\left[ u(t)\right] ^{q}}\leq-\alpha(t).
\]
Since $\frac{\mathrm{d}}{\mathrm{d}t}\left[ u(t)^{1-q}\right] =(1-q)u(t)^{-q}\dot{u}(t)\geq(q-1)\alpha(t)$, a direct integration between $t_{0}$ and $t$ yields
\[
u(t)\leq\left[ u(t_{0})^{1-q}+(q-1)\int_{t_{0}}^{t}\alpha(s)\,ds\right] ^{-1/(q-1)}.
\]
Taking into account assumption (\ref{limit-inf}), given $\varepsilon\in(0,\nu)$ we can find $T\geq t_{0}$ large enough such that
\[
\int_{t_{0}}^{t}\alpha(s)\,ds\geq\left( \nu-\varepsilon\right) t,\text{ \ for all }t\geq T,
\]
and, consequently (enlarging the constant if necessary to cover $[t_{0},T]$),
\[
u(t)\leq C_{0}(X_{0})t^{-1/(q-1)},\text{ for all }t\geq t_{0}.
\]
Recalling now the definition of $u(t)$, it is clear that the result holds by setting $\delta=1/(p(q-1))$ and a suitable $C(X_{0})$.
\end{proof} \smallskip

\noindent \textbf{Example 2. } We consider the following two dimensional system in order to apply the previous result:
$$\begin{gathered}
\dot{y}_{1}=y_{2}-y_{1}\left| y_{1}\right| \\
\dot{y}_{2}=-y_{1}-y_{2}\left| y_{2}\right| .
\end{gathered} $$
It is easy to check that the unique stationary solution is the zero solution. Let us take $V(t,y_{1},y_{2})=\frac{1}{2}(y_{1}^{2}+y_{2}^{2})$. Then
\begin{align*}
\dot{V}(t,y_{1},y_{2})& =-y_{1}^{2}\left| y_{1}\right| -y_{2}^{2}\left| y_{2}\right| \\
& =-\left( \left| y_{1}\right| ^{3}+\left| y_{2}\right| ^{3}\right) \\
& \leq -c\left( \left| y_{1}\right| ^{2}+\left| y_{2}\right| ^{2}\right) ^{3/2} \\
& =-c\left[ V(t,y_{1},y_{2})\right] ^{3/2},
\end{align*}
where $c>0$ is a suitable constant (notice that we have used the inequality $\left( \frac{a+b}{2}\right) ^{p}\leq \frac{a^{p}}{2}+\frac{b^{p}}{2}$, $a,b>0$, $p>1$). Therefore, every solution to the system decays to zero with decay rate at least $t^{-1}$.

\section{General decay rate of solutions} Firstly, we will introduce the concept of generalized Lyapunov exponent with respect to a positive function $\lambda (\cdot )$, which will enable us to establish a precise definition of stability or asymptotic behaviour with general decay function $\lambda (\cdot )$.
\begin{definition} \label{definition1} \rm Let the positive function $\lambda(t)\uparrow+\infty$ be defined for all sufficiently large $t>0$, say $t\geq T>0$. Let $X(t)$ be a solution to (\ref{P}).
The number
\[
\limsup_{t\to \infty}\frac{\log|X(t)|}{\log\lambda(t)}
\]
is called the generalized Lyapunov exponent of $X(t)$ with respect to $\lambda(t)$. The solution $X(t)$ is said to decay to zero with decay function $\lambda(t)$ of order at least $\gamma>0$ if its generalized Lyapunov exponent is less than or equal to $-\gamma$, i.e.,
\[
\limsup_{t\to \infty}\frac{\log|X(t)|}{\log\lambda(t)} \leq-\gamma.
\]
If, in addition, $f(t,0)=0$ for all $t\in\mathbb{R}$, the zero solution is said to be globally asymptotically stable with decay function $\lambda(t)$ of order at least $\gamma>0$ if every solution to (\ref{P}) defined in the future decays to zero with decay function $\lambda(t)$ of order at least $\gamma>0$.
\end{definition}
\noindent \textbf{Remark. } Clearly, replacing the decay function $\lambda (t)$ in the above definition by $\mathrm{e}^{t}$ leads to the usual concept of Lyapunov exponents and the exponential decay rate. Also, we point out that this definition includes both the case of sub-exponential decay functions (polynomials, logarithms) and the situation of super-exponential decay (e.g. $\lambda (t)=\exp \{\exp t\}$). \smallskip

Now, we can prove a sufficient condition ensuring stability of the solutions of (\ref{P}) with a general decay rate.
\begin{theorem} \label{theorem2} Let $\varphi_{1}(t)$, $\varphi_{2}(t)$ be two continuous functions with $\varphi_{1}$ nonnegative. Assume there exist a continuously differentiable function $V:\mathbb{R}^{+}\times D\to \mathbb{R}$, and constants $p>0$, $m\geq 0$, $\nu\geq 0$, $\theta\in\mathbb{R}$ such that
\begin{description}
\item[(a)] $|x|^{p}\lambda(t)^{m}\leq V(t,x)$, $(t,x)\in\mathbb{R}^{+}\times D$.
\item[(b)] $\dot{V}(t,x)\leq\varphi_{1}(t)+\varphi_{2}(t)V(t,x)$, $(t,x)\in \mathbb{R}^{+}\times D$.
\item[(c)] $\exists T>0$ large enough such that for $t_{0}\geq T$,
$$\begin{gathered}
\limsup_{t\to \infty }\dfrac{\log \int_{t_{0}}^{t}\varphi _{1}(s)\exp \left\{ -\int_{t_{0}}^{s}\varphi _{2}(r)\,\mathrm{d}r\right\} \,ds }{\log \lambda (t)}\leq \nu ,\\
\limsup_{t\to \infty } \dfrac{\int_{t_{0}}^{t}\varphi _{2}(s)\,ds}{\log \lambda (t)}\leq \theta .
\end{gathered} $$
\end{description}
Then, if $X(t)$ is a solution to (\ref{P}) defined in the future (i.e. for $t\geq t_{0}$),
\begin{equation*}
\limsup_{t\to \infty }\frac{\log |X(t)|}{\log \lambda (t)}\leq -\frac{m-(\theta +\nu )}{p}\,.
\end{equation*}
In particular, if $m>\theta +\nu $ and $f(t,0)=0$, the null solution is globally asymptotically stable with decay function $\lambda (t)$ of order at least $\left( m-(\theta +\nu )\right)/p$.
\end{theorem}
\begin{proof} Given $(t_{0},X_{0})\in(T,+\infty)\times D$, and $X(t)$ a solution to the problem (\ref{P}) defined in the future, let us compute
\[
\frac{\mathrm{d}}{\mathrm{d}t}V(t,X(t))=\dot{V}(t,X(t))\leq\varphi_{1}(t)+\varphi_{2}(t)V(t,X(t)),
\]
which implies
\[
\frac{\mathrm{d}}{\mathrm{d}t}\left[ \exp\left\{ -\int_{t_{0}}^{t}\varphi_{2}(s)\,ds\right\} V(t,X(t))\right] \leq\varphi_{1}(t)\exp\left\{ -\int_{t_{0}}^{t}\varphi_{2}(s)\,ds\right\} ,
\]
whence
\[
V(t,X(t))\leq\Big( V(t_{0},X_{0})+\int_{t_{0}}^{t}\varphi_{1}(s)\exp \big\{ -\int_{t_{0}}^{s}\varphi_{2}(r)\,\mathrm{d}r\big\} \,ds\Big) \exp\big( \int_{t_{0}}^{t}\varphi_{2}(s)\,ds\big) .
\]
Given $\varepsilon>0$, there exists $t_{1}(\varepsilon)$ such that for all $t\geq \max\{t_{1}(\varepsilon), t_{0}\}$ we have
\[
\int_{t_{0}}^{t}\varphi_{1}(s)\exp\left\{ -\int_{t_{0}}^{s}\varphi_{2}(r)\,\mathrm{d}r\right\} \,ds\leq\lambda(t)^{\nu+\varepsilon},\text{ \ \ }\int_{t_{0}}^{t}\varphi_{2}(s)\,ds\leq\log\lambda(t)^{(\theta+\varepsilon)}.
\]
Consequently, it follows that
\[
\log V(t,X(t))\leq\log( V(t_{0},X_{0})+\lambda(t)^{\nu+\varepsilon})+(\theta+\varepsilon)\log\lambda(t)
\]
for all $t\geq \max\{t_{1}(\varepsilon), t_{0}\}$, which immediately implies that
\[
\limsup_{t\to \infty}\frac{\log V(t,X(t))}{\log\lambda(t)} \leq\nu+\varepsilon+\theta+\varepsilon.
\]
As this holds for every $\varepsilon>0$, then
\[
\limsup_{t\to \infty}\frac{\log V(t,X(t))}{\log\lambda(t)} \leq\nu+\theta,
\]
and, therefore, by (a),
\[
\limsup_{t\to \infty}\frac{\log|X(t)|}{\log\lambda(t)}\leq -\frac{m-\left( \theta+\nu\right) }{p},
\]
which completes the proof.
\end{proof}
\noindent\textbf{Remarks. }a) Observe that, if $\varphi_{2}(t)\geq0$, the result follows by replacing condition (c) by
\begin{equation*}
\begin{array}{ll}
\limsup_{t\to \infty}\dfrac{\log\int_{t_{0}}^{t}\varphi_{1}(s)\,ds}{\log\lambda(t)}\leq\nu, & \limsup_{t\to \infty} \dfrac{\int_{t_{0}}^{t}\varphi_{2}(s)\,ds}{\log\lambda(t)}\leq\theta.
\end{array}
\end{equation*}
b) On the other hand, when $m-\left( \theta+\nu\right) >0$, it can be proved under the assumptions of the theorem that every solution to problem (\ref{P}) is defined for all $t\geq t_{0}$, so that the limit makes sense for every solution.\smallskip

The next result extends Theorem \ref{theorem-polynomial} to the more general case of a general decay function $\lambda(t)$ instead of $t$.
\begin{theorem} Assume $V:\mathbb{R}\times D\to \mathbb{R}$ is a continuously differentiable function satisfying
\begin{align*}
\exists c_{1} & >0\text{ \ and }p>0\text{\ \ such that \ }c_{1}|x|^{p}\leq V(t,x),\text{ for all }(t,x)\in\mathbb{R}\times D,\\
\exists q & >1\text{ such that }\dot{V}(t,x)\leq-\alpha(t)\left[ V(t,x)\right] ^{q},\text{ \ for all }(t,x)\in\mathbb{R}\times D,
\end{align*}
where $\alpha(\cdot)$ is a nonnegative continuous function such that
\begin{equation}
\liminf_{t\to \infty}\frac{\log\int_{t_{0}}^{t}\alpha(s)\,ds }{\log\lambda(t)}\geq\nu>0.\label{limite5}
\end{equation}
Then, for any solution $X(t)$ to (\ref{P}) defined for $t\geq t_{0}$ it holds that
\[
\limsup_{t\to \infty}\frac{\log\left| X(t)\right| }{\log\lambda(t)}\leq-\frac{\nu}{p(q-1)}.
\]
\end{theorem}
\begin{proof} The proof follows the same lines as that of Theorem \ref{theorem-polynomial}, taking into account the new assumption (\ref{limite5}).
\end{proof} \smallskip

Now, we shall consider some examples in order to illustrate the results. Of course, as we are going to consider simple linear examples, the conclusions can be obtained by solving the equations directly, and the theory to be developed in the next Section can also be applied. However, our interest right now is to show the different situations which can appear in more complex systems.\smallskip

\noindent \textbf{Example 3. } Consider again the equation
\begin{equation*}
\frac{\mathrm{d}X}{\mathrm{d}t}=-X+\frac{1}{1+t}\,.
\end{equation*}
We know that every solution $X(t)$ satisfies $\lim_{t\to +\infty } \log \left| X(t)\right|/t=0$.
But, taking $V(t,x)=(1+t)x^{2}$, it is easy to check that
\begin{align*}
\dot{V}(t,x)& =x^{2}+2x(1+t)\left( -x+\frac{1}{1+t}\right) \\
& \leq x^{2}\left( -1-2t\right) +\frac{2x(1+t)^{1/2}}{\left( 1+t\right) ^{1/2}} \\
& \leq x^{2}\left( -1-2t\right) +x^{2}\left( 1+t\right) +\frac{1}{1+t} \\
& \leq \frac{1}{1+t},
\end{align*}
so that, setting $\varphi _{1}(t)=\frac{1}{1+t}$ and $\varphi _{2}(t)=0$, we immediately obtain $\nu =\theta =0$ in Theorem \ref{theorem2}, which implies that
\begin{equation*}
\limsup_{t\to +\infty }\frac{\log \left| X(t)\right| }{\log \left( 1+t\right) }\leq -\frac{1}{2}.
\end{equation*}
In other words, although the solutions do not approach zero exponentially, we can ensure that their decay rate is at least $t^{-1/2}$.\smallskip

\noindent\textbf{Example 4. } Now we include an example which does not contain any term causing exponential decay (such as $-X$ in the previous one). Consider the following situation for $p>1/2$ and $q>0$:
\begin{equation*}
\frac{\mathrm{d}X}{\mathrm{d}t}=\frac{-p}{1+t}X+\frac{1}{\left( 1+t\right) ^{q}}.
\end{equation*}
First, we take the function $V(t,x)=(1+t)^{2p}x^{2}$, and evaluate
\begin{align*}
\dot{V}(t,x)& =2p(1+t)^{2p-1}x^{2}+2(1+t)^{2p}x\left( \frac{-p}{1+t}x+\frac{1}{\left( 1+t\right) ^{q}}\right) \\
& \leq \frac{2(1+t)^{2p}x}{\left( 1+t\right) ^{q}} \\
& \leq \frac{2x(1+t)^{p-\frac{1}{2}}(1+t)^{p+\frac{1}{2}}}{(1+t)^{q}} \\
& \leq (1+t)^{2p-1}x^{2}+(1+t)^{2(p-q)+1}.
\end{align*}
Now, observe that we can set $\varphi _{1}(t)=\left( 1+t\right) ^{2(p-q)+1}$ and $\varphi _{2}(t)=(1+t)^{-1}$, yielding
\begin{equation*}
\lim_{t\to +\infty }\frac{\int_{0}^{t}\varphi _{2}(s)\,ds}{\log (1+t)}=1,
\end{equation*}
and
\begin{equation*}
\lim_{t\to +\infty }\frac{\log \int_{0}^{t}\varphi _{1}(s)\,ds}{ \log (1+t)}=\left\{
\begin{array}{ll}
2(p-q)+2 &\text{if }2(p-q)+2>0\,, \\
0 &\text{otherwise.}
\end{array}
\right.
\end{equation*}
Then, we can apply Theorem \ref{theorem2} and obtain convergence to zero with decay rate at least $\left( 1+t\right) ^{-\gamma }$ in the following cases: \\
If $2(p-q)+2>0$ (i.e. $q<p+1$) and $q>3/2$, then $\gamma =(-3+2q)/2$.\\
If $2(p-q)+2\leq0$, then $\gamma =p-1/2$.\smallskip

\noindent\textbf{Example 5. } Finally, we exhibit a situation with a more general decay rate. To this end, consider
\begin{equation*}
\frac{\mathrm{d}X}{\mathrm{d}t}=\frac{-2X}{\left( 1+t\right) \log\left( 1+t\right) }+\frac{1}{\left( 1+t\right) \left[ \log\left( 1+t\right) \right] ^{2}}.
\end{equation*}
By using the Lyapunov function $V(t,x)=x^{2}\log\left( 1+t\right) $ (notice that we are considering $\lambda(t)=\log\left( 1+t\right) $), it holds that
\begin{align*}
\dot{V}(t,x) & =\frac{1}{1+t}x^{2}+2x\log\left( t+1\right) \left( \frac{-2x}{\left( 1+t\right) \log\left( 1+t\right) }+\frac{1}{\left( 1+t\right) \left[ \log\left( 1+t\right) \right] ^{2}}\right) \\
& \leq\frac{-3x^{2}}{1+t}+\frac{2x}{\left( 1+t\right) \log\left( 1+t\right) } \\
& \leq\frac{-2x^{2}}{1+t}+\frac{1}{\left( 1+t\right) \left[ \log\left( 1+t\right) \right] ^{2}},
\end{align*}
and we can set $\varphi_{1}(t)=\frac{1}{\left( 1+t\right) \left[ \log\left( 1+t\right) \right] ^{2}}$ and $\varphi_{2}(t)=0$. Now, it is not difficult to check that (c) in Theorem \ref{theorem2} is fulfilled with $\theta=\nu=0$ and, consequently, $\gamma=1/2$.\smallskip
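\noindent\textbf{Remark. }Only for completeness, let us verify condition (c) explicitly in this last example. Since
\begin{equation*}
\int_{t_{0}}^{t}\varphi_{1}(s)\,ds=\int_{t_{0}}^{t}\frac{\mathrm{d}s}{\left( 1+s\right) \left[ \log\left( 1+s\right) \right] ^{2}}=\frac{1}{\log\left( 1+t_{0}\right) }-\frac{1}{\log\left( 1+t\right) }\leq\frac{1}{\log\left( 1+t_{0}\right) },
\end{equation*}
the integral $\int_{t_{0}}^{t}\varphi_{1}(s)\,ds$ remains bounded, and therefore
\begin{equation*}
\limsup_{t\to \infty}\frac{\log\int_{t_{0}}^{t}\varphi_{1}(s)\,ds}{\log\lambda(t)}=0=\nu,
\end{equation*}
while $\theta=0$ trivially because $\varphi_{2}=0$. As $V(t,x)=|x|^{2}\lambda(t)$, assumption (a) of Theorem \ref{theorem2} holds with $p=2$ and $m=1$, so the conclusion gives the order $\left( m-(\theta+\nu)\right) /2=1/2$; i.e., every solution decays to zero at least as $\left[ \log\left( 1+t\right) \right] ^{-1/2}$.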
\section{Perturbed systems} In this Section, we shall investigate some stability properties of solutions of perturbed differential systems. Our aim is to prove some results which, in particular, ensure that certain decay properties are transferred from the unperturbed system to the perturbed one. In other words, if we know that the solutions of a differential system decay to zero with a certain decay rate, under which conditions can we guarantee that the perturbed system enjoys a similar property? Firstly, we will consider the perturbed linear differential system, and then we will treat a more general nonlinear one.
\subsection{The perturbed linear case} Consider the linear differential system
\begin{equation}
\dot{X}=A(t)X, \label{linear}
\end{equation}
where $A\in C(\mathbb{R};\mathcal{L}(\mathbb{R}^{n}))$, i.e. $A(t)$ is an $n\times n$ matrix whose elements are continuous functions. Let $\lambda(t)$ be a function satisfying the assumptions in the previous Section, and let $\left\langle \cdot,\cdot\right\rangle $ denote the scalar product in $\mathbb{R}^{n}$ associated with the norm $\left| \cdot\right| $. Let us assume that the zero solution is globally asymptotically stable with decay rate $\lambda(t)$ of order $\gamma>0$, which happens if, for instance, there exists a continuous function $\alpha(t)$ such that
\begin{equation*}
2\left\langle A(t)u,u\right\rangle \leq\alpha(t)|u|^{2},\text{ \ \ for all \ }t\in\mathbb{R},u\in\mathbb{R}^{n},
\end{equation*}
with
\begin{equation*}
\limsup_{t\to +\infty}\frac{\int_{0}^{t}\alpha(s)\,ds}{\log\lambda(t)}\leq-2\gamma.
\end{equation*}
Now, consider the perturbed problem
\begin{equation}
\dot{X}=A(t)X+F(t,X), \label{perturbed}
\end{equation}
where $F:\mathbb{R}\times\mathbb{R}^{n}\to \mathbb{R}^{n}$ is a continuous function. We shall prove that, under suitable conditions, every solution to (\ref{perturbed}) decreases to zero with the same decay function, although possibly with a different order. To start, consider the linear autonomous case $\dot{X}=AX$. If we assume that the trivial solution is asymptotically stable with some decay rate, then, as this is an autonomous system, it must be uniformly asymptotically stable and hence exponentially stable. Thus, all the eigenvalues associated with the matrix $A$ have negative real parts and, if necessary, by a suitable change of norm and its associated inner product (see Hirsch and Smale \cite[p. 211]{hirsch-smale}), we can ensure that there exists $\gamma >0$ such that $\left| \exp \left\{ \left( t-t_{0}\right) A\right\} \right| \leq \mathrm{e}^{-\gamma \left( t-t_{0}\right) }$ for all $t_{0}$ and $t\geq t_{0}$. This immediately implies (see again Hirsch and Smale \cite[p. 259]{hirsch-smale}) that
\begin{equation*}
\left\langle Ax,x\right\rangle \leq -\gamma \left| x\right| ^{2},\text{ \ for all \ }x\in \mathbb{R}^{n}.
\end{equation*}
Let us now consider the perturbed system
\begin{equation}
\dot{X}=AX+F(t,X), \label{primera-aprox}
\end{equation}
where $F:\mathbb{R}\times D\to \mathbb{R}^{n}$ is continuous ($D\subset \mathbb{R}^{n}$ is an open set containing $0$) and satisfies
\begin{equation*}
\left\langle F(t,x),x\right\rangle \leq \phi _{1}(t)+\phi _{2}(t)\left| x\right| ^{2},\text{ \ \ for all }(t,x)\in \mathbb{R}\times D,
\end{equation*}
where $\phi _{1}$ and $\phi _{2}$ are continuous functions, $\phi _{1}\geq 0$, fulfilling (for a decay function $\lambda (t)$ as in the previous Section)
\begin{equation*}
\begin{array}{c}
\limsup_{t\to \infty }\dfrac{\log \int_{t_{0}}^{t}2\phi _{1}(s)\exp \left\{ -\int_{t_{0}}^{s}2\left( \phi _{2}(r)-\gamma \right) \,\mathrm{d}r\right\} \,ds}{\log \lambda (t)}\leq \nu , \\
\limsup_{t\to \infty }\dfrac{\int_{t_{0}}^{t}2\left( \phi _{2}(s)-\gamma \right) \,ds}{\log \lambda (t)}\leq \theta .
\end{array}
\end{equation*}
Then, it is straightforward to check that the assumptions in Theorem \ref{theorem2} are satisfied with $V(t,x)=\left| x\right| ^{2}$, $m=0$, $p=2$, $\varphi _{1}(t)=2\phi _{1}(t)$, $\varphi _{2}(t)=2\left( \phi _{2}(t)-\gamma \right)$, and therefore
\begin{equation}
\limsup_{t\to \infty }\frac{\log |X(t)|}{\log \lambda (t)}\leq \frac{(\theta +\nu )}{2}. \label{estrella}
\end{equation}
Now, if $\theta +\nu <0$, asymptotic decay to zero with decay rate $\lambda (t)$ of order at least $-\left( \theta +\nu \right) /2$ holds.\smallskip

Although this consequence may seem a trivial result, the important point is that we can now give a very easy proof of two classical results concerning stability in the first approximation, and even weaken their assumptions. In fact, we are referring here to the following general result (see, for instance, Yoshizawa \cite{yoshizawa66}, Brauer and Nohel \cite{brauer-nohel}, etc.).
\begin{theorem} Assume that all of the characteristic roots of the matrix $A$ have negative real parts. Assume that $F(t,x)=G_{1}(t,x)+G_{2}(t,x)$, where $G_{1}$ and $G_{2}$ are continuous functions satisfying $G_{1}(t,0)=G_{2}(t,0)=0$ and
\begin{align}
\lim_{\left| x\right| \to 0}\frac{\left| G_{1}(t,x)\right| }{\left| x\right| } & =0,\text{ \ \ uniformly in }t;\label{g1}\\
\left| G_{2}(t,x)\right| & \leq g(t)\left| x\right| ,\text{ \ with }\int_{0}^{\infty}g(t)\,\mathrm{d}t<\infty.\label{g2}
\end{align}
Then, the zero solution of
\[
\dot{X}=AX+F(t,X)
\]
is exponentially asymptotically stable; i.e., there exist $\delta>0$, $K>0$ and $\widetilde{\gamma}>0$ such that for every $t_{0}\in\mathbb{R}$ large enough and every $X_{0}\in B(0;\delta):=\{x\in\mathbb{R}^n:|x|<\delta\}$, every solution $X(t)$ to (\ref{primera-aprox}) such that $X(t_{0})=X_{0}$ satisfies
\[
\left| X(t)\right| \leq K\left| X_{0}\right| \mathrm{e}^{-\widetilde{\gamma}(t-t_{0})},\text{ \ \ for all }t\geq t_{0}.
\]
\end{theorem}
\begin{proof} Thanks to assumption (\ref{g1}), we can deduce that there exists $\delta>0$ such that
\[
\left| G_{1}(t,x)\right| \leq\frac{\gamma}{2}\left| x\right| ,\text{ \ \ for all \ }t\in\mathbb{R},\;x\in B(0;\delta).
\]
Now we can restrict ourselves to consider the problem in the domain $\Omega=\mathbb{R}\times B(0;\delta)$.
Thus, given $(t_{0},X_{0})\in\Omega$, choose $X(t)$ a solution of (\ref{primera-aprox}) such that $X(t_{0})=X_{0}$. Then, for all $(t,x)\in\Omega$,
\begin{align*}
\left\langle F(t,x),x\right\rangle & =\left\langle G_{1}(t,x)+G_{2}(t,x),x\right\rangle \\
& \leq\frac{\gamma}{2}\left| x\right| ^{2}+g(t)\left| x\right| ^{2}\\
& \leq\left( \frac{\gamma}{2}+g(t)\right) \left| x\right| ^{2},
\end{align*}
and taking $\lambda(t)=\mathrm{e}^{t}$, $\phi_{1}(t)=0$, $\phi_{2}(t)=\frac{\gamma}{2}+g(t)$, we can easily check that
\begin{align*}
\limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}2(\phi_{2}(s)-\gamma)\,ds}{t} & =\limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}2(g(s)-\frac{\gamma}{2})\,ds}{t}\\
& =-\gamma+\limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}2g(s)\,ds}{t}\\
& \leq-\gamma,
\end{align*}
and thanks to (\ref{estrella}),
\[
\limsup_{t\to \infty}\frac{\log |X(t)|}{t}\leq-\frac{\gamma}{2},
\]
and the proof is complete.
\end{proof}
\noindent\textbf{Remark.} Notice that we only need to assume
\begin{equation*}
\limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}g(s)\,ds}{t}=0
\end{equation*}
instead of the integrability of $g$ on the interval $(0,+\infty)$; consequently, this condition in the theorem can be weakened. Moreover, by a slight modification at the beginning of the proof, the stability result can be deduced by assuming only that
\begin{equation*}
\limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}g(s)\,ds}{t}=r<\gamma\,.
\end{equation*}
\smallskip

Now, let us consider the nonautonomous linear case and its perturbations. Namely, consider the following differential systems:
\begin{gather}
\dot{X}(t)=A(t)X(t) \label{lineal} \\
\dot{Y}(t)=A(t)Y(t)+f(t,Y(t)), \label{linealperturbado}
\end{gather}
where $A\in C(\mathbb{R};\mathcal{L}(\mathbb{R}^{n}))$ and $f\in C(\mathbb{R}^{n+1};\mathbb{R}^{n})$. Let us denote by $X(t;t_{0},X_{0})$ the unique solution to (\ref{lineal}) starting at $X_{0}$ at time $t_{0}$, and by $Y(t;t_{0},X_{0})$ the corresponding one for (\ref{linealperturbado}) (which may not be unique). Assume that there exist $\lambda(t)$ satisfying the assumptions in Definition \ref{definition1}, $T>0$, $C>0$ and $\gamma>0$, such that for all $t_{0}\geq T$, $t\geq t_{0}$ and $X_{0}\in\mathbb{R}^{n}$,
\begin{equation*}
\left| X(t;t_{0},X_{0})\right| \leq C\left| X_{0}\right| \lambda(t-t_{0})^{-\gamma}.
\end{equation*}
Then, we can prove the following result.
\begin{theorem} In the preceding situation, assume that $\left| f(t,x)\right| \leq\alpha(t)$ for all $(t,x)\in\mathbb{R}^{n+1}$, where
\[
\limsup_{t\to \infty}\frac{\log\int_{t_{0}}^{t}\lambda(t-s)^{-\gamma}\alpha(s)\,ds }{\log\lambda(t-t_{0})}\leq-\delta<0\,.
\]
Then,
\[
\limsup_{t\to \infty}\frac{\log\left| Y(t;t_{0},Y_{0})\right| }{\log\lambda(t-t_{0})}\leq-\min\{\gamma,\delta\}.
\]
\end{theorem}
\begin{proof} Observe that if $\Phi(\cdot)$ is a fundamental matrix for the linear system (\ref{lineal}), it follows that
\[
\left\| \Phi(t)\Phi(t_{0})^{-1}\right\| \leq C\lambda(t-t_{0})^{-\gamma},\quad\forall t\geq t_{0}\geq T.
\]
Now, by the variation of constants formula, we can write
\[
Y(t):=Y(t;t_{0},Y_{0})=\Phi(t)\Phi(t_{0})^{-1}Y_{0}+\int_{t_{0}}^{t}\Phi(t)\Phi(s)^{-1}f(s,Y(s))\,ds,
\]
and, consequently,
\begin{align*}
\left| Y(t)\right| & \leq\left\| \Phi(t)\Phi(t_{0})^{-1}\right\| \left| Y_{0}\right| +\int_{t_{0}}^{t}\left\| \Phi(t)\Phi(s)^{-1}\right\| \left| f(s,Y(s))\right| \,ds\\
& \leq C\lambda(t-t_{0})^{-\gamma}\left| Y_{0}\right| +\int_{t_{0}}^{t}C\lambda(t-s)^{-\gamma}\alpha(s)\,ds.
\end{align*}
Given $0<\varepsilon<\delta$, we can get, for $t$ large enough, that
\[
\int_{t_{0}}^{t}\lambda(t-s)^{-\gamma}\alpha(s)\,ds\leq\lambda(t-t_{0})^{-(\delta-\varepsilon)},
\]
and, thus,
\[
\left| Y(t)\right| \leq\widetilde{C}\lambda(t-t_{0})^{-\min\{ \gamma, (\delta-\varepsilon)\} },\text{ for }t\geq t_{0}\text{ large enough,}
\]
which immediately implies the result.
\end{proof}
\subsection{Perturbed nonlinear systems} We shall now prove a similar result, but considering perturbations of a nonlinear differential system. However, for this more general case, we need the decay function $\lambda(t)$ to satisfy the following sub-exponential condition:
\begin{equation}
\lambda(t+s)\leq\lambda(t)\lambda(s),\quad\forall t,s\in\mathbb{R}^{+}. \label{lambda}
\end{equation}
In this respect, consider the following differential systems
\begin{gather}
\dot{X}=f(t,X), \label{nonlinear} \\
\dot{Y}=f(t,Y)+g(t,Y), \label{nonlinear-perturbed}
\end{gather}
where $f,g$ are continuous functions from $\mathbb{R}^{n+1}$ to $\mathbb{R}^{n}$. Given $(t_{0},x)\in\mathbb{R}^{n+1}$, let us denote by $X(t;t_{0},x)$ and $Y(t;t_{0},x)$ solutions to (\ref{nonlinear}) and (\ref{nonlinear-perturbed}) respectively, starting at $x$ at time $t_{0}$. We also assume that all of the solutions to these systems are defined in the future. We can now prove the following theorem.
\begin{theorem} Assume that there exist positive constants $C,M,\delta$ and $\gamma$, and nonnegative functions $\alpha(\cdot)$ and $\beta(\cdot)$, such that for all $t_{0}$ large enough (say $t_{0}\geq T$), all $t\geq t_{0}$, every $X_{0}\in\mathbb{R}^{n}$ and every solution $X(t;t_{0},X_{0})$, the following hold:
\begin{subequations}
\begin{gather}
\left| X(t;t_{0},X_{0})\right| \leq C\left| X_{0}\right| \lambda(t-t_{0})^{-\gamma},\text{ \ \ }\forall t\geq t_{0},\label{uno}\\
\left| f(t,x)-f(t,y)\right| \leq\alpha(t)\left| x-y\right| ,\text{ \ \ }\forall t\geq t_{0},x,y\in\mathbb{R}^{n},\label{dos}\\
\left| g(t,x)\right| \leq\beta(t),\text{ \ \ }\forall t\geq t_{0},\label{tres}\\
\int_{t}^{t+1}\alpha(s)\,ds\leq M,\text{ \ \ }\forall t\geq t_{0},\label{cuatro}\\
\limsup_{t\to \infty}\frac{\log\int_{t}^{t+1}\beta(s)\,ds }{\log\lambda(t)}\leq-\delta.\label{cinco}
\end{gather}
\end{subequations}
Then, every solution to (\ref{nonlinear-perturbed}), $Y(t;t_{0},Y_{t_{0}})$, defined in the future satisfies
\[
\limsup_{t\to \infty}\frac{\log\left| Y(t;t_{0},Y_{t_{0}})\right| }{\log\lambda(t)}\leq-\min\{ \gamma,\delta\} .
\]
\end{theorem}
\begin{proof} First of all, we can assume without loss of generality that $C\leq1/4$. Otherwise, we consider the new decay function $\tilde{\lambda}(t)=\left( 4C\right) ^{-1/\gamma}\lambda(t)$, for which (\ref{uno}) now holds with $C$ replaced by $1/4$, and (\ref{cinco}) remains true with the same constant. Once the theorem is proved for this function, it is clear that it also holds for $\lambda$. Let us now take $t_{0}\geq T$ and $Y_{t_{0}}\in\mathbb{R}^{n}$ (fixed), and denote $t_{j}=t_{0}+j$ for $j\in\mathbb{N}$, $Y(t)=Y(t;t_{0},Y_{t_{0}})$ and $Y_{j}=Y(t_{j})$, $j\in\mathbb{N}$.
Firstly, we claim that, given $\varepsilon>0$ arbitrary, there exists $j_{0}(\varepsilon)\in\mathbb{N}$ such that for all $j\geq j_{0}(\varepsilon)$ it follows that
\begin{equation}
\left| Y(t)-X(t;t_{j},Y_{j})\right| \leq\frac{1}{8}\lambda(t_{j})^{-\left( \delta-2\varepsilon\right) },\quad\forall t\in\lbrack t_{j},t_{j+1}].\label{tj}
\end{equation}
Indeed, notice that (\ref{cinco}) implies that, given $\varepsilon>0$, there exists $j_{1}(\varepsilon)\in\mathbb{N}$ such that
\[
\int_{t_{j}}^{t_{j+1}}\beta(s)\,\mathrm{d}s\leq\lambda(t_{j})^{-(\delta-\varepsilon)},\text{ \ \ for all \ }j\geq j_{1}(\varepsilon),
\]
and it is obvious that there exists $j_{2}(\varepsilon)\in\mathbb{N}$ such that
\[
(1+\mathrm{e}^{M})\lambda(t_{j})^{-\varepsilon}<\frac{1}{8}\text{ \ \ for all }j\geq j_{2}(\varepsilon).
\]
Now, we can also write
\begin{align*}
X(t;t_{j},Y_{j}) & =Y_{j}+\int_{t_{j}}^{t}f(s,X(s;t_{j},Y_{j}))\,\mathrm{d}s,\quad\forall t\in\lbrack t_{j},t_{j+1}],\\
Y(t) & =Y_{0}+\int_{t_{0}}^{t}\left[ f(s,Y(s))+g(s,Y(s))\right]\,ds\\
& =Y_{j}+\int_{t_{j}}^{t}\left[ f(s,Y(s))+g(s,Y(s))\right] \,ds,\quad\forall t\in\lbrack t_{j},t_{j+1}].
\end{align*}
Thus, setting $j_{0}(\varepsilon)=\max\{j_{1}(\varepsilon), j_{2}(\varepsilon)\}$, for $j\geq j_{0}(\varepsilon)$ and $t\in\lbrack t_{j},t_{j+1}]$ it follows that
\begin{align*}
\big| Y(t)-X(t;t_{j},Y_{j})\big| & =\Big| \int_{t_{j}}^{t}\left[ f(s,X(s;t_{j},Y_{j}))-f(s,Y(s))-g(s,Y(s))\right] \,\mathrm{d}s\Big| \\
& \leq\int_{t_{j}}^{t}\alpha(s)\Big| Y(s)-X(s;t_{j},Y_{j})\Big| \,ds+\int_{t_{j}}^{t}\beta(s)\,ds,
\end{align*}
and, by the Gronwall lemma,
\begin{align*}
\left| Y(t)-X(t;t_{j},Y_{j})\right| & \leq\int_{t_{j}}^{t_{j+1}}\beta(s)\,\mathrm{d}s\left( 1+\int_{t_{j}}^{t}\exp\left( \int_{s}^{t}\alpha(r)\,\mathrm{d}r\right) \,ds\right) \\
& \leq(1+\mathrm{e}^{M})\lambda(t_{j})^{-(\delta-\varepsilon)}\\
& \leq\frac{1}{8}\lambda(t_{j})^{-(\delta-2\varepsilon)},
\end{align*}
which proves (\ref{tj}). Secondly, we claim that
\begin{equation}
\left| Y(t)-X(t;t_{j},Y_{j})\right| \leq\frac{1}{4}\lambda(t_{j})^{-(\delta-3\varepsilon)},\quad\forall t\in\lbrack t_{j+1},t_{j+2}],\;\forall j\geq j_{0}(\varepsilon).\label{t+1}
\end{equation}
Indeed, notice that for $t\in\lbrack t_{j+1},t_{j+2}]$, $j\geq j_{0}$, it follows that
\begin{align}
\left| Y(t)-X(t;t_{j},Y_{j})\right| & \leq\left| Y(t)-X(t;t_{j+1},Y_{j+1})\right| +\left| X(t;t_{j+1},Y_{j+1})-X(t;t_{j},Y_{j})\right| \nonumber\\
& \leq\frac{1}{8}\lambda(t_{j})^{-(\delta-2\varepsilon)}+\left| X(t;t_{j+1},Y_{j+1})-X(t;t_{j},Y_{j})\right| .\label{claim2}
\end{align}
Now, we denote $v(t)=\left| X(t;t_{j+1},Y_{j+1})-X(t;t_{j},Y_{j})\right| $ and obtain an estimate for this term. Observing that for $t\in\lbrack t_{j+1},t_{j+2}]$
\begin{gather*}
X(t;t_{j+1},Y_{j+1}) =Y_{j+1}+\int_{t_{j+1}}^{t}f(s,X(s;t_{j+1},Y_{j+1}))\,ds,\\
X(t;t_{j},Y_{j}) =X(t_{j+1};t_{j},Y_{j})+\int_{t_{j+1}}^{t}f(s,X(s;t_{j},Y_{j}))\,ds,
\end{gather*}
it is easy to get, by virtue of (\ref{tj}) and (\ref{dos}),
\begin{align*}
v(t) \leq&\left| Y_{j+1}-X(t_{j+1};t_{j},Y_{j})\right| \\
& +\int_{t_{j+1}}^{t}\left| f(s,X(s;t_{j+1},Y_{j+1}))-f(s,X(s;t_{j},Y_{j}))\right| \,ds\\
\leq&\frac{1}{8}\lambda(t_{j})^{-(\delta-2\varepsilon)}+\int_{t_{j+1}}^{t}\alpha(s)v(s)\,ds,
\end{align*}
and the Gronwall lemma obviously implies
\[
v(t)\leq\frac{1}{8}\lambda(t_{j})^{-(\delta-2\varepsilon)}\mathrm{e}^{M}\leq\frac{1}{8}\lambda(t_{j})^{-(\delta-3\varepsilon)}.
\]
Taking into account now this estimate together with (\ref{claim2}), we obtain (\ref{t+1}).
Thirdly, we claim that
\begin{equation}
\left| Y(t)\right| \leq\frac{1}{2}\left( 1+\left| Y_{j_{0}}\right| \right) \lambda(i)^{-\min\{(\delta-3\varepsilon),\gamma\}},\quad t\in\lbrack t_{j_{0}+i},t_{j_{0}+i+1}],\quad i=1,2,\dots \label{claim}
\end{equation}
Let us prove the assertion by induction. Indeed, take $t\in\left[ t_{j_{0}+1},t_{j_{0}+2}\right] $. Then, (\ref{t+1}) and (\ref{uno}) yield
\begin{align*}
\left| Y(t)\right| & \leq\left| Y(t)-X(t;t_{j_{0}},Y_{j_{0}})\right| +\left| X(t;t_{j_{0}},Y_{j_{0}})\right| \\
& \leq\frac{1}{4}\lambda(t_{j_{0}})^{-(\delta-3\varepsilon)}+\frac{1}{4}\left| Y_{j_{0}}\right| \lambda(t-t_{j_{0}})^{-\gamma}\\
& \leq\frac{1}{4}\lambda(1)^{-(\delta-3\varepsilon)}+\frac{1}{4}\left| Y_{j_{0}}\right| \lambda(1)^{-\gamma}\\
& \leq\frac{1}{2}(1+\left| Y_{j_{0}}\right| )\lambda(1)^{-\min\{(\delta-3\varepsilon),\gamma\} },
\end{align*}
and the assertion holds for $i=1$. Assume now that it is true for $i$ and let us prove it for $i+1$. Thus, considering $t\in\left[ t_{j_{0}+i+1},t_{j_{0}+i+2}\right] $, by an argument similar to the one above, and using (\ref{lambda}), it follows that
\begin{align*}
\left| Y(t)\right| \leq&\left| Y(t)-X(t;t_{j_{0}+i},Y_{j_{0}+i})\right| +\left| X(t;t_{j_{0}+i},Y_{j_{0}+i})\right| \\
\leq& \frac{1}{4}\lambda(t_{j_{0}+i})^{-(\delta-3\varepsilon)}+\frac{1}{4}\left| Y_{j_{0}+i}\right| \lambda(t-t_{j_{0}+i})^{-\gamma}\\
\leq&\frac{1}{4}\lambda(t_{j_{0}+i})^{-(\delta-3\varepsilon)}+\frac{1}{4}\left( \frac{1}{2}(1+\left| Y_{j_{0}}\right| )\lambda(i)^{-\min\{(\delta-3\varepsilon),\gamma\} }\right) \lambda(1)^{-\gamma}\\
\leq&\frac{1}{4}\lambda(i+1)^{-\min\{ (\delta-3\varepsilon), \gamma\} }\\
&+\frac{1}{4}\left( \frac{1}{2}(1+\left| Y_{j_{0}}\right| )\lambda(i)^{-\min\{ (\delta-3\varepsilon), \gamma\} }\right) \lambda(1)^{-\min\{ (\delta-3\varepsilon), \gamma\} }\\
\leq&\frac{1}{4}\lambda(i+1)^{-\min\{ (\delta-3\varepsilon), \gamma\} } +\frac{1}{4}\left( \frac{1}{2}(1+\left| Y_{j_{0}}\right| )\right) \lambda(i+1)^{-\min\{ (\delta-3\varepsilon), \gamma\} }\\
\leq&\frac{1}{2}\left[ 1+\left| Y_{j_{0}}\right| \right] \lambda(i+1)^{-\min\{ (\delta-3\varepsilon), \gamma\} },
\end{align*}
and our claim is proved. Finally, (\ref{claim}) implies that, for $t\in\lbrack t_{j_{0}+i},t_{j_{0}+i+1}]$ and for all $i\in\mathbb{N}$ large enough,
\[
\frac{\log\left| Y(t)\right| }{\log\lambda(t)}\leq\frac{\log\frac{1}{2}(1+\left| Y_{j_{0}}\right| )}{\log\lambda(t)}-\min\{ (\delta-3\varepsilon), \gamma\} \frac{\log\lambda(i)}{\log\lambda(t)},
\]
which allows us to ensure that
\[
\limsup_{t\to \infty}\frac{\log\left| Y(t;t_{0},Y_{t_{0}})\right| }{\log\lambda(t)}\leq-\min\{ (\delta-3\varepsilon), \gamma\} ,
\]
and, since $\varepsilon>0$ is arbitrary, the proof is complete.
\end{proof}
\noindent\textbf{Remark.} Notice that a more general result can also be proved by a suitable modification of the preceding proof. For instance, if $g$ satisfies
\begin{equation*}
\left| g(t,x)\right| \leq\beta_{1}(t)+\beta_{2}(t)\left| x\right| ,\quad\forall(t,x)\in\mathbb{R}^{n+1},
\end{equation*}
instead of (\ref{tres}) in the theorem, where $\beta_{1}$ satisfies (\ref{cinco}) and $\beta_{2}$ is assumed to fulfil
\begin{equation*}
\lim_{t\to \infty}\int_{t}^{t+1}\beta_{2}(s)\,\mathrm{d}s=0,
\end{equation*}
then the assertion of the preceding theorem also holds.
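\smallskip

\noindent\textbf{Example 6. }As a simple illustration of this theorem (and as a check against Example 3), consider once more the scalar equation
\begin{equation*}
\dot{Y}=-Y+\frac{1}{1+t},
\end{equation*}
which corresponds to $f(t,y)=-y$ and $g(t,y)=(1+t)^{-1}$, with the decay function $\lambda(t)=1+t$, which obviously satisfies (\ref{lambda}). Since $X(t;t_{0},X_{0})=\mathrm{e}^{-(t-t_{0})}X_{0}$ and $(1+s)\mathrm{e}^{-s}\leq1$ for all $s\geq0$, assumption (\ref{uno}) holds with $C=1$ and $\gamma=1$; (\ref{dos}) and (\ref{cuatro}) hold with $\alpha(t)=1$ and $M=1$; and (\ref{tres}) and (\ref{cinco}) hold with $\beta(t)=(1+t)^{-1}$ and $\delta=1$, since $\int_{t}^{t+1}\beta(s)\,ds=\log\frac{2+t}{1+t}$ behaves as $(1+t)^{-1}$ when $t\to\infty$. Therefore,
\begin{equation*}
\limsup_{t\to \infty}\frac{\log\left| Y(t)\right| }{\log(1+t)}\leq-1;
\end{equation*}
that is, the solutions decay to zero at least as $t^{-1}$. This improves the rate $t^{-1/2}$ obtained in Example 3 and, in view of the explicit expression of the solutions given in Section 2, it is in fact the exact decay rate for this equation.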
\section{Conclusions and final remarks} We have developed a theory on general decay properties of solutions of differential systems by using the Lyapunov Second Method and some kind of first approximation results for perturbed systems. In particular, in order to prove our main results, we have also introduced the generalized Lyapunov exponents with respect to general positive functions, which have permitted us to establish some criteria for general decay of solutions. However, a very interesting question concerns the possibility of determining how fast some closed sets (e.g. attractors) attract the solutions of a differential system. Some results on this topic have previously been proved by Eden et al. \cite{eden et al} in the case of exponential attraction. But, to our knowledge, nothing is known about weaker kinds of attraction (e.g. polynomial) or stronger ones (super-exponential). On the other hand, our treatment could also be extended to the infinite-dimensional context, i.e. to partial differential equations, and some similar results could be proved for functional differential equations. We plan to investigate these topics in subsequent works. \smallskip

\noindent\textbf{Acknowledgments.} I wish to express my sincere gratitude to the referee for the helpful and interesting comments and suggestions on this paper. I also want to thank Professors J. Real, J. A. Langa and M. J. Garrido for their helpful discussions and suggestions.

\begin{thebibliography}{99}
\bibitem{brauer-nohel} F. Brauer and J. A. Nohel, \textit{The Qualitative Theory of Ordinary Differential Equations}, Dover, New York, (1989).
\bibitem{coddington-levinson} E. A. Coddington and N. Levinson, \textit{Theory of Ordinary Differential Equations}, McGraw-Hill, New York, (1955).
\bibitem{eden et al} A. Eden, C. Foias, B. Nicolaenko and R. Temam, \textit{Exponential Attractors for Dissipative Evolution Equations}, Masson, Paris, (1994).
\bibitem{haraux} A. Haraux, \textit{Syst\`{e}mes dynamiques dissipatifs et applications}, Masson, Paris, (1991).
\bibitem{hirsch-smale} M. W. Hirsch and S. Smale, \textit{Ecuaciones Diferenciales, Sistemas Din\'{a}micos y \'{A}lgebra Lineal}, Alianza Editorial, Madrid, (1983).
\bibitem{kloeden} P. E. Kloeden, A Lyapunov function for pullback attractors of nonautonomous differential equations, \textit{Electron. J. Diff. Eqns.}, Conference \textbf{05} (2000), 91--102, http://ejde.math.swt.edu/conf-proc/05/toc.html
\bibitem{lasalle} J. P. LaSalle, Stability theory of ordinary differential equations, \textit{J. Diff. Eqns.} 4 (1968), 57--65.
\bibitem{lasalle76} J. P. LaSalle, Stability of nonautonomous systems, \textit{Nonlinear Anal.} 1 (1976), 83--91.
\bibitem{yoshizawa66} T. Yoshizawa, \textit{Stability Theory by Liapunov's Second Method}, The Mathematical Society of Japan, Tokyo, (1966).
\bibitem{yoshizawa82} T. Yoshizawa, Asymptotic behaviour of solutions in nonautonomous systems, in \textit{Trends in Theory and Practice of Nonlinear Differential Equations} (Arlington, Texas, 1982), Lecture Notes in Pure and Appl. Math. 90, 553--562.
\end{thebibliography}
\end{document}