3/2$, then $ \gamma=(-3+2q)/2$.\\ If $2(p-q)+2\leq0$, then $\gamma=p-1/2$.\smallskip \noindent\textbf{Example 5. } Finally, we exhibit a situation with a more general decay rate. To this end, consider \begin{equation*} \frac{\mathrm{d}X}{\mathrm{d}t}=\frac{-2X}{\left( 1+t\right) \log\left( 1+t\right) }+\frac{1}{\left( 1+t\right) \left[ \log\left( 1+t\right) \right] ^{2}}. \end{equation*} By using the Lyapunov function $V(t,x)=x^{2}\log\left( 1+t\right) $ (notice that we are considering $\lambda(t)=\log\left( 1+t\right) $), we have \begin{align*} \dot{V}(t,x) & =\frac{1}{1+t}x^{2}+2x\log\left( 1+t\right) \left( \frac{-2x}{ \left( 1+t\right) \log\left( 1+t\right) }+\frac{1}{\left( 1+t\right) \left[ \log\left( 1+t\right) \right] ^{2}}\right) \\ & \leq\frac{-3x^{2}}{1+t}+\frac{2x}{\left( 1+t\right) \log\left( 1+t\right) } \\ & \leq\frac{-2x^{2}}{1+t}+\frac{1}{\left( 1+t\right) \left[ \log\left( 1+t\right) \right] ^{2}}, \end{align*} and we can set $\varphi_{1}(t)=\frac{1}{\left( 1+t\right) \left[ \log\left( 1+t\right) \right] ^{2}}$ and $\varphi_{2}(t)=0$. Now, it is not difficult to check that (c) in theorem \ref{theorem2} is fulfilled with $\theta=\nu=0$ and, consequently, $\gamma=1/2.$ \section{Perturbed systems} In this Section, we shall investigate some stability properties of solutions of perturbed differential systems. Our aim is to prove some results which, in particular, ensure the transference of decay properties from the unperturbed system to the perturbed one. In other words, if we know that the solutions of a differential system decay to zero with a certain decay rate, under which conditions can we guarantee that the perturbed system has a similar property? Firstly, we will consider the perturbed linear differential system, and then we will treat a more general nonlinear one.
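Before doing so, we illustrate the transference question with a minimal numerical sketch (the scalar system, the perturbation $\mathrm{e}^{-2t}$ and the explicit Euler scheme are our own illustrative choices, not taken from the results of this paper): the solutions of $\dot{X}=-X$ decay like $\mathrm{e}^{-t}$, and the perturbed equation $\dot{Y}=-Y+\mathrm{e}^{-2t}$ inherits this decay order.

```python
import math

def euler(f, t0, y0, t1, dt=1e-3):
    """Explicit Euler integration of y' = f(t, y) on [t0, t1]."""
    t, y = t0, y0
    while t < t1:
        y += dt * f(t, y)
        t += dt
    return y

# Unperturbed system: X' = -X, whose solutions decay like e^{-t},
# i.e. with decay rate lambda(t) = e^t of order 1.
# Perturbation: F(t) = e^{-2t} decays faster than the linear part,
# so the decay order should transfer to Y' = -Y + e^{-2t}.
T = 20.0
y_T = euler(lambda t, y: -y + math.exp(-2.0 * t), 0.0, 1.0, T)

# Generalized Lyapunov exponent: log|Y(T)| / log lambda(T) = log|Y(T)| / T.
# The exact solution here is Y(t) = 2 e^{-t} - e^{-2t}, so this ratio
# is close to -1 for large T.
order = math.log(abs(y_T)) / T
```

The computed `order` comes out close to $-1$, in agreement with the kind of transference results proved in this section for the linear case.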
\subsection{The perturbed linear case} Consider the linear differential system \begin{equation} \dot{X}=A(t)X, \label{linear} \end{equation} where $A\in C(\mathbb{R};\mathcal{L}(\mathbb{R}^{n}))$, i.e., $A(t)$ is an $n\times n$ matrix whose elements are continuous functions. Let $\lambda(t)$ be a function satisfying the assumptions in the previous Section and let $ \left\langle \cdot,\cdot\right\rangle $ denote the scalar product in $ \mathbb{R}^{n}$ associated with the norm $\left| \cdot\right| $. Let us assume that the zero solution is globally asymptotically stable with decay rate $\lambda(t)$ of order $\gamma>0$; this happens if, for instance, there exists a continuous function $\alpha(t)$ such that \begin{equation*} 2\left\langle A(t)u,u\right\rangle \leq\alpha(t)|u|^{2},\text{ \ \ for all \ }t\in\mathbb{R},u\in\mathbb{R}^{n}, \end{equation*} with \begin{equation*} \underset{t\to +\infty}{\lim\sup}\frac{\int_{0}^{t}\alpha(s)\,ds }{\log\lambda(t)}\leq-2\gamma. \end{equation*} Now, consider the perturbed problem \begin{equation} \dot{X}=A(t)X+F(t,X), \label{perturbed} \end{equation} where $F:\mathbb{R}\times\mathbb{R}^{n}\to \mathbb{R}^{n}$ is a continuous function. We shall prove that, under suitable conditions, every solution to (\ref{perturbed}) decays to zero with the same decay function, although possibly with a different order. To start, consider the linear autonomous case $\dot{X}=AX$. If we assume that the trivial solution is asymptotically stable with some decay rate then, since this is an autonomous system, it must be uniformly asymptotically stable and hence exponentially stable. Thus, all the eigenvalues of the matrix $A$ have negative real parts and, if necessary, by a suitable change of norm and its associated inner product (see Hirsch and Smale \cite[p.
211]{hirsch-smale}), we can ensure that there exists $\gamma >0$ such that $ \left| \exp \left\{ \left( t-t_{0}\right) A\right\} \right| \leq \mathrm{e}^{-\gamma \left( t-t_{0}\right) }$ for all $t_{0}$ and $t\geq t_{0}$. This immediately implies (see again Hirsch and Smale \cite[p. 259]{hirsch-smale}) that \begin{equation*} \left\langle Ax,x\right\rangle \leq -\gamma \left| x\right| ^{2},\text{ \ for all \ }x\in \mathbb{R}^{n}. \end{equation*} Let us now consider the perturbed system \begin{equation} \dot{X}=AX+F(t,X), \label{primera-aprox} \end{equation} where $F:\mathbb{R}\times D\to \mathbb{R}^{n}$ is continuous ($D\subset \mathbb{R}^{n}$ is an open set containing $0$) and satisfies \begin{equation*} \left\langle F(t,x),x\right\rangle \leq \phi _{1}(t)+\phi _{2}(t)\left| x\right| ^{2},\text{ \ \ for all }(t,x)\in \mathbb{R}\times D, \end{equation*} where $\phi _{1}$ and $\phi _{2}$ are continuous functions, $\phi _{1}\geq 0$, fulfilling (for a decay function $\lambda (t)$ as in the previous section) \begin{equation*} \begin{array}{c} \limsup_{t\to \infty }\dfrac{\log \int_{t_{0}}^{t}2\phi _{1}(s)\exp \left\{ -\int_{t_{0}}^{s}2\left( \phi _{2}(r)-\gamma \right) \,\mathrm{d}r\right\} \,ds}{\log \lambda (t)}\leq \nu , \\ \limsup_{t\to \infty }\dfrac{\int_{t_{0}}^{t}2\left( \phi _{2}(s)-\gamma \right) \,ds}{\log \lambda (t)}\leq \theta . \end{array} \end{equation*} Then, it is straightforward to check that the assumptions in theorem \ref{theorem2} are satisfied with $V(t,x)=\left| x\right| ^{2},m=0,p=2,\varphi _{1}(t)=2\phi _{1}(t),\varphi _{2}(t)=2\left( \phi _{2}(t)-\gamma \right) ,$ and therefore \begin{equation} \limsup_{t\to \infty }\frac{\log |X(t)|}{\log \lambda (t)}\leq \frac{\theta +\nu }{2}.
\label{estrella} \end{equation} Now, if $\theta +\nu <0$, asymptotic decay to zero with decay rate $\lambda (t)$ of order at least $-\left( \theta +\nu \right) /2$ holds.\smallskip Although this consequence may seem a trivial result, the important point is that we can now give a very easy proof of two classical results concerning stability in the first approximation, and even weaken their assumptions. In fact, we are referring here to the following general result (see, for instance, Yoshizawa \cite{yoshizawa66} or Brauer and Nohel \cite{brauer-nohel}). \begin{theorem} Assume that all of the characteristic roots of the matrix $A$ have negative real parts. Assume that $F(t,x)=G_{1}(t,x)+G_{2}(t,x)$, where $G_{1}$ and $G_{2}$ are continuous functions satisfying $G_{1}(t,0)=G_{2}(t,0)=0$ and \begin{align} \lim_{\left| x\right| \to 0}\frac{\left| G_{1}(t,x)\right| }{\left| x\right| } & =0,\text{ \ \ uniformly in }t;\label{g1}\\ \left| G_{2}(t,x)\right| & \leq g(t)\left| x\right| ,\text{ \ with } \int_{0}^{\infty}g(t)\,\mathrm{d}t<\infty.\label{g2} \end{align} Then, the zero solution of \[ \dot{X}=AX+F(t,X) \] is exponentially asymptotically stable, i.e. there exist $\delta>0$, $K>0$ and $\widetilde{\gamma}>0$ such that for every $t_{0}\in\mathbb{R}$ large enough and every $X_{0}\in B(0;\delta):=\{x\in\mathbb{R}^n:|x|<\delta\}$, every solution $X(t)$ to (\ref{primera-aprox}) such that $X(t_{0})=X_{0}$ satisfies \[ \left| X(t)\right| \leq K\left| X_{0}\right| \mathrm{e}^{-\widetilde{\gamma}(t-t_{0})},\text{ \ \ for all }t\geq t_{0}. \] \end{theorem} \begin{proof} Thanks to assumption (\ref{g1}), we can deduce that there exists $\delta>0$ such that \[ \left| G_{1}(t,x)\right| \leq\frac{\gamma}{2}\left| x\right| ,\text{ \ \ for all \ }x\in B(0;\delta). \] Now we can restrict ourselves to consider the problem in the domain $\Omega=\mathbb{R}\times B(0;\delta)$.
Thus, given $(t_{0},X_{0})\in\Omega$, choose $X(t)$ a solution of (\ref{primera-aprox}) such that $X(t_{0})=X_{0}.$ Then, for all $(t,x)\in\Omega$, \begin{align*} \left\langle F(t,x),x\right\rangle & =\left\langle G_{1}(t,x)+G_{2} (t,x),x\right\rangle \\ & \leq\frac{\gamma}{2}\left| x\right| ^{2}+g(t)\left| x\right| ^{2}\\ & \leq\left( \frac{\gamma}{2}+g(t)\right) \left| x\right| ^{2}, \end{align*} and taking $\lambda(t)=\mathrm{e}^{t}$, $\phi_{1}(t)=0$, $\phi_{2}(t)=\frac{\gamma}{2}+g(t)$, we can easily check that \begin{align*} \limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}2(\phi_{2}(s)-\gamma )\,ds}{t} & =\limsup_{t\to \infty}\frac{\int_{t_{0}} ^{t}2(g(s)-\frac{\gamma}{2})\,ds}{t}\\ & =-\gamma+\limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}2g(s)\,ds}{t}\\ & \leq-\gamma, \end{align*} and thanks to (\ref{estrella}) \[ \limsup_{t\to \infty}\frac{\log|X(t)|}{t}\leq-\frac{\gamma}{2}, \] and the proof is complete. \end{proof} \noindent\textbf{Remark.} Notice that we only need to assume \begin{equation*} \limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}g(s)\,ds}{t}=0 \end{equation*} instead of the integrability of $g$ on the interval $(0,+\infty).$ Consequently, this condition can be weakened in the theorem. Moreover, by a slight modification at the beginning of the proof, the stability result can be deduced by assuming only that \begin{equation*} \limsup_{t\to \infty}\frac{\int_{t_{0}}^{t}g(s)\,ds}{t}=r<\gamma\,. \end{equation*} \smallskip Now, let us consider the nonautonomous linear case and its perturbations. Namely, consider the following differential systems: \begin{gather} \dot{X}(t)=A(t)X(t) \label{lineal} \\ \dot{Y}(t)=A(t)Y(t)+f(t,Y(t)), \label{linealperturbado} \end{gather} where $A\in C(\mathbb{R};\mathcal{L}(\mathbb{R}^{n}))$ and $f\in C(\mathbb{R} ^{n+1};\mathbb{R}^{n})$. Let us denote by $X(t;t_{0},X_{0})$ the unique solution to (\ref{lineal}) starting in $X_{0}$ at time $t_{0}$, and by $ Y(t;t_{0},X_{0})$ the corresponding one for (\ref{linealperturbado}) (which may not be unique).
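The weakened condition in the remark above admits, for example, $g(t)=1/(1+t)$, which is not integrable on $(0,+\infty)$ although $\frac{1}{t}\int_{0}^{t}g(s)\,\mathrm{d}s\to0$. Here is a minimal numerical sketch of that situation (the scalar system and the explicit Euler scheme are our own illustrative choices):

```python
import math

def euler(f, t0, x0, t1, dt=1e-3):
    """Explicit Euler integration of x' = f(t, x) on [t0, t1]."""
    t, x = t0, x0
    while t < t1:
        x += dt * f(t, x)
        t += dt
    return x

# Perturbed scalar system: x' = -x + x/(1+t), i.e. A = -1 (so gamma = 1),
# G1 = 0 and |G2(t,x)| <= g(t)|x| with g(t) = 1/(1+t).
# g is NOT integrable on (0, infinity), but (1/t) * int_0^t g(s) ds
# = log(1+t)/t -> 0, so the weakened condition of the remark holds.
T = 30.0
x_T = euler(lambda t, x: -x + x / (1.0 + t), 0.0, 1.0, T)

# Exact solution: x(t) = (1+t) e^{-t}, so log|x(T)|/T tends to -1:
# the exponential decay survives the non-integrable perturbation.
exponent = math.log(abs(x_T)) / T
```

The computed `exponent` stays strictly negative and close to $-1$, consistent with exponential decay being preserved under this non-integrable perturbation.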
Assume that there exist a function $\lambda(t)$ satisfying the assumptions in Definition \ref{definition1} and constants $T>0$, $C>0$ and $\gamma>0$, such that for all $t_{0}\geq T$, $t\geq t_{0}$ and $X_{0}\in\mathbb{R}^{n},$ \begin{equation*} \left| X(t;t_{0},X_{0})\right| \leq C\left| X_{0}\right| \lambda (t-t_{0})^{-\gamma}. \end{equation*} Then, we can prove the following result. \begin{theorem} In the preceding situation, assume that $\left| f(t,x)\right| \leq\alpha(t)$ for all $(t,x)\in\mathbb{R}^{n+1}$, where \[ \limsup_{t\to \infty}\frac{\log\int_{t_{0}}^{t}\lambda(t-s)^{-\gamma }\alpha(s)\,ds }{\log\lambda(t-t_{0})}\leq-\delta<0\,. \] Then, \[ \limsup_{t\to \infty}\frac{\log\left| Y(t;t_{0},Y_{0})\right| } {\log\lambda(t-t_{0})}\leq-\min\{\gamma,\delta\}. \] \end{theorem} \begin{proof} Observe that if $\Phi(\cdot)$ is a fundamental matrix for the linear system (\ref{lineal}), it follows that \[ \left\| \Phi(t)\Phi(t_{0})^{-1}\right\| \leq C\lambda(t-t_{0})^{-\gamma },\forall t\geq t_{0}\geq T. \] Now, by the variation of constants formula, we can write \[ Y(t):=Y(t;t_{0},Y_{0})=\Phi(t)\Phi(t_{0})^{-1}Y_{0}+\int_{t_{0}}^{t} \Phi(t)\Phi(s)^{-1}f(s,Y(s))\,ds, \] and, consequently, \begin{align*} \left| Y(t)\right| & \leq\left\| \Phi(t)\Phi(t_{0})^{-1}\right\| \left| Y_{0}\right| +\int_{t_{0}}^{t}\left\| \Phi(t)\Phi(s)^{-1}\right\| \left| f(s,Y(s))\right| \,ds\\ & \leq C\lambda(t-t_{0})^{-\gamma}\left| Y_{0}\right| +\int_{t_{0}} ^{t}C\lambda(t-s)^{-\gamma}\alpha(s)\,ds. \end{align*} Given $0<\varepsilon<\delta$, for $t$ large enough we have \[ \int_{t_{0}}^{t}\lambda(t-s)^{-\gamma}\alpha(s)\,ds\leq\lambda (t-t_{0})^{-(\delta-\varepsilon)}, \] and, thus, \[ \left| Y(t)\right| \leq\widetilde{C}\lambda(t-t_{0})^{-\min\{ \gamma, (\delta-\varepsilon)\} },\text{ for }t\geq t_{0}\text{ large enough,} \] which immediately implies the result.
\end{proof} \subsection{Perturbed nonlinear systems} We shall now prove a similar result, but considering perturbations of a nonlinear differential system. However, for this more general case, we need the decay function $\lambda(t)$ to satisfy the following sub-exponential condition: \begin{equation} \lambda(t+s)\leq\lambda(t)\lambda(s),\forall t,s\in\mathbb{R}^{+}. \label{lambda} \end{equation} In this respect, consider the following differential systems \begin{gather} \dot{X}=f(t,X), \label{nonlinear} \\ \dot{Y}=f(t,Y)+g(t,Y), \label{nonlinear-perturbed} \end{gather} where $f,g$ are continuous functions from $\mathbb{R}^{n+1}$ to $\mathbb{R}^{n}$. Given $(t_{0},x)\in\mathbb{R}^{n+1}$, let us denote by $X(t;t_{0},x)$ and $Y(t;t_{0},x)$ solutions to (\ref{nonlinear}) and (\ref{nonlinear-perturbed}) respectively, starting in $x$ at time $t_{0}$. We also assume that all of the solutions to these systems are defined in the future. We can now prove the following theorem. \begin{theorem} Assume that there exist positive constants $C,M,\delta$ and $\gamma$, and nonnegative functions $\alpha(\cdot)$ and $\beta(\cdot)$, such that for all $t_{0}$ large enough (say $t_{0}\geq T$), all $t\geq t_{0}$, every $X_{0} \in\mathbb{R}^{n}$ and every solution $X(t;t_{0},X_{0})$, it holds: \begin{subequations} \begin{gather} \left| X(t;t_{0},X_{0})\right| \leq C\left| X_{0}\right| \lambda (t-t_{0})^{-\gamma},\text{ \ \ }\forall t\geq t_{0},\label{uno}\\ \left| f(t,x)-f(t,y)\right| \leq\alpha(t)\left| x-y\right| ,\text{ \ \ }\forall t\geq t_{0},x,y\in\mathbb{R}^{n},\label{dos}\\ \left| g(t,x)\right| \leq\beta(t),\text{ \ \ }\forall t\geq t_{0} ,\label{tres}\\ \int_{t}^{t+1}\alpha(s)\,ds\leq M,\text{ \ \ }\forall t\geq t_{0},\label{cuatro}\\ \limsup_{t\to \infty}\frac{\log\int_{t}^{t+1}\beta(s)\,ds }{\log\lambda(t)}\leq-\delta.\label{cinco} \end{gather} Then, every solution to (\ref{nonlinear-perturbed}), $Y(t;t_{0},Y_{t_{0}}),$ defined in the future satisfies \end{subequations} \[
\limsup_{t\to \infty}\frac{\log\left| Y(t;t_{0},Y_{t_{0}})\right| }{\log\lambda(t)}\leq-\min\{ \gamma,\delta\} . \] \end{theorem} \begin{proof} First of all, we can assume without loss of generality that $C\leq1/4$. Otherwise, we consider the new decay function $\tilde{\lambda}(t)=\left( 4C\right) ^{-1/\gamma}\lambda(t)$, for which (\ref{uno}) now holds with $C$ replaced by $1/4$, while (\ref{cinco}) remains true with the same constant. Once the theorem is proved for this function, it is clear that it also holds for $\lambda.$ Let us now take $t_{0}\geq T$ and $Y_{t_{0}}\in\mathbb{R}^{n}$ (fixed), and denote $t_{j}=t_{0}+j$ for $j\in\mathbb{N}$, $Y(t)=Y(t;t_{0},Y_{t_{0}})$ and $Y_{j}=Y(t_{j}),j\in\mathbb{N}$. Firstly, we claim that, given $\varepsilon>0$ arbitrary, there exists $j_{0}(\varepsilon)\in\mathbb{N}$ such that for all $j\geq j_{0}(\varepsilon)$ it follows that \begin{equation} \left| Y(t)-X(t;t_{j},Y_{j})\right| \leq\frac{1}{8}\lambda(t_{j})^{-\left( \delta-2\varepsilon\right) },\forall t\in\lbrack t_{j},t_{j+1}].\label{tj} \end{equation} Indeed, notice that (\ref{cinco}) implies that, given $\varepsilon>0$, there exists $j_{1}(\varepsilon)\in\mathbb{N}$ such that \[ \int_{t_{j}}^{t_{j+1}}\beta(s)\,\mathrm{d}s\leq\lambda(t_{j})^{-(\delta -\varepsilon)},\text{ \ \ for all \ }j\geq j_{1}(\varepsilon), \] and it is obvious that there exists $j_{2}(\varepsilon)\in\mathbb{N}$ such that \[ (1+\mathrm{e}^{M})\lambda(t_{j})^{-\varepsilon}<\frac{1}{8}\text{ \ \ for all }j\geq j_{2}(\varepsilon). \] Now, we can also write \begin{align*} X(t;t_{j},Y_{j}) & =Y_{j}+\int_{t_{j}}^{t}f(s,X(s;t_{j},Y_{j}))\,\mathrm{d}s,\forall t\in\lbrack t_{j},t_{j+1}],\\ Y(t) & =Y_{0}+\int_{t_{0}}^{t}\left[ f(s,Y(s))+g(s,Y(s))\right]\,ds\\ & =Y_{j}+\int_{t_{j}}^{t}\left[ f(s,Y(s))+g(s,Y(s))\right] \,ds,\forall t\in\lbrack t_{j},t_{j+1}].
\end{align*} Thus, setting $j_{0}(\varepsilon)=\max\{j_{1}(\varepsilon), j_{2}(\varepsilon)\}$, for $j\geq j_{0}(\varepsilon)$ and $t\in\lbrack t_{j},t_{j+1}]$ it follows that \begin{align*} \big| Y(t)-X(t;t_{j},Y_{j})\big| & =\Big| \int_{t_{j}}^{t}\left[ f(s,X(s;t_{j},Y_{j}))-f(s,Y(s))-g(s,Y(s))\right] \,\mathrm{d}s\Big| \\ & \leq\int_{t_{j}}^{t}\alpha(s)\Big| Y(s)-X(s;t_{j},Y_{j})\Big| \,ds+\int_{t_{j}}^{t}\beta(s)\,ds, \end{align*} and, by the Gronwall lemma, \begin{align*} \left| Y(t)-X(t;t_{j},Y_{j})\right| & \leq\int_{t_{j}}^{t_{j+1}} \beta(s)\,\mathrm{d}s\left( 1+\int_{t_{j}}^{t}\exp\left( \int_{s} ^{t}\alpha(r)\,\mathrm{d}r\right) \,ds\right) \\ & \leq(1+\mathrm{e}^{M})\lambda(t_{j})^{-(\delta-\varepsilon)}\\ & \leq\frac{1}{8}\lambda(t_{j})^{-(\delta-2\varepsilon)}, \end{align*} which proves (\ref{tj}). Secondly, we claim that \begin{equation} \left| Y(t)-X(t;t_{j},Y_{j})\right| \leq\frac{1}{4}\lambda(t_{j} )^{-(\delta-3\varepsilon)},\forall t\in\lbrack t_{j+1},t_{j+2}],\forall j\geq j_{0}(\varepsilon).\label{t+1} \end{equation} Indeed, notice that for $t\in\lbrack t_{j+1},t_{j+2}]$ and $j\geq j_{0}$ it follows that \begin{align} \left| Y(t)-X(t;t_{j},Y_{j})\right| & \leq\left| Y(t)-X(t;t_{j+1} ,Y_{j+1})\right| +\left| X(t;t_{j+1},Y_{j+1})-X(t;t_{j},Y_{j})\right| \nonumber\\ & \leq\frac{1}{8}\lambda(t_{j})^{-(\delta-2\varepsilon)}+\left| X(t;t_{j+1},Y_{j+1})-X(t;t_{j},Y_{j})\right| .\label{claim2} \end{align} Now, we denote $v(t)=\left| X(t;t_{j+1},Y_{j+1})-X(t;t_{j},Y_{j})\right| $ and obtain an estimate for this term.
Observing that for $t\in\lbrack t_{j+1},t_{j+2}]$ \begin{gather*} X(t;t_{j+1},Y_{j+1}) =Y_{j+1}+\int_{t_{j+1}}^{t}f(s,X(s;t_{j+1} ,Y_{j+1}))\,ds,\\ X(t;t_{j},Y_{j}) =X(t_{j+1};t_{j},Y_{j})+\int_{t_{j+1}}^{t}f(s,X(s;t_{j} ,Y_{j}))\,ds, \end{gather*} it is easy to get, by virtue of (\ref{tj}) and (\ref{dos}), \begin{align*} v(t) \leq&\left| Y_{j+1}-X(t_{j+1};t_{j},Y_{j})\right| \\ & +\int_{t_{j+1}}^{t}\left| f(s,X(s;t_{j+1},Y_{j+1}))-f(s,X(s;t_{j} ,Y_{j}))\right| \,ds\\ \leq&\frac{1}{8}\lambda(t_{j})^{-(\delta-2\varepsilon)}+\int_{t_{j+1}} ^{t}\alpha(s)v(s)\,ds, \end{align*} and the Gronwall lemma implies \[ v(t)\leq\frac{1}{8}\lambda(t_{j})^{-(\delta-2\varepsilon)}\mathrm{e}^{M} \leq\frac{1}{8}\lambda(t_{j})^{-(\delta-3\varepsilon)}. \] Combining this estimate with (\ref{claim2}), we obtain (\ref{t+1}). Thirdly, we claim that \begin{equation} \left| Y(t)\right| \leq\frac{1}{2}\left( 1+\left| Y_{j_{0}}\right| \right) \lambda(i)^{-\min\{(\delta-3\varepsilon),\gamma\}}\text{, }t\in\lbrack t_{j_{0}+i},t_{j_{0}+i+1}],\text{ } i=1,2,\dots \label{claim} \end{equation} Let us prove this assertion by induction. Indeed, take $t\in\left[ t_{j_{0} +1},t_{j_{0}+2}\right] $. Then, (\ref{t+1}) and (\ref{uno}) yield \begin{align*} \left| Y(t)\right| & \leq\left| Y(t)-X(t;t_{j_{0}},Y_{j_{0}})\right| +\left| X(t;t_{j_{0}},Y_{j_{0}})\right| \\ & \leq\frac{1}{4}\lambda(t_{j_{0}})^{-(\delta-3\varepsilon)}+\frac{1} {4}\left| Y_{j_{0}}\right| \lambda(t-t_{j_{0}})^{-\gamma}\\ & \leq\frac{1}{4}\lambda(1)^{-(\delta-3\varepsilon)}+\frac{1}{4}\left| Y_{j_{0}}\right| \lambda(1)^{-\gamma}\\ & \leq\frac{1}{2}(1+\left| Y_{j_{0}}\right| )\lambda(1)^{-\min\{(\delta-3\varepsilon),\gamma\} }, \end{align*} and the assertion holds for $i=1$. Assume now that it is true for $i$ and let us prove it for $i+1$.
Thus, considering $t\in\left[ t_{j_{0}+i+1} ,t_{j_{0}+i+2}\right] $, it follows, by an argument similar to the one above and using (\ref{lambda}), that \begin{align*} \left| Y(t)\right| \leq&\left| Y(t)-X(t;t_{j_{0}+i},Y_{j_{0}+i})\right| +\left| X(t;t_{j_{0}+i},Y_{j_{0}+i})\right| \\ \leq& \frac{1}{4}\lambda(t_{j_{0}+i})^{-(\delta-3\varepsilon)}+\frac{1} {4}\left| Y_{j_{0}+i}\right| \lambda(t-t_{j_{0}+i})^{-\gamma}\\ \leq&\frac{1}{4}\lambda(t_{j_{0}+i})^{-(\delta-3\varepsilon)}+\frac{1} {4}\left( \frac{1}{2}(1+\left| Y_{j_{0}}\right| )\lambda(i)^{-\min\{(\delta-3\varepsilon),\gamma\} }\right) \lambda(1)^{-\gamma}\\ \leq&\frac{1}{4}\lambda(i+1)^{-\min\{ (\delta-3\varepsilon), \gamma\} }\\ &+\frac{1}{4}\left( \frac{1}{2}(1+\left| Y_{j_{0}}\right| )\lambda(i)^{-\min\{ (\delta-3\varepsilon), \gamma\} }\right) \lambda(1)^{-\min\{ (\delta-3\varepsilon), \gamma\} }\\ \leq&\frac{1}{4}\lambda(i+1)^{-\min\{ (\delta-3\varepsilon), \gamma\} } +\frac{1}{4}\left( \frac{1}{2}(1+\left| Y_{j_{0}}\right| )\right) \lambda(i+1)^{-\min\{ (\delta-3\varepsilon), \gamma\} }\\ \leq&\frac{1}{2}\left[ 1+\left| Y_{j_{0}}\right| \right] \lambda(i+1)^{-\min\{ (\delta-3\varepsilon), \gamma\} }, \end{align*} and our claim is proved. Finally, (\ref{claim}) implies that, for $t\in\lbrack t_{j_{0}+i} ,t_{j_{0}+i+1}]$ and for all $i\in\mathbb{N}$ large enough, \[ \frac{\log\left| Y(t)\right| }{\log\lambda(t)}\leq\frac{\log\frac{1} {2}(1+\left| Y_{j_{0}}\right| )}{\log\lambda(t)}-\min\{ (\delta-3\varepsilon), \gamma\} \frac{\log\lambda(i)}{\log\lambda(t)}, \] which allows us to ensure that \[ \limsup_{t\to \infty}\frac{\log\left| Y(t;t_{0},Y_{t_{0}})\right| }{\log\lambda(t)}\leq-\min\{ (\delta-3\varepsilon), \gamma\} , \] and, since $\varepsilon>0$ is arbitrary, the proof is complete. \end{proof} \noindent\textbf{Remark.} Notice that a more general result can also be proved by a suitable modification of the preceding proof.
For instance, if $g$ satisfies \begin{equation*} \left| g(t,x)\right| \leq\beta_{1}(t)+\beta_{2}(t)\left| x\right| ,\forall(t,x)\in\mathbb{R}^{n+1}, \end{equation*} instead of (\ref{tres}) in the theorem, $\beta_{1}$ satisfies (\ref{cinco}), and $\beta_{2}$ is assumed to satisfy \begin{equation*} \lim_{t\to \infty}\int_{t}^{t+1}\beta_{2}(s)\,\mathrm{d}s=0, \end{equation*} then the assertion of the preceding theorem also holds. \section{Conclusions and final remarks} We have developed a theory on general decay properties of solutions of differential systems by using Lyapunov's second method and some first-approximation results for perturbed systems. In particular, in order to prove our main results, we have also introduced generalized Lyapunov exponents with respect to general positive functions, which have permitted us to establish some criteria for general decay of solutions. However, a very interesting question concerns the possibility of determining how fast some closed sets (e.g. attractors) attract the solutions of a differential system. Some results on this topic have previously been proved by Eden et al. \cite{eden et al} in the case of exponential attraction. But, to our knowledge, nothing is known about weaker kinds of attraction (e.g. polynomial) or stronger ones (super-exponential). On the other hand, our treatment could also be extended to the infinite-dimensional context, i.e. to partial differential equations, and similar results could be proved for functional differential equations. We plan to investigate these questions in subsequent works. \smallskip \noindent\textbf{Acknowledgments.} I wish to express my sincere gratitude to the referee for the helpful and interesting comments and suggestions on this paper. I also want to thank Professors J. Real, J. A. Langa and M. J. Garrido for their helpful discussions and suggestions. \begin{thebibliography}{99} \bibitem{brauer-nohel} F. Brauer and J.A.
Nohel, \textit{The qualitative theory of ordinary differential equations}, Dover, New York, 1989. \bibitem{coddington-levinson} E. A. Coddington and N. Levinson, \textit{Theory of Ordinary Differential Equations}, McGraw-Hill, New York, 1955. \bibitem{eden et al} A. Eden, C. Foias, B. Nicolaenko and R. Temam, \textit{Exponential attractors for dissipative evolution equations}, Masson, Paris, 1994. \bibitem{haraux} A. Haraux, \textit{Syst\`{e}mes dynamiques dissipatifs et applications}, Masson, Paris, 1991. \bibitem{hirsch-smale} M.W. Hirsch and S. Smale, \textit{Ecuaciones Diferenciales, Sistemas Din\'{a}micos y \'{A}lgebra Lineal}, Alianza Editorial, Madrid, 1983. \bibitem{kloeden} P.E. Kloeden, A Lyapunov function for pullback attractors of nonautonomous differential equations, \textit{Elect. J. Diff. Eqns.}, Conference \textbf{05} (2000), 91--102, http://ejde.math.swt.edu/conf-proc/05/toc.html \bibitem{lasalle} J.P. LaSalle, Stability theory of ordinary differential equations, \textit{J. Diff. Eqns.} \textbf{4} (1968), 57--65. \bibitem{lasalle76} J.P. LaSalle, Stability of nonautonomous systems, \textit{Nonlinear Anal.} \textbf{1} (1976), 83--91. \bibitem{yoshizawa66} T. Yoshizawa, \textit{Stability Theory by Liapunov's Second Method}, The Mathematical Society of Japan, Tokyo, 1966. \bibitem{yoshizawa82} T. Yoshizawa, Asymptotic behaviour of solutions in nonautonomous systems, in \textit{Trends in Theory and Practice of Nonlinear Differential Equations} (Arlington, Texas, 1982), Lecture Notes in Pure and Appl. Math. \textbf{90}, 553--562. \end{thebibliography} \end{document}