\documentclass[twoside]{article} \usepackage{amssymb} % font used for R in Real numbers \pagestyle{myheadings} \markboth{\hfil Some remarks on the Melnikov function \hfil EJDE--2002/13} {EJDE--2002/13\hfil Flaviano Battelli \& Michal Fe\v ckan \hfil} \begin{document} \title{\vspace{-1in}\parbox{\linewidth}{\footnotesize\noindent {\sc Electronic Journal of Differential Equations}, Vol. {\bf 2002}(2002), No. 13, pp. 1--29. \newline ISSN: 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu \newline ftp ejde.math.swt.edu (login: ftp)} \vspace{\bigskipamount} \\ % Some remarks on the Melnikov function % \thanks{ {\em Mathematics Subject Classifications:} 34C23, 34C37. \hfil\break\indent {\em Key words:} Melnikov function, residues, Fourier coefficients. \hfil\break\indent \copyright 2002 Southwest Texas State University. \hfil\break\indent Submitted November 27, 2001. Published February 7, 2002. \hfil\break\indent F. Battelli was partially supported by G.N.A.M.P.A. - INdAM (Italy). \hfil\break\indent M. Fe\v ckan was partially supported by G.N.A.M.P.A. - INdAM (Italy) and by \hfil\break\indent Grant GA-MS 1/6179/00 } } \date{} % \author{Flaviano Battelli \& Michal Fe\v ckan} \maketitle \begin{abstract} We study the Melnikov function associated with a periodic perturbation of a differential equation having a homoclinic orbit. Our main interest is the characterization of perturbations that give rise to vanishing or non-vanishing of the Melnikov function. For this purpose we show that, in some cases, the Fourier coefficients of the Melkinov function can be evaluated by means of the calculus of residues. We apply this result, among other things, to the construction of a second-order equation whose Melnikov function vanishes identically for any $C^{1}$, $2\pi$-periodic perturbation. Then we study the second order Melnikov function of the perturbed equation, and prove it is non-vanishing for a large class of perturbations. \end{abstract} \newcommand{\ds}{\displaystyle} \newcommand{\Arg}{\mathop{\rm Arg}} \newcommand{\Log}{\mathop{\rm Log}} \newtheorem{theorem}{Theorem}[section] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{corollary}[theorem]{Corollary} \renewcommand{\theequation}{\thesection.\arabic{equation}} \catcode`@=11 \@addtoreset{equation}{section} \catcode`@=12 \section{Introduction} Melnikov's method has shown to be an easy and effective method to detect chaotic dynamics in differential equations. The starting point is an autonomous system $\dot x = f(x)$, where $x$ belongs to an open subset $\Omega\subset\mathbb{R}^n$, having a hyperbolic fixed point $x_0$ and a {\it non-degenerate} homoclinic orbit $\phi (t)$, that is a non constant solution $\phi (t)$ such that $\ds\lim_{t\to\pm\infty}\phi(t) = x_0$ and $\dot\phi (t)$ spans the space of bounded solutions of the variational system \begin{equation} \dot x = f'(\phi(t)) x. \label{eq:1} \end{equation} Actually some extensions to the case where the variational system (\ref{eq:1}) has a higher dimensional space of bounded solutions have also been given in the literature, see for example \cite{BL,F,G}, however we are not interested in such generalizations here. 
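Let us also point out that for a second order equation $\ddot x = f(x)$ on $\mathbb{R}$, with $f\in C^{1}$, $f(0)=0$, $f'(0)>0$, having a solution $p(t)$ homoclinic to the origin (this is the situation considered in Sections 3 and 4 below), the non-degeneracy condition is automatically satisfied: writing the equation as a first order planar system, the variational system (\ref{eq:1}) along $\phi(t)=(p(t),\dot p(t))$ has exponential dichotomies on both half lines with one-dimensional stable and unstable subspaces, so the solutions bounded on the whole line form a one-dimensional space, which is therefore spanned by $\dot\phi(t)$.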
Then, associated to a given time periodic sufficiently smooth perturbation $\varepsilon h(t,x,\varepsilon)$, with $\varepsilon$ sufficiently small, there is the so called Melnikov function: $$ M(\alpha) := \int_{-\infty}^{+\infty} \psi^{*}(t)h(t+\alpha,\phi(t),0) {\rm d} t $$ with $\psi (t)$ being the unique (up to a multiplicative constant) bounded solution of the variational system $$ \dot x = -f'(\phi (t))^{*} x. $$ Note that $M(\alpha)$ is a periodic function having the same period as $h(t,x,\varepsilon)$. The basic result states that $M(\alpha)$ gives a kind of $O(\varepsilon)$-measure of the distance between the stable and unstable manifolds of the (unique) hyperbolic periodic solution $x_0(t,\varepsilon)$ of the perturbed system \begin{equation} \dot x = f(x) + \varepsilon h(t+\alpha,x,\varepsilon) \label{eq:2} \end{equation} which is at a $O(\varepsilon)$-distance from $x_0$ (see \cite{H}). Thus if $M(\alpha)$ has a simple zero at some points, the implicit function theorem implies that these two manifolds intersect transversally along a solution $\phi(t,\varepsilon)$ of (\ref{eq:2}) which is homoclinic to $x_0(t,\varepsilon)$. This transversality implies, by the classical Smale horseshoe construction (see \cite{S}), that a suitable iterate of the Poincar\'e map of the perturbed system exhibits chaotic behavior (for a more analytical proof of this fact see \cite{P}). In this paper, we will mainly consider the case where $h(t,x,0)=q(t)$ is a $T$-periodic perturbation independent of $x$, although in the next Section some results are derived for the more general case. We also assume that $q(t)$ is $C^{1}$. Our first remark is that the Melnikov function is a bounded linear map from the space of $T$-periodic functions into itself, as it can be easily checked using the fact that $|\psi (t)| \le Ce^{-\sigma |t|}$, for some positive real number $\sigma$. Moreover the average of $M(\alpha)$ is: $$ \bar{M} = \int_{-\infty}^{+\infty} \psi^{*}(t) {\rm d} t \cdot \bar{q}. $$ Now, in many interesting cases, for example when one deals with a second order conservative equation on $\mathbb{R}$, one has $$ \int_{-\infty}^{+\infty} \psi^{*}(t) {\rm d} t = 0 $$ so that $\bar{M} =0$. In this case the Melnikov function can either be zero or there are $\alpha_1$ and $\alpha_2$ such that $M(\alpha_1) < 0 < M(\alpha_2)$. This means that the Brouwer degree of $M(\alpha)$ in the interval whose end points are $\alpha_1$ and $\alpha_2$ is different from zero. This, in turns, implies a kind of chaotic behavior of some iterate of the Poincar\'e map (see \cite{BF}). This seems to be a good reason to study the kernel of the Melnikov map: $$ q(t)\mapsto \int_{-\infty}^{+\infty} \psi^{*}(t) q(t+\alpha) {\rm d} t. $$ This is the purpose of this paper whose content we now briefly explain. In Section 2 we give a method to evaluate the Melnikov function when $\phi(t) = \Phi(e^{t})$ for some rational function $\Phi(u)$, $u\in\mathbb{C}$. The method states that the Melnikov function can be evaluated by means of the calculus of residues. Even if the use of the calculus of residues for the study of the splitting of separatrices seems to be a quite standard tool \cite{Ge}, our Theorem \ref{thm1} does not seem to follow directly from previous results. Section 3 is devoted to the study of $M(\alpha)$ for a second order equation on $\mathbb{R}$ with a $T$-periodic perturbation $\varepsilon q(t)$. 
Finally, in Section 4 the results of Section 3 are used to construct some second order equations whose Melnikov map has an infinite dimensional kernel, possibly vanishing on the whole space of ($C^{1}$) $2\pi$-periodic perturbations. For this class of equations we also study the second order Melnikov function $M_2(\alpha)$, that is, the coefficient of $\varepsilon^{2}/2$ in the Taylor expansion of the bifurcation function. We prove that for a large class of perturbations, $M_2(\alpha)$ is not identically zero and changes sign provided a certain symmetry condition is satisfied. This fact is important because when the first Melnikov function is identically zero, it is $M_2(\alpha)$ that determines the chaotic behavior of the system. \section{Melnikov Function and Calculus of Residues} Given the system $\dot x=f(x)+\varepsilon h(t,x,\varepsilon)$, $x\in\Omega \subset\mathbb{R}^n$, $|\varepsilon|<2\varepsilon_0$, $t\in\mathbb{R}$, such that $\dot x= f(x)$ has a non--degenerate homoclinic orbit $\phi(t)$ whose closure is contained in $\Omega$ and $h(t,x,\varepsilon)$ is a $C^{1}$-function, bounded together with its derivatives on $\mathbb{R}\times\Omega\times [-\varepsilon_0,\varepsilon_0]$, the Melnikov function of the system is given by $$ M(\alpha)=\int_{-\infty}^{+\infty}\psi^*(t)h(t+\alpha,\phi(t),0){\rm d} t $$ where $\psi (t)$ is the unique, up to a multiplicative constant, bounded solution of the variational system $\dot y=-f^\prime(\phi (t))^* y(t)$. When $h(t,x,\varepsilon)= h(t+T,x,\varepsilon)$, $T>0$, the existence of a simple zero of $M(\alpha)$ implies the existence of a transversal homoclinic orbit for the Poincar\'e (period $T$) map of the system $\dot x=f(x)+\varepsilon h(t,x,\varepsilon)$ with the induced chaotic behavior. Here we assume that \begin{description} \item{(a)} $\phi(t)=\Phi(e^t)$, where $\Phi(u)$ is a rational function on $\mathbb{C}$ such that $\Phi(u)\to 0$, and $\Phi(1/u) \to 0$ as $u\to 0$; \item{(b)} $\psi (t) =e^t \Psi(e^t)$, where $\Psi(u)u\to 0$ as $|u|\to +\infty$, and $\Psi(u)$ is a rational function on $\mathbb{C}$. \end{description} From $h(t,x,\varepsilon)=h(t+T,x,\varepsilon)$ we deduce that $M(\alpha)$ is $T$-periodic. Set $\omega=\frac{2\pi}{T}$ and $M_0(\alpha) = M(\alpha)\chi_{[-T/2,T/2]}(\alpha)$, $h_0(t,x)=h(t,x,0) \chi_{[-T/2,T/2]}(t)$ and, for any $n\in\mathbb{Z}$, consider: \begin{eqnarray*} \hat M_0(n) & = & \ds{1\over T}\int_{-T/2}^{T/2} M_0(\alpha)e^{-in\omega\alpha}{\rm d}\alpha \\ & = & \ds{1\over T}\int_{-T/2}^{T/2} \int_{-\infty}^{+\infty}\psi^*(t)h(t+\alpha,\phi(t),0) e^{-in\omega\alpha}{\rm d} t{\rm d}\alpha \\ & = & \ds\int_{-\infty}^{+\infty}\psi^*(t) {1\over T} \int_{-T/2}^{T/2} h(t+\alpha,\phi(t),0)e^{-in\omega\alpha} {\rm d}\alpha{\rm d} t\\ & = & \ds\int_{-\infty}^{+\infty}e^t \Psi^*(e^t)\hat h(n,\Phi(e^t)) e^{in\omega t}{\rm d} t \end{eqnarray*} where \begin{equation} \hat h(n,x):={1\over T}\int_{-\infty}^{+\infty} h_0(t,x)e^{-in\omega t}{\rm d} t = {1\over T}\int_{-T/2}^{T/2} h(t,x,0)e^{-in\omega t}{\rm d} t \label{eq:3} \end{equation} is the $n$-th Fourier coefficient of $h_0(t,x)$. We assume that \begin{description} \item{(c)} For any $n\in\mathbb{Z}$ the function $\hat h(n,\Phi(x))$ extends to a meromorphic function $\hat h(n,\Phi(u))$ on $\mathbb{C}$ having the same poles as $\Phi(u)$. \end{description} Thus $$ F(n,u):=\Psi^*(u)\hat h(n,\Phi(u)) $$ is meromorphic in $\mathbb{C}$, for any fixed $n\in\mathbb{Z}$, and its poles are either those of $\Psi(u)$ or those of $\Phi(u)$.
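For instance, in the special case, considered in Section 3 below, where $h(t,x,0)=q(t)$ does not depend on $x$, one simply has $\hat h(n,\Phi(u))=\hat q_{n}$, the $n$-th Fourier coefficient of $q(t)$, so that assumption (c) is trivially satisfied and $F(n,u)=\Psi^{*}(u)\hat q_{n}$ has no poles other than those of $\Psi(u)$.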
Let us make some comments about the function $F(n,u)$. As $\Psi(u)$ and $\Phi(u)$ take real values when $u\in\mathbb{R}_{+}$, the Schwarz reflection principle gives: \begin{equation} \overline{\Psi(\bar u)}=\Psi(u),\quad\hbox{and}\quad \overline{\Phi(\bar u)}=\Phi(u). \label{eq:4} \end{equation} Second, we notice that, since $\overline{\hat h(n,x)}= \hat h(-n,x)$ for any $x\in\Omega\subset\mathbb{R}^n$, we obtain $$ \overline{\hat h(n,\Phi(\bar u))}=\hat h(-n,\Phi(u)) $$ because of the uniqueness of the analytical extension, and hence \begin{equation} \overline{F(n,\bar u)}=F(-n,u). \label{eq:5} \end{equation} From (\ref{eq:4}) it follows that $\Psi(u)$ and $\Phi(u)$ have complex conjugate poles and hence the same holds for $F(n,u)$. Let $w_j=u_j\pm iv_j$, $j=1,\ldots , r$ be the poles of $F(n,u)$ (independent of $n\in\mathbb{Z}$). Note that the $w_j$ do not belong to an angular sector around the positive real half--line, since otherwise $\Psi(e^t)$ would have singularities on the real line. Thus the poles of $F(n,e^z)$ are $z=\Log w_j:=\log |w_j|+i\Arg w_j$ where $\Arg (w_j)\in (\beta,2\pi-\beta )$ for some $\beta >0$ and $\Log w_j$ is the principal value of the logarithm. We remark that $\Arg (\bar w_j)=2\pi - \Arg (w_j)$. Finally, for any $u\in\mathbb{C}\setminus \{0\}$ such that $0 \le \Arg (u) < 2\pi$, we set $$ u^{in\omega} := e^{in\omega\Log u}. $$ Then we integrate the meromorphic function $e^z F(n,e^z) e^{in\omega z}$ on the boundary of the rectangle $\{ -\rho \le \Re z \le \rho, 0 \le \Im z \le 2\pi \}$. For $\rho$ sufficiently large the residue theorem implies: $$ \begin{array}{l} \ds 2\pi i \sum_j \mathop{\rm Res}(e^z F(n,e^z)e^{in\omega z},\Log w_j) = \int_{-\rho}^{\rho}e^t F(n,e^t)e^{in\omega t}{\rm d} t \\ \ds - \int_{-\rho}^{\rho} e^t F(n,e^t)e^{in\omega t}e^{-2\pi n\omega} {\rm d} t + \int_0^{2\pi}e^\rho e^{iy} F(n,e^\rho e^{iy}) e^{in\omega\rho}e^{-n\omega y}i{\rm d} y \\ \ds - \int_0^{2\pi}e^{-\rho}e^{iy} F(n,e^{-\rho}e^{iy}) e^{-in\omega\rho}e^{-n\omega y}i{\rm d} y . \end{array} $$ Now, the last two integrals on the right-hand side tend to zero as $\rho\to+\infty$ uniformly with respect to $n$. Hence, for any $n \neq 0$, we get \begin{eqnarray*} \ds \int_{-\infty}^{+\infty}e^t F(n,e^t)e^{in\omega t}{\rm d} t & = & \ds {2\pi i\over 1-e^{-2\pi n\omega}} \sum_j \mathop{\rm Res}\left( e^z F(n,e^z)e^{in\omega z},\Log w_j \right ) \\ & = & \ds 2\pi i\sum_j \mathop{\rm Res}\left( { F(n,u)u^{in\omega}\over 1-e^{-2\pi n\omega}}, w_j \right ). \end{eqnarray*} Thus we have proved the following. \begin{theorem} \label{thm1} Under the conditions {\rm (a)--(c)} the Fourier coefficients of the Melnikov function $M(\alpha)$ are given by: \begin{equation} \hat M_0(n)=2\pi i\sum_j \mathop{\rm Res}\left( { F(n,u)u^{in\omega}\over 1-e^{-2\pi n\omega}}, w_j \right ) \label{eq:6} \end{equation} for $n\neq 0$, while $$ \hat M_0(0) = {1\over T}\int_{-T/2}^{T/2}M_0(\alpha) {\rm d}\alpha = \int_{-\infty}^{+\infty} \psi^{*}(t) \hat h (0,\phi(t)) {\rm d} t $$ where $\hat h(n,x)$ has been defined in {\rm (\ref{eq:3})}. \end{theorem} Using (\ref{eq:5}) we obtain: $$ \begin{array}{rl} \overline{\hat M_0}(n) & = \ds\overline {\int_{-\infty}^{+\infty}e^t F(n,e^t)e^{in\omega t}{\rm d} t} = \int_{-\infty}^{+\infty}e^t \overline {F(n,e^t)} e^{-in\omega t} {\rm d} t \\ & = \ds\int_{-\infty}^{+\infty}e^t F(-n,e^t)e^{-in\omega t} {\rm d} t = \hat M_0(-n).
\end{array} $$ Finally note that, since $h(t,x)$ is $T$-periodic in $t$ and $C^{1}$, we have $$ \hat M_0(n) = 0 \; \hbox{\rm for any $n\in\mathbb{Z}$ if and only if }\; M_0(\alpha) \equiv 0. $$ We conclude this Section by giving a first example of application of the above result. Consider the Duffing-like equation: \begin{equation} \ddot x + x\left (\frac{x}{k}-1\right ) = \varepsilon [q_1(t)x + q_2(t)\dot x]\label{eq:6.1} \end{equation} where $k>0$ and $q_1(t)$, $q_2(t)$ are $2\pi$-periodic, $C^{1}$ functions. Setting $x_1=x$ and $x_2=\dot x$ we obtain the equivalent system $$ \begin{array}{l} \dot x_1 = x_2 \\ \ds\dot x_2 = x_1\left (1-\frac{x_1}{k}\right ) + \varepsilon [q_1(t)x_1 + q_2(t)x_2]. \end{array} $$ Let $\ds\Phi_0(x) = \frac{6kx}{(x+1)^{2}}$. Then the homoclinic solution of the unperturbed system is given by $\phi(t) = \Phi(e^{t})$ where $$ \Phi(x) =\pmatrix{ \Phi_0(x) \cr x\Phi_0'(x)}. $$ Moreover: $$ \Psi(u) =\pmatrix{ -u\Phi_0''(u) - \Phi_0'(u) \cr \Phi_0'(u)} $$ and $$ h(t,x_1,x_2,\varepsilon) = \pmatrix{ 0 \cr q_1(t)x_1+q_2(t)x_2 }. $$ Thus: $$ F(n,u) = q_{n}^{1}\Phi_0(u)\Phi_0'(u) +q_{n}^{2}u\Phi_0'(u)^2 $$ where $q^{1}_{n}$ and $q^{2}_{n}$ are the Fourier coefficients of $q_1(t)$ and $q_2(t)$, respectively. From Theorem \ref{thm1} we then obtain: $$ \hat M_0(n) = \delta^{1}_{n}q^{1}_{n} + \delta^{2}_{n}q^{2}_{n} $$ where $$ \delta_{n}^{1} = \frac{2\pi i}{1-e^{-2\pi n}} \mathop{\rm Res}(\Phi_0(u)\Phi_0'(u) u^{in},-1) $$ and $$ \delta_{n}^{2} = \frac{2\pi i}{1-e^{-2\pi n}} \mathop{\rm Res}((\Phi_0'(u))^{2}u^{in+1},-1) $$ (note that $-1$ is the unique pole of $\Phi_0(u)$). Hence, using the fact that $\Arg (-1)=\pi$, we obtain the following expressions of $\delta_{n}^{1}$, $\delta_{n}^{2}$ for $n\neq 0$: $$ \delta_{n}^{1} = -3in^2k^2(n^{2}+1)\frac{\pi}{\sinh n\pi} $$ and $$ \delta_{n}^{2} = -\frac{6}{5}nk^2(n^{2}+1)(n^{2}-1)\frac{\pi} {\sinh n\pi}. $$ Thus $\hat M_0(0)=q_0^2\int \limits_{-\infty }^{+\infty}\dot p(t)^2 dt$, where $p(t)=\Phi_0(e^t)$, while, for $n\neq 0$, $\hat M_0(n)=0$ is equivalent to \begin{equation} \frac{q_{n}^{1}}{q_{n}^{2}} = -\frac{\delta_{n}^{2}}{\delta_{n}^{1}} = \alpha_{n} := \frac{2i(n^{2}-1)}{5n}. \label{eq:6.2} \end{equation} Note that, obviously, $\alpha_{-n}=\bar\alpha_{n}$ and then taking, for any integer $n\neq 0$, $q_{n}^{1}=\alpha_{n}q^{2}_{n}$, $q^{2}_0\in\mathbb{R}$, we get the following \begin{corollary} \label{coro1} Given any $2\pi$-periodic function $q_2(t) \in H^3(\mathbb{R})$ with zero mean value, there exists a unique $2\pi$-periodic function with zero mean value, $q_1(t)\in H^2(\mathbb{R})\subset C^1(\mathbb{R})$, such that the Melnikov function of the equation $(\ref{eq:6.1})$ vanishes identically on $\mathbb{R}$. Actually, for $n\neq 0$, the Fourier coefficients of $q_1(t)$ and $q_2(t)$ satisfy the relation $(\ref{eq:6.2})$. The map $q_2(t) \mapsto q_1(t)$ is linear, bounded and its kernel is the space ${\rm span}\, \{\cos t, \sin t \}$. That is, if $q_2(t)\in {\rm span}\, \{ 1, \cos t, \sin t \}$, the Melnikov map of equation $(\ref{eq:6.1})$ does not vanish identically for any nonconstant perturbation $\varepsilon q_1(t)x$.
\end{corollary} For example, if $q_2(t)=\cos 2t$ then $q_1(t)=-\frac{3}{5}\sin 2t$. \section{The case of the second order equation} In this Section we consider the Melnikov function for the second order equation $$ \ddot x = f(x) + \varepsilon q(t) $$ where $x$ belongs to an open interval $I\subset\mathbb{R}$, $f\in C^1(I,\mathbb{R})$, $q(t)$ is a $C^{1}$, $T$-periodic function, and $\ddot x = f(x)$ is assumed to have the hyperbolic fixed point $x=0\in I$ and an associated homoclinic orbit $p(t)\in I$. In this case we have $$ \phi^{*}(t) = (p(t)\quad \dot p(t)), \quad \psi^{*}(t) = (-\ddot p(t)\quad \dot p(t)), \quad h^{*}(t,x) = (0\quad q(t)) $$ and hence $\psi^{*}(t)h(t,x) = \dot p(t) q(t)$. As in the previous section we assume $p(t)=\Phi_0(e^t)$. Then $\dot p(t) = e^t\Phi_0'(e^t)$, and $$ \Phi (u) = \pmatrix{ \Phi_0(u) \cr u\Phi_0'(u)}, \quad \Psi(u) = \pmatrix{-u\Phi_0''(u) -\Phi_0'(u) \cr \Phi_0'(u)}. $$ Note that we have $F(n,u) = \Psi^{*}(u)\hat h(n,\Phi(u))= \Phi_0'(u)\hat q_{n}$ where $\hat q_{n}$ is the $n$-th Fourier coefficient of the periodic function $q(t)$. Thus, for the analysis of the previous Section to be valid, we only need assumption (b) to be satisfied, that is, $\Phi_0'(u)$ is rational (we do not need that $\Phi_0(u)$ satisfies this condition) and $\lim_{u\to\infty}\Phi_0'(u)u = 0$. In any case, for simplicity, we also assume that $\Phi_0(u)$ is rational with the same poles as $\Phi_0'(u)$. Next: $$ \hat M_0(0) = \int_{-\infty}^{+\infty} \dot p(t){\rm d} t \cdot \hat q_0 = 0 $$ because $p(t)$ is homoclinic, and from (\ref{eq:6}) we obtain, for $n\in\mathbb{Z}$, $n\neq 0$: $$ \hat M_0(n) = 2\pi i\sum_{w_j} \mathop{\rm Res}\left( {\Phi_0'(u) \hat q_n u^{in\omega}\over 1-e^{-2\pi n\omega}}, w_j \right ) =\delta_{n}\hat q_n $$ where $w_j$ are the poles of $\Phi_0(u)$ and \begin{equation} \delta_{n} = {2\pi i\over 1-e^{-2\pi n\omega}}\sum_{w_j} \mathop{\rm Res}( \Phi_0'(u)u^{in\omega}, w_j). \label{eq:7} \end{equation} Now, let $\gamma_j$ be a circle around $w_j$ such that no other pole $w_{i}$, $i\neq j$, is inside $\gamma_j$. We have, integrating by parts: \begin{equation} 2\pi i \mathop{\rm Res}( \Phi_0'(u)u^{in\omega}, w_j) = \int_{\gamma_j}\Phi_0'(u) u^{in\omega}{\rm d} u = -in\omega\int_{\gamma_j}\Phi_0(u)u^{in\omega-1}{\rm d} u \label{eq:8} \end{equation} and then \begin{equation} \delta_{n} = {2\pi n\omega\over 1-e^{-2\pi n\omega}}\sum_{w_j} \mathop{\rm Res}(\Phi_0(u)u^{in\omega-1}, w_j). \label{eq:9} \end{equation} Next, assume that $w_j$ is a pole of $\Phi_0 (u)$ of multiplicity $k$. We have: \begin{equation} \begin{array}{rl} & \mathop{\rm Res}(\Phi_0 (u)u^{in\omega-1},w_j) = \ds{1\over (k-1)!} {d^{k-1} \over du^{k-1}} \left [ (u-w_j)^k\Phi_0 (u)u^{in\omega-1} \right ]_{u=w_j} \\ & \ds = {1\over (k-1)!} \sum_{m=0}^{k-1} { k-1\choose m } \Big \{ {d^{k-m-1}\over du^{k-m-1}} \left [ (u-w_j)^k \Phi_0 (u) \right ] \cdot \\ &\quad (in\omega-1)\ldots (in\omega-m)u^{in\omega-m-1} \Big \}_{u=w_j} \\ & \ds = \sum_{m=0}^{k-1} {1\over m!} \mathop{\rm Res}\left( (u-w_j)^m \Phi_0 (u), w_j\right ) i^m {(n\omega+i)\ldots (n\omega+mi)\over w_j^{m+1}}e^{in\omega\Log w_j}. \end{array}\label{eq:10} \end{equation} Now, $\mathop{\rm Res}\left( (u-w_j)^m\Phi_0 (u), w_j\right ) =0$ for $m\ge k$ because $(u-w_j)^m\Phi_0 (u)$ is holomorphic in a neighborhood of $w_j$ when $m\ge k$.
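On the other hand, when $w_j$ is a simple pole of $\Phi_0(u)$ (that is, $k=1$), only the term with $m=0$ is present and (\ref{eq:10}) reduces to $$ \mathop{\rm Res}(\Phi_0(u)u^{in\omega-1},w_j) = \frac{1}{w_j}\mathop{\rm Res}(\Phi_0(u),w_j)\, e^{in\omega\Log w_j}, $$ an expression that will be used in the proof of Theorem \ref{thm2} below.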
Thus, denoting by $r$ the maximum of the multiplicities of the poles $w_j$, we can extend the above sum up to $r-1$, obtaining: \begin{equation} \begin{array}{rl} \ds{\delta_{n}(1-e^{-2\pi n\omega})\over 2\pi n\omega} &\ds = \sum_{j=1}^N \sum_{m=0}^{r-1} {1\over m!} \mathop{\rm Res} ((u-w_j)^m \Phi_0 (u), w_j ) i^m \\ & \ds\quad (n\omega+i)(n\omega+2i)\ldots(n\omega+mi){e^{-n\omega \Arg w_j} \over w_j^{m+1}} e^{in\omega\log |w_j|} \end{array}\label{eq:11} \end{equation} $w_1,\ldots,w_{N}$ being the poles of $\Phi_0(u)$. Let $\beta_0=\min\{\Arg (w_j) : j=1,\ldots,N\}\in(0,\pi]$ and $r_0$ be the greatest multiplicity of the poles of $\Phi_0(u)$ that belong to the half line $\Arg (u)=\beta_0$. Multiplying both sides of equation (\ref{eq:11}) by $e^{n\omega\beta_0}$ we see that ${1\over 2\pi n\omega}e^{n\omega\beta_0}\delta_{n} (1-e^{-2\pi n\omega})$ is asymptotic, as $n\to+\infty$, to $$ \sum_{\Arg (w_j)=\beta_0} \mathop{\rm Res}( \Phi_0 (u)u^{in\omega-1}, w_j)e^{n\omega\beta_0}. $$ Now, again from equation (\ref{eq:10}), we see that for any pole $w_j$ with $\Arg (w_j)=\beta_0$ and multiplicity less than $r_0$, the quantity $n^{1-r_0} \mathop{\rm Res}(\Phi_0(u)u^{in\omega-1},w_j)e^{n\omega\beta_0}$ tends to zero as $n\to\infty$. Thus the leading term in $\ds\sum_{w_j} \mathop{\rm Res}(\Phi_0(u)u^{in\omega-1}, w_j)$ is: $$ \sum_{{\Arg (w_j)=\beta_0\atop mult(w_j)=r_0}}\mathop{\rm Res}(\Phi_0(u)u^{in\omega-1},w_j) $$ $mult(w_j)$ being the multiplicity of $w_j$. As a consequence, using also $\delta_{-n}=\overline{\delta_{n}}$, we obtain the following: \begin{proposition} \label{prop1} Let $\beta_0=\min\{ \Arg (w_j) : j=1,\ldots,N\} \in(0,\pi]$ and $r_0=\max\{ mult (w_j) : \Arg (w_j) = \beta_0 \}$. Then, if \begin{equation} \liminf_{n\to\infty}{e^{n\omega\beta_0}\over n^{r_0-1}}\Big | \sum_{{\Arg (w_j) = \beta_0\atop mult(w_j)=r_0}} \mathop{\rm Res} ( \Phi_0 (u)u^{in\omega-1}, w_j) \Big | \neq 0, \label{eq:12} \end{equation} there exists $\bar n$ such that $\delta_{n}\neq 0$ for any $n\in\mathbb{Z}$ with $|n|\ge \bar n$. As a consequence the space of periodic functions $q(t)$ such that the associated Melnikov function is identically zero is finite-dimensional. \end{proposition} Condition (\ref{eq:12}) can be simplified a bit by looking at equation (\ref{eq:10}). In fact, setting $r_0$ in place of $k$ in that equation and multiplying by $e^{n\omega\beta_0}n^{1-r_0}$, we see that only the term with $m=r_0-1$ survives. Thus we obtain the following \begin{proposition} \label{prop2} Let $\beta_0$, $r_0$ be as in Proposition \ref{prop1}. Then, if \begin{equation} \liminf_{n\to\infty}\Big | \sum_{{\Arg (w_j)=\beta_0\atop mult(w_j)=r_0}} {1\over w_j^{r_0}}\mathop{\rm Res} ( (u-w_j)^{r_0-1}\Phi_0 (u), w_j) e^{in\omega\log|w_j|} \Big | \neq 0, \label{eq:13} \end{equation} there exists $\bar n$ such that $\delta_{n}\neq 0$ for any $n\in\mathbb{Z}$ with $|n|\ge \bar n$. As a consequence the space of periodic functions $q(t)$ such that the associated Melnikov function is identically zero is finite-dimensional.
\end{proposition} \paragraph{Proof.} As we have already observed, for any pole $w_j$ such that $\Arg (w_j)=\beta$ and $mult(w_j)=r$ the quantity: $$ \begin{array}{l} \ds\left ({e^{n\omega\beta}\over n^{r-1}}\right )Res (\Phi_0(u)u^{in\omega-1}, w_j) - {i^{r-1}\over (r-1)!} \prod_{k=1}^{r-1} \left ( \omega+{ki\over n} \right ) \\ \ds\times {1\over w_j^{r}}\mathop{\rm Res}( (u-w_j)^{r-1} \Phi_0(u), w_j) e^{in\omega\log|w_j|} \end{array} $$ tends to zero as $n$ tends to infinity, and then the result follows from: $$ \omega^{r-1}\le \left | \left ( \omega +{i\over n}\right )\cdot\ldots \cdot\left ( \omega +{(r-1)i\over n} \right ) \right | \le \left ( \omega + {r-1\over n} \right )^{r-1}. $$ The proof is finished. \paragraph{Remark.} If $\Phi_0(u)$ has only one pole on the line $\Arg (u) = \beta_0$ with maximum multiplicity $r_0$, condition (\ref{eq:13}) of Proposition \ref{prop2} is certainly satisfied. In fact in this case the left hand side of (\ref{eq:13}) reads: $$ {1\over w_j^{r_0}}\lim_{u\to w_j}(u-w_j)^{r_0}\Phi_0(u) $$ and cannot be zero because $w_j$ is a pole of multiplicity $r_0$ of $\Phi_0 (u)$. \smallskip Equation (\ref{eq:11}) has an interesting consequence when $\Phi(u)$ has only the simple poles $w$ and $\bar w$ (we do not exclude that $w=\bar w$). In fact in this case we have the following result: \begin{theorem} \label{thm2} Assume that $\Phi_0(u)$ satisfies the assumption of the previous section and, moreover, that it has only the simple poles $w$ and $\bar w$ (including the case that $\Phi_0(u)$ has only one simple pole $w=\bar w$). Then $\delta_{n}\neq 0$ for any $n\in\mathbb{Z}$, $n\neq 0$. Thus, for any $2\pi$-periodic, nonconstant function, the associated Melnikov function is not identically zero. \end{theorem} \paragraph{Proof.} Let us consider, first the case where $\Phi_0(u)$ has only the simple pole $w=\bar w < 0$. We have $$ \begin{array}{rl} \ds\frac{1-e^{-2\pi n\omega}}{2\pi n\omega}\delta_{n} &\ds = \frac{1}{w}\mathop{\rm Res}(\Phi_0(u),w)e^{-n\pi\omega}e^{in\omega\log |w|} \\ & \ds = e^{-n\pi\omega}e^{in\omega\log |w|} \lim_{z\to 1}(z-1) \Phi_0(zw) \neq 0 \end{array} $$ because $w$ is a simple pole of $\Phi_0(u)$. Now consider the case where $w\neq \bar w$ and assume, without loss of generality, that $0<\Arg (w)<\pi$. We have: $$ \frac{1}{w} \mathop{\rm Res}(\Phi_0(u),w) = \frac{1}{w}\lim_{u\to w}\Phi_0(u)(u-w) = \lim_{z\to 1}\Phi_0(zw)(z-1) $$ and $$ \frac{1}{\bar w} \mathop{\rm Res}(\Phi_0(u),\bar w) = \frac{1}{\bar w}\lim_{u\to \bar w} \Phi_0(u)(u-\bar w) = \lim_{z\to 1}\Phi_0(z\bar w)(z-1). $$ Since the above two limits exist we can evaluate them changing $z$ with $x\in\mathbb{R}$. We get: $$ \frac{1}{\bar w} \mathop{\rm Res}(\Phi_0(u),\bar w) = \lim_{x\to 1}\Phi_0 (\overline{xw})(x-1) = \lim_{x\to 1} \overline{\Phi_0(xw)(x-1)} = \overline{\frac{1}{w} \mathop{\rm Res}(\Phi_0(u),w)}. $$ Setting $\lambda = w^{-1}\mathop{\rm Res}(\Phi_0(u),w)$ we obtain, from the above equation, and (\ref{eq:11}): $$ \frac{1-e^{-2\pi n\omega}}{2\pi n\omega}\delta_{n} = \lambda w^{in\omega} +\bar\lambda\bar{w}^{in\omega}. $$ Thus, when $n\neq 0$, $\delta_{n}=0$ if and only if \begin{equation} \frac{\lambda}{\bar\lambda} = -\left ( \frac{\bar w}{w}\right )^{in\omega}. \label{eq:14} \end{equation} Now $$ \left (\frac{\bar w}{w}\right )^{in\omega} = e^{-n(\Arg\bar w - \Arg w)\omega} = e^{-2n(\pi-\Arg w)\omega} = \alpha^{2}_{n} > 0. $$ Thus $\lambda = -\alpha^{2}_{n}\bar\lambda$ and then $\bar\lambda = -\alpha^{2}_{n}\lambda$. 
So $\lambda = \alpha^{4}_{n}\lambda$ and hence $\alpha^{2}_{n}=1$, because $\lambda\neq 0$. But this means that $w$ is real and negative and this contradicts $w\neq \bar w$. The proof is finished. We now give a closer look at the case where $\Phi_0(u)$ has two poles of multiplicity $r_0$ on the half-line $\Arg (u)=\beta_0$. Since $$ {1\over w_j^{r_0}}\mathop{\rm Res}( (u-w_j)^{r_0-1}\Phi_0 (u), w_j) =\lim_{z\to 1} (z-1)^{r_0}\Phi_0 (w_jz), $$ we see that we have to study the equation: \begin{equation} \lambda_1|w_1|^{in\omega} + \lambda_2|w_2|^{in\omega} = 0 \label{eq:15} \end{equation} where $\lambda_j = \lim_{z\to 1} (z-1)^{r_0}\Phi_0 (w_jz)$. Now, equation (\ref{eq:15}) has a solution $n\in\mathbb{N}$ if and only if $$ \left | {w_1\over w_2} \right |^{in\omega} = -{\lambda_2\over\lambda_1}. $$ Thus we have the following cases: \begin{description} \item{(i)} $|\lambda_2|\neq|\lambda_1|$. In this case $\liminf_{n\to\infty}\left | \lambda_1 |w_1|^{in\omega} + \lambda_2 |w_2|^{in\omega} \right | \neq 0$; \item{(ii)} $|\lambda_2|=|\lambda_1|$ and $\log(|\frac{w_1}{w_2}|)$ is a rational multiple of $T=\frac{2\pi}{\omega}$. In this case equation (\ref{eq:15}) either has a (least) solution $n_0\in\mathbb{N}$, and hence it has infinite solutions of the type: $n=n_0+kq$, $k\in\mathbb{Z}$, and $q\in\mathbb{Z}$ such that $\frac{q}{T}\log(|{w_1\over w_2}|)\in\mathbb{Z}$, or $\liminf_{n\to\infty}\left | \lambda_1 |w_1|^{in\omega} + \lambda_2|w_2|^{in\omega} \right | \neq 0$; \item{(iii)} $|\lambda_2|=|\lambda_1|$ and $\log(|{w_1\over w_2}|)$ is not a rational multiple of $T$. In this case $\liminf_{n\to\infty}\left | \lambda_1 |w_1|^{in\omega} + \lambda_2|w_2|^{in\omega} \right | = 0$. \end{description} As a consequence, Proposition \ref{prop2} applies if either $|\lambda_2|\neq|\lambda_1|$, or $\log(|{w_1\over w_2}|)$ is a rational multiple of $T$ and $-{\lambda_2\over\lambda_1}$ is not one of the (finite) values of $\big| {w_1\over w_2} \big|^{in\omega}$. \paragraph{Remarks.} (i) In this section we have assumed that $\Phi_0(u)$ and $\Phi'_0(u)$ are both rational functions on $\mathbb{C}$ with the same poles. However it may well happen that the poles of $\Phi'_0(u)$ correspond to essential singularities of $\Phi_0(u)$. Nonetheless, the argument of this Section hold even in this case, we simply do not have to integrate by parts as in (\ref{eq:8}) and use (\ref{eq:7}) instead of (\ref{eq:9}). For example equation (\ref{eq:11}) reads: $$ \begin{array}{rl} \ds{\delta_{n}(1-e^{-2\pi n\omega})\over 2\pi i} &\!\!\!\ds = \sum_{j=1}^N \sum_{m=0}^{r-1} {1\over m!} Res ((u-w_j)^m \Phi_0' (u), w_j ) i^m e^{in\omega\log |w_j|} \\ &\!\!\!\ds\quad n\omega (n\omega + i)(n\omega + 2i)\ldots(n\omega + (m-1)i) {e^{-n\omega \Arg w_j}\over w_j^{m}} \end{array} $$ with $r$ being the multiplicity of the pole $w_j$ of $\Phi_0'(u)$. Thus Proposition \ref{prop1} and \ref{prop2} hold with the following changes: \\ $r_0$ is the maximum of the multiplicities of the poles of $\Phi_0'(u)$ and in equations (\ref{eq:12}), (\ref{eq:13}), $\Phi_0(u)$ and $u^{in\omega-1}$ have to be changed with $\Phi_0'(u)$, and $u^{in\omega}$ respectively. Moreover, Theorem \ref{thm2} holds as is (with $\Phi_0'(u)$ instead of $\Phi_0(u)$ of course). The proof goes almost in the same way, apart that equation (\ref{eq:14}) has to be written as: $$ \frac{\lambda w}{\overline{\lambda w}} = -\left ( \frac{\bar w}{w}\right )^{in\omega} = -\alpha_{n}^{2}\le 0. $$ The rest of the proof is the same with $\lambda w$ instead of $\lambda$. 
\newline\noindent Note that the function $\Phi_0(u)={\rm tan}^{-1}\left ( \frac{3u} {2(u^{2}+1)}\right )$ is an example of such a situation. In fact $2i{\rm tan}^{-1}(u) = \log\left ( \frac{1+iu}{1-iu}\right )$ has, at the points $u=\pm i$, essential singularities and is defined, for example, outside the set $\{ z=iy\; | \; |y|\ge 1\}$. In the next Section we will give a method to construct a second order differential equation satisfied by $p(t):= \Phi_0(e^t)$. Following this method we see that $p(t)$ satisfies: $$ \ddot p = \frac{1}{9}\frac{9-41{\rm tan}^{2}p}{({\rm tan}^{2}p+1)^{2}}{\rm tan}p. $$ (ii) From the previous Section we know that $\delta_{n}$ is also given by: $$ \delta_{n} = \int_{-\infty}^{+\infty} \dot p(t) e^{in\omega t} dt. $$ Now, the function $$ \delta (\xi) = \int_{-\infty}^{+\infty} \dot p(t) e^{-i\omega\xi t} dt $$ tends to zero as $|\xi|\to\infty$ and the same holds for $i\omega\xi\delta(\xi)$ and $(i\omega\xi)^{2}\delta(\xi)$ because $p(t)$, $\dot p(t)$ tend to zero exponentially fast as $|t|\to\infty$ (and hence belong to $L^{2}(\mathbb{R})$), and $p(t)$ satisfies the equation $\ddot p=f(p)$. In fact we have, for example, integrating by parts: $$ i\omega\xi\delta(\xi) = \int_{-\infty}^{+\infty} \ddot p(t) e^{-i\omega\xi t} dt = \int_{-\infty}^{+\infty} f(p(t)) e^{-i\omega\xi t} dt $$ and $$ (i\omega\xi)^{2}\delta(\xi) = \int_{-\infty}^{+\infty} f'(p(t))\dot p(t) e^{-i\omega\xi t} dt. $$ As a consequence $\xi\delta(\xi)\to 0$, $\xi^{2}\delta(\xi)\to 0$ as $|\xi|\to\infty$ and then, since $\delta_{n}=\delta(-n)$, $\delta_{n}\in L^1(\mathbb{Z})\cap L^{2}(\mathbb{Z})$. Thus the series $$ \sum_{n\in\mathbb{Z}} \delta_{n}e^{in\omega t} $$ is totally convergent to a continuous, $T$-periodic function $\Delta(t)$ whose $n$-th Fourier coefficient is precisely $\delta_{n}$. Note that, since $\delta_{-n}=\bar\delta_{n}$, we have: $$ \Delta(t) = \delta_0 + 2\sum_{n=1}^{+\infty} \left[ ({\rm Re}\,\delta_{n}) \cos n\omega t - ({\rm Im}\,\delta_{n})\sin n\omega t \right]. $$ Now, let $\phi_1(t)$ and $\phi_2(t)$ be two $T$-periodic functions on $\mathbb{R}$. For $t\in\mathbb{R}$, we set: $$ \phi_1*\phi_2(t) = \frac{1}{T}\int_{-T/2}^{T/2} \phi_1(t-s) \phi_2(s) ds. $$ Then $\phi_1*\phi_2(t)$ is $T$-periodic and its $n$-th Fourier coefficient is: $$ \begin{array}{l} \displaystyle{1\over T^{2}} \int_{-T/2}^{T/2} \int_{-T/2}^{T/2} \phi_1(t-s)\phi_2(s) ds\, e^{-in\omega t} dt \\ \displaystyle = {1\over T^{2}} \int_{-T/2}^{T/2} \Big\{ \int_{-T/2}^{T/2} \phi_1(\tau)e^{-in\omega\tau} d\tau \Big \} \phi_2(s) e^{-in\omega s} ds = \phi_1^{(n)}\phi_2^{(n)} \end{array} $$ $\phi_j^{(n)}$ being the $n$-th Fourier coefficient of $\phi_j(t)$. As a consequence $\delta_{n}\hat q_{n}$ is the $n$-th Fourier coefficient of both $M(\alpha)$ and $\Delta*q(\alpha)$, that is, $$ M(\alpha) = \Delta*q(\alpha) = {1\over T}\int_{-T/2}^{T/2} \Delta(\alpha-s)q(s) ds = {1\over T}\int_{-T/2}^{T/2} \Delta(s)q(\alpha-s) ds. $$ Finally, we note that the function $\Delta (\alpha)$ can be expressed by means of $\dot p(t)$ as follows. We have: $$ \begin{array}{rl} M(\alpha ) & \ds = \int_{-\infty}^{+\infty} \dot p(t-\alpha) q(t) dt = \sum_{k\in\mathbb{Z}} \int_{(2k-1)T/2}^{(2k+1)T/2} \dot p(s-\alpha) q(s) ds \\ & \ds = \int_{-T/2}^{T/2}\sum_{k\in\mathbb{Z}} \dot p(s+kT-\alpha) q(s) ds. \end{array} $$ Now, the function $\sum_{k\in\mathbb{Z}} \dot p(kT-t)$ is $T$-periodic and continuous (actually analytic, since so is $\dot p(t)$). Thus: $$ \Delta(t) = T\sum_{k\in\mathbb{Z}} \dot p(kT-t).
$$ \section{Some examples} In this Section we apply the result of the previous Section to construct a second order equation in $\mathbb{R}$ whose Melnikov function vanishes identically on an infinite number of (independent) $2\pi$-periodic functions, or on any $2\pi$-periodic function $q(t)$ (actually, we will give an example of case (ii) of the previous Section). To do this we will first prove a result allowing us to construct second order equations satisfied by prescribed homoclinic solutions. For completeness we will also give an example showing that this procedure can also produce non rational differential equations. To start with we make some remarks on the properties of the function $p(t)$ and the associated $\Phi_0(x)$. Since $p(t)\to 0$ as $|t|\to\infty$, there exists $t_0$ such that $\dot p(t_0)=0$. Without loss of generality we can assume that $t_0=0$. Thus $p(t)= p(-t)$ because both satisfy the Cauchy problem: $$ \begin{array}{c} \ddot x = f(x) \\ x(0) = p(0) \quad \dot x(0) =\dot p(0)\, . \end{array} $$ Possibly changing $f(x)$ with $-f(-x)$, we can also assume that $p_0:=p(0)>0$. Thus $t\dot p(t)<0$ for any $t\neq 0$. In fact if $\dot p(\tau)=0$ for some $\tau>0$ then $p(t)$ would be $2\tau$-periodic contradicting the fact that $p(t)\to 0$ as $|t|\to\infty$. Now, let $\Phi_0(x)$ be as in the previous section. Since we want that the equality $\Phi_0(x)= p(\log x)$ holds for any $x>0$, we see that we have to assume that: \begin{equation} \Phi_0(x)=\Phi_0(1/x)\label{eq:16} \end{equation} for any $x>0$ and then $\Phi_0(u)=\Phi_0(1/u)$ because of uniqueness of the analytical extension. Thus, besides the pole $w_j$, $\Phi_0(u)$ has also the pole $w_j^{-1}$ whose argument is $2\pi - \Arg (w_j)$ (here we assume that $0<\Arg (w_j)\le\pi$). Then, the function $\Phi_0(x)$ is increasing in $[0,1]$ and decreasing in $[1,\infty)$, moreover $\Phi_0(1)=p_0$. Thus there exist two functions $x_{\pm}(p)$ defined on $(0,p_0]$ such that \begin{description} \item{(i)} $x_{+}(p)$ is decreasing on $(0,p_0]$ and $x_{+}(p_0)=1$, \item{(ii)} $x_{-}(p)$ is increasing on $[0,p_0]$ and $x_{-}(p_0)=1$, $x_{-}(0)=0$ \end{description} and satisfy: \begin{equation} \Phi_0(x_{\pm}(p)) = p. \label{eq:17} \end{equation} Note that, because of (\ref{eq:16}), we obtain: $$ x_{+}(p) = {1\over x_{-}(p)} $$ for any $p\in(0,p_0]$, moreover, being $\Phi_0'(1)=0$ we get: $\ds\lim_{p\to p_0} x_{\pm}'(p_0) = \mp\infty.$ Now, $p(t)$ satisfies the equation: $$ \ddot p(t) = F(e^{t}) $$ where $F(x)=x^{2}\Phi_0''(x)+x\Phi_0'(x)$ is a rational function defined on a neighborhood of $x\ge 0$. Thus the point is to see whether $F(e^t)=f(p(t))$ for some $C^{1}$-function $f(p)$. We note the following: $$ F(1/x) = {1\over x^{2}}\Phi_0''(1/x) +{1\over x}\Phi_0'(1/x) $$ and, using (\ref{eq:16}), we get: $$ -{1\over x^{2}}\Phi_0'(1/x) = \Phi_0'(x), \quad {2\over x^{3}}\Phi_0'(1/x)+{1\over x^{4}}\Phi_0''(1/x) = \Phi_0''(x). $$ Thus it is easy to see that $F(x) = F(1/x)$ and then $F(x_{-}(p))=F(x_{+}(p))$. Note that we also get $x^{2}F'(x) = -F'(1/x)$. This last equation implies $F'(1)=0$ (note that a similar conclusion holds for $\Phi_0'(1)$). We set $$ f(p) := F(x_{-}(p))\; (=F(x_{+}(p))), \quad p\in[0,p_0]. $$ Note that we choose $x_{-}(p)$ so that $f(p)$ is continuous up to $p=0$; moreover from (\ref{eq:17}) we see that either $x_{-}(p(t))=e^t$ or $x_{+}(p(t))=e^t$, but then, in any case $f(p(t)) = F(e^{t}) = \ddot p(t)$. Thus we want to show that $f(p)$ can be extended in a $C^{1}$ way in a neighborhood of $[0,p_0]$. 
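Before doing this, we observe that in the simplest cases $f(p)$ can be written down explicitly. For instance, for the function $\ds\Phi_0(x)=\frac{6kx}{(x+1)^{2}}$ associated with equation (\ref{eq:6.1}) one finds, for every $x>0$, $$ F(x) = x^{2}\Phi_0''(x)+x\Phi_0'(x) = \frac{6kx(x^{2}-4x+1)}{(x+1)^{4}} = \Phi_0(x)-\frac{\Phi_0(x)^{2}}{k}, $$ so that $f(p)=F(x_{-}(p))=p-\frac{p^{2}}{k}$, in agreement with the unperturbed part of (\ref{eq:6.1}). In general, however, $x_{\pm}(p)$ is not an elementary function of $p$ and the regularity of $f(p)$ has to be checked directly.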
To this end we simply have to show that the limits: $\lim_{p\to p_0}{d\over dp}F(x_{-}(p))$ and $\lim_{p\to 0}{d\over dp}F(x_{-}(p))$ exist in $\mathbb{R}$. We have: $$ \lim_{p\to p_0}{d\over dp}F(x_{-}(p)) = \lim_{p\to p_0} {F'(x_{-}(p)) \over \Phi_0'(x_{-}(p))} = \lim_{x\to 1}{F'(x) \over \Phi_0'(x)} = \lim_{x\to 1}{F''(x) \over \Phi_0''(x)} = {F''(1) \over \Phi_0''(1)} \in\mathbb{R} . $$ Note that we have to assume that $\Phi_0''(1)\neq 0$ because otherwise $f(p_0)=0$ and then $p(t)\equiv p_0$ will be another solution of the Cauchy problem $\ddot p = f(p)$, $p(0)=p_0$, $\dot p(0) = 0$. Hence $f(p)$ cannot even be Lipschitz continuous in any neighborhood of $p_0$. Next we prove that the limit $$ \lim_{p\to 0}{d\over dp}F(x_{-}(p))=\lim_{x\to 0}{F'(x) \over \Phi_0'(x)} $$ exists in $\mathbb{R}$. To this end we observe that, $\Phi_0(x)$ being analytic, $x=0$ has to be a zero of finite multiplicity, say $k\ge 1$, of $\Phi_0(x)$, that is $\Phi_0(x)=x^kG(x)$, with $G(0)\neq 0$. Then $F(x) = k^{2}\Phi_0(x) + O(x^{k+1})$ and $$ \lim_{x\to 0}{F'(x) \over \Phi_0'(x)} = \lim_{x\to0} {k^{2} \Phi_0'(x) + O(x^{k}) \over \Phi_0'(x)} = k^{2}. $$ Let us briefly recall what we have seen so far. Let $\Phi_0(x)$ be a non negative rational function on $[0,+\infty)$ such that $\Phi_0(x)=\Phi_0(1/x)$ and $\Phi_0(x)$ is strictly increasing from $0$ to $\Phi_0(1)$ on $[0,1]$ (and of course strictly decreasing from $\Phi_0(1)$ to $0$ on $[1,\infty)$) and $\Phi_0''(1)\neq 0$. Then $p(t):=\Phi_0(e^t)$ is a homoclinic solution of a $C^{1}$-equation $\ddot p = f(p)$. We now want to show that this equation can have further smoothness properties, for example that can be $C^{2}$. To do this we show that the limits $\ds\lim_{p\to p_0} {d^{2}\over dp^{2}}F(x_{-}(p))$ and $\ds\lim_{p\to 0}{d^{2}\over dp^{2}}F(x_{-}(p))$ exist in $\mathbb{R}$. As for the first we will see that the result holds without any further assumption on $\Phi_0(x)$, but the same does not hold in general for the second. We have: $$ {d^{2}\over dp^{2}}F(x_{-}(p)) = {F''(x_{-}(p))\Phi_0'(x_{-}(p))- F'(x_{-}(p)) \Phi_0''(x_{-}(p)) \over \Phi_0'(x_{-}(p))^{3}} $$ and hence we are led to evaluate the two limits: \begin{equation} \lim_{x\to 1}{F''(x)\Phi_0'(x)-F'(x)\Phi_0''(x) \over \Phi_0'(x)^{3}}\label{eq:18} \end{equation} and \begin{equation} \lim_{x\to 0}{F''(x)\Phi_0'(x)-F'(x)\Phi_0''(x) \over \Phi_0'(x)^{3}}.\label{eq:19} \end{equation} Let us consider, first, the limit in (\ref{eq:18}). Since $F'(1)=\Phi_0'(1)=0$ we apply L'Hopital rule and get: $$ \begin{array}{l}\ds \lim_{x\to 1}{F''(x)\Phi_0'(x)-F'(x)\Phi_0''(x) \over \Phi_0'(x)^{3}} = {1\over 3\Phi_0''(1)}\lim_{x\to 1} {F'''(x)\Phi_0'(x)-F'(x)\Phi_0'''(x) \over \Phi_0'(x)^{2}} \\ \\ \ds = {1\over 6\Phi_0''(1)^{2}}\lim_{x\to 1}{F^{(iv)}(x) \Phi_0'(x) + F'''(x)\Phi_0''(x) - F''(x)\Phi_0'''(x) - F'(x)\Phi_0^{(iv)}(x) \over \Phi_0'(x)} \end{array} $$ provided the last limit exists. Now, from $\Phi_0(1/x)= \Phi_0(x)$ we get: $$ -\Phi_0'''(1/x) = x^{6}\Phi_0'''(x) + 6x^{5}\Phi_0''(x) + 6x^{4}\Phi_0'(x) $$ and then $\Phi_0'''(1) = -3\Phi_0''(1)$. Similarly $F'''(1) = -3F''(1)$. Thus we can apply again L'Hopital rule and obtain: $$ \lim_{x\to 1}{F''(x)\Phi_0'(x)-F'(x)\Phi_0''(x) \over \Phi_0'(x)^{3}} = {F^{(iv)}(1)\Phi_0''(1) - F''(1)\Phi_0^{(iv)}(1) \over 3\Phi_0''(1)^{3}}\in \mathbb{R}. $$ Now we consider $\lim_{p\to 0} f''(p)$ that is the limit in (\ref{eq:19}). Recall that we set $\Phi_0(x)= x^kG(x)$, with $G(0)\neq 0$ and note that also $F(x)$ has $x=0$ as a zero of multiplicity $k$. 
Thus the numerator of (\ref{eq:19}) has $x=0$ as a zero of multiplicity (at least) $2k-3$ while the denominator has $x=0$ as a zero of multiplicity $3(k-1)$. Now a simple computation shows that $x=0$ is actually a zero of the numerator of multiplicity at least $2(k-1)$, but in general this is the maximum we can expect. In fact one has: \begin{equation} F''(x)\Phi_0'(x)-F'(x)\Phi_0''(x) = k(k+1)(2k+1)x^{2(k-1)}G(x)G'(x) + O(x^{2k-1}).\label{eq:20} \end{equation} Of course this is not enough to prove that $f(p)$ is $C^{2}$ up to $p=0$, unless $k=1$. So we assume that $k\in\mathbb{N}$, $k>1$, and the following holds: $$ \Phi_0(x) = x^{k}G_0(x^{k}) $$ where $G_0(0)\neq 0$. In this case in fact the left hand side of equation (\ref{eq:20}) vanishes at $x=0$ (since $G'(0)=0$) and we actually have: $$ F''(x)\Phi_0'(x)-F'(x)\Phi_0''(x) = 6k^{5}G_0(x^k)G_0'(x^k)x^{3(k-1)} + O(x^{4k-3}) $$ and then $$ \lim_{x\to 1}{F''(x)\Phi_0'(x)-F'(x)\Phi_0''(x) \over \Phi_0'(x)^{3}} = {6k^{2}G_0'(0)\over G_0(0)^{2}} \in\mathbb{R}. $$ Let us rewrite what we have done as a theorem: \begin{theorem} \label{thm3} Let $\Phi_0(u)=u^kG(u)$, $k\ge 1$, be a rational function such that $G(0)\ne 0$ and the following hold: \begin{description} \item {(i)} $\Phi_0(u)=\Phi_0(1/u)$ (that is $G(1/u)= u^{2k}G(u)$), \item {(ii)} $\Phi_0(x)>0$ when $x$ is real and $x>0$, \item {(iii)} $\Phi_0'(x)=0$ on $x>0$ is equivalent to $x=1$, \item{(iv)} $\Phi_0''(1)\neq 0$. \end{description} Then $\lim_{u\to\infty}u\Phi_0'(u)=0$ and there exists a $C^{1}$--function $f(p)$ in a neighborhood of $[0,\Phi_0(1)]$ such that $p(t)=\Phi_0 (e^{t})$ is the solution of the equation $\ddot p = f(p)$. Moreover, if $G(u)=G_0(u^k)$ for some rational function $G_0(u)$, $G_0(0)\ne 0$, the function $f(p)$ is $C^{2}$ in a neighborhood of $[0,\Phi_0(1)]$. \end{theorem} \paragraph{Proof}. We only have to prove that $\lim_{u\to\infty}u\Phi_0'(u)=0$. To this end we note that $G(1/u)=u^{2k}G(u)$ implies $G'(1/u)=-2ku^{2k+1}G(u) - u^{2k+2}G'(u)$ and then $$ \begin{array}{rl} \ds\lim_{u\to\infty} u\Phi_0'(u) & \ds = \lim_{u\to 0} {\Phi_0'(1/u)\over u} = \lim_{u\to 0} {k G(1/u)\over u^k} + {G'(1/u)\over u^{k+1}} \\ & \ds = \lim_{u\to 0} -u^{k}\{ uG'(u) + k G(u)\} = 0. \end{array} $$ Finally note that condition $\Phi_0''(1)\neq 0$ can be also stated in terms of $G(u)$ since condition (i) implies $G'(1)=-kG(1)$ and then $\Phi_0''(1) = G''(1) - k(k+1) G(1)$. The proof is finished. One might wonder what kind of system one obtains starting with functions $\Phi_0(x)$ as in Theorem \ref{thm3}. Actually, since $\Phi_0(x)$ is a rational function one might expect that the function $F(x_{-}(p))$ is a rational function of $p$. However this is not generally true because $x_{\pm}(p)$ are in general far from being rational. To show this we start with the function $$ \Phi_0(x) := \frac{x(x^2+1)}{x^4+4x^2+1}. $$ There is no particular reason for the coefficient 4. It only has to be different from 2, otherwise the expression of $\Phi_0(x)$ can be simplified. It is easy to see that all the conditions of Theorem \ref{thm2} are satisfied. In particular we have $$ \Phi_0(x) = \Phi_0\left ( {1\over x}\right );\quad \Phi_0(1) = {1\over 3}; \quad \Phi_0'(1) = 0;\quad \Phi_0'(0) = 1; \quad \Phi_0''(1) = -{1\over 9}. $$ Moreover we obtain the following expression for $F(x) = x^{2} \Phi_0''(x) + x\Phi_0'(x)$: $$ \begin{array}{rl} F(x) & \ds\!\!\! = \frac{x(x^{2}+1)(x^{8}-16x^{6}+18x^{4}-16x^{2}+1)} {(x^{4}+4x^{2}+1)^{3}} \\ & \ds\!\!\! 
= \Phi_0(x)\frac{(x^{8}-16x^{6}+18x^{4}-16x^{2}+1)} {(x^{4}+4x^{2}+1)^{2}}. \end{array} $$ In order to apply the above described procedure we have to solve the equation: \begin{equation} x(x^{2}+1) = (x^{4}+4x^{2}+1) p \label{eq:21} \end{equation} for $x$ as a function of $p$. Since $\Phi_0(x)=\Phi_0(1/x)$ we can solve (\ref{eq:21}) multiplying it by $x^{-2}$ and setting $z = x+x^{-1}$. We obtain: $$ pz^2-z+2p = 0 $$ which has the solution \begin{equation} z_{\pm}(p) = \frac{1\pm\sqrt{1-8p^2}}{2p}. \label{eq:22} \end{equation} Now, $x_-(p)$ and $x_+(p) = x_-(p)^{-1}$ are both solutions of the equation $x+x^{-1} = z_+(p)$, and not $x+x^{-1} = z_-(p)$, because, for $p = p_0 = \Phi_0(1) = 1/3$ we have $z_{+}(p_0) = 2$, $z_{-}(p_0) = 1$ and $x_+(p_0) = x_-(p_0) = 1$. Now, we want to construct $f(p) = F(x_{-}(p))$ where $x_{-}(p)$ is the unique solution of $\Phi_0(x) = p$ such that $0\le x_{-}(p)\le 1$. We have $\Phi_0(x_{-}(p)) = p$ for any $0\le p \le {1\over 3}$, and $x_{-}(\Phi_0(x)) = x$ for any $0\le x \le 1$. So: $$ f(p) = p f_0(x_{-}(p)) $$ where $$ f_0(x) = \frac{x^{8}-16x^{6}+18x^{4}-16x^{2}+1}{(x^{4}+4x^{2} +1)^{2}} = \frac{x^{4}-16x^{2}+18-16x^{-2}+x^{-4}}{(x^{2}+4 +x^{-2})^{2}}. $$ Since $x_{-}(p)+x_{-}(p)^{-1} = z_{+}(p)$ we have $$ x_{-}^{2}(p)+x_{-}(p)^{-2} = z_{+}^{2}(p)-2, $$ and $$ x_{-}^{4}(p)+x_{-}(p)^{-4} = z_{+}^{4}(p)-4z_{+}^{2}(p)+2. $$ So $$ f_0(x_{-}(p)) = \frac{z_{+}^{4}(p)-20z_{+}^{2}(p)+52} {(z_{+}^{2}(p)+2)^{2}}. $$ Plugging (\ref{eq:22}) in the above equation we obtain, after some algebra: $$ f_0(x_{-}(p)) = 7-6\sqrt{1-8p^2}-48p^2. $$ Thus we have seen that the second order equation \begin{equation} \ddot x = x(7-6\sqrt{1-8x^2}-48x^2) \label{eq:23} \end{equation} has the homoclinic solution $\ds p(t) = \frac{e^{t}(e^{2t}+1)} {e^{4t}+4e^{2t}+1}$. Note that the equation (\ref{eq:23}) is defined on the interval $(-\frac{1}{2\sqrt{2}},\frac{1}{2\sqrt{2}})$ that contains $[0,\frac{1}{3}]$. We now give an example of equations whose associated Melnikov function vanishes on an infinite dimensional space of $C^{1}$, $2\pi$-periodic functions. Take $a\in\mathbb{R}$, $a^{2}\neq 0,1$ and set: $$ \Phi_0(x) = {|a^{4}-1|x^{2}\over (x^{2}+a^{2})(a^{2}x^{2}+1)}. $$ Note that $\Phi_0(x)>0$, for $x\neq 0$, and changing $a$ with $a^{-1}$, we obtain the same function, so we assume $a^{2}>1$. Moreover $\Phi_0(x)$ satisfies all the assumptions of Theorem \ref{thm3} including $\Phi_0(u)=u^kG_0(u^k)$ with $k=2$. For example one has: $$ \Phi_0''(1) = {8a^2(1-a^2) \over (1+a^2)^3} $$ which is different from zero when $a^{2}\neq 0,1$. Now, the (simple) poles of $\Phi_0(u)$ are $$ w_1:=ia,\quad \bar w_1=-ia, \quad w_2:=ia^{-1}, \quad \bar w_2=-ia^{-1} $$ and we have $$ \begin{array}{l} \lambda_1 = \lim_{z\to 1} (z-1)\Phi_0(w_1z) = -1/2 \\ \bar\lambda_1 = \lim_{z\to 1} (z-1)\Phi_0(\bar w_1z) = -1/2 \\ \lambda_2 = \lim_{z\to 1} (z-1)\Phi_0(w_2z) = 1/ 2 \\ \bar\lambda_2 = \lim_{z\to 1} (z-1)\Phi_0(\bar w_2z) = 1/2. 
\end{array} $$ Thus equation (\ref{eq:11}) gives, after some algebra: $$ \delta_{n} = \frac{\pi in}{\sinh (n\frac{\pi}{2})}\sin(n\log a) $$ and we obtain the following: \begin{description} \item{(a)} taking $a=e^{m\pi}$, $m\in\mathbb{N}$, we can construct a family of second order equations whose Melnikov function is identically zero, no matter what the ($2\pi$-periodic) perturbation is; \item{(b)} taking $a=e^{m\pi/2}$, $m\in\mathbb{N}$, we can construct a family of second order equations whose Melnikov function is identically zero for an infinite number of independent $2\pi$-periodic perturbations, but not for all. \end{description} To obtain an analytical expression of such systems, we proceed as in the previous example. The equation $\Phi_0(x) = p$ reads: $$ pa^2(x^2+\frac{1}{x^{2}})-a^4+p+pa^4+1 = 0 $$ and again can be solved by setting $z=x+x^{-1}$. We obtain: $$ z^2=(a^{2}-1)\frac{a^{2}+1-p(a^{2}-1)}{pa^2} $$ which has the solutions $$ z_{\pm}(p) = \pm\frac{\sqrt{p[a^4-1-p(a^2-1)^{2}]}}{ap}. $$ It is not necessary to solve the equations $x+x^{-1} = z_{\pm}$. We only have to note that both $x_{-}(p)$ and $x_{+}(p) = x_-(p)^{-1}$ are solutions of the equation $x+x^{-1} = z_{+}(p)$, and not $x+x^{-1} = z_{-}(p)$, because $z_+(p_0) = 2$, $z_-(p_0) = -2$ and $x_-(p_0) = x_+(p_0) = 1$. Next we compute $F(x)=x^{2} \Phi_0''(x) + x\Phi_0'(x)$. Since $F(x) = F(1/x)$ we expect that $F(x)$ can be expressed in terms of $z = x+x^{-1}$. An annoying computation shows that, in fact, $F(x) = G(x+x^{-1})$, where: $$ G(z) = 4a^{2}(a^{4}-1) \frac{a^2z^4-(a^{4}+4a^2+1)z^2+2(a^2-1)^{2}} {(a^2z^2+(a^2-1)^2)^{3}}. $$ Thus $f(p,a) = F(x_{-}(p)) = G(z_{+}(p))$. After some algebra, we get: \begin{equation} f(p,a) = 4p\left ( 2p^{2}-3{a^{4}+1\over a^{4}-1}p+1\right ) = 4p[2p^{2}-3p\coth(2\log a)+1]. \label{eq:24} \end{equation} Thus, in this case, $p(t)=\Phi_0(e^t)$ is a homoclinic solution of an analytic second order equation $\ddot x = f(x,a)$ such that, when $a=e^{m\pi}$ (or $a=e^{m\pi/2}$), $m\in\mathbb{N}$, its Melnikov function vanishes identically for any $2\pi$-periodic perturbation (or is identically zero for infinitely many independent $2\pi$-periodic perturbations, but not for all). The geometrical meaning of this is that, in spite of the fact that the perturbation of the equation is of the order $O(\varepsilon)$, the distance between the stable and unstable manifolds of the perturbed equation, along a transverse direction, is of the order (at least) $O(\varepsilon^{2})$. This means that in order to study the intersection of the stable and the unstable manifolds, we have to look at the second order Melnikov function. For a $C^{2}$-equation like $\ddot x = f(x) + \varepsilon q(t)$ this second order Melnikov function is given by: $$ M_2(\alpha) = \int_{-\infty}^{+\infty} \dot p(t) f''(p(t)) v_{\alpha}^{2}(t) dt $$ where $v_{\alpha}(t)$ is any fixed bounded solution of the equation \begin{equation} \ddot x = f'(p(t))x + q(t+\alpha). \label{eq:24a} \end{equation} This solution exists thanks to the fact that $M(\alpha)=0$. Note that any two of these bounded solutions differ by a multiple of $\dot p(t)$, and hence $v_{\alpha+2\pi}(t) = v_{\alpha}(t) +\lambda\dot p(t)$, for some $\lambda\in\mathbb{R}$. On the other hand $M_2(\alpha)$ does not depend on the particular solution $v_{\alpha}(t)$ we choose.
This easily follows from the fact that $\ddot p(t)$ is a bounded solution of the non-homogeneous system $$ \ddot x = f'(p(t))x + f''(p(t))\dot p(t)^{2} $$ and $\dot v_{\alpha}(t)$ is a bounded solution of $$ \ddot x = f'(p(t))x + f''(p(t))\dot p(t)v_{\alpha}(t) + \dot q(t+\alpha). $$ Hence: $$ \int_{-\infty}^{+\infty}\dot p(t)f''(p(t))\dot p(t)^{2} dt = 0 $$ and $$ \int_{-\infty}^{+\infty}\dot p(t)f''(p(t))\dot p(t)v_{\alpha}(t) dt = -\int_{-\infty}^{+\infty}\dot p(t)\dot q(t+\alpha)\, dt = -M'(\alpha) = 0. $$ Thus $M_2(\alpha)$ does not change if $v_{\alpha}(t)$ is replaced by $v_{\alpha}(t)+\lambda\dot p(t)$, and hence $M_2(\alpha)$ is $2\pi$-periodic. This fact, however, also follows from the more general fact that the bifurcation function itself is $2\pi$-periodic. We now prove the following result. \begin{theorem}\label{thm4} For any $m\in\mathbb{N}$ and $c\ne 0$, the second order Melnikov function $M_2(\alpha)$ associated to the equation \begin{equation} \ddot x = 4x(2x^{2}-3x\coth (2 m\pi)+1) +\varepsilon \left (\frac{c}{2} + q_{odd}(t) \right ) \label{eq:25} \end{equation} does not vanish identically provided $q_{odd}(t)$ belongs to a dense subset ${\cal S}$ of the space $C^1_{odd,2\pi }$ of all $C^{1}$-smooth, $2\pi$-periodic and odd functions. Actually, ${\cal S}$ is the complement of a codimension one closed linear subspace of $C^1_{odd,2\pi}$. Moreover, if a positive integer $k\in\mathbb{N}$ exists such that $q_{odd}(t+\frac{\pi}{k}) = -q_{odd}(t)$, then $M_2(\alpha)$ changes sign in the interval $[0,\frac{\pi}{k}]$. \end{theorem} \paragraph{Proof.} We emphasize the fact that many of the arguments of this proof can be used even for more general equations than (\ref{eq:25}) having a homoclinic orbit. For this reason we will write $f(x)$ instead of $4x(2x^{2}-3x\coth (2 m\pi)+1)$, $q(t)$ instead of $\frac{c}{2}+q_{odd}(t)$ and $p(t)$ for the orbit homoclinic to the hyperbolic fixed point $x=0$, in the first part of the proof. Note that the hyperbolicity of $x=0$ implies that $f'(0)>0$. As a first step we simplify the expression of $M_2(\alpha)$ in the following way. Let $v_{\alpha}(t)$ be a bounded solution of the equation $\ddot x =f'(p(t))x + q(t+\alpha)$, whose existence is guaranteed by the fact that $M(\alpha)=0$, and $u(t)$ be the unique $2\pi$-periodic solution of the equation $\ddot x =f'(0)x + q(t)$. Then $r_{\alpha}(t) := v_{\alpha}(t) - u(t+\alpha)$ is a bounded solution of $$ \ddot x = f'(p(t))x + [f'(p(t))-f'(0)]u(t+\alpha). $$ As a consequence $r_{\alpha}(t)\to 0$ exponentially together with its first and second derivative (uniformly with respect to $\alpha$) and $v_{\alpha}(t) = r_{\alpha}(t) + u(t+\alpha)$. Then $$ \begin{array}{rl} M_2(\alpha) = & \ds \!\!\! \int_{-\infty}^{+\infty} \frac{d}{dt} [ f'(p(t)) -f'(0) ] v_{\alpha}^{2}(t) dt \\ = & \ds \!\!\! -2\int_{-\infty}^{+\infty} [f'(p(t)) - f'(0)] v_{\alpha}(t)\dot v_{\alpha}(t) dt \\ = & \ds \!\!\! -2\int_{-\infty}^{+\infty} \Big ( [ \ddot v_{\alpha}(t) - q(t+\alpha) ]\dot v_{\alpha}(t) - f'(0) v_{\alpha}(t)\dot v_{\alpha}(t) \Big ) dt. \end{array} $$ Now we observe that $$ \begin{array}{rl} \ds 2\lim_{n\to +\infty}\int_{-n\pi}^{n\pi}\ddot v_{\alpha}(t) \dot v_{\alpha}(t) dt = & \!\!\! \ds\lim_{n\to+\infty} \Big \{ [\dot r_{\alpha}(n\pi) + \dot u(n\pi+\alpha)]^{2} \\ & \ds - [\dot r_{\alpha}(-n\pi) + \dot u(-n\pi+\alpha)]^{2} \Big \} = 0 \end{array} $$ because $\dot r_{\alpha}(t) \to 0$ as $|t|\to+\infty$ and $u(t)$ is $2\pi$-periodic. Similarly, using the fact that $r_{\alpha}(t)\to 0$ as $|t|\to+\infty$, we get: $$ \begin{array}{rl} \ds 2\lim_{n\to +\infty}\int_{-n\pi}^{n\pi} v_{\alpha}(t) \dot v_{\alpha}(t) dt = & \!\!\!
\ds\lim_{n\to+\infty} \Big \{ [r_{\alpha}(n\pi) + u(n\pi+\alpha)]^{2} \\ & \ds - [r_{\alpha}(-n\pi) + u(-n\pi+\alpha)]^{2} \Big \} = 0. \end{array} $$ As a consequence \begin{equation} M_2(\alpha) = 2 \lim_{n\to +\infty}\int_{-n\pi}^{n\pi} \dot v_{\alpha}(t)q(t+\alpha) dt \label{eq:25a} \end{equation} Note that $\ds\lim_{n\to +\infty}\int_{-n\pi}^{n\pi}$ in equation (\ref{eq:25a}) cannot be replaced by $\ds\int_{-\infty}^{+\infty}$ because the convergence of this integral is not guaranteed. Now, in order to compute $v_{\alpha}(t)$, we first look for a fundamental matrix of the homogeneous equation $\ddot x = f'(p(t))x$. We already know that $\dot p(t)$ is a solution of the previous equation that satisfies also $\dot p(0) = 0$, and $\ddot p(0) \neq 0$. So we look for a solution $y(t)$ such that $y(0)\ddot p(0) = 1$ and $\dot y(0)=0$. If $y(t)$ is such a solution, Liouville Theorem implies that $$ X(t) = \pmatrix{y(t) & \dot p(t) \cr \dot y(t) & \ddot p(t)} $$ satisfies ${\rm det} X(t) = 1$ that is $\dot p(t)\dot y(t) - \ddot p(t) y(t) = -1$. Integrating this equation we obtain: $$ y(t) = -\dot p(t)\int^t \frac{1}{\dot p(s)^{2}}ds. $$ Note that, no matter the constant we add to the integral, $c\dot p(t)$ vanishes for $t=0$; however the constant is uniquely determined by the condition $\dot y(0) = 0$ (from which the equality $y(0) \ddot p(0) = 1$ follows). Let $\mu = \sqrt{f'(0)}$. From $p(t)=P(e^{\mu t})$, we obtain: \begin{equation} y(t) = Y(e^{\mu t}) \label{eq:25b} \end{equation} where \begin{equation} Y(x) = -\frac{1}{\mu^{2}} xP'(x)\int^{x} \frac{d\sigma}{\sigma^{3} [P'(\sigma)]^{2}}. \label{eq:25c} \end{equation} Specializing (\ref{eq:25c}) to equation (\ref{eq:25}) where $P(x) = \frac{(a^{4}-1)x}{(x+a^{2}) (a^{2}x+1)}$, with $\mu = 2$, we obtain $Y(x) = Y_0(x) + Y_{s}(x) + Y_{b}(x)$ where $$ \begin{array}{l} \ds Y_0(x) = \frac{3}{2} \frac{a^{2}(a^{8}+3a^{4}+1)}{a^{4}-1} \frac{x(x^{2}-1)\log x}{(x+a^{2})^{2}(a^{2}x+1)^{2}} \\ \\ \ds Y_{s}(x) = \frac{a^{2}}{8(a^{4}-1)}\left ( x + x^{-1} \right ) \\ \\ \ds Y_{b}(x) = \frac{3}{4}\frac{a^{4}+1}{a^{4}-1} - \frac{a^{16}+52a^{12}+72a^{8}-4a^{4}-1}{16a^{2}(a^{4}-1)^{2}(x+a^{2})} +\frac{a^{12}+29a^{8}+29a^{4}+1}{16(a^{4}-1)(x+a^{2})^{2}} \\ \ds \phantom{v_{b}(x) =} -\frac{a^{16}+4a^{12}-72a^{8}-52a^{4}-1}{16a^{4}(a^{4}-1)^{2}(a^{2}x+1)} +\frac{a^{12}+29a^{8}+29a^{4}+1}{16a^{4}(a^{4}-1)(a^{2}x+1)^{2}} \end{array} $$ Note that $Y_0(x)+Y_{b}(x)$ is bounded on $[0,+\infty)$ while $Y_{s}(x)$ is unbounded near $x=0$ and infinity. Now, the variation of constants formula gives, for any solution of equation (\ref{eq:24a}): $$ \begin{array}{rl} v_{\alpha}(t) = &\!\!\! \ds c_1y(t) + c_2\dot p(t) + \int_0^{t} [\dot p(t) y(s) - \dot p(s) y(t)]q(s+\alpha) ds \\ = &\!\!\! \ds \Big [ c_1 - \int_0^{t}\dot p(s)q(s+\alpha) ds \Big ] y(t) + \dot p(t) \Big [ c_2 + \int_0^{t} y(s)q(s+\alpha) ds \Big ]. \end{array} $$ Then, from the boundedness of $q(t)$, the fact that $y(t)$ is of the order $e^{\mu |t|}$ at $\pm \infty$ and $\dot p(t)$ is of the order $e^{-\mu |t|}$ at $\pm\infty$, we see that the second term is bounded on $\mathbb{R}$. Hence $v_{\alpha}(t)$ will be bounded on $\mathbb{R}$ if and only if a constant $c_1$ exists such that $$ \Big[ c_1 - \int_0^{t}\dot p(s)q(s+\alpha) ds \Big] y(t) $$ is bounded on $\mathbb{R}$, and this can happen (if and) only if $$ c_1 = \int_0^{+\infty}\dot p(s)q(s+\alpha) ds = \int_{-\infty}^{0}\dot p(s)q(s+\alpha) ds. 
$$ This choice of $c_1$ is made possible by the fact that $M(\alpha)=0$ and gives: $$ v_{\alpha}(t) = y(t)\int_{t}^{\infty}\dot p(s)q(s+\alpha) ds + \dot p(t) \left [ c_2 + \int_0^{t} y(s)q(s+\alpha) ds\right ]. $$ Note that $v_{\alpha}(t)$ is bounded on $\mathbb{R}$ for any value of $c_2$. However, we can make it unique by imposing the condition $\dot v_{\alpha}(0) = 0$. Since $\dot y(0) = 0$ we see that this is equivalent to choosing $c_2=0$. That is \begin{equation} v_{\alpha}(t) = y(t)\int_{t}^{\infty}\dot p(s)q(s+\alpha) ds + \dot p(t) \int_0^{t} y(s)q(s+\alpha) ds. \label{eq:26} \end{equation} It is worth mentioning that equation (\ref{eq:26}) gives a bounded solution of equation (\ref{eq:24a}) provided $p(t)$ is a homoclinic solution of $\ddot x = f(x)$, and $y(t)$ is defined as in (\ref{eq:25b}) and (\ref{eq:25c}). Now, we write $q(t) = q_{even}(t) + q_{odd}(t)$ where $q_{even}(-t) = q_{even}(t)$ and $q_{odd}(-t) = -q_{odd}(t)$. Then the bounded solution $v_{0}(t)$ (that is, $v_{\alpha}(t)$ with $\alpha=0$) of the equation $\ddot x = f'(p(t))x + q(t)$, $\dot x(0)=0$, satisfies $v_{0}(t) = v_{even}(t) + v_{odd}(t)$ where $v_{even}(t)$ is the (unique) bounded solution of $$ \ddot x = f'(p(t))x +q_{even}(t), \quad \dot x(0) =0 $$ while $v_{odd}(t)$ is the (unique) bounded solution of $$ \ddot x = f'(p(t))x +q_{odd}(t), \quad \dot x(0) =0. $$ From $p(t)=p(-t)$ and the uniqueness of the solutions we get $v_{even}(t) = v_{even}(-t)$ and $v_{odd}(t) = -v_{odd}(-t)$; since the products $\dot v_{even}(t)q_{even}(t)$ and $\dot v_{odd}(t)q_{odd}(t)$ are odd functions, they do not contribute to the symmetric integrals in (\ref{eq:25a}), and then $$ M_2(0) = 2 \lim_{n\to +\infty}\int_{-n\pi}^{n\pi} \big[ \dot v_{even}(t)q_{odd}(t) + \dot v_{odd}(t)q_{even}(t) \big]\, dt. $$ Now we consider the situation where $q_{even}(t) = \frac{c}{2}$ is a nonzero constant. We obtain immediately: $$ \int_{-n\pi}^{n\pi}\dot v_{odd}(t)q_{even}(t) dt = c\; v_{odd}(n\pi). $$ Next, let $u_{odd}(t)$ be the unique bounded solution of $\ddot x = f'(0)x +q_{odd}(t)$. From the uniqueness we see that $u_{odd}(t)$ is $2\pi$-periodic and odd, moreover $v_{odd}(t) - u_{odd}(t)$ is a bounded solution of $\ddot x = f'(0)x + [f'(p(t))-f'(0)]v_{odd}(t)$ and hence tends to zero exponentially as $|t|\to +\infty$. As a consequence $$ \lim_{n\to +\infty}v_{odd}(n\pi) = \lim_{n\to +\infty}u_{odd}(n\pi). $$ On the other hand $-u_{odd}(-n\pi) = u_{odd}(n\pi) = u_{odd}(-n\pi)$ because of oddness and periodicity. As a consequence $u_{odd}(n\pi) = 0$ and then $$ M_2(0) = 2 \lim_{n\to +\infty}\int_{-n\pi}^{n\pi} \dot v_{even}(t)q_{odd}(t) dt = 2 \int_{-\infty}^{\infty} \dot v_{even}(t)q_{odd}(t) dt $$ the last equality being justified by the fact that $v_{even}(t) + \frac{c}{2f'(0)}$ tends to zero, as $|t|\to +\infty$, together with its first derivative, being a bounded solution of $$ \ddot x = f'(p(t))x - \frac{c}{2f'(0)} [ f'(p(t)) - f'(0)]. $$ At this point we note that when $q_{odd}(t+ \frac{\pi}{k}) = -q_{odd}(t)$ we have $v_{\pi/k}(t) = v_{even}(t) - v_{odd}(t)$ and hence it is easy to see that $$ \begin{array}{cc} M_2(\pi/k) & \ds\!\!\! = 2\lim_{n\to+\infty}\int_{-n\pi}^{n\pi} \dot v_{\pi/k}(t)[\frac{c}{2} - q_{odd}(t)] dt \\ & \ds\!\!\! = -2\int_{-\infty}^{\infty} \dot v_{even}(t) q_{odd}(t)\, dt = -M_2(0) \end{array} $$ and the theorem follows provided we prove that $M_2(0)\neq 0$. Now, from equation (\ref{eq:26}) we obtain: $$ v_{even}(t) = \frac{c}{2}\left ( \dot p(t) \int_0^{t} y(s) ds - p(t) y(t) \right ) = \frac{c}{2} v(t) $$ where $v(t)$ is defined by the equality.
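As an aside, not needed for the proof, two facts used above can be checked numerically for equation (\ref{eq:25}): that $p(t)=P(e^{2t})$ solves the unperturbed equation, and that $M(\alpha)=0$, so that the two one-sided integrals defining $c_1$ are opposite to each other. The following short Python sketch does this for the illustrative choices $m=1$, $q(t)=\sin t$ and a few values of $\alpha$ (test data, not taken from the text); derivatives of $p$ are approximated by central differences and the improper integrals are truncated at $|s|=40$, which is harmless since $\dot p(s)$ decays like $e^{-2|s|}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m = 1
a = np.exp(m * np.pi)                 # a = e^{m pi}
k = 1.0 / np.tanh(2 * m * np.pi)      # coth(2 m pi)

def f(x):                             # f(x) = 4x(2x^2 - 3x coth(2 m pi) + 1)
    return 4.0 * x * (2.0 * x**2 - 3.0 * k * x + 1.0)

def p(t):                             # homoclinic orbit p(t) = P(e^{2t})
    u = np.exp(2.0 * t)
    return (a**4 - 1.0) * u / ((u + a**2) * (a**2 * u + 1.0))

h = 1e-4                              # step for central differences
pdot  = lambda t: (p(t + h) - p(t - h)) / (2.0 * h)
pddot = lambda t: (p(t + h) - 2.0 * p(t) + p(t - h)) / h**2

# 1) p(t) solves x'' = f(x): the residual stays at the level of the
#    finite-difference error
for t in (-3.0, -1.0, 0.0, 1.5, 3.0):
    print("t = %+.1f   p'' - f(p) = %+.2e" % (t, pddot(t) - f(p(t))))

# 2) M(alpha) = int p'(s) q(s+alpha) ds vanishes, i.e. the two one-sided
#    integrals defining c_1 are opposite to each other
q = np.sin                            # test perturbation, C^1 and 2 pi-periodic
for alpha in (0.0, 0.7, 2.0):
    right = quad(lambda s: pdot(s) * q(s + alpha), 0.0, 40.0, limit=200)[0]
    left  = quad(lambda s: pdot(s) * q(s + alpha), -40.0, 0.0, limit=200)[0]
    print("alpha = %.1f   right = %+.6f   left = %+.6f   sum = %+.1e"
          % (alpha, right, left, right + left))
\end{verbatim}
Both the residuals and the sums $\mathrm{right}+\mathrm{left}=M(\alpha)$ should come out at the level of the discretization error, while each one-sided integral is visibly non-zero.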
We note that $v(t)$ is the bounded solution of $\ddot x = f'(p(t))x + 1$, with $\dot x (0) =0$ and that $v(t) + \frac{1}{f'(0)}$ tends to zero exponentially, as $|t| \to +\infty$, together with its derivative. Moreover $$ M_2(0) = c \int_{-\infty}^{\infty}\dot v(t)q_{odd}(t) dt = c \int_0^{2\pi}r(t)q_{odd}(t) dt $$ where \begin{equation} r(t) = \sum_{k\in\mathbb{Z}} \dot v(t+2k\pi)\label{eq:26a} \end{equation} is $2\pi$-periodic and odd. From $p(t) = P(e^{\mu t})$ and $y(t) = Y(e^{\mu t})$ we see that $v(t) = V(e^{\mu t})$ where $$ V(x) = xP'(x)\int_1^{x}\frac{Y(\sigma)}{\sigma} d\sigma - P(x)Y(x). $$ Note that $V(x)$ is linear in $Y(x)$. Applying the above considerations to equation (\ref{eq:25}) (hence with $\mu=2$) we obtain after some integrations: $$ \begin{array}{rl} \ds V(x) +\frac{1}{4} & \ds \!\!\! = \frac{3a^{2}x(a^4+1)(1-x^{2}) \log x}{4(a^2x+1)^2(x+a^2)^2} \\ & \!\!\! \ds + \frac{x[(a^{12}+23a^8+23a^4+1)(x^2+1)+16a^{2} (a^{8}+4a^4+1)x]}{16a^2(a^2x+1)^2(x+a^2)^{2}}. \end{array} $$ We set $$ \widetilde M_2(\alpha) := \int_{-\infty}^{+\infty} \dot v(t) q_{odd}(t+\alpha) dt. $$ Then $\widetilde M_2(\alpha)$ is $2\pi$-periodic and $M_2(0) = c\widetilde M_2(0)$. Expanding $\widetilde M_2(\alpha)$ into its Fourier series we get: $$ \widetilde M_2(\alpha) = -\sum_{n\in\mathbb{Z}} in\gamma_{n}q_{n} e^{in\alpha} $$ $q_{n}$ being the $n$-th Fourier coefficient of $q_{odd}(t)$ and $$ \gamma_{n} = \int_{-\infty}^{+\infty} [V(e^{2t})+\frac{1}{4}] e^{int} dt. $$ Note that $in\gamma_{-n}/(2\pi )$ are also the Fourier coefficients of the function $r(t)$ defined in (\ref{eq:26a}). Since $q_{odd}(t)$ is an odd real function we easily get $q_{n} = ic_{n}$ where $c_{n}$ are real numbers such that $c_{n}=-c_{-n}$. Thus \begin{equation} \widetilde M_2(\alpha) = \sum_{n\in\mathbb{Z}} n\gamma_{n}c_{n} e^{in\alpha}. \label{eq:27} \end{equation} Since $\widetilde M_2(\alpha)$ is a real-valued function, we also get $\bar\gamma_{n} =\gamma_{-n}$. Moreover, arguing as in Section 2 of this paper we can evaluate the Fourier coefficients of $\widetilde M_2(\alpha)$ by means of residues and get, for $n\neq 0$: $$ \gamma_{n} = \frac{\pi ie^{n\pi}}{\sinh (n\pi)}\Big ( \sum_{w_j} \mathop{\rm Res} (W(u)u^{in-1},w_j) + \frac{2\pi i}{e^{2n\pi}-1} \sum_{w_j}\mathop{\rm Res} (H(u)u^{in-1},w_j) \Big) $$ where $w_j$ runs over the set $\{ \pm ia, \pm i/a\}$ and: $$ \begin{array}{rl} W(u) & \ds \!\!\! = W_0(u) + H(u)\log u \\ \\ H(u) & \ds \!\!\! = \frac{3a^{2}(a^{4}+1)u^{2}(1-u^{4})} {2(a^2u^{2}+1)^2(u^{2}+a^2)^2} \\ \\ W_0(u) & \ds \!\!\! = \frac{u^{2}((a^{12}+23a^8+23a^4+1)(u^4+1) +16a^{2}(a^8+4a^4+1)u^{2})}{16a^2(a^2u^{2}+1)^2(u^{2}+a^2)^{2}} \end{array} $$ Note that $W(u)$ is the extension of $V(x^{2})+\frac{1}{4}$ to the complex plane. Moreover $W(u)$ is a meromorphic function on $\mathbb{C} \setminus\{x\in\mathbb{R} : x\ge 0\}$ that satisfies $\lim_{u\to\infty} W(u) = 0$ uniformly with respect to $\Arg (u)\in (0,2\pi)$. A tedious computation shows that: $$ \begin{array}{rl} \mathop{\rm Res} (H(u)u^{in-1},ia) = & \ds \!\!\! \frac{3i(a^{4}+1)} {8(a^{4}-1)}n(\cos(n\log a)+i\sin(n\log a)) e^{-n\pi/2} \\ \mathop{\rm Res} (H(u)u^{in-1},i/a) = & \ds \!\!\! -\frac{3i(a^{4}+1)} {8(a^{4}-1)}n(\cos(n\log a)-i\sin(n\log a)) e^{-n\pi/2} \\ \mathop{\rm Res} (H(u)u^{in-1},-ia) = & \ds \!\!\! \frac{3i(a^{4}+1)} {8(a^{4}-1)}n(\cos(n\log a)+i\sin(n\log a)) e^{-3n\pi/2} \\ \mathop{\rm Res} (H(u)u^{in-1},-i/a) = & \ds \!\!\! -\frac{3i(a^{4}+1)} {8(a^{4}-1)}n(\cos(n\log a)-i\sin(n\log a)) e^{-3n\pi/2}.
\end{array} $$ Thus: $$ \sum_{w_j}\mathop{\rm Res} (H(u)u^{in-1},w_j) = -\frac{3(a^{4}+1)} {4(a^{4}-1)}n(e^{-3n\pi/2}+e^{-n\pi/2})\sin (n\log a) $$ which is zero for $a=e^{m\pi}$. A similar computation gives: $$ \begin{array}{rl} \ds\sum_{w_j}\mathop{\rm Res} (W_0(u)u^{in-1},& \!\!\!\!\! w_j) = \ds -\frac{in(e^{n\pi}+1)}{32e^{3n\pi/2}}\frac{a^{12}+9a^{8} -9a^{4}-1}{a^{4}(a^{4}-1)}\cos (n\log a) \\ & \!\!\!\ds -\frac{3i(e^{n\pi}+1)}{4e^{3n\pi/2}}\frac{a^{4}+1} {a^{4}-1}\sin (n\log a) \\ \ds\sum_{w_j}\mathop{\rm Res} (H(u)u^{in-1}\log u,& \!\!\!\!\! w_j) = \ds\frac{3in(e^{n\pi}+1)\log a}{4e^{3n\pi/2}}\frac{a^{4}+1} {a^{4}-1}\cos (n\log a) \\ & \!\!\!\ds +\frac{3i(a^{4}+1)[2(e^{n\pi}+1) - n\pi (e^{n\pi}+3)]} {8(a^{4}-1)e^{3n\pi/2}}\sin (n\log a) \end{array} $$ As a consequence, setting $a=e^{m\pi}$: $$ \gamma_{n} = \frac{(-1)^{nm}\pi n}{8\sinh (n\frac{\pi}{2})} \left [ \cosh ^{2}(2m\pi) - 6m\pi \coth (2m\pi) + 2 \right ] $$ that is $$ M_2(0) = c\cdot C_{m}\sum_{n\in \mathbb{Z} \setminus \{0\} }\frac{(-1)^{nm}n^{2}c_{n}}{\sinh(n\frac{\pi}{2})} = 2 c \cdot C_{m}\sum_{n>0}\frac{(-1)^{nm}n^{2} c_{n}}{\sinh (n\frac{\pi}{2})} $$ $C_{m}$ being a positive constant. Thus $\gamma_{n}\neq 0$ for any $n\in\mathbb{Z}\setminus\{0\}$, and then $r(t)\not\equiv 0$. Since, for any nonzero real number $c$, $M_2(0)\ne 0$ if and only if $\int\limits_0^{2\pi} r(t)q_{odd}(t) dt \ne 0$, the thesis of the present theorem follows. For example, the set of $2\pi$-periodic perturbations for which $M_2(0)\neq 0$ contains all functions of the form $\frac{c}{2}+q_{odd}(t)$, where $c\neq 0$ and $q_{odd}(t)$ is a nonzero, $2\pi$-periodic, odd function whose Fourier coefficients $ic_{n}$ satisfy $(-1)^{nm}c_{n}\ge 0$ (resp. $\le 0$) for $n>0$. The proof is finished. We conclude this Section with a remark. Letting $m\to+\infty$ in equation \begin{equation} \ddot x = 4x(2x^{2}-3x\coth (2 m\pi)+1) \label{eq:30} \end{equation} we obtain the equation \begin{equation} \ddot x = 4x(2x^{2}-3x+1) \label{eq:31} \end{equation} which has two {\it heteroclinic} connections between the equilibria $x=0$ and $x=1$. Since the Melnikov function of equation (\ref{eq:30}) is identically zero for any $2\pi$-periodic perturbation of the equation, one might wonder whether the same holds for the Melnikov functions associated to the heteroclinic orbits of equation (\ref{eq:31}). The answer is negative, as can be easily seen by direct evaluation of the Fourier coefficients of the Melnikov function. In fact, let us consider, for example, the heteroclinic solution of (\ref{eq:31}) going from $x=0$ to $x=1$: $$ p_{\infty}(t) = {e^{2t}\over e^{2t}+1} = R(e^{t}) $$ where $R(x) ={x^{2}\over x^{2}+1}$. Applying the procedure described in this paper we see that the Fourier coefficients of the Melnikov function are given by $\delta_{n}q_{n}$ where $\delta_0=1$ and, for $n\neq 0$: $$ \delta_{n} = {2n\pi\over 1-e^{-2n\pi}} [ \mathop{\rm Res}({u^{in+1}\over u^{2}+1},i) + \mathop{\rm Res}({u^{in+1}\over u^{2}+1},-i)] = {n\pi\over 2\sinh (n{\pi\over 2})}. $$ A numerical check of these coefficients is given below. Geometrically, this strange behaviour depends on the fact that the homoclinic solution of (\ref{eq:30}) gets orbitally closer and closer (as $m\to\infty$) to the {\it heteroclinic cycle} and not to any of the heteroclinic orbits.
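Incidentally, the value of $\delta_{n}$ just obtained can also be recovered numerically: since the Melnikov function along $p_{\infty}$ is $\int_{-\infty}^{+\infty}\dot p_{\infty}(t)q(t+\alpha)\,dt$, one has $\delta_{n} = \int_{-\infty}^{+\infty}\dot p_{\infty}(t)e^{int}\,dt$, and this integral can be evaluated by direct quadrature. The following short Python sketch (illustrative only; the truncation at $|t|=40$ and the range of $n$ are arbitrary choices) compares the quadrature with the closed form above, using $\dot p_{\infty}(t) = \frac{1}{2}\cosh^{-2}t$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pdot_inf(t):
    # derivative of p_inf(t) = e^{2t}/(e^{2t}+1), i.e. (1/2) sech^2(t)
    return 0.5 / np.cosh(t)**2

for n in range(5):
    re = quad(lambda t: pdot_inf(t) * np.cos(n * t), -40.0, 40.0, limit=200)[0]
    im = quad(lambda t: pdot_inf(t) * np.sin(n * t), -40.0, 40.0, limit=200)[0]
    closed = 1.0 if n == 0 else n * np.pi / (2.0 * np.sinh(0.5 * n * np.pi))
    print("n = %d   quadrature = %+.10f %+.1e i   closed form = %+.10f"
          % (n, re, im, closed))
\end{verbatim}
The real parts should agree with $n\pi/(2\sinh(n\pi/2))$ (and with $\delta_0=1$ for $n=0$) to quadrature accuracy, while the imaginary parts vanish; in particular $\delta_{n}\neq 0$ for every $n$, in contrast with the homoclinic case of equation (\ref{eq:30}).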
As a matter of fact, setting $$ p_{m}(t) = {e^{2t}(e^{4m\pi}-1)\over (e^{2t}+e^{2m\pi}) (e^{2t+2m\pi}+1)}, $$ the Melnikov function associated to a heteroclinic solution of (\ref{eq:31}) is the limit, as $m\to\infty$, of either $$ \int_0^{+\infty}\dot p_{2m}(t)q(t+\alpha) dt $$ or $$ \int^{0}_{-\infty}\dot p_{2m}(t)q(t+\alpha) dt $$ and these are not zero in general. To see this, consider, for example, the heteroclinic solution $p_{\infty}(t)$ of (\ref{eq:31}). We have, for $t\le 0$: \begin{equation} 0 \le \dot p_{\infty}(t+m\pi) - \dot p_{m}(t) = {1\over 2\cosh^{2}(m\pi-t)} \le 2e^{2t}. \label{eq:32} \end{equation} From Lebesgue's dominated convergence theorem we then get: $$ \lim_{m\to+\infty}\int_{-\infty}^{0} [\dot p_{\infty}(t+m\pi) - \dot p_{m}(t)] b(t) dt = 0 $$ for any $L^{\infty}$-function $b(t)$, and hence, using also the $2\pi$-periodicity of $q(t)$: $$ \begin{array}{l} \ds\int_{-\infty}^{\infty} \ds \dot p_{\infty}(t) q(t+\alpha) dt = \lim_{m\to+\infty}\int_{-\infty}^{2m\pi} \dot p_{\infty}(t) q(t+\alpha) dt = \\ \ds\lim_{m\to+\infty}\int_{-\infty}^{0} \dot p_{\infty}(t+2m\pi) q(t+\alpha ) dt = \ds\lim_{m\to+\infty}\int_{-\infty}^{0}\dot p_{2m}(t) q(t+\alpha ) dt. \end{array} $$ A similar argument shows that $$ \int_{-\infty}^{\infty} \ds \dot p_{\infty}(t+\pi) q(t+\alpha)dt = \lim_{m\to+\infty}\int_{-\infty}^{0}\dot p_{2m+1}(t) q(t+\alpha ) dt. $$ \begin{thebibliography}{99} \bibitem[1]{BF} Battelli,~F.~and~Fe\v ckan,~M. {Chaos arising near a topologically transversal homoclinic set}, preprint. \bibitem[2]{BL} Battelli,~F.~and~Lazzari,~C. {Exponential dichotomies, heteroclinic orbits, and Melnikov functions}, {\sl J. Diff. Equations} {\bf 86} (1990), 342--366. \bibitem[3]{F} Fe\v ckan,~M. {Higher dimensional Melnikov mappings}, {\sl Math. Slovaca} {\bf 49} (1999), 75--83. \bibitem[4]{Ge} Gelfreich,~V.~G. {A proof of the exponentially small transversality of the separatrices for the standard map}, {\sl Comm. Math. Phys.} {\bf 201} (1999), 155--216. \bibitem[5]{G} Gruendler,~J. {The existence of homoclinic orbits and the method of Melnikov for systems in $\mathbb{R}^n$}, {\sl SIAM J. Math. Analysis} {\bf 16} (1985), 907--931. \bibitem[6]{GH} Guckenheimer,~J.~and~Holmes,~P. {Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields}, {\sl Springer-Verlag}, New York, 1983. \bibitem[7]{H} Holmes,~P. {Averaging and chaotic motions in forced oscillations}, {\sl SIAM J. Appl. Math.} {\bf 38} (1980), 65--80. \bibitem[8]{P} Palmer,~K.~J. {Exponential dichotomies and transversal homoclinic points}, {\sl J. Diff. Equations} {\bf 55} (1984), 225--256. \bibitem[9]{R} Rudin,~W. {Real and Complex Analysis}, {\sl McGraw-Hill}, Inc. New York, 1974. \bibitem[10]{S} Smale,~S. {Differentiable dynamical systems}, {\sl Bull. Amer. Math. Soc.} {\bf 73} (1967), 747--817. \end{thebibliography} \noindent\textsc{Flaviano Battelli} \\ Dipartimento di Matematica ``V. Volterra'', \\ Facolt\`a di Ingegneria -- Universit\`a di Ancona, \\ Via Brecce Bianche 1, 60131 Ancona - Italy \\ e-mail: fbat@dipmat.unian.it \smallskip \noindent\textsc{Michal Fe\v ckan}\\ Department of Mathematical Analysis, Comenius University, \\ Mlynsk\'a dolina, 842 48 Bratislava - Slovakia \\ e-mail: Michal.Feckan@fmph.uniba.sk \end{document}