\documentclass[reqno]{amsart}
\AtBeginDocument{{\noindent\small {\em Electronic Journal of Differential Equations}, Vol. 2004(2004), No. 21, pp. 1--13.\newline ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu \newline ftp ejde.math.txstate.edu (login: ftp)} \thanks{\copyright 2004 Texas State University - San Marcos.} \vspace{9mm}}
\begin{document}
\title[\hfilneg EJDE-2004/21\hfil Existence of solutions to a Hamiltonian system] {Existence of solutions to a Hamiltonian system without convexity condition on the nonlinearity}
\author[Gregory S. Spradlin\hfil EJDE-2004/21\hfilneg] {Gregory S. Spradlin}
\address{Gregory S. Spradlin \hfill\break Department of Mathematics\\ Embry-Riddle Aeronautical University\\ Daytona Beach, Florida 32114-3900, USA}
\email{spradlig@erau.edu}
\date{}
\thanks{Submitted October 31, 2003. Published February 12, 2004.}
\subjclass[2000]{34C37, 47J30}
\keywords{Mountain Pass Theorem, variational methods, Nehari manifold, \hfill\break\indent homoclinic solutions}
\begin{abstract}
We study a Hamiltonian system that has a superquadratic potential and is asymptotic to an autonomous system. In particular, we show the existence of a nontrivial solution homoclinic to zero. Many results of this type rely on a convexity condition on the nonlinearity, which makes the problem resemble in some sense the special case of homogeneous (power) nonlinearity. This paper replaces that condition with a different condition, which is automatically satisfied when the autonomous system is radially symmetric. Our proof employs variational and mountain-pass arguments. In some similar results requiring the convexity condition, solutions inhabit a submanifold homeomorphic to the unit sphere in the appropriate Hilbert space of functions. An important part of the proof here is the construction of a similar manifold, using only the mountain-pass geometry of the energy functional.
\end{abstract}
\maketitle
\numberwithin{equation}{section}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}

\section{Introduction}
As Poincar\'e showed, the structure of homoclinic orbits of a system of differential equations, or of a dynamical system, can reveal part of the structure of the entire set of solutions. For nonautonomous differential equations, dynamical systems tools may be insufficient to find homoclinic solutions. Variational methods can be used to find periodic solutions of differential equations fairly easily \cite{MW}. When one searches for homoclinic solutions, the variational problem lacks some compactness properties present in the periodic case. However, these difficulties can be overcome by careful arguments (see \cite{R1}). Consider the system
\begin{equation}
-u'' + u = g(t)V'(u), \label{e1.0}
\end{equation}
where $u: \mathbb{R} \to {\mathbb{R}^N}$, $V'$ is the gradient of $V:{\mathbb{R}^N} \to \mathbb{R}$, and $V(q)$ is a positive potential function similar to a superquadratic power of $q$ (i.e., $|q|^p$, for some $p > 2$). Assume that $g$ is positive and bounded away from zero (see \cite{CM} for a relaxation of this condition). We seek nontrivial solutions homoclinic to zero, or simply, ``homoclinics.'' That is, solutions $u \not\equiv 0$ with $u(t) \to 0$ and $u'(t) \to 0$ as $t \to \pm \infty$. A natural and surprisingly difficult question is, what conditions must be assumed on $g$ and $V$ to conclude the existence of a nontrivial homoclinic solution? That we must assume something is shown by the following counterexample (see \cite{EL} for a PDE version): let $N = 1$ (the single equation case), $V(q) = q^4$ (but any suitable $V$ will do), and let $g$ be monotone and nonconstant. Then \eqref{e1.0} has no nontrivial homoclinic. To prove this, multiply both sides of \eqref{e1.0} by $u'$, and integrate from $-\infty$ to $\infty$, using integration by parts on the right side.
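In slightly more detail (a sketch of that computation, written for the scalar case $N = 1$ with $g$ smooth and strictly increasing, for definiteness): multiplying \eqref{e1.0} by $u'$ gives
$$\frac{d}{dt}\Big({1 \over 2} u^2 - {1 \over 2}(u')^2\Big) = g(t)\frac{d}{dt}V(u(t)).$$
Integrating over $\mathbb{R}$, the left side contributes zero because $u(t), u'(t) \to 0$ as $t \to \pm\infty$, while on the right side the boundary terms of the integration by parts vanish because $g$ is bounded and $V(u(t)) \to V(0) = 0$; hence
$$0 = \int_\mathbb{R} g(t)\frac{d}{dt}V(u(t))\,dt = -\int_\mathbb{R} g'(t)\,V(u(t))\,dt.$$
Since $g' > 0$ and $V(u(t)) = u(t)^4 > 0$ wherever $u(t) \neq 0$, the last integral can vanish only if $u \equiv 0$.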
Then the left side is zero, while the right side is nonzero. On the affirmative side, existence of homoclinics has been proven for $g$ periodic and, more recently, for $g$ almost periodic (\cite{STT}, \cite{CMN}). \cite{AM} contains results for ``slowly oscillating'' $g$ ($g$ oscillates between two positive values and $g' \to 0$ as $t \to \infty$). This author has existence results for $g$ a perturbation of a periodic function (\cite{S1}), and for $g$ approaching a constant exponentially quickly as $t \to \pm \infty$ (\cite{S2}). Between the counterexample cited above and the positive results to date, there is a large gap. Most of the results cited above (all except for \cite{R1}), and many newer ones (e.g., \cite{CMN}), rely on a certain convexity assumption on $V$. This assumption is given below, but for now, we note that it makes the variational problem similar to the power case ($V(q) = |q|^p$). It is an interesting challenge to remove or at least weaken this assumption and attempt to reach similar conclusions. In order to state the theorem, we must introduce the variational framework. Consider an ``unfactored'' version of \eqref{e1.0},
\begin{equation}
-u'' + u = W'(t,u), \label{e1.1}
\end{equation}
where $W:\mathbb{R} \times {\mathbb{R}^N} \to \mathbb{R}$ and $W'= \nabla_q W = ({{\partial W}\over{\partial q_1}},\ldots, {{\partial W}\over{\partial q_N}})$. Let $E = W^{1,2}(\mathbb{R}, \mathbb{R}^N)$, with the inner product $(u,w) = {\int_{\mathbb{R}}} u' \cdot w' + u \cdot w \,dt$ and the corresponding norm $\|u\| = {\sqrt {(u,u)}}$. The functional $I:E \to \mathbb{R}$ corresponding to \eqref{e1.1} is
\begin{equation}
I(u) = { 1 \over 2}\|u\|^2 - {\int_\mathbb{R}} W(t,u(t))\,dt. \label{e1.2}
\end{equation}
Conditions will be put on $W$ to ensure that $I$ is well-defined and has a continuous Fr\'echet derivative. Critical points of $I$ correspond exactly to homoclinic solutions of \eqref{e1.1} (see \cite{R2}).
Let $V:{\mathbb{R}^N} \to \mathbb{R}$ satisfy $W(t,q) \to V(q)$ as $t \to \pm \infty$ (this will be made more precise in a moment). The functional $I_0$, defined by
\begin{equation}
I_0(u) = { 1 \over 2}\|u\|^2 - {\int_\mathbb{R}} V(u(t))\,dt, \label{e1.3}
\end{equation}
corresponds to the autonomous system
\begin{equation}
-u'' + u = V'(u). \label{e1.4}
\end{equation}
$I_0$ and $I$ have ``mountain-pass geometry.'' That is (in the case of $I_0$, for example), $0$ is a strict local minimum of $I_0$ (with $I_0(0) = 0$, and for some $r > 0$, $\inf\{I_0(u) \mid \|u\| = r\} > 0$), and $I_0(u) < 0$ for some $u \in E$. Therefore the set of ``mountain-pass curves''
$$\Gamma_0 = \{\gamma \in C([0,1], E) \mid \gamma(0) = 0, \ I_0(\gamma(1)) < 0\}$$
is nonempty, and the ``mountain pass'' value $c_0$ defined by
$$c_0 = \inf_{\gamma \in \Gamma_0} \max_{\theta \in [0,1]} I_0(\gamma(\theta))$$
is positive. The result proven here is:
\begin{theorem} \label{thm1.7}
Let $N \in \mathbb{N}$. Let $V$ and $F$ satisfy
\begin{itemize}
\item[(V1)] $V \in C^{1,1}(\mathbb{R}^N, \mathbb{R})$
\item[(V2)] $V(0) = 0$, $V(q) > 0$ for all $q \neq 0$
\item[(V3)] There exists $\mu > 2$ such that $V'(q)q \geq \mu V(q)$ for all $q \in \mathbb{R}^N$, where $V'(q) \equiv \nabla V(q)$
\item[(V4)] There exists $d > 0$ such that $I_0$ (defined by \eqref{e1.3}) has no critical values other than $c_0$ in the interval $(0, c_0 + d)$.
\item[(F1)] $F \in C(\mathbb{R}^+,\mathbb{R}^+)$
\item[(F2)] $\limsup_{s\to 0^+} {F(s) \over s^2} < \infty$.
\end{itemize}
Then there exists $\epsilon = \epsilon(V,F)$ with the following property: If $W$ satisfies
\begin{itemize}
\item[(W1)] $W \in C^{1,1}(\mathbb{R} \times \mathbb{R}^N, \mathbb{R})$
\item[(W2)] $W(t,0)=0$, $W(t,q) > 0$ for all $t \in \mathbb{R}$, $q \in \mathbb{R}^N \setminus \{0\}$.
\item[(W3)] There exists $\mu > 2$ such that $W'(t,q)q \geq \mu W(t,q)$ for all $t \in \mathbb{R}$, $q \in \mathbb{R}^N$
\item[(W4)] For $q \neq 0$, $|W'(t,q)-V'(q)|/|q| \to 0$ as $|t| \to \infty$, uniformly in $q$
\item[(W5)] $W(t,q) \geq V(q)-\epsilon F(|q|)$ for all $t \in \mathbb{R}$, $q \in \mathbb{R}^N$,
\end{itemize}
then \eqref{e1.1} has a nontrivial solution $u$ homoclinic to zero.
\end{theorem}

Note that by (W2) and (W3), $W(t,q)$ grows superquadratically in $q$ as $|q| \to \infty$, and $W(t,q) = o(|q|^2)$ as $q \to 0$ (similarly for $V$). Therefore, $I(u) < 0$ for some $u \in E$, and $\Gamma$ and $\Gamma_0$ are nonempty. Also, $I(u) = {1 \over 2} \|u\|^2 - o(\|u\|^2)$ for small $\|u\|$ (similarly for $I_0$), so $c_0$ (and the similarly defined $c$) are positive. (V1)-(V3) are all satisfied in the canonical case $V(q) = |q|^\alpha/\alpha$ for $\alpha > 2$.

\noindent The ``missing convexity assumption'' on $V$ and $W$ is the following:
\begin{equation} \label{e1.8}
\begin{aligned}
&\text{For all } t \in \mathbb{R} \text{ and } q \in {\mathbb{R}^N} \setminus \{0\},\ W(t,sq)/s^2 \text{ is}\\
&\text{a nondecreasing function of } s \text{ for } s > 0, \text{ and}\\
&V(sq)/s^2 \text{ is a nondecreasing function of } s \text{ for } s > 0.
\end{aligned}
\end{equation}
This condition holds in the power case, $V(q) = |q|^\alpha/\alpha,\ \alpha > 2$. (V4) is apparently independent of \eqref{e1.8}. Although (V4) may be difficult to verify in general, it is true if (V1) and (V2) are satisfied and $V$ is radially symmetric, that is, $V(q) \equiv V(|q|)$ (a proof is at the end of the paper). Let us examine the implications of the ``non-assumption'' \eqref{e1.8}. Under \eqref{e1.8}, for any $u \in E\setminus \{0\}$ and $s > 0$,
\begin{align*}
I(su) &= {1 \over 2} s^2 \|u\|^2 - \int_\mathbb{R} W(t,su)\,dt \\
&= s^2 \big( {1 \over 2} \|u\|^2 - \int_\mathbb{R} {W(t,su) \over s^2}\,dt \big).
\end{align*}
So for any $u \in E \setminus \{0\}$, the mapping $s \mapsto I(su)$ begins at $0$ at $s = 0$, increases to a positive maximum, then decreases to $- \infty$.
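The claimed shape of $s \mapsto I(su)$ can be spelled out briefly (an elaboration of the computation above): write $I(su) = s^2 h(s)$, where
$$h(s) = {1 \over 2}\|u\|^2 - \int_\mathbb{R} {W(t,su) \over s^2}\,dt.$$
By \eqref{e1.8}, $h$ is nonincreasing in $s$. As $s \to 0^+$, $h(s) \to {1 \over 2}\|u\|^2 > 0$, since $W(t,q) = o(|q|^2)$ near $q = 0$. On the other hand, (W3) implies $W(t,su) \geq s^\mu W(t,u)$ for $s \geq 1$, so
$$h(s) \leq {1 \over 2}\|u\|^2 - s^{\mu - 2}\int_\mathbb{R} W(t,u)\,dt \to -\infty \quad \text{as } s \to \infty.$$
Thus $h$ is positive for small $s$ and eventually negative, so $I(su) = s^2 h(s)$ is positive on some interval $(0, s_0)$, negative for large $s$, and tends to $-\infty$.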
Defining $$\mathcal{S} = \{u \in E \setminus \{0\} \mid I'(u)u = 0\},$$ % \label{e1.10} $\mathcal{S}$ is a codimension-one submanifold of $E$, homeomorphic to the unit sphere in $E$ via radial projection. Any ray of the form $\{su \mid s>0\}$ $(u \neq 0)$ intersects $\mathcal{S}$ exactly once. All nonzero critical points of $I$ are on $\mathcal{S}$. Conversely, under suitable smoothness assumptions on $V$, any critical point of $I$ constrained to $\mathcal{S}$ is a critical point of $I$ (in the large) (see \cite{S1}). Therefore, one can work with $\mathcal{S}$ instead of $E$, and look for, say, a local minimum of $I$ constrained to $\mathcal{S}$ (which may be easier than looking for a saddle point of $I$). There is another way to use \eqref{e1.8}: for any $u \neq 0$, the ray from $0$ passing through $u$ can be used (after rescaling in $\theta$) as a mountain-pass curve along which the maximum value of $I$ is $I(u)$. Conversely, any mountain-pass curve $\gamma \in \Gamma$ intersects $\mathcal{S}$ at least once (\cite{CR}). Therefore, one may work with points on $\mathcal{S}$ instead of paths in $\Gamma$. Without assumption \eqref{e1.8}, the topology of $\mathcal{S}$ is unclear, though any ray through the origin in $E$ must intersect $\mathcal{S}$ {\it at least} once. Unfortunately, (V4) is not weaker than \eqref{e1.8}, but merely (at least apparently) independent of \eqref{e1.8}. While \eqref{e1.8} ensures that the functional $I_0$ (or even $I$, for a non-autonomous problem) has no critical values below the mountain-pass value $c_0$, it implies nothing about what critical values may exist {\it above} $c_0$. The proof of Theorem \ref{thm1.7} has a feature that may be new and of interest for other problems. A set is constructed with some of the properties enjoyed by $\mathcal{S}$. This set is the boundary of the basin of attraction of the zero function, under a gradient flow for the functional $I$. 
The construction of that set relies only on the mountain-pass geometry of $I$. This paper is organized as follows: Section~2 contains some properties of $I$ and the associated gradient flow. Section~3 contains the rest of the proof of Theorem \ref{thm1.7}. Also, $\epsilon$ is computed for the power case, $V(q) = |q|^\alpha/\alpha$, $F(s)=s^\alpha/\alpha$.

\section{Properties of $I$ and the Associated Flow}
First, some fairly unsurprising facts about the functional $I$.
\begin{lemma} \label{lm2.0}
\begin{itemize}
\item[(i)] $I \in C^1(E, \mathbb{R})$.
\item[(ii)] $I$ and $I'$ are bounded on bounded subsets of $E$.
\item[(iii)] $I'$ is Lipschitz on bounded subsets of $E$.
\end{itemize}
\end{lemma}
A proof of (i) is found in \cite{R2}. (ii) and (iii) are proven in \cite{S1}, and probably elsewhere.
\medskip

A {\it Palais-Smale sequence} for $I$ is a sequence $(u_m) \subset E$ with $(I(u_m))$ convergent and $\|I'(u_m)\| \to 0$ as $m \to \infty$. Here $\|I'(u_m)\|$ is defined using the operator norm, $\|I'(u_m)\| = \sup\{I'(u_m)w \mid w \in E,\ \|w\| \leq 1\}$. $I$ does not satisfy the Palais-Smale condition; that is, a Palais-Smale sequence need not be precompact. However, any Palais-Smale sequence is bounded in norm. This is well known, but the lemma below gives a formula we will need for the bound.
\begin{lemma} \label{lm2.1}
For all $u \in E$,
$$\|u\| \leq { {2\|I'(u)\| + \sqrt{2\mu(\mu - 2)\max(0, I(u))} } \over{\mu - 2}}.$$
\end{lemma}
\begin{proof}
\begin{align*}
-\|I'(u)\|\ \|u\| &\leq I'(u)u = \|u\|^2 - {\int_\mathbb{R}} W'(t,u)u\,dt \\
&\leq \|u\|^2 - \mu {\int_\mathbb{R}} W(t,u)\,dt \\
&=\mu I(u) - ({ {\mu - 2} \over 2}) \|u\|^2,
\end{align*}
so
\begin{equation}
({ {\mu - 2} \over 2}) \|u\|^2 -\|I'(u)\|\,\|u\| - \mu I(u) \leq 0. \label{e2.3}
\end{equation}
Applying the quadratic formula to \eqref{e2.3}, and the inequality ${\sqrt {A^2 + B^2}} \leq |A| + |B|$, yields
\begin{align*}
\|u\| &\leq { {\|I'(u)\| + \sqrt{\|I'(u)\|^2 + 2\mu(\mu-2)\max(0,I(u))} } \over {\mu - 2}} \\
&\leq { {2\|I'(u)\| + \sqrt{ 2\mu(\mu-2)\max(0,I(u))} } \over {\mu - 2}}.
\end{align*}
\end{proof}
To describe the fate of Palais-Smale sequences, it will be convenient to define the translation operator $\tau$: for a function $u$ on the reals and $a \in \mathbb{R}$, let $\tau_a u$ be $u$ shifted by $a$; that is, $(\tau_a u)(t) = u(t- a)$.
The proposition below states that a Palais-Smale sequence ``splits'' into the sum of a critical point of $I$ and translates of critical points of $I_0$:
\begin{proposition} \label{prop2.5}
If $(u_m) \subset E$ with $I'(u_m) \to 0$ and $I(u_m) \to a > 0$, then there exist $k \geq 0$, $v_0, v_1, \dots, v_k \in E$, and sequences $(t^i_m)_{m \geq 1} \subset {\mathbb{R}}$ for $1 \leq i \leq k$, such that
\begin{itemize}
\item[(i)] $I'(v_0) = 0$
\item[(ii)] $I'_0(v_i) = 0$ for all $i = 1, \ldots, k$
\end{itemize}
and along a subsequence (also denoted $(u_m)$)
\begin{itemize}
\item[(iii)] $\|u_m - (v_0 + \sum_{i=1}^k \tau_{t^i_m}v_i)\| \to 0$ as $m \to \infty$
\item[(iv)] $|t^i_m| \to \infty$ as $m \to \infty$ for $i = 1, \ldots, k$
\item[(v)] $t^{i+1}_m - t^i_m \to \infty$ as $m \to \infty$ for $i = 1, \ldots, k-1$
\item[(vi)] $I(v_0) + \sum_{i=1}^k I_0(v_i) = a$
\end{itemize}
\end{proposition}
A proof for the case of periodic $W$ is found in \cite{CR}, and essentially the same proof works here. Similar propositions for nonperiodic coefficient functions, for both ODE and PDE, are found in \cite{CMN}, \cite{AM}, and \cite{S3}, for example. All are inspired by the ``concentration-compactness'' theorems of P.-L.~Lions (\cite{L}). Let $\nabla I: E \to E$ be the gradient of $I$; that is, for all $u, w \in E$, $(\nabla I(u),w) = I'(u)w$. Define the flow $\eta$ to be the solution of the initial value problem
$${ {d\eta} \over {ds}} = -\nabla I(\eta); \quad \eta(0, u) = u.$$
Since $I'$ is locally Lipschitz, $\eta$ is well defined on an open subset of $\mathbb{R} \times E$. It is unclear whether $\eta$ is well-defined on all of $\mathbb{R} \times E$. However,
\begin{lemma} \label{lm2.7}
For all $u \in E$, either
\begin{itemize}
\item[(i)] $\eta(s,u)$ is well-defined for all $s>0$, $I(\eta(s,u)) \geq 0$ for all $s>0$, and the forward trajectory $\{\eta(s,u) \mid s>0\}$ is bounded, or
\item[(ii)] For all $b < I(u)$, there exists $s>0$ with $I(\eta(s, u))=b$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $u \in E$, and assume (ii) does not hold. We will show that (i) holds. Let $b < I(u)$ such that for all $s > 0$ with $\eta(s,u)$ well-defined, $I(\eta(s,u)) > b$. Let $\eta \equiv \eta(s) \equiv \eta(s,u)$. Suppose the forward trajectory of $\eta$ is unbounded; that is, there exists a sequence $(s^i)$ with $0 < s^1 < s^2 < \ldots < s^i \to {\bar s} \in (0, \infty]$ as $i \to \infty$, and $\|I'(\eta(s^i))\| \to \infty$. Then by Lemma \ref{lm2.0}(ii), $\|\eta(s^i)\| \to \infty$. Let
$$R = 1 + \|u\| + { {4 \mu {\sqrt {\max(0,I(u))} } }\over{\mu - 2} } + { {16(I(u)-b)^2}\over {(\mu - 2)^2} }.$$
Let $0 < s_1 < s_2 < {\bar s}$ with $\|\eta(s_1)\| = R$, $\|\eta(s_2)\| =2 R$, and $R < \|\eta(s)\| < 2R$ for all $s \in (s_1, s_2)$. By Lemma \ref{lm2.1}, for all $s \in (s_1,s_2)$,
\begin{align*}
\|I'(\eta(s))\| &\geq {1 \over 2} \big( (\mu - 2)\|\eta(s)\| - {\sqrt {2\mu(\mu - 2)\max(0,I(\eta(s)))}} \big) \\
&\geq {1 \over 2} \big( (\mu - 2)R - 2\mu{\sqrt {\max(0,I(u))} } \big) \geq { {\mu - 2} \over 4} R.
\end{align*}
Therefore,
\begin{equation} \label{e2.9}
\begin{aligned}
I(u)-b &> I(\eta(s_1)) - I(\eta(s_2)) = -\int_{s_1}^{s_2} {d \over {ds}} I(\eta)\,ds \\
&= \int_{s_1}^{s_2} \|I'(\eta)\|^2 \,ds \geq (s_2 - s_1){ {(\mu - 2)^2} \over {16} } R^2.
\end{aligned}
\end{equation}
Also,
\begin{equation} \label{e2.10}
\begin{aligned}
R &\leq \|\eta(s_2) - \eta(s_1)\| = \big\| \int_{s_1}^{s_2} {{d\eta} \over {ds}}\,ds\big\| \leq \int_{s_1}^{s_2} \big\|{{d\eta} \over {ds}}\big\| \,ds = \int_{s_1}^{s_2} \|I'(\eta(s))\| \,ds \\
&\leq {\sqrt {s_2 - s_1} } \cdot {\sqrt { \int_{s_1}^{s_2} \|I'(\eta(s))\|^2 \,ds }} = {\sqrt {s_2 - s_1} } \cdot {\sqrt { - \int_{s_1}^{s_2} {d \over {ds}} I(\eta(s)) \,ds}} \\
&= {\sqrt {s_2 - s_1} } \cdot {\sqrt {I(\eta(s_1))-I(\eta(s_2))}} <{\sqrt {s_2 - s_1} } \cdot {\sqrt {I(u)-b} }
\end{aligned}
\end{equation}
by the Cauchy-Schwarz Inequality.
Combining \eqref{e2.9} and \eqref{e2.10} yields
\begin{gather*}
{R^2 \over {I(u)-b}} \leq s_2 - s_1 \leq { {16(I(u)-b)} \over {(\mu- 2)^2 R^2} },\\
R^4 \leq { {16(I(u)-b)^2} \over {(\mu - 2)^2} },
\end{gather*}
which contradicts the definition of $R$. Therefore the assumption is false, and the forward trajectory of $\eta$ is bounded. Since $I'$ is locally Lipschitz, and bounded on bounded subsets of $E$, $\eta(s)$ is well defined for all $s > 0$. Finally, we must show that $I(\eta(s)) \geq 0$ for all $s > 0$. Since (ii) does not hold, $\lim_{s \to \infty} I(\eta(s)) > -\infty$. Since ${d \over {ds}}I(\eta) = -\|I'(\eta)\|^2$, there exists a sequence $(s_m)$ with $\|I'(\eta(s_m))\| \to 0$. By Lemma \ref{lm2.1}, $\limsup_{m \to \infty} I(\eta(s_m)) \geq 0$. Therefore $I(\eta(s)) \geq 0$ for all $s > 0$. The lemma is proven.
\end{proof}

There exists a mountain pass curve $\gamma_0 \in \Gamma_0$ along which the maximum value of the autonomous functional $I_0$ is exactly $c_0$. The proof below is due to Caldiroli (\cite{C}). It generalizes his paper \cite{C2}, which proved the result for $I_0$ restricted to the space of even functions:
\begin{lemma} \label{lm2.12}
There exists $\gamma_0 \in \Gamma_0$ with $\max_{\theta \in [0,1]} I_0(\gamma_0(\theta)) = c_0$. Furthermore, $\gamma_0$ is even in $t$; that is, for all $\theta \in [0,1]$ and $t \in \mathbb{R}$, $\gamma_0(\theta)(-t) = \gamma_0(\theta)(t)$.
\end{lemma}
\begin{proof}
Set
\begin{gather*}
E_{\rm even} = \{u \in E \mid u(-t) = u(t)\ \hbox{a.e.}\},\\
\Omega=\{q\in {\mathbb{R}}^{N}:-{1\over 2}|q|^{2}+V(q)<0\}\cup\{0\},\\
\mathcal{M}=\{u\in E:{\rm range}~u\subset\overline\Omega,\ {\rm range}~u\cap\partial\Omega\ne\emptyset\},\\
\mathcal{M}^{*}=\mathcal{M}\cap E_{\rm even},\quad \Gamma^{*}_{0}=\Gamma_{0}\cap C([0,1],E_{\rm even}),\\
m_{0}=\inf_{u\in\mathcal{M}}I_{0}(u),\quad c_{0}=\inf_{\gamma\in\Gamma_{0}}\sup_{\theta\in[0,1]} I_{0}(\gamma(\theta)),\\
m_{0}^{*}=\inf_{u\in\mathcal{M}^{*}}I_{0}(u),\quad c_{0}^{*}=\inf_{\gamma\in\Gamma_{0}^{*}}\sup_{\theta\in[0,1]} I_{0}(\gamma(\theta)).
\end{gather*}
In \cite{C2} it is proven that there exists $\gamma_0 \in \Gamma_0^*$ with $\max_{\theta \in [0,1]} I_0(\gamma_0(\theta)) = c_0^*$. Thus it suffices to show $c_0 = c_0^*$. Clearly $c_0 \leq c_0^*$. We will show $c_0^* = m_0^* \leq m_0 \leq c_0$. The equality $c_0^* = m_0^*$ is proven in \cite{C2}. For every $\gamma \in \Gamma_0$, there exists ${\bar \theta} \in [0,1]$ with $\gamma({\bar \theta}) \in \mathcal{M}$. Hence $m_0 \leq I_0(\gamma({\bar \theta})) \leq \max_{\theta \in [0,1]} I_0(\gamma(\theta))$. Therefore, $m_0 \leq c_0$. Last, we must show $m_0^* \leq m_0$. Let $u\in\mathcal{M}$ and set $t_{-}=\min\{t\in{\mathbb{R}}:u(t)\in\partial\Omega\}$ and $t_{+}=\max\{t\in{\mathbb{R}}:u(t)\in\partial\Omega\}\geq t_{-}$. Then define $u_{-}(t)=u(t_{-}-|t|)$ and $u_{+}(t)=u(t_{+}+|t|)$; $u_{\pm}\in\mathcal{M}^{*}$. Since $u \in \mathcal{M}$, ${1 \over 2} |u(t)|^2 + {1 \over 2} |u'(t)|^2 - V(u(t)) \geq 0$ for all $t \in \mathbb{R}$, hence
\begin{align*}
I_0(u) &= \int_\mathbb{R} {1 \over 2} |u(t)|^2 + {1 \over 2} |u'(t)|^2 - V(u(t)) \,dt \\
&\geq \int_{-\infty}^{t_{-}} {1 \over 2} |u(t)|^2 + {1 \over 2} |u'(t)|^2 - V(u(t)) \,dt\\
&\quad + \int_{t_{+}}^\infty {1 \over 2} |u(t)|^2 + {1 \over 2} |u'(t)|^2 - V(u(t)) \,dt \\
&= {1 \over 2} I_0(u_-) + {1 \over 2} I_0(u_+).
\end{align*}
Therefore, $\min\{I_{0}(u_{-}),I_{0}(u_{+})\}\le I_{0}(u)$. This implies $m_{0}^{*}\le m_{0}$.
\end{proof}

\section{Proof of Theorem \ref{thm1.7}}
Define $\Gamma$ and $c$ analogously to $\Gamma_0$ and $c_0$:
\begin{gather*}
\Gamma = \{\gamma \in C([0,1], E) \mid \gamma(0) = 0, \ I(\gamma(1)) < 0\},\\
c = \inf_{\gamma \in \Gamma} \max_{\theta \in [0,1]} I(\gamma(\theta)).
\end{gather*}
It is easy to show that $c \leq c_0$: let $\epsilon > 0$ be arbitrary, and take $\gamma \in \Gamma_0$ with $\max_{\theta \in [0,1]} I_0(\gamma(\theta)) < c_0 + \epsilon$; abbreviate this maximum as $I_0(\gamma)$. For $t > 0$, define $\tau_t \gamma$ by $(\tau_t \gamma)(\theta) = \tau_t(\gamma(\theta))$. It is easy to show that by (W4), $\tau_t \gamma \in \Gamma$ for large $t$, and
$$c_0 + \epsilon > I_0(\gamma) = \lim_{t \to \infty} I_0(\tau_t \gamma) = \lim_{t \to \infty} I(\tau_t \gamma) \geq c.$$
If $c < c_0$, then by a deformation argument found, for example, in \cite{R2}, there exists a Palais-Smale sequence $(u_m)$ for $I$ with $I(u_m) \to c$ and $I'(u_m) \to 0$ as $m \to \infty$. Applying Proposition \ref{prop2.5} shows that $I$ must have a positive critical value less than or equal to $c$. So from now on, assume
\begin{equation}
c = c_0. \label{e3.3}
\end{equation}
Without \eqref{e1.8}, we do not have the ``Nehari manifold'' $\mathcal{S}$ to work with. However, we can find a set with similar properties. Let $\mathcal{B}$ be the basin of attraction of $0$ under the flow $\eta$. That is,
$$\mathcal{B} = \{u \in E \mid \|\eta(s,u)\| \to 0 \hbox{ as } s \to \infty \}.$$
$\partial {\mathcal{B}}$, the topological boundary of ${\mathcal{B}}$, has similar properties to $\mathcal{S}$. Call a set $A \subset E$ {\it forward-$\eta$-invariant} if for all $s > 0$ and $u \in A$, $\eta(s,u) \in A$ whenever $\eta(s,u)$ is well-defined.
\begin{lemma} \label{lm3.5}
\begin{itemize}
\item[(i)] ${\mathcal{B}}$ is an open neighborhood of $0 \in E$.
\item[(ii)] ${\mathcal{B}}$ and $\partial {\mathcal{B}}$ are forward-$\eta$-invariant.
\item[(iii)] For any $K>0$, the set $\partial{\mathcal{B}} \cap \{u \in E \mid I(u) < K \}$ is bounded.
\end{itemize}
\end{lemma}
\begin{proof}
(i): $0$ is an isolated critical point and local minimum of $I$, so ${\mathcal{B}}$ contains an open neighborhood $U$ of $0$. Let $u \in {\mathcal{B}}$. For some $s > 0$, $\eta(s, u) \in U$. For small enough $r > 0$, $\|w-u\| < r$ implies $\eta(s,w) \in U$. So $B_r(u) \equiv \{w \mid \|w-u\| < r\}$ is an open neighborhood of $u$ that is contained in $\mathcal{B}$.

\noindent (ii) Let $u \in {\mathcal{B}}$ and $s_1 > 0$. Since $\eta(s,u) \to 0$ as $s \to \infty$, $\eta(s + s_1, u) = \eta(s, \eta(s_1,u)) \to 0$ as $s \to \infty$, and $\eta(s_1, u) \in {\mathcal{B}}$. Next, let $u \in \partial {\mathcal{B}}$ and $s > 0$. Since ${\mathcal{B}}$ is open, $u \not\in {\mathcal{B}}$. $\eta(s,u)$ is not in ${\mathcal{B}}$, for if it were, the definition of ${\mathcal{B}}$ would imply $u \in {\mathcal{B}}$. $u$ is in the closure of ${\mathcal{B}}$, so let $(u_m) \subset {\mathcal{B}}$ with $u_m \to u$. $\eta(s, u_m) \to \eta(s,u)$ and $\eta(s, u_m) \in {\mathcal{B}}$, so $\eta(s,u)$ belongs to the closure of ${\mathcal{B}}$. Since $\eta(s,u) \notin {\mathcal{B}}$, $\eta(s,u) \in \partial{\mathcal{B}}$.

\noindent (iii) We use an ``annulus'' argument, similar to Lemma \ref{lm2.7}. Let $K > 0$, and let
$$R = 1 + {{4 \mu \sqrt{K}} \over {\mu - 2}} + {{16K^2} \over {(\mu -2)^2}}.$$
Let $u \in \partial \mathcal{B}$ with $I(u) \leq K$. Assume $\|u\| > 2R$. This will lead to a contradiction. By the definition of $\mathcal{B}$ and the fact that $\mathcal{B}$ is open, it is clear that $I(u) \geq 0$. For any $w \in E$ with $I(w) \leq K$ and $\|w\| \geq R$, Lemma \ref{lm2.1} gives
\begin{equation} \label{e3.7}
\begin{aligned}
\|I'(w)\| &\geq {1 \over 2} \big( (\mu - 2)\|w\| - {\sqrt {2\mu(\mu-2)\max(0,I(w))}}\big) \\
&\geq {1 \over 2} \big( (\mu - 2)R - 2\mu{\sqrt K} \big) \geq {{\mu - 2} \over 4} R.
\end{aligned}
\end{equation}
By Lemma \ref{lm2.7}, $\eta(s,u)$ is well-defined for all $s > 0$.
Since $I(\eta(s,u)) \geq 0$ for all $s>0$, and ${d\over {ds}}I(\eta(s,u)) = -\|I'(\eta(s,u))\|^2$, we must have $\|\eta(s^*,u)\| = R$ for some $s^* > 0$. Let $\eta \equiv \eta(s) \equiv \eta(s,u)$. Let $0 < s_1 < s_2$ with $\|\eta(s_1)\|=2R$, $\|\eta(s_2)\|=R$, and $\|\eta(s)\|\in(R,2R)$ for all $s \in (s_1,s_2)$. Then by \eqref{e3.7},
\begin{equation} \label{e3.8}
\begin{aligned}
K &\geq I(\eta(s_1))-I(\eta(s_2)) = -\int_{s_1}^{s_2} {d \over {ds}} I(\eta(s))\,ds \\
&= \int_{s_1}^{s_2} \|I'(\eta(s))\|^2\,ds \geq(s_2 - s_1) {(\mu - 2)^2 \over {16}}R^2.
\end{aligned}
\end{equation}
But
\begin{equation} \label{e3.9}
\begin{aligned}
R &\leq \|\eta(s_1) - \eta(s_2)\| = \big\| \int_{s_1}^{s_2} {{d\eta}\over{ds}}\,ds\big\| \leq \int_{s_1}^{s_2} \big\|{{d\eta}\over{ds}}\big\| \,ds = \int_{s_1}^{s_2} \|I'(\eta)\|\,ds \\
&\leq {\sqrt {s_2 - s_1}} \cdot {\sqrt {\int_{s_1}^{s_2} \|I'(\eta)\|^2 \,ds} } = {\sqrt {s_2 - s_1}}\cdot {\sqrt { -\int_{s_1}^{s_2}{d\over{ds}}I(\eta(s))\,ds } } \\
&= {\sqrt {s_2 - s_1}}\cdot{\sqrt {I(\eta(s_1))-I(\eta(s_2))}} \leq {\sqrt {(s_2 - s_1)K}}
\end{aligned}
\end{equation}
by the Cauchy-Schwarz Inequality. Combining \eqref{e3.8} and \eqref{e3.9} gives
\begin{gather*}
{R^2 \over K} \leq s_2 - s_1 \leq {{16K} \over {(\mu - 2)^2 R^2}},\\
R^4 \leq {{16K^2} \over {(\mu - 2)^2}}.
\end{gather*}
This contradicts the definition of $R$. Lemma \ref{lm3.5} is proven.
\end{proof}

Note: it is unclear whether $\partial {\mathcal{B}}$ must be homeomorphic to the unit sphere of $E$. For the rest of this article, we assume, in addition to $c=c_0$, that
\begin{equation} \label{e3.11}
\text{The interval } (0, 2c_0) \text{ does not contain critical values of } I.
\end{equation}
This will lead to a contradiction. \eqref{e3.11} implies that for all $u \in \partial \mathcal{B}$,
\begin{equation} \label{e3.12}
I(u) \geq c_0.
\end{equation}
To see why, suppose $u \in \partial \mathcal{B}$ with $I(u) < c_0$. Define $(u_m)$ by $u_m = \eta(m, u)$. By the arguments of \cite{CMN}, $\|I'(u_m)\| \to 0$ along a subsequence. If $I(u_m) \to 0$, then by Lemma \ref{lm2.1}, $\|u_m\| \to 0$ and $u \in \mathcal{B}$, which is impossible; thus $\lim_{m\to \infty} I(u_m) > 0$.
Applying Proposition \ref{prop2.5} and Lemma \ref{lm3.5}(i) shows that $I$ has a positive critical value that is less than $c_0$. This contradicts assumption \eqref{e3.11}. Define the ``location'' function $\mathcal{L} : E \setminus \{0\} \to \mathbb{R}$ by
\begin{equation} \label{e3.13}
{\int_\mathbb{R}} |u|^2 \tan^{-1}(t - \mathcal{L}(u))\,dt = 0.
\end{equation}
By the Implicit Function Theorem, $\mathcal{L}$ is a well-defined and continuous function. Roughly, $\mathcal{L}$ tells where on the real line a function is located. For $a \in \mathbb{R}$, $\mathcal{L}(\tau_a u) = \mathcal{L}(u) + a$. Now,
\begin{lemma} \label{lm3.14}
Assuming \eqref{e3.3} and \eqref{e3.11}, there exists $\delta > 0$ such that if $u \in \partial {\mathcal{B}}$ with $\mathcal{L}(u) = 0$, then $I(u) > c_0 + \delta$.
\end{lemma}
\begin{proof}
Let
$$b = \inf\{I(u) \mid u \in \partial \mathcal{B},\ \mathcal{L}(u) = 0\}.$$
We must show that $b > c_0$. Let $(u_m) \subset \partial \mathcal{B}$ with $\mathcal{L}(u_m) = 0$ for all $m$ and $I(u_m) \to b$. If $b \geq 2c_0$, then obviously $b > c_0$. So assume $b<2c_0$. Suppose $\inf\{\|I'(u_m)\| \mid m \geq 1\} = 0$. Then, applying Proposition \ref{prop2.5}, there exists a subsequence (also denoted $(u_m)$), $k \geq 0$, $v_0$, and, if $k >0$, $v_i$ for $1 \leq i \leq k$ as in the conclusion of Proposition \ref{prop2.5}. By Proposition \ref{prop2.5}(vi), $k \leq 1$, since $b < 2c_0$ and each $I_0(v_i) \geq c_0$ by (V4). If $k = 1$, then by \eqref{e3.11} and Lemma \ref{lm2.1}, $v_0 = 0$, so $|\mathcal{L}(u_m)| \to \infty$, contradicting $\mathcal{L}(u_m) = 0$. Therefore $k=0$, and $(u_m)$ converges to a critical point $v_0$ of $I$ with $I(v_0)=b<2c_0$. $v_0 \in \partial \mathcal{B}$, so $I(v_0) \geq c_0$ by \eqref{e3.12}. This contradicts assumption \eqref{e3.11}. Therefore, $\inf\{\|I'(u_m)\| \mid m \geq 1\} > 0$. Since $\partial {\mathcal{B}} \cap \{I < 2c_0\}$ is bounded (Lemma \ref{lm3.5}(iii)), and $I'$ is Lipschitz on bounded subsets of $E$ (Lemma \ref{lm2.0}(iii)), there exists $p > 0$ with $I(\eta(1, u_m)) < b - p$ for large enough $m$. Thus, $c_0 \leq b - p$, so $b > c_0$.
\end{proof}
We can now complete the proof of Theorem \ref{thm1.7}. Let $\gamma_0$ be from Lemma \ref{lm2.12}. We need a path $\gamma_1 \in \Gamma_0$ with $I_0(\gamma_1(1)) \leq - c_0$. If $I_0(\gamma_0(1)) \leq - c_0$, then let $\gamma_1 = \gamma_0$. Otherwise, let $s>0$ be large enough so that $I_0(\eta(s,\gamma_0(1))) \leq -c_0$, where here $\eta$ denotes the corresponding flow for $I_0$. This is possible by Lemma \ref{lm2.7}. Then join $\gamma_0(1)$ with $\eta(s, \gamma_0(1))$; that is, define $\gamma_1$ by
$$\gamma_1(\theta) = \begin{cases} \gamma_0(2\theta) &\mbox{if } 0 \leq \theta \leq {1 \over 2}\\ \eta(s(2\theta -1),\gamma_0(1)) &\mbox{if } {1 \over 2} \leq \theta \leq 1. \end{cases}$$
Define
$$K = \max_{\theta \in [0,1]} {\int_\mathbb{R}} F(| \gamma_1(\theta)(t)| )\,dt,$$
where $F$ is from the statement of Theorem \ref{thm1.7}, and let $\epsilon$ in the statement of Theorem \ref{thm1.7} satisfy
\begin{equation}
\epsilon <\min\big( {c_0 \over {2K}}, {d \over {2K}}\big), \label{e3.18}
\end{equation}
where $d$ is from (V4). For all $\theta \in [0,1]$ and $a \in \mathbb{R}$,
\begin{equation} \label{e3.19}
\begin{aligned}
I(\tau_a \gamma_1(\theta)) &= I_0(\tau_a \gamma_1(\theta)) + \big(I(\tau_a \gamma_1(\theta)) - I_0(\tau_a \gamma_1(\theta))\big)\\
& \leq I_0(\gamma_1(\theta)) + \epsilon {\int_\mathbb{R}} F(|\gamma_1(\theta)(t) |)\,dt \\
&\leq I_0(\gamma_1(\theta)) + \min\big({c_0 \over 2}, {d \over 2}\big).
\end{aligned}
\end{equation}
Let $\delta$ be given by Lemma \ref{lm3.14}, and let $R > 0$ be big enough so that for all $\theta \in [0,1]$,
$$I(\tau_{-R}\gamma_1(\theta)) < c_0 + {1 \over 2}\delta \quad\hbox{and}\quad I(\tau_R \gamma_1(\theta)) < c_0 + {1 \over 2}\delta.$$
This is possible by (W4). Define a map $G:[-R,R] \times [0,1]\to E$ by
$$G(t, \theta) = \tau_t \gamma_1(\theta).$$
Note that for all $t \in [-R,R]$ and $\theta \in [0,1]$, $I(G(t,\theta)) < c_0 + d/2$ by \eqref{e3.19}. Also, for all $\theta \in [0,1]$, $I(G(\pm R, \theta)) \leq c_0 + \delta/2$, and for all $t \in [-R, R]$, $I(G(t,1)) < 0$.
Define $T: [-R,R] \times [0,1]\to \mathbb{R}^+$ by
$$T(t, \theta) = \min\{ s \geq 0 \mid I(\eta(s, G(t, \theta))) \leq c_0 + \delta/2 \}.$$
$T$ is well-defined because, by assumption \eqref{e3.11} and Proposition \ref{prop2.5},
\begin{equation}
\inf\{\|I'(u)\| \mid c_0 + {\delta \over 2} \leq I(u) \leq c_0 + {d \over 2} \} > 0. \label{e3.23}
\end{equation}
It is easy to show, also using \eqref{e3.23}, that $T$ is continuous. Define $G_1 : [-R,R] \times [0,1]\to E$ by
$$G_1(t, \theta) = \eta(T(t, \theta), G(t, \theta)).$$
Now for all $(t, \theta) \in [-R,R] \times [0,1]$,
\begin{equation}
I(G_1(t, \theta)) \leq c_0 + {1 \over 2} \delta. \label{e3.25}
\end{equation}
We will show that $G_1([-R,R] \times [0,1])$ contains a point $u \in \partial {\mathcal{B}}$ with $\mathcal{L}(u) = 0$, which is impossible, by \eqref{e3.25} and Lemma \ref{lm3.14}. Let $g$ be a path from the bottom side of the rectangle $[-R,R] \times [0,1]$ to the top side; that is, $g: [0,1] \to [-R,R] \times [0,1]$ with $g(0)_2 = 0$ and $g(1)_2 = 1$, where ``${}_2$'' denotes projection onto the second coordinate. Define $\gamma: [0,1]\to E$ by $\gamma(s) = G_1(g(s))$. Since $\gamma(0) = G_1(g(0)) = 0$ and $I(\gamma(1)) = I(G_1(g(1))) < 0$, $\gamma \in \Gamma$. Therefore, $\gamma(s) \in \partial \mathcal{B}$ for some $s \in (0,1)$, and $G_1(g([0,1]))$ intersects $\partial \mathcal{B}$. Since for any path $g$ connecting the bottom and top sides of the rectangle $[-R,R] \times [0,1]$, $G_1(g([0,1]))$ intersects $\partial {\mathcal{B}}$, there must exist a connected set $C \subset [-R,R] \times [0,1]$ with
\begin{itemize}
\item[(i)] For all $(t, \theta) \in C$, $G_1(t, \theta) \in \partial {\mathcal{B}}$
\item[(ii)] There exist $\theta_-, \theta_+ \in (0,1)$ with $(-R, \theta_-) \in C$ and $(R, \theta_+) \in C$.
\end{itemize}
Since $G_1(\pm R,\theta) = G(\pm R,\theta)$ and $I(G(\pm R, \theta)) < c_0 + \delta/2$ for all $\theta$, $\mathcal{L}(G_1(-R,\theta_-)) = -R$ and $\mathcal{L}(G_1(R,\theta_+)) = R$.
$C$ is a connected set and $\mathcal{L}$ is continuous, so $\mathcal{L}(G_1(C))$ is an interval in the real line containing $-R$ and $R$; in particular, $0 \in \mathcal{L}(G_1(C))$. Thus there exists $(t^*, \theta^*) \in C$ with $\mathcal{L}(G_1(t^*, \theta^*)) = 0$. This is impossible, because $G_1(t^*,\theta^*)\in \partial \mathcal{B}$ and $I(G_1(t^*, \theta^*)) < c_0 + \delta$ (Lemma \ref{lm3.14}). The proof of Theorem \ref{thm1.7} is complete.

\medskip

If $V(q)$ depends only on $|q|$, i.e., $V(q) \equiv V(|q|)$, then (V4) holds. To prove this, it suffices to show that all solutions of the autonomous problem \eqref{e1.4} are radial. For if $u$ has the form $u(t) = {\bf a} v(t)$ for some unit vector ${\bf a}\in \mathbb{R}^N$ and positive scalar function $v$, then $v$ is a positive solution of the scalar equation $-v'' + v = V'(v)$, and a phase-plane analysis shows that this equation has only one positive homoclinic solution, modulo translation. To show that all homoclinic solutions of \eqref{e1.4} are radial, let $u$ be such a solution and consider the quantity $(u \cdot u')^2 - |u|^2 |u'|^2$. This expression tends to zero as $t \to \pm \infty$. If it equals zero for all $t$, then $u(t)$ and $u'(t)$ are everywhere parallel (this is the equality case of the Cauchy-Schwarz inequality), and $u$ is radial. Since the expression vanishes at infinity, it suffices to show that ${d \over {dt}}\big[(u \cdot u')^2 - |u|^2 |u'|^2\big]$ is identically zero. Since $V'(q)$ points away from the origin, $V'(q) = (|V'(q)|/|q|)q$, and
\begin{align*}
{d \over {dt}}\big[(u \cdot u')^2 - |u|^2 |u'|^2\big] &= 2(u \cdot u')(|u'|^2 + u \cdot u'') -2(u \cdot u')|u'|^2 - 2|u|^2(u' \cdot u'') \\
&= 2\big[(u \cdot u')(u \cdot u'') - |u|^2(u' \cdot u'')\big] \\
&= 2\big[(u \cdot u')(u \cdot (u-V'(u))) - |u|^2(u' \cdot (u - V'(u)))\big] \\
&= 2\big[|u|^2(u' \cdot V'(u)) - (u \cdot u')(u \cdot V'(u))\big] \\
&= 2\big[|u|(u \cdot u')|V'(u)| - (u \cdot u')|u||V'(u)| \big] = 0.
\end{align*}

\subsection*{Calculating $\epsilon$ for the power case}

If $V(q) = |q|^\alpha/\alpha$ for some $\alpha >2$ and $F(s) = s^\alpha/\alpha$, then (W5) becomes $W(t,q) \geq (1- \epsilon)|q|^\alpha/\alpha$. In this case it is possible to obtain an explicit formula for $\epsilon$ in terms of $\alpha$. This $V$ is radially symmetric, so, as shown above, $d = \infty$ in (V4). Therefore we need only estimate $c_0/(2K)$ in \eqref{e3.18}. Let $\omega$ be the unique positive, even solution of the scalar equation $-\omega'' + \omega = \omega^{\alpha - 1}$. Multiplying both sides of the equation by $\omega$ and integrating by parts yields
$$\int_\mathbb{R} (\omega')^2+ \omega^2 = \int_\mathbb{R} \omega^\alpha.$$
%\label{e3.28}
For ease of notation, identify $\omega$ with $\omega \equiv (\omega, 0, 0, \ldots, 0): \mathbb{R} \to \mathbb{R}^N$. For $T \geq 0$,
$$I_0(T\omega) = \int_\mathbb{R} {1 \over 2} T^2(\omega')^2 + {1 \over 2} T^2 \omega^2 - {1 \over \alpha}T^\alpha \omega^\alpha = \big({1\over 2}T^2-{1 \over \alpha}T^\alpha\big) \int_\mathbb{R} \omega^\alpha.$$
To define $K$, we need $\gamma_1 \in \Gamma_0$ with $I_0(\gamma_1(1)) \leq - c_0$. To do this, we will find $T \equiv T(\alpha)> 1$ with $I_0(T\omega) \leq -c_0$ and set $\gamma_1(\theta) = T\theta\omega$. Set
$$T = \alpha^{1 \over {\alpha -2}} > 1. \label{e3.30}$$
Then
\begin{align*}
I_0(T\omega)+c_0 &= I_0(T\omega)+I_0(\omega) \\
&= \big({1 \over 2}T^2-{1 \over \alpha}T^\alpha + {1\over 2} - {1 \over \alpha}\big)\int_\mathbb{R} \omega^\alpha \\
&< \big(T^2 - {1 \over \alpha}T^\alpha\big)\int_\mathbb{R} \omega^\alpha =0,
\end{align*}
so this choice of $T$ works.
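For this pure-power nonlinearity, $\omega$ is in fact known in closed form; the formula is not needed above, but it makes the quantities here computable. A standard calculation (easily verified by substitution) gives
$$\omega(t) = \Big({\alpha \over 2}\Big)^{1 \over {\alpha-2}} \operatorname{sech}^{2 \over {\alpha -2}}\Big({{\alpha-2} \over 2}\,t\Big).$$
For example, when $\alpha = 4$, $\omega(t) = \sqrt{2}\,\operatorname{sech} t$, and indeed, using $(\operatorname{sech} t)'' = \operatorname{sech} t - 2\operatorname{sech}^3 t$,
$$-\omega'' + \omega = \sqrt{2}\,(2\operatorname{sech}^3 t - \operatorname{sech} t) + \sqrt{2}\,\operatorname{sech} t = 2\sqrt{2}\operatorname{sech}^3 t = \omega^3.$$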
Now, since $F$ is increasing,
$$K = \int_\mathbb{R} F(T\omega) = {{T^\alpha} \over \alpha}\int_\mathbb{R} \omega^\alpha,$$
and we can set
$$\epsilon \equiv \epsilon(\alpha) = {c_0 \over {2K}} = {I_0(\omega) \over {2K}} = {{({1 \over 2} - {1 \over \alpha})\int_\mathbb{R} \omega^\alpha} \over {{2 \over \alpha}T^\alpha \int_\mathbb{R} \omega^\alpha}} = {{\alpha - 2} \over {4 \alpha^{\alpha \over {\alpha-2}}}}.$$
Note that $\epsilon(\alpha) \to 0$ as $\alpha \to 2^+$ and $\epsilon(\alpha) \to {1 \over 4}$ as $\alpha \to \infty$. However, $\epsilon(\alpha)$ may not be a sharp bound.

\subsection*{Acknowledgment} The author would like to thank Paolo Caldiroli for his advice and support, and the anonymous referee for his or her suggestions and corrections.

\begin{thebibliography}{00}

\bibitem{AM} F. Alessio and P. Montecchiari, {\it Multibump solutions for a class of Lagrangian systems slowly oscillating at infinity}, Annales de l'Institut Henri Poincar\'e, Vol. 16 (1999), No. 1, 107-135.

\bibitem{BL} A. Bahri and Y.-Y. Li, {\it On a Min-Max Procedure for the Existence of a Positive Solution for a Certain Scalar Field Equation in} $\mathbb{R}^N$, Revista Iberoamericana, Vol.~6 (1990), 1-17.

\bibitem{C} P. Caldiroli, personal communication.

\bibitem{C2} P. Caldiroli, {\it A New Proof of the Existence of Homoclinic Orbits for a Class of Autonomous Second Order Hamiltonian Systems in} $\mathbb{R}^N$, Math.~Nachr., Vol.~187 (1997), 19-27.

\bibitem{CM} P. Caldiroli and P. Montecchiari, {\it Homoclinic orbits for second order Hamiltonian systems with potential changing sign}, Comm. Appl. Nonlinear Anal., Vol. 1 (1994), No.~2, 97-129.

\bibitem{CMN} V. Coti Zelati, P. Montecchiari, and M. Nolasco, {\it Multibump solutions for a class of second order, almost periodic Hamiltonian systems}, Nonlinear Ordinary Differential Equations and Applications, Vol.~4 (1997), No.~1, 77-99.

\bibitem{CR} V. Coti Zelati and P.
Rabinowitz, {\it Homoclinic Orbits for Second Order Hamiltonian Systems Possessing Superquadratic Potentials}, Journal of the American Mathematical Society, Vol. 4 (1991), 693-727.

\bibitem{EL} M. Esteban and P.-L. Lions, {\it Existence and non existence results for semilinear elliptic problems in unbounded domains}, Proc. Roy. Soc. Edinburgh, Sect. A, Vol. 93 (1982), 1-14.

\bibitem{L} P.-L. Lions, {\it The concentration-compactness principle in the calculus of variations. The locally compact case}, Annales de l'Institut Henri Poincar\'e, Vol. 1 (1984), 109-145 and 223-283.

\bibitem{MW} J. Mawhin and M. Willem, ``Critical Point Theory and Hamiltonian Systems,'' Springer-Verlag, New York, 1989.

\bibitem{R1} P. Rabinowitz, {\it Homoclinic Orbits for a class of Hamiltonian Systems}, Proc. Roy. Soc. Edinburgh, Sect. A, Vol. 114 (1990), 33-38.

\bibitem{R2} P. Rabinowitz, ``Minimax Methods in Critical Point Theory with Applications to Differential Equations,'' C. B. M. S. Regional Conf. Series in Math., No. 65, Amer. Math. Soc., Providence, 1986.

\bibitem{S1} G. Spradlin, {\it A Perturbation of a Periodic Hamiltonian System}, Nonlinear Analysis, Theory, Methods, \& Applications, Vol.~38 (1999), No. 8, 1003-1022.

\bibitem{S2} G. Spradlin, {\it Interfering solutions of a nonhomogeneous Hamiltonian System}, Electronic Journal of Differential Equations, Vol. 2001 (2001), No. 47, 1-10.

\bibitem{S3} G. Spradlin, {\it A Singularly Perturbed Elliptic Partial Differential Equation with an Almost Periodic Term}, Calculus of Variations and Partial Differential Equations, Vol. 9 (1999), 207-232.

\bibitem{STT} E. Serra, M. Tarallo, and S. Terracini, {\it On the existence of homoclinic solutions to almost periodic second order systems}, Annales de l'Institut Henri Poincar\'e, Vol. 13 (1996), 783-812.

\end{thebibliography}

\end{document}