\documentstyle[twoside]{article} \pagestyle{myheadings} \markboth{\hfil Hill's Equation for a Homogeneous Tree \hfil EJDE--1997/23}% {EJDE--1997/23\hfil Robert Carlson \hfil} \begin{document} \title{\vspace{-1in}\parbox{\linewidth}{\footnotesize\noindent {\sc Electronic Journal of Differential Equations}, Vol.\ {\bf 1997}(1997), No.\ 23, pp. 1--30. \newline ISSN: 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu \newline ftp (login: ftp) 147.26.103.110 or 129.120.3.113} \vspace{\bigskipamount} \\ Hill's equation for a homogeneous tree \thanks{ {\em 1991 Mathematics Subject Classifications:} 34L40. \hfil\break\indent {\em Key words and phrases:} Spectral graph theory, Hill's equation, periodic potential. \hfil\break\indent \copyright 1997 Southwest Texas State University and University of North Texas. \hfil\break\indent Submitted August 24, 1997. Published December 18, 1997.} } \date{} \author{Robert Carlson} \maketitle \begin{abstract} The analysis of Hill's operator $-D^2 + q(x)$ for $q$ even and periodic is extended from the real line to homogeneous trees ${\cal T}$. Generalizing the classical problem, a detailed analysis of Hill's equation and its related operator theory on $L^2({\cal T})$ is provided. The multipliers for this new version of Hill's equation are identified and analyzed. An explicit description of the resolvent is given. The spectrum is exactly described when the degree of the tree is greater than two, in which case there are both spectral bands and eigenvalues. Spectral projections are computed by means of an eigenfunction expansion. Long time asymptotic expansions for the associated semigroup kernel are also described. A summation formula expresses the resolvent for a regular graph as a function of the resolvent of its covering homogeneous tree and the covering map. In the case of a finite regular graph, a trace formula relates the spectrum of the Hill's operator to the lengths of closed paths in the graph. 
\end{abstract} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \def\real{{\cal R}} \def\complex{{\cal C}} \section{Introduction} There is a large literature on the spectral theory of linear difference operators associated with a combinatorial graph \cite{Brooks2,Chung,Cvet}. Despite almost immediate physical applications, the study of differential operators on a topological graph has received very little attention. However, there is a history of related work in physical chemistry and mathematical physics \cite{Montroll,Pauling,Bulla,Exner3,Ger2,Ger1}, and some work for parabolic equations \cite{Lumer1,Nicaise,Below}. There are several reasons to study differential operators, rather than difference operators, on graphs. First, there are problems of physical interest, particularly inspired by advances in micro-electronic fabrication, which are modeled using differential operators on graphs \cite{Avron2,Bulla,Exner1,Exner3,Shapiro}. Second, it may be easier to analyze the differential equations rather than the corresponding difference equations. Third, one may expect that the metaphor of differential operators on a graph as operators on a one-dimensional space with nontrivial topology can be developed to explore a class of problems which are intermediate in complexity between traditional ordinary differential operators and partial differential operators on manifolds. The main aim of this work is to extend the theory of Hill's equation \cite{Magnus} $$-y''+qy= \lambda y, \quad q(x+1) = q(x), \quad \lambda \in \complex\label{1.a}$$ with a real-valued and even potential $q(1-x) = q(x)$, $0 \le x \le 1$, to graphs. This equation will be interpreted as a system of equations on $[0,1]$, with certain transition conditions satisfied at the vertices.
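As a point of reference for the multiplier analysis below, it may help to recall the classical picture in the free case $q \equiv 0$ (a standard computation, included only for orientation). With $\omega = \sqrt{\lambda }$ the equation $-y'' = \lambda y$ has the basis $\cos (\omega x)$, $\sin (\omega x)/\omega$, the monodromy matrix over one period is $$\pmatrix{\cos \omega & \sin (\omega )/\omega \cr -\omega \sin \omega & \cos \omega },$$ and the classical multipliers are the roots of $\mu ^2 - 2\mu \cos \omega + 1 = 0$, namely $\mu ^{\pm} = e^{\pm i \omega }$. The spectral bands on the line are exactly where $|2\cos \omega | \le 2$, that is $\lambda \ge 0$; on trees the trace condition $|2\cos \omega | \le 2$ will be replaced by an analogous bound on the trace of a transition matrix.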
The most direct extension will be carried out when the graph is a homogeneous tree whose vertices have a common number of incident edges. In preparation for the analysis of Hill's equation on homogeneous trees, and its relatives on regular graphs, the second section establishes some basic results for Schr\"odinger operators on a weighted graph. These operators are actually a (possibly infinite) system of ordinary differential operators on intervals whose lengths are given by the edge weights of the graph. The domains of these operators will be determined by a set of boundary conditions at each set of interval endpoints which are identified with a graph vertex. Under suitable conditions these operators are essentially self-adjoint when given a domain of compactly supported functions satisfying the vertex conditions. The third section considers solutions of Hill's equation (\ref{1.a}) on homogeneous trees which are continuous across each vertex, and which satisfy an additional condition on the sum of the derivatives at each vertex. A central role is played by solutions which are functions of a signed distance $x(g)$ from a vertex and are square integrable for $x(g) > 0$, respectively $x(g) < 1$. These decaying solutions may be analyzed using transition matrices whose eigenvalues $\mu ^{\pm}(\lambda )$ are a generalization of the classical Hill's equation multipliers. In the fourth section, the decaying solutions and multipliers are used to give quite explicit formulae for several functions of the Hill's operator. The resolvent is considered first. The analysis of the resolvent subsequently leads to a description of the spectral projections by means of an eigenfunction expansion. In addition, the large time behaviour of the associated semigroup kernel is described. In the final section, we consider the implications of the Hill's equation analysis for regular graphs, which have a homogeneous tree as a universal covering space. 
A summation formula relates the resolvent for a regular graph to combinatorial features of the graph and the resolvent of its covering tree. When the regular graph is finite, the trace of the resolvent can be expressed in terms of the integral of the diagonal of the resolvent on the tree and a generating function for numbers of closed paths of length $l$ in the graph. When the potential $q$ is zero, the resolvent trace has a very simple form; this in turn gives a detailed description of the generating function. These last results for differential operators are strongly analogous to results of Brooks \cite{Brooks2} for the difference Laplacian. \section{Schr\"odinger operators on graphs} \setcounter{equation}{0} Before treating the special structure of Hill's equation on homogeneous trees and related graphs, we consider basic questions about Schr\"odinger operators $-D^2 + q$ on graphs. Some of this material extends to more general differential operators \cite{Car97a}. Operators with a different class of self adjoint domains are treated in \cite{Car96a}. In this work a graph ${\cal G}$ will be connected, with a countable vertex set and a countable set of edges $e_n$. The edges are initially assumed to be directed, although this is for notational convenience and plays no essential role. Each edge has a positive weight (length) $w_n$, and each vertex appears in at least one, but only finitely many, edges. Loops and multiple edges with the same vertices are allowed. A topological graph, also denoted ${\cal G}$, may be constructed from the graph data \cite[p. 190]{Massey}. For each directed edge $e_n$ let $[a_n,b_n]$, with $a_n < b_n$, be a real interval of length $w_n$, and let $\alpha _j \in \{ a_n,b_n \}$. Identify those interval endpoints $\alpha _j$ whose corresponding edge endpoints are the same vertex $v$.
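For instance (an illustration of the construction, not used later), consider a cycle with three vertices $v_1,v_2,v_3$ and directed edges $e_1 = (v_1,v_2)$, $e_2 = (v_2,v_3)$, $e_3 = (v_3,v_1)$, each of weight $1$. The topological graph is built from three copies of $[0,1]$; the endpoint $b_1$ is identified with $a_2$ (both correspond to $v_2$), $b_2$ with $a_3$, and $b_3$ with $a_1$, so the resulting topological graph is a circle of circumference $3$.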
The Euclidean metric on the intervals may be extended to a metric on ${\cal G}$ by taking the distance between two points to be the length of the shortest (undirected) path joining them. Since every point in ${\cal G}$ may be covered by an open set having nonempty intersection with only finitely many edges, every compact set is contained in a finite union of closed edges $e_n$. Let $L^2({\cal G})$ denote the Hilbert space $\oplus _n L^2(e_n)$ with the inner product $$\langle f, g \rangle = \int_{\cal G} f\overline g = \sum_n \int_{a_n}^{b_n} f_n(x)\overline{g_n(x)} \ dx , \quad f = (f_1, f_2, \dots ).$$ In this work $q$ denotes a bounded real valued function on ${\cal G}$, measurable on each edge. An operator ${\cal L} = -D^2 + q$ acts component-wise on functions $f \in L^2({\cal G})$ in its domain. In order to obtain a self adjoint operator, the domain of ${\cal L}$ will be specified by certain vertex (boundary) conditions. Suppose that ${\rm deg}\,(v)$ interval endpoints $\alpha _j$ are identified with a vertex $v$, which we write $\alpha _j \sim v$. At each vertex $v$ we will require that a function in the domain of ${\cal L}$ satisfy the continuity conditions $$f(\alpha _j) = f(\alpha _{j+1}) , \quad j=1,\dots ,{\rm deg}\,(v)-1, \quad \alpha _j \sim v. \label{2.a}$$ An additional condition of the form $$\sum_{j=1}^{{\rm deg}\,(v)} (-1)^{\kappa (\alpha _j )} f'(\alpha _j) = \gamma _vf(v), \quad \kappa (\alpha _j) = \left\{\begin{array}{ll} 0, & \alpha _j = a_n, \\ 1, & \alpha _j = b_n, \end{array}\right. \quad \gamma _v \in \real . \label{2.b}$$ will be satisfied. For operators on the real axis, these vertex conditions with $\gamma \not=0$ are known as $\delta$ (function) interactions. An extensive treatment of such operators is in \cite{Albeverio}. Let ${\cal D}_{com}$ be the set of compactly supported continuous functions $f$ in $L^2({\cal G})$ such that $f_n'$ is absolutely continuous on each $e_n$, and $f_n'' \in L^2(e_n)$.
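As a concrete instance of the vertex conditions (not needed in the sequel), suppose $v$ is a vertex of degree $3$ whose incident edges are all directed away from $v$, so that $\alpha _j = a_j$ and $\kappa (\alpha _j) = 0$ for $j = 1,2,3$. Then (\ref{2.a}) and (\ref{2.b}) with $\gamma _v = 0$ read $$f_1(a_1) = f_2(a_2) = f_3(a_3), \quad f_1'(a_1) + f_2'(a_2) + f_3'(a_3) = 0 ,$$ that is, $f$ is continuous at $v$ and the outward pointing derivatives sum to zero (the Kirchhoff conditions).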
Initially ${\cal L}$ will be defined on the domain ${\cal D}$ consisting of those functions in ${\cal D}_{com}$ which satisfy the vertex conditions (\ref{2.a}) and (\ref{2.b}). By working on one interval $[a_n,b_n]$ at a time, the classical treatment of differential operators \cite[p. 1294]{Dunford}, \cite[pp. 169--171]{Kato} shows that the adjoint of ${\cal L}$ is a differential operator acting by $-D^2 + q$. In addition, we obtain the following lemma. \begin{lemma}\label{Lemma2.1} If $f$ is in the domain of ${\cal L}^*$, then $f_n'$ is absolutely continuous on each $e_n$, and $f_n'' \in L^2(e_n)$. \end{lemma} \begin{theorem}\label{Thm2.2} If the weights $w_n$ satisfy $w_n \ge w >0$ and $\gamma _v \ge \gamma > -\infty$, then $-D^2+q$ is essentially self adjoint and bounded below on the domain ${\cal D}$. \end{theorem} \paragraph{Proof:} Since multiplication by $q$ is a bounded self adjoint operator, only the case $-D^2$ needs to be considered \cite[p. 287]{Kato}. The next step is to show that ${\cal L} = -D^2$ is a symmetric operator which is bounded below on $\oplus_{e \in {\cal G}} L^2(e)$. Suppose that $f,g \in {\cal D}$, and $f$ is supported in an open set containing a single vertex $v$. Let $f_j$ be the component of $f$ for an edge $e_j$ incident on $v$, and let $\alpha _j$ be an endpoint of $e_j$ identified with $v$. Integration by parts gives $$\langle {\cal L}f,g \rangle = \sum_j \int_{a_j}^{b_j} -f'' \overline g = \sum_j (-1)^{\kappa (\alpha _j)} f_j'(\alpha _j )\overline{g_j(\alpha _j )} + \sum_j \int_{a_j}^{b_j} f' \overline g'\,.$$ By virtue of the vertex conditions $$\sum_j (-1)^{\kappa (\alpha _j)} f_j'(\alpha _j)\overline{g_j(\alpha _j)} = \overline{g(v)} \sum_j (-1)^{\kappa (\alpha _j)} f_j'(\alpha _j) = \gamma _v \overline{g(v)} f(v),$$ where $f(v)$ is the common value for $f_j(\alpha _j)$.
By a partition of unity argument, every function in ${\cal D}$ can be written as a sum of functions either supported in a small open neighborhood of a single vertex $v$, or supported in an open subinterval of a single edge $e$. Thus the computation above implies $\langle {\cal L}f,g \rangle = \langle f,{\cal L}g \rangle$. Also \cite[p. 193]{Kato}, for each $f_j$ and any $\epsilon > 0$ $$|f(\alpha _j)| \le \epsilon \| f_j' \| + C(\epsilon ) \| f_j \|,$$ so that the quadratic form $$\langle {\cal L}f,f \rangle = \sum_n \| f_n' \| ^2 + \sum_v \gamma _v |f(v)|^2 \ge C_1\| f \| ^2$$ is bounded below by a multiple of $\| f \| ^2$. The remainder of the proof that ${\cal L}=-D^2$ is essentially self adjoint is adapted from \cite[p. 274]{Kato}. For some positive constant $\beta$ the symmetric operator $\beta + {\cal L}$ is bounded below by $1$, so it will be essentially self adjoint if the range is dense. Assume that the range is not dense. Since the orthogonal complement of the range of $\beta +{\cal L}$ is the null space of $\beta +{\cal L}^*$, this null space must contain a nonzero element $\psi$. By virtue of Lemma~\ref{Lemma2.1} and integration by parts $\psi$ must satisfy the vertex conditions (\ref{2.a}) and (\ref{2.b}). Pick a $C^\infty$ function $\eta (x)$ on $(0,w)$ which is $1$ in a neighborhood of $0$ and vanishes identically for $x > w/4$. Pick any edge $e_0$, and for $K = 1,2,3, \dots$ construct a $C^\infty$ cutoff function $\phi _K$ on ${\cal G}$ as follows. On the set $E_0$ of (closed) edges containing some point whose distance from a vertex of $e_0$ is less than or equal to $K$, let $\phi _K = 1$. On edges $e = [a_n,b_n]$ not in $E_0$ which share a vertex $v\sim a_n$ (resp. $v\sim b_n$) with an edge in $E_0$, let $\phi _K = \eta (x - a_n)$ (resp. $\phi _K = \eta (b_n - x)$) where $\eta$ is defined. Otherwise let $\phi _K = 0$. The function $\phi _K \psi$ is in the domain of ${\cal L}$.
We have $$(\beta + {\cal L}) \phi _K \psi = (\beta - D^2)\phi _K \psi = \phi _K(\beta - D^2)\psi -2\phi _K '\psi' - \phi _K ''\psi = 0 -2\phi _K '\psi' - \phi _K ''\psi.$$ Since $\phi _K ''$ is uniformly bounded, the term $\phi _K ''\psi$ goes to zero in $L^2$ as $K \to \infty$. Let $E(K)$ denote those edges where $\phi _K '$ is not identically zero. There is no loss of generality in assuming that $\phi _K$ and $\psi$ are real. Then integration by parts, together with $\psi '' = \beta \psi$, gives \begin{eqnarray*} \int_{\cal G} (\phi _K ' \psi')^2 &=& \int_{E(K)} (\phi _K ')^2 \psi'\psi' = - \int_{E(K)} \psi[2\phi _K ' \phi _K '' \psi' + (\phi _K ')^2 \psi'']\\ &=& - \beta \int_{E(K)} \psi ^2(\phi _K ')^2 - {1 \over 2} \int_{E(K)} (\psi^2)' ([\phi _K ']^2)' \\ &=& \int_{E(K)} \psi ^2[{1 \over 2} ([\phi _K ']^2)'' - \beta (\phi _K ')^2]\,. \end{eqnarray*} Since ${1 \over 2} ([\phi _K ']^2)'' - \beta (\phi _K ')^2$ is uniformly bounded, the integral goes to zero as $K \to \infty$. Thus if such a $\psi$ existed, $\phi _K \psi$ would be in the domain of $\beta + {\cal L}$ with $(\beta + {\cal L})\phi _K \psi \to 0$ in $L^2$, while $\| \phi _K \psi \| \to \| \psi \| > 0$. But this contradicts the fact that $\beta + {\cal L}$ is bounded below by $1$. Consequently, the range of $\beta +{\cal L}$ is dense, and so it is essentially self adjoint. \hfill$\Box$ The operators considered in this work will satisfy the hypotheses of Theorem~\ref{Thm2.2}, and henceforth the domain of ${\cal L}$ will be extended so that ${\cal L}$ is self adjoint. Results similar to Theorem~\ref{Thm2.2} for more general differential operators on graphs, and the problem of characterizing self adjoint operators by means of vertex conditions, are treated in \cite{Car97a}. Notice that the second derivative and multiplication by a function are defined independently of the choice of edge direction.
The ``outward pointing'' derivatives at a vertex, $$(-1)^{\kappa (\alpha _j)}f'(\alpha _j), \quad \alpha_j \sim v, \quad \kappa (\alpha _j) = \left\{\begin{array}{ll} 0, & \alpha _j = a_n, \\ 1, & \alpha _j = b_n, \end{array}\right.$$ are also orientation independent. Thus Schr\"odinger operators may be defined on undirected graphs. \section {Multipliers for homogeneous trees} \setcounter{equation}{0} In this section the graph ${\cal G}$ is assumed to be a homogeneous tree ${\cal T}$ whose edge weights are all $1$, and whose vertices have degree $\delta +1$. A particular edge $e = [0,1]$ is selected. For $g \in {\cal T}$, the function $x(g)$ will be the signed distance from $0 \in e$. The sign is taken to be nonnegative if $g \in e$, or if the shortest path from $0$ to $g$ includes $e$, and negative otherwise. The vertex conditions (\ref{2.a}) and (\ref{2.b}) are specialized by requiring $\gamma _v$ to have the same value $\gamma$ at all vertices $v$, $$\sum_{j=1}^{\delta + 1} (-1)^{\kappa (\alpha _j )} f'(\alpha _j) = \gamma f(v), \quad f(\alpha _j) = f(\alpha _{j+1}) , \quad j=1,\dots ,\delta , \ \alpha _j \sim v. \label{3.a}$$ In a homogeneous tree there is an obvious way to extend solutions of $-y'' +qy= \lambda y$ beyond $e$ so as to satisfy the vertex conditions (\ref{3.a}) as $x(g)$ increases (resp. decreases), which we will call moving to the right (left). At each vertex $v$ encountered as we move right (resp. left), impose the condition $$y'(v_+) = [y'(v_-) + \gamma y(v)]/\delta , \quad \Bigl ( {\rm resp.} \ y'(v_-) = [y'(v_+) - \gamma y(v)]/\delta \Bigr ) , \label{3.b}$$ in addition to the continuity condition. This extension of solutions of (\ref{1.a}) to adjacent edges $e_{\pm}$ provides a linear map from the solutions on $e$ to those on $e_{\pm}$. Two transition matrices will describe the propagation of initial data for solutions of (\ref{1.a}) as we move from edge to edge.
These transition matrices will generally have a pair of eigenvalues, and the propagation of the initial data can be described by decomposing the data into eigenvectors of the transition matrix, and then using the eigenvalues as multipliers. Having selected $e$, identify other edges $e_n = [v_0(n),v_1(n)]$ with the interval $[0,1]$ so that $v_0 \to 0$ when $x(v_0) < x(v_1)$. In these local coordinates there is a basis $C(t,\lambda ), S(t,\lambda )$ of solutions for (\ref{1.a}) satisfying $$\pmatrix{ C(0,\lambda ) & S(0,\lambda ) \cr C'(0,\lambda ) & S'(0,\lambda )} = \pmatrix{1 & 0 \cr 0&1}.$$ We will use the abbreviations $c(\lambda ) = C(1,\lambda ), \ c'(\lambda ) = C'(1,\lambda )$ and $s(\lambda ) = S(1,\lambda ), \ s'(\lambda ) = S'(1,\lambda )$. A solution $y$ of (\ref{1.a}) satisfying $y(0,\lambda ) = a$ and $y'(0, \lambda ) = b$ will have values at $1$ given by $$\pmatrix{ y(1,\lambda ) \cr y'(1,\lambda )} = M_1(\lambda )\pmatrix{a \cr b}, \quad M_1(\lambda ) = \pmatrix{ c(\lambda ) & s(\lambda ) \cr c'(\lambda ) & s'(\lambda )}.$$ The matrix taking initial data from the right endpoint of an edge to the left endpoint is $$M_0(\lambda ) = M_1^{-1}(\lambda ) = \pmatrix{ s'(\lambda ) & -s(\lambda ) \cr -c'(\lambda ) & c(\lambda )}.$$ The transition conditions (\ref{3.b}) at a vertex also have a matrix form on the initial data. 
The leftward transition $v_+ \to v_-$ and rightward transition $v_- \to v_+$ respectively have matrices $$J_0 = \pmatrix{1 & 0 \cr -\gamma /\delta & 1/\delta}, \quad J_1 = \pmatrix{1 & 0 \cr \gamma /\delta & 1/\delta}.$$ If we start at the left (respectively right) endpoint, we can propagate initial conditions across the vertex and then across the adjacent edge simply by multiplying the initial data respectively by the matrices $T_0(\lambda ) = M_0(\lambda )J_0$, $T_1(\lambda ) = M_1(\lambda )J_1$, where \begin{eqnarray*} T_0(\lambda ) &=&\pmatrix{ s'(\lambda ) + \gamma s(\lambda )/\delta & -s(\lambda )/\delta \cr -c'(\lambda ) - \gamma c(\lambda )/\delta & c(\lambda )/ \delta }, \\ T_1(\lambda ) &=&\pmatrix{ c(\lambda ) + \gamma s(\lambda )/\delta & s(\lambda )/\delta \cr c'(\lambda ) +\gamma s'(\lambda )/\delta & s'(\lambda )/ \delta }. \end{eqnarray*} In both cases $\det (M_j(\lambda )) = 1$ so that $\det T_j(\lambda ) = 1/\delta$, for $j=0,1$, while $${\rm tr \ }T_0(\lambda ) = s'(\lambda ) + {c(\lambda ) + \gamma s(\lambda ) \over \delta }, \quad {\rm tr \ }T_1(\lambda ) = c(\lambda ) + {s'(\lambda ) + \gamma s(\lambda ) \over \delta }\,.$$ The eigenvalues are $$\mu _j^{\pm}(\lambda ) = {\rm tr}(T_j)/2 \pm \sqrt{{\rm tr}(T_j)^2/4 - \det(T_j)},$$ and the corresponding eigenvectors for $T_j$ are multiples of \begin{eqnarray} E_0^{\pm} &=& \pmatrix{-s(\lambda ) \cr \delta \mu _0^{\pm} - \delta s'(\lambda ) - \gamma s(\lambda )} = \pmatrix{-s(\lambda ) \cr c(\lambda ) - \delta \mu _0^{\mp}} , \label{3.c} \\ E_1^{\pm} &=& \pmatrix{s(\lambda ) \cr \delta \mu _1^{\pm} - \delta c(\lambda ) - \gamma s(\lambda )} = \pmatrix{s(\lambda ) \cr s'(\lambda ) - \delta \mu _1^{\mp}} \,.\nonumber \end{eqnarray} The alternate forms come from the formulas for ${\rm tr}(T_j)$. Suppose that $\lambda$ is real, so that the matrices $T_j(\lambda )$ are real. 
When the term ${\rm tr}(T_j)^2/4 - \det (T_j)$ is nonpositive we have $$|\mu _j^{\pm}| = 1/ \sqrt{\delta}, \quad - 2/\sqrt{\delta } \le {\rm tr}(T_j) \le 2/\sqrt{\delta } ,$$ and the eigenvalues are conjugate pairs. On the other hand, when ${\rm tr}(T_j)^2/4 - \det (T_j)$ is nonnegative, the eigenvalues are real, and $\mu _j^+ \mu _j^- = 1/\delta$ implies they have the same sign. Most of the following lemma is well known \cite[p. 8]{Magnus}. \begin{lemma} \label{Lemma3.1} Since $q(x)$ is even, $c(\lambda ) = s'(\lambda )$. It follows that $\mu _0^{\pm}(\lambda ) = \mu _1^{\pm} (\lambda )$. In addition if $s(\lambda ) = 0$, then $c^2(\lambda ) = 1$.\end{lemma} \paragraph{Proof:} Since $q(x) = q(1-x)$, the identity $$C(1-x,\lambda ) = s'(\lambda )C(x,\lambda ) - c'(\lambda )S(x,\lambda )$$ holds because both sides are solutions of (\ref{1.a}) with the same initial data at $x=1$. Evaluation at $x=0$ gives $c(\lambda ) = s'(\lambda )$. We also have the Wronskian identity $$1 = c(\lambda )s'(\lambda ) - s(\lambda )c'(\lambda ).$$ When $s(\lambda ) = 0$ the equation $c^2(\lambda ) = 1$ is satisfied. To establish the equality of the eigenvalues for $T_0$ and $T_1$, it is sufficient to observe that their determinants and traces are the same. \hfill$\Box$ In light of the previous lemma the transition matrix eigenvalues will be denoted $\mu ^{\pm}$. \begin{lemma}\label{Lemma3.2} \it If $| \mu^{\pm}(\lambda ) | = 1/\sqrt{\delta}$ then $\lambda$ is in the spectrum of ${\cal L}$, and so is real.\end{lemma} \paragraph{Proof:} The various cases being similar, suppose that $y$ is a nontrivial solution of (\ref{1.a}) on $e$ whose initial data at the right endpoint $x(g) = 1$ is an eigenvector for $T_1(\lambda )$ with eigenvalue $\mu ^+$. Extend $y$ to $x(g) > 0$ using $y(x(g)+1) = \mu ^+ y(x(g))$. The self adjoint conditions (\ref{3.a}) hold for $y$. Now for $K = 1,2,3,\dots$ let $\phi _K$ be cutoff functions constructed from the edge $e$ as in the proof of Theorem~\ref{Thm2.2}. Since there are $\delta ^k$ edges at distance $k$ from $e$, each contributing $|\mu ^+|^{2k}\int_0^1 |y|^2 = \delta ^{-k}\int_0^1 |y|^2$, the truncations satisfy $\| \phi _K y \| ^2 \ge K\int_0^1 |y|^2$, while $({\cal L} - \lambda )\phi _K y = -2\phi _K 'y' - \phi _K ''y$ is supported on the edges where $\phi _K '$ is not identically zero and has norm bounded independently of $K$. Thus $\| ({\cal L} - \lambda )\phi _K y \| / \| \phi _K y \| \to 0$ as $K \to \infty$, so $\lambda$ is in the spectrum of ${\cal L}$, and since ${\cal L}$ is self adjoint, $\lambda$ is real. \hfill$\Box$ Let $\sigma _1 = \{ \lambda \mid |\mu ^{\pm}(\lambda )| = 1/\sqrt{\delta } \}$; by Lemma~\ref{Lemma3.2}, $\sigma _1 \subset \real$. \begin{theorem}\label{Thm3.3} \it The multipliers $\mu ^{\pm}(\lambda )$ extend to single valued analytic functions on $\complex \setminus \sigma _1$ satisfying $|\mu ^+(\lambda )| > 1/\sqrt{\delta }$ and $|\mu ^-(\lambda )| < 1/\sqrt{\delta }$.
These functions have continuous extensions to the real axis which are analytic except on the discrete set where ${\rm tr}(T_j)^2 = 4 \det (T_j)$. If $\nu \in \sigma _1$ then $$\lim_{\epsilon \to 0^+} \mu^{\pm}(\nu + i \epsilon ) - \mu^{\pm}(\nu - i \epsilon ) = 2i\,{\rm Im}\,(\mu^{\pm}(\nu ))\,. \label{3.g}$$ \end{theorem} \paragraph{Proof:} Since ${\rm tr}(T_j)$ and $\det (T_j)$ are entire functions of $\lambda$, the eigenvalues $\mu ^{\pm }$ and eigenvectors will be analytic in any simply connected domain with ${\rm tr}(T_j)^2 - 4\det(T_j) \not= 0$. The condition ${\rm tr}(T_j)^2 - 4\det(T_j) = 0$ is equivalent to requiring $\mu ^+ = \mu ^-$, in which case $(\mu ^{\pm})^2 = 1/ \delta$. The fact that the functions $\mu ^+(\lambda )$ and $\mu ^-(\lambda )$ extend as single valued analytic functions on the complement of $\sigma _1$ satisfying $|\mu ^+(\lambda )| > 1/\sqrt{\delta }$ and $|\mu ^-(\lambda )| < 1/\sqrt{\delta }$ is simply a consequence of the identity $\mu ^+\mu ^- = 1/\delta$. To obtain the continuous extension to the real axis, note that the set of points where the eigenvalues coalesce, or ${\rm tr}(T_j)^2 - 4\det(T_j) = 0$, is the zero set of an entire function, which has isolated (real) zeroes $r_i$. The analytic functions $\mu^{\pm}$ thus have an analytic continuation from either half plane to the real axis with these points $r_i$ omitted. At these points $$\lim_{\lambda \to r_i} \mu^{\pm}(\lambda ) = \mbox{tr}\,(T_j)/2$$ independent of the branch of the square root. We have observed in Lemma~\ref{Lemma3.2} that if $|\mu^{\pm}(\nu )| = 1/\sqrt{\delta }$, then $\nu \in \real$. If ${\rm tr}\,(T_j)^2/4 = \det (T_j) = 1/\delta$, then both sides of (\ref{3.g}) are $0$. Suppose instead that ${\rm tr}\,(T_j)^2/4 - \det (T_j) <0$, so that the eigenvalues $\mu^{\pm}(\nu )$ are a nonreal conjugate pair. Since the eigenvalues are distinct, they extend analytically across the real axis. 
There are two possibilities: either (i) (\ref{3.g}) holds, in which case the extension of $\mu^{\pm}(\nu +i\epsilon )$ is $\mu^{\mp}(\nu - i\epsilon )$, or (ii) $\mu^{\pm}(\nu + i\epsilon )$ extends to $\mu^{\pm}(\nu - i\epsilon )$. The second case will be excluded because $|\mu ^+| > 1/\sqrt{\delta}$ in the complement of $\sigma _1$. If (ii) held, then $\mu ^+$ would be an analytic function of $\lambda$ in a neighborhood of $\nu$ satisfying $|\mu^+(\nu)| = 1/\sqrt{\delta }$, and $|\mu^+(\lambda )| \ge 1/\sqrt{\delta }$. But this violates the open mapping theorem \cite[p. 132]{Ahl}, so (i) must hold. The treatment of $\mu ^-$ is the same.\hfill$\Box$ Let $\rho$ denote the resolvent set of ${\cal L}$. \begin{theorem}\label{Thm3.4} \it If $\lambda \in \complex \setminus \sigma _1$ then there is a nontrivial solution $y_1$ of (\ref{1.a}) on ${\cal T}$ which satisfies the vertex conditions (\ref{3.a}), is square integrable on $x(g) > 0$, and whose initial data at $1$ is an eigenvector of the transition matrix $T_1(\lambda )$ with eigenvalue $\mu ^- (\lambda )$. If $\lambda \in \rho$ the space of solutions of (\ref{1.a}) which satisfy the vertex conditions (\ref{3.a}) and are square integrable on $x(g) > 0$, is one dimensional. The analogous statements hold for solutions $y_0$ on $x(g) < 1$ and the transition matrix $T_0(\lambda )$. \end{theorem} \paragraph{Proof:} If $\lambda \in \complex \setminus \sigma _1$ then by Theorem~\ref{Thm3.3} the eigenvalue $\mu ^-(\lambda )$ is well defined. Use a corresponding eigenvector as the initial data at $1$ for a solution $y_1$ of (\ref{1.a}) on $e$. The solution on $x(g) > 0$ obtained by propagating with the transition matrix $T_1$, i.e. with the eigenvalue $\mu ^-(\lambda )$, will satisfy the vertex conditions (\ref{3.a}). 
The square integrability is checked by the computation $$\int_{x(g) > 0} |y_1|^2 = \sum_{k=0}^\infty \delta ^k \int_0^1 |\mu ^-(\lambda )^k y_1|^2 = \sum_{k=0}^\infty \delta ^k|\mu ^-(\lambda )|^{2k} \int_0^1 |y_1|^2\,.$$ Since $|\mu ^-(\lambda )| < 1/\sqrt{\delta }$, the solution $y_1$ is square integrable. Suppose that $\lambda \in \rho$ and that the sum of the dimensions of the two spaces of solutions of (\ref{1.a}) satisfying the vertex conditions (\ref{3.a}), and square integrable on $x(g)>0$ and $x(g) < 1$ respectively, exceeds $2$. Since the space of solutions to $-y'' + qy = \lambda y$ is two dimensional on $e$, there would be at least one nontrivial solution of the equation which satisfied all the vertex conditions and was square integrable on ${\cal T}$. This function would be a nonzero element of the null space of ${\cal L} - \lambda I$, which is impossible for $\lambda \in \rho$. \hfill$\Box$ \section{Functions of ${\cal L}$ for homogeneous trees} \setcounter{equation}{0} \subsection*{The resolvent and spectrum of ${\cal L}$} The solutions $y_0$ and $y_1$ of Theorem~\ref{Thm3.4} can be used to construct the resolvent of ${\cal L}$ on ${\cal T}$. The explicit description of the eigenvectors for $\mu ^-(\lambda )$ shows that they satisfy the boundary conditions $$[c(\lambda ) - \delta \mu ^+(\lambda )]y(0) + s(\lambda )y'(0) = 0 , \label{4.z}$$ $$[s'(\lambda ) - \delta \mu ^+(\lambda )]y(1) - s(\lambda )y'(1) = 0 .$$ Since $q$ is even the solutions $C_1(x,\lambda )$ and $S_1(x,\lambda )$ of (\ref{1.a}) satisfying $$\pmatrix{ C_1(1,\lambda ) & S_1(1,\lambda ) \cr C_1'(1,\lambda ) & S_1'(1,\lambda )} = \pmatrix{1 & 0 \cr 0&1}$$ may also be written as $C_1(x,\lambda ) = C(1-x,\lambda )$ and $S_1(x,\lambda ) = -S(1-x,\lambda )$.
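These identities are easily checked in the free case $q \equiv 0$ (a sanity check only, not needed for the argument). There $C(x,\lambda ) = \cos (\omega x)$ and $S(x,\lambda ) = \sin (\omega x)/\omega$ with $\omega = \sqrt{\lambda }$, so $$C_1(x,\lambda ) = \cos (\omega (1-x)), \quad S_1(x,\lambda ) = -\sin (\omega (1-x))/\omega \,,$$ and evaluation at $x = 1$ gives $C_1 = 1$, $C_1' = 0$, $S_1 = 0$, $S_1' = 1$, as required.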
To make a specific choice of functions $y_0$ and $y_1$, define \begin{eqnarray*} U(x,\lambda ) &=& -s(\lambda )C(x,\lambda ) + [c(\lambda ) - \delta \mu ^+(\lambda )]S(x,\lambda ) ,\\ V(x,\lambda ) &=& s(\lambda )C_1(x,\lambda ) + [s'(\lambda ) - \delta \mu ^+(\lambda )]S_1(x, \lambda )\\ &=& s(\lambda )C_1(x,\lambda ) + [c(\lambda ) - \delta \mu ^+(\lambda )]S_1(x, \lambda ) \,. \end{eqnarray*} The Wronskian $W(\lambda ) = W(V,U) = V(x,\lambda )U'(x,\lambda )-V'(x,\lambda )U(x,\lambda )$ is independent of $x$, and has the value \begin{eqnarray} W(\lambda ) &=& s(\lambda ) \Bigl [ -s(\lambda )c'(\lambda ) + (c(\lambda ) - \delta \mu ^+)c(\lambda ) \Bigr ] \nonumber \\ &&- \Bigl [ c(\lambda ) - \delta \mu ^+ \Bigr ] \Bigl [ - s(\lambda ) c(\lambda ) + (c(\lambda ) - \delta \mu ^+) s(\lambda )\Bigr ] \label{4.a} \\ &=& s(\lambda )[1 - \delta \mu ^+ ] [1 + \delta \mu ^+ ]\,. \nonumber \end{eqnarray} For $\lambda \in \rho$ and $W(\lambda ) \not= 0$ define the kernel $$R_e(x,t,\lambda ) = \left\{\begin{array}{ll} U(x,\lambda )V(t,\lambda )/W, \quad 0 \le x \le t \le 1\,, \\ U(t, \lambda )V(x, \lambda )/W, \quad 0 \le t \le x \le 1\,. \end{array}\right. \label{4.b}$$ If $f_e$ is supported in the interior of $e$ the function $$h_e(x) = \int_0^1 R_e(x,t,\lambda )f_e(t) \, dt$$ satisfies $[-D^2+q-\lambda ]h_e = f_e$, and in neighborhoods of $0$ and $1$ the function $h_e$ satisfies (\ref{1.a}) and the boundary conditions (\ref{4.z}) \cite [p. 309]{BR}. Extending $U$ to $x(g) < 0$ and $V$ to $x(g) > 1$ using the multiplier $\mu ^-$, Theorem~\ref{Thm3.4} shows that $h_e$ is square integrable on ${\cal T}$ and satisfies the vertex conditions (\ref{3.a}). Using cutoff functions $\phi _K$ as in Theorem~\ref{Thm2.2}, it is easy to check that $h$ is in the domain of ${\cal L}$. Let $f_e$ denote the restriction of $f \in L^2({\cal T})$ to the edge $e$. 
Since integration of $f_e$ against the meromorphic kernel $R_e(x,t,\lambda )$ agrees with $R(\lambda )f_e$ as long as $W(\lambda ) \not= 0$, they must agree for all $\lambda \in \rho$. The discussion above implies the next result. \begin{theorem}\label{Thm4.1} For $\lambda \in \rho$, $$R(\lambda )f = \sum_e \int_0^1 R_e(x,t,\lambda )f_e(t) \, dt ,$$ the sum converging in $L^2({\cal T})$. \end{theorem} Turning to the spectrum of ${\cal L}$, let $\sigma _2 = \{ \lambda \in \real \mid s(\lambda ) = 0 \}$. \begin{theorem}\label{Thm4.2} The spectrum $\sigma$ of ${\cal L}$ on $L^2({\cal T})$ is the semibounded set $\sigma = \sigma _1 \cup \sigma _2$. If $\delta = 1$, then $\sigma _2 \subset \sigma _1$. If $\delta > 1$ then $\sigma _1 \cap \sigma _2 = \emptyset$ and every point in the infinite sequence $\sigma _2$ is an eigenvalue. \end{theorem} \paragraph{Proof:} Lemma~\ref{Lemma3.2} has already shown that $\sigma _1 \subset \sigma$. In case $\delta = 1$ and $\lambda \in \sigma _2$, Lemma~\ref{Lemma3.1} implies ${\rm tr}(T_j)^2/4 = c^2(\lambda ) = 1$, so $\mu ^+ = \pm 1$ and $\sigma _2 \subset \sigma _1$. Suppose $\delta$ is arbitrary and that $\lambda _1 \in \real \setminus (\sigma _1 \cup \sigma _2 )$. Then $W(\lambda _1) \not= 0$ by Theorem~\ref{Thm3.3} and (\ref{4.a}). Since $|\mu^-(\lambda _1)| < 1/\sqrt{\delta }$ the resolvent formula of Theorem~\ref{Thm4.1} defines an analytic $L^2({\cal T})$ valued function in a neighborhood of $\lambda _1$ as long as $f$ is supported on a finite union of edges. Let $[a,b]$ be a compact interval containing $\lambda _1$ and contained in $\complex \setminus (\sigma _1 \cup \sigma _2 )$. If $P$ denotes the family of spectral projections for ${\cal L}$, then \cite[p. 237,264]{RS2} for any $f \in L^2({\cal T})$ $${1 \over 2}[P_{[a,b]} + P_{(a,b)}]f = \lim_{\epsilon \downarrow 0} {1 \over 2 \pi i}\int_a^b [R(\lambda + i\epsilon ) - R(\lambda - i\epsilon )]f \ d\lambda \,.
\label{4.c}$$ By the observations above, the right hand side of (\ref{4.c}) vanishes on the dense set of $f$ supported on finitely many edges. This means that $[P_{[a,b]} + P_{(a,b)}]f =0$ for all $f \in L^2({\cal T})$, and $[a,b]$ is in the resolvent set $\rho$. Thus $\complex \setminus (\sigma _1 \cup \sigma _2 ) \subset \rho$. Finally, suppose that $\lambda _1 \in \sigma _2$ and $\delta > 1$. By Lemma~\ref{Lemma3.1} $c^2(\lambda _1) = 1$. In addition ${\rm tr}(T_j) = c(\lambda _1) (1 + 1/\delta )$, $\mu ^+ = c(\lambda _1)$, and $\mu ^- = c(\lambda _1)/\delta$. Since $|\mu ^-| = 1/\delta$, $\lambda _1 \notin \sigma _1$. Eigenvectors for $\mu ^-$ are multiples of $\pmatrix{0 \cr 1}$ for both $T_j$. The function $S(x,\lambda _1)$, which has such initial data at both $0$ and $1$, thus extends to an $L^2$ eigenfunction of $-D^2 + q$ on ${\cal T}$. \hfill$\Box$ As in the classical Hill's equation, the discriminant $$\Delta (\lambda ) = {\rm tr}(T_j(\lambda )) = {\delta + 1 \over \delta }c(\lambda ) +{\gamma \over \delta }s(\lambda )$$ plays a central role in describing the spectrum of ${\cal L}$. With the help of Lemma~\ref{Lemma3.2} one checks easily that $\lambda \in \sigma _1$ if and only if $-2/\sqrt{\delta} \le \Delta (\lambda ) \le 2/\sqrt{\delta }$. \begin{theorem}\label{Thm4.3} Suppose $\delta \ge 2$ and $\mu _n$, $n=1,2,3,\dots ,$ are the naturally ordered points in $\sigma _2$. Then $\Delta (\mu _n) = (-1)^n(\delta + 1)/\delta$. For each $\eta$ satisfying $-2/\sqrt{\delta} \le \eta \le 2/\sqrt{\delta }$, the equation $\Delta (\lambda ) - \eta = 0$ has exactly one root, counted with multiplicity, in each of the intervals $(-\infty ,\mu _1)$ and $(\mu _n, \mu _{n+1})$. The function $\partial _{\lambda }\Delta$ has no roots in $\sigma _1$. \end{theorem} \paragraph{Proof:} By Lemma~\ref{Lemma3.1} $s'(\lambda ) = c(\lambda )$ and $c^2(\mu _n ) = 1$.
Moreover $s(\mu _n) = 0$ implies $$\Delta (\mu _n) = {\delta + 1 \over \delta }c(\mu _n ).$$ Counting the number of sign changes for $S(x,\mu _n)$ \cite[p. 41]{Pos} gives $c(\mu _n) = s'(\mu _n ) = (-1)^n$. Since $|\Delta (\mu _n)| = (\delta + 1)/ \delta > 2/\sqrt{\delta }$ when $\delta \ge 2$, the function $\Delta (\lambda ) - \eta$ must have at least one root between $\mu _n$ and $\mu _{n+1}$. To show that there is exactly one root, we begin by considering the case $q(x) = 0$ and $\gamma = 0$. In this case the claim is elementary. For $0 \le t \le 1$ let $$\Delta _t(\lambda ) = {\delta + 1 \over \delta }c_t(\lambda ) + t{\gamma \over \delta } s_t(\lambda ),$$ where $c_t$ and $s_t$ are the functions $c(\lambda )$ and $s(\lambda )$ for the potential $tq(x)$, with $t\gamma$ in the vertex condition. For each $t$ the function $\Delta _t(\lambda ) - \eta$ is entire, with real roots which avoid the points $\mu _n(t)$. By Rouch\'e's theorem \cite[p. 152]{Ahl} the number of roots of $\Delta _t(\lambda ) - \eta = 0$, counted with multiplicity, between $\mu _n(t)$ and $\mu _{n+1}(t)$ is locally constant in $t$. Since the number of roots is $1$ when $t=0$, it remains $1$ up to $t=1$. The case of the interval $(-\infty ,\mu _1)$ may be handled in a similar fashion, although in this case the trapping of the roots simply makes use of a uniform estimate of the growth of $\Delta _t(\lambda )$ as $\lambda \to -\infty$ for $0 \le t \le 1$, \cite[p. 13]{Pos}. Finally, if $\partial _{\lambda }\Delta (\lambda _0) = 0$ for $\lambda _0 \in \sigma _1$, then $\eta = \Delta (\lambda _0)$ would satisfy $-2/\sqrt{\delta} \le \eta \le 2/\sqrt{\delta }$, and $\Delta (\lambda ) - \eta$ would have a root of multiplicity higher than $1$, which is impossible. \hfill$\Box$ \subsection*{Spectral projections} The rather explicit formula (\ref{4.b}) can be used to compute the spectral projections for ${\cal L}$.
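In the free case $q = 0$, $\gamma = 0$ one has $c(\lambda ) = \cos \sqrt{\lambda }$ and $s(\lambda ) = \sin \sqrt{\lambda }/\sqrt{\lambda }$, so the band and gap picture of Theorem~\ref{Thm4.3} is easy to check numerically. The following sketch is an illustration only (the function names are ours, and the grid-based root count is a heuristic check, not a proof): it verifies that $\Delta (\mu _n) = (-1)^n(\delta +1)/\delta$ lies outside the band $|\Delta | \le 2/\sqrt{\delta }$, and that $\Delta (\lambda ) - \eta$ changes sign exactly once in the gap $(\mu _1,\mu _2)$.

```python
import math

# Free case q = 0, gamma = 0: c(lambda) = cos(sqrt(lambda)) for lambda >= 0.
def c(lam):
    return math.cos(math.sqrt(lam))

def discriminant(lam, delta):
    # Delta(lambda) = (delta+1)/delta * c(lambda) + (gamma/delta) * s(lambda), gamma = 0.
    return (delta + 1) / delta * c(lam)

delta = 2
for n in range(1, 6):
    mu_n = (n * math.pi) ** 2   # sigma_2: zeros of s(lambda) = sin(sqrt(lambda))/sqrt(lambda)
    val = discriminant(mu_n, delta)
    assert abs(val - (-1) ** n * (delta + 1) / delta) < 1e-12
    assert abs(val) > 2 / math.sqrt(delta)   # outside the band |Delta| <= 2/sqrt(delta)

# Delta(lambda) - eta should change sign exactly once in the gap (mu_1, mu_2)
# for eta with |eta| <= 2/sqrt(delta); count sign changes on a fine grid.
eta = 2 / math.sqrt(delta)
grid = [math.pi ** 2 * (1 + 3 * k / 10000) for k in range(1, 10000)]
vals = [discriminant(lam, delta) - eta for lam in grid]
changes = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
assert changes == 1
```

Here the single sign change reflects the monotonicity of $\cos \sqrt{\lambda }$ for $\sqrt{\lambda } \in (\pi ,2\pi )$.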
These computations will involve the extensions of $C(x,\lambda )$ and $S(x,\lambda )$ to the left and right of $e$, obtained by means of the transition matrices $T_j$. Write $x = t+k$ for integer $k$ and $0 \le t < 1$. It is convenient to compute the values for $C(t+k,\lambda )$ and $S(t+k,\lambda )$ by diagonalizing the transition matrices $$T_0(\lambda ) = S_0\pmatrix{\mu ^+ &0 \cr 0&\mu ^-}S_0^{-1}, \quad T_1(\lambda ) = S_1\pmatrix{\mu ^+ &0 \cr 0&\mu ^-}S_1^{-1},$$ \begin{eqnarray*} S_0 &=& \pmatrix{-s(\lambda ) & -s(\lambda ) \cr c(\lambda ) - \delta \mu ^- & c(\lambda ) - \delta \mu ^+ }, \\ S_0^{-1} &=& {1 \over \delta s(\lambda )[\mu ^+ - \mu ^-]} \pmatrix{c(\lambda ) - \delta \mu ^+ & s(\lambda ) \cr \delta \mu ^- - c(\lambda ) & -s(\lambda ) }, \\ S_1 &=& \pmatrix{s(\lambda ) & s(\lambda ) \cr c(\lambda ) - \delta \mu ^- & c(\lambda ) - \delta \mu ^+ }, \\ S_1^{-1} &=& {1 \over \delta s(\lambda )[\mu ^- - \mu ^+]} \pmatrix{c(\lambda ) - \delta \mu ^+ & -s(\lambda ) \cr \delta \mu ^- - c(\lambda ) & s(\lambda ) }\,. \end{eqnarray*} If $k < 0$ and $$C(t+k,\lambda ) = c_1(k) C(t,\lambda ) + s_1(k)S(t,\lambda ), \quad S(t+k,\lambda ) = c_2(k) C(t,\lambda ) + s_2(k)S(t,\lambda ),$$ then $$\pmatrix{c_1(k) & c_2(k) \cr s_1(k) &s_2(k) } = S_0\pmatrix{[\mu ^+]^k &0 \cr 0&[\mu ^-]^k}S_0^{-1}\,.$$ The expression is slightly different if $k >0$ since the transition matrix $T_1$ uses a basis of values at $x=1$, rather than $x=0$. 
Thus for $k>0$, $$\pmatrix{c_1(k) & c_2(k) \cr s_1(k) &s_2(k) } = \pmatrix{s'(\lambda ) & -s(\lambda ) \cr -c'(\lambda ) & c(\lambda )} S_1\pmatrix{[\mu ^+]^k &0 \cr 0&[\mu ^-]^k}S_1^{-1} \pmatrix{c(\lambda ) & s(\lambda ) \cr c'(\lambda ) & s'(\lambda )}.$$ For $f$ supported in $e$ the equation $-y'' + qy - \lambda y = f$ has solutions $$K_1f(x) = \int_0^x K_1(x,t,\lambda )f(t) \, dt\,, \quad K_2f(x) = \int_x^1 K_2(x,t,\lambda )f(t) \, dt\,,$$ where the kernels for these formal right inverses are \begin{eqnarray*} K_1(x,t,\lambda ) &=& C(x,\lambda )S(t,\lambda )-S(x,\lambda )C(t,\lambda ), \\ K_2(x,t,\lambda ) &=& C(t,\lambda )S(x,\lambda )-S(t,\lambda )C(x,\lambda ). \end{eqnarray*} Notice that these functions are entire as functions of $\lambda$. Define $K = [K_1 + K_2]/2$ and \begin{eqnarray*} \lefteqn{G(x,t,\lambda )} && \\ &=& R(x,t,\lambda ) - K(x,t,\lambda )\\ &=& \left\{\begin{array}{ll} U(t,\lambda )V(x,\lambda )/W(\lambda ) - C(x,\lambda )S(t,\lambda )/2+ S(x,\lambda )C(t,\lambda )/2, & t \le x,\\ U(x,\lambda )V(t,\lambda )/W(\lambda ) - C(t,\lambda )S(x,\lambda )/2+ S(t,\lambda )C(x,\lambda )/2, & x \le t\,.\end{array} \right. \end{eqnarray*} For each fixed $t \in [0,1]$ the function $G(x,t,\lambda )$ is a solution of ${\cal L}G = \lambda G$, except possibly for $x=t$. But since $G$ and $\partial _xG$ are continuous at $x=t$, it is a solution for all $x$, so that for $0 \le t \le 1$, $$G(x,t,\lambda ) = U(x,\lambda )V(t,\lambda )/W(\lambda ) - C(t,\lambda )S(x,\lambda )/2+S(t,\lambda )C(x,\lambda )/2. \label{4.d}$$ To obtain a more explicit description of the spectral projections of ${\cal L}$, we restrict (\ref{4.c}) to $f \in L^2({\cal T})$ supported in $e$, so the expression (\ref{4.b}) is available. 
If in addition $h \in L^2({\cal T})$ is supported in the union of finitely many edges, then since $K(x,t,\lambda )$ is entire in $\lambda$, $${1 \over 2}\langle [P_{[a,b]} + P_{(a,b)}]f, h \rangle = \lim_{\epsilon \downarrow 0} {1 \over 2 \pi i} \int_a^b \langle [G(\lambda + i\epsilon ) - G(\lambda - i\epsilon )]f,h \rangle \ d\lambda \,. \label{4.e}$$ Using the definitions of $U$ and $V$ and the identity $$\pmatrix{C_1(t,\lambda ) \cr S_1(t,\lambda ) } = \pmatrix{ s'(\lambda ) & - c'(\lambda ) \cr -s(\lambda ) & c(\lambda )} \pmatrix{C(t,\lambda ) \cr S(t,\lambda ) },$$ the expression (\ref{4.d}) may be written as $$G(x,t,\lambda ) = \Bigl (C(x,\lambda ),S(x,\lambda ) \Bigr )\Psi (\lambda ) \pmatrix{C(t,\lambda ) \cr S(t,\lambda )},$$ where \begin{eqnarray} \Psi (\lambda ) &=& \pmatrix{ 0 & 1/2 \cr -1/2 & 0} \label{4.f}\\ &&+ {1 \over W(\lambda )}\pmatrix{-s^2(\lambda ) & -s(\lambda )[s'(\lambda )-\delta \mu ^+(\lambda )] \cr s(\lambda )[c(\lambda )-\delta \mu ^+(\lambda )] & [c(\lambda )-\delta \mu ^+(\lambda )] [s'(\lambda )-\delta \mu ^+(\lambda )] } \times \nonumber \\ &&\pmatrix{ s'(\lambda ) & - c'(\lambda ) \cr -s(\lambda ) & c(\lambda )} \nonumber \\ &=& {s(\lambda ) \over W(\lambda )}\pmatrix{-s\delta \mu ^+ & - 1/2 + c(\lambda ) \delta \mu ^+ -(\delta \mu ^+)^2/2 \cr -1/2 + c(\lambda ) \delta \mu ^+ - (\delta \mu ^+)^2 /2& [c(\lambda )-\delta \mu ^+][1 - c(\lambda )\delta \mu ^+]/s }\nonumber \end{eqnarray} These calculations essentially follow the program in \cite{Cod2}, so we refer to this reference for the proofs that $$\Psi ^*(\lambda ) = \Psi (\overline{\lambda }), \quad \,{\rm Im}\,(\Psi )= {\Psi - \Psi ^* \over 2i} \ge 0,$$ and a fuller discussion of the next theorem. In our context the development of an eigenfunction expansion is simplified since $\Psi$ has continuous extensions from the upper and lower half planes to the real axis except possibly where $s(\lambda ) = 0$ or $\delta \mu ^+(\lambda )^2 = 1$. 
\begin{theorem}\label{Thm4.4} For $f$ supported in $e$ the spectral projections for ${\cal L}$ may be written as \begin{eqnarray} \lefteqn{ {1 \over 2} [P_{[a,b]} + P_{(a,b)}]f(x) }&& \label{4.g} \\ &=& \lim_{\epsilon \downarrow 0} {1 \over \pi } \int_a^b \int_0^1 \Bigl (C(x,\nu ),S(x,\nu ) \Bigr ) \,{\rm Im}\,(\Psi (\nu + i\epsilon )) \pmatrix{C(t,\nu ) \cr S(t,\nu )} f(t) \, dt \ d \nu \nonumber \\ &=& \int_a^b \Bigl (C(x,\nu ),S(x,\nu ) \Bigr ) \hat f(\nu ) \ dM(\nu )\nonumber \end{eqnarray} where the transform is defined by $$\hat f(\nu ) = \int_0^1 \pmatrix{C(t,\nu ) \cr S(t,\nu )} f(t) \, dt$$ and the spectral matrix is $$M(\nu ) = \lim_{\epsilon \to 0^+} {1\over \pi }\int_0^{\nu } \,{\rm Im}\, \Psi (t+i\epsilon ) \, dt\,.$$ \end{theorem} The explicit formula for $\Psi (\lambda )$ together with Theorem~\ref{Thm3.3} and (\ref{4.a}) provide the next result. \begin{corollary}\label{Corollary4.5} On the complement of $\sigma _2$ the spectral measure is absolutely continuous with respect to Lebesgue measure. \rm \end{corollary} Some comments are in order regarding the transform $f \to \hat f$. The classical treatments \cite[p. 1351]{Dunford} of eigenfunction expansions for ordinary differential operators on an interval ${\cal I}$ might suggest stronger results than Theorem~\ref{Thm4.4}, including the surjectivity of $f \to \hat f$ from $L^2({\cal I})$ to $L^2(M)$, and the explicit diagonalization $\widehat{{\cal L}f}(\nu ) = \nu {\hat f}(\nu )$. However we would then anticipate an infinite spectral matrix, whose explicit determination as in (\ref{4.f}) might still involve the computations above. The approach here is instead based on the study of self adjoint extensions of symmetric ordinary differential operators on $L^2[0,1]$ in the larger space $L^2({\cal T})$ \cite[pp. 121--139]{Akhiezer}, \cite{Cod2}, \cite[pp. 499--513]{Cod4}. 
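In the free case $q = 0$ the entries of the transform $f \to \hat f$ are elementary integrals, which makes the definition easy to exercise numerically. The following sketch is purely illustrative (the midpoint quadrature and the name {\tt transform} are our choices; $C(t,\nu ) = \cos (\sqrt{\nu }\,t)$ and $S(t,\nu ) = \sin (\sqrt{\nu }\,t)/\sqrt{\nu }$ are the free solutions):

```python
import math

def transform(f, nu, n=20000):
    """Approximate hat f(nu) = (int_0^1 C(t,nu) f(t) dt, int_0^1 S(t,nu) f(t) dt)
    by the midpoint rule, using the free-case solutions C and S."""
    r = math.sqrt(nu)
    h = 1.0 / n
    ts = [(k + 0.5) * h for k in range(n)]
    fC = h * sum(math.cos(r * t) * f(t) for t in ts)
    fS = h * sum(math.sin(r * t) / r * f(t) for t in ts)
    return fC, fS

# For f = 1 the integrals have closed forms: sin(r)/r and (1 - cos(r))/nu.
nu = 2.0
r = math.sqrt(nu)
fC, fS = transform(lambda t: 1.0, nu)
assert abs(fC - math.sin(r) / r) < 1e-8
assert abs(fS - (1 - math.cos(r)) / nu) < 1e-8
```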
\subsection*{Pointwise decay for the semigroup $\exp(-\tau {\cal L})$} The aim of this section is to develop an asymptotic expansion and pointwise decay estimates as $\tau \to \infty$ for the kernel of the semigroup $\exp(-\tau {\cal L})$ generated by ${\cal L}$ on $L^2({\cal T})$ when $\delta > 1$. The analysis of the semigroup kernel is based on a well known contour integral representation involving the resolvent. Thus we begin with pointwise estimates for the resolvent kernel. \begin{lemma}\label{Lemma4.6} \it Suppose that $\delta > 1$, $\epsilon > 0$ and $|\lambda - \nu | > \epsilon$ for all $\nu$ with $s(\nu ) = 0$. If $0 \le x,t \le 1$ and $x+m$ is the signed distance from $0 \in e$, then $$|R(x+m,t,\lambda )| \le K |\mu ^-|^{|m|}\exp(-|(x-t)\,{\rm Im}\,(\sqrt{\lambda })|).$$ \end{lemma} \paragraph{Proof:} Using (\ref{4.b}), the case $0 \le x \le t \le 1$ is considered first. From the estimates (\ref{3.e}) and (\ref{3.f}) one obtains $$|U(x,\lambda )V(t,\lambda )| \le {K_1 \over 1 + |\sqrt{\lambda }|}\exp(3|\,{\rm Im}\,(\sqrt{\lambda })|) \exp(-|\,{\rm Im}\,(\sqrt{\lambda })(x-t)|).$$ The estimate $$|s(\lambda )| \ge {K_2 \over 1 + |\sqrt{\lambda }|}\exp(|\,{\rm Im}\,(\sqrt{\lambda })|)$$ can be established using \cite[p. 27]{Pos} $$|\sin(z)| \ge C_{\epsilon }\exp(|\, {\rm Im}\,(z)|), \quad |z-n\pi | \ge \epsilon /2 , \quad C_{\epsilon } > 0,$$ and (\ref{3.e}). By Theorem~\ref{Thm3.3} we have $|\mu ^+| \ge 1/\sqrt{\delta}$, and $\delta > 1$, so that $$|W(\lambda )| \ge {K_3 \over 1 + |\sqrt{\lambda }|}\exp(3|\,{\rm Im}\,(\sqrt{\lambda })|)$$ as long as $|\lambda - \nu | > \epsilon$ for all $\nu$ with $s(\nu ) = 0$. This establishes the estimate in case $0 \le x \le t \le 1$, while the case $0 \le t \le x \le 1$ is similar. Again suppose that $0 \le x \le t \le 1$ and $m < 0$. The function $U(x,\lambda )$ is extended to $x < 0$ using the multiplier $\mu ^-$. 
Thus the initial data for $U$ at $m$ is $$\pmatrix{U(m,\lambda ) \cr U'(m,\lambda )} = [\mu ^-(\lambda )]^{|m|} \pmatrix{U(0,\lambda ) \cr U'(0,\lambda )}\,.$$ The argument is now similar to the case $m=0$, and the remaining cases are similar. \hfill$\Box$ We will also need estimates for derivatives of the functions $C(x,\lambda )$ and $S(x,\lambda )$. \begin{lemma}\label{Lemma4.7} \it For positive integers $n$, and $0 \le x \le 1$, the partial derivatives of $C(x,\lambda )$ and $S(x,\lambda )$ satisfy the estimates $$|\partial _{\lambda }^n C(x,\lambda )| \le K_n [1+|\sqrt{\lambda }|]^{-n}\exp(|\,{\rm Im}\,(\sqrt{\lambda })|x),$$ $$|\partial _{\lambda }^n S(x,\lambda )| \le K_n [1+|\sqrt{\lambda }|]^{-n-1}\exp(|\,{\rm Im}\,(\sqrt{\lambda })|x).$$ \end{lemma} \paragraph{Proof:} Differentiation of the equation (\ref{1.a}) for $C$ and $S$ leads to $$-(\partial _{\lambda }^nC)'' + (q-\lambda )\partial _{\lambda }^nC= n\partial _{\lambda }^{n-1}C, \quad -(\partial _{\lambda }^nS)'' + (q-\lambda )\partial _{\lambda }^nS= n\partial _{\lambda }^{n-1}S,$$ with the initial conditions $$\partial _{\lambda }^{n}C(0,\lambda ) = 0 , \quad (\partial _{\lambda }^{n}C)'(0,\lambda ) = 0, \quad \partial _{\lambda }^{n}S(0,\lambda ) = 0 , \quad (\partial _{\lambda }^{n}S)'(0,\lambda ) = 0$$ for $n \ge 1$. Thus \begin{eqnarray*}{1 \over n}\partial _{\lambda }^nC(x,\lambda ) &=& \int_0^x [C(x,\lambda ) S(t,\lambda ) - S(x,\lambda )C(t,\lambda ) ] \partial _{\lambda }^{n-1}C(t,\lambda ) \, dt \\ {1 \over n}\partial _{\lambda }^nS(x,\lambda ) &=&\int_0^x [C(x,\lambda ) S(t,\lambda ) - S(x,\lambda )C(t,\lambda ) ] \partial _{\lambda }^{n-1}S(t,\lambda ) \, dt\,.\end{eqnarray*} Application of the estimates (\ref{3.e}) gives $$|C(x,\lambda ) S(t,\lambda ) - S(x,\lambda )C(t,\lambda ) | \le {K \over 1 + |\sqrt{\lambda }|} \exp(|\,{\rm Im}\,(\sqrt{\lambda })|(x-t))\,.$$ An induction argument then gives the result. 
\hfill$\Box$ The semigroup $\exp(-\tau {\cal L})$ may be written as a contour integral involving the resolvent $R(\lambda )$ \cite[pp. 489--493]{Kato}. For $r > 0$ and $0 < \theta < \pi /2$, let $\Gamma (r,\theta ) = \Gamma _1 \cup \Gamma _2 \cup \Gamma _3$ where \begin{eqnarray*} \Gamma _1 &=& s e^{i \theta } , \quad \Gamma _3 = s e^{-i \theta } , \quad r \le s < \infty \,, \\ \Gamma _2 &=& re^{i \phi } , \quad \theta \le \phi \le 2\pi - \theta\,. \end{eqnarray*} Choosing $r$ so large that $\Gamma$ lies in the resolvent set of ${\cal L}$, $$\exp(-\tau {\cal L})f = {1 \over 2\pi i} \int_{\Gamma } e^{-\lambda \tau }R(\lambda )f \ d \lambda , \quad \tau > 0. \label{4.h}$$ This contour is traversed `counterclockwise', starting at $s = \infty$, coming in along $\Gamma _1$, going counterclockwise around $\Gamma _2$, and finally going out along $\Gamma _3$. For $f$ supported in $e$, the resolvent may be represented as an integral operator. Interchanging orders of integration may be justified using Lemma~\ref{Lemma4.6}, so that $$\exp(-\tau {\cal L})f(x) = \int _e \Bigl [ {1 \over 2\pi i} \int_{\Gamma } e^{-\lambda \tau } R(x,t,\lambda ) \ d \lambda \Bigr ] f(t) \, dt , \quad \tau > 0.$$ Thus the semigroup may be represented by integration against a continuous kernel $$H(x,t,\tau ) = {1 \over 2\pi i} \int_{\Gamma } e^{-\lambda \tau } R(x,t,\lambda ) \ d \lambda , \quad \tau > 0, \quad 0 \le t \le 1.$$ \begin{theorem}\label{Thm4.8} Suppose $\delta \ge 2$, $t \in [0,1]$, and $K$ is a positive integer. Then the semigroup kernel $H(x,t,\tau )$ has an asymptotic expansion as $\tau \to \infty$, $$H(x,t,\tau ) = \exp(-\lambda _0 \tau) \sum_{n=0}^{K-1} H_n(x,t)\tau ^{-(n+2)/2} +O(\tau ^{-(K+2)/2}\exp(-\lambda _0 \tau)).$$ The functions $H_n(x,t)$ are uniformly bounded, and $\lim_{|x| \to \infty} |H_n(x,t)| = 0$. The error is uniform for $t\in [0,1]$ and $x(g) \in \real$, $g \in {\cal T}$.
\end{theorem} \paragraph{Proof:} Let $\lambda _0$ be the smallest point in $\sigma ({\cal L})$. By Theorem~\ref{Thm4.3} $\lambda _0 \in \sigma _1$, $\Delta (\lambda _0) = 2/\sqrt{\delta}$, and $\partial _{\lambda } \Delta (\lambda _0) \not= 0$. Since $$\partial _\lambda [\Delta ^2 - 4/\delta ](\lambda _0) = 2\Delta (\lambda _0)\partial _{\lambda } \Delta (\lambda _0) = c_1 \not= 0 ,$$ the transition matrix eigenvalues $\mu ^{\pm}$ are analytic functions of $(\lambda -\lambda _0)^{1/2}$ for $\lambda$ near $\lambda _0$, \begin{eqnarray*} \mu ^{\pm}(\lambda ) &=& \Delta (\lambda )/2 \pm \sqrt{c_1(\lambda -\lambda _0) + c_2(\lambda -\lambda _0)^2 + \dots}\\ &=& \Delta (\lambda )/2 \pm c_1^{1/2}(\lambda -\lambda _0)^{1/2} \sqrt{1 +c_2(\lambda -\lambda _0)/c_1 + \dots}\,. \end{eqnarray*} Since $\mu ^+(\lambda _0) = \mu ^-(\lambda _0) = 1/\sqrt{\delta}$, and $W(\lambda _0) \not= 0$, the resolvent kernel (\ref{4.b}) is an analytic function of $(\lambda -\lambda _0)^{1/2}$ in a neighborhood of $\lambda _0$. Let $\lambda _1 > \lambda _0$ be in this neighborhood, with $|\mu ^-(\lambda )| = 1/\sqrt \delta$ for $\lambda _0 \le \lambda \le \lambda _1$. The semigroup kernel analysis will involve a deformation $\tilde{\Gamma}$ of the contour $\Gamma$ in the complement of the spectrum of ${\cal L}$. Slit the complex plane along the real axis from $\lambda _0$ to $\infty$. Follow the contour $\Gamma$ in from $\infty$ in the upper half plane until ${\rm Re}\,(\lambda ) = {\rm Re}\,(\lambda _1)$. Drop down along this line to the real axis, follow the real axis along the upper half cut to $\lambda _0$, go back to $\lambda _1$ along the lower half cut, and then drop down the line ${\rm Re}\,(\lambda ) = {\rm Re}\,(\lambda _1)$ to $\Gamma$. Finally, follow the contour $\Gamma$ out to $\infty$ in the lower half plane. By Lemma~\ref{Lemma4.6} the kernel $R(x,t,\lambda )$ is uniformly bounded along the contour $\tilde{\Gamma}$.
Thus $$2\pi i H(x,t,\tau ) = - \int_{\lambda _0}^{\lambda _1} e^{-\lambda \tau } R_u(x,t,\lambda ) \ d \lambda + \int_{\lambda _0}^{\lambda _1} e^{-\lambda \tau } R_l(x,t,\lambda ) \ d \lambda + O(e^{-\lambda _1\tau })\,,$$ with $\tau > 0$. Here $R_u$, $R_l$ indicate that the integrands are to be evaluated as limits from the upper and lower half planes respectively. Now make the change of variable $s^2 = \lambda -\lambda _0$ and let $\beta = \sqrt{\lambda _1 - \lambda _0}$ to get \begin{eqnarray*} 2\pi i H(x,t,\tau ) &=& - \int_0^{\beta} e^{-[s^2 + \lambda _0] \tau } {\tilde R}(x,t,s) 2s \, ds\\ &&+ \int_0^{-\beta} e^{-[s^2+\lambda _0] \tau } {\tilde R}(x,t,s) 2s \, ds + O(e^{-\lambda _1\tau }), \quad \tau > 0\,, \end{eqnarray*} or $$H(x,t,\tau ) = {i \over \pi} \int_{-\beta }^{\beta} e^{-[s^2 + \lambda _0] \tau } {\tilde R}(x,t,s) s \, ds + O(e^{-\lambda _1\tau }), \quad \tau > 0. \label{4.i}$$ Here $\tilde R(x,t,s) = R(x,t,\lambda _0 + s^2)$ is just the resolvent kernel from the upper and lower halves of the slit expressed as a function of $s$. We will now use a Taylor expansion for $\tilde R(x,t,s)$ near $s=0$, $$|\tilde R(x,t,s) - \sum_{n=0}^{k-1} \partial _s^n \tilde{R}(x,t,0){s^n \over n!}| \le {|s^k| \over k!}\max_{\xi} | \partial _s^k \tilde{R}(x,t,\xi )| ,$$ the maximum taken over $\xi$ between $0$ and $s$. Since the integral in (\ref{4.i}) extends over the interval $[-\beta ,\beta ]$, it will suffice to have estimates for the derivatives of $\tilde{R}(x,t,s)$ over this interval. First consider the case $x, t \in [0,1]$.
Using (\ref{4.b}), Lemma~\ref{Lemma4.7}, and the fact that $s(\lambda )$, $c(\lambda )$ and $\mu ^+(\lambda )$ are analytic functions of $s$ on the interval $[-\beta ,\beta ]$, it follows that there is a constant $C_k$ such that $$\max_{|\xi| \le \beta } | \partial _s^k \tilde{R}(x,t,\xi )| \le C_k\,.$$ If the first argument is $x+m$ for $m$ a negative integer (the case $m>0$ being similar), the resolvent kernel has the form $$\tilde{R}(x+m,t,s) = [\mu ^-(\lambda )]^{|m|}U(x,\lambda )V(t,\lambda )/W(\lambda ).$$ Since $|\mu ^-| = 1/\sqrt{\delta }$ for $s \in [-\beta ,\beta ]$, and the derivatives satisfy bounds $$|\partial _s^n (\mu ^-)^{|m|}| \le C_n[m^n +1]|\mu ^-|^{|m|},$$ we conclude that each partial derivative of $\tilde{R}(x,t,s)$ is uniformly bounded and $\partial _s^n\tilde{R}(x,t,s) \to 0$ as $|x| \to \infty$ for $t\in [0,1]$, $x \in \real$, and $s \in [-\beta ,\beta ]$. Thus $$\int_{-\beta }^{\beta} e^{-[s^2 + \lambda _0] \tau } | {\tilde R}(x,t,s) - \sum_{n=0}^{k-1} \partial _s^n \tilde{R}(x,t,0){s^n \over n!}| s \, ds \le c_1e^{-\lambda _0\tau} \int_0^{\infty } e^{-s^2 \tau } s^{k+1} \, ds\,.$$ We have the elementary calculations $$\int_0^{\infty }(-s^2)^n e^{-s^2 \tau } \, ds = \partial _{\tau}^n \int_0^{\infty } e^{-s^2 \tau } \, ds = \partial _{\tau}^n {\sqrt{\pi} \over 2} \tau ^{-1/2}$$ and $$\int_0^{\infty }s (-s^2)^n e^{-s^2 \tau } \, ds = \partial _{\tau}^n \int_0^{\infty }s e^{-s^2 \tau } \, ds = \partial _{\tau}^n {1 \over 2\tau }\,,$$ which give the desired error bounds.
Finally, the Taylor series for the resolvent gives \begin{eqnarray*} \lefteqn{ \int_{-\beta }^{\beta} e^{-[s^2 + \lambda _0] \tau } s \sum_{n=0}^{k-1} \partial _s^n \tilde{R}(x,t,0){s^n \over n!} \, ds } && \\ &=& e^{-\lambda _0\tau }\sum_{n=0}^{k-1} {1 \over n!} \partial _s^n \tilde{R}(x,t,0) \int_{-\beta }^{\beta} e^{-s^2 \tau } s^{n+1} \, ds \\ &=& e^{-\lambda _0\tau } \sum_{n=0}^{k-1} {1 \over n!} \partial _s^n \tilde{R}(x,t,0) \int_{-\infty }^{\infty } e^{-s^2 \tau } s^{n+1} \, ds + O(\exp(-\lambda _0\tau ) \exp (-\beta ^2 \tau /2))\,. \end{eqnarray*} Our earlier observations about the boundedness and decay of $\partial _s^n\tilde{R}(x,t,s)$ give the corresponding conclusions about $H_n(x,t)$. \hfill$\Box$ In case $q=0$ and $\gamma = 0$ the computations are simplified considerably. At $\lambda _0$ we find $$\cos(\sqrt{\lambda _0}) = 2\sqrt{\delta }/[\delta +1] , \quad \sin(\sqrt{\lambda _0}) = [\delta - 1]/[\delta +1].$$ Some algebraic simplifications lead to \begin{eqnarray*} \lefteqn{ -(\delta +1)\sqrt{\lambda _0} R(x,t,\lambda _0)} && \\ &=& \left\{ \begin{array}{ll} \bigl( \cos(\sqrt{\lambda _0}x) - \sqrt{\delta}\sin(\sqrt{\lambda _0}x) \bigr )\times & \\ \bigl ( \cos(\sqrt{\lambda _0}[1-t]) + \sqrt{\delta} \sin(\sqrt{\lambda _0}[1-t]) \bigr ) & \mbox{ if $0 \le x \le t \le 1$},\\ \bigl ( \cos(\sqrt{\lambda _0}t) - \sqrt{\delta}\sin(\sqrt{\lambda _0}t) \bigr )\times & \\ \bigl ( \cos(\sqrt{\lambda _0}[1-x]) + \sqrt{\delta} \sin(\sqrt{\lambda _0}[1-x]) \bigr ) & \mbox{ if $0 \le t \le x \le 1$}. \end{array} \right. \end{eqnarray*} Evaluation of the resolvent at values of $x$ outside of $[0,1]$ may be made using the fact that $U$ and $V$ of (\ref{4.b}) are eigenfunctions for $x \to x-1$, respectively $x \to x+1$, with multiplier $\mu ^-$.
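These closed forms are easy to confirm numerically. The sketch below is illustrative only (the variable names are ours): with $\Delta (\lambda ) = \frac{\delta +1}{\delta }\cos \sqrt{\lambda }$ in the free case, $\sqrt{\lambda _0}$ is recovered from $\Delta (\lambda _0) = 2/\sqrt{\delta }$, and the values $\cos \sqrt{\lambda _0} = 2\sqrt{\delta }/(\delta +1)$ and $\sin \sqrt{\lambda _0} = (\delta -1)/(\delta +1)$ are checked for several degrees.

```python
import math

# lambda_0 solves Delta(lambda_0) = 2/sqrt(delta), where in the free case
# Delta(lambda) = (delta + 1)/delta * cos(sqrt(lambda)).
for delta in (2, 3, 4):
    cos0 = 2 * math.sqrt(delta) / (delta + 1)
    root = math.acos(cos0)          # sqrt(lambda_0)
    lam0 = root ** 2
    # Pythagoras: sin^2 = 1 - 4*delta/(delta+1)^2 = (delta-1)^2/(delta+1)^2.
    assert abs(math.sin(root) - (delta - 1) / (delta + 1)) < 1e-12
    # lambda_0 lies below the first point of sigma_2 at pi^2.
    assert 0 < lam0 < math.pi ** 2
```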
\section{Covering spaces} \setcounter{equation}{0} In this section the previous analysis of the resolvent on the homogeneous tree is extended to the case of a regular graph ${\cal G}$ whose vertices all have the same degree $\delta + 1$, and whose edges have length $1$. Each such graph has a universal covering space $({\cal T},p)$, where, as before, ${\cal T}$ is the homogeneous tree of degree $\delta +1$. We refer to \cite[p. 145]{Massey} for a development of covering spaces and their application to graphs. As in the case of the tree, a common set of vertex conditions (\ref{3.a}) is selected for the vertices. By Theorem~\ref{Thm2.2} the self adjoint operator $L_{\cal G} = -D^2 + q$ may be defined by means of these vertex conditions. Denote the resolvents on the graph and tree by $R_{\cal G}(\lambda )$ and $R_{\cal T}(\lambda )$ respectively. Suppose that $\xi _0$ is a point in the interior of the edge $e_0 \in {\cal G}$, and that $\tilde{\xi}_0 \in p^{-1}(\xi _0)$. Let $\tilde e_0$ be the edge of ${\cal T}$ containing $\tilde{\xi}_0$. Then given any function $f \in L^2(e_0)$, there is a corresponding function $\tilde f \in L^2(\tilde e_0)$ such that $$\tilde f(\tilde{\xi} ) = \left\{ \begin{array}{ll} f(p(\tilde{\xi} )), & \tilde{\xi} \in \tilde e_0\,, \\ 0, & \tilde{\xi} \notin \tilde e_0. \end{array}\right.$$ \begin{theorem}\label{Thm5.1} Suppose that $f \in L^2({\cal G})$ is supported on an edge $e_0$. There is a positive $C(q,\gamma )$ such that if $|\,{\rm Im}\,(\sqrt{\lambda })| > C(q,\gamma )$ then for $\xi \in {\cal G}$ $$[R_{\cal G}(\lambda )f](\xi ) = \sum_{\tilde{\xi} \in p^{-1}(\xi )} [R_{\cal T}(\lambda )\tilde f](\tilde \xi ).$$ The sum and its first two derivatives converge uniformly for $\xi \in {\cal G}$. \end{theorem} \paragraph{Proof} The proof has two parts: a formal verification and a proof that the sum converges.
Consider the two sums $$H(\xi ,\lambda ) = \sum_{\tilde{\xi} \in p^{-1}(\xi )} [R_{\cal T}(\lambda )\tilde f](\tilde \xi ), \quad h(\xi ,\lambda ) = \sum_{\tilde{\xi} \notin \tilde e_0} [R_{\cal T}(\lambda )\tilde f](\tilde \xi ).$$ As for the formal part, note that $$(-D^2 + q - \lambda ) H(\xi ,\lambda ) = \left\{ \begin{array}{ll} f(\xi ) , & \xi \in e_0, \\ 0, &\xi \notin e_0. \end{array}\right.$$ Moreover, since the vertex conditions are linear and are satisfied in the tree, they are still satisfied by the sum. The remainder of the proof consists of verifying the convergence of the sums in question for suitable $\lambda$. To check the convergence of $H(\xi ,\lambda )$ it suffices to check $h(\xi ,\lambda )$ and to show that the decaying solutions $U$ and $V$ may be summed over arbitrary subsets of edges satisfying $x < 0$, respectively $x > 0$, implying in particular convergence of the sums over the subsets $p^{-1}(e)$. We first consider uniform convergence. The series for the $k$-th derivative, $k = 0,1,2$, of $h(\xi ,\lambda )$ will converge absolutely if $$\sum_{n=0}^\infty \delta ^n |\mu ^-|^n < \infty ,$$ that is, if $|\delta \mu ^-(\lambda )| < 1$. Since $\mu ^-\mu ^+ = 1/\delta$, this is equivalent to $|\mu ^+(\lambda )| > 1$. The asymptotics (\ref{3.e}) show that there is some value $C(q,\gamma )$ such that $|\mu ^+(\lambda )| > 1$ if $|\,{\rm Im}\,(\sqrt \lambda )| > C(q,\gamma )$. If ${\cal G}$ is not a finite graph we must still check that $h$ is square integrable. It is enough to consider the summands of $h$ with $x(\tilde \xi ) > 0$, which contribute $$\sum_{e \in {\cal G}} \int_0^1 |V(x,\lambda )|^2 | \sum_{e_m \in p^{-1}(e)} (\mu ^-)^{k(m)} |^2.$$ Here $k(m)$ is given by the signed distance from $\tilde e_0$, $x(e_m) = [k(m), k(m) + 1]$.
This sum converges together with $$\sum_{e}|S_e|^2, \quad S_e = \sum_{e_m \in p^{-1}(e)} (\mu ^-)^{k(m)}.$$ If the largest magnitude of a term in $S_e$ is $|\mu ^-|^n$, then $$|S_e| \le \sum_{j=n}^{\infty} \delta ^j|\mu ^-|^j \le {\delta ^n|\mu ^-|^n \over 1 - \delta |\mu ^-|}.$$ On the other hand each edge $e_m$ in the tree appears once in the sum $\sum |S_e|^2$, so there are at most $\delta ^n$ sums $S_e$ with a term whose magnitude is as large as $|\mu ^-|^n$. This count gives the bound $$\sum_{e}|S_e|^2 \le \sum_n \delta ^n\Bigl ( {\delta ^n|\mu ^-|^n \over 1 - \delta |\mu ^-|} \Bigr )^2 \le {1 \over [1 - \delta |\mu ^-|]^2} \sum_n \delta ^{3n}|\mu ^-|^{2n}.$$ Thus $h$ is square integrable if $\delta ^{3}|\mu ^-|^{2} < 1$. After modifying $C(q,\gamma )$ to ensure $\delta ^{3}|\mu ^-|^{2} < 1$, this shows that the sums for $H(\xi ,\lambda )$ and its first two derivatives converge uniformly, that the resulting function satisfies the vertex conditions, is square integrable, and satisfies the differential equation $$(-D^2 + q -\lambda )H(\xi ,\lambda ) = f.$$ Thus $H(\xi ,\lambda )$ is in the domain of ${\cal L}^*$, and so is in the domain of ${\cal L}$, which is self adjoint. \hfill$\Box$ Theorem~\ref{Thm5.1} has two related aspects which invite more consideration. On one hand, since ${\cal L}_{\cal G}$ is self adjoint, the resolvent $R_{\cal G}(\lambda )$ has an analytic extension beyond the set of $\lambda$ for which convergence was established. On the other hand, the sums of powers of $\mu ^-$ appearing in the computation of $R_{\cal G}(\lambda )$ have geometric meaning. \begin{theorem}\label{Thm5.2} For $|\,{\rm Im}\,(\sqrt \lambda )| > C(q,\gamma )$ the diagonal of the resolvent may be written as $$R_{\cal G}(t,t,\lambda ) = b_e(\mu ^-){U(t,\lambda )V(t,\lambda ) \over W(\lambda )}, \quad t \in e . \label{5.a}$$ The function $b_e(z) = 1 + \sum_{l > 0} \eta _l z^l$ is analytic in a neighborhood of $z=0$.
The coefficients $\eta _l$ count homotopy classes of loops with basepoint in $e$ represented by a loop of least length $l$. If ${\cal G}$ is a finite regular graph with $N_e$ edges, then $R_{\cal G}(\lambda )$ is trace class for $\lambda \in \rho$, and $${\rm tr}R_{\cal G}(\lambda ) = {b(\mu ^-) \over W(\lambda )} \int_0^1 U(t,\lambda )V(t,\lambda ) \,dt\, . \label{5.b}$$ The function $b(z) = \sum_e b_e(z) = N_e + \sum_{l > 0} N_lz^l$, where $N_l$ counts homotopy classes of loops represented by a loop of least length $l$, with one of the $N_e$ basepoints at the midpoint of an edge. \end{theorem} \paragraph{Proof:} The proof will actually provide additional information, relating the construction of the resolvent kernel to the numbers of homotopy classes of certain types of paths in ${\cal G}$. Suppose first that $\xi \in e_0$, where $f$ is supported. Let us split the resolvent sum into three parts, $$[R_{\cal G}(\lambda )f](\xi ) = [R_{\cal T}^0(\lambda )\tilde f](\xi ) + [R_{\cal T}^+(\lambda )\tilde f](\xi ) + [R_{\cal T}^-(\lambda )\tilde f](\xi ), \label{5.c}$$ where the $0,+,-$ terms are the resolvent sums of Theorem~\ref{Thm5.1} coming respectively from $\tilde \xi \in \tilde e_0$, $x(\tilde \xi ) > 1$ and $x(\tilde \xi ) < 0$. The three terms are given by integration against kernels, where $R_{\cal T}^0(x,t,\lambda )$ is given by (\ref{4.b}) for $0 \le x,t \le 1$, while $$R^+(x,t,\lambda ) = {U(t,\lambda )V(x,\lambda ) \over W(\lambda )} \sum_{x(e_m) > 1} [\mu ^-(\lambda )]^{k(m)}, \quad e_m \in p^{-1}(e), \quad 0 \le x,t \le 1, \label{5.d}$$ and $$R^-(x,t,\lambda ) = {V(t,\lambda )U(x,\lambda ) \over W(\lambda )} \sum_{x(e_m) < 0} [\mu ^-(\lambda )]^{k(m)}, \quad e_m \in p^{-1}(e), \quad 0 \le x,t \le 1. \label{5.e}$$ In case $\xi \notin e_0$ the term $R_{\cal T}^0$ will be missing, but otherwise the representation of the resolvent will have the same form.
Introduce the functions $$b^+_e(z) = \sum_{x(e_m) > 1} z^{k(m)} = \sum_{l > 0} \eta ^+_l z^l, \quad b^-_e(z) = \sum_{x(e_m) < 0} z^{k(m)} = \sum_{l > 0} \eta ^-_l z^l.$$ The coefficients $\eta ^+_l$, $\eta ^-_l$, count homotopy classes of paths which, to pick one description, start at the midpoint of $e_0$, end at the midpoint of $e$, and whose lift from the midpoint of $\tilde e_0$ to the midpoint of $\tilde e$ is homotopic to a minimal length path of length $l$ with $x(\tilde e)$ respectively greater than $1$ or less than $0$. Setting $x=t$ and including the three terms in (\ref{5.c}) gives the description of the diagonal of the resolvent in the statement of the theorem. When the graph ${\cal G}$ is finite, the operator ${\cal L}_{\cal G}$ has discrete spectrum with trace class resolvent, and the trace may be computed by integration over the diagonal. Perhaps the easiest way to establish these claims is to begin with a different set of vertex conditions, such as the Dirichlet conditions $f(0) = 0 = f(1)$. The corresponding operator ${\cal L}_D$ is now decoupled into $N_e$ copies of the Dirichlet operator on $[0,1]$, for which the result is known. Since the domains of ${\cal L}_{\cal G}$ and ${\cal L}_D$ differ only by finitely many boundary conditions, their eigenvalue distribution functions have a bounded difference \cite{Car79}. This shows that $R_{\cal G}(\lambda )$ is trace class. For a fixed $\lambda$ in the intersection of the two resolvent sets we have $R_{\cal G}(\lambda ) = R_D(\lambda ) + F$, where $F$ has finite rank. Based on this observation one may establish that the trace is given by integration over the diagonal of the resolvent kernel. \hfill$\Box$ The formulas (\ref{5.a},\ref{5.b}) and (\ref{5.d},\ref{5.e}) represent interesting relationships between generating functions for path counts and the spectral theory of differential operators. 
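In small examples the path counts entering such generating functions can be produced by direct enumeration. If one takes minimal length homotopy representatives to be the reduced (non-backtracking) loops, closely related counts are easy to generate by brute force. The sketch below rests on illustrative assumptions (the graph $K_4$, which is $3$-regular with $\delta = 2$; vertex basepoints rather than the edge midpoint convention of Theorem~\ref{Thm5.2}; and the identification of minimal representatives with non-backtracking walks) and is not part of the development above.

```python
import itertools

# Directed edges of K4 (3-regular): all ordered pairs of distinct vertices.
V = range(4)
E = [(u, w) for u in V for w in V if u != w]

def count_reduced_closed_walks(v, length):
    """Non-backtracking closed walks of the given length starting and ending at v."""
    count = 0
    for walk in itertools.product(E, repeat=length):
        if walk[0][0] != v or walk[-1][1] != v:
            continue
        ok = all(walk[i][1] == walk[i + 1][0] and   # consecutive edges chain
                 walk[i + 1][1] != walk[i][0]       # no immediate backtrack
                 for i in range(length - 1))
        if ok:
            count += 1
    return count

# Shortest reduced loops in K4 are its triangles: 3 triangles through each
# vertex, traversed in 2 orientations each.
assert count_reduced_closed_walks(0, 3) == 6
# A closed walk of length 2 must backtrack along an edge, so the count is 0.
assert count_reduced_closed_walks(0, 2) == 0
```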
Since the resolvents extend analytically to the complement of the spectrum, the function $b(z)$ and its relatives will have analytic continuations determined by the spectrum of $\cal L$ on ${\cal G}$ and the singularities of $U,V,W$. The relationship between spectral theory and the analytic continuation of such generating functions was previously explored in \cite{Brooks2}, where the spectral theory for the discrete Laplacian on finite regular graphs was considered. Motivated in part by this discrete analog, we next consider calculation of the function $b(z)$. For the next result our graph need not have vertices of a fixed degree. Instead we require a finite graph, with edges of length $1$, and self adjoint vertex conditions. In addition to allowing vertex conditions of the form (\ref{3.a}) with $\gamma =0$, Dirichlet conditions $f(v) = 0$ or Neumann conditions $f'(v) = 0$ are allowed. In the last two cases these conditions are to hold at all edges incident to $v$. The vertex conditions need not be the same at each vertex. \begin{theorem}\label{Thm5.3} Suppose ${\cal G}$, not necessarily regular, has $N_e$ edges, all of length $1$. Assume self adjoint vertex conditions for the operator $-D^2$, the vertex conditions at a vertex $v$ having one of the following forms: (\ref{3.a}) with $\gamma =0$, or $f(v) = 0$, or $f'(v) = 0$. Then there is a polynomial equation $$\det \tilde C(\zeta ) = 0$$ of degree at most $2N_e$, whose nonzero roots $\zeta _k$ satisfy $|\zeta _k| = 1$, and the nonzero eigenvalues of $-D^2$ on ${\cal G}$ are $$\{ [{\rm arg}\,(\zeta _k ) + 2\pi m]^2 \} .$$ The eigenspaces corresponding to two nonzero eigenvalues $$[{\rm arg}\,(\zeta _k ) + 2\pi m_j]^2 , \quad j=1,2$$ have the same dimension. \end{theorem} \paragraph{Proof:} On the $n$-th interval any eigenfunction must satisfy the equation $$-y_n'' = \lambda y_n, \label{5.f}$$ and $2N$ linearly independent boundary conditions, where $N = N_e$.
Letting $j = 0,1$, and $x_k = 0,1$, these each have one of the forms $$y_n^{(j)}(x_k) = 0, \quad y_m(x_k) - y_n(x_l) = 0, \quad \sum_{n} (-1)^{x_j}y_n'(x_j) = 0. \label{5.g}$$ Each boundary condition may be written in the general form $$\sum b_{mn}^1y_n(0) + \sum b_{mn}^2y_n'(0) + \sum b_{mn}^3y_n(1) + \sum b_{mn}^4y_n'(1) = 0\,,$$ where $m = 1,\dots ,2N$ and $n = 1,\dots ,N$. If $B_l = (b_{mn}^l)$ we get a $2N \times 4N$ boundary matrix $B = (B_1, B_2, B_3, B_4)$, whose entries are $0,\pm 1$. Letting $E_n$ denote the $n$-th standard basis vector for $\complex ^N$, a basis for the solutions of (\ref{5.f}) can be written $$Y(x,\lambda ) = (e^{i\sqrt{\lambda } x} E_1,\dots ,e^{i\sqrt{\lambda } x}E_N,e^{-i\sqrt{\lambda } x}E_1,\dots ,e^{-i\sqrt{\lambda } x}E_N), \quad \lambda \not= 0.$$ Define the $4N \times 2N$ matrix $$\hat Y(\lambda ) = \pmatrix{Y(0,\lambda ) \cr Y'(0,\lambda ) \cr Y(1,\lambda ) \cr Y'(1,\lambda )}.$$ With this formulation, $\lambda \not= 0$ is an eigenvalue if and only if $$\det[C(\lambda )] = 0, \quad C(\lambda ) = B\hat Y(\lambda )\,,$$ which is the condition that some linear combination of the columns of $\hat Y(\lambda )$ satisfies all the boundary conditions. The entries of the matrix $C(\lambda )$ may be $0$, $\pm 1$, $\pm i\sqrt{\lambda }$, $\pm i\sqrt{\lambda } e^{\pm i \sqrt{\lambda } }$, $\pm i\sqrt{\lambda } e^{\mp i \sqrt{\lambda } }$. Since the conditions (\ref{5.g}) involve either the evaluation of a function or the first derivative, but not both, the nonzero entries in a row will either have no factors $i\sqrt{\lambda }$ or a common factor $i\sqrt{\lambda }$. Also, the entries in each column may have one of the exponentials $e^{\pm i \sqrt{\lambda } }$, but not both. Removing factors $i\sqrt{\lambda }$ from the rows, and $e^{-i\sqrt{\lambda } }$ from the columns will not change the nonzero roots of the determinant.
Substituting $\zeta = e^{i \sqrt{\lambda } }$, the resulting matrix $\tilde C(\zeta )$ will have entries, up to sign, of $0, 1, \zeta$. Since $-D^2$ with the prescribed self adjoint vertex conditions is nonnegative, there are only nonnegative eigenvalues, and any nonzero roots of $\det \tilde C(\zeta ) = 0$ must have modulus $1$. Suppose we have two nonzero eigenvalues of the form $[\sqrt{\lambda } _k + 2\pi m_j]^2$, $j=1,2$, for a fixed $\sqrt{\lambda } _k$ and integers $m_j$. Consider the linear isomorphism $T$, taking solutions of $-y'' = [\sqrt{\lambda } _k +2\pi m_1]^2y$ to solutions of $-y'' = [\sqrt{\lambda } _k +2\pi m_2]^2y$, defined by $$\exp(\pm i [\sqrt{\lambda } _k + 2\pi m_1]x)E_n \to \exp(\pm i [\sqrt{\lambda } _k + 2\pi m_2]x)E_n .$$ This map leaves the function values at $x=0$ and $x=1$ unchanged. Suppose $f$ is a linear combination of the functions $$B_n^{\pm}(x) = \exp(\pm i [\sqrt{\lambda } _k + 2\pi m_1]x)E_n.$$ Evaluating the derivatives at $x=0$ and $x=1$ gives $$(B_n^{\pm})'(0) = \pm i [\sqrt{\lambda } _k + 2\pi m_1] E_n ,\quad (B_n^{\pm})'(1) = \pm i [\sqrt{\lambda } _k + 2\pi m_1] \exp(\pm i \sqrt{\lambda } _k )E_n.$$ Suppose the vertex $v$ has a condition $\sum f_e'(v) = 0$. In this sum, all of the terms have a common nonzero factor $[\sqrt{\lambda } _k + 2\pi m_1]$, the remaining factors being independent of $m_1$. Under the linear transformation $T$ the corresponding sum is the same except this common nonzero factor has been replaced by $[\sqrt{\lambda } _k + 2\pi m_2]$. Thus $f$ satisfies the vertex conditions at $v$ if and only if $Tf$ does. \hfill$\Box$ It is natural to wonder if the eigenvalue $0$ really has a distinguished role. To see that this may be the case, consider the system with edges $[0,1]$ and $[1,2]$, with the endpoints $1$, respectively $0,2$, identified as the two vertices. The vertex conditions are (\ref{3.a}) with $\gamma = 0$.
This is simply the case of $-D^2$ on $\real \bmod 2Z$, where the eigenvalue $0$ has an eigenspace of dimension one while all other eigenvalues have two-dimensional eigenspaces. To continue with the calculation of $b(z)$, we will apply Theorem~\ref{Thm5.3} to the case of ${\cal L} = -D^2$ on a finite regular graph ${\cal G}$ with vertex conditions of the form (\ref{3.a}) with $\gamma = 0$. Theorem~\ref{Thm5.3} guarantees that there are at most $2N_e$ distinct numbers $\zeta _k$ such that the spectrum of $-D^2$ consists of $0$ and the sequences $[{\rm arg}\,(\zeta _k) + 2\pi m]^2$. \begin{theorem}\label{Thm5.4} If ${\cal G}$ is regular and finite, then there are integers $C$ and $M_k$ such that for $$\cos(\sqrt{\lambda }) = {\delta \over \delta +1}[z + {1 \over \delta z}]$$ the function $b(z)$ satisfies \begin{eqnarray} \lefteqn{{C \over \lambda } + {\sin(\sqrt{\lambda }) \over 2 \sqrt{\lambda }} \sum_{k=1}^{2N} { M_k \over \cos(\sqrt{\lambda }) - \cos({\rm arg}\,(\zeta _k )) } }&&\label{5.h} \\ &=& {\sqrt{\lambda } b(z) \over 2\lambda \sin(\sqrt{\lambda }) [1 - (1/z)^2]} \Bigl ( \Bigl [ \cos(\sqrt{\lambda }) - {\sin(\sqrt{\lambda }) \over \sqrt{\lambda }} \Bigr ] \Bigl [ 1 + (1/z)^2 \Bigr ] \nonumber \\ &&+ {2 \over z} \Bigl [ {\sin(\sqrt{\lambda })\cos (\sqrt{\lambda }) \over \sqrt{\lambda }} - 1 \Bigr ] \Bigr )\,. \nonumber \end{eqnarray} \end{theorem} \paragraph{Proof:} There are two elementary but tedious calculations which are omitted. The first is the evaluation of $${1 \over W(\lambda )} \int_0^1 U(t,\lambda )V(t,\lambda ) \, dt\,.$$ In the case ${\cal L} = -D^2$ and the vertex conditions have the form (\ref{3.a}) with $\gamma = 0$, the transition matrix trace and determinant are $$\mu ^+(\lambda ) + \mu ^-(\lambda ) = {\delta + 1 \over \delta } \cos(\sqrt{\lambda }), \quad \mu^+ \mu^- = 1 /\delta \,.$$ Using these identities, the integration produces the right hand side of (\ref{5.h}), except for the factor $b(z)$.
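The substitution in Theorem~\ref{Thm5.4} can be sanity checked numerically: the multipliers are the roots of $\mu ^2 - (\mu ^+ + \mu ^-)\mu + \mu ^+\mu ^- = 0$, and with the trace and determinant above, setting $\cos (\sqrt{\lambda }) = {\delta \over \delta +1}[z + {1 \over \delta z}]$ makes these roots exactly $z$ and $1/(\delta z)$. A short check (the particular values of $\delta$ and $z$ are illustrative, not from the paper):

```python
import math

delta = 2        # branching number; the graph is (delta+1)-regular
z = 0.8          # illustrative value of the generating-function variable

# The substitution of Theorem 5.4 determines cos(sqrt(lambda)) from z.
cos_s = (delta / (delta + 1.0)) * (z + 1.0 / (delta * z))

# Multipliers are roots of mu^2 - t mu + d = 0, with
#   t = mu^+ + mu^- = ((delta+1)/delta) cos(sqrt(lambda)),  d = mu^+ mu^- = 1/delta.
t = ((delta + 1.0) / delta) * cos_s
d = 1.0 / delta
disc = math.sqrt(t * t - 4.0 * d)
mu_plus, mu_minus = (t + disc) / 2.0, (t - disc) / 2.0

print(mu_plus, mu_minus)   # approximately z = 0.8 and 1/(delta z) = 0.625
```

This is just Vieta's formulas in disguise, but it confirms that the change of variables from $\lambda$ to $z$ diagonalizes the multiplier data entering (\ref{5.h}).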
The second calculation is the sum $$\sum_{n=-\infty}^{\infty} {1 \over [2\pi n + \alpha ]^2 -\lambda } = {\sin(\sqrt{\lambda }) \over 2 \sqrt{\lambda }} { 1 \over \cos(\sqrt{\lambda }) - \cos(\alpha ) } \label{5.i}$$ which arises in the calculation of the resolvent trace ${\rm tr}\,[-D^2 - \lambda ]^{-1}$. This sum may be viewed as the trace of the resolvent of an operator whose eigenvalues are $[\alpha + 2\pi n ]^2$, each $n$ contributing an eigenspace of dimension $1$. Such an operator is $[iD]^2 = -D^2$ on $[0,1]$ with the boundary condition $f(0) = e^{-i\alpha }f(1)$ for $iD$. The kernel for this auxiliary resolvent can be explicitly computed and the diagonal integrated, yielding (\ref{5.i}). Now starting with the trace formula of Theorem~\ref{Thm5.2}, and using the form of the eigenvalues for $-D^2$ given by Theorem~\ref{Thm5.3}, the result is obtained. \hfill$\Box$ We will conclude by considering the relationship between Theorem~\ref{Thm5.2} and the spectral theory of the combinatorial Laplacian as developed in \cite{Brooks2}. The generating function considered by Brooks is $$f_{\cal G}(z) = \sum_{l} lN_lz^l,$$ so that as long as ${\cal G}$ has its edges defined by pairs of vertices (which is not necessary for this work on topological graphs) $$f_{\cal G}(z) = zb'(z).$$ Theorem~\ref{Thm3.3} of \cite{Brooks2} shows that $f_{\cal G}$ is determined by the eigenvalues of the combinatorial Laplacian, together with their multiplicities. Since $b(0) = N_e$, the number of edges of ${\cal G}$, the function $b(z)$ is determined by $f_{\cal G}(z)$ and the number of edges. If ${\cal G}$ is regular, then $N_e$ is $[\delta + 1]/2$ times the number of vertices, which is the same as the number of eigenvalues, counted with multiplicity. The preprint \cite{Brooks3} shows that there are numerous regular combinatorial graphs with the same spectrum, hence the same functions $f_{\cal G}(z)$ and $b(z)$. 
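As an aside, the summation formula (\ref{5.i}) used in the proof of Theorem~\ref{Thm5.4} admits a direct numerical check. The following sketch (not part of the paper; the values of $\alpha$ and $\lambda$ are illustrative) compares a symmetric partial sum with the closed form:

```python
import math

alpha, lam = 1.0, 0.3    # illustrative values; lam is not an eigenvalue
s = math.sqrt(lam)

# Left side of (5.i): symmetric partial sum over n.
N = 100000
lhs = sum(1.0 / ((2.0 * math.pi * n + alpha) ** 2 - lam)
          for n in range(-N, N + 1))

# Right side of (5.i): the closed form from the auxiliary resolvent.
rhs = (math.sin(s) / (2.0 * s)) / (math.cos(s) - math.cos(alpha))

print(lhs, rhs)   # agree up to the O(1/N) truncation error of the partial sum
```

The identity itself follows from the partial fraction expansion of the cotangent, which is consistent with its interpretation here as the trace of the resolvent of $-D^2$ with the twisted boundary condition $f(0) = e^{-i\alpha }f(1)$.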
The formula (\ref{5.b}) shows that the resolvent trace of such isospectral graphs, and hence the spectrum, must agree for every operator ${\cal L}_{\cal G}$, since the only other data on the right hand side comes from ${\cal L}_{\cal T}$. \begin{thebibliography}{10} \bibitem{Ahl} L.~Ahlfors. \newblock {\em Complex Analysis}. \newblock McGraw-Hill, New York, 1966. \bibitem{Akhiezer} N.~Akhiezer and I.~Glazman. \newblock {\em Theory of Linear Operators in Hilbert Space}. \newblock Dover, New York, 1993. \bibitem{Albeverio} S.~Albeverio, F.~Gesztesy, R.~H{\o}egh-Krohn, and H.~Holden. \newblock {\em Solvable Models in Quantum Mechanics}. \newblock Springer-Verlag, New York, 1988. \bibitem{Avron2} J.~Avron, A.~Raveh, and B.~Zur. \newblock Adiabatic quantum transport in multiply connected systems. \newblock {\em Reviews of Modern Physics}, 60(4):873--915, 1988. \bibitem{BR} G.~Birkhoff and G.~Rota. \newblock {\em Ordinary Differential Equations}. \newblock Blaisdell, Waltham, 1969. \bibitem{Brooks2} R.~Brooks. \newblock The spectral geometry of $k$-regular graphs. \newblock {\em Journal d'Analyse Math\'ematique}, 57:120--151, 1991. \bibitem{Brooks3} R.~Brooks, R.~Gornet, and W.~Gustafson. \newblock Mutually isospectral {R}iemann surfaces. \newblock {\em preprint}, 1997. \bibitem{Bulla} W.~Bulla and T.~Trenkler. \newblock The free {D}irac operator on compact and noncompact graphs. \newblock {\em J. Math. Phys.}, 31(5):1157--1163, 1990. \bibitem{Car96a} R.~Carlson. \newblock Inverse eigenvalue problems on directed graphs. \newblock {\em Transactions of the American Mathematical Society (to appear)}. \bibitem{Car79} R.~Carlson. \newblock Expansions associated with non-self-adjoint boundary-value problems. \newblock {\em Proceedings of the American Mathematical Society}, 73(2):173--179, 1979. \bibitem{Car97a} R.~Carlson. \newblock Adjoint and self adjoint differential operators on graphs. \newblock {\em preprint}, 1997. \bibitem{Chung} F.~Chung. \newblock {\em Spectral Graph Theory}.
\newblock American Mathematical Society, Providence, 1997. \bibitem{Cod2} E.~Coddington. \newblock Generalized resolutions of the identity for symmetric ordinary differential operators. \newblock {\em Annals of Mathematics}, 68(2):378--392, 1958. \bibitem{Cod4} E.~Coddington and A.~Dijksma. \newblock Self-adjoint subspaces and eigenfunction expansions for ordinary differential subspaces. \newblock {\em Journal of Differential Equations}, 20:473--526, 1976. \bibitem{Cvet} D.~Cvetkovic, M.~Doob, and H.~Sachs. \newblock {\em Spectra of Graphs}. \newblock Academic Press, New York, 1979. \bibitem{Dunford} N.~Dunford and J.~Schwartz. \newblock {\em Linear Operators, Part II}. \newblock Interscience, New York, 1964. \bibitem{Exner1} P.~Exner and P.~Seba. \newblock Electrons in semiconductor microstructures. \newblock In P.~Exner and P.~Seba, editors, {\em Schr{\"o}dinger operators, standard and non-standard}, pages 79--100, Dubna, USSR, 1988. \bibitem{Exner3} P.~Exner and P.~Seba. \newblock Schroedinger operators on unusual manifolds. \newblock In S.~Albeverio, J.~Fenstad, H.~Holden, and T.~Lindstrom, editors, {\em Ideas and methods in quantum and statistical physics}, pages 227--253, Oslo 1988, 1992. \bibitem{Ger2} N.~Gerasimenko. \newblock Inverse scattering problem on a noncompact graph. \newblock {\em Theoretical and Mathematical Physics}, 75(2):460--470, 1988. \bibitem{Ger1} N.~Gerasimenko and B.~Pavlov. \newblock Scattering problems on noncompact graphs. \newblock {\em Theoretical and Mathematical Physics}, 74(3):230--240, 1988. \bibitem{Kato} T.~Kato. \newblock {\em Perturbation Theory for Linear Operators}. \newblock Springer, New York, 1995. \bibitem{Lumer1} G.~Lumer. \newblock Connecting of local operators and evolution equations on networks. \newblock In {\em Potential Theory Copenhagen 1979}, volume 787 of {\em Lecture notes in mathematics}, pages 219--234. Springer-Verlag, 1980. \bibitem{Magnus} W.~Magnus and S.~Winkler. \newblock {\em Hill's Equation}. 
\newblock Dover Publications, New York, 1979. \bibitem{Massey} W.~Massey. \newblock {\em Algebraic Topology: An Introduction}. \newblock Harcourt, Brace and World, New York, 1967. \bibitem{Montroll} E.~Montroll. \newblock Quantum theory on a network. \newblock {\em Journal of Mathematical Physics}, 11(2):635--648, 1970. \bibitem{Nicaise} S.~Nicaise. \newblock Some results on spectral theory over networks applied to nerve impulse transmission. \newblock In {\em Polynomes orthogonaux et applications}, volume 1171 of {\em Lecture notes in mathematics}, pages 532--541. Springer-Verlag, 1985. \bibitem{Pauling} L.~Pauling. \newblock The diamagnetic anisotropy of aromatic molecules. \newblock {\em Journal of Chemical Physics}, 4:673--677, 1936. \bibitem{Pos} J.~P{\"o}schel and E.~Trubowitz. \newblock {\em Inverse Spectral Theory}. \newblock Academic Press, Orlando, 1987. \bibitem{RS2} M.~Reed and B.~Simon. \newblock {\em Methods of Modern Mathematical Physics, 2}. \newblock Academic Press, New York, 1975. \bibitem{Shapiro} B.~Shapiro. \newblock Quantum conduction on a {C}ayley tree. \newblock {\em Physical Review Letters}, 50(10):747--750, 1983. \bibitem{Below} J.~von Below. \newblock Classical solvability of linear parabolic equations on networks. \newblock {\em Journal of Differential Equations}, 72:316--337, 1988. \end{thebibliography} \bigskip {\sc Robert Carlson}\\ Department of Mathematics; University of Colorado at\\ Colorado Springs, Colorado 80933 USA.\\ E-mail address: carlson@castle.uccs.edu \end{document}