\documentclass[reqno]{amsart} \usepackage{mathrsfs} \AtBeginDocument{{\noindent\small {\em Electronic Journal of Differential Equations}, Vol. 2003(2003), No. 26, pp. 1--21.\newline ISSN: 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu \newline ftp ejde.math.swt.edu (login: ftp)} \thanks{\copyright 2003 Southwest Texas State University.} \vspace{9mm}} \begin{document} \title[\hfilneg EJDE--2003/26\hfil On 2$\times$2 systems of conservation laws] {On 2$\times$2 systems of conservation laws with fluxes that are entropies} \author[Michael Junk \hfil EJDE--2003/26\hfilneg] {Michael Junk} \address{FB Mathematik, Universit\"at Kaiserslautern, Erwin-Schr\"odingerstra\ss e, 67663 Kaiserslautern, Germany} \email{junk@mathematik.uni-kl.de} \date{} \thanks{Submitted August 26, 2002. Published March 13, 2003.} \subjclass[2000]{35L65, 82C40} \keywords{Nonlinear conservation laws, entropies, kinetic formulation} \begin{abstract} In this article, we study systems of conservation laws with two dependent and two independent variables which have the property that the fluxes are entropies. Several characterizations of such flux functions are presented. It turns out that the corresponding systems automatically possess a large class of additional entropies, that they are closely related to a kinetic equation, and that, in the case of strict hyperbolicity, they can be decoupled into two independent Burgers' equations. The isentropic Euler equations with zero or cubic pressure laws are the most prominent examples of such systems, but other examples are also presented.
\end{abstract} \maketitle \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{cor}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \numberwithin{equation}{section} \section{Introduction and statement of results}\label{s1} The considerations in the present article are motivated by the work of Bouchut \cite{Bou99} who establishes a connection between general systems of conservation laws and kinetic equations with collision operators in relaxation form (so-called BGK operators \cite{BGK}). While the kinetic solution normally yields approximations to the underlying hyperbolic system which are of first order in the BGK relaxation parameter, the approximation can be second order accurate if the fluxes in the hyperbolic system are themselves entropies (we call such fluxes {\em entropic}). This observation, which we describe in more detail below, is our starting point. It indicates that systems with entropic fluxes have some deeper relation to kinetic formulations. In the case of scalar conservation laws, where fluxes are always entropic because all smooth functions are entropies, this relation has been used successfully (see, for example, \cite{LPT94,P&T91}). For general systems, entropies are rare, which already indicates that systems with entropic fluxes are not easy to find. However, in the case of $2\times2$ systems, general statements about systems with entropic fluxes are possible. In particular, we will show the {\em existence of entropic fluxes} by characterizing entropic flux functions as solutions of a non-linear hyperbolic problem. Moreover, we prove that for systems with entropic fluxes, {\em many additional entropies} can be constructed by simple integration.
Finally, we will see that the assumption of entropic fluxes automatically leads to a natural {\em kinetic formulation of the system}. Before commenting in more detail on these topics, let us briefly review why entropic fluxes are worth studying. For ease of notation, we consider a system of $m$ conservation laws in a single space dimension \begin{equation}\label{r1.0} \partial_t{\boldsymbol U}+\partial_x{\boldsymbol F}({\boldsymbol U})={\boldsymbol 0} \end{equation} where ${\boldsymbol U}(t,x)\in{\mathbb{R}}^m$ (for the general case, we refer to \cite{Bou99}). The basic idea of kinetic formulations is to replace the non-linear equation \eqref{r1.0} by some related semi-linear kinetic equation, for example, a BGK model for ${\boldsymbol f}_\epsilon(t,x,\xi)\in{\mathbb{R}}^m$ which has \eqref{r1.0} as singular limit \begin{equation}\label{r1.1} \partial_t{\boldsymbol f}_\epsilon+a(\xi)\partial_x {\boldsymbol f}_\epsilon=\frac{1}{\epsilon}\left({\boldsymbol M}(\left\langle {\boldsymbol f}_\epsilon\right\rangle)-{\boldsymbol f}_\epsilon\right). \end{equation} The additional kinetic variable $\xi$ may be discrete or continuous and $a(\xi)$ is a given function, for example $a(\xi)=\xi$. The relation between ${\boldsymbol f}_\epsilon$ and an approximate solution ${\boldsymbol U}_\epsilon$ of \eqref{r1.0} is established by averaging over $\xi$ (with respect to a measure on the $\xi$ space, e.g.\ $\xi\in{\mathbb{R}}$ with Lebesgue measure) which is denoted by ${\boldsymbol U}_\epsilon(t,x)=\left\langle {\boldsymbol f}_\epsilon(t,x,\xi)\right\rangle_\xi$, or simply ${\boldsymbol U}_\epsilon=\left\langle {\boldsymbol f}_\epsilon\right\rangle$.
Note that the non-linearity of the original problem \eqref{r1.0} is now condensed in the so-called Maxwellian function ${\boldsymbol M}(\left\langle {\boldsymbol f}_\epsilon\right\rangle,\xi)$ which depends, in general, non-linearly on $\left\langle {\boldsymbol f}_\epsilon\right\rangle$ and should satisfy \begin{equation}\label{r1.2} \left\langle {\boldsymbol M}({\boldsymbol U},\xi)\right\rangle_\xi={\boldsymbol U},\quad \left\langle a(\xi){\boldsymbol M}({\boldsymbol U},\xi)\right\rangle_\xi={\boldsymbol F}({\boldsymbol U})\quad \text{for all } {\boldsymbol U}. \end{equation} Note that the simplification from a non-linear to a semi-linear PDE comes at the price of an additional variable $\xi$ and a singular limit $\epsilon\to0$. We refer to \cite{AN00,DM94} and \cite{P&T91} for examples of how to profit from the kinetic reformulation \eqref{r1.1} in both numerical and analytical investigations of \eqref{r1.0}. To see that \eqref{r1.1} formally leads to \eqref{r1.0} in the limit $\epsilon\to0$, we assume that ${\boldsymbol f}_\epsilon\to{\boldsymbol f}$ and consequently, ${\boldsymbol U}_\epsilon=\left\langle {\boldsymbol f}_\epsilon\right\rangle\to\left\langle {\boldsymbol f}\right\rangle={\boldsymbol U}$.
Taking the average of \eqref{r1.1} and using the first relation in \eqref{r1.2}, we find \begin{equation}\label{r1.3} \partial_t{\boldsymbol U}_\epsilon+\partial_x\left\langle a{\boldsymbol f}_\epsilon\right\rangle ={\boldsymbol 0}. \end{equation} To obtain information about ${\boldsymbol f}_\epsilon$ in terms of ${\boldsymbol U}_\epsilon$, we regroup \eqref{r1.1} after multiplication by $\epsilon$ \begin{equation}\label{r1.4} {\boldsymbol f}_\epsilon={\boldsymbol M}({\boldsymbol U}_\epsilon)-\epsilon(\partial_t{\boldsymbol f}_\epsilon+a\partial_x {\boldsymbol f}_\epsilon). \end{equation} Replacing ${\boldsymbol f}_\epsilon$ on the right of \eqref{r1.4} by relation \eqref{r1.4} itself, we obtain \begin{equation*} {\boldsymbol f}_\epsilon={\boldsymbol M}({\boldsymbol U}_\epsilon)-\epsilon(\partial_t{\boldsymbol M}({\boldsymbol U}_\epsilon)+a \partial_x {\boldsymbol M}({\boldsymbol U}_\epsilon))+{\mathcal{O}}(\epsilon^2). \end{equation*} Hence, using \eqref{r1.2} and \eqref{r1.3}, we are led to \begin{equation}\label{r1.5} \partial_t{\boldsymbol U}_\epsilon+\partial_x{\boldsymbol F}({\boldsymbol U}_\epsilon)=\epsilon\partial_x\left(\partial_t{\boldsymbol F}({\boldsymbol U}_\epsilon)+\partial_x\left\langle a^2{\boldsymbol M}({\boldsymbol U}_\epsilon)\right\rangle\right)+{\mathcal{O}}(\epsilon^2). \end{equation} From equation \eqref{r1.5} we can see that ${\boldsymbol U}_\epsilon$ is (formally) a first order approximation to the solution of \eqref{r1.0}. If, however, the additional conservation laws \begin{equation*} \partial_t{\boldsymbol F}({\boldsymbol U})+\partial_x{\boldsymbol G}({\boldsymbol U})={\boldsymbol 0},\quad {\boldsymbol G}({\boldsymbol U})=\left\langle a^2{\boldsymbol M}({\boldsymbol U})\right\rangle \end{equation*} are satisfied by solutions of \eqref{r1.0}, i.e.\ if the flux functions $F_i$ are entropies, then ${\boldsymbol U}_\epsilon$ is a second order approximation.
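To make the moment conditions \eqref{r1.2} concrete, the following sketch (an addition, not taken from the text) uses the classical two-discrete-velocity construction: with speeds $a_1\neq a_2$ and a scalar state $U$ with flux $F(U)$, the weights below are the unique solution of \eqref{r1.2}, which sympy confirms symbolically.

```python
import sympy as sp

# illustrative two-velocity model: speeds a1 != a2, scalar state U, flux value F
U, F, a1, a2 = sp.symbols('U F a1 a2')

# Maxwellian weights chosen so that the moment conditions hold
M1 = (a2*U - F) / (a2 - a1)
M2 = (F - a1*U) / (a2 - a1)

print(sp.simplify(M1 + M2))        # <M> = U
print(sp.simplify(a1*M1 + a2*M2))  # <a M> = F
```

The same two relations, with $U$ and $F$ vector valued, are exactly what \eqref{r1.2} demands of a general Maxwellian.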
Thus, the connection between \eqref{r1.0} and \eqref{r1.1} is {\em closer} if the fluxes of \eqref{r1.0} are themselves entropies and if the second moments of the Maxwellian are the corresponding entropy fluxes. The basic idea is now to {\em classify} those systems which are particularly connected to a kinetic formulation and in this article, we concentrate on the case $m=2$. We present characterizations of entropic fluxes in terms of \begin{itemize} \item a partial differential equation for the coefficients of $A={\boldsymbol F}'$, \item integrability properties of functions $h(A)$ like $A^n$, $\exp(A)$, $|A|$, etc., \item a transformation leading to Burgers' equations, \item existence of a particular Maxwellian function. \end{itemize} Consequences of these characterizations will be discussed below and proofs are presented in Sections \ref{s2} to \ref{s4}. We start by listing our basic assumptions. \subsection{Assumptions and definitions} We consider hyperbolic systems of the form \begin{equation}\label{1} \begin{aligned} & \partial_t U_1+\partial_x F_1({\boldsymbol U})=0,\\ & \partial_t U_2+\partial_x F_2({\boldsymbol U})=0, \end{aligned} \end{equation} where ${\boldsymbol U}$ ranges in an open, simply connected set ${\mathcal{S}}\subset{\mathbb{R}}^2$ and ${\boldsymbol F}:{\mathcal{S}}\to{\mathbb{R}}^2$ is a continuously differentiable mapping. The hyperbolicity assumption means that the Jacobian matrix $A({\boldsymbol U})={\boldsymbol F}'({\boldsymbol U})$ has only real eigenvalues (the prime always refers to a ${\boldsymbol U}$ derivative). If $A({\boldsymbol U})$ has two distinct real eigenvalues, we call \eqref{1} strictly hyperbolic. A differentiable function $\eta:{\mathcal{S}}\to{\mathbb{R}}$ is called an {\em entropy} of the system \eqref{1} if the one-form $\eta'{\boldsymbol F}'$ is {\em exact}, i.e.\ if there exists some function $\phi:{\mathcal{S}}\to{\mathbb{R}}$ (the {\em entropy-flux}) such that $\phi'=\eta'{\boldsymbol F}'$.
Note that $\phi$ can be constructed by integrating $\eta'{\boldsymbol F}'$ along a path in ${\mathcal{S}}$ with a fixed starting point and a variable endpoint (which we indicate by a preceding $\int$ symbol, i.e.\ $\phi=\int\eta'{\boldsymbol F}'$). In the following, we concentrate on the case where the fluxes $F_i$ in \eqref{1} are both entropies so that $F_1'{\boldsymbol F}'$ and $F_2'{\boldsymbol F}'$ are exact. Since $F_i'{\boldsymbol F}'$ are the rows of the matrix $({\boldsymbol F}')^2$, we are led to the following definition. \begin{definition} \label{D1} A matrix function $B\in C^0({\mathcal{S}},{\mathbb{R}}^{2\times 2})$ is called exact if it is a Jacobian, i.e.\ if there exists a function ${\boldsymbol b}\in C^1({\mathcal{S}},{\mathbb{R}}^{2})$ such that ${\boldsymbol b}'=B$. A function ${\boldsymbol F}\in C^1({\mathcal{S}},{\mathbb{R}}^2)$ is called entropic if $({\boldsymbol F}')^2$ is exact. \end{definition} We will refer not only to the square of $A={\boldsymbol F}'$ but also to higher powers $A^n$, resp.\ polynomials $Q=\beta_n A^n+\beta_{n-1}A^{n-1}+\dots+\beta_1 A+\beta_0$ with $\beta_i\in{\mathbb{C}}$. Note that $Q$ is a matrix valued mapping from ${\mathcal{S}}$ to ${\mathbb{C}}^{2\times 2}$. The collection of all these mappings is a sub-algebra of $C^0({\mathcal{S}},{\mathbb{C}}^{2\times2})$ with respect to the point-wise matrix product. The locally-uniform closure of this sub-algebra will be denoted by ${\mathcal{P}}(A)$. \begin{definition} \label{3D22} Let $A:{\mathcal{S}}\to{\mathbb{R}}^{2\times 2}$ be continuous. The set ${\mathcal{P}}(A)$ consists of those functions $Q:{\mathcal{S}}\to{\mathbb{C}}^{2\times 2}$ which are locally-uniform limits of $A$-polynomials over ${\mathbb{C}}$. \end{definition} A few properties of ${\mathcal{P}}(A)$ are discussed in the appendix.
\subsection{Existence of entropic fluxes} In Section \ref{s2.2}, we show that $A=(a_{ij})\in C^1({\mathcal{S}},{\mathbb{R}}^{2\times2})$ with trace $\mu={\operatorname{tr}}(A)$ and determinant $-\lambda=\det A$ is the Jacobian of an entropic flux function ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ if and only if it satisfies the relations \begin{equation}\label{r1.A} \begin{aligned} \frac{\partial a_{12}}{\partial U_1}-\frac{\partial a_{11}}{\partial U_2}&=0,&\quad &\frac{\partial a_{22}}{\partial U_1}-\frac{\partial a_{21}}{\partial U_2}=0,\\ a_{12}\frac{\partial \mu}{\partial U_1}-a_{11}\frac{\partial \mu}{\partial U_2}-\frac{\partial \lambda}{\partial U_2}&=0,&\quad & \frac{\partial \lambda}{\partial U_1}+a_{22}\frac{\partial \mu}{\partial U_1}-a_{21}\frac{\partial \mu}{\partial U_2}=0 \end{aligned} \end{equation} as well as $\mu^2+4\lambda\geq0$. The first row in \eqref{r1.A} consists of integrability conditions which ensure that $A$ is exact. Similarly, the second row yields integrability of $A^2$, and the inequality $\mu^2+4\lambda\geq0$ guarantees that the eigenvalues of $A$ are real so that the flux ${\boldsymbol F}=\int A$ leads to a hyperbolic system. In terms of the $a_{ij}$, the system \eqref{r1.A} is hyperbolic and, in Section \ref{s3.0}, we show existence of local solutions with the help of the Cauchy-Kovalevskaya theorem. Hence, by prescribing $A$ along a suitable curve through some point $\bar{\boldsymbol U}\in{\mathbb{R}}^2$, we can find a neighborhood ${\mathcal{S}}$ of $\bar{\boldsymbol U}$ in which we can solve \eqref{r1.A} with the given data and finally obtain an entropic flux function $\int A={\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$. Apart from this abstract existence result, we discuss several particular solutions of \eqref{r1.A}. For example, every constant $2\times 2$ matrix satisfies \eqref{r1.A} so that linear flux functions ${\boldsymbol F}({\boldsymbol U})=A{\boldsymbol U}$ are always entropic.
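As an added sanity check (not part of the original derivation), one can verify with sympy that the Jacobian of the isentropic Euler flux with cubic pressure law, the main example treated below, satisfies all four relations in \eqref{r1.A} together with $\mu^2+4\lambda\geq0$.

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
m = sp.symbols('m', real=True)

# Jacobian of the cubic-pressure Euler flux F = (m, m^2/rho + rho^3/3)
F = sp.Matrix([m, m**2/rho + rho**3/3])
A = F.jacobian([rho, m])
a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
mu = A.trace()        # mu = tr(A)
lam = -A.det()        # lambda = -det(A)

# the four relations: exactness of A (first two) and of A^2 (last two)
conds = [
    sp.diff(a12, rho) - sp.diff(a11, m),
    sp.diff(a22, rho) - sp.diff(a21, m),
    a12*sp.diff(mu, rho) - a11*sp.diff(mu, m) - sp.diff(lam, m),
    sp.diff(lam, rho) + a22*sp.diff(mu, rho) - a21*sp.diff(mu, m),
]
print([sp.simplify(c) for c in conds])   # [0, 0, 0, 0]
print(sp.simplify(mu**2 + 4*lam))        # 4*rho**2, so mu^2 + 4*lambda >= 0
```

Here $U_1=\rho$ and $U_2=m$, so the $U_1$- and $U_2$-derivatives become $\rho$- and $m$-derivatives.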
Other simple solutions correspond to decoupled fluxes, i.e.\ ${\boldsymbol F}({\boldsymbol U})=(F_1(U_1),F_2(U_2))^{\scriptscriptstyle T}$ with Jacobians \begin{equation*} A({\boldsymbol U})=\begin{pmatrix} F_1'(U_1) & 0\\ 0 & F_2'(U_2)\end{pmatrix} \end{equation*} which clearly satisfy \eqref{r1.A}. These decoupled systems appear in Section \ref{s3.3} where we characterize symmetric solutions of \eqref{r1.A}. Note that if $A={\boldsymbol F}'$ is symmetric then ${\boldsymbol F}$ satisfies the integrability condition $\partial_{U_2}F_1=\partial_{U_1}F_2$, so that ${\boldsymbol F}=\Phi'$ for some scalar potential $\Phi$. It turns out that in this case, the non-linear system \eqref{r1.A} can be transformed into a linear, second order, hyperbolic equation for $\Phi$ which eventually leads to entropic fluxes which decouple under suitable transformations. In a next step, we concentrate on flux functions of the form \begin{equation}\label{r1.30} {\boldsymbol F}({\boldsymbol U})=(U_2,F_2({\boldsymbol U}))^{\scriptscriptstyle T},\quad A({\boldsymbol U})=\begin{pmatrix} 0 & 1\\ \lambda({\boldsymbol U}) & \mu({\boldsymbol U})\end{pmatrix}. \end{equation} Under this structural assumption on $A$, the system \eqref{r1.A} reduces to a non-linear hyperbolic system for $\lambda$ and $\mu$ which can be simplified further by passing to an equivalent system for the Riemann invariants $H_1, H_2$ \begin{equation}\label{r1.A2} \frac{\partial }{\partial U_1} \begin{pmatrix} H_1\\H_2\end{pmatrix}+ \begin{pmatrix} H_2 & 0\\0 & H_1\end{pmatrix}\frac{\partial }{\partial U_2} \begin{pmatrix} H_1\\H_2\end{pmatrix} ={\boldsymbol 0}. \end{equation} It turns out that the simple wave solutions of \eqref{r1.A2}, i.e.\ those solutions for which either $H_1$ or $H_2$ is constant, lead again to entropic fluxes which decouple under suitable transformations. Other particular solutions of \eqref{r1.A2} are easily obtained in the case $H_1=H_2=H$, where \eqref{r1.A2} reduces to Burgers' equation for $H$.
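A quick symbolic check (added here as a sketch) confirms two instances of this: the Riemann invariants $H_1=u+\rho$, $H_2=u-\rho$ of the cubic-pressure Euler system solve \eqref{r1.A2}, and $H=u=m/\rho$ solves the Burgers-type equation of the case $H_1=H_2=H$.

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
m = sp.symbols('m', real=True)
u = m / rho

# Riemann invariants of the cubic-pressure Euler system (U1 = rho, U2 = m)
H1, H2 = u + rho, u - rho
eq1 = sp.diff(H1, rho) + H2*sp.diff(H1, m)   # first component of the system
eq2 = sp.diff(H2, rho) + H1*sp.diff(H2, m)   # second component
print(sp.simplify(eq1), sp.simplify(eq2))    # 0 0

# the case H1 = H2 = H: H = u solves d_{U1} H + H d_{U2} H = 0
eq3 = sp.diff(u, rho) + u*sp.diff(u, m)
print(sp.simplify(eq3))                      # 0
```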
Hence, any smooth solution of Burgers' equation gives rise to an entropic flux function. For example, the ``initial'' value $H(0,U_2)=U_2$ leads to the flux function of the pressure-less Euler equation. A less familiar flux is also derived in Section \ref{s3.2} based on $H(0,U_2)=U_2^3/|U_2|$. Finally, the ansatz $H_{1,2}({\boldsymbol U})=H({\boldsymbol U})\pm h(U_1)$ leads to a solution of \eqref{r1.A2} which gives rise to the isentropic Euler equation with cubic pressure law which is also studied independently in Section \ref{s3.1}. Although the structural assumption \eqref{r1.30} on the flux Jacobian seems to be quite restrictive, we show in Section \ref{s3.2} that it actually is a {\em standard form} of entropic fluxes: whenever the first component of an entropic flux depends reasonably on $U_2$, one can transform the ${\boldsymbol U}$-variable in such a way that the new flux is again entropic and has the form \eqref{r1.30}. \subsection{Additional entropies} A characterization of entropic fluxes in terms of integrability properties is given by the following result which we prove in Section \ref{s2.1}. \begin{theorem} \label{3T55} ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ is entropic $\Leftrightarrow$ all $Q\in{\mathcal{P}}({\boldsymbol F}')$ are exact. \end{theorem} This theorem has the consequence that for systems \eqref{1} with entropic flux functions, many additional entropies can be found by integration: since ${\mathcal{P}}({\boldsymbol F}')$ is an algebra, $(\int Q)'{\boldsymbol F}'=Q{\boldsymbol F}'\in{\mathcal{P}}({\boldsymbol F}')$ and hence, $Q{\boldsymbol F}'$ is exact for every $Q\in{\mathcal{P}}({\boldsymbol F}')$, so that the components of $\int Q$ are entropies. Moreover, we prove in Section \ref{s4} that, if \eqref{1} admits at least one strictly convex entropy, then every convex entropy $\eta$ of \eqref{1} gives rise to additional entropies $\int \eta'Q$ with $Q\in{\mathcal{P}}({\boldsymbol F}')$.
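A small symbolic experiment (an addition, using the cubic-pressure Euler flux that serves as the running example) illustrates the mechanism: for an entropic flux, every power $({\boldsymbol F}')^n$ should be exact, i.e.\ each of its rows should satisfy the cross-derivative condition.

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
m = sp.symbols('m', real=True)

# cubic-pressure Euler flux, an entropic flux
F = sp.Matrix([m, m**2/rho + rho**3/3])
A = F.jacobian([rho, m])

# exactness of A^n: each row (p, q) must satisfy dp/dm = dq/drho
P = sp.eye(2)
curls = []
for n in range(1, 6):
    P = (P*A).applyfunc(sp.simplify)     # P = A^n
    for i in (0, 1):
        curls.append(sp.simplify(sp.diff(P[i, 0], m) - sp.diff(P[i, 1], rho)))
print(curls)  # ten zeros: A^1, ..., A^5 are all exact
```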
Since the availability of convex entropies is important in the analysis of hyperbolic equations, it would be nice if the new entropies $\int \eta'Q$ were also convex. However, this is not true in general (see the example below) and there is no simple criterion to check which elements $Q$ of ${\mathcal{P}}({\boldsymbol F}')$ give rise to convex entropies (see also the comment at the end of Section \ref{kf}). In order to illustrate Theorem \ref{3T55}, we consider the isentropic Euler equation which has an entropic flux if the pressure law is cubic (see Section \ref{s3.1}; the first component is a convex and the second a strictly convex entropy) \begin{equation}\label{eul} {\boldsymbol F}(\rho,m)=\begin{pmatrix}m\\ \frac{m^2}{\rho}+\frac{1}{3}\rho^3\end{pmatrix}, \quad \rho>0,\,\,m\in{\mathbb{R}}. \end{equation} Choosing the ${\boldsymbol F}'$-monomials $({\boldsymbol F}')^n\in{\mathcal{P}}({\boldsymbol F}')$ and noting that, since $F_1'=(0,1)$, the first row of $({\boldsymbol F}')^n$ is equal to the second row of $({\boldsymbol F}')^{n-1}$, we obtain entropies \begin{equation}\label{r1.21} \begin{pmatrix}\eta_n\\\eta_{n+1}\end{pmatrix}=\int ({\boldsymbol F}')^n,\quad n\geq0. \end{equation} For example, we find with $u=m/\rho$ \begin{align*} \eta_0(\rho,m) & = \rho, &\quad \eta_1(\rho,m) & = m,\\ \eta_2(\rho,m) & = \rho u^2+\frac{1}{3}\rho^3, &\quad \eta_3(\rho,m) & = \rho u^3+\rho^3u,\\ \eta_4(\rho,m) & = \rho u^4+ 2\rho^3u^2+ \frac{1}{5}\rho^5, &\quad \eta_5(\rho,m) & = \rho u^5+\frac{10}{3}\rho^3 u^3+\rho^5 u. \end{align*} In general, we have \begin{equation}\label{r1.11} \eta_n(\rho,m)=\frac{(u+\rho)^{n+1}-(u-\rho)^{n+1}}{2(n+1)}. \end{equation} Since analytic functions $h$ give rise to elements $h({\boldsymbol F}')$ of ${\mathcal{P}}({\boldsymbol F}')$, we can also choose, for example, $h(s)=\exp(s)$. The components of $\int h({\boldsymbol F}')$ are \begin{equation*} \hat\eta_1(\rho,m)=\frac{e^u}{\sqrt{3\alpha}}\sinh(\rho),\quad \hat\eta_2(\rho,m)=\frac{e^u}{\sqrt{3\alpha}}((u-1)\sinh(\rho)+\rho\cosh(\rho)).
\end{equation*} According to Lemma \ref{3L23} in the appendix, we can even use continuous functions $h$ to generate elements of ${\mathcal{P}}({\boldsymbol F}')$. In general, we set \begin{equation*} h({\boldsymbol F}')=R\begin{pmatrix}h(\lambda_1) & 0\\ 0 & h(\lambda_2)\end{pmatrix}R^{-1} \end{equation*} where $\lambda_i$ are the eigenvalues of ${\boldsymbol F}'$ and $R$ contains the right eigenvectors in its columns. Choosing, for example, $h(s)=|s|$, the first component of $\int |{\boldsymbol F}'|$ is the entropy \begin{equation}\label{abs} \eta(\rho,m)=\frac{1}{4}\left((u+\rho)^2{\operatorname{sign}}(u+\rho)-(u-\rho)^2{\operatorname{sign}}(u-\rho)\right). \end{equation} We remark that the entropies for the Euler equation listed above are so-called {\em weak entropies} which can also be generated with suitable functions $g(\xi)$ in the form (see \cite{LPT94b}) \begin{equation}\label{enin} \eta(\rho,m)=\int_{\mathbb{R}} g(\xi)\chi(\rho,\xi-u)\,d\xi, \end{equation} where $\xi\mapsto 2\chi(\rho,\xi)$ is the indicator function of the interval $[-\rho,\rho]$. In \cite{LPT94b} it has been shown that entropies of type \eqref{enin} are convex if and only if $g$ is convex. Hence, the $\eta_n$ above are convex for all even $n\in{\mathbb{N}}$ since they belong to $g_n(\xi)=\xi^n$. Moreover, $\hat \eta_1$ is convex because it corresponds to $g(\xi)=\exp(\xi)$ and $\hat \eta_2$ is associated with the non-convex function $g(\xi)=(\xi-1)\exp(\xi)$. Finally, the entropy \eqref{abs} is also convex since it belongs to $g(\xi)=|\xi|$. \subsection{Decoupling property} To illustrate the next characterization of entropic fluxes, we consider again the isentropic Euler equation with pressure law $p(\rho)=\rho^3/3$: \begin{equation*} {\boldsymbol F}(\rho,m)=\begin{pmatrix}m\\ \frac{m^2}{\rho}+\frac{1}{3}\rho^3\end{pmatrix}, \quad A(\rho,m)=\begin{pmatrix} 0 & 1\\c^2-u^2 & 2u\end{pmatrix} \end{equation*} where $u=m/\rho$ and $c=\sqrt{p'(\rho)}=\rho$.
In terms of the eigenvalues $H_1(\rho,m)=u+c$ and $H_2(\rho,m)=u-c$ of $A$ (i.e.\ the characteristic speeds of the Euler system), we can write \begin{equation*} A=\begin{pmatrix} 0 & 1\\-H_1H_2 & H_1+H_2\end{pmatrix}. \end{equation*} It is an interesting feature of the isentropic Euler system that the derivatives of the eigenvalues \begin{equation*} H_1'=\frac{2}{H_1-H_2}\begin{pmatrix}-H_2 & 1\end{pmatrix},\quad H_2'=\frac{2}{H_1-H_2}\begin{pmatrix}-H_1 & 1\end{pmatrix}, \end{equation*} are left eigenvectors of $A$ with eigenvalues $H_1, H_2$. In other words, $H_1, H_2$ are Riemann invariants of the Euler system and \begin{equation*} H_1'{\boldsymbol F}'=H_1H_1'=(H_1^2/2)',\quad H_2'{\boldsymbol F}'=H_2H_2'=(H_2^2/2)' \end{equation*} so that the {\em characteristic speeds are entropies}. Consequently, if $(\rho,m)$ is a smooth solution of the Euler system, then the $H_i(\rho,m)$ satisfy a system of two decoupled Burgers' equations \begin{equation*} \begin{aligned} & \partial_t H_1+\partial_x H_1^2/2=0,\\ & \partial_t H_2+\partial_x H_2^2/2=0. \end{aligned} \end{equation*} As it turns out in Section \ref{s2.3}, Theorem \ref{r1.T2}, this property is not restricted to the Euler system but actually {\em characterizes} strictly hyperbolic systems with entropic fluxes: the characteristic speeds are entropies and every such system decouples into independent Burgers' equations. Conversely, if a system can be decoupled in this way, the flux is entropic. We also present a result for non-strictly hyperbolic systems which generalizes a feature of the pressure-less Euler equation \begin{equation}\label{r1.60} \begin{aligned} & \partial_t \rho+\partial_x m=0,\\ & \partial_t m+\partial_x \frac{m^2}{\rho}=0, \end{aligned} \end{equation} which has a non-diagonalizable flux Jacobian with eigenvalue $H(\rho,m)=u$.
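Both observations can be confirmed symbolically. The following sketch (an addition, using sympy) checks that $H_i'$ are left eigenvectors of $A$ with $H_i'A=(H_i^2/2)'$ for the cubic pressure law, and that the pressure-less Jacobian has the double eigenvalue $u$ with only a one-dimensional eigenspace.

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
m = sp.symbols('m', real=True)
u = m / rho

# cubic pressure law: characteristic speeds are entropies with flux H^2/2
A = sp.Matrix([m, m**2/rho + rho**3/3]).jacobian([rho, m])
for H in (u + rho, u - rho):
    dH = sp.Matrix([[sp.diff(H, rho), sp.diff(H, m)]])             # row vector H'
    dflux = sp.Matrix([[sp.diff(H**2/2, rho), sp.diff(H**2/2, m)]])
    assert (dH*A - H*dH).applyfunc(sp.simplify) == sp.zeros(1, 2)   # left eigenvector
    assert (dH*A - dflux).applyfunc(sp.simplify) == sp.zeros(1, 2)  # entropy pair

# pressure-less case: double eigenvalue u, one-dimensional eigenspace
B = sp.Matrix([m, m**2/rho]).jacobian([rho, m])
print({sp.simplify(k): val for k, val in B.eigenvals().items()})  # {m/rho: 2}
print((B - u*sp.eye(2)).rank())   # 1, so B is not diagonalizable
```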
Writing $m=\rho u$ in the $m$-equation of \eqref{r1.60}, we find \begin{equation*} \rho( \partial_t u +u\partial_x u)+u( \partial_t \rho+\partial_x m)=0. \end{equation*} In view of the $\rho$-equation, we conclude that for smooth solutions, the eigenvalue $H(\rho,m)=u$ again satisfies a Burgers' equation (i.e.\ $H$ is an entropy of the system with flux $H^2/2$). In Corollary \ref{r1.P2}, we will see that any non-diagonalizable hyperbolic system with entropic flux has this property. \subsection{Kinetic formulation}\label{kf} Our last characterization of entropic fluxes concerns the existence of a particular Maxwellian function ${\boldsymbol M}({\boldsymbol U},v)$. With respect to the kinetic variable $v\in{\mathbb{R}}$, the components $M_i({\boldsymbol U},v)$ of the Maxwellian are compactly supported distributions on ${\mathbb{R}}$; the space of such distributions is denoted by $\mathcal{E}'({\mathbb{R}})$ with dual product $\left\langle\cdot,\cdot\right\rangle$. For scalar test functions $\phi\in C^\infty({\mathbb{R}})$, the product $\left\langle {\boldsymbol M},\phi\right\rangle$ is understood component-wise, and for pairs of test functions ${\boldsymbol \phi}=(\phi_1,\phi_2)^{\scriptscriptstyle T}\in C^\infty({\mathbb{R}})^2$, the product $\left\langle {\boldsymbol M},{\boldsymbol \phi}\right\rangle$ abbreviates $\left\langle M_1,\phi_1\right\rangle+\left\langle M_2,\phi_2\right\rangle$. \begin{theorem} \label{momprob} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ be the flux of a hyperbolic system.
Then ${\boldsymbol F}$ is entropic if and only if there exists a function ${\boldsymbol M}\in C^1({\mathcal{S}},\mathcal{E}'({\mathbb{R}})^2)$ which is a Maxwellian for \eqref{1}, i.e.\ \begin{equation}\label{r1.20} \left\langle {\boldsymbol M}({\boldsymbol U}),1\right\rangle={\boldsymbol U},\quad \left\langle {\boldsymbol M}({\boldsymbol U}),v\right\rangle={\boldsymbol F}({\boldsymbol U}),\quad \forall {\boldsymbol U}\in{\mathcal{S}}, \end{equation} with the additional property that each element of the set \begin{equation}\label{r1.22} {\mathcal{E}}=\left\{ (\left\langle {\boldsymbol M},{\boldsymbol \phi}\right\rangle,\left\langle v{\boldsymbol M},{\boldsymbol \phi}\right\rangle):{\boldsymbol \phi}=(\phi_1,\phi_2)^{\scriptscriptstyle T}\in C^\infty({\mathbb{R}})^2\right\} \end{equation} is an entropy--entropy-flux pair for \eqref{1}. \end{theorem} Details of the proof can be found in Section \ref{s2.4}. Here, we just mention the example of the isentropic Euler equation with cubic pressure law. In this case, the Maxwellian is given by \begin{equation*} {\boldsymbol M}(\rho,m,v)=\begin{pmatrix}1\\ v\end{pmatrix}\chi(\rho,v-m/\rho) \end{equation*} with $s\mapsto 2\chi(\rho,s)$ being the indicator function of $[-\rho,\rho]$. In view of \eqref{r1.21}, \eqref{r1.11}, and \eqref{enin}, it is easy to check that ${\boldsymbol M}$ satisfies \eqref{r1.20}. The entropy property of ${\boldsymbol M}$ follows from the fact that expressions of type \eqref{enin} are entropies for the isentropic Euler system. We refer to \cite{LPT94b} for the proof that the corresponding entropy fluxes are given by the $\chi$-integral with weight $\xi g(\xi)$. The advantage of the Maxwellian ${\boldsymbol M}$ obtained from Theorem \ref{momprob} is that the entropy production for every pair $(\eta,\theta)\in{\mathcal{E}}$ can be characterized by a single distribution ${\boldsymbol J}$.
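That this ${\boldsymbol M}$ satisfies \eqref{r1.20} can indeed be checked by a short computation; the following sympy sketch (an addition) evaluates the moments of $\chi(\rho,v-u)$, which equals $1/2$ on $(u-\rho,u+\rho)$, against $1$, $v$, and $v^2$.

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
m, v = sp.symbols('m v', real=True)
u = m / rho

# chi(rho, v - u) = 1/2 on (u - rho, u + rho), so moments are plain integrals
def moment(weight):
    return sp.integrate(weight*sp.Rational(1, 2), (v, u - rho, u + rho))

print(sp.simplify(moment(1) - rho))                       # 0: <M_1, 1> = rho
print(sp.simplify(moment(v) - m))                         # 0: <M_2, 1> = <M_1, v> = m
print(sp.simplify(moment(v**2) - (m**2/rho + rho**3/3)))  # 0: <M_2, v> = F_2
```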
In fact, if ${\boldsymbol \phi}\in C^\infty({\mathbb{R}})^2$ generates $(\eta,\theta)$, then \begin{equation}\label{r1.23} \partial_t\eta({\boldsymbol U})+\partial_x\theta({\boldsymbol U})=\left\langle {\boldsymbol J},{\boldsymbol \phi}\right\rangle, \quad {\boldsymbol J}=\partial_t{\boldsymbol M}({\boldsymbol U})+v\partial_x{\boldsymbol M}({\boldsymbol U}). \end{equation} Note that the original system \eqref{1} is contained in \eqref{r1.23} because, in view of \eqref{r1.20}, the test functions ${\boldsymbol \phi}(v)=(1,0)^{\scriptscriptstyle T}$ and ${\boldsymbol \phi}(v)=(0,1)^{\scriptscriptstyle T}$ generate the pairs $(U_1,F_1({\boldsymbol U}))$ and $(U_2,F_2({\boldsymbol U}))$. This implies that for weak solutions ${\boldsymbol U}$ of \eqref{1}, the relation $\left\langle {\boldsymbol J},1 \right\rangle={\boldsymbol 0}$ is satisfied so that ${\boldsymbol J}=\partial_v{\boldsymbol m}$ with \begin{equation*} \left\langle {\boldsymbol m},\phi\right\rangle_v:= -\left\langle {\boldsymbol J},\Phi\right\rangle,\quad \Phi(v)=\int_0^v\phi(s)\,ds. \end{equation*} We also remark that ${\boldsymbol J}$ vanishes identically if ${\boldsymbol U}$ is a smooth solution of \eqref{1} because entropy productions are zero in that case and hence $\left\langle {\boldsymbol J},{\boldsymbol \phi}\right\rangle=0$ for all ${\boldsymbol \phi}\in C^\infty({\mathbb{R}})^2$. Finally, if ${\boldsymbol U}$ is an entropy solution of \eqref{1}, then the measure ${\boldsymbol m}$ satisfies a sign condition for all test functions from the set \begin{equation*} {\mathcal{T}}_c=\{{\boldsymbol \phi}\in C^\infty({\mathbb{R}})^2: \text{$\left\langle {\boldsymbol M},{\boldsymbol \phi}\right\rangle$ is convex}\}. \end{equation*} To see this, we pick ${\boldsymbol \phi}\in{\mathcal{T}}_c$ and introduce $\eta=\left\langle {\boldsymbol M},{\boldsymbol \phi}\right\rangle$ and $\theta=\left\langle v{\boldsymbol M},{\boldsymbol \phi}\right\rangle$.
Then, for a non-negative test function $\psi\in C_0^\infty((0,\infty)\times {\mathbb{R}})$, we have \begin{equation*} 0 \geq \left\langle \partial_t\eta({\boldsymbol U})+\partial_x\theta({\boldsymbol U}),\psi\right\rangle_{(t,x)} = \left\langle \left\langle {\boldsymbol J},{\boldsymbol \phi}\right\rangle_v,\psi\right\rangle_{(t,x)} = - \left\langle \left\langle {\boldsymbol m},{\boldsymbol \phi}'\right\rangle_v,\psi\right\rangle_{(t,x)} \end{equation*} from which we conclude \begin{equation}\label{r1.24} \left\langle {\boldsymbol m},{\boldsymbol \phi}'\right\rangle_v\geq0\quad \forall {\boldsymbol \phi}\in{\mathcal{T}}_c. \end{equation} Hence, if ${\boldsymbol U}$ is an entropy solution of \eqref{1} then there exists a distribution ${\boldsymbol m}$ with compact $v$-support which satisfies \eqref{r1.24} and \begin{equation}\label{kins} \partial_t {\boldsymbol f}+v\partial_x{\boldsymbol f}=\partial_v {\boldsymbol m}, \quad {\boldsymbol f}={\boldsymbol M}({\boldsymbol U}). \end{equation} A converse statement is also true if the class ${\mathcal{E}}_c=\{(\eta,\theta)\in{\mathcal{E}}:\text{$\eta$ convex}\}$ is rich enough to single out the entropy solutions among the weak solutions of \eqref{1}. In fact, if there are distributions ${\boldsymbol f}$ and ${\boldsymbol m}$ with compact $v$-support satisfying \eqref{kins} and \eqref{r1.24} then ${\boldsymbol U}=\left\langle {\boldsymbol f},1\right\rangle_v$ is an entropy solution of \eqref{1}. This follows easily by applying the test functions $\phi(v)=1$ and ${\boldsymbol \phi}\in{\mathcal{T}}_c$ to the transport equation in \eqref{kins}. If the flux ${\boldsymbol F}$ has the standard form ${\boldsymbol F}({\boldsymbol U})=(U_2,F_2({\boldsymbol U}))^{\scriptscriptstyle T}$, equation \eqref{kins} can be given additional structure. First, we note that since $F_1'=(0,1)$, the second component of $\int ({\boldsymbol F}')^n$ equals the first component of $\int ({\boldsymbol F}')^{n+1}$.
Hence, according to Theorem \ref{momprob}, \begin{equation*} \left\langle M_2,v^n\right\rangle=\left\langle M_1,v^{n+1}\right\rangle=\left\langle v M_1,v^n\right\rangle\quad \forall n\in{\mathbb{N}}_0 \end{equation*} and unique solvability of moment problems in the class of compactly supported distributions implies $M_2=v M_1$. The definition of ${\boldsymbol J}$ in \eqref{r1.23} accordingly yields $J_2=v J_1$. Consequently, $\partial_v(vm_1)=v\partial_v m_1+m_1=\partial_v m_2+m_1$, which yields $m_1=-\partial_v\mu$ with $\mu=m_2-v m_1$ and $J_1=-\partial_v^2 \mu$. Thus, in the case of fluxes in standard form, the vector equation \eqref{kins} can be replaced by a scalar one \begin{equation}\label{kin} \partial_t f+v\partial_xf=-\partial_v^2 \mu, \quad f=M_1({\boldsymbol U}). \end{equation} We remark that kinetic formulations of type \eqref{kin} have been used successfully in a number of cases to derive information about solutions of the underlying conservation laws (see, for instance, \cite{LPT94b,LPT94,B&C98}). A common feature in all these examples is that the kinetic formulation relates to entropy solutions of the conservation laws if the distribution $\mu$ is {\em non-negative}. Assuming non-negativity in our case has the following consequence: we pick test functions $\psi(t,x)\geq0$, $\varphi(v)\geq0$ and integrate $\varphi$ twice to obtain a $C^\infty$ function $\phi$ which is convex. Using the definition of $J_1$, we find \begin{multline*} 0\leq\left\langle \mu,\psi\otimes\varphi\right\rangle=\left\langle\partial_v^2\mu,\psi\otimes\phi\right\rangle=-\left\langle J_1,\psi\otimes\phi\right\rangle\\ =\left\langle \left\langle M_1,\phi\right\rangle,\partial_t\psi\right\rangle+\left\langle\left\langle M_1,v\phi\right\rangle,\partial_x\psi\right\rangle \end{multline*} where $\left\langle M_1,\phi\right\rangle$ is an entropy for \eqref{1} according to Theorem \ref{momprob}.
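The manipulation leading to $m_1=-\partial_v\mu$ and $J_1=-\partial_v^2\mu$ is elementary but easy to mistrust; a sympy sketch (an addition, with the distributions represented by smooth placeholder functions) verifies both identities from the single relation $\partial_v m_2=v\,\partial_v m_1$.

```python
import sympy as sp

v = sp.symbols('v')
m1, m2 = sp.Function('m1')(v), sp.Function('m2')(v)
mu = m2 - v*m1   # mu = m2 - v*m1

# impose dv m2 = v dv m1 (the relation J2 = v J1 with J = dv m)
expr1 = (m1 + mu.diff(v)).subs(m2.diff(v), v*m1.diff(v))
expr2 = (m1.diff(v) + mu.diff(v, 2)).subs(m2.diff(v, 2), (v*m1.diff(v)).diff(v))

print(sp.simplify(expr1))  # 0: m1 = -dv mu
print(sp.simplify(expr2))  # 0: J1 = dv m1 = -dv^2 mu
```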
Hence, non-negativity of $\mu$ is equivalent to the entropy inequalities \begin{equation}\label{ent} \partial_t \left\langle M_1,\phi\right\rangle +\partial_x \left\langle M_1,v\phi\right\rangle\leq0 \end{equation} for all convex $\phi\in C^\infty$. If $M_1$ has the {\em convexity property} that $\left\langle M_1,\phi\right\rangle$ is convex if $\phi$ is convex then \eqref{ent} is the usual entropy condition. In connection with \eqref{enin}, we have already seen that the Maxwellian $M_1(\rho,m,v)=\chi(\rho,v-m/\rho)$ related to the isentropic Euler equation possesses the convexity property. We refer to \cite{LPT94b} for examples of how to take advantage of the kinetic reformulation \eqref{kin} in that case. \subsection{General remarks} In the following sections, we will carefully state and prove the results described above. Section \ref{s2} deals with the characterization of entropic fluxes, in Section \ref{s3} we discuss existence of such fluxes, and in Section \ref{s4}, additional entropies for systems with entropic fluxes are derived. We conclude with the remark that for systems with $m\geq3$ equations, a similar characterization of entropic fluxes seems to be difficult. The reason is that the requirement that ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^m)$ be entropic amounts to $m^2(m-1)$ conditions on $A={\boldsymbol F}'$. In the case $m=1$ there is no condition (all fluxes are entropic), and for $m=2$ there are 4 conditions (the equations in \eqref{r1.A}) which is just enough to fix $A$ and thus ${\boldsymbol F}$. For $m\geq3$, however, the number of conditions exceeds the number of components of $A={\boldsymbol F}'$ which indicates that entropic fluxes are rare if $m\geq3$ (for examples see \cite{B&C98}).
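We remark that the defining property of an entropic flux, namely the exactness of $({\boldsymbol F}')^2$, is easy to test symbolically for concrete examples. The following SymPy sketch (our addition, not part of the original argument) checks it for the isentropic Euler flux with cubic pressure law $p(\rho)=\rho^3/3$, one of the examples mentioned in the abstract; on the simply connected half-plane $\rho>0$, exactness is equivalent to both rows of $({\boldsymbol F}')^2$ being closed one-forms.

```python
# Symbolic sanity check (our addition): for the isentropic Euler flux
# F(rho, m) = (m, m^2/rho + rho^3/3)^T, the matrix (F')^2 is exact,
# i.e. both of its rows are closed one-forms on the set rho > 0.
import sympy as sp

rho, m = sp.symbols('rho m', positive=True)
F = sp.Matrix([m, m**2/rho + rho**3/3])
A = F.jacobian(sp.Matrix([rho, m]))   # A = F'
B = sp.simplify(A * A)                # B = (F')^2

# Row i of B is a closed one-form iff d(B[i,0])/dm == d(B[i,1])/drho.
for i in range(2):
    assert sp.simplify(sp.diff(B[i, 0], m) - sp.diff(B[i, 1], rho)) == 0
print("(F')^2 is exact: both rows are closed one-forms")
```

The same two-line closedness test can be applied to any candidate flux on a simply connected domain.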
\section{Characterization of entropic fluxes}\label{s2} \subsection{Characterization: ${\mathcal{P}}({\boldsymbol F}')$}\label{s2.1} We prove Theorem \ref{3T55} which states that ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ is entropic if and only if every $Q\in{\mathcal{P}}(A)$ with $A={\boldsymbol F}'$ is exact. The if-direction of this statement is easy: since $A^2$ is an $A$-polynomial, we have $A^2\in{\mathcal{P}}(A)$ and hence $A^2=({\boldsymbol F}')^2$ is exact so that ${\boldsymbol F}$ is entropic by Definition \ref{D1}. To prove the only-if part, we proceed in two steps: we show that the exactness of $A^2$ implies the exactness of all powers $A^n$ (and thus of all $A$-polynomials). Then, we use Lemma \ref{3P24} in the appendix (which basically says that exactness carries over to locally-uniform limits) with $\eta({\boldsymbol U})=U_1$, resp.\ $\eta({\boldsymbol U})=U_2$ to conclude the exactness of all $Q\in{\mathcal{P}}(A)$. It thus suffices to show \begin{lemma} \label{r1.L1} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ and $A={\boldsymbol F}'$. If ${\boldsymbol F}$ is entropic then $A^n$ are exact for all $n\in{\mathbb{N}}_0$. \end{lemma} The proof of this Lemma relies on an application of Cayley-Hamilton's theorem which allows us to represent arbitrary powers $A^n$ as combinations of $A$ and the identity matrix $I$. To be more precise, we note that the characteristic polynomial of $A$ has the form \begin{equation*} \chi(s)=s^2-\mu s-\lambda \end{equation*} where $\mu={\operatorname{tr}}(A)$ is the trace and $\lambda=-\det A$ the negative determinant of $A$. Now the theorem of Cayley-Hamilton states that $\chi(A)=0$, i.e. \begin{equation}\label{3m60} A^2=\lambda I+\mu A. \end{equation} Obviously, with the help of \eqref{3m60}, higher powers of $A$ can be reduced to combinations of $I$ and $A$. \begin{lemma} \label{3L56} Let $A\in{\mathbb{R}}^{2\times 2}$ and set $\mu={\operatorname{tr}}(A)$, $\lambda=-\det A$.
Then \begin{equation*} A^n=p_n I+q_n A,\quad n\in{\mathbb{N}}_0 \end{equation*} where $p_n$ and $q_n$ are polynomials in $\mu,\lambda$ which satisfy the recurrence relations \begin{align*} p_n&=\lambda q_{n-1}, & \,& p_0=1,\\ q_n&=\mu q_{n-1} +p_{n-1}, & \,& q_0=0. \end{align*} \end{lemma} \noindent{\bf Proof:} The case $n=0$ is trivially satisfied. Using \eqref{3m60}, we have by induction \begin{multline*} A^{n+1}=(p_n I+q_n A)A=p_n A+q_n A^2=p_n A+q_n(\lambda I+\mu A)\\ =\lambda q_n I+(p_n+\mu q_n)A=p_{n+1}I+q_{n+1}A, \end{multline*} which completes the proof.\hfill$\diamondsuit$ A straightforward calculation shows that $p=p_n$, $q=q_n$ solve the linear hyperbolic system \begin{equation}\label{3m61} \frac{\partial p}{\partial \mu}-\lambda\frac{\partial q}{\partial \lambda}=0,\quad \frac{\partial q}{\partial \mu}-\frac{\partial p}{\partial \lambda}-\mu\frac{\partial q}{\partial \lambda}=0. \end{equation} The following proposition shows that exactness of $A$ and $A^2$ implies exactness of the combination $pI+qA$ if $(p,q)$ satisfy \eqref{3m61} and are evaluated at $\mu={\operatorname{tr}}(A)$, $\lambda=-\det A$. Using, in particular, $p=p_n$ and $q=q_n$, Lemma \ref{3L56} shows that $A^n$ is exact for every $n$ which completes the proof of Lemma \ref{r1.L1}. \begin{proposition} \label{3P57} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$, $A={\boldsymbol F}'$, $\mu={\operatorname{tr}}(A)$, and $\lambda=-\det A$. Further, let ${\mathcal{L}}=\{(p,q): p,q\in C^1({\mathbb{R}}^2,{\mathbb{R}}) \text{ solve \eqref{3m61}}\}$. The following statements are equivalent: \begin{enumerate} \item[i)] ${\boldsymbol F}$ is entropic \item[ii)] $h(A)=p(\lambda,\mu)I+q(\lambda,\mu)A$ is exact for all $(p,q)\in{\mathcal{L}}$ \item[iii)] $p(\lambda,\mu)'+q(\lambda,\mu)'(\mu I -A)=0$ for all $(p,q)\in{\mathcal{L}}$ \item[iv)] $\lambda'+\mu'(\mu I -A)=0$ \end{enumerate} \end{proposition} \noindent{\bf Proof:} In view of \eqref{3m60}, it is clear that (ii) implies (i) with $p(\lambda,\mu)=\lambda$ and $q(\lambda,\mu)=\mu$.
Next, we show the equivalence between (ii) and (iii): since ${\mathcal{S}}$ is simply connected, $h(A)$ is exact if and only if the rows of $h(A)$ are closed one-forms. With \begin{equation*} h(A)=\begin{pmatrix} p+q a_{11} & q a_{12}\\ qa_{21} & p+qa_{22}\end{pmatrix} \end{equation*} this leads to the conditions \begin{equation}\label{3m63} \frac{\partial (p+q a_{11})}{\partial U_2}=\frac{\partial (q a_{12})}{\partial U_1},\quad \frac{\partial (q a_{21})}{\partial U_2}=\frac{\partial (p+qa_{22})}{\partial U_1}. \end{equation} Since $A={\boldsymbol F}'$ is exact, we find from \eqref{3m63} with $p=0$, $q=1$ \begin{equation}\label{3m64} \frac{\partial a_{11}}{\partial U_2}=\frac{\partial a_{12}}{\partial U_1},\quad \frac{\partial a_{21}}{\partial U_2}=\frac{\partial a_{22}}{\partial U_1}. \end{equation} Using \eqref{3m64}, the conditions \eqref{3m63} can be simplified to \begin{equation}\label{3m65} \frac{\partial p}{\partial U_2}+a_{11}\frac{\partial q}{\partial U_2}=a_{12}\frac{\partial q}{\partial U_1},\quad a_{21}\frac{\partial q}{\partial U_2}=\frac{\partial p}{\partial U_1}+a_{22}\frac{\partial q}{\partial U_1}. \end{equation} In terms of the matrix \begin{equation*} \bar A=\begin{pmatrix} a_{22} & - a_{12}\\-a_{21} & a_{11}\end{pmatrix}=\mu I-A \end{equation*} we can write \eqref{3m65} in the compact form \begin{equation}\label{3m66} p'+q'\bar A=0 \end{equation} and $h(A)$ is exact if and only if \eqref{3m66} holds which completes the case (ii) $\Leftrightarrow$ (iii). Setting $p(\lambda,\mu)=\lambda$ and $q(\lambda,\mu)=\mu$ in \eqref{3m66}, we see that (iii) implies (iv), i.e.
\begin{equation}\label{3m67} \lambda'+\mu'\bar A=0. \end{equation} Using the chain rule and \eqref{3m67}, we get \begin{equation*} p'=\mu'\left(\frac{\partial p}{\partial \mu}I-\frac{\partial p}{\partial \lambda}\bar A\right),\quad q'=\mu'\left(\frac{\partial q}{\partial \mu}I-\frac{\partial q}{\partial \lambda}\bar A\right) \end{equation*} so that \begin{equation}\label{3m68} p'+q'\bar A=\mu'\left(\frac{\partial p}{\partial \mu}I+\left(\frac{\partial q}{\partial \mu}-\frac{\partial p}{\partial \lambda}\right)\bar A-\frac{\partial q}{\partial \lambda}{\bar A}^2\right). \end{equation} Observing that ${\operatorname{tr}}(\bar A)={\operatorname{tr}}(A)=\mu$ and $-\det \bar A=-\det A=\lambda$, the theorem of Cayley-Hamilton applied to $\bar A$ yields ${\bar A}^2=\lambda I+\mu \bar A$. Inserting this result into \eqref{3m68}, we conclude \begin{equation}\label{3m69} p'+q'\bar A=\mu'\left(\left(\frac{\partial p}{\partial \mu}-\lambda\frac{\partial q}{\partial \lambda}\right)I+\left(\frac{\partial q}{\partial \mu}-\frac{\partial p}{\partial \lambda}-\mu\frac{\partial q}{\partial \lambda}\right)\bar A\right). \end{equation} In particular, \eqref{3m69} is equal to zero if $(p,q)$ solve \eqref{3m61}, which shows that (iv) implies (iii). A repetition of the above arguments for the special case $h(A)=A^2$ shows that (i) implies (iv) which completes the proof. \hfill$\diamondsuit$ \subsection{Characterization: PDE}\label{s2.2} Condition (iv) in Proposition \ref{3P57} gives rise to two partial differential equations for the coefficients $a_{ij}$ of the Jacobian $A={\boldsymbol F}'$. Note that the exactness of $A$ is equivalent to $\partial_{U_1}a_{12}=\partial_{U_2}a_{11}$ and $\partial_{U_1}a_{22}=\partial_{U_2}a_{21}$. This leads to the following characterization. \begin{theorem}\quad \label{3C58} Let $A=(a_{ij})\in C^1({\mathcal{S}},{\mathbb{R}}^{2\times2})$ with trace $\mu={\operatorname{tr}}(A)$ and negative determinant $\lambda=-\det A$.
Then $A$ is the Jacobian of an entropic flux function ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ if and only if \begin{equation}\label{A} \begin{aligned} \frac{\partial a_{12}}{\partial U_1}-\frac{\partial a_{11}}{\partial U_2}&=0,&\quad &\frac{\partial a_{22}}{\partial U_1}-\frac{\partial a_{21}}{\partial U_2}=0,\\ a_{12}\frac{\partial \mu}{\partial U_1}-a_{11}\frac{\partial \mu}{\partial U_2}-\frac{\partial \lambda}{\partial U_2}&=0,&\quad & \frac{\partial \lambda}{\partial U_1}+a_{22}\frac{\partial \mu}{\partial U_1}-a_{21}\frac{\partial \mu}{\partial U_2}=0. \end{aligned} \end{equation} If $\mu^2+4\lambda\geq0$, then ${\boldsymbol F}=\int A$ is the flux of some hyperbolic $2\times 2$ system. \end{theorem} We remark that condition (iv) in Proposition \ref{3P57} is invariant under transformations: if ${\boldsymbol R}:{\mathcal{S}}\to\hat{\mathcal{S}}$ is a diffeomorphism and \begin{equation*} B({\boldsymbol V})={\boldsymbol R}'({\boldsymbol U})A({\boldsymbol U}){\boldsymbol R}'({\boldsymbol U})^{-1},\quad {\boldsymbol U}={\boldsymbol R}^{-1}({\boldsymbol V}) \end{equation*} then $\lambda_B({\boldsymbol V})=\lambda_A({\boldsymbol U})$ and $\mu_B({\boldsymbol V})=\mu_A({\boldsymbol U})$. Thus, \begin{equation*} \lambda_A'+\mu_A'(\mu_AI-A)=\lambda_B'{\boldsymbol R}'+\mu_B'{\boldsymbol R}'(\mu_BI-A)=(\lambda_B'+\mu_B'(\mu_BI-B)){\boldsymbol R}', \end{equation*} so that the $A$-expression vanishes if the $B$-expression does and vice versa. To state this result concisely, we need the following \begin{definition} \label{r1.D2} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$, $\hat{\boldsymbol F}\in C^2(\hat{\mathcal{S}},{\mathbb{R}}^2)$.
We say that ${\boldsymbol F}$ transforms into $\hat{\boldsymbol F}$ if, for every $\bar{\boldsymbol U}\in{\mathcal{S}}$, there exists an open neighborhood $D\subset{\mathcal{S}}$ of $\bar{\boldsymbol U}$ and a diffeomorphism ${\boldsymbol R}:D\to{\boldsymbol R}(D)\subset\hat{\mathcal{S}}$ such that \begin{equation*} \hat{\boldsymbol F}'({\boldsymbol V})={\boldsymbol R}'({\boldsymbol U}){\boldsymbol F}'({\boldsymbol U}){\boldsymbol R}'({\boldsymbol U})^{-1},\quad {\boldsymbol U}={\boldsymbol R}^{-1}({\boldsymbol V}). \end{equation*} \end{definition} We remark that, if ${\boldsymbol F}$ transforms into $\hat{\boldsymbol F}$ and if ${\boldsymbol U}$ is a smooth solution of \eqref{1} which ranges in the domain of definition $D$ of the diffeomorphism ${\boldsymbol R}$, then ${\boldsymbol V}={\boldsymbol R}({\boldsymbol U})$ is a solution of the transformed system \begin{equation*} \begin{aligned} \phantom{.} & \partial_t V_1+\partial_x \hat F_1({\boldsymbol V})=0,\\ \phantom{.} & \partial_t V_2+\partial_x \hat F_2({\boldsymbol V})=0. \end{aligned} \end{equation*} Note that this implies that the two components of ${\boldsymbol R}$ are entropies. Using Definition \ref{r1.D2}, we can restate our above result on the invariance of $\lambda'+\mu'(\mu I-A)=0$. \begin{proposition} \label{r1.P1} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$, $\hat{\boldsymbol F}\in C^2(\hat{\mathcal{S}},{\mathbb{R}}^2)$. If $\hat{\boldsymbol F}$ is entropic and ${\boldsymbol F}$ transforms into $\hat{\boldsymbol F}$ then ${\boldsymbol F}$ is also entropic. \end{proposition} \subsection{Characterization: Burgers' equation}\label{s2.3} The next characterization of entropic fluxes generalizes a well known property of smooth solutions of the isentropic Euler equation with cubic pressure law: by going over to Riemann invariants as variables, the Euler system decouples into two independent Burgers' equations.
In terms of Definition \ref{r1.D2}, we can say that the Euler flux transforms into the flux $\hat {\boldsymbol F}({\boldsymbol V})=\frac{1}{2}(V_1^2,V_2^2)^{\scriptscriptstyle T}$. The corresponding diffeomorphism is given by the eigenvalues of ${\boldsymbol F}'$ so that the characteristic speeds of entropic systems are entropies themselves. We state this important result separately. \begin{proposition}\quad \label{r2.P1} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ be an entropic flux of the hyperbolic system \eqref{1} and assume $H_1,H_2$ are the eigenvalues of ${\boldsymbol F}'$. Assume further that either $H_1({\boldsymbol U})\not=H_2({\boldsymbol U})$ for all ${\boldsymbol U}\in{\mathcal{S}}$ or $H_1({\boldsymbol U})=H_2({\boldsymbol U})$ for all ${\boldsymbol U}\in{\mathcal{S}}$. Then $H_1,H_2$ are entropies of \eqref{1} with entropy fluxes $H_1^2/2,H_2^2/2$. \end{proposition} \noindent{\bf Proof:} The result is a consequence of (iv) in Proposition \ref{3P57} which states \begin{equation}\label{r2e1} \lambda'+\mu'(\mu I-A)=0 \end{equation} where $\mu={\operatorname{tr}}(A)$, $\lambda=-\det A$. In the case $H_1=H=H_2$ on ${\mathcal{S}}$, we have $\mu=2H$, $\lambda=-H^2$ so that \eqref{r2e1} reduces to \begin{equation*} 0=-2HH'+2H'(2H I-A)=2H'(HI-A). \end{equation*} Hence, $H'A=(H^2/2)'$ so that $H$ is an entropy with entropy flux $H^2/2$. Next, let us turn to the case $H_1({\boldsymbol U})\not= H_2({\boldsymbol U})$ for all ${\boldsymbol U}\in{\mathcal{S}}$ with corresponding right eigenvectors ${\boldsymbol r}_1,{\boldsymbol r}_2$. Clearly, $\mu={\operatorname{tr}}(A)=H_1+H_2$ and $\lambda=-\det A=-H_1H_2$, so that \eqref{r2e1} gives \begin{equation*} (H_1H_1'+H_2H_2')I-(H_1'+H_2')A=0. \end{equation*} Applying the right eigenvectors ${\boldsymbol r}_1,{\boldsymbol r}_2$, we find $(H_2-H_1)H_2'{\boldsymbol r}_1=0$ and $(H_1-H_2)H_1'{\boldsymbol r}_2=0$. Since everywhere $H_1\not= H_2$, we conclude $H_1'{\boldsymbol r}_2=H_2'{\boldsymbol r}_1=0$.
Consequently, if $H_1'({\boldsymbol U}),H_2'({\boldsymbol U})\not=0$, the gradients $H_1'$, $H_2'$ are left eigenvectors of ${\boldsymbol F}'$ with eigenvalues $H_1$ and $H_2$. Finally, the relation $H_1'A=(H_1^2/2)'$ is trivially satisfied if $H_1'({\boldsymbol U})=0$ and otherwise it is a consequence of $H_1'({\boldsymbol U})$ being a left eigenvector. Hence, $H_1$ is an entropy with flux $H_1^2/2$. The same argument applies to $H_2$. \hfill$\diamondsuit$ As an immediate consequence, we state \begin{cor}\quad \label{r1.P2} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ be the entropic flux of a hyperbolic system \eqref{1} and suppose further that ${\boldsymbol F}'$ has only a single eigenvalue $H$. If ${\boldsymbol U}$ is a smooth solution of \eqref{1} then $H({\boldsymbol U})$ solves Burgers' equation. \end{cor} For strictly hyperbolic systems with entropic fluxes, we have the following characterization. \begin{theorem} \label{r1.T2} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ be the flux of a strictly hyperbolic system \eqref{1} and suppose that the derivatives of the eigenvalues never vanish. The following statements are equivalent: \begin{enumerate} \item[i)] ${\boldsymbol F}$ is entropic. \item[ii)] ${\boldsymbol F}$ transforms into $\hat {\boldsymbol F}({\boldsymbol V})=\frac{1}{2}(V_1^2,V_2^2)^{\scriptscriptstyle T}$. \end{enumerate} \end{theorem} \noindent{\bf Proof:} The decoupled flux $\hat{\boldsymbol F}$ is entropic because $(\hat{\boldsymbol F}'({\boldsymbol V}))^2=(\frac{1}{3}(V_1^3,V_2^3)^{\scriptscriptstyle T})'$. Using Proposition \ref{r1.P1}, we thus obtain the implication (ii) $\Rightarrow$ (i). Conversely, if ${\boldsymbol F}$ is entropic, Proposition \ref{r2.P1} implies $H_i'A=H_iH_i'$ so that $H_1'$, $H_2'$ are left eigenvectors of ${\boldsymbol F}'$ with eigenvalues $H_1$ and $H_2$ (stated differently: the eigenvalues $H_1,H_2$ are Riemann invariants of the system).
In particular, ${\boldsymbol H}'$ is invertible and we can use ${\boldsymbol H}=(H_1,H_2)^{\scriptscriptstyle T}$ as a local diffeomorphism. Since the rows $H_1'$, $H_2'$ of ${\boldsymbol H}'$ are left eigenvectors of ${\boldsymbol F}'$, it is clear that \begin{equation*} {\boldsymbol H}'({\boldsymbol U}){\boldsymbol F}'({\boldsymbol U}){\boldsymbol H}'({\boldsymbol U})^{-1}=\begin{pmatrix}H_1({\boldsymbol U}) & 0\\ 0 & H_2({\boldsymbol U})\end{pmatrix}=\hat{\boldsymbol F}'({\boldsymbol H}({\boldsymbol U})), \end{equation*} which shows that ${\boldsymbol F}$ transforms into $\hat{\boldsymbol F}$. \hfill$\diamondsuit$ \subsection{Characterization: Maxwellian}\label{s2.4} We now prove Theorem \ref{momprob} which states that the flux ${\boldsymbol F}$ of a hyperbolic system \eqref{1} is entropic if and only if there exists a Maxwellian ${\boldsymbol M}$ which has the property that, for every $\phi\in C^\infty({\mathbb{R}})$, the functions ${\boldsymbol U}\mapsto \left\langle M_i({\boldsymbol U},v),\phi(v)\right\rangle_v$, $i=1,2$ are entropies of \eqref{1} with fluxes $\left\langle v M_i({\boldsymbol U},v),\phi(v)\right\rangle_v$. Here, $\left\langle \cdot,\cdot\right\rangle$ denotes the dual product on the set $\mathscr{E}'({\mathbb{R}})$ of compactly supported distributions. As in \eqref{r1.2}, the name {\em Maxwellian} refers to the fact that \begin{equation*} \left\langle {\boldsymbol M}({\boldsymbol U}),1\right\rangle={\boldsymbol U},\quad \left\langle {\boldsymbol M}({\boldsymbol U}),v\right\rangle={\boldsymbol F}({\boldsymbol U}),\quad \forall {\boldsymbol U}\in{\mathcal{S}}. \end{equation*} Again, the if-part of the statement is simple. Choosing $\phi(v)=v$, we see that ${\boldsymbol F}=\left\langle {\boldsymbol M},v\right\rangle=\left\langle {\boldsymbol M},\phi\right\rangle$ is a pair of entropies, so that ${\boldsymbol F}$ is entropic. The converse direction is shown by constructing a suitable function ${\boldsymbol M}$.
Using Lemma \ref{3L23} in the appendix with the function $h(s)=\exp(-i\xi s)$ which is analytic for every $\xi\in {\mathbb{R}}$, we conclude that $\hat E({\boldsymbol U},\xi)=\exp(-i\xi A({\boldsymbol U}))\in{\mathcal{P}}(A)$ for every $\xi$. Since $A={\boldsymbol F}'$ has only real eigenvalues due to the hyperbolicity assumption, one can show (see \cite{Jun99b,Jun02h}) that $\hat E({\boldsymbol U},\xi)$ grows at most polynomially in $\xi$. Hence, $\hat E({\boldsymbol U},\xi)$ is a tempered distribution in $\xi$. Consequently, if $\psi$ is any rapidly decaying test function on ${\mathbb{R}}$, the function ${\boldsymbol U}\mapsto \langle \hat E({\boldsymbol U},\xi),\psi(\xi)\rangle_\xi$ is contained in ${\mathcal{P}}(A)$ because the integral $\langle \hat E,\psi\rangle$ can be seen as a locally-uniform limit of a suitable sequence of elements of ${\mathcal{P}}(A)$, e.g. \begin{equation*} \sum_{i=-N}^N \hat E({\boldsymbol U},\xi_i^{(N)})\psi(\xi_i^{(N)})\Delta \xi_i^{(N)}\xrightarrow[N\to\infty]{} \int_{\mathbb{R}} \hat E({\boldsymbol U},\xi)\psi(\xi)\,d\xi. \end{equation*} Finally, an application of the Paley--Wiener theorem implies that for each ${\boldsymbol U}\in{\mathcal{S}}$ the inverse Fourier transform $E({\boldsymbol U})={\mathcal{F}}^{-1}\hat E({\boldsymbol U})$ is a matrix of compactly supported distributions where the size of the support depends on $\|A({\boldsymbol U})\|$ (for details see \cite{Jun99b,Jun02h}). If $K\subset{\mathcal{S}}$ is any compact set, the norm $\|A({\boldsymbol U})\|$ and thus the support of $E({\boldsymbol U})$ is uniformly bounded for ${\boldsymbol U}\in K$.
Hence, by choosing a test function $\psi_K\in C_0^\infty({\mathbb{R}})$ which is equal to one on a suitably large interval, we obtain for every $\phi\in C^\infty({\mathbb{R}})$ and all ${\boldsymbol U}\in K$ \begin{equation*} \left\langle E({\boldsymbol U}),\phi\right\rangle=\left\langle E({\boldsymbol U}),\phi\psi_K\right\rangle=\left\langle \hat E({\boldsymbol U}),{\mathcal{F}}^{-1}(\phi\psi_K)\right\rangle. \end{equation*} Using Lemma \ref{r1.L2} in the appendix, we conclude that $\left\langle E,\phi\right\rangle\in{\mathcal{P}}(A)$ for every $\phi\in C^\infty({\mathbb{R}})$ and since ${\boldsymbol F}$ is entropic, the exactness of $\left\langle E,\phi\right\rangle$ follows. One can then check (see \cite{Jun02h}) that the primitive $\int \left\langle E,\phi\right\rangle$ gives rise to a pair of continuous linear functionals on $C^\infty({\mathbb{R}})$ which we denote by $\tilde {\boldsymbol M}$. We now check that, possibly up to some ${\boldsymbol U}$-independent distribution, the required Maxwellian ${\boldsymbol M}$ is given by $\tilde {\boldsymbol M}$. To obtain the entropy property of $\tilde {\boldsymbol M}$, we apply standard properties of the Fourier transform (see e.g.\ \cite{Hoe83}) to ${\mathcal{F}} E=\hat E$: \begin{equation*} \left\langle E,1\right\rangle=\hat E({\boldsymbol U},0)=I,\quad vE={\mathcal{F}}^{-1}{\mathcal{F}}(vE)={\mathcal{F}}^{-1}(i\partial_\xi \hat E)={\mathcal{F}}^{-1}\hat E A=EA \end{equation*} which implies $\langle \tilde{\boldsymbol M},1\rangle'=I$, and for any $\phi\in C^\infty({\mathbb{R}})$, \begin{equation*} \langle v\tilde{\boldsymbol M},\phi\rangle'=\langle E ,\phi\rangle A=\langle \tilde{\boldsymbol M},\phi\rangle'{\boldsymbol F}'. \end{equation*} Hence, $\langle \tilde{\boldsymbol M},\phi\rangle$ are entropies and $\langle \tilde{\boldsymbol M},1\rangle={\boldsymbol U}+{\boldsymbol C}_1$.
For the particular case $\phi=1$, we conclude that \begin{equation*} {\boldsymbol F}'=\langle \tilde{\boldsymbol M},1\rangle'{\boldsymbol F}'=\langle v\tilde{\boldsymbol M},1\rangle' \end{equation*} so that $\langle \tilde{\boldsymbol M},v\rangle={\boldsymbol F}({\boldsymbol U})+{\boldsymbol C}_2$. Finally, setting \begin{equation*} {\boldsymbol M}({\boldsymbol U},v)=\tilde {\boldsymbol M}({\boldsymbol U},v)-{\boldsymbol C}_1\delta(v)+{\boldsymbol C}_2\delta'(v), \end{equation*} we obtain a Maxwellian for \eqref{1} with the required entropy property. \section{Existence of entropic fluxes}\label{s3} \subsection{Abstract existence result}\label{s3.0} Using Theorem \ref{3C58}, entropic fluxes can be constructed by solving the nonlinear system \eqref{A} for $A={\boldsymbol F}'$. To assess solvability of \eqref{A}, let us first investigate its type. Following \cite{E&S92}, we write the system in quasilinear matrix-vector form \begin{multline}\label{A1} \begin{pmatrix} 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ a_{12} & 0 & 0 & a_{12}\\ 0 & a_{21} & a_{12} & a_{22}-a_{11} \end{pmatrix} \frac{\partial}{\partial U_1} \begin{pmatrix} a_{11}\\a_{12}\\a_{21}\\a_{22} \end{pmatrix}\\ +\begin{pmatrix} -1 & 0 & 0 & 0\\ 0 & 0 & -1& 0\\ a_{22}-a_{11} & -a_{21}& -a_{12}& 0\\ -a_{21}& 0 & 0 & -a_{21} \end{pmatrix} \frac{\partial}{\partial U_2} \begin{pmatrix} a_{11}\\a_{12}\\a_{21}\\a_{22} \end{pmatrix} ={\boldsymbol 0} \end{multline} and set up the determinant \begin{equation*} D({\boldsymbol x})=\det(x_1 B_1+x_2 B_2),\quad {\boldsymbol x}\in{\mathbb{R}}^2, \end{equation*} where $B_1$ and $B_2$ are the matrices in \eqref{A1}.
A straightforward calculation shows that $D({\boldsymbol x})=d({\boldsymbol x},{\boldsymbol x})^2$ with \begin{equation*} d({\boldsymbol x},{\boldsymbol y})=\sum_{i,j=1}^2 \hat A_{ij}x_iy_j,\quad \hat A=\begin{pmatrix} -a_{12} & \frac{1}{2}(a_{22}-a_{11})\\ \frac{1}{2}(a_{22}-a_{11})& a_{21} \end{pmatrix}. \end{equation*} With the notation of Theorem \ref{3C58}, we find that $\det\hat A=-(\mu^2+4\lambda)/4$ which is non-positive if we are looking for hyperbolic systems \eqref{1}. Hence, $\hat A$ is not definite and there must be characteristic vectors ${\boldsymbol 0}\not={\boldsymbol x}\in{\mathbb{R}}^2$ (i.e.\ $D({\boldsymbol x})=0$). Picking a non-characteristic direction ${\boldsymbol \nu}$ and checking the roots of the polynomial $s\mapsto D(s{\boldsymbol \nu}+{\boldsymbol \xi})$, we find that they are real for every ${\boldsymbol \xi}\in{\mathbb{R}}^2$ because the roots involve the square root of $4d({\boldsymbol \nu},{\boldsymbol \xi})^2-4d({\boldsymbol \nu},{\boldsymbol \nu})d({\boldsymbol \xi},{\boldsymbol \xi})=(\nu_1\xi_2-\nu_2\xi_1)^2(\mu^2+4\lambda)\geq0$. Hence, if we select an analytic curve $\Gamma$ in ${\mathbb{R}}^2$ and prescribe analytic values for $A=(a_{ij})$ along $\Gamma$ in such a way that $\mu^2+4\lambda\geq\gamma>0$ and that the normal to $\Gamma$ is not characteristic, the theorem of Cauchy-Kovalevskaya ensures that there exists an open set ${\mathcal{S}}\subset{\mathbb{R}}^2$ and an analytic function $A$ on ${\mathcal{S}}$ which solves \eqref{A1} with the prescribed boundary values. Moreover, with a suitable choice of ${\mathcal{S}}$, the relation $\mu^2+4\lambda>0$ will be satisfied throughout ${\mathcal{S}}$ and in connection with Theorem \ref{3C58}, we conclude that many hyperbolic $2\times 2$ systems with entropic fluxes exist. \subsection{Constant and symmetric solutions}\label{s3.3} Clearly, the most trivial solutions of \eqref{A1} are those with constant $a_{ij}$.
Hence, all linear functions ${\boldsymbol F}({\boldsymbol U})=A{\boldsymbol U}$ are entropic. One can also check the exactness condition directly in that case \begin{equation*} {\boldsymbol F}'({\boldsymbol U})^2=A^2=(A^2{\boldsymbol U})'. \end{equation*} Another simple case where the exactness can be checked directly arises if the $2\times 2$ system \eqref{1} consists of two independent scalar equations, i.e. \begin{equation}\label{dec} {\boldsymbol F}(U_1,U_2)=\begin{pmatrix}F_1(U_1)\\F_2(U_2)\end{pmatrix}. \end{equation} Then $({\boldsymbol F}')^2=\left(\begin{smallmatrix} (F_1')^2 & 0\\0 & (F_2')^2\end{smallmatrix}\right)$ is exact according to the fundamental theorem of calculus. Hence, decoupled fluxes are entropic. It is interesting to note that these simple fluxes are related to situations in which the system \eqref{A} reduces to a linear problem. To see this, we first note that decoupled fluxes are special cases of fluxes ${\boldsymbol F}$ with a potential $\Phi$ (i.e.\ $F_i=\partial_{U_i}\Phi$). In order to check when such fluxes are entropic, we insert the {\em symmetric} matrix $A={\boldsymbol F}'=\Phi''$ into the system \eqref{A1}. Due to symmetry, the first two equations are satisfied and the remaining equations can be written in the form \begin{equation}\label{A4} L_1\Phi (L_2\Phi)'=L_2\Phi(L_1\Phi)' \end{equation} where $L_1=\partial_{U_1}\partial_{U_2}$, $L_2=\partial_{U_2}^2-\partial_{U_1}^2$ and the prime denotes the ${\boldsymbol U}$-gradient. Note that \begin{equation*} L_1\Phi (L_2\Phi+\alpha L_1\Phi)'=L_2\Phi(L_1\Phi)'+\alpha L_1\Phi(L_1\Phi)'= (L_2\Phi+\alpha L_1\Phi)(L_1\Phi)'. \end{equation*} Hence, if $L_1\Phi$ is non-zero at some point $\bar{\boldsymbol U}$, we can choose $\alpha$ such that also $(L_2\Phi+\alpha L_1\Phi)(\bar{\boldsymbol U})\not=0$.
In some suitable neighborhood around $\bar{\boldsymbol U}$, we then have \begin{equation*} \left(\ln (L_2\Phi+\alpha L_1\Phi)-\ln L_1\Phi\right)'=0 \end{equation*} which implies \begin{equation*} L_1\Phi-\frac{1}{e^c-\alpha}L_2\Phi=0 \end{equation*} for some constant $c\in{\mathbb{R}}$ such that $e^c-\alpha\not=0$. Conversely, if $L_1\Phi-\gamma L_2\Phi=0$ for some $\gamma\in{\mathbb{R}}$, then equation \eqref{A4} is obviously satisfied. We thus have shown that, at least locally in ${\boldsymbol U}$, potential fluxes are entropic if and only if the potential $\Phi$ satisfies an equation of the form \begin{equation}\label{A5} \frac{\partial^2\Phi}{\partial U_1\partial U_2}+\gamma\left(\frac{\partial^2\Phi}{\partial U_1^2}-\frac{\partial^2\Phi}{\partial U_2^2}\right)=0. \end{equation} In the case $\gamma=0$, we recover the decoupled fluxes \eqref{dec}. For $\gamma\not=0$, we can write $1/\gamma=\lambda-1/\lambda$ with a unique $\lambda>0$ and in a rotated coordinate system ${\boldsymbol V}=R{\boldsymbol U}$ with \begin{equation*} R=\frac{1}{\sqrt{1+\lambda^2}}\begin{pmatrix}1 & \lambda\\-\lambda & 1\end{pmatrix} \end{equation*} the flux also decouples because $\partial_{V_1}\partial_{V_2}\hat\Phi=0$ where $\hat\Phi({\boldsymbol V})=\Phi(R^{\scriptscriptstyle T}{\boldsymbol V})$. More precisely, the flux $\hat {\boldsymbol F}$ related to $\hat \Phi$ has the form \eqref{dec} and ${\boldsymbol F}({\boldsymbol U})=R^{\scriptscriptstyle T}\hat{\boldsymbol F}(R{\boldsymbol U})$ is the flux in the original variables. Consequently, the assumption that an entropic flux ${\boldsymbol F}$ has a symmetric Jacobian (i.e.\ ${\boldsymbol F}=\Phi'$) implies that the corresponding hyperbolic system decouples after a suitable linear transformation of the unknowns ${\boldsymbol U}$.
\subsection{Solutions in standard form}\label{s3.2} In order to exclude completely decoupled fluxes like ${\boldsymbol F}({\boldsymbol U})=(F_1(U_1),F_2(U_2))^{\scriptscriptstyle T}$ from our considerations, we now assume that \eqref{1} is a system with an entropic flux ${\boldsymbol F}$ whose first component depends non-trivially on $U_2$ (or $F_2$ depends non-trivially on $U_1$ in which case we go over to the new variables $\hat{\boldsymbol U}=(U_2,U_1)$ with corresponding entropic flux $\hat{\boldsymbol F}(\hat{\boldsymbol U})=(F_2(\hat U_2,\hat U_1),F_1(\hat U_2,\hat U_1))^{\scriptscriptstyle T}$). Then, we will have in general that $\partial_{U_2}F_1\not=0$ and we can locally invert the relation \begin{equation}\label{trans} {\boldsymbol V}= \begin{pmatrix} V_1\\V_2\end{pmatrix} =\begin{pmatrix} U_1\\F_1(U_1,U_2)\end{pmatrix}={\boldsymbol R}({\boldsymbol U}). \end{equation} Since ${\boldsymbol F}$ is entropic, we know that $F_1$ is an entropy of \eqref{1} and hence, it is not surprising that $V_2=F_1({\boldsymbol U})$ satisfies a conservation law if ${\boldsymbol U}$ is a smooth solution of \eqref{1}. More precisely, with $\Theta=\int F_1'{\boldsymbol F}'$, we obtain the system \begin{equation}\label{r1.50} \begin{aligned} \phantom{.} & \partial_t V_1+\partial_x V_2=0,\\ \phantom{.} & \partial_t V_2+\partial_x \Theta({\boldsymbol R}^{-1}({\boldsymbol V}))=0 \end{aligned} \end{equation} with flux ${\boldsymbol G}({\boldsymbol V})=(V_2,\Theta({\boldsymbol R}^{-1}({\boldsymbol V})))^{\scriptscriptstyle T}$. Since ${\boldsymbol G}$ transforms into ${\boldsymbol F}$ via the inverse of \eqref{trans}, Proposition \ref{r1.P1} implies that the flux ${\boldsymbol G}$ of \eqref{r1.50} is also entropic. To summarize this result, we introduce the notion of fluxes in {\em standard form}.
\begin{definition}\quad We say that a system \eqref{1} has standard form if the flux function is entropic and has the structure \begin{equation}\label{standard} {\boldsymbol F}({\boldsymbol U})=\begin{pmatrix}U_2\\F_2({\boldsymbol U})\end{pmatrix}. \end{equation} \end{definition} In connection with Definition \ref{r1.D2}, we can restate the above result. \begin{proposition} \label{r1.P3} Let ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ be entropic and assume \eqref{trans} defines a diffeomorphism. Then, the flux ${\boldsymbol F}$ transforms into a flux in standard form. \end{proposition} Let us now consider fluxes of the form \eqref{standard} which implies $a_{11}=0$, $a_{12}=1$, $a_{21}=\lambda$, and $a_{22}=\mu$ in \eqref{A}. These additional assumptions reduce \eqref{A} to the problem \begin{equation}\label{A2.0} \frac{\partial }{\partial U_1} \begin{pmatrix} \mu\\\lambda\end{pmatrix}+ \begin{pmatrix} 0 & -1\\ -\lambda & \mu\end{pmatrix}\frac{\partial }{\partial U_2} \begin{pmatrix} \mu\\\lambda\end{pmatrix}={\boldsymbol 0}. \end{equation} Going over to Riemann invariants $H_1,H_2$ as variables \begin{equation*} H_1=\frac{\mu}{2}-\sqrt{\frac{\mu^2}{4}+\lambda},\quad H_2=\frac{\mu}{2}+\sqrt{\frac{\mu^2}{4}+\lambda} \end{equation*} or equivalently $\mu=H_1+H_2$, $\lambda=-H_1H_2$, we find the diagonal system \begin{equation}\label{A2} \frac{\partial }{\partial U_1} \begin{pmatrix} H_1\\H_2\end{pmatrix}+ \begin{pmatrix} H_2 & 0\\0 & H_1\end{pmatrix}\frac{\partial }{\partial U_2} \begin{pmatrix} H_1\\H_2\end{pmatrix} ={\boldsymbol 0}. \end{equation} Any solution of this $2\times 2$ system on some open, simply connected set ${\mathcal{S}}\subset{\mathbb{R}}^2$ which satisfies $H_2\geq H_1$ gives rise to an entropic flux ${\boldsymbol F}$ as primitive of $A=\left(\begin{smallmatrix} 0 & 1\\ \lambda & \mu\end{smallmatrix}\right)$.
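The passage from \eqref{A2.0} to the diagonal form \eqref{A2} can be checked symbolically; the following SymPy sketch (our addition) verifies that, under $\mu=H_1+H_2$ and $\lambda=-H_1H_2$, the two equations of \eqref{A2.0} are linear combinations of the diagonal equations with the invertible coefficient pairs $(1,1)$ and $(-H_2,-H_1)$, so that both systems are equivalent wherever $H_1\not=H_2$.

```python
# Symbolic check (our addition): with mu = H1 + H2 and lam = -H1*H2,
# the equations of system (A2.0) are linear combinations of the
# diagonal equations (A2) for the Riemann invariants H1, H2.
import sympy as sp

U1, U2 = sp.symbols('U1 U2')
H1 = sp.Function('H1')(U1, U2)
H2 = sp.Function('H2')(U1, U2)
mu, lam = H1 + H2, -H1*H2

# the two equations of (A2.0)
eq1 = sp.diff(mu, U1) - sp.diff(lam, U2)
eq2 = sp.diff(lam, U1) - lam*sp.diff(mu, U2) + mu*sp.diff(lam, U2)

# the diagonal equations (A2)
E1 = sp.diff(H1, U1) + H2*sp.diff(H1, U2)
E2 = sp.diff(H2, U1) + H1*sp.diff(H2, U2)

assert sp.expand(eq1 - (E1 + E2)) == 0
assert sp.expand(eq2 + H2*E1 + H1*E2) == 0
print("(A2.0) reduces to the diagonal system (A2)")
```

The combination matrix $\left(\begin{smallmatrix}1 & 1\\ -H_2 & -H_1\end{smallmatrix}\right)$ has determinant $H_2-H_1$, which explains the role of strict hyperbolicity in the equivalence.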
Note that at points $\bar{\boldsymbol U}\in{\mathcal{S}}$ with $H_2(\bar{\boldsymbol U})>H_1(\bar{\boldsymbol U})$, the discriminant $\mu^2/4+\lambda$ is strictly positive so that \eqref{1} is strictly hyperbolic at $\bar{\boldsymbol U}$. Since general solutions of the nonlinear system \eqref{A2} are still not easily accessible, we restrict ourselves to several special cases. First, we discuss the simple wave solutions. We recall that along such solutions, one of the Riemann invariants $H_1,H_2$ is constant. Assuming, for example, $H_1({\boldsymbol U})=\bar H_1$, the system \eqref{A2} reduces to a linear transport equation for $H_2$ with the solution \begin{equation*} H_2({\boldsymbol U})=G'(U_2-\bar H_1 U_1) \end{equation*} for some $G\in C^1({\mathbb{R}})$ which satisfies $G'\geq \bar H_1$. Going back to the variables $\mu,\lambda$ in \eqref{A2.0} and integrating the resulting matrix $A=\left(\begin{smallmatrix} 0 & 1\\ \lambda & \mu\end{smallmatrix}\right)$, we find the entropic flux \begin{equation*} {\boldsymbol F}({\boldsymbol U})=\begin{pmatrix} U_2\\ G(U_2-\bar H_1 U_1)+\bar H_1 U_2\end{pmatrix}. \end{equation*} Note that for $\bar H_1=0$, the system essentially decouples because $U_1$ is calculated by simple time integration from $U_2$ which solves a scalar conservation law. For $\bar H_1\not=0$, we can apply the linear transformation ${\boldsymbol V}= T{\boldsymbol U}= (U_2-\bar H_1 U_1,U_2)^{\scriptscriptstyle T}$, and find that the corresponding $2\times 2$ system also decouples in the above sense: the transformed flux is $\hat {\boldsymbol F}({\boldsymbol V})=T{\boldsymbol F}(T^{-1}{\boldsymbol V})=(G(V_1),G(V_1)+\bar H_1 V_2)^{\scriptscriptstyle T}$. Next, we study the particular case of solutions which satisfy $H_2=H_1=H$.
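Before doing so, let us briefly verify the decoupling claim for $\bar H_1\not=0$: if ${\boldsymbol U}$ is a smooth solution and ${\boldsymbol V}=T{\boldsymbol U}$ as above, then $\partial_t{\boldsymbol V}+\partial_x T{\boldsymbol F}({\boldsymbol U})={\boldsymbol 0}$ with
\begin{equation*}
T{\boldsymbol F}({\boldsymbol U})
=\begin{pmatrix} F_2({\boldsymbol U})-\bar H_1 U_2\\ F_2({\boldsymbol U})\end{pmatrix}
=\begin{pmatrix} G(V_1)\\ G(V_1)+\bar H_1 V_2\end{pmatrix}
=\hat{\boldsymbol F}({\boldsymbol V}),
\end{equation*}
so that $V_1$ solves the scalar conservation law $\partial_t V_1+\partial_x G(V_1)=0$ and, once $V_1$ is known, $V_2$ solves the linear equation $\partial_t V_2+\bar H_1\,\partial_x V_2=-\partial_x G(V_1)$.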
In this case, the system \eqref{A2} reduces to Burgers' equation for $H$ $$\label{A3} \frac{\partial H}{\partial U_1}+H\frac{\partial H}{\partial U_2}=0.$$ Note that the similarity solution $H(U_1,U_2)=U_2/U_1$ of \eqref{A3} gives rise to the flux of the pressure-less Euler equation: setting $U_1=\rho, U_2=m$, we find with $\mu=2H$ and $\lambda=-H^2$ the flux ${\boldsymbol F}(\rho,m)=(m,m^2/\rho)^{\scriptscriptstyle T}$ as primitive of \begin{equation*} A=\begin{pmatrix} 0 & 1\\ -\frac{m^2}{\rho^2} & 2\frac{m}{\rho} \end{pmatrix}. \end{equation*} Using the implicit representation $H({\boldsymbol U})=\Phi(U_2-U_1 H({\boldsymbol U}))$ of the solution to \eqref{A3} with ``initial'' value $H(0,U_2)=\Phi(U_2)$, we can obtain other solutions as well (note that no shocks develop in \eqref{A3} if $\Phi$ is increasing and $U_1\geq0$). Starting with $\Phi(s)=s$, we find again the pressure-less Euler system if we set $U_1+1=\rho>0$ and $U_2=m$. For $\Phi(s)=s^3/|s|$, we obtain with $U_1=\rho\geq 0$ and $U_2=m$ \begin{equation*} H(\rho,m)={\text{sign}}(m)\left(\frac{|m|}{\rho}+\frac{1}{2\rho^2}-\sqrt{\frac{|m|}{\rho^2}+\frac{1}{4\rho^4}}\right) \end{equation*} which leads to the flux \begin{equation*} {\boldsymbol F}(\rho,m)=\begin{pmatrix} m\\ \frac{1}{6\rho^3}(1+6\rho|m|+6\rho^2m^2-(1+4\rho |m|)^\frac{3}{2}) \end{pmatrix}. \end{equation*} Note that $(1+4\rho |m|)^\frac{3}{2}\approx 1+6\rho |m|+6\rho^2m^2-4\rho^3 |m|^3$ for small $m$ so that ${\boldsymbol F}$ is indeed twice differentiable. Finally, we look for particular solutions of \eqref{A2} in the form $H_{1,2}({\boldsymbol U})=H({\boldsymbol U})\pm h(U_1)$ and find $H({\boldsymbol U})=U_2/(U_1+c)$ and $h(U_1)=\sqrt{\kappa}\,(U_1+c)$ with constants $c\in{\mathbb{R}}$ and $\kappa\geq 0$. Setting $U_1+c=\rho>0$ and $m=U_2$, we find the flux of the isentropic Euler equation with cubic pressure law \begin{equation*} {\boldsymbol F}(\rho,m)=\begin{pmatrix} m\\ \frac{m^2}{\rho}+\frac{1}{3}\kappa \rho^3 \end{pmatrix}.
\end{equation*} \subsection{Isentropic Euler equation}\label{s3.1} In this section, we show by direct calculation that the isentropic Euler equation has an entropic flux function if and only if the pressure is a constant or a cubic function of the density. Note that Theorem \ref{r1.T2} immediately implies the well-known fact that the gas dynamics equations with cubic pressure law can be decoupled into two independent Burgers' equations. This can also be interpreted as a non-interaction property of simple waves. The general form of the isentropic Euler system is $$\label{ff} \begin{gathered} \frac{\partial\rho}{\partial t}+\frac{\partial m}{\partial x}=0\\ \frac{\partial m}{\partial t}+\frac{\partial}{\partial x}\left(\frac{m^2}{\rho}+p(\rho)\right)=0 \end{gathered}$$ where $\rho>0$ is the mass density, $m\in{\mathbb{R}}$ the momentum density, and the pressure $p$ is a given function of $\rho$ which satisfies $p'\geq0$. In particular, the flux function is \begin{equation*} {\boldsymbol F}(\rho,m)=\begin{pmatrix}m\\ \frac{m^2}{\rho}+p(\rho)\end{pmatrix},\quad {\boldsymbol F}'(\rho,m)=\begin{pmatrix}0 & 1\\ -\frac{m^2}{\rho^2}+p'(\rho) & 2\frac{m}{\rho}\end{pmatrix}. \end{equation*} To check that ${\boldsymbol F}$ is entropic, we have to show that $({\boldsymbol F}')^2$ is exact and, since $F_1'{\boldsymbol F}'=F_2'$, it suffices to investigate the exactness of $\omega=F_2'{\boldsymbol F}'$, i.e.\ \begin{equation*} \omega(\rho,m)=\begin{pmatrix}-2\frac{m^3}{\rho^3}+ 2\frac{m}{\rho}p'(\rho) & 3\frac{m^2}{\rho^2}+p'(\rho)\end{pmatrix}. \end{equation*} Since the $(\rho,m)$ domain is simply connected, the exactness of $\omega$ reduces to the condition $\partial_\rho\omega_2=\partial_m\omega_1$, which is satisfied if and only if \begin{equation*} \frac{2}{\rho}p'(\rho)=p''(\rho).
\end{equation*} This condition singles out the cubic pressure laws but also includes the case of constant pressure \begin{equation*} p(\rho)=C+D\rho^3,\quad C,D\geq0 \end{equation*} (for a detailed study of these systems and their relation to kinetic equations, we refer to \cite{Bou94,B&C98,B&G97}). To construct additional entropic fluxes, we can use Theorem \ref{3T55} which states that $\int Q$ is entropic for every $Q\in{\mathcal{P}}({\boldsymbol F}')$ if ${\boldsymbol F}'$ is entropic. Choosing, for example, $Q=({\boldsymbol F}')^2$ or $Q=({\boldsymbol F}')^3$ and writing $\alpha=D$, we obtain the entropic flux functions \begin{equation*} {\boldsymbol F}_2(\rho,m)=\begin{pmatrix} \frac{m^2}{\rho}+\alpha\rho^3\\ 3\alpha\rho^2m+\frac{m^3}{\rho^2} \end{pmatrix},\quad {\boldsymbol F}_3(\rho,m)=\begin{pmatrix} 3\alpha\rho^2m+\frac{m^3}{\rho^2}\\ \frac{9}{5}\alpha^2\rho^5+\frac{m^4}{\rho^3}+6\alpha\rho m^2 \end{pmatrix}. \end{equation*} Setting $u=m/\rho$ and $c=\sqrt{3\alpha\rho^2}$, another example is given by \begin{equation*} Q=\exp({\boldsymbol F}')=\frac{e^u}{c}\begin{pmatrix}c\cosh(c)-u\sinh(c) & \sinh(c)\\ (c^2-u^2)\sinh(c) & c\cosh(c)+u\sinh(c) \end{pmatrix} \end{equation*} which yields the flux \begin{equation*} {\boldsymbol F}_{\exp} (\rho,m)= \frac{e^u}{\sqrt{3\alpha}}\begin{pmatrix} \sinh(c)\\ (u-1)\sinh(c)+c\cosh(c) \end{pmatrix}. \end{equation*} Using $Q=\sinh({\boldsymbol F}')$, we obtain \begin{equation*} {\boldsymbol F}_{\sinh} (\rho,m)= \frac{1}{\sqrt{3\alpha}}\begin{pmatrix} \sinh(u)\sinh(c)\\ \sinh(c)(\sinh(u)u-\cosh(u)) +\cosh(u)\cosh(c)c \end{pmatrix}. \end{equation*} \section{Additional entropies}\label{s4} An immediate consequence of the characterizing Theorem \ref{3T55} is the existence of many entropies for systems with entropic fluxes. \begin{cor} \label{r1.C1} Assume ${\boldsymbol F}\in C^2({\mathcal{S}},{\mathbb{R}}^2)$ is entropic. Then, for every $Q\in{\mathcal{P}}({\boldsymbol F}')$, the functions $(\int Q)_i$, $i=1,2$, are entropies of \eqref{1}.
\end{cor} \noindent{\bf Proof:} If ${\boldsymbol F}$ is entropic, then all $Q\in{\mathcal{P}}({\boldsymbol F}')$ are exact. Since ${\mathcal{P}}({\boldsymbol F}')$ is an algebra, $Q{\boldsymbol F}'$ is also exact and hence $(\int Q)_i$ is an entropy for \eqref{1} with entropy flux $(\int Q{\boldsymbol F}')_i$. \hfill$\diamondsuit$ The result of Corollary \ref{r1.C1} can be extended if the system \eqref{1} admits at least one {\em strictly} convex entropy. \begin{theorem} \label{3T54} Assume \eqref{1} admits at least one strictly convex entropy $\eta_s\in C^2({\mathcal{S}},{\mathbb{R}})$ and has an entropic flux ${\boldsymbol F}$. Then for all convex entropies $\eta\in C^2({\mathcal{S}},{\mathbb{R}})$ of \eqref{1} and all $Q\in{\mathcal{P}}({\boldsymbol F}')$, the function $\int \eta'Q$ is again an entropy. \end{theorem} \noindent{\bf Proof:} We introduce $A={\boldsymbol F}'$ and the flux $\theta_s=\int \eta_s'A$ of the strictly convex entropy $\eta_s$. In order to use an argument similar to the one in \cite{Har83}, we define the transformation ${\boldsymbol w}={\boldsymbol W}({\boldsymbol U}):=\nabla_{\boldsymbol U}\eta_s({\boldsymbol U})$ which is locally invertible because $\eta_s$ is strictly convex. Setting \begin{equation*} r({\boldsymbol w}):= {\boldsymbol w}\cdot {\boldsymbol F}({\boldsymbol U})-\theta_s({\boldsymbol U}),\quad {\boldsymbol U}={\boldsymbol W}^{-1}({\boldsymbol w}) \end{equation*} we conclude that $\nabla_{\boldsymbol w} r={\boldsymbol F}$. Consequently, $\nabla_{\boldsymbol w} {\boldsymbol F}=A\nabla_{\boldsymbol w}{\boldsymbol W}^{-1}$ is, as the Hessian of $r$, symmetric.
Since $\nabla_{\boldsymbol w}{\boldsymbol W}^{-1}$ is also symmetric, being the inverse of the symmetric matrix $\nabla_{\boldsymbol U} {\boldsymbol W}$ (which is the Hessian of $\eta_s$), we conclude \begin{equation*} \nabla_{\boldsymbol w}{\boldsymbol F}=A\nabla_{\boldsymbol w}{\boldsymbol W}^{-1}=(A\nabla_{\boldsymbol w}{\boldsymbol W}^{-1})^{\scriptscriptstyle T} =\nabla_{\boldsymbol w}{\boldsymbol W}^{-1}A^{\scriptscriptstyle T}. \end{equation*} Applying this to the powers $A^n$, we get by induction $$\label{30} A^n\nabla_{\boldsymbol w}{\boldsymbol W}^{-1}=(\nabla_{\boldsymbol w}{\boldsymbol W}^{-1})^{\scriptscriptstyle T}[A^n]^{\scriptscriptstyle T}=(A^n\nabla_{\boldsymbol w}{\boldsymbol W}^{-1})^{\scriptscriptstyle T}.$$ The exactness of $A^n$ allows us to define ${\boldsymbol F}_n=\int A^n$ and we find with \eqref{30} \begin{equation*} \nabla_{\boldsymbol w} {\boldsymbol F}_n=A^n\nabla_{\boldsymbol w}{\boldsymbol W}^{-1}=(\nabla_{\boldsymbol w} {\boldsymbol F}_{n})^{\scriptscriptstyle T}. \end{equation*} Since ${\mathcal{S}}$ is simply connected, we conclude from the symmetry of $\nabla_{\boldsymbol w} {\boldsymbol F}_{n}$ that there exists a function $R_{n}$ such that $\nabla_{\boldsymbol w} R_{n}={\boldsymbol F}_{n}$. Finally, by setting \begin{equation*} \Psi({\boldsymbol U}):= {\boldsymbol w}\cdot{\boldsymbol F}_{n}({\boldsymbol U})-R_{n}({\boldsymbol w}),\quad {\boldsymbol w}={\boldsymbol W}({\boldsymbol U}) \end{equation*} we get from the definitions of $R_{n}$, ${\boldsymbol F}_{n}$, and ${\boldsymbol W}$ that $\Psi'=\eta_s' A^n$. If $\Gamma$ is any closed and piecewise smooth curve in ${\mathcal{S}}$, we thus find \begin{equation*} \int_\Gamma \eta_s'A^n=0. \end{equation*} If $\eta$ is any convex entropy for \eqref{1}, we can make it strictly convex by adding $\eta_s$.
With the same arguments as above, we conclude that for any closed curve $\Gamma$ \begin{equation*} 0=\int_\Gamma (\eta+\eta_s)' A^n =\int_\Gamma \eta' A^n +\int_\Gamma \eta_s' A^n =\int_\Gamma \eta' A^n \end{equation*} so that $\eta' A^n$ is also exact for every $n\in{\mathbb{N}}_0$. In particular, $\int\eta' A^n$ is an entropy because $\eta' A^nA=\eta'A^{n+1}$ is exact. Using Lemma \ref{3P24} in the appendix, the result follows. \hfill$\diamondsuit$ \section{Appendix} We collect some basic properties of the set ${\mathcal{P}}(A)$, which is the locally-uniform closure of all $A$-polynomials over ${\mathbb{C}}$. \begin{lemma} \label{3L23} Let $A:{\mathcal{S}}\to{\mathbb{R}}^{2\times 2}$ be continuous. Then ${\mathcal{P}}(A)$ is a sub-algebra of $C^0({\mathcal{S}},{\mathbb{C}}^{2\times 2})$ which is closed under locally-uniform limits. Moreover, ${\mathcal{P}}(A)$ contains $h(A)$ for all analytic functions $h:{\mathbb{C}}\to{\mathbb{C}}$. If $A$ is diagonalizable, i.e. \begin{equation*} A({\boldsymbol U})=R({\boldsymbol U}){\operatorname{diag}}(\lambda_i({\boldsymbol U}))R^{-1}({\boldsymbol U}) \end{equation*} with continuous functions $R,R^{-1},\lambda_i$, and if $h:{\mathbb{C}}\to{\mathbb{C}}$ is continuous, then the function $h(A)$ defined by \begin{equation*} h(A({\boldsymbol U}))=R({\boldsymbol U}){\operatorname{diag}}(h(\lambda_i({\boldsymbol U})))R^{-1}({\boldsymbol U}),\quad {\boldsymbol U}\in{\mathcal{S}} \end{equation*} is also an element of ${\mathcal{P}}(A)$. \end{lemma} \noindent{\bf Proof:} The set ${\mathcal{P}}(A)$ is an algebra because the locally-uniform limit commutes with all relevant operations. Since $A$-polynomials are continuous on ${\mathcal{S}}$, the same holds for locally-uniform limits, which shows that ${\mathcal{P}}(A)\subset C^0({\mathcal{S}},{\mathbb{C}}^{2\times 2})$. Obviously ${\mathcal{P}}(A)$ is closed because it is defined as a locally-uniform closure.
If $h$ is an analytic function, we define \begin{equation*} h_n(s)=\sum_{j=0}^n\frac{h^{(j)}(0)}{j!}s^j,\quad n\in{\mathbb{N}},\quad \hat h(s)=\sum_{n=0}^\infty\frac{|h^{(n)}(0)|}{n!}s^n. \end{equation*} Setting $Q_n(A)=h_n(A)$, we find on any compact set $K\subset{\mathcal{S}}$ with a bound $|A({\boldsymbol U})|\leq C$ for ${\boldsymbol U}\in K$ that $|Q_n(A({\boldsymbol U}))|\leq \hat h(C)$. Since $Q_n(A)$ forms a Cauchy sequence (in the locally-uniform topology), we get convergence with the limit defined as $h(A)$. Let us now turn to the case of continuous functions $h$ and diagonalizable matrices $A$. If $K\subset {\mathcal{S}}$ is compact, continuity implies the existence of $C>0$ such that $|R({\boldsymbol U})|,|R^{-1}({\boldsymbol U})|,|{\operatorname{diag}}(\lambda_i({\boldsymbol U}))|\leq C$ for all ${\boldsymbol U}\in K$. For given $n\in{\mathbb{N}}$, we can use the theorem of Stone-Weierstra\ss\ to find a polynomial $P_n$ such that \begin{equation*} \sup\{|P_n(s)-h(s)|:|s|\leq C\}<\frac{1}{n}. \end{equation*} Since $P_n(A)=R{\operatorname{diag}}(P_n(\lambda_i))R^{-1}$, we thus have $|P_n(A({\boldsymbol U}))-h(A({\boldsymbol U}))|\leq C^2/n$ for all ${\boldsymbol U}\in K$, which completes the proof. \hfill$\diamondsuit$ A simple consequence of the closedness of ${\mathcal{P}}(A)$ is the following result. \begin{lemma}\quad \label{r1.L2} Let $A\in C^0({\mathcal{S}},{\mathbb{R}}^{2\times 2})$ and $Q:{\mathcal{S}}\to{\mathbb{C}}^{2\times 2}$. If, for every compact subset $K\subset {\mathcal{S}}$, there exists $Q_K\in{\mathcal{P}}(A)$ such that $Q({\boldsymbol U})=Q_K({\boldsymbol U})$ for all ${\boldsymbol U}\in K$, then $Q\in{\mathcal{P}}(A)$. \end{lemma} \noindent{\bf Proof:} We choose the sequence \begin{equation*} K_n=\{{\boldsymbol U}\in{\mathcal{S}}:\|{\boldsymbol U}\|\leq n,\,\operatorname{dist}({\boldsymbol U},\partial{\mathcal{S}})\geq 1/n\},\quad n\in{\mathbb{N}}.
\end{equation*} Then $Q_{K_n}$ converges locally uniformly to $Q$ and since ${\mathcal{P}}(A)$ is closed, the result follows. \hfill$\diamondsuit$ The final result shows that exactness of all powers $A^n$ (and thus of all $A$-polynomials) leads to exactness of each $Q\in{\mathcal{P}}(A)$. The proof basically shows that exactness commutes with locally-uniform limits. \begin{lemma} \label{3P24} Let $\eta\in C^1({\mathcal{S}},{\mathbb{R}})$ and $A\in C^0({\mathcal{S}},{\mathbb{R}}^{2\times 2})$. The following statements are equivalent: \begin{enumerate} \item[i)] $\eta'A^n$ is exact for all $n\in{\mathbb{N}}_0$, \item[ii)] $\eta'Q$ is exact for all $Q\in {\mathcal{P}}(A)$. \end{enumerate} \end{lemma} \noindent{\bf Proof:} Since $A^n\in{\mathcal{P}}(A)$, only (i) $\Rightarrow$ (ii) is non-trivial. Assuming (i), we first note that $\eta'P(A)$ is exact for every polynomial $P$, or in other words, $$\label{3m30} \int_\Gamma \eta' P(A)=0$$ for all polynomials $P$ and all closed, piecewise smooth curves $\Gamma\subset{\mathcal{S}}$. Fixing any such $\Gamma$ and a $Q\in{\mathcal{P}}(A)$, we know that $Q({\boldsymbol U})=\lim_{n\to\infty}P_n(A({\boldsymbol U}))$ for all ${\boldsymbol U}\in\Gamma$ with a uniform bound for $P_n(A({\boldsymbol U}))$. Thus, \begin{equation*} \int_\Gamma \eta' Q=\int_\Gamma \eta' (Q-P_n(A))+\int_\Gamma \eta' P_n(A). \end{equation*} Thanks to \eqref{3m30}, the second integral on the right vanishes and the first one disappears in the limit $n\to\infty$ by Lebesgue's dominated convergence theorem. Hence, closed curve integrals over $\eta'Q$ vanish, which proves (ii). \hfill$\diamondsuit$ \subsection*{Acknowledgement} I want to express my gratitude to F.~Bouchut for reading an earlier version of this paper and giving helpful comments. \begin{thebibliography}{10} \bibitem{AN00} D.~Aregba-Driollet and R.~Natalini.
\newblock Discrete kinetic schemes for multidimensional systems of conservation laws. \newblock {\em SIAM J. Numer. Anal.}, 37:1973--2004, 2000. \bibitem{Bou94} F.~Bouchut. \newblock On zero pressure gas dynamics. \newblock In {\em Perthame, B. (ed.), Advances in kinetic theory and computing: selected papers. Singapore: World Scientific. Ser. Adv. Math. Appl. Sci. 22, 171--190}, 1994. \bibitem{Bou99} F.~Bouchut. \newblock Construction of {BGK} models with a family of kinetic entropies for a given system of conservation laws. \newblock {\em J. Stat. Phys.}, 95:113--170, 1999. \bibitem{BGK} P.~L. Bhatnagar, E.~P. Gross, and M.~Krook. \newblock A model for collision processes in gases. \newblock {\em Phys. Rev.}, 94:511, 1954. \bibitem{B&C98} Y.~Brenier and L.~Corrias. \newblock A kinetic formulation for multi-branch entropy solutions of scalar conservation laws. \newblock {\em Ann. Inst. Henri Poincar\'e, Anal. Non Lin\'eaire}, 15:169--190, 1998. \bibitem{B&G97} Y.~Brenier and E.~Grenier. \newblock Sticky particles and scalar conservation laws. \newblock {\em SIAM J. Numer. Anal.}, 35:2317--2328, 1998. \bibitem{DM94} S.~M. Deshpande and J.~C. Mandal. \newblock Kinetic flux-vector splitting for {E}uler equations. \newblock {\em Comput. Fluids}, 23:447--478, 1994. \bibitem{E&S92} Yu.~V. Egorov and M.~A. Shubin. \newblock {\em Partial Differential Equations I}. \newblock Springer-Verlag, 1992. \bibitem{Har83} A.~Harten. \newblock On the symmetric form of systems of conservation laws with entropy. \newblock {\em J. Comp. Phys.}, 49:151--164, 1983. \bibitem{Hoe83} L.~H\"ormander. \newblock {\em The Analysis of Linear Partial Differential Operators I}. \newblock Springer, 1983. \bibitem{Jun99b} M.~Junk. \newblock A new perspective on kinetic schemes. \newblock {\em SIAM J. Numer. Anal.}, 38:1603--1625, 2000. \bibitem{Jun02h} M.~Junk. \newblock {\em Moment problems in kinetic theory}. \newblock Habilitationsschrift, Universit\"at Kaiserslautern, 2002. \bibitem{LPT94} P.~L.
Lions, B.~Perthame, and E.~Tadmor. \newblock A kinetic formulation of multidimensional scalar conservation laws and related equations. \newblock {\em J. Amer. Math. Soc.}, 7:169--191, 1994. \bibitem{LPT94b} P.~L. Lions, B.~Perthame, and E.~Tadmor. \newblock Kinetic formulation of the isentropic gas dynamics and $p$-system. \newblock {\em Comm. Math. Phys.}, 163:415--431, 1994. \bibitem{P&T91} B.~Perthame and E.~Tadmor. \newblock A kinetic equation with kinetic entropy functions for scalar conservation laws. \newblock {\em Comm. Math. Phys.}, 136:501--517, 1991. \end{thebibliography} \end{document}