\alpha$. For $0<\alpha<1$, we have $n=1$ and (5) becomes $$ D_{\pm,\varepsilon}^{\alpha}f(x)=\frac{-1}{\Gamma(-\alpha)} \int_{\varepsilon}^{+\infty}t^{-\alpha-1} [f(x)-f(x\mp t)]\,dt . $$ In this definition, the relative freedom left to $\Delta^n_{t}$ is useful when $\alpha$ is a complex number. In view of our objective, we will focus on real-valued orders for integrals and derivatives; hence $\lambda_0=0$, $\lambda_1=1$, \dots, $\lambda_n=n$ with $\Delta_{t}^n f(x)=(Id-T_t)^nf(x)$, as used in \cite{Sa}, will be enough. For $\alpha=1$, we have to take $n=2$ in (5) if we want to use this expression, but we can also consider that $D_{\pm}^{\alpha}$ is the usual left- or right-sided derivative of order $\alpha$ when $\alpha$ is a non-negative integer. We thus have a left inverse for $I_{\pm}^\alpha$ on a wider domain, which in some sense is optimal, since it provides a characterization of $I_{\pm}^\alpha L^p$ for $1<p<\infty$: if
$\sup_{\varepsilon>0}\Vert D_{\pm,\varepsilon}^{\alpha}f\Vert_{L^p(\mathbb{R})}$ is finite and if,
moreover, $f$ belongs to $L^r(\mathbb{R})$ with $1\leq r<\infty$, then
$f$ belongs to $I_{\pm}^\alpha L^p(\mathbb{R})$ and there exists
$\varphi$ such that $f(x)=I_{\pm}^\alpha\varphi(x)$ almost everywhere
in $\mathbb{R}$. The theorem was stated in $L^p(\mathbb{R})$, but the proof
adapts without any modification to $L^p]-\infty,a]$ for
$D_+^\alpha$ and to $L^p[a,+\infty[$ for $D_-^\alpha$.
The derivatives $D_\pm^\alpha$ and $\mathcal{D}_\pm^\alpha$
coincide for functions of the form $I_\pm^\alpha\varphi$ with
$\varphi$ in $L^1_{\rm loc}$ such that $I_\pm^{[\alpha]+1}\varphi$ converges
absolutely \cite{Rub}.
Other expressions yield the left inverse of $I_\pm^\alpha$. Among them,
the Gr\"unwald-Letnikov fractional derivative \cite{Sa} of order
$\alpha$ of $f$ is the limit, when the mesh $h$ tends to zero, of
$h^{-\alpha}$ times the series
$\Sigma_{k=0}^{\infty}(-1)^k{\alpha\choose k}f(x-kh)$. It provides
useful approximations to Riemann-Liouville or Marchaud
derivatives, connected with finite-difference numerical schemes.
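The truncated Gr\"unwald-Letnikov sum is easy to test numerically (our sketch, not part of the text): for $f(x)=e^x$, whose left-sided derivative of any order is $e^x$ itself, the weights $(-1)^k{\alpha\choose k}$ follow the recurrence $w_{k+1}=w_k(k-\alpha)/(k+1)$, and the truncated sum equals $e^x\big((1-e^{-h})/h\big)^\alpha$ up to a negligible remainder:

```python
import math

def gl_derivative(f, x, alpha, h, n_terms):
    """Truncated Grunwald-Letnikov sum  h^{-alpha} sum_k (-1)^k C(alpha,k) f(x - k h)."""
    total, w = 0.0, 1.0             # w_0 = 1
    for k in range(n_terms):
        total += w * f(x - k * h)
        w *= (k - alpha) / (k + 1)  # w_{k+1} = w_k (k - alpha)/(k + 1)
    return total / h**alpha

alpha, x = 0.5, 1.0
approx = gl_derivative(math.exp, x, alpha, h=1e-3, n_terms=50000)
print(approx, math.exp(x))
```

As $h$ decreases, the sketch reproduces the known value $e^x$ with an error of order $\alpha h/2$.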
Here we present a further expression for the left inverse of
${I}_\pm^\alpha$, not very different from the Gr\"unwald-Letnikov
operator, since it contains an integral in place of the above
series. Then, we will discuss its physical meaning.
\subsection{A new expression for the inverse of $I_{\pm}^\alpha$}
Here we consider $0<\alpha\leq 1$. With some modifications, the
following adapts to all positive values of $\alpha$.
{\bf Notation} Let $F$ be a function and let $l$ be a positive number. Set
\begin{equation} \label{e6}
\mathcal{W}_{l,\pm}^{\alpha,F}f(x)=l^{-\alpha-1}
\int_{0}^{+\infty}f(x\mp t)F(t/l)dt.
\end{equation}
The limit of $ \mathcal{W}_{l,\pm}^{\alpha,F}f$ when $l$ tends to zero,
if it exists, will be denoted by $ { W}_{\pm}^{\alpha,F}f$.
\begin{itemize}
\item We will say that $F$ satisfies Hypothesis (H1) if $F$ belongs
to $L^1({\mathbb{R}}^+)$ with $\int_0^{\infty}F(t)dt=0$.
\item We will say that $F$ satisfies Hypothesis (H2) if, in a
neighborhood $[A,+\infty[$ of $+\infty$, there exist a function $F_1$
and a constant $\lambda$ such that $\int_A^{+\infty}y^\alpha\vert F_1(y)\vert dy<\infty$ and
$F(x)=F_1(x)+\lambda x^{-\alpha-1}$ for $0<\alpha<1$, but
$F(x)=F_1(x)+\lambda x^{-2-\varepsilon}$ with $\varepsilon>0$ for
$\alpha=1$.
\end{itemize}
We will see that (H1) and (H2) imply that
$ { W}_{\pm}^{\alpha,F}$ is a left inverse to $ { I}_{\pm}^{\alpha}$.
For this purpose, let us consider
$ { W}_{\pm}^{\alpha,F}\circ { I}_{\pm}^{\alpha}$.
Let $\varphi$ belong to $L^p[a,+\infty[$. We have
$$
\mathcal{W}_{l,-}^{\alpha,F}\circ { I}_{-}^{\alpha}\varphi(x)
=\frac{l^{-\alpha-1}}{\Gamma(\alpha)}\int_0^{+\infty}F(t/l)
\int_{x+t}^{+\infty}\varphi(y)(y-x-t)^{\alpha-1}dydt.
$$
Setting $t=lT$ yields
\begin{align*}
\mathcal{W}_{l,-}^{\alpha,F}\circ { I}_{-}^{\alpha}\varphi(x)
&=\frac{l^{-\alpha}}{\Gamma(\alpha)}\int_0^{+\infty}F(T)
\int_{x+lT}^{+\infty}\varphi(y)(y-x-lT)^{\alpha-1}dy\,dT\\
&=\frac{1}{\Gamma(\alpha)}\int_0^{+\infty}F(T)
\int_{T}^{+\infty}\varphi(x+l\theta)(\theta-T)^{\alpha-1}d\theta \,dT,
\end{align*}
with $y=x+l \theta$.
Then, Fubini's theorem yields
\begin{equation} \label{e7}
\mathcal{W}_{l,-}^{\alpha,F}\circ { I}_{-}^{\alpha}\varphi(x)
=\frac{1}{\Gamma(\alpha)}\int_0^{+\infty}\varphi(x+l\theta)
\int_{0}^{\theta}F(T)(\theta-T)^{\alpha-1}\,dT\,d\theta,
\end{equation}
as soon as
$\Gamma(\alpha)I_+^\alpha(HF)(\theta)=\int_{0}^{\theta}F(T)(\theta-T)^{\alpha-1}\,dT$
is integrable in ${\mathbb{R}}^+$, where $H$ denotes Heaviside's function;
this point will be stated in Lemma \ref{lem1} below. On
the right-hand side of \eqref{e7} we thus have $\int_{\mathbb{R}}\varphi(x+l
\theta)(I_+^\alpha(HF))(\theta)d\theta$ which, by \cite[Theorem
1.3]{Sa}, is an approximation of
$\int_0^{+\infty}I_+^{\alpha}(HF)(\theta)d\theta$ times the identity
in $L^p$.
For $\varphi$ in $L^p]-\infty,a]$, instead of \eqref{e7} we have
\begin{equation} \label{e8}
\mathcal{W}_{l,+}^{\alpha,F}\circ { I}_{+}^{\alpha}\varphi(x)
=\frac{1}{\Gamma(\alpha)}\int_0^{+\infty}\varphi(x-l\theta)
\int_{0}^{\theta}F(T)(\theta-T)^{\alpha-1}\,dTd\theta.
\end{equation}
Hence the following Theorem holds.
\begin{theorem} \label{thm1}
Suppose $F$ satisfies hypotheses (H1) and (H2),
with $0<\alpha\leq 1$.
\begin{itemize}
\item[(i)] For $\varphi$ in $L^p[a,+\infty[$,
$ \mathcal{W}_{l,-}^{\alpha,F}\circ { I}_{-}^{\alpha}\varphi$ tends
in $L^p[a,+\infty[$ to $\int_0^{+\infty}I_+^\alpha HF(t)dt\times\varphi$
when $l$ tends to zero, and pointwise at every point where $\varphi$ is right-continuous.
\item[(ii)] For $\varphi$ in $L^p]-\infty,a]$,
$ \mathcal{W}_{l,+}^{\alpha,F}\circ { I}_{+}^{\alpha}\varphi$
tends in $L^p]-\infty,a]$ to $\int_0^{+\infty}I_+^\alpha HF(t)dt\times\varphi$
when $l$ tends to zero, and pointwise at every point where $\varphi$ is left-continuous.
\end{itemize}
\end{theorem}
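Theorem \ref{thm1} can be illustrated numerically (our sketch, not part of the proof). Take $\alpha=1/2$, the function $F=-\frac{1}{\alpha}\chi_{[0,1]}+t^{-\alpha-1}\chi_{[1,+\infty[}$ used in the proof of Lemma \ref{lem1} below, and $\varphi(x)=e^{-x}$, for which $I_-^{\alpha}\varphi=\varphi$. Then $\mathcal{W}_{l,-}^{\alpha,F}\circ I_-^{\alpha}\varphi(x)=l^{-\alpha}e^{-x}\int_0^{+\infty}e^{-lT}F(T)dT$, and a Laplace-transform computation of ours suggests that the limit constant $\int_0^{+\infty}I_+^\alpha HF(t)dt$ equals $\Gamma(-\alpha)$ for this particular $F$:

```python
import math

def simpson(g, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def w_over_phi(l, alpha):
    """W_{l,-}^{alpha,F} phi(x) / phi(x) for phi(x) = exp(-x) and
    F = -(1/alpha) on [0,1] plus t^{-alpha-1} on [1,+oo[ (the F of the lemma)."""
    part1 = -(1.0 - math.exp(-l)) / (l * alpha)        # -(1/alpha) int_0^1 e^{-lT} dT
    u_max = math.log(60.0 / l)                         # T = e^u in int_1^oo e^{-lT} T^{-alpha-1} dT
    part2 = simpson(lambda u: math.exp(-l * math.exp(u) - alpha * u), 0.0, u_max, 200000)
    return (part1 + part2) / l**alpha

alpha = 0.5
# Gamma(-alpha) from the reflection formula
g_minus_alpha = -math.pi / (math.sin(math.pi * alpha) * math.gamma(1.0 + alpha))
print(w_over_phi(1e-2, alpha), w_over_phi(1e-4, alpha), g_minus_alpha)
```

The printed values approach $\Gamma(-1/2)\approx-3.5449$ as $l$ decreases, consistent with an $O(l^{1-\alpha})$ rate.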
It remains to prove the following lemma.
\begin{lemma} \label{lem1}
If $F$ satisfies (H1) and (H2), with $0<\alpha\leq 1$,
then $\int_{0}^{\theta}F(T)(\theta-T)^{\alpha-1}\,dT$ is integrable
in ${\mathbb{R}}^+$.
\end{lemma}
\begin{proof} If $F$ satisfies the assumptions made on $F_1$ in the
statement of hypothesis (H2), \cite[Lemma 4.12]{Rub} shows that
$$
\Gamma(\alpha) I_+^\alpha(HF)(\theta)=\int_0^\theta F(T)
(\theta-T)^{\alpha-1}\,dT
$$
is in $L^1$. It is thus enough to prove the lemma for
$F=-\frac{1}{\alpha}\chi_{[0,1]}+x^{-\alpha-1}\chi_{[1,+\infty[}$
if $\alpha$ is less than $1$, and for
$F=-\frac{1}{1+\varepsilon}\chi_{[0,1]}+x^{-2-\varepsilon}\chi_{[1,+\infty[}$
if $\alpha$ is equal to $1$, since modifying $F_1$ immediately
leads to the general case. For $\alpha=1$, the result is obvious; for
$\alpha$ less than $1$, we have
$$
\int_0^x(x-y)^{\alpha-1}\chi_{[0,1]}(y)dy=\frac{x^\alpha-(x-1)^\alpha}{\alpha}
$$
for $x>1$, and
$$
\int_0^x(x-y)^{\alpha-1}y^{-\alpha-1}\chi_{[1,+\infty[}(y)dy
=x^{-1}(G(1)-G(1/x)+\frac{x^\alpha-1}{\alpha})
$$
when $x$ is large enough, with $G$ being defined by
$G(X)=\int_0^X[(1-z)^{\alpha-1}-1]z^{-\alpha-1}dz$. From this we deduce
\begin{equation} \label{e9}
\begin{aligned}
&\int_0^xF(t)(x-t)^{\alpha-1}dt\\
&=\alpha^{-1}(x^{\alpha-1}-\alpha^{-1}x^\alpha(1-(1-1/x)^\alpha))
+x^{-1}(G(1)-\alpha^{-1})-x^{-1}G(1/x).
\end{aligned}
\end{equation}
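Identity \eqref{e9} lends itself to a direct numerical check (ours, for $\alpha=1/2$ and $x=4$), using the value $G(1)=\alpha^{-1}$ established below; square-root substitutions remove the integrable endpoint singularities:

```python
import math

def simpson(g, a, b, n=20000):
    """Composite Simpson rule with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

alpha, x = 0.5, 4.0

# Left-hand side of (e9): int_0^x F(t)(x-t)^{alpha-1} dt, piecewise;
# the substitution x - t = s^2 removes the singularity at t = x
# (the written integrand is valid for alpha = 1/2, where s^{2*alpha-1} = 1).
piece1 = -(x**alpha - (x - 1)**alpha) / alpha**2
piece2 = 2 * simpson(lambda s: (x - s * s)**(-alpha - 1), 0.0, math.sqrt(x - 1))
lhs = piece1 + piece2

# G(1/x) via z = u^2 (for alpha = 1/2 the integrand tends to 1 at u = 0);
# the middle term of (e9) vanishes because G(1) = 1/alpha.
def g_int(u):
    return 2.0 * ((1.0 - u * u)**(-0.5) - 1.0) / (u * u) if u > 1e-8 else 1.0

rhs = (x**(alpha - 1) - x**alpha * (1 - (1 - 1/x)**alpha) / alpha) / alpha \
      - simpson(g_int, 0.0, x**-0.5) / x
print(lhs, rhs)
```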
Function $\frac{(1-t)^{\alpha-1}-1}{t}$ is continuous and integrable
in $[0,1[$. In the neighborhood of $0$,
$\frac{(1-t)^{\alpha-1}-1}{t}t^{-\alpha}$ is equivalent to
$(1-\alpha)t^{-\alpha}$, hence $G(1/x)$ is equivalent to
$x^{\alpha-1}$ when $x$ is large. Hence $x^{-1}G(1/x)$ is integrable
in a neighborhood of $+\infty$. The same holds for
$\alpha^{-1}[x^{\alpha-1}-\alpha^{-1}x^\alpha(1-(1-1/x)^\alpha)]$.
We will now check that $G(1)-\alpha^{-1}$ is zero. To see this,
set $g(p,q)=\int_0^1((1-t)^{q-1}-1)t^{p-1}dt$. For complex-valued $p$ and $q$ satisfying $Re(p)>0$ and $Re(q)>0$, $\int_0^1(1-t)^{q-1}t^{p-1}dt$ is a beta function \cite{AbSt} and we have
\begin{equation} \label{e10}
g(p,q)=\frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}-\frac{1}{p}.
\end{equation}
Let us fix $q=\alpha$ and let the complex number $p$ vary: $t^p$ is a function
of $p$, whose derivative $t^p{\rm Ln}(t)$ is dominated by the $L^1]0,1[$
function $t^{p_0}\vert {\rm Ln}(t)\vert$ for $Re(p)\geq p_0>-1$, so that,
by dominated convergence, $g(p,\alpha)$ is differentiable with respect to $p$.
Hence it is analytic for $Re(p)\geq p_0>-1$. Since
$\frac{\Gamma(q)}{\Gamma(p+q)}$ is also analytic in the neighborhood of
$0$ while $\Gamma(p)$ has a simple pole with residue $1$ at $p=0$, the right-hand
side of \eqref{e10} is holomorphic for $Re(p)\geq p_0>-1$.
Hence, by analytic continuation, relation \eqref{e10}
holds for $p=-\alpha$; since $1/\Gamma(0)=0$, it gives
$G(1)=g(-\alpha,\alpha)=\alpha^{-1}$, and Lemma \ref{lem1} is proved.
\end{proof}
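The value $G(1)=\alpha^{-1}$ can also be recovered numerically (our sketch): expanding $(1-z)^{\alpha-1}=\Sigma_k c_k z^k$ and integrating term by term gives $G(1)=\Sigma_{k\geq 1}c_k/(k-\alpha)$, whose tail beyond $N$ behaves like $N^{-\alpha}/(\alpha\Gamma(1-\alpha))$ since $c_k\sim k^{-\alpha}/\Gamma(1-\alpha)$:

```python
import math

def G1(alpha, n_terms=300000):
    """G(1) = int_0^1 ((1-z)^{alpha-1} - 1) z^{-alpha-1} dz via termwise integration
    of the binomial series, plus an asymptotic estimate of the remaining tail."""
    c = 1.0       # c_0; c_k = c_{k-1} (k - alpha)/k are the coefficients of (1-z)^{alpha-1}
    total = 0.0
    for k in range(1, n_terms):
        c *= (k - alpha) / k
        total += c / (k - alpha)
    tail = n_terms**(-alpha) / (alpha * math.gamma(1.0 - alpha))  # c_k ~ k^{-alpha}/Gamma(1-alpha)
    return total + tail

alpha = 0.5
print(G1(alpha), 1.0 / alpha)
```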
Therefore Theorem \ref{thm1} holds. It states that the operator
$\mathcal{W}_{l,-}^{\alpha,F}$, which is defined on
$I_-^{\alpha}L^p[a,+\infty[$, has a limit in $L^p[a,+\infty[$
when $l$ tends to zero. Up to multiplication by a function of $F$
and $\alpha$, the limit is a left inverse to $I_-^{\alpha}$, hence
it coincides with $D_-^{\alpha}$. Similarly,
$\mathcal{W}_{l,+}^{\alpha,F}$, defined in
$I_+^{\alpha}L^p]-\infty,a]$, tends to $D_+^{\alpha}$, times a
function of $F$ and $\alpha$. Theorem \ref{thm1} adapts to higher
values of $\alpha$, provided Hypothesis (H1) is made stronger.
For values of $\alpha$ between $0$ and $1$, we will see that Theorem
\ref{thm1} allows us to represent the flux of particles within the
framework of a wide class of Random Walks.
\section{Particles flux for L\'evy Flights, in the macroscopic limit}
Brownian Motion is a particular case of L\'evy Flights. The latter
are Continuous Time Random Walks: a large number of particles
perform a succession of independent jumps, whose lengths $X_i$ are
identically distributed. To be more precise, with $l$ being a
length scale, the density of $X_i/l$ is the normalized
$\alpha$-stable L\'evy density $L_{\alpha,\theta}$ of exponent
$\alpha$ between $1$ and $2$ and with skewness parameter $\theta$
(see Appendix A). Waiting times $T_i$ between successive jumps are
such that the independent random variables $T_i/\tau$ have density
$\psi$, whose average is $1$. Here, for definiteness, we set
$\psi(t)=e^{-t}$. Looking at the cloud of particles from the
macroscopic point of view means that we let the length and time scales
$l$ and $\tau$ tend to zero. Then, if the scaling relation
$l^\alpha/\tau=K$ holds \cite{Com} \cite{Gorsc2}, the probability
of finding a particle in a given interval tends to a limit, which
has a density satisfying a space-fractional diffusion equation
such as \eqref{e17}. This implies that the flux of particles satisfies a
fractional generalization of Fick's law \cite{MainPara}. All these
results are based upon the Generalized Master Equation and Fourier
analysis.
In fact, we will see that Theorem \ref{thm1} connects more
directly particle flux and fractional derivatives.
\subsection{Computing the flux for L\'evy flights with length scale
$l$ and mean waiting time $\tau$ satisfying $l^\alpha/\tau=K$ }
For a given particle, the location after the $n$-th jump is
$\Sigma_{i=0}^n X_i$, and this happens at time $\Sigma_{i=0}^n T_i$.
Let us denote by $\mu(.,t)$ the measure giving the probability
$\mu(I,t)$ that the particle is in interval $I$ at time $t$. With
this notation, the balance of particles crossing abscissa $x$
during $[t,t+dt[$ is the difference of two expressions. The first
one is the probability
$\int_{-\infty}^xF_{\alpha,\theta}^d(\frac{x-y}{l})d\mu(y,t)\frac{\psi(0)}{\tau}dt$ of crossing $x$ to the right, with
$F_{\alpha,\theta}^d(y/l)$ being the probability
$\int_{y}^{+\infty}\frac{1}{l}L_{\alpha,\theta}(z/l)dz
=\int_{y/l}^{+\infty}L_{\alpha,\theta}(z)dz$ for a jump to have
an amplitude of more than $y$.
The second is the probability
$\int_x^{+\infty}F_{\alpha,\theta}^g(\frac{x-y}{l})d\mu(y,t)
\frac{\psi(0)}{ \tau}dt$ of crossing $x$ to the left, with
$F_{\alpha,\theta}^g(-y/l)$ being the probability
$\int_{-\infty}^{-y/l}\frac{1}{l}L_{\alpha,\theta}(z/l)dz$
for a jump to have an amplitude of more than $y$, but to the left.
The flux is the probability rate, hence the following difference:
$$
Kl^{-\alpha}\Big[\int_{-\infty}^xF_{\alpha,\theta}^d(\frac{x-y}{l})d\mu(y,t)
-\int_x^{+\infty}F_{\alpha,\theta}^g(\frac{x-y}{l})d\mu(y,t)\Big].
$$
When $\mu(.,t)$ has density $C(.,t)$, the flux
$Q_l^{\alpha,\theta}C(.,t)(x)$ is given by
\begin{equation} \label{e11}
Q_l^{\alpha,\theta}C(.,t)(x)=Kl^{-\alpha}\Big[
\int_0^{+\infty}C(x-y,t)F_{\alpha,\theta}^d(\frac{y}{l})dy
-\int_0^{+\infty}C(x+y,t)F_{\alpha,\theta}^g(\frac{-y}{l})dy\Big].
\end{equation}
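The structure of \eqref{e11} can be sketched numerically (ours). Since the tails $F_{\alpha,\theta}^d$ and $F_{\alpha,\theta}^g$ of a stable law have no simple closed form, the code below substitutes a hypothetical Pareto-type survival function with the same $y^{-\alpha}$ decay; the values are therefore not those of an actual L\'evy flight, but the balance of right-going and left-going crossings, and the sign of the flux for a symmetric centered concentration profile, are visible:

```python
import math

K, alpha, l = 1.0, 1.5, 0.01

def survival(y):
    """Hypothetical Pareto-type stand-in for the stable-law tail probability."""
    return min(1.0, y**-alpha) if y > 0 else 1.0

def conc(x):
    """Symmetric concentration profile, centered at 0."""
    return math.exp(-x * x)

def flux(x, y_max=12.0, n=40000):
    """Trapezoidal discretization of the flux balance, cf. (e11) in the text:
    right-going minus left-going crossings at abscissa x."""
    h = y_max / n
    total = 0.0
    for i in range(1, n + 1):
        y = i * h
        w = 0.5 if i == n else 1.0
        total += w * (conc(x - y) - conc(x + y)) * survival(y / l)
    return K * l**-alpha * total * h

print(flux(1.0), flux(-1.0))
```

For this symmetric profile the flux is positive at $x=1$ (particles flow from high to low concentration) and, by symmetry, exactly opposite at $x=-1$.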
Both integrals are similar to \eqref{e6}, except that $F_{\alpha,\theta}^d$ and
$F_{\alpha,\theta}^g(-.)$ satisfy (H2) with $\alpha-1$ instead of $\alpha$, according to Appendix A,
but of course not (H1). Appendix B shows that
$\int_0^{+\infty}F_{\alpha,\theta}^d(y)dy
=\int_0^{+\infty}F_{\alpha,\theta}^g(-y)dy
=\mathcal{I}_{\alpha,\theta}$. Hence, with $f_{\alpha,\theta}$
being a compactly supported function of class $L^1$ such that
$\int_0^{+\infty}f_{\alpha,\theta}(y)dy=\mathcal{I}_{\alpha,\theta}$,
setting $\tilde{F}_{\alpha,\theta}^d=F_{\alpha,\theta}^d-f_{\alpha,\theta}$
and $\tilde{F}_{\alpha,\theta}^g(-y)=F_{\alpha,\theta}^g(-y)-f_{\alpha,\theta}(y)$ yields functions satisfying (H1) and (H2)
with $\alpha-1$ instead of $\alpha$. Then, we have
$Q_l^{\alpha,\theta}C(.,t)(x)=Q_{+,l}^{\alpha,\theta}
C(.,t)(x)-Q_{-,l}^{\alpha,\theta}C(.,t)(x)$,
with
\begin{equation} \label{e12}
\begin{aligned}
Q_{+,l}^{\alpha,\theta}f(x)
&=Kl^{-\alpha}\int_0^{+\infty} {F}_{\alpha,\theta}^d(y/l)(f(x-y)-f(x))dy\\
&=K\Big[ (\mathcal{W}_{l,+}^{\alpha-1,\tilde{F}_{\alpha,\theta}^d}f)(x)
+l^{-\alpha}\int_0^{+\infty}f_{\alpha,\theta}(y/l)(f(x-y)-f(x))dy\Big]
\end{aligned}
\end{equation}
at the left of $x$, and
\begin{equation} \label{e13}
\begin{aligned}
Q_{-,l}^{\alpha,\theta}f(x)
&=Kl^{-\alpha}\int_0^{+\infty}{F}_{\alpha,\theta}^g(-y/l)(f(x+y)-f(x))dy \\
&=K\Big[ (\mathcal{W}_{l,-}^{\alpha-1,\tilde{F}_{\alpha,\theta}^g(-.)}f)(x)
+l^{-\alpha}\int_0^{+\infty}f_{\alpha,\theta}(y/l)(f(x+y)-f(x))dy\Big]
\end{aligned}
\end{equation}
at the right. Since $\tilde{F}_{\alpha,\theta}^d$ and
$\tilde{F}_{\alpha,\theta}^g(-.)$ satisfy (H1) and (H2) with $\alpha-1$ instead of $\alpha$,
$(\mathcal{W}_{l,+}^{\alpha-1,\tilde{F}_{\alpha,\theta}^d}f)(x)$ tends to
$\int_0^{+\infty}I^{\alpha-1}_+(H\tilde{F}_{\alpha,\theta}^d)
(y)dy\,D^{\alpha-1}_+(f)(x)$ in $L^p]-\infty,a]$ when $f$ belongs to
$I_+^{\alpha-1}L^p]-\infty,a]$ and
\begin{equation*}
(\mathcal{W}_{l,-}^{\alpha-1,\tilde{F}_{\alpha,\theta}^g(-.)}f)(x)\mbox{
tends
to}\;\int_0^{+\infty}I^{\alpha-1}_+(H\tilde{F}_{\alpha,\theta}^g(-.))(y)dyD^{\alpha-1}_-(f)(x)
\end{equation*}
in $L^p[a,+\infty[$ when $f$ belongs to $I_-^{\alpha-1}L^p[a,+\infty[$.
We will see that appropriately choosing $f_{\alpha,\theta}$ allows us
to see on the right-hand sides of
\eqref{e12} and \eqref{e13} expressions which are
``local fractional derivatives'', in the sense of Kolwankar and Gangal.
\subsection{Kolwankar and Gangal's local fractional derivatives}
The notion of a ``local fractional derivative'' was introduced
in \cite{KolG} in view of building a tool designed for the study of
continuous but nowhere differentiable functions frequently occurring
in Nature and in economics. Those fractional derivatives share some
properties with previously defined ones, such as the chain rule or
a generalized Leibniz rule \cite{BaVa}. They are very useful
to compute fractal dimensions of graphs. In fact, they vanish for
smooth enough functions, and hence can become ``invisible''.
For $q$ between $0$ and $1$, the right-sided Kolwankar and Gangal
\cite{KolG} fractional derivative of order $q$ of a function $f$, computed
at $x$, will be denoted by
$$
D^{KG,q}_+f(x)=\lim_{h\to 0+}\frac{d}{dh}I^{1-q}_{x,+}(f(.)-f(x))(x+h).
$$
Let us suppose that $f$ is continuous in $[x,x+\varepsilon]$, with
positive $\varepsilon$. When the limit exists, it is equal to the
limit, when $h$ tends to $0+$, of
$h^{-1}I^{1-q}_{x,+}(f(.)-f(x))(x+h)$, due to l'H\^opital's rule
and to $\lim_{h\to 0}(I^{1-q}_{x,+}(f(.)-f(x))(x+h))=0$.
Moreover, we have
$h^{-1}I^{1-q}_{x,+}(f(.)-f(x))(x+h)
=\frac{h^{-q}}{\Gamma(1-q)}\int_0^{1}(1-t)^{-q}(f(x+th)-f(x))dt$.
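As an elementary illustration (ours, not from \cite{KolG}), this last expression can be evaluated for the hypothetical test function $f(t)=t^q$ at $x=0$: there $h^{-q}I^{1-q}_{0,+}(f(.)-f(0))(h)=\frac{1}{\Gamma(1-q)}\int_0^1(1-t)^{-q}t^q\,dt=\Gamma(1+q)$, independently of $h$, so that $D^{KG,q}_+f(0)=\Gamma(1+q)$. A sketch for $q=1/2$:

```python
import math

q = 0.5

def kg_value(h, n=100000):
    """h^{-q} I^{1-q}_{0,+}(f(.) - f(0))(h) for f(t) = t^q; the substitution
    u = (1-t)^{1-q} removes the (1-t)^{-q} endpoint singularity."""
    s = 0.0
    for i in range(n + 1):           # trapezoid rule on the substituted integrand
        u = i / n
        w = 0.5 if i in (0, n) else 1.0
        s += w * (1.0 - u**(1.0 / (1.0 - q)))**q
    integral = (s / n) / (1.0 - q)   # = int_0^1 (1-t)^{-q} t^q dt
    return (h**-q) * (h**q) * integral / math.gamma(1.0 - q)

print(kg_value(0.1), math.gamma(1.0 + q))
```

The $h$-dependence cancels exactly, and the printed value matches $\Gamma(3/2)=\sqrt{\pi}/2$.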
At the left, we have
$$
D^{KG,q}_-f(x)=\lim_{h\to 0+}\frac{d}{dh}I^{1-q}_{x,-}(f(x)-f(.))(x-h),
$$
also equal to the limit, when $h$ tends to $0+$, of
$h^{-1}I^{1-q}_{x,-}(f(x)-f(.))(x-h)$.
If, for positive and finite $b-a$ and with $qx\}$ and
$P\{X<-x\}$ for a jump to have an amplitude $X$ larger than $x$,
and directed to the right or to the left. Both probabilities have
to satisfy (H2) with $\alpha-1$ instead of $\alpha$.
Another property of stable laws was
used, in view of (H1): it is the fact that the integrals, over
${\mathbb{R}}^+$, of $P\{X>x\}$ and $P\{X<-x\}$, are equal. This allowed
us to subtract this integral times $f(x)$ from both sides of the
difference, giving the flux, without any net change. In fact, any
Continuous Time Random Walk made of successive independent jumps,
identically distributed according to a random variable $lX$
satisfying both conditions, has a flux whose limit is \eqref{e16} when
$l$ tends to zero, provided mean waiting time $\tau$ exists with
also $l^\alpha/\tau=K$.
\section*{Conclusion}
Among the many objects interpolating between derivatives of integer orders,
several tools, termed fractional derivatives, were designed for
various purposes. Some of them are connected with the idea that
integration and derivation are inverses of each other.
Within this framework, there are several ways to define fractional
derivatives, which are more or less similar to each other. They
are more or less interesting, according to the sets of functions
which we want them to operate on. Among them, the
Gr\"unwald-Letnikov derivatives led to efficient numerical schemes.
Theorem \ref{thm1} indicates a novel definition of fractional derivatives,
not so far from Gr\"unwald-Letnikov's: an integral replaces a
series. It seems to be appropriate to represent fluxes of
particles performing Continuous Time Random Walks satisfying some
hypotheses. Among them, L\'evy flights play an important role,
since stable laws are ubiquitous in Nature. We developed this
point for random walks in a free one-dimensional space, also using
the local derivatives invented by Kolwankar and Gangal for
fractal graphs.
Combining those objects also applies to situations with boundary
conditions, for instance in a half space $\{x\in {\mathbb{R}}/x>0\}$
limited by a wall at $x=0$. There are several possibilities for
the interaction between wall and particles. For instance, we can
imagine that they do not exchange any energy: particles bouncing
on the wall travel the distance they would have flown if there were
no wall, but they stay on the same side. Then, when writing down
the balance of particles crossing abscissa $x$, we have to take
into account that, among random walkers flying to the left (in the
direction of the wall) some of them bounce and come back to the
right of $x$: they have to be excluded from balance
$\mathcal{W}_{l,-}$. If a particle located in $y\in]0,x[$ has to
jump a length larger than $\vert y-2x\vert$ to the left, it
arrives at the right of $x$ and has to be taken into account
in $\mathcal{W}_{l,+}$. By doing this, we obtain that the flux
through $x$ is the sum of two terms. The first one is the flux
corresponding to a concentration profile equal to the even
extension of the actual one, in a free space without any wall. The
second term is proportional to the left-sided Riemann-Liouville or
Marchaud's derivative of order $\alpha-1$, if $\alpha$ denotes the
stability exponent of the jump length distribution of the L\'evy
flight. It contains a factor, which becomes zero when the
distribution is symmetric, in agreement with results \cite{Kre},
previously obtained by an other method.
\section*{Appendix A: Densities of alpha stable L\'evy laws}
Stable laws are a generalization of Gaussian statistics.
In many occasions, and here also, the word ``stable'' refers to
some property, invariant under a definite set of transformations,
as in the following definition.
\noindent{\bf Definition:} Let $X$ be a random variable,
distributed according to the probability law $F$. Random variable
$X$ and law $F$ are said to be stable if \cite{Lev} for every
$(a_1,a_2) \in {\mathbb{R}}^{+2}$ and $(b_1,b_2) \in {\mathbb{R}}^2$, there
exist $a\in {\mathbb{R}}^+$ and $b\in {\mathbb{R}}$ such that
$F(a_1x+b_1)*F(a_2x+b_2)$ (the law of the random variable
$a_1X_1+b_1+a_2X_2+b_2$, with $X_1$ and $X_2$ being independent
and distributed according to $F$) is equal to $F(ax+b)$.
When $F$ is as in the above definition, for any sequence of
independent random variables $X_i$ identically distributed
according to $F$, there exists a sequence $c_n$ of positive
numbers such that $\frac{X_1+\dots+X_n}{c_n}$ is distributed
according to $F$ itself for any positive integer $n$ \cite{Fel}
\cite{GneK}. Moreover, $c_n$ is a power of $n$, and the inverse
$\alpha$ of the exponent belongs to $]0,2]$ and serves as a label
for the law: it is called the stability exponent of the law, which
is said to be $\alpha$-stable. For $\alpha=2$ we have the normal law,
which is symmetric. For $\alpha \in ]0,2[$, stable laws may be
symmetric or skewed. Stable laws play an important role in Nature
because they are attractors, a notion which is defined below.
\noindent{\bf Definition:} Let $F$ be the common probability law of a
sequence of independent random variables $X_n$. The probability
law $G$ is an attractor for $F$ if there exist sequences $A_n$
and $B_n$, with $B_n>0$, such that the law of $\frac{X_1+\dots
+X_n}{B_n}-A_n$ tends to $G$ when $n$ tends to $\infty$
\cite{Fel}.
Loosely speaking, $\alpha$-stable laws are attractors for
probability laws whose density behaves asymptotically as
$x^{-\alpha-1}$ if $\alpha$ belongs to $]0,2[$, while the normal law (with
$\alpha=2$) is an attractor for probability laws whose
asymptotics is $x^{-\alpha'-1}$ with $\alpha'\geq 2$ \cite{Fel}
\cite{GneK}.
Except for some values (e.g. $\alpha=1$ or $2$), the density of a
stable law cannot be given in closed form. But, up to translations
and dilatations, the Fourier transform is
$e^{-\vert k\vert^\alpha e^{i\,{\rm sign}(k)\pi\theta/2}}$. The corresponding density
$L_\alpha^\theta$ satisfies
$L_\alpha^\theta(-x)=L_\alpha^{-\theta}(x)$. Up to dilatations and
translations, two labels determine stable densities: the stability
exponent $\alpha$, and the skewness parameter $ \theta$, which
belongs to $[\alpha-2,2-\alpha]$.
In neighborhoods of $\infty$, except for $\alpha=2$,
$L_\alpha^\theta$ behaves as a negative power of the variable
\cite{Schn} \cite{MaiLuPa}. For $1<\alpha<2$,
$\alpha-2<\theta\leq 2-\alpha $ and $x>A>0$, we have
\begin{equation} \label{e19}
L_\alpha^\theta(x)=\frac{1}{\pi x}\Sigma_{n=1}^{+\infty}(-x^{-\alpha})^n
\frac{\Gamma(1+n\alpha)}{n!}\sin{\frac{n\pi}{2}(\theta-\alpha)}.
\end{equation}
We will denote by $C_\alpha^\theta=\frac{-1}{\pi }\Gamma(1+\alpha)
\sin{\frac{\pi}{2}(\theta-\alpha)}$ the coefficient of the leading
term in expansion \eqref{e19}.
\section*{Appendix B: Integrals of cumulated alpha stable L\'evy laws}
Due to symmetry, the integrals
$\int_0^{+\infty}F_{\alpha,\theta}^d(y)dy$ and
$\int_0^{+\infty}F_{\alpha,\theta}^g(-y)dy$ are equal for
$\theta=0$. In fact, and this point is important for us, this
equality holds for all admissible values of $\theta$. Let us prove
the claim.
First, notice that
$F_{\alpha,\theta}^g(-x)=\int_{-\infty}^{-x}L_\alpha^\theta(y)dy
=\int_x^{+\infty}L_\alpha^{-\theta}(y)dy=F_{\alpha,-\theta}^d(x)$.
Then, we will use Mellin's transform, defined by
$\mathcal{M}\omega(z)=\int_0^{+\infty}t^{z-1}\omega(t)dt$ for a
function $\omega$. With $z=1$ we see that
$\int_0^{+\infty}F_{\alpha,\theta}^d(y)dy=\mathcal{M}F_{\alpha,\theta}^d(1)$,
while we have $F_{\alpha,\theta}^d(x)=I_-^1L_\alpha^{\theta}(x)$,
hence
$\int_0^{+\infty}F_{\alpha,\theta}^d(y)dy
=(\mathcal{M}I_-^1L_\alpha^{\theta})(1)$.
For $1\leq z<\alpha$ and sufficiently well-behaved functions in neighborhoods of $\infty$,
such as $L_\alpha^\theta$, we have
$$
(\mathcal{M}I_-^1\omega)(z)=\frac{\Gamma(z)}{\Gamma(z+1)}
(\mathcal{M}\omega)(z+1),
$$
according to \cite{Rub}, page 44. From this, due to
$F_{\alpha,\theta}^d(x)=\int_x^{+\infty}L_\alpha^\theta(y)dy
=I_-^1L_\alpha^\theta(x)$, we deduce
$$
(\mathcal{M}F_{\alpha,\theta}^d)(z)=\frac{\Gamma(z)}{\Gamma(z+1)}
(\mathcal{M}L_\alpha^\theta)(z+1).
$$
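The transfer formula for $\mathcal{M}I_-^1$ can be checked on a simple test function (our sketch): for $\omega(t)=e^{-t^2}$, the integral $I_-^1\omega(t)=\frac{\sqrt{\pi}}{2}{\rm erfc}(t)$ is known in closed form, and both sides can be computed numerically:

```python
import math

z = 1.3

def mellin(g, z, t_max=8.0, n=80000):
    """Numerical Mellin transform int_0^oo t^{z-1} g(t) dt (truncated at t_max)."""
    h = t_max / n
    s = 0.0
    for i in range(1, n + 1):        # the integrand vanishes at t = 0 for z > 1
        t = i * h
        w = 0.5 if i == n else 1.0   # trapezoid weights
        s += w * t**(z - 1.0) * g(t)
    return s * h

omega = lambda t: math.exp(-t * t)                             # test function
I1_omega = lambda t: 0.5 * math.sqrt(math.pi) * math.erfc(t)   # I_-^1 omega, closed form

lhs = mellin(I1_omega, z)
rhs = (math.gamma(z) / math.gamma(z + 1.0)) * mellin(omega, z + 1.0)
print(lhs, rhs)
```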
The Mellin transform $\mathcal{M}L_\alpha^\theta$ is given in \cite{Schn}:
$$
(\mathcal{M}L_\alpha^\theta)(z)=\frac{1}{\alpha}\frac{\Gamma(z)
\Gamma((1-z)\alpha^{-1})}{\Gamma((1-z)\frac{\alpha-\theta}{2\alpha})
\Gamma(1-(1-z)\frac{\alpha-\theta}{2\alpha})},
$$
which is of the form
\begin{equation} \label{e20}
(\mathcal{M}L_\alpha^\theta)(z)=\frac{1}{\pi\alpha}\Gamma(z)
\Gamma(\frac{1-z}{\alpha})\sin{((1-z)\pi\frac{\alpha-\theta}{2\alpha})}
\end{equation}
due to the complement formula for Gamma functions \cite{AbSt}.
In fact, \cite{Schn} proved \eqref{e20} for $0