1\\ \frac{u^{p}}{u^{p}+(1-u)^{p}}& \quad \mbox{if } 0\le u\le 1. \end{cases} $$ \end{example} Both functions are used to model logistic-type nonlinearities (``sigmoids''). They are non-decreasing, but only the Hill function is strictly monotone for all $y\ge 0$ and thus never reaches the value 1. The logoid function, on the contrary, assumes this value for all $y\ge \theta +(2q)^{-1}$. Some important general properties ((A1)-(A3), (B1)-(B3)) of these two functions are listed below. Notice that (B1)-(B3) can be deduced from (A1)-(A3). Any sigmoid function $\Sigma$ used in the sequel is assumed to satisfy (A1)-(A3). In particular, the results proved are valid for the functions from Examples \ref{Ex.HillFunc}-\ref{Ex.LogoidFunc}. It is convenient to characterize some of these properties in terms of the inverse function $y=\Sigma^{-1}(Z,\theta,q)$, where it exists. Let $Z=\Sigma(y,\theta,q)$ be any function defined for $y\in\mathbb{R}$, $\theta>0$, $0< q< q^0$. We say that $Z=\Sigma(y,\theta,q)$ satisfies Property \begin{itemize} \item[(A1)] If $\Sigma(y,\theta,q)$ is continuous in $(y,q)\in\mathbb{R}\times (0,q^0)$ for all $\theta>0$, continuously differentiable with respect to $y\in \mathbb{R}$ for all $\theta>0$, $0< q< q^0$, and $\frac{\partial}{\partial y}\Sigma(y,\theta,q)>0$ on the set $\{y\in\mathbb{R}: 0<\Sigma(y,\theta,q)<1\}$. \item[(A2)] If $0\le \Sigma(y,\theta,q)\le 1$ for all $y\in\mathbb{R}$, $ q\in(0,q^0)$, $\theta>0$, and in addition, $\Sigma(\theta,\theta,q)=0.5$, $\Sigma(0,\theta,q)=0$, $\Sigma(+\infty,\theta,q)=1$ for all $\theta>0$, $0< q< q^0$. \item[(A3)] If for all $\theta>0$, $\frac{\partial}{\partial Z}\Sigma^{-1}(Z,\theta,q)\to 0$ ($q\to 0$) uniformly on compact subsets of the interval $Z\in(0,1)$, and $\Sigma^{-1}(Z,\theta,q)\to\theta$ ($q\to 0$) pointwise for all $Z\in(0,1)$ and $\theta>0$. \end{itemize} Clearly, (A1)-(A2) imply that $Z=\Sigma(y,\theta,q)$ is non-decreasing in $y\in\mathbb{R}$ and strictly increasing in $y$ on the set $\{y\in\mathbb{R}: 0<\Sigma(y,\theta,q)<1\}$.
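As a quick numerical illustration of Properties (A2) and (B2), one can evaluate a Hill-type sigmoid. The parameterization below, $\Sigma(y,\theta,q)=y^{1/q}/(y^{1/q}+\theta^{1/q})$ for $y\ge 0$, is an assumption made for this sketch only (it is the usual form of the Hill function; cf. Example \ref{Ex.HillFunc}):

```python
import math

# Hill-type sigmoid; the parameterization is an assumption for this sketch:
# Sigma(y, theta, q) = y**(1/q) / (y**(1/q) + theta**(1/q)) for y >= 0.
def sigma(y, theta, q):
    if y <= 0.0:
        return 0.0
    t = (math.log(theta) - math.log(y)) / q   # log form avoids overflow for small q
    return 0.0 if t > 700.0 else 1.0 / (1.0 + math.exp(t))

theta = 2.0
for q in (0.5, 0.1, 0.01):
    assert abs(sigma(theta, theta, q) - 0.5) < 1e-12   # (A2): value 0.5 at the threshold
    assert sigma(0.0, theta, q) == 0.0                 # (A2): vanishes at y = 0

# (B2): pointwise convergence to the step function as q -> 0
print(sigma(1.9, theta, 0.01), sigma(2.1, theta, 0.01))  # close to 0 and to 1
```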
The inverse function $y=\Sigma^{-1}(Z,\theta,q)$ is then defined for $Z\in(0,1)$, $\theta>0$, $0< q< q^0$, where it is strictly increasing in $Z$ and continuously differentiable w.r.t. $Z$. If (A1)-(A3) are satisfied, then the function $Z=\Sigma(y,\theta,q)$ also satisfies the following properties: \begin{itemize} \item[(B1)] If $q\to 0$, then $\Sigma^{-1}(Z,\theta,q)\to\theta$ uniformly on compact subsets of the interval $Z\in (0,1)$ for every $\theta >0$. \item[(B2)] If $q\to 0$, then $\Sigma(y,\theta,q)$ converges pointwise to 1 (if $y>\theta$), to 0 (if $y<\theta$), and to 0.5 (if $y=\theta$) for all $\theta >0$. \item[(B3)] $\frac{\partial}{\partial y}\Sigma(y_n,\theta,q_n)\to +\infty$, whenever $q_n\to 0$ and $\Sigma(y_n,\theta,q_n)\to Z^*$ for some $0<Z^*<1$. Indeed, putting $Z_n=\Sigma(y_n,\theta,q_n)$ one has, by (A1), $\frac{\partial}{\partial y}\Sigma(y_n,\theta,q_n)=\big(\frac{\partial}{\partial Z}\Sigma^{-1}(Z_n,\theta,q_n)\big)^{-1}$, where the $Z_n$ lie in a compact subset of $(0,1)$ for all sufficiently large $n$, so that Property (A3) applies. \end{itemize} It is evident that the functions from Examples \ref{Ex.HillFunc}-\ref{Ex.LogoidFunc} satisfy Properties (A1)-(A3) and hence Properties (B1)-(B3). Property (B2) justifies the following notation for the step function with threshold $\theta$: \begin{example}\label{Ex.StepFunc} $$ \Sigma(y,\theta,0):=\begin{cases} 0 & \mbox{if } y<\theta \\ 0.5 & \mbox{if } y=\theta \\ 1 & \mbox{if } y>\theta \end{cases} $$ \end{example} \begin{remark} \rm This is not exactly what one usually calls the (unit) step, or Heaviside, function, as $\Sigma(\theta,\theta,0)=0.5$. But studying the dynamics of System (\ref{Eq.MainSystem}) with $Z_i=\Sigma(y_i,\theta_i,0)$ one can disregard this difference, as the solutions do not depend on the value of $Z_i$ at $y_i=\theta_i$. On the other hand, this difference is essential for simplifying some definitions: see e.g. Definitions \ref{defRegDomain} and \ref{defSingDomain} in Section \ref{SecFormalization}. \end{remark} \section{Some examples of regulatory systems}\label{SecExamples} First of all, let us consider an example of System (\ref{Eq.MainSystem}) without time-delay (i.e. under the assumption that $\Re_i$ are the identity operators).
The example was introduced in \cite{PlahteKjogl} and contains many important features of general regulatory networks (different types of stationary points, non-trivial basins of attraction, sliding modes etc.). \begin{example}\label{Ex.SystemWithoutDelay} \rm \begin{equation}\label{Eq.PlanarSys} \begin{gathered} \dot{x}_1=Z_1+Z_2-2Z_1Z_2-\gamma_1x_1 \\ \dot{x}_2=1 - Z_1Z_2-\gamma_2x_2. \\ \end{gathered} \end{equation} Let us assume that $\theta_1=\theta_2=1$, $\gamma_1\ge 0$, $\gamma_2>0$ as well as $y_i=x_i$, $Z_i=\Sigma(x_i,\theta_i,0)$ ($i=1,2$) (remember that $y_i=x_i$ means that no delay occurs for the variable $x_i$). \end{example} Now, we consider an example including delays. \begin{example}\label{Ex.PlanarSysDelay} \rm Using the same system as in the previous example, let us again put $y_1=x_1$ (no delay). But the other variable will now be ``delayed'': $y_2(t)=(\Re x_2)(t)$ where \begin{equation}\label{Eq.DelayOperator} (\Re x)(t)=c_0x(t)+\int_{-\infty}^{t}G(t-s)x(s)ds, \quad t \ge 0 \end{equation} for some kernel (``memory function'') $G(u)\ge 0$ defined on $[0,+\infty )$ and $c_0\ge 0$. A simple, yet practically important, example of such a kernel is given by \emph{the weak generic delay kernel} \begin{equation}\label{Eq.WeakGenericKernel} G^1(u)=\alpha e^{-\alpha u}, \quad \alpha >0. \end{equation} Another important example is \emph{the strong generic delay kernel}: \begin{equation}\label{Eq.StrongGenericKernel} G^2(u)=\alpha^2ue^{-\alpha u}, \quad \alpha >0. \end{equation} Taking an arbitrary linear combination $G(u)=c_1G^1(u)+c_2G^2(u)$ ($c_1\ge 0, c_2\ge 0$) with the normalization condition $c_0+c_1+c_2=1$, a more general feedback can be obtained.
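Both kernels have unit mass, $\int_0^{\infty}G^k(u)\,du=1$, which together with the normalization $c_0+c_1+c_2=1$ means that $\Re$ leaves constant functions unchanged. A quick numerical check (the value of $\alpha$ is an arbitrary choice for this sketch):

```python
import math

# Weak and strong generic delay kernels (Eq.WeakGenericKernel)-(Eq.StrongGenericKernel).
def G1(u, a): return a * math.exp(-a*u)
def G2(u, a): return a*a * u * math.exp(-a*u)

def trapz(f, b, n):                     # composite trapezoidal rule on [0, b]
    h = b / n
    return h * (f(0.0)/2 + sum(f(k*h) for k in range(1, n)) + f(b)/2)

alpha = 0.5                             # arbitrary rate for this check
m1 = trapz(lambda u: G1(u, alpha), 60.0, 200000)
m2 = trapz(lambda u: G2(u, alpha), 60.0, 200000)
print(round(m1, 6), round(m2, 6))       # both masses close to 1
```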
In addition to their practical importance, such degenerate kernels are easier to study, as one can apply the so-called ``linear chain trick'' to remove delays and obtain a finite dimensional system of ordinary differential equations, which gives an opportunity to use the technique developed in \cite{Mestl-95, Plahte-98} in the case of the delay system (\ref{Eq.MainSystem}). \end{example} Note that if $c_1=c_2=0$, then automatically $c_0=1$, and one obtains the non-delay case. Let us continue to study System (\ref{Eq.PlanarSys}) from Example \ref{Ex.SystemWithoutDelay}. The regulatory functions are now the step functions. On the set of its continuity the function $Z_i=\Sigma(x_i,\theta_i,0)$ assumes only two values: 0 and 1. This divides the phase space of the system into 4 pieces, called \textit{boxes or regular domains}. By definition these boxes are open subsets of the phase space, as $\Sigma(\theta_i,\theta_i,0)=0.5$ in view of Property (A2). Inside any box, (\ref{Eq.PlanarSys}) becomes a fairly simple diagonal system of affine differential equations: \begin{itemize} \item $\dot{x}_1=-\gamma_1 x_1$, $\dot{x}_2=1-\gamma_2 x_2$ if $Z_1=0, Z_2=0$; \item $\dot{x}_1=1-\gamma_1 x_1$, $\dot{x}_2=1-\gamma_2 x_2$ if $Z_1=1, Z_2=0$; \item $\dot{x}_1=1-\gamma_1 x_1$, $\dot{x}_2=1-\gamma_2 x_2$ if $Z_1=0, Z_2=1$; \item $\dot{x}_1=-\gamma_1 x_1$, $\dot{x}_2=-\gamma_2 x_2$ if $Z_1=1, Z_2=1$. \end{itemize} The main advantage of this approach is that the dynamics inside each box can be calculated explicitly. Each phase trajectory is a curve evolving towards a certain point, as $\gamma_i$ is always positive according to the assumptions on System (\ref{Eq.MainSystem}). This point is usually called \textit{the focal point} of the given box. For example, the point $(x^*_1, x^*_2)$, where $x^*_1=0, x^*_2=\gamma_2^{-1}$, is the focal point of the box where $Z_1=0, Z_2=0$. Any focal point which belongs to the box it corresponds to is a stable stationary point of the system.
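The focal point of each box can be read off directly from the affine systems above: inside a box, $\dot{x}_i=f_i-\gamma_ix_i$ relaxes towards $x_i^*=f_i/\gamma_i$. The following sketch tabulates all four focal points and checks whether each lies inside its own box (the parameter values $\gamma_1=0.6$, $\gamma_2=0.9$ are those considered in the text):

```python
# Focal points of System (Eq.PlanarSys): inside a box with constant (Z1, Z2),
# each equation x_i' = f_i - gamma_i*x_i relaxes towards x_i* = f_i/gamma_i.
def focal_point(Z1, Z2, g1, g2):
    f1 = Z1 + Z2 - 2*Z1*Z2
    f2 = 1 - Z1*Z2
    return (f1/g1, f2/g2)

g1, g2 = 0.6, 0.9          # parameter values considered in the text
for Z1 in (0, 1):
    for Z2 in (0, 1):
        x1, x2 = focal_point(Z1, Z2, g1, g2)
        # the focal point is an RSP iff it lies inside its own box,
        # i.e. x_i > theta_i = 1 exactly when Z_i = 1
        inside = (x1 > 1) == (Z1 == 1) and (x2 > 1) == (Z2 == 1)
        print((Z1, Z2), (round(x1, 3), round(x2, 3)), "RSP" if inside else "no RSP")
```

For these parameter values every focal point falls outside its box, so the system has no regular stationary point.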
For example, if $\gamma_2>1$, then the focal point $(x^*_1, x^*_2)$ of the box, where $Z_1=0, Z_2=0$, belongs to this box. To check this, one simply observes that $x^*_1=0<\theta_1=1, x^*_2=\gamma_2^{-1}<\theta_2=1$ and that $Z_i=0$ holds if and only if $x_i<\theta_i$. Thus, as soon as a trajectory enters this box, it will approach the stable stationary point $(0, \gamma_2^{-1})$. For the focal points lying in the boxes they correspond to, one uses the expression ``regular stationary points'' (RSP) (see \cite{Thomas}). However, if (for instance) $\gamma_1=0.6$, $\gamma_2=0.9$, then all focal points will lie outside the respective boxes. In such a case, the system does not have any RSP, and its dynamics becomes more complicated. To understand it better, one needs to classify the sets between adjacent boxes. There are 5 such sets (called ``singular domains'' \cite{PlahteKjogl}) in the example: 4 sets of codimension 1 (called ``walls''; in this case they are open intervals), where one of the variables assumes its threshold value, while the other does not, and 1 set of codimension 2 ($x_1=\theta_1, x_2=\theta_2$), which is the only common limit point of all 4 boxes. One needs an appropriate notation to describe regular domains (i.e. boxes) and singular domains (walls etc.). The problem is fixed in a convenient way (see e.g. \cite{Mestl-95}, \cite{Plahte-98}, \cite{PlahteKjogl}) if one assigns to any coordinate $x_i$ one of the Boolean values $\mathbf{1}$ (for YES) or $\mathbf{0}$ (for NO). Given a box, one asks whether the coordinate $x_i$ is bigger than its threshold value $\theta_i$ inside this box. If the answer is YES, then the value $\mathbf{1}$ is assigned to it, if NO, then it will be $\mathbf{0}$. Alternatively, one can use the corresponding values of $Z_i$, which is even easier: $\mathbf{0}$ (resp. $\mathbf{1}$) corresponds to $Z_i=0$ (resp. $Z_i=1$). This gives rise to the mapping $\beta: \mathbf{0}\mapsto 0;\ \mathbf{1} \mapsto 1$, which extends to the set of all Boolean vectors $\{\mathbf{0}, \mathbf{1}\}^p$ in a natural way.
To any 2-dimensional Boolean vector $ B=( B_1, B_2)\in \{\mathbf{0}, \mathbf{1}\}^2$ one associates the box $\mathcal{B}( B)$, where $Z_i$ takes the value $\beta( B_i)$. For example, $Z_1=0$ and $Z_2=1$ in the box $\mathcal{B}(\mathbf{0}, \mathbf{1})$, or equivalently, $x_1<\theta_1$, $x_2>\theta_2$. To describe walls (i.e. singular domains of codimension 1) between two adjacent boxes one should simply replace the only Boolean variable, which is different for the two boxes, by the corresponding threshold value. For instance, the wall between $\mathcal{B}(\mathbf{0}, \mathbf{1})$ and $\mathcal{B}(\mathbf{1}, \mathbf{1})$ is denoted by $\mathcal{SD}(\theta_1, \mathbf{1})$. For this wall one has $x_2>\theta_2$ (as $Z_2=1$) and $x_1=\theta_1$. If the threshold values are fixed, then $\theta_i$ can be skipped (this notation is used in \cite{PlahteKjogl}). Example \ref{Ex.SystemWithoutDelay} displays why it is convenient to distinguish between Boolean values and numbers: the walls $\mathcal{SD}(1, \mathbf{1})$ and $\mathcal{SD}(\mathbf{1}, 1)$ are different. Singular domains of higher codimension can be described in a similar way. The only singular domain of codimension 2 in the above example is $\mathcal{SD}(1,1)$. There exist three different types of walls: transparent, white and black (this terminology was introduced in \cite{Plahte-94}). If the focal points of two adjacent boxes are not separated by the wall between them (i.e. the focal points belong to the same half-space), then the trajectories hit the wall from one side and depart from it on the other side. One calls such a wall \textit{transparent}. For System (\ref{Eq.PlanarSys}) with $\gamma_1=0.6$ and $\gamma_2=0.9$ the wall $\mathcal{SD}(\mathbf{0},1)$ is transparent, because the focal points of $\mathcal{B}(\mathbf{0},\mathbf{0})$ and $\mathcal{B}(\mathbf{0},\mathbf{1})$ are $(0, 10/9)$ and $(5/3, 10/9)$, respectively, and both belong to the half-plane $x_2>\theta_2=1$. The trajectories hit the wall from below and then stay in the box $\mathcal{B}(\mathbf{0},\mathbf{1})$ until they hit the wall $\mathcal{SD}(1, \mathbf{1})$.
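The transparency just described can be confirmed programmatically: a wall between two adjacent boxes is transparent precisely when both focal points lie strictly on the same side of it. A sketch for the wall between the boxes with $(Z_1,Z_2)=(0,0)$ and $(0,1)$:

```python
# Focal points of (Eq.PlanarSys) for gamma_1 = 0.6, gamma_2 = 0.9 (cf. the text).
def focal(Z1, Z2, g1=0.6, g2=0.9):
    return ((Z1 + Z2 - 2*Z1*Z2)/g1, (1 - Z1*Z2)/g2)

# The wall between the boxes with (Z1, Z2) = (0, 0) and (0, 1) lies in x2 = 1.
# It is transparent iff both focal points lie on the same side of this wall.
p_below, p_above = focal(0, 0), focal(0, 1)
same_side = (p_below[1] > 1) == (p_above[1] > 1)
print(p_below, p_above, "transparent" if same_side else "not transparent")
```

Both focal points, $(0,10/9)$ and $(5/3,10/9)$, indeed lie in the half-plane $x_2>1$, so the wall is transparent.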
Assume now that the focal points lie in different half-spaces. If each focal point belongs to the half-space which also contains its associated box, then the wall is \textit{white}. In this case, the trajectories never hit the wall, as the focal points are attractors. \textit{Black} walls correspond to the opposite situation: each focal point and its associated box lie in different half-spaces. Therefore the trajectories hit the black wall from either side. Black walls therefore contain potential attractors for the dynamics of the system. For System (\ref{Eq.PlanarSys}) with $\gamma_1=0.6$ and $\gamma_2=0.9$ the wall $\mathcal{SD}(1, \mathbf{0})$ is white, as the focal point $(5/3, 10/9)$ of the box $\mathcal{B}(\mathbf{1}, \mathbf{0})$ is contained in the half-plane $\{x_1>1\}$ and the focal point $(0, 10/9)$ of the box $\mathcal{B}(\mathbf{0}, \mathbf{0})$ is contained in $\{x_1<1\}$. It is easy to see that the two remaining walls in this example are black. Now we are ready to explain why regulatory systems can have stationary points which are not RSP. This is because the system can have stationary regimes that are \textit{hidden} in black walls. When the corresponding step function is replaced by a smooth, steep sigmoid, it cannot be excluded that the resulting smooth system has stationary points close to such a wall. If the steepness parameter goes back to zero, then such a stationary point can again disappear into the black wall (just because the wall is ``black''). Unfortunately, the limit system itself provides no information about such hidden stationary points, as the right-hand side of the system is discontinuous on the walls. Following \cite{Thomas} let us call such points \textit{singular stationary points} (SSP). Examples show that SSP do exist. For instance, in \cite{PlahteKjogl} it is shown that the point $(3/2,1)$ in the black wall $\mathcal{SD}(\mathbf{1}, 1)$ is an asymptotically stable SSP for System (\ref{Eq.PlanarSys}) with $\gamma_1=0.6$ and $\gamma_2=0.9$.
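The stability of this SSP can be illustrated numerically by smoothing the steps: replacing each $\Sigma(\cdot,1,0)$ by a steep Hill-type sigmoid (the exponent 200, i.e. a small steepness parameter, is an arbitrary choice for this sketch) and integrating (\ref{Eq.PlanarSys}), the trajectory settles near the hidden point:

```python
import math

# Steep Hill-type sigmoid with threshold 1, smoothing Sigma(., 1, 0);
# the exponent 200 (small steepness parameter q) is an arbitrary choice.
def hill(y, p=200.0):
    if y <= 0.0:
        return 0.0
    t = -p * math.log(y)           # = p*log(theta/y) with theta = 1
    return 0.0 if t > 700.0 else 1.0 / (1.0 + math.exp(t))

def rhs(x1, x2, g1=0.6, g2=0.9):   # right-hand side of (Eq.PlanarSys)
    Z1, Z2 = hill(x1), hill(x2)
    return (Z1 + Z2 - 2*Z1*Z2 - g1*x1, 1 - Z1*Z2 - g2*x2)

# forward-Euler integration from a point above the black wall
x1, x2, dt = 1.3, 1.1, 0.005
for _ in range(40000):             # up to t = 200
    d1, d2 = rhs(x1, x2)
    x1, x2 = x1 + dt*d1, x2 + dt*d2

print(round(x1, 2), round(x2, 2))  # settles close to the SSP (3/2, 1)
```

For a finite steepness the equilibrium sits slightly off the wall; it approaches $(3/2,1)$ as the sigmoid approaches the step function.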
Although SSP are always contained in sets of lower dimension, it is due to strongly nonlinear interactions in the system that such points may not disappear under small perturbations. Stability analysis around such ``invisible'' points is even more demanding. The challenge of the present paper is to show how SSP can be studied in the case of time-delay (of a special kind). We now look at Example \ref{Ex.PlanarSysDelay}. The main idea is to replace System (\ref{Eq.PlanarSys}) with distributed delay (\ref{Eq.DelayOperator}) by a system of ordinary differential equations. This is done by the linear chain trick described in Section \ref{SecLinearChainTrick}. It is assumed in the example that $G(u)=c_1G^1(u)+c_2G^2(u)$, where $c_0+c_1+c_2=1$, $c_i\ge 0$ ($i=0,1,2$). In other words, the delayed feedback $y_2$ is given by \begin{equation}\label{Eq.Y_2} y_2=c_0x_2+c_1w_1+c_2w_2, \end{equation} where $$ w_1(t)=\alpha \int_{-\infty}^{t}e^{-\alpha (t-s)}x_2(s)ds, \quad w_2(t)=\alpha^2 \int_{-\infty}^{t}(t-s)e^{-\alpha (t-s)}x_2(s)ds. $$ It is easily seen that $\dot{w}_1=-\alpha w_1+\alpha x_2$ and $\dot{w}_2=\alpha w_1-\alpha w_2.$ These formulae are used in the classical linear chain trick. For the purposes of the present paper, this trick can only be utilized in a modified form, because $Z_2$ should, as before, depend on a single variable. This modification and its justification are also offered in Section \ref{SecLinearChainTrick}. In the example, let us just formally change the variables as follows: $v_2=c_2w_1$, $y_2=c_0x_2+c_1w_1+c_2w_2$ (the index ``2'' for the variables $v_2$ and $y_2$ is chosen to stress that we are considering the second, i.e. ``delayed'', variable; this notation is also consistent with that used in the general case -- see Section \ref{SecDelayFormalization}).
By this, the following system of ordinary differential equations is derived: \begin{equation}\label{Eq.PlanarSystemTricked} \begin{gathered} \dot{x}_1=Z_1+Z_2-2Z_1Z_2-\gamma_1x_1 \\ \dot{x}_2= 1-Z_1 Z_2-\gamma_2x_2 \\ \dot{y}_2=c_0(1-Z_1 Z_2-\gamma_2x_2)+\alpha (c_0+c_1)x_2 -\alpha y_2+\alpha v_2,\\ \dot{v}_2=\alpha c_2x_2 -\alpha v_2, \end{gathered} \end{equation} where $Z_i=\Sigma(y_i,\theta_i,0)$ ($i=1,2$). This system is equivalent to System (\ref{Eq.PlanarSys}) with $y_1=x_1$ (i.e. no delay in $x_1$) and $y_2(t)=c_0x_2(t)+\int_{-\infty}^{t}(c_1G^1(t-s)+c_2G^2(t-s))x_2(s)ds$. Among the system's four variables $x_1, x_2, y_2, v_2$ only the first and the third have thresholds $\theta_1=\theta_2=1$. Thus, the system, as in the non-delay Example \ref{Ex.SystemWithoutDelay}, has 4 boxes: \begin{itemize} \item $\mathcal{B}(\mathbf{0},\mathbf{0})$ where $Z_1=Z_2=0$; \item $\mathcal{B}(\mathbf{0},\mathbf{1})$ where $Z_1=0, Z_2=1$; \item $\mathcal{B}(\mathbf{1},\mathbf{0})$ where $Z_1=1, Z_2=0$; \item $\mathcal{B}(\mathbf{1},\mathbf{1})$ where $Z_1=Z_2=1$. \end{itemize} However, these boxes are now 4-dimensional due to the additional variables. In each box, (\ref{Eq.PlanarSystemTricked}) becomes an affine system. Each system governing the dynamics inside a box consists of two parts. One part is formally identical with the system in the same box for the corresponding system without delay (the first two equations in (\ref{Eq.PlanarSystemTricked})). The other contains the additional variables and coincides with the last two equations in (\ref{Eq.PlanarSystemTricked}). The focal points are: \begin{itemize} \item $x^*_1=0, x^*_2=\gamma_2^{-1}, y^*_2=\gamma_2^{-1}, v^*_2=c_2\gamma_2^{-1}$ for $\mathcal{B}(\mathbf{0},\mathbf{0})$; \item $x^*_1=\gamma_1^{-1}, x^*_2=\gamma_2^{-1}, y^*_2=\gamma_2^{-1}, v^*_2=c_2\gamma_2^{-1}$ for $\mathcal{B}(\mathbf{0},\mathbf{1})$; \item $x^*_1=\gamma_1^{-1}, x^*_2=\gamma_2^{-1}, y^*_2=\gamma_2^{-1}, v^*_2=c_2\gamma_2^{-1}$ for $\mathcal{B}(\mathbf{1},\mathbf{0})$; \item $x^*_1=x^*_2=y^*_2=v^*_2=0$ for $\mathcal{B}(\mathbf{1},\mathbf{1})$.
\end{itemize} Evidently, these points would be asymptotically stable for the respective systems if the latter were defined in the entire $\mathbb{R}^4$. Hence the trajectories inside each box will, exactly as in the non-delay case, evolve towards the corresponding focal point until they hit a wall. It also means that if the focal point belongs to the box it is assigned to, then this focal point becomes an asymptotically stable stationary point for the system (i.e. a stable RSP). As in the non-delay case, the walls (which are 3-dimensional now) can be black, white or transparent. Compared to Example \ref{Ex.SystemWithoutDelay}, some qualitative changes can be registered in the case with time-delay. To give an impression of what can happen in the delay case, let us assume that $\gamma_1=0.6, \gamma_2=0.9$ as well as $0\le c_0\le 1$. Consider the wall $\mathcal{SD}(\mathbf{1},1)$, where $y_2=\theta_2=1$ and $Z_1=1$. A point of this wall with $x_1> 1, x_2>0$ and $v_2=c_2x_2$ will be a point of attraction, if at this point $\dot{y_2}<0$ for $Z_1=1, Z_2=1$ and $\dot{y_2}>0$ for $Z_1=1, Z_2=0$. Clearly, these assumptions are only fulfilled, if $-\alpha+\alpha x_2 -c_0\gamma_2 x_2<0$ and $-\alpha+\alpha x_2 -c_0\gamma_2 x_2+c_0>0$. This does not hold for all $x_2$. For instance, if $x_2=\theta_2=1$, then both inequalities are fulfilled. However, if $x_2$ lies far enough from the threshold value for $y_2$, then one of the conditions is violated, and the wall becomes transparent, yet never white, as long as $c_0>0$. If $c_0=0$, this wall is never black either (only transparent). This example illustrates the difference from the non-delay case, where one has: ``once black (white, transparent), always black (white, transparent)''. In the presence of delays a wall, on the contrary, can be partly black, partly transparent. On the other hand, the walls corresponding to the non-delay variables ($x_1$ in the example) cannot change their type. This fact is proved in Proposition \ref{prop2}. It is natural to expect that stationary points for System (\ref{Eq.MainSystem}), with or without delay, are the same.
This is of course the case for smooth dynamical systems. It is shown below (Proposition \ref{prop0} and Theorem \ref{thSSP}, respectively) that both RSP and SSP in the delay and non-delay situations coincide, too, at least under some additional assumptions. Let us now pause to describe the linear chain trick and its modification. This modification will be applied in the forthcoming sections to formalize the model which includes time-delays. \section{The linear chain trick and its modifications}\label{SecLinearChainTrick} This is a well-known algorithm (see e.g. \cite{Mc}) to reduce delay differential equations to finite dimensional systems of ordinary differential equations. It is widely used in numerical and statistical analysis of delay differential equations. The method consists in representing the delay operator $\Re$ as a finite linear combination of ``elementary'' delay operators which admit a simple description. This section is organized as follows: the classical version of the ``trick'' is first described, giving a system of ordinary differential equations in a matrix form. The latter is then used to construct the ``trick'' in a modified form, which is needed for the purposes of the paper. Both versions are, in fact, special cases of the generalized linear chain trick, which also covers infinite sums, non-smooth delay kernels etc. Its justification can be found in \cite{MPS}. In this section, the following scalar nonlinear delay differential equation is considered: \begin{equation}\label{eq.basic} \dot{x}(t)=f(t, x(t), (\Re x)(t)), \quad t > 0 \end{equation} with the initial condition \begin{equation}\label{prehist} x(\tau)=\varphi (\tau), \quad \tau \le 0. \end{equation} The function $f(\cdot , \cdot, \cdot): \, [0,\infty)\times \mathbb{R}^2 \to \mathbb{R}$ has three properties. \begin{itemize} \item[(C1)] The function $f(\cdot,u,v)$ is measurable for any $u,v\in\mathbb{R}$.
\item[(C2)] The function $f(\cdot,0,0)$ is bounded: $|f(t,0,0)|\le C$ ($t\ge 0$) for some constant $C$. \item[(C3)] The function $f$ is Lipschitz: there exists a constant $L$ such that \begin{equation}\label{f2} |f(t,u_1,v_1)-f(t,u_2,v_2)|\le L (|u_1-u_2|+|v_1-v_2|) \end{equation} for all $t\ge 0$, $u_i,v_i \in \mathbb{R}$. \end{itemize} Note that these three conditions imply that $|f(t,u,v)|\le L(|u|+|v|)+C$ for any $t \ge 0$ and $u,v\in\mathbb{R}$. The initial function $\varphi$ is bounded and continuous. The integral operator $\Re$ is assumed to be as follows: \begin{equation}\label{operator} (\Re x)(t)=\int_{-\infty}^{t}G(t-s)x(s)ds, \quad t>0, \end{equation} where \begin{gather}\label{trickFuncG} G(u)=\sum_{\nu=1}^{p}c_{\nu}G_{\alpha}^{\nu}(u)\,,\\ \label{trickFuncG_j} G_{\alpha}^{\nu}(u)=\frac{\alpha^{\nu} u^{{\nu}-1}}{({\nu}-1)!}e^{-\alpha u}. \end{gather} The coefficients $c_{\nu}$ are real numbers, and it is also assumed that $\alpha >0$. Note that the operator (\ref{operator}) is a particular case of the operator (\ref{Eq.DelayOperator}) with $c_0=0$. If the initial function is defined on a finite interval $[-H,0]$, then one can put $x(\tau)=0$ for $\tau < -H$. The assumptions on $\Re$ exclude delay terms like $x(t-h)$. The linear chain trick is therefore only applicable to certain classes of differential equations with sufficiently smooth, distributed delay kernels. The functions $G_{\alpha}^{\nu}$ have the following properties: \begin{equation}\label{TrickPropertiesG} \begin{gathered} G_{\alpha}^{\nu}(\infty)=0, \\ G_{\alpha}^{\nu}(0)=0\quad ({\nu}\ge 2), \\ G_{\alpha}^{1}(0)=\alpha \,. \end{gathered} \end{equation} It is also straightforward to show that \begin{equation}\label{TrickPropertiesG-1} \begin{gathered} \frac{d}{du}G_{\alpha}^{\nu}(u)=\alpha G_{\alpha}^{{\nu}-1}(u) - \alpha G_{\alpha}^{\nu}(u) \quad ({\nu}\ge 2)\\ \frac{d}{du}G_{\alpha}^{\nu}(u)=-\alpha G_{\alpha}^{\nu}(u) \quad ({\nu}=1).
\end{gathered} \end{equation} Hence, putting \begin{equation}\label{trickVariables-z} v_{\nu}(t)=\int_{-\infty}^{t}G_{\alpha}^{\nu}(t-s)x(s)ds \quad ({\nu}=1, 2,\dots , p) \end{equation} yields \begin{equation}\label{TrickEq-x} %\begin{array}{ll} (\Re x)(t) =\int_{-\infty}^{t}\sum_{{\nu}=1} ^{p}c_{{\nu}}G_{\alpha}^{{\nu}}(t-s)x(s)ds = \sum_{{\nu}=1}^{p}c_{{\nu}}v_{{\nu}}(t), %\end{array} \end{equation} so that \begin{equation}\label{Eq.TrickPart1} \dot{x}(t)=f(t, x(t), \sum_{{\nu}=1}^{p}{c_{\nu}v_{\nu}(t)})=f(t, x(t), lv(t)), \end{equation} where \begin{equation}\label{Trick-functional} l= (c_1, \, c_2, \dots , c_p), \end{equation} $c_{\nu}$ being identical with the coefficients in (\ref{trickFuncG}). On the other hand, for ${\nu}\ge 2$ the functions $v_{\nu}$ satisfy $$ \dot{v}_{\nu}(t) = \alpha v_{{\nu}-1}(t)-\alpha v_{\nu}(t), $$ while for ${\nu}=1$ one has $$ \dot{v}_1(t) = -\alpha v_1(t)+\alpha x(t). $$ This gives the following system of ordinary differential equations: \begin{equation}\label{Trick-model} \dot{v}(t)=Av(t)+\pi x(t), \quad t\ge 0, \end{equation} where \begin{equation}\label{matrixA} A= \begin{pmatrix} -\alpha & 0 & 0 & \dots & 0 \\ \alpha & -\alpha & 0 & \dots & 0 \\ 0 & \alpha & -\alpha & \dots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \dots & \alpha & -\alpha \end{pmatrix} \quad\text{and}\quad %\label{Trick-func-p} \pi =\begin{pmatrix} \alpha \\ 0\\ \vdots\\ 0\\ \end{pmatrix}. \end{equation} Clearly, the system of ordinary differential equations (\ref{Eq.TrickPart1}), (\ref{Trick-model}) is equivalent to the delay differential equation (\ref{eq.basic}). The initial condition (\ref{prehist}) can be rewritten in terms of the new functions as follows: \begin{equation}\label{Trick-func-q-scalar} v_{\nu}(0)=\int_{-\infty}^{0}G_{\alpha}^{\nu}(-\tau)\varphi(\tau)d\tau =(-1)^{{\nu}-1}\frac{\alpha ^{\nu}}{({\nu}-1)!}\int_{-\infty}^{0}\tau^{{\nu}-1}e^{\alpha \tau}\varphi (\tau)d\tau, \end{equation} ${\nu}=1, \dots , p$. 
As before, $x(0)=\varphi(0)$. The initial conditions (\ref{Trick-func-q-scalar}) can be represented in a vector form as well: \begin{equation}\label{Trick-func-q-vector} v(0)=\int_{-\infty}^{0}e^{A(-\tau)}\pi\varphi(\tau)d\tau. \end{equation} This classical version of the linear chain trick is, however, not directly suitable for gene regulatory networks, as the regulatory functions $Z_{i}$ depend only on one variable, while the ``trick'' gives a sum of the form (\ref{TrickEq-x}). That is why a modification of the linear chain trick, which is a particular case of the general reduction scheme introduced in \cite{MPS}, is now described. First of all, let us observe that the solution to the auxiliary system (\ref{Trick-model}) can be represented as follows: \begin{equation}\label{Eq.TrickW-funct} \begin{aligned} v(t) & = e^{At}v_{0}+\int_0^t e^{A(t-s)}\pi x(s)ds\\ & =e^{At}\int_{-\infty}^{0}e^{A(-\tau)}\pi\varphi(\tau)d\tau +\int_0^t e^{A(t-s)}\pi x(s)ds \\ & =\int_{-\infty}^t e^{A(t-s)}\pi x(s)ds, \end{aligned} \end{equation} as $x(s)=\varphi(s)$ for $s\le 0$. Thus, \begin{equation}\label{Eq.DelayOperatorRepresentation} (\Re x)(t)=\sum_{\nu=1}^pc_{\nu}v_{\nu}(t) =l\int_{-\infty}^t e^{A(t-s)}\pi x(s)ds. \end{equation} This formula is a starting point for the modification of the linear chain trick which is used in this paper. Formally, an auxiliary system of the same form as in (\ref{Trick-model}) is exploited. However, the matrix $A$, the vector $\pi$ and the functional $l$ will be replaced by $A^T$, \begin{equation}\label{Trick-func-p-modified} \pi' x=\alpha x \begin{pmatrix} c_1\\ c_{2}\\ \vdots\\ c_p \end{pmatrix} \end{equation} and %\label{Trick-functional-new} $ l'= (1, 0, \dots , 0, 0)$, respectively.
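The replacement can be checked numerically: solving $\dot{v}=A^Tv$ with $v(0)=\pi'\cdot 1$, the first coordinate $v_1(t)=l'e^{A^Tt}\pi'$ must reproduce the kernel $G(t)=\sum_{\nu}c_{\nu}G_{\alpha}^{\nu}(t)$. A sketch (the values $p=3$, $\alpha=0.7$ and the weights $c_{\nu}$ are arbitrary choices):

```python
import math

# Check that l' e^{A^T t} pi' reproduces G(t) = sum_nu c_nu G_alpha^nu(t):
# integrate v' = A^T v with v(0) = pi' * 1 and compare v_1 with the kernel.
p, alpha = 3, 0.7
c = [0.2, 0.5, 0.3]                       # c_1, ..., c_p (arbitrary)

def G(t):                                 # the kernel from (trickFuncG)
    return sum(c[n] * alpha**(n+1) * t**n / math.factorial(n) * math.exp(-alpha*t)
               for n in range(p))

v = [alpha * cn for cn in c]              # v(0) = pi' applied to x = 1
dt = 1e-4
for _ in range(50000):                    # Euler integration up to t = 5
    # A^T is upper bidiagonal: row n is -alpha*v_n + alpha*v_{n+1}
    dv = [-alpha*v[n] + (alpha*v[n+1] if n+1 < p else 0.0) for n in range(p)]
    v = [vn + dt*d for vn, d in zip(v, dv)]

print(abs(v[0] - G(5.0)))                 # small (discretization error only)
```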
It is claimed, in other words, that System (\ref{eq.basic}) with Condition (\ref{prehist}) is equivalent to the following system of ordinary differential equations: \begin{equation}\label{Eq.trickedModified} \begin{gathered} \dot{x}(t)=f(t,x(t),v_1(t))\\ \dot{v}=A^Tv+\pi'x(t)\\ \end{gathered} \end{equation} with the initial conditions $x(0)=\varphi(0)$ and \begin{equation}\label{Trick-func-q-vector-modified} v(0)=\int_{-\infty}^{0}e^{A^T(-\tau)}\pi'\varphi(\tau)d\tau. \end{equation} Note that, unlike the right-hand side in the classical linear chain trick (see (\ref{Eq.TrickPart1})), the right-hand side in (\ref{Eq.trickedModified}) depends only on two state variables: $x$ and $v_1$. This is crucial for the applications which are of interest in this paper. To prove (\ref{Eq.trickedModified}), one needs to show that the representation (\ref{Eq.DelayOperatorRepresentation}) holds true if $A$, $\pi$ and $l$ are replaced by $A^T$, $\pi'$ and $l'$, respectively. This is done by writing down the fundamental matrix of the corresponding homogeneous system: \begin{equation}\label{Trick-FundamentalMatrix} Y(t) = e^{-\alpha t} \begin{pmatrix} 1 & \alpha t & \frac{(\alpha t)^{2}}{2!} & \dots & \frac{(\alpha t)^{p-1}}{(p-1)!} \\ 0& 1 & \alpha t & \dots & \frac{(\alpha t)^{p-2}}{(p-2)!} \\ 0& 0 & 1& \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \alpha t \\ 0 & 0 & \dots & 0 & 1 \end{pmatrix}. \end{equation} Then a direct calculation proves the result. A similar argument gives (\ref{Trick-func-q-vector-modified}). \begin{remark}\label{remPositiveness} \rm Assume that $v(t)$ is a solution to $\dot{v}=A^Tv+\pi'x(t)$, where $A$ and $\pi'$ are given by (\ref{matrixA}) and (\ref{Trick-func-p-modified}), respectively. If now $c_{\nu}\ge 0$ for $\nu =0,\dots, p$, $v_{\nu}(0)\ge 0$ for $\nu =1,\dots, p$, and $x(t)\ge 0$ for all $t\ge 0$, then $v_{\nu}(t)\ge 0$ for all $t\ge 0$, $\nu =1,\dots, p$ as well.
It follows easily from the representation formula for the solution $v(t)$ and the formula (\ref{Trick-FundamentalMatrix}) for the fundamental matrix. \end{remark} \section{Incorporating time-delays into the model}\label{SecDelayFormalization} Now let us again look at System (\ref{Eq.MainSystem}). The aim is to apply the modified linear chain trick described in Section \ref{SecLinearChainTrick}. To do this, let us assume that the delay operators $\Re_i x_i$ from (\ref{Eq.MainSystem}) are all of the form \begin{equation}\label{operatorRe-i} (\Re_i x_i)(t)=c_0^{(i)}x_i(t)+\int_{-\infty}^{t}G_i(t-s)x_i(s)ds, \quad t>0, \end{equation} where \begin{equation}\label{trickFuncG-i} G_i(u)=\sum_{\nu=1}^{p}c_{\nu}^{(i)}G_{\alpha_i}^{\nu}(u) \end{equation} and the functions $G_{\alpha_i}^{\nu}(u)$ are defined by (\ref{trickFuncG_j}). Note that by adding zero coefficients one can always assume that the number $p$ is independent of $i$. First of all let us observe that the delay operators in this section are slightly different from those studied in Section \ref{SecLinearChainTrick}: one term is added, namely $c_0^{(i)}x_i$. According to Section \ref{SecLinearChainTrick}, formally applying the (even modified) linear chain trick would give (\ref{Eq.trickedModified}) with the first equation equal to $\dot{x}=f(t, x, c_0x+v_1)$. For (\ref{Eq.MainSystem}) this would produce sigmoids depending on two variables, which is not acceptable in the model. This causes, however, no serious difficulties. We can, for instance, replace $v_1$ by the input variable $y=c_0x+v_1$, arriving, as we will see, at a slightly different system of ordinary differential equations. Indeed, differentiating $y$ gives $$ \dot{y}=c_0\dot{x}+\dot{v}_1=c_0f(t,x,y)-\alpha v_1+ \alpha v_2+\alpha c_1 x=c_0f(t,x,y)-\alpha y+\alpha v_2+\alpha x(c_0+c_1).
$$ For (\ref{Eq.MainSystem}) this results in the following system of ordinary differential equations: \begin{equation}\label{Eq.Trick-model-general} \begin{gathered} \dot{x_i}(t)=F_i(Z)-G_i(Z)x_i(t) \\ \dot{v}^{(i)}(t)=A^{(i)}v^{(i)}(t)+\Pi^{(i)} (x_i(t)), \quad t >0 \\ Z_i=\Sigma(y_i,\theta_i, 0), \quad y_i=v_1^{(i)} \quad (i=1,\dots ,n), \end{gathered} \end{equation} where \begin{equation}\label{matrixA-i} A^{(i)}= \begin{pmatrix} -\alpha_i & \alpha_i & 0 & \dots & 0 \\ 0 & -\alpha_i & \alpha_i & \dots & 0 \\ 0 & 0 & -\alpha_i & \dots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \dots & 0 & -\alpha_i \end{pmatrix}, {v}^{(i)}=\begin{pmatrix} v_1^{(i)}\\ v_{2}^{(i)}\\ \vdots\\ v_p^{(i)}\\ \end{pmatrix}, \end{equation} and \begin{equation}\label{vectorPi} \Pi^{(i)} (x_i):=\alpha_i x_i\pi^{(i)} + c_0^{(i)}f_i(Z,x_i) \end{equation} with \begin{equation}\label{Eq.pi-i,f-i} \pi^{(i)}:=\begin{pmatrix} c_0^{(i)}+c_1^{(i)}\\ c_{2}^{(i)}\\ \vdots\\ c_p^{(i)}\\ \end{pmatrix}, \quad f_i(Z,x_i):=\begin{pmatrix} F_i(Z)-G_i(Z)x_i\\ 0\\ \vdots\\ 0\\ \end{pmatrix}. \end{equation} Recall that according to the assumptions on System (\ref{Eq.MainSystem}), $F_i, G_i$ are real functions which are affine in each $Z_i$ and which satisfy $F_i\ge 0, G_i\ge \delta>0$ for $0\le Z_i\le 1$. Such a system (up to notational changes) was already obtained in Section \ref{SecExamples}: see (\ref{Eq.PlanarSystemTricked}). Note that the notation in (\ref{Eq.Trick-model-general}) is chosen in such a way that the first coordinate $v^{(i)}_1$ always coincides with the $i$-th input variable $y_i$. For the sake of simplicity the notation $v^{(i)}_1$ in (\ref{Eq.Trick-model-general}) will be kept in the sequel. Assume that $Z_i=\mbox{const}$ ($i=1,\dots n$). 
Then System (\ref{Eq.Trick-model-general}) becomes affine: \begin{equation}\label{Eq.MainSystem-Sect4InBoxes} \begin{gathered} \dot{x}_i(t)=\psi_i-\gamma_ix_i(t) \\ \dot{v}^{(i)}(t)=A^{(i)}v^{(i)}(t)+\bar\Pi^{(i)}( x_i(t)), \quad t >0, \quad i=1,\dots ,n, \end{gathered} \end{equation} where $y_i=v_1^{(i)}$, $\psi_i\ge 0, \gamma_i >0,$ and \begin{equation}\label{vectorPi-insideboxes} \bar\Pi^{(i)} (x_i):= \alpha_i x_i \begin{pmatrix} c_0^{(i)}+c_1^{(i)}\\ c_{2}^{(i)}\\ \vdots\\ c_p^{(i)} \end{pmatrix} + c_0^{(i)}\begin{pmatrix} \psi_i-\gamma_i x_i\\ 0\\ \vdots\\ 0 \end{pmatrix}. \end{equation} \begin{proposition}\label{propPositiveness} If the initial conditions for System (\ref{Eq.MainSystem-Sect4InBoxes}) satisfy $x_i(t_0)\ge 0$, $v_1^{(i)}(t_0)\ge c_0^{(i)}x_i(t_0),$ $v_{\nu}^{(i)}(t_0)\ge 0$ ($\nu =2,\dots p$), then the corresponding solution to (\ref{Eq.MainSystem-Sect4InBoxes}) satisfies $x_i(t)\ge 0$, $v_1^{(i)}(t)\ge c_0^{(i)}x_i(t),$ $v_{\nu}^{(i)}(t)\ge 0$ ($\nu =2,\dots p$) for all $t\ge t_0$. \end{proposition} \begin{proof} First of all, let us notice that $x_i(t)\ge 0$ because $\psi_i \ge 0$, $\gamma_i >0$. Now the idea is to use the property of solutions mentioned in Remark \ref{remPositiveness}. It is worth remembering, however, that although the variables $v_{\nu}^{(i)}$ for $\nu \ge 2$ remained the same, the variable $v_1^{(i)}$ was replaced by $y_i=c_0^{(i)}x_i+v_1^{(i)}$. But if one assumes that $y_i$ (i.e. the new $v_1^{(i)}$) satisfies $y_i(t_0)\ge c_0^{(i)}x_i(t_0)$, then Remark \ref{remPositiveness} can be applied, and the result follows. \end{proof} \begin{remark}\label{remNegativity} \rm The following example shows that if the assumption $v_1^{(i)}(t_0)\ge c_0^{(i)}x_i(t_0)$ is violated, then the coordinate $v_1^{(i)}(t)$ can become negative. Putting $p=1$, $c_0^{(i)}=0.6$, $\alpha_i=0.2$, $\psi_i=0$, $\gamma_i=1$, $x=x_i$, $y=y_i=v_1^{(i)}$ gives the system % \label{Eq.NegativeSolution} \begin{gather*} \dot{x}=-x\\ \dot{y}=-0.4x -0.2y. 
\end{gather*} Solving it for $x(0)=1$, $y(0)=y_0$ results in $$ y(t)=e^{-0.2t}\left(y_0+0.5\left(e^{-0.8t}-1\right)\right). $$ Thus, if $y_0<0.5$, then $y(t)$ eventually becomes negative. This example explains why the sigmoids are to be defined for $y<0$ (see Section \ref{SecIntr}). However, looking at the original delay system (\ref{Eq.MainSystem}), it is straightforward to observe that the assumption $v_1^{(i)}(t_0)\ge c_0^{(i)}x_i(t_0)$ is natural and always fulfilled. Indeed, $v_1^{(i)}(t) \ (=y_i)$ is nothing but $(\Re_i x_i)(t)$ given by (\ref{operatorRe-i}), where $\int_{-\infty}^{t}G_i(t-s)x_i(s)ds\ge 0$ as $x_i(t)\ge 0$. Thus, although for convenience the sigmoids are assumed to be defined for all $y_i$, one never actually obtains negative values for the inputs $y_i$. \end{remark} \begin{proposition}\label{propStability} System (\ref{Eq.MainSystem-Sect4InBoxes}) is asymptotically stable: all its solutions converge to the focal point of the system. \end{proposition} \begin{proof} System (\ref{Eq.MainSystem-Sect4InBoxes}) is affine. The linear part of this system (a matrix) is quasitriangular with the quasidiagonal consisting of the numbers $-\gamma_i<0$ and the stable matrices $A^{(i)}$. This observation also implies that the matrix has only real, negative eigenvalues. The solutions of System (\ref{Eq.MainSystem-Sect4InBoxes}) converge to the focal point given by \begin{equation}\label{Eq.FocalPoints-Sect4} \begin{gathered} x^*_i=\psi_i\gamma_i^{-1} \\ {v}^{(i),*}=-(A^{(i)})^{-1}\bar\Pi^{(i)}(x^*_i) \quad (i=1,\dots ,n) \end{gathered} \end{equation} (the matrix $A^{(i)}$ is invertible because it is stable). \end{proof} Thus, the situation is almost the same as in the non-delay case. The only difference is that the matrix of System (\ref{Eq.MainSystem-Sect4InBoxes}) has multiple eigenvalues. \begin{remark}\label{rem.WeakKernel} \rm One of the equations in (\ref{Eq.Trick-model-general}) is of particular interest. 
It is the equation containing $\dot{v}_1^{(i)}$ (i.e. $\dot{y}_i$ according to our notational agreement). If the kernel (\ref{trickFuncG-i}) of the delay operator (\ref{operatorRe-i}) contains at least one term other than $G_{\alpha_i}^{1}(u)$, then this equation is given by \begin{equation}\label{Eq.Trick-For-y-i} \dot{y}_i(t)=-\alpha_iy_i(t)+\alpha_i v_2^{(i)}(t) + \alpha_i(c_0^{(i)}+c_1^{(i)})x_i(t) +c_0^{(i)}(F_i(Z)-G_i(Z)x_i(t)). \end{equation} If the kernel (\ref{trickFuncG-i}) contains only the term $G_{\alpha_i}^{1}(u)$, which corresponds to the case of the weak generic delay kernel, then the equation containing $\dot{y}_i$ becomes \begin{equation}\label{Eq.Trick-weak-kernel} \dot{y}_i(t)=-\alpha_iy_i(t)+\alpha_ix_i(t) +c_0^{(i)}(F_i(Z)-G_i(Z)x_i(t)), \end{equation} as $c_0^{(i)}+c_1^{(i)}=1$ and $v_2^{(i)}(t)\equiv 0$. \end{remark} \begin{proposition}\label{prop0} Assume that in some open subset containing the focal point with the coordinates given by (\ref{Eq.FocalPoints-Sect4}) one has $Z_i(y_i)=\mbox{const}$. Then this point is an asymptotically stable stationary point for System (\ref{Eq.Trick-model-general}). \end{proposition} \begin{proof} Under these assumptions System (\ref{Eq.Trick-model-general}) becomes (\ref{Eq.MainSystem-Sect4InBoxes}) for some $\psi_i\ge 0$, $\gamma_i>0$. As the matrix of the latter system is stable, there exist numbers $t_0, \kappa$ ($t_0>0; \ 0<\kappa<1$) and a metric $\rho$ in the phase space of System (\ref{Eq.MainSystem-Sect4InBoxes}) such that the fundamental matrix $V(t)$ of the homogeneous system corresponding to (\ref{Eq.MainSystem-Sect4InBoxes}) is a contraction with coefficient $\kappa$ in the metric $\rho$ for all $t\ge t_0$. In particular, any ball $\mathfrak{B}_r(0)$ in this metric, centered at the origin, will be $V(t)$-invariant for $t\ge t_0$. 
This implies that any trajectory $\upsilon(t), t\ge t_0$ of System (\ref{Eq.MainSystem-Sect4InBoxes}) is contained in the ball $\mathfrak{B}_r(P^*)$, centered at the focal point $P^*$, as soon as $\upsilon(t_0)\in \mathfrak{B}_r(P^*)$. Now let us take $r$ so small that System (\ref{Eq.Trick-model-general}) and System (\ref{Eq.MainSystem-Sect4InBoxes}) coincide in $\mathfrak{B}_r(P^*)$. In this ball the trajectories of the two systems are the same for $t\ge t_0$. By this, the focal point for (\ref{Eq.MainSystem-Sect4InBoxes}) becomes an asymptotically stable stationary point for (\ref{Eq.Trick-model-general}). \end{proof} \begin{remark}\label{remNew} \rm It is usually assumed in this paper that $Z_i=B_i$, where $B_i$ is either 0 or 1. The set where $Z_i=B_i$ ($i=1,\dots ,n$) is an open subset of the input space (i.e. it is a box: see Definition \ref{defRegDomain} below). Any box is easily seen to be invariant under the solution flow of any diagonal system of affine differential equations, provided the focal point is inside this box. Thus, the proof of the proposition is trivial in the non-delay case. However, in the delay case (i.e. for System (\ref{Eq.MainSystem-Sect4InBoxes})) the situation is different, as boxes are not necessarily invariant. To see this, let us consider a modified example from Remark \ref{remNegativity}, where we now put $p=1$, $c_0^{(i)}=0.6$, $\alpha_i=0.2$, $\gamma_i=1$, $x=x_i$, $y=y_i=v_1^{(i)}$, as before, but change $\psi_i$ to be $1.2$, assuming that this corresponds to $Z=1$. Then the system becomes % \label{Eq.NegativeSolution-1} \begin{gather*} \dot{x}=1.2-x \\ \dot{y}=0.6(1.2-x) + 0.2x-0.2y. \end{gather*} Let us also set $x(0)=2.2$, $y(0)=1.1$. Clearly, the focal point of the box $y>1$ is $(1.2, 1.2)$, thus belonging, together with the initial point, to this box. 
However, the solution (which can easily be obtained from the corresponding formula in Remark \ref{remNegativity} by adding $1.2$, the value of $\psi_i$, to both variables) is $$ y(t)=e^{-0.2t}(-0.1+0.5(e^{-0.8t}-1)) +1.2. $$ Now, $y(1)<0.9<1$, so that the trajectory leaves the box. This example explains why a slightly more careful argument is required to prove Proposition \ref{prop0}. \end{remark} \begin{remark}\label{remRSP} \rm It was already mentioned that the focal points which belong to the box they correspond to are called regular stationary points (RSP) in the non-delay model. This terminology can be kept in the delay case, too. Proposition \ref{prop0} justifies this. \end{remark} \section{Some definitions}\label{SecFormalization} Using the examples from Section \ref{SecExamples}, let us now formulate some general definitions related to geometric properties of System (\ref{Eq.MainSystem}) with $\Re_i$ given by (\ref{operatorRe-i}). Similar definitions are known in the non-delay case (see e.g. \cite{PlahteKjogl}). However, according to the idea described in the previous section, System (\ref{Eq.MainSystem}) is to be replaced with the system of ordinary differential equations (\ref{Eq.Trick-model-general}). The latter is formally different from the general system studied in \cite{PlahteKjogl} (i.e. from (\ref{Eq.MainSystem}) with $y_i=x_i$ for all $i$). Indeed, (\ref{Eq.Trick-model-general}) contains more than one state variable in all but one component. We observed in the previous section that the two systems might have different properties, too. For this reason, some definitions have to be revised. But first of all, let us describe some useful notation (see again \cite{PlahteKjogl}). 
In what follows, it is assumed that \begin{itemize} \item $M:=\{1,\dots ,m\}, \ \ N:= \{1,\dots ,n\}, \ \ n\le m;$ \item $S:=N\backslash R \ \mbox{for a given} \ R\subset N;$ \item $A^R\ \mbox{consists of all functions}\ v: R\to A;$ \item $a_R:= (a_r)_{r\in R} \ \ (a_R\in A^R), \ \ \ a_S:= (a_s)_{s\in S} \ \ (a_S\in A^S).$ \end{itemize} The following system of ordinary differential equations is used in this section to simplify the notation in the main definitions: \begin{equation}\label{Eq.GeneralizedOrdinarySystem} \dot{u}(t)=\Phi(Z,u(t)), \ \ \ t>0, \end{equation} where $u=(u_1,\dots ,u_m)$, $Z=(Z_1,\dots ,Z_n)$, $Z_i=\Sigma(u_i,\theta_i,0)$ for $i\in N$ (i.e. the step function with threshold $\theta_i>0$), and the functions $\Phi_j: [0,1]^N\times \mathbb{R}^{M}\to\mathbb{R}$ ($j\in M$) are continuously differentiable in $Z\in [0,1]^N$ for all $u\in \mathbb{R}^{M}$ and affine in the vector variable $u\in \mathbb{R}^{M}$ for all $Z\in [0,1]^N$. These assumptions are fulfilled for System (\ref{Eq.Trick-model-general}), where $u_i$ is either $x_j$ or $v^{(j)}_{\nu}$. In fact, it is the only example which is of interest in this paper. However, System (\ref{Eq.GeneralizedOrdinarySystem}) is used to keep the notation under control. The assumptions imposed on System (\ref{Eq.GeneralizedOrdinarySystem}) are needed, e.g., for the following reason: if one replaces the step functions $\Sigma(u_i,\theta_i,0)$ with the sigmoid functions $\Sigma(u_i,\theta_i,q_i)$ ($q_i \in (0,q^0)$), satisfying Conditions (A1)-(A3) from Section \ref{SecExamples}, then for any $u^0\in \mathbb{R}^{M}$ there exists a unique solution $u(t), t\ge 0$ to (\ref{Eq.GeneralizedOrdinarySystem}) satisfying $u(0)=u^0$. Indeed, as $q_i >0$, the function $\Sigma(u_i,\theta_i,q_i)$ is smooth in $u_i$, so that a unique solution does exist. Assume again that \emph{all} $q_i=0$. 
Then the right-hand side of System (\ref{Eq.GeneralizedOrdinarySystem}) can be discontinuous, namely, if one or several $u_i$ ($i\in N$) assume the respective threshold values $u_i=\theta_i$. Let $\Theta$ denote the set $\{u\in\mathbb{R}^M: \exists j\in N: u_j=\theta_j\}$. This set contains all discontinuity points of the vector-function $$ f(\Sigma(u_1,\theta_1,0),\dots ,\Sigma(u_n,\theta_n,0), u_1,\dots ,u_m), $$ and its complement $\mathbb{R}^M\backslash\Theta$ consists of $2^{n}$ open, disjoint subsets of the space $\mathbb{R}^M$. Inside each of these subsets one has $Z_i=B_i$, where $B_i=0$ or $B_i=1$ for all $i\in N$, so that System (\ref{Eq.GeneralizedOrdinarySystem}) becomes affine: \begin{equation}\label{Eq.GeneralizedOrdinarySystemInBoxes} \dot{u}(t)=\Phi(B,u(t)):=A_Bu(t)+f_B, \quad t>0. \end{equation} Thus, if the initial values of the potential solution belong to one of these subsets, then the local existence and uniqueness result can easily be proved. The global existence problem is, however, complicated, at least in the case of \textit{black walls} (see Section \ref{SecExamples}), requiring a more involved technique (see \cite{PlahteKjogl}). This problem is not addressed in the paper: global existence in the case of smooth $\Sigma$ and local existence in the case of the step functions are sufficient for what follows. Below System (\ref{Eq.GeneralizedOrdinarySystem}) is studied under the assumption $Z_i=\Sigma(u_i,\theta_i,0)$. As in Section \ref{SecExamples}, it is convenient to use Boolean vectors, now $n$-dimensional. The set of all Boolean vectors $ B=( B_1,\dots , B_n)$ (where $ B_i=0$ or $1$), according to the general notation adopted in this section, will be denoted by $\{0, 1\}^N$. The mapping $\beta : \{0, 1\}^R \to \{{0}, {1}\}^R$ acts on each component as follows: $\beta(0)=0; \beta(1) = 1$ (note that the same notation is used for this mapping for all $R$). The next three definitions can be found in \cite{PlahteKjogl}. 
\begin{definition}\label{defRegDomain} \rm Given a Boolean vector $ B\in \{0, 1\}^N$, the set $\mathcal{B}( B)$, which consists of all $u\in\mathbb{R}^M$, where $(Z_i(u_i))_{i\in N}=\beta( B)$, is called \emph{a regular domain} (or a box). \end{definition} Note that any box is an open subset of the space $\mathbb{R}^M$, as $\Sigma(\theta_i,\theta_i,0)=0.5$ (according to (A2)) excludes the value $u_i=\theta_i$. Remark also that only the coordinates $u_i$ ($n\ge i\ge 1$) are supplied with threshold values. \begin{definition}\label{defSingDomain} \rm Given a nonempty subset $S\subset N$, $R:=N\backslash S$, and a Boolean vector $ B_R\in \{0, 1\}^R$, the set $\mathcal{SD}(\theta_S, B_R)$, which consists of all $u\in\mathbb{R}^M$ such that $u_s=\theta_s$ ($s\in S$) and $(Z_r(u_r))_{r\in R}=\beta( B_R)$, is called \emph{a singular domain}. If $S=\{j\}$ for some $j\in N$, then the singular domain $\mathcal{SD}(\theta_j, B_R)$ is called \emph{a wall}. \end{definition} \begin{definition}\label{defwalltypes} \rm Let $P\in\mathcal{SD}(\theta_j, B_R)$ and let $\dot{u}_j(0,Z_j,P)$ denote the right-hand side $\Phi_j(Z,u)$ of (\ref{Eq.GeneralizedOrdinarySystem}) evaluated at $u=P$, $Z_R=\beta( B_R)$ and the indicated value of $Z_j$ (the first argument refers to $q=0$, i.e. to the step function case). The point $P$ is called \begin{description} \item[``black"] if $\dot{u}_j(0,1,P)<0$ and $\dot{u}_j(0,0,P)> 0$; \item[``white"] if $\dot{u}_j(0,1,P)>0$ and $\dot{u}_j(0,0,P)<0$; \item[``transparent"] if $\dot{u}_j(0,1,P)<0$ and $\dot{u}_j(0,0,P)<0$, or if $\dot{u}_j(0,1,P)>0$ and $\dot{u}_j(0,0,P)>0$. \end{description} \end{definition} \begin{definition}\label{defwalltypesContinued} \rm We say that a wall $\mathcal{SD}(\theta_j, B_R)$ is {black} (white, transparent) if any point in it, except for a nowhere dense set, is {black} (white, transparent). \end{definition} Exceptional points correspond to the trajectories that are not transversal to the hyperplane $u_j=\theta_j$, i.e. where $\dot{u}_j=0$. The definition means simply that black points are attracting, white points are repelling, while at any transparent point the solution to (\ref{Eq.GeneralizedOrdinarySystem}) can be extended to some neighborhood of this point. At any transparent point System (\ref{Eq.GeneralizedOrdinarySystem}) can be rigorously characterized as a switching dynamical system. That is why it is easier to study trajectories in the vicinity of a transparent (and, of course, white) wall than trajectories that contain some black points. \section{Some general properties of the delay model}\label{SecGenResults} Let us again look at System (\ref{Eq.Trick-model-general}), taking advantage of the general definitions from the previous section. A challenge is to characterize walls in a more convenient way. 
For $j\in N$, $R=N\backslash \{j\}$, put $$ \eta_j(x,v,Z_j, B_R):=-\alpha_j\theta_j +\chi\alpha_jv+\alpha_j(c_0^{(j)}+c_1^{(j)})x+c_0^{(j)} (F_j(Z_j, B_R)-G_j(Z_j, B_R)x), $$ where $\chi=0$ if the kernel (\ref{trickFuncG-i}) of the operator (\ref{operatorRe-i}), with $j$ instead of $i$, contains only one term, namely, the weak generic delay kernel $G_{\alpha_j}^1$, and $\chi=1$ if the kernel contains at least one term in addition to $G_{\alpha_j}^1$. The following simple proposition can be obtained from Definition \ref{defwalltypes}. \begin{proposition}\label{prop1} For System (\ref{Eq.Trick-model-general}), let $P\in \mathcal{SD}(\theta_j, B_R)$ and $x_j, v_2^{(j)}$ be (two of) the coordinates of $P$: \begin{itemize} \item if $\eta_j(x_j, v_2^{(j)},0, B_R)>0$ and $\eta_j(x_j, v_2^{(j)},1, B_R)<0$, then $P$ is black; \item if $\eta_j(x_j, v_2^{(j)},0, B_R)<0$ and $\eta_j(x_j, v_2^{(j)},1, B_R)>0$, then $P$ is white; \item if $\eta_j(x_j, v_2^{(j)},0, B_R)$ and $\eta_j(x_j, v_2^{(j)},1, B_R)$ have the same sign, then $P$ is transparent. \end{itemize} Note that the type of the point does not depend on its other coordinates. \end{proposition} \begin{proof} We simply observe that, according to Remark \ref{rem.WeakKernel}, $\eta_j(x_j, v_2^{(j)},Z_j, B_R)$ coincides with $\dot{y}_j$. The latter corresponds to $\dot{u}_j$ in Definition \ref{defwalltypes}, if (\ref{Eq.GeneralizedOrdinarySystem}) is replaced with (\ref{Eq.Trick-model-general}). \end{proof} Applying this proposition to System (\ref{Eq.PlanarSystemTrickedWeak}), it is easy to obtain the description of the wall between the boxes $\mathcal{B}(1,0)$ and $\mathcal{B}(1,1)$ which was given in Section \ref{SecExamples}. \begin{proposition}\label{prop2} For System (\ref{Eq.Trick-model-general}) with $y_j=x_j$ (i.e. no delay in $x_j$), the wall $\mathcal{SD}(\theta_j, B_R)$ is black (white, transparent) as soon as one of its points is black (white, transparent). 
\end{proposition} \begin{proof} Indeed, in this case $c_1^{(j)}=0$, $c_0^{(j)}=1$, $x_j=y_j=\theta_j$, so that $$ \eta_j(x_j,v_2^{(j)},Z_j, B_R) =F_j(Z_j, B_R)-G_j(Z_j, B_R)\theta_j, $$ which is independent of the point in the wall. \end{proof} \begin{proposition}\label{prop3} Assume that for System (\ref{Eq.Trick-model-general}) with $y_j=x_j$ (i.e. no delay in $x_j$), the wall $\mathcal{SD}(\theta_j, B_R)$ is black (white, transparent). Then for any delay operator (\ref{operatorRe-i}) (where $i=j$) with $c_0^{(j)}\ne 0$, System (\ref{Eq.Trick-model-general}) with $y_j(t)=(\Re_jx)(t)$ has at least one black (white, transparent) point in the same wall, namely, any point $P$ whose coordinates (in fact, a part of the coordinates) satisfy \begin{equation}\label{Eq.ConditionBlackWhiteTransp} \begin{gathered} x_j=y_j=\theta_j \\ -\theta_j+\chi v_2^{(j)}+(c_0^{(j)}+c_1^{(j)})\theta_j=0. \end{gathered} \end{equation} \end{proposition} \begin{proof} Let us first observe that (\ref{Eq.ConditionBlackWhiteTransp}) implies that $$ \eta_j(\theta_j,v_2^{(j)},Z_j, B_R)=c_0^{(j)}(F_j(Z_j, B_R)-G_j(Z_j, B_R)\theta_j). $$ This means that at any point $P$ satisfying (\ref{Eq.ConditionBlackWhiteTransp}) the value of the function $\eta_j$ coincides (up to a positive factor) with the corresponding function in the non-delay case. Due to Proposition \ref{prop2}, this gives a point of the same type as in the non-delay case. To check that points satisfying (\ref{Eq.ConditionBlackWhiteTransp}) do exist, let us first put $\chi=0$. This gives the case of the weak generic kernel, where $c_0^{(j)}+c_1^{(j)}=1$. Thus, the second of the equations becomes trivial, and one can take any point satisfying (\ref{Eq.ConditionBlackWhiteTransp}), i.e. any point with $x_j=y_j=\theta_j$. If $\chi=1$, then the second equation is always solvable w.r.t. $v_2^{(j)}$. 
\end{proof} \begin{proposition}\label{Cor.StationaryPointsTrickedSystem} Given arbitrary $i\in N$, $x_i, y_i\in\mathbb{R}$, the system \begin{equation}\label{Eq.StationaryPointTrickedSystem} \begin{gathered} A^{(i)}v^{(i)}+\alpha_i\pi^{(i)} x_i=0 \\ v_1^{(i)}=y_i, \end{gathered} \end{equation} where $A^{(i)}$ and $\pi^{(i)}$ are defined by (\ref{matrixA-i}) and (\ref{Eq.pi-i,f-i}), respectively, has a solution $v_1^{(i)}, v_2^{(i)},\dots ,v_p^{(i)}$ if and only if $x_i=y_i$. In this case, the solution is unique. In particular, any point $P$ whose coordinates satisfy (\ref{Eq.StationaryPointTrickedSystem}) with $i=j$, $x_j=y_j=\theta_j$, satisfies also the assumptions of Proposition \ref{prop3}. \end{proposition} \begin{proof} It suffices to prove the following: if $y_i\in\mathbb{R}$, then System (\ref{Eq.StationaryPointTrickedSystem}) has one and only one solution, which necessarily satisfies $x_i=y_i$. Rewriting it as a matrix equation gives \begin{equation}\label{Eq.matrixForSSP} \begin{pmatrix} c_0^{(i)}+c_1^{(i)} & 1 & 0 & \dots & 0 \\ c_2^{(i)} & -1 & 1 & \dots & 0 \\ c_3^{(i)} & 0 & -1 & \dots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ c_p^{(i)} & 0 & \dots & 0 & -1 \end{pmatrix} \begin{pmatrix} x_i\\ v_{2}^{(i)}\\ \vdots\\ v_p^{(i)}\\ \end{pmatrix} = \begin{pmatrix} y_i\\ 0\\ \vdots\\ 0 \end{pmatrix}. \end{equation} Observe that the determinant of the matrix is equal to \begin{align*} &(c_0^{(i)}+c_1^{(i)}) (-1)^{p-1}-c_2^{(i)} (-1)^{p-2}+ c_3^{(i)} (-1)^{p-3}-\dots +(-1)^{p-1}c_p^{(i)} \\ &=(-1)^{p-1}\sum_{\nu =0}^pc_{\nu}^{(i)}=(-1)^{p-1}\ne 0. \end{align*} To show that $x_i=y_i$, let us notice that adding together all the rows in the matrix equation (\ref{Eq.matrixForSSP}) (or, in other words, left-multiplying this equation by the row-vector $(1, 1, \dots , 1)$) yields $(c_0^{(i)}+c_1^{(i)}+\dots +c_p^{(i)})x_i=y_i$. As $c_0^{(i)}+c_1^{(i)}+\dots +c_p^{(i)}=1$, one obtains $x_i=y_i$. The last part of the proposition follows from Remark \ref{rem.WeakKernel}. 
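The cofactor expansion of the determinant above can be double-checked symbolically. The following minimal sketch (our own code, not part of the original argument) builds the coefficient matrix of the linear system for the representative case $p=4$ with symbolic coefficients and confirms the formula $(-1)^{p-1}\sum_{\nu=0}^{p}c_{\nu}^{(i)}$:

```python
import sympy as sp

# Sanity check of the determinant for p = 4: the matrix below is the
# coefficient matrix of the linear system with rows
# (c0+c1, 1, 0, ..., 0), (c2, -1, 1, ..., 0), ..., (cp, 0, ..., 0, -1).
p = 4
c = sp.symbols('c0:%d' % (p + 1))      # symbolic c0, c1, ..., cp
M = sp.zeros(p, p)
M[0, 0] = c[0] + c[1]
M[0, 1] = 1
for nu in range(1, p):
    M[nu, 0] = c[nu + 1]               # first column: c2, c3, ..., cp
    M[nu, nu] = -1                     # diagonal entries -1
    if nu + 1 < p:
        M[nu, nu + 1] = 1              # superdiagonal entries 1

expected = (-1) ** (p - 1) * sum(c)    # claimed value of the determinant
assert sp.simplify(M.det() - expected) == 0
```

Under the normalization $\sum_{\nu=0}^{p}c_{\nu}^{(i)}=1$ this gives $(-1)^{p-1}\ne 0$, as claimed.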
\end{proof} This proposition will be used in the next section to compare singular stationary points in the delay and non-delay cases. \begin{proposition}\label{prop4} For System (\ref{Eq.Trick-model-general}) with $c_0^{(j)}=0$ (the case of pure delay), the wall $\mathcal{SD}(\theta_j, B_R)$ is always transparent. \end{proposition} \begin{proof} If $c_0^{(j)}=0$, then $\eta_j(x,v,Z_j, B_R)=\alpha_j(-\theta_j+\chi v_2^{(j)}+c_1^{(j)}x_j),$ which is independent of $Z_j$. Thus, at no point does the function $\eta_j$ change sign when $Z_j$ switches from 0 to 1 or back. The subset where $\eta_j=0$ is a nowhere dense subset of $\mathcal{SD}(\theta_j, B_R)$, if $v_2^{(j)}$ is present ($\chi=1$). If not ($\chi=0$), then one has the case of the weak generic delay kernel, where $c_1^{(j)}=1$. This again gives a nowhere dense exceptional subset. \end{proof} \section{Singular stationary points in the delay case} According to Proposition \ref{prop0}, any focal point belonging to the associated box will necessarily be an asymptotically stable stationary point for System (\ref{Eq.Trick-model-general}). Localization of singular stationary points is a more delicate issue. A method of solving this problem in the non-delay case was suggested in \cite{Mestl-95}, \cite{Plahte-94}, \cite{Plahte-98}. This section is devoted to a generalization of this method to the delay model. Let us start with System (\ref{Eq.PlanarSystemTricked}), used previously as an example, and again look at the wall between the boxes $\mathcal{B}(1, 0)$ and $\mathcal{B}(1, 1)$. This wall is denoted by $\mathcal{SD}(\theta_2, 1)$. The degradation rates in the example are $\gamma_1=0.6$, $\gamma_2=0.9$, and the threshold values are set to be $\theta_1=\theta_2=1$. First, let us assume that there is no delay in this system, i.e. that $y_1=x_1$, $y_2=x_2$. Then the wall, where $Z_1=1$ and $x_2=\theta_2$, is black (see Section \ref{SecExamples}). We want to know if this wall contains hidden stationary points. 
The challenge is the discontinuity of the system at $x_2=\theta_2$. As $Z_1=1$, $x_2=\theta_2$, System (\ref{Eq.PlanarSystemTricked}) becomes \begin{equation}\label{Eq.PlanarZ1=1} \begin{gathered} \dot{x}_1=1-Z_2-\gamma_1x_1 \\ \dot{x}_2= 1-Z_2-\gamma_2\theta_2. \end{gathered} \end{equation} Let us now replace the step function $Z_2=\Sigma(x_2,\theta_2,0)$ by a smooth sigmoid function $Z_2=\Sigma(x_2,\theta_2,q)$, $q>0$, satisfying Conditions (A1)-(A3). If for small $q>0$ the resulting smooth system has a stationary point $x^q$ tending to $x^0\in\mathcal{SD}(\theta_2, 1)$ as $q\to 0$, then it is natural to call $x^0$ a singular stationary point (SSP) for System (\ref{Eq.PlanarZ1=1}) and, thus, for the original system (\ref{Eq.PlanarSystemTricked}). This definition also suggests a method of localizing SSP, albeit a rather impractical one. Fortunately, there exists a much more efficient method, suggested by E. Plahte et al. in the papers mentioned at the beginning of this section. According to this method, one does not need to study the system with $q>0$ (which is, however, used for justification purposes). Instead, one needs to solve the linear system \begin{equation}\label{Eq.SSP-PlanarZ1=1} \begin{gathered} 1-Z_2-\gamma_1x_1=0 \\ 1-Z_2-\gamma_2\theta_2=0 \end{gathered} \end{equation} with two additional constraints: $0<Z_2<1$ and $x_1>\theta_1$. The second constraint says that the point should correspond to the chosen value of $Z_1$ (which is 1). The first constraint reminds us that the variable $Z_2$ takes values in $[0,1]$. However, strict inequalities are required, because they are stable under small perturbations of the parameters of the system, and this is used to justify the method. From the second equation one obtains $Z_2=0.1$, so that $x_1=1.5$. This solution satisfies the above constraints. This means that the point $(1.5, 1)$ is an SSP for the system. A similar procedure is used to check whether an SSP is asymptotically stable. 
However, stability analysis is beyond the scope of this paper. Let us now take a look at the delay case assuming, as before, that $y_1=x_1$ (no delay) and letting $y_2$ be given by the three-term delay operator described right after System (\ref{Eq.PlanarSystemTricked}). The wall is no longer black, but contains some black spots (see Proposition \ref{prop3}). Putting again $Z_1=1$ gives the system \begin{equation}\label{Eq.PlanarZ1=1DelayedVariables} \begin{gathered} \dot{x}_1=1-Z_2-\gamma_1x_1 \\ \dot{x}_2= 1-Z_2-\gamma_2x_2 \\ \dot{y}_2=c_0(1-Z_2-\gamma_2x_2)+\alpha (c_0+c_1)x_2 -\alpha y_2+\alpha v_2,\\ \dot{v}_2=\alpha c_2x_2 -\alpha v_2, \end{gathered} \end{equation} as now $y_2$, rather than $x_2$, has a threshold. Let us proceed as in the non-delay case. The corresponding system of algebraic equations (with $y_2=\theta_2$ on the wall) is now given by \begin{equation}\label{Eq.SSP-PlanarZ1=1DelayedVariables} \begin{gathered} 1-Z_2-\gamma_1x_1=0 \\ 1-Z_2-\gamma_2x_2=0 \\ c_0(1-Z_2-\gamma_2x_2)+\alpha (c_0+c_1)x_2 -\alpha \theta_2+\alpha v_2=0\\ \alpha c_2x_2 -\alpha v_2=0 \end{gathered} \end{equation} with the same constraints as before: $0<Z_2<1$ and $x_1> \theta_1$ (remember that $x_2$ and $v_2$ have no thresholds). A straightforward calculation results in the solution $Z_2=0.1$, $x_2=1$, $x_1=1.5$, $v_2=c_2$, which satisfies the constraints. Notice that $Z_2$ and $x_2$ assume the same values as before. It is not a coincidence: any stationary point of a delay system has to be a stationary point of the corresponding system without delay. Notice also that if $c_0\ne 0$, then this SSP is ``black" due to Proposition \ref{prop3}. Let us turn to the general case of the delay system (\ref{Eq.MainSystem}) with the delay feedback operators given by (\ref{operatorRe-i}). In the ``tricked" form it becomes (\ref{Eq.Trick-model-general}). 
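The straightforward calculation just mentioned can be reproduced symbolically. The following minimal sketch (our own code; variable names mirror the text, with $\gamma_1=0.6$, $\gamma_2=0.9$, $\theta_2=1$, and $c_2=1-c_0-c_1$ since the kernel coefficients sum to 1) solves the four algebraic equations and confirms that $Z_2$ and $x_1$ take the same values as in the non-delay case:

```python
import sympy as sp

# Unknowns of the algebraic system and symbolic delay-kernel parameters.
Z2, x1, x2, v2 = sp.symbols('Z2 x1 x2 v2')
c0, c1, a = sp.symbols('c0 c1 alpha', positive=True)
g1, g2, th2 = sp.Rational(6, 10), sp.Rational(9, 10), 1
c2 = 1 - c0 - c1                      # kernel coefficients sum to 1

eqs = [
    1 - Z2 - g1 * x1,                 # stationarity of x1
    1 - Z2 - g2 * x2,                 # stationarity of x2
    c0 * (1 - Z2 - g2 * x2) + a * (c0 + c1) * x2 - a * th2 + a * v2,
    a * c2 * x2 - a * v2,             # stationarity of v2
]
sol = sp.solve(eqs, [Z2, x1, x2, v2], dict=True)[0]
print(sol)  # expected: Z2 = 1/10, x1 = 3/2, x2 = 1, v2 = 1 - c0 - c1
```

The result is independent of $\alpha$ and of the particular split of the kernel coefficients, in line with the observation that stationary points do not depend on the delay.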
Our main interest is SSP in one of the system's singular domains, say $\mathcal{SD}(\theta_S, B_R)$, where $S$ is a nonempty subset of the set $N=\{1,2,\dots ,n\}$, and $R:=N\backslash S$. Assume that we are given a Boolean vector $ B_R$, i.e. a mapping from $R$ to $\{0,1\}$. Put also $B_R:=\beta( B_R)$, where $\beta$ is defined right before Definition \ref{defRegDomain}. In a sufficiently small neighborhood of each point of the singular domain $\mathcal{SD}(\theta_S, B_R)$ the vector $Z_R$ always assumes the value $B_R$, regardless of the box in which the trajectory is currently contained. So, without loss of generality, we may assume that (\ref{Eq.Trick-model-general}) becomes \begin{equation}\label{Eq.Trick-model-B-R} \begin{gathered} \dot{x}_i(t)=F_i(Z_S, B_R)-G_i(Z_S, B_R)x_i(t) \\ \dot{v}^{(i)}(t)=A^{(i)}v^{(i)}(t)+\Pi^{(i)} (x_i(t)) \quad (t>0) \\ y_i=v_1^{(i)} \quad (i\in N), \end{gathered} \end{equation} where only $Z_S=(Z_s)_{s\in S}$, \ $Z_s=\Sigma(y_s,\theta_s,0)$ remain active. Here $A^{(i)}, v^{(i)}$ are the same as in (\ref{matrixA-i}), while $\Pi^{(i)}$ is defined by \begin{equation}\label{vectorPi-B-R} \Pi^{(i)} (x_i):=\alpha_i x_i\pi^{(i)} + c_0^{(i)}f_i(Z_S,B_R,x_i), \end{equation} and $\pi^{(i)}$, $f_i(Z,x_i)$ are described in (\ref{Eq.pi-i,f-i}). The state space of System (\ref{Eq.Trick-model-general}) is in the sequel denoted by $\Omega$. It coincides, in fact, with the ordinary Euclidean space of dimension $n(p+1)$. 
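The limiting procedure that the definition of SSP below formalizes can be previewed numerically on the planar example above. A minimal sketch (our own code; we assume the Hill parametrization $\Sigma(y,\theta,q)=y^{1/q}/(y^{1/q}+\theta^{1/q})$ with $\theta_2=1$): at a stationary point of the smoothed system one has $y_2=x_2=(1-Z_2)/0.9$, so $Z_2$ solves the scalar fixed-point equation $Z_2=\Sigma((1-Z_2)/0.9,\,1,\,q)$, which can be handled by bisection:

```python
# Hill sigmoid with theta = 1 (assumed parametrization: steepness 1/q).
def hill(y, q):
    t = y ** (1.0 / q)
    return t / (t + 1.0)

def ssp_z2(q, iters=100):
    """Solve Z2 = hill((1 - Z2)/0.9, q) on (0, 1) by bisection.

    g(Z) = Z - hill((1 - Z)/0.9, q) is increasing in Z (hill is
    increasing, (1 - Z)/0.9 is decreasing), so the root is unique.
    """
    g = lambda z: z - hill((1.0 - z) / 0.9, q)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

for q in (0.2, 0.1, 0.05, 0.02):
    z = ssp_z2(q)
    print(q, round(z, 4), round((1.0 - z) / 0.9, 4))
# As q -> 0 the stationary values approach Z2 = 0.1 and y2 = theta2 = 1,
# in accordance with the SSP found above.
```

Bisection is used here only to keep the sketch free of external dependencies; any scalar root finder would do.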
\begin{definition}\label{defSSP} \rm A point $P^0\in \mathcal{SD}(\theta_S, B_R)$ is called a singular stationary point (SSP) for System (\ref{Eq.Trick-model-general}) if for any set of functions $\Sigma(y_s,\theta_s,q_s)$, $s\in S$, satisfying (A1)-(A3) from Section \ref{SecIntr}, there exist a number $\varepsilon>0$ and points $P^q$, where $q:=q_S=(q_s)_{s\in S}, \ q_s\in (0,\varepsilon)$, such that \begin{itemize} \item The point $P^q\in \Omega$ is a stationary point for System (\ref{Eq.Trick-model-B-R}) with $Z_s=\Sigma(y_s,\theta_s,q_s)$ ($s\in S$); \item $P^q\to P^0$ as $q_s\to +0$ ($s\in S$). \end{itemize} \end{definition} Before formulating the main result of this section, let us introduce some more notation. As before, let us put $F_S=(F_s)_{s\in S}$, and in addition, $G_S=\mbox{diag}[G_s]_{s \in S}$, which is a diagonal $|S|\times|S|$-matrix, where $|S|$ is the number of elements in $S$. In what follows, a crucial role is also played by the Jacobian $\frac{\partial}{\partial Z_{S}}F_S(Z)-\frac{\partial}{\partial Z_{S}}G_S(Z)y_S$ corresponding to $\dot{x}_s$, $Z_s$ ($s\in S$). This is an $|S|\times|S|$-matrix, too. The entry in the $s$-th row and the $\sigma$-th column of this matrix equals $\frac{\partial}{\partial Z_{\sigma}}F_s(Z)-\frac{\partial}{\partial Z_{\sigma}}G_s(Z)y_s$. The determinant of this matrix at $Z_R=B_R$ is denoted by $J_S(Z_S,B_R,y_S)$ in the sequel. In other words, \begin{equation}\label{Eq.DetJacobian} \begin{aligned} J_S(Z_S,B_R,y_S)&= \det\Big(\frac{\partial}{\partial Z_{S}}F_S(Z_S,B_R) -\frac{\partial}{\partial Z_{S}}G_S(Z_S,B_R)y_S\Big) \\ &= \det\Big[\frac{\partial}{\partial Z_{\sigma}}F_s(Z_S,B_R)-\frac{\partial}{\partial Z_{\sigma}}G_s(Z_S,B_R)y_s \Big]_{s,\sigma\in S}. 
\end{aligned} \end{equation} \begin{theorem}\label{thSSP} Assume that the system of algebraic equations \begin{equation}\label{Eq.ForSSP} \begin{gathered} F_s(Z_S,B_R)-G_s(Z_S,B_R)\theta_s = 0 \quad (s\in S)\\ F_r(Z_S,B_R)-G_r(Z_S,B_R)y_r=0 \quad (r\in R) \end{gathered} \end{equation} with the constraints \begin{equation}\label{Eq.ConstraintsSSP} \begin{gathered} 0<Z_s<1 \quad (s\in S)\\ Z_r(y_r)=B_r \quad (r\in R) \end{gathered} \end{equation} has a solution $(\bar{Z}_S, \bar{y}_R)$ such that $J_S(\bar{Z}_S,B_R,\theta_S)\ne 0$. Then the point $P^0$ with the coordinates $y_S=\theta_S$, $y_R=\bar{y}_R$, $x_i=y_i$ ($i\in N$) and with $v^{(i)}$ ($i\in N$) uniquely determined by (\ref{Eq.StationaryPointTrickedSystem}), is SSP for System (\ref{Eq.Trick-model-general}). \end{theorem} \begin{proof} The proof goes along the following lines. First, the step functions $Z_s=\Sigma(y_s,\theta_s,0)$ ($s\in S$) are replaced by smooth sigmoid functions $Z_s=\Sigma(y_s,\theta_s,q_s)$ with $q_s > 0$. Then, using the inverse sigmoid functions, we arrive at a system of functional equations w.r.t. $Z_s$ which is resolved by the implicit function theorem. This gives the values of $Z_s$ depending on the vector parameter $q=q_S \ (q_s\ge 0)$. Then we restore, step by step, the other variables, namely $y^q$, $x^q$ and finally, $v_{\nu}^{(i),q}$. All of them depend continuously on the parameter $q$. Letting $q_s$ go to zero gives SSP in the singular domain $\mathcal{SD}(\theta_S, B_R)$. To implement this program let us first rewrite the stationarity conditions for the variables $y_S$ in the matrix form. It gives \begin{equation}\label{Eq.Th1-01} F_S(Z_S,B_R)-G_S(Z_S,B_R)y_S=0, \end{equation} which is an equation in $\mathbb{R}^S$. Originally, i.e. in (\ref{Eq.ForSSP}), it was assumed that $y_S=\theta_S$. If the step functions are replaced by smooth sigmoid functions, then this equality may be violated. Let $Z_S=\Sigma(y_S,\theta_S,q):=(\Sigma(y_s,\theta_s,q_s))_{s\in S} $, where $q_s>0$. Due to Property (A1) from Section \ref{SecIntr} the inverse function $y_S=\Sigma^{-1}(Z_S,\theta_S,q)$ is continuously differentiable with respect to $Z_s\in (0,1)$, $s\in S$. Substituting it into (\ref{Eq.Th1-01}) produces \begin{equation}\label{Eq.Th1-02} F_S(Z_S,B_R)-G_S(Z_S,B_R)\Sigma^{-1}(Z_S,\theta_S,q)=0. 
\end{equation} The Jacobian of the left-hand side with respect to $Z_S$ is equal to \begin{equation}\label{Eq.Th1-03} \begin{aligned} &\frac{\partial}{\partial Z_{S}}F_S(Z_S,B_R) -\frac{\partial}{\partial Z_{S}}G_S(Z_S,B_R)\Sigma^{-1}(Z_S,\theta_S,q) \\ &- G_S(Z_S,B_R)\frac{\partial}{\partial Z_{S}}\Sigma^{-1}(Z_S,\theta_S,q). \end{aligned} \end{equation} According to Properties (A1)-(A2) from Section \ref{SecIntr} and the assumptions on $F, G$ listed in the Introduction, this is a continuous function w.r.t. $(Z_S,q)$ for $0<Z_s<1$, $q_s\ge 0$ ($s\in S$). Due to Property (A3), $\frac{\partial}{\partial Z_{S}}\Sigma^{-1}(Z_S,\theta_S,q)\to 0$ and $\Sigma^{-1}(Z_S,\theta_S,q)\to\theta_S$ as $q\to 0$, so that at $q=0$ the determinant of the Jacobian (\ref{Eq.Th1-03}) coincides with $J_S(Z_S,B_R,\theta_S)$. Since $J_S(\bar{Z}_S,B_R,\theta_S)\ne 0$ by assumption, the implicit function theorem, applied to Equation (\ref{Eq.Th1-02}) around the point $(\bar{Z}_S,0)$, produces a solution $Z_S^q$ depending continuously on $q=q_S$ for all sufficiently small $q_s\ge 0$. This function satisfies $Z_S^0=\bar{Z}_S$, so that, in particular, $0<Z_s^q<1$ for all sufficiently small $q_s>0$ ($s\in S$). Given $Z_S^q$, put $y_S^q:=\Sigma^{-1}(Z_S^q,\theta_S,q)$ and $y_R^q:=F_R(Z_S^q,B_R)G_R^{-1}(Z_S^q,B_R)$. The stationarity conditions for the variables $x_i$ together with Proposition \ref{Cor.StationaryPointsTrickedSystem} yield \begin{equation}\label{Eq.Th1-05} x_i^q=y_i^q \quad (i\in N), \end{equation} while the coordinates $v^{(i),q}$ are uniquely determined by \begin{equation}\label{Eq.Th1-06a} A^{(i)}v^{(i)}+\alpha_i\pi^{(i)} x_i=0, \quad v_1^{(i)}=y_i \quad (i\in N). \end{equation} In this way all the coordinates of the stationary points $P^q$, $q_s > 0$, $s\in S$ are calculated. Let now $q\to 0$. It is already shown that $Z_S^q\to Z_S^0$. Using again Property (B1) gives $$ y_S^0:=\lim_{q\to 0}y_S^q=\lim_{q\to 0}\Sigma^{-1}(Z_S^q,\theta_S,q)=\theta_S. $$ This and (\ref{Eq.Th1-05}) justify also the equalities $$ x_S^0:=\lim_{q\to 0}x_S^q=\lim_{q\to 0}y_S^q=\theta_S \quad \mbox{and} \quad y_R^0:=\lim_{q\to 0}y_R^q=\lim_{q\to 0}x_R^q:=x_R^0. $$ Finally, $v^{(i),q}\to v^{(i),0}$, which solves Equation (\ref{Eq.Th1-06a}) with $x_i^0=y_i^0$ for all $i\in N$. By this, the point $P^0$ is constructed as a limit of the points $P^q$, $q\to 0$, the latter being stationary points for the perturbed system (\ref{Eq.Trick-model-B-R}) with $Z_s=\Sigma(y_s,\theta_s,q_s)$ ($q_s>0$, $s\in S$). To show that $P^0\in \mathcal{SD}(\theta_S, B_R)$, one has to check that $y_S=\theta_S$ and that $Z_R(y_R)=B_R$. The first equality is already proved. To verify the other, let us simply observe that, by construction, $$ y^0_r=F_r(Z^0_S,B_R)G_r^{-1}(Z^0_S,B_R)=F_r(\bar{Z}_S,B_R)G_r^{-1}(\bar{Z}_S,B_R). $$ On the other hand, from the second group of equations in (\ref{Eq.ForSSP}) it follows that $\bar{y}_r=F_r(\bar{Z}_S,B_R)G_r^{-1}(\bar{Z}_S,B_R)$. Hence, $y^0_r=\bar{y}_r$ for all $r\in R$. At the same time, $\bar{y}_R$ satisfies, by assumption, the second constraint in (\ref{Eq.ConstraintsSSP}). 
Therefore $Z_R(y^0_R)=B_R$, so that, indeed, $P^0\in \mathcal{SD}(\theta_S, B_R)$. The theorem is proved.
\end{proof}

\begin{remark}\label{remSSP-1} \rm
In the non-delay case, the definition of SSP for System (\ref{Eq.MainSystem}), where both the production $F_i$ and the degradation $G_i$ are regulated (i.e. dependent on $Z$), as well as the described method of localizing SSP in singular domains, were suggested and justified by E. Plahte et al. in \cite{Plahte-98} for the logoid functions and in \cite{PlahteKjogl} for the Hill-like sigmoids. The fact that SSP are independent of the choice of approximating sigmoids is natural and was observed in the above works.
\end{remark}

\begin{remark}\label{remSSP-2} \rm
Theorem \ref{thSSP} shows, in particular, that stationary regimes are independent of the choice of delays (at least in the case of the delay operators (\ref{operatorRe-i})). This statement is evident in the smooth case, but in the case of SSP it requires some work, as the definition of a stationary point is then not straightforward.
\end{remark}

\subsection*{Conclusion}
The main results of the paper can be summarized as follows:
\begin{enumerate}
\item a framework for the analysis of gene regulatory networks with time-delays and Boolean-like interactions, based on a modification of the linear chain trick, is proposed;
\item the notion of a singular stationary point in the time-delay case is introduced and a method of localizing such points is justified;
\item it is shown that singular stationary points are the same for systems with or without time-delays.
\end{enumerate}

\subsection*{Acknowledgments}
The author thanks Stig W. Omholt and Erik Plahte for introducing him to the subject, for formulating the problem, and for many stimulating discussions. The author is grateful to Andrei Shindiapin for many useful hints and remarks, as well as for his constant support during the author's work on the paper.
The critical analysis of the manuscript undertaken by two anonymous referees is also very much appreciated. Their valuable comments and suggestions led to a considerable revision of the first version of the paper; in particular, their criticism helped the author to improve the structure of the paper, the notation, and the model itself.

\begin{thebibliography}{99}
\bibitem{deJong} {\sc H. de Jong}, {\it Modeling and simulation of genetic regulatory systems: a literature review}, J. Comp. Biol., v. 9 (2002), 67-104.
\bibitem{Glass} {\sc L. Glass and S. A. Kauffman}, {\it The logical analysis of continuous, non-linear biochemical control networks}, J. Theor. Biol., v. 39 (1973), 103-129.
\bibitem{Mc} {\sc N. McDonald}, {\it Time lags in biological models}, Lect. Notes in Biomathematics, Springer-Verlag, Berlin-Heidelberg-New York, 1978, 217\,p.
\bibitem{Mestl-95} {\sc T. Mestl, E. Plahte, and S. W. Omholt}, {\it A mathematical framework for describing and analysing gene regulatory networks}, J. Theor. Biol., v. 176 (1995), 291-300.
\bibitem{MPS} {\sc J. J. Miguel, A. Ponosov, and A. Shindiapin}, {\it The W-transform links delay and ordinary differential equations}, J. Func. Diff. Eq., v. 9 (4) (2002), 40-73.
\bibitem{PlahteKjogl} {\sc E. Plahte and S. Kj{\o}glum}, {\it Analysis and generic properties of gene regulatory networks with graded response functions}, Physica D, v. 201, no. 1-2 (2005), 150-176.
\bibitem{Plahte-94} {\sc E. Plahte, T. Mestl, and S. W. Omholt}, {\it Global analysis of steady points for systems of differential equations with sigmoid interactions}, Dynam. Stabil. Syst., v. 9, no. 4 (1994), 275-291.
\bibitem{Plahte-98} {\sc E. Plahte, T. Mestl, and S. W. Omholt}, {\it A methodological basis for description and analysis of systems with complex switch-like interactions}, J. Math. Biol., v. 36 (1998), 321-348.
\bibitem{Thomas} {\sc E. H. Snoussi and R.
Thomas}, {\it Logical identification of all steady states: the concept of feedback loop characteristic states}, Bull. Math. Biol., v. 55 (1993), 973-991. \end{thebibliography} \end{document}