Solution of Stochastic Van der Pol Equation Using Spectral Decomposition Techniques


1. Introduction

In recent years, oscillatory vibration systems have received considerable attention. One of these systems is the van der Pol (VDP) oscillator [1] [2] [3]. In 1920, the Dutch scientist Balthasar van der Pol observed stable oscillations in an electrical circuit employing a vacuum tube [1]. This oscillation was modeled mathematically by a second-order differential equation, which has since been used to model many biological and physical phenomena, such as the oscillation of atoms and the contraction and expansion of the human heart [4]. Furthermore, van der Pol discovered a new type of oscillation called relaxation oscillation [5]. Relaxation oscillators are used in many applications, such as electronic beepers and blinking lights [6]. The equation is also applied to nonlinear oscillation systems such as gene activation systems [4] and earthquakes [3].

As is known, no exact analytical solution of the VDP system has been obtained to date, so interest in approximate solutions of this system has increased lately; the dynamic properties of such systems have been investigated in [7] - [12] and the references therein. Using Liénard’s theorem [13], it was proved that the deterministic VDP (DVDP) system has a unique limit cycle. Efforts have been made to study not only the deterministic case but also the stochastic one: the VDP system can be affected by an external force, which can be described by Gaussian white noise. The resulting mathematical model is called the SVDP system. This system has been studied by many authors, for example using the stochastic averaging method with the Fokker-Planck equation to find the probability density function of the stationary solution [14] [15] [16]. It is therefore appropriate to study this system using other numerical techniques, such as the spectral decomposition techniques, which have been used for solving nonlinear stochastic differential equations (NSDEs). One of these techniques is the WHE technique, which was suggested by Norbert Wiener [17]. The WHE was used by Meecham et al. [18] to study the turbulence solution of the Burgers equation. It was also combined with perturbation theory to solve perturbed NSDEs by El-Tawil, M. and his co-workers [19] [20] [21], and El-Beltagy et al. used it to obtain higher-order solutions of many NSDEs [20] [21] [22] [23]. The second technique, the WCE technique, was developed by Cameron and Martin [24] in 1947. It is based on a discretization of the Gaussian white noise using a Fourier expansion [25]. Applying the WCE to a stochastic differential equation (SDE) yields a system of deterministic differential equations (DDEs) called the propagators [25] [27] [28].
Finally, these propagators can be solved by a simple deterministic numerical method such as the fifth-order Runge-Kutta method [29].

The aim of the current work is to study the dynamic behavior of the SVDP and SVDP-Duffing equations using the stochastic spectral expansions WCE and WHE. Some important theoretical results are deduced. The paper also studies the influence of the coefficients on the oscillatory motion and the limits of the dynamic behavior. The outline of this paper is as follows:

In Section 2 we investigate the mathematical model of the SVDP system and some basic theorems that are useful for dealing with nonlinear differential equations. In Section 3 we introduce the WCE method, its application to the SVDP equation, and the error bound of the method, in addition to an analytic formula for the WCE of a polynomial nonlinear term. In Section 4 we introduce the WHE method and its application to the SVDP equation. Numerical simulations of the WCE results are shown in Section 5. A comparison between the numerical results of the WCE method, the WHE method, and MC simulations is shown in Section 6. In Section 7 the conclusions are summarized.

2. The Proposed Stochastic VDP Model

The van der Pol equation describes a non-conservative system in which energy is alternately added to and removed from the system during a periodic motion called the limit cycle [12]. The nonlinear oscillatory motion of a unit-mass particle under an external stochastic force is described by the following mathematical model:

$\ddot{x}\left(t\right)-\left(\zeta -\epsilon {x}^{2}\left(t\right)\right)\dot{x}\left(t\right)+\beta x\left(t\right)=\lambda \dot{W}\left(t\right),\quad x\left(0\right)=0,\ \dot{x}\left(0\right)=b.$ (2.1)

where the oscillation behavior of the SVDP Equation (2.1) is affected by three forces:

1) The damping force $\left(\zeta -\epsilon {x}^{2}\right)\dot{x}$.

2) The external (stochastic) force $\lambda \dot{W}\left(t\right)$.

3) The spring force $-\beta x$.

Here $x\left(t\right)$ is the displacement, $\zeta $ is the linear damping coefficient, $\epsilon $ is the nonlinear damping coefficient, $\lambda $ is the forcing intensity, $\dot{W}\left(t\right)$ is the Gaussian white noise, and $b,\beta $ are deterministic coefficients. The three forces act in the direction opposite to the displacement.

Equation (2.1) can be converted into a system of first order stochastic differential equations as follows:

$\begin{array}{l}\dot{x}\left(t\right)=y\left(t\right),\\ \dot{y}\left(t\right)-\left(\zeta -\epsilon {x}^{2}\left(t\right)\right)y\left(t\right)+\beta x\left(t\right)=\lambda \dot{W}\left(t\right),\quad x\left(0\right)=0,\ y\left(0\right)=b.\end{array}$ (2.2)
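As a baseline before applying the spectral techniques, system (2.2) can be integrated pathwise with the Euler-Maruyama scheme. The sketch below is illustrative only; the coefficient values, step sizes, and the function name `svdp_paths` are assumptions, not those used in the simulations of Sections 5-6:

```python
import math
import random

def svdp_paths(zeta=1.0, eps=1.0, beta=1.0, lam=0.2, b=1.0,
               T=20.0, n_steps=10000, n_paths=50, seed=0):
    """Euler-Maruyama sample paths of system (2.2):
    dx = y dt,  dy = ((zeta - eps*x^2) y - beta*x) dt + lam dW."""
    rng = random.Random(seed)
    dt = T / n_steps
    sq_dt = math.sqrt(dt)
    finals = []
    for _ in range(n_paths):
        x, y = 0.0, b                          # initial conditions x(0)=0, y(0)=b
        for _ in range(n_steps):
            dW = rng.gauss(0.0, sq_dt)         # Wiener increment ~ N(0, dt)
            x, y = (x + y * dt,
                    y + ((zeta - eps * x * x) * y - beta * x) * dt + lam * dW)
        finals.append(x)
    return finals

xs = svdp_paths()
mean_x = sum(xs) / len(xs)                     # Monte Carlo estimate of E[x(T)]
```

Averaging many such paths gives the kind of Monte Carlo reference statistics against which the WCE and WHE results are compared later.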

Existence and Uniqueness Theorem

Suppose we have the following stochastic differential equation system,

$\begin{array}{l}\dot{x}\left(t\right)={f}_{1}\left(t,x,y\right),\\ \dot{y}\left(t\right)={f}_{2}\left(t,x,y\right)+{g}_{2}\left(t,x,y\right)\cdot \dot{W}\left(t\right),\quad x\left(0\right)=0,\ y\left(0\right)=b.\end{array}$

where ${W}_{t}$ is a one-dimensional Wiener process and $b$ is a constant. If the real-valued functions ${f}_{i}\left(t,x,y\right)$ and ${g}_{i}\left(t,x,y\right)$, $i=1,2$, are defined and measurable on $\left[{t}_{0},T\right]\times {R}^{2}$ and have the following properties [30] [31], namely that there exists a constant $\mathcal{K}$ such that:

a) (Lipschitz condition) for all $t\in \left[{t}_{0},T\right],\text{\hspace{0.17em}}{x}_{1},{x}_{2}\in R,\text{\hspace{0.17em}}{y}_{1},{y}_{2}\in R$

$\begin{array}{l}\left|{f}_{1}\left(t,{x}_{1},y\right)-{f}_{1}\left(t,{x}_{2},y\right)\right|+\left|{g}_{1}\left(t,{x}_{1},y\right)-{g}_{1}\left(t,{x}_{2},y\right)\right|\le \mathcal{K}\left|{x}_{1}-{x}_{2}\right|,\\ \left|{f}_{2}\left(t,x,{y}_{1}\right)-{f}_{2}\left(t,x,{y}_{2}\right)\right|+\left|{g}_{2}\left(t,x,{y}_{1}\right)-{g}_{2}\left(t,x,{y}_{2}\right)\right|\le \mathcal{K}\left|{y}_{1}-{y}_{2}\right|,\end{array}$ (2.3)

b) (Restriction on growth) for all $t\in \left[{t}_{0},T\right]$ and $x,y\in R$,

$\begin{array}{l}{\left|{f}_{1}\left(t,x,y\right)\right|}^{2}+{\left|{g}_{1}\left(t,x,y\right)\right|}^{2}\le {\mathcal{K}}^{2}\left(1+{\left|x\right|}^{2}\right),\\ {\left|{f}_{2}\left(t,x,y\right)\right|}^{2}+{\left|{g}_{2}\left(t,x,y\right)\right|}^{2}\le {\mathcal{K}}^{2}\left(1+{\left|y\right|}^{2}\right),\end{array}$ (2.4)

then the system has a unique ${R}^{2}$-valued solution on $\left[{t}_{0},T\right]$ with probability one. Checking conditions (2.3) and (2.4) for the functions ${f}_{i}\left(t,x,y\right)$ and ${g}_{i}\left(t,x,y\right)$, $i=1,2$, of system (2.2) with respect to the variables $x,y$, we find that for $i=1$ both conditions are satisfied.

Checking condition (2.3) for $i=2$, that is,

$\left|{f}_{2}\left(t,x,{y}_{1}\right)-{f}_{2}\left(t,x,{y}_{2}\right)\right|\le \mathcal{K}\left|{y}_{1}-{y}_{2}\right|,$

we get $\left|\left(\zeta -\epsilon {x}^{2}\right){y}_{1}-\beta x-\left(\zeta -\epsilon {x}^{2}\right){y}_{2}+\beta x\right|=\left|\zeta \left({y}_{1}-{y}_{2}\right)-\epsilon {x}^{2}\left({y}_{1}-{y}_{2}\right)\right|$, and $\left|\zeta \left({y}_{1}-{y}_{2}\right)-\epsilon {x}^{2}\left({y}_{1}-{y}_{2}\right)\right|\le \left|{y}_{1}-{y}_{2}\right|\left|\zeta -\epsilon {x}^{2}\right|\le \mathcal{K}\left|{y}_{1}-{y}_{2}\right|$; hence the condition is satisfied provided that

$\left|\zeta -\epsilon {x}^{2}\right|\le \mathcal{K},$ (2.5)

which means that $x\left(t\right)$ should be a second-order (finite-variance) process.

Furthermore, the derivatives of the functions $f,g$ are:

$\begin{array}{l}\frac{\partial {f}_{1}}{\partial x}=0,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\frac{\partial {f}_{1}}{\partial y}=1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\frac{\partial {f}_{2}}{\partial x}=-2\epsilon xy-\beta ,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\frac{\partial {f}_{2}}{\partial y}=\zeta -\epsilon {x}^{2},\\ \frac{\partial {g}_{1}}{\partial x}=\frac{\partial {g}_{1}}{\partial y}=\frac{\partial {g}_{2}}{\partial x}=\frac{\partial {g}_{2}}{\partial y}=0.\end{array}$ (2.6)

Then the functions ${f}_{i}$, $i=1,2$, are locally Lipschitz continuous in $\left(x,y\right)$.
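The partial derivatives in (2.6) can be confirmed numerically; the sketch below uses central differences at an arbitrary test point (the coefficient values are assumed for illustration):

```python
def f2(x, y, zeta=0.5, eps=0.3, beta=1.2):
    # drift of the second equation of (2.2): (zeta - eps*x^2)*y - beta*x
    return (zeta - eps * x * x) * y - beta * x

def central_diff(g, u, h=1e-6):
    # symmetric difference quotient; essentially exact here because
    # f2 is a polynomial of degree <= 2 in each variable
    return (g(u + h) - g(u - h)) / (2.0 * h)

x0, y0 = 0.7, -1.1
zeta, eps, beta = 0.5, 0.3, 1.2
df2_dx = central_diff(lambda x: f2(x, y0, zeta, eps, beta), x0)
df2_dy = central_diff(lambda y: f2(x0, y, zeta, eps, beta), y0)

assert abs(df2_dx - (-2.0 * eps * x0 * y0 - beta)) < 1e-6   # matches (2.6)
assert abs(df2_dy - (zeta - eps * x0 * x0)) < 1e-6          # matches (2.6)
```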

By checking the growth condition (2.4), for $i=2$, consider the following:

${\left[\left(\zeta -\epsilon {x}^{2}\right)y-\beta x\right]}^{2}+{\lambda}^{2}\le \mathcal{K}\left(1+{y}^{2}\right),$

then we get $\left[\left(\zeta -\epsilon {x}^{2}\right)y-\beta x\right]\le {\left[\mathcal{K}\left(1+{y}^{2}\right)-{\lambda}^{2}\right]}^{\frac{1}{2}}$, which results in

$y\le \frac{{\left[\mathcal{K}\left(1+{y}^{2}\right)-{\lambda}^{2}\right]}^{\frac{1}{2}}+\beta x}{\zeta -\epsilon {x}^{2}},$ (2.7)

where $x$ represents the solution of the stochastic van der Pol Equation (2.1) and $y=\dot{x}$ is the velocity. Inequality (2.7) means that the velocity should be bounded, with a bound inversely proportional to the displacement. In fact, the existence and boundedness of the solution depend on the damping coefficients $\zeta $ and $\epsilon $, and the solution is expected to be restricted to a local region determined by condition (2.7). Furthermore, if $x$ is larger than $\sqrt{\frac{\zeta}{\epsilon}}$, the nonlinear term is dominant and the damping force is positive, whereas if $x$ is smaller than $\sqrt{\frac{\zeta}{\epsilon}}$, the damping force is negative because the nonlinear term becomes negligible. On the other hand, to prove the uniqueness and stability of the limit cycle for the mean solution of system (2.2), we check the conditions of Liénard’s theorem [13]. This theorem is used to prove that the mean solution of the SVDP equation has a unique and stable limit cycle.

Theorem 2.3

Second-order differential equations of the following form are called Liénard equations:

$\ddot{x}+Z\left(x\right)\dot{x}+Q\left(x\right)=0,$ (2.8)

This equation is a generalization of the van der Pol equation $\ddot{x}-\left(\zeta -\epsilon {x}^{2}\right)\dot{x}+\beta x=0$, where $Z\left(x\right)=-\left(\zeta -\epsilon {x}^{2}\right)$ and $Q\left(x\right)=\beta x$; it is equivalent to the system

$\begin{array}{l}\dot{x}=\upsilon ,\\ \dot{\upsilon}=-Z\left(x\right)\upsilon -Q\left(x\right).\end{array}$ (2.9)

Liénard’s theorem states that system (2.9) has a unique, stable limit cycle under the following conditions on Z and Q:

i) $Z\left(x\right)$ and $Q\left(x\right)$ are continuously differentiable for all x.

ii) $Z\left(x\right)$ is an even function; $Z\left(-x\right)=Z\left(x\right)$ for all x.

iii) $Q\left(x\right)>0$ for all x.

iv) $Q\left(x\right)$ is an odd function; $Q\left(-x\right)=-Q\left(x\right)$ for all x.

v) $H\left(x\right)={\int}_{0}^{x}Z\left(u\right)\text{d}u$ is an odd function with exactly one positive zero at $x=c$; it is positive for $x>c$, negative for $0<x<c$, and $H\left(x\right)\to \infty $ as $x\to \infty $. Under these conditions, system (2.9) has a unique and stable limit cycle in the phase plane. For the deterministic system of Equation (2.1), i.e. at $\lambda =0$, conditions (i)-(iv) are satisfied, whereas condition (v) is satisfied with $c=\sqrt{\frac{3\zeta}{\epsilon}}$. So, the mean solution of the SVDP system (2.2) has a unique and stable limit cycle in the phase plane surrounding the origin. This result will be shown in Section 5.
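For the deterministic case, condition (v) can be checked directly: with $Z\left(u\right)=-\left(\zeta -\epsilon {u}^{2}\right)$ we get $H\left(x\right)=-\zeta x+\epsilon {x}^{3}/3$, an odd function whose only positive zero is $c=\sqrt{3\zeta /\epsilon}$. A minimal numerical verification, with the sample values $\zeta =1$, $\epsilon =0.5$ assumed for illustration:

```python
import math

def H(x, zeta=1.0, eps=0.5):
    # H(x) = integral_0^x Z(u) du with Z(u) = -(zeta - eps*u^2)
    return -zeta * x + eps * x ** 3 / 3.0

zeta, eps = 1.0, 0.5
c = math.sqrt(3.0 * zeta / eps)                   # predicted positive zero of H

assert abs(H(c, zeta, eps)) < 1e-12               # H(c) = 0
assert H(0.5 * c, zeta, eps) < 0.0                # negative on (0, c)
assert H(2.0 * c, zeta, eps) > 0.0                # positive for x > c
assert abs(H(-1.3, zeta, eps) + H(1.3, zeta, eps)) < 1e-12   # odd function
```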

3. The Wiener-Chaos Expansion Technique

The WCE is now applied to the SVDP Equation (2.1). Referring to [25] [26] [27] [28], we assume that the solution of Equation (2.1) can be expanded as:

$x\left(t;w\right)={\displaystyle \underset{\alpha \in \Im}{\sum}{x}_{\alpha}\left(t\right){T}_{\alpha}\left(\xi \right)},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{x}_{\alpha}=E\left[x{T}_{\alpha}\left(\xi \right)\right],$ (3.1)

where the multi-index set is $\Im =\left\{\alpha =\left({\alpha}_{n},n\ge 1\right),{\alpha}_{n}\in \left\{0,1,2,\cdots \right\};\left|\alpha \right|={\sum}_{n=1}^{\infty}{\alpha}_{n}<\infty \right\}$; the ${T}_{\alpha}\left(\xi \right)={\prod}_{i=1}^{\infty}{H}_{{\alpha}_{i}}\left({\xi}_{i}\right)$ are called Wick polynomials of order $\alpha $; the ${\xi}_{i}$ are standard Gaussian random variables (GRVs) and ${H}_{n}\left(x\right)$ is the normalized $n$th-order Hermite polynomial. The expansion of the Wiener process $W\left(t\right)$ is:

$W\left(t\right)={\displaystyle \underset{i=1}{\overset{\infty}{\sum}}{\xi}_{i}{\displaystyle \underset{0}{\overset{t}{\int}}{m}_{i}\left(s\right)\text{d}s}},$ (3.2)

where the ${m}_{i}\left(t\right)$ form an orthonormal basis of the Hilbert space ${L}^{2}\left(\left[0,T\right]\right),T<\infty $. The mean-square error of truncating this expansion at $K$ terms, for all $t\le T$ [25] [26] [27], is:

$E{\left[W\left(t\right)-{\displaystyle \underset{i=1}{\overset{K}{\sum}}{\xi}_{i}{\displaystyle \underset{0}{\overset{t}{\int}}{m}_{i}\left(s\right)\text{d}s}}\right]}^{2}\le \frac{T}{\pi K}.$ (3.3)

The basis functions ${m}_{i}\left(t\right)$ can be chosen as the trigonometric functions [26] [27]:

${m}_{1}\left(t\right)=\sqrt{\frac{1}{T}},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{m}_{i}\left(t\right)=\sqrt{\frac{2}{T}}\mathrm{cos}\left(\frac{\left(i-1\right)\pi t}{T}\right),\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}i>1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}0\le t\le T.$
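This choice can be checked numerically. The sketch below, with the values $T=2$ and $K=50$ assumed for illustration, verifies the orthonormality of the basis by quadrature and confirms that the truncated second moment ${\sum}_{i=1}^{K}{\left({\int}_{0}^{t}{m}_{i}\left(s\right)\text{d}s\right)}^{2}$ approximates $E\left[{W}^{2}\left(t\right)\right]=t$ within the bound (3.3):

```python
import math

T, K = 2.0, 50

def m(i, t):
    # trigonometric basis on [0, T]
    if i == 1:
        return math.sqrt(1.0 / T)
    return math.sqrt(2.0 / T) * math.cos((i - 1) * math.pi * t / T)

# orthonormality check by midpoint-rule quadrature
n = 20000
dt = T / n
grid = [(k + 0.5) * dt for k in range(n)]

def inner(i, j):
    return sum(m(i, t) * m(j, t) for t in grid) * dt

assert abs(inner(3, 3) - 1.0) < 1e-3        # unit norm
assert abs(inner(2, 5)) < 1e-3              # orthogonality

# closed-form integrals of the basis functions on [0, t]
def int_m(i, t):
    if i == 1:
        return t / math.sqrt(T)
    k = i - 1
    return math.sqrt(2.0 / T) * (T / (k * math.pi)) * math.sin(k * math.pi * t / T)

t = 1.0
var_K = sum(int_m(i, t) ** 2 for i in range(1, K + 1))
assert abs(var_K - t) <= T / (math.pi * K)  # error bound (3.3)
```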

The application of the WCE to a linear problem is considerably easier than to a nonlinear one. The following theorem enables us to handle the cubic nonlinearity of Equation (2.1) using the WCE.

Theorem 3.1. Assume X and Y are two processes having Wiener chaos expansions of the form (3.1). If $E\left[{\left|{X}^{2}Y\right|}^{2}\right]<\infty $, then the product ${X}^{2}Y$ has the Wiener chaos expansion

${X}^{2}Y={\displaystyle \underset{\gamma \in \Im}{\sum}{\displaystyle \underset{0\le \theta \le \gamma}{\sum}{\displaystyle \underset{r\in \Im}{\sum}{\displaystyle \underset{0\le \beta \le \theta +\mu}{\sum}{\displaystyle \underset{\mu \in \Im}{\sum}C\left(\theta +\mu ,\beta ,r\right)C\left(\gamma ,\theta ,\mu \right){x}_{\theta +\mu -\beta +r}{x}_{\beta +r}{y}_{\gamma -\theta +\mu}{T}_{\gamma}}}}}},$

where $C\left(\gamma ,\theta ,\mu \right)={\left[\left(\begin{array}{c}\gamma \\ \theta \end{array}\right)\left(\begin{array}{c}\theta +\mu \\ \mu \end{array}\right)\left(\begin{array}{c}\gamma -\theta +\mu \\ \mu \end{array}\right)\right]}^{\frac{1}{2}}$

Proof:

From [25], we start with ${X}^{2}\left(t\right)={\sum}_{\theta \in \Im}{\sum}_{r\in \Im}{\sum}_{0\le \beta \le \theta}C\left(\theta ,\beta ,r\right){x}_{\theta -\beta +r}\left(t\right){x}_{\beta +r}\left(t\right){T}_{\theta}$ and let $Y={\sum}_{\alpha \in \Im}{y}_{\alpha}{T}_{\alpha}$; then we have

$\begin{array}{c}{X}^{2}Y={\displaystyle \underset{\theta \in \Im}{\sum}{\displaystyle \underset{r\in \Im}{\sum}{\displaystyle \underset{0\le \beta \le \theta}{\sum}C\left(\theta ,\beta ,r\right){x}_{\theta -\beta +r}\left(t\right){x}_{\beta +r}\left(t\right){T}_{\theta}}}{\displaystyle \underset{\alpha \in \Im}{\sum}{y}_{\alpha}}{T}_{\alpha}}\\ ={\displaystyle \underset{\theta \in \Im}{\sum}{\displaystyle \underset{r\in \Im}{\sum}{\displaystyle \underset{0\le \beta \le \theta}{\sum}{\displaystyle \underset{\alpha \in \Im}{\sum}C\left(\theta ,\beta ,r\right){x}_{\theta -\beta +r}{x}_{\beta +r}{y}_{\alpha}}{\displaystyle \underset{\mu \le \theta \wedge \alpha}{\sum}B\left(\theta ,\alpha ,\mu \right){T}_{\theta +\alpha -2\mu}}}}},\end{array}$

where $B\left(\theta ,\alpha ,\mu \right)={\left[\left(\begin{array}{c}\theta \\ \alpha \end{array}\right)\left(\begin{array}{c}\alpha +\mu \\ \mu \end{array}\right)\left(\begin{array}{c}\theta -\alpha +\mu \\ \mu \end{array}\right)\right]}^{\frac{1}{2}}$, then

${X}^{2}Y={\sum}_{\alpha \in \Im}{\sum}_{\theta \in \Im}{\sum}_{r\in \Im}{\sum}_{0\le \beta \le \theta}{\sum}_{\mu \le \theta \wedge \alpha}C\left(\theta ,\beta ,r\right){\left[\left(\begin{array}{c}\theta \\ \mu \end{array}\right)\left(\begin{array}{c}\alpha \\ \mu \end{array}\right)\mu !\frac{\sqrt{\left(\theta +\alpha -2\mu \right)!}}{\sqrt{\theta !\alpha !}}\right]}^{\frac{1}{2}}{x}_{\theta -\beta +r}{x}_{\beta +r}{y}_{\alpha}{T}_{\theta +\alpha -2\mu}.$

Let $\alpha -\mu ={\alpha}^{\prime}$ and $\theta -\mu ={\theta}^{\prime}$; then the constraint $\beta \le \theta $ becomes $0\le \beta \le {\theta}^{\prime}+\mu $, and the above summation can be rewritten as:

${X}^{2}Y={\sum}_{{\alpha}^{\prime}\in \Im}{\sum}_{{\theta}^{\prime}\in \Im}{\sum}_{r\in \Im}{\sum}_{0\le \beta \le {\theta}^{\prime}+\mu}{\sum}_{\mu \in \Im}C\left({\theta}^{\prime}+\mu ,\beta ,r\right){\left[\left(\begin{array}{c}{\theta}^{\prime}+\mu \\ \mu \end{array}\right)\left(\begin{array}{c}{\alpha}^{\prime}+\mu \\ \mu \end{array}\right)\mu !\frac{\sqrt{\left({\theta}^{\prime}+{\alpha}^{\prime}\right)!}}{\sqrt{\left({\theta}^{\prime}+\mu \right)!\left({\alpha}^{\prime}+\mu \right)!}}\right]}^{\frac{1}{2}}{x}_{{\theta}^{\prime}+\mu -\beta +r}{x}_{\beta +r}{y}_{{\alpha}^{\prime}+\mu}{T}_{{\theta}^{\prime}+{\alpha}^{\prime}}.$ (3.4)

For simplicity, relabel ${\alpha}^{\prime}=\alpha ,{\theta}^{\prime}=\theta $ and set $\alpha +\theta =\gamma $; then $\alpha =\gamma -\theta \ge 0$ and $\theta \ge 0$, so formula (3.4) becomes:

${X}^{2}Y={\sum}_{\gamma \in \Im}{\sum}_{0\le \theta \le \gamma}{\sum}_{r\in \Im}{\sum}_{0\le \beta \le \theta +\mu}{\sum}_{\mu \in \Im}C\left(\theta +\mu ,\beta ,r\right){\left[\left(\begin{array}{c}\theta +\mu \\ \mu \end{array}\right)\left(\begin{array}{c}\gamma -\theta +\mu \\ \mu \end{array}\right)\mu !\frac{\sqrt{\gamma !}}{\sqrt{\left(\theta +\mu \right)!\left(\gamma -\theta +\mu \right)!}}\right]}^{\frac{1}{2}}{x}_{\theta +\mu -\beta +r}{x}_{\beta +r}{y}_{\gamma -\theta +\mu}{T}_{\gamma},$

which can be written as:

${X}^{2}Y={\displaystyle \underset{\gamma \in \Im}{\sum}{\displaystyle \underset{0\le \theta \le \gamma}{\sum}{\displaystyle \underset{r\in \Im}{\sum}{\displaystyle \underset{0\le \beta \le \theta +\mu}{\sum}{\displaystyle \underset{\mu \in \Im}{\sum}C\left(\theta +\mu ,\beta ,r\right)C\left(\gamma ,\theta ,\mu \right){x}_{\theta +\mu -\beta +r}{x}_{\beta +r}{y}_{\gamma -\theta +\mu}{T}_{\gamma}}}}}}.$

This completes the proof.

From theorem (3.1), it follows that:

${X}^{3}={\displaystyle \underset{\gamma \in \Im}{\sum}{\displaystyle \underset{0\le \theta \le \gamma}{\sum}{\displaystyle \underset{r\in \Im}{\sum}{\displaystyle \underset{0\le \beta \le \theta +\mu}{\sum}{\displaystyle \underset{\mu \in \Im}{\sum}C\left(\theta +\mu ,\beta ,r\right)C\left(\gamma ,\theta ,\mu \right){x}_{\theta +\mu -\beta +r}{x}_{\beta +r}{x}_{\gamma -\theta +\mu}{T}_{\gamma}}}}}}.$ (3.5)

Plugging formula (3.5) into Equation (2.1), we arrive at

$\begin{array}{l}{\ddot{x}}_{\alpha}\left(t\right)-\left(\zeta {\dot{x}}_{\alpha}\left(t\right)-\epsilon {\sum}_{0\le \theta \le \alpha}{\sum}_{p\in \Im}{\sum}_{0\le \beta \le \theta +\mu}{\sum}_{\mu \in \Im}C\left(\theta +\mu ,\beta ,p\right)C\left(\alpha ,\theta ,\mu \right){x}_{\theta +\mu -\beta +p}{x}_{\beta +p}{\dot{x}}_{\alpha -\theta +\mu}\right)\\ +\beta {x}_{\alpha}\left(t\right)=\lambda {\sum}_{i=1}^{\infty}{m}_{i}\left(t\right){I}_{\left\{{\alpha}_{j}={\delta}_{ij}\right\}},\quad {x}_{\alpha}\left(0\right)=0,\quad {\dot{x}}_{\alpha}\left(0\right)=\left\{\begin{array}{ll}b & \alpha =0,\\ 0 & \alpha \ne 0.\end{array}\right.\end{array}$ (3.6)

Consequently, Equation (3.6) can be rewritten as the following system:

$\begin{array}{l}{\dot{x}}_{\alpha}\left(t\right)={y}_{\alpha}\left(t\right),\\ {\dot{y}}_{\alpha}\left(t\right)-\left(\zeta {y}_{\alpha}\left(t\right)-\epsilon {\sum}_{0\le \theta \le \alpha}{\sum}_{p\in \Im}{\sum}_{0\le \beta \le \theta +\mu}{\sum}_{\mu \in \Im}C\left(\theta +\mu ,\beta ,p\right)C\left(\alpha ,\theta ,\mu \right){x}_{\theta +\mu -\beta +p}{x}_{\beta +p}{y}_{\alpha -\theta +\mu}\right)\\ +\beta {x}_{\alpha}\left(t\right)=\lambda {\sum}_{i=1}^{\infty}{m}_{i}\left(t\right){I}_{\left\{{\alpha}_{j}={\delta}_{ij}\right\}},\quad {x}_{\alpha}\left(0\right)=0,\quad {y}_{\alpha}\left(0\right)=\left\{\begin{array}{ll}b & \alpha =0,\\ 0 & \alpha \ne 0.\end{array}\right.\end{array}$ (3.7)

Truncating system (3.7) at polynomial chaos order N with K GRVs, the set $\Im $ is reduced to the multi-index set ${\Im}_{K,N}$ defined as:

${\Im}_{K,N}=\left\{\alpha =\left({\alpha}_{n},1\le n\le K\right),{\alpha}_{n}\in \left\{0,1,\cdots \right\};\left|\alpha \right|={\displaystyle \underset{n=1}{\overset{K}{\sum}}{\alpha}_{n}\le N}\right\}.$

Therefore, the number of the propagators can be calculated as [25]:

$\chi ={\displaystyle \underset{n=0}{\overset{N}{\sum}}\left(\begin{array}{c}K+n-1\\ n\end{array}\right)}=\frac{\left(K+N\right)\text{!}}{K\text{!}N\text{!}}.$

Accordingly, the number of propagators for system (3.7) equals $2\chi $. So, for $\left(N,K\right)=\left(2,5\right),\left(2,8\right),\left(3,3\right)$ and $\left(3,5\right)$, the number of DDEs (propagators) for system (3.7) is 42, 90, 40 and 112, respectively. These DDEs can be solved using the fifth-order Runge-Kutta method. The error of the truncated WCE for system (2.2), which satisfies the Lipschitz conditions (2.3) and (2.4), can be estimated in terms of both K and N, similarly to [27], as

${\rm O}\left(\frac{{\left({B}^{2}T\right)}^{N+1}}{\left(N+1\right)!}+\frac{{T}^{4}}{K}\right)$ (3.8)

where B depends only on K. Here N regulates the quality of the chaos expansion itself, while the number K of basis functions characterizes the approximation within each chaos. Furthermore, the estimated error of the technique depends on the time interval $\left[0,T\right]$ and the random forcing coefficient $\lambda $, as well as on the coefficients of the damping force. We also found that the smaller the time interval, the faster the convergence. For long-time simulations, such as those of the van der Pol equation, the time interval can be divided into small subintervals of size $\Delta t$; the error for each step then becomes $o\left({\left(\Delta t\right)}^{\frac{3}{2}}\right)$. The mean and the variance of the random solution using the WCE can be calculated as:

$\begin{array}{l}E\left[x\left(t\right)\right]={x}_{\alpha =0}\left(t\right),\\ \mathrm{var}\left[x\left(t\right)\right]={\displaystyle \underset{\alpha \in {\Im}_{K,N},\alpha \ne 0}{\sum}{\left|{x}_{\alpha}\left(t\right)\right|}^{2}}.\end{array}$ (3.9)
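This bookkeeping is easy to check in code. The sketch below reproduces the propagator counts and shows how the mean and variance (3.9) are read off from a set of truncated WCE coefficients; the `wce_stats` helper and the toy coefficient dictionary are hypothetical illustrations, not output of the actual solver:

```python
from itertools import product
from math import factorial

def chi(K, N):
    # chi = (K+N)!/(K!N!): number of multi-indices of length K with |alpha| <= N
    return factorial(K + N) // (factorial(K) * factorial(N))

# propagator counts quoted for system (3.7): 2*chi DDEs
counts = [2 * chi(K, N) for (N, K) in [(2, 5), (2, 8), (3, 3), (3, 5)]]
assert counts == [42, 90, 40, 112]

def multi_indices(K, N):
    # explicit enumeration of the truncated set I_{K,N}
    return [a for a in product(range(N + 1), repeat=K) if sum(a) <= N]

assert len(multi_indices(3, 3)) == chi(3, 3)

def wce_stats(coeffs):
    # coeffs: {alpha (tuple): x_alpha} at a fixed time t; implements (3.9)
    zero = tuple([0] * len(next(iter(coeffs))))
    mean = coeffs[zero]
    var = sum(v * v for a, v in coeffs.items() if a != zero)
    return mean, var

m_, v_ = wce_stats({(0, 0): 2.0, (1, 0): 3.0, (0, 1): 1.0})  # toy coefficients
```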

4. The Wiener-Hermite Expansion Technique

The solution of Equation (2.1) can be expanded in terms of second-order Wiener-Hermite functionals as [17] [18] [19]:

$x\left(t;\omega \right)={x}^{\left(0\right)}\left(t\right)+{\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{x}^{\left(1\right)}\left(t;{t}_{1}\right){H}^{\left(1\right)}\left({t}_{1}\right)\text{d}{t}_{1}}+{\displaystyle {\int}_{-\infty}^{\infty}{\displaystyle {\int}_{-\infty}^{\infty}{x}^{\left(2\right)}\left(t;{t}_{1},{t}_{2}\right){H}^{\left(2\right)}\left({t}_{1},{t}_{2}\right)\text{d}{t}_{1}}}\text{d}{t}_{2},$ (4.1)

where ${H}^{\left(i\right)}\left({t}_{1},\cdots ,{t}_{i}\right)$ is the $i$th-order time-independent Wiener-Hermite functional. The zero- and first-order terms ${x}^{\left(0\right)}$ and ${x}^{\left(1\right)}$ represent the Gaussian part of the solution and are sufficient only for Gaussian processes. The second-order term ${x}^{\left(2\right)}$ accounts for the non-Gaussian part of the solution. Higher-order terms are less dominant than the second-order one and hence can be neglected, as in the current work.

Applying the second-order WHE to Equation (2.1) gives:

$\begin{array}{l}{\ddot{x}}^{\left(0\right)}+{\int}_{-\infty}^{\infty}{\ddot{x}}^{\left(1\right)}{H}^{\left(1\right)}\text{d}{t}_{1}+{\int}_{{R}^{2}}{\ddot{x}}^{\left(2\right)}{H}^{\left(2\right)}\text{d}{\tau}_{2}-\zeta \left({\dot{x}}^{\left(0\right)}+{\int}_{-\infty}^{\infty}{\dot{x}}^{\left(1\right)}{H}^{\left(1\right)}\text{d}{t}_{1}+{\int}_{{R}^{2}}{\dot{x}}^{\left(2\right)}{H}^{\left(2\right)}\text{d}{\tau}_{2}\right)\\ +\epsilon \left({\dot{x}}^{\left(0\right)}+{\int}_{-\infty}^{\infty}{\dot{x}}^{\left(1\right)}{H}^{\left(1\right)}\text{d}{t}_{1}+{\int}_{{R}^{2}}{\dot{x}}^{\left(2\right)}{H}^{\left(2\right)}\text{d}{\tau}_{2}\right){\left({x}^{\left(0\right)}+{\int}_{-\infty}^{\infty}{x}^{\left(1\right)}{H}^{\left(1\right)}\text{d}{t}_{1}+{\int}_{{R}^{2}}{x}^{\left(2\right)}{H}^{\left(2\right)}\text{d}{\tau}_{2}\right)}^{2}\\ +\beta \left({x}^{\left(0\right)}+{\int}_{-\infty}^{\infty}{x}^{\left(1\right)}{H}^{\left(1\right)}\text{d}{t}_{1}+{\int}_{{R}^{2}}{x}^{\left(2\right)}{H}^{\left(2\right)}\text{d}{\tau}_{2}\right)=\lambda \dot{W}\left(t\right),\end{array}$ (4.2)

where $\text{d}{\tau}_{i}=\text{d}{t}_{1}\text{d}{t}_{2}\cdots \text{d}{t}_{i}$ and ${\int}_{{R}^{i}}$ is an $i$-dimensional integral over the disposable variables ${t}_{1},{t}_{2},\cdots ,{t}_{i}$, $i\ge 1$. Taking the ensemble averages together with the statistical properties of the WHE functionals for Equation (4.2), a set of deterministic integro-differential equations is obtained for the deterministic kernels ${x}^{\left(l\right)}\left(t;{t}_{1},\cdots ,{t}_{l}\right),l=0,1,2$. From the orthogonality of the Hermite functionals [17] [18], we get the following deterministic differential equations:

$\begin{array}{l}{\ddot{x}}^{\left(0\right)}-\zeta {\dot{x}}^{\left(0\right)}+\epsilon \Big({\dot{x}}^{\left(0\right)}{\left({x}^{\left(0\right)}\right)}^{2}+{\dot{x}}^{\left(0\right)}{\int}_{R}{\left({x}^{\left(1\right)}\right)}^{2}\text{d}{t}_{1}+2{\dot{x}}^{\left(0\right)}{\int}_{{R}^{2}}{\left({x}^{\left(2\right)}\right)}^{2}\text{d}{\tau}_{2}\\ +2{x}^{\left(0\right)}{\int}_{R}{\dot{x}}^{\left(1\right)}{x}^{\left(1\right)}\text{d}{t}_{1}+4{\int}_{{R}^{2}}{\dot{x}}^{\left(1\right)}\left({t}_{1}\right){x}^{\left(1\right)}\left({t}_{2}\right){x}^{\left(2\right)}\text{d}{\tau}_{2}\\ +2{\int}_{{R}^{2}}{x}^{\left(1\right)}\left({t}_{1}\right){x}^{\left(1\right)}\left({t}_{2}\right){\dot{x}}^{\left(2\right)}\left({t}_{1},{t}_{2}\right)\text{d}{\tau}_{2}+4{x}^{\left(0\right)}{\int}_{{R}^{2}}{\dot{x}}^{\left(2\right)}{x}^{\left(2\right)}\text{d}{\tau}_{2}\\ +8{\int}_{{R}^{3}}{x}^{\left(2\right)}\left({t}_{1},{t}_{2}\right){x}^{\left(2\right)}\left({t}_{2},{t}_{3}\right){\dot{x}}^{\left(2\right)}\left({t}_{1},{t}_{3}\right)\text{d}{\tau}_{3}\Big)\\ +\beta {x}^{\left(0\right)}=0,\quad {x}^{\left(0\right)}\left(0\right)=0,\ {\dot{x}}^{\left(0\right)}\left(0\right)=b.\end{array}$ (4.3)

$\begin{array}{l}{\ddot{x}}^{\left(1\right)}-\zeta {\dot{x}}^{\left(1\right)}+\epsilon \Big[2{\dot{x}}^{\left(0\right)}{x}^{\left(0\right)}{x}^{\left(1\right)}+4{\dot{x}}^{\left(0\right)}{x}^{\left(1\right)}{\int}_{R}{x}^{\left(2\right)}\text{d}{t}_{2}+{\left({x}^{\left(0\right)}\right)}^{2}{\dot{x}}^{\left(1\right)}+{\dot{x}}^{\left(1\right)}{\int}_{R}{\left({x}^{\left(1\right)}\right)}^{2}\text{d}{t}_{1}\\ +2{x}^{\left(1\right)}{\int}_{R}{\dot{x}}^{\left(1\right)}{x}^{\left(1\right)}\text{d}{t}_{1}+8{\int}_{{R}^{2}}{\dot{x}}^{\left(1\right)}\left({t}_{2}\right){x}^{\left(2\right)}\left({t}_{1},{t}_{3}\right){x}^{\left(2\right)}\left({t}_{2},{t}_{3}\right)\text{d}{t}_{2}\text{d}{t}_{3}+2{\dot{x}}^{\left(1\right)}{\int}_{{R}^{2}}{\left({x}^{\left(2\right)}\right)}^{2}\text{d}{\tau}_{2}\\ +4{x}^{\left(0\right)}{\int}_{R}{\dot{x}}^{\left(1\right)}{x}^{\left(2\right)}\text{d}{t}_{2}+4{x}^{\left(0\right)}{\int}_{R}{x}^{\left(1\right)}{\dot{x}}^{\left(2\right)}\text{d}{t}_{2}+4{x}^{\left(1\right)}{\int}_{{R}^{2}}{x}^{\left(2\right)}{\dot{x}}^{\left(2\right)}\text{d}{\tau}_{2}\\ +8{\int}_{{R}^{2}}{x}^{\left(1\right)}\left({t}_{2}\right){x}^{\left(2\right)}\left({t}_{2},{t}_{3}\right){\dot{x}}^{\left(2\right)}\left({t}_{1},{t}_{3}\right)\text{d}{t}_{2}\text{d}{t}_{3}+8{\int}_{{R}^{2}}{x}^{\left(1\right)}\left({t}_{2}\right){x}^{\left(2\right)}\left({t}_{1},{t}_{3}\right){\dot{x}}^{\left(2\right)}\left({t}_{2},{t}_{3}\right)\text{d}{t}_{2}\text{d}{t}_{3}\Big]\\ +\beta {x}^{\left(1\right)}\left(t;{t}_{1}\right)=\lambda \delta \left(t-{t}_{1}\right),\quad {x}^{\left(1\right)}\left(0;{t}_{1}\right)={\dot{x}}^{\left(1\right)}\left(0;{t}_{1}\right)=0.\end{array}$ (4.4)

$\begin{array}{l}2\ddot{x}^{(2)}-2\zeta \dot{x}^{(2)}+\epsilon \Big[2\dot{x}^{(0)}{x}^{(1)}({t}_{1})\,{x}^{(1)}({t}_{2})+8\dot{x}^{(0)}{\displaystyle \int_{R}}{x}^{(2)}({t}_{1},{t}_{3})\,{x}^{(2)}({t}_{2},{t}_{3})\text{d}{t}_{3}\\ +4{x}^{(0)}\dot{x}^{(0)}{x}^{(2)}+2{x}^{(0)}\dot{x}^{(1)}({t}_{1})\,{x}^{(1)}({t}_{2})+2{x}^{(0)}\dot{x}^{(1)}({t}_{2})\,{x}^{(1)}({t}_{1})+4{x}^{(2)}{\displaystyle \int_{R}}{x}^{(1)}\dot{x}^{(1)}\text{d}{t}_{1}\\ +4\dot{x}^{(1)}({t}_{2}){\displaystyle \int_{R}}{x}^{(1)}{x}^{(2)}\text{d}{t}_{2}+2\dot{x}^{(2)}{\displaystyle \int_{R}}{\left({x}^{(1)}\right)}^{2}\text{d}{t}_{1}+4\dot{x}^{(1)}({t}_{1}){\displaystyle \int_{R}}{x}^{(1)}{x}^{(2)}\text{d}{t}_{1}\\ +4{x}^{(1)}({t}_{2}){\displaystyle \int_{R}}\dot{x}^{(1)}{x}^{(2)}\text{d}{t}_{2}+4{x}^{(1)}({t}_{1}){\displaystyle \int_{R}}\dot{x}^{(1)}{x}^{(2)}\text{d}{t}_{1}+4{x}^{(1)}({t}_{1}){\displaystyle \int_{R}}{x}^{(1)}\dot{x}^{(2)}\text{d}{t}_{1}\\ +4{x}^{(1)}{\displaystyle \int_{R}}{x}^{(1)}\dot{x}^{(2)}\text{d}{t}_{1}+2{\left({x}^{(0)}(t)\right)}^{2}\dot{x}^{(2)}+4\dot{x}^{(2)}{\displaystyle \int_{{R}^{2}}}{\left({x}^{(2)}\right)}^{2}\text{d}{\tau}_{2}\\ +8{x}^{(2)}{\displaystyle \int_{{R}^{2}}}\dot{x}^{(2)}\dot{x}^{(2)}\text{d}{\tau}_{2}+16{\displaystyle \int_{{R}^{2}}}{x}^{(2)}({t}_{3},{t}_{4})\,\dot{x}^{(2)}({t}_{1},{t}_{4})\,\dot{x}^{(2)}({t}_{2},{t}_{3})\text{d}{t}_{3}\text{d}{t}_{4}\\ +16{\displaystyle \int_{{R}^{2}}}{x}^{(2)}({t}_{3},{t}_{4})\,\dot{x}^{(2)}({t}_{2},{t}_{4})\,\dot{x}^{(2)}({t}_{1},{t}_{3})\text{d}{t}_{3}\text{d}{t}_{4}\\ +16{\displaystyle \int_{{R}^{2}}}{x}^{(2)}({t}_{1},{t}_{3})\,\dot{x}^{(2)}({t}_{3},{t}_{4})\,\dot{x}^{(2)}({t}_{2},{t}_{4})\text{d}{t}_{3}\text{d}{t}_{4}\\ +8{x}^{(0)}{\displaystyle \int_{R}}\dot{x}^{(2)}({t}_{1},{t}_{3})\,{x}^{(2)}({t}_{2},{t}_{3})\text{d}{t}_{3}+8{x}^{(0)}{\displaystyle \int_{R}}\dot{x}^{(2)}({t}_{2},{t}_{3})\,{x}^{(2)}({t}_{1},{t}_{3})\text{d}{t}_{3}\Big]\\ +2\beta {x}^{(2)}(t;{t}_{1},{t}_{2})=0,\quad {x}^{(2)}(0;{t}_{1},{t}_{2})=\dot{x}^{(2)}(0;{t}_{1},{t}_{2})=0.\end{array}$ (4.5)

For solving the deterministic differential Equations (4.3)-(4.5), we can use a numerical method such as the modified Euler method, or the perturbation theory [19]. The perturbation technique has the disadvantage of a small convergence interval and hence is not applicable to many problems. Furthermore, the perturbation method is sensitive to problems involving long integration times and requires complicated computations for the higher-order equations. So it is a poor choice for solving the WHE system of the SVDP equation, although it is appropriate for many other NSDEs. Consequently, we will solve the resulting WHE system using the modified Euler method with error of $O\left({\left(\Delta t\right)}^{2}\right)$.
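As a minimal sketch (not the full coupled solver of (4.3)-(4.5)), the modified Euler (Heun) scheme can be illustrated on the zeroth-order kernel Equation (4.3) with the higher-order kernel couplings dropped, i.e. $\ddot{x}-\zeta \dot{x}+\epsilon \dot{x}{x}^{2}+\beta x=0$; the function name and parameter values below are illustrative:

```python
import numpy as np

def modified_euler_kernel0(zeta, eps, beta, b, T, dt):
    """Heun (modified Euler, error O(dt^2)) for the decoupled zeroth-order
    kernel equation  x'' - zeta*x' + eps*x'*x**2 + beta*x = 0,
    rewritten as the first-order system y = (x, x')."""
    def f(y):
        x, v = y
        return np.array([v, zeta * v - eps * v * x**2 - beta * x])

    n = int(round(T / dt))
    y = np.array([0.0, b])            # x(0) = 0, x'(0) = b
    xs = np.empty(n + 1)
    xs[0] = y[0]
    for k in range(n):
        k1 = f(y)                     # slope at the current point (predictor)
        k2 = f(y + dt * k1)           # slope at the predicted point (corrector)
        y = y + 0.5 * dt * (k1 + k2)  # advance with the averaged slope
        xs[k + 1] = y[0]
    return xs
```

For small $\zeta$ and $\epsilon$ the computed kernel stays close to the harmonic solution $\left(b/\sqrt{\beta}\right)\mathrm{sin}\left(\sqrt{\beta}t\right)$, which gives a quick sanity check on the scheme.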

5. Numerical Simulation Using WCE Technique

In this section, we analyze the results of the WCE technique. Hence, we compare the results using different numbers of propagators and against MC simulations. For the comparison, we solve system (2.2) by MC simulations using 10,000 realizations and a time step of 0.001. The initial conditions are chosen as ${x}_{0}=0,{\dot{x}}_{0}=0.1$ and the time interval is $\left[0,T\right]$, $T<\infty $. As is known, the WCE method and the WHE method are both sensitive to the length of the time interval [19] [25]. So, we choose the moderate intervals $T=50$ and $T=30$. Accordingly, the oscillatory properties of the SVDP system will be shown in these intervals through its statistical properties, such as the mean and the variance.
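The MC baseline described above can be sketched as follows. This is a minimal illustration, assuming the SVDP form $\ddot{x}-\left(\zeta -\epsilon {x}^{2}\right)\dot{x}+\beta x=\lambda \dot{W}\left(t\right)$ suggested by the damping term $\left(\zeta -\epsilon {x}^{2}\right)\dot{x}$ written later in this section, and with a reduced number of realizations for brevity (the comparison itself uses 10,000):

```python
import numpy as np

def mc_svdp(zeta, eps, beta, lam, x0=0.0, v0=0.1, T=1.0, dt=0.001,
            n_paths=2000, seed=0):
    """Euler-Maruyama Monte Carlo for the (assumed) SVDP form
    x'' - (zeta - eps*x**2)*x' + beta*x = lam * dW/dt.
    Returns the pointwise mean and variance of x over the paths."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    x = np.full(n_paths, x0)
    v = np.full(n_paths, v0)
    mean = np.empty(n + 1)
    var = np.empty(n + 1)
    mean[0], var[0] = x.mean(), x.var()
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments
        x_next = x + dt * v
        v = v + dt * ((zeta - eps * x**2) * v - beta * x) + lam * dW
        x = x_next
        mean[k + 1], var[k + 1] = x.mean(), x.var()
    return mean, var
```

With small $\zeta$, $\epsilon$ and $\lambda$, the sample mean tracks the nearly harmonic deterministic motion, as in Figure 1(a).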

From the comparison, we find that increasing the number of GRVs is more effective than increasing the order of polynomial. Also, the parametric values affect the dynamic behavior of the approximate solution through its statistical properties. The influence of the external force on the dynamic behavior of the system is greater than the influence of the damping force.

This influence requires reducing the error (3.3) in approximating $W\left(t\right)$ by increasing the number of GRVs (basis functions) in the expansion. Furthermore, there is good agreement between the results of WCE and those of MC simulations in the mean solution, but with some divergence in the variance over longer time intervals. To reduce this divergence, a higher number of GRVs is needed (see Figure 1(b)).
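As a sketch of the mechanism (assuming the standard cosine Fourier basis of the Cameron-Martin construction; the basis used in the paper's Section 3 may differ in detail), the truncated noise ${W}_{K}\left(t\right)={\sum}_{i=1}^{K}{\xi}_{i}{\int}_{0}^{t}{m}_{i}\left(s\right)\text{d}s$ has the mean-square error $E{\left|W\left(t\right)-{W}_{K}\left(t\right)\right|}^{2}=t-{\sum}_{i=1}^{K}{\left({\int}_{0}^{t}{m}_{i}\left(s\right)\text{d}s\right)}^{2}$, which can be evaluated directly to see the error fall as $K$ grows:

```python
import numpy as np

def wk_error(t, T, K):
    """Mean-square truncation error E|W(t) - W_K(t)|^2 for the cosine
    Fourier basis m_1 = 1/sqrt(T), m_i(t) = sqrt(2/T)*cos((i-1)*pi*t/T)."""
    err = t  # E|W(t)|^2 = t
    for i in range(1, K + 1):
        if i == 1:
            M = t / np.sqrt(T)                        # integral of m_1 over [0, t]
        else:
            w = (i - 1) * np.pi / T
            M = np.sqrt(2.0 / T) * np.sin(w * t) / w  # integral of m_i over [0, t]
        err -= M**2
    return err
```

For example, at $t=T/2$ with $T=10$, the error for $K=2$ is several times larger than for $K=8$, mirroring the improvement seen when moving to a higher number of GRVs.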

Figures 1-3 show a good agreement between the results of WCE using $N=2,K=8$ and those of MC simulations. The figures also show the influence of the damping parameters $\zeta $ and $\epsilon $ on the behavior of the solution. Depending on conditions (2.5) and (2.7), we found that reducing the values of $\zeta $ and $\epsilon $ makes the damping term $\left(\zeta -\epsilon {x}^{2}\right)\dot{x}$ tend to zero. Consequently, the oscillations of system (2.2) converge to harmonic oscillation. Accordingly, with quasi-absence of the external force, the mean solution performs a periodic motion with a self-sustained limit cycle, while the variance increases linearly with time. Furthermore, magnifying the value of $\epsilon $ reduces the amplitude $\left|x\right|$ of the system, so that the motion approaches damped behavior with time. This is one advantage of generalizing the damping term by using two different coefficients for the linear and the nonlinear damping terms.

Figure 1. Mean (a) and variance (b) using MC simulations and WCE method for $\zeta =0.001$, $\epsilon =0.001$, $\beta =1$, $\lambda =0.001$. (a) The mean solution for $T=50$ and (b) The variance for $T=10$.

Figure 2. Mean (a) and variance (b) using MC simulations and WCE (N = 2, K = 8) for $\zeta =0.001$, $\epsilon =3$, $\beta =1$, $\lambda =0.001$. (a) The mean solution for $T=50$ and (b) The variance for $T=10$.

Figure 3. Phase plane of the mean solution computed by MC simulations and WCE (N = 2, K = 8) for $\zeta =0.001$, $\beta =1$, $\lambda =0.001$, $T=50$. (a) for $\epsilon =0.001$ and (b) for $\epsilon =3$.

Figures 4-6 show the convergence of WCE to the MC results as the number $K$ of GRVs increases. We note that it is more effective to use WCE with a higher number $K$ of random variables than to increase the polynomial order $N$. Furthermore, they show the dynamic behavior of the oscillations at a large value of the linear damping coefficient $\zeta $. With quasi-absence of the external force, the mean solution reacts as a negatively damped system, where the system starts outside the limit cycle and acquires energy.

But, for $\zeta \gg 1$, the system acts as a relaxation oscillator as a result of the sudden increase and decrease of the velocity $\dot{x}$ (see Figure 5(a), Figure 6(b)).

Also, depending on conditions (2.5) and (2.7), we found that reducing the value of $\epsilon $ makes the nonlinear damping term $\epsilon {x}^{2}\dot{x}$ tend to zero. Hence, the

Figure 4. Mean (a) and variance (b) using MC simulations and WCE method for $\zeta =1$, $\epsilon =0.001$, $\beta =3$, $\lambda =0.001$, $T=30$.

Figure 5. Mean (a) and variance (b) using MC simulations and WCE method for $\zeta =3$, $\epsilon =0.001$, $\beta =1$, $\lambda =0.001$. (a) The mean solution for $T=50$ and (b) The variance for $T=30$.

Figure 6. Phase plane of the mean solution computed by MC simulations and WCE method for $\epsilon =0.001$, $\beta =3$, $\lambda =0.001$, $T=50$. (a) for $\zeta =1$ and (b) for $\zeta =3$.

approximate solution of the SVDP equation converges to the solution of the damped harmonic oscillation. For this reason, the mean solution of the WCE for different numbers of propagators converges to the form ${\ddot{x}}_{0}+\zeta {\dot{x}}_{0}+\beta {x}_{0}=0$, where $0=\left(0,0,\cdots ,0\right)$. So, Figure 1(a) to Figure 6(a) have the same variation tendencies.
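The limiting damped harmonic form above has, for $x\left(0\right)=0$, $\dot{x}\left(0\right)=b$ and ${\zeta }^{2}<4\beta $, the closed underdamped solution $x\left(t\right)=\left(b/{\omega }_{d}\right){\text{e}}^{-\zeta t/2}\mathrm{sin}\left({\omega }_{d}t\right)$ with ${\omega }_{d}=\sqrt{\beta -{\zeta }^{2}/4}$. A quick numerical check (illustrative parameters, not the paper's computations) confirms that an integrator reproduces it:

```python
import numpy as np

def damped_mean(zeta, beta, b, t):
    """Closed-form limit solution of x'' + zeta*x' + beta*x = 0,
    x(0) = 0, x'(0) = b (underdamped case zeta**2 < 4*beta)."""
    wd = np.sqrt(beta - 0.25 * zeta**2)
    return (b / wd) * np.exp(-0.5 * zeta * t) * np.sin(wd * t)

def heun_damped(zeta, beta, b, T, dt):
    """Heun integration of the same limit equation, for comparison."""
    def f(y):
        x, v = y
        return np.array([v, -zeta * v - beta * x])
    y = np.array([0.0, b])
    for _ in range(int(round(T / dt))):
        k1 = f(y)                     # predictor slope
        k2 = f(y + dt * k1)           # corrector slope
        y = y + 0.5 * dt * (k1 + k2)
    return y[0]
```

Any mean curve computed in this regime should collapse onto `damped_mean`, which is why Figure 1(a) to Figure 6(a) share the same variation tendencies.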

In Figures 7-9, the system acts as a positively damped system due to the increased influence of the external force. Therefore, the system dissipates energy and converges to the origin. Depending on condition (2.7), where $1+{x}^{2}-{\lambda}^{2}\ge 0$, we get ${x}^{2}\ge {\lambda}^{2}-1$.

Furthermore, increasing the value of $\lambda \gg 1$ produces a deviation between the results of WCE and those of MC simulations. This deviation can be reduced by increasing the number of MC samples, but on the other side, the computing time and the required memory increase, especially since we handle a second-order differential equation. Thus, the oscillation of the MC simulation results becomes

Figure 7. Mean (a) and variance (b) using MC simulations and WCE (N = 2, K = 8 and N = 3, K = 5) for $\zeta =0.001$, $\epsilon =1$, $\beta =1$, $\lambda =1$, $T=30$.

Figure 8. Mean (a) and variance (b) using MC simulations and WCE (N = 2, K = 8 and N = 3, K = 5) for $\zeta =0.001$, $\epsilon =1$, $\beta =1$, $\lambda =1.5$, $T=30$.

Figure 9. Phase plane of the mean solution computed by MC simulations and WCE method for $\zeta =0.001$, $\epsilon =1$, $\beta =1$, $T=30$. (a) for $\lambda =1$ and (b) for $\lambda =1.5$.

chaotic. Figure 9 shows the limit cycles of the mean solution of system (2.2), where the limit cycles dissipate energy and converge rapidly to the origin as the value of $\lambda $ increases.

6. Comparison between the Results of WCE Technique and WHE Technique for SVDP Equation (2.1)

The following results compare the WCE method and the WHE method with MC simulations.

Although there is a small deviation of the WHE results from the other methods, Figure 10 & Figure 11 show a good agreement with both MC simulations and WCE. This deviation is due to using the numerical Euler method to solve the deterministic system of the WHE. This numerical method improves with a very small time step, i.e. by increasing the number of time discretization points, which is sometimes impossible within the capacity of the used machine. In addition, increasing the value of $\zeta $ makes the results of WHE deviate from those of the other two techniques, as in Figure 12. So, complicated computations are needed for higher-accuracy WHE results. Therefore, we can deduce that the WCE method gives better results than the WHE method. Also, we found that the dynamic behavior of the oscillation has the same parametric effect for the results of all three methods.
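The time-step sensitivity noted above can be demonstrated on a small model problem (a damped harmonic oscillator with an analytic solution; illustrative parameters, not the WHE system itself): halving $\Delta t$ roughly halves the error of the first-order Euler scheme:

```python
import numpy as np

def euler_x(zeta, beta, b, T, dt):
    """Plain first-order Euler for x'' + zeta*x' + beta*x = 0,
    x(0) = 0, x'(0) = b; returns the approximation of x(T)."""
    x, v = 0.0, b
    for _ in range(int(round(T / dt))):
        x, v = x + dt * v, v + dt * (-zeta * v - beta * x)
    return x

def exact_x(zeta, beta, b, t):
    """Closed-form underdamped solution of the same model problem."""
    wd = np.sqrt(beta - 0.25 * zeta**2)
    return (b / wd) * np.exp(-0.5 * zeta * t) * np.sin(wd * t)

# error of x(T) as the time step is successively halved
errors = [abs(euler_x(1.0, 1.0, 0.1, 2.0, dt) - exact_x(1.0, 1.0, 0.1, 2.0))
          for dt in (0.01, 0.005, 0.0025)]
```

The shrinking but never vanishing error illustrates why the WHE results deviate slightly unless the number of time discretization points is pushed up, at the cost of machine capacity.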

Figure 13 shows the phase plane of the mean solution of the three methods for different values of $\zeta $.

Figure 14 shows the positive damping motion of the results of both WCE and WHE as a result of increasing the effect of the stochastic external force. As is known, the WCE method, MC simulations and the WHE method handle the external force differently. Therefore, the deviation between the results of the three methods appears when $\lambda $ is large.

Figure 10. Mean (a) and variance (b) using MC simulations and WCE (N = 2, K = 8) and 2nd order WHE for $\zeta =0.001$, $\epsilon =0.001$, $\beta =1$, $\lambda =0.001$. (a) The mean solution for $T=30$ and (b) the variance for $T=10$.

Figure 11. Mean (a) and variance (b) using MC simulations and WCE (N = 2, K = 8) and 2nd order WHE for $\zeta =1$, $\epsilon =0.001$, $\beta =1$, $\lambda =0.001$.

Figure 12. Mean (a) and variance (b) using MC simulations and WCE (N = 2, K = 8) and 2nd order WHE for $\zeta =3$, $\epsilon =0.001$, $\beta =1$, $\lambda =0.001$.

Figure 13. Phase plane of the mean solution computed by MC simulations, WCE method and 2nd order WHE method for $\epsilon =0.001$, $\beta =1$, $\lambda =0.001$. (a) for $\zeta =0.001$ and (b) for $\zeta =1$.

Figure 14. Mean (a) and variance (b) using MC simulations and WCE (N = 2, K = 8) and 2nd order WHE for $\zeta =0.001$, $\epsilon =1$, $\beta =1$, $\lambda =1.7$.

7. Conclusions

In this paper, we investigated the SVDP equation under an external excitation described by Gaussian white noise. An accurate analytical solution of the SVDP oscillator has not been obtained until now. Therefore, we needed numerical techniques, such as the spectral decomposition techniques, to obtain an effective approximate solution. The dynamic behavior of this solution was studied through its statistical moments. Furthermore, based on this study, it was found that:

· The WCE technique was more accurate than the WHE technique of the same order.

· The WCE technique was easier to program than the WHE technique and, for the same order, WCE consumed less time on the same processor.

· The accuracy of the WCE technique increases with the number of Gaussian random variables (GRVs), much more than with the order of the expansion. Increasing the number of GRVs makes the propagators capture more information about the stochastic solution, especially for equations that need a long time interval.

· Depending on the growth condition, the solution of the SVDP system exists locally as long as the values of both the linear damping coefficient and the external force coefficient are small.

Furthermore, we deduced that as long as the influence of the external force is negligible, the results of the three techniques converge. But increasing the influence of the external force reveals the efficiency of each method in dealing with the randomness of the external force. In this study, we found that the WCE method was the most efficient, since it depends on the discretization of the white noise using the Fourier expansion. This discretization is missing in the WHE method. As for the MC simulations, they are based on the number of realizations, and increasing this number may exceed the capacity of the used machine.

Acknowledgements

We would like to thank everyone who contributed to improving the quality and clarity of this research.

References

[1] Van der Pol, B. and Van der Mark, J. (1928) The Heartbeat Considered as a Relaxation Oscillation, and an Electrical Model of the Heart. Philosophical Magazine, 6, 763-775.

https://doi.org/10.1080/14786441108564652

[2] Hanggi, P. and Riseborough, P. (1983) Dynamics of Nonlinear Dissipative Oscillators. American Journal of Physics, 51, 347-352.

https://doi.org/10.1119/1.13246

[3] Ginoux, J.M. and Letellier, C. (2011) Van der Pol and the History of Relaxation Oscillations: Toward the Emergence of a Concept. Chaos: An Interdisciplinary Journal of Nonlinear Science, 22, Article ID: 023120.

https://doi.org/10.1063/1.3670008

[4] Sauro, H.M. (2014) Oscillatory Circuits (PDF). Class Notes: Systems and Synthetic Biology 492A. Sauro Lab, Center for Synthetic Biology, University of Washington, Washington DC.

[5] van der Pol, B. (1926) On Relaxation-Oscillations. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 7, 978-992.

https://doi.org/10.1080/14786442608564127

[6] Oliveira, L.B., et al. (2008) Analysis and Design of Quadrature Oscillators. Springer, Berlin, 24.

https://doi.org/10.1007/978-1-4020-8516-1

[7] Xu, J.X. and Jiang, J. (1996) The Global Bifurcation Characteristics of the Forced van der Pol Oscillator. Chaos, Solitons & Fractals, 7, 3-19.

https://doi.org/10.1016/0960-0779(95)00045-3

[8] Buonomo, A. (1998) The Periodic Solution of van der Pol’s Equation. SIAM Journal on Applied Mathematics, 59, 156-171.

https://doi.org/10.1137/S0036139997319797

[9] Liao, X., Wong, K. and Wu, Z. (2001) Hopf Bifurcation and Stability of Periodic Solutions for van der Pol Equation with Distributed Delay. Nonlinear Dynamics, 26, 23-44.

https://doi.org/10.1023/A:1012927603832

[10] Mickens, R.E. and Gumel, A.B. (2002) Numerical Study of a Non-Standard Finite-Difference Scheme for the van der Pol Equation. Journal of Sound and Vibration, 250, 955-963.

https://doi.org/10.1006/jsvi.2001.3783

[11] Barbosa, R.S., Machado, J.A.T., Vinagre, B.M. and Calderón, A.J. (2007) Analysis of the Van der Pol Oscillator Containing Derivatives of Fractional Order. Journal of Vibration and Control, 13, 1291-1301.

https://doi.org/10.1177/1077546307077463

[12] Boyd, J.P. and Fernández, F.M. (2018) High Order Analysis of the Limit Cycle of the van der Pol Oscillator. Journal of Mathematical Physics, 59, Article ID: 012702.

https://doi.org/10.1063/1.5016961

[13] Alsholm, P. (1992) Existence of Limit Cycles for Generalized Lienard Equations. Journal of Mathematical Analysis and Applications, 171, 242-255.

https://doi.org/10.1016/0022-247X(92)90387-S

[14] To, C. (2017) Nonlinear Random Vibration: Analytical Techniques and Applications. 2nd Edition, CRC Press, Boca Raton.

[15] Timmer, J., Haussler, S., Lauk, M. and Lucking, C.H. (2000) Pathological Tremors: Deterministic Chaos or Nonlinear Stochastic Oscillators? Chaos: An Interdisciplinary Journal of Nonlinear Science, 10, 278-288.

https://doi.org/10.1063/1.166494

[16] Singh, A.K. and Yadava, R.D.S. (2018) Amplitude and Phase Fluctuations of Van der Pol Oscillator under External Random Forcing. American Institute of Physics, College Park.

https://doi.org/10.1063/1.5033291

[17] Wiener, N. (1938) The Homogeneous Chaos. American Journal of Mathematics, 60, 897-936.

https://doi.org/10.2307/2371268

[18] Meecham, W.C. and Siegel, A. (1964) Wiener-Hermite Expansion in Model Turbulence at Large Reynolds Number. Physics of Fluids, 7, 1178.

https://doi.org/10.1063/1.1711359

[19] El-Tawil, M.A. (2003) The Application of the WHEP Technique on Partial Differential Equations. International Journal of Differential Equations and Applications, 7, 325-337.

[20] El Tawil, M., Elbeltagy, M., El-desouky, B.S. and Hamed, M. (2014) Solution of Nonlinear Stochastic Langevin’s Equation Using WHEP, Pickard and HPM Methods. Applied Mathematics, 5, 398-412.

https://doi.org/10.4236/am.2014.53041

[21] El-Beltagy, M.A. and El-Tawil, M.A. (2013) Toward a Solution of a Class of Non-Linear Stochastic perturbed PDEs Using Automated WHEP Algorithm. Applied Mathematical Modelling, 37, 7174-7192.

https://doi.org/10.1016/j.apm.2013.01.038

[22] El-Beltagy, M.A. and Wafa, M.I. (2013) Stochastic 2D Incompressible Navier-Stokes Solver Using the Vorticity-Stream Function Formulation. Journal of Applied Mathematics, 2013, Article ID: 903618.

https://doi.org/10.1155/2013/903618

[23] El-Beltagy, M.A. and Al-Johani, A.S. (2013) Numerical Approximation of Higher-Order Solutions of the Quadratic Nonlinear Stochastic Oscillatory Equation Using WHEP Technique. Journal of Applied Mathematics, 2013, Article ID: 685137.

https://doi.org/10.1155/2013/685137

[24] Cameron, H. and Martin, W.T. (1947) The Orthogonal Development of Non-Linear Functionals in Series of Fourier-Hermite Functionals. Annals of Mathematics, 48, 385-392.

https://doi.org/10.2307/1969178

[25] Luo, W. (2006) Wiener Chaos Expansion and Numerical Solutions of Stochastic Partial Differential Equations. PhD Thesis, California Institute of Technology, Pasadena.

[26] Mikulevicius, R. and Rozovskii, B. (2004) Stochastic Navier-Stokes Equations for Turbulence Flow. SIAM Journal on Mathematical Analysis, 35, 1250-1310.

https://doi.org/10.1137/S0036141002409167

[27] Lototsky, S.V. (2003) Nonlinear Filtering of Diffusion Processes in Correlated Noise: Analysis by Separation of Variables. Applied Mathematics & Optimization, 47, 167-194.

https://doi.org/10.1007/s00245-002-0760-4

[28] Mahahamed, El-Kalla, I.L., El-desouky, B.S. and El-Beltagy, M.A. (2018) Numerical Treatment of the Stochastic Advection-Diffusion Equation Using the Spectral Stochastic Techniques. International Journal of Scientific and Engineering Research, 4, 1-17.

[29] Goeken, D. and Johnson, O. (2000) Runge-Kutta with Higher Order Derivative Approximations. Applied Numerical Mathematics, 34, 207-218.

https://doi.org/10.1016/S0168-9274(99)00128-2

[30] Erbe, L., Peterson, A. and Tisdell, C.C. (2005) Existence of Solutions to Second-Order BVPs on Time Scales. Applicable Analysis, 84, 1069-1078.

https://doi.org/10.1080/00036810410001724706

[31] Dalmasso, R. (2007) An Existence and Uniqueness Theorem for a Second Order Nonlinear System. Journal of Mathematical Analysis and Applications, 327, 715-722.

https://doi.org/10.1016/j.jmaa.2006.04.067