2 Notation and basic concepts

2.5 Existence and uniqueness (This section is not examined, and you can safely ignore it.)

We have been assuming the existence of solutions of dynamical systems without comment. However, this is not necessarily straightforward and needs to be examined in more depth. As the next set of examples shows, solutions of ODEs can be surprisingly difficult to pin down!

Different phenomena in ODEs: We will give a sequence of examples showing how complications can arise in ODEs.

Example 2.21 (Non-uniqueness with continuous right-hand side) Consider the ODE \(\dot x = \sqrt{|x|}, \quad x_0=0\). By observation \(x(t)\equiv 0\) is a solution (a stationary point). On the other hand, separation of variables gives \[ \int^x \frac{dx}{\sqrt{|x|}} =\int^t dt. \] If \(x\ge 0\), this becomes \(2(\sqrt{x}-\sqrt{x_0})=t\). Therefore \(x(t)=t^2/4\) is a second solution, different from the trivial one \(x(t)\equiv 0\)! Even worse, we have a whole family of solutions \(x_\tau (t)\) for \(\tau\ge 0\), defined by \[ x_\tau(t) = \begin{cases} 0,\qquad & \mbox{if } 0 \leq t \leq \tau,\cr (t-\tau)^2/4, & \mbox{if } t>\tau, \end{cases} \] as can be verified by direct substitution (check it!). The source of the non-uniqueness here is that \(f(x)=\sqrt{|x|}\) is not Lipschitz continuous at \(x=0\).
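The direct substitution can also be sketched numerically. The following Python snippet (the values of \(\tau\) and the test points are arbitrary choices for illustration) checks by centered finite differences that each member of the family \(x_\tau\) satisfies \(\dot x = \sqrt{|x|}\).

```python
import math

# Each member of the family: zero until time tau, then a translated parabola.
def x_family(t, tau):
    return 0.0 if t <= tau else (t - tau) ** 2 / 4.0

# Centered finite differences approximate the derivative; compare against
# the right-hand side sqrt(|x|) at a few arbitrary sample points.
h = 1e-6
for tau in (0.0, 0.5, 1.0):
    for t in (0.1, 0.4, 0.9, 1.5, 2.0):
        deriv = (x_family(t + h, tau) - x_family(t - h, tau)) / (2 * h)
        rhs = math.sqrt(abs(x_family(t, tau)))
        assert abs(deriv - rhs) < 1e-5, (tau, t)
print("every member of the family solves the ODE")
```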
Example 2.22 (Finite-time blow-up) Consider the differential equation \[ \dot x =x^2. \] Separating variables, \(\int_{x_0}^x \frac{dx}{x^2}=t\), i.e. \(\frac{1}{x_0}-\frac{1}{x}=t\), which gives \(x(t)=\frac{x_0}{1-x_0t}\). Thus if \(x_0>0\), the solution tends to infinity as \(t\to x_0^{-1}\) from below: it blows up in finite time.
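To see the blow-up concretely, here is a minimal Python check (the choice \(x_0=1\) and the sample times are for illustration only), evaluating the explicit solution as \(t\) approaches the blow-up time \(1/x_0 = 1\).

```python
# Evaluate the explicit solution x(t) = x0/(1 - x0*t) of x' = x^2
# near the blow-up time 1/x0 (x0 = 1 chosen for illustration).
x0 = 1.0
for t in (0.9, 0.99, 0.999, 0.9999):
    x = x0 / (1.0 - x0 * t)
    print(t, x)  # x grows without bound as t -> 1/x0 = 1
```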

These examples show that we need a better understanding of the existence of solutions. To show existence, we first convert the ODE \(\dot x =f(x,t),\ x(0)=x_0\) into an integral equation \begin{equation}\label{Pic} x(x_0,t)=x_0+\int_0^{t}f(x(s),s)ds, \end{equation} whose equivalence can be verified by differentiating and using the Fundamental Theorem of Calculus. Of course, if we do not know \(x(s)\) for \(s \in (0,t)\), then this does not help, as we cannot evaluate the integral. Instead we consider the iteration \begin{equation}\label{PicIt} \qquad\qquad x^{(n+1)}(t)=T[x^{(n)}](t), \qquad \mbox{ where } \ T[x](t)=x_0 + \int_0^t f(x(s),s)ds \end{equation} with the initial condition \(x^{(0)}(t)=x_0\). If the sequence of functions \(\{ x^{(n)}(t)\}\) converges to some function \(\bar{x}(t)\), then taking the limit of both sides of \eqref{PicIt} gives \(\bar{x}(t)=T[\bar{x}](t)\); that is, \(\bar{x}\) is a fixed point of the operator \(T\). Differentiating both sides of \(\bar{x}(t)=T[\bar{x}](t)\), we see that \(\bar{x}\) is a solution of the ODE \(\dot{x}=f(x,t)\) with initial condition \(\bar{x}(0)=x_0\). This is called Picard iteration, and so if we can show that \(T\) defined in \eqref{PicIt} is a contraction mapping, then we have an existence theorem. Assuming this can be done, Picard iteration also provides a way of constructing approximate solutions locally (see Figure 2.5).
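The iteration \eqref{PicIt} can also be carried out numerically, replacing the integral by a cumulative trapezoid rule on a grid. The Python sketch below (grid size and iteration count are ad hoc choices) applies it to \(f(x,t)=x\) with \(x_0=1\), where the fixed point is \(x(t)=e^t\).

```python
import math

# Numerical Picard iteration for x' = f(x, t), x(0) = x0, on [0, t_end]
# (an illustrative sketch; grid size and iteration count are ad hoc choices).
def picard(f, x0, t_end=1.0, n_grid=1000, n_iter=25):
    h = t_end / n_grid
    ts = [i * h for i in range(n_grid + 1)]
    x = [x0] * (n_grid + 1)              # x^(0)(t) = x0, a constant function
    for _ in range(n_iter):
        g = [f(x[i], ts[i]) for i in range(n_grid + 1)]
        new, acc = [x0], x0
        for i in range(n_grid):          # cumulative trapezoid rule for T[x]
            acc += 0.5 * h * (g[i] + g[i + 1])
            new.append(acc)
        x = new
    return ts, x

# f(x, t) = x with x0 = 1: the fixed point is x(t) = e^t.
ts, x = picard(lambda x, t: x, 1.0)
print(abs(x[-1] - math.e))  # small discretization error at t = 1
```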

Figure 2.5 The sequence of functions \(x^{(n)}(t)\) converges to the exact solution \(x(t)\).
Example 2.23 Consider the ODE \(\dot{x} = ax\) with initial condition \(x(0)=1\). The Picard iteration is \[ x^{(n+1)}(t) = 1+ \int_0^t ax^{(n)}(s)ds \] with \(x^{(0)}(t) = x_0=1\). Therefore, \begin{align*} x^{(1)}(t) &= x_0+ \int_0^t ax^{(0)}(s)ds = 1+\int_0^t a ds = 1+at, \cr x^{(2)}(t) &= x_0 + \int_0^t ax^{(1)}(s)ds = 1 + \int_0^t a(1+as)ds = 1+at+\frac{a^2}{2}t^2, \cr x^{(3)}(t) &= x_0 + \int_0^t ax^{(2)}(s)ds = 1 + \int_0^t a\left(1+as+\frac{a^2}{2}s^2\right)ds =1+at+\frac{a^2}{2}t^2+\frac{a^3}{3!}t^3. \end{align*} In fact, we can show that \[ x^{(n)}(t) = 1+ at + \cdots + \frac{a^n}{n!}t^n, \] which converges to the exact solution \(x(t) = e^{at}\). Here the \(n\)-th iterate is exactly the degree-\(n\) Taylor polynomial of the solution at the start time \(t=0\). In general, terms of order greater than \(n\) may appear in \(x^{(n)}(t)\), and their coefficients need not agree with those of the Taylor expansion, as the following example shows.
Example 2.24 Find a power series expansion for solutions to \[ \dot x = x-x^2, \quad x_0=2 \] correct up to and including cubic terms. Set \(x^{(0)}(t)=2\). Then \[ x^{(1)}(t)=2+\int_0^t (2-2^2)\mathrm{d} s=2-2t. \] Continuing, \[ x^{(2)}(t) =2+\int_0^t\big[(2-2s)-(2-2s)^2\big]\mathrm{d} s=2+\int_0^t(-2+6s-4s^2)\mathrm{d} s =2-2t+3t^2-\textstyle{\frac{4}{3}}t^3. \] Although a cubic term appears in \(x^{(2)}(t)\), its coefficient is not the Taylor coefficient; it becomes correct at the next iteration. That is, \begin{align*} x^{(3)}(t) &=2+\int_0^t\big[(2-2s+3s^2-\dots )-(2-2s+3s^2-\dots )^2\big]\mathrm{d} s \cr &=2+\int_0^t\left(-2+6s-13s^2+16s^3-\frac{43}{3}s^4+8s^5-\frac{16}{9}s^6\right)\mathrm{d} s \cr & =2-2t+3t^2-\textstyle{\frac{13}{3}}t^3+4t^4+\cdots, \end{align*} which is correct to the cubic term. This ODE is separable and can be integrated explicitly to give \[ x(t) = \frac{2}{2-e^{-t}} = 2-2t+3t^2-\frac{13}{3}t^3+\frac{25}{4}t^4-\frac{541}{60}t^5+O(t^6). \]
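The iterates in this example can be reproduced with exact rational arithmetic. The following Python sketch (the truncation degree is an arbitrary choice) represents each iterate as a list of polynomial coefficients in \(t\) and applies \(T[x](t) = 2 + \int_0^t \big(x(s)-x(s)^2\big)\mathrm{d}s\) three times.

```python
from fractions import Fraction

DEG = 8  # truncation degree for the polynomial arithmetic (ad hoc choice)

def mul(p, q):
    # Polynomial product, truncated at degree DEG.
    r = [Fraction(0)] * (DEG + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= DEG:
                r[i + j] += a * b
    return r

def add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def integrate(p):
    # Term-wise: c t^i -> c/(i+1) t^(i+1).
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)]

def picard_step(x):
    # T[x] = 2 + integral of (x - x^2).
    f = add(x, [-c for c in mul(x, x)])
    return add([Fraction(2)], integrate(f))[:DEG + 1]

x = [Fraction(2)]          # x^(0)(t) = 2
for _ in range(3):
    x = picard_step(x)
print(x[:5])  # coefficients 2, -2, 3, -13/3, 4, matching x^(3) above
```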
Remark After some technical work, the existence of solutions can be established using the above Picard iteration scheme \(x^{(n+1)}(t) = T[x^{(n)}](t)\) by taking the limit as \(n\) goes to infinity. We will focus on qualitative properties in the rest of the course (you can safely ignore questions about the existence and uniqueness of ODEs in the past papers); interested readers may consult Chapter 3 of Meiss's book Differential Dynamical Systems.
Remark We will mainly focus on ODEs in two dimensions, although new phenomena arise in three or higher dimensions. You can have a look at the strange attractors of the Lorenz ODEs, the Rössler ODEs, and the various systems proposed by Sprott: Case A, Case G, Case I, Case L and Case S.