
Semigroups of operators

We consider an abstract initial value problem  
 \begin{displaymath}
\dot u=Au,\quad u(0)=u_0,\end{displaymath} (29)
in a Hilbert space H. The linear operator A is defined on some subset D(A) of H. We want to identify a class of operators A for which we can, in some sense, define a ``solution" $\exp(At)u_0$.

To this end, let us first review a few ways of defining the exponential of a matrix.

1. The power series:
\begin{displaymath}
\exp(At)=I+At+{A^2t^2\over 2}+{A^3t^3\over 6}+...\end{displaymath} (30)

2. As a limit:
\begin{displaymath}
\exp(At)=\lim_{n\to\infty}(I+{At\over n})^n.\end{displaymath} (31)

3. By diagonalization or, more generally, transforming to Jordan canonical form. This is the procedure we followed in the preceding sections.

4. By Laplace transforms: Consider the equation

\begin{displaymath}
\dot u=Au,\quad u(0)=u_0,\end{displaymath}

and take the Laplace transform:

\begin{displaymath}
\hat u(s)=\int_0^\infty e^{-st}u(t)\,dt.\end{displaymath}

We find the transformed equation

\begin{displaymath}
s\hat u-u_0=A\hat u,\end{displaymath}

which leads to

\begin{displaymath}
\hat u=(sI-A)^{-1}u_0.\end{displaymath}

The inversion formula for the Laplace transform then yields

\begin{displaymath}
u(t)={1\over 2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}
e^{st}(sI-A)^{-1}u_0\,ds,\end{displaymath}

where $\sigma$ must be chosen larger than the real part of any eigenvalue of A. This leads us to the definition  
 \begin{displaymath}
\exp(At)={1\over 2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}
e^{st}(sI-A)^{-1}\,ds.\end{displaymath} (32)
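
For a finite matrix, these definitions can be compared directly. The following Python sketch is purely illustrative: the $2\times 2$ matrix, the time t, the series truncation, and the truncated contour in (32) are all arbitrary choices, and scipy.linalg.expm serves as the reference value.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # arbitrary test matrix, eigenvalues -1 and -2
t = 1.0
reference = expm(A * t)           # library value of exp(At)

# Definition 1: truncated power series (30)
series, term = np.eye(2), np.eye(2)
for n in range(1, 30):
    term = term @ (A * t) / n
    series = series + term

# Definition 2: the limit (31) with a large but finite n
n = 10**5
limit = np.linalg.matrix_power(np.eye(2) + A * t / n, n)

# Definition 4: the contour integral (32), truncated to |Im s| <= Y,
# along the vertical line Re s = sigma > max Re(eigenvalues)
sigma, Y, N = 1.0, 200.0, 40001
ys = np.linspace(-Y, Y, N)
dy = ys[1] - ys[0]
vals = np.array([np.exp((sigma + 1j * y) * t)
                 * np.linalg.inv((sigma + 1j * y) * np.eye(2) - A)
                 for y in ys])
laplace = vals.sum(axis=0).real * dy / (2.0 * np.pi)

for name, approx in (("series", series), ("limit", limit), ("Laplace", laplace)):
    print(name, np.max(np.abs(approx - reference)))
\end{verbatim}

The first two approximations agree with the reference to many digits; the truncated contour integral only to a few, since the integrand decays slowly (like $1/\vert s\vert$) along the line.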

For unbounded operators, the power series definition is not useful. Consider the example of the heat equation discussed in the last section, where $A=d^2/dx^2$. Then, formally, we would have
\begin{displaymath}
\exp(At)u=\sum_{n=0}^\infty {1\over n!}t^nA^nu=\sum_{n=0}^\infty {t^n\over n!}
{d^{2n}u\over dx^{2n}}.\end{displaymath} (33)
For the right hand side to make any sense, u must have derivatives of arbitrarily high orders, and convergence poses even more serious restrictions. In addition, we would have to impose an infinite number of boundary conditions on u.

The definition as a limit seems to suffer from the same defect. However, we can make the following modification to it:  
 \begin{displaymath}
\exp(At)=\lim_{n\to\infty} (I-{At\over n})^{-n}.\end{displaymath} (34)
The difference is that we are now dealing with powers of an inverse operator rather than powers of A itself. It turns out that this definition is indeed useful: the following theorem, known as the Hille-Yosida theorem, is at the foundation of the study of infinite-dimensional evolution problems.
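
Before stating the theorem, it is worth checking (34) on a finite matrix. The sketch below (again with an arbitrary $2\times 2$ matrix and arbitrary t) shows the error decreasing roughly like 1/n; each factor $(I-At/n)^{-1}$ is simply one backward Euler step of size t/n.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                      # arbitrary test matrix
t = 1.0
for n in (10, 100, 1000, 10000):
    step = np.linalg.inv(np.eye(2) - A * t / n)   # one backward Euler step
    approx = np.linalg.matrix_power(step, n)      # (I - At/n)^{-n}
    print(n, np.max(np.abs(approx - expm(A * t))))
\end{verbatim}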

Theorem 3

Assume that A is a linear operator defined on a dense subspace D(A) of a Hilbert space H. Assume further that there are constants M and $\omega$ such that $(A-\sigma I)^{-1}$ exists (as an operator from H to D(A)) for $\sigma>\omega$ and  
 \begin{displaymath}
\Vert(A-\sigma I)^{-n}\Vert\le {M\over (\sigma-\omega)^n}.\end{displaymath} (35)
Then  
 \begin{displaymath}
\exp(At)u=\lim_{n\to\infty} (I-{At\over n})^{-n}u\end{displaymath} (36)
exists for every $u\in H$, t>0. Moreover, $\exp(At)$ is a bounded operator from H to itself. We have the exponential property $\exp(At)\exp(As)=\exp(A(t+s))$, and the continuity property $\lim_{t\to 0+}
\exp(At)u=u$. If $u\in D(A)$, then
\begin{displaymath}
{d\over dt} \exp(At)u=A\exp(At)u.\end{displaymath} (37)

The set of operators $\exp(At)$ where $t\ge 0$ is referred to as a semigroup of operators; it is closed under multiplication since $\exp(At)\exp(As)=
\exp(A(t+s))$. It is in general not possible to extend t to negative values; for instance the heat equation is well posed only for solutions forward in time, not for solutions backward in time.

We note that
\begin{displaymath}
(I-{At\over n})^{-n}=(-1)^n(A-{n\over t}I)^{-n}({t\over n})^{-n}.\end{displaymath} (38)
Condition (35) guarantees that, for $n>\omega t$, the norm of the right hand side is bounded by
\begin{displaymath}
{M({t\over n})^{-n}\over ({n\over t}-\omega)^n}={M\over 
(1-{\omega t\over n})^n}.\end{displaymath} (39)
As $n\to\infty$, the right hand side converges to $M\exp(\omega t)$; this is the bound that carries over to the limit, giving $\Vert\exp(At)\Vert\le M\exp(\omega t)$.

Example 7

 To give a simple example, we show how the Hille-Yosida theorem applies to the heat equation, discussed in the previous section. We use the Fourier series representation
\begin{displaymath}
u[x]=\sum_{n=1}^\infty u_n\sin(n\pi x).\end{displaymath} (40)
It is easily checked that
\begin{displaymath}
\Vert u\Vert={1\over\sqrt{2}}(\sum_{n=1}^\infty \vert u_n\vert^2)^{1/2}.\end{displaymath} (41)
Moreover,
\begin{displaymath}
Au[x]=\sum_{n=1}^\infty -n^2\pi^2u_n\sin(n\pi x),\end{displaymath} (42)
and consequently
\begin{displaymath}
(A-\sigma I)^{-1}u[x]=\sum_{n=1}^\infty (-n^2\pi^2-\sigma)^{-1}u_n\sin(n\pi x),\end{displaymath} (43)
which is well defined as long as $\sigma\neq -n^2\pi^2$ for any n. In particular, if $\sigma>-\pi^2$, then

\begin{eqnarray}
\Vert(A-\sigma I)^{-1} u\Vert&\le& {1\over\sqrt{2}}\Bigl(\sum_{n=1}^\infty
{\vert u_n\vert^2\over (n^2\pi^2+\sigma)^2}\Bigr)^{1/2}\nonumber\\
&\le& {1\over \sigma+\pi^2}\Vert u\Vert,\end{eqnarray}

and consequently,
\begin{displaymath}
\Vert(A-\sigma I)^{-n}u\Vert\le {1\over(\sigma+\pi^2)^n}\Vert u\Vert.\end{displaymath} (44)
Hence the assumptions of the Hille-Yosida theorem hold with $\omega=-\pi^2$ and M=1.
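
The same computation is easy to check numerically. The sketch below is only an illustration: the series is truncated after N modes, and N, t, $\sigma$ and the coefficients are arbitrary choices. It applies the semigroup mode by mode and tests the resolvent bound (44).

\begin{verbatim}
import numpy as np

N = 200                                    # number of sine modes retained
rng = np.random.default_rng(0)
u = rng.standard_normal(N)                 # coefficients u_n of u
n = np.arange(1, N + 1)

def norm(c):
    # formula (41): ||u|| = (1/sqrt 2)(sum |u_n|^2)^(1/2)
    return np.sqrt(np.sum(np.abs(c) ** 2) / 2.0)

# exp(At) multiplies the n-th coefficient by exp(-n^2 pi^2 t)
t = 0.01
heat = np.exp(-n**2 * np.pi**2 * t) * u
print(norm(heat) <= np.exp(-np.pi**2 * t) * norm(u))     # decay at rate omega = -pi^2

# resolvent bound (44) for several sigma > -pi^2
for sigma in (-5.0, 0.0, 10.0):
    res = u / (-n**2 * np.pi**2 - sigma)   # coefficients of (A - sigma I)^{-1} u
    print(sigma, norm(res) <= norm(u) / (sigma + np.pi**2))
\end{verbatim}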

In a somewhat simpler fashion, we could argue that
\begin{displaymath}
(Au,u)=\int_0^1 \bar u''[x]u[x]\,dx=-\int_0^1 \vert u'[x]\vert^2\,dx\le 0,\end{displaymath} (45)
and hence
\begin{displaymath}
((A-\sigma I)u,u)\le -\sigma (u,u)\end{displaymath} (46)
for $\sigma>0$. We conclude that
\begin{displaymath}
\Vert(A-\sigma I)u\Vert\Vert u\Vert\ge \vert((A-\sigma I)u,u)\vert\ge \sigma \Vert u\Vert^2.\end{displaymath} (47)
Consequently
\begin{displaymath}
\Vert u\Vert\le {1\over\sigma}\Vert(A-\sigma I)u\Vert,\end{displaymath} (48)
and hence
\begin{displaymath}
\Vert(A-\sigma I)^{-1}u\Vert\le {1\over\sigma}\Vert u\Vert\end{displaymath} (49)
and
\begin{displaymath}
\Vert(A-\sigma I)^{-n}u\Vert\le {1\over\sigma^n}\Vert u\Vert.\end{displaymath} (50)
This shows that the assumptions of the Hille-Yosida theorem hold with $\omega=0$ and M=1.

The second argument given in the last example clearly yields a weaker conclusion than the first. On the other hand, it required no knowledge of eigenvalues and eigenfunctions. In general, the representation of $\exp(At)$ in terms of eigenvalues and eigenfunctions is very useful for closed form solutions, if such a representation is available. As a foundation for a general theory, however, such a definition would be too restrictive, since little is known (except for special cases such as self-adjoint operators) about spectral representations. Indeed, there are (physically relevant) examples of operators which do not even have a spectrum, so any attempt to ``diagonalize" $\exp(At)$ is doomed from the start.

Example 8

Consider the equation $u_t+u_x=0$ on the interval (0,1), with boundary condition u(0,t)=0. Abstractly, we associate with this the operator $Au=-u'$ with domain $D(A)=\{u\in H^1(0,1)\,\vert\,u[0]=0\}$. The initial value problem has a well-defined solution, namely, we have
\begin{displaymath}
\exp(At)u_0[x]=\cases{u_0[x-t],&if $t<x$,\cr
0,&if $t>x$.\cr}\end{displaymath} (51)
Note also that
\begin{displaymath}
{\rm Re}(Au,u)=-{\rm Re}[\int_0^1 \bar u'[x]u[x]\,dx]=-{1\over 2}\vert u[1]\vert^2,\end{displaymath} (52)
and as before we can conclude from this that
\begin{displaymath}
\Vert(A-\sigma I)^{-1}u\Vert\le{1\over\sigma}\Vert u\Vert\end{displaymath} (53)
for $\sigma>0$, so the assumptions of the Hille-Yosida theorem hold with M=1 and $\omega=0$.

If, however, we consider the problem $Au-\sigma u=f$, i.e.
\begin{displaymath}
-u'[x]-\sigma u[x]=f[x],\quad u[0]=0,\end{displaymath} (54)
we find the unique solution
\begin{displaymath}
u[x]=-\int_0^x \exp(-\sigma(x-y))f[y]\,dy,\end{displaymath} (55)
regardless of what $\sigma$ is. Consequently, the operator A has no eigenvalues and indeed no spectrum (see below).
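
Both formulas are easy to verify on a grid. The sketch below is an illustration only: the initial state u_0, the right hand side f and the value of $\sigma$ are arbitrary choices. It evaluates the solution formula (51) and checks that the function defined by (55) solves (54).

\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def u0(y):
    return np.sin(np.pi * y) ** 2       # an arbitrary initial state with u0(0) = 0

x = np.linspace(0.0, 1.0, 2001)

# solution formula (51): (exp(At)u0)(x) = u0(x - t) for t < x, and 0 otherwise
t = 0.3
u_t = np.where(x > t, u0(np.clip(x - t, 0.0, None)), 0.0)

# resolvent formula (55): u(x) = -int_0^x exp(-sigma (x - y)) f(y) dy
sigma = 5.0
f = np.cos(np.pi * x)
u = -np.exp(-sigma * x) * cumulative_trapezoid(np.exp(sigma * x) * f, x, initial=0.0)

# check that -u' - sigma u = f away from the endpoints, for this (arbitrary) sigma
residual = -np.gradient(u, x) - sigma * u - f
print(np.max(np.abs(residual[5:-5])))   # small: (54) is solvable for every sigma
\end{verbatim}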

In the earlier sections on stability of systems of ODEs, we established a connection between stability and eigenvalues. For infinite-dimensional systems, the issues are more complicated. First of all, there is a more general notion of spectrum. We say that $\sigma$ is in the resolvent set of A if the ``resolvent" $(A-\sigma I)^{-1}$ exists as a bounded operator defined on all of H, and we say $\sigma$ is in the spectrum if it is not in the resolvent set. In the finite-dimensional case, the spectrum consists precisely of the eigenvalues, but in infinite dimensions this is no longer the case; for instance, there may be what is known as a ``continuous spectrum."

Example 9

Consider, for instance, the operator
\begin{displaymath}
Au[x]=xu[x]\end{displaymath} (56)
on the Hilbert space $L^2(0,1)$. Clearly, we have  
 \begin{displaymath}
(A-\sigma I)^{-1}f[x]={f[x]\over x-\sigma},\end{displaymath} (57)
and this is a well-defined bounded operator unless $\sigma\in [0,1]$. If $\sigma\in [0,1]$, then $(A-\sigma I)^{-1}$ is unbounded, because the denominator on the right hand side of (57) can become arbitrarily small. Hence $\sigma$ is in the spectrum of A. Nevertheless, $\sigma$ is not an eigenvalue: if $Au=\sigma u$, then $(x-\sigma)u[x]=0$, and hence u[x]=0 for $x\neq\sigma$, i.e. almost everywhere.
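
A short computation makes this visible; the sketch below is only a rough illustration, with the supremum over a finite grid standing in for the essential supremum. The norm of the multiplier in (57) equals the reciprocal of the distance from $\sigma$ to [0,1], so it blows up as $\sigma$ approaches the interval, even though there are no eigenfunctions.

\begin{verbatim}
import numpy as np

x = np.linspace(0.0, 1.0, 100001)[1:-1]      # grid on (0,1), endpoints excluded

def resolvent_norm(sigma):
    # sup over the grid of |1/(x - sigma)|, approximating the operator
    # norm of (A - sigma I)^{-1} on L^2(0,1) for sigma outside [0,1]
    return np.max(np.abs(1.0 / (x - sigma)))

for sigma in (-1.0, -0.1, -0.01, -0.001, 2.0, 1.001):
    print(sigma, resolvent_norm(sigma))      # grows like 1/dist(sigma, [0,1])
\end{verbatim}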

For any operator which satisfies the hypotheses of the Hille-Yosida theorem, we can define the following quantities:

\begin{eqnarray}
r(A)&=&\sup\{{\rm Re}\,\sigma\,\vert\,\sigma\ {\rm is\ in\ the\ spectrum\ of}\ A\},\nonumber\\
\omega(A)&=&\lim_{t\to\infty}{\log\Vert\exp(At)\Vert\over t}.\end{eqnarray}

We call r(A) the spectral bound, and $\omega(A)$ the type of the semigroup. The quantity $\omega(A)$ measures the rate of exponential growth or decay of $\exp(At)$. The connection between spectrum and stability is established if $\omega(A)=r(A)$. While it can be shown that $\omega(A)\ge r(A)$, the converse inequality is in general not true. There are a number of known counterexamples in the literature; a very natural one [13] is the equation

\begin{displaymath}
u_{tt}=u_{xx}+u_{yy}+e^{ix}u_y,\end{displaymath} (58)

with periodic boundary conditions in x and y (we can make this into a first order system by introducing the new variable $v=u_t$; the natural function space in which to consider the problem is then $(u,v)\in H^1\times L^2$). It can be shown that for this example r(A)=0, but $\omega(A)=1/2$.
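
In finite dimensions, by contrast, $\omega(A)$ and r(A) always coincide. The sketch below (an illustration with an arbitrary non-normal $2\times 2$ matrix) checks this numerically, and also shows $\Vert\exp(At)\Vert$ transiently exceeding $\exp(r(A)t)$, a finite-dimensional hint of why the two quantities need not agree for unbounded operators.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 50.0],
              [0.0, -1.1]])                 # non-normal, eigenvalues -1 and -1.1

r = np.max(np.linalg.eigvals(A).real)       # spectral bound r(A) = -1

for t in (1.0, 5.0, 50.0, 200.0):
    nrm = np.linalg.norm(expm(A * t), 2)
    print(t, np.log(nrm) / t)               # tends toward r(A) as t grows

# for moderate t the norm far exceeds exp(r(A) t) (transient growth),
# even though the type omega(A) equals r(A) in finite dimensions
print(np.linalg.norm(expm(A * 1.0), 2), np.exp(r * 1.0))
\end{verbatim}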

The Laplace inversion formula for $\exp(At)$ (see above) can be exploited to establish a weaker result. Namely, it turns out that

\begin{eqnarray}
\omega(A)&=&\inf\{s\in{\rm I\!R}\,\vert\,\Vert(A-\sigma I)^{-1}\Vert\ {\rm is\ uniformly\ bounded}\nonumber\\
&&{\rm in\ the\ half\ plane\ Re}\,\sigma\ge s\}.\end{eqnarray}

We note that, if there are no spectral values for ${\rm Re}\,\sigma\ge s$, i.e. if s>r(A), then the resolvent exists in that half plane. But this does not necessarily imply that it is uniformly bounded. In the finite-dimensional case, this is not an issue, since $\lim_{\sigma\to\infty} \Vert(A-\sigma I)^{-1}\Vert=0$.

For a wide class of problems, including those of Newtonian fluid mechanics, it is known that $\omega(A)=r(A)$. However, these results do not include viscoelastic fluids, where the problem is in general still open. Hence we do not know that the onset of instabilities is always associated with eigenvalues crossing the imaginary axis. Nevertheless, many of the instabilities observed clearly show the type of dynamics which is expected from such a scenario, and we shall analyze them in the appropriate framework.


Michael Renardy
1998-07-13