Solution to vector differential equations
Recall that a scalar differential equation (i.e., $x(t) \in \mathbb{R}$):
$$\dot{x}(t) = a\, x(t), \qquad x(0) = x_0,$$
has the solution
$$x(t) = e^{at} x_0.$$
Can we say the same for vector systems? That is, can we define a matrix exponential $e^{At}$ in such a way that the vector differential equation (i.e., $x(t) \in \mathbb{R}^n$):
$$\dot{x}(t) = A\, x(t), \qquad x(0) = x_0,$$
where $A \in \mathbb{R}^{n \times n}$, has the solution
$$x(t) = e^{At} x_0?$$
Mathematically, it turns out that this is straightforward. Recall that for a scalar $a$, we have
$$e^{at} = \sum_{k=0}^{\infty} \frac{(at)^k}{k!}.$$
So, a natural choice is to define the matrix exponential as
$$e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!},$$
where the second equality uses the fact that $(At)^k = A^k t^k$ because $t$ is a scalar.
With this definition, we have
$$\frac{d}{dt} e^{At} = \sum_{k=1}^{\infty} \frac{A^k t^{k-1}}{(k-1)!} = A \sum_{m=0}^{\infty} \frac{A^m t^m}{m!} = A e^{At}.$$
Thus, if we take the candidate solution $x(t) = e^{At} x_0$, we have that
$$\dot{x}(t) = A e^{At} x_0 = A\, x(t),$$
so our candidate solution satisfies the vector differential equation!
If we define $e^{At}$ as
$$e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!},$$
then we can write the solution of the vector differential equation
$$\dot{x}(t) = A\, x(t), \qquad x(0) = x_0,$$
where $A \in \mathbb{R}^{n \times n}$, as
$$x(t) = e^{At} x_0.$$
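This definition translates directly into code. Below is a minimal sketch (the matrix $A$ and time $t$ are illustrative choices, not from the notes) that truncates the power series and compares the result against SciPy's built-in `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, n_terms=30):
    """Approximate e^{At} by truncating the series sum_k (At)^k / k!."""
    result = np.eye(A.shape[0])        # k = 0 term
    term = np.eye(A.shape[0])
    for k in range(1, n_terms):
        term = term @ (A * t) / k      # builds (At)^k / k! incrementally
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # illustrative matrix
t = 1.5
print(np.allclose(expm_series(A, t), expm(A * t)))  # True
```

The truncated series works on this small example, but as discussed later in this section, it is not a reliable general-purpose method.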
We now present some examples where the matrix exponential can be computed easily.
Example 4.1 Compute $e^{At}$ for a nilpotent matrix $A$ satisfying $A^2 = 0$ (for example, a $2 \times 2$ strictly upper-triangular matrix). Based on the solution, solve the vector differential equation $\dot{x}(t) = A\, x(t)$, $x(0) = x_0$.
We compute the powers of $A$:
$$A^2 = 0,$$
which means that $A^k = 0$ for $k \ge 2$. Thus, the right-hand side of the series definition of $e^{At}$ contains only a finite number of nonzero terms:
$$e^{At} = I + At.$$
Thus, the solution of the vector differential equation is given by
$$x(t) = e^{At} x_0 = (I + At)\, x_0.$$
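As a concrete check, the sketch below uses a representative nilpotent matrix (an illustrative choice, since this reconstruction does not fix a specific $A$) and verifies that the series truncates exactly:

```python
import numpy as np
from scipy.linalg import expm

# Representative nilpotent matrix with A @ A = 0 (illustrative choice)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
t = 2.0

print(np.allclose(A @ A, np.zeros((2, 2))))          # True: A^2 = 0
print(np.allclose(expm(A * t), np.eye(2) + A * t))   # True: e^{At} = I + At
```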
Example 4.2 Compute $e^{At}$ for a nilpotent matrix $A$ satisfying $A^3 = 0$ (for example, a $3 \times 3$ strictly upper-triangular matrix). Based on the solution, solve the vector differential equation $\dot{x}(t) = A\, x(t)$, $x(0) = x_0$.
We compute the powers of $A$:
$$A^3 = 0,$$
which means that $A^k = 0$ for $k \ge 3$. Thus, the right-hand side of the series definition of $e^{At}$ contains only a finite number of nonzero terms:
$$e^{At} = I + At + \frac{A^2 t^2}{2!}.$$
Thus, the solution of the vector differential equation is given by
$$x(t) = e^{At} x_0 = \left(I + At + \frac{A^2 t^2}{2!}\right) x_0.$$
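The same check works for the three-term truncation, again with a representative nilpotent matrix chosen for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Representative nilpotent matrix with A^3 = 0 (illustrative choice)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
t = 1.5

exact = np.eye(3) + A * t + (A @ A) * t**2 / 2   # I + At + A^2 t^2 / 2!
print(np.allclose(expm(A * t), exact))           # True
```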
Example 4.3 Compute $e^{At}$ for a diagonal matrix
$$A = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}.$$
Based on the solution, solve the vector differential equation $\dot{x}(t) = A\, x(t)$, $x(0) = x_0$.
We compute the powers of $A$:
$$A^k = \begin{bmatrix} \lambda_1^k & & \\ & \ddots & \\ & & \lambda_n^k \end{bmatrix}.$$
Thus, we have
$$e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} = \begin{bmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{bmatrix}.$$
Thus, the solution of the vector differential equation is given by
$$x(t) = e^{At} x_0, \quad \text{i.e.,} \quad x_i(t) = e^{\lambda_i t} x_{0,i} \ \text{for each } i.$$
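A quick numerical confirmation that the matrix exponential of a diagonal matrix is just the diagonal matrix of scalar exponentials (the diagonal entries below are illustrative):

```python
import numpy as np
from scipy.linalg import expm

lam = np.array([-1.0, 2.0, 0.5])   # illustrative diagonal entries
A = np.diag(lam)
t = 0.7

# e^{At} = diag(e^{lam_1 t}, ..., e^{lam_n t}) for diagonal A
print(np.allclose(expm(A * t), np.diag(np.exp(lam * t))))  # True
```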
Outside of a few such special cases, computing $e^{At}$ directly from the series definition is not computationally feasible. We now present computationally efficient methods to compute the matrix exponential.
Computing the matrix exponential
Method 1: Eigenvalue diagonalization method
As illustrated by Example 4.3, computing the matrix exponential is easy for a diagonal matrix. So, if the matrix $A$ is diagonalizable (which is guaranteed, e.g., when it has distinct eigenvalues), we can do a change of coordinates and compute the matrix exponential in the eigen-coordinates.
In particular, suppose $A$ has distinct eigenvalues $\lambda_1, \dots, \lambda_n$ and $v_1, \dots, v_n$ are the corresponding eigenvectors. Thus,
$$A v_i = \lambda_i v_i \quad \text{for all } i \in \{1, \dots, n\}.$$
Writing this in matrix form, we have
$$A \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}.$$
Now define
$$T = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix} \quad \text{and} \quad \Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}.$$
So, the above equation can be written as
$$A T = T \Lambda \quad \text{or} \quad A = T \Lambda T^{-1}.$$
Observe that
$$A^k = (T \Lambda T^{-1})(T \Lambda T^{-1}) \cdots (T \Lambda T^{-1}) = T \Lambda^k T^{-1}.$$
Therefore,
$$e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!} = T \left( \sum_{k=0}^{\infty} \frac{\Lambda^k t^k}{k!} \right) T^{-1} = T e^{\Lambda t} T^{-1}.$$
Thus, once we know the eigenvalues and eigenvectors of $A$ (and if all eigenvalues are distinct), then
$$e^{At} = T \begin{bmatrix} e^{\lambda_1 t} & & \\ & \ddots & \\ & & e^{\lambda_n t} \end{bmatrix} T^{-1}.$$
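This method is straightforward to implement numerically. A minimal sketch, assuming $A$ is diagonalizable (the matrix below is an illustrative choice):

```python
import numpy as np
from scipy.linalg import expm

def expm_eig(A, t):
    """Compute e^{At} = T diag(e^{lam_i t}) T^{-1} via eigendecomposition.
    Assumes A is diagonalizable; the result is real (up to round-off)
    when A is real."""
    lam, T = np.linalg.eig(A)    # columns of T are the eigenvectors v_i
    return T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # illustrative matrix with eigenvalues -1, -2
t = 1.0
print(np.allclose(expm_eig(A, t), expm(A * t)))  # True
```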
Exercise 4.1 Use the eigenvalue diagonalization method to compute $e^{At}$ for a $2 \times 2$ matrix $A$ with distinct eigenvalues.
We start by computing the eigenvalues and eigenvectors of $A$.
To compute the eigenvalues, solve
$$\det(sI - A) = 0.$$
The characteristic equation is a quadratic in $s$; its two roots are the eigenvalues $\lambda_1$ and $\lambda_2$ of $A$.
We now compute the eigenvectors. Recall that for any eigenvalue $\lambda_i$, the eigenvector $v_i$ satisfies
$$(\lambda_i I - A) v_i = 0.$$
We start with $\lambda_1$. The equation $(\lambda_1 I - A) v_1 = 0$ determines $v_1$ up to scaling; we set one component (say the first) to $1$ and solve for the other. This gives the eigenvector $v_1$.
Similarly, for $\lambda_2$, we solve $(\lambda_2 I - A) v_2 = 0$, again fixing one component to $1$. This gives the eigenvector $v_2$.
Thus,
$$T = \begin{bmatrix} v_1 & v_2 \end{bmatrix},$$
and $T^{-1}$ follows from the $2 \times 2$ matrix inverse formula. Hence,
$$e^{At} = T \begin{bmatrix} e^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_2 t} \end{bmatrix} T^{-1}.$$
Exercise 4.2 Use the eigenvalue diagonalization method to compute $e^{At}$ for another $2 \times 2$ matrix $A$ with distinct eigenvalues.
The procedure is the same as in Exercise 4.1. We start by computing the eigenvalues and eigenvectors of $A$: solve the characteristic equation $\det(sI - A) = 0$ to obtain the eigenvalues $\lambda_1$ and $\lambda_2$, then solve $(\lambda_i I - A) v_i = 0$ for each eigenvector $v_i$, fixing one component to $1$.
Thus,
$$T = \begin{bmatrix} v_1 & v_2 \end{bmatrix},$$
and, computing $T^{-1}$, we obtain
$$e^{At} = T \begin{bmatrix} e^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_2 t} \end{bmatrix} T^{-1}.$$
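Hand computations like these can be verified symbolically. The sketch below uses SymPy with an illustrative matrix (the matrices from the exercises are not reproduced here):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Illustrative 2x2 matrix with distinct eigenvalues -1 and -2
A = sp.Matrix([[0, 1],
               [-2, -3]])

E = (A * t).exp()          # closed-form matrix exponential e^{At}
sp.pprint(sp.simplify(E))  # entries are combinations of exp(-t) and exp(-2*t)
```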
Internal stability
As stated in the beginning of the lecture, the matrix exponential allows us to solve the vector differential equation $\dot{x}(t) = A\, x(t)$ in the same manner as a scalar linear differential equation. The solution is given by
$$x(t) = e^{At} x(0).$$
Suppose $A$ has distinct eigenvalues $\lambda_1, \dots, \lambda_n$ with corresponding (linearly independent) eigenvectors $v_1, \dots, v_n$.
Recall that for any eigenvector $v_i$, $A^k v_i = \lambda_i^k v_i$, so $e^{At} v_i = e^{\lambda_i t} v_i$. Thus, if the system starts from the initial condition $x(0) = v_i$, then
$$x(t) = e^{At} v_i = e^{\lambda_i t} v_i.$$
Therefore, if we start along an eigenvector of $A$, the system trajectory remains along that direction, with the length scaling as $e^{\lambda_i t}$.
In general, since the eigenvectors are linearly independent, an arbitrary initial condition can be written as
$$x(0) = \alpha_1 v_1 + \cdots + \alpha_n v_n.$$
Therefore,
$$x(t) = e^{At} x(0) = \alpha_1 e^{\lambda_1 t} v_1 + \cdots + \alpha_n e^{\lambda_n t} v_n.$$
Thus, the response of the dynamical system is a combination of motions along the eigenvectors of the matrix $A$. Each eigendirection is called a mode of the system. A particular mode is excited by choosing an initial condition with a component along that eigendirection.
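This modal decomposition is easy to check numerically: express $x(0)$ in the eigenvector basis and propagate each mode separately (the matrix and initial condition below are illustrative):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # illustrative matrix with eigenvalues -1, -2
x0 = np.array([1.0, 1.0])
t = 0.8

lam, T = np.linalg.eig(A)        # columns of T are the eigenvectors v_i
alpha = np.linalg.solve(T, x0)   # coordinates of x0 in the eigenvector basis

# x(t) = sum_i alpha_i e^{lam_i t} v_i
x_modal = T @ (alpha * np.exp(lam * t))
print(np.allclose(x_modal, expm(A * t) @ x0))  # True
```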
An implication of the above is the following: the state $x(t) \to 0$ as $t \to \infty$ for all initial states $x(0)$ if and only if all eigenvalues of $A$ lie in the open left-half plane (i.e., $\operatorname{Re}(\lambda_i) < 0$ for all $i$).
A SSM where all eigenvalues of the matrix $A$ lie in the open left-half plane is called internally stable.
The TF of a SSM is given by
$$G(s) = C (sI - A)^{-1} B + D = \frac{C \operatorname{adj}(sI - A)\, B + D \det(sI - A)}{\det(sI - A)}.$$
The elements of the adjugate of $(sI - A)$ are all polynomials in $s$. Thus, the numerator is a polynomial in $s$; so is the denominator $\det(sI - A)$. Moreover, the denominator equals the characteristic polynomial of $A$; thus, the roots of the denominator are the eigenvalues of $A$.
The numerator and denominator may have common roots that cancel each other. So, in general,
$$\{\text{poles of } G(s)\} \subseteq \{\text{eigenvalues of } A\}.$$
Hence, if the SSM is internally stable (i.e., all its eigenvalues are in the open left-half plane), then it is BIBO stable (i.e., all its poles are in the open left-half plane).
However, the converse is not true because there may be pole-zero cancellations. For example, consider a SSM in which $A$ has an eigenvalue in the open right-half plane that is cancelled by a matching root of the numerator. Then the TF is BIBO stable but the SSM is not internally stable! Any initial condition that excites the mode corresponding to the unstable eigenvalue will cause $x(t)$ to diverge to infinity.
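Since the matrices of the original example are not reproduced here, the sketch below uses one standard illustrative instance of this phenomenon: $A$ has an unstable eigenvalue at $+1$, but the output does not see that state, so the factor $(s - 1)$ cancels from the TF:

```python
import numpy as np
from scipy.signal import ss2tf
from scipy.linalg import expm

# Illustrative SSM: the unstable mode (eigenvalue +1) does not appear in y
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[0.0, 1.0]])   # output sees only the stable state
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(num, den)
# num ~ s - 1 and den = s^2 - 1 = (s - 1)(s + 1); the unstable factor
# cancels, leaving G(s) = 1/(s + 1), which is BIBO stable.

print(np.linalg.eigvals(A))  # [ 1., -1.]: not internally stable

# An initial condition along the unstable mode diverges
x0 = np.array([1.0, 0.0])
for t in (0.0, 2.0, 5.0):
    print(t, expm(A * t) @ x0)  # first state grows like e^t
```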
Time response of state space models
Now consider a SSM given by
$$\dot{x}(t) = A\, x(t) + B\, u(t), \qquad y(t) = C\, x(t) + D\, u(t).$$
Suppose the system starts at $t = 0$ with an initial state $x(0) = x_0$ and we apply the input $u(t)$. How do we find the output $y(t)$?
Taking the Laplace transform of the SSM, we get:
$$s X(s) - x_0 = A X(s) + B U(s), \qquad Y(s) = C X(s) + D U(s).$$
Solving for $X(s)$, we get
$$X(s) = (sI - A)^{-1} x_0 + (sI - A)^{-1} B U(s).$$
Substituting in the output equation, we get
$$Y(s) = C (sI - A)^{-1} x_0 + \left[ C (sI - A)^{-1} B + D \right] U(s).$$
We can then use the above expression to compute $y(t)$ by taking the inverse Laplace transform.
It is sometimes useful to write the expression in the time domain (but we will not use this expression for computations). To do so, recall that $G(s) = C (sI - A)^{-1} B + D$ is the transfer function of the system. Therefore, its inverse Laplace transform is the impulse response:
$$g(t) = C e^{At} B + D\, \delta(t).$$
Then, we can compute the inverse Laplace transform of $Y(s)$ using the convolution formula and write
$$y(t) = C e^{At} x_0 + \int_0^t C e^{A(t - \tau)} B\, u(\tau)\, d\tau + D\, u(t).$$
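Numerically, this is exactly what `scipy.signal.lsim` evaluates; the zero-input term $C e^{At} x_0$ is handled through its `X0` argument. A sketch with illustrative system matrices and a step input:

```python
import numpy as np
from scipy.signal import lsim

# Illustrative SSM (not from the notes)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 5.0, 200)
u = np.ones_like(t)             # step input
x0 = np.array([1.0, 0.0])       # nonzero initial state

# y(t) = C e^{At} x0 + int_0^t C e^{A(t-tau)} B u(tau) dtau + D u(t)
tout, y, x = lsim((A, B, C, D), U=u, T=t, X0=x0)
print(y[:5])
```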