7  Routh-Hurwitz Stability Criterion

Updated: June 3, 2025

Consider a unity feedback system as shown in Figure 7.1. In such a setting, we are often interested in finding the values of \(K\) for which the system is stable.

Figure 7.1: Block diagram of a proportional controller

Example 7.1 Consider the system shown in Figure 7.1 with \[ G(s) = \frac{1}{s-1}. \] Find the values of \(K\) for which the system is stable.

Solution

The closed-loop transfer function is given by \[ T(s) = \frac{K G(s)}{1 + K G(s) } = \frac{K}{s - 1 + K}. \] Thus, the closed-loop transfer function has a pole at \(1 - K\). The system is stable when all poles are in the OLHP, i.e., \(1 - K < 0\). Thus, the system is stable for \[ K > 1. \]
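As a quick numerical sanity check, the sketch below evaluates the closed-loop pole for two sample gains; it uses Polynomials.jl (the same package used in the later examples), and the gain values are my own choice.

using Polynomials   # assumed available; the same package is used later in these notes

for K in (0.5, 2.0)                     # sample gains, one below and one above K = 1
    p = Polynomial([K - 1, 1], :s)      # closed-loop denominator (K - 1) + s, ascending order
    println("K = ", K, ": closed-loop pole at ", roots(p))
end
# K = 0.5 gives a pole at +0.5 (unstable); K = 2.0 gives a pole at -1.0 (stable).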

In the above example, we could identify the roots of the closed-loop transfer function and thereby determine the values of the gain \(K\) for which the system is stable. However, we can only factorize low-order polynomials by hand; for polynomials of degree five or greater, there is no general closed-form formula for the roots, and we need to resort to numerical methods. Doing so makes it difficult to find the range of values of \(K\) for which a polynomial is stable.

However, to determine the stability of a system, we don’t need to find the roots of the denominator polynomial; we simply need to verify that all the roots are in the OLHP. The following results show that we can determine whether roots lie in the ORHP without factorizing the polynomial.

Theorem 7.1 A necessary condition for a polynomial to be stable is that all its coefficients have the same sign (all positive or all negative).

However, this is only a necessary condition, not a sufficient one. So, we know that the polynomial \[ D(s) = s^5 + 4 s^4 + 10s^3 - s^2 + 2 s + 1\] is unstable because one of the coefficients is negative. Moreover, we know that the polynomial \[ D(s) = s^4 + 4s^3 + s + 1\] is unstable because the coefficient of the \(s^2\) term is \(0\). But this necessary condition doesn’t tell us whether \[ D(s) = s^5 + 4 s^4 + 10s^3 + s^2 + 2 s + 1\] is stable or not.
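A minimal sketch of this coefficient test (the helper name same_sign is my own) applied to the three polynomials above:

# Necessary condition of Theorem 7.1: all coefficients nonzero and of the same sign.
same_sign(a) = all(x -> x > 0, a) || all(x -> x < 0, a)

println(same_sign([1, 4, 10, -1, 2, 1]))   # false: a negative coefficient, so unstable
println(same_sign([1, 4, 0, 1, 1]))        # false: a zero (missing) coefficient, so unstable
println(same_sign([1, 4, 10, 1, 2, 1]))    # true: the test is inconclusive (condition is only necessary)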

The Routh-Hurwitz criterion is a simple algebraic procedure which determines whether a polynomial is stable. The first step is generating what is called a Routh Array.

7.1 Generating the Routh Array

Consider a polynomial \[ D(s) = a_n s^n + \cdots + a_0. \]

The Routh array is a (non-rectangular) array with \(n+1\) rows, indexed by \(s^n\), \(s^{n-1}\), \(\dots\), \(s^0\).

  • Step 1. Fill the first two rows of the Routh array with the coefficients of \(D(s)\) going in the zigzag pattern as shown below. We stop when we have used all the coefficients. Any unfilled entries in the Routh array are assumed to be zero.

    \(s^n\) \(a_{n}\) \(a_{n-2}\) \(\cdots\)
    \(s^{n-1}\) \(a_{n-1}\) \(a_{n-3}\) \(\cdots\)

  • Step 2. This is a recursive step, where we take two filled rows, say rows \(s^{m+2}\) and \(s^{m+1}\), and use them to fill row \(s^m\), for all \(m \in \{n-2, \dots, 0\}\). Each entry is the negative determinant of a \(2 \times 2\) matrix constructed from the entries in the previous two rows (i.e., rows \(s^{m+2}\) and \(s^{m+1}\) when we are filling in row \(s^m\)), divided by the first entry in row \(s^{m+1}\) (provided that entry is not zero!). The first column of the \(2 \times 2\) matrix is the first column of the previous two rows; the second column of the \(2 \times 2\) matrix comes from the column of the previous two rows immediately to the right of the entry being filled.

    \(s^{m+2}\) \(b_{1}\) \(b_{2}\) \(\cdots\)
    \(s^{m+1}\) \(c_1\) \(c_2\) \(\cdots\)
    \(s^{m}\) \(-\dfrac{\DET{b_1 & b_2 \\ c_1 & c_2}}{c_1}\) \(-\dfrac{\DET{b_1 & b_3 \\ c_1 & c_3}}{c_1}\) \(\cdots\)

    Note that in each row, we eventually end up with zeros, at which point we stop filling the row. We repeat this procedure until we have filled all rows, down to row \(s^0\).

We always follow the above method to fill in the Routh array, irrespective of the degree of the polynomial. We illustrate this via some examples.
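Before the examples, here is a minimal computational sketch of the construction. The function name routh_array is my own, and it assumes the basic case where no zero appears in the first column; the special cases are treated later in this chapter.

# Build the Routh array for coefficients given in descending powers: [a_n, a_{n-1}, ..., a_0].
function routh_array(coeffs::Vector{<:Real})
    n = length(coeffs) - 1                 # degree of the polynomial
    ncols = cld(n + 1, 2)                  # number of columns needed
    R = zeros(n + 1, ncols)                # row 1 is s^n, row n+1 is s^0
    R[1, :] = [get(coeffs, 2k - 1, 0) for k in 1:ncols]   # a_n, a_{n-2}, ...
    R[2, :] = [get(coeffs, 2k, 0)     for k in 1:ncols]   # a_{n-1}, a_{n-3}, ...
    for i in 3:n+1, j in 1:ncols-1
        # entry = -det([b_1 b_{j+1}; c_1 c_{j+1}]) / c_1, using the two rows above
        R[i, j] = -(R[i-2, 1] * R[i-1, j+1] - R[i-2, j+1] * R[i-1, 1]) / R[i-1, 1]
    end
    return R
end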

Example 7.2 Find the Routh Array of \[ D(s) = s^4 + 2 s^3 + 3 s^2 + 4s + 5. \]

Solution

We follow the procedure described above; pay attention to the first column, which we will use in the next section.

$s^{4}$ $1$ $3$ $5$
$s^{3}$ $2$ $4$
$s^{2}$ $\displaystyle -\frac{\DET{ 1 & 3 \\ 2 & 4}}{2} = 1$ $\displaystyle -\frac{\DET{ 1 & 5 \\ 2 & 0}}{2} = 5$
$s^{1}$ $\displaystyle -\frac{\DET{ 2 & 4 \\ 1 & 5}}{1} = -6$
$s^{0}$ $\displaystyle -\frac{\DET{ 1 & 5 \\ -6 & 0}}{-6} = 5$
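Assuming the routh_array sketch from above, the same array can be reproduced numerically:

R = routh_array([1, 2, 3, 4, 5])   # coefficients of D(s) in descending powers
println(R[:, 1])                   # first column: 1, 2, 1, -6, 5, matching the table above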

7.2 Interpreting the Routh Array

We start with the basic case when there is no zero in the first column (as is the case in Example 7.2). In this case, we look at the first column and count the number of sign changes. For example, the signs of the entries in the first column of Example 7.2 are shown below, from which we can see that there are two sign changes in the first column (going from row \(s^2\) to row \(s^1\) and going from row \(s^1\) to row \(s^0\)).

Term Sign
$s^{4}$ $+$
$s^{3}$ $+$
$s^{2}$ $+$
$s^{1}$ $-$
$s^{0}$ $+$

When there are no zeros in the first column, it implies that the polynomial has no roots on the \(j ω\)-axis. Moreover,

  • No. of roots in the ORHP = no. of sign changes
  • No. of roots in the OLHP = degree of polynomial \(-\) no. of sign changes.
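A minimal sketch of this counting rule (the helper name count_sign_changes is my own), applied to the first column of Example 7.2:

count_sign_changes(col) = count(i -> sign(col[i]) != sign(col[i+1]), 1:length(col)-1)

first_col = [1, 2, 1, -6, 5]              # first column of the Routh array in Example 7.2
k = count_sign_changes(first_col)
println("roots in the ORHP: ", k)          # 2
println("roots in the OLHP: ", 4 - k)      # degree minus sign changes = 2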

So, for Example 7.2, we have

  • No. of roots in the ORHP = 2 (no. of sign changes)
  • No. of roots in the OLHP = 4 (degree of polynomial) \(-\) 2 (no. of sign changes) = 2.

We can verify this by factorizing \(D(s)\), which gives

using Polynomials   # provides Polynomial and roots

D = Polynomial([5, 4, 3, 2, 1], :s)   # coefficients in ascending powers of s
println("D(s) = ", D)
roots(D)
D(s) = 5 + 4*s + 3*s^2 + 2*s^3 + s^4
4-element Vector{ComplexF64}:
 -1.2878154795576484 - 0.8578967583284913im
 -1.2878154795576484 + 0.8578967583284913im
  0.2878154795576478 - 1.416093080171908im
  0.2878154795576478 + 1.416093080171908im

Example 7.3 Find the location of the poles of a TF with denominator given by \[ D(s) = s^4 + 5 s^3 + s^2 + 10s + 1. \]

We first compute the Routh Array

$s^{4}$ $1$ $1$ $1$
$s^{3}$ $5$ $10$
$s^{2}$ $\displaystyle -\frac{\DET{ 1 & 1 \\ 5 & 10}}{5} = -1$ $\displaystyle -\frac{\DET{ 1 & 1 \\ 5 & 0}}{5} = 1$
$s^{1}$ $\displaystyle -\frac{\DET{ 5 & 10 \\ -1 & 1}}{-1} = 15$
$s^{0}$ $\displaystyle -\frac{\DET{ -1 & 1 \\ 15 & 0}}{15} = 1$

We now look at the signs of the terms in the first column:

Term Sign
$s^{4}$ $+$
$s^{3}$ $+$
$s^{2}$ $-$
$s^{1}$ $+$
$s^{0}$ $+$

Note that there are two sign changes in the first column. Thus, we have

  • No. of roots in ORHP = 2 (no. of sign changes)
  • No. of roots in OLHP = 4 (degree of poly) \(-\) 2 (no. of sign changes) = 2.

We can verify this by factorizing \(D(s)\), which gives
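The verification code is analogous to the snippet used in Example 7.2 (a sketch, assuming Polynomials.jl is loaded):

D = Polynomial([1, 10, 1, 5, 1], :s)   # coefficients in ascending powers of s
println("D(s) = ", D)
roots(D)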

D(s) = 1 + 10*s + s^2 + 5*s^3 + s^4
4-element Vector{ComplexF64}:
   -5.173143012444715 + 0.0im
 -0.10051275725870919 + 0.0im
  0.13682788485171066 - 1.3800281123446885im
  0.13682788485171066 + 1.3800281123446885im

7.2.1 An optimization

Since we only care about the signs of the entries in the first column, we can multiply or divide all elements in a row by a positive number without changing the result. This can sometimes lead to simpler calculations.

Example 7.4 Find the location of the poles of a TF with denominator given by \[ D(s) = s^6 + 4s^5 + 3s^4 + 2s^3 + s^2 + 4s + 4. \]

We first compute the Routh Array

$s^{6}$ $1$ $3$ $1$ $4$
$s^{5}$ $\cancel{4} 2$ $\cancel{2} 1$ $\cancel{4} 2$
$s^{4}$ $\displaystyle -\frac{\DET{ 1 & 3 \\ 2 & 1}}{2} = \frac{5}{2}$ $\displaystyle -\frac{\DET{ 1 & 1 \\ 2 & 2}}{2} = 0$ $\displaystyle -\frac{\DET{ 1 & 4 \\ 2 & 0}}{2} = 4$
$s^{3}$ $\displaystyle -\frac{\DET{ 2 & 1 \\ \frac{5}{2} & 0}}{\frac{5}{2}} = 1$ $\displaystyle -\frac{\DET{ 2 & 2 \\ \frac{5}{2} & 4}}{\frac{5}{2}} = \frac{-6}{5}$
$s^{2}$ $\displaystyle -\frac{\DET{ \frac{5}{2} & 0 \\ 1 & \frac{-6}{5}}}{1} = 3$ $\displaystyle -\frac{\DET{ \frac{5}{2} & 4 \\ 1 & 0}}{1} = 4$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & \frac{-6}{5} \\ 3 & 4}}{3} = \frac{-38}{15}$
$s^{0}$ $\displaystyle -\frac{\DET{ 3 & 4 \\ \frac{-38}{15} & 0}}{\frac{-38}{15}} = 4$

We now look at the signs of the terms in the first column:

Term Sign
$s^{6}$ $+$
$s^{5}$ $+$
$s^{4}$ $+$
$s^{3}$ $+$
$s^{2}$ $+$
$s^{1}$ $-$
$s^{0}$ $+$

Note that there are two sign changes in the first column. Thus, we have

  • No. of roots in ORHP = 2 (no. of sign changes)
  • No. of roots in OLHP = 6 (degree of poly) \(-\) 2 (no. of sign changes) = 4.

We can verify this by factorizing \(D(s)\), which gives

D(s) = 4 + 4*s + s^2 + 2*s^3 + 3*s^4 + 4*s^5 + s^6
6-element Vector{ComplexF64}:
 -3.2643574436966434 + 0.0im
 -0.8858022358397655 + 0.0im
 -0.6045963281166988 - 0.993535028893593im
 -0.6045963281166988 + 0.993535028893593im
  0.6796761678849015 - 0.7488138087285154im
  0.6796761678849015 + 0.7488138087285154im

7.3 Why does it work?

All this appears to be magic! To get some intuition for why the Routh-Hurwitz criterion works, let’s look at some special cases.

  • Degree 1 polynomials

    Consider \(D(s) = a_1 s + a_0\). The Routh array is given by

    \(s^1\) \(a_{1}\)
    \(s^{0}\) \(a_{0}\)

    The Routh-Hurwitz criterion states that if \(a_0\) and \(a_1\) are non-zero and have the same sign, then all roots of \(D(s)\) are in the OLHP. It is easy to verify that this is true as the root of \(D(s)\) is \(-a_0/a_1\).

  • Degree 2 polynomials

    Consider \(D(s) = a_2 s^2 + a_1 s + a_0\). The Routh array is given by

    \(s^2\) \(a_{2}\) \(a_{0}\)
    \(s^1\) \(a_{1}\)
    \(s^{0}\) \(a_{0}\)

    The Routh-Hurwitz criterion states that if \(a_0\), \(a_1\), and \(a_2\) are non-zero and have the same sign, then all roots of \(D(s)\) are in the OLHP. It is easy to verify that this is true as the roots of \(D(s)\) are given by \[ \frac{-a_1 \pm \sqrt{a_1^2 - 4 a_0 a_2}}{2a_2} \] which are either complex valued with negative real part, or negative real valued.

But the above line of argument is going to be difficult to generalize because we don’t have closed form formulas for the roots of polynomials. So, let’s look at the degree 2 polynomial differently. Define \[ P_2(s) = a_2 s^2 + a_0, \quad P_1(s) = a_1 s, \quad\text{and}\quad P_0(s) = a_0 \] to be the polynomials corresponding to each row of the Routh array.

Simple algebra shows that \[ P_2(s) = \frac{a_2 s}{a_1} P_1(s) + P_0(s), \] that is, the \(P_0(s)\) polynomial is the remainder when \(P_2(s)\) is divided by \(P_1(s)\).

For ease of notation, let \(q_2(s)\) denote \(a_2 s/a_1\) and define \(Q_2(s) = P_2(s) + P_1(s)\) and \(Q_1(s) = P_1(s) + P_0(s)\). The key property that implies the Routh-Hurwitz criteria is that the polynomials \(Q_2(s)\) and \((1 + q_2(s))Q_1(s)\) have the same number of roots in the OLHP and ORHP. Moreover, they have identical roots on the \(jω\)-axis.

For the quadratic case, the key property is easy to verify via the formula for the roots. For the general case, see this paper for two proofs of the key property, based on continuity properties of polynomials and on the Nyquist stability criterion.
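A numeric spot check of the key property for one quadratic is sketched below; the coefficients \(a_2 = 1\), \(a_1 = 2\), \(a_0 = 3\) are my own choice, so \(Q_2(s) = s^2 + 2s + 3\) and \((1 + q_2(s)) Q_1(s) = (1 + s/2)(2s + 3)\).

using Polynomials

Q2  = Polynomial([3, 2, 1], :s)                                # s^2 + 2s + 3
Q1q = Polynomial([1.0, 0.5], :s) * Polynomial([3.0, 2.0], :s)  # (1 + s/2)(2s + 3)
println(roots(Q2))    # complex pair with negative real part: 2 roots in the OLHP, 0 in the ORHP
println(roots(Q1q))   # -2.0 and -1.5: also 2 roots in the OLHP, 0 in the ORHP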

7.4 Special cases: Zero in the first column

We cannot follow the usual method to construct the Routh array if there is a zero in the first column. For example, consider \[ D(s) = s^5 + 2s^4 + 3s^3 + 6s^2 + 5s + 3. \]

\(s^5\) \(1\) \(3\) \(5\)
\(s^4\) \(2\) \(6\) \(3\)
\(s^3\) \(0\) \(7/2\)

If there is a zero in the first column but the entire row is not zero, we proceed as follows:

  • Replace the zero in the first column by an \(ε\) and continue to construct the Routh array as a function of \(ε\).
  • Count the number of sign changes as \(ε \to 0^{+}\) (\(ε\) goes to zero from above). Let this number be \(k_{+}\).
  • Count the number of sign changes as \(ε \to 0^{-}\) (\(ε\) goes to zero from below). Let this number be \(k_{-}\).

In all cases, we will have \(k_{+} = k_{-}\). Therefore, we have

  • No. of roots in the ORHP = \(k_{+} = k_{-}\)
  • No. of roots on the \(j ω\)-axis = 0.
  • No. of roots in the OLHP = \(\text{degree of polynomial} - k_{+}\) = \(\text{degree of polynomial} - k_{-}\).

Therefore, in practice, we may only consider the limit \(ε \to 0^{+}\). See below for why some textbooks recommend taking both limits.

Example 7.5 Find the location of the poles of a TF with denominator given by \[ D(s) = s^5 + 2s^4 + 3s^3 + 6s^2 + 5s + 3. \]

We first compute the Routh Array

$s^{5}$ $1$ $3$ $5$
$s^{4}$ $2$ $6$ $3$
$s^{3}$ $\displaystyle -\frac{\DET{ 1 & 3 \\ 2 & 6}}{2} = \cancel{0} \epsilon$ $\displaystyle -\frac{\DET{ 1 & 5 \\ 2 & 3}}{2} = \frac{7}{2}$
$s^{2}$ $\displaystyle -\frac{\DET{ 2 & 6 \\ \epsilon & \frac{7}{2}}}{\epsilon} = \frac{-7 + 6 \epsilon}{\epsilon}$ $\displaystyle -\frac{\DET{ 2 & 3 \\ \epsilon & 0}}{\epsilon} = 3$
$s^{1}$ $\displaystyle -\frac{\DET{ \epsilon & \frac{7}{2} \\ \frac{-7 + 6 \epsilon}{\epsilon} & 3}}{\frac{-7 + 6 \epsilon}{\epsilon}} = \frac{\frac{-49}{2} + 21 \epsilon - 3 \epsilon^{2}}{-7 + 6 \epsilon}$
$s^{0}$ $\displaystyle -\frac{\DET{ \frac{-7 + 6 \epsilon}{\epsilon} & 3 \\ \frac{\frac{-49}{2} + 21 \epsilon - 3 \epsilon^{2}}{-7 + 6 \epsilon} & 0}}{\frac{\frac{-49}{2} + 21 \epsilon - 3 \epsilon^{2}}{-7 + 6 \epsilon}} = 3$

We now look at the signs of the terms in the first column:

Term $ε \to 0^{+}$ $ε \to 0^{-}$
$s^{5}$ $+$ $+$
$s^{4}$ $+$ $+$
$s^{3}$ $+$ $-$
$s^{2}$ $-$ $+$
$s^{1}$ $+$ $+$
$s^{0}$ $+$ $+$

In both cases, we have two sign changes in the first column. Therefore, \(k_{+} = k_{-} = 2\). Hence, we have

  • No. of roots in the ORHP = 2
  • No. of roots on the \(j ω\)-axis = 0
  • No. of roots in the OLHP = \(5 - 2 = 3\)

We can verify this by factorizing \(D(s)\), which gives

D(s) = 3 + 5*s + 6*s^2 + 3*s^3 + 2*s^4 + s^5
5-element Vector{ComplexF64}:
 -1.6680888389741944 + 0.0im
 -0.5088331416337463 - 0.7019951317695377im
 -0.5088331416337463 + 0.7019951317695377im
 0.34287756112084433 - 1.5082901611666297im
 0.34287756112084433 + 1.5082901611666297im
Why do we replace zero with epsilon?

When we get a zero in the first column (but the rest of the row is not zero), we cannot proceed with the construction of the Routh array. The intuition behind replacing the zero with epsilon is as follows.

  • Roots of a polynomial are a continuous function of the coefficients. So, if we make an infinitesimally small change in the coefficients, the roots will move by an infinitesimally small amount (and therefore not change location from OLHP to ORHP or vice-versa).

  • An infinitesimally small perturbation of the coefficients will change the entries of the Routh array by an infinitesimally small amount. This will change the zero in the first column to an infinitesimally small positive or negative value.

  • We can think of replacing zero by epsilon as the result of such an infinitesimally small perturbation.
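This intuition can be illustrated numerically (the perturbation size below is my own choice, and the snippet assumes the routh_array sketch from Section 7.1): nudging one coefficient of \(D(s) = s^5 + 2s^4 + 3s^3 + 6s^2 + 5s + 3\) turns the zero into a tiny nonzero entry, and the first column then shows two sign changes, matching \(k_{+} = k_{-} = 2\) above.

R = routh_array([1, 2, 3 + 1e-8, 6, 5, 3])   # slightly perturbed s^3 coefficient
col = R[:, 1]
println(col)                                  # a tiny positive entry replaces the zero
println(count(i -> sign(col[i]) != sign(col[i+1]), 1:length(col)-1))   # 2 sign changes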

Why do textbooks do it differently?

Most textbooks explain the procedure as follows. Compute \(k_{+}\) and \(k_{-}\) as above. Then, we have

  • No. of roots in the ORHP = \(\min\{k_{+}, k_{-}\}\)
  • No. of roots on the \(j ω\)-axis = \(|k_{+} - k_{-}|\).
  • No. of roots in the OLHP = \(\text{degree of polynomial} - \max\{k_{+}, k_{-}\}\).

A typical example for using such a procedure is the following:

Find the location of the poles of a TF with denominator given by \[ D(s) = s^3 + 2s^2 + s + 2. \]

We first compute the Routh Array

$s^{3}$ $1$ $1$
$s^{2}$ $\cancel{2} 1$ $\cancel{2} 1$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & 1 \\ 1 & 1}}{1} = \cancel{0} \epsilon$
$s^{0}$ $\displaystyle -\frac{\DET{ 1 & 1 \\ \epsilon & 0}}{\epsilon} = 1$

We now look at the signs of the terms in the first column:

Term $ε \to 0^{+}$ $ε \to 0^{-}$
$s^{3}$ $+$ $+$
$s^{2}$ $+$ $+$
$s^{1}$ $+$ $-$
$s^{0}$ $+$ $+$

There are no sign changes when \(ε \to 0^{+}\). Thus, \(k_{+} = 0\). But, there are two sign changes when \(ε \to 0^{-}\). Thus, \(k_{-} = 2\). Thus, we have

  • No. of roots in the ORHP = \(\min\{k_{+}, k_{-}\} = 0\).
  • No. of roots on the \(j ω\)-axis = \(|k_{+} - k_{-}| = 2\).
  • No. of roots in the OLHP = \(\text{degree of polynomial} - \max\{k_{+}, k_{-}\} = 1\).

We can verify this by factorizing \(D(s)\), which gives \[ s^3 + 2s^2 + s + 2 = (s+2)(s^2 + 1). \] Thus, there is indeed one root in the OLHP and two roots on the \(j ω\)-axis.

7.4.0.1 So, why don’t I follow this procedure?

In the above example, we have roots on the \(j ω\) axis. So, when we perturb the coefficients slightly, the roots may go either to the OLHP or the ORHP, depending on the sign of the perturbation.

Personally, I don’t think that we need to use such an elaborate procedure to capture this case. The roots on the \(j ω\)-axis always come in complex conjugate pairs. So, when we have roots on the \(j ω\)-axis, a factor of the form \((s^2 + ω^2)\) divides \(D(s)\).

Since an even polynomial divides \(D(s)\), there will be a row of zeros in the Routh array (we explain this below). In fact, since this even polynomial is of degree two, we know that it is the \(s^1\) row that will be zero.

The main question is: when the \(s^1\) row is zero, should we treat it as a row with a zero in the first column (but the entire row not being zero) or as a row of all zeros?

Many textbooks treat it as a zero in the first column because the method to deal with that case is simpler. But doing so complicates the general procedure for dealing with a zero in the first column (we must separately take the limits as \(ε \to 0^{+}\) and \(ε \to 0^{-}\)).

My opinion is that we should treat a zero in the first column of the \(s^1\) row as a row of all zeros (because mathematically that is what it is). So, we need to follow the more elaborate method for dealing with a row of zeros. To me, this is the proper way to do it.

Once you have read how to deal with a row of zeros, we can resolve the above example correctly as follows:

$s^{3}$ $1$ $1$
$s^{2}$ $\cancel{2} 1$ $\cancel{2} 1$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & 1 \\ 1 & 1}}{1} = \cancel{0} 2$
$s^{0}$ $\displaystyle -\frac{\DET{ 1 & 1 \\ 2 & 0}}{2} = 1$

We now look at the signs of the terms in the first column:

Term Sign
$s^{3}$ $+$
$s^{2}$ $+$
$s^{1}$ $+$
$s^{0}$ $+$

Since we have a row of zeros, we split our analysis into two parts: the remainder polynomial (rows \(s^3\) to \(s^2\)), which has no sign changes, and the divisor polynomial (rows \(s^2\) to \(s^0\)), which has no sign changes either. So, we have

Polynomial   degree   no. of sign changes   ORHP   OLHP   \(j ω\)-axis
Remainder    1        0                     0      1      0
Divisor      2        0                     0      0      2
\(D(s)\)     3        n/a                   0      1      2

7.4.1 Reciprocal polynomial

If we get a zero in the first column, we can follow another method but this method is not guaranteed to work. To understand this, we need the notion of a reciprocal polynomial: for a polynomial \(D(s)\) of degree \(n\), the reciprocal polynomial is \(s^n D(\frac 1s)\). For instance, for Example 7.5, we have

\[\begin{align*} s^5 D\left(\frac 1s\right) &= s^5 \left[ \frac{1}{s^5} + \frac{2}{s^4} + \frac{3}{s^3} + \frac{6}{s^2} + \frac{5}{s} + 3 \right] \\ &= 3 s^5 + 5 s^4 + 6 s^3 + 3 s^2 + 2s + 1 \end{align*}\]

Why look at the reciprocal polynomial?

Observe that if \(D(s)\) factorizes as \[ D(s) = (s+p_1)(s+p_2) \cdots (s+p_n) \] then the factorization of \(s^nD(\frac 1s)\) is given by \[\begin{align*} s^n D\left(\frac 1s\right) &= s^n \left( \frac 1s + p_1 \right) \left( \frac 1s + p_2 \right) \cdots \left( \frac 1s + p_n \right) \\ &= (p_1 p_2 \cdots p_n) \left(s + \frac 1{p_1} \right) \left( s + \frac{1}{p_2}\right) \cdots \left(s + \frac{1}{p_n} \right). \end{align*}\] Thus, if \(p_i\) is a root of \(D(s)\), then \(\dfrac{1}{p_i}\) is a root of \(s^n D(\frac 1s)\).

The key point is that the real parts of \(p_i\) and \(1/p_i\) have the same sign. In particular, if \(p_i = a_i + j b_i\), then \[ \frac{1}{p_i} = \frac{a_i - j b_i}{|p_i|^2} = \frac{a_i}{|p_i|^2} - j\frac{b_i}{|p_i|^2}. \] So, the polynomial and its reciprocal have the same number of roots in the OLHP, in the ORHP, and on the \(j ω\)-axis.
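A sketch of checking this with Polynomials.jl for the polynomial of Example 7.5 (the reciprocal is obtained by simply reversing the coefficient vector):

using Polynomials

D    = Polynomial([3, 5, 6, 3, 2, 1], :s)      # D(s) = s^5 + 2s^4 + 3s^3 + 6s^2 + 5s + 3
Drec = Polynomial(reverse(coeffs(D)), :s)      # s^5 D(1/s): reverse the coefficients
println(count(r -> real(r) > 0, roots(D)))     # 2 roots of D in the ORHP
println(count(r -> real(r) > 0, roots(Drec)))  # the reciprocal polynomial also has 2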

Example 7.6 Solve Example 7.5 using the reciprocal polynomial.

We have already computed the reciprocal polynomial above. We now construct the Routh array

$s^{5}$ $3$ $6$ $2$
$s^{4}$ $5$ $3$ $1$
$s^{3}$ $\displaystyle -\frac{\DET{ 3 & 6 \\ 5 & 3}}{5} = \frac{21}{5}$ $\displaystyle -\frac{\DET{ 3 & 2 \\ 5 & 1}}{5} = \frac{7}{5}$
$s^{2}$ $\displaystyle -\frac{\DET{ 5 & 3 \\ \frac{21}{5} & \frac{7}{5}}}{\frac{21}{5}} = \frac{4}{3}$ $\displaystyle -\frac{\DET{ 5 & 1 \\ \frac{21}{5} & 0}}{\frac{21}{5}} = 1$
$s^{1}$ $\displaystyle -\frac{\DET{ \frac{21}{5} & \frac{7}{5} \\ \frac{4}{3} & 1}}{\frac{4}{3}} = -\frac{7}{4}$
$s^{0}$ $\displaystyle -\frac{\DET{ \frac{4}{3} & 1 \\ -\frac{7}{4} & 0}}{-\frac{7}{4}} = 1$

We now look at the signs of the terms in the first column:

Term Sign
$s^{5}$ $+$
$s^{4}$ $+$
$s^{3}$ $+$
$s^{2}$ $+$
$s^{1}$ $-$
$s^{0}$ $+$

Thus, we see that there are two sign changes in the first column. Therefore,

  • No. of roots in the ORHP = 2 (no. of sign changes)
  • No. of roots in the OLHP = 5 (degree of polynomial) \(-\) 2 (no. of sign changes) = 3.

This matches what we observed in the solution of Example 7.5.

7.5 Special cases: Entire row of zeros

Sometimes, we get an entire row of zero. For instance, consider \[ D(s) = s^5 + 7s^4 + 6 s^3 + 42 s^2 + 8 s + 56. \]

In this case, the construction of the Routh array stalls at a row of zeros.

\(s^5\) \(1\) \(6\) \(8\)
\(s^4\) \(\cancel{7} 1\) \(\cancel{42} 6\) \(\cancel{56} 8\)
\(s^3\) \(0\) \(0\) \(0\)
When do we get a row of zeros?

Suppose the row \(s^{m-1}\) is zero in the Routh array. This means that the polynomial \(P_m(s)\), formed from the entries of row \(s^m\) (the row just above the row of zeros), divides \(D(s)\).

For instance, in the above example, row \(s^3\) is zero. The above claim means that \[ P_4(s) = s^4 + 6 s^2 + 8 \] divides \(D(s)\). This is indeed the case and we can verify that \[ D(s) = \underbrace{(s+7)}_{\text{remainder poly.}} \underbrace{(s^4 + 6s^2 + 8)}_{\text{divisor poly.}}. \]

The rows that we have encountered before the row of all zeros correspond to the remainder polynomial. We interpret them the same way as in the normal case.

Observe that the divisor polynomial is an even polynomial. A key feature of an even polynomial is that it has an equal number of roots in the OLHP and the ORHP. The number of sign changes in the rows from \(s^m\) downward equals the number of roots of the divisor polynomial in the ORHP, so the same number of its roots lie in the OLHP. The remaining roots must be on the \(j ω\)-axis.

Roots of even polynomial

An even polynomial has the same number of roots in the ORHP and OLHP.

Consider an even polynomial

\[P^{2m}(s) = a_{2m} s^{2m} + a_{2m -2 } s^{2m -2} + \cdots + a_0.\]

We can write this as a polynomial in \(z = s^2\) as

\[ a_{2m} z^m + a_{2m -2} z^{m-1} + \cdots + a_0. \]

Let \(p_1, p_2, \dots, p_m\) be the roots of this polynomial. Then, the roots of \(P^{2m}\) are \(\pm \sqrt{p_1}, \pm \sqrt{p_2}, \dots, \pm \sqrt{p_m}\).

Now observe the following:

A. If \(p_i\) is real and positive, then \(+ \sqrt{p_i}\) is positive and \(- \sqrt{p_i}\) is negative. Thus, one root is in the ORHP and one is in the OLHP.

B. If \(p_i\) is real and negative, then \(\pm \sqrt{p_i} = \pm j \sqrt{|p_i|}\), which lie on the \(j ω\)-axis. Thus, there are no roots in the ORHP or OLHP.

C. If \(p_i\) is complex and say equal to \(a_i + j b_i\), then there must be another root \(a_i - j b_i\). Now let \(\pm (c_i + j d_i)\) denote the square roots of \(a_i + j b_i\). Then, the square roots of \(a_i - j b_i\) are \(\pm (c_i - j d_i)\). Thus, out of the four roots, two are in the ORHP and two are in the OLHP.

Thus, in all cases, no. of roots in the ORHP = no. of roots in the OLHP.
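A quick numerical check on an even polynomial of my own choosing, \((s^2 - 4)(s^2 + 1) = s^4 - 3s^2 - 4\), which has one root in each half plane and two on the \(j ω\)-axis:

using Polynomials

P = Polynomial([-4, 0, -3, 0, 1], :s)      # s^4 - 3s^2 - 4, ascending coefficients
r = roots(P)
println(count(x -> real(x) >  1e-9, r))    # 1 root in the ORHP (s = 2)
println(count(x -> real(x) < -1e-9, r))    # 1 root in the OLHP (s = -2): equal counts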

The rows that we have encountered before the row of all zeros correspond to the remainder polynomial, and we interpret them the same way as in the normal case. To continue the Routh array past the row of zeros, we replace the zero row with the coefficients of the derivative of the divisor polynomial. In this example, \(\frac{d}{ds} P_4(s) = 4 s^3 + 12 s\), so the \(s^3\) row becomes \(4, 12\), which we may scale down to \(1, 3\).

We first compute the Routh Array

$s^{5}$ $1$ $6$ $8$
$s^{4}$ $\cancel{7} 1$ $\cancel{42} 6$ $\cancel{56} 8$
$s^{3}$ $\displaystyle -\frac{\DET{ 1 & 6 \\ 1 & 6}}{1} = \cancel{0} \cancel{4} 1$ $\displaystyle -\frac{\DET{ 1 & 8 \\ 1 & 8}}{1} = \cancel{0} \cancel{12} 3$
$s^{2}$ $\displaystyle -\frac{\DET{ 1 & 6 \\ 1 & 3}}{1} = 3$ $\displaystyle -\frac{\DET{ 1 & 8 \\ 1 & 0}}{1} = 8$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & 3 \\ 3 & 8}}{3} = \frac{1}{3}$
$s^{0}$ $\displaystyle -\frac{\DET{ 3 & 8 \\ \frac{1}{3} & 0}}{\frac{1}{3}} = 8$

We now look at the signs of the terms in the first column:

Term Sign
$s^{5}$ $+$
$s^{4}$ $+$
$s^{3}$ $+$
$s^{2}$ $+$
$s^{1}$ $+$
$s^{0}$ $+$

Since we have a row of zeros, we split our analysis into two parts: the remainder polynomial (rows \(s^5\) to \(s^4\)), which has no sign changes, and the divisor polynomial (rows \(s^4\) to \(s^0\)), which has no sign changes either. So, we have

Polynomial   degree   no. of sign changes   ORHP   OLHP   \(j ω\)-axis
Remainder    1        0                     0      1      0
Divisor      4        0                     0      0      4
\(D(s)\)     5        n/a                   0      1      4

We can verify this by factorizing \(D(s)\), which gives (there are four roots on the \(j ω\)-axis, but we get small errors in the location of the roots due to finite numerical precision)

D(s) = 56 + 8*s + 42*s^2 + 6*s^3 + 7*s^4 + s^5
5-element Vector{ComplexF64}:
     -7.000000000000012 + 0.0im
  6.938893903907228e-17 - 1.414213562373094im
  6.938893903907228e-17 + 1.414213562373094im
 3.0531133177191805e-16 - 1.9999999999999993im
 3.0531133177191805e-16 + 1.9999999999999993im

Example 7.7 Consider \[ D(s) = (s^2 + 2)(s+1) = s^3 + s^2 + 2s + 2. \] Use Routh Hurwitz to find the location of the roots.

We write the Routh Array

$s^{3}$ $1$ $2$
$s^{2}$ $1$ $2$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & 2 \\ 1 & 2}}{1} = \cancel{0} 2$
$s^{0}$ $\displaystyle -\frac{\DET{ 1 & 2 \\ 2 & 0}}{2} = 2$

We now look at the signs of the terms in the first column:

Term Sign
$s^{3}$ $+$
$s^{2}$ $+$
$s^{1}$ $+$
$s^{0}$ $+$

Since we have a row of zeros, we split our analysis into two parts: the remainder polynomial (rows \(s^3\) to \(s^2\)), which has no sign changes, and the divisor polynomial (rows \(s^2\) to \(s^0\)), which has no sign changes either. So, we have

Polynomial   degree   no. of sign changes   ORHP   OLHP   \(j ω\)-axis
Remainder    1        0                     0      1      0
Divisor      2        0                     0      0      2
\(D(s)\)     3        n/a                   0      1      2

7.6 Stability in state space

So far, we have examined stability of the TF. But we can use the Routh-Hurwitz method to determine stability of state-space models as well. For example, consider a SSM \[ A = \MATRIX{0 & 0 & 1 \\ 1 & 0 & 1 \\ -10 & -5 & -2}, \quad B = \MATRIX{10 \\ 0 \\ 0}, \quad C = \MATRIX{1 & 0 & 0}. \] The denominator of the TF \(C(sI - A)^{-1}B\) is the characteristic polynomial of \(A\) given by \[\begin{align*} \det(sI - A) &= \DET{s & 0 & -1 \\ -1 & s & -1 \\ 10 & 5 & s + 2} \\ &= s \DET{s & -1 \\ 5 & s + 2} - \DET{-1 & s \\ 10 & 5} \\ &= s(s^2 + 2s + 5) + (5 + 10 s) \\ &= s^3 + 2 s^2 + 15s + 5 \end{align*}\]

We can determine if the SSM is stable by checking the stability of the characteristic polynomial of \(A\). In this instance, we have

$s^{3}$ $1$ $15$
$s^{2}$ $2$ $5$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & 15 \\ 2 & 5}}{2} = \frac{25}{2}$
$s^{0}$ $\displaystyle -\frac{\DET{ 2 & 5 \\ \frac{25}{2} & 0}}{\frac{25}{2}} = 5$

We now look at the sign of the terms in the first column:

Term Sign
$s^{3}$ $+$
$s^{2}$ $+$
$s^{1}$ $+$
$s^{0}$ $+$

Since there are no sign changes, the system is stable!
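As a sanity check, we can also compute the eigenvalues of \(A\) directly (a sketch using the standard LinearAlgebra library); all of them should have negative real parts.

using LinearAlgebra

A = [0 0 1; 1 0 1; -10 -5 -2]
println(eigvals(A))   # eigenvalues are the roots of s^3 + 2s^2 + 15s + 5; all in the OLHP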

7.7 Stable design via Routh-Hurwitz

The Routh-Hurwitz criterion can be used to identify values of controller parameters for which a closed loop system is stable. We illustrate this via some examples.

Example 7.8 Consider the system shown below:

A proportional controller

Find the value of \(K\) for which the system is stable.

The closed loop transfer function is given by \[ T(s) = \frac{K G(s)}{1 + K G(s)} = \frac{K(s+1)}{s^3 + 5s^2 + (K-6)s + K}. \]

Therefore, the Routh array is given by

$s^{3}$ $1$ $-6 + K$
$s^{2}$ $5$ $K$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & -6 + K \\ 5 & K}}{5} = -6 + \frac{4}{5} K$
$s^{0}$ $\displaystyle -\frac{\DET{ 5 & K \\ -6 + \frac{4}{5} K & 0}}{-6 + \frac{4}{5} K} = K$

For stability, all terms in the first column must be positive. Thus, we have \[ \dfrac{4K}{5} - 6 > 0 \quad\text{and}\quad K > 0 \] which is equivalent to \[ K > 7.5 \]
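A numeric spot check with two sample gains (the values are my own choice), using Polynomials.jl on the closed-loop denominator:

using Polynomials

for K in (7.0, 8.0)                          # one gain below and one above 7.5
    den = Polynomial([K, K - 6, 5, 1], :s)   # s^3 + 5s^2 + (K-6)s + K, ascending order
    println("K = ", K, ": largest real part of the poles = ", maximum(real, roots(den)))
end
# K = 7 leaves a pole with positive real part; K = 8 places all poles in the OLHP.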

Example 7.9 Consider the system shown below:

A proportional-integral (PI) controller

Find the values of \(A\) and \(B\) for which the system is stable.

The closed loop transfer function is given by \[ T(s) = \frac{\left(A + \dfrac{B}{s}\right) G(s)}{1 + \left(A + \dfrac{B}{s}\right)G(s)} = \frac{As + B}{s^3 + 3s^2 + (A + 2)s + B}. \]

Therefore, the Routh array is given by

$s^{3}$ $1$ $2 + A$
$s^{2}$ $3$ $B$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & 2 + A \\ 3 & B}}{3} = 2 + A - \frac{1}{3} B$
$s^{0}$ $\displaystyle -\frac{\DET{ 3 & B \\ 2 + A - \frac{1}{3} B & 0}}{2 + A - \frac{1}{3} B} = B$

For stability, all terms in the first column must be positive. Thus, we have \[ 2 + A - \frac{1}{3}B > 0 \quad\text{and}\quad B > 0. \]
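A spot check with two sample pairs \((A, B)\) of my own choice, one satisfying the conditions and one violating them:

using Polynomials

for (A, B) in ((1.0, 3.0), (1.0, 12.0))
    den = Polynomial([B, A + 2, 3, 1], :s)   # s^3 + 3s^2 + (A+2)s + B, ascending order
    println("(A, B) = ", (A, B), ": largest real part of the poles = ", maximum(real, roots(den)))
end
# (1, 3) satisfies 2 + A - B/3 > 0 and B > 0 (stable); (1, 12) violates the first condition (unstable).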

Example 7.10 Consider the system shown below:

Another proportional controller

Find the value of \(K\) for which the system is stable.

The closed loop transfer function is given by \[ T(s) = \frac{K G(s)}{1 + K G(s)} = \frac{6K}{s^3 + 6s^2 + 11s + 6(1+K)} \]

Therefore, the Routh array is given by

$s^{3}$ $1$ $11$
$s^{2}$ $6$ $6 + 6 K$
$s^{1}$ $\displaystyle -\frac{\DET{ 1 & 11 \\ 6 & 6 + 6 K}}{6} = 10 - K$
$s^{0}$ $\displaystyle -\frac{\DET{ 6 & 6 + 6 K \\ 10 - K & 0}}{10 - K} = 6 + 6 K$

For stability, all terms in the first column must be positive. Thus, we have \[ 10 - K > 0 \quad\text{and}\quad 6(1+K) > 0 \] which is equivalent to \[ -1 < K < 10 \]
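A final spot check with sample gains of my own choice, one inside and one outside the interval \((-1, 10)\):

using Polynomials

for K in (5.0, 11.0)
    den = Polynomial([6 * (1 + K), 11, 6, 1], :s)   # s^3 + 6s^2 + 11s + 6(1+K), ascending order
    println("K = ", K, ": largest real part of the poles = ", maximum(real, roots(den)))
end
# K = 5 lies in (-1, 10) and all poles are in the OLHP; K = 11 produces poles in the ORHP.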