Least Squares Estimation of Stationary Linear Systems

Least squares estimators are typically used to determine a constant on the basis of several noisy measurements of that constant. Mathematically I think they’re pretty beautiful, and the derivation of them leads us to the somewhat ubiquitous Moore-Penrose pseudo inverse.

Let’s just jump right in!

Suppose we want to determine a “best” estimate of some constant n-element state vector, x, given a k-element noisy measurement vector, y. Typically estimates of something are denoted as that “something” with a hat over it, so we will call our estimated vector \hat{x}. Let’s assume that the process governing this system is linear (otherwise our lives have just gotten a whole lot harder), such that each element of the measurement vector y is a linear combination of the elements of the state vector, x, with some added noise on top.

\begin{aligned} 1) \displaystyle \qquad y_1 &= H_{11}x_1 +\dotsb +H_{1n}x_n + v_1 \\  &\vdots \\ y_k &= H_{k1}x_1 +\dotsb +H_{kn}x_n + v_k  \end{aligned} 

This can be written in matrix form simply as:

2) \displaystyle \qquad y = Hx + v
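
If you want something concrete to poke at, here is a minimal numpy sketch of the measurement model in equation 2. The dimensions, true state, and noise level are made-up illustrative choices, nothing more.

```python
import numpy as np

# Minimal sketch of the measurement model y = Hx + v (equation 2).
# n = 2 states, k = 5 measurements, and the noise level are illustrative choices.
rng = np.random.default_rng(0)

n, k = 2, 5
x = np.array([1.5, -0.7])            # the true (unknown) state vector
H = rng.standard_normal((k, n))      # output matrix mapping states to measurements
v = 0.1 * rng.standard_normal(k)     # additive measurement noise

y = H @ x + v                        # the noisy measurement vector
```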

Now, let’s introduce something called the measurement residual, which is simply the difference between the noisy measurement (actual system output) and the estimated output.

3) \displaystyle \qquad \epsilon_y = y - H \hat{x}

So, as we can see from equation 3, the smaller the measurement residual becomes, in some sense, the better our estimate of the state is. Simply by inspection it makes sense that if our measurement residual becomes zero we have done a pretty good job estimating x.

Since we are dealing with least squares estimation, the above “some sense” is in a least squares sense, so we will construct a cost function, J, that is the sum of squares of our measurement residuals as follows:

\begin{aligned}  4) \displaystyle \qquad J &= \epsilon_{y1}^2 + \dotsb + \epsilon_{yk}^2\\  &= \epsilon_y^T \epsilon_y   \end{aligned}  

Now, let’s substitute equation 3 into equation 4 so we obtain the cost function in terms of our measurement vector, y, the output matrix, H, and estimated state vector \hat{x} .

\begin{aligned}  5) \displaystyle \qquad J &= \left (y - H \hat{x} \right )^T \left (y - H \hat{x} \right )\\  &= y^Ty - \hat{x}^TH^Ty - y^TH\hat{x} + \hat{x}^TH^TH\hat{x}   \end{aligned}  

Cool, now that we have our cost function constructed, the next step is to minimize said function with respect to \hat{x}. Remembering back to calc 3, we know that in order to minimize a multivariable function with respect to one of its variables we must compute the partial derivative and set it equal to zero.

\begin{aligned}  6) \displaystyle \qquad \frac{\partial J}{\partial \hat{x}} &= -y^TH -y^TH + 2 \hat{x}^TH^TH\\  &= 0  \end{aligned}  
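
If the matrix-calculus step looks unfamiliar, equation 6 follows from two standard identities (with the gradient written as a row vector), plus the fact that the two cross terms in equation 5 are scalars and therefore equal to each other’s transpose:

\begin{aligned}  \displaystyle \qquad \frac{\partial}{\partial \hat{x}}\left ( y^TH\hat{x} \right ) &= y^TH \\  \frac{\partial}{\partial \hat{x}}\left ( \hat{x}^TH^TH\hat{x} \right ) &= 2\hat{x}^TH^TH \\  \hat{x}^TH^Ty &= \left ( \hat{x}^TH^Ty \right )^T = y^TH\hat{x}  \end{aligned}  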

If we gently massage equation 6 so as to solve for our state estimate we obtain:

\begin{aligned}  7) \displaystyle \qquad H^Ty &= H^TH\hat{x}\\  \hat{x} &= \left (H^TH \right ) ^{-1}H^Ty \\  &= H^Ly  \end{aligned}  

Where H^L is the left pseudo inverse. The pseudo inverse exists if k \geq n and H is full rank. This just means that we need at least as many measurements as state variables, and the columns of H must be linearly independent.
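
In practice you rarely form (H^TH)^{-1} explicitly. Here is a quick numpy sketch, with the same sort of made-up H and y as before, showing that the closed-form expression from equation 7, np.linalg.lstsq, and np.linalg.pinv all land on the same estimate; the latter two are the numerically better-behaved options.

```python
import numpy as np

# Sketch of equation 7: x_hat = (H^T H)^{-1} H^T y, the left pseudo inverse applied to y.
# H, the true state, and the noise level are illustrative; any full-rank H with k >= n works.
rng = np.random.default_rng(1)

H = rng.standard_normal((5, 2))                      # k = 5 measurements, n = 2 states
x_true = np.array([1.5, -0.7])
y = H @ x_true + 0.1 * rng.standard_normal(5)

x_hat_direct = np.linalg.inv(H.T @ H) @ H.T @ y      # literal equation 7
x_hat_lstsq, *_ = np.linalg.lstsq(H, y, rcond=None)  # least squares solver, avoids forming H^T H
x_hat_pinv = np.linalg.pinv(H) @ y                   # Moore-Penrose pseudo inverse

print(x_hat_direct, x_hat_lstsq, x_hat_pinv)         # all three agree up to round-off
```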

Example Time!

The example that I always see given for this simple problem is that of estimating the resistance of an unmarked resistor based on k measurements from a multimeter. In this scenario our state variable x is the resistance of the resistor, and y is the vector of noisy measurements from the multimeter.

\begin{aligned}  8) \displaystyle \qquad y_1 &= x + v_1\\  &\vdots \\  y_k &= x + v_k \\  \end{aligned}  

which again just reduces to the simple matrix equation

\begin{aligned}  \displaystyle \qquad y = x + v\\  \end{aligned}  

What the hell, our output matrix isn’t there! Well, it is; it’s just a k \times 1 matrix filled with 1’s. Now, if we just plug this into equation 7 from above, our optimal (in the sense of least squares) estimate of the state x is simply

\begin{aligned}  9) \displaystyle \qquad \hat{x} &= \left (H^TH \right )^{-1}H^Ty\\  &= \frac{1}{k}\left (y_1 + \dotsb + y_k \right)  \end{aligned}  

In this almost stupidly simple example, the least squares estimate reduces to exactly what a normal person would have done anyway: the average of the noisy measurements.
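
As a sanity check, here is the resistor example in numpy. The true resistance and noise level are invented numbers; the point is just that with H a column of ones, equation 9 literally returns the sample mean.

```python
import numpy as np

# Resistor example (equations 8-9): H is a k x 1 column of ones,
# so the least squares estimate collapses to the average of the measurements.
# The true resistance and noise level are made-up numbers for illustration.
rng = np.random.default_rng(2)

r_true = 220.0                                  # the "unmarked" resistance, in ohms
k = 10
y = r_true + 5.0 * rng.standard_normal(k)       # k noisy multimeter readings

H = np.ones((k, 1))
r_hat = (np.linalg.inv(H.T @ H) @ H.T @ y).item()

print(r_hat, y.mean())                          # identical: the LS estimate is just the average
```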
