I recently heard from one of my professors of embedded systems about a particular estimation algorithm called the Kalman filter. He told me he had been contacted by a company interested in using it to measure the weight of products running on a conveyor belt.
Well, I was not impressed until, rummaging through the literature, I found some interesting applications…
The Kalman filter is named after its inventor, the electrical engineer and mathematician Rudolf Emil Kalman, who published this technique in 1960. During a visit by Kalman to the NASA Ames Research Center, Stanley F. Schmidt saw the possibility of applying Kalman's idea to the Apollo program, which was under development in those years. The Kalman filter was incorporated into the navigation computer of each spacecraft of the Apollo program.
After this first application, the Kalman filter played a vital role in the military, in the navigation systems of the nuclear-tipped ballistic missiles of U.S. Navy submarines, and in the guidance and navigation systems of the Tomahawk cruise missile and the Air-Launched Cruise Missile.
In the civil field it is still used for the flight control and navigation of the NASA Space Shuttle and for the attitude control and navigation of the ISS space station.
For this work, Rudolf Emil Kalman was awarded the 2008 National Medal of Science by U.S. President Barack Obama.
Wow, I thought, I did not want to miss this! So after some research, here’s a little discussion about it.
The Kalman filter addresses the general problem of estimating the state x ∈ ℝⁿ of a discrete-time process governed by the stochastic difference equation:

xₖ₊₁ = Aₖ xₖ + Bₖ uₖ + wₖ
The estimation is performed through successive measurements zₖ such that:

zₖ = Hₖ xₖ + vₖ
where wₖ and vₖ represent the process and measurement noises, assumed to be uncorrelated, white, and normally distributed with zero mean and covariances Q and R respectively:
p( w ) ∼ N( 0, Q )
p( v ) ∼ N( 0, R )
The matrices Aₖ, Bₖ, and Hₖ describe, respectively, the dependence of the current state on the previous state, the effect of the inputs uₖ, and the mapping from state to measurement.
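To make the model concrete, here is a minimal sketch of a scalar instance of the process above, with illustrative constant values for A, B, H, Q, and R (none of these come from a real system):

```cpp
#include <cmath>
#include <random>

// Hypothetical scalar instance of the process model: constant A, B, H
// and Gaussian noises w ~ N(0, Q), v ~ N(0, R).
struct Model {
    double A = 1.0, B = 0.5, H = 1.0;
    double Q = 0.01, R = 0.1;  // process / measurement noise variances
};

// Advance the true state: x_{k+1} = A x_k + B u_k + w_k
double step_state(const Model& m, double x, double u, std::mt19937& rng) {
    std::normal_distribution<double> w(0.0, std::sqrt(m.Q));
    return m.A * x + m.B * u + w(rng);
}

// Observe the state: z_k = H x_k + v_k
double measure(const Model& m, double x, std::mt19937& rng) {
    std::normal_distribution<double> v(0.0, std::sqrt(m.R));
    return m.H * x + v(rng);
}
```

Calling `step_state` and `measure` in a loop produces a noisy trajectory and its noisy measurements, which is exactly the kind of data the filter below is designed to clean up.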
Let x̂ₖ⁻ and x̂ₖ denote the a priori and a posteriori estimates of the true state xₖ. We define the a priori and a posteriori estimation errors as:

eₖ⁻ = xₖ − x̂ₖ⁻
eₖ = xₖ − x̂ₖ
From these we compute the a priori and a posteriori error covariances:

Pₖ⁻ = E[eₖ⁻ eₖ⁻ᵀ]
Pₖ = E[eₖ eₖᵀ]
We are looking for an equation that computes the a posteriori estimate x̂ₖ as a linear combination of the a priori estimate x̂ₖ⁻ and the difference between the measurement zₖ and the predicted measurement Hₖ x̂ₖ⁻, weighted by a coefficient K:

x̂ₖ = x̂ₖ⁻ + K(zₖ − Hₖ x̂ₖ⁻)
The probabilistic origin of this formula lies in Bayes' rule, that is, in the a priori estimate of the state x̂ₖ⁻ being conditioned on the previous measurements zₖ.
The coefficient K is called the Kalman gain, and it is chosen so as to minimize the a posteriori error covariance.
A possible form that minimizes this covariance is:

Kₖ = Pₖ⁻ Hₖᵀ (Hₖ Pₖ⁻ Hₖᵀ + R)⁻¹
Expressing the a posteriori covariance:

Pₖ = Pₖ⁻ (I + Hₖᵀ R⁻¹ Hₖ Pₖ⁻)⁻¹
Through some simple matrix operations (taking Hₖ = 1 to simplify) the Kalman gain can be rewritten as:

Kₖ = Pₖ R⁻¹
From this expression it is evident that the gain is directly proportional to the estimate covariance and inversely proportional to the measurement noise variance. You can interpret this by saying that the larger the error made in the previous estimate, the stronger the correction (gain) the filter applies to the current estimate; and the less the measurements deviate from the true value, the more accurate the estimates that can be obtained.
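The equivalence of the two gain expressions can be checked numerically in the scalar case (Hₖ = 1). The values below are purely illustrative, not taken from any particular application:

```cpp
#include <cmath>

// Scalar check (H_k = 1) of the two gain expressions.

// K_k = P_k^- H^T (H P_k^- H^T + R)^-1
double gain_standard(double P_prior, double R) {
    return P_prior / (P_prior + R);
}

// First P_k = P_k^- (I + H^T R^-1 H P_k^-)^-1, then K_k = P_k R^-1
double gain_simplified(double P_prior, double R) {
    double P = P_prior / (1.0 + P_prior / R);
    return P / R;
}
```

For example, with P⁻ = 0.8 and R = 0.2, both forms give K = 0.8: a large prior uncertainty relative to the measurement noise makes the filter trust the measurement heavily.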
From the probabilistic point of view it is interesting to highlight that the filter maintains the first two moments of the state distribution:

E[xₖ] = x̂ₖ
E[(xₖ − x̂ₖ)(xₖ − x̂ₖ)ᵀ] = Pₖ
The a posteriori state estimate reflects the average, while the a posteriori error covariance reflects the variance of the distribution of states.
Now that we have an idea of how it works, we can outline the steps to follow to implement the algorithm.
The operations are divided into two groups: Time Update (Forecast) and Measurement Update (Correction).
The Time Update operations predict the current state and error covariance in order to obtain an a priori estimate for the next step.
The Measurement Update operations are responsible for the feedback: they combine a new measurement with the a priori estimate to obtain an improved a posteriori estimate.
The variances Q and R reflect our knowledge of the process being analyzed and of the measuring system. The tuning of these parameters can affect the efficiency of the filter and is typically performed with other Kalman filters or otherwise outside of the algorithm that estimates the state of the process. It can however be shown that, once Q and R are appropriately chosen and held constant, the covariance estimate P and the Kalman gain K stabilize within a certain number of cycles and then remain constant.
Here is the block diagram that I drew for my implementation.
I’m currently working on an FPGA implementation of this algorithm on an Altera DE2 board, to see what it can do with good hardware. Stay tuned for the Verilog and C code!
As always, below you can find the carefully discussed code of a possible implementation of this algorithm in Matlab and in C++, some comparisons with standard averaging algorithms, and further below an odd but clever application 🙂 Hope you’ve enjoyed it!!
- Using a Kalman Filter to Predict Ticket Prices (chairnerd.seatgeek.com)