
An FPGA implementation of Kalman Filter on Altera DE2

As promised, here is the complete project for a hardware implementation of the Kalman estimation algorithm on the Altera DE2 board.

This time I made a video with a small presentation and demonstration of the project. Enjoy!

For those who are willing to dig into it and can chew a bit of Italian, here is the link to the report associated with the project:
Implementazione del filtro di Kalman su FPGA (Kalman filter implementation on FPGA)

Check below for the link to the complete project materials.
Download


Kalman Filter

I recently heard, from one of my embedded systems professors, about a particular estimation algorithm called the Kalman filter. He told me that he had been contacted by a company interested in using it to measure the weight of products running on a conveyor belt.
Well, I was not impressed until I found, rummaging through the literature, some interesting applications…

The Kalman filter is named after its inventor, the electrical engineer and mathematician Rudolf Emil Kalman, who published this technique in 1960. During Kalman's visit to the NASA Ames Research Center, Schmidt saw the possibility of applying Kalman's idea to the Apollo program, which was under development in those years. The Kalman filter was incorporated into the navigation computer of every spacecraft of the Apollo program.
After this first application, the Kalman filter played a vital role in the military field, in the navigation systems of nuclear-tipped ballistic missiles and of U.S. Navy ballistic missile submarines, and in the guidance and navigation systems of the Tomahawk and Air-Launched Cruise Missiles.
In the civil field it is still used for the guidance and navigation of the NASA Space Shuttle and for the attitude control and navigation of the ISS space station.
For these contributions, Rudolf Emil Kalman was awarded the 2008 National Medal of Science by U.S. President Barack Obama.

Wow, I thought, I did not want to miss this! So, after some research, here's a little discussion about it:

The Kalman filter addresses the general problem of estimating the state x ∈ R^n of a discrete-time process governed by the stochastic difference equation

x_{k+1} = A_k x_k + B_k u_k + w_k

The estimate is refined through successive measurements z such that

z_k = H_k x_k + v_k

where w_k and v_k represent the process and measurement noise, assumed to be uncorrelated, white, and normally distributed with zero mean and covariance Q and R respectively:

p(w) \sim N(0, Q)

p(v) \sim N(0, R)

The matrices A_k, B_k and H_k describe, respectively, the dependence of the current state on the previous state, the dependence on the inputs u_k, and the relation between the measurement and the state.
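As a purely illustrative example (not taken from the project), for an object tracked through its position p and velocity, sampled every \Delta t seconds, with only the position being measured and no control input, these matrices could look like:

A_k = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}, \quad H_k = \begin{pmatrix} 1 & 0 \end{pmatrix}, \quad x_k = \begin{pmatrix} p_k \\ \dot{p}_k \end{pmatrix}, \quad B_k = 0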
Let \hat{x}_k^- and \hat{x}_k denote the a priori and a posteriori estimates of the true state x_k, that is, the estimates computed before and after the measurement z_k is taken into account. We can then define the a priori and a posteriori estimation errors as

e_k^- = x_k - \hat{x}_k^-

e_k = x_k - \hat{x}_k

and the corresponding a priori and a posteriori error covariances

P_k^- = E[ e_k^- (e_k^-)^T ]

P_k = E[ e_k e_k^T ]

We look for an equation that computes the a posteriori estimate \hat{x}_k as a linear combination of the a priori estimate \hat{x}_k^- and the difference between the measurement z_k and the predicted measurement H_k \hat{x}_k^-, weighted by a coefficient K:

\hat{x}_k = \hat{x}_k^- + K ( z_k - H_k \hat{x}_k^- )
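As a quick numerical illustration (made-up numbers, scalar case with H_k = 1): if the a priori estimate is \hat{x}_k^- = 2.0, the measurement is z_k = 2.5 and the gain is K = 0.4, then

\hat{x}_k = 2.0 + 0.4 (2.5 - 2.0) = 2.2

so the a posteriori estimate is pulled from the prediction towards the measurement by an amount set by K.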

The origins of the probabilistic formulation lie in Bayes' rule, that is, in the a priori estimate of the state \hat{x}_k^- conditioned on the previous measurements z_k.
The coefficient K is called the Kalman gain, and is chosen so as to minimize the covariance of a posteriori error.
A possible form that minimizes this covariance is

K_k = P_k^- H_k^T ( H_k P_k^- H_k^T + R )^{-1}

The a posteriori covariance can then be expressed as

P_k = P_k^- ( I + H_k^T R^{-1} H_k P_k^- )^{-1}

Through some simple matrix manipulations, one then obtains the following expression for the Kalman gain (to simplify, take H_k = 1):

K_k = P_k R^{-1}
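As a quick check of that simplification in the scalar case (P_k^-, R and K_k are then scalars and H_k = 1), substituting into the two expressions above gives

K_k = P_k^- ( P_k^- + R )^{-1}

P_k = P_k^- ( 1 + R^{-1} P_k^- )^{-1} = P_k^- R ( P_k^- + R )^{-1}

so that P_k R^{-1} = P_k^- ( P_k^- + R )^{-1} = K_k.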

From this expression it is evident that the gain is directly proportional to the estimation error covariance and inversely proportional to the measurement noise variance. You can interpret this by saying that the larger the error made in the previous estimate, the larger the correction (gain) applied by the filter to the current estimate; and the less the measurements deviate from the real value, the more accurate the estimates that can be obtained.
From the probabilistic point of view, it is interesting to highlight that the filter maintains the first two moments of the state distribution:

E[ x_k ] = \hat{x}_k

E[ (x_k - \hat{x}_k)(x_k - \hat{x}_k)^T ] = P_k

The a posteriori state estimate reflects the mean, while the a posteriori error covariance reflects the variance of the distribution of the state.

Now that we have an idea of how it works, we can outline the steps to follow to implement the algorithm:

[Figure: Kalman filter flowchart]

The operations are divided into two groups: Time Update (prediction) and Measurement Update (correction).
The Time Update operations project the current state and error covariance forward in time in order to obtain the a priori estimate for the next step.
The Measurement Update operations are responsible for the feedback: they incorporate a new measurement into the a priori estimate to obtain an improved a posteriori estimate.

The covariances Q and R reflect the knowledge we have of the process being analyzed and of the measuring system. The tuning of these parameters affects the efficiency of the filter and is typically performed with other Kalman filters or in any case outside of the algorithm that estimates the state of the process. It can however be proven that, once Q and R are appropriately chosen and kept constant, the estimated covariance P and the Kalman gain K stabilize after a certain number of cycles and then remain constant.
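To make the two phases concrete, here is a minimal MATLAB sketch of the predict/correct loop for the scalar case (A_k = 1, H_k = 1, no input u); the values of Q, R and the measurements z are made up for illustration and this is not the project code:

% Minimal scalar Kalman filter sketch (A = 1, H = 1, no input u).
% Q, R, the initial conditions and the measurements z are illustrative.
Q = 1e-5;                          % process noise covariance (assumed)
R = 1e-2;                          % measurement noise covariance (assumed)
z = 0.5 + sqrt(R)*randn(100, 1);   % noisy measurements of a constant value

x_hat = 0;                         % initial a posteriori state estimate
P     = 1;                         % initial a posteriori error covariance

for k = 1:length(z)
    % Time Update (prediction)
    x_hat_prior = x_hat;           % x_k^- = A*x_{k-1}, with A = 1
    P_prior     = P + Q;           % P_k^- = A*P_{k-1}*A' + Q

    % Measurement Update (correction)
    K     = P_prior / (P_prior + R);              % Kalman gain
    x_hat = x_hat_prior + K*(z(k) - x_hat_prior); % a posteriori estimate
    P     = (1 - K)*P_prior;                      % a posteriori covariance
end

After a few iterations K and P settle to constant values, which is exactly the behaviour described above.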

[Figure: Kalman filter block diagram]

Here is the block diagram that I drew for my implementation.

I’m currently working on an FPGA implementation of this algorithm on the Altera DE2 board, to see what it can do with good hardware. Stay tuned for the Verilog and C code!

As always, below you can find carefully commented code for a possible implementation of this algorithm in MATLAB and in C++, some comparisons with standard averaging algorithms, and further below an odd but smart application 🙂 Hope you’ve enjoyed!!

Download C++ version

Download MATLAB version

TCP/IP UDP/IP communication with MATLAB

One of the things that makes MATLAB so powerful is the ability to interface it with different data sources; after all, what’s the point of having extended calculation capabilities without a proper interface to the data to be processed?
If, like me, you have sometimes had the need to evaluate data flows through the Ethernet port, here’s how to do it:
  • TCP/IP communication between two instances of MATLAB on the same machine:

% Create a server interface on port 1200 and open it, accepting a connection
% from any machine.
socket = tcpip('0.0.0.0', 1200, 'NetworkRole', 'server');

  • TCP/IP communication between two instances of MATLAB on different machines, or between a PC running MATLAB and custom hardware such as a microcontroller, FPGA or DSP:

% Create a client interface that connects to the server at 192.168.1.200:1200,
% using local port 1207 (uncomment to use instead of the server interface above).
% socket = tcpip('192.168.1.200', 1200, 'LocalPort', 1207);

set(socket, 'InputBufferSize', 3000000);  % Set size of the rx buffer.
set(socket, 'OutputBufferSize', 3000000); % Set size of the tx buffer.
fopen(socket);                            % Open the connection.
fwrite(socket, data_out);                 % Write data_out to the socket.
data_in = fread(socket, N, 'double');     % Read N doubles from the socket.
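When you are done, remember to close the connection and release the interface object (the same calls work for both the server and the client case):

fclose(socket);  % Close the connection.
delete(socket);  % Delete the interface object.
clear socket     % Remove it from the workspace.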

For UDP communication, create the interface with socket = udp(...) instead.
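As a rough sketch of the UDP case (the addresses, ports and payload below are just placeholder values, not taken from the project scripts):

% --- Sender side: send 50 doubles to the receiver at 192.168.1.200:9090 ---
tx = udp('192.168.1.200', 9090);     % remote host and remote port (placeholders)
fopen(tx);
fwrite(tx, rand(1, 50), 'double');   % send 50 random doubles (fits in one datagram)
fclose(tx); delete(tx); clear tx

% --- Receiver side: listen on local port 9090 ---
rx = udp('192.168.1.200', 9090, 'LocalPort', 9090);  % remote host is a placeholder
fopen(rx);
data_in = fread(rx, 50, 'double');   % read 50 doubles from the socket
fclose(rx); delete(rx); clear rx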

Below you can find an exhaustive, commented example of both the server and the client script, with signal generation, transmission, echo from the client host and plots of the data sent.

Enjoy and comment!
