Mar 20, 2010


A Quick Insight

It's not a filter at all; it's a recursive estimator, which is very convenient to implement as a computer algorithm: at each step, you use the previous output as an input.

Grasping the meaning of the Kalman Filter by starting from definitions and complicated equations is nearly impossible. The equation below is a much easier place to start:

x̂_k = K_k · z_k + (1 − K_k) · x̂_{k−1}
The subscript k denotes the state. Here we can treat it as a discrete time index, so that k = 1 means 1 ms, k = 2 means 2 ms, and so on.

Our purpose is to find x̂_k, the estimate of the signal x. And we wish to find it for each consecutive k.

Also here, z_k is the measurement value. Keep in mind that we are not perfectly sure of these values; otherwise, we wouldn't need to do all this. K_k is called the "Kalman Gain" (which is the key point of all this), and x̂_{k−1} is the estimate of the signal at the previous state.


The only unknown component in this equation is the Kalman Gain K_k, because we have the measurement values and we already have the previously estimated signal. You should calculate this Kalman Gain for each consecutive state.

On the other hand, what do we get if we assume K_k to be 0.5? It's simple averaging! In other words, we should find smarter coefficients for each state. The bottom line is:

"Kalman filter finds the most optimum averaging factor for each consequent state;& somehow remembers a little bit about the past states."



Build a Model

First of all, we must be sure that the Kalman filtering conditions fit our problem.

The two equations of the KLF are:

x_k = A · x_{k−1} + B · u_k + w_{k−1}        (state equation)
z_k = H · x_k + v_k                          (measurement equation)
This means that each x_k (our signal value) may be evaluated by using a linear stochastic equation (the first one). Any x_k is a linear combination of its previous value plus a control signal u_k and a process noise w_{k−1} (which may be hard to conceptualize). Remember that, most of the time, there is no control signal u_k.

The second equation tells us that any measurement value z_k (whose accuracy we are not sure of) is a linear combination of the signal value x_k and the measurement noise v_k. Both noises are considered to be Gaussian.

The process noise and the measurement noise are *statistically independent.

[*(of events or values) having the probability of their joint occurrence equal to the product of their individual probabilities.]

The entities A, B and H are, in the general form, matrices. But in most of our signal processing problems, we use models in which these entities are just numeric values. As an additional ease, while these values may change between states, most of the time we can assume that they're constant.

If we are pretty sure that our system fits this model, the only thing left is to estimate the mean and standard deviation of the noise terms w_{k−1} and v_k. We know that, in real life, no signal is purely Gaussian, but we may assume so with some approximation. This is not a big problem, because we'll see that the Kalman Filtering algorithm tries to converge to the correct estimates even if the Gaussian noise parameters are poorly estimated.

The only thing to keep in mind is: "The better you estimate the noise parameters, the better estimates you get."
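
To see what these two equations describe, here is a minimal simulation sketch, assuming scalar A = H = 1, no control signal, and made-up noise levels (the values of q_std, r_std and the initial state are assumptions for illustration only):

    import random

    # Simulate the scalar model:
    #   x_k = A * x_(k-1) + w_(k-1)    (state equation, no control signal)
    #   z_k = H * x_k + v_k            (measurement equation)
    A, H = 1.0, 1.0
    q_std, r_std = 0.01, 0.1    # assumed process / measurement noise std devs

    x = 0.4                     # assumed true initial state
    for k in range(1, 6):
        x = A * x + random.gauss(0.0, q_std)    # the state evolves with process noise
        z = H * x + random.gauss(0.0, r_std)    # we only ever observe the noisy z
        print(f"k = {k}   true x = {x:.3f}   measured z = {z:.3f}")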



Starting the Process


When you fit your problem (model) into the KLF, the next step is to determine the parameters and the initial values.

We have two distinct sets of equations: the *Time Update (prediction) and the *Measurement Update (correction). Both equation sets are applied at each kth state.

[*They can be derived from the linear stochastic difference equation (the 2 equations of the KLF) by taking the partial derivatives and setting them to zero.]
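
As a sketch of that derivation in the scalar case (with H = 1, which is the case this post mostly deals with): writing the corrected estimate as x̂_k = x̂_k⁻ + K_k · (z_k − x̂_k⁻) and using z_k = x_k + v_k, the posterior error variance works out to

    P_k = E[(x_k − x̂_k)²] = (1 − K_k)² · P_k⁻ + K_k² · R

Setting its derivative with respect to K_k to zero,

    dP_k/dK_k = −2 · (1 − K_k) · P_k⁻ + 2 · K_k · R = 0   ⇒   K_k = P_k⁻ / (P_k⁻ + R)

which is exactly the Kalman Gain of the Measurement Update below.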

Time Update (prediction):

x̂_k⁻ = A · x̂_{k−1} + B · u_k
P_k⁻ = A · P_{k−1} · Aᵀ + Q

Measurement Update (correction):

K_k = P_k⁻ · Hᵀ · (H · P_k⁻ · Hᵀ + R)⁻¹
x̂_k = x̂_k⁻ + K_k · (z_k − H · x̂_k⁻)
P_k = (1 − K_k · H) · P_k⁻

When we do the modeling, we know the matrices A, B and H. Most probably, they will be numerical constants. (And quite possibly, they'll be equal to 1.)

R, the measurement noise covariance, is rather simple to find, because in general we're quite sure about the noise in the environment. But finding out Q, the process noise covariance, is not so obvious.

To start the process, we need to know x̂₀, the initial estimate, and P₀, the initial error covariance.



Iterate

Now that we have gathered all the information we need and started the process, we can iterate through the estimates; the previous estimates become the input for the current state.

[Figure: the iteration loop — the Time Update (prediction) produces the priors x̂_k⁻ and P_k⁻, which feed the Measurement Update (correction), whose outputs x̂_k and P_k loop back as inputs for the next state.]

Here, x̂_k⁻ is the "prior estimate", which, in a way, means the rough estimate before the measurement update correction. And P_k⁻ is the "prior error covariance". We use these "prior" values in the Measurement Update equations.

In the Measurement Update equations, we find x̂_k, the estimate of x at time k (the very thing we wish to find). We also find P_k, which is necessary for the k+1 (future) estimate, together with x̂_k. The Kalman Gain K_k we evaluate is not needed for the next iteration step; it's the hidden, mysterious and most important part of this set of equations.

The values we evaluate at the Measurement Update stage are also called "posterior" values, which also makes sense.
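
Putting the two stages together, here is a minimal scalar implementation sketch, assuming A = H = 1 and no control signal (the function name kalman_step and the parameters q, r — standing for the noise variances Q and R — are my own, for illustration):

    def kalman_step(x_hat, P, z, q, r):
        """One predict/correct iteration of the scalar Kalman filter (A = H = 1, no control)."""
        # Time Update (prediction)
        x_hat_prior = x_hat    # prior estimate: x_hat_k_prior = x_hat_(k-1)
        P_prior = P + q        # prior error covariance: P_k_prior = P_(k-1) + Q

        # Measurement Update (correction)
        K = P_prior / (P_prior + r)                  # Kalman Gain
        x_hat = x_hat_prior + K * (z - x_hat_prior)  # posterior estimate
        P = (1 - K) * P_prior                        # posterior error covariance
        return x_hat, P

Each call consumes one measurement z and returns the posterior values, which are fed straight back in as the inputs of the next state.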


source: http://bilgin.esme.org


********************************************************


A Simple Example


Now let's try to estimate a scalar random constant, such as a "voltage reading" from a source. Let's assume that it has a constant value of a V (volts), but of course the readings we take are noisy, scattered above and below a volts. And we assume that the standard deviation of the measurement noise is 0.1 V.

Now let's build our model:

x_k = x_{k−1} + w_{k−1}          (A = 1, no control signal)
z_k = x_k + v_k                  (H = 1)
We have reduced the equations to a very simple form.

• Above all, we have a one-dimensional signal problem, so every entity in our model is a numerical value, not a matrix.

• We have no control signal u_k; it's out of the game.

• As the signal is a constant value, the constant A is just 1, because we already know that the next value will be the same as the previous one. We are lucky to have a constant value in this example, but even if it were of any other linear nature, we could still easily assume that A is 1.

• The value H = 1, because we know that the measurement is composed of the state value plus some noise. You'll rarely encounter real-life cases where H is different from 1.

Let's assume that we have the following measurement values:

Time (ms):  1     2     3     4     5     6     7     8     9     10
Value (V):  0.39  0.50  0.48  0.29  0.25  0.32  0.34  0.48  0.41  0.45

We should start from somewhere, such as k = 0. We should find or assume some initial state; here, we simply pick some initial values. Let's assume x̂₀ = 0 and P₀ = 1. Why didn't we choose P₀ = 0, for example? It's simple: that choice would mean we believe there's no noise in the environment at all, and it would force all the consecutive estimates to stay at zero (remaining in the initial state). So we choose P₀ to be something other than zero.
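
You can see this trap directly with the kalman_step sketch from the Iterate section (taking Q as 0, as in this constant-signal model): with P₀ = 0 the gain stays 0 forever, and the estimate never moves.

    x_hat, P = 0.0, 0.0    # P0 = 0: "we are absolutely sure the initial estimate is perfect"
    for z in [0.39, 0.50, 0.48]:
        x_hat, P = kalman_step(x_hat, P, z, q=0.0, r=0.1 ** 2)
        print(x_hat)       # prints 0.0 every time; the measurements are ignored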

Let's write the Time Update and Measurement Update equations for this model:

Time Update (prediction):
x̂_k⁻ = x̂_{k−1}
P_k⁻ = P_{k−1}          (taking the process noise Q as 0, since the source is constant)

Measurement Update (correction):
K_k = P_k⁻ / (P_k⁻ + R)
x̂_k = x̂_k⁻ + K_k · (z_k − x̂_k⁻)
P_k = (1 − K_k) · P_k⁻
Now, let's calculate the values for each iteration, with R = 0.1² = 0.01:

k = 1:  x̂₁⁻ = x̂₀ = 0,  P₁⁻ = P₀ = 1
        K₁ = 1 / (1 + 0.01) ≈ 0.990
        x̂₁ = 0 + 0.990 · (0.39 − 0) ≈ 0.386
        P₁ = (1 − 0.990) · 1 ≈ 0.0099

k = 2:  x̂₂⁻ = x̂₁ ≈ 0.386,  P₂⁻ = P₁ ≈ 0.0099
        K₂ = 0.0099 / (0.0099 + 0.01) ≈ 0.498
        x̂₂ = 0.386 + 0.498 · (0.50 − 0.386) ≈ 0.443
        P₂ = (1 − 0.498) · 0.0099 ≈ 0.0050
Here, I displayed the first 2 state iterations in detail; the others follow the same pattern. I computed the remaining numerical values with a short computer program, which is the appropriate way to do it.
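
A minimal sketch of such a program, reusing the simplified equations above (R = 0.1² = 0.01 from the stated measurement noise, and Q taken as 0 for the constant source):

    measurements = [0.39, 0.50, 0.48, 0.29, 0.25, 0.32, 0.34, 0.48, 0.41, 0.45]

    R = 0.1 ** 2           # measurement noise variance (std dev 0.1 V)
    x_hat, P = 0.0, 1.0    # initial estimate x0 and error covariance P0

    for k, z in enumerate(measurements, start=1):
        # Time Update: A = 1 and Q = 0, so the priors equal the previous posteriors
        x_prior, P_prior = x_hat, P
        # Measurement Update
        K = P_prior / (P_prior + R)
        x_hat = x_prior + K * (z - x_prior)
        P = (1 - K) * P_prior
        print(f"k = {k:2d}   z = {z:.2f}   K = {K:.3f}   x_hat = {x_hat:.4f}")

The first two lines it prints reproduce the hand-calculated iterations above.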

Plotting these estimates against the measurements shows that the Kalman Filter algorithm converges to the true voltage value. Here, I displayed the first 10 iterations, and we clearly see the signs of convergence; with 50 or so iterations, it would converge even better.

To make the filter converge in fewer steps, you should:

• Model the system more elegantly
• Estimate the noise more precisely





Mar 18, 2010

Applications of KLF (Research Papers)

1. An Extended Kalman Filtering Approach to Modeling Nonlinear Dynamic Gene Regulatory Networks via Short Gene Expression Time Series

In this paper, the extended Kalman filter (EKF) algorithm is applied to model the gene regulatory network from gene time series data. The gene regulatory network is considered as a nonlinear dynamic stochastic model that consists of the gene measurement equation and the gene regulation equation. After specifying the model structure, the EKF algorithm is applied for identifying both the model parameters and the actual value of gene expression levels. It is shown that the EKF algorithm is an online estimation algorithm that can identify a large number of parameters (including parameters of nonlinear functions) through an iterative procedure using a small number of observations. Four real-world gene expression data sets are employed to demonstrate the effectiveness of the EKF algorithm, and the obtained models are evaluated from the viewpoint of bioinformatics.

2. Relaxed parametric design with probabilistic constraints

Parametric design is an important modelling paradigm in computer-aided design. Relationships (constraints) are specified between the degrees of freedom (DOFs) of the model, instead of the DOFs themselves, resulting in efficient design modifications and variations. Current parametric modellers require an exact specification of all the constraints involved, which causes overwork for the designer during design iterations. The relaxed-parametric-design modelling paradigm is described, in which decisions which needlessly limit the freedom of design in later stages are avoided. The designer uses soft constraints, and specifies the level of exactness with which they are to be met. As a specific scheme for implementing relaxed parametric design, probabilistic constraints are presented, where a parametric model is viewed as a stochastic process. The softness of a constraint is represented as the covariance of a suitably distributed random variable. A novel method is described of expressing the DOFs and the model as a system of probabilistic equations, which is then solved using the Kalman filter, a powerful estimation tool for stochastic systems. An a priori covariance matrix associated with a DOF can be used as a guideline for the solver to select a particular solution from multiple solutions.

3. An alternative Kalman innovation filter approach for receiver position estimation based on GPS measurements

This article presents an alternative Kalman innovation filter approach for receiver position estimation, based on pseudorange measurements of the global positioning system. First, a dynamic pseudorange model is represented as an ARMAX model and a pseudorange state-space innovation model suitable for both parameter identification and state estimation. The Kalman gain in the pseudorange coordinates is directly calculated from the identified parameters without prior knowledge of the noise properties and the receiver parameters. Then, the pseudorange state-space innovation model is transformed into the receiver state-space innovation model for optimal estimation of the receiver position. Hence, the proposed approach overcomes the drawbacks of the classical Kalman filter approach, since it does not require prior knowledge of the noise properties or of the receiver's dynamic model to calculate the Kalman gain. In addition, due to its simplicity, it can be easily implemented in any receiver. To demonstrate the effectiveness of the approach, it is utilized to estimate the position of a stationary receiver, and its performance is compared against two versions of the classical Kalman filter approach. The results show that the proposed approach yields consistently good estimation of the receiver position and outperforms the other methods.

4. Parallel computation of the modified extended Kalman filter

In this paper, certain techniques for mapping the modified extended Kalman filter (MEKF) onto systolic array processors are described. First, a square-root algorithm based on the singular value decomposition (SVD) for the Kalman filter is introduced. Then, VLSI architecture of the systolic array type for its implementation is developed. Compared with other existing square-root Kalman filtering algorithms, this new design is numerically more stable and has nicer parallel and pipelining characteristics when it is applied to the MEKF. Moreover, it achieves higher efficiency. For n-dimensional state vector estimations, the proposed architecture consists of O(3/2 n²) processing elements and completes an iteration in time O((s + 8)n), in contrast to the time complexity of O((s + 3)n³) for a sequential implementation, where s ≈ log n.






When you are at a crossroads...

To see a world in a grain of sand
And a heaven in a wild flower,
Hold infinity in the palm of your hand
And eternity in an hour.
WILLIAM BLAKE (1757-1827)

Some good links about research (thanks to http://damithakumarage.wordpress.com/):

[1] http://www.cse.iitk.ac.in/users/jalote/GenArticles/ET23-06-03.htm
[2] http://www-2.cs.cmu.edu/~mblum/research/pdf/grad.html
[3] http://www-2.cs.cmu.edu/~harchol/gradschooltalk.pdf



