I have been working on a Kalman filter to search for anomalies in a two-dimensional data set, very similar to the excellent post that I found here. As a next step, I'd like to compute intervals (for example, 95% floor and ceiling values) within which I expect the next values to fall. So in addition to the line below, I'd like to be able to generate two additional lines representing a 95% confidence that the next value will be above the floor and below the ceiling.
I assume that I'll want to use the uncertainty covariance matrix (P) that is returned with each prediction generated by the Kalman filter, but I'm not sure if that's right. Any guidance or reference on how to do this would be much appreciated!
kalman 2d filter in python
The code in the post above generates a set of measurements over time and uses a Kalman filter to smooth the results.
import numpy as np
import matplotlib.pyplot as plt

def kalman_xy(x, P, measurement, R,
              motion = np.matrix('0. 0. 0. 0.').T,
              Q = np.matrix(np.eye(4))):
    """
    Parameters:
    x: initial state 4-tuple of location and velocity: (x0, x1, x0_dot, x1_dot)
    P: initial uncertainty covariance matrix
    measurement: observed position
    R: measurement noise
    motion: external motion added to state vector x
    Q: motion noise (same shape as P)
    """
    return kalman(x, P, measurement, R, motion, Q,
                  F = np.matrix('''
                      1. 0. 1. 0.;
                      0. 1. 0. 1.;
                      0. 0. 1. 0.;
                      0. 0. 0. 1.
                      '''),
                  H = np.matrix('''
                      1. 0. 0. 0.;
                      0. 1. 0. 0.'''))
def kalman(x, P, measurement, R, motion, Q, F, H):
    '''
    Parameters:
    x: initial state
    P: initial uncertainty covariance matrix
    measurement: observed position (same shape as H*x)
    R: measurement noise (same shape as H)
    motion: external motion added to state vector x
    Q: motion noise (same shape as P)
    F: next state function: x_prime = F*x
    H: measurement function: position = H*x

    Return: the updated and predicted new values for (x, P)

    See also http://en.wikipedia.org/wiki/Kalman_filter

    This version of kalman can be applied to many different situations by
    appropriately defining F and H.
    '''
    # UPDATE x, P based on the measurement
    # distance between measured and current position-belief
    y = np.matrix(measurement).T - H * x
    S = H * P * H.T + R  # residual covariance
    K = P * H.T * S.I    # Kalman gain
    x = x + K*y
    I = np.matrix(np.eye(F.shape[0]))  # identity matrix
    P = (I - K*H)*P

    # PREDICT x, P based on motion
    x = F*x + motion
    P = F*P*F.T + Q
    return x, P
def demo_kalman_xy():
    x = np.matrix('0. 0. 0. 0.').T
    P = np.matrix(np.eye(4))*1000  # initial uncertainty
    N = 20
    true_x = np.linspace(0.0, 10.0, N)
    true_y = true_x**2
    observed_x = true_x + 0.05*np.random.random(N)*true_x
    observed_y = true_y + 0.05*np.random.random(N)*true_y
    plt.plot(observed_x, observed_y, 'ro')
    result = []
    R = 0.01**2
    for meas in zip(observed_x, observed_y):
        x, P = kalman_xy(x, P, meas, R)
        result.append((x[:2]).tolist())
    kalman_x, kalman_y = zip(*result)
    plt.plot(kalman_x, kalman_y, 'g-')
    plt.show()

demo_kalman_xy()
The 2D generalization of the 1-sigma interval is the confidence ellipse, which is characterized by the equation (x - mx).T * P^{-1} * (x - mx) == 1, with x being the parameter 2D vector, mx the 2D mean or ellipse center, and P^{-1} the inverse covariance matrix. See this answer on how to draw one. Like the sigma intervals, the ellipse's area corresponds to a fixed probability that the true value lies within it. By scaling with a factor n (scaling the interval length or the ellipse radii), a higher confidence can be reached. Note that a given factor n corresponds to different probabilities in one and two dimensions:
| `n` | 1D interval | 2D ellipse |
|-----|-------------|------------|
| 1   | 68.27%      | 39.35%     |
| 2   | 95.45%      | 86.47%     |
| 3   | 99.73%      | 98.89%     |
Calculating these values in 2D is a bit more involved than in 1D; they follow from the chi-square distribution with two degrees of freedom, i.e. the probability of lying inside the n-sigma ellipse is 1 - exp(-n^2 / 2). Unfortunately I don't have a public reference for a full derivation.
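For the 95% case specifically, here is a minimal sketch (my own, not part of the original answer; the variable names are assumptions) that turns the filter's P into a drawable ellipse by scaling with the chi-square quantile instead of an integer n:

import numpy as np
from scipy.stats import chi2
from matplotlib.patches import Ellipse

def confidence_ellipse(mean_xy, P_full, prob=0.95):
    """Matplotlib Ellipse covering `prob` of a 2D Gaussian.

    mean_xy: predicted position, e.g. (x[0, 0], x[1, 0]) from the filter
    P_full:  full state covariance; its upper-left 2x2 block is the
             position covariance for the (x0, x1, x0_dot, x1_dot) state
    """
    P_pos = np.asarray(P_full)[:2, :2]
    s = chi2.ppf(prob, df=2)               # scale factor: chi-square quantile, 2 dof
    vals, vecs = np.linalg.eigh(P_pos)     # squared axis lengths and directions
    order = vals.argsort()[::-1]
    vals, vecs = vals[order], vecs[:, order]
    angle = np.degrees(np.arctan2(vecs[1, 0], vecs[0, 0]))
    width, height = 2 * np.sqrt(s * vals)  # full axis lengths
    return Ellipse(xy=mean_xy, width=width, height=height,
                   angle=angle, fill=False, color='b')

After each kalman_xy call you could then add such an ellipse to the plot with ax.add_patch(confidence_ellipse((x[0, 0], x[1, 0]), P)).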
If you want a 95% interval in which you expect the next values to fall, then you want a prediction interval, not a confidence interval (http://en.wikipedia.org/wiki/Prediction_interval).
For 2-D (3-D) data, the semi-axes of the ellipse (ellipsoid) can be found by calculating the eigenvalues of the covariance matrix of the data and adjusting the size of the semi-axes to account for the necessary prediction probability.
See Prediction ellipse and prediction ellipsoid for Python code to calculate the 95% prediction ellipse or ellipsoid.
This might help you calculate the prediction ellipse for your data.
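As a rough illustration (my own sketch, not the code from the linked post), the semi-axes for 2D data could be computed like this; for simplicity it uses the large-sample chi-square approximation, whereas an exact small-sample prediction ellipse uses an F-distribution factor instead:

import numpy as np
from scipy.stats import chi2

def prediction_ellipse_axes(data, prob=0.95):
    """Approximate 95% prediction-ellipse semi-axes for 2D data.

    data: (N, 2) array of observations
    Returns the semi-axis lengths and their directions (columns of vecs).
    """
    cov = np.cov(data, rowvar=False)   # 2x2 sample covariance
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues / eigenvectors
    s = chi2.ppf(prob, df=2)           # ~5.99 for 95% with 2 dof
    return np.sqrt(s * vals), vecs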
Because your statistic is of course derived from a sample, the probability that the population statistic exceeds your sample-based 2-sigma value is roughly 0.5. So consider whether you really have a good prediction of a value the next measurement will fall below with probability 0.95 if you have not applied an upper-confidence factor to that 2-sigma estimate. The magnitude of the factor depends on the sample size used to estimate the covariance matrix: the smaller the sample, the larger the factor needed for 95% confidence that the population statistic is below the scaled-up sample statistic.
Related
I'm trying to find the distance between a fitted hyperplane and five points. Most of the responses I've read use SVM, but I'm not trying to do a classification problem. I know there are probably multiple ways to do this in Python, but I'm a little stumped.
As an example here are my points:
[[ 163.3828172 169.65537306 144.69201418]
[-212.50951396 -167.06555958 56.69388025]
[-164.65129832 -163.42420063 -149.97008725]
[ 41.8704004 52.2538316 14.0683657 ]
[-128.38386078 -102.76840542 -303.4960438 ]]
To find the equation of the fitted plane, I use SVD to compute the coefficients of ax + by + cz + d = 0.
def fit_plane(points):
    assert points.shape[1] == 3
    centroid = points.mean(axis=0)
    x = points - centroid[None, :]
    U, S, Vt = np.linalg.svd(x.T @ x)
    # normal vector of the best-fitting plane is the left
    # singular vector corresponding to the least singular value
    normal = U[:, -1]
    # calculate the distance from the origin
    origin_distance = normal @ centroid
    return np.hstack([normal, -origin_distance])
fit_plane(X)
Giving the equation:
-0.67449074x + 0.73767288y -0.03001614z -10.75632119 = 0
Now how do I calculate the distance between the points and the hyperplane? The formula I've seen used in conjunction with SVMs is d = |w^T x + b| / ||w||, but I don't know how to get there from the equation I already have.
You can find the distance between a plane π and a point P by dropping a perpendicular N from P to π and finding the point A where N and π intersect. The distance you are looking for is the distance between A and P.
This video explains the math of finding A (although it is about finding the reflection, finding A is part of it).
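Concretely, since fit_plane above returns a unit normal (a, b, c) and an offset d for the plane a*x + b*y + c*z + d = 0, the distance from a point is |a*px + b*py + c*pz + d| / ||(a, b, c)||, and the division is a no-op for a unit normal. A minimal sketch (my own, reusing the names from the question):

import numpy as np

def plane_point_distances(plane, points):
    """Distance from each point to the plane a*x + b*y + c*z + d = 0.

    plane:  array [a, b, c, d], e.g. the output of fit_plane above
    points: (N, 3) array of points
    """
    normal, d = plane[:3], plane[3]
    # |n.p + d| / ||n||; the norm is 1 when n comes from the SVD above
    return np.abs(points @ normal + d) / np.linalg.norm(normal)

# e.g. distances = plane_point_distances(fit_plane(points), points)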
I'm new to working with Gaussian processes, so please target answers to a relative beginner. I'm trying to sample correlated noise where the mean of each sample is 0 (i.e., mean-centered), so I've been running the following code. However, while the variance across samples is the same for each dimension before mean-centering each sample, mean-centering causes the variance to be increasingly large at the ends of the vectors. I'm fairly certain I understand why this happens, but I'm struggling to figure out if there's a way to have each sample mean-centered while maintaining equal variance across dimensions.
import numpy as np

def rbf_kernel(x_1, x_2, sig):
    return np.exp((-(x_1-x_2)**2)/2*(sig**2))

X = np.array([[0.08333333],
              [0.25      ],
              [0.41666667],
              [0.58333333],
              [0.75      ],
              [0.91666667]])

r = 0.1
covNoise = np.zeros((6, 6))
for i, x1 in enumerate(X):
    for j, x2 in enumerate(X):
        covNoise[i,j] = rbf_kernel(x1, x2, r)

noise = np.random.multivariate_normal(np.zeros(6), covNoise, 1000)

np.var(noise, axis=0)
# Variance before mean-centering -- variance is constant across the vector
# array([0.99994815, 0.99941361, 0.9989251 , 0.99848157, 0.99806782, 0.99768438])

noise_meanCentered = noise - noise.mean(axis=1, keepdims=True)
np.var(noise_meanCentered, axis=0)
# Variance after mean-centering -- variance is greatest at the ends of the vector
# array([0.15211363, 0.0589172, 0.01052137, 0.01053556, 0.0589244, 0.15203642])
I am trying to learn about Perlin Noise and procedural generation. I am reading through an online tutorial about generating landscapes with noise, but I don't understand part of the author's explanation about making areas with higher elevation.
On this webpage under the "islands" section there is the text
Design a shape that matches what you want from islands. Use the lower shape to push the map up and the upper shape to push the map down. These shapes are functions from distance d to elevation 0-1. Set e = lower(d) + e * (upper(d) - lower(d)).
I want to do this, but I'm not sure what the author means when they're talking about upper and lower shapes.
What could the author mean by "Use the lower shape to push the map up and the upper shape to push the map down"?
Code Example:
from __future__ import division
import numpy as np
import math
import noise

def __noise(noise_x, noise_y, octaves=1, persistence=0.5, lacunarity=2):
    """
    Generates and returns a noise value.

    :param noise_x: The noise value of x
    :param noise_y: The noise value of y
    :return: numpy.float32
    """
    value = noise.pnoise2(noise_x, noise_y,
                          octaves, persistence, lacunarity)
    return np.float32(value)

def __elevation_map():
    elevation_map = np.zeros([900, 1600], np.float32)
    for y in range(900):
        for x in range(1600):
            noise_x = x / 1600 - 0.5
            noise_y = y / 900 - 0.5
            # find distance from center of map
            distance = math.sqrt((x - 800)**2 + (y - 450)**2)
            distance = distance / 450
            value = __noise(noise_x, noise_y, 8, 0.9, 2)
            value = (1 + value - distance) / 2
            elevation_map[y][x] = value
    return elevation_map
The author means that you should describe the final elevation, fe, of a point in terms of its distance from the centre, d, as well as the initial elevation, e, which was presumably generated by noise.
So, for example, if you wanted your map to look something like a bowl, but maintaining the noisy characteristic of your originally generated terrain, you could use the following functions:
def lower(d):
    # the lower elevation is 0 no matter how near you are to the centre
    return 0

def upper(d):
    # the upper elevation varies quadratically with distance from the centre
    return d ** 2

def modify(d, initial_e):
    return lower(d) + initial_e * (upper(d) - lower(d))
Note in particular the paragraph starting with "How does this work?", which I found quite illuminating.
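As a concrete (hypothetical) way to wire this into the __elevation_map loop from the question, you could rescale the raw noise to the 0-1 range and pass it through modify together with the distance; this is my own sketch, not the author's code:

# inside __elevation_map(), after computing `distance` and `value`:
e = (value + 1) / 2                        # pnoise2 output is roughly in [-1, 1]; rescale to [0, 1]
elevation_map[y][x] = modify(distance, e)  # bowl shape from lower/upper, noise preserved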
Since learning about point charges in my physics II class this semester, I want to be able to investigate not only the static force and field distributions but the actual trajectories of electrically charged particles. The first stage in doing this is to build a naive engine for simulating the dynamics of n individual point particles. I've implemented the solution using matrices in Python and was hoping someone could comment on whether I've done so correctly. As I don't know what kind of dynamics to expect, I can't tell directly from the videos whether my implementation of my equations is correct.
My Particular Problem
In particular, I cannot tell whether, in my calculation of the force magnitude, I am computing the 1/r^(3/2) factor correctly. Why? Because when I simulate a dipole and use 2/2 as the exponent, the particles settle into an elliptical orbit, which is what I would expect. However, when I use the correct exponent, the motion I get doesn't look right. Where is my code going wrong? What am I supposed to expect?
I'll first write down the equations I'm using:
Given n charges q_1, q_2, ..., q_n, with masses m_1, m_2, ..., m_n, located at initial positions r_1, r_2, ..., r_n, with velocities (d/dt)r_1, (d/dt)r_2, ..., (d/dt)r_n, the force induced on q_i by q_j is given by
F_(j -> i) = k * q_i * q_j * (r_i - r_j) / ||r_i - r_j||^3   (in the code the denominator is computed as the squared norm raised to the power 3/2)
Now, the net force on particle q_i is given as the sum of the pairwise forces
F_(N, i) = sum_(j != i)(F_(j -> i))
And then the net acceleration of particle q_i is just the net force divided by the particle's mass:
(d^2/dt^2)r_i = F_(N, i)/m_i
In total, for n particles, we have a system of n coupled second-order differential equations. We also need to specify n initial particle velocities and n initial positions.
To implement this in Python, I need to compute pairwise point distances and pairwise charge products. To do this I tile the q vector of charges and the r vector of positions and take, respectively, their product and difference with their transposes.
import numpy as np
import numpy.ma as ma
from scipy.integrate import odeint

def integrator_func(y, t, q, m, n, d, k):
    y = np.copy(y.reshape((n*2, d)))

    # rj across, ri down
    rs_from = np.tile(y[:n], (n, 1, 1))
    # ri across, rj down
    rs_to = np.transpose(rs_from, axes=(1, 0, 2))

    # directional distance between each r_i and r_j
    # dr_ij is the force from j onto i, i.e. r_i - r_j
    dr = rs_to - rs_from

    # Used as a mask to ignore divides by zero between r_i and r_i
    nd_identity = np.eye(n).reshape((n, n, 1))

    # WHAT I AM UNSURE ABOUT
    drmag = ma.array(
        np.power(
            np.sum(np.power(dr, 2), 2),
            3./2),
        mask=nd_identity)

    # Pairwise q_i*q_j for force equation
    qsa = np.tile(q, (n, 1))
    qsb = np.tile(q, (n, 1)).T
    qs = qsa*qsb

    # Directional forces
    Fs = (k*qs/drmag).reshape((n, n, 1))

    # Dividing by m to obtain acceleration vectors
    a = np.sum(Fs*dr, 1)

    # Setting velocities
    y[:n] = np.copy(y[n:])
    # Entering the acceleration into the velocity slot
    y[n:] = np.copy(a)

    # Flattening it out for scipy.odeint to work properly
    return np.array(y).reshape(n*2*d)
def sim_particles(t, r, v, q, m, k=1.):
    """
    With n particles in d dimensions:

    t: timepoints to integrate over
    r: n*d matrix. The d-dimensional initial positions of n particles
    v: n*d matrix of initial particle velocities
    q: n*1 matrix of particle charges
    m: n*1 matrix of particle masses
    k: electric constant.
    """
    d = r.shape[-1]
    n = r.shape[0]

    y0 = np.zeros((n*2, d))
    y0[:n] = r
    y0[n:] = v
    y0 = y0.reshape(n*2*d)

    yf = odeint(
        integrator_func,
        y0,
        t,
        args=(q, m, n, d, k)).reshape(t.shape[0], n*2, d)
    return yf
My input is a 2D (x, y) time series of a dot moving on a screen, produced by tracking software. It has some noise I want to remove using a Kalman filter. Can someone point me to Python code for a 2D Kalman filter?
In the SciPy cookbook I found only a 1D example:
http://www.scipy.org/Cookbook/KalmanFiltering
I saw that there is a Kalman filter implementation in OpenCV, but I couldn't find code examples.
Thanks!
Here is my implementation of the Kalman filter based on the equations given on Wikipedia. Please be aware that my understanding of Kalman filters is very rudimentary, so there are most likely ways to improve this code. (For example, it suffers from the numerical instability problem discussed here. As I understand it, this only affects the numerical stability when Q, the motion noise, is very small. In real life the noise is usually not small, so fortunately (at least for my implementation) the numerical instability does not show up in practice.)
In the example below, kalman_xy assumes the state vector is a 4-tuple: 2 numbers for the location, and 2 numbers for the velocity.
The F and H matrices have been defined specifically for this state vector: If x is a 4-tuple state, then
new_x = F * x
position = H * x
It then calls kalman, which is the generalized Kalman filter. It is general in the sense that it is still useful if you wish to define a different state vector -- perhaps a 6-tuple representing location, velocity, and acceleration. You just have to define the equations of motion by supplying the appropriate F and H.
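As an illustration (my own sketch, not part of the original answer), a constant-acceleration model with state (x0, x1, x0_dot, x1_dot, x0_ddot, x1_ddot) and an assumed unit time step dt could use:

import numpy as np

dt = 1.0  # assumed time step
# state: (x0, x1, x0_dot, x1_dot, x0_ddot, x1_ddot)
F6 = np.matrix([
    [1, 0, dt,  0, 0.5*dt**2, 0        ],
    [0, 1,  0, dt, 0,         0.5*dt**2],
    [0, 0,  1,  0, dt,        0        ],
    [0, 0,  0,  1, 0,         dt       ],
    [0, 0,  0,  0, 1,         0        ],
    [0, 0,  0,  0, 0,         1        ],
])
# we still only measure position, so H just picks out x0 and x1
H6 = np.matrix([
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
])

These would be passed to kalman along with a 6x6 P and Q; H still only extracts the measured positions.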
import numpy as np
import matplotlib.pyplot as plt

def kalman_xy(x, P, measurement, R,
              motion = np.matrix('0. 0. 0. 0.').T,
              Q = np.matrix(np.eye(4))):
    """
    Parameters:
    x: initial state 4-tuple of location and velocity: (x0, x1, x0_dot, x1_dot)
    P: initial uncertainty covariance matrix
    measurement: observed position
    R: measurement noise
    motion: external motion added to state vector x
    Q: motion noise (same shape as P)
    """
    return kalman(x, P, measurement, R, motion, Q,
                  F = np.matrix('''
                      1. 0. 1. 0.;
                      0. 1. 0. 1.;
                      0. 0. 1. 0.;
                      0. 0. 0. 1.
                      '''),
                  H = np.matrix('''
                      1. 0. 0. 0.;
                      0. 1. 0. 0.'''))
def kalman(x, P, measurement, R, motion, Q, F, H):
    '''
    Parameters:
    x: initial state
    P: initial uncertainty covariance matrix
    measurement: observed position (same shape as H*x)
    R: measurement noise (same shape as H)
    motion: external motion added to state vector x
    Q: motion noise (same shape as P)
    F: next state function: x_prime = F*x
    H: measurement function: position = H*x

    Return: the updated and predicted new values for (x, P)

    See also http://en.wikipedia.org/wiki/Kalman_filter

    This version of kalman can be applied to many different situations by
    appropriately defining F and H.
    '''
    # UPDATE x, P based on the measurement
    # distance between measured and current position-belief
    y = np.matrix(measurement).T - H * x
    S = H * P * H.T + R  # residual covariance
    K = P * H.T * S.I    # Kalman gain
    x = x + K*y
    I = np.matrix(np.eye(F.shape[0]))  # identity matrix
    P = (I - K*H)*P

    # PREDICT x, P based on motion
    x = F*x + motion
    P = F*P*F.T + Q
    return x, P
def demo_kalman_xy():
    x = np.matrix('0. 0. 0. 0.').T
    P = np.matrix(np.eye(4))*1000  # initial uncertainty
    N = 20
    true_x = np.linspace(0.0, 10.0, N)
    true_y = true_x**2
    observed_x = true_x + 0.05*np.random.random(N)*true_x
    observed_y = true_y + 0.05*np.random.random(N)*true_y
    plt.plot(observed_x, observed_y, 'ro')
    result = []
    R = 0.01**2
    for meas in zip(observed_x, observed_y):
        x, P = kalman_xy(x, P, meas, R)
        result.append((x[:2]).tolist())
    kalman_x, kalman_y = zip(*result)
    plt.plot(kalman_x, kalman_y, 'g-')
    plt.show()

demo_kalman_xy()
The red dots show the noisy position measurements, the green line shows the Kalman predicted positions.
For a project of mine, I needed to create intervals for time-series modeling, and to make the procedure more efficient I created tsmoothie: a Python library for time-series smoothing and outlier detection in a vectorized way.
It provides different smoothing algorithms together with the possibility to compute intervals.
In the case of KalmanSmoother, you can smooth a curve by combining different components: level, trend, seasonality, and long seasonality.
import numpy as np
import matplotlib.pyplot as plt
from tsmoothie.smoother import *
from tsmoothie.utils_func import sim_randomwalk

# generate 3 random-walk time series of length 100
np.random.seed(123)
data = sim_randomwalk(n_series=3, timesteps=100,
                      process_noise=10, measure_noise=30)

# operate smoothing
smoother = KalmanSmoother(component='level_trend',
                          component_noise={'level':0.1, 'trend':0.1})
smoother.smooth(data)

# generate intervals
low, up = smoother.get_intervals('kalman_interval', confidence=0.05)

# plot the first smoothed timeseries with intervals
plt.figure(figsize=(11,6))
plt.plot(smoother.smooth_data[0], linewidth=3, color='blue')
plt.plot(smoother.data[0], '.k')
plt.fill_between(range(len(smoother.data[0])), low[0], up[0], alpha=0.3)
Note also that tsmoothie can carry out the smoothing of multiple time series in a vectorized way.