distance measurement using disparity map - python

I am working on 3D reconstruction and distance measurement using OpenCV and Python. I generate the disparity map for the left camera and then use this formula to get the distance:
D=(f*b/disp)
Where f is the focal length, b is the distance between the 2 cameras and disp is the matrix of the disparity map.
My questions are:
The numbers that I get, are they supposed to be the distance of each point in the picture?
What is the max distance that I can get with this method (for example, in my project the max number I get is 110)?
img_L = cv2.pyrDown(cv2.imread(Li))
img_R = cv2.pyrDown(cv2.imread(Ri))
h, w = img_L.shape[:2]
diff = (6, 6, 6)  # unused below
mask = np.zeros((h + 2, w + 2), np.uint8)  # unused below
window_size = 3
min_disp = 16
num_disp = 112 - min_disp
# Note: in OpenCV 3+ this constructor is cv2.StereoSGBM_create, with
# blockSize instead of SADWindowSize and mode instead of fullDP.
stereo = cv2.StereoSGBM(minDisparity = min_disp,
                        numDisparities = num_disp,
                        SADWindowSize = window_size,
                        uniquenessRatio = 10,
                        speckleWindowSize = 100,
                        speckleRange = 32,
                        disp12MaxDiff = 1,
                        P1 = 8*3*window_size**2,
                        P2 = 32*3*window_size**2,
                        fullDP = False)
print "computing disparity..."
disp = stereo.compute(img_L, img_R).astype(np.float32) / 16.0
print "generating 3d point cloud..."
h, w = img_L.shape[:2]
f = 0.8*w  # guess for focal length
points = cv2.reprojectImageTo3D(disp, Mat)  # Mat should be the 4x4 disparity-to-depth matrix Q (e.g. from cv2.stereoRectify)
colors = cv2.cvtColor(img_L, cv2.COLOR_BGR2RGB)
mask = disp > disp.min()
cv2.imshow('left', img_L)
disparity = (disp - min_disp) / num_disp
cv2.imshow('disparity', disparity)
b = 6.50
D = b*f/disp
cv2.waitKey()
cv2.destroyAllWindows()
return D

The values D that you get using this formula are the depths of each point for which you provided a disparity.
The depth and the distance are two slightly different things. If you use the standard coordinate system for a camera (i.e. Z axis along the optical axis, X and Y axis in the directions of the image X and Y axis), then a 3D point M = (X, Y, Z) has a distance of sqrt(X²+Y²+Z²) from the optical center and a depth of Z. The D in the formula is the depth, not the distance.
If you want to retrieve the 3D point M = (X, Y, Z) from the depth value, you need to know the camera matrix K: M = D * inv(K) * [u; v; 1], where (u, v) are the image coordinates of the point.
Edit: Concerning your second question, the maximum depth that you can get with this method is linked to the minimum disparity (not the maximum, since disp is in the denominator). And since disparity estimation is quantized (computed in discrete pixel steps), you can't estimate depth up to infinity.
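To make the depth-to-3D-point step concrete, here is a minimal sketch of the back-projection described above; the camera matrix K, the pixel (u, v) and the depth value are made-up numbers for illustration, not values from the question:
import numpy as np

K = np.array([[700.0,   0.0, 320.0],   # hypothetical camera intrinsic matrix
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
u, v = 400, 260                         # pixel coordinates of the point
depth = 95.0                            # D = f*b/disp at that pixel (same units as b)

ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected ray, Z component = 1
M = depth * ray                                 # 3D point (X, Y, Z); Z equals the depth
distance = np.linalg.norm(M)                    # Euclidean distance from the optical center
print(M, distance)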

Alpha Shapes in 3D - Follow Up

Problem
I am trying to implement Edelsbrunner's Algorithm for the alpha shape of a 3D point cloud in Python, as presented in this SO post. However, I'm having trouble plotting the results. Half my sphere looks good, and the other half is garbled.
I suspect it may have something to do with the fact that I have negative coordinates, but I'm not sure.
Definitions
I'm adding these so programmers can contribute without being bogged down by math. These are simplified definitions and not meant to be precise (feel free to skip this part; for more, see Introduction to Alpha Shapes and Discrete Differential Geometry):
Delaunay triangulation: for a 2D point set, a tesselation of the points into triangles (i.e. "triangulation") where the circumscribed circle (i.e. "circumcircle") about every triangle contains no other points in the set. For 3D points, replace "triangle" with "tetrahedron" and "circumcircle" with "circumsphere".
Affinely independent: a collection of points p0, ..., pk such that all vectors vi := pi-p0 are linearly independent (i.e. in 2D not collinear, in 3D not coplanar); also called "points in general position"
k-simplex: the convex hull of k+1 affinely-independent points; we call the points its vertices.
0-simplex = point (consists of 0+1 = 1 points)
1-simplex = line (consists of 1+1 = 2 points)
2-simplex = triangle (consists of 2+1 = 3 points)
3-simplex = tetrahedron (consists of 3+1 = 4 points)
face: any simplex whose vertices are a subset of the vertices of another simplex; i.e. "a part of a simplex"
(geometric) simplicial complex: a collection of simplices where (1) the intersection of two simplices is a simplex, and (2) every face of a simplex is in the complex; i.e. "a bunch of simplices"
alpha-exposed: a simplex within a point set where the circle (2D) or ball (3D) of radius alpha through its vertices doesn't contain any other point in the point set
alpha shape: the boundary of all alpha-exposed simplices of a point set
Algorithm
Edelsbrunner's Algorithm is as follows:
Given a point cloud pts:
Compute the Delaunay triangulation DT of the point cloud
Find the alpha complex: search all simplices in the Delaunay triangulation and, if the ball around a simplex is empty and has a radius less than alpha (called the "alpha test"), add it to the alpha complex
The boundary of the alpha complex is the alpha shape
Code
from scipy.spatial import Delaunay
import numpy as np
from collections import defaultdict
from matplotlib import pyplot as plt
import pyvista as pv
fig = plt.figure()
ax = plt.axes(projection="3d")
plotter = pv.Plotter()
def alpha_shape_3D(pos, alpha):
"""
Compute the alpha shape (concave hull) of a set of 3D points.
Parameters:
pos - np.array of shape (n,3) points.
alpha - alpha value.
return
outer surface vertex indices, edge indices, and triangle indices
"""
tetra = Delaunay(pos)
# Find radius of the circumsphere.
# By definition, the radius of the circumsphere of a tetrahedron needs
# to be smaller than the alpha value for it to belong to the alpha complex
# http://mathworld.wolfram.com/Circumsphere.html
tetrapos = np.take(pos,tetra.vertices,axis=0)
normsq = np.sum(tetrapos**2,axis=2)[:,:,None]
ones = np.ones((tetrapos.shape[0],tetrapos.shape[1],1))
a = np.linalg.det(np.concatenate((tetrapos,ones),axis=2))
Dx = np.linalg.det(np.concatenate((normsq,tetrapos[:,:,[1,2]],ones),axis=2))
Dy = -np.linalg.det(np.concatenate((normsq,tetrapos[:,:,[0,2]],ones),axis=2))
Dz = np.linalg.det(np.concatenate((normsq,tetrapos[:,:,[0,1]],ones),axis=2))
c = np.linalg.det(np.concatenate((normsq,tetrapos),axis=2))
r = np.sqrt(Dx**2+Dy**2+Dz**2-4*a*c)/(2*np.abs(a))
# Find tetrahedra that pass the alpha test
tetras = tetra.vertices[r<alpha,:]
# triangles
TriComb = np.array([(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)])
Triangles = tetras[:,TriComb].reshape(-1,3)
Triangles = np.sort(Triangles,axis=1)
# Remove triangles that occur twice, because they are interior to the shape
TrianglesDict = defaultdict(int)
for tri in Triangles:TrianglesDict[tuple(tri)] += 1
Triangles=np.array([tri for tri in TrianglesDict if TrianglesDict[tri] ==1])
#edges
EdgeComb=np.array([(0, 1), (0, 2), (1, 2)])
Edges=Triangles[:,EdgeComb].reshape(-1,2)
Edges=np.sort(Edges,axis=1)
Edges=np.unique(Edges,axis=0)
Vertices = np.unique(Edges)
return Vertices,Edges,Triangles
def ptcloud_sphere():
r = 3
phi = np.linspace(0, np.pi, 18)
theta = np.linspace(0, 2 * np.pi, 36)
PHI, THETA = np.meshgrid(phi, theta)
x = r * np.sin(PHI) * np.cos(THETA)
y = r * np.sin(PHI) * np.sin(THETA)
z = r * np.cos(PHI)
ax.scatter(x, y, z)
plt.show()
pts = np.stack((x.ravel(), y.ravel(), z.ravel()), axis=1)
return np.unique(pts, axis=0)
if __name__ == "__main__":
pts = ptcloud_sphere()
verts, edges, faces = alpha_shape_3D(pts, alpha=10)
faces_conn_list = np.insert(faces, 0, 3, axis=1)
num_faces = faces.shape[0]
mesh = pv.PolyData(pts[verts], faces_conn_list, n_faces=num_faces)
plotter.add_mesh(mesh, reset_camera=True)
plotter.show()
Output
Point Cloud:
Alpha Shape:
Update 09-SEP-2021
Per @akaszynski, the problem does indeed appear to be a combination of unique and negative points. He fixed the issue with the following:
pts = np.stack((x.ravel(), y.ravel(), z.ravel()), axis=1) + 10
return np.unique(pts, axis=0) - 10
However, if someone could perform a deeper investigation to determine why this is the problem, that would help.
Update 09-SEP-2021 #2
Per @AndrasDeak, both pyvista and VTK support the creation of 2D and 3D alpha shapes. The pyvista function delaunay_3d uses vtkDelaunay3D under the hood, which accepts an alpha parameter.
(See vtkDelaunay3D docs)
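For reference, a hedged sketch of that pyvista route; the alpha value here is an arbitrary choice that would need tuning for a given cloud:
import numpy as np
import pyvista as pv

# Sample points on a sphere of radius 3, as in ptcloud_sphere() above.
phi, theta = np.meshgrid(np.linspace(0, np.pi, 18), np.linspace(0, 2 * np.pi, 36))
pts = np.stack((3 * np.sin(phi) * np.cos(theta),
                3 * np.sin(phi) * np.sin(theta),
                3 * np.cos(phi)), axis=-1).reshape(-1, 3)

cloud = pv.PolyData(np.unique(pts, axis=0))
grid = cloud.delaunay_3d(alpha=1.0)     # wraps vtkDelaunay3D; alpha in world units
surface = grid.extract_geometry()       # boundary surface of the alpha complex
surface.plot()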

Aligning a Point Cloud with the Floor (plane) Using Open3D

Community,
I am trying to align a point cloud with the detected floor using Open3D. So far I have implemented the following steps (partly based on this answer):
Detecting the floor using Open3D's plane segmentation
Translating the plane to the coordinate center
Calculating rotation angle between plane normal & z-axis
Calculating the axis of rotation
Rotating the point cloud using Open3Ds get_rotation_matrix_from_axis_angle function (see 3)
The results are OK, but I have to apply an optimization factor at the end to get better results. Is there a mistake, or a simpler/more precise way to do the alignment?
# See functions below
# Get the plane equation of the floor → ax+by+cz+d = 0
floor = get_floor_plane(pcd)
a, b, c, d = floor
# Translate plane to coordinate center
pcd.translate((0,-d/c,0))
# Calculate rotation angle between plane normal & z-axis
plane_normal = tuple(floor[:3])
z_axis = (0,0,1)
rotation_angle = vector_angle(plane_normal, z_axis)
# Calculate rotation axis
plane_normal_length = math.sqrt(a**2 + b**2 + c**2)
u1 = b / plane_normal_length
u2 = -a / plane_normal_length
rotation_axis = (u1, u2, 0)
# Generate axis-angle representation
optimization_factor = 1.4
axis_angle = tuple([x * rotation_angle * optimization_factor for x in rotation_axis])
# Rotate point cloud
R = pcd.get_rotation_matrix_from_axis_angle(axis_angle)
pcd.rotate(R, center=(0,0,0))
# FUNCTIONS
def vector_angle(u, v):
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def get_floor_plane(pcd, dist_threshold=0.02, visualize=False):
    plane_model, inliers = pcd.segment_plane(distance_threshold=dist_threshold,
                                             ransac_n=3,
                                             num_iterations=1000)
    [a, b, c, d] = plane_model
    return plane_model
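One possible reason the extra factor is needed: the rotation axis built above, (b, -a, 0)/|n|, is the cross product of the plane normal with the z-axis and has length sin(angle) rather than 1, so the axis-angle vector ends up scaled by sin(angle). Below is a hedged sketch of the rotation step with the axis normalized first; it reuses pcd and the plane model floor from the snippet above and is not a verified fix for this particular cloud:
import numpy as np

a, b, c, d = floor                                  # plane model from get_floor_plane(pcd)
plane_normal = np.array([a, b, c])
z_axis = np.array([0.0, 0.0, 1.0])

rotation_angle = np.arccos(np.dot(plane_normal, z_axis) / np.linalg.norm(plane_normal))
rotation_axis = np.cross(plane_normal, z_axis)      # same direction as (b, -a, 0)
rotation_axis /= np.linalg.norm(rotation_axis)      # unit axis, no correction factor needed

R = pcd.get_rotation_matrix_from_axis_angle(rotation_axis * rotation_angle)
pcd.rotate(R, center=(0, 0, 0))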

Why does the output contain only 2 values but not the displacement for the entire image?

I have been stuck here for some time now. I cannot understand what I am doing wrong in calculating the displacement vectors along the x-axis and y-axis using the Lucas-Kanade method.
I implemented it as given in the above Wikipedia link. Here is what I have done:
import cv2
import numpy as np
img_a = cv2.imread("./images/1.png",0)
img_b = cv2.imread("./images/2.png",0)
# Calculate gradient along x and y axis
ix = cv2.Sobel(img_a, cv2.CV_64F, 1, 0, ksize = 3, scale = 1.0/3.0)
iy = cv2.Sobel(img_a, cv2.CV_64F, 0, 1, ksize = 3, scale = 1.0/3.0)
# Calculate temporal difference between the 2 images
it = img_b - img_a
ix = ix.flatten()
iy = iy.flatten()
it = -it.flatten()
A = np.vstack((ix, iy)).T
atai = np.linalg.inv(np.dot(A.T,A))
atb = np.dot(A.T, it)
v = np.dot(np.dot(np.linalg.inv(np.dot(A.T,A)),A.T),it)
print(v)
This code runs without an error but it prints an array of 2 values! I had expected the v matrix to be of the same size as that of the image. Why does this happen? What am I doing incorrectly?
PS: I know there are methods directly available with OpenCV but I want to write this simple algorithm (as also given in the Wikipedia link shared above) myself.
To properly compute the Lucas–Kanade optical flow estimate you need to solve the system of two equations for every pixel, using information from its neighborhood, not for the image as a whole.
This is the recipe (notation refers to that used on the Wikipedia page):
Compute the image gradient (A) for the first image (ix, iy in the OP) using any method (Sobel is OK, I prefer Gaussian derivatives; note that it is important to apply the right scaling in Sobel: 1/8).
ix = cv2.Sobel(img_a, cv2.CV_64F, 1, 0, ksize = 3, scale = 1.0/8.0)
iy = cv2.Sobel(img_a, cv2.CV_64F, 0, 1, ksize = 3, scale = 1.0/8.0)
Compute the structure tensor (AᵀWA): Axx = ix * ix, Axy = ix * iy, Ayy = iy * iy. Each of these three images must be smoothed with a Gaussian filter (this is the windowing). For example,
Axx = cv2.GaussianBlur(ix * ix, (0,0), 5)
Axy = cv2.GaussianBlur(ix * iy, (0,0), 5)
Ayy = cv2.GaussianBlur(iy * iy, (0,0), 5)
These three images together form the structure tensor, which is a 2x2 symmetric matrix at each pixel. For a pixel at (i,j), the matrix is:
| Axx(i,j) Axy(i,j) |
| Axy(i,j) Ayy(i,j) |
Compute the temporal gradient (b) by subtracting the two images (it in the OP).
it = img_b - img_a
Compute AᵀWb: Abx = ix * it, Aby = iy * it, and smooth these two images with the same Gaussian filter as above.
Abx = cv2.GaussianBlur(ix * it, (0,0), 5)
Aby = cv2.GaussianBlur(iy * it, (0,0), 5)
Compute the inverse of AᵀWA (a symmetric positive-definite matrix) and multiply by AᵀWb. Note that this inverse is of the 2x2 matrix at each pixel, not of the images as a whole. You can write this out as a set of simple arithmetic operations on the images Axx, Axy, Ayy, Abx and Aby.
The inverse of the matrix AᵀWA is given by:
| Ayy -Axy |
| -Axy Axx | / ( Axx*Ayy - Axy*Axy )
so you can write the solution as
norm = Axx*Ayy - Axy*Axy
vx = ( Ayy * Abx - Axy * Aby ) / norm
vy = ( Axx * Aby - Axy * Abx ) / norm
If the image is natural, it will have at least a tiny bit of noise, and norm will not have zeros. But for artificial images norm could have zeros, meaning you can't divide by it. Simply adding a small value to it will avoid division by zero errors: norm += 1e-6.
The size of the Gaussian filter is chosen as a compromise between precision and allowed motion speed: a larger filter will yield less precise results, but will work with larger shifts between images.
Typically, vx and vy are only evaluated where the two eigenvalues of the matrix AᵀWA are sufficiently large (if at least one of them is small, the result is inaccurate or possibly wrong).
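A short sketch of that check, reusing Axx, Axy, Ayy, vx and vy from the steps above; the closed form below is the smallest eigenvalue of the 2x2 structure tensor, and the threshold tau is an arbitrary value to tune:
trace = Axx + Ayy
det = Axx * Ayy - Axy * Axy
lambda_min = 0.5 * (trace - np.sqrt(trace**2 - 4 * det))  # smallest eigenvalue per pixel

tau = 1e-2                       # arbitrary threshold, tune for your images
valid = lambda_min > tau
vx = np.where(valid, vx, 0.0)    # keep the flow only where the estimate is reliable
vy = np.where(valid, vy, 0.0)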
Using DIPlib (disclosure: I'm an author) this is all very easy because it supports images with a matrix at each pixel. You would do this as follows:
import diplib as dip
img_a = dip.ImageRead("./images/1.png")
img_b = dip.ImageRead("./images/2.png")
A = dip.Gradient(img_a, [1.0])
b = img_b - img_a
ATA = dip.Gauss(A * dip.Transpose(A), [5.0])
ATb = dip.Gauss(A * b, [5.0])
v = dip.Inverse(ATA) * ATb

Trilinear Interpolation on Voxels at specific angle

I'm currently attempting to implement this algorithm for volume rendering in Python, and am conceptually confused about their method of generating the LH histogram (see section 3.1, page 4).
I have a 3D stack of DICOM images and have calculated its gradient magnitude and the two corresponding azimuth and elevation angles (which I found out about here), as well as the second derivative.
Now, the algorithm is asking me to iterate through a set of voxels, and "track a path by integrating the gradient field in both directions...using the second order Runge-Kutta method with an integration step of one voxel".
What I don't understand is how to use the 2 angles I calculated to integrate the gradient field in said direction. I understand that you can use trilinear interpolation to get intermediate voxel values, but I don't understand how to get the voxel coordinates I want using the angles I have.
In other words, I start at a given voxel position, and want to take a 1 voxel step in the direction of the 2 angles calculated for that voxel (one in the x-y direction, the other in the z-direction). How would I take this step at these 2 angles and retrieve the new (x, y, z) voxel coordinates?
Apologies in advance, as I have a very basic background in Calc II/III, so vector fields/visualization of 3D spaces is still a little rough for me.
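In case it helps, here is a hedged sketch of one way to take that step. It assumes the azimuth is the angle of the gradient in the x-y plane, the elevation is its angle out of that plane (both in radians, so convert back from degrees), that trilinear interpolation is done with scipy.ndimage.map_coordinates at order=1, and that the angle volumes are indexed the same way as the image stack:
import numpy as np
from scipy.ndimage import map_coordinates

def direction_from_angles(azimuth, elevation):
    # unit direction vector from the two angles (spherical -> Cartesian)
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

def sample(volume, p):
    # trilinear interpolation of a 3D volume at the fractional position p = (i, j, k)
    return map_coordinates(volume, p.reshape(3, 1), order=1)[0]

def rk2_step(p, azimuth_vol, elevation_vol, h=1.0):
    # one second-order Runge-Kutta (midpoint) step of length h voxels
    d1 = direction_from_angles(sample(azimuth_vol, p), sample(elevation_vol, p))
    mid = p + 0.5 * h * d1
    d2 = direction_from_angles(sample(azimuth_vol, mid), sample(elevation_vol, mid))
    return p + h * d2            # new fractional (x, y, z) voxel coordinates
Integrating in the opposite direction is the same call with h = -1.0.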
Creating 3D stack of DICOM images:
def collect_data(data_path):
print "collecting data"
files = [] # create an empty list
for dirName, subdirList, fileList in os.walk(data_path):
for filename in fileList:
if ".dcm" in filename:
files.append(os.path.join(dirName,filename))
# Get reference file
ref = dicom.read_file(files[0])
# Load dimensions based on the number of rows, columns, and slices (along the Z axis)
pixel_dims = (int(ref.Rows), int(ref.Columns), len(files))
# Load spacing values (in mm)
pixel_spacings = (float(ref.PixelSpacing[0]), float(ref.PixelSpacing[1]), float(ref.SliceThickness))
x = np.arange(0.0, (pixel_dims[0]+1)*pixel_spacings[0], pixel_spacings[0])
y = np.arange(0.0, (pixel_dims[1]+1)*pixel_spacings[1], pixel_spacings[1])
z = np.arange(0.0, (pixel_dims[2]+1)*pixel_spacings[2], pixel_spacings[2])
# Row and column directional cosines
orientation = ref.ImageOrientationPatient
# This will become the intensity values
dcm = np.zeros(pixel_dims, dtype=ref.pixel_array.dtype)
origins = []
# loop through all the DICOM files
for filename in files:
# read the file
ds = dicom.read_file(filename)
#get pixel spacing and origin information
origins.append(ds.ImagePositionPatient) #[0,0,0] coordinates in real 3D space (in mm)
# store the raw image data
dcm[:, :, files.index(filename)] = ds.pixel_array
return dcm, origins, pixel_spacings, orientation
Calculating gradient magnitude:
def calculate_gradient_magnitude(dcm):
print "calculating gradient magnitude"
gradient_magnitude = []
gradient_direction = []
gradx = np.zeros(dcm.shape)
sobel(dcm,0,gradx)
grady = np.zeros(dcm.shape)
sobel(dcm,1,grady)
gradz = np.zeros(dcm.shape)
sobel(dcm,2,gradz)
gradient = np.sqrt(gradx**2 + grady**2 + gradz**2)
azimuthal = np.arctan2(grady, gradx)
elevation = np.arctan2(gradz, np.sqrt(gradx**2 + grady**2))  # angle of the gradient out of the x-y plane
azimuthal = np.degrees(azimuthal)
elevation = np.degrees(elevation)
return gradient, azimuthal, elevation
Converting to patient coordinate system to get actual voxel position:
def get_patient_position(dcm, origins, pixel_spacing, orientation):
"""
Image Space --> Anatomical (Patient) Space is an affine transformation
using the Image Orientation (Patient), Image Position (Patient), and
Pixel Spacing properties from the DICOM header
"""
print "getting patient coordinates"
world_coordinates = np.empty((dcm.shape[0], dcm.shape[1],dcm.shape[2], 3))
affine_matrix = np.zeros((4,4), dtype=np.float32)
rows = dcm.shape[0]
cols = dcm.shape[1]
num_slices = dcm.shape[2]
image_orientation_x = np.array([ orientation[0], orientation[1], orientation[2] ]).reshape(3,1)
image_orientation_y = np.array([ orientation[3], orientation[4], orientation[5] ]).reshape(3,1)
pixel_spacing_x = pixel_spacing[0]
# Construct affine matrix
# Method from:
# http://nipy.org/nibabel/dicom/dicom_orientation.html
T_1 = origins[0]
T_n = origins[num_slices-1]
affine_matrix[0,0] = image_orientation_y[0] * pixel_spacing[0]
affine_matrix[0,1] = image_orientation_x[0] * pixel_spacing[1]
affine_matrix[0,3] = T_1[0]
affine_matrix[1,0] = image_orientation_y[1] * pixel_spacing[0]
affine_matrix[1,1] = image_orientation_x[1] * pixel_spacing[1]
affine_matrix[1,3] = T_1[1]
affine_matrix[2,0] = image_orientation_y[2] * pixel_spacing[0]
affine_matrix[2,1] = image_orientation_x[2] * pixel_spacing[1]
affine_matrix[2,3] = T_1[2]
affine_matrix[3,3] = 1
k1 = (T_1[0] - T_n[0])/ (1 - num_slices)
k2 = (T_1[1] - T_n[1])/ (1 - num_slices)
k3 = (T_1[2] - T_n[2])/ (1 - num_slices)
affine_matrix[:3, 2] = np.array([k1,k2,k3])
for z in range(num_slices):
for r in range(rows):
for c in range(cols):
vector = np.array([r, c, 0, 1]).reshape((4,1))
result = np.matmul(affine_matrix, vector)
result = np.delete(result, 3, axis=0)
result = np.transpose(result)
world_coordinates[r,c,z] = result
# print "Finished slice ", str(z)
# np.save('./data/saved/world_coordinates_3d.npy', str(world_coordinates))
return world_coordinates
Now I'm at the point where I want to write this function:
def create_lh_histogram(patient_positions, dcm, magnitude, azimuthal, elevation):
print "constructing LH histogram"
# Get 2nd derivative
second_derivative = gaussian_filter(magnitude, sigma=1, order=1)
# Determine if voxels lie on boundary or not (thresholding)
# Still have to code out: let's say the thresholded voxels are in
# a numpy array called voxels
#Iterate through all thresholded voxels and integrate gradient field in
# both directions using 2nd-order Runge-Kutta
vox_it = np.nditer(voxels, flags=['multi_index'])
while not vox_it.finished:
# ???

OpenCV Kalman Filter python

Can anyone provide a sample or some sort of example of a Kalman filter implementation in Python 2.7 and OpenCV 2.4.13?
I want to use it in a video to track a person, but I don't have any reference to learn from and I couldn't find any Python examples.
I know the Kalman filter exists in OpenCV as cv2.KalmanFilter, but I have no idea how to use it. Any guidance would be appreciated.
The kalman.py code below is the example included in the OpenCV 3.2 source on GitHub. It should be easy to change the syntax back to 2.4 if needed.
#!/usr/bin/env python
"""
Tracking of rotating point.
Rotation speed is constant.
Both state and measurements vectors are 1D (a point angle),
Measurement is the real point angle + gaussian noise.
The real and the estimated points are connected with yellow line segment,
the real and the measured points are connected with red line segment.
(if Kalman filter works correctly,
the yellow segment should be shorter than the red one).
Pressing any key (except ESC) will reset the tracking with a different speed.
Pressing ESC will stop the program.
"""
# Python 2/3 compatibility
import sys
PY3 = sys.version_info[0] == 3
if PY3:
long = int
import cv2
from math import cos, sin, sqrt
import numpy as np
if __name__ == "__main__":
img_height = 500
img_width = 500
kalman = cv2.KalmanFilter(2, 1, 0)
code = long(-1)
cv2.namedWindow("Kalman")
while True:
state = 0.1 * np.random.randn(2, 1)
kalman.transitionMatrix = np.array([[1., 1.], [0., 1.]])
kalman.measurementMatrix = 1. * np.ones((1, 2))
kalman.processNoiseCov = 1e-5 * np.eye(2)
kalman.measurementNoiseCov = 1e-1 * np.ones((1, 1))
kalman.errorCovPost = 1. * np.ones((2, 2))
kalman.statePost = 0.1 * np.random.randn(2, 1)
while True:
def calc_point(angle):
return (np.around(img_width/2 + img_width/3*cos(angle), 0).astype(int),
np.around(img_height/2 - img_width/3*sin(angle), 1).astype(int))
state_angle = state[0, 0]
state_pt = calc_point(state_angle)
prediction = kalman.predict()
predict_angle = prediction[0, 0]
predict_pt = calc_point(predict_angle)
measurement = kalman.measurementNoiseCov * np.random.randn(1, 1)
# generate measurement
measurement = np.dot(kalman.measurementMatrix, state) + measurement
measurement_angle = measurement[0, 0]
measurement_pt = calc_point(measurement_angle)
# plot points
def draw_cross(center, color, d):
cv2.line(img,
(center[0] - d, center[1] - d), (center[0] + d, center[1] + d),
color, 1, cv2.LINE_AA, 0)
cv2.line(img,
(center[0] + d, center[1] - d), (center[0] - d, center[1] + d),
color, 1, cv2.LINE_AA, 0)
img = np.zeros((img_height, img_width, 3), np.uint8)
draw_cross(np.int32(state_pt), (255, 255, 255), 3)
draw_cross(np.int32(measurement_pt), (0, 0, 255), 3)
draw_cross(np.int32(predict_pt), (0, 255, 0), 3)
cv2.line(img, state_pt, measurement_pt, (0, 0, 255), 3, cv2.LINE_AA, 0)
cv2.line(img, state_pt, predict_pt, (0, 255, 255), 3, cv2.LINE_AA, 0)
kalman.correct(measurement)
process_noise = sqrt(kalman.processNoiseCov[0,0]) * np.random.randn(2, 1)
state = np.dot(kalman.transitionMatrix, state) + process_noise
cv2.imshow("Kalman", img)
code = cv2.waitKey(100)
if code != -1:
break
if code in [27, ord('q'), ord('Q')]:
break
cv2.destroyWindow("Kalman")
Here is the OpenCV 2.4 Doc on Kalman Filter. Hope this helps.
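If all you need is the basic API wiring, below is a minimal constant-velocity sketch for tracking a single 2D point (for example the centre of a detected person's bounding box); the matrices and noise values are assumptions to be tuned, not values taken from the sample above:
import cv2
import numpy as np

# state = [x, y, vx, vy], measurement = [x, y]
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-4 * np.eye(4, dtype=np.float32)      # tuning values, not canonical
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

for mx, my in [(10, 10), (12, 11), (14, 13)]:        # detected positions, one per frame
    prediction = kf.predict()                         # predicted [x, y, vx, vy]
    kf.correct(np.array([[mx], [my]], np.float32))    # update with the measurement
    print(prediction[:2].ravel())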
I know you specifically mentioned that you need "Python 2.7" code. Still, in case anyone needs it, here is some information about that.
A video from my channel on Multi-target tracking: https://www.youtube.com/watch?v=bkn6M4LAoHk
The basics that you should know about Kalman Filtering and Multiple-Human Tracking:
Camera as a sensor: You need a proper detector (YOLO etc.) that provides frame-by-frame bounding boxes.
Tracking the bounding box:
Track handling is done by the Kalman filtering framework. The eight-dimensional state space contains the bounding box center position, aspect ratio, height, and their respective velocities in image coordinates. A standard Kalman filter with a constant-velocity motion model and a linear observation model is used, where the bounding box coordinates are taken as direct observations of the object state.
Frame-to-frame association: What if there are three people in the scene? Since detectors do not provide any identification for the bounding boxes, you need to match the current frame's bounding boxes to those of the previous frame. I suggest searching for the keywords "Gating" and "Data Association".
import numpy as np
import scipy.linalg

class KalmanFilter(object):
"""
A simple Kalman filter for tracking bounding boxes in image space.
The 8-dimensional state space
x, y, a, h, vx, vy, va, vh
contains the bounding box center position (x, y), aspect ratio a, height h,
and their respective velocities.
Object motion follows a constant velocity model. The bounding box location
(x, y, a, h) is taken as direct observation of the state space (linear
observation model).
"""
def __init__(self):
ndim, dt = 4, 1.
# Create Kalman filter model matrices.
self._motion_mat = np.eye(2 * ndim, 2 * ndim)
for i in range(ndim):
self._motion_mat[i, ndim + i] = dt
self._update_mat = np.eye(ndim, 2 * ndim)
# Motion and observation uncertainty are chosen relative to the current
# state estimate. These weights control the amount of uncertainty in
# the model. This is a bit hacky.
self._std_weight_position = 1. / 20
self._std_weight_velocity = 1. / 160
def initiate(self, measurement):
"""Create track from unassociated measurement.
Parameters
----------
measurement : ndarray
Bounding box coordinates (x, y, a, h) with center position (x, y),
aspect ratio a, and height h.
Returns
-------
(ndarray, ndarray)
Returns the mean vector (8 dimensional) and covariance matrix (8x8
dimensional) of the new track. Unobserved velocities are initialized
to 0 mean.
"""
mean_pos = measurement
mean_vel = np.zeros_like(mean_pos)
mean = np.r_[mean_pos, mean_vel]
std = [
2 * self._std_weight_position * measurement[3],
2 * self._std_weight_position * measurement[3],
1e-2,
2 * self._std_weight_position * measurement[3],
10 * self._std_weight_velocity * measurement[3],
10 * self._std_weight_velocity * measurement[3],
1e-5,
10 * self._std_weight_velocity * measurement[3]]
covariance = np.diag(np.square(std))
return mean, covariance
def predict(self, mean, covariance):
"""Run Kalman filter prediction step.
Parameters
----------
mean : ndarray
The 8 dimensional mean vector of the object state at the previous
time step.
covariance : ndarray
The 8x8 dimensional covariance matrix of the object state at the
previous time step.
Returns
-------
(ndarray, ndarray)
Returns the mean vector and covariance matrix of the predicted
state. Unobserved velocities are initialized to 0 mean.
"""
std_pos = [
self._std_weight_position * mean[3],
self._std_weight_position * mean[3],
1e-2,
self._std_weight_position * mean[3]]
std_vel = [
self._std_weight_velocity * mean[3],
self._std_weight_velocity * mean[3],
1e-5,
self._std_weight_velocity * mean[3]]
motion_cov = np.diag(np.square(np.r_[std_pos, std_vel]))
mean = np.dot(self._motion_mat, mean)
covariance = np.linalg.multi_dot((
self._motion_mat, covariance, self._motion_mat.T)) + motion_cov
return mean, covariance
def project(self, mean, covariance):
"""Project state distribution to measurement space.
Parameters
----------
mean : ndarray
The state's mean vector (8 dimensional array).
covariance : ndarray
The state's covariance matrix (8x8 dimensional).
Returns
-------
(ndarray, ndarray)
Returns the projected mean and covariance matrix of the given state
estimate.
"""
std = [
self._std_weight_position * mean[3],
self._std_weight_position * mean[3],
1e-1,
self._std_weight_position * mean[3]]
innovation_cov = np.diag(np.square(std))
mean = np.dot(self._update_mat, mean)
covariance = np.linalg.multi_dot((
self._update_mat, covariance, self._update_mat.T))
return mean, covariance + innovation_cov
def update(self, mean, covariance, measurement):
"""Run Kalman filter correction step.
Parameters
----------
mean : ndarray
The predicted state's mean vector (8 dimensional).
covariance : ndarray
The state's covariance matrix (8x8 dimensional).
measurement : ndarray
The 4 dimensional measurement vector (x, y, a, h), where (x, y)
is the center position, a the aspect ratio, and h the height of the
bounding box.
Returns
-------
(ndarray, ndarray)
Returns the measurement-corrected state distribution.
"""
projected_mean, projected_cov = self.project(mean, covariance)
chol_factor, lower = scipy.linalg.cho_factor(
projected_cov, lower=True, check_finite=False)
kalman_gain = scipy.linalg.cho_solve(
(chol_factor, lower), np.dot(covariance, self._update_mat.T).T,
check_finite=False).T
innovation = measurement - projected_mean
new_mean = mean + np.dot(innovation, kalman_gain.T)
new_covariance = covariance - np.linalg.multi_dot((
kalman_gain, projected_cov, kalman_gain.T))
return new_mean, new_covariance
def gating_distance(self, mean, covariance, measurements,
only_position=False):
"""Compute gating distance between state distribution and measurements.
A suitable distance threshold can be obtained from `chi2inv95`. If
`only_position` is False, the chi-square distribution has 4 degrees of
freedom, otherwise 2.
Parameters
----------
mean : ndarray
Mean vector over the state distribution (8 dimensional).
covariance : ndarray
Covariance of the state distribution (8x8 dimensional).
measurements : ndarray
An Nx4 dimensional matrix of N measurements, each in
format (x, y, a, h) where (x, y) is the bounding box center
position, a the aspect ratio, and h the height.
only_position : Optional[bool]
If True, distance computation is done with respect to the bounding
box center position only.
Returns
-------
ndarray
Returns an array of length N, where the i-th element contains the
squared Mahalanobis distance between (mean, covariance) and
`measurements[i]`.
"""
mean, covariance = self.project(mean, covariance)
if only_position:
mean, covariance = mean[:2], covariance[:2, :2]
measurements = measurements[:, :2]
cholesky_factor = np.linalg.cholesky(covariance)
d = measurements - mean
z = scipy.linalg.solve_triangular(
cholesky_factor, d.T, lower=True, check_finite=False,
overwrite_b=True)
squared_maha = np.sum(z * z, axis=0)
return squared_maha
And this is a basic multi-target tracker.
class Tracker:
"""
This is the multi-target tracker.
Parameters
----------
metric : nn_matching.NearestNeighborDistanceMetric
A distance metric for measurement-to-track association.
max_age : int
Maximum number of consecutive misses before a track is deleted.
n_init : int
Number of consecutive detections before the track is confirmed. The
track state is set to `Deleted` if a miss occurs within the first
`n_init` frames.
Attributes
----------
metric : nn_matching.NearestNeighborDistanceMetric
The distance metric used for measurement to track association.
max_age : int
Maximum number of consecutive misses before a track is deleted.
n_init : int
Number of frames that a track remains in initialization phase.
kf : kalman_filter.KalmanFilter
A Kalman filter to filter target trajectories in image space.
tracks : List[Track]
The list of active tracks at the current time step.
"""
def __init__(self, metric, max_iou_distance=0.7, max_age=30, n_init=3):
self.metric = metric
self.max_iou_distance = max_iou_distance
self.max_age = max_age
self.n_init = n_init
self.kf = kalman_filter.KalmanFilter()
self.tracks = []
self._next_id = 1
def predict(self):
"""Propagate track state distributions one time step forward.
This function should be called once every time step, before `update`.
"""
for track in self.tracks:
track.predict(self.kf)
def update(self, detections):
"""Perform measurement update and track management.
Parameters
----------
detections : List[deep_sort.detection.Detection]
A list of detections at the current time step.
"""
# Run matching cascade.
matches, unmatched_tracks, unmatched_detections = \
self._match(detections)
# Update track set.
for track_idx, detection_idx in matches:
self.tracks[track_idx].update(
self.kf, detections[detection_idx])
for track_idx in unmatched_tracks:
self.tracks[track_idx].mark_missed()
for detection_idx in unmatched_detections:
self._initiate_track(detections[detection_idx])
self.tracks = [t for t in self.tracks if not t.is_deleted()]
# Update distance metric.
active_targets = [t.track_id for t in self.tracks if t.is_confirmed()]
features, targets = [], []
for track in self.tracks:
if not track.is_confirmed():
continue
features += track.features
targets += [track.track_id for _ in track.features]
track.features = []
self.metric.partial_fit(
np.asarray(features), np.asarray(targets), active_targets)
def _match(self, detections):
def gated_metric(tracks, dets, track_indices, detection_indices):
features = np.array([dets[i].feature for i in detection_indices])
targets = np.array([tracks[i].track_id for i in track_indices])
cost_matrix = self.metric.distance(features, targets)
cost_matrix = linear_assignment.gate_cost_matrix(
self.kf, cost_matrix, tracks, dets, track_indices,
detection_indices)
return cost_matrix
# Split track set into confirmed and unconfirmed tracks.
confirmed_tracks = [
i for i, t in enumerate(self.tracks) if t.is_confirmed()]
unconfirmed_tracks = [
i for i, t in enumerate(self.tracks) if not t.is_confirmed()]
# Associate confirmed tracks using appearance features.
matches_a, unmatched_tracks_a, unmatched_detections = \
linear_assignment.matching_cascade(
gated_metric, self.metric.matching_threshold, self.max_age,
self.tracks, detections, confirmed_tracks)
# Associate remaining tracks together with unconfirmed tracks using IOU.
iou_track_candidates = unconfirmed_tracks + [
k for k in unmatched_tracks_a if
self.tracks[k].time_since_update == 1]
unmatched_tracks_a = [
k for k in unmatched_tracks_a if
self.tracks[k].time_since_update != 1]
matches_b, unmatched_tracks_b, unmatched_detections = \
linear_assignment.min_cost_matching(
iou_matching.iou_cost, self.max_iou_distance, self.tracks,
detections, iou_track_candidates, unmatched_detections)
matches = matches_a + matches_b
unmatched_tracks = list(set(unmatched_tracks_a + unmatched_tracks_b))
return matches, unmatched_tracks, unmatched_detections
def _initiate_track(self, detection):
mean, covariance = self.kf.initiate(detection.to_xyah())
self.tracks.append(Track(
mean, covariance, self._next_id, self.n_init, self.max_age,
detection.feature))
self._next_id += 1
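For context, a hedged usage sketch of the tracker above, assuming the rest of the DeepSORT package (nn_matching, a Detection class, and Track) is importable as in the snippets; all_frames is a placeholder for your per-frame detection lists:
metric = nn_matching.NearestNeighborDistanceMetric("cosine", matching_threshold=0.2)
tracker = Tracker(metric)

for frame_detections in all_frames:           # list of Detection objects for one frame
    tracker.predict()
    tracker.update(frame_detections)
    for track in tracker.tracks:
        if track.is_confirmed() and track.time_since_update == 0:
            print(track.track_id, track.to_tlbr())   # current box, ready for drawing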
