Can anyone provide sample code or some sort of example of a Kalman filter implementation in Python 2.7 and OpenCV 2.4.13?
I want to use it in a video to track a person, but I don't have any reference to learn from and I couldn't find any Python examples.
I know the Kalman filter exists in OpenCV as cv2.KalmanFilter, but I have no idea how to use it. Any guidance would be appreciated.
The kalman.py code below is the example included in the OpenCV 3.2 source on GitHub. It should be easy to change the syntax back to 2.4 if needed.
#!/usr/bin/env python
"""
Tracking of rotating point.
Rotation speed is constant.
Both state and measurements vectors are 1D (a point angle),
Measurement is the real point angle + gaussian noise.
The real and the estimated points are connected with yellow line segment,
the real and the measured points are connected with red line segment.
(if Kalman filter works correctly,
the yellow segment should be shorter than the red one).
Pressing any key (except ESC) will reset the tracking with a different speed.
Pressing ESC will stop the program.
"""
# Python 2/3 compatibility
import sys
PY3 = sys.version_info[0] == 3
if PY3:
long = int
import cv2
from math import cos, sin, sqrt
import numpy as np
if __name__ == "__main__":
img_height = 500
img_width = 500
kalman = cv2.KalmanFilter(2, 1, 0)
code = long(-1)
cv2.namedWindow("Kalman")
while True:
state = 0.1 * np.random.randn(2, 1)
kalman.transitionMatrix = np.array([[1., 1.], [0., 1.]])
kalman.measurementMatrix = 1. * np.ones((1, 2))
kalman.processNoiseCov = 1e-5 * np.eye(2)
kalman.measurementNoiseCov = 1e-1 * np.ones((1, 1))
kalman.errorCovPost = 1. * np.ones((2, 2))
kalman.statePost = 0.1 * np.random.randn(2, 1)
while True:
def calc_point(angle):
return (np.around(img_width/2 + img_width/3*cos(angle), 0).astype(int),
np.around(img_height/2 - img_width/3*sin(angle), 1).astype(int))
state_angle = state[0, 0]
state_pt = calc_point(state_angle)
prediction = kalman.predict()
predict_angle = prediction[0, 0]
predict_pt = calc_point(predict_angle)
measurement = kalman.measurementNoiseCov * np.random.randn(1, 1)
# generate measurement
measurement = np.dot(kalman.measurementMatrix, state) + measurement
measurement_angle = measurement[0, 0]
measurement_pt = calc_point(measurement_angle)
# plot points
def draw_cross(center, color, d):
cv2.line(img,
(center[0] - d, center[1] - d), (center[0] + d, center[1] + d),
color, 1, cv2.LINE_AA, 0)
cv2.line(img,
(center[0] + d, center[1] - d), (center[0] - d, center[1] + d),
color, 1, cv2.LINE_AA, 0)
img = np.zeros((img_height, img_width, 3), np.uint8)
draw_cross(np.int32(state_pt), (255, 255, 255), 3)
draw_cross(np.int32(measurement_pt), (0, 0, 255), 3)
draw_cross(np.int32(predict_pt), (0, 255, 0), 3)
cv2.line(img, state_pt, measurement_pt, (0, 0, 255), 3, cv2.LINE_AA, 0)
cv2.line(img, state_pt, predict_pt, (0, 255, 255), 3, cv2.LINE_AA, 0)
kalman.correct(measurement)
process_noise = sqrt(kalman.processNoiseCov[0,0]) * np.random.randn(2, 1)
state = np.dot(kalman.transitionMatrix, state) + process_noise
cv2.imshow("Kalman", img)
code = cv2.waitKey(100)
if code != -1:
break
if code in [27, ord('q'), ord('Q')]:
break
cv2.destroyWindow("Kalman")
Here is the OpenCV 2.4 documentation on the Kalman filter. Hope this helps.
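One note on porting back to 2.4: as far as I remember, the only thing used above that does not exist in 2.4 is the cv2.LINE_AA constant (added in 3.0); the 2.4 bindings expose the same value as cv2.CV_AA, but treat that name as my assumption. A defensive shim:
# Assumption: cv2.CV_AA is the pre-3.0 name of the anti-aliased line flag (value 16).
LINE_AA = getattr(cv2, 'LINE_AA', getattr(cv2, 'CV_AA', 16))
# ...then pass LINE_AA instead of cv2.LINE_AA in the cv2.line() calls above.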
I know you specifically asked for Python 2.7 code. Still, in case anyone needs it, I'll provide some information on the topic.
A video from my channel on Multi-target tracking: https://www.youtube.com/watch?v=bkn6M4LAoHk
The basics you should know about Kalman filtering and multiple-human tracking:
Camera as a sensor: you need a proper detector (YOLO etc.) that provides a bounding box frame by frame.
Tracking the bounding box:
Track handling is done by the Kalman filtering framework. The eight-dimensional state space contains the bounding box center position, aspect ratio, height, and their respective velocities in image coordinates. A standard Kalman filter with a constant-velocity motion model and a linear observation model is used, where the bounding box coordinates are taken as direct observations of the object state.
Frame-to-frame association: what if there are three people in the scene? Since detectors do not provide any identity for the bounding boxes, you need to match the current frame's bounding boxes to the previous frame's. I suggest searching for the keywords "gating" and "data association".
import numpy as np
import scipy.linalg


class KalmanFilter(object):
"""
A simple Kalman filter for tracking bounding boxes in image space.
The 8-dimensional state space
x, y, a, h, vx, vy, va, vh
contains the bounding box center position (x, y), aspect ratio a, height h,
and their respective velocities.
Object motion follows a constant velocity model. The bounding box location
(x, y, a, h) is taken as direct observation of the state space (linear
observation model).
"""
def __init__(self):
ndim, dt = 4, 1.
# Create Kalman filter model matrices.
self._motion_mat = np.eye(2 * ndim, 2 * ndim)
for i in range(ndim):
self._motion_mat[i, ndim + i] = dt
self._update_mat = np.eye(ndim, 2 * ndim)
# Motion and observation uncertainty are chosen relative to the current
# state estimate. These weights control the amount of uncertainty in
# the model. This is a bit hacky.
self._std_weight_position = 1. / 20
self._std_weight_velocity = 1. / 160
def initiate(self, measurement):
"""Create track from unassociated measurement.
Parameters
----------
measurement : ndarray
Bounding box coordinates (x, y, a, h) with center position (x, y),
aspect ratio a, and height h.
Returns
-------
(ndarray, ndarray)
Returns the mean vector (8 dimensional) and covariance matrix (8x8
dimensional) of the new track. Unobserved velocities are initialized
to 0 mean.
"""
mean_pos = measurement
mean_vel = np.zeros_like(mean_pos)
mean = np.r_[mean_pos, mean_vel]
std = [
2 * self._std_weight_position * measurement[3],
2 * self._std_weight_position * measurement[3],
1e-2,
2 * self._std_weight_position * measurement[3],
10 * self._std_weight_velocity * measurement[3],
10 * self._std_weight_velocity * measurement[3],
1e-5,
10 * self._std_weight_velocity * measurement[3]]
covariance = np.diag(np.square(std))
return mean, covariance
def predict(self, mean, covariance):
"""Run Kalman filter prediction step.
Parameters
----------
mean : ndarray
The 8 dimensional mean vector of the object state at the previous
time step.
covariance : ndarray
The 8x8 dimensional covariance matrix of the object state at the
previous time step.
Returns
-------
(ndarray, ndarray)
Returns the mean vector and covariance matrix of the predicted
state. Unobserved velocities are initialized to 0 mean.
"""
std_pos = [
self._std_weight_position * mean[3],
self._std_weight_position * mean[3],
1e-2,
self._std_weight_position * mean[3]]
std_vel = [
self._std_weight_velocity * mean[3],
self._std_weight_velocity * mean[3],
1e-5,
self._std_weight_velocity * mean[3]]
motion_cov = np.diag(np.square(np.r_[std_pos, std_vel]))
mean = np.dot(self._motion_mat, mean)
covariance = np.linalg.multi_dot((
self._motion_mat, covariance, self._motion_mat.T)) + motion_cov
return mean, covariance
def project(self, mean, covariance):
"""Project state distribution to measurement space.
Parameters
----------
mean : ndarray
The state's mean vector (8 dimensional array).
covariance : ndarray
The state's covariance matrix (8x8 dimensional).
Returns
-------
(ndarray, ndarray)
Returns the projected mean and covariance matrix of the given state
estimate.
"""
std = [
self._std_weight_position * mean[3],
self._std_weight_position * mean[3],
1e-1,
self._std_weight_position * mean[3]]
innovation_cov = np.diag(np.square(std))
mean = np.dot(self._update_mat, mean)
covariance = np.linalg.multi_dot((
self._update_mat, covariance, self._update_mat.T))
return mean, covariance + innovation_cov
def update(self, mean, covariance, measurement):
"""Run Kalman filter correction step.
Parameters
----------
mean : ndarray
The predicted state's mean vector (8 dimensional).
covariance : ndarray
The state's covariance matrix (8x8 dimensional).
measurement : ndarray
The 4 dimensional measurement vector (x, y, a, h), where (x, y)
is the center position, a the aspect ratio, and h the height of the
bounding box.
Returns
-------
(ndarray, ndarray)
Returns the measurement-corrected state distribution.
"""
projected_mean, projected_cov = self.project(mean, covariance)
chol_factor, lower = scipy.linalg.cho_factor(
projected_cov, lower=True, check_finite=False)
kalman_gain = scipy.linalg.cho_solve(
(chol_factor, lower), np.dot(covariance, self._update_mat.T).T,
check_finite=False).T
innovation = measurement - projected_mean
new_mean = mean + np.dot(innovation, kalman_gain.T)
new_covariance = covariance - np.linalg.multi_dot((
kalman_gain, projected_cov, kalman_gain.T))
return new_mean, new_covariance
def gating_distance(self, mean, covariance, measurements,
only_position=False):
"""Compute gating distance between state distribution and measurements.
A suitable distance threshold can be obtained from `chi2inv95`. If
`only_position` is False, the chi-square distribution has 4 degrees of
freedom, otherwise 2.
Parameters
----------
mean : ndarray
Mean vector over the state distribution (8 dimensional).
covariance : ndarray
Covariance of the state distribution (8x8 dimensional).
measurements : ndarray
An Nx4 dimensional matrix of N measurements, each in
format (x, y, a, h) where (x, y) is the bounding box center
position, a the aspect ratio, and h the height.
only_position : Optional[bool]
If True, distance computation is done with respect to the bounding
box center position only.
Returns
-------
ndarray
Returns an array of length N, where the i-th element contains the
squared Mahalanobis distance between (mean, covariance) and
`measurements[i]`.
"""
mean, covariance = self.project(mean, covariance)
if only_position:
mean, covariance = mean[:2], covariance[:2, :2]
measurements = measurements[:, :2]
cholesky_factor = np.linalg.cholesky(covariance)
d = measurements - mean
z = scipy.linalg.solve_triangular(
cholesky_factor, d.T, lower=True, check_finite=False,
overwrite_b=True)
squared_maha = np.sum(z * z, axis=0)
return squared_maha
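A minimal usage sketch for a single track (my own illustration, not part of the original code; the detection numbers are made up):
import numpy as np

kf = KalmanFilter()

# Start a track from a first detection in (x, y, a, h) format.
mean, covariance = kf.initiate(np.array([320., 240., 0.5, 100.]))

# Every frame: predict forward, then correct with the associated detection.
mean, covariance = kf.predict(mean, covariance)
mean, covariance = kf.update(mean, covariance,
                             np.array([324., 238., 0.5, 102.]))

# Gate candidate detections by squared Mahalanobis distance.
candidates = np.array([[324., 238., 0.5, 102.],
                       [600., 400., 0.4,  90.]])
print(kf.gating_distance(mean, covariance, candidates))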
And this is a basic multi-target tracker.
# Imports as in the deep_sort package layout.
import numpy as np

from deep_sort import kalman_filter, linear_assignment, iou_matching
from deep_sort.track import Track


class Tracker:
"""
This is the multi-target tracker.
Parameters
----------
metric : nn_matching.NearestNeighborDistanceMetric
A distance metric for measurement-to-track association.
max_age : int
Maximum number of consecutive misses before a track is deleted.
n_init : int
Number of consecutive detections before the track is confirmed. The
track state is set to `Deleted` if a miss occurs within the first
`n_init` frames.
Attributes
----------
metric : nn_matching.NearestNeighborDistanceMetric
The distance metric used for measurement to track association.
max_age : int
Maximum number of consecutive misses before a track is deleted.
n_init : int
Number of frames that a track remains in initialization phase.
kf : kalman_filter.KalmanFilter
A Kalman filter to filter target trajectories in image space.
tracks : List[Track]
The list of active tracks at the current time step.
"""
def __init__(self, metric, max_iou_distance=0.7, max_age=30, n_init=3):
self.metric = metric
self.max_iou_distance = max_iou_distance
self.max_age = max_age
self.n_init = n_init
self.kf = kalman_filter.KalmanFilter()
self.tracks = []
self._next_id = 1
def predict(self):
"""Propagate track state distributions one time step forward.
This function should be called once every time step, before `update`.
"""
for track in self.tracks:
track.predict(self.kf)
def update(self, detections):
"""Perform measurement update and track management.
Parameters
----------
detections : List[deep_sort.detection.Detection]
A list of detections at the current time step.
"""
# Run matching cascade.
matches, unmatched_tracks, unmatched_detections = \
self._match(detections)
# Update track set.
for track_idx, detection_idx in matches:
self.tracks[track_idx].update(
self.kf, detections[detection_idx])
for track_idx in unmatched_tracks:
self.tracks[track_idx].mark_missed()
for detection_idx in unmatched_detections:
self._initiate_track(detections[detection_idx])
self.tracks = [t for t in self.tracks if not t.is_deleted()]
# Update distance metric.
active_targets = [t.track_id for t in self.tracks if t.is_confirmed()]
features, targets = [], []
for track in self.tracks:
if not track.is_confirmed():
continue
features += track.features
targets += [track.track_id for _ in track.features]
track.features = []
self.metric.partial_fit(
np.asarray(features), np.asarray(targets), active_targets)
def _match(self, detections):
def gated_metric(tracks, dets, track_indices, detection_indices):
features = np.array([dets[i].feature for i in detection_indices])
targets = np.array([tracks[i].track_id for i in track_indices])
cost_matrix = self.metric.distance(features, targets)
cost_matrix = linear_assignment.gate_cost_matrix(
self.kf, cost_matrix, tracks, dets, track_indices,
detection_indices)
return cost_matrix
# Split track set into confirmed and unconfirmed tracks.
confirmed_tracks = [
i for i, t in enumerate(self.tracks) if t.is_confirmed()]
unconfirmed_tracks = [
i for i, t in enumerate(self.tracks) if not t.is_confirmed()]
# Associate confirmed tracks using appearance features.
matches_a, unmatched_tracks_a, unmatched_detections = \
linear_assignment.matching_cascade(
gated_metric, self.metric.matching_threshold, self.max_age,
self.tracks, detections, confirmed_tracks)
# Associate remaining tracks together with unconfirmed tracks using IOU.
iou_track_candidates = unconfirmed_tracks + [
k for k in unmatched_tracks_a if
self.tracks[k].time_since_update == 1]
unmatched_tracks_a = [
k for k in unmatched_tracks_a if
self.tracks[k].time_since_update != 1]
matches_b, unmatched_tracks_b, unmatched_detections = \
linear_assignment.min_cost_matching(
iou_matching.iou_cost, self.max_iou_distance, self.tracks,
detections, iou_track_candidates, unmatched_detections)
matches = matches_a + matches_b
unmatched_tracks = list(set(unmatched_tracks_a + unmatched_tracks_b))
return matches, unmatched_tracks, unmatched_detections
def _initiate_track(self, detection):
mean, covariance = self.kf.initiate(detection.to_xyah())
self.tracks.append(Track(
mean, covariance, self._next_id, self.n_init, self.max_age,
detection.feature))
self._next_id += 1
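For completeness, a hypothetical per-frame driver loop (run_detector, the Detection wrapper, and the video source are assumptions following the deep_sort package conventions, not part of the code above):
from deep_sort import nn_matching

metric = nn_matching.NearestNeighborDistanceMetric("cosine", matching_threshold=0.2)
tracker = Tracker(metric)

for frame in video_frames:              # your video source (assumption)
    detections = run_detector(frame)    # e.g. YOLO boxes wrapped as Detection
    tracker.predict()                   # propagate every track one step
    tracker.update(detections)          # associate, correct, create, delete
    for track in tracker.tracks:
        if track.is_confirmed() and track.time_since_update == 0:
            print(track.track_id, track.mean[:4])   # (x, y, a, h) estimate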
I have been stuck here for some time now. I cannot understand what I am doing wrong in calculating the displacement vectors along the x-axis and y-axis using the Lucas-Kanade method.
I implemented it as given in the Wikipedia article linked above. Here is what I have done:
import cv2
import numpy as np
img_a = cv2.imread("./images/1.png",0)
img_b = cv2.imread("./images/2.png",0)
# Calculate gradient along x and y axis
ix = cv2.Sobel(img_a, cv2.CV_64F, 1, 0, ksize = 3, scale = 1.0/3.0)
iy = cv2.Sobel(img_a, cv2.CV_64F, 0, 1, ksize = 3, scale = 1.0/3.0)
# Calculate temporal difference between the 2 images
it = img_b - img_a
ix = ix.flatten()
iy = iy.flatten()
it = -it.flatten()
A = np.vstack((ix, iy)).T
atai = np.linalg.inv(np.dot(A.T,A))
atb = np.dot(A.T, it)
v = np.dot(np.dot(np.linalg.inv(np.dot(A.T,A)),A.T),it)
print(v)
This code runs without errors, but it prints an array of just 2 values. I expected the v matrix to be the same size as the image. Why does this happen? What am I doing wrong?
PS: I know there are methods directly available in OpenCV, but I want to write this simple algorithm (as given in the Wikipedia link shared above) myself.
To properly compute the Lucas–Kanade optical flow estimate you need to solve the system of two equations for every pixel, using information from its neighborhood, not for the image as a whole.
This is the recipe (notation refers to that used on the Wikipedia page):
Compute the image gradient (A) for the first image (ix, iy in the OP) using any method (Sobel is OK, I prefer Gaussian derivatives; note that it is important to apply the right scaling in Sobel: 1/8).
ix = cv2.Sobel(img_a, cv2.CV_64F, 1, 0, ksize = 3, scale = 1.0/8.0)
iy = cv2.Sobel(img_a, cv2.CV_64F, 0, 1, ksize = 3, scale = 1.0/8.0)
Compute the structure tensor (ATWA): Axx = ix * ix, Axy = ix * iy, Ayy = iy * iy. Each of these three images must be smoothed with a Gaussian filter (this is the windowing). For example,
Axx = cv2.GaussianBlur(ix * ix, (0,0), 5)
Axy = cv2.GaussianBlur(ix * iy, (0,0), 5)
Ayy = cv2.GaussianBlur(iy * iy, (0,0), 5)
These three images together form the structure tensor, which is a 2x2 symmetric matrix at each pixel. For a pixel at (i,j), the matrix is:
| Axx(i,j) Axy(i,j) |
| Axy(i,j) Ayy(i,j) |
Compute the temporal gradient (b) by subtracting the two images (it in the OP).
it = img_b - img_a
Compute ATWb: Abx = ix * it, Aby = iy * it, and smooth these two images with the same Gaussian filter as above.
Abx = cv2.GaussianBlur(ix * it, (0,0), 5)
Aby = cv2.GaussianBlur(iy * it, (0,0), 5)
Compute the inverse of ATWA (a symmetric positive-definite matrix) and multiply by ATWb. Note that this inverse is of the 2x2 matrix at each pixel, not of the images as a whole. You can write this out as a set of simple arithmetic operations on the images Axx, Axy, Ayy, Abx and Aby.
The inverse of the matrix ATWA is given by:
| Ayy -Axy |
| -Axy Axx | / ( Axx*Ayy - Axy*Axy )
so you can write the solution as
norm = Axx*Ayy - Axy*Axy
vx = ( Ayy * Abx - Axy * Aby ) / norm
vy = ( Axx * Aby - Axy * Abx ) / norm
If the image is natural, it will have at least a tiny bit of noise, and norm will not have zeros. But for artificial images norm could have zeros, meaning you can't divide by it. Simply adding a small value to it will avoid division by zero errors: norm += 1e-6.
The size of the Gaussian filter is chosen as a compromise between precision and allowed motion speed: a larger filter will yield less precise results, but will work with larger shifts between images.
Typically, vx and vy are only evaluated where the two eigenvalues of the matrix ATWA are sufficiently large (if at least one of them is small, the result is inaccurate or possibly wrong).
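Assembling the whole recipe in plain NumPy (a sketch of my own consolidating the steps above, with the same sigma-5 Gaussian window; depending on the sign convention you may want it = -(img_b - img_a), as in your original code):
import cv2
import numpy as np

img_a = cv2.imread("./images/1.png", 0).astype(np.float64)
img_b = cv2.imread("./images/2.png", 0).astype(np.float64)

# Image gradient of the first image (note the 1/8 Sobel scaling)
ix = cv2.Sobel(img_a, cv2.CV_64F, 1, 0, ksize=3, scale=1.0/8.0)
iy = cv2.Sobel(img_a, cv2.CV_64F, 0, 1, ksize=3, scale=1.0/8.0)
it = img_b - img_a                        # temporal gradient

# Windowed structure tensor (ATWA) and right-hand side (ATWb)
Axx = cv2.GaussianBlur(ix * ix, (0, 0), 5)
Axy = cv2.GaussianBlur(ix * iy, (0, 0), 5)
Ayy = cv2.GaussianBlur(iy * iy, (0, 0), 5)
Abx = cv2.GaussianBlur(ix * it, (0, 0), 5)
Aby = cv2.GaussianBlur(iy * it, (0, 0), 5)

# Closed-form per-pixel 2x2 solve
norm = Axx * Ayy - Axy * Axy + 1e-6       # guard against division by zero
vx = (Ayy * Abx - Axy * Aby) / norm
vy = (Axx * Aby - Axy * Abx) / norm       # vx, vy are full-size flow maps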
Using DIPlib (disclosure: I'm an author) this is all very easy because it supports images with a matrix at each pixel. You would do this as follows:
import diplib as dip
img_a = dip.ImageRead("./images/1.png")
img_b = dip.ImageRead("./images/2.png")
A = dip.Gradient(img_a, [1.0])
b = img_b - img_a
ATA = dip.Gauss(A * dip.Transpose(A), [5.0])
ATb = dip.Gauss(A * b, [5.0])
v = dip.Inverse(ATA) * ATb
I am trying to do texture analysis on satellite imagery using the GLCM algorithm. The scikit-image documentation is very helpful on that, but the GLCM calculation needs a window looping over the image, and this is too slow in Python. I found many posts on Stack Overflow about sliding windows, but the computation takes forever. I have an example shown below; it works, but takes forever. I guess this must be a naive way of doing it:
import numpy as np
from skimage.feature import greycomatrix, greycoprops

image = np.pad(image, int(win/2), mode='reflect')
row, cols = image.shape
feature_map = np.zeros((row, cols))
for m in xrange(0, row):
for n in xrange(0, cols):
window = image[m:m+win, n:n+win]
glcm = greycomatrix(window, d, theta, levels)
contrast = greycoprops(glcm, 'contrast')
feature_map[m,n] = contrast
I came across the skimage.util.view_as_windows method, which might be a good solution for me. My problem is that when I try to calculate the GLCM, I get an error that says:
ValueError: The parameter image must be a 2-dimensional array
This is because view_as_windows produces a 4-dimensional array, while greycomatrix accepts only 2D arrays. Here is my attempt:
from skimage.util import view_as_windows

win_w = 40
win_h=40
features = np.zeros(image.shape, dtype='uint8')
target = features[win_h//2:-win_h//2+1, win_w//2:-win_w//2+1]
windowed = view_as_windows(image, (win_h, win_w))
GLCM = greycomatrix(windowed, [1], [0, np.pi/4, np.pi/2, 3*np.pi/4], symmetric=True, normed=True)
haralick = greycoprops(GLCM, 'ASM')
Does anyone have an idea on how I can calculate the GLCM using skimage.util.view_as_windows method?
The feature extraction you are trying to perform is a computationally intensive task. I have sped up your method by computing the co-occurrence map only once for the whole image, rather than computing it over and over at overlapping positions of the sliding window.
The co-occurrence map is a stack of images of the same size as the original image, in which, for each pixel, intensity levels are replaced by integer numbers that encode the co-occurrence of two intensities, namely Ii at that pixel and Ij at an offset pixel. The co-occurrence map has as many layers as there are considered offsets (i.e. all the possible distance-angle pairs). By retaining the co-occurrence map, you don't need to compute the GLCM at each position of the sliding window from scratch: you can reuse the previously computed co-occurrence maps to obtain the adjacency matrices (the GLCMs) for each distance-angle pair. This approach yields a significant speed gain.
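As a tiny illustration of the encoding used below: with levels=256, the intensity pair (Ii, Ij) = (3, 7) at a given pixel is stored as the single code 3*256 + 7 = 775, which decodes back to (775 // 256, 775 % 256) = (3, 7).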
The solution I came up with relies on the functions below:
import numpy as np
from skimage import io
from scipy import stats
from skimage.feature import greycoprops
def offset(length, angle):
"""Return the offset in pixels for a given length and angle"""
dv = length * np.sign(-np.sin(angle)).astype(np.int32)
dh = length * np.sign(np.cos(angle)).astype(np.int32)
return dv, dh
def crop(img, center, win):
"""Return a square crop of img centered at center (side = 2*win + 1)"""
row, col = center
side = 2*win + 1
first_row = row - win
first_col = col - win
last_row = first_row + side
last_col = first_col + side
return img[first_row: last_row, first_col: last_col]
def cooc_maps(img, center, win, d=[1], theta=[0], levels=256):
"""
Return a set of co-occurrence maps for different d and theta in a square
crop centered at center (side = 2*w + 1)
"""
shape = (2*win + 1, 2*win + 1, len(d), len(theta))
cooc = np.zeros(shape=shape, dtype=np.int32)
row, col = center
Ii = crop(img, (row, col), win)
for d_index, length in enumerate(d):
for a_index, angle in enumerate(theta):
dv, dh = offset(length, angle)
Ij = crop(img, center=(row + dv, col + dh), win=win)
cooc[:, :, d_index, a_index] = encode_cooccurrence(Ii, Ij, levels)
return cooc
def encode_cooccurrence(x, y, levels=256):
"""Return the code corresponding to co-occurrence of intensities x and y"""
return x*levels + y
def decode_cooccurrence(code, levels=256):
"""Return the intensities x, y corresponding to code"""
return code//levels, np.mod(code, levels)
def compute_glcms(cooccurrence_maps, levels=256):
"""Compute the cooccurrence frequencies of the cooccurrence maps"""
Nr, Na = cooccurrence_maps.shape[2:]
glcms = np.zeros(shape=(levels, levels, Nr, Na), dtype=np.float64)
for r in range(Nr):
for a in range(Na):
table = stats.itemfreq(cooccurrence_maps[:, :, r, a])
codes = table[:, 0]
freqs = table[:, 1]/float(table[:, 1].sum())
i, j = decode_cooccurrence(codes, levels=levels)
glcms[i, j, r, a] = freqs
return glcms
def compute_props(glcms, props=('contrast',)):
"""Return a feature vector corresponding to a set of GLCM"""
Nr, Na = glcms.shape[2:]
features = np.zeros(shape=(Nr, Na, len(props)))
for index, prop_name in enumerate(props):
features[:, :, index] = greycoprops(glcms, prop_name)
return features.ravel()
def haralick_features(img, win, d, theta, levels, props):
"""Return a map of Haralick features (one feature vector per pixel)"""
rows, cols = img.shape
margin = win + max(d)
arr = np.pad(img, margin, mode='reflect')
n_features = len(d) * len(theta) * len(props)
feature_map = np.zeros(shape=(rows, cols, n_features), dtype=np.float64)
for m in xrange(rows):
for n in xrange(cols):
coocs = cooc_maps(arr, (m + margin, n + margin), win, d, theta, levels)
glcms = compute_glcms(coocs, levels)
feature_map[m, n, :] = compute_props(glcms, props)
return feature_map
DEMO
The following results correspond to a (250, 200)-pixel crop from a Landsat image. I considered two distances, four angles, and two GLCM properties, which results in a 16-dimensional feature vector for each pixel. Notice that the sliding window is square and its side is 2*win + 1 pixels (in this test win = 19 was used). This sample run took around 6 minutes, which is fairly shorter than "forever" ;-)
In [331]: img.shape
Out[331]: (250L, 200L)
In [332]: img.dtype
Out[332]: dtype('uint8')
In [333]: d = (1, 2)
In [334]: theta = (0, np.pi/4, np.pi/2, 3*np.pi/4)
In [335]: props = ('contrast', 'homogeneity')
In [336]: levels = 256
In [337]: win = 19
In [338]: %time feature_map = haralick_features(img, win, d, theta, levels, props)
Wall time: 5min 53s
In [339]: feature_map.shape
Out[339]: (250L, 200L, 16L)
In [340]: feature_map[0, 0, :]
Out[340]:
array([ 10.3314, 0.3477, 25.1499, 0.2738, 25.1499, 0.2738,
25.1499, 0.2738, 23.5043, 0.2755, 43.5523, 0.1882,
43.5523, 0.1882, 43.5523, 0.1882])
In [341]: io.imshow(img)
Out[341]: <matplotlib.image.AxesImage at 0xce4d160>
This is more a question on the theory of Gaussian filters than a specific coding question.
I've got an implementation of a 2D DoG (difference of Gaussians) filter in Python. I want to make noise masks in different spatial frequency bands, e.g. 1-5 cpd. To do this I first create a white noise array, and then I will apply the DoG filters to bandpass the noise over different spatial frequency ranges.
Is there a way to explicitly define the bandwidth of a difference-of-Gaussians filter from the parameters of each contributing Gaussian filter?
(Bonus questions: would it be possible to take a Fourier transform of each of these Gaussians and view this as a spectrum of their individual bandwidths, and then of the DoG's bandwidth? What would the units be in Fourier space? How could I convert this into a spatial frequency scale? Sorry, lots of questions.)
Many thanks,
NOTE: I use the conv2 function below, rather than the built-in Python 2D convolutions, for speed (needed for other applications).
import numpy as np
import math
import matplotlib.pylab as plt
from scipy.ndimage.filters import convolve
def Gaussian2D(GCenter, Gamp, Ggamma,Gconst): #new_theta > 0.4:
"""
Produces a 2D Gaussian pulse *EDITED BY WMBM
Parameters
----------
GCenter : int
Centre point of Gaussian pulse
Gamp : int
Amplitude of Gaussian pulse
Ggamma : int
FWHM of Gaussian pulse
Gconst : float
Unknown parameter of the density function
Returns
----------
GKernel : array_like
Gaussian kernel
"""
new_theta = math.sqrt(Gconst**-1)*Ggamma
SizeHalf = np.int(math.floor(9*new_theta))
[y, x] = np.meshgrid(np.arange(-SizeHalf,SizeHalf+1), np.arange(-SizeHalf,SizeHalf+1))
part1=(x-GCenter[0])**2+(y-GCenter[1])**2
GKernel = Gamp*np.exp(-0.5*Ggamma**-2*Gconst*part1)
return GKernel
def conv2(x,y,mode='same'):
"""
Emulate the Matlab function conv2 from Mathworks.
Usage:
z = conv2(x,y,mode='same')
"""
if not(mode == 'same'):
raise Exception("Mode not supported")
# Add singleton dimensions
if (len(x.shape) < len(y.shape)):
dim = x.shape
for i in range(len(x.shape),len(y.shape)):
dim = (1,) + dim
x = x.reshape(dim)
elif (len(y.shape) < len(x.shape)):
dim = y.shape
for i in range(len(y.shape),len(x.shape)):
dim = (1,) + dim
y = y.reshape(dim)
origin = ()
# Apparently, the origin must be set in a special way to reproduce
# the results of scipy.signal.convolve and Matlab
for i in range(len(x.shape)):
if ( (x.shape[i] - y.shape[i]) % 2 == 0 and
x.shape[i] > 1 and
y.shape[i] > 1):
origin = origin + (-1,)
else:
origin = origin + (0,)
z = convolve(x,y, mode='constant', origin=origin)
return z
# Create white noise array
N=50 # Noise array dimension
A=10 # Noise amplitude
noise = np.random.rand(N,N)*A
# Gaussian filter parameters
GCenter=[0,0]
Gconst=1
# First gaussian filter
cutoff_f1 = 0.05 # < pi/10
gamma1 = 1/(2*np.pi*cutoff_f1) #minimum gamma == 0.5
Gamp1 = 1/(2*np.pi*gamma1)
filtr1 = Gaussian2D([0,0],Gamp1,gamma1,Gconst)
# Second gaussian filter
cutoff_f2 = 0.04 # < pi/10
gamma2 = 1/(2*np.pi*cutoff_f2) #minimum gamma == 0.5
Gamp2 = 1/(2*np.pi*gamma2)
filtr2 = Gaussian2D([0,0],Gamp2,gamma2,Gconst)
# Convolve filters with noise
noise_filtr1 = conv2(noise, filtr1, mode='same')
noise_filtr2 = conv2(noise, filtr2, mode='same')
# Difference of Gaussian Output
noise_out = noise_filtr1 - noise_filtr2
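For the bonus question, here is how I would try to inspect the DoG's spectrum numerically (a sketch of my own; the frequency axis is in cycles per pixel, so converting to cycles per degree would need the display's pixels-per-degree, which I leave out):
# Pad both kernels to a common (odd) size before subtracting.
size = max(filtr1.shape[0], filtr2.shape[0])

def pad_to(kernel, size):
    pad = (size - kernel.shape[0]) // 2    # both kernel sides are odd
    return np.pad(kernel, pad, mode='constant')

dog = pad_to(filtr1, size) - pad_to(filtr2, size)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(dog)))
freqs = np.fft.fftshift(np.fft.fftfreq(size))   # cycles per pixel

plt.plot(freqs, spectrum[size // 2, :])          # slice through zero frequency
plt.xlabel('spatial frequency (cycles/pixel)')
plt.ylabel('|DoG(f)|')
plt.show()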
I want to use a Hawkes process to model some data. I could not find whether PyMC supports Hawkes processes. More specifically, I want an observed variable following a Hawkes process, and to learn a posterior over its parameters.
If it is not built in, could I define it in PyMC in some way, e.g. with @deterministic?
It's been quite a long time since your question, but I've worked it out in PyMC today, so I thought I'd share the gist of my implementation for other people who might run into the same problem. We're going to infer the parameters λ and α of a Hawkes process. I'm not going to cover the temporal scale parameter β; I'll leave that as an exercise for the reader.
First, let's generate some data:
import numpy as np

def hawkes_intensity(mu, alpha, points, t):
p = np.array(points)
p = p[p <= t]
p = np.exp(p - t)
return mu + alpha * np.sum(p)
def simulate_hawkes(mu, alpha, window):
t = 0
points = []
lambdas = []
while t < window:
m = hawkes_intensity(mu, alpha, points, t)
s = np.random.exponential(scale=1/m)
ratio = hawkes_intensity(mu, alpha, points, t + s)
t = t + s
if t < window:
points.append(t)
lambdas.append(ratio)
else:
break
points = np.sort(np.array(points, dtype=np.float32))
lambdas = np.array(lambdas, dtype=np.float32)
return points, lambdas
# parameters
window = 1000
mu = 8
alpha = 0.25
points, lambdas = simulate_hawkes(mu, alpha, window)
num_points = len(points)
We just generated some temporal points using some functions that I adapted from this notebook: https://nbviewer.jupyter.org/github/MatthewDaws/PointProcesses/blob/master/Temporal%20points%20processes.ipynb
Now, the trick is to create a matrix of size (num_points, num_points) that contains the temporal distance of the ith point from all the other points. So the (i, j) entry of the matrix is the time interval separating the ith point from the jth. This matrix will be used to compute the sum of exponentials of the Hawkes process, i.e. the self-exciting part. The way this matrix and the sum of exponentials are built is a bit tricky; I'd recommend checking each line yourself to see what it does.
tile = np.tile(points, num_points).reshape(num_points, num_points)
tile = np.clip(points[:, None] - tile, 0, np.inf)
tile = np.tril(np.exp(-tile), k=-1)
Σ = np.sum(tile, axis=1)[:-1] # this is our self-exciting sum term
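For example, with points = [1., 2., 4.] the strictly lower triangle of the exponentiated matrix holds exp(-(p_i - p_j)) for j < i, so the row sums are [0, e^-1, e^-2 + e^-3]; dropping the last entry leaves Σ = [0, e^-1] ≈ [0, 0.368], one excitation term per inter-arrival interval.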
We have the points, and we have the sum of the excitation terms for each of them.
The duration between two consecutive events of a Hawkes process follows an exponential distribution with parameter λ = λ0 + ∑ excitation. This is what we are going to model, but first we have to compute the durations between consecutive points of our generated data.
interval = points[1:] - points[:-1]
We're now ready for inference:
import pymc3 as pm
import matplotlib.pyplot as plt

with pm.Model() as model:
λ = pm.Exponential("λ", 1)
α = pm.Uniform("α", 0, 1)
lam = pm.Deterministic("lam", λ + α * Σ)
interarrival = pm.Exponential(
"interarrival", lam, observed=interval)
trace = pm.sample(2000, tune=4000)
pm.plot_posterior(trace, var_names=["λ", "α"])
plt.show()
print(np.mean(trace["λ"]))
print(np.mean(trace["α"]))
7.829
0.284
Note: the tile matrix can become quite large if you have many data points.
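For a sense of scale: 10,000 points already give a 10,000 x 10,000 float32 matrix, which is about 400 MB on its own, before any intermediate copies.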
I was working on 3D reconstruction and distance measurement using OpenCV and Python. I generated the disparity map for the left camera and then used this formula to get the distance:
D = f*b/disp
where f is the focal length, b is the baseline (the distance between the two cameras), and disp is the disparity map.
My questions are:
The numbers that I get, are they supposed to be the distance of each point in the picture?
What is the max distance that I can get with this method (for example, in my project the max number I get is 110)?
img_L = cv2.pyrDown( cv2.imread(Li) )
img_R = cv2.pyrDown( cv2.imread(Ri) )
'''h, w = img_L.shape[:2]
diff=(6,6,6)
mask=np.zeros((h+2,w+2),np.uint8)'''
window_size = 3
min_disp = 16
num_disp = 112-min_disp
stereo = cv2.StereoSGBM(minDisparity = min_disp,
numDisparities = num_disp,
SADWindowSize = window_size,
uniquenessRatio = 10,
speckleWindowSize = 100,
speckleRange = 32,
disp12MaxDiff = 1,
P1 = 8*3*window_size**2,
P2 = 32*3*window_size**2,
fullDP = False
)
print "computing disparity..."
disp = stereo.compute(img_L, img_R).astype(np.float32) / 16.0
print "generating 3d point cloud..."
h, w = img_L.shape[:2]
f = 0.8*w # guess for focal length
points = cv2.reprojectImageTo3D(disp, Mat)
colors = cv2.cvtColor(img_L, cv2.COLOR_BGR2RGB)
mask = disp > disp.min()
cv2.imshow('left', img_L)
disparity=(disp-min_disp)/num_disp
cv2.imshow('disparity',disparity )
b=6.50
D=b*f/disp
cv2.waitKey()
cv2.destroyAllWindows()
return D
The values D that you get using this formula are the depths of each point for which you provided a disparity.
The depth and the distance are two slightly different things. If you use the standard coordinate system for a camera (i.e. the Z axis along the optical axis, the X and Y axes along the image x and y axes), then a 3D point M = (X, Y, Z) has a distance of sqrt(X² + Y² + Z²) from the optical center and a depth of Z. The D in the formula is the depth, not the distance.
If you want to retrieve the 3D point M = (X, Y, Z) from the depth value, you need to know the camera matrix K: M = D * inv(K) * [u; v; 1], where (u, v) are the image coordinates of the point.
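A small NumPy sketch of that back-projection (the intrinsics in K and the pixel values are made-up numbers for illustration):
import numpy as np

# Made-up intrinsics: fx = fy = 700 px, principal point at (320, 240).
K = np.array([[700.,   0., 320.],
              [  0., 700., 240.],
              [  0.,   0.,   1.]])

u, v, D = 400., 260., 5.0                      # pixel (u, v) with depth D
M = D * np.linalg.inv(K).dot(np.array([u, v, 1.]))
X, Y, Z = M                                    # Z == D by construction
distance = np.sqrt(X**2 + Y**2 + Z**2)         # distance from optical center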
Edit: Concerning your second question, the maximum depth that you can get with this method is linked to the minimum disparity (not the maximum, since disp is in the denominator). And since disparity estimation is quantized (computed pixel by pixel), you can't estimate depth up to infinity.
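As a worked example with numbers from your code (the image width is my assumption): with b = 6.50 and f = 0.8*w, a 320-pixel-wide image gives f = 256, so the smallest valid disparity min_disp = 16 caps the depth at 256 × 6.5 / 16 = 104, in the same ballpark as the maximum of about 110 you observed; the exact cap depends on the true minimum disparity in your map.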