I am a complete beginner, and I am trying to obtain a real depth map from a left and right image pair. I've used OpenCV to get the disparity map via block matching, as you can see in the code below.
import cv2
import cv2.cv as cv
import sys
import numpy as np
def getDisparity(imgLeft, imgRight, method="BM"):
gray_left = cv2.cvtColor(imgLeft, cv.CV_BGR2GRAY)
gray_right = cv2.cvtColor(imgRight, cv.CV_BGR2GRAY)
print gray_left.shape
c, r = gray_left.shape
if method == "BM":
sbm = cv.CreateStereoBMState()
disparity = cv.CreateMat(c, r, cv.CV_32F)
sbm.SADWindowSize = 11
sbm.preFilterType = 1
sbm.preFilterSize = 5
sbm.preFilterCap = 61
sbm.minDisparity = -50
sbm.numberOfDisparities = 112
sbm.textureThreshold = 507
sbm.uniquenessRatio= 0
sbm.speckleRange = 8
sbm.speckleWindowSize = 0
gray_left = cv.fromarray(gray_left)
gray_right = cv.fromarray(gray_right)
cv.FindStereoCorrespondenceBM(gray_left, gray_right, disparity, sbm)
disparity_visual = cv.CreateMat(c, r, cv.CV_8U)
cv.Normalize(disparity, disparity_visual, 0, 255, cv.CV_MINMAX)
disparity_visual = np.array(disparity_visual)
elif method == "SGBM":
sbm = cv2.StereoSGBM()
sbm.SADWindowSize = 9;
sbm.numberOfDisparities = 0;
sbm.preFilterCap = 63;
sbm.minDisparity = -21;
sbm.uniquenessRatio = 7;
sbm.speckleWindowSize = 0;
sbm.speckleRange = 8;
sbm.disp12MaxDiff = 1;
sbm.fullDP = False;
disparity = sbm.compute(gray_left, gray_right)
disparity_visual = cv2.normalize(disparity, alpha=0, beta=255, norm_type=cv2.cv.CV_MINMAX, dtype=cv2.cv.CV_8U)
return disparity_visual
imgLeft = cv2.imread('1.png')
imgRight = cv2.imread('2.png')
try:
    method = "BM"
except IndexError:
    method = "BM"
disparity = getDisparity(imgLeft, imgRight, method)
cv2.imshow("disparity", disparity)
#cv2.imshow("left", imgLeft)
#cv2.imshow("right", imgRight)
cv2.waitKey(0)
My question is: what is the easiest way to obtain a real depth map (distance) from disparity using Python?
In order to calculate depth for stereo, you need to know the translation and rotation between the cameras. If you have that, you can take each disparity value and use triangulation to calculate the depth for that 3D point.
I recommend reading http://www.robots.ox.ac.uk/~vgg/hzbook/ for a detailed explanation.
Assuming your cameras are calibrated and the images are rectified, you can use the formula provided by this tutorial, which is:
disparity = baseline * focal_length / depth
So,
depth = baseline * focal_length / disparity
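As a minimal Python sketch (assuming you have the baseline in metres and the focal length in pixels from your calibration; the numbers below are placeholders, not values from the question), the conversion could look like this:

import numpy as np

baseline = 0.1        # distance between the two cameras, in metres (placeholder)
focal_length = 700.0  # focal length, in pixels (placeholder)

disp = disparity.astype(np.float32)
valid = disp > 0                      # avoid division by zero
depth = np.zeros_like(disp)
depth[valid] = baseline * focal_length / disp[valid]

Note that this needs the raw disparity values, not the 0-255 normalized image used for display, and StereoSGBM.compute returns fixed-point disparities scaled by 16, so divide those by 16.0 first. If you have the full calibration, cv2.reprojectImageTo3D with the Q matrix returned by cv2.stereoRectify gives you the 3D coordinates (including depth) directly.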
I am trying to horizontally stretch an image in a very specific way. Each x prime coordinate should follow a tangent path with respect to the original x coordinate. I believe there are two ways to do this:
1. Invert the tangent function and map it normally
2. Map the tangent function and then invert the mapping
Using this answer for map inversion, I'm trying to figure out why the two images are not the same. I know that the first method gives me the correct image that I'm looking for, so why doesn't the second method work? Is it because of the "limited precision" that @ChristophRackwitz commented on in the answer?
import cv2
import glob
import numpy as np
import math
A = -1010
B = -3.931
C = 5.258
D = 978.3
M = -193.8
N = 1740
def get_tan_func_value(x):
    return A * math.tan((((x-N)/M)+B)/C) + D

def get_inverse_tan_func_value(x):
    return M * (C*math.atan((x-D)/A) - B) + N

# answer from linked post
def invert_map(F, shape):
    I = np.zeros_like(F)
    I[:,:,1], I[:,:,0] = np.indices(shape)
    P = np.copy(I)
    for i in range(10):
        P += I - cv2.remap(F, P, None, interpolation=cv2.INTER_LINEAR)
    return P
# import image
images = glob.glob('*.jpg')
img = cv2.imread(images[0])
h, w = img.shape[:2]
map_x_tan = np.zeros((img.shape[0], img.shape[1]), dtype=np.float32)
map_x_inverse_tan = np.zeros((img.shape[0], img.shape[1]), dtype=np.float32)
map_y = np.zeros((img.shape[0], img.shape[1]), dtype=np.float32)
# x tan function map
for i in range(map_x_tan.shape[0]):
    map_x_tan[i,:] = [get_tan_func_value(x) for x in range(map_x_tan.shape[1])]
# x inverse tan function map
for i in range(map_x_inverse_tan.shape[0]):
    map_x_inverse_tan[i,:] = [get_inverse_tan_func_value(x) for x in range(map_x_inverse_tan.shape[1])]
# default y map
for j in range(map_y.shape[1]):
    map_y[:,j] = [y for y in range(map_y.shape[0])]
# convert x tan map to 2 channel (x,y) map
(xymap_tan, _) = cv2.convertMaps(map1=map_x_tan, map2=map_y, dstmap1type=cv2.CV_32FC2)
# invert the 2 channel x tan map
xymap_inverted = invert_map(xymap_tan, (h,w))
# remap and write the target image (inverse tan function with normal map)
target = cv2.remap(img, map_x_inverse_tan, map_y, cv2.INTER_LINEAR)
cv2.imwrite("target.jpg", target)
# remap and write the attempted image (normal tan function with inverted map)
attempt = cv2.remap(img, xymap_inverted, None, cv2.INTER_LINEAR)
cv2.imwrite("attempt.jpg", attempt)
Method 1: Target Image
Method 2: Attempt Image
The results show that the attempt (normal tan function with inverted map) has less stretching near the edges of the image than expected. Almost everywhere else the two images are identical; only the edges differ. I did not post the original picture to save space.
I've played around with that invert_map procedure. It seems slightly susceptible to oscillation.
Use this instead:
def invert_map(F):
    (h, w) = F.shape[:2]  # (h, w, 2), "xymap"
    I = np.zeros_like(F)
    I[:,:,1], I[:,:,0] = np.indices((h,w))  # identity map
    P = np.copy(I)
    for i in range(10):
        correction = I - cv2.remap(F, P, None, interpolation=cv2.INTER_LINEAR)
        P += correction * 0.5
    return P
I simply damped the correction by 0.5, which makes the fixed-point iteration tamer and also converges a lot faster.
In my experiments with your tan map, I've found that 5-10 iterations are already good enough, and further iterations bring no additional progress.
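Note that this version takes only the map and reads the shape from it, so the call from the question changes slightly; for example:

xymap_inverted = invert_map(xymap_tan)
attempt = cv2.remap(img, xymap_inverted, None, interpolation=cv2.INTER_LINEAR)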
Entire notebook of my explorations: https://gist.github.com/crackwitz/67f76f8a9eff21476b080c06d20660d0
Feature request: https://github.com/opencv/opencv/issues/22120
I'm trying to port some MATLAB shape-analysis routines to Python, and I have an issue with cv2.findContours, which can't separate two contours when the difference is on the scale of one pixel. MATLAB's bwboundaries, by contrast, captures the difference between the contours correctly. Any idea how I could do this?
Here's a working example:
in Python
import numpy as np
import cv2
import matplotlib.pyplot as plt
plt.interactive(True)
X,Y = np.meshgrid(np.linspace(-1,1,100),np.linspace(-1,1,100))
deltaX = np.min(np.diff(X[0,:]))
mask = np.zeros(X.shape)
I1 = X**2/0.75**2+Y**2/0.5**2 < 1
I2 = X**2/(0.75-deltaX)**2+Y**2/0.4**2 < 1
mask[I1] = 1
mask[I2]=0
plt.pcolormesh(mask)
contours, hierarchy = cv2.findContours(mask.astype('uint8'),cv2.RETR_CCOMP,cv2.CHAIN_APPROX_NONE)
[plt.plot(contours[r][:,0,0],contours[r][:,0,1],'m') for r in range(0,len(contours))]
in Matlab:
[X,Y] = meshgrid(linspace(-1,1,100),linspace(-1,1,100));
deltaX = min(diff(X(1,:)));
mask = zeros(size(X));
I1 = X.^2/0.75^2+Y.^2/0.5^2 < 1;
I2 = X.^2/(0.75-deltaX)^2+Y.^2/0.4^2 < 1;
mask(I1) = 1;
mask(I2)=0;
pcolor(mask);shading flat
[Boun,Label,nobj,Adj] = bwboundaries(mask);
hold on
for i=1:length(Boun)
plot(Boun{i}(:,2),Boun{i}(:,1),'m','linewidth',2);
end
giving, respectively:
I am trying to find an equivalent Python function for MATLAB's imflatfield function.
I have a section of code that modifies an image and I want to convert it to Python.
Here is the MATLAB code:
I = imread('lcs2.png');
out2 = imflatfield(I,30);
shadow_lab = rgb2lab(out2);
max_luminosity = 100;
L = shadow_lab(:,:,1)/max_luminosity;
shadow_adapthisteq = shadow_lab;
shadow_adapthisteq(:,:,1) = adapthisteq(L)*max_luminosity;
shadow_adapthisteq = lab2rgb(shadow_adapthisteq);
imwrite(shadow_adapthisteq,'lcs2_adap.jpg');
Original image
Final results from MATLAB
Since MATLAB releases the source code of imflatfield, it is not so difficult to implement it in Python using OpenCV.
Note: the implementation is specific to the uint8 type and color images (BGR format in Python).
Here is a MATLAB "manual" implementation of imflatfield:
function B = my_imflatfield(I, sigma)
A = im2single(I);
Ihsv = rgb2hsv(A);
A = Ihsv(:,:,3);
filterSize = 2*ceil(2*sigma)+1;
shading = imgaussfilt(A, sigma, 'Padding', 'symmetric', 'FilterSize', filterSize); % Calculate shading
meanVal = mean(A(:),'omitnan');
% Limit minimum to 1e-6 instead of testing using isnan and isinf after division.
shading = max(shading, 1e-6);
B = A*meanVal./shading;
%B(isnan(B)) = 0; % sometimes instances of 0/0 happen, making NaN values.
%B(isinf(B)) = 0; % sometimes values are divided by 0, making Inf values.
% Put processed V channel back into HSV image, convert to RGB
Ihsv(:,:,3) = B;
B = hsv2rgb(Ihsv);
B = im2uint8(B);
end
Here is an equivalent Python implementation (using OpenCV):
import cv2
import numpy as np
def imflatfield(I, sigma):
    """Python equivalent imflatfield implementation
    I format must be BGR and type of I must be uint8"""
    A = I.astype(np.float32) / 255  # A = im2single(I);
    Ihsv = cv2.cvtColor(A, cv2.COLOR_BGR2HSV)  # Ihsv = rgb2hsv(A);
    A = Ihsv[:, :, 2]  # A = Ihsv(:,:,3);
    filterSize = int(2*np.ceil(2*sigma) + 1)  # filterSize = 2*ceil(2*sigma)+1;
    # shading = imgaussfilt(A, sigma, 'Padding', 'symmetric', 'FilterSize', filterSize); % Calculate shading
    shading = cv2.GaussianBlur(A, (filterSize, filterSize), sigma, borderType=cv2.BORDER_REFLECT)
    meanVal = np.mean(A)  # meanVal = mean(A(:),'omitnan')
    # Limit minimum to 1e-6 instead of testing using isnan and isinf after division.
    shading = np.maximum(shading, 1e-6)  # shading = max(shading, 1e-6);
    B = A*meanVal / shading  # B = A*meanVal./shading;
    # Put processed V channel back into HSV image, convert to RGB
    Ihsv[:, :, 2] = B  # Ihsv(:,:,3) = B;
    B = cv2.cvtColor(Ihsv, cv2.COLOR_HSV2BGR)  # B = hsv2rgb(Ihsv);
    B = np.round(np.clip(B*255, 0, 255)).astype(np.uint8)  # B = im2uint8(B);
    return B
# Read input image
I = cv2.imread('destroyer.jpg')
sigma = 30
out2 = imflatfield(I, sigma)
cv2.imwrite('imflatfield_py_destroyer.png', out2)
The above implementation reads the input image and writes the result to an image file.
Comparing results using MATLAB (for testing):
I = imread('destroyer.jpg');
out1 = imflatfield(I, 30);
out2 = my_imflatfield(I, 30);
% Compare results of imflatfield and my_imflatfield:
max(max(max(imabsdiff(out1, out2))))
% figure;imshow(out2)
imwrite(out2, 'imflatfield_destroyer.png');
% Read Python result
out3 = imread('imflatfield_py_destroyer.png');
% Compare results of imflatfield and Python imflatfield:
max(max(max(imabsdiff(out1, out3))))
The maximum absolute difference between MATLAB's imflatfield and my_imflatfield is 0.
The maximum absolute difference between MATLAB's imflatfield and the Python imflatfield is 1.
Converting the complete MATLAB code to Python:
sigma = 30
out2 = imflatfield(I, sigma)
# Convert out2 to float32 before converting to LAB
out2 = out2.astype(np.float32) / 255 # out2 = im2single(out2);
shadow_lab = cv2.cvtColor(out2, cv2.COLOR_BGR2Lab) # shadow_lab = rgb2lab(out2);
max_luminosity = 100
L = shadow_lab[:, :, 0] / max_luminosity # L = shadow_lab(:,:,1)/max_luminosity;
shadow_adapthisteq = shadow_lab.copy() # shadow_adapthisteq = shadow_lab;
# shadow_adapthisteq(:,:,1) = adapthisteq(L)*max_luminosity;
clahe = cv2.createCLAHE(clipLimit=20, tileGridSize=(8,8))
cl1 = clahe.apply((L*(2**16-1)).astype(np.uint16)) # CLAHE in OpenCV does not support float32 (convert to uint16 and back).
shadow_adapthisteq[:, :, 0] = cl1.astype(np.float32) * max_luminosity / (2**16-1)
shadow_adapthisteq = cv2.cvtColor(shadow_adapthisteq, cv2.COLOR_Lab2BGR) # shadow_adapthisteq = lab2rgb(shadow_adapthisteq);
# Convert shadow_adapthisteq to uint8
shadow_adapthisteq = np.round(np.clip(shadow_adapthisteq*255, 0, 255)).astype(np.uint8) # B = im2uint8(B);
cv2.imwrite('shadow_adapthisteq.jpg', shadow_adapthisteq) # imwrite(shadow_adapthisteq,'lcs2_adap.jpg');
The result is not going to be identical to MATLAB's, because adapthisteq in MATLAB is not identical to CLAHE in OpenCV.
Result:
I found on https://fsix.github.io/mnist/Deskewing.html how to deskew the images of the MNIST dataset, and it seems to work. My problem is that before deskewing each pixel has a value between 0 and 1, but after deskewing the values are no longer between 0 and 1; they can be negative and can be greater than 1. How can this be fixed?
Here is the code:
import numpy as np
from scipy.ndimage import interpolation  # imports needed by the snippet

def moments(image):
    c0,c1 = np.mgrid[:image.shape[0],:image.shape[1]]  # A trick in NumPy to create a mesh grid
    totalImage = np.sum(image)  # sum of pixels
    m0 = np.sum(c0*image)/totalImage  # mu_x
    m1 = np.sum(c1*image)/totalImage  # mu_y
    m00 = np.sum((c0-m0)**2*image)/totalImage  # var(x)
    m11 = np.sum((c1-m1)**2*image)/totalImage  # var(y)
    m01 = np.sum((c0-m0)*(c1-m1)*image)/totalImage  # covariance(x,y)
    mu_vector = np.array([m0,m1])  # Notice that these are \mu_x, \mu_y respectively
    covariance_matrix = np.array([[m00,m01],[m01,m11]])  # Do you see a similarity between the covariance matrix
    return mu_vector, covariance_matrix

def deskew(image):
    c,v = moments(image)
    alpha = v[0,1]/v[0,0]
    affine = np.array([[1,0],[alpha,1]])
    ocenter = np.array(image.shape)/2.0
    offset = c-np.dot(affine,ocenter)
    return interpolation.affine_transform(image,affine,offset=offset)
You can just normalize the image to a range between 0 and 1 after the deskewing process.
img = deskew(img)
img = (img - img.min()) / (img.max() - img.min())
See this question.
To incorporate this in the deskew function, you could rewrite it like this:
def deskew(image):
    c,v = moments(image)
    alpha = v[0,1]/v[0,0]
    affine = np.array([[1,0],[alpha,1]])
    ocenter = np.array(image.shape)/2.0
    offset = c-np.dot(affine,ocenter)
    img = interpolation.affine_transform(image,affine,offset=offset)
    return (img - img.min()) / (img.max() - img.min())
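With this version, deskewing and rescaling a whole set of images could look like the line below (assuming your images are stacked in an array X of shape (n, 28, 28); that name is mine, not from the question):

X_deskewed = np.array([deskew(x) for x in X])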
I have this image.
Image of a map
I want to:
1. Recognize all the regions in this image
2. Recognize which regions are connected to which other regions
My goal is to apply the four color theorem to this image and output a properly colored image. I'm a beginner in both Python and OpenCV.
Your assistance in this matter would be greatly appreciated.
Here is MATLAB code which does what you want. It shouldn't be too complicated to implement it in Python+OpenCV:
% read image
m = rgb2gray(imread('map.jpg'));
% remove noise
m = medfilt2(m);
b = m < 250;
se1 = strel('disk',1,0);
se2 = ones(7);
b = imclose(imopen(b,se1),se2);
% skeletonize
B = bwmorph(b,'skel',inf);
% remove background
R = padarray(~B,[1 1],1);
bg = bwselect(R,1,1,4);
R(bg) = 0;
R = R(2:end-1,2:end-1);
% get regions connected components
ccRegions = bwconncomp(R,4);
maskRegions = false([size(R),ccRegions.NumObjects]);
MAP = zeros(size(R));
% generate a binary mask for each region, dilate it to detect overlaps
% between neighbors
for ii = 1:ccRegions.NumObjects
    maskRegions((ii - 1)*numel(R) + (ccRegions.PixelIdxList{ii})) = 1;
    maskRegions(:,:,ii) = imdilate(maskRegions(:,:,ii),se1);
    MAP(maskRegions(:,:,ii)) = ii;
end
% detect neighbors using masks overlapping
neighborsRegions = cell(ccRegions.NumObjects,1);
for ii = 1:ccRegions.NumObjects
    r = repmat(maskRegions(:,:,ii),[1 1 ccRegions.NumObjects]);
    idxs = any(any(r & maskRegions,1),2); % indexes of touching neighbors
    idxs(ii) = 0; % remove self index
    neighborsRegions{ii} = find(idxs);
end
% show result
imshow(MAP,[])
c = regionprops(ccRegions,'Centroid');
for ii = 1:ccRegions.NumObjects
    text(c(ii).Centroid(1),c(ii).Centroid(2),num2str(ii),'FontSize',20,'Color','r');
end
The output looks like this:
And the neighbors of each region are:
neighborsRegions = {[2;3]
[1;3;4]
[1;2;4;5;6]
[2;3;6]
[3;6]
[3;4;5]
[]}
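For reference, here is a rough Python/OpenCV sketch of the same pipeline. It is an assumption of how a port could look rather than a tested translation; in particular it uses skimage.morphology.skeletonize for the bwmorph 'skel' step (cv2.ximgproc.thinning would be an alternative), and the threshold and kernel sizes are simply copied from the MATLAB code above:

import cv2
import numpy as np
from skimage.morphology import skeletonize  # assumed available for the skeletonization step

# read image and remove noise
m = cv2.imread('map.jpg', cv2.IMREAD_GRAYSCALE)
m = cv2.medianBlur(m, 3)
b = (m < 250).astype(np.uint8)
se1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
se2 = np.ones((7, 7), np.uint8)
b = cv2.morphologyEx(cv2.morphologyEx(b, cv2.MORPH_OPEN, se1), cv2.MORPH_CLOSE, se2)

# skeletonize the borders; the regions are everything not on the skeleton
skel = skeletonize(b.astype(bool))
R = (~skel).astype(np.uint8)

# remove the background by flood-filling from a corner
ff_mask = np.zeros((R.shape[0] + 2, R.shape[1] + 2), np.uint8)
cv2.floodFill(R, ff_mask, (0, 0), 0)

# label the remaining 4-connected components (one label per region)
num, labels = cv2.connectedComponents(R, connectivity=4)

# dilate each region mask and look for overlaps with the others to find neighbors
masks = [cv2.dilate((labels == i).astype(np.uint8), se1) for i in range(1, num)]
neighbors = {i: [] for i in range(1, num)}
for i in range(1, num):
    for j in range(i + 1, num):
        if np.any(masks[i - 1] & masks[j - 1]):
            neighbors[i].append(j)
            neighbors[j].append(i)
print(neighbors)

The neighbors dictionary is the adjacency structure you would then feed into a graph-coloring step for the four color theorem part.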