I'm trying to calibrate a fisheye camera with the OpenCV 3.0.0 Python bindings (using an asymmetric circle grid), but I'm having trouble formatting the object and image point arrays correctly. My current source looks like this:
import cv2
import glob
import numpy as np
def main():
    circle_diameter = 4.5
    circle_radius = circle_diameter/2.0
    pattern_width = 4
    pattern_height = 11
    num_points = pattern_width*pattern_height
    images = glob.glob('*.bmp')
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    imgpoints = []
    objpoints = []
    obj = []
    for i in range(pattern_height):
        for j in range(pattern_width):
            obj.append((
                float(2*j + i % 2)*circle_radius,
                float(i*circle_radius),
                0
            ))
    for name in images:
        image = cv2.imread(name)
        grayimage = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        retval, centers = cv2.findCirclesGrid(grayimage, (pattern_width, pattern_height), flags=(cv2.CALIB_CB_ASYMMETRIC_GRID + cv2.CALIB_CB_CLUSTERING))
        imgpoints_tmp = np.zeros((num_points, 2))
        if retval:
            for i in range(num_points):
                imgpoints_tmp[i, 0] = centers[i, 0, 0]
                imgpoints_tmp[i, 1] = centers[i, 0, 1]
            imgpoints.append(imgpoints_tmp)
            objpoints.append(obj)
    # Conversion to numpy arrays
    imgpoints = np.array(imgpoints, dtype=np.float32)
    objpoints = np.array(objpoints, dtype=np.float32)
    K, D = cv2.fisheye.calibrate(objpoints, imgpoints, image_size=(1280, 800), K=None, D=None)

if __name__ == '__main__':
    main()
The error message is:
OpenCV Error: Assertion failed (objectPoints.type() == CV_32FC3 || objectPoints.type() == CV_64FC3) in cv::fisheye::calibrate
objpoints has shape (31,44,3).
So objpoints array needs to be formatted in a different way, but I'm not able to achieve the correct layout. Maybe someone can help here?
In the OpenCV camera calibration sample, the object points are set up as objp = np.zeros((8*9,3), np.float32)
However, for the omnidirectional or fisheye camera model, it should be:
objp = np.zeros((1,8*9,3), np.float32)
The idea is from Calibrate fisheye lens using OpenCV — part 1
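For the 4x11 asymmetric circle grid from the question, a minimal sketch of object points with that extra leading dimension could look like this (the grid spacing is an assumption taken from the question's circle_radius):
import numpy as np

pattern_width, pattern_height = 4, 11
circle_radius = 2.25  # assumed spacing: half of the 4.5 mm circle diameter

objp = np.zeros((1, pattern_width * pattern_height, 3), np.float32)
for i in range(pattern_height):
    for j in range(pattern_width):
        objp[0, i * pattern_width + j] = ((2 * j + i % 2) * circle_radius,
                                          i * circle_radius,
                                          0)

# one copy per image in which the grid was detected:
# objpoints = [objp.copy() for _ in detected_images]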
The correct layout of objpoints is a list of numpy arrays, with len(objpoints) equal to the number of pictures and each entry being a numpy array.
Please have a look at the official documentation. OpenCV talks about "vectors", which here is the equivalent of a list or numpy.array. In this instance a "vector of vectors" can be interpreted as a list of numpy.arrays.
The data type is correct, but the shape is not. The expected shape of objpoints is (n_observations, 1, n_corners_per_observation, 3). Therefore, the code in your case should be:
objpoints = np.array(objpoints, dtype=np.float32).reshape(
    -1,
    1,
    pattern_width * pattern_height,
    3
)
or, more generally:
objpoints = np.array(objpoints, dtype=np.float32).reshape(
    n_observations,
    1,
    n_corners_per_observation,
    3
)
The error message is slightly misleading.
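As a quick sanity check (a sketch, assuming imgpoints and objpoints were collected per image as in the question), both arrays can be reshaped and verified before calling cv2.fisheye.calibrate:
objpoints = np.array(objpoints, dtype=np.float32).reshape(-1, 1, pattern_width * pattern_height, 3)
imgpoints = np.array(imgpoints, dtype=np.float32).reshape(-1, 1, pattern_width * pattern_height, 2)

assert objpoints.dtype == np.float32 and objpoints.shape[-1] == 3  # matches CV_32FC3
assert imgpoints.dtype == np.float32 and imgpoints.shape[-1] == 2  # matches CV_32FC2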
Didn't find a satisfying answer here so I messed around and eventually got this chunk to work:
calibration_flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_CHECK_COND + cv2.fisheye.CALIB_FIX_SKEW
# lists with each element a [1, n_points, _] array of type float32
obj_points = [np.random.rand(1, 10, 3).astype(np.float32)]
fisheye_points = [np.random.rand(1, 10, 2).astype(np.float32)]
# initialize empty variables of correct size and type,
# with one rotation/translation vector per observation (image)
n_observations = len(obj_points)
rvecs = [np.zeros((1, 1, 3), dtype=np.float32) for _ in range(n_observations)]
tvecs = [np.zeros((1, 1, 3), dtype=np.float32) for _ in range(n_observations)]
D = np.zeros([4, 1]).astype(np.float32)
K = np.zeros([3, 3]).astype(np.float32)
outputs = cv2.fisheye.calibrate(obj_points, fisheye_points, (1920, 1080), K, D, rvecs, tvecs)
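For completeness, cv2.fisheye.calibrate returns the RMS reprojection error first, so the result above might be unpacked like this (a sketch):
rms, K, D, rvecs, tvecs = outputs
print("RMS reprojection error:", rms)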
I am translating code from MATLAB to Python but cannot perfectly replicate the results of MATLAB's imresize3. My input is a 101x101x101 array. The first four values ([0,0:3,0] in Python indexing, or (1,1:4,1) in MATLAB) are: 0.3819 0.4033 0.4336 0.2767. The data input for both languages is identical.
sampleQDNormSmall = imresize3(sampleQDNorm,0.5);
This results in a 51x51x51 array whose first four values (1,1:4,1), for example, are: 0.3443 0.2646 0.2700 0.2835
Now I've tried two different pieces of code in Python to replicate these results:
from skimage.transform import resize
from skimage.transform import rescale
sampleQDNormSmall = resize(sampleQDNorm,(0.5*sampleQDNorm.shape[0],0.5*sampleQDNorm.shape[1],0.5*sampleQDNorm.shape[2]),order=3,anti_aliasing=True);
sampleQDNormSmall1=rescale(sampleQDNorm,0.5,order=3,anti_aliasing=True)
The first one gives a 51x51x51 array whose first four values [0,0:3,0] are: 0.3452 0.2669 0.2774 0.3099. That is very close, but not exactly the same numerical output.
The second one gives a 50x50x50 array whose first four values [0,0:3,0] are: 0.3422 0.2623 0.2810 0.3006. This is a different output array size and also doesn't reproduce the same numerical output as the MATLAB code or the other Python function.
I don't know enough about the optional arguments to know what might get me a better result. I know that for this type of array MATLAB's default is cubic interpolation, which is why I am using order 3 in Python. The default for anti-aliasing in both is true. I have two bigger arrays that I am having the same issues with: an (873x873x873) array and a bool (873x873x873) array.
The MATLAB code I'm using is considered the "correct answer" for the work I am doing, so I am trying to replicate its results as accurately as possible in Python. Please let me know what I can try in Python to reproduce the correct data.
sampleQDNorm is roughly random decimals between 0 and 1 for [0:100,0:100,0:100] and is padded with zeros on sides [:,:,101],[:,101,:],[101,:,:]
Getting exactly the same result as MATLAB's imresize3 is challenging.
One reason is that MATLAB enables an antialiasing filter by default, and I can't seem to find an equivalent Python implementation.
The closest existing Python alternatives are described in this post.
scipy.ndimage.zoom supports 3D resizing.
It could be that skimage.transform.resize gives a closer result, but none are identical to the MATLAB result.
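A minimal sketch of the zoom route (cubic interpolation; note it applies no antialiasing filter, so it will not match MATLAB's default behaviour exactly):
from scipy.ndimage import zoom

# output shape per axis is round(0.5*101), so it may come out 50x50x50 rather than 51x51x51
sampleQDNormSmall = zoom(sampleQDNorm, 0.5, order=3)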
Reimplementing imresize3:
Looking at the MATLAB implementation of imresize3 (MATLAB source code), it is apparent that the MATLAB implementation "simply" applies a 2D resize along each axis in turn:
Resize (by half) along the vertical axis.
Resize the above result (by half) along the horizontal axis.
Resize the above result (by half) along the depth axis.
Here is a MATLAB code sample that demonstrates the implementation (using cubic interpolation):
I1 = imread('peppers.png');
I2 = imresize(imread('autumn.tif'), [size(I1, 1), size(I1, 2)]);
I3 = imresize(imread('football.jpg'), [size(I1, 1), size(I1, 2)]);
I4 = imresize(imread('cameraman.tif'), [size(I1, 1), size(I1, 2)]);
I = cat(3, I1, I2, I3, I4);
J = imresize3(I, 0.5, 'cubic', 'Antialiasing', false);
imwrite(I1, '/Tmp/I1.png');
imwrite(I2, '/Tmp/I2.png');
imwrite(I3, '/Tmp/I3.png');
imwrite(I4, '/Tmp/I4.png');
imwrite(J(:,:,1), '/Tmp/J1.png');
imwrite(J(:,:,2), '/Tmp/J2.png');
imwrite(J(:,:,3), '/Tmp/J3.png');
imwrite(J(:,:,4), '/Tmp/J4.png');
imwrite(J(:,:,5), '/Tmp/J5.png');
K = cubicResize3(I, 0.5);
max_abs_diff = max(abs(double(J(:)) - double(K(:))));
disp(['max_abs_diff = ', num2str(max_abs_diff)])
function B = cubicResize3(A, scale)
    order = [1 2 3];
    B = A;
    for k = 1:numel(order)
        dim = order(k);
        B = cubicResizeAlongDim(B, dim, scale);
    end
end

function out = cubicResizeAlongDim(in, dim, scale)
    % If dim is 3, permute the input matrix so that the third dimension
    % becomes the first dimension. This way, we can resize along the
    % third dimensions as though we were resizing along the first dimension.
    isThirdDimResize = (dim == 3);
    if isThirdDimResize
        in = permute(in, [3 2 1]);
        dim = 1;
    end
    if dim == 1
        out_rows = round(size(in, 1)*scale);
        out_cols = size(in, 2);
    else % dim == 2
        out_rows = size(in, 1);
        out_cols = round(size(in,2)*scale);
    end
    out = zeros(out_rows, out_cols, size(in, 3), class(in)); % Allocate array for storing the output.
    for i = 1:size(in, 3)
        % Resize each color plane separately
        out(:, :, i) = imresize(in(:, :, i), [out_rows, out_cols], 'bicubic', 'Antialiasing', false);
    end
    % Permute back so that the original dimensions are restored if we were
    % resizing along the third dimension.
    if isThirdDimResize
        out = permute(out, [3 2 1]);
    end
end
The result is max_abs_diff = 0, meaning that cubicResize3 and imresize3 give the same output.
Note:
The above implementation stores images in the Tmp folder, to be used as input for testing the Python implementation.
Here is a Python implementation using OpenCV:
import numpy as np
import cv2
#from scipy.ndimage import zoom
def cubic_resize_along_dim(inp, dim, scale):
    """ Implementation is based on MATLAB source code of resizeAlongDim function """
    # If dim is 2 (the third dimension), permute the input matrix so that the
    # third dimension becomes the first dimension. This way, we can resize along
    # the third dimension as though we were resizing along the first dimension.
    is_third_dim_resize = (dim == 2)
    if is_third_dim_resize:
        inp = np.swapaxes(inp, 2, 0).copy()  # in = permute(in, [3 2 1])
        dim = 0
    if dim == 0:
        out_rows = int(np.round(inp.shape[0]*scale))  # out_rows = round(size(in, 1)*scale);
        out_cols = inp.shape[1]  # out_cols = size(in, 2);
    else:  # dim == 1
        out_rows = inp.shape[0]  # out_rows = size(in, 1);
        out_cols = int(np.round(inp.shape[1]*scale))  # out_cols = round(size(in,2)*scale);
    out = np.zeros((out_rows, out_cols, inp.shape[2]), inp.dtype)  # out = zeros(out_rows, out_cols, size(in, 3), class(in)); % Allocate array for storing the output.
    for i in range(inp.shape[2]):
        # Resize each color plane separately
        out[:, :, i] = cv2.resize(inp[:, :, i], (out_cols, out_rows), interpolation=cv2.INTER_CUBIC)  # out(:, :, i) = imresize(inp(:, :, i), [out_rows, out_cols], 'bicubic', 'Antialiasing', false);
    # Permute back so that the original dimensions are restored if we were
    # resizing along the third dimension.
    if is_third_dim_resize:
        out = np.swapaxes(out, 2, 0)  # out = permute(out, [3 2 1]);
    return out


def cubic_resize3(a, scale):
    b = a.copy()
    for k in range(3):
        b = cubic_resize_along_dim(b, k, scale)
    return b
# Build 3D input image (10 channels with resolution 512x384).
i1 = cv2.cvtColor(cv2.imread('/Tmp/I1.png', cv2.IMREAD_UNCHANGED), cv2.COLOR_BGR2RGB)
i2 = cv2.cvtColor(cv2.imread('/Tmp/I2.png', cv2.IMREAD_UNCHANGED), cv2.COLOR_BGR2RGB)
i3 = cv2.cvtColor(cv2.imread('/Tmp/I3.png', cv2.IMREAD_UNCHANGED), cv2.COLOR_BGR2RGB)
i4 = cv2.imread('/Tmp/I4.png', cv2.IMREAD_UNCHANGED)
im = np.dstack((i1, i2, i3, i4)) # Stack arrays along the third axis
# Read and adjust MATLAB output (out_mat is used as reference for testing).
# out_mat is the result of J = imresize3(I, 0.5, 'cubic', 'Antialiasing', false);
j1 = cv2.imread('/Tmp/J1.png', cv2.IMREAD_UNCHANGED)
j2 = cv2.imread('/Tmp/J2.png', cv2.IMREAD_UNCHANGED)
j3 = cv2.imread('/Tmp/J3.png', cv2.IMREAD_UNCHANGED)
j4 = cv2.imread('/Tmp/J4.png', cv2.IMREAD_UNCHANGED)
j5 = cv2.imread('/Tmp/J5.png', cv2.IMREAD_UNCHANGED)
out_mat = np.dstack((j1, j2, j3, j4, j5)) # Stack arrays along the third axis
#out_py = zoom(im, 0.5, order=3, mode='reflect')
# Execute 3D resize in Python
out_py = cubic_resize3(im, 0.5)
abs_diff = np.absolute(out_mat.astype(np.int16) - out_py.astype(np.int16))
print(f'max_abs_diff = {abs_diff.max()}')
The Python implementation reads the input files stored by MATLAB (and converts from BGR to RGB where required).
The implementation compares the result of cubic_resize3 with the MATLAB output of imresize3.
The maximum difference is 12 (not zero).
Apparently cv2.resize and MATLAB's imresize give slightly different results.
Update:
Replacing:
out[:, :, i] = cv2.resize(inp[:, :, i], (out_cols, out_rows), interpolation=cv2.INTER_CUBIC)
with skimage's resize (where transform comes from skimage, i.e. from skimage import transform):
out[:, :, i] = transform.resize(inp[:, :, i], (out_rows, out_cols), order=3, mode='edge', anti_aliasing=False, preserve_range=True)
Reduces the maximum difference to 4.
I implemented FFT-based convolution in PyTorch and compared the result with spatial convolution via the conv2d() function. The convolution filter used is an average filter. The conv2d() function produced a smoothed output due to the average filtering, as expected, but the FFT-based convolution returned a more blurry output.
I have attached the code and outputs here -
spatial convolution -
from PIL import Image, ImageOps
import torch
from matplotlib import pyplot as plt
from torchvision.transforms import ToTensor
import torch.nn.functional as F
import numpy as np
im = Image.open("/kaggle/input/tiger.jpg")
im = im.resize((256,256))
gray_im = im.convert('L')
gray_im = ToTensor()(gray_im)
gray_im = gray_im.squeeze()
fil = torch.tensor([[1/9,1/9,1/9],[1/9,1/9,1/9],[1/9,1/9,1/9]])
conv_gray_im = gray_im.unsqueeze(0).unsqueeze(0)
conv_fil = fil.unsqueeze(0).unsqueeze(0)
conv_op = F.conv2d(conv_gray_im,conv_fil)
conv_op = conv_op.squeeze()
plt.figure()
plt.imshow(conv_op, cmap='gray')
FFT-based convolution -
def fftshift(image):
    sh = image.shape
    x = np.arange(0, sh[2], 1)
    y = np.arange(0, sh[3], 1)
    xm, ym = np.meshgrid(x, y)
    shifter = (-1)**(xm + ym)
    shifter = torch.from_numpy(shifter)
    return image*shifter
shift_im = fftshift(conv_gray_im)
padded_fil = F.pad(conv_fil, (0, gray_im.shape[0]-fil.shape[0], 0, gray_im.shape[1]-fil.shape[1]))
shift_fil = fftshift(padded_fil)
fft_shift_im = torch.rfft(shift_im, 2, onesided=False)
fft_shift_fil = torch.rfft(shift_fil, 2, onesided=False)
shift_prod = fft_shift_im*fft_shift_fil
shift_fft_conv = fftshift(torch.irfft(shift_prod, 2, onesided=False))
fft_op = shift_fft_conv.squeeze()
plt.figure('shifted fft')
plt.imshow(fft_op, cmap='gray')
original image -
spatial convolution output -
fft-based convolution output -
Could someone kindly explain the issue?
The main problem with your code is that Torch doesn't do complex numbers: the output of its FFT is a 3D array, with the third dimension holding two values, one for the real component and one for the imaginary. Consequently, the multiplication does not perform a complex multiplication.
There currently is no complex multiplication defined in Torch (see this issue), so we'll have to define our own.
A minor issue, but also important if you want to compare the two convolution operations, is the following:
The FFT takes the origin of its input in the first element (top-left pixel for an image). To avoid a shifted output, you need to generate a padded kernel where the origin of the kernel is the top-left pixel. This is quite tricky, actually...
Your current code:
fil = torch.tensor([[1/9,1/9,1/9],[1/9,1/9,1/9],[1/9,1/9,1/9]])
conv_fil = fil.unsqueeze(0).unsqueeze(0)
padded_fil = F.pad(conv_fil, (0, gray_im.shape[0]-fil.shape[0], 0, gray_im.shape[1]-fil.shape[1]))
generates a padded kernel where the origin is in pixel (1,1), rather than (0,0). It needs to be shifted by one pixel in each direction. NumPy has a function roll that is useful for this; I don't know the Torch equivalent (I'm not at all familiar with Torch). This should work:
fil = torch.tensor([[1/9,1/9,1/9],[1/9,1/9,1/9],[1/9,1/9,1/9]])
padded_fil = fil.numpy()
padded_fil = np.pad(padded_fil, ((0, gray_im.shape[0]-fil.shape[0]), (0, gray_im.shape[1]-fil.shape[1])))
padded_fil = np.roll(padded_fil, -1, axis=(0, 1))
padded_fil = torch.from_numpy(padded_fil)
Finally, your fftshift function, applied to the spatial-domain image, causes the frequency-domain image (the result of the FFT applied to the image) to be shifted such that the origin is in the middle of the image, rather than the top-left. This shift is useful when looking at the output of the FFT, but is pointless when computing the convolution.
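If it helps to see why: modulating the spatial-domain signal by (-1)^(x+y) is equivalent to applying fftshift to its spectrum, which a quick NumPy check illustrates (a sketch, for an even-sized input):
import numpy as np

img = np.random.rand(8, 8)
x, y = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
modulated = img * (-1.0) ** (x + y)

# the spectrum of the modulated image equals the shifted spectrum of the original
assert np.allclose(np.fft.fft2(modulated), np.fft.fftshift(np.fft.fft2(img)))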
Putting these things together, the convolution is now:
def complex_multiplication(t1, t2):
    real1, imag1 = t1[:,:,0], t1[:,:,1]
    real2, imag2 = t2[:,:,0], t2[:,:,1]
    return torch.stack([real1 * real2 - imag1 * imag2, real1 * imag2 + imag1 * real2], dim=-1)
fft_im = torch.rfft(gray_im, 2, onesided=False)
fft_fil = torch.rfft(padded_fil, 2, onesided=False)
fft_conv = torch.irfft(complex_multiplication(fft_im, fft_fil), 2, onesided=False)
Note that you can do one-sided FFTs to save a bit of computation time:
fft_im = torch.rfft(gray_im, 2, onesided=True)
fft_fil = torch.rfft(padded_fil, 2, onesided=True)
fft_conv = torch.irfft(complex_multiplication(fft_im, fft_fil), 2, onesided=True, signal_sizes=gray_im.shape)
Here the frequency domain is about half the size of the full FFT, but only the redundant parts are left out. The result of the convolution is unchanged.
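As an aside, newer PyTorch releases (1.8 and later) replaced torch.rfft/torch.irfft with the torch.fft module, which returns complex tensors, so the explicit complex multiplication is no longer needed. A hedged sketch of the same convolution with that API:
import torch

def fft_conv2d(image, padded_kernel):
    # both arguments are 2D real tensors of the same shape; the kernel origin
    # must already be at (0, 0), e.g. via the roll shown above
    fft_im = torch.fft.rfft2(image)
    fft_fil = torch.fft.rfft2(padded_kernel)
    return torch.fft.irfft2(fft_im * fft_fil, s=image.shape)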
I'm trying to convert an image from RGB to LMS (and vice versa) using OpenCV in Python. From what I understand, I am supposed to use a given 3x3 transformation matrix and multiply it by a 3x1 RGB/LMS vector. The transformation matrices used can be found here.
I've explored previously asked questions on this site, but unfortunately they're in C++, a language I am not yet proficient in, and I have difficulty understanding exactly how they solved their problems.
Here is my code so far: [Solved as of 2019-05-19]
import numpy as np
import cv2
#Transformation Matrix#
MsRGB = np.zeros((3,3), dtype='float')
MHPE = np.zeros((3,3), dtype='float')
MsRGB = np.array([[0.4124564, 0.3575761, 0.1804375],
[0.2126729, 0.7151522, 0.0721750],
[0.0193339, 0.1191920, 0.9503041]])
MHPE = np.array([[ 0.4002, 0.7076, -0.0808],
[-0.2263, 1.1653, 0.0457],
[ 0, 0, 0.9182]])
Trgb2lms = MHPE @ MsRGB
Tlms2rgb = np.linalg.inv(Trgb2lms)
imgpath = "(insert file directory here)"
imgIN = cv2.imread(imgpath,cv2.IMREAD_UNCHANGED)
imgINrgb = cv2.cvtColor(imgIN, cv2.COLOR_BGR2RGB)
x,y,z = imgINrgb.shape
imgLMS = np.zeros((x,y,z), dtype='float')
imgReshaped = imgINrgb.transpose(2, 0, 1).reshape(3,-1)
imgLMS = Trgb2lms @ imgReshaped  # Convert to LMS
imgOUT = Tlms2rgb @ imgLMS  # Convert back to RGB
imgLMS = imgLMS.reshape(z, x, y).transpose(1, 2, 0).astype(np.uint8)
imgOUT = imgOUT.reshape(z, x, y).transpose(1, 2, 0).astype(np.uint8)
imgOUT = cv2.cvtColor(imgOUT, cv2.COLOR_RGB2BGR)
cv2.imshow('Input', imgIN)
cv2.imshow('LMS', imgLMS)
cv2.imshow('Output', imgOUT)
cv2.waitKey(0)
cv2.destroyAllWindows()
The code is now able to perform linear transformation on a given RGB image using a given transformation matrix. Results can be found here.
There are a few errors given the context of your question:
T is not defined. Judging from the context of your code, this should be Trgb2lms instead, so we need to change those.
From what I can gather from the question, you are applying a linear transformation to all pixels in the image. To do this, you want to reshape the matrix so that it has three rows, one per colour channel, with all of the pixels unravelled along the columns. In that case, the reshape method is incorrect. You not only need to shuffle the dimensions so that the channel dimension comes first, but you also need to set the last dimension of the reshape to -1. This means that NumPy will automatically fill up the columns so that their number equals the total number of pixels in the image.
Finally, once you do the linear transformation, you need to reshape the matrix back to the original image size. You can use a final reshape call and use x, y and z from the original shape query to infer the image dimensions. Remember that when we reshape, the channels come first, so we'll have to permute the dimensions again. You'll also want to go back to unsigned 8-bit precision after the transformation.
Also to compare, let's run this through the inverse transformation to make sure we have the original.
Therefore:
import numpy as np
import cv2
#Transformation Matrix#
MsRGB = np.zeros((3,3), dtype='float')
MHPE = np.zeros((3,3), dtype='float')
MsRGB = np.array([[0.4124564, 0.3575761, 0.1804375],
[0.2126729, 0.7151522, 0.0721750],
[0.0193339, 0.1191920, 0.9503041]])
MHPE = np.array([[ 0.4002, 0.7076, -0.0808],
[-0.2263, 1.1653, 0.0457],
[ 0, 0, 0.9182]])
Trgb2lms = MHPE @ MsRGB
# Change
Tlms2rgb = np.linalg.inv(Trgb2lms)
imgpath = "(insert filename here)"
imgIN = cv2.imread(imgpath,cv2.IMREAD_UNCHANGED)
imgINrgb = cv2.cvtColor(imgIN, cv2.COLOR_BGR2RGB)
x,y,z = imgINrgb.shape
imgLMS = np.zeros((x,y,z), dtype='float')
#imgFlatten = imgINrgb.flatten()
# Change
imgReshaped = imgINrgb.transpose(2, 0, 1).reshape(3,-1)
# Change
imgLMS = Trgb2lms @ imgReshaped
imgOUT = Tlms2rgb @ imgLMS
# New
imgLMS = imgLMS.reshape(z, x, y).transpose(1, 2, 0).astype(np.uint8)
imgOUT = imgOUT.reshape(z, x, y).transpose(1, 2, 0).astype(np.uint8)
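As a side note, the same per-pixel transform can be written without the transpose/reshape dance using np.einsum (a sketch, reusing imgINrgb, Trgb2lms and Tlms2rgb from above):
imgLMS = np.einsum('ij,xyj->xyi', Trgb2lms, imgINrgb.astype(float))
imgOUT = np.einsum('ij,xyj->xyi', Tlms2rgb, imgLMS)
imgLMS = np.clip(imgLMS, 0, 255).astype(np.uint8)
imgOUT = np.clip(imgOUT, 0, 255).astype(np.uint8)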
Given a 9x9 matrix representing an image (each entry an [R, G, B] triple), I want to create a new resized image of size 3x3 where each entry is computed as follows:
divide the 9x9 matrix into 9 blocks of 3x3 matrices
compute the mean (component-wise) of each 3x3 block
create the 3x3 image from these means.
So far I have used the cv2 library with Python 3.6
image_blurred = cv2.resize(original_image, (3,3), interpolation=cv2.INTER_AREA)
But I am not sure precisely what cv2.INTER_AREA does.
Could you give me some information about this? (There is some information here, but it does not go into much detail.)
Many thanks.
It seems that the interpolation cv2.INTER_AREA does this averaging. I wrote a test below if you are interested.
import cv2
import numpy as np

n = 9
grid_colors = []
for _ in range(n):
    column = []
    for _ in range(n):
        colors = []
        for k in range(3):
            colors.append(np.random.randint(256))
        column.append(colors)
    grid_colors.append(column)

moy = []
for a in range(3):
    col = []
    for b in range(3):
        colors = []
        for c in range(3):
            colors.append(round(sum([grid_colors[i+3*a][j+3*b][c] for i in range(3) for j in range(3)]) / 9))
        col.append(colors)
    moy.append(col)

image_blurred = cv2.resize(np.array(grid_colors, dtype=np.uint8), (len(grid_colors[0]) // 3, len(grid_colors) // 3), interpolation=cv2.INTER_AREA)
print("image blurred: ")
print(image_blurred)
print("grid_colors: ")
print(grid_colors)
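A more compact check (a sketch, reusing grid_colors and image_blurred from above): the same block means follow from a reshape-and-mean in NumPy, and should match the INTER_AREA output up to rounding:
arr = np.array(grid_colors, dtype=np.uint8)                # 9x9x3
block_mean = arr.reshape(3, 3, 3, 3, 3).mean(axis=(1, 3))  # average each 3x3 block per channel
print(np.abs(block_mean.round() - image_blurred).max())    # expected 0, or at most 1 at .5 rounding boundaries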
I'm currently trying video stabilization using OpenCV and Python.
I use the following function to calculate rotation:
def accumulate_rotation(src, theta_x, theta_y, theta_z, timestamps, prev, current, f, gyro_delay=None, gyro_drift=None, shutter_duration=None):
    if prev == current:
        return src
    pts = []
    pts_transformed = []
    for x in range(10):
        current_row = []
        current_row_transformed = []
        pixel_x = x * (src.shape[1] / 10)
        for y in range(10):
            pixel_y = y * (src.shape[0] / 10)
            current_row.append([pixel_x, pixel_y])
            if shutter_duration:
                y_timestamp = current + shutter_duration * (pixel_y - src.shape[0] / 2)
            else:
                y_timestamp = current
            transform = getAccumulatedRotation(src.shape[1], src.shape[0], theta_x, theta_y, theta_z, timestamps, prev,
                                               current, f, gyro_delay, gyro_drift)
            output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform)
            current_row_transformed.append(output)
        pts.append(current_row)
        pts_transformed.append(current_row_transformed)
    o = utilities.meshwarp(src, pts_transformed)
    return o
I get the following error when it gets to output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform):
cv2.error: /Users/travis/build/skvark/opencv-python/opencv/modules/core/src/matmul.cpp:2271: error: (-215) scn + 1 == m.cols in function perspectiveTransform
Any help or suggestions would really be appreciated.
This implementation really needs to be changed in a future version, or the docs should be clearer.
From the OpenCV docs for perspectiveTransform():
src – input two-channel (...) floating-point array
Slant emphasis added by me.
>>> A = np.array([[0, 0]], dtype=np.float32)
>>> A.shape
(1, 2)
So we see from here that A is just a single-channel matrix, that is, two-dimensional: one row, two columns. You instead need a two-channel array, i.e., a three-dimensional matrix where the length of the third dimension is 2 or 3 depending on whether you're sending in 2D or 3D points.
Long story short, you need to add one more set of brackets to make the set of points you're sending in three-dimensional, where the x values are in the first channel, and the y values are in the second channel.
>>> A = np.array([[[0, 0]]], dtype=np.float32)
>>> A.shape
(1, 1, 2)
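Applied to the line from the question, the call might look like this (a sketch using the question's variable names):
point = np.array([[[pixel_x, pixel_y]]], dtype="float32")  # shape (1, 1, 2)
output = cv2.perspectiveTransform(point, transform)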
Also, as suggested in the comments:
If you have an array points of shape (n_points, dimension) (i.e. dimension is 2 or 3), a nice way to re-format it for this use-case is points[np.newaxis]
It's not intuitive, and though it's documented, the docs aren't very explicit on that point. That's all you need. I've answered an identical question before, but for the cv2.transform() function.