After the stereo calibration I try to check the quality of the calibration. I take the following steps:
stereoRectify
getOptimalNewCameraMatrix for the left/right camera
initUndistortRectifyMap for the left/right camera
and then I run the calibration check.
OpenCV C++ example:
double err = 0;
int npoints = 0;
vector<Vec3f> lines[2];
for( i = 0; i < nimages; i++ )
{
    int npt = (int)imagePoints[0][i].size();
    Mat imgpt[2];
    for( k = 0; k < 2; k++ )
    {
        imgpt[k] = Mat(imagePoints[k][i]);
        undistortPoints(imgpt[k], imgpt[k], cameraMatrix[k], distCoeffs[k], Mat(), cameraMatrix[k]);
        computeCorrespondEpilines(imgpt[k], k+1, F, lines[k]);
    }
    for( j = 0; j < npt; j++ )
    {
        double errij = fabs(imagePoints[0][i][j].x*lines[1][j][0] +
                            imagePoints[0][i][j].y*lines[1][j][1] + lines[1][j][2]) +
                       fabs(imagePoints[1][i][j].x*lines[0][j][0] +
                            imagePoints[1][i][j].y*lines[0][j][1] + lines[0][j][2]);
        err += errij;
    }
    npoints += npt;
}
cout << "average epipolar err = " << err/npoints << endl;
OpenCV Python example:
def check_calibration(self, calibration):
    sides = "left", "right"
    which_image = {sides[0]: 1, sides[1]: 2}
    undistorted, lines = {}, {}
    for side in sides:
        undistorted[side] = cv2.undistortPoints(
            np.concatenate(self.image_points[side]).reshape(-1, 1, 2),
            calibration.cam_mats[side],
            calibration.dist_coefs[side],
            P=calibration.cam_mats[side])
        lines[side] = cv2.computeCorrespondEpilines(undistorted[side],
                                                    which_image[side],
                                                    calibration.f_mat)
    total_error = 0
    this_side, other_side = sides
    for side in sides:
        for i in range(len(undistorted[side])):
            total_error += abs(undistorted[this_side][i][0][0] *
                               lines[other_side][i][0][0] +
                               undistorted[this_side][i][0][1] *
                               lines[other_side][i][0][1] +
                               lines[other_side][i][0][2])
        other_side, this_side = sides
    total_points = self.image_count * len(self.object_points)
    return total_error / total_points
First, I compute the undistorted points and then the epipolar lines for each side:
for side in sides:
    ''' (u,v) is the input point, (u', v') is the output point
        camera_matrix = [fx 0 cx; 0 fy cy; 0 0 1]
        P = [fx' 0 cx' tx; 0 fy' cy' ty; 0 0 1 tz] '''
    undistorted[side] = cv2.undistortPoints(
        np.concatenate(self.image_points[side]).reshape(-1, 1, 2),
        calibration.cam_mats[side],
        calibration.dist_coefs[side],
        P=calibration.cam_mats[side])
    lines[side] = cv2.computeCorrespondEpilines(undistorted[side],
                                                which_image[side],
                                                calibration.f_mat)
In the next step the total error is computed. I don't understand this step.
In the C++ sample above you can find the same calculation.
total_error = 0
this_side, other_side = sides
for side in sides:
    for i in range(len(undistorted[side])):
        total_error += abs(undistorted[this_side][i][0][0] *
                           lines[other_side][i][0][0] +
                           undistorted[this_side][i][0][1] *
                           lines[other_side][i][0][1] +
                           lines[other_side][i][0][2])
    other_side, this_side = sides
total_points = self.image_count * len(self.object_points)
return total_error / total_points
If I want to check the stereo calibration, I need to use the following constraint: u'^T * F * v' = 0 for every point pair.
This equation will not give back exactly 0, but something small, maybe < 1. I cannot see this equation in the for loop.
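Written out, the check I expected is something like x_r^T * F * x_l = 0 (up to a small residual), with x_l and x_r being a pair of corresponding points from the left and right image in homogeneous coordinates; that is only my understanding and may be part of my confusion. In the loop, however, I only see terms of the form |a*x + b*y + c| built from the line coefficients returned by computeCorrespondEpilines.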
Maybe somebody can explain this calculation to me.
Thanks!
I would like to implement a variant of convolution in PyCUDA.
For simplicity, I'll show a rectangular (box) kernel for the interpolation.
The standard convolution can be applied as follows:
import pycuda.autoinit
import pycuda.driver as drv
import numpy as np
from pycuda.compiler import SourceModule
mod = SourceModule("""
#include <stdio.h>

__global__ void func(float *dest, float *a)
{
    const int img_size = 64;
    const int kernel_size = 3;
    const int kernel_size_half = kernel_size/2;
    const int tx = blockIdx.x * blockDim.x + threadIdx.x;
    const int ty = blockIdx.y * blockDim.y + threadIdx.y;
    int tx_kernel;
    tx_kernel = tx - kernel_size_half;

    for (int idx = -kernel_size_half; idx <= kernel_size_half; idx++)
    {
        tx_kernel = tx + idx;
        if ((tx_kernel < 0) || (tx_kernel > img_size - 1))
            continue;
        dest[ty * img_size + tx] += a[ty * img_size + tx_kernel] / ((float) kernel_size);
    }
}
""")
Instead of calculating the current position with respect to its neighbours, I would like to do the opposite:
to add the value of the current pixel to the neighbours.
I.e. to change the line:
dest[ty * img_size + tx] += a[ty * img_size + tx_kernel] / ((float) kernel_size);
to:
dest[ty * img_size + tx_kernel] += a[ty * img_size + tx] / ((float) kernel_size);
However, while the first works fine, the second does not: it fails when updating the neighbours.
Is there a way to work around this?
Note:
I simplified the question to focus on what I need; the general problem is to use a different convolution kernel for each pixel, instead of the same one as in the question above.
to change the line:
dest[ty * img_size + tx] += a[ty * img_size + tx_kernel] / ((float) kernel_size);
to:
dest[ty * img_size + tx_kernel] += a[ty * img_size + tx] / ((float) kernel_size);
However, while the first works fine, the second does not: it fails when updating the neighbours. Is there a way to work around this?
The first method is preferred from a performance perspective. The second fails because many threads can add to the same dest element at the same time, so the plain += is a race condition and some updates are lost. However, if you wish to "update the neighbors", it should be possible to recast the second operation as:
atomicAdd(&(dest[ty * img_size + tx_kernel]), a[ty * img_size + tx] / ((float) kernel_size));
I have made a function that calculates the area of a polygon with the shoelace formula.
It works perfectly, but now I wonder whether there is a faster way to get the same result.
I want to know because this function must also work fast for polygons with a lot of coordinates.
My function:
def shoelace_formula(polygonBoundary, absoluteValue=True):
    nbCoordinates = len(polygonBoundary)
    nbSegment = nbCoordinates - 1

    l = [(polygonBoundary[i+1][0] - polygonBoundary[i][0]) * (polygonBoundary[i+1][1] + polygonBoundary[i][1]) for i in xrange(nbSegment)]

    if absoluteValue:
        return abs(sum(l) / 2.)
    else:
        return sum(l) / 2.
My polygon :
polygonBoundary = ((5, 0), (6, 4), (4, 5), (1, 5), (1, 0))
Result :
22.
Any ideas?
I tried with NumPy:
It's the fastest, but you have to convert your coordinates first.
import numpy as np

x, y = zip(*polygonBoundary)

def shoelace_formula_3(x, y, absoluteValue=True):
    result = 0.5 * np.array(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    if absoluteValue:
        return abs(result)
    else:
        return result
For me, the fastest way is using NumPy, where you pass a NumPy array of (x, y) coordinates as the argument to the shoelace method:
import numpy as np

def shoelace(x_y):
    x_y = np.array(x_y)
    x_y = x_y.reshape(-1, 2)

    x = x_y[:, 0]
    y = x_y[:, 1]

    S1 = np.sum(x * np.roll(y, -1))
    S2 = np.sum(y * np.roll(x, -1))

    area = .5 * np.absolute(S1 - S2)

    return area
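For example, with the polygon from the question passed as a list of (x, y) tuples:

print(shoelace([(5, 0), (6, 4), (4, 5), (1, 5), (1, 0)]))  # 22.0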
Take a look at the Rosetta Code example using NumPy. NumPy gives a fast solution.
For your convenience, I paste the Rosetta Code snippet below:
import numpy as np
# x,y are arrays containing coordinates of the polygon vertices
x=np.array([3,5,12,9,5])
y=np.array([4,11,8,5,6])
i=np.arange(len(x))
#Area=np.sum(x[i-1]*y[i]-x[i]*y[i-1])*0.5 # signed area, positive if the vertex sequence is counterclockwise
Area=np.abs(np.sum(x[i-1]*y[i]-x[i]*y[i-1])*0.5) # one line of code for the shoelace formula
EDIT
You can now find the Shoelace formula implemented in the Scikit-Spatial library.
Here's a version that uses 1/2 as many multiplications: https://stackoverflow.com/a/717367/763269
If you need even greater performance, you could consider doing this in a Python C extension. C can be dramatically faster than Python, especially for math operations -- sometimes 100-1000x.
Another interesting approach (although slower):

import numpy as np
from numpy.lib.stride_tricks import as_strided

m = np.vstack([x, y])  # x, y as in the snippets above
result = 0.5 * np.abs(np.linalg.det(as_strided(m, (m.shape[1] - 1, 2, 2),
                                               (m.itemsize, m.itemsize * m.shape[1], m.itemsize))).sum())
Here is a C# implementation with the formula written out for a triangle and a quadrilateral:

using System;

class Point // a class for any point: a(X,Y), b(X,Y), c(X,Y), d(X,Y)
{
    public int X { get; set; }
    public int Y { get; set; }
}

static class Geometry
{
    public static void GerArea(Point a, Point b, Point c)
    {
        double area = 0.5 * ((a.X * b.Y) + (b.X * c.Y) + (c.X * a.Y) - (b.X * a.Y) - (c.X * b.Y) - (a.X * c.Y));
        Console.WriteLine(Math.Abs(area));
    }

    public static void GerArea(Point a, Point b, Point c, Point d)
    {
        double area = 0.5 * ((a.X * b.Y) + (b.X * c.Y) + (c.X * d.Y) + (d.X * a.Y) - (b.X * a.Y) - (c.X * b.Y) - (d.X * c.Y) - (a.X * d.Y));
        Console.WriteLine(Math.Abs(area));
    }
}

class Program
{
    static void Main(string[] args)
    {
        Point a = new Point() { X = -12, Y = 12 };
        Point b = new Point() { X = 15, Y = 15 };
        Point c = new Point() { X = -15, Y = -16 };
        Point d = new Point() { X = 16, Y = -15 };

        Console.WriteLine("****Shoelace formula****\n");

        Console.Write("Area of triangle: ");
        Geometry.GerArea(a, b, c);

        Console.Write("Area of quad: ");
        Geometry.GerArea(a, b, c, d);

        Console.ReadLine();
    }
}
This is a very simple implementation of the shoelace formula in Python:
class Polygon:
    def __init__(self, arr):
        self.arr = arr

    def area(self):
        total = 0
        i = 0
        while i != len(self.arr) - 1:
            total += self.arr[i][0] * self.arr[i+1][1]
            total -= self.arr[i+1][0] * self.arr[i][1]
            i += 1
        # close the ring: last vertex back to the first one
        return abs(total + self.arr[-1][0] * self.arr[0][1] - self.arr[-1][1] * self.arr[0][0]) / 2
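A quick check with the polygon from the question (expected area 22):

p = Polygon([(5, 0), (6, 4), (4, 5), (1, 5), (1, 0)])
print(p.area())  # 22.0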
Try the simplest way, the raw shoelace formula written out for a five-vertex polygon:

def shoelace_formula(x1, y1, x2, y2, x3, y3, x4, y4, x5, y5):
    return abs(0.5 * (x1*y2 + x2*y3 + x3*y4 + x4*y5 + x5*y1
                      - x2*y1 - x3*y2 - x4*y3 - x5*y4 - x1*y5))

print(shoelace_formula(5, 0, 6, 4, 4, 5, 1, 5, 1, 0))
I have an image and I would like to keep the centre opaque and slowly increase the transparency of the image going towards the edges. I'm trying to do this in Python with cv2 but am not having any luck. Initially I thought of creating a Gaussian falloff in the alpha channel. Does anyone know a good way to do this?
This is a C++ solution that assumes you have an RGBA image (UINT8 or UINT16):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

void addTransparency(cv::Mat &mat)
{
    int s = std::min(mat.rows, mat.cols) / 2;
    for (int i = 0; i < mat.rows; ++i) {
        for (int j = 0; j < mat.cols; ++j) {
            cv::Vec4b &rgba = mat.at<cv::Vec4b>(i, j);
            float r = std::sqrt((float) (i - mat.rows/2)*(i - mat.rows/2) + (j - mat.cols/2)*(j - mat.cols/2));
            rgba[3] = cv::saturate_cast<uchar>(255 * std::exp(-(r * r) / (2 * s * s)));
        }
    }
}
You can read more here
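Since the question asks for Python with cv2, a rough Python equivalent of the same idea could look like the sketch below (my own untested code; it assumes a BGRA uint8 image, and the file names are placeholders):

import cv2
import numpy as np

def add_transparency(img_bgra):
    # alpha falls off as a Gaussian of the distance from the image centre
    h, w = img_bgra.shape[:2]
    s = min(h, w) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    r2 = (ys - h / 2.0) ** 2 + (xs - w / 2.0) ** 2
    img_bgra[:, :, 3] = np.clip(255 * np.exp(-r2 / (2 * s * s)), 0, 255).astype(np.uint8)
    return img_bgra

img = cv2.imread("input.png", cv2.IMREAD_UNCHANGED)
if img.shape[2] == 3:  # add an alpha channel if the source image has none
    img = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
cv2.imwrite("faded.png", add_transparency(img))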
I'm attempting to implement the discrete time wave equation in OpenCL. I think I'm pretty close, but the results look like what I would expect from the heat equation. I know they're very similar, but when I've implemented the 2D wave equation (not using OpenCL) I got distinct wavefronts and reflections. With the OpenCL kernel below everything diffuses until it is a wash.
__kernel void wave_calc(
    __global float* height,
    __global float* height_old,
    const unsigned int len_x,
    const unsigned int len_y,
    const unsigned int len_z,
    const float dtxc_term)
{
    unsigned int x = get_global_id(0);
    unsigned int y = get_global_id(1);
    unsigned int z = get_global_id(2);

    int this_cell = x + len_y * (y + len_x * z);

    float laplacian;

    if (x==0 || x==(len_x-1) || y==0 || y==(len_y-1) || z==0 || z==(len_z-1)) {
        laplacian = 0;
        height_old[this_cell] = height[this_cell];
        height[this_cell] = 0;
    }
    else if ( x < len_x-1 && y < len_y-1 && z < len_z-1 ) {
        int n1 = x - 1 + len_y * (y + len_x * z);
        int n2 = x + 1 + len_y * (y + len_x * z);
        int n3 = x + len_y * (y - 1 + len_x * z);
        int n4 = x + len_y * (y + 1 + len_x * z);
        int n5 = x + len_y * (y + len_x * (z - 1));
        int n6 = x + len_y * (y + len_x * (z + 1));

        laplacian = -6 * height[this_cell] +
                    height[n1] +
                    height[n2] +
                    height[n3] +
                    height[n4] +
                    height[n5] +
                    height[n6];

        height_old[this_cell] = height[this_cell];
        height[this_cell] = (dtxc_term*laplacian + 2*height[this_cell]) - height_old[this_cell];
    }
}
(DTXC is the result of ((DT * DT)/(DX * DX)) * C passed from the host)
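For reference, the update I am trying to apply, written with the same names as the code, is:

height_new = ((DT*DT)/(DX*DX)) * C * laplacian + 2*height - height_old

where laplacian is the sum of the six axis neighbours minus 6 times the centre cell (four neighbours minus 4 times the centre in the 2D version further down).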
Every step I copy height back to the host for plotting, and then call the function again.
for i in np.arange(steps):
    # copy height from host to device
    cl.enqueue_copy(queue, d_height, h_height)
    # step once
    wave_calc(queue, field_3d.shape, None, d_height, d_height_old, LEN_X, LEN_Y, LEN_Z, DTXC)
    queue.finish()
    # copy height back
    cl.enqueue_copy(queue, h_height, d_height)
    # do my plotting
Any thoughts/suggestions/condescending remarks? All would be appreciated. :)
Here is an update to answer Joel's question:
I'm not much good when it comes to calculus, but I'm taking a working C++ implementation in 2D and trying to adapt it for 3D. Below is the C++. The only modification I made was to the loop, since there are 6 neighbor cells in 3D instead of 4. In both cases the outer walls of the plane/cube are set to 0:
for (int x = 1; x < field.xRes() - 1; x++) {
    for (int y = 1; y < field.yRes() - 1; y++) {
        laplacian(x,y) = -4 * height(x,y) +
                         height(x-1,y) +
                         height(x+1,y) +
                         height(x,y-1) +
                         height(x,y+1);
    }
}

const float dt = 0.001;
const float xLen = 1.0;
const float C = 1.0;
const float dx = xLen/xRes;

backup = height;
height = ((dt*dt)/(dx*dx))*C*laplacian + 2*height;
height = height - heightOld;
heightOld = backup;