Fisheye camera calibration opencv - python

I need to evaluate whether a camera is viewing a 3D real object. To do so I have the 3D model of the world I am moving in and the pose of the robot my camera is attached to. So far, so good; the camera coordinates will be
[x, y, z]' = R X + T
where X is the real object position and R, T are the rotation and translation given by the robot pose. The camera I am using is a 170º FOV camera, and I need to calibrate it in order to convert these [x, y, z] into pixel coordinates I can evaluate. If the pixel coordinates are bigger than (0,0) and smaller than (width, height), I will consider that the camera is looking at the object.
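For illustration, the projection-and-bounds test described above might look like this with OpenCV's fisheye model (a rough sketch; every numeric value is a made-up placeholder, with K and D standing in for the outputs of the fisheye calibration discussed below):

```python
import cv2
import numpy as np

# Rough sketch of the projection-and-bounds test. Every value below is a
# made-up placeholder: K and D would come from cv2.fisheye.calibrate, and
# R, T from the robot pose.
K = np.array([[330.0, 0.0, 640.0],
              [0.0, 330.0, 360.0],
              [0.0,   0.0,   1.0]])
D = np.zeros((4, 1))                        # fisheye distortion k1..k4
R = np.eye(3)                               # world -> camera rotation
T = np.array([[0.0], [0.0], [0.0]])         # world -> camera translation
width, height = 1280, 720                   # image size
X = np.array([[0.2, 0.1, 3.0]])             # 3D object position in the world

# Camera-frame coordinates: x_cam = R X + T
x_cam = (R @ X.T + T).T                     # shape (1, 3)

# Project with the fisheye model; rvec/tvec are zero because R, T were
# already applied above.
img_pts, _ = cv2.fisheye.projectPoints(
    x_cam.reshape(-1, 1, 3), np.zeros((3, 1)), np.zeros((3, 1)), K, D)
u, v = img_pts[0, 0]

# "Camera sees the object" test: inside the image and in front of the camera.
in_view = (0 <= u < width) and (0 <= v < height) and x_cam[0, 2] > 0
print(in_view, (u, v))
```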
Can I do a similar test without the conversion to pixel coordinates? I guess not, so I am trying to calibrate the fisheye camera with https://bitbucket.org/amitibo/pyfisheye/src, which is a wrapper over the faulty opencv 3.1.0 fisheye model.
Here is one of my calibration images:
Using the simplest test (https://bitbucket.org/amitibo/pyfisheye/src/default/example/test_fisheye.py), this is the comparison with the undistorted image:
It looks really nice, and here is the undistorted image:
How can I get the whole "butterfly" undistorted image? I am currently seeing the lower border...
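With the plain cv2.fisheye API (not pyfisheye specifically), one common way to keep the whole field of view in the undistorted output is to estimate a new camera matrix with balance=1.0 before building the undistortion maps. A sketch with placeholder values:

```python
import cv2
import numpy as np

# Placeholders: in practice K, D come from cv2.fisheye.calibrate and the
# frame is one of your real fisheye images.
K = np.array([[330.0, 0.0, 640.0],
              [0.0, 330.0, 360.0],
              [0.0,   0.0,   1.0]])
D = np.zeros((4, 1))
img = np.zeros((720, 1280, 3), np.uint8)    # blank stand-in for a real frame
h, w = img.shape[:2]

# balance=1.0 keeps the full field of view (the whole "butterfly");
# balance=0.0 crops to the largest rectangle of valid pixels.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=1.0)

map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2,
                        interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)
```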

Related

Eye-In-Hand Calibration OpenCV

I have a setup where a (2D) camera is mounted on the end-effector of a robot arm - similar to the OpenCV documentation:
I want to calibrate the camera and find the transformation from camera to end-effector.
I have already calibrated the camera using this OpenCV guide, Camera Calibration, with a checkerboard where the undistorted images are obtained.
My problem is about finding the transformation from camera to end-effector. I can see that OpenCV has a function, calibrateHandEye(), which supposedly should achieve this. I already have the "gripper2base" vectors and am missing the "target2cam" vectors. Should this be based on the size of the checkerboard squares, or what am I missing?
Any guidance in the right direction will be appreciated.
You are close to the answer.
Yes, it is based on the size of the checkerboard. But instead of directly taking those parameters and an image, this function takes target2cam. How do you get target2cam? Simply move your robot arm above the chessboard so that the camera can see it and take a picture. From the picture of the chessboard and the camera intrinsics, you can find target2cam. Calculating the extrinsics from a chessboard is already provided in OpenCV.
Repeat this a couple of times at different robot poses to collect multiple target2cam transforms. Pass them, together with the gripper2base transforms, to calibrateHandEye() and you will get what you need.
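A rough sketch of that workflow (the board size, square size, image list and gripper2base lists are assumptions standing in for your own data):

```python
import cv2
import numpy as np

# Assumed inputs: image_files (one chessboard photo per robot pose),
# camera_matrix / dist_coeffs from the earlier intrinsic calibration, and
# R_gripper2base / t_gripper2base from the robot controller.
pattern_size = (9, 6)          # inner corners of the printed chessboard
square_size = 0.025            # square edge length in metres (assumption)

# 3D chessboard corner coordinates in the board (target) frame
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

R_target2cam, t_target2cam = [], []
for fname in image_files:
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    # Board pose relative to the camera for this robot pose
    _, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
    R_target2cam.append(cv2.Rodrigues(rvec)[0])
    t_target2cam.append(tvec)

# One gripper2base pose per successfully processed image
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam)
```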

Find coordinates of object from different cameras

I’m trying to find coordinates of object from one image in another image.
There are 2 fixed vertically arranged cameras one above the other (for example 10 cm between cameras). They look in the same direction.
Using calibrateCamera from OpenCV I found the following parameters for each camera: ret, mtx, dist, rvecs, tvecs.
How do I calculate coordinates of an object from an image from one camera in another camera image? This is assuming that this object exists in both camera images.
Consider this as a stereo setup and do stereo calibration.
It will provide you with the rotation and translation between the coordinate systems of the two cameras, as well as the essential matrix and the fundamental matrix.
You can also estimate the fundamental matrix using just point correspondences; refer to the following link.
The fundamental matrix gives you, for a point in the reference image, a line in the other image along which its match must lie.
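A sketch of both routes, assuming the chessboard points and matched corners from the two cameras have already been collected (objpoints, imgpoints_cam1/imgpoints_cam2, pts1/pts2, the per-camera intrinsics and image_size are assumed inputs):

```python
import cv2

# Sketch only: objpoints (3D chessboard points), imgpoints_cam1/_cam2
# (matched 2D corners from each camera), the per-camera intrinsics
# (mtx1, dist1, mtx2, dist2) and image_size are assumed to exist already
# from the single-camera calibrations.
ret, M1, d1, M2, d2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_cam1, imgpoints_cam2,
    mtx1, dist1, mtx2, dist2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)      # keep each camera's intrinsics fixed

# The fundamental matrix alone can also be estimated from matched pixel
# coordinates (pts1, pts2: Nx2 float arrays, N >= 8).
F_est, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# For each point in camera 1, the epipolar line in camera 2 on which its
# match must lie:
lines_in_cam2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F_est)
```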

Chessboard Camera Calibration using OpenCV

I am trying to compute the distance of an object from the camera using stereo vision approach. But before computing the disparity map, I must ensure that my cameras are calibrated.
I followed the opencv python tutorial on camera calibration. They have used a chessboard to calibrate their cameras. Now my question is, if I want to calibrate my cameras, do I need to click photos of a chessboard from various angles manually? Or can I use the 14 chessboard images they have made available?
My next question (depending on the answer to the previous question) is, if I can use their images to calibrate my cameras, what is the logic behind this? How can images clicked from their cameras be used to calibrate my cameras, i.e. get the camera matrix for my cameras? I would like to get more intuition behind this camera calibration process.
Any help will be appreciated. Thanks.
1- No, you print a similar chessboard pattern yourself and use it to calibrate your own camera. You can use the code here.
2- The process basically goes like this: to determine the pixel coordinates of a 3D point in an image, you need to know two sets of parameters (counting only the most fundamental ones; I am leaving out the distortion parameters here). The first set are the inner parameters of your camera (intrinsic parameters): the optical center of your camera (basically the center pixel of your sensor/lens) and the focal length of your camera. Intrinsic parameters are fixed for your camera unless you change some settings of the device, or some settings drift with time. The second set is a position and a rotation vector that describe where your camera is in the 3D world (these are the extrinsic parameters). Extrinsic parameters change for every example image you have. You can think of camera calibration as an optimization process that tries to find the best parameters (the parameters that give the minimum reprojection error) for the example images you have given.
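A condensed sketch of that process in Python, following the structure of the OpenCV tutorial (the pattern size and the image folder name are assumptions):

```python
import glob
import cv2
import numpy as np

# Condensed sketch of the tutorial workflow; the pattern size and the image
# folder are assumptions for illustration.
pattern_size = (9, 6)                                # inner corners per row/column
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []                        # 3D board points, 2D image points
for fname in glob.glob('my_calib_images/*.jpg'):     # hypothetical folder of your photos
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        objpoints.append(objp)
        imgpoints.append(corners)

# mtx/dist are the intrinsics shared by all views; rvecs/tvecs are the
# per-image extrinsics described above.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print('RMS reprojection error:', ret)
```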

Fundamental understanding of tvecs rvecs in OpenCV-ArUco

I want to use ArUco to find the "space coordinates" of a marker.
I have problems understanding the tvecs and rvecs. I have got as far as understanding that the tvecs are the translation and the rvecs are for the rotation. But how are they oriented, in what order are they written in the code, and how do I orient them?
I have a camera (a laptop webcam, just drawn to illustrate the orientation of the camera) at the position X,Y,Z. The camera's orientation can be described by angle a around X, angle b around Y, and angle c around Z (angles in rad).
So if my camera is stationary, I would take different pictures of the ChArUco boards and give the camera calibration algorithm the tvecs_camerapos (Z,Y,X) and the rvecs_camerapos (c,b,a). I get the cameraMatrix, distCoeffs and tvecs_cameracalib, rvecs_cameracalib. t/rvecs_camerapos and t/rvecs_cameracalib are different, which I find weird.
Is this nomination/order of t/rvecs correct?
Should I use camerapos or cameracalib for pose estimation if the camera does not move?
I think t/rvecs_cameracalib is negligible because I am only interested in the intrinsic parameters of the camera calibration algorithm.
Now I want to find the X,Y,Z position of the marker. I use aruco.estimatePoseSingleMarkers with t/rvecs_camerapos and retrieve t/rvecs_markerpos. The tvecs_markerpos don't match my expected values.
Do I need a transformation of t/rvecs_markerpos to find X,Y,Z of the Marker?
Where is my misconception?
OpenCV routines that deal with cameras and camera calibration (including ArUco) use a pinhole camera model. The world origin is defined as the centre of projection of the camera model (where all light rays entering the camera converge), the Z axis is defined as the optical axis of the camera model, and the X and Y axes form an orthogonal system with Z. +Z is in front of the camera, +X is to the right, and +Y is down. All ArUco coordinates are defined in this coordinate system. That explains why your "camera" tvecs and rvecs change: they do not define your camera's position in some world coordinate system, but rather the markers' positions relative to your camera.
You don't really need to know how the camera calibration algorithm works, other than that it will give you a camera matrix and some lens distortion parameters, which you use as input to other ArUco and OpenCV routines.
Once you have calibration data, you can use ArUco to identify markers and return their positions and orientations in the 3D coordinate system defined by your camera, with correct compensation for the distortion of your camera lens. This is adequate to do, for example, augmented reality using OpenGL on top of the video feed from your camera.
The tvec of a marker is the translation (x,y,z) of the marker from the origin; the distance unit is whatever unit you used to define your printed calibration chart (i.e., if you described your calibration chart to OpenCV using mm, then the distance unit in your tvecs is mm).
The rvec of a marker is a 3D rotation vector which defines both an axis of rotation and the rotation angle about that axis, and gives the marker's orientation. It can be converted to a 3x3 rotation matrix using the Rodrigues function (cv::Rodrigues()). It is either the rotation which transforms the marker's local axes onto the world (camera) axes, or the inverse -- I can't remember, but you can easily check.
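For reference, a minimal sketch of getting marker poses in this camera-centred frame, using the older aruco Python API (the intrinsics, marker length and input frame below are placeholders; in practice camera_matrix and dist_coeffs come from your calibration):

```python
import cv2
import cv2.aruco as aruco
import numpy as np

# Placeholder intrinsics; replace with the values from your calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros((5, 1))
marker_length = 0.05                                  # marker side length (assumption, metres)

dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
frame = np.zeros((480, 640), np.uint8)                # blank stand-in for a real camera frame

corners, ids, _ = aruco.detectMarkers(frame, dictionary)
if ids is not None:
    rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for rvec, tvec in zip(rvecs, tvecs):
        R, _ = cv2.Rodrigues(rvec)                    # 3x3 rotation matrix for the marker
        # tvec is the marker centre in the camera frame:
        # +X right, +Y down, +Z forward, in the same unit as marker_length.
        print(tvec.ravel())
```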
In my understanding, the camera coordinate system is the reference frame of the 3D world. The rvec and tvec are the transformations used to get the position of any other 3D point (in the world reference frame) w.r.t. the camera coordinate system. So both of these vectors are the extrinsic parameters [R|t]. The intrinsic parameters are generally derived from calibration. Now, if you want to project any other 3D point w.r.t. the world reference frame onto the image plane, you will need to get that 3D point into the camera coordinate system first and then project it onto the image to get a correct perspective.
Point in the image plane: (u, v, 1) = [intrinsic] [extrinsic] [3D point, 1]
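A small worked illustration of that projection with made-up numbers, done once by hand and once with cv2.projectPoints (distortion ignored for simplicity):

```python
import cv2
import numpy as np

# All values below are made up for illustration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
rvec = np.zeros((3, 1))                  # world frame coincides with camera frame here
tvec = np.zeros((3, 1))
point_3d = np.array([[0.1, -0.05, 2.0]]) # a 3D point in the world frame

# Manual computation: x_cam = R X + t, then (u, v, 1) ~ K x_cam
R, _ = cv2.Rodrigues(rvec)
x_cam = R @ point_3d.T + tvec
uv1 = (K @ x_cam) / x_cam[2]
print(uv1[:2].ravel())                   # -> [360. 220.]

# Same projection done by OpenCV (no distortion coefficients)
img_pts, _ = cv2.projectPoints(point_3d, rvec, tvec, K, None)
print(img_pts[0, 0])                     # -> [360. 220.]
```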
The reference coordinate system is the camera. rvec and tvec give the 6D pose of the marker with respect to the camera.

Opencv map 2D pixel coordinates to 3D world coordinates using fixed camera position

I am working on an autonomous vehicle project using opencv 3.0 and a raspberry pi. I am using the raspberry pi camera to do image processing for navigation. I am trying to relate pixel coordinates to 3D coordinates relative to my vehicle. The camera will be fixed on the vehicle facing forward with a known height and angle. If anyone can point me in the right direction that would be extremely helpful.
thanks!
Image warp
If getting the distance of the object from the car is what you are asking: OpenCV allows image warping. You can do a perspective transform to make the image seem as if it is being looked at from a bird's-eye view.
Some calculation and calibration will be required to associate a distance with the warp values you are using. Then you'll be able to calculate the distance between the car and the object.
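A sketch of such a warp (the source and destination points are made-up placeholders that would come from a one-off calibration using the known camera height and angle):

```python
import cv2
import numpy as np

# Blank stand-in for a real camera frame; in practice this comes from the
# Raspberry Pi camera.
frame = np.zeros((480, 640, 3), np.uint8)

# Four pixels on the ground plane (src) and where they should land in the
# bird's-eye view (dst). These are placeholder values chosen by hand.
src = np.float32([[220, 300], [420, 300],          # two far points on the ground
                  [ 60, 470], [580, 470]])         # two near points
dst = np.float32([[150,   0], [490,   0],
                  [150, 480], [490, 480]])

M = cv2.getPerspectiveTransform(src, dst)
birds_eye = cv2.warpPerspective(frame, M, (640, 480))

# Once pixels-per-metre in the warped view is known (from the same
# calibration), the row of a detected object maps to a forward distance
# from the vehicle.
```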
