I have a setup where a (2D) camera is mounted on the end-effector of a robot arm, similar to the setup shown in the OpenCV documentation.
I want to calibrate the camera and find the transformation from camera to end-effector.
I have already calibrated the camera using this OpenCV guide, Camera Calibration, with a checkerboard, and obtained the undistorted images.
My problem is about finding the transformation from camera to end-effector. I can see that OpenCV has a function, calibrateHandEye(), which supposedly should achieve this. I already have the "gripper2base" vectors and am missing the "target2cam" vectors. Should this be based on the size of the checkerboard squares, or what am I missing?
Any guidance in the right direction will be appreciated.
You are close to the answer.
Yes, it is based on the size of the checkerboard. But instead of directly taking those parameters and an image, this function takes target2cam. How do you get target2cam? Simply move your robot arm above the chessboard so that the camera can see it and take a picture. From the picture of the chessboard and the camera intrinsics, you can find target2cam. Computing the extrinsics from a chessboard is already provided in OpenCV.
Repeat this a couple of times at different robot poses and collect multiple target2cam transforms. Pass them to calibrateHandEye() and you will get what you need.
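Here is a rough sketch of the whole pipeline in Python/OpenCV. The pattern size, square size, file names and the gripper2base lists are placeholders for your own setup; K and dist are the intrinsics you already obtained:

```python
import cv2
import numpy as np

# Placeholders for your own data
K = np.load("camera_matrix.npy")        # intrinsics from your earlier calibration
dist = np.load("dist_coeffs.npy")       # distortion coefficients
image_paths = ["pose_%02d.png" % i for i in range(15)]   # one image per robot pose
# R_gripper2base, t_gripper2base: lists of rotations/translations from the robot
# controller, one per image, in the same order as image_paths.

pattern_size = (9, 6)     # inner corners of the checkerboard
square_size = 0.025       # edge length of one square in metres

# Board corner coordinates in the target frame, scaled by the square size
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
objp *= square_size

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
R_target2cam, t_target2cam = [], []
for path in image_paths:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        # NOTE: drop the matching gripper2base entry as well, so the lists stay aligned
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # Extrinsics of the board w.r.t. the camera = target2cam for this pose
    _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R_target2cam.append(cv2.Rodrigues(rvec)[0])
    t_target2cam.append(tvec)

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base, R_target2cam, t_target2cam)
```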
Related
I'm an OpenCV beginner, just wondering which way would be the best to measure
the distance from the camera to an object in a given video.
Every tutorial I've encountered starts by calibrating the camera and then undistorting the images. But in this case I'm not using my own camera, so is it necessary for me to use these functions?
In addition, I have some data about the recording camera, such as:
(fx,fy) = focal length
(cx,cy) = principal point
(width,height) = image shape
radial = radial distortion
(t1,t2) = tangential distortion.
Usually, one measures the distance between a single camera and an object using prior knowledge of the object. This could be the dimensions of a planar pattern or the 3D positions of features that can easily be detected automatically using image analysis.
The computation of the position of the object with respect to the camera is usually done by solving a PnP problem.
https://en.m.wikipedia.org/wiki/Perspective-n-Point
Solving the PnP equations does require the camera parameters (at least the intrinsic matrix, and ideally the distortion coefficients for more accuracy).
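As a rough sketch in Python/OpenCV: assuming you know the 3D coordinates of a few features on the object and can detect them in the frame, the distance then drops out of the translation returned by solvePnP. The fx, fy, cx, cy and distortion terms are the values you listed; everything else is a placeholder:

```python
import cv2
import numpy as np

# Intrinsic matrix assembled from the values you already have
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
# OpenCV expects (k1, k2, p1, p2[, k3]); map your radial/tangential terms accordingly
dist_coeffs = np.array([k1, k2, t1, t2])

# object_points: Nx3 known 3D coordinates on the object (e.g. corners of a planar pattern)
# image_points:  Nx2 corresponding detections in the current video frame
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
if ok:
    # Straight-line distance from the camera to the object's origin,
    # in the same unit as object_points
    distance = np.linalg.norm(tvec)
    print("distance:", distance)
```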
The intrinsic matrix and distortion coefficients can be estimated by calibrating your camera. OpenCV provides a handful of functions that you can use to calibrate your monocular camera. Alternatively, you can use a platform like CalibPro to compute these parameters for you.
[Disclaimer] I am the founder of CalibPro. I am happy to help you use the platform and I'd love your feedback on your experience using it.
I’m trying to find coordinates of object from one image in another image.
There are 2 fixed cameras arranged vertically, one above the other (for example, 10 cm between the cameras). They look in the same direction.
Using calibrateCamera from OpenCV I found the following parameters for each camera: ret, mtx, dist, rvecs, tvecs.
How do I calculate the coordinates of an object seen in one camera's image in the other camera's image? This assumes the object is visible in both camera images.
Consider this as a stereo setup and do stereo calibration.
It will provide you with the rotation and translation between the two cameras' coordinate frames, as well as the essential matrix and the fundamental matrix.
You can also estimate the fundamental matrix using just point correspondences; refer to the following link.
The fundamental matrix gives you a line in the other image along which the corresponding point must lie.
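A rough sketch of both routes in Python/OpenCV; the per-camera results, the shared chessboard detections and the pixel correspondences are placeholders from your own data:

```python
import cv2
import numpy as np

# Route 1: full stereo calibration, keeping the per-camera intrinsics you already have fixed.
# obj_points, img_points1, img_points2 come from a chessboard seen by BOTH cameras at once.
ret, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_points, img_points1, img_points2,
    mtx1, dist1, mtx2, dist2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# Route 2: estimate only the fundamental matrix from point correspondences
# F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# A point (u1, v1) in image 1 maps to an epipolar line a*x + b*y + c = 0 in image 2
pt1 = np.array([[u1, v1]], dtype=np.float32).reshape(-1, 1, 2)
a, b, c = cv2.computeCorrespondEpilines(pt1, 1, F)[0, 0]
```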
I am trying to compute the distance of an object from the camera using stereo vision approach. But before computing the disparity map, I must ensure that my cameras are calibrated.
I followed the OpenCV Python tutorial on camera calibration. They used a chessboard to calibrate their cameras. Now my question is: if I want to calibrate my cameras, do I need to take photos of a chessboard from various angles myself? Or can I use the 14 chessboard images they have made available?
My next question (depending on the answer to the previous one) is: if I can use their images to calibrate my cameras, what is the logic behind this? How can images taken with their cameras be used to calibrate my cameras, i.e. to get the camera matrix for my cameras? I would like to get more intuition about this camera calibration process.
Any help will be appreciated. Thanks.
1- No, you print a similar chessboard pattern yourself and use it to calibrate your own camera. You can use the code here.
2- The process basically goes like this: to determine the coordinates of a pixel in an image, you need to know two sets of parameters (counting only the most fundamental ones; I am excluding the distortion parameters here). The first set are the inner parameters of your camera (intrinsic parameters): the optical center of your camera (basically the center pixel of your sensor/lens) and the focal length. The intrinsic parameters are fixed for your camera unless you change some settings of the device or some settings drift over time. The second set is a position and rotation vector that describes where your camera is in the 3D world (the extrinsic parameters). The extrinsic parameters change for every example image you have. You can think of camera calibration as an optimization process that tries to find the best parameters (the parameters that give minimum reprojection error) for the example images you have given.
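A compressed sketch of that process with your own images (Python/OpenCV, essentially the tutorial code; the file pattern and pattern size are placeholders):

```python
import cv2
import numpy as np
import glob

pattern_size = (9, 6)                     # inner corners of the printed chessboard
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("my_camera_*.jpg"):     # photos taken with YOUR camera
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Optimizes the intrinsics plus one set of extrinsics per image by
# minimizing the reprojection error over all detections
rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```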
I need to evaluate whether a camera is viewing a 3D real object. To do so I count with the 3D model of the world I am moving and the pose from the robot my camera is attached to. So far, so good, the camera coordinate will be
[x, y, z]^T = R * X + T
where X is the real object position, and R and T are the rotation and translation from the world frame to the camera frame. The camera I am using is a 170° FOV camera, and I need to calibrate it in order to convert these [x, y, z] into pixel coordinates I can evaluate. If the pixel coordinates are greater than (0, 0) and smaller than (width, height), I will consider that the camera is looking at the object.
Can I do a similar test without the conversion to pixel coordinates? I guess not, so I am trying to calibrate the fisheye camera with https://bitbucket.org/amitibo/pyfisheye/src, which is a wrapper over the faulty opencv 3.1.0 fisheye model.
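For reference, the test I have in mind is roughly this (assuming I can get the fisheye intrinsics K and D out of the calibration; all the names here are placeholders):

```python
import cv2
import numpy as np

def camera_sees_point(X_world, R, T, K, D, width, height):
    # Point expressed in the camera frame: x_cam = R * X + T
    X_cam = R @ X_world.reshape(3, 1) + T.reshape(3, 1)
    if X_cam[2, 0] <= 0:
        return False                  # behind the camera (crude check for a 170 deg FOV)
    # Project with the OpenCV fisheye model; rvec/tvec are zero because the
    # point is already expressed in the camera frame
    pts = X_cam.reshape(1, 1, 3).astype(np.float64)
    img_pts, _ = cv2.fisheye.projectPoints(pts, np.zeros((3, 1)), np.zeros((3, 1)), K, D)
    u, v = img_pts[0, 0]
    return 0 <= u < width and 0 <= v < height
```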
Here is one of my calibration images:
Using the simplest test (https://bitbucket.org/amitibo/pyfisheye/src/default/example/test_fisheye.py), this is the comparison with the undistorted image:
It looks really nice, and here is the undistorted image:
How can I get the whole "butterfly" undistorted image? I am currently seeing the lower border...
I've been doing a lot of image processing recently in Python using OpenCV, and so far I've worked exclusively with 2-D images in the generic BGR format.
Now, I'm trying to figure out how to incorporate depth and work with depth information as well.
I've seen the documentation on creating simple point clouds using the left and right images of a stereo camera, but I was hoping to gain some intuition on depth-based cameras themselves, like the Kinect.
What kind of camera should I use for this purpose, and more importantly: how do I process these images in Python? I can't find a lot of documentation on handling RGB-D images in OpenCV.
If you want to work with depth-based cameras you can go for Time of Flight (ToF) cameras like the picoflexx and picomonstar. They will give you X, Y and Z values, where X and Y are the distances of the point from the camera centre (as in 2D space) and Z gives you the direct distance of that point (not the perpendicular one) from the camera centre.
For this camera and this 3D data processing you can use the Point Cloud Library (PCL).
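If all you have is a plain depth image plus the pinhole intrinsics (fx, fy, cx, cy), a minimal numpy sketch for turning it into a point cloud could look like this (the ToF cameras above may hand you X, Y, Z directly, in which case you can skip this step):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (Z per pixel, in metres) into an N x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    Z = depth.astype(np.float64)
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    points = np.dstack((X, Y, Z)).reshape(-1, 3)
    return points[points[:, 2] > 0]     # drop invalid / zero-depth pixels
```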