I am trying to find the real-world location of an object using YOLOv5, but I don't know whether I should apply OpenCV camera calibration before running detect.py or after the model gives me the center x and center y of the detected object. If I need to apply camera calibration after detection, which function do I have to use?
[Result without camera calibration](https://i.stack.imgur.com/kLrLB.jpg)
The image on the right is an output of my model. I am focusing on the object whose details are shown in the last line of the text file on the left. Since my camera didn't take the image from directly above these objects, results.txt shouldn't give the exact location.
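To make the question concrete, this is the kind of post-detection step I have in mind, assuming I already had intrinsics and distortion coefficients from a cv2.calibrateCamera run (all values below are placeholders):

```python
import cv2
import numpy as np

# Intrinsics and distortion coefficients from a hypothetical cv2.calibrateCamera
# run (all values here are placeholders)
camera_matrix = np.array([[1000.0,    0.0, 960.0],
                          [   0.0, 1000.0, 540.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])

# Center of the detected box as reported by detect.py, in pixel coordinates
center = np.array([[[850.0, 610.0]]])  # shape (1, 1, 2), placeholder values

# Remove lens distortion from the detected center; passing P=camera_matrix keeps
# the result in pixel coordinates of an ideal, distortion-free camera
undistorted = cv2.undistortPoints(center, camera_matrix, dist_coeffs,
                                  P=camera_matrix)
print(undistorted[0, 0])
```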
I am new here and very thankful to be part of this awesome community.
I am currently working on an object detection and planar localization project with the 6-DOF robot UR10e. I have already detected the object using a Mask R-CNN approach, got the segmented part, and obtained all the image features I need with OpenCV. From that I calculated the center (x, y) and the angle with respect to the z-axis. An RGB-D camera (Azure Kinect from Microsoft) will be installed at the robot TCP for detection and tracking.
The Azure Kinect already has a very useful ROS driver for getting the calibration parameters via ROS topics: https://github.com/microsoft/Azure_Kinect_ROS_Driver/blob/melodic/docs/usage.md
Here is an image frame of the object:
My question now is: how can I transform the center coordinates (x, y) and the object orientation (the angle) from the image frame (see the picture above) into the Azure camera coordinates for picking this object? I also assume the height between the object and the camera is known.
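To illustrate what I am after, this is roughly the back-projection I have in mind, assuming a simple pinhole model with intrinsics from the camera_info topic and the known height used as the depth Z (all numbers are placeholders):

```python
import numpy as np

# Intrinsics from the camera_info / calibration topics (placeholder values)
fx, fy = 600.0, 600.0
cx, cy = 640.0, 360.0

# Detected center in the image and the known camera-to-object height as depth
u, v = 512.0, 300.0   # pixel coordinates of the segment center (placeholder)
Z = 0.75              # metres, assumed known

# Pinhole back-projection: pixel coordinates + depth -> point in the camera frame
X = (u - cx) * Z / fx
Y = (v - cy) * Z / fy
point_in_camera_frame = np.array([X, Y, Z])
print(point_in_camera_frame)
```

My assumption is that the in-plane angle about the z-axis carries over directly from the image frame to the camera frame under this model (ignoring distortion); is that reasonable?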
I am trying to compute the distance of an object from the camera using a stereo vision approach. But before computing the disparity map, I must make sure that my cameras are calibrated.
I followed the OpenCV Python tutorial on camera calibration. They use a chessboard to calibrate their cameras. Now my question is: if I want to calibrate my cameras, do I need to take photos of a chessboard from various angles manually, or can I use the 14 chessboard images they have made available?
My next question (depending on the answer to the previous one) is: if I can use their images to calibrate my cameras, what is the logic behind that? How can images taken with their cameras be used to calibrate my cameras, i.e. to get the camera matrix for my cameras? I would like to build more intuition about this camera calibration process.
Any help will be appreciated. Thanks.
1- No, you print a similar chessboard pattern yourself and use it to calibrate your own camera. You can use the code here.
2- The process basically goes like this: to determine which pixel a 3D point projects to in an image, you need to know two sets of parameters (counting only the most fundamental ones; I am excluding the distortion parameters for now). The first set are the internal parameters of your camera (the intrinsic parameters): the optical center (basically the principal point of your sensor/lens) and the focal length. The intrinsic parameters are fixed for your camera unless you change some settings of the device, or some of them drift over time. The second set are a position and a rotation that describe where your camera is in the 3D world (the extrinsic parameters). The extrinsic parameters change for every example image you take. You can think of camera calibration as an optimization process that tries to find the best parameters (the ones that give minimum reprojection error) for the example images you provide.
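As a rough sketch of that process with OpenCV (the board size and image paths are placeholders):

```python
import glob
import cv2
import numpy as np

pattern_size = (9, 6)  # inner corners of the printed chessboard (placeholder)

# Chessboard corner coordinates in the board's own frame (z = 0), in square units
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
gray = None
for fname in glob.glob('calib_images/*.jpg'):  # placeholder path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Jointly optimizes the intrinsics (camera matrix, distortion) and the per-image
# extrinsics (rvecs, tvecs) by minimizing the reprojection error
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```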
I am using ROS to control a drone for real-time image processing applications. I have calibrated the camera using the cameracalibrator.py node in ROS. When I use the image_proc node to compare raw and rectified images, I don't get what I want: although the image is rectified, the border of the image is distorted in the opposite direction, as in the image below:
As a result, the rectified image is still useless for me.
So this time I calibrated my camera with OpenCV so that I can extract a region of interest (ROI) from the image after the undistortion step; with that, the rectified image becomes perfect for me. However, I need ROS to do this while streaming the rectified image via image_proc. Is there any way to do that?
You can directly use the image_proc/crop_decimate nodelet.
You can configure it using dynamic_reconfigure to set up ROI or interpolation.
However, since these are software operations, the interpolation methods should be handled with care in a real-time application (although the fastest nearest-neighbour method is the standard anyway).
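A small sketch of driving it from Python with the dynamic_reconfigure client; the node name and the exact parameter names below are assumptions based on a typical crop_decimate setup, so check rqt_reconfigure for the real ones:

```python
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node('roi_configurator')

# The node name is an assumption; use the name your crop_decimate nodelet runs under
client = Client('/camera/image_proc_crop_decimate', timeout=10)

# Parameter names follow image_proc's CropDecimate config; the values are placeholders
client.update_configuration({
    'x_offset': 100,    # top-left corner of the ROI in the rectified image
    'y_offset': 50,
    'width': 640,       # ROI size
    'height': 480,
    'decimation_x': 1,  # no downsampling
    'decimation_y': 1,
})
```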
I need to evaluate whether a camera is viewing a 3D real-world object. To do so, I have the 3D model of the world I am moving in and the pose of the robot my camera is attached to. So far, so good; the object's coordinates in the camera frame will be
[x, y, z]' = R * X + T
where X is the real object's position and R, T describe the camera pose. The camera I am using is a 170° FOV camera, and I need to calibrate it in order to convert these [x, y, z] into pixel coordinates I can evaluate. If the pixel coordinates are greater than (0, 0) and smaller than (width, height), I will consider that the camera is looking at the object.
Can I do a similar test without the conversion to pixel coordinates? I guess not, so I am trying to calibrate the fisheye camera with https://bitbucket.org/amitibo/pyfisheye/src, which is a wrapper over the faulty OpenCV 3.1.0 fisheye model.
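To make the test concrete, this is roughly what I have in mind using OpenCV's fisheye model directly (assuming a version where cv2.fisheye behaves; K, D and the point are placeholders):

```python
import cv2
import numpy as np

# Fisheye intrinsics from the calibration (placeholder values)
K = np.array([[300.0,   0.0, 640.0],
              [  0.0, 300.0, 480.0],
              [  0.0,   0.0,   1.0]])
D = np.array([0.1, 0.01, 0.0, 0.0])  # fisheye distortion k1..k4
width, height = 1280, 960

# Object position already expressed in the camera frame: X_cam = R * X + T
X_cam = np.array([[[0.2, -0.1, 1.5]]])  # shape (1, N, 3), placeholder point

# Project into pixel coordinates; rvec and tvec are zero because the point
# is already in the camera frame
pixels, _ = cv2.fisheye.projectPoints(X_cam, np.zeros((3, 1)), np.zeros((3, 1)), K, D)
u, v = pixels[0, 0]

# Visible if it lands inside the image and lies in front of the camera
in_view = (0 <= u < width) and (0 <= v < height) and X_cam[0, 0, 2] > 0
print("object visible:", in_view)
```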
Here is one of my calibration images:
Using the simplest test (https://bitbucket.org/amitibo/pyfisheye/src/default/example/test_fisheye.py), this is the comparison with the undistorted image:
It looks really nice, and here is the undistorted image:
How can I get the whole "butterfly" undistorted image? I am currently seeing the lower border...
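I suspect it could be a matter of relaxing the new camera matrix so the whole field of view fits, e.g. with balance=1.0 (this is just my assumption, with placeholder intrinsics and file name):

```python
import cv2
import numpy as np

# K, D from the fisheye calibration and a distorted frame (placeholders)
K = np.array([[300.0,   0.0, 640.0],
              [  0.0, 300.0, 480.0],
              [  0.0,   0.0,   1.0]])
D = np.array([0.1, 0.01, 0.0, 0.0])
img = cv2.imread('fisheye_frame.jpg')  # placeholder file name
h, w = img.shape[:2]

# balance=1.0 keeps the whole field of view (black borders instead of cropping);
# balance=0.0 crops to the valid central region
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=1.0)

map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite('undistorted_full_fov.jpg', undistorted)
```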
I have drawn a simple pattern of geometrical shapes on paper and placed it on an object as a marker. I am able to detect and analyze the pattern successfully. However, when the object moves a little faster, motion blur is introduced, which can be rotational or linear. The detected regions then overlap; e.g. a strip of arrows moving in the direction of the arrows is detected as a single line once motion blur appears. Therefore I need to fix this somehow, so that I can detect the individual arrows and analyze them.
Below are images of markers with and without motion blur.
Is there any python module or open source implementation that can be used to solve it?
The motion can be in any direction at any speed, so the PSF required by the Wiener and Lucy-Richardson methods is not known.
Also, it is a real-time tracking problem, so I need something that executes fast.
P.S. I'm using Python 2.7 and OpenCV 3.
This problem can be solved by limiting the exposure time of your camera. This can be done in OpenCV with:
cap.set(cv2.CAP_PROP_EXPOSURE, 40)
or using the v4l2-ctl command line utility.
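A slightly fuller OpenCV sketch; the device index and the exposure value below are placeholders, the value's units are driver-dependent, and some drivers also need auto-exposure disabled first:

```python
import cv2

cap = cv2.VideoCapture(0)  # placeholder device index

# The value's meaning is backend/driver dependent (raw driver units on V4L2,
# log2 seconds on some other backends); experiment with your camera
cap.set(cv2.CAP_PROP_EXPOSURE, 40)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('short exposure', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```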
The first step is to check whether your camera supports the relevant OpenCV properties, such as
CAP_PROP_FRAME_WIDTH
CAP_PROP_FRAME_HEIGHT
in order to verify that the camera is suitable.
The second step is to use CAP_PROP_EXPOSURE, e.g.
cap.set(cv2.CAP_PROP_EXPOSURE, 40)
The value can be changed accordingly to avoid motion blur.
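A small sketch of that check; the device index is a placeholder, and unsupported properties usually just return 0/-1 from get() or False from set():

```python
import cv2

cap = cv2.VideoCapture(0)  # placeholder device index

# Unsupported properties usually return 0 (or -1) from get() and False from set(),
# so this only gives a rough indication of what the driver exposes
print("width:   ", cap.get(cv2.CAP_PROP_FRAME_WIDTH))
print("height:  ", cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("exposure:", cap.get(cv2.CAP_PROP_EXPOSURE))

if not cap.set(cv2.CAP_PROP_EXPOSURE, 40):
    print("Manual exposure does not seem to be supported by this driver.")

cap.release()
```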