I am working on camera calibration using OpenCV in Python. I have already done the calibration using cv2.calibrateCamera and obtained the camera matrix and distortion coefficients. I have also checked the validity of the camera matrix; in other words, the estimated focal length is very close to the lens's focal length from the datasheet (I know the pixel size and the focal length in mm from the datasheet). I should mention that, in order to undistort new images, I follow the steps below, as I NEED to keep all source pixels in the undistorted images.
import cv2  # camera_matrix, dist_coefs and img come from the calibration / capture above

alpha = 1.   # to keep all source pixels
scale = 1.   # to change the output image size
w, h = 200, 200  # original image size captured by the camera
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coefs, (w, h), alpha, (int(scale * w), int(scale * h)))
mapx, mapy = cv2.initUndistortRectifyMap(camera_matrix, dist_coefs, None, newcameramtx, (w, h), cv2.CV_32FC1)
dst = cv2.remap(img, mapx, mapy, cv2.INTER_CUBIC)
x_, y_, w_, h_ = roi
dst_cropped = dst[y_:y_+h_, x_:x_+w_]
And now the issues and my questions:
The source images suffer from strong positive radial distortion, and the dst images resulting from the undistortion process are satisfactory; the positive radial distortion seems to be cancelled, at least visually. Because of alpha = 1. I also have all source pixels in the dst image. However, the roi is really small and it crops a region in the middle of the image; I could say that dst_cropped only contains the pixels close to the center of dst. According to the links below:
cv2.getOptimalNewCameraMatrix returns ROI of [0,0,0,0] on some data sets
https://answers.opencv.org/question/28438/undistortion-at-far-edges-of-image/?answer=180493#post-id-180493
I found that the issue might be due to my dataset, so I tried to balance the dataset to have more images with the chessboard close to the image boundaries. I repeated the calibration and the obtained results are very close to the first trial; however, the same effect is still present in the dst_cropped images. I also tried to play with the alpha parameter, but any value less than 1. does not keep all source pixels in the dst image.
Considering all of the above, it seems I am obliged to keep using the dst images instead of the dst_cropped ones; then another issue arises from the dst size, which is the same as the source image (w, h). It is clear that, because of alpha=1., dst contains all source pixels as well as zero pixels, but my question is how I can keep the resolution as before. If I am not mistaken, it seems all points are mapped and the resulting image is then scaled down to fit (w, h). So how can I force the undistortion to KEEP the resolution as before? For example, if some points are mapped to (-100,-100) or (300,300), dst should be [400,400] and not [200,200]. How can I expand the image instead of scaling it down?
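To make the "expand instead of scale down" idea concrete, here is a rough, untested sketch of what I have in mind (it reuses w, h, camera_matrix, dist_coefs and img from my snippet above; the canvas names are mine): undistort the image border points with the original camera matrix as P, so the focal length (and hence the resolution) stays unchanged, then only shift the principal point and enlarge the output size to cover all mapped pixels.
import cv2
import numpy as np

# sample the border of the source image
xs = np.linspace(0, w - 1, 50)
ys = np.linspace(0, h - 1, 50)
border = np.array([[x, 0] for x in xs] + [[x, h - 1] for x in xs] +
                  [[0, y] for y in ys] + [[w - 1, y] for y in ys],
                  np.float32).reshape(-1, 1, 2)

# undistort the border points, reprojecting with the ORIGINAL camera matrix
# so the scale (resolution) is unchanged
und = cv2.undistortPoints(border, camera_matrix, dist_coefs, P=camera_matrix)
x_min, y_min = und.reshape(-1, 2).min(axis=0)
x_max, y_max = und.reshape(-1, 2).max(axis=0)

# expanded canvas that covers every mapped pixel
canvas_w = int(np.ceil(x_max - x_min))
canvas_h = int(np.ceil(y_max - y_min))

# same focal lengths, principal point shifted so nothing falls outside
newcameramtx = camera_matrix.copy()
newcameramtx[0, 2] -= x_min
newcameramtx[1, 2] -= y_min

mapx, mapy = cv2.initUndistortRectifyMap(camera_matrix, dist_coefs, None,
                                         newcameramtx, (canvas_w, canvas_h), cv2.CV_32FC1)
dst_big = cv2.remap(img, mapx, mapy, cv2.INTER_CUBIC)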
Thanks in advance for your help or advice.
I am working on a stereo vision project. My goal is to locate the 3D coordinates of a point on a target that is marked by a laser point.
I do the stereo calibration with full-size pictures. After getting the parameters, "initUndistortRectifyMap" is applied to get the mapping data "map1" and "map2".
cv.initUndistortRectifyMap( cameraMatrix, distCoeffs, R, newCameraMatrix, size, m1type[, map1[, map2]] ) -> map1, map2
Since my target is just a small area and I would like to increase the acquisition frame rate, my cameras acquire an ROI instead of full-size pictures.
Here comes my problem.
Can I just remap the ROI of an image instead of the full picture?
It is easy to remap a picture of the same size as map1 and map2 with the remap function; however, how can I remap just the ROI of the picture?
cv.remap( src, map1, map2, interpolation[, dst[, borderMode[, borderValue]]] ) -> dst
Note: I tried cropping the ROI out of "map1" and "map2", but remap does not simply map pixels from the source picture to the destination picture.
According to https://stackoverflow.com/a/34265822/18306909, I cannot directly use map_x and map_y to get the destination of the ROI.
As stated in the docs you refer to, it is dst(x, y) = src(map_x(x, y), map_y(x, y)). Transforming points dst -> src is easy (lookup in map_x and map_y), but the OP wants the other (more natural) direction: src -> dst. This is admittedly confusing because cv::remap works "inversely" (for numerical stability reasons). I.e., in order to map an image src -> dst, you supply a mapping from dst -> src. Unfortunately, that's only efficient when transforming many points on a regular grid (i.e. image). Transforming a single random point is pretty difficult. – pasbi Feb 24, 2021 at 10:17
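To make what I am after a bit more concrete, here is a rough sketch of what I imagine could work (unverified); it assumes map1/map2 were built as CV_32FC1 for the full sensor size, that the camera delivers a window src_roi starting at (roi_x, roi_y) of the full frame, and all of these names and numbers are placeholders of mine:
import cv2

# hypothetical window the camera delivers (offset and size within the full frame)
roi_x, roi_y, roi_w, roi_h = 400, 300, 256, 256

# keep only the part of the maps that produces the destination ROI ...
map1_roi = map1[roi_y:roi_y + roi_h, roi_x:roi_x + roi_w].copy()
map2_roi = map2[roi_y:roi_y + roi_h, roi_x:roi_x + roi_w].copy()

# ... and shift the stored absolute source coordinates so they index
# into the cropped source image instead of the full frame
map1_roi -= roi_x  # map1 holds source x coordinates (CV_32FC1 case)
map2_roi -= roi_y  # map2 holds source y coordinates

# only valid while all source pixels needed by the ROI lie inside the
# acquired window; otherwise the window would have to be padded or enlarged
dst_roi = cv2.remap(src_roi, map1_roi, map2_roi, cv2.INTER_LINEAR)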
I calibrated my camera following this tutorial, using 20 images of a point pattern.
The drawn point centers look suitable; however, the reprojection error I obtain is 11.5 pixels, which seems large to me. No subpixel refinement is done yet.
Next, I am using the same images with the calibration data from above to find the poses of the point pattern, using the solvePnP function.
Here, as shown in the following pictures, it seems as if the center is always found correctly; however, the drawn tripod is off. Its ends should correspond to
(1,0,0), (0,1,0) and (0,0,-1)
respectively.
My question is: why is the tripod randomly off? I would be happy about any hint.
Thanks
Unfortunately, not having any rep, I can't post pictures here, thus just links...
img 1
img 2
img 3
img 4
img 5
Update:
It seems to be a problem with using solvePnP:
I reprojected all the object points at their positions found during calibration, which looks good:
calibration
However, when using solvePnP, different rvecs and tvecs are returned, resulting in wrong projections of the object points.
solvePnP
Any thoughts are welcome ;-)
Here is the code showing how solvePnP is used:
import cv2
import numpy as np

# gray is a grayscale image of the calibration plate
# objp is an array of floats containing the object points
# camera_matrix and distortion_coefficients are imported from the previous calibration
axis = np.float32([[1,0,0], [8,0,0], [0,0,-1]])
shape = (4, 11)
ret, centers = cv2.findCirclesGrid(gray, shape, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
if ret == True:
    # Find the rotation and translation vectors.
    ret, rvecs, tvecs = cv2.solvePnP(objp, centers, camera_matrix,
                                     distortion_coefficients)
    # Project the 3D axis points to the image plane.
    imgpts, jac = cv2.projectPoints(axis, rvecs, tvecs, camera_matrix,
                                    distortion_coefficients)
I think the problem is in the distortion coefficients. These coefficients probably differ too much between images, which should not happen. Try to calibrate your camera with this tutorial:
https://learnopencv.com/camera-calibration-using-opencv/
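If you want to check this quantitatively, here is a small sketch for per-image reprojection errors, assuming you still have objpoints, imgpoints, rvecs and tvecs from cv2.calibrateCamera (these names follow the tutorials, not your post); images with much larger errors usually point at bad detections or an unstable calibration:
import cv2
import numpy as np

# per-image reprojection error from the calibration results
for i in range(len(objpoints)):
    proj, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i],
                                camera_matrix, distortion_coefficients)
    diff = imgpoints[i].reshape(-1, 2) - proj.reshape(-1, 2)
    rms = np.sqrt((diff ** 2).sum(axis=1).mean())
    print("image %d: RMS reprojection error = %.2f px" % (i, rms))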
I am trying to find all the circular particles in the image attached. This is the only image I have (along with its inverse).
I have read this post, yet I can't use HSV values for thresholding. I have tried using the Hough transform.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=0.01, minDist=0.1, param1=10, param2=5, minRadius=3,maxRadius=6)
and I use the following code to plot:
names = [circles]
for nums in names:
    color_img = cv2.imread(path)
    blue = (211, 211, 211)
    for x, y, r in nums[0]:
        # cv2.circle needs integer coordinates
        cv2.circle(color_img, (int(x), int(y)), int(r), blue, 1)
    plt.figure(figsize=(15, 15))
    plt.title("Hough")
    plt.imshow(color_img, cmap='gray')
The following code was to plot the mask:
for masks in names:
    black = np.zeros(img_gray.shape)
    for x, y, r in masks[0]:
        cv2.circle(black, (int(x), int(y)), int(r), 255, -1)  # -1 to draw filled circles
    plt.imshow(black, cmap='gray')
Yet I am only able to get the following mask, which is fairly poor.
This is an image of what is considered a particle and what is not.
One simple approach involves slightly eroding the image to separate touching circular objects, then doing a connected component analysis and discarding all objects larger than some chosen threshold, and finally dilating the image back so the circular objects are approximately their original size again. We can do this dilation on the labelled image, so that the separated objects are retained.
I'm using DIPlib because I'm most familiar with it (I'm an author).
import diplib as dip
a = dip.ImageRead('6O0Oe.png')
a = a(0) > 127  # the PNG is a color image, but OP's image is binary,
                # so we binarize here to simulate OP's condition
separation = 7 # tweak these two parameters as necessary
size_threshold = 500
b = dip.Erosion(a, dip.SE(separation))
b = dip.Label(b, maxSize=size_threshold)
b = dip.Dilation(b, dip.SE(separation))
Do note that the image we use here seems to be a zoomed-in screen grab rather than the original image OP is dealing with. If so, the parameters must be made smaller to identify the smaller objects in the smaller image.
My approach is based on a simple observation: most of the particles in your image have approximately the same perimeter, and the "not particles" have a greater perimeter than they do.
First, have a look at the RANSAC algorithm and how it finds inliers and outliers. It is basically meant for 2D data, but we will apply the idea to 1D data in our case.
In your case, I am calling the correct particles inliers and the incorrect ones outliers.
The data we have to work on will be the perimeters of these particles. To get the perimeters, find the contours in the image and get the perimeter of each contour. Refer to this for information about contours.
Now we have the data, knowledge of the RANSAC algorithm, and the simple observation mentioned above. In this data, we have to find the densest, most compact cluster, which will contain all the inliers; the rest will be outliers.
Now let's assume the inliers are in the range of 40-60 and the outliers are beyond 60. Let's define a threshold value T and start with T = 0. We say that, for each point in the data, the inliers for that point are the values in the range (value of that point - T, value of that point + T).
First iterate over all the points in the data, count the number of inliers for each point for this T, and store this information; find the maximum number of inliers possible for this value of T. Then increment T by 1 and again find the maximum number of inliers possible for that T. Repeat these steps, incrementing T one step at a time.
There will be a range of values of T for which the maximum number of inliers stays the same. These inliers are the particles in your image, and the particles whose perimeter is greater than these inliers are the outliers, thus the "not particles" in your image.
I have tried this algorithm on test cases similar to yours and it works; I am always able to determine the outliers. I hope it works for you too.
One last thing: I see that the boundaries of your particles are irregular and not smooth. If this algorithm doesn't work for you on this image, try to smooth the boundaries first and then use it.
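In case it helps, here is a rough sketch of the scan described above (OpenCV 4 findContours signature; `binary` is assumed to be your thresholded particle image, and the plateau length is a parameter I made up):
import cv2
import numpy as np

# one perimeter value per blob
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
perims = np.array([cv2.arcLength(c, True) for c in contours])

def max_inliers(T):
    # for every perimeter, count how many perimeters fall within +/- T of it
    return max(int(np.sum(np.abs(perims - p) <= T)) for p in perims)

# grow T until the maximum inlier count stays constant (the plateau)
prev, plateau, T = -1, 0, 0
while plateau < 5:            # 5 consecutive equal counts = stable enough
    cur = max_inliers(T)
    plateau = plateau + 1 if cur == prev else 0
    prev, T = cur, T + 1

# the densest cluster = the real particles; the rest = "not particles"
center = max(perims, key=lambda p: np.sum(np.abs(perims - p) <= T))
particles = [c for c, p in zip(contours, perims) if abs(p - center) <= T]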
We need to detect whether the images produced by our tunable lens are blurred or not.
We want to find a proxy measure for blurriness.
My current thinking is to first apply a Sobel filter along the x direction, because the jumps or stripes are mostly along this direction, then compute the x-direction marginal means, and finally compute the standard deviation of these marginal means.
We expect this standard deviation to be larger for a clear image and smaller for a blurred one, because clear images should have larger jumps in pixel intensity.
But we get the opposite results. How could we improve this blurriness measure?
import cv2
import matplotlib.pyplot as plt

def sobel_image_central_std(PATH):
    # use the blue channel
    img = cv2.imread(PATH)[:, :, 0]

    # extract the central part of the image
    hh, ww = img.shape
    hh2 = hh // 2
    ww2 = ww // 2
    hh4 = hh // 4
    ww4 = ww // 4  # was hh // 4, which mixed up height and width
    img_center = img[hh4:(hh2 + hh4), ww4:(ww2 + ww4)]

    # Sobel operator along x
    sobelx = cv2.Sobel(img_center, cv2.CV_64F, 1, 0, ksize=3)
    x_marginal = sobelx.mean(axis=0)
    plt.plot(x_marginal)
    return x_marginal.std()
Blur #1
Blur #2
Clear #1
Clear #2
In general:
Is there a way to detect if an image is blurry?
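The usual quick proxy from that thread is the variance of the Laplacian; a minimal sketch (the file names are placeholders, and the decision threshold has to be tuned on known sharp/blurred samples):
import cv2

def blur_score(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # low variance of the Laplacian = little high-frequency content = likely blurred
    return cv2.Laplacian(img, cv2.CV_64F).var()

print(blur_score('clear_1.png'), blur_score('blur_1.png'))  # placeholder file names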
You can combine this calculation with your other question, where you are searching for the central angle.
Once you have the angle (and the center, which may lie outside the image), you can apply an axis transformation to remove the circular component of the cone. Instead, you get x (radius) and y (angle), where y runs along the circular arcs.
Maybe you can get the center of the image from the camera set-up.
Then you don't need to calculate it using the intersection of the edges from the central angle. Or just do it manually once if it is fixed for all images.
Look at polar coordinate systems.
Due to the shape of the cone, the image will be denser at the peak, but this should be a fixed factor. It will probably bias the result when calculating the blurriness along the transformed image, though.
So what you could do to correct this is create a synthetic cone image with circular lines and apply the transformation to it. Again, this requires some trial and error.
But it should deliver some mask that you could use to correct the "blurriness bias".
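A rough sketch of that axis transformation with OpenCV's polar warp (needs a reasonably recent OpenCV for cv2.warpPolar; the file name, center and radius below are placeholders you would get from your set-up):
import cv2

img = cv2.imread('cone.png', cv2.IMREAD_GRAYSCALE)  # placeholder file name
center = (520.0, -150.0)  # cone apex, possibly outside the image (placeholder)
max_radius = 900.0        # large enough to cover the stripes (placeholder)

# x becomes radius and y becomes angle, so the circular arcs run along y
# and turn into straight lines that are easier to measure
polar = cv2.warpPolar(img, (img.shape[1], img.shape[0]), center, max_radius,
                      cv2.WARP_POLAR_LINEAR)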
I work on MRIs. The problem is that the images are not always centered. In addition, there are often black bands around the patient's body.
I would like to be able to remove the black borders and center the patient's body like this:
I have already tried to determine the edges of the patient's body by reading the pixel array, but I haven't come up with anything very conclusive.
In fact my solution works on only 50% of the images... I don't see any other way to do it...
Development environment: Python 3.7 + OpenCV 3.4
I'm not sure this is the standard or most efficient way to do this, but it seems to work:
import cv2
import numpy as np

# Load image as grayscale (since it's b&w to start with)
im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)
# Threshold it. I tried a few pixel values, and got something reasonable at min = 5
_,thresh = cv2.threshold(im,5,255,cv2.THRESH_BINARY)
# Find contours:
im2, contours, hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
# Put all contours together and reshape to (_,2).
# The first "column" will be your x values of your contours, and second will be y values
c = np.vstack(contours).reshape(-1,2)
# Extract the most left, most right, uppermost and lowermost point
xmin = np.min(c[:,0])
ymin = np.min(c[:,1])
xmax = np.max(c[:,0])
ymax = np.max(c[:,1])
# Use those as a guide of where to crop your image
crop = im[ymin:ymax, xmin:xmax]
cv2.imwrite('cropped.jpg', crop)
What you get in the end is this:
There are multiple ways to do this, and this answer is pretty much computer vision tips and tricks.
If the mass is in the center, and the area outside is always going to be black, you can threshold the image and then find the extreme pixels as you are already doing. I'd add 10 pixels to the border to account for variance in the thresholding process.
Or if the body is always similarly sized, you can find the centroid of the blob (white area in the threshold image), and then crop a fixed area around it.
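A rough sketch of that centroid idea (the threshold value and the fixed crop size are placeholders to tune on your data):
import cv2

im = cv2.imread('im.jpg', cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(im, 5, 255, cv2.THRESH_BINARY)

# centroid of the white blob from the image moments
M = cv2.moments(thresh, binaryImage=True)
cx, cy = int(M['m10'] / M['m00']), int(M['m01'] / M['m00'])

# crop a fixed window around the centroid (half-size is a placeholder)
half = 256
x0, y0 = max(cx - half, 0), max(cy - half, 0)
crop = im[y0:y0 + 2 * half, x0:x0 + 2 * half]
cv2.imwrite('cropped_centered.jpg', crop)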