I am rotating a bitmap using the three-shear method documented in these articles [1][2].
From about 0-90°, the quality is acceptable, but beyond that it gets progressively more distorted until it's unintelligible.
Can anyone help me locate what is going wrong? There are a few calls to methods from the application Cinema 4D's API, but I believe the issue is coming from the math. Thank you!
This is my shear function:
import math

def shear(angle, x, y):
    '''
    |1  -tan(𝜃/2)|   |1       0|   |1  -tan(𝜃/2)|
    |0      1    |   |sin(𝜃)  1|   |0      1    |
    '''
    # shear 1
    tangent = math.tan(angle / 2)
    new_x = round(x - y * tangent)
    new_y = y
    # shear 2
    new_y = round(new_x * math.sin(angle) + new_y)  # new_x is unchanged by this shear matrix
    # shear 3
    new_x = round(new_x - new_y * tangent)          # new_y is unchanged by this shear matrix
    return new_x, new_y
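For reference, the function expects the angle in radians and a point measured from the rotation center; a quick sanity call looks like:

print(shear(math.radians(45), 10, 5))  # prints the sheared (x, y), roughly (10, 5) rotated 45° about the origin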
This is the code in the draw function:
cos = math.cos(self.rotation)
sin = math.sin(self.rotation)

# Define the width and height of the destination image
newWidth = round(abs(w * cos) + abs(h * sin)) + 1
newHeight = round(abs(h * cos) + abs(w * sin)) + 1

destBmp = c4d.bitmaps.BaseBitmap()           # creates a new BaseBitmap instance for the destination image
destBmp.Init(newWidth, newHeight)            # initializes the bitmap
destAlpha = destBmp.AddChannel(True, False)  # adds an alpha channel

# Find the center of the source image for rotation
origCenterWidth = round(((w + 1) / 2) - 1)   # with respect to the source image
origCenterHeight = round(((h + 1) / 2) - 1)  # with respect to the source image

# Find the center of the destination image
newCenterWidth = round(((newWidth + 1) / 2) - 1)  # with respect to the destination image
newCenterHeight = round((newHeight / 2) - 1)      # with respect to the destination image

for xP in range(w):
    for yP in range(h):
        destBmp.SetPixel(int(xP), int(yP), 0, 0, 255)  # sets the destination bitmap's background color to blue

for i in range(h):
    for j in range(w):
        # coordinates of the pixel with respect to the center of the source image
        x = w - 1 - j - origCenterWidth
        y = h - 1 - i - origCenterHeight
        # apply the shear transformation
        new_x, new_y = shear(self.rotation, x, y)
        # with rotation the center changes, so re-express the point relative to the new center
        new_y = newCenterHeight - new_y
        new_x = newCenterWidth - new_x
        alphaValue = sourceBmp.GetAlphaPixel(alphaChannel, j, i)  # gets the source pixel's alpha
        col = sourceBmp.GetPixelDirect(j, i)                      # gets the source pixel's color as a color Vector
        destBmp.SetAlphaPixel(destAlpha, int(new_x), int(new_y), alphaValue)          # sets the destination pixel's alpha
        destBmp.SetPixel(int(new_x), int(new_y), int(col.x), int(col.y), int(col.z))  # sets the destination pixel's color
I ran across the exact same problem today. It seems there is something about the shearing method that is most accurate between 315 and 45 degrees and noticeably degrades between 90 and 270. What I did to get around this was flip the image on both x and y if the rotation is between 90 and 270, and then tack on an extra 180° to bring it back into the desired range of rotation.
Here's basically what that looked like:
if (rotation > 90 && rotation < 270) {
    scale.x = scale.x * -1.0f;
    scale.y = scale.y * -1.0f;
    rotation += 180;
    if (rotation >= 360) {
        rotation -= 360;
    }
}
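In Python terms, the same normalization might look roughly like this (just a sketch; scale_x and scale_y stand for whatever mirroring mechanism your renderer exposes):

def normalize_rotation(rotation_deg, scale_x, scale_y):
    # Mirror on both axes and add 180 degrees so the effective angle
    # stays inside the range where the three-shear method behaves well.
    if 90 < rotation_deg < 270:
        scale_x *= -1.0
        scale_y *= -1.0
        rotation_deg = (rotation_deg + 180) % 360
    return rotation_deg, scale_x, scale_y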
I have an image with many cars; every car has polygon coordinates and keypoints. I use this code to crop an object by its polygon and get the new keypoints.
x,y,w,h = cv2.boundingRect(points_poly_int)
cropped_img = img[y:y+h,x:x+w]
head_coords_after_crop = np.asarray([head_coords_old[0] - x, head_coords_old[1] -y])
center_coords_after_crop = np.asarray([center_coords_old[0] - x, center_coords_old[1] -y])
Here is an example of the cropped image and keypoints:
What I need is to rotate the whole image by an arbitrary angle and remap the polygon and keypoint coordinates for every object.
Here is the method, which returns the rotated image and the transformation matrix:
def rotate_image(mat, angle):
    """
    Rotates an image (angle in degrees) and expands the canvas to avoid cropping.
    """
    height, width = mat.shape[:2]  # image shape has 3 dimensions
    image_center = (width / 2, height / 2)  # getRotationMatrix2D needs (x, y), i.e. (width, height), the reverse of shape

    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.)

    # take the absolute cos and sin from the rotation matrix
    abs_cos = abs(rotation_mat[0, 0])
    abs_sin = abs(rotation_mat[0, 1])

    # find the new width and height bounds
    bound_w = int(height * abs_sin + width * abs_cos)
    bound_h = int(height * abs_cos + width * abs_sin)

    # subtract the old image center (bringing the image back to the origin) and add the new center coordinates
    rotation_mat[0, 2] += bound_w / 2 - image_center[0]
    rotation_mat[1, 2] += bound_h / 2 - image_center[1]

    # rotate the image with the new bounds and the translated rotation matrix
    rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))
    return rotated_mat, rotation_mat
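As a quick sanity check, here is a minimal sketch of applying such a 2x3 affine matrix to a single point in homogeneous form (the center, angle, and point values are only illustrative):

import cv2
import numpy as np

rot_mat = cv2.getRotationMatrix2D((100, 100), 45, 1.0)  # a 2x3 affine matrix, like the one returned above
pt = np.array([120.0, 80.0, 1.0])                       # (x, y, 1) in the original image
x_new, y_new = rot_mat @ pt                             # coordinates after the rotation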
Next, I multiply the old coordinates by the transformation matrix. Here is the code:
img_rotated, C = rotate_image(img, 180)

# Remap polygon coordinates
ones = np.ones((points_poly.shape[0], 1))
new_poly = np.hstack((points_poly, ones))
new_poly = (C @ new_poly.T).T
new_poly = new_poly.astype(np.int32)

# Crop by the new polygon
x, y, w, h = cv2.boundingRect(new_poly)
cropped_img = img_rotated[y:y+h, x:x+w]

# Remap keypoint coordinates
head_coords_new = np.asarray([756.600, 1687.900, 1])
center_coords_new = np.asarray([762.300, 1708.400, 1])
head_coords_new = (C @ head_coords_new.T).T
center_coords_new = (C @ center_coords_new.T).T
head_coords_new = np.asarray([head_coords_old[0] - x, head_coords_old[1] - y])
center_coords_new = np.asarray([center_coords_old[0] - x, center_coords_old[1] - y])
head_coords_new = head_coords_new.astype(np.int32)
center_coords_new = center_coords_new.astype(np.int32)
But the result is different from the first picture. Here is the new picture:
Somehow the keypoints shift, and it happens with every angle. I don't know how to fix it.
Here is the source image: https://drive.google.com/file/d/14K_MQHMwtWlw-QCQbaB5ecrREbWwyKhO/view?usp=sharing
And the polygon with keypoints:
{'keypoints': [{'id': 'head', 'pos': '756.600;1687.900'},
{'id': 'roof_center', 'pos': '762.300;1708.400'}],
'polygon': '{(759.700;1717.300);(770.000;1714.200);(762.000;1687.400);(756.600;1687.900);(751.200;1690.700);(759.700;1717.300)}'}
These are provided in case you wish to reproduce the issue.
Thanks in advance.
Here is the difference. The picture on the right is the first image rotated in an image viewer; the one on the left is the transformed picture.
Hi all, as the title says, is there a way to do this? For example, I want to crop the fourth quadrant of an image and turn the rest of the area black while retaining the original size. Currently, I get the center width and height of the image and then slice the pixels:
Cropped = I[centerHeight:,centerWidth:]
but that just stores the cropped fourth-quadrant image. Thanks!
I don't think there is a function in OpenCV that does exactly that. This function should solve your problem:
import numpy as np

def crop_image(img, cx, cy, w, h):
    """
    args:
        cx: x coordinate of the crop center
        cy: y coordinate of the crop center
        w: width of the crop
        h: height of the crop
    """
    result = np.zeros(img.shape, dtype=np.uint8)
    # numpy indexes images as [row, column] = [y, x]
    result[cy - h//2 : cy + h//2, cx - w//2 : cx + w//2] = img[cy - h//2 : cy + h//2, cx - w//2 : cx + w//2]
    return result
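For the fourth-quadrant case from the question, the call could look roughly like this (the file name is a placeholder, and the //4 arithmetic assumes even dimensions):

import cv2

img = cv2.imread('input.png')   # placeholder file name
H, W = img.shape[:2]
# keep the bottom-right quadrant, black out everything else
masked = crop_image(img, cx=3 * W // 4, cy=3 * H // 4, w=W // 2, h=H // 2)
cv2.imwrite('masked.png', masked)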
I wanted to make a bot for a game that looks for a certain item on the floor and then clicks on it. I managed to get the first part right (it even draws a rectangle around the item), but what's embarrassing is that I can't get the coordinates of that object right. I use the cv2.matchTemplate method. This is my code:
import cv2
import numpy as np
import pyautogui

img_bgr = cv2.imread('gra.png')
img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
template = cv2.imread('bones2.png', 0)
w, h = template.shape[::-1]

res = cv2.matchTemplate(img_gray, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.90
loc = np.where(res >= threshold)
for pt in zip(*loc[::-1]):
    cv2.rectangle(img_bgr, pt, (pt[0] + w, pt[1] + h), (0, 255, 255), 2)
    # here I wanted to move the mouse to the coordinates of a found item, however
    # I can't get these two right ↓ ↓
    pyautogui.moveTo( ? , ? , duration=0.5)

cv2.imshow('znalezione', img_bgr)
cv2.waitKey()
cv2.destroyAllWindows()
I tried this:
pyautogui.moveTo( (pt[0] *2 + w)/2 , (pt[1] *2 + h)/2 ,duration=0.5)
but this doesn't work at all. Can someone explain to me what pt actually is and how to get the coordinates?
Also, here is a screenshot of what I have achieved so far:
From my understanding, both OpenCV and pyautogui use the same coordinate system, illustrated here with an example 1920x1080 resolution.
0,0 X increases -->
+---------------------------+
| | Y increases
| | |
| 1920 x 1080 screen | |
| | V
| |
| |
+---------------------------+ 1919, 1079
OpenCV's cv2.rectangle function takes the top-left and bottom-right coordinates of the rectangle as parameters. Since you were able to draw the bounding box in your image, you have the correct coordinates of the ROI you want to examine. From the docs, the moveTo function takes two parameters: x and y. Assuming you want to move the mouse to the center of the bounding box, you can do
pyautogui.moveTo(pt[0] + w/2, pt[1] + h/2, duration=0.5)
First of all, you don't need such a complex calculation:
x = pt[0]
y = pt[1]
center_x = x + 0.5 * w
center_y = y + 0.5 * h
In terms of the points, I don't see any issues, so it is not a coordinate problem. There is a good chance it is an issue with the pyautogui call itself, but I could not verify this because I can't seem to install pyautogui on my PC.
Based on the example from the docs:
>>> pyautogui.moveTo(100, 200, 2)
Try calling it exactly like that first to rule out an issue with the last parameter. If that works, then it is a simple formatting issue.
If it does not, it might be an image conversion issue. pyautogui uses Pillow, which returns images in a format that must be adapted to work with OpenCV, so it is either a data type issue (RGB vs. BGR) or a coordinate issue (e.g. OpenCV refers to image coordinates, while pyautogui uses desktop coordinates).
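If the item is searched in a static file but the click has to happen on the live screen, one way to keep the two coordinate systems aligned is to match against a fresh screenshot. A minimal sketch, assuming a single full-screen monitor so screenshot pixels map 1:1 to desktop coordinates (the template file name and threshold are placeholders):

import cv2
import numpy as np
import pyautogui

screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)
screen_gray = cv2.cvtColor(screen, cv2.COLOR_BGR2GRAY)
template = cv2.imread('bones2.png', 0)
w, h = template.shape[::-1]

res = cv2.matchTemplate(screen_gray, template, cv2.TM_CCOEFF_NORMED)
loc = np.where(res >= 0.90)
for pt in zip(*loc[::-1]):   # pt is the (x, y) of the match's top-left corner
    pyautogui.moveTo(pt[0] + w / 2, pt[1] + h / 2, duration=0.5)  # move to the center of the match
    break                    # stop after the first match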
I made a small example for myself to play around with OpenCV's warpPerspective, but the output is not completely as I expected.
My input is a bar at a 45° angle. I want to transform it so that it is vertically aligned, at a 90° angle. No problem with that. However, what I don't understand is that everything around the actual destination points is black. The reason I don't understand this is that only the transformation matrix gets passed to the warpPerspective function, not the destination points themselves. So my expected output would be a bar at a 90° angle with most of the area around it yellow instead of black. Where is my error in reasoning?
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# helper function
def showImage(img, title):
    fig = plt.figure()
    plt.suptitle(title)
    plt.imshow(img)

# read and show the test image
img = mpimg.imread('test_transform.jpg')
showImage(img, "input image")

# source points
top_left = [194, 430]
top_right = [521, 103]
bottom_right = [549, 131]
bottom_left = [222, 458]
pts = np.array([bottom_left, bottom_right, top_right, top_left])

# target points
y_off = 400  # y offset
top_left_dst = [top_left[0], top_left[1] - y_off]
top_right_dst = [top_left_dst[0] + 39.6, top_left_dst[1]]
bottom_right_dst = [top_right_dst[0], top_right_dst[1] + 462.4]
bottom_left_dst = [top_left_dst[0], bottom_right_dst[1]]
dst_pts = np.array([bottom_left_dst, bottom_right_dst, top_right_dst, top_left_dst])

# generate a preview to show where the warped bar would end up
preview = np.copy(img)
cv2.polylines(preview, np.int32([dst_pts]), True, (0, 0, 255), 5)
cv2.polylines(preview, np.int32([pts]), True, (255, 0, 255), 1)
showImage(preview, "preview")

# calculate the transformation matrix
pts = np.float32(pts.tolist())
dst_pts = np.float32(dst_pts.tolist())
M = cv2.getPerspectiveTransform(pts, dst_pts)

# warp the image and draw the resulting image
image_size = (img.shape[1], img.shape[0])
warped = cv2.warpPerspective(img, M, dsize=image_size, flags=cv2.INTER_LINEAR)
showImage(warped, "warped")
The result using this code is:
Here's my input image test_transform.jpg:
And here is the same image with coordinates added:
By request, here is the transformation matrix:
[[ 6.05504680e-02 -6.05504680e-02 2.08289910e+02]
[ 8.25714275e+00 8.25714275e+00 -5.12245707e+03]
[ 2.16840434e-18 3.03576608e-18 1.00000000e+00]]
The ordering of your arrays, or the positions within them, might be at fault. Check this transformed image: the dst_pts array is np.array([[196,492],[233,494],[234,32],[196,34]]), which is more or less the blue rectangle in your preview image. (I made the coordinates myself to make sure they are right.)
NOTE: Your source and destination points must be listed in matching order.
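For example, keeping the source corners from the question and the destination corners above in the same order (bottom-left, bottom-right, top-right, top-left):

import numpy as np

pts     = np.float32([[222, 458], [549, 131], [521, 103], [194, 430]])  # source corners, from the question
dst_pts = np.float32([[196, 492], [233, 494], [234, 32], [196, 34]])    # destination corners, from above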
I'm working on a depth map with OpenCV. I can obtain it, but it is reconstructed from the left camera's origin, and there is a slight tilt of that camera; as you can see in the figure, the depth is "shifted" (the depth should be close and show no horizontal gradient):
I would like to express it as if seen with a zero angle. I tried the warpPerspective function, as you can see below, but I obtain a null field...
P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1)))
dst = cv2.warpPerspective(depth, P, (2048, 2048))
with:
#Projection 2D -> 3D matrix
A1 = np.zeros((4,3))
A1[0,0] = 1
A1[0,2] = -1024
A1[1,1] = 1
A1[1,2] = -1024
A1[3,2] = 1
#Rotation matrix around the Y axis
theta = np.deg2rad(5)
Rot = np.zeros((4,4))
Rot[0,0] = np.cos(theta)
Rot[0,2] = -np.sin(theta)
Rot[1,1] = 1
Rot[2,0] = np.sin(theta)
Rot[2,2] = np.cos(theta)
Rot[3,3] = 1
#Translation matrix on the X axis
dist = 0
Transl = np.zeros((4,4))
Transl[0,0] = 1
Transl[0,2] = dist
Transl[1,1] = 1
Transl[2,2] = 1
Transl[3,3] = 1
#Camera intrinsics matrix 3D -> 2D
cam = np.concatenate((C1,np.zeros((3,1))),axis=1)
cam[2,2] = 1
P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1)))
dst = cv2.warpPerspective(Z0_0, P, (2048*3, 2048*3))
EDIT LATER :
You can download the 32MB field dataset here: https://filex.ec-lille.fr/get?k=cCBoyoV4tbmkzSV5bi6. Then, load and view the image with:
from matplotlib import pyplot as plt
import numpy as np
img = np.load('testZ0.npy')
plt.imshow(img)
plt.show()
I have got a rough solution in place. You can modify it later.
I used the mouse handling operations available in OpenCV to crop the region of interest in the given heatmap.
(Did I just say I used a mouse to crop the region?) Yes, I did. To learn more about mouse functions in OpenCV, SEE THIS. Besides, there are many other SO questions that can help you in this regard. :)
Using those functions I was able to obtain the following:
Now to your question of removing the tilt. I used the homography principle by taking the corner points of the image above and mapping them onto a 'white' image of a definite size. I used the cv2.findHomography() function for this.
Now using the cv2.warpPerspective() function in OpenCV, I was able to obtain the following:
Now you can apply the required scale to this image as you wanted.
CODE:
I have also attached some snippets of code for your perusal:
#First I created an image of white color of a definite size
back = np.ones((435, 379, 3)) # size
back[:] = (255, 255, 255) # white color
Next, I obtained the corner points pts_src on the tilted image below:
pts_src = np.array([[25.0, 2.0],[403.0,22.0],[375.0,436.0],[6.0,433.0]])
I wanted the points above to be mapped to the points pts_dst given below:
pts_dst = np.array([[2.0, 2.0], [379.0, 2.0], [379.0, 435.0],[2.0, 435.0]])
Now I used the principle of homography:
h, status = cv2.findHomography(pts_src, pts_dst)
Finally, I mapped the original image onto the white image using a perspective transform.
fin = cv2.warpPerspective(img, h, (back.shape[1],back.shape[0]))
# img -> original tilted image.
# back -> image of white color.
Hope this helps! I also got to learn a great deal from this question.
Note: The points fed to cv2.findHomography() must be floats.
For more info on homography, visit THIS PAGE.
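Pulling those snippets together, a self-contained version might look like this (the input file name is a placeholder; the corner points are the ones listed above):

import cv2
import numpy as np

img = cv2.imread('tilted_crop.png')                  # placeholder: the cropped, tilted heatmap
back = np.ones((435, 379, 3), dtype=np.uint8) * 255  # white canvas of a definite size

pts_src = np.array([[25.0, 2.0], [403.0, 22.0], [375.0, 436.0], [6.0, 433.0]])  # corners of the tilted region
pts_dst = np.array([[2.0, 2.0], [379.0, 2.0], [379.0, 435.0], [2.0, 435.0]])    # where they should land

h, status = cv2.findHomography(pts_src, pts_dst)                   # 3x3 homography; points must be floats
fin = cv2.warpPerspective(img, h, (back.shape[1], back.shape[0]))  # warp onto the canvas size
cv2.imwrite('untilted.png', fin)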