I have an image with many cars; every car has polygon coordinates and keypoints. I use this code to crop an object by its polygon and get new keypoints:
x, y, w, h = cv2.boundingRect(points_poly_int)
cropped_img = img[y:y+h, x:x+w]
head_coords_after_crop = np.asarray([head_coords_old[0] - x, head_coords_old[1] - y])
center_coords_after_crop = np.asarray([center_coords_old[0] - x, center_coords_old[1] - y])
Here is an example of the cropped image and keypoints:
What I need is to rotate the whole image by an arbitrary angle and remap the polygon and keypoint coordinates for every object.
Here is a method which returns the rotated image and the transformation matrix:
def rotate_image(mat, angle):
    """
    Rotates an image (angle in degrees) and expands the image to avoid cropping.
    """
    height, width = mat.shape[:2]  # image shape has 3 dimensions
    image_center = (width/2, height/2)  # getRotationMatrix2D needs coordinates in (width, height) order, the reverse of shape
    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.)
    # the rotation matrix contains the cos and sin; take their absolute values
    abs_cos = abs(rotation_mat[0, 0])
    abs_sin = abs(rotation_mat[0, 1])
    # find the new width and height bounds
    bound_w = int(height * abs_sin + width * abs_cos)
    bound_h = int(height * abs_cos + width * abs_sin)
    # subtract the old image center (bringing the image back to the origin) and add the new center coordinates
    rotation_mat[0, 2] += bound_w/2 - image_center[0]
    rotation_mat[1, 2] += bound_h/2 - image_center[1]
    # rotate the image with the new bounds and the translated rotation matrix
    rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))
    return rotated_mat, rotation_mat
Next, I multiply the old coordinates by the transformation matrix. Here is the code:
img_rotated, C = rotate_image(img, 180)
# Remap polygon coordinates
ones = np.ones((points_poly.shape[0], 1))
new_poly = np.hstack((points_poly, ones))
new_poly = (C @ new_poly.T).T
new_poly = new_poly.astype(np.int32)
# Crop by the new polygon
x, y, w, h = cv2.boundingRect(new_poly)
cropped_img = img_rotated[y:y+h, x:x+w]
# Remap keypoint coordinates
head_coords_new = np.asarray([756.600, 1687.900, 1])
center_coords_new = np.asarray([762.300, 1708.400, 1])
head_coords_new = (C @ head_coords_new.T).T
center_coords_new = (C @ center_coords_new.T).T
head_coords_new = np.asarray([head_coords_old[0] - x, head_coords_old[1] - y])
center_coords_new = np.asarray([center_coords_old[0] - x, center_coords_old[1] - y])
head_coords_new = head_coords_new.astype(np.int32)
center_coords_new = center_coords_new.astype(np.int32)
But the result is different from the first picture. Here is the new picture:
Somehow the keypoints shift, and it happens at every angle. I don't know how to fix it.
Here is the source image: https://drive.google.com/file/d/14K_MQHMwtWlw-QCQbaB5ecrREbWwyKhO/view?usp=sharing
And the polygon with keypoints:
{'keypoints': [{'id': 'head', 'pos': '756.600;1687.900'},
{'id': 'roof_center', 'pos': '762.300;1708.400'}],
'polygon': '{(759.700;1717.300);(770.000;1714.200);(762.000;1687.400);(756.600;1687.900);(751.200;1690.700);(759.700;1717.300)}'}
That should let you reproduce the issue.
Thanks in advance.
Here is the difference. The right picture is the first image rotated in a picture viewer; the left is the transformed picture.
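For reference, the remapping step described above can also be written with cv2.transform, with the keypoints offset by the bounding box of the transformed polygon. A minimal sketch, not a verified fix, reusing the names from the code above (img, points_poly, head_coords_old):
import cv2
import numpy as np

img_rotated, C = rotate_image(img, 180)
# cv2.transform applies the 2x3 affine matrix to an array of points
new_poly = cv2.transform(points_poly.reshape(-1, 1, 2).astype(np.float32), C)
new_poly = new_poly.reshape(-1, 2).astype(np.int32)
x, y, w, h = cv2.boundingRect(new_poly)
cropped = img_rotated[y:y+h, x:x+w]
# transform a keypoint with the same matrix, then subtract the new crop offset
head_new = C @ np.array([head_coords_old[0], head_coords_old[1], 1.0])
head_after_crop = (head_new - [x, y]).astype(np.int32)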
How do I get ImageOps.fit(source28x32, (128, 128)) to fit without cropping off the top/bottom/sides? Do I really have to find the aspect ratio, resize accordingly so the enlarged version does not exceed 128x128, and then add border pixels (or center the image in a 128x128 canvas)? Mind you, the source can be of any ratio; the 28x32 is just an example.
source image (28x32)
fitted image (128x128)
This is my attempt so far; it's not particularly elegant:
def fit(im):
    size = 128
    x, y = im.size
    ratio = float(x) / float(y)
    if x > y:
        x = size
        y = size / ratio
    else:
        y = size
        x = size * ratio
    x, y = int(x), int(y)
    im = im.resize((x, y))
    new_im = Image.new('L', (size, size), 0)
    new_im.paste(im, ((size - x) // 2, (size - y) // 2))
    return new_im
New fitted image
Here is the function implemented in both PIL and cv2. The input can be of any size; the function finds the scale needed to fit the largest edge to the desired width, and then puts it onto a black square image of the desired width.
In PIL
def resize_PIL(im, output_edge):
    scale = output_edge / max(im.size)
    new = Image.new(im.mode, (output_edge, output_edge), 0)  # 0 = black in any mode
    paste = im.resize((int(im.width * scale), int(im.height * scale)), resample=Image.NEAREST)
    new.paste(paste, (0, 0))
    return new
In cv2
def resize_cv2(im, output_edge):
    scale = output_edge / max(im.shape[:2])
    new = np.zeros((output_edge, output_edge, 3), np.uint8)
    paste = cv2.resize(im, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
    new[:paste.shape[0], :paste.shape[1], :] = paste
    return new
With a desired width of 128:
Not shown: these functions also work on images larger than the desired size.
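For what it's worth, a usage sketch for the two functions above (the file name is a placeholder):
import cv2
from PIL import Image

fitted_pil = resize_PIL(Image.open("source.png"), 128)
fitted_cv2 = resize_cv2(cv2.imread("source.png"), 128)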
This works pretty well to fit the image to the size you want while filling in the rest with black space:
from PIL import Image, ImageOps

def fit(im, width):
    border = (max(im.width, im.height) - min(im.width, im.height)) // 2
    im = ImageOps.expand(im, border)
    im = ImageOps.fit(im, (width, width))
    return im
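A quick usage sketch (file names assumed):
from PIL import Image

im = Image.open("source.png")
fit(im, 128).save("fitted.png")  # 128x128 output, padded with black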
I'm pasting a randomly generated barcode on a background image.
This barcode has been randomly rotated, skewed, and scaled.
Then, this barcode is randomly placed onto the background image.
I'm trying to find out the coordinates of the actual barcode, ignoring the expanded black mask.
I'm a beginner in matrices and image manipulation so any help, especially in the math, would be appreciated.
This is where I generate the barcode using the pdf417gen library, along with the coordinates of the barcode.
import numpy as np
import os
import random
import sys

from pdf417gen import encode, render_image
from PIL import Image, ImageDraw

def generate_barcode(self):
    barcode = encode("random text data", columns=5, security_level=5)
    scale = 5
    ratio = 3
    padding = 5
    barcode_image = render_image(barcode, scale=scale, ratio=ratio, padding=padding)
    barcode_coords = np.array([
        [(barcode_image.width - padding) / float(barcode_image.width), (barcode_image.height - padding) / float(barcode_image.height)],
        [padding / float(barcode_image.width), (barcode_image.height - padding) / float(barcode_image.height)],
        [padding / float(barcode_image.width), padding / float(barcode_image.height)],
        [(barcode_image.width - padding) / float(barcode_image.width), padding / float(barcode_image.height)]
    ])
    return (barcode_coords, barcode_image)
Once I have the barcode's image and coordinates, I do the following:
1. transform the barcode's image,
2. attempt to match the coordinates with the image's transformation,
3. paste the image onto a background image,
4. then draw a red outline using the coordinates.
The red outline should outline the barcode's image.
Here's where I transform the barcode image and paste it to the background image.
def composite_images(self, background_image, barcode_coords, barcode_image):
    coords = barcode_coords
    barcode = barcode_image
    # instantiating the transformation variables
    scale = random.randrange(4, 50) / 100.0
    size = int(min(background_image.size) * scale)  # background_image.size returns (width, height)
    barcode = barcode.resize((int(size * 2.625), size))  # width:height ratio is 2.625:1
    rotation = random.randrange(0, 360)
    xstretch = random.randrange(0, 100) / 100.0
    ystretch = random.randrange(0, 100) / 100.0
    xshear = random.randrange(0, 100) / 100.0
    yshear = random.randrange(0, 100) / 100.0
    # set affine transform on the barcode coordinates
    affine_transform = self.get_affine_transform(rotation, xstretch, ystretch, xshear, yshear)
    coords = self.transform_coords(coords, affine_transform, True)
    expand_mask = self.transform_coords(np.array([  # shifts expand mask based on transformation
        [0.0, 0.0],
        [float(size * 2.625), 0.0],
        [float(size * 2.625), float(size)],
        [0.0, float(size)]
    ]), affine_transform, False)
    minx = min(expand_mask[:, 0])
    maxx = max(expand_mask[:, 0])
    miny = min(expand_mask[:, 1])
    maxy = max(expand_mask[:, 1])
    mat_inv = np.linalg.inv(np.array([  # the inverse matrix
        [affine_transform[0, 0], affine_transform[0, 1], -minx],
        [affine_transform[1, 0], affine_transform[1, 1], -miny],
        [0, 0, 1.0]
    ]))
    image_matrix = (mat_inv[0, 0], mat_inv[0, 1], mat_inv[0, 2],
                    mat_inv[1, 0], mat_inv[1, 1], mat_inv[1, 2])
    new_size = (int(maxx - minx), int(maxy - miny))
    # set affine transform on the barcode image using data from the coordinates' affine transformation
    barcode = barcode.transform(new_size, method=Image.AFFINE, data=image_matrix)
    # paste the barcode image onto a random position on the background image
    region_x = random.randrange(0, background_image.width - size)
    region_y = random.randrange(0, background_image.height - size)
    background_image.paste(barcode, (region_x, region_y))
    coords *= scale
    coords += [region_x / float(background_image.width), region_y / float(background_image.height)]
    return (coords, background_image)
def get_affine_transform(self, rotation, xstretch, ystretch, xshear, yshear):
    theta = -(rotation / 180.0) * np.pi
    return np.array([
        [np.cos(theta) * xstretch, -np.sin(theta) * xshear],
        [np.sin(theta) * ystretch, np.cos(theta) * yshear]
    ])
def transform_coords(self, coords, affine_transform, center):
    if center:
        coords -= (.5, .5)  # center on origin
    coords = np.dot(coords, affine_transform.T)
    if center:
        coords += (.5, .5)  # reset centering
    return coords
Now I draw the red outline using the coords and image (with pasted barcode) returned from composite_images().
def draw_red_outline(self, box_coords, image):
    outline = box_coords * [image.width, image.height]
    outline = outline.astype(int)
    outline = tuple(map(tuple, outline))
    draw = ImageDraw.Draw(image)
    draw.polygon(outline, outline=(255, 0, 0))
    del draw
    image.show()
I'm unsure as to where my math is going wrong.
To get the coordinates of the transformed points you can do the following. After getting the transformation matrix m, you apply it to the image:
transformed_img = cv2.warpPerspective(source_img, m, image_shape)
The transformed image then contains the result, with the coordinates you want to calculate plus some black region.
So, for each of the 4 points (assuming none of the coordinates is 0), the solution is the following:
point = np.array([w, h])  # coordinates of the source point (before the transform)
homg_point = [point[0], point[1], 1]  # homogeneous coordinates
transf_homg_point = m.dot(homg_point)  # transform
transf_homg_point /= transf_homg_point[2]  # divide by the scale factor
transf_point = transf_homg_point[:2]  # drop the homogeneous coordinate
print(transf_point)  # check the result
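The same computation can be vectorized over all four corners at once. A sketch, assuming m is the 3x3 matrix from above and w, h are the source image width and height (cv2.perspectiveTransform does the equivalent in a single call):
import numpy as np

corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float64)
homg = np.hstack([corners, np.ones((4, 1))])  # homogeneous coordinates
mapped = (m @ homg.T).T                       # apply the 3x3 matrix
mapped = mapped[:, :2] / mapped[:, 2:]        # divide by the scale factor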
I have a georeferenced TIFF; gdalinfo output:
Driver: GTiff/GeoTIFF
Files: generated.tiff
generated.tiff.aux.xml
Size is 6941, 4886
Coordinate System is `'
GCP Projection =
GEOGCS["WGS 84",
DATUM["WGS_1984",
SPHEROID["WGS 84",6378137,298.257223563,
AUTHORITY["EPSG","7030"]],
AUTHORITY["EPSG","6326"]],
PRIMEM["Greenwich",0],
UNIT["degree",0.0174532925199433],
AUTHORITY["EPSG","4326"]]
GCP[ 0]: Id=1, Info=
(0,0) -> (0.01,0.05886,0)
GCP[ 1]: Id=2, Info=
(6941,0) -> (0.07941,0.05886,0)
GCP[ 2]: Id=3, Info=
(6941,4886) -> (0.07941,0.01,0)
GCP[ 3]: Id=4, Info=
(0,4886) -> (0.01,0.01,0)
Metadata:
AREA_OR_POINT=Area
Software=paint.net 4.0
Image Structure Metadata:
INTERLEAVE=BAND
Corner Coordinates:
Upper Left ( 0.0, 0.0)
Lower Left ( 0.0, 4886.0)
Upper Right ( 6941.0, 0.0)
Lower Right ( 6941.0, 4886.0)
Center ( 3470.5, 2443.0)
There is second file containing a map marker image - called marker1.png (36x60 pixels).
I want to overlay marker1.png on top of the above generated.tiff - so that its top left corner is located at coordinates 0.037,0.025 of the geotiff file. Visually the result should look like a google map with a single marker on top of it.
How would I go about achieving that?
I have managed to partially implement this, but I am not sure whether this is the right path.
import gdal
gdal.UseExceptions()
s = gdal.Open('generated.tiff')
drv = gdal.GetDriverByName("VRT")
vrt = drv.CreateCopy('test.vrt', s, 0)
band = vrt.GetRasterBand(1)
source_path = 'marker1.png'
source_band = 1
x_size = 36
y_size = 60
x_block = 36
y_block = 1
x_offset = 0
y_offset = 0
x_source_size = 36
y_source_size = 60
dest_x_offset = 2000
dest_y_offset = 2000
x_dest_size = 36
y_dest_size = 60
simple_source = '<SimpleSource><SourceFilename relativeToVRT="1">%s</SourceFilename>' % source_path + \
'<SourceBand>%i</SourceBand>' % source_band + \
'<SourceProperties RasterXSize="%i" RasterYSize="%i" DataType="Byte" BlockXSize="%i" BlockYSize="%i"/>' % (x_size, y_size, x_block, y_block) + \
'<SrcRect xOff="%i" yOff="%i" xSize="%i" ySize="%i"/>' % (x_offset, y_offset, x_source_size, y_source_size) + \
'<DstRect xOff="%i" yOff="%i" xSize="%i" ySize="%i"/></SimpleSource>' % (dest_x_offset, dest_y_offset, x_dest_size, y_dest_size)
band.SetMetadata({'source_0': simple_source}, "new_vrt_sources")
band.SetMetadataItem("NoDataValue", '1')
p = gdal.GetDriverByName("PNG")
p.CreateCopy('result.png', vrt, 0)
vrt = None
This uses pixel coordinates instead of geographical ones (but that conversion is easy); however, the marker images show up as black blobs (with the right dimensions) - it looks like the palette might be wrong?
I tried multiple different approaches, and none worked properly: either the colors or the transparency came out wrong.
Finally I just did it with the help of PIL, with the code below. It's just a few lines, it's actually readable (as opposed to anything I could think up using gdal) and, most importantly, it works.
Of course, it can be improved.
from PIL import Image, ImageFont, ImageDraw
from osgeo import gdal,ogr
image = 'generated.tiff'
src_ds = gdal.Open(image)
gt = src_ds.GetGeoTransform() # used to convert geographical coordinates to pixel coordinates
font = ImageFont.truetype("sans-serif.ttf", 16)
img = Image.open(image)
def add_marker(gt, watermark, font, img, mx, my, text):
    px = int((mx - gt[0]) / gt[1])  # x pixel
    py = int((my - gt[3]) / gt[5])  # y pixel
    wmark = Image.open(watermark)
    draw = ImageDraw.Draw(wmark)
    draw.text((12, 10), text, (0, 0, 0), font=font)
    img.paste(wmark, (px, py), wmark)
add_marker(gt, 'marker1.png', font, img, 0.012, 0.0132, "1")
img.save("result.png", "PNG")
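As an aside, GDAL can also invert the geotransform for you, so the geo-to-pixel conversion in add_marker could be done with library calls. A sketch, assuming a GDAL 2.x-style InvGeoTransform (older versions return a (success, gt) pair, so verify against your version):
from osgeo import gdal

inv_gt = gdal.InvGeoTransform(gt)                # inverse geotransform
px, py = gdal.ApplyGeoTransform(inv_gt, mx, my)  # geographic -> pixel
px, py = int(px), int(py)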
The following picture will tell you what I want.
I have the information of the rectangles in the image (width, height, center point and rotation degree). Now, I want to write a script to cut them out and save them as an image, but straighten them as well. As in, I want to go from the rectangle shown inside the image to the rectangle that is shown outside.
I am using OpenCV Python. Please tell me a way to accomplish this.
Kindly show some code, as examples of OpenCV Python are hard to find.
You can use the warpAffine function to rotate the image around a defined center point. The suitable rotation matrix can be generated using getRotationMatrix2D (where theta is in degrees).
You can then use NumPy slicing to cut the image.
import cv2
import numpy as np

def subimage(image, center, theta, width, height):
    '''
    Rotates OpenCV image around center with angle theta (in deg)
    then crops the image according to width and height.
    '''
    # Uncomment for theta in radians
    # theta *= 180/np.pi
    shape = (image.shape[1], image.shape[0])  # cv2.warpAffine expects shape in (width, height)
    matrix = cv2.getRotationMatrix2D(center=center, angle=theta, scale=1)
    image = cv2.warpAffine(src=image, M=matrix, dsize=shape)
    x = int(center[0] - width/2)
    y = int(center[1] - height/2)
    image = image[y:y+height, x:x+width]
    return image
Keep in mind that dsize is the shape of the output image. If the patch/angle is sufficiently large, edges get cut off (compare the image above) when the original shape is used, as is done above for simplicity. In that case, you could apply a scaling factor to shape (to enlarge the output image) and to the reference point for slicing (here center).
The above function can be used as follows:
image = cv2.imread('owl.jpg')
image = subimage(image, center=(110, 125), theta=30, width=100, height=200)
cv2.imwrite('patch.jpg', image)
I had problems with wrong offsets while using the solutions here and in similar questions.
So I did the math and came up with the following solution that works:
import cv2
import numpy as np
from math import cos, sin

def subimage(image, center, theta, width, height):
    theta *= np.pi / 180  # convert to radians
    v_x = (cos(theta), sin(theta))
    v_y = (-sin(theta), cos(theta))
    s_x = center[0] - v_x[0] * ((width - 1) / 2) - v_y[0] * ((height - 1) / 2)
    s_y = center[1] - v_x[1] * ((width - 1) / 2) - v_y[1] * ((height - 1) / 2)
    mapping = np.array([[v_x[0], v_y[0], s_x],
                        [v_x[1], v_y[1], s_y]])
    return cv2.warpAffine(image, mapping, (width, height),
                          flags=cv2.WARP_INVERSE_MAP, borderMode=cv2.BORDER_REPLICATE)
For reference, here is an image that explains the math behind it:
Note that
w_dst = width - 1
h_dst = height - 1
This is because the last pixel coordinate is width - 1 (or height - 1), not width or height.
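Usage mirrors the first answer's function; for instance, reusing the owl example values from above:
import cv2

image = cv2.imread('owl.jpg')
patch = subimage(image, center=(110, 125), theta=30, width=100, height=200)
cv2.imwrite('patch.jpg', patch)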
The other methods will work only if the content of the rectangle remains inside the rotated image after rotation, and they will fail badly in other situations. What if some part is lost? See an example below:
If you crop the rotated text area using the above method:
import cv2
import numpy as np

def main():
    img = cv2.imread("big_vertical_text.jpg")
    cnt = np.array([
        [[64, 49]],
        [[122, 11]],
        [[391, 326]],
        [[308, 373]]
    ])
    print("shape of cnt: {}".format(cnt.shape))
    rect = cv2.minAreaRect(cnt)
    print("rect: {}".format(rect))
    box = cv2.boxPoints(rect)
    box = np.int32(box)
    print("bounding box: {}".format(box))
    cv2.drawContours(img, [box], 0, (0, 0, 255), 2)
    img_crop, img_rot = crop_rect(img, rect)
    print("size of original img: {}".format(img.shape))
    print("size of rotated img: {}".format(img_rot.shape))
    print("size of cropped img: {}".format(img_crop.shape))
    new_size = (int(img_rot.shape[1]/2), int(img_rot.shape[0]/2))
    img_rot_resized = cv2.resize(img_rot, new_size)
    new_size = (int(img.shape[1]/2), int(img.shape[0]/2))
    img_resized = cv2.resize(img, new_size)
    cv2.imshow("original contour", img_resized)
    cv2.imshow("rotated image", img_rot_resized)
    cv2.imshow("cropped_box", img_crop)
    # cv2.imwrite("crop_img1.jpg", img_crop)
    cv2.waitKey(0)

def crop_rect(img, rect):
    # get the parameters of the small rectangle
    center = rect[0]
    size = rect[1]
    angle = rect[2]
    center, size = tuple(map(int, center)), tuple(map(int, size))
    # get the row and col count of the image
    height, width = img.shape[0], img.shape[1]
    print("width: {}, height: {}".format(width, height))
    M = cv2.getRotationMatrix2D(center, angle, 1)
    img_rot = cv2.warpAffine(img, M, (width, height))
    img_crop = cv2.getRectSubPix(img_rot, size, center)
    return img_crop, img_rot

if __name__ == "__main__":
    main()
This is what you will get:
Apparently, some of the parts are cut out! Why not directly warp the rotated rectangle, since we can get its four corner points with the cv2.boxPoints() method?
import cv2
import numpy as np

def main():
    img = cv2.imread("big_vertical_text.jpg")
    cnt = np.array([
        [[64, 49]],
        [[122, 11]],
        [[391, 326]],
        [[308, 373]]
    ])
    print("shape of cnt: {}".format(cnt.shape))
    rect = cv2.minAreaRect(cnt)
    print("rect: {}".format(rect))
    box = cv2.boxPoints(rect)
    box = np.int32(box)
    width = int(rect[1][0])
    height = int(rect[1][1])
    src_pts = box.astype("float32")
    dst_pts = np.array([[0, height-1],
                        [0, 0],
                        [width-1, 0],
                        [width-1, height-1]], dtype="float32")
    M = cv2.getPerspectiveTransform(src_pts, dst_pts)
    warped = cv2.warpPerspective(img, M, (width, height))
Now the cropped image becomes:
Much better, isn't it? If you check carefully, you will notice that there are some black areas in the cropped image. That is because a small part of the detected rectangle is out of the bounds of the image. To remedy this, you may pad the image a little bit and do the crop after that. There is an example illustrated in this answer.
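A minimal sketch of that padding idea, assuming a fixed pad of 100 pixels (chosen arbitrarily; it must exceed the rectangle's overhang) and the names from the code above:
import cv2

pad = 100
# pad the image with black borders, then shift the box corners into the padded frame
padded = cv2.copyMakeBorder(img, pad, pad, pad, pad,
                            borderType=cv2.BORDER_CONSTANT, value=(0, 0, 0))
M_pad = cv2.getPerspectiveTransform(src_pts + pad, dst_pts)
warped = cv2.warpPerspective(padded, M_pad, (width, height))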
Comparing the two methods of cropping the rotated rectangle from the image: this second method does not require rotating the whole image and deals with the problem more elegantly, with less code.
Here is a similar recipe for OpenCV version 3.4.0:
import cv2
import numpy as np

def getSubImage(rect, src):
    # Get center, size, and angle from rect
    center, size, theta = rect
    # Convert to int
    center, size = tuple(map(int, center)), tuple(map(int, size))
    # Get rotation matrix for rectangle
    M = cv2.getRotationMatrix2D(center, theta, 1)
    # Perform rotation on src image; dsize expects (width, height)
    dst = cv2.warpAffine(src, M, (src.shape[1], src.shape[0]))
    out = cv2.getRectSubPix(dst, size, center)
    return out

# findContours expects a single-channel image
img = cv2.imread('img.jpg', cv2.IMREAD_GRAYSCALE)
# Find some contours
thresh2, contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Get rotated bounding box
rect = cv2.minAreaRect(contours[0])
# Extract subregion
out = getSubImage(rect, img)
# Save image
cv2.imwrite('out.jpg', out)
This is my C++ version that performs the same task. I have noticed it is a bit slow. If anyone sees anything that would improve the performance of this function, then please let me know. :)
bool extractPatchFromOpenCVImage(cv::Mat& src, cv::Mat& dest, int x, int y, double angle, int width, int height) {
    // obtain the bounding box of the desired patch
    cv::RotatedRect patchROI(cv::Point2f(x, y), cv::Size2i(width, height), angle);
    cv::Rect boundingRect = patchROI.boundingRect();
    // check if the bounding box fits inside the image
    if (boundingRect.x >= 0 && boundingRect.y >= 0 &&
        (boundingRect.x + boundingRect.width) < src.cols &&
        (boundingRect.y + boundingRect.height) < src.rows) {
        // crop out the bounding rectangle from the source image
        cv::Mat preCropImg = src(boundingRect);
        // the rotational center relative to the pre-cropped image
        int cropMidX, cropMidY;
        cropMidX = boundingRect.width / 2;
        cropMidY = boundingRect.height / 2;
        // obtain the affine transform that maps the patch ROI in the image to the
        // dest patch image. The dest image will be an upright version.
        cv::Mat map_mat = cv::getRotationMatrix2D(cv::Point2f(cropMidX, cropMidY), angle, 1.0f);
        map_mat.at<double>(0, 2) += static_cast<double>(width / 2 - cropMidX);
        map_mat.at<double>(1, 2) += static_cast<double>(height / 2 - cropMidY);
        // rotate the pre-cropped image. The destination image will be
        // allocated by warpAffine()
        cv::warpAffine(preCropImg, dest, map_mat, cv::Size2i(width, height));
        return true;
    } // if
    else {
        return false;
    } // else
} // extractPatch
This was a very frustrating endeavor, but finally I solved it based on rroowwllaanndd's answer. I just had to add the angle correction when width < height. Without this I got very strange results for images which fulfilled this condition.
import cv2

def crop_image(rect, image):
    shape = (image.shape[1], image.shape[0])  # cv2.warpAffine expects shape in (width, height)
    center, size, theta = rect
    width, height = tuple(map(int, size))
    center = tuple(map(int, center))
    if width < height:
        theta -= 90
        width, height = height, width
    matrix = cv2.getRotationMatrix2D(center=center, angle=theta, scale=1.0)
    image = cv2.warpAffine(src=image, M=matrix, dsize=shape)
    x = int(center[0] - width // 2)
    y = int(center[1] - height // 2)
    image = image[y : y + height, x : x + width]
    return image
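A usage sketch, feeding it a rect from cv2.minAreaRect (the contour values are borrowed from the earlier answer):
import cv2
import numpy as np

img = cv2.imread("big_vertical_text.jpg")
cnt = np.array([[[64, 49]], [[122, 11]], [[391, 326]], [[308, 373]]])
rect = cv2.minAreaRect(cnt)
cropped = crop_image(rect, img)
cv2.imwrite("cropped.jpg", cropped)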