Question about Python and OpenCV for merging images - python

I wrote this code in Python with OpenCV.
I have two images. The first (36.jpg) is a frame from a football match:
The second (pitch.png) is a PNG image of the lines of a football field, drawn in red, with a transparent background (no white):
With this code, I selected four coordinate points in each of the two images (the four corners of the right penalty area),
and then, with cv2.warpPerspective, I can show the first image from a top view,
as below:
My question is: what changes do I need to make in my code so that the red lines from the second image appear on the first image, in the same way as in the images below (drawn in the Paint app)?
This is my code:
import cv2
import numpy as np

if __name__ == '__main__':
    # Read source image.
    im_src = cv2.imread('c:/36.jpg')
    # Four corners of penalty area in first image
    pts_src = np.array([[314, 108], [693, 108], [903, 493], [311, 490]])
    # Read destination image.
    im_dst = cv2.imread('c:/pitch.png')
    # Four corners of right penalty area in pitch image.
    pts_dst = np.array([[480, 76], [569, 76], [569, 292], [480, 292]])
    # Calculate Homography
    h, status = cv2.findHomography(pts_src, pts_dst)
    # Warp source image to destination based on homography
    im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1], im_dst.shape[0]))
    # Display images
    cv2.imshow("Source Image", im_src)
    cv2.imshow("Destination Image", im_dst)
    cv2.imshow("Warped Source Image", im_out)
    cv2.waitKey(0)

Swap your source and destination images and points. Then, warp the (now pitch) source image onto the match frame:
im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1], im_dst.shape[0]), borderValue=[255, 255, 255])
and add this code to overlay the red lines:
# Low blue channel marks the red lines (BGR channel 0 is blue)
mask = im_out[:,:,0] < 100
im_out_overlapped = im_dst.copy()
im_out_overlapped[mask] = [0, 0, 255]
The mask works because, in BGR order, channel 0 is blue: the red pitch lines have a low blue value, while the white background filled in by borderValue stays high.

Related

How to find key points in 2 images automatically for homography?

First of all, thanks to Christoph Rackwitz, who guided me in writing parts of this code in Python and OpenCV.
I have a question! This is my code:
import cv2
import numpy as np

if __name__ == '__main__':
    # Read source image.
    im_dst = cv2.imread('c:/36.jpg')
    # Four corners of the penalty area in the match image
    pts_dst = np.array([[314, 107], [693, 107], [903, 493], [311, 491]])
    # Read destination image.
    im_src = cv2.imread('c:/pitch.jpg')
    # Four corners of the penalty area in the pitch image.
    pts_src = np.array([[487, 81], [575, 81], [575, 297], [487, 297]])
    # Calculate Homography
    h, status = cv2.findHomography(pts_src, pts_dst)
    # Warp source image to destination based on homography
    im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1], im_dst.shape[0]), borderValue=[255, 255, 255])
    # Red lines have a low blue channel; use that to build the overlay mask
    mask = im_out[:,:,0] < 100
    im_out_overlapped = im_dst.copy()
    im_out_overlapped[mask] = [0, 0, 255]
    # Display images
    cv2.imshow("Source Image", im_src)
    cv2.imshow("Destination Image", im_dst)
    cv2.imshow("Warped Source Image", im_out)
    cv2.imshow("Warped", im_out_overlapped)
    cv2.waitKey(0)
With this code I can import these 2 images:
and the result is this (after warpPerspective):
Now I have a new problem. As you can see in the code, to compute the homography between my two images I have to enter four point coordinates (the four corners of the right penalty area) for each image, which means finding eight coordinates in total.
Is there a way for my app to find these points AUTOMATICALLY in both images, so that I don't need to write the coordinates myself?
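One standard way to avoid hand-picking points is to detect and match local features automatically and let cv2.findHomography reject bad matches with RANSAC. Below is a minimal sketch of that idea. Note that it assumes the two images share enough distinctive texture; a photo and a schematic pitch drawing often do not, so for this particular pitch-registration problem a line- or model-based method may be needed instead.
import cv2
import numpy as np

im_src = cv2.imread('c:/pitch.jpg')
im_dst = cv2.imread('c:/36.jpg')

# Detect ORB keypoints and descriptors in both images
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(im_src, None)
kp2, des2 = orb.detectAndCompute(im_dst, None)

# Match descriptors and keep the 50 best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

# Estimate the homography; RANSAC discards outlier matches automatically
pts_src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
pts_dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
h, status = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 5.0)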

Replacing a solid green region with another image with OpenCV

The image below has a green region that I'm looking to replace with any other image. It's not necessary for its perspective to match.
I've been able to create a mask, but I haven't had much success resizing and aligning the other image to the green region. Most resources I've found online say both images need to be the same size, but I only want to resize the new image to fit inside the green rectangle, rather than having two same-size images overlapping, one of them with a cutout.
What's a good approach here?
Here is one solution using Python OpenCV.
Read both images.
Measure and enter 4 corresponding sets of x,y control points.
Compute homography (perspective coefficients)
Warp the source image using the homography -- the background will be black
Create a binary mask from the dst image using the green color range.
Invert the mask.
Apply the inverted mask to the dst image to blacken the inside of the region of interest (where the src will go)
Add the warped src to the masked dst to form the result
src:
dst:
#!/python3.7
import cv2
import numpy as np
# Read source image.
src = cv2.imread('original.jpg')
# Four corners of source image
# Coordinates are in x,y system with x horizontal to the right and y vertical downward
# listed clockwise from top left
pts_src = np.float32([[0, 0], [325, 0], [325, 472], [0, 472]])
# Read destination image.
dst = cv2.imread('green_rect.png')
# Four corners of destination image.
pts_dst = np.float32([[111, 59], [206, 60], [216, 215], [121, 225]])
# Calculate Homography if more than 4 points
# h = forward transformation matrix
#h, status = cv2.findHomography(pts_src, pts_dst)
# Alternate if only 4 points
h = cv2.getPerspectiveTransform(pts_src,pts_dst)
# Warp source image to destination based on homography
# size argument is width x height, so have to reverse shape values
src_warped = cv2.warpPerspective(src, h, (dst.shape[1],dst.shape[0]))
# Set BGR color ranges
lowerBound = np.array([0, 255, 0])
upperBound = np.array([0, 255, 0])
# Compute mask (roi) from ranges in dst
mask = cv2.inRange(dst, lowerBound, upperBound)
# Dilate mask, if needed, when green border shows
kernel = np.ones((3,3),np.uint8)
mask = cv2.dilate(mask,kernel,iterations = 1)
# Invert mask
inv_mask = cv2.bitwise_not(mask)
# Mask dst with inverted mask
dst_masked = cv2.bitwise_and(dst, dst, mask=inv_mask)
# Put src_warped over dst
result = cv2.add(dst_masked, src_warped)
# Save outputs
cv2.imwrite('warped_src.jpg', src_warped)
cv2.imwrite('inverted_mask.jpg', inv_mask)
cv2.imwrite('masked_dst.jpg', dst_masked)
cv2.imwrite('perspective_composite.jpg', result)
warped_src:
inverted_mask:
masked_dst:
result:
I will leave it to the reader to filter the excess green border or edit the control points in the dst image to make the region of interest larger.
Note: if the aspect ratio of the src does not match that of the green rectangle, then the src will get distorted with this method.
Per a request in the comments on my previous answer, which did this in perspective, here is one way to do it with a simple scale-and-translation affine warp.
Read both images
Measure the height of the green region and get the height of the src image
Measure the center (x,y) of the green region and get the center of the src image
Compute the affine matrix coefficients for scale and translation only (no rotation or skew)
Warp the source image using the affine matrix -- the background will be black
Create a binary mask from the warped src image making everything not black into white
Invert the mask
Apply the inverted mask to the dst image
Add the warped src over the masked dst to form the result
src:
dst:
#!/python3.7
import cv2
import numpy as np
# Read source image.
src = cv2.imread('original.jpg')
h_src, w_src = src.shape[:2]
# Read destination image.
dst = cv2.imread('green_rect.png')
h_dst, w_dst = dst.shape[:2]
# compute scale from height of src and height of green region
h_green=170
scale = h_green/h_src
# compute offsets from center of scaled src and center of green
x_src = (scale)*w_src/2
y_src = (scale)*h_src/2
x_green = 165
y_green = 140
xoff = (x_green - x_src)
yoff = (y_green - y_src)
# build affine matrix for scale and translate only
affine_matrix = np.float32([ [scale,0,xoff], [0,scale,yoff] ])
# do affine warp
# add 1 to src to ensure no pure black (caution: uint8 pixels already at 255 wrap to 0)
src_warped = cv2.warpAffine(src+1, affine_matrix, (w_dst, h_dst), flags=cv2.INTER_AREA)
# Compute mask (roi) in warped src: every non-black pixel becomes white
_, mask = cv2.threshold(src_warped, 0, 255, cv2.THRESH_BINARY)
# Invert single channel of mask
inv_mask = cv2.bitwise_not(mask[:,:,0])
# Mask dst with inverted mask
dst_masked = cv2.bitwise_and(dst, dst, mask=inv_mask)
# Put warped src over masked dst
result = cv2.add(dst_masked,src_warped)
# Save outputs
cv2.imwrite('warped_src.jpg', src_warped)
cv2.imwrite('masked_src.jpg', mask)
cv2.imwrite('affine_composite.jpg', result)
warped_src:
inverted_mask:
masked_dst:
result:

OpenCV Python: Detecting lines only in ROI

I'd like to detect lines inside a region of interest (ROI). My output should display the original image with the detected lines drawn in the selected ROI. So far it has not been a problem to find lines in the whole image or to select a ROI, but finding lines only inside the ROI did not work. My MWE reads an image, converts it to grayscale, and lets me select a ROI, but HoughLinesP gives an error when it tries to read roi.
import cv2
import numpy as np
img = cv2.imread('example.jpg',1)
gray = cv2.cvtColor(img ,cv2.COLOR_BGR2GRAY)
# Select ROI
fromCenter = False
roi = cv2.selectROI(gray, fromCenter)
# Crop ROI
roi = img[int(roi[1]):int(roi[1]+roi[3]), int(roi[0]):int(roi[0]+roi[2])]
# Find lines
minLineLength = 100
maxLineGap = 30
lines = cv2.HoughLinesP(roi,1,np.pi/180,100,minLineLength,maxLineGap)
for x in range(0, len(lines)):
    for x1,y1,x2,y2 in lines[x]:
        cv2.line(img,(x1,y1),(x2,y2),(237,149,100),2)
cv2.imshow('Image',img)
cv2.waitKey(0) & 0xFF
cv2.destroyAllWindows()
The console shows:
lines = cv2.HoughLinesP(roi,1,np.pi/180,100,minLineLength,maxLineGap)
error: OpenCV(3.4.1) C:\Miniconda3\conda-bld\opencv-suite_1533128839831\work\modules\imgproc\src\hough.cpp:441: error: (-215) image.type() == (((0) & ((1 << 3) - 1)) + (((1)-1) << 3)) in function cv::HoughLinesProbabilistic
My assumption is that roi does not have the correct format. I am using Python 3.6 with Spyder 3.2.8.
Thanks for any help!
The function cv2.HoughLinesP expects a single-channel image, so the cropped region should be taken from the gray image instead of the color img; that removes the error:
# Crop the image
roi = list(map(int, roi)) # Convert to int for simplicity
cropped = gray[roi[1]:roi[1]+roi[3], roi[0]:roi[0]+roi[2]]
Note that I'm changing the output name from roi to cropped, because you still need the roi box. The points x1, x2, y1, and y2 are pixel positions in the cropped image, not the full image. To draw the lines correctly on the full image, just add the upper-left corner position from roi to each point.
Here's the for loop with relevant edits:
# Find lines
minLineLength = 100
maxLineGap = 30
# Pass minLineLength and maxLineGap as keywords; positionally, the fifth
# argument of HoughLinesP is the output `lines` array, not minLineLength
lines = cv2.HoughLinesP(cropped, 1, np.pi/180, 100,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
for x in range(0, len(lines)):
    for x1,y1,x2,y2 in lines[x]:
        cv2.line(img,(x1+roi[0],y1+roi[1]),(x2+roi[0],y2+roi[1]),(237,149,100),2)

Python cv2 edge and contour detection

I am trying to detect bubbles on an OMR sheet which looks something like this:
My code for edge detection and contour display is referenced from here. However, before finding the actual contours I am trying to detect the edges, but somehow I am not able to set the correct parameter values.
This is what I get:
Code:
from imutils.perspective import four_point_transform
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2
def auto_canny(image, sigma=0.50):
    # compute the median of the single channel pixel intensities
    v = np.median(image)
    # apply automatic Canny edge detection using the computed median
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    edged = cv2.Canny(image, lower, upper)
    # return the edged image
    return edged

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path to the input image")
args = vars(ap.parse_args())
image = cv2.imread(args["image"])
r = 500.0 / image.shape[1]
dim = (500, int(image.shape[0] * r))
# perform the actual resizing of the image and show it
image = cv2.resize(image, dim, interpolation = cv2.INTER_AREA)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
equalized_img = cv2.equalizeHist(gray)
cv2.imshow('Equalized', equalized_img)
# cv2.waitKey(0)
blurred = cv2.GaussianBlur(equalized_img, (7, 7), 0)
# edged =cv2.Canny(equalized_img, 30, 160)
edged = auto_canny(blurred)
cv2.imshow('edged', edged)
cv2.waitKey(0)
How can I get all the 90*4 circles?
You should be using a Hough transform to search for circles. This method projects every white pixel as a circle and tries to find as many overlapping circles as possible. You'll have to specify the predicted radius of the circles to be found within the image.
Left - original image
Top-right - each white pixel is projected as a red circle - they are too small to find an intersecting point
Bottom-right - the green circle is larger, and all the intersecting points meet exactly at the middle of the circle! Both radius and position are returned by cvHoughCircles
This person dealt with blob detection (that's what finding circles is called, I think) using cvHoughCircles on a cvCanny-ized image (read the OP's update):
OpenCV: Error in cvHoughCircles usage
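For reference, here is a minimal sketch using the modern Python API (cv2.HoughCircles, the successor to cvHoughCircles). The filename and the radius bounds are assumptions; measure a bubble in your sheet and adjust them:
import cv2
import numpy as np

img = cv2.imread('omr_sheet.jpg')              # assumed filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                 # smoothing reduces false circles

# dp, minDist, and the radius bounds are guesses to tune for your image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=8, maxRadius=15)
if circles is not None:
    for x, y, r in np.around(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (0, 255, 0), 2)   # draw each detected bubble

cv2.imshow('Detected circles', img)
cv2.waitKey(0)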
You need to improve your contour detection. Possibly not by changing the detection itself, but by pre-processing the earlier stage better.
Contour detection works better with more contrast and color separation in the image. If you haven't done so yet, threshold your image with a technique like simple thresholding, adaptive thresholding, or a smarter method like Otsu's. Check the OpenCV documentation here.
Besides that, your case may need more advanced techniques like "Adaptive Thresholding Using the Integral Image", described here.
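As an illustration of that pre-processing step, here is a short sketch (the filename is assumed, and the two-value findContours return shown is the OpenCV 4.x signature):
import cv2

# Load the sheet in grayscale and smooth it slightly
gray = cv2.imread('omr_sheet.jpg', cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Otsu's method picks the threshold automatically from the histogram;
# THRESH_BINARY_INV makes the dark bubble marks white on black
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Contours found on the binarized image are much cleaner than on raw edges
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)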

Image alignment with reduced opacity

In the program given below, I am aligning two images using a homography and reducing the opacity of the im_dst image in im_out (say opacity = 0.5), so that I can see both im_src and im_dst in im_out. But all I am getting is a blackened im_dst in im_out!
import cv2
import numpy as np
im_src = cv2.imread('src.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]])
im_dst = cv2.imread('dst.jpg')
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]])
h, status = cv2.findHomography(pts_src, pts_dst)
img1 = np.array(im_dst, dtype=np.float64)
img2 = np.array(im_src, dtype=np.float64)
img1 /= 255.0
# pre-multiplication
a_channel = np.ones(img1.shape, dtype=np.float64)/2.0
im_dst = img1*a_channel
im_src = img2*(1-a_channel)
im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))
cv2.imshow("Warped Image", im_out)
cv2.waitKey(0)
I am new to openCV, so I might be missing something simple. Thanks for help!
Hey I've seen those points before!
What your code is doing is reducing the values of two images, im_dst and im_src, but then you're simply moving the faded image of im_src to a new position and displaying that. Instead, you should add the faded and warped image to the destination image and output that. The following would be a working modification of the end of your code:
alpha = 0.5
im_dst = img1 * alpha
im_src = img2 * (1-alpha)
im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))
im_blended = im_dst + im_out
cv2.imshow("Blended Warped Image", im_blended)
cv2.waitKey(0)
However, you only divided img1 and not img2 by 255, so you would want to divide both first.
However, there is no reason to do this manually, since you then have to worry about converting image types, scaling, and all that. A much easier way is to use the built-in OpenCV function addWeighted() to blend two images with alpha blending. Your entire code would then be as short as this:
import cv2
import numpy as np
im_src = cv2.imread('src.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]])
im_dst = cv2.imread('dst.jpg')
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]])
h, status = cv2.findHomography(pts_src, pts_dst)
im_out = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))
alpha = 0.5
beta = (1.0 - alpha)
dst_warp_blended = cv2.addWeighted(im_dst, alpha, im_out, beta, 0.0)
cv2.imshow('Blended destination and warped image', dst_warp_blended)
cv2.waitKey(0)
The function addWeighted() multiplies the first image im_dst by alpha and the second image im_out by beta, computing im_dst*alpha + im_out*beta + gamma, where the last argument gamma is a scalar shift you can add to the result should you need it. Finally, the result is saturated, so values above whatever is allowable for your datatype are truncated at the maximum. This way, your result is the same type as your inputs; you don't have to convert to float.
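Equivalently, in plain NumPy (a sketch only, to make the arithmetic explicit; addWeighted does all of this in one call):
import numpy as np

# Manual equivalent of cv2.addWeighted(im_dst, alpha, im_out, beta, 0.0)
blended = np.clip(im_dst.astype(np.float64) * alpha
                  + im_out.astype(np.float64) * beta + 0.0,
                  0, 255).astype(np.uint8)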
Last point about your code. A lot of tutorials, the one linked above included, use findHomography() to get a homography from four matching points. It is more appropriate to use getPerspectiveTransform() in this case. The function findHomography() finds an optimal homography based on many matching points, using an outlier rejection scheme and random sampling to speed up going through all the possible sets of four matching points. It works fine for sets of four points of course, but it makes more sense to use getPerspectiveTransform() when you have four matching points, and findHomography() when you have more than four. Although, annoyingly, the points you pass into getPerspectiveTransform() have to be of type np.float32 for whatever reason. So this would be my final suggestion for your code:
import cv2
import numpy as np
# Read source image.
im_src = cv2.imread('src.jpg')
# Four corners of the book in source image
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]], dtype=np.float32)
# Read destination image.
im_dst = cv2.imread('dst.jpg')
# Four corners of the book in destination image.
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]], dtype=np.float32)
# Calculate Homography
h = cv2.getPerspectiveTransform(pts_src, pts_dst)
# Warp source image to destination based on homography
warp_src = cv2.warpPerspective(im_src, h, (im_dst.shape[1],im_dst.shape[0]))
# Blend the warped image and the destination image
alpha = 0.5
beta = (1.0 - alpha)
dst_warp_blended = cv2.addWeighted(im_dst, alpha, warp_src, beta, 0.0)
# Show the output
cv2.imshow('Blended destination and warped image', dst_warp_blended)
cv2.waitKey(0)
This (and all the other solutions above) will produce the following image:
