Currently, I have an array containing the coordinates of the midpoints, which I use to draw a line through those coordinates. I then want to split the image into the parts above and below the body line that I drew, but I have no idea how to do it, so please give me an idea; I would really appreciate it.
I want the result to look like this picture
array a = [(954, 88), (905, 97), (855, 107), (805, 114), (755, 125), (705, 134), (655, 139), (605, 141), (555, 139), (505, 134), (455, 139), (405, 146), (355, 146), (305, 144), (255, 145), (205, 125), (155, 120), (105, 114), (55, 104), (1, 156)]
image used to run the program:
original image
and the results after running:
result
Here is one way to do that in Python/OpenCV.
Read the input
List Numpy array of original points
Draw polygon on input copy from array of points
Augment the array of points to make a polygon around the top of the image
Draw a white filled polygon on a black background as mask1
Use the mask1 to blacken out the bottom and keep the top of the input image
Augment the array of points to make a polygon around the bottom of the image
Draw a white filled polygon on a black background as mask2
Use the mask2 to blacken out the top and keep the bottom of the input image
Save the results
Input:
import cv2
import numpy as np
# read the input
img = cv2.imread('fish.png')
h, w = img.shape[:2]
# create black image like input
black = np.zeros_like(img)
# define original points
points = np.array([[954,88],[905,97],[855,107],[805,114],[755,125],[705,134],[655,139],[605,141],[555,139],[505,134],[455,139],[405,146],[355,146],[305,144],[255,145],[205,125],[155,120],[105,114],[55,104],[1,156]])
# draw points on input
img_pts = img.copy()
cv2.polylines(img_pts, [points], False, (0,0,255), 2)
# augment original polygon array for top region
points1 = np.array([[0,0],[w-1,0],[w-1,88],[954,88],[905,97],[855,107],[805,114],[755,125],[705,134],[655,139],[605,141],[555,139],[505,134],[455,139],[405,146],[355,146],[305,144],[255,145],[205,125],[155,120],[105,114],[55,104],[1,156],[0,156]])
# draw white filled closed polygon on black background
mask1 = black.copy()
cv2.fillPoly(mask1, [points1], (255,255,255))
# Use mask to select top part of image
result1 = np.where(mask1==255, img, black)
# augment polygon array for bottom region
points2 = np.array([[0,h-1],[w-1,h-1],[w-1,88],[954,88],[905,97],[855,107],[805,114],[755,125],[705,134],[655,139],[605,141],[555,139],[505,134],[455,139],[405,146],[355,146],[305,144],[255,145],[205,125],[155,120],[105,114],[55,104],[1,156],[0,156]])
# draw white filled closed polygon on black background
mask2 = black.copy()
cv2.fillPoly(mask2, [points2], (255,255,255))
# Use mask to select bottom part of image
result2 = np.where(mask2==255, img, black)
# save results
cv2.imwrite('fish_with_points.jpg', img_pts)
cv2.imwrite('fish_top_mask.jpg', mask1)
cv2.imwrite('fish_top_masked.jpg', result1)
cv2.imwrite('fish_bottom_mask.jpg', mask2)
cv2.imwrite('fish_bottom_masked.jpg', result2)
# show results
cv2.imshow('img_points',img_pts)
cv2.imshow('mask1',mask1)
cv2.imshow('result1',result1)
cv2.imshow('mask2',mask2)
cv2.imshow('result2',result2)
cv2.waitKey(0)
Original Point Polyline on Input:
Mask1:
Result1:
Mask2:
Result2:
I have two images: a grayscale image and a binary mask with the same dimensions. How do I color the image on the mask, while the rest of the image remains grayscale?
Here's an example:
Expressing your grey image's pixel values across 3 channels gives you a color image. The result looks the same, but when you check the shape it now has 3 dimensions.
gray_3_channel = cv2.merge((gray, gray, gray))
gray.shape
>>> (158, 99)
gray_3_channel.shape
>>> (158, 99, 3)
For every white (255) pixel in the mask, assign the color (255, 255, 0) in gray_3_channel:
gray_3_channel[mask==255]=(255, 255, 0)
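Putting it together, here is a minimal sketch; the file names photo.jpg and mask.png are assumptions for illustration:
import cv2
# assumed inputs: a grayscale photo and a binary mask of the same size (hypothetical file names)
gray = cv2.imread('photo.jpg', cv2.IMREAD_GRAYSCALE)
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
# stack the single channel three times to get a 3-channel image that still looks grey
gray_3_channel = cv2.merge((gray, gray, gray))
# color every pixel that is white in the mask
gray_3_channel[mask == 255] = (255, 255, 0)
cv2.imwrite('colored_on_mask.png', gray_3_channel)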
I am trying to split this image by the gray color of each rectangle.
The idea is to return the coordinate of each corner so I can use it to process what is inside.
I am an OpenCV newbie, so I would like to know the best approach to do this. Is findContours enough to get these coordinates, or is there a better function for this?
Regards
It is simple to solve using findContours:
Read input image as Grayscale (not as RGB).
Apply a threshold and invert the polarity (make all gray pixels white and the background black).
Find contours on threshold image.
Find bounding rectangle of each contour.
The solution draws a green rectangle around each contour, for testing.
Here is a working code sample:
import numpy as np
import cv2
# Read input image as Grayscale
img = cv2.imread('boxes.png', cv2.IMREAD_GRAYSCALE)
# Convert img to uint8 binary image with values 0 and 255
# All pixels above 250 go to 255, and other pixels go to 0
ret, thresh_gray = cv2.threshold(img, 250, 255, cv2.THRESH_BINARY)
# Inverse polarity
thresh_gray = 255 - thresh_gray
# Find contours in thresh_gray.
contours = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2] # [-2] indexing takes return value before last (due to OpenCV compatibility issues).
corners = []
# Iterate contours, find bounding rectangles, and add corners to a list
for c in contours:
    # Get bounding rectangle
    x, y, w, h = cv2.boundingRect(c)
    # Append corner to list of corners - format is corners[i] holds a tuple: ((x0, y0), (x1, y1))
    corners.append(((x, y), (x+w, y+h)))
# Convert grayscale to BGR (just for testing - for drawing rectangles in green color).
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
# Draw green rectangle (for testing)
for c in corners:
    cv2.rectangle(out, c[0], c[1], (0, 255, 0), thickness = 2)
cv2.imwrite('out.png', out)  # Save out to file (for testing).
# Show result (for testing).
cv2.imshow('out', out)
cv2.waitKey(0)
cv2.destroyAllWindows()
Result:
List of corners:
corners = [((491, 153), (523, 181)),
((24, 151), (68, 178)),
((231, 123), (277, 158)),
((442, 103), (488, 131)),
((7, 99), (76, 132)),
((211, 75), (285, 110)),
((268, 57), (269, 58)),
((420, 49), (494, 84)),
((5, 47), (58, 83)),
((213, 18), (267, 59)),
((420, 0), (477, 33))]
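One of the returned rectangles, ((268, 57), (269, 58)), is only about one pixel in size and is most likely noise. If that matters, a possible sketch is to skip tiny contours by area before taking the bounding rectangle; the minimum area of 100 pixels is an assumption, not part of the original answer:
for c in contours:
    if cv2.contourArea(c) < 100:  # assumed minimum area - tune for your image
        continue
    x, y, w, h = cv2.boundingRect(c)
    corners.append(((x, y), (x+w, y+h)))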
For example, for the image coordinates (X, Y) from (576, 0) to (726, 1371), I want to know which coordinates have pixel intensities in the ranges Red [165 to 225], Green [176 to 200] and Blue [186 to 198]. The output should be the coordinates.
Here is one way to do that with Python/OpenCV/Numpy.
Create a mask for the region
Create a mask from the colors
Combine masks
Get coordinates where combined mask is not black
Input:
import cv2
import numpy as np
# load image
img = cv2.imread("monet2.jpg")
# create region mask
mask1 = np.zeros_like(img)[:,:,0]
mask1[0:0+75, 90:90+75] = 255
# create color mask
lower =(0,100,150) # lower bound for each channel
upper = (40,160,2100) # upper bound for each channel
mask2 = cv2.inRange(img, lower, upper)
# combine masks
mask3 = cv2.bitwise_and(mask1, mask2)
# get coordinates
coords = np.argwhere(mask3)
for p in coords:
    px = (p[0], p[1])
    print(px)
# apply mask to image (to see where data is obtained)
mask3 = cv2.merge([mask3,mask3,mask3])
img_masked = cv2.bitwise_and(img, mask3)
# display images
cv2.imshow("mask1", mask1)
cv2.imshow("mask2", mask2)
cv2.imshow("mask3", mask3)
cv2.imshow("img_masked", img_masked)
cv2.waitKey(0)
# write results to disk
cv2.imwrite("monet2_mask1.jpg", mask1)
cv2.imwrite("monet2_mask2.jpg", mask2)
cv2.imwrite("monet2_mask3.jpg", mask3)
cv2.imwrite("monet2_masked.jpg", img_masked)
Region Mask:
Color Mask:
Combined Mask:
Masked Image:
Coordinates List:
(6, 128)
(7, 122)
(7, 125)
...
(63, 125)
(63, 126)
(63, 134)
(63, 135)
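Note that np.argwhere returns (row, column) pairs, i.e. (y, x). If (x, y) ordering is needed, swap the two values, for example:
for p in coords:
    x, y = p[1], p[0]  # argwhere gives (row, col) = (y, x)
    print((x, y))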
I have a few journal page images with two columns, and I want to mask one column white without changing the dimensions, which means the output image should have the same dimensions as the input image even though only one column remains.
I was able to mask the image, but the masked part comes out black, and I want it white.
import cv2
import numpy as np
# Load the original image
image = cv2.imread(filename = "D:\output_final_word5\image1.jpg")
# Create the basic black image
mask = np.zeros(shape = image.shape, dtype = "uint8")
# Draw a white, filled rectangle on the mask image
cv2.rectangle(img = mask, pt1 = (0, 0), pt2 = (795, 3000), color = (255, 255, 255), thickness = -1)
# Apply the mask and display the result
maskedImg = cv2.bitwise_and(src1 = image, src2 = mask)
#cv2.namedWindow(winname = "masked image", flags = cv2.WINDOW_NORMAL)
cv2.imshow("masked image",maskedImg)
cv2.waitKey(delay = 0)
cv2.imwrite("D:\Test_Mask.jpg",maskedImg)
My final objective is to read a folder containing several journal pages and to save each page twice: once with the first column masked and once with the second column masked, without affecting the dimensions of the input image, and with the masked part white.
Below is the input image attached...
And the output should be like this...
You don't need a mask to draw a rectangle. You can draw it directly on the image.
You can also use image.copy() to create a second image with the other column.
BTW: if 795 is in the middle of the width, then you can use image.shape to get (height, width) and use width//2 instead of 795, so it will work with images of different widths. But if 795 is not exactly in the middle, then use half_width = 795.
import cv2
image_1 = cv2.imread('image.jpg')
image_2 = image_1.copy()
height, width, depth = image_1.shape # it gives `height,width`, not `width,height`
half_width = width//2
#half_width = 795
cv2.rectangle(img=image_1, pt1=(0, 0), pt2=(half_width, height), color=(255, 255, 255), thickness=-1)
cv2.rectangle(img=image_2, pt1=(half_width, 0), pt2=(width, height), color=(255, 255, 255), thickness=-1)
cv2.imwrite("image_1.jpg", image_1)
cv2.imwrite("image_2.jpg", image_2)
cv2.imshow("image 1", image_1)
cv2.imshow("image 2", image_2)
cv2.waitKey(0)
cv2.destroyAllWindows()
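To cover the stated goal of processing a whole folder, here is one possible sketch; the folder path D:/journal_pages and the use of glob are assumptions for illustration, not part of the original answer:
import glob
import os
import cv2
for path in glob.glob('D:/journal_pages/*.jpg'):  # assumed folder of journal page images
    image_1 = cv2.imread(path)
    image_2 = image_1.copy()
    height, width = image_1.shape[:2]
    half_width = width // 2
    # white out the left column in one copy and the right column in the other
    cv2.rectangle(image_1, (0, 0), (half_width, height), (255, 255, 255), -1)
    cv2.rectangle(image_2, (half_width, 0), (width, height), (255, 255, 255), -1)
    name = os.path.splitext(os.path.basename(path))[0]
    cv2.imwrite(name + '_left_masked.jpg', image_1)
    cv2.imwrite(name + '_right_masked.jpg', image_2)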
I am trying to make a transparent image and draw on it, and afterwards I will addWeighted it over the base image.
How can I initialize a fully transparent image with a given width and height in OpenCV Python?
EDIT: I want to make an effect like in Photoshop: a stack of layers, all of which are initially transparent, with drawing performed on a fully transparent layer. At the end I will merge all layers to get the final image.
To create a transparent image you need a 4-channel matrix, 3 channels representing the RGB colors and the 4th representing the alpha channel. For a fully transparent image, you can ignore the RGB values and directly set the alpha channel to 0. In Python, OpenCV uses NumPy to manipulate matrices, so a transparent image can be created as:
import numpy as np
import cv2
img_height, img_width = 300, 300
n_channels = 4
transparent_img = np.zeros((img_height, img_width, n_channels), dtype=np.uint8)
# Save the image for visualization
cv2.imwrite("./transparent_img.png", transparent_img)
If you want to draw on several "layers" and then stack the drawings together, then how about this:
import cv2
import numpy as np
#create 3 separate BGRA images as our "layers"
layer1 = np.zeros((500, 500, 4), dtype=np.uint8)
layer2 = np.zeros((500, 500, 4), dtype=np.uint8)
layer3 = np.zeros((500, 500, 4), dtype=np.uint8)
#draw a red circle on the first "layer",
#a green rectangle on the second "layer",
#a blue line on the third "layer"
red_color = (0, 0, 255, 255)
green_color = (0, 255, 0, 255)
blue_color = (255, 0, 0, 255)
cv2.circle(layer1, (255, 255), 100, red_color, 5)
cv2.rectangle(layer2, (175, 175), (335, 335), green_color, 5)
cv2.line(layer3, (170, 170), (340, 340), blue_color, 5)
res = layer1.copy() #copy the first layer into the resulting image
#copy only the pixels we were drawing on from the 2nd and 3rd layers
#(if you don't do this, the black background will also be copied)
cnd = layer2[:, :, 3] > 0
res[cnd] = layer2[cnd]
cnd = layer3[:, :, 3] > 0
res[cnd] = layer3[cnd]
cv2.imwrite("out.png", res)
To convert an image's white parts to transparent:
import cv2
import numpy as np
img = cv2.imread("image.png", cv2.IMREAD_UNCHANGED)  # IMREAD_UNCHANGED keeps the alpha channel if the PNG has one
# wherever the BGR channels are all 255 (pure white), zero every channel, including alpha
img[np.where(np.all(img[..., :3] == 255, -1))] = 0
cv2.imwrite("transparent.png", img)