When trying to join two images to create one:
img3 = imread('image_home.png')
img4 = imread('image_away.png')
result = np.hstack((img3,img4))
imwrite('Home_vs_Away.png', result)
This error sometimes appears:
all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 192 and the array at index 1 has size 191
How should I proceed to generate the combined image when there is this difference in array size and np.hstack does not work?
Note:
I use several images, so the larger image is not always the first one and not always the second one; it can be quite random which of the two is the smaller or larger.
You can manually add a row/column with a color of your choice to the smaller image so the shapes match, or you can simply let cv2.resize handle the resizing for you. The code below shows both methods.
import numpy as np
import cv2
img1 = cv2.imread("image_home.png")
img2 = cv2.imread("image_away.png")
# Method 1 (pad the smaller image; this assumes img2 is exactly one row and one column smaller than img1)
padded_img = np.ones(img1.shape, dtype="uint8")
color = np.array(img2[-1, -1])  # take the border color from the bottom-right pixel of img2
padded_img[:-1, :-1, :] = img2
padded_img[-1, :, :] = color
padded_img[:, -1, :] = color
# Method 2 (let OpenCV handle the resizing)
padded_img = cv2.resize(img2, img1.shape[:2][::-1])
result = np.hstack((img1, padded_img))
cv2.imwrite("Home_vs_Away.png", result)
I'm trying to calculate image histograms of a numpy array of images. The array of images has shape (n_images, width, height, colour_channels) and I want to return an array of shape (n_images, count_in_each_bin (i.e. 255)). This is done via two intermediary steps: averaging each colour channel for each image and then flattening each 2D image to a 1D one.
I think I have successfully done this with the code below, however I have cheated a bit with the for loop at the end. My question is this: is there a way of getting rid of the last for loop and using an optimised numpy function instead?
import numpy as np

def histogram_helper(flattened_image: np.ndarray) -> np.ndarray:
    counts, _ = np.histogram(flattened_image, bins=[n for n in range(0, 256)])
    return counts

# Using 10 RGB images of width and height 300
images = np.zeros((10, 300, 300, 3))
# Take the mean of the three colour channels
channel_avg = np.mean(images, axis=3)
# Flatten each image in the array of images, resulting in a 1D representation of each image.
flat_images = channel_avg.reshape(*channel_avg.shape[:-2], -1)
# Now calculate the counts in each of the colour bins (255 of them) for each image in the array.
# This will provide us with a count of how many times each colour appears in an image.
result = np.empty((0, 255), dtype=np.int32)
for image in flat_images:
    colour_counts = histogram_helper(image)
    colour_counts = colour_counts.reshape(1, -1)
    result = np.concatenate([result, colour_counts])
You don't necessarily need to call np.histogram or np.bincount for this, since pixel values are in the range 0 to N. That means that you can treat them as indices and simply use a counter.
Here's how I would transform the initial images, which I imagine are of dtype np.uint8:
images = np.random.randint(0, 255, size=(10, 5, 5, 3)) # 10 5x5 images, 3 channels
reshaped = np.round(images.reshape(images.shape[0], -1, images.shape[-1]).mean(-1)).astype(images.dtype)
Now you can simply count the histograms using unbuffered addition with np.add.at:
result = np.zeros((images.shape[0], 256), int)
index = np.arange(len(images))[:, None]
np.add.at(result, (index, reshaped), 1)
The last operation is in-place and therefore returns None, but the answer will be in result nevertheless.
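As a quick sanity check (my addition, not part of the answer), the counts can be compared with an explicit per-image np.histogram call:
# Verify the np.add.at counts against a per-image histogram with 256 integer bins
expected = np.stack([np.histogram(im, bins=np.arange(257))[0] for im in reshaped])
assert (result == expected).all()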
How to connect disjoint vertical lines in an image without expanding them over the outer margin area?
Input image (simplified example):
Using the Morph Close operation:
inputImg = cv2.imread(imgPath)
grayInput = cv2.cvtColor(inputImg, cv2.COLOR_BGR2GRAY)
thre = cv2.inRange(grayInput, 0, 155)
closed = cv2.morphologyEx(thre, cv2.MORPH_CLOSE, np.ones((500,1), np.uint8))
Current output:
Desired output:
P.S. I could just add a margin larger than the closing kernel, but that would be memory-inefficient due to the large size of the production images, which also contain a significant number of lines with random gaps.
Binarize your image, find the columns that contain foreground pixels, then fill each of those columns between its topmost and bottommost foreground pixels.
The code below accomplishes this; it is explained with comments.
import cv2
import numpy as np
img = cv2.imread("xoxql.png") # Load the image
img = (img < 255).all(axis=2) # Binarize the image
rows, cols = img.nonzero() # Get all nonzero indices
unique_cols = np.unique(cols) # Select columns that contain nonzeros
for col_index in unique_cols:                  # Loop through those columns
    col_rows = rows[cols == col_index]         # Row indices of the nonzero pixels in this column
    start = col_rows.min()                     # Topmost (smallest) nonzero row index
    end = col_rows.max()                       # Bottommost (largest) nonzero row index
    img[start:end, col_index] = True           # Fill everything between the topmost and bottommost rows
img = img * np.uint8(255) # Multiply by 255 to convert the matrix back to an image
cv2.imwrite("result.png", img) # Save the image
The lines on the right side of your image are not exactly lined up, which will leave some gaps on the edges.
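A vectorized variant of the same idea (my sketch, not part of the original answer) computes the topmost and bottommost foreground row for every column at once:
import cv2
import numpy as np
img = cv2.imread("xoxql.png")
mask = (img < 255).all(axis=2)                          # Binarize: foreground where not pure white
col_has_fg = mask.any(axis=0)                           # Columns that contain foreground pixels
top = mask.argmax(axis=0)                               # First foreground row per column
bottom = mask.shape[0] - 1 - mask[::-1].argmax(axis=0)  # Last foreground row per column
row_idx = np.arange(mask.shape[0])[:, None]             # Column vector of row indices
filled = col_has_fg & (row_idx >= top) & (row_idx <= bottom)
cv2.imwrite("result_vectorized.png", filled.astype(np.uint8) * 255)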
Similar to the question here but I'd like to return a count of the total number of different pixels between the two images.
I'm sure it is doable with OpenCV in Python but I'm not sure where to start.
Assuming that the two images are the same size:
import numpy as np
import cv2
im1 = cv2.imread("im1.jpg")
im2 = cv2.imread("im2.jpg")
# total number of different pixels between im1 and im2
np.sum(im1 != im2)
You can use OpenCV's absdiff to get the difference between the images, then countNonZero to get the number of differing pixels. Note that countNonZero expects a single-channel image, so for color images collapse the difference to one channel first (for example by converting it to grayscale).
img1 = cv2.imread('img1.png')
img2 = cv2.imread('img2.png')
difference = cv2.absdiff(img1, img2)
difference_gray = cv2.cvtColor(difference, cv2.COLOR_BGR2GRAY)  # countNonZero needs a single channel
num_diff = cv2.countNonZero(difference_gray)
Since cv2 images are just numpy arrays of shape (height, width, num_color_dimensions) for color images, and (height, width) for black and white images, this is easy to do with ordinary numpy operations. For black/white images, we sum the number of differing pixels:
(img1 != img2).sum()
(Note that True=1 and False=0, so we can sum the array to get the number of True elements.)
For color images, we want to find all pixels where any of the color components differ, so we first check whether any component differs along the color axis (axis=2, since the shape components are zero-indexed):
(img1 != img2).any(axis=2).sum()
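Putting it together, a minimal runnable sketch (with assumed filenames):
import cv2
img1 = cv2.imread("img1.png")
img2 = cv2.imread("img2.png")
diff = img1 != img2
print("Differing array elements:", diff.sum())      # counts every differing channel value
print("Differing pixels:", diff.any(axis=2).sum())  # a pixel counts once if any channel differs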
I have an image with a black background that contains different shapes in different colors. I want to generate an image per shape, in which the shape is white and the background is black. I have been able to do this with numpy, but I would like to optimize my code using vectorization. This is what I have so far:
import numpy as np
import cv2
image = cv2.imread('mask.png')
image.shape
# (720, 1280, 3)
# Get all colors that are not black
colors = np.unique(image.reshape(-1,3), axis=0)
# np.unique sorts the colors, so black ([0,0,0]) is the first row; np.delete with indices [0,0,0] simply removes row 0
colors = np.delete(colors, [0,0,0], axis=0)
colors.shape
# (5, 3)
# Example for one color. I could do a for-loop, but I want to vectorize instead
c = colors[0]
query = (image == c).all(axis=2)
# Make the image all black, except for the pixels that match the shape
image[query] = [255,255,255]
image[np.logical_not(query)] = [0,0,0]
Approach #1
You can save a lot of intermediate array data by extending the unique colors into a higher dimension, comparing against the original image array, and then using the mask directly to get the final output -
# Get unique colors (remove black)
colors = np.unique(image.reshape(-1,3), axis=0)
colors = np.delete(colors, [0,0,0], axis=0)
mask = (colors[:,None,None,:]==image).all(-1)
out = mask[...,None]*np.array([255,255,255])
Approach #2
A better/memory-efficient way to get that mask would be with something like this -
u,ids = np.unique(image.reshape(-1,3), axis=0, return_inverse=1)
m,n = image.shape[:-1]
ids = ids.reshape(m,n)-1
mask = np.zeros((ids.max()+1,m,n),dtype=bool)
mask[ids,np.arange(m)[:,None],np.arange(n)] = ids>=0
and hence, a better way to get the final output, like so -
out = np.zeros(mask.shape + (3,), dtype=np.uint8)
out[mask] = [255,255,255]
and probably a better way to get ids would be with matrix-multiplication. Hence :
u,ids = np.unique(image.reshape(-1,3), axis=0, return_inverse=1)
could be replaced by :
image2D = np.tensordot(image,256**np.arange(3),axes=(-1,-1))
ids = np.unique(image2D,return_inverse=1)[1]
I was able to solve it the following way:
import numpy as np
import cv2
# Read the image
image = cv2.imread('0-mask.png')
# Get unique colors (remove black)
colors = np.unique(image.reshape(-1,3), axis=0)
colors = np.delete(colors, [0,0,0], axis=0)
# Get number of unique colors
instances = colors.shape[0]
# Reshape colors and image for broadcasting
colors = colors.reshape(instances,1,1,3)
image = image[np.newaxis]
# Generate multiple images, one per instance
mask = np.ones((instances, 1, 1, 1))
images = (image * mask)
# Run query with the original image
query = (image == colors).all(axis=3)
# For every image, color the shape white, everything else black
images[query] = [255,255,255]
images[np.logical_not(query)] = [0,0,0]
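A possible follow-up (my addition, with hypothetical output filenames) to write one file per generated shape mask:
# Save each generated mask as its own image (hypothetical filenames)
for i, shape_mask in enumerate(images):
    cv2.imwrite(f'shape_{i}.png', shape_mask.astype(np.uint8))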
I'm trying to crop multiple images with a green background. The center of the pictures is green and I want to cut the rest out of the picture. The problem is that I got the pictures from a video, so sometimes the green center is bigger and sometimes smaller. My real task is to use K-Means on the knots; therefore I have, for example, a green background and two ropes, one blue and one red.
I use Python with OpenCV, NumPy and Matplotlib.
I already crop the center, but sometimes I cut off too much and sometimes too little. My image size is 1920 x 1080 in this example.
Here the knot is on the left and there is more to cut
Here the knot is in the center
Here is another example
Here is my desired output from picture 1
Example 1, which doesn't work with all algorithms
Example 2, which doesn't work with all algorithms
Example 3, which doesn't work with all algorithms
Here is my code so far:
import numpy as np
import cv2
import matplotlib.pyplot as plt
from PIL import Image, ImageEnhance
img = cv2.imread('path')
print(img.shape)
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
crop_img = imgRGB[500:500+700, 300:300+500]
plt.imshow(crop_img)
plt.show()
You can convert the color space to HSV.
src = cv2.imread('path')
imgRGB = cv2.cvtColor(src, cv2.COLOR_BGR2RGB)
imgHSV = cv2.cvtColor(src, cv2.COLOR_BGR2HSV)  # convert the original BGR image directly to HSV
Then use inRange to find only green values.
lower = np.array([20, 0, 0])       # Lower bound of the HSV range; green has a hue of 120 degrees, but OpenCV scales hue to [0-180], so green sits around 60
upper = np.array([100, 255, 255])  # Upper bound of the HSV range
imgRange = cv2.inRange(imgHSV, lower, upper)
Then use morphology operations to fill the holes left by the non-green ropes.
# kernels for morphology operations
kernel_noise = np.ones((3,3),np.uint8)    # to delete small noise
kernel_dilate = np.ones((30,30),np.uint8) # bigger kernel to fill the holes left by the ropes
kernel_erode = np.ones((38,38),np.uint8)  # bigger kernel to delete the pixels on the edge that were added by the dilation
imgErode = cv2.erode(imgRange, kernel_noise, iterations=1)
imgDilate = cv2.dilate(imgErode, kernel_dilate, iterations=1)
imgErode = cv2.erode(imgDilate, kernel_erode, iterations=1)
Put the mask on the source image. You can now easily find the corners of the green screen (with the findContours function) or use the resulting image in the next steps.
res = cv2.bitwise_and(imgRGB, imgRGB, mask = imgErode) #put mask with green screen on src image
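For example, a minimal sketch (my addition, not part of the original answer) of using findContours on the mask to crop the source image to the green screen's bounding box, assuming the OpenCV 4 return signature:
# Find the outer contours of the green mask and crop to the largest one
contours, _ = cv2.findContours(imgErode, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)  # assume the green screen is the biggest region
x, y, w, h = cv2.boundingRect(largest)
cropped = imgRGB[y:y + h, x:x + w]            # crop the RGB image to that bounding box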
The code below does what you want. First it converts the image to the HSV colorspace, which makes selecting colors easier. Next a mask is made in which only the green parts are selected. Some noise is removed, and the rows and columns of the mask are summed up. Finally a new image is created based on the first/last rows/cols that fall within the green selection.
Since in all the provided examples a little extra needed to be cropped off the top, I've added code to do that. First I inverted the mask; you can then use it to find the first row that lies fully within the green selection. This is done for the top only. In the image below, the window 'Roi2' is the final image.
Edit: updated code after comment by ts.
Updated result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("gr.png")
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# set lower and upper color limits
lower_val = (30, 0, 0)
upper_val = (65,255,255)
# Threshold the HSV image to get only green colors
# the mask has white where the original image has green
mask = cv2.inRange(hsv, lower_val, upper_val)
# remove noise
kernel = np.ones((8,8),np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# sum each row and each column of the mask
sumOfCols = np.sum(mask, axis=0)
sumOfRows = np.sum(mask, axis=1)
# Find the first and last row / column that has a sum value greater than zero,
# which means it is not all black. Store the found values in variables
for i in range(len(sumOfCols)):
    if sumOfCols[i] > 0:
        x1 = i
        print('First col: ' + str(i))
        break
for i in range(len(sumOfCols)-1,-1,-1):
    if sumOfCols[i] > 0:
        x2 = i
        print('Last col: ' + str(i))
        break
for i in range(len(sumOfRows)):
    if sumOfRows[i] > 0:
        y1 = i
        print('First row: ' + str(i))
        break
for i in range(len(sumOfRows)-1,-1,-1):
    if sumOfRows[i] > 0:
        y2 = i
        print('Last row: ' + str(i))
        break
# create a new image based on the found values
#roi = img[y1:y2,x1:x2]
#show images
#cv2.imshow("Roi", roi)
# optional: to cut off the extra part at the top:
# invert the mask, so all areas that are not green become white
mask_inv = cv2.bitwise_not(mask)
# search the first and last column top down for a green pixel and cut off at the lowest common point
for i in range(mask_inv.shape[0]):
    if mask_inv[i,0] == 0 and mask_inv[i,x2] == 0:
        y1 = i
        print('First row: ' + str(i))
        break
# create a new image based on the found values
roi2 = img[y1:y2,x1:x2]
cv2.imshow("Roi2", roi2)
cv2.imwrite("img_cropped.jpg", roi2)
cv2.waitKey(0)
cv2.destroyAllWindows()
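As a side note (my addition, not part of the original answer), the four search loops above could be replaced by a vectorized lookup on the same sum arrays:
# Equivalent vectorized version of the four search loops (a sketch)
nonzero_cols = np.flatnonzero(sumOfCols)
nonzero_rows = np.flatnonzero(sumOfRows)
x1, x2 = nonzero_cols[0], nonzero_cols[-1]
y1, y2 = nonzero_rows[0], nonzero_rows[-1]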
The first step is to extract the green channel from your image; this is easy with OpenCV/NumPy and produces a grayscale image (a 2D numpy array):
import numpy as np
import cv2
img = cv2.imread('knots.png')
imgg = img[:,:,1] #extracting green channel
The second step is thresholding, which means turning the grayscale image into a binary (black and white only) image, for which OpenCV has a ready-made function: https://docs.opencv.org/3.4.0/d7/d4d/tutorial_py_thresholding.html
imgt = cv2.threshold(imgg,127,255,cv2.THRESH_BINARY)[1]
Now imgt is a 2D numpy array consisting solely of 0s and 255s. Next you have to decide how you would look for the places to cut; I suggest the following:
the topmost row containing at least 50% 255s
the bottommost row containing at least 50% 255s
the leftmost column containing at least 50% 255s
the rightmost column containing at least 50% 255s
Now we have to count the number of 255s in each row and each column:
height = img.shape[0]
width = img.shape[1]
columns = np.apply_along_axis(np.count_nonzero,0,imgt)
rows = np.apply_along_axis(np.count_nonzero,1,imgt)
Now columns and rows are 1D numpy arrays containing the number of 255s for each column/row. Knowing height and width, we can get 1D numpy arrays of bool values in the following way:
columns = columns>=(height*0.5)
rows = rows>=(width*0.5)
Here 0.5 means the 50% mentioned earlier; feel free to adjust that value to your needs. Now it is time to find the index of the first True and the last True in columns and rows.
icolumns = np.argwhere(columns)
irows = np.argwhere(rows)
leftcut = int(min(icolumns))
rightcut = int(max(icolumns))
topcut = int(min(irows))
bottomcut = int(max(irows))
Using argwhere I got arrays of the indices of the True values, then found the lowest and greatest. Finally you can crop your image and save it:
imgout = img[topcut:bottomcut,leftcut:rightcut]
cv2.imwrite('out.png',imgout)
There are two places which might require adjusting: the percentage of 255s (in my example 50%) and the threshold value (127 in cv2.threshold).
EDIT: Fixed line with cv2.threshold
Based on the new images you added, I assume that you do not only want to cut out the non-green parts as you asked, but that you also want a smaller frame around the ropes/knot. Is that correct? If not, you should upload the video and describe the purpose/goal of the cropping a bit more, so that we can better help you.
Assuming you want a cropped image with only the ropes, the solution is quite similar to the previous answer. However, this time the red and blue of the ropes are selected using HSV. The image is cropped based on the resulting mask. If you want the image somewhat bigger than just the ropes, you can add extra margins - but be sure to account/check for the edge of the image.
Note: the code below works for images that have a full green background, so I suggest you combine it with one of the solutions that only selects the green area. I tested this for all your images as follows: I took the code from my other answer, put it in a function and added return roi2 at the end. This output is fed into a second function that holds the code below. All images were processed successfully.
Result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("image.JPG")
# blue
lower_val_blue = (110, 0, 0)
upper_val_blue = (179,255,155)
# red
lower_val_red = (0, 0, 150)
upper_val_red = (10,255,255)
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Threshold the HSV image
mask_blue = cv2.inRange(hsv, lower_val_blue, upper_val_blue)
mask_red = cv2.inRange(hsv, lower_val_red, upper_val_red)
# combine masks
mask_total = cv2.bitwise_or(mask_blue,mask_red)
# remove noise
kernel = np.ones((8,8),np.uint8)
mask_total = cv2.morphologyEx(mask_total, cv2.MORPH_CLOSE, kernel)
# sum each row and each column of the mask
sumOfCols = np.sum(mask_total, axis=0)
sumOfRows = np.sum(mask_total, axis=1)
# Find the first and last row / column that has a sum value greater than zero,
# which means it is not all black. Store the found values in variables
for i in range(len(sumOfCols)):
    if sumOfCols[i] > 0:
        x1 = i
        print('First col: ' + str(i))
        break
for i in range(len(sumOfCols)-1,-1,-1):
    if sumOfCols[i] > 0:
        x2 = i
        print('Last col: ' + str(i))
        break
for i in range(len(sumOfRows)):
    if sumOfRows[i] > 0:
        y1 = i
        print('First row: ' + str(i))
        break
for i in range(len(sumOfRows)-1,-1,-1):
    if sumOfRows[i] > 0:
        y2 = i
        print('Last row: ' + str(i))
        break
# create a new image based on the found values
roi = img[y1:y2,x1:x2]
#show image
cv2.imshow("Result", roi)
cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()