Current code:
# Mask array with mask pixels set to 1 and rest set to 0
mask = nib.load(os.path.join(mask_path, f))
# Grey scale MRI image slice
image = nib.load(os.path.join(images_path, f))
# Extracting pixel arrays
mask_data = mask.get_fdata()
image_data = image.get_fdata()
# Setting image pixels to 0 where mask pixels are set to 1
masked_image = np.where(mask_data == 0, image_data,0)
# Transposing and flipping to fix visual orientation
masked_image = masked_image.transpose((1,0))
masked_image = np.flip(masked_image,axis=0)
# Want something like this
# colors = np.where(mask_data == 1, 'autumn','gray')
fig,ax = plt.subplots()
ax.imshow(masked_image,cmap="gray")
Output image:
Currently my code takes in an image and a mask, and I set the pixel values in the image array to 0 wherever the mask is 1. I would ideally like those masked pixels to show up as red:
The only way I can see to do this is to apply a different colormap to all pixels with a value of 0, but I don't know how to write a custom colour map based on pixel value. Is there any way to do this?
Completely forgot that matplotlib draws on top of pre-existing data when using the same figure. The code below does what I want without needing to create my own colormap.
plt.clf()
# Mask array with mask pixels set to 1 and rest set to 0
mask = nib.load(os.path.join(mask_path, f))
# Grey scale MRI image slice
image = nib.load(os.path.join(images_path, f))
# Extracting pixel arrays
mask_data = mask.get_fdata()
image_data = image.get_fdata()
# Transposing and flipping to fix visual orientation
image_data = image_data.transpose((1,0))
image_data = np.flip(image_data,axis=0)
mask_data = mask_data.transpose((1,0))
mask_data = np.flip(mask_data,axis=0)
# Set background pixels to NaN so they are not drawn over the grayscale image
mask_data[mask_data == 0] = np.nan
plt.imshow(image_data,cmap='gray')
plt.imshow(mask_data,cmap = 'autumn')
Result:
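For completeness, the custom-colormap route asked about above also works without the NaN trick: mask out the background with numpy.ma and paint what remains with a single-colour ListedColormap. A minimal sketch (my addition, assuming mask_data is still the original 0/1 array):
from matplotlib.colors import ListedColormap
plt.imshow(image_data, cmap='gray')
# masked entries are simply not drawn, so only the mask == 1 region is painted red
plt.imshow(np.ma.masked_where(mask_data == 0, mask_data), cmap=ListedColormap(['red']))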
Related
Given an image, find the unique colors in it and write one output image per unique color.
In each output, all pixels that do not have that unique color should be marked white.
For example, if an image has 3 colors, the output folder should contain three images, one per separated color. Using OpenCV & Python.
I've already created the unique color list with my own method. What I want is a count of each unique color in the sample.png image and the corresponding output images as described above.
I believe the code below (with comments) should help you with this!
Feel free to follow up if any of the code is unclear!
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
from copy import deepcopy
# Load image and convert it from BGR (opencv default) to RGB
fpath = "dog.png" # TODO: replace with your path
IMG = cv.cvtColor(cv.imread(fpath), cv.COLOR_BGR2RGB)
# Get dimensions and reshape into (H * W, C) vector - i.e. a long vector, where each element is a tuple corresponding to a color!
H, W, C = IMG.shape
IMG_FLATTENED = np.vstack([IMG[:, w, :] for w in range(W)])
# Get unique colors using np.unique function, and their counts
colors, counts = np.unique(IMG_FLATTENED, axis=0, return_counts = True)
# Jointly loop through colors and counts
for color, count in zip(colors, counts):
    print("COLOR: {}, COUNT: {}".format(color, count))
    # Create placeholder image and mark all pixels as white
    SINGLE_COLOR = (255 * np.ones(IMG.shape)).astype(np.uint8) # make sure it is cast to uint8
    # Compute binary mask of pixel locations where the color is, and set that color in the new image
    color_idx = np.all(IMG[..., :] == color, axis=-1)
    SINGLE_COLOR[color_idx, :] = color
    # Write file to output with color and count specified
    cv.imwrite("color={}_count={}.png".format(color, count), SINGLE_COLOR)
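A side note (mine, not from the answer): the column-stacking above can be written as a single reshape; the pixel ordering differs, but np.unique does not depend on it.
IMG_FLATTENED = IMG.reshape(-1, C)  # same set of (R, G, B) rows, different order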
Ack, he beat me to it. Well, here's what I've got.
Oh no, I don't think the line
blank[img == color] = img[img == color]
behaves how I think it does. I think it just coincidentally works for this case. I'll edit the code with a solution I'm more confident works for all cases.
Original Image
import cv2
import numpy as np
# load image
img = cv2.imread("circles.png");
# get uniques
unique_colors, counts = np.unique(img.reshape(-1, img.shape[-1]), axis=0, return_counts=True);
# split off each color
splits = [];
for a in range(len(unique_colors)):
    # get the color
    color = unique_colors[a];
    blank = np.zeros_like(img);
    mask = cv2.inRange(img, color, color); # edited line 1
    blank[mask == 255] = img[mask == 255]; # edited line 2
    # show
    cv2.imshow("Blank", blank);
    cv2.waitKey(0);
    # save each color with its count
    file_str = "";
    for b in range(3):
        file_str += str(color[b]) + "_";
    file_str += str(counts[a]) + ".png";
    cv2.imwrite(file_str, blank);
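A small simplification worth noting (my addition, not part of the answer): since every pixel selected by the mask has exactly this colour, the copy can be written as a direct assignment, which sidesteps the indexing concern mentioned above.
blank[mask == 255] = color  # all selected pixels have exactly this colour anyway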
I am trying to make my code more robust compared to my first revision. The goal is to generate a single final image C by comparing images A and B. Currently I am working on showing the differences between images composed of black lines; in this case, those would be images A and B. I already have the pre-processing working (resizing, noise reduction, etc.). The code I developed to show the differences (image C) is shown below:
np_image_A = np.array(image_A)
np_image_B = np.array(image_B)
# Set the green and red channels respectively to 0. Leaves a blue image
np_image_A[:, :, 1] = 0
np_image_A[:, :, 2] = 0
# Set the blue channels to 0.
np_image_B[:, :, 0] = 0
# Add the np images after color modification
overlay_image = cv2.add(np_image_A, np_image_B)
I currently don't feel that this is robust enough, and it may lead to issues down the line. I want a method that shows the differences between images A and B in a single image, where differences unique to image A are assigned one color and differences unique to image B another (such as blue and red), and black represents areas that are the same. This is highlighted in the image below:
To remedy this, I received some help from StackOverflow and now have a method that uses masking and merging in OpenCV. The issue that I have found is that only additive changes are shown, and if an item is removed, it is not shown in the difference image.
Here is the updated code that gets me part of the way to the solution I am seeking. The issue with this code is that it produces image D rather than image C. I tried essentially running this block of code twice, switching between img = imageA and img = imageB, but the output came out mangled for some reason.
# load image A as color image
img = cv2.imread('1a.png')
# load A and B as grayscale
imgA = cv2.imread('1a.png',0)
imgB = cv2.imread('1b.png',0)
# invert grayscale images for subtraction
imgA_inv = cv2.bitwise_not(imgA)
imgB_inv = cv2.bitwise_not(imgB)
# subtract the original (A) from the new version (B)
diff = cv2.subtract(imgB_inv, imgA_inv)
# split color image A into blue,green,red color channels
b,g,r = cv2.split(img)
# merge channels back into image, subtracting the diff from
# the blue and green channels, leaving the shape of diff red
res = cv2.merge((b-diff,g-diff,r))
# display result
cv2.imshow('Result',res)
cv2.waitKey(0)
cv2.destroyAllWindows()
The result that I am looking for is image C, but currently I can only achieve image D with the revised code.
Edit: Here are the test images A and B for use.
You're almost there, but you need to create two separate diffs. One diff represents the black pixels that are in A but not in B, and the other diff represents the black pixels that are in B but not in A.
Result:
import cv2
import numpy as np
# load A and B as grayscale
imgA = cv2.imread('1a.png',0)
imgB = cv2.imread('1b.png',0)
# invert grayscale images for subtraction
imgA_inv = cv2.bitwise_not(imgA)
imgB_inv = cv2.bitwise_not(imgB)
# create two diffs, A - B and B - A
diff1 = cv2.subtract(imgB_inv, imgA_inv)
diff2 = cv2.subtract(imgA_inv, imgB_inv)
# create a combined image of the two inverted
combined = cv2.add(imgA_inv, imgB_inv)
combined_inv = cv2.bitwise_not(combined)
# convert the combined image back to RGB,
# so that we can modify individual color channels
combined_rgb = cv2.cvtColor(combined_inv, cv2.COLOR_GRAY2RGB)
# split combined image into blue,green,red color channels
b,g,r = cv2.split(combined_rgb)
# merge channels back into image, adding the first diff to
# the red channel and the second diff to the blue channel
res = cv2.merge((b+diff2,g,r+diff1))
# display result
cv2.imshow('Result',res)
cv2.waitKey(0)
cv2.destroyAllWindows()
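One detail to be aware of (my note, not part of the answer): b + diff2 and r + diff1 use NumPy's wrap-around uint8 addition. For these clean black-and-white line images the sums never exceed 255, but if the inputs contain anti-aliased grey pixels, the saturating cv2.add is the safer choice:
# saturating alternative to the plain '+' (clips at 255 instead of wrapping around)
res = cv2.merge((cv2.add(b, diff2), g, cv2.add(r, diff1)))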
I have an image with a black background that contains different shapes in different colors. I want to generate an image per shape, in which the shape is white and the background is black. I have been able to do this with numpy, but I would like to optimize my code using vectorization. This is what I have so far:
import numpy as np
import cv2
image = cv2.imread('mask.png')
image.shape
# (720, 1280, 3)
# Get all colors that are not black
colors = np.unique(image.reshape(-1,3), axis=0)
colors = np.delete(colors, [0,0,0], axis=0)
colors.shape
# (5, 3)
# Example for one color. I could do a for-loop, but I want to vectorize instead
c = colors[0]
query = (image == c).all(axis=2)
# Make the image all black, except for the pixels that match the shape
image[query] = [255,255,255]
image[np.logical_not(query)] = [0,0,0]
Approach #1
You can save a lot of intermediate array data by extending the unique colors into a higher dimension, comparing against the original data array, and then using the resulting mask directly to get the final output -
# Get unique colors (remove black)
colors = np.unique(image.reshape(-1,3), axis=0)
colors = np.delete(colors, [0,0,0], axis=0)
mask = (colors[:,None,None,:]==image).all(-1)
out = mask[...,None]*np.array([255,255,255])
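If you need one file per shape, as in the question, the stacked output can be written out slice by slice; a possible follow-up (mine, not part of the answer), reusing the cv2 import from the question:
# out has shape (num_colors, H, W, 3); save each slice as its own image
out = out.astype(np.uint8)
for i in range(out.shape[0]):
    cv2.imwrite('shape_{}.png'.format(i), out[i])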
Approach #2
A better/memory-efficient way to get that mask would be with something like this -
u,ids = np.unique(image.reshape(-1,3), axis=0, return_inverse=1)
m,n = image.shape[:-1]
ids = ids.reshape(m,n)-1
mask = np.zeros((ids.max()+1,m,n),dtype=bool)
mask[ids,np.arange(m)[:,None],np.arange(n)] = ids>=0
and hence, a better way to get the final output, like so -
out = np.zeros(mask.shape + (3,), dtype=np.uint8)
out[mask] = [255,255,255]
and probably a better way to get ids would be with matrix-multiplication. Hence :
u,ids = np.unique(image.reshape(-1,3), axis=0, return_inverse=1)
could be replaced by :
image2D = np.tensordot(image,256**np.arange(3),axes=(-1,-1))
ids = np.unique(image2D,return_inverse=1)[1]
I was able to solve it the following way:
import numpy as np
import cv2
# Read the image
image = cv2.imread('0-mask.png')
# Get unique colors (remove black)
colors = np.unique(image.reshape(-1,3), axis=0)
colors = np.delete(colors, [0,0,0], axis=0)
# Get number of unique colors
instances = colors.shape[0]
# Reshape colors and image for broadcasting
colors = colors.reshape(instances,1,1,3)
image = image[np.newaxis]
# Generate multiple images, one per instance
mask = np.ones((instances, 1, 1, 1))
images = (image * mask)
# Run query with the original image
query = (image == colors).all(axis=3)
# For every image, color the shape white, everything else black
images[query] = [255,255,255]
images[np.logical_not(query)] = [0,0,0]
I'm trying to crop multiple images with a green background. The center of the pictures is green and I want to cut the rest out of the picture. The problem is that I got the pictures from a video, so sometimes the green center is bigger and sometimes smaller. My actual task is to use K-Means on the knots, so I have, for example, a green background and two ropes, one blue and one red.
I use python with opencv, numpy and matplotlib.
I already crop the center, but sometimes I cut too much and sometimes too little. My image size is 1920 x 1080 in this example.
Here the knot is on the left and there is more to cut
Here the knot is in the center
Here is another example
Here is my desired output from picture 1
Example 1, which doesn't work with all algorithms
Example 2, which doesn't work with all algorithms
Example 3, which doesn't work with all algorithms
Here is my Code so far:
import numpy as np
import cv2
import matplotlib.pyplot as plt
from PIL import Image, ImageEnhance
img = cv2.imread('path')
print(img.shape)
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
crop_img = imgRGB[500:500+700, 300:300+500]
plt.imshow(crop_img)
plt.show()
You can convert the image to the HSV color space.
src = cv2.imread('path')
imgRGB = cv2.cvtColor(src, cv2.COLOR_BGR2RGB)
imgHSV = cv2.cvtColor(imgRGB, cv2.COLOR_RGB2HSV)
Then use inRange to find only green values.
lower = np.array([20, 0, 0]) # Lower bound of the HSV range; green has a hue of 120 degrees, but OpenCV scales hue to [0-180], so green sits around 60
upper = np.array([100, 255, 255]) # Upper bound of the HSV range
imgRange = cv2.inRange(imgHSV, lower, upper)
Then use morphology operations to fill the holes left by the non-green ropes.
#kernels for morphology operations
kernel_noise = np.ones((3,3),np.uint8) # small kernel to remove noise
kernel_dilate = np.ones((30,30),np.uint8) # bigger kernel to fill the holes left by the ropes
kernel_erode = np.ones((38,38),np.uint8) # bigger kernel to remove the edge pixels added by the dilation
imgErode = cv2.erode(imgRange, kernel_noise, iterations=1)
imgDilate = cv2.dilate(imgErode, kernel_dilate, iterations=1)
imgErode = cv2.erode(imgDilate, kernel_erode, iterations=1)
Apply the mask to the source image. You can now easily find the corners of the green screen (findContours function, sketched below) or use the masked image in the next steps.
res = cv2.bitwise_and(imgRGB, imgRGB, mask = imgErode) # apply the green-screen mask to the source image
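As a possible follow-up to the findContours suggestion (my sketch, not part of the original answer), the crop can be taken from the bounding box of the largest contour in the mask. Note that cv2.findContours returns three values in OpenCV 3.x; the form below is for 4.x.
# crop to the green screen via the bounding box of the largest mask contour
contours, _ = cv2.findContours(imgErode, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(largest)
cropped = imgRGB[y:y+h, x:x+w]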
The code below does what you want. First it converts the image to the HSV colorspace, which makes selecting colors easier. Next a mask is made where only the green parts are selected. Some noise is removed and the rows and columns are summed up. Finally a new image is created based on the first/last rows/cols that fall in the green selection.
Since in all provided examples a little extra needed to be cropped off the top, I've added code to do that. First I've inverted the mask; now you can use it to find the first row that falls fully within the green selection. This is done for the top only. In the image below the window 'Roi2' is the final image.
Edit: updated code after comment by ts.
Updated result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("gr.png")
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# set lower and upper color limits
lower_val = (30, 0, 0)
upper_val = (65,255,255)
# Threshold the HSV image to get only green colors
# the mask has white where the original image has green
mask = cv2.inRange(hsv, lower_val, upper_val)
# remove noise
kernel = np.ones((8,8),np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# sum each row and each column of the mask
sumOfCols = np.sum(mask, axis=0)
sumOfRows = np.sum(mask, axis=1)
# Find the first and last row / column that has a sum value greater than zero,
# which means its not all black. Store the found values in variables
for i in range(len(sumOfCols)):
    if sumOfCols[i] > 0:
        x1 = i
        print('First col: ' + str(i))
        break
for i in range(len(sumOfCols)-1,-1,-1):
    if sumOfCols[i] > 0:
        x2 = i
        print('Last col: ' + str(i))
        break
for i in range(len(sumOfRows)):
    if sumOfRows[i] > 0:
        y1 = i
        print('First row: ' + str(i))
        break
for i in range(len(sumOfRows)-1,-1,-1):
    if sumOfRows[i] > 0:
        y2 = i
        print('Last row: ' + str(i))
        break
# create a new image based on the found values
#roi = img[y1:y2,x1:x2]
#show images
#cv2.imshow("Roi", roi)
# optional: to cut off the extra part at the top:
#invert mask, all area's not green become white
mask_inv = cv2.bitwise_not(mask)
# search the first and last column top down for a green pixel and cut off at lowest common point
for i in range(mask_inv.shape[0]):
    if mask_inv[i,0] == 0 and mask_inv[i,x2] == 0:
        y1 = i
        print('First row: ' + str(i))
        break
# create a new image based on the found values
roi2 = img[y1:y2,x1:x2]
cv2.imshow("Roi2", roi2)
cv2.imwrite("img_cropped.jpg", roi2)
cv2.waitKey(0)
cv2.destroyAllWindows()
The first step is to extract the green channel from your image. This is easy with OpenCV/numpy and produces a grayscale image (a 2D numpy array):
import numpy as np
import cv2
img = cv2.imread('knots.png')
imgg = img[:,:,1] #extracting green channel
The second step is thresholding, which means turning the grayscale image into a binary (black and white ONLY) image, for which OpenCV has a ready-made function: https://docs.opencv.org/3.4.0/d7/d4d/tutorial_py_thresholding.html
imgt = cv2.threshold(imgg,127,255,cv2.THRESH_BINARY)[1]
Now imgt is a 2D numpy array consisting solely of 0s and 255s. Next you have to decide how to look for the cut positions; I suggest the following:
topmost row of pixels containing at least 50% 255s
bottommost row of pixels containing at least 50% 255s
leftmost column of pixels containing at least 50% 255s
rightmost column of pixels containing at least 50% 255s
Now we have to count the number of occurrences in each row and each column:
height = img.shape[0]
width = img.shape[1]
columns = np.apply_along_axis(np.count_nonzero,0,imgt)
rows = np.apply_along_axis(np.count_nonzero,1,imgt)
Now columns and rows are 1D numpy arrays containing the number of 255s for each column/row; knowing the height and width, we can get 1D numpy arrays of bool values in the following way:
columns = columns>=(height*0.5)
rows = rows>=(width*0.5)
Here 0.5 is the 50% mentioned earlier; feel free to adjust that value to your needs. Now it is time to find the index of the first True and the last True in columns and rows.
icolumns = np.argwhere(columns)
irows = np.argwhere(rows)
leftcut = int(min(icolumns))
rightcut = int(max(icolumns))
topcut = int(min(irows))
bottomcut = int(max(irows))
Using argwhere I got 1D numpy arrays of the indexes of the True values, then took the lowest and greatest. Finally you can crop your image and save it:
imgout = img[topcut:bottomcut,leftcut:rightcut]
cv2.imwrite('out.png',imgout)
There are two places which might require adjusting: the % of 255s (50% in my example) and the threshold value (127 in cv2.threshold).
EDIT: Fixed line with cv2.threshold
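For convenience, the steps above can be wrapped into one function with those two tunable values exposed as parameters; a sketch (function and parameter names are my own, not from the original answer):
def crop_green(img, thresh=127, frac=0.5):
    imgg = img[:, :, 1]  # green channel
    imgt = cv2.threshold(imgg, thresh, 255, cv2.THRESH_BINARY)[1]
    height, width = imgt.shape
    # rows/columns where at least `frac` of the pixels are 255
    columns = np.count_nonzero(imgt, axis=0) >= height * frac
    rows = np.count_nonzero(imgt, axis=1) >= width * frac
    icolumns = np.argwhere(columns)
    irows = np.argwhere(rows)
    return img[int(min(irows)):int(max(irows)), int(min(icolumns)):int(max(icolumns))]

cv2.imwrite('out.png', crop_green(cv2.imread('knots.png')))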
Based on the new images you added, I assume that you do not only want to cut out the non-green parts as you asked, but that you want a tighter frame around the ropes/knot. Is that correct? If not, you should upload the video and describe the purpose/goal of the cropping a bit more, so that we can help you better.
Assuming you want a cropped image with only the ropes, the solution is quite similar to the previous answer. However, this time the red and blue of the ropes are selected in HSV. The image is cropped based on the resulting mask. If you want the image somewhat bigger than just the ropes, you can add extra margins - but be sure to check for the edges of the image.
Note: the code below works for images that have a full green background, so I suggest you combine it with one of the solutions that only selects the green area. I tested this on all your images as follows: I took the code from my other answer, put it in a function and added return roi2 at the end. Its output is fed into a second function that holds the code below. All images were processed successfully.
Result:
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("image.JPG")
# blue
lower_val_blue = (110, 0, 0)
upper_val_blue = (179,255,155)
# red
lower_val_red = (0, 0, 150)
upper_val_red = (10,255,255)
# Threshold the HSV image
mask_blue = cv2.inRange(img, lower_val_blue, upper_val_blue)
mask_red = cv2.inRange(img, lower_val_red, upper_val_red)
# combine masks
mask_total = cv2.bitwise_or(mask_blue,mask_red)
# remove noise
kernel = np.ones((8,8),np.uint8)
mask_total = cv2.morphologyEx(mask_total, cv2.MORPH_CLOSE, kernel)
# sum each row and each column of the mask
sumOfCols = np.sum(mask_total, axis=0)
sumOfRows = np.sum(mask_total, axis=1)
# Find the first and last row / column that has a sum value greater than zero,
# which means its not all black. Store the found values in variables
for i in range(len(sumOfCols)):
    if sumOfCols[i] > 0:
        x1 = i
        print('First col: ' + str(i))
        break
for i in range(len(sumOfCols)-1,-1,-1):
    if sumOfCols[i] > 0:
        x2 = i
        print('Last col: ' + str(i))
        break
for i in range(len(sumOfRows)):
    if sumOfRows[i] > 0:
        y1 = i
        print('First row: ' + str(i))
        break
for i in range(len(sumOfRows)-1,-1,-1):
    if sumOfRows[i] > 0:
        y2 = i
        print('Last row: ' + str(i))
        break
# create a new image based on the found values
roi = img[y1:y2,x1:x2]
#show image
cv2.imshow("Result", roi)
cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
I have a color palette image like this one and a binarized image in a numpy array, for example a square such as this:
img = np.zeros((100,100), dtype=bool)
img[25:75,25:75] = 1
(The real images are more complicated of course.)
I would like to do the following:
Extract all RGB colors from the color palette image.
For each color, save a copy of img in that color with a transparent background.
My code so far (see below) can save the img as a black object with transparent background. What I am struggling with is a good way of extracting the RGB colors so I can apply them to the image.
# Create an MxNx4 array (RGBA)
img_rgba = np.zeros((img.shape[0], img.shape[1], 4), dtype=bool)
# Fill R, G and B with inverted copies of the image
# Note: This creates a black object; instead of this, I need the colors from the palette.
for c in range(3):
    img_rgba[:,:,c] = ~img
# For alpha just use the image again (makes background transparent)
img_rgba[:,:,3] = img
# Save image
imsave('img.png', img_rgba)
You can use a combination of a reshape and np.unique to extract the unique RGB values from your color palette image:
# Load the color palette
from skimage import io
palette = io.imread(os.path.join(os.getcwd(), 'color_palette.png'))
# Use `np.unique` following a reshape to get the RGB values
palette = palette.reshape(palette.shape[0]*palette.shape[1], palette.shape[2])
palette_colors = np.unique(palette, axis=0)
(Note that the axis argument for np.unique was added in numpy version 1.13.0, so you may need to upgrade numpy for this to work.)
Once you have palette_colors, you can pretty much use the code you already have to save the image, except you now add the different RGB values instead of copies of ~img to your img_rgba array.
for p in range(palette_colors.shape[0]):
    # Create an MxNx4 array (RGBA)
    img_rgba = np.zeros((img.shape[0], img.shape[1], 4), dtype=np.uint8)
    # Fill R, G and B with appropriate colors
    for c in range(3):
        img_rgba[:,:,c] = img.astype(np.uint8) * palette_colors[p,c]
    # For alpha just use the image again (makes background transparent)
    img_rgba[:,:,3] = img.astype(np.uint8) * 255
    # Save image
    imsave('img_col'+str(p)+'.png', img_rgba)
(Note that you need to use np.uint8 as datatype for your image, since binary images obviously cannot represent different colors.)
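For reference, here is a consolidated version of the above, tying the question's square img to the palette loop (a sketch; color_palette.png is a placeholder path):
import os
import numpy as np
from skimage import io
from skimage.io import imsave

# binarized shape from the question
img = np.zeros((100, 100), dtype=bool)
img[25:75, 25:75] = 1

# unique RGB values from the palette image
palette = io.imread(os.path.join(os.getcwd(), 'color_palette.png'))
palette_colors = np.unique(palette.reshape(-1, palette.shape[2]), axis=0)

# one RGBA file per palette colour, with a transparent background
for p in range(palette_colors.shape[0]):
    img_rgba = np.zeros((img.shape[0], img.shape[1], 4), dtype=np.uint8)
    for c in range(3):
        img_rgba[:, :, c] = img.astype(np.uint8) * palette_colors[p, c]
    img_rgba[:, :, 3] = img.astype(np.uint8) * 255
    imsave('img_col' + str(p) + '.png', img_rgba)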