How to find the average position of pixels using numpy and cv2 - python

I made a color mask with cv2 and all I need to do is get the average position of all the white pixels on the mask which are labeled 255. My array looks something like this:
[
[0, 0, 0, 0, 255, 255, ...],
[0, 0, 0, 0, 255, 255, ...],
[0, 0, 0, 0, 255, 255, ...],
[0, 0, 0, 0, 255, 255, ...],
...
]
The size of the screen is 400px high and 600px wide, so I can't use for loops without it looking like an Android camera from 2014. I've seen a lot of tutorials on how to average the pixel value but not the position. Is anyone able to help me out?
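For reference, the centroid (average position) of the white pixels can be computed without Python loops using np.nonzero; a minimal sketch, assuming a 2-D uint8 mask like the one described (the mask contents here are hypothetical):

```python
import numpy as np

# hypothetical 400x600 mask: a white vertical band on a black background
mask = np.zeros((400, 600), dtype=np.uint8)
mask[:, 200:300] = 255

# row/column indices of every white pixel, averaged in one vectorized step
ys, xs = np.nonzero(mask == 255)
cy, cx = ys.mean(), xs.mean()   # average (row, column) position
```

This is O(pixels) in C rather than in Python, so it is effectively instant at 400x600.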

Related

Is there a way to make a square pattern mask that matches the size of my image in python?

I was wondering if there is a way to make an array/column like [255, 255, 255, 255, 0, 0, 0, 0, 255, 255, 255, 255, 0, 0, 0, 0, 255....] that repeats until a certain length (the length of an imported image) for every row/array. The goal is to make an image that shows 255 as white "pixels" and 0 as black "pixels". Is this possible?
[image: proposed mask concept]
[image: final result]
import numpy as np
from skimage import data
from skimage import io

path = r'C:\Python36\bmp_files\slide_3.png'
sz = 48
image = data.astronaut()
patch1 = image[250:250+sz, 250:250+sz, :]
patch1.shape
# (48, 48, 3)
mask = np.tile(np.array([[[1], [0]], [[0], [0]]], dtype=np.uint8), (sz // 2, sz // 2, 3))
mask.shape
# (48, 48, 3)
print(mask)
patch2 = patch1 * mask
patch12 = np.hstack((patch1, patch2))
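The exact row pattern described in the question (four 255s, then four 0s, repeating) can also be built directly with np.tile; a sketch, assuming a hypothetical image size:

```python
import numpy as np

height, width = 400, 600   # hypothetical size of the imported image

# one repeating unit: four white, four black
unit = np.array([255, 255, 255, 255, 0, 0, 0, 0], dtype=np.uint8)

# tile it past the target width, trim, then repeat for every row
row = np.tile(unit, width // len(unit) + 1)[:width]
mask = np.tile(row, (height, 1))
```

Replacing `width` and `height` with `image.shape[1]` and `image.shape[0]` makes the mask match any imported image.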

Black and white color addition

When red is mixed with green, I get yellow as expected.
RGB for Red: [255, 0, 0]
RGB for Green: [0, 255, 0]
Result: [255, 255, 0]
But when white is mixed with black, I would normally expect grey, yet I get white. Shouldn't I get grey?
RGB for Black: [0, 0, 0]
RGB for White: [255, 255, 255]
Result: [255, 255, 255]
Here is the code:
from PIL import Image, ImageChops
import math
import matplotlib.pylab as plt
im1= Image.open(r'.\red.jpg')
im2= Image.open(r'.\green.jpg')
result = ImageChops.add(im1, im2)
plt.imshow(result)
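For what it's worth, ImageChops.add takes a scale argument, computing ((image1 + image2) / scale + offset), so scale=2.0 turns the clipped sum into an average and yields grey from black plus white. A sketch with solid-color stand-ins, since red.jpg and green.jpg aren't available here:

```python
from PIL import Image, ImageChops

black = Image.new('RGB', (10, 10), (0, 0, 0))
white = Image.new('RGB', (10, 10), (255, 255, 255))

# (black + white) / 2 -> mid grey instead of a clipped white
grey = ImageChops.add(black, white, scale=2.0)
```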
I think what @Cris Luengo said ("If you want to get gray, average the white and black pixels together") is valid; one additional refinement can be a mix factor.
You can use OpenCV for this.
Imports:
import sys
import cv2
import numpy as np
Load image:
im = cv2.imread(sys.path[0]+'/im.png')
Main code:
color = [0, 0, 0]
mixFactor = .5
im = (1 - mixFactor) * im + [mixFactor * x for x in color]
My input values for color:
[0, 0, 0] black
[255, 255, 255] white
[255, 0, 0] blue (BGR)
[0, 255, 0] green
[0, 0, 255] red
I drew the first colorful image using graphics software; the other eight were created with this Python code.
Credit: The text on the image is written using the default Hershey font included in OpenCV. And you can read more here and here.
Update:
If you want to use imshow on the output of this blending method, note that the blend produces a float array, so scale it to [0, 1] first:
cv2.imshow("preview", im/255)

How to convert 2D matrix of RGBA tuples into PIL Image?

Suppose if I have image img with contents:
[[(255, 255, 255, 255), (0, 0, 0, 255), (0, 0, 0, 255), (0, 0, 0, 255)],
[(0, 0, 0, 255), (255, 255, 255, 255), (0, 0, 0, 255), (0, 0, 0, 255)],
[(0, 0, 0, 255), (0, 0, 0, 255), (255, 255, 255, 255), (0, 0, 0, 255)],
[(0, 0, 0, 255), (0, 0, 0, 255), (0, 0, 0, 255), (255, 255, 255, 255)]]
Is there any way I can make a PIL Image from it?
I tried Image.fromarray(np.asarray(img)) and I got the following error:
TypeError: Cannot handle this data type: (1, 1, 4), <i4
How can I resolve it? Also, is there any solution that doesn't use the numpy module? Thanks in advance.
I think you want this (fairly self-explanatory from the docs):
import numpy as np
from PIL import Image

arr = np.array(img)
# note: PIL sizes are (width, height), i.e. (columns, rows)
PIL_image = Image.frombuffer('RGBA', (arr.shape[1], arr.shape[0]), np.uint8(arr.reshape(-1, arr.shape[2])), 'raw', 'RGBA', 0, 1)
You need to explicitly set the dtype of the array to np.uint8 so that Image.fromarray knows the format of the input data. I would also recommend specifying the mode, because I don't know how PIL chooses between RGBA and CMYK when there are four channels. The solution is here:
import numpy as np
from PIL import Image

Image.fromarray(np.asarray(img, dtype=np.uint8), mode='RGBA')
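Since the question also asks for a solution without numpy, a pure-PIL sketch using Image.new and putdata (pil_image is a hypothetical name):

```python
from PIL import Image

# the 4x4 matrix of RGBA tuples from the question
img = [[(255, 255, 255, 255), (0, 0, 0, 255), (0, 0, 0, 255), (0, 0, 0, 255)],
       [(0, 0, 0, 255), (255, 255, 255, 255), (0, 0, 0, 255), (0, 0, 0, 255)],
       [(0, 0, 0, 255), (0, 0, 0, 255), (255, 255, 255, 255), (0, 0, 0, 255)],
       [(0, 0, 0, 255), (0, 0, 0, 255), (0, 0, 0, 255), (255, 255, 255, 255)]]

h, w = len(img), len(img[0])
pil_image = Image.new('RGBA', (w, h))
# flatten the nested list row by row; putdata fills left-to-right, top-to-bottom
pil_image.putdata([px for row in img for px in row])
```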

Denoising Comet Assay

I have been trying to reduce the noise in the attached image. Basically, I want to remove the background dust from the image. Currently, I have tried looking for small points throughout the image (anything that fits within a 10x10 grid with low green pixel intensity) and then blacking out that 10x10 region. However, I was hoping to remove more noise from the image. Is there possibly a way to run some filters in OpenCV to do so?
A simple approach can be: convert the image to grayscale, threshold it, and then apply morphological opening to get an estimate:
import cv2
import numpy as np

img = cv2.imread("commitdust.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, th = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY)
k = np.array([[0, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 1, 1, 1],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 0, 0]], dtype=np.uint8)
th = cv2.morphologyEx(th, cv2.MORPH_OPEN, k)
cv2.imshow("th", th)
cv2.waitKey(0)
cv2.destroyAllWindows()

cv2.VideoWriter cuts out a row of pixels from image stack

I am using cv2.VideoWriter() as an intermediate step in a larger image processing workflow. Basically I have a stack of images that need to be turned into a timelapse, whose frames are then processed and used downstream to mask the original imagery. My masking isn't working because the array sizes do not correspond with one another, and I've diagnosed the problem as arising from cv2.VideoWriter(). My timelapse assembly process came from here.
There are a ton of posts about cv2.VideoWriter() not working because the frame size is wrong etc. but my problem is not that the video won't write - it's that dimensions of my imagery are being changed. In fact, I'm not even sure if the top row or bottom row is what's being cut off, or if there is some underlying resampling step or something.
import cv2
import numpy as np
import glob

imgs = glob.glob('*.jpg')
img_array = []
for filename in imgs:
    img = cv2.imread(filename)
    height, width, layers = img.shape
    size = (width, height)
    img_array.append(img)
size  # calling `size` returns (250, 187)
out = cv2.VideoWriter('project.avi', cv2.VideoWriter_fourcc(*'DIVX'), 15, size)
for i in range(len(img_array)):
    out.write(img_array[i])
out.release()
cap = cv2.VideoCapture('project.avi')
mycap = cap.read()
mycap[1].shape  # this returns (186, 250, 3)
I would have expected mycap[1].shape to have the same attributes as size but while size indicates I have a 250 pixel wide and 187 pixel tall array, mycap[1].shape shows that the video has dimensions 250x186.
After some testing I confirmed that cv2.VideoWriter() is not simply clipping an image with odd dimension values, but is instead altering values in the arrays while changing dimensions:
import numpy as np
import pylab as plt
import cv2 as cv

# Create RGB layers
r = np.array([[255, 0, 255, 0, 255, 0, 255, 0, 255], [255, 0, 255, 0, 255, 0, 255, 0, 255], [255, 0, 255, 0, 255, 0, 255, 0, 255]], dtype=np.uint8)
g = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)
b = np.array([[10, 0, 10, 0, 10, 0, 255, 0, 255], [10, 0, 10, 0, 10, 0, 255, 0, 255], [10, 0, 10, 0, 10, 0, 255, 0, 255]], dtype=np.uint8)
# Create a few image layers
rgb1 = np.dstack((r, g, b))
rgb2 = np.dstack((r, g, b))
rgb3 = np.dstack((r, g, b))
rgb4 = np.dstack((r, g, b))
plt.imshow(rgb1)
imgs = [rgb1, rgb2, rgb3, rgb4]
# Create timelapse
img_array = []
for img in imgs:
    height, width, layers = img.shape
    size = (width, height)
    img_array.append(img)
out = cv.VideoWriter('SO_question.avi', cv.VideoWriter_fourcc(*'DIVX'), 15, size)
for i in range(len(img_array)):
    out.write(img_array[i])
out.release()
# Read video in
cap = cv.VideoCapture('SO_question.avi')
cap.read()[1].shape
plt.imshow(cap.read()[1])
plt.imshow(rgb1) produces the following image:
But plt.imshow(cap.read()[1]) produces the following image:
Furthermore, using print(cap.read()[1]) shows that array values are not maintained across the process. Thus, I conclude that a resampling step (rather than a simple crop) occurs when the width or height is an odd number of pixels.
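A likely explanation (not stated in the thread): many codecs, including DIVX with 4:2:0 chroma subsampling, require even frame dimensions, so odd-sized frames get resampled on write. One workaround is to pad each frame to even dimensions before writing; a pure-NumPy sketch, where pad_to_even is a hypothetical helper name:

```python
import numpy as np

def pad_to_even(frame):
    """Pad with a duplicated edge row/column so both dimensions are even."""
    h, w = frame.shape[:2]
    return np.pad(frame, ((0, h % 2), (0, w % 2), (0, 0)), mode='edge')

frame = np.zeros((187, 250, 3), dtype=np.uint8)
padded = pad_to_even(frame)   # height 187 -> 188, width stays 250
```

Writing the padded frames (and passing the padded size to cv2.VideoWriter) should keep the pixel grid aligned with the source imagery, with only one duplicated edge row to account for downstream.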
