How can I make a mask image bigger with Python numpy?

I have an array that represents a mask with shape (3080, 2000) and dtype('uint8'), shown in the following picture:
[mask image][1]
My mask is not always a rectangle; it could be any shape. I just want to make the mask bigger, i.e. extend it a few pixels in every direction. How can I achieve that?
[1]: https://i.stack.imgur.com/K2zaA.png
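A minimal sketch of one common way to do this, assuming the nonzero pixels of mask mark the region you want to grow: morphological dilation with cv2.dilate (scipy.ndimage.binary_dilation behaves the same way) extends the mask by roughly the kernel radius in every direction.
import cv2
import numpy as np

# placeholder for your (3080, 2000) uint8 mask, nonzero where the mask is
mask = np.zeros((3080, 2000), dtype=np.uint8)
mask[1000:2000, 500:1500] = 255

grow = 5                                               # pixels to extend in every direction
kernel = np.ones((2 * grow + 1, 2 * grow + 1), dtype=np.uint8)
bigger_mask = cv2.dilate(mask, kernel, iterations=1)   # works for any mask shape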

Related

Iterating through two image arrays to create a merged image

I have two 128x128 black and white images, with a black background and a white image in the centre. One is the input to a function and the other is that function's output. I need to take the bottom-right 64x64 pixels of the output image and compare them to the bottom-right 64x64 of the input image array. Where a pixel is white in the output, I need the corresponding pixel in the input to be made white also.
I have tried just using numpy slicing to cut and paste the 64x64 part of the output over the input but occasionally the input is larger than the output so this is not desirable.
I've tried looping through a single image with the following type of looping code:
for (i, row) in enumerate(image_mesh):
    for (j, pixel) in enumerate(row):
        print(pixel)
But am stuck on how to loop through two arrays simultaneously and compare individual pixels. Sample images attached.
Assuming your images are RGB and you are trying to compare black (0, 0, 0) against white (1, 1, 1), you can simply compare them:
comparison_ab = np.isclose(img_a[64:, 64:, :], img_b[64:, 64:, :]).all(axis=-1)  # 64x64 bool
and create an RGB image out of it using broadcasting:
comparison_ab[:, :, None] * np.array([1.0, 1.0, 1.0])[None, None, :]  # 64x64x3 float
or take the elementwise minimum
np.minimum(img_a, img_b)
Couldn't get Nil's answer to work but resolved it using:
# convert input files to bool
img_a = input > 200
img_b = output[:, :, 0] > 200
comparison = img_b > img_a
# convert from bool to uint8
comparison = comparison.astype('uint8') * 255
Converting to a bool array was helpful; a direct greater-than operation separated out the differences nicely. I was able to paste this in with a slicing operation easily.
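A small self-contained sketch of that workflow, using hypothetical 128x128 arrays in place of the real input and output images:
import numpy as np

# hypothetical stand-ins: greyscale input, RGB output, both 128x128
input_img = np.zeros((128, 128), dtype=np.uint8)
output_img = np.zeros((128, 128, 3), dtype=np.uint8)
output_img[80:, 80:] = 255                     # a white patch in the bottom-right quadrant

img_a = input_img[64:, 64:] > 200              # bottom-right 64x64 of the input, as bool
img_b = output_img[64:, 64:, 0] > 200          # bottom-right 64x64 of the output, as bool
comparison = img_b > img_a                     # white in the output but not yet in the input

# paste back with a slicing operation: make those pixels white in the input
input_img[64:, 64:][comparison] = 255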

Python Otsu Threshold of a 2-dimensional array

I wrote an Ant Colony Optimization code in Python for finding edges in an image.
I based it on this document: http://ohm.ecce.admu.edu.ph/wiki/pub/Main/TotoOppus/ant_colony_optimization_for_image_edge_detection_WSEASConference.pdf
In the end I have a pheromone matrix of size [I, J] (a numpy array), where I, J are equal to the width and height of the picture I am finding edges in.
Now I want to find the Otsu threshold of that matrix and put white pixels onto a black canvas of size [I, J] in the places where the corresponding pheromone values are bigger than the threshold. Is there a function to easily find the Otsu threshold of a 2-dimensional numpy array?
This function from scikit-image should work to derive the Otsu threshold for your array, as the image input is an ndarray.
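A short sketch, assuming the function referred to is skimage.filters.threshold_otsu and that pheromone is your [I, J] matrix:
import numpy as np
from skimage.filters import threshold_otsu

pheromone = np.random.rand(300, 400)           # placeholder for your pheromone matrix

thresh = threshold_otsu(pheromone)             # Otsu threshold of the 2D array
canvas = np.zeros(pheromone.shape, dtype=np.uint8)
canvas[pheromone > thresh] = 255               # white pixels where pheromone exceeds the threshold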

How to get border pixels of an image in python?

I have an image, and using steganography I want to save data in the border pixels only.
In other words, I want to save data only in the least significant bits (LSB) of the border pixels of an image.
Is there any way to get the border pixels so I can store data (max 15 characters of text) in them?
Please help me out...
OBTAINING BORDER PIXELS:
Masking operations are one of many ways to obtain the border pixels of an image. The code would be as follows:
import cv2
import numpy as np

a = cv2.imread('cal1.jpg')
bw = 20    # width of border required
mask = np.ones(a.shape[:2], dtype="uint8")
cv2.rectangle(mask, (bw, bw), (a.shape[1]-bw, a.shape[0]-bw), 0, -1)
output = cv2.bitwise_and(a, a, mask=mask)
cv2.imshow('out', output)
cv2.waitKey(5000)
After I get an array of ones with the same dimensions as the input image, I use the cv2.rectangle function to draw a rectangle of zeros. The first argument is the image you want to draw on, the second argument is the start (x,y) point and the third argument is the end (x,y) point. The fourth argument is the color, and the fifth is the thickness of the rectangle drawn ('-1' fills the rectangle). You can find the documentation for the function here.
Now that we have our mask, you can use the 'cv2.bitwise_and' (documentation) function to perform an AND operation on the pixels. Basically what happens is, the pixels that are ANDed with '1' pixels in the mask retain their pixel values, while pixels that are ANDed with '0' pixels in the mask are made 0. This way you will have the output as follows:
The input image was:
You have the border pixels now!
Using LSB planes to store your info is not a good idea. It makes sense when you think about it: a simple lossy compression would affect most of your hidden data. Saving your image as JPEG would result in lost or severely affected info. If you still want to try LSB, look into bit-plane slicing. Through bit-plane slicing, you basically obtain the bit planes (from MSB to LSB) of the image. (image from researchgate.net)
I have done it in Matlab but am not quite sure about doing it in Python. In Matlab,
the function 'bitget(image, 1)' returns the LSB of the image. I found a question on bit-plane slicing using Python here. Though unanswered, you might want to look into the posted code.
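For the numpy side, a sketch of what bitget(image, 1) corresponds to, assuming a uint8 image array:
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4)   # placeholder uint8 image

lsb_plane = img & 1            # least significant bit of every pixel (bitget(image, 1))
msb_plane = (img >> 7) & 1     # most significant bit, for comparison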
To access the border pixels and enter data into them:
The shape of an image is accessed with t = img.shape; it returns a tuple of the number of rows, columns, and channels. component indexes the colour channel to write to (OpenCV stores pixels in B, G, R order, i.e. indices 0, 1, 2), and int(r[0]) is the value being stored.
import cv2

img = cv2.imread('xyz.png')
t = img.shape
print(t)

component = 2
r = ['65', '66', '67', '68']   # example data values to embed in the four corner pixels

img.itemset((0, 0, component), int(r[0]))
img.itemset((0, t[1]-1, component), int(r[1]))
img.itemset((t[0]-1, 0, component), int(r[2]))
img.itemset((t[0]-1, t[1]-1, component), int(r[3]))

print(img.item(0, 0, component))
print(img.item(0, t[1]-1, component))
print(img.item(t[0]-1, 0, component))
print(img.item(t[0]-1, t[1]-1, component))

cv2.imwrite('output.png', img)

Using OpenCV Python, How would you make all black pixels transparent, and then overlay it over original image

I'm trying to make a colored mask white.
And my idea is to:
make black pixels transparent in the mask
merge the two images
crop images
so then my original masked area will be white.
What kind of OpenCV python code/methods would I need?
Like so:
Original
Mask
Desired result (mocked up - no green edges)
Instead of
I propose doing a color threshold to get the mask itself.
The result I got in a first quick and dirty attempt with Hue 43-81, Saturation 39-197 and Brightness from 115-255 is:
The next step is a hole-fill algorithm to fill the inside of the mask. Note that one small area to the right is also selected.
The next step is a subtraction of the two results (mask - filled_mask):
Again fill the holes and get rid of the noisy pixels with a binary opening:
Last, mask the image with the created mask.
Every step can be adjusted to yield optimal results. A good idea is to try the steps out (for example with imageJ) to get your workflow set up and then script the steps in python/openCV.
Refer also to http://fiji.sc/Segmentation.
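A rough sketch of those steps in python/openCV (the file name and threshold values are placeholders; note that OpenCV's hue channel runs 0-179, so ranges found in another tool such as ImageJ may need rescaling):
import cv2
import numpy as np
from scipy import ndimage

img = cv2.imread('image.png')                          # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# 1. color threshold to get the raw mask
mask = cv2.inRange(hsv, (43, 39, 115), (81, 197, 255)) > 0

# 2. fill the inside of the mask
filled = ndimage.binary_fill_holes(mask)

# 3. subtraction of the two results
diff = filled & ~mask

# 4. fill the holes again and remove noisy pixels with a binary opening
clean = ndimage.binary_opening(ndimage.binary_fill_holes(diff),
                               structure=np.ones((5, 5), dtype=bool))

# 5. mask the original image with the created mask
result = cv2.bitwise_and(img, img, mask=clean.astype(np.uint8) * 255)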
I am assuming your mask is a boolean numpy array and your 2 images are numpy arrays image1 and image2.
Then you can use the boolean array as a multiplier:
overlay = mask*image1 + (~mask)*image2
So you get the "True" pixels from image1 and the "False" pixels from image2.
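A tiny sketch of that trick with hypothetical greyscale arrays (for RGB images the mask would need a trailing axis, e.g. mask[:, :, None], so it broadcasts across the color channels):
import numpy as np

image1 = np.full((4, 4), 200, dtype=np.uint8)    # placeholder images
image2 = np.full((4, 4), 10, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

overlay = mask * image1 + (~mask) * image2       # True pixels from image1, False from image2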

Add two 3D numpy arrays with a 2D mask

I would like to add two 3D numpy arrays (RGB image arrays) with a 2D mask generated by some algorithms on a greyscale image. What is the best way to do this?
As an example of what I am trying to do:
from PIL import Image, ImageChops, ImageOps
import numpy as np
img1=Image.open('./foo.jpg')
img2=Image.open('./bar.jpg')
img1Grey=ImageOps.grayscale(img1)
img2Grey=ImageOps.grayscale(img2)
# Some processing for example:
diff=ImageChops.difference(img1Grey,img2Grey)
mask=np.ma.masked_array(img1,diff>1)
img1Array=np.asarray(img1)
img2Array=np.asarray(img2)
imgResult=img1Array+img2Array[mask]
I was thinking:
1) break up the RGB image and do each color separately
2) duplicate the mask into a 3D array
or is there a more pythonic way to do this?
Thanks in advance!
Wish I could add a comment instead of an answer. Anyhow:
masked_array is not for making masks. It's for including only the data outside the mask in calculations such as sum, mean, etc. (scientific statistical applications). It consists of an array and the mask for the array.
It's probably NOT what you want.
You probably just want a normal boolean mask, as in:
mask = diff>1
Then you'll need to modify the shape so numpy broadcasts in the correct dimension, then broadcast it into the 3rd dimension:
mask.shape = mask.shape + (1,)
mask = np.broadcast_arrays(img1Array, mask)[1]
After that, you can just add the pixels:
img1Array[mask] += img2Array[mask]
A further point of clarification:
imgResult=img1Array+img2Array[mask]
That could never work. You are saying 'add some of the pixels from img2Array to all of the pixels in img1Array' 6_9
If you want to apply a ufunc between two or more arrays, they must be either the same shape, or broadcastable to the same shape.
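Putting that together as a small self-contained sketch, with hypothetical arrays standing in for the two RGB images and the greyscale difference:
import numpy as np

img1Array = np.zeros((4, 4, 3), dtype=np.uint8)      # placeholder RGB images
img2Array = np.full((4, 4, 3), 50, dtype=np.uint8)
diff = np.arange(16).reshape(4, 4)                   # placeholder greyscale difference

mask = diff > 1                                      # plain 2D boolean mask
mask.shape = mask.shape + (1,)                       # add a trailing axis
mask = np.broadcast_arrays(img1Array, mask)[1]       # broadcast it into the 3rd dimension

img1Array[mask] += img2Array[mask]                   # add only the masked pixels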
