When red is mixed with green, I get yellow as expected.
RGB for Red: [255, 0, 0]
RGB for Green: [0, 255, 0]
Result: [255, 255, 0]
But when white is mixed with black, I would normally expect grey, yet I get white. Shouldn't I get grey?
RGB for Black: [0, 0, 0]
RGB for White: [255, 255, 255]
Result: [255, 255, 255]
Here is the code:
from PIL import Image, ImageChops
import matplotlib.pyplot as plt

im1 = Image.open(r'.\red.jpg')
im2 = Image.open(r'.\green.jpg')

# add the two images channel by channel (values are clipped at 255)
result = ImageChops.add(im1, im2)
plt.imshow(result)
plt.show()
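For what it's worth, ImageChops.add computes ((image1 + image2) / scale + offset) and clips the result to the 0-255 range, which is why white plus black saturates to white. The same call's scale argument can average instead; a minimal sketch, assuming hypothetical white.jpg/black.jpg source files:

from PIL import Image, ImageChops

im1 = Image.open(r'.\white.jpg')
im2 = Image.open(r'.\black.jpg')

# scale=2 divides the sum by two, i.e. averages the two images,
# so white + black comes out as mid-grey instead of clipping to white
grey = ImageChops.add(im1, im2, scale=2)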
I think what @Cris Luengo said ("If you want to get gray, average the white and black pixels together") is valid; I also think one additional thing you can add is a mix factor.
You can use OpenCV for this.
Imports:
import sys
import cv2
import numpy as np
Load image:
im = cv2.imread(sys.path[0]+'/im.png')
Main code:
color = [0, 0, 0]
mixFactor = .5

# weighted average of the image and the target color;
# the result is a float array in the 0-255 range
im = (1 - mixFactor) * im + mixFactor * np.array(color)
My input values for color:
[0, 0, 0] black
[255, 255, 255] white
[255, 0, 0] blue (BGR)
[0, 255, 0] green
[0, 0, 255] red
I drew the first colorful image using graphics software; the other eight were created with this Python code.
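As a sketch of how those eight outputs could be generated in one go (the output filenames here are hypothetical), you can loop the blend over the listed colors:

import sys
import cv2
import numpy as np

im = cv2.imread(sys.path[0] + '/im.png')

colors = [
    [0, 0, 0],        # black
    [255, 255, 255],  # white
    [255, 0, 0],      # blue (BGR)
    [0, 255, 0],      # green
    [0, 0, 255],      # red
]

mixFactor = .5
for i, color in enumerate(colors):
    # weighted average toward the target color, cast back to uint8 for saving
    out = (1 - mixFactor) * im + mixFactor * np.array(color)
    cv2.imwrite(sys.path[0] + '/out_' + str(i) + '.png', out.astype(np.uint8))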
Credit: The text on the images is written using the default Hershey font included in OpenCV.
Update:
If you want to display the output of this blending method with imshow, use it like this:
cv2.imshow("preview", im/255)
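Note that in a script, cv2.imshow also needs a waitKey call, or the window will close immediately; a minimal sketch:

cv2.imshow("preview", im / 255)  # float images are displayed assuming a 0-1 range
cv2.waitKey(0)                   # block until a key is pressed
cv2.destroyAllWindows()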
Related
I was wondering if there is a way to make an array/column like [255, 255, 255, 255, 0, 0, 0, 0, 255, 255, 255, 255, 0, 0, 0, 0, 255, ...] that repeats until a certain length (the width of an imported image) for every row. The goal is to make an image that shows the 255s as white pixels and the 0s as black pixels. Is this possible?
(image: proposed mask concept)
(image: final result)
import numpy as np
from skimage import data

sz = 400
image = data.astronaut()

# take an sz x sz patch (the astronaut image is 512 x 512,
# so the slice must start low enough to fit)
patch1 = image[100:100 + sz, 100:100 + sz, :]
print(patch1.shape)
# (400, 400, 3)

# tile a 2x2x1 on/off pattern across the whole patch; np.tile needs
# integer repeat counts, hence the integer division
mask = np.tile(np.array([[[1], [0]], [[0], [0]]], dtype=np.uint8), (sz // 2, sz // 2, 3))
print(mask.shape)
# (400, 400, 3)

patch2 = patch1 * mask
patch12 = np.hstack((patch1, patch2))
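For the literal pattern the question asks about (a row of [255, 255, 255, 255, 0, 0, 0, 0] repeating up to the width of an imported image), np.tile can also build that directly; a minimal sketch, assuming hypothetical image dimensions:

import numpy as np

height, width = 100, 300  # stand-ins for the imported image's size

# one period of the pattern: four white values, then four black
period = np.array([255, 255, 255, 255, 0, 0, 0, 0], dtype=np.uint8)

# repeat the period until it covers the width, trim the excess,
# then stack the same row for every row of the image
row = np.tile(period, width // len(period) + 1)[:width]
stripes = np.tile(row, (height, 1))  # shape (height, width)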
When I open a FULLY WHITE file with Pillow (from PIL import Image) and then obtain the color of all of the pixels, you SHOULD see something like [255, 255, 255, 255, 255, 255, ...], but instead I only see [0, 0, 0, 0, 0, 0, ...]. The code is as follows:
from PIL import Image
image = Image.open("index.png", "r")
pixels = list(image.getdata())
print(pixels)
Your code doesn't convert white pixel values to black pixels; the image is most likely stored in palette ('P') mode, so getdata() returns palette indices rather than RGB values, and index 0 can map to white. We can check whether the white pixels really are white by converting to the RGB color domain. The code is shown below:
from PIL import Image
import numpy as np

img = Image.open("index.png")  # open colour image
imgRgb = img.convert('RGB')    # force actual RGB values instead of palette indices
pixels = list(imgRgb.getdata())
width, height = imgRgb.size

# reshape the flat pixel list into a (height, width) grid of RGB triples
pixels = np.asarray([pixels[i * width:(i + 1) * width] for i in range(height)], dtype=int)
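As an aside, NumPy can convert a PIL image directly, which yields the same (height, width, 3) array in one step; a minimal sketch:

from PIL import Image
import numpy as np

imgRgb = Image.open("index.png").convert('RGB')
pixels = np.asarray(imgRgb)  # shape (height, width, 3), dtype uint8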
I am trying to save a 2D list as a greyscale image in Python, so 0 values in the array would be black and 255 would be white. For example:
255 255 255
255 0 255
255 0 255
255 0 255
255 255 255
This would save an 'l'-like shape.
I have tried the following code utilising the PIL library, as suggested by other questions on Stack Overflow:
from PIL import Image
import numpy as np

# img is the source image loaded earlier
WIDTH, HEIGHT = img.size
imgData = list(img.getdata())
imgData = [imgData[offset:offset + WIDTH] for offset in range(0, WIDTH * HEIGHT, WIDTH)]

# to print the image
for row in imgData:
    print(' '.join('{:3}'.format(value) for value in row))

imgData = np.array(imgData)

# IMG VALUES AUGMENTED HERE

newimg = Image.new('L', (WIDTH, HEIGHT), 'white')
newimg.putdata(imgData)
newimg.save('C:/File/Name.png')
However, the image this creates does not reflect the list at all. If I put the 0s and 255s in different positions, the same image is created. Does anyone know a solution?
As your example is lacking any input data, I have just typed it in as you describe, made the image, and then enlarged it. I also artificially added a red border so you can see its extent on Stack Overflow's white background:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
pixels = [[255, 255, 255],
          [255, 0, 255],
          [255, 0, 255],
          [255, 0, 255],
          [255, 255, 255]]

# Make list of pixels into a greyscale Image
im = Image.fromarray(np.array(pixels, dtype=np.uint8))
im.save('result.png')
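The enlarging and red border mentioned above aren't shown in the code; one way to do it (a sketch, with arbitrary scale and border sizes):

from PIL import Image, ImageOps

im = Image.open('result.png').convert('RGB')

# enlarge with nearest-neighbour resampling so the pixels stay sharp,
# then add a red border so the extent is visible on a white page
big = im.resize((im.width * 50, im.height * 50), Image.NEAREST)
bordered = ImageOps.expand(big, border=10, fill='red')
bordered.save('result-big.png')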
Instead of:
newimg.putdata(imgData)
you need the line:
newimg.putdata([j[0] for i in imgData for j in i])
The grayscale data is specified in a 1d list, not a 2d list.
This creates the list:
>>> [j[0] for i in imgData for j in i]
[255, 255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255]
Which is:
[255, 255, 255,
255, 0 , 255,
255, 0 , 255,
255, 0 , 255,
255, 255, 255]
EDIT
The above solution works if you edit imgData with imgData[0][0] = [0, 0, 0, 255]. If you're editing imgData with imgData[0][0] = 0, then you'll need the line to be:
[j[0] if hasattr(j, '__iter__') else j for i in imgData for j in i]
or, you can make it nicer with:
imgData = np.array([[j[0] for j in i] for i in imgData])
imgData[0][0] = 0
newimg.putdata(imgData.flatten())
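Putting the pieces together, a complete minimal sketch of the flatten-then-putdata approach, using the grid from the question:

from PIL import Image
import numpy as np

imgData = np.array([[255, 255, 255],
                    [255, 0, 255],
                    [255, 0, 255],
                    [255, 0, 255],
                    [255, 255, 255]], dtype=np.uint8)
imgData[0][0] = 0  # example edit

newimg = Image.new('L', (imgData.shape[1], imgData.shape[0]))
newimg.putdata(imgData.flatten())  # putdata expects a flat sequence
newimg.save('result.png')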
I'm trying to create an 8-bit, 1-channel mask for use in some image operations. I have an image in which certain pixels are filled with fuchsia (255, 0, 255), indicating that those pixels should be used in masking.
My idea is to simply copy the original picture, then replace all the fuchsia pixels with white and all the non-fuchsia pixels with black. I am using numpy.place to do this. It appears, however, to only really "apply" the last place operation.
For example, in the code below, I first try to set all the fuchsia pixels to white and then all the non-fuchsia pixels to black. However, when I save the image and look at it, only the non-fuchsia pixels have been turned black.
mask = original.copy()
np.place(mask, mask == (255, 0, 255), (255, 255, 255))
np.place(mask, mask != (255, 0, 255), (0, 0, 0))
mask = mask.reshape((h, w, 3))
mask = cv2.cvtColor(mask, cv2.COLOR_RGB2GRAY)
(image: original)
(image: mask)
I expect the fuchsia area to be white, but it isn't; it is the greyscale version of the fuchsia color, (112, 112, 112).
I'm fairly new to numpy, so I may be barking up the wrong tree entirely. What am I doing wrong, and is there an easier way to do what I'm describing? Thanks!
Seems like you could use a boolean array as the mask instead; np.place compares and fills element by element rather than pixel by pixel, which is why your two calls interact unexpectedly. For example:
mask = np.all(original == [255, 0, 255], axis=-1)
Now you can do original[mask] to get only the magenta pixels, or original[~mask] to get the others.
You'll find you can't overwrite original but you can overwrite a copy:
newimg = original.copy()
newimg[mask] = [255, 255, 255]
newimg[~mask] = [0, 0, 0]
By the way, I think you're 'supposed' to use masked arrays for this sort of thing, but I never got to grips with those.
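To get the 8-bit, 1-channel mask the question actually asks for, the boolean mask converts directly; a minimal sketch with a stand-in image:

import numpy as np

# stand-in for the original image: one magenta pixel on black
original = np.zeros((4, 4, 3), dtype=np.uint8)
original[1, 1] = [255, 0, 255]

mask = np.all(original == [255, 0, 255], axis=-1)

# 255 where the pixel was magenta, 0 everywhere else
mask8 = mask.astype(np.uint8) * 255  # shape (4, 4), dtype uint8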
I have a color image as input, and I want to check each pixel's color values (in the range [0, 0, 0] to [255, 255, 255]) for their variance. If the variance is over a certain threshold, I want to change the pixel to white. So, something like this:
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        if numpy.var(img[y][x]) > 1000:
            img[y][x] = [255, 255, 255]
But I need good performance, so I tried the numpy.where() function; however, I couldn't find a solution.
You can use numpy's indexing for this:
import numpy as np
import matplotlib.pyplot as plt

img = (np.random.rand(100, 100, 3) * 255).astype(int)
img2 = np.copy(img)

# boolean-index the copy: wherever the per-pixel variance exceeds the
# threshold, overwrite the whole pixel with white
img2[np.var(img, 2) > 1000] = np.array([255, 255, 255])

fig, ax = plt.subplots(ncols=2)
ax[0].imshow(img)
ax[1].imshow(img2)
The second parameter of np.var is the axis you want to calculate the variance over; in this case, the color axis.
(image: result)
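Since the question mentions numpy.where, the same thresholding can also be written with it; a sketch reusing img from the snippet above:

import numpy as np

# keepdims makes the (100, 100, 1) variance broadcast against the
# (100, 100, 3) image, so whole pixels are replaced with white
img2 = np.where(np.var(img, 2, keepdims=True) > 1000, 255, img)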