I have two black and white images that I would like to merge, with the final image showing the lighter (whiter) pixel at each pixel location. I tried the following code, but it did not work.
import numpy as np
import PIL
from PIL import Image
import matplotlib.pyplot as plt

background = Image.open('ABC.jpg').convert("RGBA")
overlay = Image.open('DEF.jpg').convert("RGBA")
background_width, background_height = background.size
overlay_resize = overlay.resize((background_width, background_height), Image.ANTIALIAS)
background.paste(overlay_resize, None, overlay_resize)
overlay = background.save("overlay.jpg")
fn = np.maximum(background, overlay)
fn1 = PIL.Image.fromarray(fn)
plt.imshow(fn1)
plt.show()
The error message I get is "cannot handle this data type". Any help or advice anyone could give would be great.
I think you are over-complicating things. You just need to read in both images and make them greyscale numpy arrays, then choose the lighter of the two pixels at each location.
So starting with these two images:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open two input images and convert to greyscale numpy arrays
bg=np.array(Image.open('a.png').convert('L'))
fg=np.array(Image.open('b.png').convert('L'))
# Choose lighter pixel at each location
result=np.maximum(bg,fg)
# Save
Image.fromarray(result).save('result.png')
You will get this:
Keywords: numpy, Python, image, image processing, compose, blend, blend mode, lighten, lighter, Photoshop, equivalent, darken, overlay.
I want to create a function that determines how colorblind-friendly an image is (on a scale from 0 to 1). I have several color-related functions that are capable of performing the following tasks:
Take an image (as a PIL image, filename, or RGB array) and transform it into an image representative of what a colorblind person would see (for the different types of colorblindness)
Take an image and determine the rgb colors associated with each pixel of the image (transform into numpy array of rgb colors)
Determine the color palette associated with an image
Find the similarity between two rgb arrays (using CIELAB; see the colormath package)
My first instinct was to transform the image and the colorblind version of the image into RGB arrays and then use the CIELAB function to determine the similarity between the two. However, that doesn't really solve the problem, since it wouldn't pick out things like readability (e.g. if the text and background colors end up being very similar after adjusting for colorblindness).
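For concreteness, here is a minimal sketch of that first-instinct comparison, using skimage's CIELAB conversion and CIEDE2000 metric in place of my colormath-based functions; the function and variable names are illustrative, not from my actual code:

import numpy as np
from skimage import color

def naive_cb_score(img_rgb, img_cb_rgb):
    # Convert both uint8 RGB arrays to CIELAB
    lab_a = color.rgb2lab(img_rgb / 255.0)
    lab_b = color.rgb2lab(img_cb_rgb / 255.0)
    # Per-pixel CIEDE2000 difference between the two images
    de = color.deltaE_ciede2000(lab_a, lab_b)
    # Map the mean difference into [0, 1]: identical images score 1.0
    return float(np.clip(1.0 - de.mean() / 100.0, 0.0, 1.0))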
Any ideas for how to determine how colorblind-friendly an image is?
I have a (540, 960, 1) shaped image with values in the range [0..255], which is black and white. I need to convert it to a "heatmap" representation: pixels with value 255 should be hottest and pixels with value 0 coldest, with everything else in between. I also need to return the heatmaps as Numpy arrays so I can later merge them into a video. Is there a way to achieve this?
Here are two methods, one using Matplotlib and one using only OpenCV
Method #1: OpenCV + matplotlib.pyplot.get_cmap
To implement a grayscale (1-channel) -> heatmap (3-channel) conversion, we first load in the image as grayscale. By default, OpenCV reads in an image as 3-channel, 8-bit BGR.
We can load an image directly as grayscale using cv2.imread() with the cv2.IMREAD_GRAYSCALE flag, or use cv2.cvtColor() to convert a BGR image to grayscale with the cv2.COLOR_BGR2GRAY flag. Once we load in the image, we throw this grayscale image into Matplotlib to obtain our heatmap image. Matplotlib returns RGB(A) data, so we must convert it back to a Numpy array and switch to the BGR colorspace for use with OpenCV. Here's an example using a scientific infrared camera image as input with the inferno colormap. See choosing color maps in Matplotlib for available built-in colormaps, depending on your desired use case.
Input image:
Output heatmap image:
Code
import matplotlib.pyplot as plt
import numpy as np
import cv2

# Load the image as single-channel grayscale
image = cv2.imread('frame.png', 0)

# Map grayscale values through the colormap; the colormap returns
# RGBA floats in [0, 1], so scale to the full uint16 range (65535,
# not 2**16, which would overflow to 0) and drop the alpha channel
colormap = plt.get_cmap('inferno')
heatmap = (colormap(image) * 65535).astype(np.uint16)[:,:,:3]

# Matplotlib gives RGB; OpenCV expects BGR
heatmap = cv2.cvtColor(heatmap, cv2.COLOR_RGB2BGR)

cv2.imshow('image', image)
cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Method #2: cv2.applyColorMap()
We can use OpenCV's built-in heatmap function, cv2.applyColorMap(). Here's the result using the cv2.COLORMAP_HOT colormap:
Code
import cv2

# Load as grayscale and apply OpenCV's built-in HOT colormap
image = cv2.imread('frame.png', 0)
heatmap = cv2.applyColorMap(image, cv2.COLORMAP_HOT)

cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Note: Although OpenCV's built-in implementation is short and quick, I recommend using Method #1 since there is a larger colormap selection. Matplotlib has hundreds of colormaps and allows you to create your own custom colormaps, while OpenCV only has 12 to choose from. Here's the built-in OpenCV colormap selection:
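To illustrate the custom-colormap point, here is a minimal sketch (the black -> purple -> yellow ramp is an arbitrary choice) that builds a Matplotlib colormap from a list of colors and applies it exactly as in Method #1:

import cv2
import numpy as np
from matplotlib.colors import LinearSegmentedColormap

# Build a custom colormap from a list of colors
custom = LinearSegmentedColormap.from_list('custom', ['black', 'purple', 'yellow'])

# Apply it as in Method #1 (8-bit this time, for brevity)
image = cv2.imread('frame.png', 0)
heatmap = (custom(image) * 255).astype(np.uint8)[:,:,:3]
heatmap = cv2.cvtColor(heatmap, cv2.COLOR_RGB2BGR)
cv2.imshow('custom heatmap', heatmap)
cv2.waitKey()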
You need to convert the image to a proper grayscale representation. This can be done a few ways, most simply with imread(filename, cv2.IMREAD_GRAYSCALE). This reduces the shape of the image to (540, 960) (note: no third dimension).
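For example (a quick sketch; frame.png stands in for your file), both routes give a 2-D array:

import cv2
import numpy as np

# Read directly as single-channel grayscale
img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
print(img.shape)  # (540, 960)

# Or, if you already have a (540, 960, 1) array, drop the singleton axis
img3d = img[:, :, np.newaxis]   # stand-in for the array from the question
img2d = np.squeeze(img3d, axis=2)
print(img2d.shape)  # (540, 960)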
I have a uint16 3-dim numpy array representing an RGB image; the array is created from a TIF image.
The problem is that when I import the original image into QGIS, for example, it is displayed correctly, but if I try to display it within Python (with plt.imshow) the result is different (in this case more green):
QGIS image:
Plot image:
I think it is somehow related to the way matplotlib manages uint16, but even if I try to divide by 255 and convert to uint8, I can't get good results.
Going by your comment, the image isn't encoded using an RGB colour space, since the R, G and B channels have a value range of [0-255] assuming 8 bits per channel.
I'm not sure exactly which colour space the image is using, but TIFF files generally use CMYK which is optimised for printing.
Other common colour spaces to try include YCbCr (YUV) and HSL, however there are lots of variations of these that have been created over the years as display hardware and video streaming technologies have advanced.
To convert the entire image to an RGB colour space, I'd recommend the opencv-python pip package. The package is well documented, but as an example, here's how you would convert a numpy array img from YUV to RGB:
import cv2 as cv
img_rgb = cv.cvtColor(img, cv.COLOR_YUV2RGB)
When using plt.imshow there's a colormap parameter you can play with; try adding cmap="gray", for example:
plt.imshow(image, cmap="gray")
source:
https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.imshow.html
If I try to normalize the image I get good results:
# Cast to float first, otherwise the division truncates back to 0/1
# when assigned into the uint16 array; then normalize every channel
# by its own maximum (indexing assumes the band-first layout used here)
image = image.astype(np.float64)
for i in range(image.shape[0]):
    image[i,:,:] = image[i,:,:] / image[i,:,:].max()
However, some images appear darker than others:
different images
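If that's the cause (an assumption on my part: per-image max normalization stretches each image by a different factor), dividing by the fixed uint16 range instead should keep brightness comparable across images, e.g. continuing with the array from above:

# Divide by the full 16-bit range rather than each image's own max
# (assumes the data genuinely spans the uint16 range)
image = image.astype(np.float64) / 65535.0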
I'm working on an imaging project that needs to read images, split them into overlapping patches, run some operation on the patches, and then recombine them into a single image. For this task, I decided to use the scikit-learn functions extract_patches_2d and reconstruct_from_patches_2d.
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.extract_patches_2d.html
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.reconstruct_from_patches_2d.html
import numpy as np
import cv2
from sklearn.feature_extraction import image as extraction
img = cv2.imread("cat_small.jpg", cv2.IMREAD_COLOR)
grid_size = 500
images = extraction.extract_patches_2d(img, (grid_size, grid_size), max_patches=100)
image = extraction.reconstruct_from_patches_2d(images, img.shape)
cv2.imwrite("stack_overflow_test.jpg", image)
I can tell the extraction works correctly, since each of the patches can be saved as an individual image. The reconstruction does not work.
The image:
becomes:
Which looks entirely black when viewed on a white background, but does have some white pixels toward the top left (can be seen when opened in a separate tab). This same problem happens in grayscale.
I have tried adding astype(np.uint8) as explained in
How to convert array to image colour channel in python?
to no avail. How is this method used properly?
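For comparison, here is a minimal round trip that does reconstruct cleanly, based on how I understand the API: reconstruct_from_patches_2d averages the patches back together assuming it receives every patch, in row-major order, so max_patches (which samples patches randomly) has to go, and the float result needs casting back to uint8 before writing:

import numpy as np
import cv2
from sklearn.feature_extraction import image as extraction

img = cv2.imread("cat_small.jpg", cv2.IMREAD_COLOR)

# No max_patches: return every overlapping patch, in order
# (memory-hungry for large images, so keep the patch size modest)
patches = extraction.extract_patches_2d(img, (100, 100))

# ... operate on the patches here ...

# Averages overlapping patches back into one image (returns float)
recon = extraction.reconstruct_from_patches_2d(patches, img.shape)
cv2.imwrite("reconstructed.jpg", recon.astype(np.uint8))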
I have a hundred 10x10 px images, and I want to combine them into a big 100x100 image. I'm using the Image library to first create a blank image and then paste in the smaller images:
blank = Image.new('P',(100,100))
blank.paste(im,box)
The smaller images are in color, but the resulting image turns out in all grayscale. Is there a fix or workaround for this?
It's probably something to do with using a palette-type image (mode 'P'). Is there a specific reason you are doing this? If not, try passing 'RGB' as the first argument.
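For instance, a quick sketch assuming the small images arrive as a list tiles in row-major order (the list name and layout are illustrative):

from PIL import Image

# tiles is assumed to be your hundred 10x10 PIL images, row by row
blank = Image.new('RGB', (100, 100))
for index, tile in enumerate(tiles):
    # Ten 10 px tiles per row
    x = (index % 10) * 10
    y = (index // 10) * 10
    blank.paste(tile, (x, y))
blank.save('combined.png')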