Matplotlib: How to save an image at full resolution?

I've created a mosaic image, in which a large picture is composed of many tiny pictures. I can view this image just fine in the Matplotlib viewer and zoom in far enough to see every tiny picture. But when I save the image, no matter the extension, it loses that detail: the tiny images become blurred when zooming in. Is there a way to save the image in full resolution?
I have the image as RGB in a numpy array, so if another library is more suitable, that would work too.

This should work:
from PIL import Image
# Pillow writes the array pixel for pixel; PNG is lossless, so nothing is blurred or resampled.
Image.fromarray(numpy_img).save("img_path.png")

I think you're being misled by the Windows Photos app here, which automatically applies a blur (smoothing) when you zoom in far enough.
Matplotlib is saving the image correctly, with every pixel value intact; you can verify this by loading the saved file and zooming in on its pixels again:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# save image
array = np.random.random((4000, 4000, 3))
plt.imsave("save.png", array)
# load image
img = mpimg.imread("save.png")
plt.imshow(img)
plt.show()
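If you want a numeric check rather than an eyeball one, compare the loaded pixels against the original array. A quick sketch, assuming the array values lie in [0, 1] (an 8-bit PNG quantizes each channel in steps of 1/255, hence the tolerance):
# imread returns float32 RGBA in [0, 1]; drop alpha and compare.
assert np.allclose(img[..., :3], array, atol=1/255)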

Another option is the small library that I wrote called numpngw. If the numpy array is, say, img (an array of 8-bit or 16-bit unsigned integers with shape (m, n, 3)), you could use:
from numpngw import write_png
write_png('output.png', img)
(If the array is floating point, you'll have to convert the values to unsigned integers. The PNG file format does not store floating point values.)
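For example, if the values are floats in [0, 1], a minimal conversion sketch (the scaling convention is my assumption, not something numpngw imposes):
import numpy as np
from numpngw import write_png

# Hypothetical float image with values in [0, 1].
float_img = np.random.random((64, 64, 3))

# Scale to [0, 255], round, and cast to 8-bit unsigned integers.
img8 = np.clip(np.rint(float_img * 255), 0, 255).astype(np.uint8)
write_png('output.png', img8)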

You can try the imageio library:
https://imageio.readthedocs.io/en/stable/userapi.html#imageio.imwrite
from imageio import imwrite
imwrite("image.png", array)

Related

Perlin noise in python

I have searched everywhere for an answer to this question but had no luck. I want to figure out how to take the image generated by Python's noise library and export it. Does anybody know how?
The GitHub repository for the noise library has an examples folder. One of them is an example of how to generate 2D perlin noise and write it to a file.
If you want a more standard file format, such as PNG or TIFF, you can use Numpy to create an array, write the Perlin noise values into it, and then save the array as an image file using OpenCV, as sketched below.
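A minimal sketch of that approach, assuming the perlin_noise package as the noise source and values in roughly [-1, 1] (the filename and parameters are placeholders):
import numpy as np
import cv2
from perlin_noise import PerlinNoise

noise = PerlinNoise(octaves=4, seed=1)
h, w = 100, 100
arr = np.array([[noise([i / h, j / w]) for j in range(w)] for i in range(h)])

# Rescale whatever range the noise produced to 0-255 and cast to
# uint8 so OpenCV writes a standard 8-bit grayscale PNG.
arr = ((arr - arr.min()) / (arr.max() - arr.min()) * 255).astype(np.uint8)
cv2.imwrite('noise.png', arr)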
Convert your data list to a numpy array, then use the PIL library to save the array as a grayscale image:
# Install PIL: pip install pillow (probably already installed)
from PIL import Image
import numpy as np
from perlin_noise import PerlinNoise

noise = PerlinNoise(octaves=10, seed=1)
xpix, ypix = 100, 100
pic = [[noise([i/xpix, j/ypix]) for j in range(xpix)] for i in range(ypix)]

# The noise values are small floats (roughly -1 to 1), so rescale them
# to 0-255 and cast to uint8 before building an 'L' (8-bit grayscale)
# image; passing raw floats to fromarray with mode 'L' garbles the data.
arr = np.array(pic)
arr = ((arr - arr.min()) / (arr.max() - arr.min()) * 255).astype(np.uint8)
image = Image.fromarray(arr, mode='L')
image.save('output.png')

Cannot convert PIL image from Paint.NET to numpy array

I am getting a ValueError in numpy when performing operations on images. The problem seems to be that the images edited by Paint.NET are missing the RGB dimension when opened using PIL and converted to a numpy array.
If PIL is giving you an 861x1091 image when you are expecting an 861x1091x3 image, that is almost certainly because it is a palette image - see here for an explanation.
The simplest thing to do, if you want a 3-channel RGB image rather than a single-channel palette image, is to convert it to RGB when you open it:
im = Image.open(path).convert('RGB')
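A quick way to confirm the diagnosis, and the effect of the conversion (the filename is a placeholder):
from PIL import Image
import numpy as np

im = Image.open('from_paintnet.png')  # hypothetical filename
print(im.mode)                        # 'P' means a palette image
print(np.array(im).shape)             # (861, 1091) - no channel axis

rgb = im.convert('RGB')               # expand palette entries to RGB
print(np.array(rgb).shape)            # (861, 1091, 3)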

How to convert a grayscale image to heatmap image with Python OpenCV

I have a (540, 960, 1) shaped image with values in [0, 255], which is black and white. I need to convert it to a "heatmap" representation: pixels with 255 should have the most heat and pixels with 0 the least, with values in between scaled accordingly. I also need the heat maps back as Numpy arrays so I can later merge them into a video. Is there a way to achieve this?
Here are two methods, one using Matplotlib and one using only OpenCV
Method #1: OpenCV + matplotlib.pyplot.get_cmap
To implement a grayscale (1-channel) to heatmap (3-channel) conversion, we first load the image as grayscale. By default, OpenCV reads an image as 3-channel, 8-bit BGR.
We can load the image directly as grayscale using cv2.imread() with the cv2.IMREAD_GRAYSCALE flag, or convert a BGR image to grayscale using cv2.cvtColor() with the cv2.COLOR_BGR2GRAY flag. Once the image is loaded, we feed the grayscale values through a Matplotlib colormap to obtain our heatmap image. Matplotlib returns RGB, so we must convert back to a Numpy array and switch to the BGR colorspace for use with OpenCV. Here's an example using a scientific infrared camera image as input with the inferno colormap. See choosing color maps in Matplotlib for the available built-in colormaps, depending on your desired use case.
Input image:
Output heatmap image:
Code
import matplotlib.pyplot as plt
import numpy as np
import cv2
# Load as single-channel grayscale (flag 0 == cv2.IMREAD_GRAYSCALE).
image = cv2.imread('frame.png', 0)
colormap = plt.get_cmap('inferno')
# The colormap returns RGBA floats in [0, 1]; scale to 16-bit, drop the
# alpha channel, and convert RGB -> BGR for OpenCV. (Use 65535 rather
# than 2**16: a value of 1.0 would overflow uint16 and wrap to 0.)
heatmap = (colormap(image) * 65535).astype(np.uint16)[:,:,:3]
heatmap = cv2.cvtColor(heatmap, cv2.COLOR_RGB2BGR)
cv2.imshow('image', image)
cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Method #2: cv2.applyColorMap()
We can use OpenCV's built-in colormap function, cv2.applyColorMap(). Here's the result using the cv2.COLORMAP_HOT colormap:
Code
import cv2
image = cv2.imread('frame.png', 0)  # 0 == cv2.IMREAD_GRAYSCALE
heatmap = cv2.applyColorMap(image, cv2.COLORMAP_HOT)
cv2.imshow('heatmap', heatmap)
cv2.waitKey()
Note: Although OpenCV's built-in implementation is short and quick, I recommend using Method #1, since it offers a larger colormap selection. Matplotlib has hundreds of colormaps and allows you to create your own custom ones, while OpenCV only has 12 to choose from. Here's the built-in OpenCV colormap selection:
You need to convert the image to a proper grayscale representation. This can be done in a few ways, most directly with imread(filename, cv2.IMREAD_GRAYSCALE). This reduces the shape of the image to (540, 960) (hint: no third dimension), as the check below confirms.
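A two-line check of that claim, reusing the filename from the answer above:
import cv2

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
print(img.shape)  # (540, 960): single channel, no third axis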

How to Use Sci-kit Learn reconstruct_from_patches_2d

I'm working on an imaging project that needs to read images, split them into overlapping patches, run some operation on the patches, and then recombine them into a single image. For this task, I decided to use the scikit-learn functions extract_patches_2d and reconstruct_from_patches_2d.
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.extract_patches_2d.html
https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.reconstruct_from_patches_2d.html
import numpy as np
import cv2
from sklearn.feature_extraction import image as extraction
img = cv2.imread("cat_small.jpg", cv2.IMREAD_COLOR)
grid_size = 500
images = extraction.extract_patches_2d(img, (grid_size, grid_size), max_patches=100)
image = extraction.reconstruct_from_patches_2d(images, img.shape)
cv2.imwrite("stack_overflow_test.jpg", image)
I can tell the extraction works correctly, since each of the patches can be saved as an individual image. The reconstruction does not work.
The image:
becomes:
Which looks entirely black when viewed on a white background, but does have some white pixels toward the top left (can be seen when opened in a separate tab). This same problem happens in grayscale.
I have tried adding astype(np.uint8), as explained in How to convert array to image colour channel in python?, to no avail. How is this method used properly?
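For reference, a minimal sketch of the intended round trip, going by the linked scikit-learn docs: reconstruct_from_patches_2d expects every overlapping patch in row-major order, so max_patches must be left unset, and it averages the overlaps back into a float image that needs casting before writing. (Filenames follow the question; the patch size is smaller here because materializing all overlapping patches uses a lot of memory.)
import numpy as np
import cv2
from sklearn.feature_extraction import image as extraction

img = cv2.imread("cat_small.jpg", cv2.IMREAD_COLOR)

# Extract ALL overlapping patches, in order: no max_patches, no sampling.
patch = 64
patches = extraction.extract_patches_2d(img, (patch, patch))

# ... run the per-patch operation here ...

# Reconstruction averages the overlaps and returns float64,
# so cast back to uint8 before writing.
out = extraction.reconstruct_from_patches_2d(patches, img.shape)
cv2.imwrite("reconstructed.jpg", out.astype(np.uint8))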

overlay an image and show lighter pixel at each pixel location

I have two black and white images that I would like to merge, with the final image showing the lighter/white pixel at each pixel location across both images. I tried the following code but it did not work.
background=Image.open('ABC.jpg').convert("RGBA")
overlay=Image.open('DEF.jpg').convert("RGBA")
background_width=1936
background_height=1863
background_width,background_height = background.size
overlay_resize= overlay.resize((background_width,background_height),Image.ANTIALIAS)
background.paste(overlay_resize, None, overlay_resize)
overlay=background.save("overlay.jpg")
fn=np.maximum(background,overlay)
fn1=PIL.Image.fromarray(fn)
plt.imshow(fnl)
plt.show()
The error message I get is "Cannot handle this data type". Any help or advice anyone could give would be great.
I think you are over-complicating things. You just need to read in both images and make them greyscale numpy arrays, then choose the lighter of the two pixels at each location.
So starting with these two images:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image
# Open two input images and convert to greyscale numpy arrays
bg=np.array(Image.open('a.png').convert('L'))
fg=np.array(Image.open('b.png').convert('L'))
# Choose lighter pixel at each location
result=np.maximum(bg,fg)
# Save
Image.fromarray(result).save('result.png')
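The same idea extends to colour inputs, since np.maximum compares element-wise across every channel (a per-channel "lighten", in blend-mode terms). A small variation on the code above:
# Keep the images in RGB and take the per-channel maximum.
bg = np.array(Image.open('a.png').convert('RGB'))
fg = np.array(Image.open('b.png').convert('RGB'))
Image.fromarray(np.maximum(bg, fg)).save('result_rgb.png')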
You will get this:
Keywords: numpy, Python, image, image processing, compose, blend, blend mode, lighten, lighter, Photoshop, equivalent, darken, overlay.
