How do I display an image from an array in Python?

I am trying to use PIL to display an image from an array. The array is a long list of elements which are the pixel values of an image. How do I display these pixel values as an image?

You don't specify what kind of data is in your list, so I assume it is an array with 25 elements (grouped in 5 groups of 5), which will be converted to a 5 by 5 black & white image.
from PIL import Image

data = [
    [1, 0, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
]

# Mode "1" is a 1-bit (black & white) image
img = Image.new("1", (5, 5))
pixels = img.load()
for i in range(img.size[0]):          # x coordinate (column)
    for j in range(img.size[1]):      # y coordinate (row)
        pixels[i, j] = data[j][i]     # pixel access is (x, y); the nested list is row-major
img.show()
img.save("img.png")
This is similar to this question: How can I write a binary array as an image in Python?
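For comparison, a minimal NumPy-based sketch of the same idea (assuming the same 0/1 nested list as above) skips the explicit pixel loop entirely:
import numpy as np
from PIL import Image

data = [
    [1, 0, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
]

# Scale 0/1 to 0/255 and build an 8-bit grayscale image in one step
arr = np.array(data, dtype=np.uint8) * 255
Image.fromarray(arr, mode="L").save("img.png")
Image.fromarray infers the image size from the array, so no Image.new call or per-pixel loop is needed.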

Related

How to generate a bespoke black and white bitmap image

[example image]
I want to create a black and white bitmap from scratch (not converting or manipulating an existing image) and have the ability to change individual pixels to either black or white using pixel coordinates, perhaps via a dictionary somehow. Something like a chessboard, but with one pixel per chessboard square (if that makes sense?). I found something to generate a colour spectrum image but don't know how to adapt it.
from PIL import Image

img = Image.new('RGB', (300, 50), "black")  # Create a new black image
pixels = img.load()                         # Create the pixel map
for i in range(img.size[0]):                # For every pixel:
    for j in range(img.size[1]):
        pixels[i, j] = (i, j, 100)          # Set the colour accordingly
img.show()
[Image: zoomed in on leftmost edge of bitmap]
You can do individual odd pixels here and there like this:
from PIL import Image
# Create new black image - L mode for b/w
img = Image.new( 'L', (10,6))
# Make pixels at locations (0,5) and (2,1) white (255)
img.putpixel((0,5), 255)
img.putpixel((2,1), 255)
# Save result
img.save('result.png')
However, if you want to do whole rows, or columns, or longer lines, I would recommend round-tripping to Numpy like this:
import numpy as np
from PIL import Image

# Create new black image - L mode for b/w
img = Image.new('L', (10, 6))
# Convert to Numpy array for easy processing
na = np.array(img)
# Make row 1 white
na[1, :] = 255
# Make column 8 white
na[:, 8] = 255
# Revert to PIL Image from Numpy array and save
Image.fromarray(na).save('result.png')
Or if you want to do a block:
... as above ...
na[1:3,5:9] = 255
... as above ...
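And since the question mentions a chessboard-like pattern with one pixel per square, a minimal sketch of that (the 8x8 size is an assumption) using the same NumPy round-trip could be:
import numpy as np
from PIL import Image

# 8x8 chessboard, one pixel per square: white where (row + column) is odd
board = np.indices((8, 8)).sum(axis=0) % 2 * 255
Image.fromarray(board.astype(np.uint8), mode='L').save('chessboard.png')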

Convert an image saved as hexadecimal values in a np.array to import it in OpenCV

I get an image stored as an object from a camera that looks like this (here reduced to make it understandable):
image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])
Is it possible to 'import' it as an 'image' in OpenCV?
I tried to look at the documentation of cv2.imdecode but could not get it to work.
I could preprocess this array to get it to another format but I am not sure what could 'fit' to opencv.
Thank you for your help
This is a succinct and Pythonic (NumPy-based) way to convert your matrix of hexadecimal values into an RGB matrix that OpenCV can read.
import cv2
import numpy as np

image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])

def to_rgb(v):
    return np.array([int(v[1:3], 16), int(v[3:5], 16), int(v[5:7], 16)])

image_cv = np.array([to_rgb(h) for h in image.flatten()], dtype=np.uint8).reshape(3, 4, 3)
# OpenCV stores channels as BGR, so convert before writing
cv2.imwrite('result.png', cv2.cvtColor(image_cv, cv2.COLOR_RGB2BGR))
OpenCV works with 8-bit BGR input, which is to say you need to give the values of Blue, Green and Red on a scale from 0-255. I have shared below the code to convert your array to such an image.
Initially, I count the number of rows to find the height in terms of pixels. Then I count the number of items in a row to find the width.
Then I create an empty array of the given dimensions using np.zeros.
I then go to each cell and convert the hex code to its RGB equivalent: for a string #RRGGBB, R = int(RR, 16), G = int(GG, 16), B = int(BB, 16). This converts each hexadecimal pair to an int.
#!/usr/bin/env python3
import numpy as np
import cv2

# Your image
image = np.array([['#49312E', '#4A3327', '#493228', '#472F2A'],
                  ['#452C29', '#49312E', '#4B3427', '#49312A'],
                  ['#473026', '#472F2C', '#48302B', '#4C342B']])

# Get the image height (number of rows) and width (items per row)
height = image.shape[0]
width = image.shape[1]

# Create numpy array of BGR triplets
im = np.zeros((height, width, 3), dtype=np.uint8)

for row in range(height):
    for col in range(width):
        hexcode = image[row, col][1:]   # strip the leading '#'
        R = int(hexcode[0:2], 16)
        G = int(hexcode[2:4], 16)
        B = int(hexcode[4:6], 16)
        im[row, col] = (B, G, R)        # OpenCV uses BGR order

# Save to disk
cv2.imwrite('result.png', im)
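To answer the "import it as an image in OpenCV" part directly: once you have the im array above it is already an OpenCV-compatible image, so a minimal usage sketch (the window name is an arbitrary choice) is:
# Display the converted array in a window until a key is pressed
cv2.imshow('converted', im)
cv2.waitKey(0)
cv2.destroyAllWindows()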

How do I resize an image but maintain features in the image?

I have an image that looks like this:
array.resize(20,20,3)
img = Image.fromarray(array, 'RGB')
img.save('my.png',quality=90)
img.show()
It is currently a 500x500x3 NumPy array. The underlying space is a 20x20 grid of cells, and I want to resize the image so that each grid cell has an entry in a 20x20x3 NumPy array corresponding to its RGB value, instead of (500/20)*(500/20)*3 entries per cell.
Unfortunately, the code above does not seem to work; it appears to give more entries per cell than I expected, although I am not 100% sure.
To resize an image with Pillow you can use Image.resize():
from PIL import Image
import urllib.request
import numpy as np
data = urllib.request.urlopen('https://i.stack.imgur.com/7bPlZ.png')
old_img = Image.open(data)
new_img = old_img.resize((20, 20))
new_img.save('my.png',quality=90)
new_img.show()
array = np.array(new_img)
print(array)
But resizing the image can create pixels with half-tones (interpolated colours).
Maybe you should get the values directly from the numpy array. You have solid colours, so you could take a single pixel from every cell; because every cell has size 25x25 it could be:
new_array = old_array[::25,::25,:]
and then you don't have to convert to an image at all.
And if you do convert this array to an image, it should be sharper than one created with Image.resize().
from PIL import Image
import urllib.request
import numpy as np

data = urllib.request.urlopen('https://i.stack.imgur.com/7bPlZ.png')
old_img = Image.open(data)
old_array = np.array(old_img)
new_array = old_array[::25, ::25, :]
print(new_array)
new_img = Image.fromarray(new_array)
new_img.save('my.png', quality=90)
new_img.show()
Try this:
from PIL import Image

size = (20, 20)
img = Image.fromarray(array, 'RGB')  # 'array' is the 500x500x3 array from the question
img.thumbnail(size, Image.LANCZOS)   # Image.ANTIALIAS was removed in Pillow 10; LANCZOS is the same filter
img.save('my.png', quality=90)
img.show()
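If the goal is one solid pixel per grid cell with no half-tones, a minimal alternative sketch (assuming array is the 500x500x3 array from the question and the grid divides it evenly) is to resize with nearest-neighbour resampling, which copies a single source pixel instead of averaging:
from PIL import Image

img = Image.fromarray(array, 'RGB')          # 'array' is the 500x500x3 array from the question
small = img.resize((20, 20), Image.NEAREST)  # no interpolation, so colours stay solid
small.save('my.png')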

Is it possible to iterate with a for loop through a binarized image?

This is my Python code:
import cv2

img = cv2.imread("foo.jpg")
#here I can iterate trough each pixel since I have a 2D array
for x in range(img.shape[0]):
    for y in range(img.shape[1]):
        pass #maipulate each pixel

gary = cv2.cvtColor(img, COLOR_BGR2GRAY)
bin = cv2.threshold(gray,127,255,cv2.THRESH_BINARY)

#here I can not iterate trough each pixel since I have a 1D array
for x in range(img.shape[0]):
    for y in range(img.shape[1]):
        pass
My Question:
How can I iterate through each pixel of the binary image?
I want to use a sliding window search algorithm.
Your code doesn't work because threshold() returns a tuple of 2 values: the threshold value you set (127) and the binary image. If you separate these you can use the same double loop to access each value/pixel.
I've modified your code, as there were also a couple of typos.
import cv2

img = cv2.imread("foo.jpg")
#here I can iterate through each pixel since I have a 2D array
for x in range(img.shape[0]):
    for y in range(img.shape[1]):
        pass #manipulate each pixel

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh, bin_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

#the binary image is also a 2D array, so the same double loop works
for x in range(bin_img.shape[0]):
    for y in range(bin_img.shape[1]):
        pass
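Since the question mentions a sliding-window search, a minimal sketch of sliding a window over the binary image (the window size and step are arbitrary assumptions) could look like this:
import cv2

img = cv2.imread("foo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh, bin_img = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

win_h, win_w = 32, 32   # window size (an assumption, pick what suits your search)
step = 16               # stride (an assumption)

for y in range(0, bin_img.shape[0] - win_h + 1, step):
    for x in range(0, bin_img.shape[1] - win_w + 1, step):
        window = bin_img[y:y + win_h, x:x + win_w]
        white_pixels = cv2.countNonZero(window)   # e.g. count the white pixels in this window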

Converting 2D Numpy array of grayscale values to a PIL image

Say I have a 2D Numpy array of values on the range 0 to 1, which represents a grayscale image. How do I then convert this into a PIL Image object? All attempts so far have yielded extremely strange scattered pixels or black images.
for x in range(image.shape[0]):
    for y in range(image.shape[1]):
        image[y][x] = numpy.uint8(255 * (image[x][y] - min) / (max - min))

#Create a PIL image.
img = Image.fromarray(image, 'L')
In the code above, the numpy array image is normalized by (image[x][y] - min) / (max - min) so every value is on the range 0 to 1. Then it is multiplied by 255 and cast to an 8 bit integer. This should, in theory, process through Image.fromarray with mode L into a grayscale image - but the result is a set of scattered white pixels.
I think the other answer (shown below) is wrong. The Image.fromarray( ____ , 'L') function seems to only work properly with an array of integers between 0 and 255. I use the np.uint8 function for this.
You can see this demonstrated if you try to make a gradient.
import numpy as np
from PIL import Image
# gradient between 0 and 1 for 256*256
array = np.linspace(0,1,256*256)
# reshape to 2d
mat = np.reshape(array,(256,256))
# Creates PIL image
img = Image.fromarray(np.uint8(mat * 255) , 'L')
img.show()
Makes a clean gradient
vs
import numpy as np
from PIL import Image
# gradient between 0 and 1 for 256*256
array = np.linspace(0,1,256*256)
# reshape to 2d
mat = np.reshape(array,(256,256))
# Creates PIL image
img = Image.fromarray( mat , 'L')
img.show()
Has the same kind of artifacting.
If I understood your question, you want to get a grayscale image using PIL.
If this is the case, you do not need to multiply each pixel by 255.
The following worked for me
import numpy as np
from PIL import Image
# Creates a random image 100*100 pixels
mat = np.random.random((100,100))
# Creates PIL image
img = Image.fromarray(mat, 'L')
img.show()
im = Image.fromarray(np.uint8(mat), 'L')
or
im = Image.fromarray(np.uint8(mat))
Apparently it accepts np.uint8(array); you may also be able to remove 'L' for conciseness.
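Putting the two answers together, a minimal sketch for a 0-to-1 float array (a random 100x100 array stands in for the real data here) is to scale to 0-255 and cast to uint8 before calling Image.fromarray:
import numpy as np
from PIL import Image

mat = np.random.random((100, 100))               # float values in the range 0 to 1
img = Image.fromarray(np.uint8(mat * 255), 'L')  # scale to 0-255 and cast to 8-bit
img.show()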
