How to pixelate a square image to 256 big pixels with Python?

I need to find a way to reduce a square image to 256 big pixels with Python, preferably with the matplotlib and Pillow libraries.
Got any ideas?

Nine months have passed and I can now cobble together some Python, per your original request, on how to pixellate an image with Python and PIL/Pillow.
#!/usr/local/bin/python3
from PIL import Image
# Open image
img = Image.open("artistic-swirl.jpg")
# Resize smoothly down to 16x16 pixels
imgSmall = img.resize((16,16), resample=Image.Resampling.BILINEAR)
# Scale back up using NEAREST to original size
result = imgSmall.resize(img.size, Image.Resampling.NEAREST)
# Save
result.save('result.png')
Original Image
Result
If you take it down to 32x32 pixels (instead of 16x16) and then resize back up, you get:
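For convenience, here is the same trick wrapped in a small helper. This is just a sketch: the pixelate() name and its parameters are mine, and Image.Resampling assumes Pillow 9.1 or later, matching the code above:
from PIL import Image

def pixelate(path, blocks, out_path):
    # Downscale to blocks x blocks, then upscale with NEAREST to keep hard block edges
    img = Image.open(path)
    small = img.resize((blocks, blocks), resample=Image.Resampling.BILINEAR)
    small.resize(img.size, Image.Resampling.NEAREST).save(out_path)

pixelate('artistic-swirl.jpg', 16, 'result-16.png')  # 256 big pixels
pixelate('artistic-swirl.jpg', 32, 'result-32.png')  # finer blocks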

Another option is to use PyPXL
Python script to pixelate images and video using K-Means clustering in the Lab colorspace. Video pixelating supports multi-processing to achieve better performance.
Using the Paddington image as a source you can run:
python pypxl_image.py -s 16 16 paddington.png paddington_pixelated.png
Which gives this result
Of course, if you wanted it to have 256 x 256 pixels rather than just 256 big pixels, you could run:
python pypxl_image.py -s 256 256 paddington.png paddington_pixelated.png
Which gives this result
Both results have a more retro 8-bit look to them compared to the other solutions, which might suit your needs.
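If you want to stay in plain Python, here is a rough sketch of PyPXL's idea: downscale, then quantize the block colours with scikit-learn's KMeans. Note it clusters in RGB rather than the Lab colorspace PyPXL uses, and the cluster count of 8 is an arbitrary choice of mine:
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = Image.open('paddington.png').convert('RGB')
small = img.resize((16, 16), resample=Image.Resampling.BILINEAR)

# Cluster the 256 block colours down to an 8-colour palette
pixels = np.asarray(small).reshape(-1, 3)
km = KMeans(n_clusters=8, n_init=10).fit(pixels)
quantized = km.cluster_centers_[km.labels_].astype(np.uint8).reshape(16, 16, 3)

# Scale back up with NEAREST to keep the blocky look
Image.fromarray(quantized).resize(img.size, Image.Resampling.NEAREST).save('paddington_pixelated.png')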

Sorry, I can't give you a Python solution, but I can show you the technique and the result, just using ImageMagick at the command-line:
Starting with this:
First, resize the image down to 16x16 pixels using normal cubic or bilinear interpolation, then scale the image back up to its original size using "nearest neighbour" interpolation:
magick artistic-swirl.jpg -resize 16x16 -scale 500x500 result.png
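If you want to drive the same ImageMagick pipeline from Python, a minimal sketch using subprocess (assuming the magick binary is on your PATH) would be:
import subprocess

# Resize down to 16x16, then scale back up with nearest-neighbour interpolation (-scale)
subprocess.run(['magick', 'artistic-swirl.jpg',
                '-resize', '16x16', '-scale', '500x500', 'result.png'],
               check=True)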
Keywords:
Pixelate, pixellate, pixelated, pixellated, ImageMagick, command-line, commandline, image, image processing, nearest neighbour interpolation.

Here is my solution using a sliding window:
import numpy as np
import matplotlib.pyplot as plt

def pixelate_rgb(img, window):
    n, m, _ = img.shape
    n, m = n - n % window, m - m % window
    img1 = np.zeros((n, m, 3))
    for x in range(0, n, window):
        for y in range(0, m, window):
            img1[x:x+window, y:y+window] = img[x:x+window, y:y+window].mean(axis=(0, 1))
    return img1
img = plt.imread('test.png')
fig, ax = plt.subplots(1, 4, figsize=(20,10))
ax[0].imshow(pixelate_rgb(img, 5))
ax[1].imshow(pixelate_rgb(img, 10))
ax[2].imshow(pixelate_rgb(img, 20))
ax[3].imshow(pixelate_rgb(img, 30))
# remove frames
[a.set_axis_off() for a in ax.flatten()]
plt.subplots_adjust(wspace=0.03, hspace=0)
The main idea is to slide a window of a certain size through the image and calculate the mean color for that area. Then replace the original pixels in that area with this color.
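The same mean-per-block computation can also be vectorized with a reshape instead of the explicit loops. A sketch (the pixelate_rgb_fast name is mine):
def pixelate_rgb_fast(img, window):
    n, m, _ = img.shape
    n, m = n - n % window, m - m % window
    # split into (n/window, window, m/window, window, 3) blocks and average each block
    blocks = img[:n, :m].reshape(n // window, window, m // window, window, 3)
    means = blocks.mean(axis=(1, 3))
    # repeat each block mean back out to full size
    return means.repeat(window, axis=0).repeat(window, axis=1)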
Another idea is to process grayscale images. Here we calculate the mean color of a grayscale image for a region, but now we replace the original pixels with white or black color depending on whether the mean exceeds a threshold:
def pixelate_bin(img, window, threshold):
    n, m = img.shape
    n, m = n - n % window, m - m % window
    img1 = np.zeros((n, m))
    for x in range(0, n, window):
        for y in range(0, m, window):
            if img[x:x+window, y:y+window].mean() > threshold:
                img1[x:x+window, y:y+window] = 1
    return img1
# convert image to grayscale
img = np.dot(plt.imread('test.png'), [0.299, 0.587, 0.114])
fig, ax = plt.subplots(1, 3, figsize=(15,10))
plt.tight_layout()
ax[0].imshow(pixelate_bin(img, 5, .2), cmap='gray')
ax[1].imshow(pixelate_bin(img, 5, .3), cmap='gray')
ax[2].imshow(pixelate_bin(img, 5, .45), cmap='gray')
# remove frames
[a.set_axis_off() for a in ax.flatten()]
plt.subplots_adjust(wspace=0.03, hspace=0)
Keep in mind: plt.imread returns PNG pixel values between 0 and 1, whereas JPG pixel values come in between 0 and 255.
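So if you want the same thresholds to work for both formats, a one-line normalization sketch before processing might look like:
img = img / 255.0 if img.dtype == np.uint8 else img  # bring JPG data into the 0-1 range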

Related

Linear-Blurring an Image

I'm trying to blur an image by mapping each pixel to the average of the N pixels to the right of it (in the same row). My iterative solution produces good output, but my linear-algebra solution is producing bad output.
From testing, I believe my kernel-matrix is correct; and, I know the last N rows don't get blurred, but that's fine for now. I'd appreciate any hints or solutions.
Iterative-solution output (good), linear-algebra output (bad), and the original image. Here is the failing linear-algebra code:
import numpy as np

def blur(orig_img):
    # turn the image matrix into a vector
    flattened_img = orig_img.flatten()
    L = flattened_img.shape[0]
    N = 3
    # kernel
    kernel = np.zeros((L, L))
    for r, row in enumerate(kernel[0:-N]):
        row[r:r+N] = [round(1/N, 3)]*N
    print(kernel)
    # blur the img
    print('starting blurring')
    blurred_img = np.matmul(kernel, flattened_img)
    blurred_img = blurred_img.reshape(orig_img.shape)
    return blurred_img
The equation I'm modelling is b_i = (1/N) * (x_i + x_{i+1} + ... + x_{i+N-1}), i.e. each output pixel is the mean of the N input pixels starting at its own position.
One option might be to just use a kernel and a convolution.
For example, if we load a grayscale image like so:
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from scipy import ndimage
# load a hackish grayscale image by averaging the RGB channels
image = np.asarray(Image.open('cup.jpg')).mean(axis=2)
plt.imshow(image)
plt.title('Gray scale image')
plt.show()
Now one can use a kernel and convolution. For example, to create a filter that works on a single row and computes the value of the center pixel as the difference between the pixels to its right and left, one can do the following:
# Create a kernel that takes the difference between neighbors horizontal pixes
k = np.array([[-1,0,1]])
plt.subplot(121)
plt.title('Kernel')
plt.imshow(k)
plt.subplot(122)
plt.title('Output')
plt.imshow(ndimage.convolve(image, k, mode='constant', cval=0.0))
plt.show()
Therefore, one can blur an image by mapping each pixel to the average of the N pixels to the right of it by creating the appropriate kernel:
# Create a kernel that takes the average of N pixels to the right
n = 10
k = np.zeros(n*2)
k[n:] = 1/n
k = k[np.newaxis, ...]
plt.subplot(121)
plt.title('Kernel')
plt.imshow(k)
plt.subplot(122)
plt.title('Output')
plt.imshow(ndimage.convolve(image, k, mode='constant', cval=0.0))
plt.show()
The issue was incorrect usage of cv2.imshow() when displaying the output image. It expects floating-point pixel values to be in [0, 1], which is done in the code below (near the bottom):
import cv2
import numpy as np

def blur(orig_img):
    flattened_img = orig_img.flatten()
    L = flattened_img.shape[0]
    N = int(round(0.1 * orig_img.shape[0], 0))
    # mask (A)
    mask = np.zeros((L, L))
    for r, row in enumerate(mask[0:-N]):
        row[r:r+N] = [round(1/N, 2)]*N
    # blurred img = A * flattened_img
    print('starting blurring')
    blurred_img = np.matmul(mask, flattened_img)
    blurred_img = blurred_img.reshape(orig_img.shape)
    cv2.imwrite('blurred_img.png', blurred_img)
    # normalize img to [0, 1] so cv2.imshow() displays it correctly
    blurred_img = (blurred_img - blurred_img.min()) / (blurred_img.max() - blurred_img.min())
    return blurred_img
Amended output
Thank you to @CrisLuengo for identifying the issue.

How to analyze only a part of an image?

I want to analyse a specific part of an image. As an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
Here's a 400x300 black rectangle with a 1-pixel red border:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200, 200
im = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(im[-h:, -w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
You can try cropping the image to the specific part that you want:
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels. (Make sure the image dimensions are larger than 200x200, otherwise an error will occur.)
Original Image:
Image after Cropping:
You can then use this cropped image to count the number of black pixels, where what counts as a BLACK pixel depends on your use case: an exact value like (0, 0, 0), or a range/threshold like (0-15, 0-15, 0-15).
P.S.: The final image will always have dimensions of 200x200 pixels.
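As a sketch of both interpretations (exact black versus a threshold), assuming the 200x200 cropped RGB image from above:
import numpy as np

arr = np.array(img)  # the 200x200 crop from above
# exact black: all three channels are 0
exact_black = np.count_nonzero(np.all(arr == 0, axis=-1))
# near black: all three channels are 15 or below
near_black = np.count_nonzero(np.all(arr <= 15, axis=-1))
print(exact_black, near_black)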
from PIL import Image
img = Image.open("ImageName.jpg")
crop_area = (a, b, c, d)  # (left, upper, right, lower) box coordinates
cropped_img = img.crop(crop_area)

K-means color clustering - omit background pixels with masked numpy arrays

I'm trying to find the 3 dominant colours of several images using K-means clustering. The problem I'm facing is that K-means also clusters the background of the image. I am using Python 2.7 and OpenCV 3.
All images have the same grey background of the following RGB colour: 150,150,150. To stop K-means from also clustering the background colour, I created a masked array which masks all '150' pixel values from the original image array, theoretically leaving only the non-background pixels for K-means to work with. However, when I run my script, it still returns the grey as one of the dominant colours.
My question: is a masked array the way to go (and did I do something wrong) or are there better alternatives to somehow exclude pixels from K-means clustering?
Please find my code below:
from sklearn.cluster import KMeans
from sklearn import metrics
import cv2
import numpy as np
def centroid_histogram(clt):
    numLabels = np.arange(0, len(np.unique(clt.labels_)) + 1)
    (hist, _) = np.histogram(clt.labels_, bins=numLabels)
    hist = hist.astype("float")
    hist /= hist.sum()
    return hist
image = cv2.imread("test1.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
h, w, _ = image.shape
w_new = int(100 * w / max(w, h))
h_new = int(100 * h / max(w, h))
image = cv2.resize(image, (w_new, h_new))
image_array = image.reshape((image.shape[0] * image.shape[1], 3))
image_array = np.ma.masked_values(image_array,150)
clt = KMeans(n_clusters=3)
clt.fit(image_array)
hist = centroid_histogram(clt)
zipped = zip(hist, clt.cluster_centers_)
zipped.sort(reverse=True, key=lambda x: x[0])
hist, clt.cluster_centers = zip(*zipped)
print(clt.cluster_centers_)
If you want to extract the values of the pixels other than your background, you can use numpy indexing:
img2 = image_array[image_array != [150, 150, 150]]
img2 = img2.reshape((len(img2) / 3, 3))
This will yield the list of pixels which are not [150,150,150].
However, it does not preserve the structure of the image; it just gives you the list of pixel values. I can't really remember, but maybe for K-means you need to give it the whole image, i.e. you also need to feed it the positions of the pixels? But in that case, no masking will ever help, because masking just replaces the values of certain pixels with others; it does not get rid of pixels altogether.
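A safer variant of that filtering, keeping whole pixels together rather than comparing channel by channel, is to build a row mask and select with it before fitting. A sketch, assuming image_array is the plain (N, 3) reshape from the question, before np.ma.masked_values is applied:
import numpy as np

# keep only rows (pixels) where not all three channels equal the background value
mask = ~np.all(image_array == 150, axis=1)
foreground = image_array[mask]  # shape (n_foreground_pixels, 3)
clt.fit(foreground)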

convert image (np.array) to binary image

Thank you for reading my question.
I am new to Python and became interested in scipy. I am trying to figure out how I can turn the image of the raccoon (in scipy.misc) into a binary one (black and white). This is not covered in the scipy-lecture tutorial.
This is so far my code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import misc  # here is how you get the raccoon image

face = misc.face()
image = misc.face(gray=True)
plt.imshow(image, cmap=plt.cm.gray)
print(image.shape)
def binary_racoon(image, lowerthreshold, upperthreshold):
    img = image.copy()
    shape = np.shape(img)
    for i in range(shape[1]):
        for j in range(shape[0]):
            if img[i,j] < lowerthreshold and img[i,j] > upperthreshold:
                # then assign black to the pixel
            else:
                # then assign white to the pixel
    return img

convertedpicture = binary_racoon(image, 80, 100)
plt.imshow(convertedpicture, cmap=plt.cm.gist_gray)
I have seen other people use OpenCV to make a picture binary, but I am wondering how I can do it this way, by looping over the pixels. I have no idea what values to give the upper and lower thresholds, so I made a guess of 80 and 100. Is there also a way to determine this?
In case anyone else is looking for a quick minimal example to experiment with, here's what I used to binarize an image:
from scipy.misc import imread, imsave
# read in image as 8 bit grayscale
img = imread('cat.jpg', mode='L')
# specify a threshold 0-255
threshold = 150
# make all pixels < threshold black
binarized = 1.0 * (img > threshold)
# save the binarized image
imsave('binarized.jpg', binarized)
Input:
Output:
You're overthinking this:
def to_binary(img, lower, upper):
    return (lower < img) & (img < upper)
In numpy, the comparison operators apply elementwise over the whole array. Note that you have to use & instead of and to combine the booleans, since Python does not allow numpy to overload and.
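Used with the raccoon image and the guessed thresholds from the question, a quick usage sketch:
bw = to_binary(image, 80, 100)
plt.imshow(bw, cmap=plt.cm.gray)
plt.show()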
You don't need to iterate over the x and y positions of the image array. Use the numpy array to check whether the array is above or below the threshold of interest. Here is some code that produces a boolean (true/false) array as the black and white image.
# use 4 different thresholds
thresholds = [50, 100, 150, 200]
# create a 2x2 image array
fig, ax_arr = plt.subplots(2, 2)
# iterate over the thresholds and image axes
for ax, th in zip(ax_arr.ravel(), thresholds):
    # bw is the black and white array with the same size and shape
    # as the original array. the color map will interpret the 0.0 to 1.0
    # float array as being either black or white.
    bw = 1.0*(image > th)
    ax.imshow(bw, cmap=plt.cm.gray)
    ax.axis('off')
# remove some of the extra white space
fig.tight_layout(h_pad=-1.5, w_pad=-6.5)

Loop over binary image pixels

I have this image with two people in it. It is a binary image that only contains black and white pixels.
First I want to loop over all the pixels and find the white pixels in the image.
Then I want to find the [x,y] coordinates of one particular white pixel.
After that, I want to use that particular [x,y] in the image, which belongs to a white pixel.
Using that [x,y] coordinate, I want to convert the neighbouring black pixels into white pixels, though not the whole image.
I wanted to post the image here but unfortunately I can't. I hope my question is understandable now. In the image below you can see the edges.
Say, for example, I find the edge of the nose with a loop using [x,y], and then turn all its neighbouring black pixels into white pixels.
This is the binary image
The operation described is called dilation, from Mathematical Morphology. You can either use, for example, scipy.ndimage.binary_dilation or implement your own.
Here are two ways to do it (one is a trivial implementation), and you can check that the resulting images are identical:
import sys
import numpy
from PIL import Image
from scipy import ndimage

img = Image.open(sys.argv[1]).convert('L')  # Input is supposed to be binary.
width, height = img.size
img = img.point(lambda x: 255 if x > 40 else 0)  # "Ignore" the JPEG artifacts.

# Dilation
im = numpy.array(img)
im = ndimage.binary_dilation(im, structure=((0, 1, 0), (1, 1, 1), (0, 1, 0)))
im = im.view(numpy.uint8) * 255
Image.fromarray(im).save(sys.argv[2])

# "Other operation": the same 4-connected dilation done by hand
im = numpy.array(img)
white_pixels = numpy.dstack(numpy.nonzero(im != 0))[0]
for y, x in white_pixels:
    for dy, dx in ((-1, 0), (0, -1), (0, 1), (1, 0)):
        py, px = dy + y, dx + x
        if py >= 0 and px >= 0 and py < height and px < width:
            im[py, px] = 255
Image.fromarray(im).save(sys.argv[3])
