How to read a bitmap file and crop it afterwards? - python

I want to read a bitmap that represents a sprite, and crop a part of it.
Here is an example of an image.
In Matlab I would have done something like:
i = imread('myfile.png')
x1 = i(1:64,1:64,:)
How can I do the same in python?

PIL can do that; the crop function in particular.
from PIL import Image
img = Image.open(r'yRc2a.png')
img = img.convert('RGBA')
img = img.crop((0, 0, 82, 82))
img.save(r'out.png')
I'm not sure, but it seems like your image uses a palette? I added the convert-to-RGBA call so the output file has transparency. Without the conversion, the transparent pixels come out green (why green?).
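If you want indexing that mirrors the Matlab code more literally, you can also read the image into a NumPy array and slice it. A minimal sketch, assuming the same 'myfile.png' and converting to RGBA first since the sprite sheet appears to use a palette:

import numpy as np
from PIL import Image

# Load as RGBA so the array has a channel axis to slice
i = np.asarray(Image.open('myfile.png').convert('RGBA'))

# Equivalent of Matlab's i(1:64,1:64,:), with 0-based, end-exclusive indexing
x1 = i[0:64, 0:64, :]

# Back to a PIL image if you want to save or display the crop
Image.fromarray(x1).save('out.png')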

Related

Python add noise to image breaks PNG

I'm trying to create an image system in Python 3 to be used in a web app. The idea is to load an image from disk and add some random noise to it. When I try this, I get what looks like a totally random image, not resembling the original:
import cv2
import numpy as np
from skimage.util import random_noise
from random import randint
from pathlib import Path
from PIL import Image
import io
image_files = [
    {
        'name': 'test1',
        'file': 'test1.png'
    },
    {
        'name': 'test2',
        'file': 'test2.png'
    }
]

def gen_image():
    rand_image = randint(0, len(image_files)-1)
    image_file = image_files[rand_image]['file']
    image_name = image_files[rand_image]['name']
    image_path = str(Path().absolute())+'/img/'+image_file
    img = cv2.imread(image_path)
    noise_img = random_noise(img, mode='s&p', amount=0.1)
    img = Image.fromarray(noise_img, 'RGB')
    fp = io.BytesIO()
    img.save(fp, format="PNG")
    content = fp.getvalue()
    return content
gen_image()
I have also tried using pypng:
import png
# Added the following to gen_image()
content = png.from_array(noise_img, mode='L;1')
content.save('image.png')
How can I load a PNG (with alpha transparency) from disk, add some noise to it, and return it so that it can be displayed by web server code (Flask, aiohttp, etc.)?
As indicated in the answer by makayla, this makes it better: noise_img = (noise_img*255).astype(np.uint8) but the colors are still wrong and there's no transparency.
Here's the updated function for that:
def gen_image():
    rand_image = randint(0, len(image_files)-1)
    image_file = image_files[rand_image]['file']
    image_name = image_files[rand_image]['name']
    image_path = str(Path().absolute())+'/img/'+image_file
    img = cv2.imread(image_path)
    cv2.imshow('dst_rt', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    # Problem exists somewhere below this line.
    img = random_noise(img, mode='s&p', amount=0.1)
    img = (img*255).astype(np.uint8)
    img = Image.fromarray(img, 'RGB')
    fp = io.BytesIO()
    img.save(fp, format="png")
    content = fp.getvalue()
    return content
This will pop up the pre-noise image and return the noised image. The RGB (and alpha) problem exists in the returned image.
I think the problem is that it needs to be RGBA, but when I change the mode to that, I get ValueError: buffer is not large enough.
Given all the new information I am updating my answer with a few more tips for debugging the issue.
I found a site here which creates sample transparent images. I created a 64x64 cyan (R=0, G=255, B=255) image with a transparency layer of 0.5. I used this to test your code.
I read in the image two ways to compare: im1 = cv2.imread(fileName) and im2 = cv2.imread(fileName, cv2.IMREAD_UNCHANGED). np.shape(im1) returned (64,64,3) and np.shape(im2) returned (64,64,4). This is why that flag is required: the default imread settings in OpenCV read a transparent image in as a normal RGB image.
However, OpenCV reads images in BGR order instead of RGB, and since you are not saving out with OpenCV, you'll need to convert it to the correct channel order, otherwise the image will have reversed colors. For example, my cyan image, when viewed with the reversed colors, appears like this:
You can change this using OpenCV's color conversion function, like this: im = cv2.cvtColor(im, cv2.COLOR_BGRA2RGBA) (here is a list of all the color conversion codes). Again, double-check the size of your image if you need to; it should still have four channels since you converted it to RGBA.
You can now add your noise to your image. Just so you know, this is also going to add noise to your alpha channel as well, randomly making some pixels more transparent and others less transparent. The random_noise function from skimage converts your image to float and returns it as float. This means the image values, normally integers ranging from 0 to 255, are converted to decimal values from 0 to 1. Your line img = Image.fromarray(noise_img, 'RGB') does not know what to do with the floating point noise_img. That's why the image is all messed up when you save it, as well as when I tried to show it.
So I took my cyan image, added noise, and then converted the floats back to 8 bits.
noise_img = random_noise(im, mode='s&p', amount=0.1)
noise_img = (noise_img*255).astype(np.uint8)
img = Image.fromarray(noise_img, 'RGBA')
It now looks like this (screenshot) using img.show():
I used the PIL library to save out my image instead of openCV so it's as close to your code as possible.
fp = 'saved_im.png'
img.save(fp, format="png")
I loaded the image into PowerPoint to double-check that it preserved the transparency when I saved it using this method. Here is a screenshot of the saved image overlaid on a red circle in PowerPoint:
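Putting the answer's tips together, here is a minimal sketch of a corrected gen_image(). It assumes the same image_files list and img/ directory as in the question, and that the PNGs really do have an alpha channel (otherwise imread returns only three channels and the BGRA conversion below would fail):

import io
from pathlib import Path
from random import randint

import cv2
import numpy as np
from PIL import Image
from skimage.util import random_noise

def gen_image():
    rand_image = randint(0, len(image_files)-1)
    image_path = str(Path().absolute())+'/img/'+image_files[rand_image]['file']

    # Keep the alpha channel when reading
    img = cv2.imread(image_path, cv2.IMREAD_UNCHANGED)
    # OpenCV loads BGRA; PIL expects RGBA
    img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGBA)

    # random_noise returns floats in [0, 1]; scale back to 8-bit
    noise_img = random_noise(img, mode='s&p', amount=0.1)
    noise_img = (noise_img*255).astype(np.uint8)

    out = Image.fromarray(noise_img, 'RGBA')
    fp = io.BytesIO()
    out.save(fp, format="PNG")
    return fp.getvalue()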

How do I add a transparent overlay to an image using pillow?

I'm trying to add a transparent overlay to a JPEG image. In the example below, the desired result would be a red image with a pieslice where it is light red.
My input is a JPEG image. Since JPEG doesn't have an alpha channel, I thought I could convert it to an 'RGBA' image and paste the overlay on it:
from PIL import Image, ImageDraw
# img = Image.open('input.jpg').convert('RGBA')
img = Image.new('RGBA', (400,400), (255,0,0))
img2 = Image.new('RGBA', (400,400))
draw2 = ImageDraw.Draw(img2)
draw2.pieslice([0,0,400,400], 90, 180, fill='white')
img.putalpha(128)
img.save('img.png')
img2.save('img2.png')
img.paste(img2)
img.save('img1+2.png')
However, this doesn't have the desired effect, and Windows Photos cannot even open it correctly.
I saw the blend and alpha_composite functions, but they don't have the desired effect for me. I don't want to lower the alpha values of the background outside of the overlay.
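Not from an accepted answer, just a sketch of one way to get that effect: give the overlay per-pixel alpha (fully transparent except for a semi-transparent pieslice) and composite it, so the base image's alpha outside the overlay is never touched:

from PIL import Image, ImageDraw

# Base image; with a real input this would be Image.open('input.jpg').convert('RGBA')
img = Image.new('RGBA', (400, 400), (255, 0, 0, 255))

# Fully transparent overlay with a half-transparent white pieslice
overlay = Image.new('RGBA', (400, 400), (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
draw.pieslice([0, 0, 400, 400], 90, 180, fill=(255, 255, 255, 128))

# alpha_composite only changes pixels where the overlay is non-transparent
result = Image.alpha_composite(img, overlay)
result.save('img1+2.png')                   # keeps RGBA
result.convert('RGB').save('img1+2.jpg')    # or flatten back to JPEG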

Resizing JPG using PIL.resize gives a completely black image

I'm using PIL to resize a JPG. I'm expecting the same image, resized, as output, but instead I get a correctly sized black box. The new image file is completely devoid of any information, just an empty file. Here is an excerpt from my script:
basewidth = 300
img = Image.open(path_to_image)
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize))
img.save(dir + "/the_image.jpg")
I've tried resizing with Image.LANCZOS as the second argument (it defaults to Image.NEAREST with one argument), but it didn't make a difference. I'm running Python 3 on Ubuntu 16.04. Any ideas on why the image file is empty?
I also encountered the same issue when trying to resize an image with a transparent background. The resize works after I add a white background to the image.
Code to add a white background and then resize the image:
from PIL import Image

im = Image.open("path/to/img")
if im.mode == 'RGBA':
    alpha = im.split()[3]
    bgmask = alpha.point(lambda x: 255-x)
    im = im.convert('RGB')
    im.paste((255,255,255), None, bgmask)
im = im.resize((new_width, new_height), Image.ANTIALIAS)
ref:
Other's code for making thumbnail
Python: Image resizing: keep proportion - add white background
The simplest way to get to the bottom of this is to post your image! Failing that, we can check the various aspects of your image.
So, import Numpy and PIL, open your image and convert it to a Numpy ndarray, you can then inspect its characteristics:
import numpy as np
from PIL import Image
# Open image
img = Image.open('unhappy.jpg')
# Convert to Numpy Array
n = np.array(img)
Now you can print and inspect the following things:
n.shape # we are expecting something like (1580, 1725, 3)
n.dtype # we expect dtype('uint8')
n.max() # if there's white in the image, we expect 255
n.min() # if there's black in the image, we expect 0
n.mean() # we expect some value between 50-200 for most images
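If those checks reveal a palette ('P') image or a fourth (alpha) channel rather than plain RGB, one possible fix, just an assumption since the original image was never posted, is to flatten onto a white background before resizing and saving as JPEG:

from PIL import Image

basewidth = 300
img = Image.open('unhappy.jpg')   # hypothetical filename, as above

# JPEG cannot store alpha, so flatten palette/RGBA/LA images onto white first
if img.mode in ('P', 'RGBA', 'LA'):
    img = img.convert('RGBA')
    background = Image.new('RGB', img.size, (255, 255, 255))
    background.paste(img, mask=img.split()[3])
    img = background

wpercent = basewidth / float(img.size[0])
hsize = int(float(img.size[1]) * wpercent)
img = img.resize((basewidth, hsize), Image.LANCZOS)
img.save('the_image.jpg')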

Reading BMP RGBA using python PIL doesn't work

I'm trying to read an RGBA BMP using Python PIL, and it doesn't seem to work.
The following code segment shows that tensorflow's decode_bmp function succeeds in this task, while PIL doesn't:
def read_image_tf(filename):
    image_file = tf.read_file(filename, name='read_file')
    decoded_bmp = tf.io.decode_bmp(image_file)
    return decoded_bmp

def read_img_pil(filename):
    img = np.asarray(Image.open(filename))
    return img

img = K.eval(read_image_tf(<FILENAME>))
print (img.shape)
img = read_img_pil(<FILENAME>)
print (img.shape)
Output:
(3892, 3892, 4)
(3892, 3892, 3)
When trying to run imgobj.convert('RGBA') on Image.open(filename), I simply get a matrix that contains only the value 255 (a constant alpha, which is not the correct alpha value per pixel).
Is there a bug in PIL? Is there an alternative to reading RGBA using python?
PIL doesn't support 32-bit bitmap images. As the official documentation states:
Pillow reads and writes Windows and OS/2 BMP files containing 1, L, P, or RGB data. 16-colour images are read as P images. Run-length encoding is not supported.
That's why it is generally recommended not to use Image.show() to view an image, as it converts the image to .bmp before displaying it. Therefore, if the image contains alpha values (color modes LA, RGBA, etc.), it will not be displayed properly and will have artifacts.
Therefore, when you try to open a .bmp image having RGBA color space in PIL, the color space gets truncated to RGB.
Example:
from PIL import Image
# creating a red image with RGBA color space and full opacity
img = Image.new("RGBA", (100, 100), (255, 0, 0, 255))
# displaying the color mode of the image
print(img.mode)
# saving the image as a .bmp (bitmap)
img.save("new.bmp")
# Opening the previously saved .bmp image (having color mode RGBA)
img = Image.open("new.bmp")
# displaying the mode of the .bmp file
print(img.mode)
OUTPUT:
RGBA
RGB
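As a possible workaround (not part of the quoted answer, and assuming the file on disk really is a 32-bit BMP like the one in the question), OpenCV can read the alpha channel and hand the array to PIL:

import cv2
from PIL import Image

# IMREAD_UNCHANGED keeps the fourth (alpha) channel of a 32-bit BMP
bgra = cv2.imread('rgba_image.bmp', cv2.IMREAD_UNCHANGED)  # hypothetical filename
print(bgra.shape)   # e.g. (3892, 3892, 4) for the image in the question

# OpenCV orders channels as BGRA; reorder to RGBA for PIL
rgba = cv2.cvtColor(bgra, cv2.COLOR_BGRA2RGBA)
img = Image.fromarray(rgba, 'RGBA')
print(img.mode)     # RGBA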

How can I read/input only a part of the image without creating another image?

import cv2
fname = '1.png'
img=cv2.imread(fname, 0)
print (img)  # the outcome is an array of values from 0 to 255 (grayscale)
ret, thresh = cv2.threshold(img, 254, 255, cv2.THRESH_BINARY)
thresh = cv2.bitwise_not(thresh)
nums, labels = cv2.connectedComponents(thresh, None, 4, cv2.CV_32S)
dst = cv2.convertScaleAbs(255.0*labels/nums)
cv2.imwrite(dest_dir+"output.png", dst)
That code works just fine, so I moved on to adjusting my code so it can take a portion of the image, not the entire image:
from PIL import Image
img = Image.open(fname)
img2 = img.crop((int(xmin), int(ymin), int(xmax), int(ymax)))
xmin, ymin, xmax and ymax are simply the top-left and bottom-right coordinates of the box.
Then I did img = cv2.imread(img2) to continue as in the previous code, but got an error. When I printed img2 I got <PIL.Image.Image image mode=RGB size=54x10 at 0x7F4D283AFB70>. How can I adjust it so I can feed that crop or image portion into my code above instead of fname? Kindly note I don't want to save img2 as an image and carry on from there, because I need to work on the main image.
Try cv2.imshow() instead of printing it. In order to see an image you cropped, you need to use a cv2 function. Here is some sample code:
import numpy as np
import cv2
# Load a color image in grayscale
img = cv2.imread('messi5.jpg',0)
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The simple answer is NO, you cannot.
Open up your terminal/IDE and type in help(cv2.imread).
It clearly states that "The function imread loads an image from the specified file and returns it." So in order to use cv2.imread(), you must pass it a file, not an image.
Your best bet would be to save your cropped image as a file and then read it.
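That said, if the goal is only to run the connected-components code on a region of the image, here is a sketch of two ways to avoid the file round-trip entirely (the box coordinates below are made-up examples):

import cv2
import numpy as np
from PIL import Image

fname = '1.png'
xmin, ymin, xmax, ymax = 10, 20, 64, 30   # example box coordinates

# Option 1: stay in OpenCV and slice the grayscale array it already gives you
# (rows are y, columns are x)
img = cv2.imread(fname, 0)
roi = img[int(ymin):int(ymax), int(xmin):int(xmax)]

# Option 2: convert the PIL crop to a NumPy array instead of calling cv2.imread on it
img2 = Image.open(fname).convert('L').crop((int(xmin), int(ymin), int(xmax), int(ymax)))
roi = np.asarray(img2)

# Either roi can now go through the same threshold / connectedComponents pipeline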
