I've been handed a list of files from the backend of an application that are supposed to be JPEG files. However, for the life of me, I haven't been able to convert them into PIL image objects. When I call
str(curimg)
I get back:
<type 'str'>
. I have tried using open(), .read(), io.BytesIO(img.read()), and also doing nothing to it, but it keeps seeing it as a string. When I print the string, I get unrecognizable characters. Does anyone know how to tell Python to interpret this string as a JPEG and convert it into a PIL image on which I can call .size and np.array?
from PIL import Image
import io
Image.open(io.BytesIO(image))
Note:
If the image is on the web, you need to download it first.
import requests
image = requests.get(image_url).content #download image from web
And then pass it to the io module.
io.BytesIO(image)
If the image is on your hard drive, you can open it directly with PIL.
Image.open('image_file.jpg') # image on your hard drive
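Once the image is open as a PIL object, the calls the question mentions should work. A minimal check, assuming image holds the raw JPEG bytes:
from PIL import Image
import io
import numpy as np

img = Image.open(io.BytesIO(image))  # image: raw JPEG bytes
print(img.size)        # (width, height)
arr = np.array(img)    # pixel data as a numpy array
print(arr.shape)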
You should be able to pass a StringIO object to PIL and open it that way.
i.e.:
from PIL import Image
import StringIO
tempBuff = StringIO.StringIO()
tempBuff.write(curimg)
tempBuff.seek(0) #need to jump back to the beginning before handing it off to PIL
Image.open(tempBuff)
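Note that StringIO.StringIO is Python 2 only; on Python 3 the same pattern uses io.BytesIO, since the data is binary. A sketch, assuming curimg holds the raw JPEG bytes:
from PIL import Image
import io

buf = io.BytesIO(curimg)  # curimg: the raw JPEG bytes from the backend
img = Image.open(buf)
print(img.size)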
For me, none of the solutions above worked.
I finally managed to read the string properly like this:
from PIL import Image
img = Image.frombytes('RGB', (640, 480), img_str, 'raw')
To test it, you can do something like
image = Image.open("some.png")
print(image.mode, image.size) # OUT: 'RGB' (640, 480)
image = Image.frombytes('RGB', (640, 480), image.tobytes(), 'raw')
image.show()
@CEO (per this comment): I don't know what role SQL plays here, and I'm not exactly sure what you're trying to achieve, but I recall I had some issues and this is what works for my case; hope it helps:
# copy the raw frame so the original buffer is not modified
frame = self._rawNode.display_frame.copy()
# scale to the layout width while preserving the aspect ratio
width = int(self.customLayout.geometry().width())
height = int(frame.shape[0] * (width / frame.shape[1]))
# resize, then convert OpenCV's BGR channel order to RGB
display_frame = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
# wrap the numpy buffer in a QImage (bytes per line = 3 * width) and show it in the label
qImg = QtGui.QImage(display_frame.data, width, height, 3 * width, QtGui.QImage.Format_RGB888)
self.pixmap = QtGui.QPixmap(qImg)
self.Imagelabel.setPixmap(self.pixmap)
I'm trying to resize images in PySimpleGUI; however, it crops the images instead of resizing them.
My image element is written as:
ui.Image(filename, size=(50, 50))
Which results in something like:
While the original looks like:
I've seen a suggestion elsewhere to use PIL (link). However, that looks a lot longer than I'd like, and I was wondering if there is an easier way to do this.
Peace
Hi, to resize an image you need to use the Pillow library, and you also need to import a couple of other libraries to convert the result into bytes if needed. Here is an example:
import PIL.Image
import io
import base64

def resize_image(image_path, resize=None):  # image_path: e.g. "C:/Users/Image/img.jpg"
    if isinstance(image_path, str):
        img = PIL.Image.open(image_path)
    else:
        try:
            # the value may already be base64-encoded image data
            img = PIL.Image.open(io.BytesIO(base64.b64decode(image_path)))
        except Exception:
            # otherwise treat it as raw image bytes
            data_bytes_io = io.BytesIO(image_path)
            img = PIL.Image.open(data_bytes_io)
    cur_width, cur_height = img.size
    if resize:
        new_width, new_height = resize
        scale = min(new_height / cur_height, new_width / cur_width)
        img = img.resize((int(cur_width * scale), int(cur_height * scale)), PIL.Image.ANTIALIAS)  # use PIL.Image.LANCZOS on Pillow >= 10
    bio = io.BytesIO()
    img.save(bio, format="PNG")
    del img
    return bio.getvalue()
ui.Image(key="-PHOTO-", size=(50, 50))  # after some change
elif event == "-IMG-":  # the "-IMG-" key is in [ui.I(key="-IMG-", enable_events=True), ui.FileBrowse()]
    window["-PHOTO-"].update(data=resize_image(values["-IMG-"], resize=(50, 50)))
I hope this helps
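For context, here is a minimal sketch of how those pieces could fit together in a PySimpleGUI event loop (the window title and layout are my own assumptions, not the answerer's exact code):
import PySimpleGUI as ui

layout = [
    [ui.Input(key="-IMG-", enable_events=True), ui.FileBrowse()],
    [ui.Image(key="-PHOTO-", size=(50, 50))],
]
window = ui.Window("Resize demo", layout)

while True:
    event, values = window.read()
    if event == ui.WIN_CLOSED:
        break
    elif event == "-IMG-":
        # resize_image() is the helper defined above; it returns PNG bytes
        window["-PHOTO-"].update(data=resize_image(values["-IMG-"], resize=(50, 50)))

window.close()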
Hello, here's my workaround to resize images in PySimpleGUI:
1. Read the image stored in the path 'old_path'.
2. Resize this image to my desired dimensions.
3. Store the resized image in a folder as a 'png' file.
4. Finally, display the resized image.
old_path = os.path.join(
values["-FOLDER-"], values["-FILE LIST-"][0]
)
# read image using old_path
im = cv2.imread(old_path)
# resize image to desired dimensions
im = cv2.resize(im, (700, 500))
# save image to temporary folder (new_path) as png
new_path = 'temp_storage/image_to_show.png'
cv2.imwrite(new_path, im)
# update window with new resized image
window["-IMAGE-"].update(new_path)
If you need the full code, let me know. The image storage folder only holds the image to be shown; it gets overwritten every time you choose a new image, so no worries about images piling up.
cv2 is needed for reading, resizing, and writing (or you can use PIL).
Good luck!
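If you would rather avoid the temporary file, one possible variant (a sketch, not the answerer's code) is to encode the resized frame in memory with cv2.imencode and pass the PNG bytes straight to the Image element:
import cv2

im = cv2.imread(old_path)           # old_path as defined above
im = cv2.resize(im, (700, 500))
ok, buf = cv2.imencode('.png', im)  # encode to PNG in memory
if ok:
    window["-IMAGE-"].update(data=buf.tobytes())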
I'm taking an image from the clipboard and trying to determine how much space the image is going to take on the hard drive before actually saving it. I couldn't find the right code to do so.
I'm using this code:
from PIL import Image, ImageGrab
from io import BytesIO
def jpeg_or_png(img):
    img_file = BytesIO()
    img.save(img_file, 'png')
    img_file_size_png = img_file.tell()
    img_file = BytesIO()
    img.save(img_file, 'jpeg')
    img_file_size_jpeg = img_file.tell()
    if img_file_size_jpeg < img_file_size_png:
        return 'jpeg'
    else:
        return 'png'

img = ImageGrab.grabclipboard()
if img:
    print(jpeg_or_png(img))
It always returns jpeg, even for an image that is mostly blank white, for which PNG is more efficient.
If I save both versions, the .png is of course smaller in size, but the function still returns .jpeg.
Ideally, I want to determine which format is going to take less space without saving anything. I know I could save both, find the bigger file, and remove it, but that's bad practice because it temporarily takes up extra space.
Your code works for me. To help diagnose, you can modify your function to return the size of each encoding as a dictionary.
def jpeg_or_png(img):
    j = BytesIO()
    img.save(j, 'jpeg')
    p = BytesIO()
    img.save(p, 'png')
    return {'jpg': j.tell(), 'png': p.tell()}
Here are two tests with different screen captures:
jpeg_or_png(ImageGrab.grabclipboard())
# returns:
{'jpg': 21753, 'png': 232163}
for this image:
jpeg_or_png(ImageGrab.grabclipboard())
# returns:
{'jpg': 7479, 'png': 3602}
for this image:
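Building on that, if the goal is to avoid encoding twice (once to compare, once to save), a possible variant is to keep the in-memory buffers and write out whichever is smaller; the helper name and output file name below are my own examples, not the answerer's:
def smaller_encoding(img):
    # encode once per format and keep whichever buffer is smaller,
    # so the chosen bytes can be written straight to disk without re-encoding
    j, p = BytesIO(), BytesIO()
    img.save(j, 'jpeg')
    img.save(p, 'png')
    if j.tell() < p.tell():
        return 'jpeg', j.getvalue()
    return 'png', p.getvalue()

fmt, data = smaller_encoding(ImageGrab.grabclipboard())
with open('clipboard.' + fmt, 'wb') as f:
    f.write(data)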
How can I get the same effect as:
import base64
from PIL import Image
with Image.open(image_path) as image:
    image.thumbnail((200, 200), Image.ANTIALIAS)
    image.save(temporary_thumbnail_path)
with open(temporary_thumbnail_path, "rb") as thumbnail_file:
    thumbnail_as_string = base64.b64encode(thumbnail_file.read()).decode()
without having to write to disk?
i.e. I would like to get the bytes representation of the compressed image, but without having to resort to temporary_thumbnail_path.
I know that PIL documentation recommends using
save(), with a BytesIO parameter for in-memory data.
but I am not sure I understand what this means and haven't found examples online.
It was not so hard:
import base64
import io
from PIL import Image

output = io.BytesIO()
with Image.open(image_path) as image:
    image.thumbnail((400, 400), Image.ANTIALIAS)
    image.save(output, format="JPEG")
thumbnail_as_string = base64.b64encode(output.getvalue()).decode()
I'm trying to use the Zbar library's QR code detection methods on images I extract with OpenCV's camera methods. Normally the QR code detection methods work with images (jpg, png, etc.) on my computer, but I guess the captured frames of OpenCV are different.
Is there a way of making the captured frame into a PIL Image?
Thank you.
from PIL import Image
import zbar
import cv2.cv as cv
capture = cv.CaptureFromCAM(1)
imgSize = cv.GetSize(cv.QueryFrame(capture))
img = cv.QueryFrame(capture)
#SOMETHING GOES HERE TO TURN FRAME INTO IMAGE
img = img.convert('L')
width, height = img.size
scanner = zbar.ImageScanner()
scanner.parse_config('enable')
zbar_img = zbar.Image(width, height, 'Y800', img.tostring())
# scan the image for barcodes
scanner.scan(zbar_img)
for symbol in zbar_img:
    print symbol.data
With the python CV2, you can also do this:
import Image, cv2
cap = cv2.VideoCapture(0)  # capture an image from the webcam
_,cv2_im = cap.read()
cv2_im = cv2.cvtColor(cv2_im,cv2.COLOR_BGR2RGB)
pil_im = Image.fromarray(cv2_im)
pil_im.show()
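That answer uses the old standalone PIL import; with Python 3 and current Pillow the same idea would look roughly like this (a sketch, default camera index 0 assumed):
import cv2
from PIL import Image

cap = cv2.VideoCapture(0)   # open the default webcam
ok, frame = cap.read()      # frame is a BGR numpy array
cap.release()
if ok:
    pil_im = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    pil_im.show()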
I think I may have found the answer. I'll edit later with results.
OpenCV to PIL Image
import Image, cv
cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 1)
pi = Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring())
Source: http://opencv.willowgarage.com/documentation/python/cookbook.html
Are you trying to obtain an RGB image? If that is the case, you need to change your parameters from this:
cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 1)
pi = Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring())
to that:
cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 3)
pi = Image.fromstring("RGB", cv.GetSize(cv_im), cv_im.tostring())
It is documented almost nowhere, but the 'L' parameter of Image.fromstring is for 8-bit black-and-white images. Besides, you need to change the argument of your cv.CreateImage call from 1 (single-channel image) to 3 (3 channels = RGB).
Hope it works for you.
Cheers
A simple way is to directly swap the channels. Suppose you are trying to convert a 3-channel image file between OpenCV format and PIL format. You can just use:
img[..., [0, 2]] = img[..., [2, 0]]
In this way, you won't be bothered with cv2.cvtColor, as that function only works on images of certain depths.
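For illustration, a small sketch showing that this in-place swap turns BGR order into RGB order on a 3-channel numpy array (the array values here are made up):
import numpy as np

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = 255                    # channel 0 is blue in BGR order

img[..., [0, 2]] = img[..., [2, 0]]  # swap channels 0 and 2 in place

assert (img[..., 2] == 255).all()    # the blue value now sits at index 2, where RGB expects blue
The same reordering can also be written as img[..., ::-1], which returns a reversed view instead of modifying the array in place.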
When loading a png image with PIL and OpenCV, there is a color shift. Black and white remain the same, but brown gets changed to blue.
I can't post the image because this site does not allow newbies to post images.
The code is written as below, rather than using cv.LoadImageM, because in the real case the raw image is received over TCP.
Here is the code:
#! /usr/bin/env python
import sys
import cv
import cv2
import numpy as np
import Image
from cStringIO import StringIO
if __name__ == "__main__":
    # load raw image from file
    f = open('frame_in.png', "rb")
    rawImage = f.read()
    f.close()

    # convert to mat
    pilImage = Image.open(StringIO(rawImage))
    npImage = np.array(pilImage)
    cvImage = cv.fromarray(npImage)

    # show it
    cv.NamedWindow('display')
    cv.MoveWindow('display', 10, 10)
    cv.ShowImage('display', cvImage)
    cv.WaitKey(0)

    cv.SaveImage('frame_out.png', cvImage)
How can the color shift be fixed?
OpenCV's images have their color channels arranged in the order BGR, whereas PIL's are RGB. You will need to switch the channels like so:
import PIL.Image
import cv2
...
image = np.array(pilImage) # Convert PIL Image to numpy/OpenCV image representation
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # You can use cv2.COLOR_RGBA2BGRA if you are sure you have an alpha channel. You will only have alpha channel if your image format supports transparency.
...
@Krish: Thanks for pointing out the bug. I didn't have time to test the code the last time.
Hope this helps.
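Put together with the question's code, the fix would look roughly like this (a sketch using the modern cv2/Pillow APIs rather than the old cv module):
import cv2
import numpy as np
from io import BytesIO
from PIL import Image

pilImage = Image.open(BytesIO(rawImage)).convert('RGB')        # rawImage: the PNG bytes from the question
cvImage = cv2.cvtColor(np.array(pilImage), cv2.COLOR_RGB2BGR)  # reorder channels for OpenCV display
cv2.imshow('display', cvImage)
cv2.waitKey(0)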
Change
pilImage = Image.open(StringIO(rawImage))
to
pilImage = Image.open(StringIO(rawImage)).convert("RGB")
Light alchemist's answer did not work, but it did explain the issue. Wouldn't the reverse be messed up by the alpha channel, i.e. it would change BGRA to ARGB? I would think Froyo's answer would solve it, but it did not change the displayed image at all. What did work was reversing the colors in OpenCV. I'm too much of a newbie to know why; they seem equivalent to me. Reversing the colors in numpy would be preferred, as additional processing is planned in numpy. But thanks for the help; the answers steered me in the right direction.
pilImage = Image.open(StringIO(rawImage))
bgrImage = np.array(pilImage)
cvBgrImage = cv.fromarray(bgrImage)
# Reverse BGR
cvRgbImage = cv.CreateImage(cv.GetSize(cvBgrImage),8,3)
cv.CvtColor(cvBgrImage, cvRgbImage, cv.CV_BGR2RGB)
#show it
cv.ShowImage('display', cvRgbImage)
cv.WaitKey(30) # ms to allow display
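Since the question says reversing the channels in numpy would be preferred (further processing happens in numpy), the same swap can be done there; a sketch reusing the names from the code above:
pilImage = Image.open(StringIO(rawImage))
rgbArray = np.array(pilImage)
bgrArray = rgbArray[:, :, ::-1].copy()  # reverse the channel axis; .copy() gives a contiguous array
cvImage = cv.fromarray(bgrArray)
cv.ShowImage('display', cvImage)
cv.WaitKey(30)  # ms to allow display
If the PNG has an alpha channel, convert the PIL image to RGB first (as in the other answer) so that only the three color channels get reversed.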