How to display a PIL image with pygame? - python

I am trying to stream video from my Raspberry Pi over Wi-Fi. I used pygame because I also have to use a gamepad in my project. Unfortunately, I got stuck on displaying the received frame. In short: I get a JPEG frame, open it with PIL, and convert it to a string - after that I can load the image from the string:
image_stream = io.BytesIO()
...
frame_1 = Image.open(image_stream)
f = StringIO.StringIO()
frame_1.save(f, "JPEG")
data = f.getvalue()
frame = pygame.image.fromstring(frame_1,image_len,"RGB")
screen.fill(white)
screen.blit(frame, (0,0))
pygame.display.flip()
and the error is:
Traceback (most recent call last):
File "C:\Users\defau_000\Desktop\server.py", line 57, in <module>
frame = pygame.image.fromstring(frame_1,image_len,"RGB")
TypeError: must be str, not instance

Sloth's answer no longer works with newer versions of Pillow, where the tostring() method has been removed in favour of tobytes(). Here is a working variant for Python 3.6, PIL 5.1.0, Pygame 1.9.3:
raw_str = frame_1.tobytes("raw", 'RGBA')
pygame_surface = pygame.image.fromstring(raw_str, size, 'RGBA')
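Here size is the image's (width, height) tuple; if you only have the PIL image, you can take it from the image itself with size = frame_1.size.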

The first argument to pygame.image.fromstring has to be a str.
So when frame_1 is your PIL image, convert it to a string with tostring, and load this string with pygame.image.fromstring.
You have to know the size of the image for this to work.
raw_str = frame_1.tostring("raw", 'RGBA')
pygame_surface = pygame.image.fromstring(raw_str, size, 'RGBA')
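Putting this together, here is a minimal end-to-end sketch. It assumes a local file test.jpg as a stand-in for the received frame, and uses tobytes(), the modern replacement for tostring() noted above:

import pygame
from PIL import Image

pygame.init()
screen = pygame.display.set_mode((640, 480))

# test.jpg stands in for the frame received over the network
frame_1 = Image.open("test.jpg").convert("RGBA")  # match the RGBA raw mode used below
raw_str = frame_1.tobytes("raw", "RGBA")
pygame_surface = pygame.image.fromstring(raw_str, frame_1.size, "RGBA")

screen.blit(pygame_surface, (0, 0))
pygame.display.flip()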

Related

pythonnet: Convert System.Drawing.Bitmap to PIL.Image

I have a .NET library which creates an image. I need access to this image in Python, so I'm trying to use pythonnet to call the .NET DLL.
I'm trying to use the following answers to convert the .NET bytes and then create the PIL.Image:
Convert Bytes[] to Python Bytes Using PythonNet
PIL: Convert Bytearray to Image
Here is my python code:
import clr
preloadingiterator = clr.AddReference(r"C:\Users\Ian\source\repos\PreloadingIterator\PreloadingIterator\bin\Debug\net48\PreloadingIterator.dll")
from PreloadingIterator import ImageIterator, ImageBytesIterator, FileBytesIterator
from pathlib import Path
import io
from PIL import Image
class FileBytesIteratorWrapper():
    def __init__(self, paths):
        self.paths = paths
        self.iterator = FileBytesIterator(paths)

    def __iter__(self):
        for netbytes in self.iterator:
            pythonbytes = bytes(netbytes)
            numBytes = len(pythonbytes)
            image = Image.frombytes('RGB', (1920, 1080), pythonbytes)
            yield image
This errors with:
ValueError: not enough image data
I assumed this was because I'm returning the PNG-encoded bytes rather than raw pixel data, so I changed my code like this:
image = Image.frombytes('RGB', (1920, 1080), pythonbytes, decoder_name='png')
Which errors with:
OSError: decoder png not available
How can I return image data from .NET and decode it into a PIL Image?
Returning a raw image from .NET worked with this python:
def __iter__(self):
    for netbytes in self.iterator:
        pythonbytes = bytes(netbytes)
        image = Image.frombytes('RGB', (1920, 1080), pythonbytes)
        yield image
However, the speed of this was prohibitive: pythonnet took 1.3 seconds per image, versus 0.07 s natively in either Python or .NET.
Therefore I stopped using pythonnet and rewrote it to a TCP client/server architecture.
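For readers who hit the same errors: Image.frombytes expects raw pixel data (for a 1920x1080 RGB image that is exactly 1920 * 1080 * 3 bytes, hence "not enough image data" when it is handed a smaller compressed buffer), and it has no PNG decoder. Encoded formats go through Image.open, which accepts a file-like object. A minimal sketch, assuming pythonbytes holds PNG-encoded bytes as in the question:

import io
from PIL import Image

# pythonbytes is assumed to hold the PNG-encoded bytes returned from .NET
image = Image.open(io.BytesIO(pythonbytes))
image.load()  # force decoding while the buffer is still in scope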

Python - Read image from a URL then use for face_recognition?

I am trying to feed an image from a URL to the face_recognition library that I'm using, but it does not seem to be working.
I have tried the suggestion here: https://github.com/ageitgey/face_recognition/issues/442 but it did not work for me. I think my problem is with the method I'm using to fetch the image, not with the face_recognition library; that's why I decided to post the question here.
Below is my code:
from PIL import Image
import face_recognition
import urllib.request
url = "https://carlofontanos.com/wp-content/themes/carlo-fontanos/img/carlofontanos.jpg"
img = Image.open(urllib.request.urlopen(url))
image = face_recognition.load_image_file(img)
# Find all the faces in the image using the default HOG-based model.
face_locations = face_recognition.face_locations(image)
print("I found {} face(s) in this photograph.".format(len(face_locations)))
for face_location in face_locations:
    # Print the location of each face in this image
    top, right, bottom, left = face_location
    print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))

    # You can access the actual face itself like this:
    face_image = image[top:bottom, left:right]
    pil_image = Image.fromarray(face_image)
    pil_image.show()
I'm getting the following response when running the above code:
Traceback (most recent call last):
File "test.py", line 10, in <module>
image = face_recognition.load_image_file(img)
File "C:\Users\Carl\AppData\Local\Programs\Python\Python37-32\lib\site-packages\face_recognition\api.py", line 83, in load_image_file
im = PIL.Image.open(file)
File "C:\Users\Carl\AppData\Local\Programs\Python\Python37-32\lib\site-packages\PIL\Image.py", line 2643, in open
prefix = fp.read(16)
AttributeError: 'JpegImageFile' object has no attribute 'read'
I think the problem is indicated by the last line, AttributeError: 'JpegImageFile' object has no attribute 'read'.
You don't need Image to load it:
response = urllib.request.urlopen(url)
image = face_recognition.load_image_file(response)
urlopen() returns an object with read() and seek() methods, so it is treated as a file-like object, and load_image_file() accepts either a filename or a file-like object.
urllib.request.urlopen(url) returns an HTTP response, not an image file. I think you are supposed to download the image and give the path of the file as input to load_image_file().
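A minimal sketch of that approach; the URL and the local file name are placeholders:

import urllib.request
import face_recognition

url = "https://example.com/photo.jpg"  # placeholder URL
path, _ = urllib.request.urlretrieve(url, "photo.jpg")  # download to a local file
image = face_recognition.load_image_file(path)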

CV2 saves pictures in BlackNWhite

I'm trying to send a picture with the sockets module and pickle. It works, but the received picture is black and white... Could someone please tell me which part I'm doing wrong?
First I set the encoding parameters:
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY)]
then read the file:
img = cv2.imread('th.jpg', 0)
encode it:
ret, img = cv2.imencode('.jpg', img, encode_param)
dump it, and after that send it with pickle to my client side:
msg = pickle.dumps(img, 0)
On my client side I load it, decode it, and write it to a file:
frame=pickle.loads(message, fix_imports=True, encoding="bytes")
frame = cv2.imdecode(frame, cv2.IMREAD_COLOR)
cv2.imwrite(f'{message_header}.jpg',frame)
but the saved file is black and white... Where could the problem be?
The problem is with the line img = cv2.imread('th.jpg', 0). Change it to the following:
img = cv2.imread('th.jpg', 1)
According to the documentation, the flag 0 in imread (cv2.IMREAD_GRAYSCALE) loads the image as grayscale, while 1 (cv2.IMREAD_COLOR) loads it as a colour image.
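For reference, a minimal sketch of the whole round trip with the corrected flag; the sockets are left out, and the quality value of 90 is an assumption (the question passes no value after cv2.IMWRITE_JPEG_QUALITY):

import cv2
import pickle

encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]  # 90 is an assumed quality value
img = cv2.imread('th.jpg', 1)  # 1 = cv2.IMREAD_COLOR
ret, buf = cv2.imencode('.jpg', img, encode_param)
msg = pickle.dumps(buf, 0)

# ... transmit msg over the socket, then on the client side:
frame = pickle.loads(msg, fix_imports=True, encoding="bytes")
frame = cv2.imdecode(frame, cv2.IMREAD_COLOR)
cv2.imwrite('received.jpg', frame)  # saved in colour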

What is the data format read by the function cv2.imread? Working with tkinter and python

Good day, I am quite new to Python programming and I was tasked to build my own GUI with an image inside it. I have been making good progress, but I got stuck when I wanted to insert an image from my webcam into my GUI. I did manage to get an image from the webcam, but it has to be in a different window from the GUI window.
My GUI code includes a simple snippet like this
(I use range(25) because my webcam needs warming up):
for i in range(25):
    _, frame = cap.read()
    frame = cv2.flip(frame, 1)
    cv2image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
    i += 1
cv2.imshow("Latex Truck", cv2image)
img = cv2image
label = Label(root, image = img)
label.place(x = 300, y = 300)
Now, the problem is this: I successfully obtained the frame I need and was able to show it thanks to cv2.imshow, but when I try to use the same source, cv2image, in tkinter, it shows this error:
Traceback (most recent call last):
File "C:\Python34\lib\tkinter\__init__.py", line 1487, in __call__
return self.func(*args)
File "C:\Users\FF7_C\OneDrive\Desktop\Logo.py", line 82, in Capture
label = Label(root, image = img)
File "C:\Python34\lib\tkinter\__init__.py", line 2573, in __init__
Widget.__init__(self, master, 'label', cnf, kw)
File "C:\Python34\lib\tkinter\__init__.py", line 2091, in __init__
(widgetName, self._w) + extra + self._options(cnf))
_tkinter.TclError: image "[[[ 49 32 22 255]
Now, logically I think I did what I needed to do, which is to extract an image from the webcam, and I did; the only problem now is that I need to understand why tkinter cannot read the same information that cv2.imshow reads.
Can someone guide me on this? Thank you very much! :)
The format returned by cv2.cvtColor(...) is of type numpy.ndarray. You need to convert it to a format recognized by tkinter using the Pillow module:
from tkinter import *
from PIL import Image, ImageTk
import cv2
root = Tk()
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGBA)
# convert to image format recognized by tkinter
img = Image.fromarray(img)
tkimg = ImageTk.PhotoImage(image=img)
Label(root, image=tkimg).pack()
root.mainloop()
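One common pitfall when moving this into a function: tkinter does not keep its own reference to the PhotoImage, so it can be garbage-collected and the label goes blank. Attaching the image to the widget is the usual workaround:

label = Label(root, image=tkimg)
label.image = tkimg  # keep a reference so the PhotoImage is not garbage-collected
label.pack()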

Loading image from a remote server in pyglet or PIL / python

I would like to feed images from a remote machine into pyglet (though I am open to other platforms where I can present images and record the user's mouse clicks and keystrokes). Currently I am trying to do it by serving the images with flask on the remote server and pulling them down with requests:
import requests
from PIL import Image
import io
import pyglet
import numpy as np
r = requests.get('http://{}:5000/test/cat2.jpeg'.format(myip),)
This does not work:
im = pyglet.image.load(io.StringIO(r.text))
# Error:
File "/usr/local/lib/python3.4/dist-packages/pyglet/image/__init__.py", line 178, in load
file = open(filename, 'rb')
TypeError: invalid file: <_io.StringIO object at 0x7f6eb572bd38>
This also does not work:
im = Image.open(io.BytesIO(r.text.encode()))
# Error:
Traceback (most recent call last):
File "<ipython-input-68-409ca9b8f6f6>", line 1, in <module>
im = Image.open(io.BytesIO(r.text.encode()))
File "/usr/local/lib/python3.4/dist-packages/PIL/Image.py", line 2274, in open
% (filename if filename else fp))
OSError: cannot identify image file <_io.BytesIO object at 0x7f6eb5a8b6a8>
Is there another way to do it without saving files on disk?
The first example isn't working properly because of encoding issues. But this will get you on the way to using manual ImageData objects to manipulate images:
import pyglet, urllib.request

# == The Web part:
img_url = 'http://hvornum.se/linux.jpg'
web_response = urllib.request.urlopen(img_url)
img_data = web_response.read()

# == Loading the image part:
window = pyglet.window.Window(fullscreen=False, width=700, height=921)
image = pyglet.sprite.Sprite(pyglet.image.ImageData(700, 921, 'RGB', img_data))

# == Stuff to render the image:
@window.event
def on_draw():
    window.clear()
    image.draw()
    window.flip()

@window.event
def on_close():
    print("I'm closing now")

pyglet.app.run()
Now, the more convenient, less manual way of doing things is to use an io.BytesIO dummy file handle and toss that into pyglet.image.load() with the parameter file=dummy_file, like so:
import pyglet, urllib.request
from io import BytesIO

# == The Web part:
img_url = 'http://hvornum.se/linux.jpg'
web_response = urllib.request.urlopen(img_url)
img_data = web_response.read()
dummy_file = BytesIO(img_data)

# == Loading the image part:
window = pyglet.window.Window(fullscreen=False, width=700, height=921)
image = pyglet.sprite.Sprite(pyglet.image.load('noname.jpg', file=dummy_file))

# == Stuff to render the image:
@window.event
def on_draw():
    window.clear()
    image.draw()
    window.flip()

@window.event
def on_close():
    print("I'm closing now")

pyglet.app.run()
Works on my end and is rather quick as well.
One last note: try putting images into pyglet.sprite.Sprite objects; they tend to be quicker and easier to work with, and they give you a whole bunch of nifty functions (such as easy positioning, spr.scale, and rotation).
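For example, the positioning and scaling mentioned above look roughly like this (the file name is a placeholder):

img = pyglet.image.load('linux.jpg')  # any pyglet image
sprite = pyglet.sprite.Sprite(img, x=100, y=50)  # easy positioning
sprite.scale = 0.5       # draw at half size
sprite.rotation = 90.0   # degrees, clockwise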
You can show a remote image with PIL as follows:
import requests
from PIL import Image
from StringIO import StringIO
r = requests.get('http://{}:5000/test/cat2.jpeg', stream=True)
sio = StringIO(r.raw.read())
im = Image.open(sio)
im.show()
Note that the stream=True option is necessary so the raw data can be read. Also note that this uses StringIO.StringIO (Python 2), not io.StringIO.
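On Python 3, where StringIO.StringIO no longer exists, the same idea works with io.BytesIO and r.content; a sketch, assuming myip is defined as in the question:

import io
import requests
from PIL import Image

r = requests.get('http://{}:5000/test/cat2.jpeg'.format(myip), stream=True)
im = Image.open(io.BytesIO(r.content))
im.show()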
