REQUIREMENT
I'm trying to retrieve an image from the database and set it on a Kivy Image widget, but the operation throws a ValueError and I am unsure of the cause. Any input is welcome.
Database: Sqlite3
Table name: Users
Columns: UserID, UserName, UserImage
def populate_fields(self):  # NEW
    # Code retrieving text data and displaying it in TextInput fields goes here.

    # STEP 1: RETRIEVE IMAGE
    connection = sqlite3.connect("demo.db")
    with connection:
        cursor = connection.cursor()
        cursor.execute("SELECT UserImage FROM Users WHERE UserID=?",
                       (self.data_items[columns[0]]['text'],))
        image = cursor.fetchone()
        data = io.BytesIO(image[0])

    # STEP 2: SET OUTPUT TO IMAGE WIDGET
    self.image.source = data  # ---> triggers the error
ERROR TRACEBACK:
self.image.source = data
File "kivy\weakproxy.pyx", line 33, in kivy.weakproxy.WeakProxy.__setattr__ (kivy\weakproxy.c:1471)
File "kivy\properties.pyx", line 478, in kivy.properties.Property.__set__ (kivy\properties.c:5572)
File "kivy\properties.pyx", line 513, in kivy.properties.Property.set (kivy\properties.c:6352)
File "kivy\properties.pyx", line 504, in kivy.properties.Property.set (kivy\properties.c:6173)
File "kivy\properties.pyx", line 676, in kivy.properties.StringProperty.check (kivy\properties.c:8613)
ValueError: Image.source accept only str
After io.BytesIO() runs, data is a bytes-backed file-like object, not a filename string. Use Kivy's CoreImage and its texture to convert the data.
Replace
self.image.source = data
with:
from kivy.uix.image import CoreImage  # if not already imported
self.image.texture = CoreImage(data, ext="png").texture
Image source
source
Filename / source of your image.
source is a StringProperty and defaults to None
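The retrieval half of this can be checked without Kivy at all. Below is a stdlib-only sketch of the SELECT-a-BLOB step, using an in-memory database and made-up PNG header bytes in place of a real image; note that execute() expects its parameters as a sequence, so the UserID goes in as a 1-tuple:

```python
import io
import sqlite3

# In-memory database standing in for demo.db; schema as in the question
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (UserID INTEGER, UserName TEXT, UserImage BLOB)")

png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16   # stand-in for real PNG data
conn.execute("INSERT INTO Users VALUES (?, ?, ?)", (1, "alice", png_bytes))

# Note the 1-tuple (1,): execute() takes a sequence of parameters
row = conn.execute("SELECT UserImage FROM Users WHERE UserID=?", (1,)).fetchone()
data = io.BytesIO(row[0])   # file-like object, ready for CoreImage(data, ext="png")

assert data.read(8) == b"\x89PNG\r\n\x1a\n"
```

The final data object is exactly what the answer above feeds to CoreImage.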
Ikolim's answer is good, but to be more specific: if you want to display a binary image directly in Kivy, you can simply work with the io module (import io) and the Kivy image module (kivy.uix.image).
Check this code:
from kivy.uix.image import Image, CoreImage
import io

binary_data =   # binary img extracted from sqlite
data = io.BytesIO(binary_data)
img = CoreImage(data, ext="png").texture

new_img = Image()
new_img.texture = img
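The reason this works is that io.BytesIO wraps the blob in an in-memory object with file semantics (read(), seek()), which is what CoreImage consumes in place of a filename. A stdlib-only illustration, with made-up PNG bytes:

```python
import io

blob = b"\x89PNG\r\n\x1a\n" + b"rest of the image"   # as fetched from sqlite
data = io.BytesIO(blob)

# CoreImage(data, ext="png") reads from this object exactly like a file:
assert data.read(8) == b"\x89PNG\r\n\x1a\n"
data.seek(0)            # image loaders typically rewind before parsing
assert data.read() == blob
```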
import mysql.connector
import base64
import io
from base64 import b64decode
from PIL import Image
import PIL.Image

with open('assets\zoha.jpeg', 'rb') as f:
    photo = f.read()
encodestring = base64.b64encode(photo)

db = mysql.connector.connect(user="root", password="",
                             host="localhost", database="pythonfacedetection")
mycursor = db.cursor()

sql = "INSERT INTO image(img) VALUES(%s)"
mycursor.execute(sql, (encodestring,))
db.commit()

sql1 = "select img from image where id=75"
mycursor.execute(sql1)
data = mycursor.fetchall()
image = data[0][0]
img = base64.b64decode(str(image))
img2 = io.BytesIO(img)
img3 = Image.open(img2)
img3.show()
db.close()
I want to save my photo in the database and display that photo from the database. The data saves to the database properly, but it cannot be displayed. I have tried a lot, but this error shows every time. Please advise me on how I can solve this.
Traceback (most recent call last):
  File "C:/Users/MDSHADMANZOHA/PycharmProjects/ImageFromDatabase/main.py", line 28, in <module>
    img3= Image.open(img2)
  File "C:\Users\MDSHADMANZOHA\PycharmProjects\ImageFromDatabase\venv\lib\site-packages\PIL\Image.py", line 3009, in open
    "cannot identify image file %r" % (filename if filename else fp)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x0000020247DCE308>
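No accepted fix appears here, but the traceback suggests an encode/decode asymmetry: the image was stored with base64.b64encode, and calling str() on the fetched bytes wraps them in a literal "b'...'" before b64decode sees them, which corrupts the decoded data. A stdlib-only sketch of the symmetric round trip (the payload bytes are hypothetical):

```python
import base64
import io

payload = b"\xff\xd8\xff\xe0" + b" fake jpeg bytes"   # stand-in image data
stored = base64.b64encode(payload)                    # what the INSERT wrote

# Decode the stored bytes directly; do not pass str(image) to b64decode,
# since str() on bytes yields the "b'...'" wrapper as literal characters.
restored = base64.b64decode(stored)
assert restored == payload

buf = io.BytesIO(restored)   # Image.open(buf) can now identify the format
```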
Here is part of my code:
import sqlite3
import tkinter
import time
import PIL.Image, PIL.ImageTk
from PIL import Image, ImageTk
import cv2
import numpy as np
from tkinter import Tk, Label, Button, Entry, Toplevel

r = Tk()
conn = sqlite3.connect('datastorage.db')
print("Opened database successfully")

def snapshot(self):
    # Get a frame from the video source
    ret, frame = self.vid.get_frame()
I am capturing a frame from video and need to insert it into a database table that contains a text column and a BLOB (Binary Large Object) column. Other similar questions suggest converting the image to a string and storing that, but I already have images stored in BLOB format and extract them using imdecode, as seen in the code below, so I need to store a BLOB only.
blob_data = row[1]
nparr = np.frombuffer(blob_data, np.uint8)
img_np = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
image1 = cv2.resize(img_np, (260, 200))
#cv2.imshow("data", image1)
#break

# Rearrange the color channels
b, g, r = cv2.split(image1)
image1 = cv2.merge((r, g, b))
hsv1 = cv2.cvtColor(image1, cv2.COLOR_RGB2HSV)
kernel2 = np.ones((3, 3), np.uint8)
I tried using the following query:
cursor = conn.execute("create table if not exists user_6 (id text, img blob)")
cursor = conn.execute("insert into user_6 values (?,?)", (ins, sqlite3.Binary(frame)))
but I am unable to display it using the same method I used to display all the other entries (the display code is the second code block above). I am encountering this error:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\ABC\AppData\Local\Programs\Python\Python37\lib\tkinter\__init__.py", line 1705, in __call__
return self.func(*args)
File "C:\Users\ABC\Desktop\Python tut\gui databse.py", line 71, in display
image1=cv2.resize(img_np,(130,100))
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\imgproc\src\resize.cpp:4045: error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
Can anyone help me out?
I was able to do it using the following code:
img_str = cv2.imencode('.jpg', frame)[1].tobytes()
cursor = conn.execute("create table if not exists user_6 (id text, img blob)")
cursor = conn.execute("insert into user_6 values (?,?)", (ins, img_str))
conn.commit()
Although the colors in the displayed image differed from the captured frame; that is most likely a BGR-vs-RGB channel-order mismatch, since OpenCV captures and encodes frames in BGR order.
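The store-and-retrieve round trip in this answer can be sketched end to end with the standard library alone; the bytes below stand in for the output of cv2.imencode, and the table name is the one from the answer:

```python
import sqlite3

# Stand-in for cv2.imencode('.jpg', frame)[1].tobytes()
img_str = b"\xff\xd8\xff\xe0 encoded jpeg bytes \xff\xd9"

conn = sqlite3.connect(":memory:")
conn.execute("create table if not exists user_6 (id text, img blob)")
conn.execute("insert into user_6 values (?,?)", ("user1", img_str))
conn.commit()

# Reading it back yields identical bytes, ready for np.frombuffer + cv2.imdecode
row = conn.execute("select img from user_6 where id=?", ("user1",)).fetchone()
assert row[0] == img_str
```

Because the bytes are already a complete JPEG, the display path can decode them exactly like the entries that were stored from files.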
I am trying to feed an image from URL to a face_recognition library that I'm using, but it does not seem to be working.
I have tried the suggestion here: https://github.com/ageitgey/face_recognition/issues/442 but it did not work for me. I'm thinking that my problem is with the method that I'm using for fetching the image, and not the face_recognition library, that's why I decided to post the question here.
Below is my code:
from PIL import Image
import face_recognition
import urllib.request

url = "https://carlofontanos.com/wp-content/themes/carlo-fontanos/img/carlofontanos.jpg"
img = Image.open(urllib.request.urlopen(url))
image = face_recognition.load_image_file(img)

# Find all the faces in the image using the default HOG-based model.
face_locations = face_recognition.face_locations(image)
print("I found {} face(s) in this photograph.".format(len(face_locations)))

for face_location in face_locations:
    # Print the location of each face in this image
    top, right, bottom, left = face_location
    print("A face is located at pixel location Top: {}, Left: {}, Bottom: {}, Right: {}".format(top, left, bottom, right))

    # You can access the actual face itself like this:
    face_image = image[top:bottom, left:right]
    pil_image = Image.fromarray(face_image)
    pil_image.show()
I'm getting the following response when running the above code:
Traceback (most recent call last):
File "test.py", line 10, in <module>
image = face_recognition.load_image_file(img)
File "C:\Users\Carl\AppData\Local\Programs\Python\Python37-32\lib\site-packages\face_recognition\api.py", line 83, in load_image_file
im = PIL.Image.open(file)
File "C:\Users\Carl\AppData\Local\Programs\Python\Python37-32\lib\site-packages\PIL\Image.py", line 2643, in open
prefix = fp.read(16)
AttributeError: 'JpegImageFile' object has no attribute 'read'
I think the problem is with the line AttributeError: 'JpegImageFile' object has no attribute 'read'
You don't need Image to load it:
response = urllib.request.urlopen(url)
image = face_recognition.load_image_file(response)
urlopen() returns an object with read() and seek() methods, so it is treated as a file-like object. And load_image_file() needs a filename or a file-like object.
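The distinction can be shown without any network access. Per the traceback above, PIL.Image.open's first act is `prefix = fp.read(16)`, so anything lacking a read() method fails exactly the way the question's already-decoded JpegImageFile did. The names below are hypothetical stand-ins, not the real libraries:

```python
import io

def open_like_pil(fp):
    # Mimics the first step of PIL.Image.open from the traceback
    return fp.read(16)

class JpegImageFileStandIn:
    """Stand-in for an already-decoded image object: it has no read()."""
    pass

response_like = io.BytesIO(b"\xff\xd8\xff\xe0 jpeg header and data")
assert open_like_pil(response_like).startswith(b"\xff\xd8")   # file-like: fine

raised = False
try:
    open_like_pil(JpegImageFileStandIn())   # same failure mode as the question
except AttributeError as exc:
    raised = "read" in str(exc)
assert raised
```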
urllib.request.urlopen(url) returns an HTTP response, not an image file. I think you are supposed to download the image and give the path of the file as input to load_image_file().
I would like to feed images from a remote machine into pyglet (though I am open to other platforms where I can present images and record the user's mouse clicks and keystrokes). Currently I am trying to do it with Flask on the remote server, pulling the images down with requests:
import requests
from PIL import Image
import io
import pyglet
import numpy as np

r = requests.get('http://{}:5000/test/cat2.jpeg'.format(myip))
This does not work:
im = pyglet.image.load(io.StringIO(r.text))
# Error:
File "/usr/local/lib/python3.4/dist-packages/pyglet/image/__init__.py", line 178, in load
file = open(filename, 'rb')
TypeError: invalid file: <_io.StringIO object at 0x7f6eb572bd38>
This also does not work:
im = Image.open(io.BytesIO(r.text.encode()))
# Error:
Traceback (most recent call last):
File "<ipython-input-68-409ca9b8f6f6>", line 1, in <module>
im = Image.open(io.BytesIO(r.text.encode()))
File "/usr/local/lib/python3.4/dist-packages/PIL/Image.py", line 2274, in open
% (filename if filename else fp))
OSError: cannot identify image file <_io.BytesIO object at 0x7f6eb5a8b6a8>
Is there another way to do it without saving files on disk?
The first example isn't working properly because of encoding issues, but it will get you on the way to using manual ImageData objects to manipulate images:
import pyglet, urllib.request

# == The Web part:
img_url = 'http://hvornum.se/linux.jpg'
web_response = urllib.request.urlopen(img_url)
img_data = web_response.read()

# == Loading the image part:
window = pyglet.window.Window(fullscreen=False, width=700, height=921)
image = pyglet.sprite.Sprite(pyglet.image.ImageData(700, 921, 'RGB', img_data))

# == Stuff to render the image:
@window.event
def on_draw():
    window.clear()
    image.draw()
    window.flip()

@window.event
def on_close():
    print("I'm closing now")

pyglet.app.run()
Now, the more convenient, less manual way of doing things is to use an io.BytesIO dummy file handle and toss that into pyglet.image.load() with the parameter file=dummy_file, like so:
import pyglet, urllib.request
from io import BytesIO

# == The Web part:
img_url = 'http://hvornum.se/linux.jpg'
web_response = urllib.request.urlopen(img_url)
img_data = web_response.read()
dummy_file = BytesIO(img_data)

# == Loading the image part:
window = pyglet.window.Window(fullscreen=False, width=700, height=921)
image = pyglet.sprite.Sprite(pyglet.image.load('noname.jpg', file=dummy_file))

# == Stuff to render the image:
@window.event
def on_draw():
    window.clear()
    image.draw()
    window.flip()

@window.event
def on_close():
    print("I'm closing now")

pyglet.app.run()
Works on my end and is rather quick as well.
One last note: try putting images into pyglet.sprite.Sprite objects. They tend to be quicker and easier to work with, and they give you a whole bunch of nifty functions (such as easy positioning, spr.scale, and rotation).
You can show a remote image by PIL as follows:
import requests
from PIL import Image
from StringIO import StringIO

r = requests.get('http://{}:5000/test/cat2.jpeg', stream=True)
sio = StringIO(r.raw.read())
im = Image.open(sio)
im.show()
Note that the stream=True option is necessary to read the raw data into a StringIO object. Also note that this uses StringIO.StringIO (Python 2), not io.StringIO.
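On Python 3 the equivalent dummy file is io.BytesIO, and with requests the body bytes are already available as r.content (r.text is a str, which is what broke the question's first attempt). A sketch using a hypothetical stand-in for the response object, since no server is running here:

```python
import io

body = b"\x89PNG\r\n\x1a\n fake image body"

class FakeResponse:
    """Hypothetical stand-in for requests.Response."""
    content = body                  # bytes: the decoded response body
    text = body.decode("latin-1")   # str: the wrong type for binary data

r = FakeResponse()
buf = io.BytesIO(r.content)         # file-like, fine for Image.open(buf)
assert buf.read(4) == b"\x89PNG"
assert isinstance(r.text, str)      # BytesIO(r.text) would raise TypeError
```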
Today I was making some code to resize a few images that have already been uploaded. It is a method that is called when a button is pressed in the Odoo web client.
Code:
@api.multi
def resize_image(self):
    for record in self:
        Image.open(self.foto1).resize((800, 600), Image.ANTIALIAS).save(self.foto1, quality=100)
Error:
File "/opt/odoo/addons/asset/asset.py", line 209, in resize_image
Image.open(self.foto1).resize((800, 600),Image.ANTIALIAS).save(self.foto1, quality=100)
IOError: [Errno 36] File name too long: '/9j/4AAQSkZJRgABAQEASABIAAD/7QFKUGhvdG9zaG9wIDMuMAA4QklNBAQAAAAAAREcAVoAAxslRxwCAAACAAIcAhkAJkFGLVMgR
I can't give the whole file name from the error, because it is probably the image bytes themselves...
import cStringIO

@api.multi
def resize_image(self):
    for record in self:
        Image.open(cStringIO.StringIO(self.foto1.decode('base64'))).resize((800, 600), Image.ANTIALIAS).save(self.foto1, quality=100)
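cStringIO and bytes.decode('base64') are Python 2 only. On Python 3 the same flow uses io.BytesIO and the base64 module, and the resized image must be saved into a buffer and re-encoded rather than passed back as a "filename". A stdlib-only sketch of the surrounding plumbing, with the PIL step shown as comments (field name and image bytes are hypothetical):

```python
import base64
import io

raw = b"\x89PNG\r\n\x1a\n original image bytes"   # hypothetical image data
field_value = base64.b64encode(raw)               # what an Odoo binary field holds

buf = io.BytesIO(base64.b64decode(field_value))
# With PIL available, the middle of the method would be roughly:
#   img = Image.open(buf).resize((800, 600))
#   out = io.BytesIO()
#   img.save(out, format="PNG", quality=100)
#   record.foto1 = base64.b64encode(out.getvalue())
assert buf.getvalue() == raw
```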