I have a file called resources.py which loads images to be used in the main project.
So far the code looks like this:
import pyglet
pyglet.resource.path = ["../resources", "C:/users/____/Pictures/useful icons"]
pyglet.resource.reindex()
checkbox_unchecked = pyglet.resource.image("checkbox_unchecked.png")
checkbox_checked = pyglet.resource.image("checkbox_checked.png")
checkbox_unchecked_dark = pyglet.resource.image("checkbox_unchecked_dark.png")
checkbox_checked_dark = pyglet.resource.image("checkbox_checked_dark.png")
checkbox_unchecked_thick = pyglet.resource.image("checkbox_unchecked_thick.png")
checkbox_checked_thick = pyglet.resource.image("checkbox_checked_thick.png")
checkbox_unchecked_disabled = pyglet.resource.image("checkbox_unchecked_disabled.png")
checkbox_checked_disabled = pyglet.resource.image("checkbox_checked_disabled.png")
I thought this was an unwieldy way to do it, so what came to mind is something like:
import pyglet
pyglet.resource.path = ['../resources', "C:/users/____/Pictures/useful icons"]
pyglet.resource.reindex()
images = ["checkbox_unchecked.png", "checkbox_checked.png", ...]
for image in images:
    exec(f'{image} = pyglet.resource.image("{image}")')
This of course uses the exec function, which I know is frowned upon, as there is usually a better way of doing it. The only other way I can see of doing it is creating a dictionary instead.
Like so:
import pyglet
pyglet.resource.path = ['../resources', "C:/users/____/Pictures/useful icons"]
pyglet.resource.reindex()
images = ["checkbox_unchecked.png", "checkbox_checked.png", ...]
imageDict = {}
for image in images:
    imageDict[image] = pyglet.resource.image(image)
Which of these (or other methods) is the most DRY-compliant and comprehensible way to load the images?
You might consider a dictionary comprehension in combination with the pathlib module, so that when you look keys up in the dictionary you don't have to include the extension:
from pathlib import Path
import pyglet
pyglet.resource.path = ['../resources', "C:/users/____/Pictures/useful icons"]
pyglet.resource.reindex()
images = ["checkbox_unchecked.png", "checkbox_checked.png", ...]
imageDict = { Path(image).stem: pyglet.resource.image(image) for image in images }
Then you would get your images out with:
imageDict['checkbox_unchecked']
You can use your dictionary solution to get what you originally wanted by using globals(), which is a dict of all the global variables.
for image in images:
    globals()[image.split('.')[0]] = pyglet.resource.image(image)
Or:
globals().update((image.split('.')[0], pyglet.resource.image(image)) for image in images)
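If you'd rather keep attribute-style access without writing into globals(), a types.SimpleNamespace built from the same comprehension is another option. A minimal sketch, with pyglet.resource.image replaced by a stand-in loader so it is self-contained:

```python
from pathlib import Path
from types import SimpleNamespace

def load(name):
    # Stand-in for pyglet.resource.image(name).
    return f"<image {name}>"

images = ["checkbox_unchecked.png", "checkbox_checked.png"]

# Path(n).stem strips the ".png", so the attribute names are clean.
resources = SimpleNamespace(**{Path(n).stem: load(n) for n in images})

print(resources.checkbox_unchecked)  # <image checkbox_unchecked.png>
```

This gives you `resources.checkbox_checked` instead of dictionary lookups, while keeping everything in one named object.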
Related
I can't set my game's window background image.
This is what I have tried:
mainbg = Sky(texture = 'Assets/ruined_city_main_bg')
But that was incomprehensible and scary.
Also I have tried:
mainbg = Entity(parent = camera.ui, '''All other arguments''', texture = 'Assets/ruined_city_main_bg', position = (0, 0))
The "ENTITY BG" is not showing.
You can't see it because you didn't give it a model. Try adding model='quad'.
mainbg = Entity(parent=camera.ui, model='quad', texture='ruined_city_main_bg')
It looks like you are not setting the file extension.
For example:
ruined_city_main_bg.png
where .png is an image extension.
or try:
skybox_image = load_texture("sky_sunset.jpg")
Sky(texture=skybox_image)
Don't forget the file extension.
I wrote some code that uses OCR to extract text from screenshots of follower lists and then transfer them into a data frame.
The reason I have to do the hassle with "name" / "display name" and remove blank lines is that the initial text extraction looks something like this:
Screenname 1
name 1
Screenname 2
name 2
(and so on)
So I know in which order each extraction will be.
My code works well for 1-30 images, but if I take more than that it gets a bit slow. My goal is to run around 5-10k screenshots through it at once. I'm pretty new to programming, so any ideas/tips on how to optimize the speed would be very appreciated! Thank you all in advance :)
from PIL import Image
from pytesseract import pytesseract
import os
import pandas as pd
from itertools import chain

list_name = [""]
liste_anzeigename = [""]
sort = [""]

f = r'/Users/PycharmProjects/pythonProject/images'
myconfig = r"--psm 4 --oem 3"

# crop every screenshot down to the follower list
for file in os.listdir(f):
    f_img = f + "/" + file
    img = Image.open(f_img)
    img = img.crop((240, 400, 800, 2400))
    img.save(f_img)

# run OCR on each cropped image and pull out the non-blank lines
for file in os.listdir(f):
    f_img = f + "/" + file
    test = pytesseract.image_to_string(Image.open(f_img), config=myconfig)
    lines = test.split("\n")
    list_raw = [line for line in lines if line.strip() != ""]
    sort.append(list_raw)
    name = {list_raw[0], list_raw[2], list_raw[4],
            list_raw[6], list_raw[8], list_raw[10],
            list_raw[12], list_raw[14], list_raw[16]}
    list_name.append(name)
    anzeigename = {list_raw[1], list_raw[3], list_raw[5],
                   list_raw[7], list_raw[9], list_raw[11],
                   list_raw[13], list_raw[15], list_raw[17]}
    liste_anzeigename.append(anzeigename)

reihenfolge_name = list(chain.from_iterable(list_name))
index_anzeigename = list(chain.from_iterable(liste_anzeigename))
sortieren = list(chain.from_iterable(sort))
print(list_raw)
sort_name = sorted(reihenfolge_name, key=sortieren.index)
sort_anzeigename = sorted(index_anzeigename, key=sortieren.index)
final = pd.DataFrame(zip(sort_name, sort_anzeigename), columns=['name', 'anzeigename'])
print(final)
Use a multiprocessing.Pool.
Combine the code under the for-loops, and put it into a function process_file.
This function should accept a single argument; the name of a file to process.
Next using listdir, create a list of files to process.
Then create a Pool and use its map method to process the list:
import multiprocessing as mp
import os

def process_file(name):
    # your code goes here.
    return anzeigename  # Or whatever the result should be.

if __name__ == "__main__":
    f = r'/Users/PycharmProjects/pythonProject/images'
    p = mp.Pool()
    liste_anzeigename = p.map(process_file, os.listdir(f))
This will run your code in parallel in as many cores as your CPU has.
For an N-core CPU this will take approximately 1/N of the time compared to doing it without multiprocessing.
Note that the return value of the worker function should be pickleable; it has to be returned from the worker process to the parent process.
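A self-contained sketch of this pattern, with a stand-in worker in place of the crop-and-OCR code (tesseract isn't needed to show the structure):

```python
import multiprocessing as mp

def process_file(filename):
    # Stand-in for the real work: in your code this would open the
    # image, crop it, run pytesseract, and return the extracted lines.
    return filename.upper()

if __name__ == "__main__":
    files = ["a.png", "b.png", "c.png"]
    # One worker per CPU core by default; map preserves input order.
    with mp.Pool() as pool:
        results = pool.map(process_file, files)
    print(results)  # ['A.PNG', 'B.PNG', 'C.PNG']
```

Because the worker only receives a filename and returns plain data, both ends of the exchange are picklable, which is what Pool requires.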
I'm trying to get the image from the song's album to display in the window with the song title and artist, but it just doesn't do anything. I've tried replacing the imageLabel line with
imageLabel = tkinter.Label(window,image=tkinter.PhotoImage(file="CurrentSong.jpg"))
but it still doesn't work.
import requests
import time
import tkinter

token = ''
endpoint = "https://api.spotify.com/v1/me/player/currently-playing"
spotifyHeaders = {'Authorization': 'Bearer ' + token}
requestAmount = 1

window = tkinter.Tk(className="|CurrentSong Spotify Song|")
window.geometry('400x400')
canvas = tkinter.Canvas(window, height=1000, width=1000)
canvas.pack()
songLabel = tkinter.Label(window, bg='grey')
songLabel.pack()

def GrabSpotifyCurSong(curSongJson):
    return curSongJson['item']['name']

def GrabSpotifyCurArtist(curSongJson):
    return curSongJson['item']['artists'][0]['name']

def GrabCurrentSongImage(curSongJson):
    return curSongJson['item']['album']['images'][0]['url']

def displaySongs():
    while True:
        try:
            curSong = requests.get(endpoint, headers=spotifyHeaders)
            curSongJson = curSong.json()
            break
        except:
            print("Please start listening to a song")
            time.sleep(2)
    with open('CurrentSong.png', 'wb+') as SongImage:
        response = requests.get(GrabCurrentSongImage(curSongJson))
        SongImage.write(response.content)
    currentSong = GrabSpotifyCurSong(curSongJson)
    currentArtist = GrabSpotifyCurArtist(curSongJson)
    img = tkinter.PhotoImage(file="CurrentSong.png")
    imageLabel = tkinter.Label(window, image=img)
    # songLabel['text'] = f'{currentArtist} - {currentSong}'
    # songLabel.place(height=400, width=400)
    print(f'{currentArtist} - {currentSong}')
    window.after(2500, displaySongs)

displaySongs()
window.mainloop()
Images used with tkinter have to be PhotoImage instances; here it is just a string with the location of the image, and tkinter does not understand that. Furthermore, tkinter.PhotoImage does not recognize the JPEG format, so you have to convert the file to PNG or use PIL.ImageTk.PhotoImage to use JPEG.
For JPEG and other formats too:
First pip install Pillow and then:
import tkinter
from PIL import Image, ImageTk
....
img = ImageTk.PhotoImage(Image.open("CurrentSong.jpg"))
imageLabel = tkinter.Label(window,image=img)
Adding further here, you can also use ImageTk.PhotoImage(file="CurrentSong.jpg"), but that removes the flexibility you get if you want to, say, resize or apply filters to your image first. If you don't need that, then use it.
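To illustrate that flexibility, here is a minimal sketch of resizing with PIL before the image would be handed to ImageTk; the album art is replaced by a blank generated image, and the ImageTk.PhotoImage step is left out, so the snippet runs without a Tk window:

```python
from PIL import Image

# Stand-in for the downloaded album art (a blank 640x640 image).
art = Image.new("RGB", (640, 640), "grey")

# Resize with PIL first; inside the GUI code the result would then be
# wrapped with ImageTk.PhotoImage(thumb) and given to the Label.
thumb = art.resize((200, 200))
print(thumb.size)  # (200, 200)
```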
For GIF, PGM, PPM, and PNG:
img = tkinter.PhotoImage(file="CurrentSong.png")
imageLabel = tkinter.Label(window,image=img)
Also note that if these are inside a function, you have to keep a reference to the PhotoImage object (e.g. imageLabel.image = img) to avoid it being collected by the gc after the function finishes running.
I have a module that creates a savedSettings.py file after the user has used my tool. This file is filled with variables to load into the GUI the next time the tool is used.
I have some checkboxes and a optionMenu. Reading and setting the variables for the checkboxes is as simple as:
# load settings into gui
if os.path.exists(userSettings):
    sys.path.append(toolFolder)
    import savedSettings
    viewCBvalue = savedSettings.viewCheck
    ornamentCBvalue = savedSettings.ornamentCheck
    renderCBvalue = savedSettings.renderCheck
I thought the optionMenu would be the same and wrote:
encodingOMvalue = savedSettings.encodingCheck
When I now tell the GUI to use the variables:
cmds.checkBoxGrp( 'viewCB', label = 'View: ', value1 = viewCBvalue)
cmds.checkBoxGrp( 'ornamentCB', label = 'Show Ornaments: ', value1 = ornamentCBvalue)
cmds.checkBoxGrp( 'renderCB', label = 'Render offscreen: ', value1 = renderCBvalue)
cmds.optionMenuGrp( 'encodingOM', label = 'Encoding ', value = encodingOMvalue )
cmds.menuItem( 'tif', label = 'tif')
cmds.menuItem( 'jpg', label = 'jpg')
cmds.menuItem( 'png', label = 'png')
I get the following error:
RuntimeError: Item not found: tif #
My savedSettings.py looks like this:
# User Settings Savefile:
viewCheck = False
ornamentCheck = False
renderCheck = False
encodingCheck = "tif"
It would be great if someone could explain what I am doing wrong and how to set variables for the optionMenu.
Thanks for taking the time in advance, and have a nice day coding!
Don't do this. Instead use Maya's internal mechanism, optionVar, for this.
But if you must do this then know that when you do:
import savedSettings
whatever is defined in savedSettings is stored in the savedSettings namespace, so if the file defines viewCheck you access it as savedSettings.viewCheck. You could load everything into the main namespace with from savedSettings import *, but you really, really do not want to do this!
Python's import is not a normal function call; the result gets cached, so re-importing will not pick up newly saved values.
There are various other problems too. Don't use import for this purpose.
If you do not want to use optionVar, consider using pickle
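A minimal sketch of the pickle route, assuming a hypothetical save location (in your tool this would be somewhere inside toolFolder instead of the temp directory):

```python
import os
import pickle
import tempfile

# Hypothetical save location for the tool's settings.
settingsPath = os.path.join(tempfile.gettempdir(), "savedSettings.pkl")

settings = {
    "viewCheck": False,
    "ornamentCheck": False,
    "renderCheck": False,
    "encodingCheck": "tif",
}

# Save when the tool closes...
with open(settingsPath, "wb") as fh:
    pickle.dump(settings, fh)

# ...and load when it starts; re-reading always sees the latest values,
# with none of import's caching behaviour.
with open(settingsPath, "rb") as fh:
    loaded = pickle.load(fh)
print(loaded["encodingCheck"])  # tif
```

The loaded dict can then feed the checkBoxGrp/optionMenuGrp calls the same way the module attributes did.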
I'm trying to save a captured 640x480 RGB image from NAO's front camera to my computer. I'm using Python and PIL to do so. Unfortunately, the image just won't save on my computer, no matter what image type or path I use as parameters of the Image.save() method. The image created with PIL contains valid RGB information, though. Here's my code sample from Choregraphe:
import Image

def onInput_onStart(self):
    cam_input = ALProxy("ALVideoDevice")
    nameId = cam_input.subscribeCamera("Test_Cam", 1, 2, 13, 20)
    image = cam_input.getImageRemote(nameId)  # captures an image
    w = image[0]  # get the image width
    h = image[1]  # get the image height
    pixel_array = image[6]  # contains the image data
    result = Image.fromstring("RGB", (w, h), pixel_array)
    # the following line doesnt work
    result.save("C:\Users\Claudia\Desktop\NAO\Bilder\test.png", "PNG")
    cam_input.releaseImage(nameId)
    cam_input.unsubscribe(nameId)
    pass
Thank you so much for your help in advance!
- a frustrated student
In the comment, you say the code is pasted from Choregraphe, so I guess you launch it using Choregraphe.
If so, the code is injected into your robot and then started.
So your image is saved to the NAO's hard drive, and your robot doesn't have a folder named "C:\Users\Claudia\Desktop\NAO\Bilder".
So change the path to "/home/nao/test.png", start your code, then log into your NAO using putty or browse its folders using winscp (as it looks like you're using Windows).
And you should see your image file.
In order for your code to run correctly it needs to be properly indented. Your code should look like this:
import Image

def onInput_onStart(self):
    cam_input = ALProxy("ALVideoDevice")
    nameId = cam_input.subscribeCamera("Test_Cam", 1, 2, 13, 20)
    image = cam_input.getImageRemote(nameId)  # captures an image
    w = image[0]  # get the image width
    h = image[1]  # get the image height
    pixel_array = image[6]  # contains the image data
    ...
Make sure to indent everything that's inside the def onInput_onStart(self): method.
Sorry for the late response, but it may be helpful for someone. You should try it with naoqi. Here is the documentation for retrieving images:
http://doc.aldebaran.com/2-4/dev/python/examples/vision/get_image.html
The original code was not working for me, so I made some tweaks.
import argparse
import time

import qi
import Image

parser = argparse.ArgumentParser()
parser.add_argument("--ip", type=str, default="nao.local.",
                    help="Robot IP address. On robot or Local Naoqi: use 'nao.local.'.")
parser.add_argument("--port", type=int, default=9559,
                    help="Naoqi port number")
args = parser.parse_args()

session = qi.Session()
try:
    session.connect("tcp://" + args.ip + ":" + str(args.port))
except RuntimeError:
    pass

"""
First get an image, then show it on the screen with PIL.
"""
# Get the service ALVideoDevice.
video_service = session.service("ALVideoDevice")
resolution = 2   # VGA
colorSpace = 11  # RGB
videoClient = video_service.subscribe("python_client", 0, 3, 13, 1)

t0 = time.time()
# Get a camera image.
# image[6] contains the image data passed as an array of ASCII chars.
naoImage = video_service.getImageRemote(videoClient)
t1 = time.time()

# Time the image transfer.
print("acquisition delay ", t1 - t0)
# video_service.unsubscribe(videoClient)

# Now we work with the image returned and save it as a PNG.
# Get the image size and pixel array.
imageWidth = naoImage[0]
imageHeight = naoImage[1]
array = naoImage[6]
image_string = str(bytearray(array))

# Create a PIL Image from our pixel array.
im = Image.fromstring("RGB", (imageWidth, imageHeight), image_string)

# Save the image.
im.save("C:\\Users\\Lenovo\\Desktop\\PROJEKTI\\python2-connect4\\camImage.png", "PNG")
Be careful to use Python 2.7.
The code runs on your computer, not on the NAO robot!