Python + GStreamer: scale video to window

I'm having some trouble rescaling GStreamer's video output to the dimensions of the window the video is displayed in (while retaining the video's aspect ratio). The problem is that I first need to preroll the video to determine its dimensions by retrieving the negotiated caps, and then calculate the dimensions it needs to be displayed at to fit the window. Once I have prerolled the video and obtained the dimension caps, I cannot change the video's dimensions anymore. Setting new caps still results in the video being output at its original size. What must I do to solve this?
Just to be complete: in the current implementation I cannot render to an OpenGL texture, which would have easily solved this problem, because you could simply render the output to the texture and scale it to fit the screen. I have to draw the output on a pygame surface, which needs to have the correct dimensions. pygame does offer functionality to scale its surfaces, but I think such an implementation (as I have now) is slower than retrieving the frames at their correct size directly from GStreamer (am I right?)
This is my code for loading and displaying the video (I omitted the main loop stuff):
def calcScaledRes(self, screen_res, image_res):
    """Calculate image size so it fits the screen

    Args
        screen_res (tuple) - Display window size/Resolution
        image_res (tuple)  - Image width and height

    Returns
        tuple - width and height of image scaled to window/screen
    """
    rs = screen_res[0]/float(screen_res[1])
    ri = image_res[0]/float(image_res[1])

    if rs > ri:
        return (int(image_res[0] * screen_res[1]/image_res[1]), screen_res[1])
    else:
        return (screen_res[0], int(image_res[1]*screen_res[0]/image_res[0]))
def load(self, vfile):
    """
    Loads a video file and makes it ready for playback

    Arguments:
    vfile -- the uri to the file to be played
    """
    # Info required for color space conversion (YUV->RGB)
    # masks are necessary for correct display on unix systems
    _VIDEO_CAPS = ','.join([
        'video/x-raw-rgb',
        'red_mask=(int)0xff0000',
        'green_mask=(int)0x00ff00',
        'blue_mask=(int)0x0000ff'
    ])
    self.caps = gst.Caps(_VIDEO_CAPS)

    # Create videoplayer and load URI
    self.player = gst.element_factory_make("playbin2", "player")
    self.player.set_property("uri", vfile)

    # Enable deinterlacing of video if necessary
    self.player.props.flags |= (1 << 9)

    # Reroute frame output to Python
    self._videosink = gst.element_factory_make('appsink', 'videosink')
    self._videosink.set_property('caps', self.caps)
    self._videosink.set_property('async', True)
    self._videosink.set_property('drop', True)
    self._videosink.set_property('emit-signals', True)
    self._videosink.connect('new-buffer', self.__handle_videoframe)
    self.player.set_property('video-sink', self._videosink)

    # Preroll movie to get dimension data
    self.player.set_state(gst.STATE_PAUSED)

    # If movie is loaded correctly, info about the clip should be available
    if self.player.get_state(gst.CLOCK_TIME_NONE)[0] == gst.STATE_CHANGE_SUCCESS:
        pads = self._videosink.pads()
        for pad in pads:
            caps = pad.get_negotiated_caps()[0]
            self.vidsize = caps['width'], caps['height']
    else:
        raise exceptions.runtime_error("Failed to retrieve video size")

    # Calculate size of video when fit to screen
    self.scaledVideoSize = self.calcScaledRes((self.screen_width, self.screen_height), self.vidsize)

    # Calculate the top left corner of the video (to later center it vertically on screen)
    self.vidPos = ((self.screen_width - self.scaledVideoSize[0]) / 2, (self.screen_height - self.scaledVideoSize[1]) / 2)

    # Add width and height info to video caps and reload caps
    _VIDEO_CAPS += ", width={0}, height={1}".format(self.scaledVideoSize[0], self.scaledVideoSize[1])
    self.caps = gst.Caps(_VIDEO_CAPS)
    self._videosink.set_property('caps', self.caps)  #??? not working, video still displayed in original size
def __handle_videoframe(self, appsink):
    """
    Callback method for handling a video frame

    Arguments:
    appsink -- the sink to which gst supplies the frame (not used)
    """
    buffer = self._videosink.emit('pull-buffer')
    img = pygame.image.frombuffer(buffer.data, self.vidsize, "RGB")

    # Upscale image to new surface if presented fullscreen
    # Create the surface if it doesn't exist yet and keep rendering to this surface
    # for future frames (should be faster)
    if not hasattr(self, "destSurf"):
        self.destSurf = pygame.transform.scale(img, self.destsize)
    else:
        pygame.transform.scale(img, self.destsize, self.destSurf)

    self.screen.blit(self.destSurf, self.vidPos)

    # Swap the buffers
    pygame.display.flip()

    # Increase frame counter
    self.frameNo += 1

I'm pretty sure that your issue was (as it has been a very long time since you asked this question) that you never hooked up the bus to watch for the messages being emitted.
The code for this is usually something like this:
def some_function(self):
    # code defining Gplayer (the pipeline)
    # ...
    # here
    Gplayer.set_property('flags', self.GST_VIDEO|self.GST_AUDIO|self.GST_TEXT|self.GST_SOFT_VOLUME|self.GST_DEINTERLACE)
    # more code
    # ...
    # finally
    # Create the bus to listen for messages
    bus = Gplayer.get_bus()
    bus.add_signal_watch()
    bus.enable_sync_message_emission()
    bus.connect('message', self.OnBusMessage)
    bus.connect('sync-message::element', self.OnSyncMessage)

# Listen for gstreamer bus messages
def OnBusMessage(self, bus, message):
    t = message.type
    if t == Gst.MessageType.ERROR:
        pass
    elif t == Gst.MessageType.EOS:
        print("End of Audio")
    return True

def OnSyncMessage(self, bus, msg):
    if msg.get_structure() is None:
        return True
    if msg.get_structure().get_name() == 'prepare-window-handle':
        imagesink = msg.src
        imagesink.set_property('force-aspect-ratio', True)
        imagesink.set_window_handle(self.panel1.GetHandle())
The key bit for your issue is setting up a callback for the sync-message and, in that callback, setting the property force-aspect-ratio to True.
This property ensures that the video fits the window it is being displayed in at all times.
Note that self.panel1.GetHandle() returns the handle of the panel in which the video is displayed.
I appreciate that you will have moved on but hopefully this will help someone else trawling through the archives.
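To address the scaling question itself (rather than the window-handle route above): one option is to let GStreamer resize the frames before they ever reach the appsink, by wrapping videoscale, a colorspace converter and a capsfilter together with the appsink in a bin and using that bin as playbin2's video-sink. This is only a sketch, assuming GStreamer 0.10 / playbin2 as in the question's code, and untested; the target width/height stand in for whatever calcScaledRes computes.
```
import gst

def make_scaling_videosink(target_width, target_height, frame_callback):
    """Sketch: appsink preceded by videoscale, so frames arrive already resized."""
    scaler = gst.element_factory_make('videoscale')
    conv = gst.element_factory_make('ffmpegcolorspace')
    capsfilter = gst.element_factory_make('capsfilter')
    # Fixing width/height here makes videoscale resize during caps negotiation
    capsfilter.set_property('caps', gst.Caps(
        'video/x-raw-rgb, bpp=(int)24, depth=(int)24, '
        'width=(int)%d, height=(int)%d' % (target_width, target_height)))

    sink = gst.element_factory_make('appsink')
    sink.set_property('emit-signals', True)
    sink.set_property('drop', True)
    sink.connect('new-buffer', frame_callback)

    sinkbin = gst.Bin('scaled-videosink')
    sinkbin.add(scaler, conv, capsfilter, sink)
    gst.element_link_many(scaler, conv, capsfilter, sink)
    # Expose the scaler's sink pad so playbin2 can link to the bin
    sinkbin.add_pad(gst.GhostPad('sink', scaler.get_pad('sink')))
    return sinkbin

# Usage (hypothetical sizes), in place of set_property('video-sink', self._videosink):
# self.player.set_property('video-sink',
#                          make_scaling_videosink(800, 600, self.__handle_videoframe))
```
Since the fitted size is only known after preroll, you would either preroll once to read the native size and then build the pipeline with this sink bin, or update the capsfilter's caps and let the pipeline renegotiate.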

Related

How to train custom model for Tensorflow Lite and have the output be a .TFLITE file

I'm new to TensorFlow and object detection, and any help would be greatly appreciated! I have a database of 50 photos and used this video to get me started, and it DID work with Google's sample model (I'm using an RPi4B with 8 GB of RAM); then I wanted to create my own model. I tried a couple of options, but ultimately failed, since the files I needed were a .TFLITE model and a .txt file with the labels. I only managed to get a .LITE file, which from what I tested didn't work.
I tried his Google Colab sheet, but the terminal got stuck at step 5 when I pressed the button to train the model, so I tried Edge Impulse; however, the output models were all .LITE files and didn't come with a labelmap.txt file for the code. I tried manually changing the extension from .LITE to .TFLITE since, according to this thread, it was supposed to work, but it didn't!
I need this to be ready three days from now... Isn't there a more beginner-friendly way to do this? How can I get a valid .TFLITE model to work with my RPi4? If I have to, I will change the code for this to work. Here's the code the tutorial provided:
######## Webcam Object Detection Using Tensorflow-trained Classifier #########
#
# Author: Evan Juras
# Date: 10/27/19
# Description:
# This program uses a TensorFlow Lite model to perform object detection on a live webcam
# feed. It draws boxes and scores around the objects of interest in each frame from the
# webcam. To improve FPS, the webcam object runs in a separate thread from the main program.
# This script will work with either a Picamera or regular USB webcam.
#
# This code is based off the TensorFlow Lite image classification example at:
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/python/label_image.py
#
# I added my own method of drawing boxes and labels using OpenCV.
# Import packages
import os
import argparse
import cv2
import numpy as np
import sys
import time
from threading import Thread
import importlib.util
# Define VideoStream class to handle streaming of video from webcam in separate processing thread
# Source - Adrian Rosebrock, PyImageSearch: https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/
class VideoStream:
    """Camera object that controls video streaming from the Picamera"""
    def __init__(self, resolution=(640,480), framerate=30):
        # Initialize the PiCamera and the camera image stream
        self.stream = cv2.VideoCapture(0)
        ret = self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
        ret = self.stream.set(3, resolution[0])
        ret = self.stream.set(4, resolution[1])

        # Read first frame from the stream
        (self.grabbed, self.frame) = self.stream.read()

        # Variable to control when the camera is stopped
        self.stopped = False

    def start(self):
        # Start the thread that reads frames from the video stream
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # Keep looping indefinitely until the thread is stopped
        while True:
            # If the camera is stopped, stop the thread
            if self.stopped:
                # Close camera resources
                self.stream.release()
                return

            # Otherwise, grab the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # Return the most recent frame
        return self.frame

    def stop(self):
        # Indicate that the camera and thread should be stopped
        self.stopped = True
# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
default='detect.lite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
default='labelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
default=0.5)
parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not support the resolution entered, errors may occur.',
default='1280x720')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
action='store_true')
args = parser.parse_args()
MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
resW, resH = args.resolution.split('x')
imW, imH = int(resW), int(resH)
use_TPU = args.edgetpu
# Import TensorFlow libraries
# If tflite_runtime is installed, import interpreter from tflite_runtime, else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tflite_runtime')
if pkg:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'detect.lite'):
        GRAPH_NAME = 'edgetpu.tflite'
# Get path to current working directory
CWD_PATH = os.getcwd()
# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,GRAPH_NAME)
# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,MODEL_NAME,LABELMAP_NAME)
# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])
# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)

interpreter.allocate_tensors()
# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]
floating_model = (input_details[0]['dtype'] == np.float32)
input_mean = 127.5
input_std = 127.5
# Check output layer name to determine if this model was created with TF2 or TF1,
# because outputs are ordered differently for TF2 and TF1 models
outname = output_details[0]['name']
if ('StatefulPartitionedCall' in outname): # This is a TF2 model
    boxes_idx, classes_idx, scores_idx = 1, 3, 0
else: # This is a TF1 model
    boxes_idx, classes_idx, scores_idx = 0, 1, 2
# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()
# Initialize video stream
videostream = VideoStream(resolution=(imW,imH),framerate=30).start()
time.sleep(1)
#for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
while True:
    # Start timer (for calculating frame rate)
    t1 = cv2.getTickCount()

    # Grab frame from video stream
    frame1 = videostream.read()

    # Acquire frame and resize to expected shape [1xHxWx3]
    frame = frame1.copy()
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame_resized = cv2.resize(frame_rgb, (width, height))
    input_data = np.expand_dims(frame_resized, axis=0)

    # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
    if floating_model:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection by running the model with the image as input
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[boxes_idx]['index'])[0] # Bounding box coordinates of detected objects
    classes = interpreter.get_tensor(output_details[classes_idx]['index'])[0] # Class index of detected objects
    scores = interpreter.get_tensor(output_details[scores_idx]['index'])[0] # Confidence of detected objects

    # Loop over all detections and draw detection box if confidence is above minimum threshold
    for i in range(len(scores)):
        if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):
            # Get bounding box coordinates and draw box
            # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
            ymin = int(max(1, (boxes[i][0] * imH)))
            xmin = int(max(1, (boxes[i][1] * imW)))
            ymax = int(min(imH, (boxes[i][2] * imH)))
            xmax = int(min(imW, (boxes[i][3] * imW)))

            cv2.rectangle(frame, (xmin,ymin), (xmax,ymax), (10, 255, 0), 2)

            # Draw label
            object_name = labels[int(classes[i])] # Look up object name from "labels" array using class index
            label = '%s: %d%%' % (object_name, int(scores[i]*100)) # Example: 'person: 72%'
            if object_name == 'person' and int(scores[i]*100) > 65:
                print("YES")
            else:
                print("NO")
            labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2) # Get font size
            label_ymin = max(ymin, labelSize[1] + 10) # Make sure not to draw label too close to top of window
            cv2.rectangle(frame, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED) # Draw white box to put label text in
            cv2.putText(frame, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2) # Draw label text

    # Draw framerate in corner of frame
    cv2.putText(frame, 'FPS: {0:.2f}'.format(frame_rate_calc), (30,50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,255,0), 2, cv2.LINE_AA)

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)

    # Calculate framerate
    t2 = cv2.getTickCount()
    time1 = (t2-t1)/freq
    frame_rate_calc = 1/time1

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break
# Clean up
cv2.destroyAllWindows()
videostream.stop()
Easy: just downgrade to OpenCV version 3.4.16 and use TensorFlow 1.0 instead of 2.0, and that should solve all your problems. That will allow the use of .LITE files as well as .TFLITE ones.
Also, try increasing the resolution to 720x1280; most likely that is causing a ton of errors as well when working with TensorFlow.
Take a look here: https://www.tensorflow.org/tutorials/images/classification
This notebook sets up a new classification model, and ends with "Convert the Keras Sequential model to a TensorFlow Lite model"
https://www.tensorflow.org/tutorials/images/classification#convert_the_keras_sequential_model_to_a_tensorflow_lite_model
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
This reliably produces a tflite model from a standard tf model.
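If what is still missing is only the labelmap.txt, one workaround (a sketch, assuming you have the ordered list of class names from your training pipeline, e.g. train_ds.class_names when using tf.keras.utils.image_dataset_from_directory; the names below are placeholders) is to write it out next to the converted model:
```
# Hypothetical: replace with your own labels, in the order used during training.
class_names = ['class_a', 'class_b']

with open('labelmap.txt', 'w') as f:
    f.write('\n'.join(class_names))
```
Note that the classification tutorial produces an image classifier, not an SSD-style detector, so the detection script above would still need a detection model; this snippet only covers generating a plain-text label file.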

Is there a way to save raw opencv images to video?

I'm pretty new to opencv and I've been trying to build an automatic stat tracker for a game I'm playing. I want to collect videos to test the object recognition and stat tracking code, and thought I might as well use the capture code I've already written to make the test videos. So I've been using the pywin32 api to make images for opencv to process, and thought I'd just directly use those images to save to video.
The problem I'm having is that the videos being made are invalid and useless.
So now that I've gone through the background I can move on to the actual questions.
Is there a specific format the OpenCV images need to be in before they can be saved? Or is there a way to feed the images through the VideoCapture class in a non-clunky way (like recapturing the cv.imshow window)? I don't want to use the VideoCapture class directly, because it requires the specified window to always be on top. I might also just be doing it wrong.
Code included below.
class VideoScreenCapture:
    frame_time_limit = 0
    time_elapsed = 0
    frame_width = None
    frame_height = None
    frame_size = None
    fps = None
    window_capture = None
    filming = False
    output_video = None
    scaling_factor = 2/3

    def __init__(self, window_capture=None, fps=5):
        """
        window_capture: a custom window capture object that is associated with the window to be captured
        fps: the frames per second of the resulting video
        """
        self.fps = fps
        # calculate the amount of time per frame (inverse of fps)
        self.frame_time_limit = 1/self.fps
        self.window_capture = window_capture
        # get the size (in pixels) of the window getting captured
        self.frame_width, self.frame_height, self.frame_size = self.window_capture.get_frame_dimentions()

        # initialize video writer
        self.output_video = cv.VideoWriter(
            f'assets{sep}Videos{sep}test.avi',
            cv.VideoWriter_fourcc(*'XVID'),
            self.fps,
            self.frame_size)

    def update(self, dt):
        """dt: delta time -> the time since the last update"""
        # only updates if the camera is on
        if self.filming:
            # update the elapsed time
            self.time_elapsed += dt
            # if the elapsed time is more than the time segment calculated for the given fps
            if self.time_elapsed >= self.frame_time_limit:
                self.time_elapsed -= self.frame_time_limit

                # window capture takes a screenshot of the capture window and resizes it to the wanted size
                frame = self.window_capture.get_screenshot()
                frame = cv.resize(
                    frame,
                    dsize=(0, 0),
                    fx=self.scaling_factor,
                    fy=self.scaling_factor)

                # the image is then saved to video
                self.output_video.write(frame)

                # show the image in a separate window
                cv.imshow("Computer Vision", frame)

    """methods for stopping and starting the video capture"""
    def start(self):
        self.filming = True

    def stop(self):
        self.filming = False

    """releases video writer: use when done"""
    def release_vw(self):
        self.output_video.release()
This is the while loop that runs the code:
vsc.start()
loop_time = time()
while cv.waitKey(1) != ord('q'):
    vsc.update(time() - loop_time)
    loop_time = time()
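One thing worth checking in the code above: cv.VideoWriter silently produces unusable files when the frames passed to write() don't exactly match the frame size given to the constructor, or aren't 3-channel 8-bit BGR arrays. Here the writer is created with the unscaled frame_size while the frames are resized by scaling_factor before being written. A minimal self-contained sketch of a writer whose size matches the frames actually written (all names and sizes hypothetical):
```
import cv2 as cv
import numpy as np

fps = 5
scaling_factor = 2 / 3
capture_size = (1920, 1080)                      # hypothetical capture size (w, h)
out_size = (int(capture_size[0] * scaling_factor),
            int(capture_size[1] * scaling_factor))

# Size the writer for the frames that will actually be written (after resizing).
writer = cv.VideoWriter('test.avi', cv.VideoWriter_fourcc(*'XVID'), fps, out_size)

for _ in range(fps * 2):                         # two seconds of dummy frames
    frame = np.random.randint(0, 255, (capture_size[1], capture_size[0], 4),
                              dtype=np.uint8)    # pywin32 grabs are often 4-channel BGRA
    frame = cv.cvtColor(frame, cv.COLOR_BGRA2BGR)  # writer expects 3-channel BGR
    frame = cv.resize(frame, out_size)
    writer.write(frame)                          # frame size now matches the writer

writer.release()
```
A 4-channel BGRA screenshot is another common reason write() quietly discards frames, which is why the sketch converts to BGR before writing.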

Communication with Thorlabs uc480 camera

I was able to get the current image from a Thorlabs uc480 camera using Instrumental. My issue is with adjusting the parameters for grab_image. I can change cx and left to any value and still get an image, but cy and top only work if cy=600 and top=300. The purpose is to create a GUI so that the user can select values for these parameters to zoom an image in/out.
Here is my code
import instrumental
from instrumental.drivers.cameras import uc480
from matplotlib.figure import Figure
import matplotlib.pyplot as plt
paramsets = instrumental.list_instruments()
cammer = instrumental.instrument(paramsets[0])
plt.figure()
framer= cammer.grab_image(timeout='1s',copy=True,n_frames=1,exposure_time='5ms',cx=640,
left=10,cy=600,top=300)
plt.pcolormesh(framer)
The above code does not give an image if I choose cy=600 and top=10. Is there a particular set of values to be used for these parameters? How can I get an image of the full sensor size?
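Regarding the full-sensor part of the question: I can't confirm Instrumental's exact ROI semantics, but since grab_image already works for you, one thing to try (untested sketch) is simply omitting the ROI keywords and letting the driver default to the whole sensor:
```
import instrumental
import matplotlib.pyplot as plt

paramsets = instrumental.list_instruments()
cammer = instrumental.instrument(paramsets[0])

# No cx/cy/left/top: assuming the driver then defaults the ROI to the full sensor.
framer = cammer.grab_image(timeout='1s', copy=True, n_frames=1, exposure_time='5ms')

plt.figure()
plt.pcolormesh(framer)
plt.show()
```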
Thorlabs has a Python programming interface available as a download on their website. It is very well documented, and can be installed locally via pip.
Link:
https://www.thorlabs.com/software_pages/ViewSoftwarePage.cfm?Code=ThorCam
Here is an example of a simple capture algorithm that might help get you started:
import numpy as np

from thorlabs_tsi_sdk.tl_camera import TLCameraSDK
from thorlabs_tsi_sdk.tl_mono_to_color_processor import MonoToColorProcessorSDK
from thorlabs_tsi_sdk.tl_camera_enums import SENSOR_TYPE

# open the TLCameraSDK dll
with TLCameraSDK() as sdk:
    cameras = sdk.discover_available_cameras()
    if len(cameras) == 0:
        print("Error: no cameras detected!")

    with sdk.open_camera(cameras[0]) as camera:
        #camera.disarm()  # ensure any previous session is closed

        # setup the camera for continuous acquisition
        camera.frames_per_trigger_zero_for_unlimited = 0
        camera.image_poll_timeout_ms = 2000  # 2 second timeout
        camera.arm(2)

        # need to save the image width and height for color processing
        image_width = camera.image_width_pixels
        image_height = camera.image_height_pixels

        # initialize a mono to color processor if this is a color camera
        is_color_camera = (camera.camera_sensor_type == SENSOR_TYPE.BAYER)
        mono_to_color_sdk = None
        mono_to_color_processor = None
        if is_color_camera:
            mono_to_color_sdk = MonoToColorProcessorSDK()
            mono_to_color_processor = mono_to_color_sdk.create_mono_to_color_processor(
                camera.camera_sensor_type,
                camera.color_filter_array_phase,
                camera.get_color_correction_matrix(),
                camera.get_default_white_balance_matrix(),
                camera.bit_depth
            )

        # begin acquisition
        camera.issue_software_trigger()

        # get the next frame
        frame = camera.get_pending_frame_or_null()

        # initialize frame attempts and max limit
        frame_attempts = 0
        max_attempts = 10

        # if frame is null, try to get a frame until
        # successful or until max_attempts is reached
        if frame is None:
            while frame is None:
                frame = camera.get_pending_frame_or_null()
                frame_attempts += 1
                if frame_attempts == max_attempts:
                    raise TimeoutError("Timeout was reached while polling for a frame, program will now exit")

        image_data = frame.image_buffer

        if is_color_camera:
            # transform the raw image data into RGB color data
            color_data = mono_to_color_processor.transform_to_24(image_data, image_width, image_height)
            save_data = np.reshape(color_data, (image_height, image_width, 3))

        camera.disarm()
You can also process the image after capture with the PIL library.
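For example, once save_data has been built as above, Pillow can write it straight to disk (a minimal sketch; the array below is a placeholder standing in for the reshaped frame):
```
import numpy as np
from PIL import Image

# Placeholder for the (image_height, image_width, 3) uint8 array built above.
save_data = np.zeros((1080, 1440, 3), dtype=np.uint8)

Image.fromarray(save_data).save('capture.png')
```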

wxPython images taking up lots of memory

I have a data project that produces a number of plots with the same name only in different directories. Looking at these plots helps to see if anything funky is going on with the data, but it would be a pain try to open them all one by one or to copy them all to one directory, rename them, and then open up my standard image viewer.
Knowing some wxPython GUI programming I decided to whip up an app that would take in a list of directories to these images and I can slide show them with some arrow buttons. I can then add to this list as more plots come in.
I got the app running properly, but when I load in around 23 images of ~3 MB each, my program's memory usage reaches nearly 10 GB.
The code is below, but in short: I load them all up as wxPython Images before displaying, and this is where the memory climbs. Switching between the images via stacks doesn't seem to make the memory climb further. I think it might be me holding onto no-longer-needed resources; however, the usage seems disproportionate to the size and number of images.
Is someone able to point out a flaw in my method?
```
class Slide(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "Slide Show")
        self.display = wx.GetDisplaySize()
        self.image_list = None
        self.current = None  # holds currently displayed bitmap

        # stacks that hold the bitmaps
        self.future = []
        self.past = []

        self.left = wx.BitmapButton(self, id=101, size=(100,-1), bitmap=wx.ArtProvider.GetBitmap(wx.ART_GO_BACK))
        self.left.Enable(False)
        self.right = wx.BitmapButton(self, id=102, size=(100,-1), bitmap=wx.ArtProvider.GetBitmap(wx.ART_GO_FORWARD))
        self.right.Enable(False)
        self.open = wx.BitmapButton(self, id=103, size=(100,-1), bitmap=wx.ArtProvider.GetBitmap(wx.ART_FILE_OPEN))

        # Testing code
        #png = wx.Image("set001_fit_001_1.png", wx.BITMAP_TYPE_ANY).ConvertToBitmap()
        #image = wx.ImageFromBitmap(png)
        #image = image.Scale(self.display[0]/1.4, self.display[1]/1.4, wx.IMAGE_QUALITY_HIGH)
        #png = wx.BitmapFromImage(image)

        # make empty image slot
        self.img = wx.StaticBitmap(self, -1, size=(self.display[0]/1.4, self.display[1]/1.4))

        ## Main sizers
        self.vertSizer = wx.BoxSizer(wx.VERTICAL)

        # sub sizers
        self.buttonSizer = wx.BoxSizer(wx.HORIZONTAL)

        # place buttons together
        self.buttonSizer.Add(self.left, flag=wx.ALIGN_CENTER)
        als.AddLinearSpacer(self.buttonSizer, 15)
        self.buttonSizer.Add(self.right, flag=wx.ALIGN_CENTER)

        # place arrows and open together
        self.vertSizer.Add(self.img, flag=wx.ALIGN_CENTER)
        self.vertSizer.Add(self.open, flag=wx.ALIGN_CENTER)
        als.AddLinearSpacer(self.vertSizer, 15)
        self.vertSizer.Add(self.buttonSizer, flag=wx.ALIGN_CENTER)
        als.AddLinearSpacer(self.vertSizer, 15)

        self.Bind(wx.EVT_BUTTON, self.onOpen, id=103)
        self.Bind(wx.EVT_BUTTON, self.onLeft, id=101)
        self.Bind(wx.EVT_BUTTON, self.onRight, id=102)

        self.SetSizer(self.vertSizer)
        self.vertSizer.Fit(self)
    def onOpen(self, evt):
        print("Opening")
        openFileDialog = wx.FileDialog(self, "Open Image List", "", "", "List (*.ls)|*.ls", wx.FD_OPEN | wx.FD_FILE_MUST_EXIST)
        if openFileDialog.ShowModal() == wx.ID_CANCEL:
            return

        fileName = str(openFileDialog.GetPath())
        print(fileName)

        # put image list in a python list
        image_list = []
        f = open(fileName, 'r')
        for line in f:
            image_list.append(line.rstrip())
        f.close()
        print(image_list)

        # convert every image to wx Image
        png_list = []
        for image in image_list:
            png = wx.Image(image, wx.BITMAP_TYPE_ANY)
            png_list.append(png)

        # store pngs
        print(image_list[::-1])
        png_list = png_list[::-1]
        for png in png_list[:-1]:
            self.future.append(png)

        # display first png
        self.current = png_list[-1]  # put into current buffer
        png = self.current.Scale(self.display[0]/1.4, self.display[1]/1.4, wx.IMAGE_QUALITY_HIGH)
        png = wx.BitmapFromImage(png)
        self.img.SetBitmap(png)
        self.Refresh()
        self.right.Enable(True)
    def onLeft(self, evt):
        # put current image into future stack and grab a new current image from past stack
        self.future.append(self.current)
        self.current = self.past.pop()

        png = self.current.Scale(self.display[0]/1.4, self.display[1]/1.4, wx.IMAGE_QUALITY_HIGH)
        png = wx.BitmapFromImage(png)
        self.img.SetBitmap(png)
        self.Refresh()

        if len(self.past) <= 0:
            self.left.Enable(False)
        self.right.Enable(True)

    def onRight(self, evt):
        # put current image into past stack and load in a new image from future stack
        self.past.append(self.current)
        self.current = self.future.pop()

        png = self.current.Scale(self.display[0]/1.4, self.display[1]/1.4, wx.IMAGE_QUALITY_HIGH)
        png = wx.BitmapFromImage(png)
        self.img.SetBitmap(png)
        self.Refresh()

        if len(self.future) <= 0:
            self.right.Enable(False)
        self.left.Enable(True)
```
Loading them as needed would probably be the best way to reduce memory usage. Or perhaps keeping the current, next and previous images in memory to help speed up switching between views. Or maybe use a caching implementation to keep the N most recently viewed images (plus the next in the sequence) in memory. Then you can fiddle with the size of N to see what works well and is a reasonable compromise between memory and speed.
Another possible improvement would be to not scale the image each time you need to display it, but to scale it once when it is read and keep that version in memory (or in the cache). You may also want to experiment with pre-converting them to wx.Bitmaps too, as image-to-bitmap conversion is not a trivial operation.
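For instance, a helper along these lines (a sketch in classic wxPython to match the code above, untested; in Phoenix you would use wx.Bitmap(img) instead of wx.BitmapFromImage) scales each image once at load time and keeps only the small bitmap, instead of holding every full-resolution wx.Image and rescaling on every switch:
```
import wx

def load_scaled_bitmaps(paths, target_size):
    """Scale each image once and keep only the scaled wx.Bitmap, so the
    full-resolution wx.Image can be dropped immediately after loading.
    Call this from inside the running wx app (e.g. in onOpen)."""
    bitmaps = []
    for path in paths:
        img = wx.Image(path, wx.BITMAP_TYPE_ANY)
        img = img.Scale(target_size[0], target_size[1], wx.IMAGE_QUALITY_HIGH)
        bitmaps.append(wx.BitmapFromImage(img))  # classic wxPython API
    return bitmaps
```
The stacks would then hold ready-to-display bitmaps (self.img.SetBitmap(bitmap)), which both caps memory at roughly the scaled size per image and removes the per-switch Scale/convert cost.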

Generating consecutive bitmaps with wxPython trouble

Well, while I don't consider myself an experienced programmer, especially when it comes to multimedia applications, I had to write an image-viewer sort of program that displays images fullscreen, with the images changing when the user presses the <- or -> arrow keys.
The general idea was to make a listbox where all the images contained in a certain folder (imgs) are listed; when someone double clicks or hits RETURN, a new fullscreen frame is generated containing, at first, the image selected in the listbox, and as the user hits the arrow keys the images change in consecutive order, just like in a regular image viewer.
At the beginning, when the first 15-20 images are generated, everything goes well. After that, although the program still works, a very unpleasant and unwanted intermediate effect appears between image generations: some previously generated images are flashed quickly on the screen before the correct, consecutive image appears. At first the effect is barely noticeable, but after a while it takes longer and longer between generations.
Here's the code that runs when someone double-clicks a listbox entry:
def lbclick(self, eve):
    frm = wx.Frame(None, -1, '')
    frm.ShowFullScreen(True)
    self.sel = self.lstb.GetSelection()  # getting the selection from the listbox

    def pressk(eve):
        keys = eve.GetKeyCode()
        if keys == wx.WXK_LEFT:
            self.sel = self.sel - 1
            if self.sel < 0:
                self.sel = len(self.chk) - 1
            imgs()  # invoking the function made for displaying fullscreen images when the left arrow key is pressed
        elif keys == wx.WXK_RIGHT:
            self.sel = self.sel + 1
            if self.sel > len(self.chk) - 1:
                self.sel = 0
            imgs()  # doing the same for the right arrow
        elif keys == wx.WXK_ESCAPE:
            frm.Destroy()
        eve.Skip()
    frm.Bind(wx.EVT_CHAR_HOOK, pressk)

    def imgs():  # building the function
        imgsl = self.chk[self.sel]
        itm = wx.Image(str('imgs/{0}'.format(imgsl)), wx.BITMAP_TYPE_JPEG).ConvertToBitmap()  # obtaining the name of the image stored in the list self.chk
        mar = itm.Size  # Because not all images are landscape I had to figure out a method to rescale them by the height dimension, which is common to all images
        frsz = frm.GetSize()
        marx = float(mar[0])
        mary = float(mar[1])
        val = frsz[1] / mary
        vsize = int(mary * val)
        hsize = int(marx * val)

        panl = wx.Panel(frm, -1, size=(hsize, vsize), pos=(frsz[0] / 2 - hsize / 2, 0))  # making a panel container
        panl.SetBackgroundColour('#000000')
        imag = wx.Image(str('imgs/{0}'.format(imgsl)), wx.BITMAP_TYPE_JPEG).Scale(hsize, vsize, quality=wx.IMAGE_QUALITY_NORMAL).ConvertToBitmap()

        def destr(eve):  # unprofessionally trying to destroy the panel container when a new image is generated, hoping the unwanted effect with previously generated images will disappear. But it doesn't.
            keycd = eve.GetKeyCode()
            if keycd == wx.WXK_LEFT or keycd == wx.WXK_RIGHT:
                try:
                    panl.Destroy()
                except:
                    pass
            eve.Skip()
        panl.Bind(wx.EVT_CHAR_HOOK, destr)  # the end of my futile tries

        if vsize > hsize:  # if the image is portrait instead of landscape I have to put a black image as a container, otherwise the previous image remains in the background, even if I use Refresh() on the container (the black bitmap named fundal)
            intermed = wx.Image('./res/null.jpg', wx.BITMAP_TYPE_JPEG).Scale(frsz[0], frsz[1]).ConvertToBitmap()
            fundal = wx.StaticBitmap(frm, 101, intermed)
            stimag = wx.StaticBitmap(fundal, -1, imag, size=(hsize, vsize), pos=(frsz[0] / 2 - hsize / 2, 0))
            fundal.Refresh()
            stimag.SetToolTip(wx.ToolTip('Esc = exits fullscreen\n<- -> arrows = quick navigation'))

            def destr(eve):  # the same lame attempt to destroy the container in the portrait-image situation
                keycd = eve.GetKeyCode()
                if keycd == wx.WXK_LEFT or keycd == wx.WXK_RIGHT:
                    try:
                        fundal.Destroy()
                    except:
                        pass
                eve.Skip()
            frm.Bind(wx.EVT_CHAR_HOOK, destr)
        else:  # the case when the images are landscape
            stimag = wx.StaticBitmap(panl, -1, imag)
            stimag.Refresh()
            stimag.SetToolTip(wx.ToolTip('Esc = exits fullscreen\n<- -> arrows = quick navigation'))

    imgs()  # invoking the imgs function for the situation when someone double clicks

    frm.Center()
    frm.Show(True)
Thanks for any advice in advance.
Added later:
The catch is I'm trying to do an autorun presentation for a DVD with lots of images on it. Anyway, it's not necessary to make the above piece of code work properly if there are other options. I've already tried to use the Windows image viewer, but strangely enough it doesn't recognize relative paths, and when I do this
path = os.getcwd() # getting the path of the current working directory
sel = listbox.GetSelection() # geting the value of the current selection from the list box
imgname = memlist[sel] # retrieving the name of the images stored in a list, using the listbox selection, that uses as choices the same list
os.popen(str('rundll32.exe C:\WINDOWS\System32\shimgvw.dll,ImageView_Fullscreen {0}/imgsdir/{1}'.format(path, imgname))) # opening the images up with bloody windows image viewer
it opens the images only when my program is on the hard disk; if it's on a CD / image drive, it doesn't do anything.
There's an image viewer included with the wxPython demo package. I also wrote a really simple one here. Either one should help you in your journey. I suspect that you're not reusing your panels/frames and instead you're seeing old ones that were never properly destroyed.
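In the same spirit, here is a minimal sketch (untested, classic wxPython as in the question) that keeps one frame and one wx.StaticBitmap alive for the whole slideshow and just swaps the bitmap on each key press, instead of creating new panels and StaticBitmaps per image:
```
import wx

class FullScreenViewer(wx.Frame):
    """Minimal sketch: one frame, one StaticBitmap, reused for every image."""
    def __init__(self, paths):
        wx.Frame.__init__(self, None, -1, '')
        self.paths = paths
        self.index = 0
        self.view = wx.StaticBitmap(self, -1)         # created once, reused
        self.Bind(wx.EVT_CHAR_HOOK, self.on_key)
        self.ShowFullScreen(True)
        self.show_current()

    def show_current(self):
        w, h = self.GetSize()
        img = wx.Image(self.paths[self.index], wx.BITMAP_TYPE_ANY)
        scale = float(h) / img.GetHeight()            # fit to screen height, as in the question
        img = img.Scale(int(img.GetWidth() * scale), h)
        self.view.SetBitmap(wx.BitmapFromImage(img))  # swap the bitmap in place
        self.view.Center()
        self.Refresh()

    def on_key(self, event):
        code = event.GetKeyCode()
        if code == wx.WXK_LEFT:
            self.index = (self.index - 1) % len(self.paths)
            self.show_current()
        elif code == wx.WXK_RIGHT:
            self.index = (self.index + 1) % len(self.paths)
            self.show_current()
        elif code == wx.WXK_ESCAPE:
            self.Destroy()
        event.Skip()
```
Because nothing new is created per image, there are no stale panels left behind to flash between transitions.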
