Is there a way to save raw opencv images to video? - python

I'm pretty new to opencv and I've been trying to build an automatic stat tracker for a game I'm playing. I want to collect videos to test the object recognition and stat tracking code, and thought I might as well use the capture code I've already written to make the test videos. So I've been using the pywin32 api to make images for opencv to process, and thought I'd just directly use those images to save to video.
The problem I'm having is that the videos being made are invalid and useless.
So now that I've gone through the background I can move on to the actual questions.
Is there a specific format the OpenCV images need to be in before they can be saved? Or is there a way to feed the images through the VideoCapture class in a non-clunky way (like recapturing the cv.imshow window)? I don't want to use the VideoCapture class directly, because it requires the specified window to always be on top. I might also just be doing it wrong.
Code included below.
class VideoScreenCapture:
    frame_time_limit = 0
    time_elapsed = 0
    frame_width = None
    frame_height = None
    frame_size = None
    fps = None
    window_capture = None
    filming = False
    output_video = None
    scaling_factor = 2/3

    def __init__(self, window_capture=None, fps=5):
        """
        window_capture: a custom window capture object that is associated with the window to be captured
        fps: the frames per second of the resulting video
        """
        self.fps = fps
        # calculate the amount of time per frame (inverse of fps)
        self.frame_time_limit = 1/self.fps
        self.window_capture = window_capture
        # get the size (in pixels) of the window getting captured
        self.frame_width, self.frame_height, self.frame_size = self.window_capture.get_frame_dimentions()
        # initialize video writer
        self.output_video = cv.VideoWriter(
            f'assets{sep}Videos{sep}test.avi',
            cv.VideoWriter_fourcc(*'XVID'),
            self.fps,
            self.frame_size)

    def update(self, dt):
        """dt: delta time -> the time since the last update"""
        # only updates if the camera is on
        if self.filming:
            # update the elapsed time
            self.time_elapsed += dt
            # if the elapsed time is more than the time segment calculated for the given fps
            if self.time_elapsed >= self.frame_time_limit:
                self.time_elapsed -= self.frame_time_limit
                # window capture takes a screenshot of the capture window and resizes it to the wanted size
                frame = self.window_capture.get_screenshot()
                frame = cv.resize(
                    frame,
                    dsize=(0, 0),
                    fx=self.scaling_factor,
                    fy=self.scaling_factor)
                # the image is then saved to video
                self.output_video.write(frame)
                # show the image in a separate window
                cv.imshow("Computer Vision", frame)

    """methods for stopping and starting the video capture"""
    def start(self):
        self.filming = True

    def stop(self):
        self.filming = False

    """releases video writer: use when done"""
    def release_vw(self):
        self.output_video.release()
This is the while loop that runs the code:
vsc.start()
loop_time = time()
while cv.waitKey(1) != ord('q'):
    vsc.update(time() - loop_time)
    loop_time = time()
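For reference, here is a minimal standalone sketch of the cv.VideoWriter usage involved here (illustrative sizes only, not the original window-capture code): frames written must be 8-bit BGR arrays whose width and height exactly match the frameSize passed to the writer, otherwise the resulting file is typically unplayable.
import cv2 as cv
import numpy as np

scaling_factor = 2 / 3
capture_width, capture_height = 1280, 720          # hypothetical capture size
out_size = (int(capture_width * scaling_factor),
            int(capture_height * scaling_factor))   # (width, height)

writer = cv.VideoWriter('test.avi', cv.VideoWriter_fourcc(*'XVID'), 5, out_size)

for _ in range(50):
    frame = np.zeros((capture_height, capture_width, 3), dtype=np.uint8)  # stand-in for a screenshot
    frame = cv.resize(frame, dsize=out_size)   # resize to the exact size declared to the writer
    writer.write(frame)                        # frames must be 8-bit BGR of that size

writer.release()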

Related

How to train custom model for Tensorflow Lite and have the output be a .TFLITE file

I'm new to TensorFlow and object detection, and any help would be greatly appreciated! I have a database of 50 photos and used this video to get me started, and it DID work with Google's sample model (I'm using a RPi4B with 8 GB of RAM); then I wanted to create my own model. I tried a couple of options, but ultimately failed, since the files I needed were a .TFLITE model and a .txt file with the labels. I only managed to get a .LITE file, which from what I tested didn't work.
I tried his Google Colab sheet, but the terminal got stuck at step 5 when I pressed the button to train the model, so I tried Edge Impulse, but the output models were all .LITE files and didn't come with a labelmap.txt file for the code. I tried manually changing the extension from .LITE to .TFLITE, since according to this thread it was supposed to work, but it didn't!
I need this to be ready in 3 days from now... Isn't there a more beginner-friendly way to do this? How can I get a valid .TFLITE model to work with my RPI4? If I have to, I will change the code for this to work. Here's the code the tutorial provided:
######## Webcam Object Detection Using Tensorflow-trained Classifier #########
#
# Author: Evan Juras
# Date: 10/27/19
# Description:
# This program uses a TensorFlow Lite model to perform object detection on a live webcam
# feed. It draws boxes and scores around the objects of interest in each frame from the
# webcam. To improve FPS, the webcam object runs in a separate thread from the main program.
# This script will work with either a Picamera or regular USB webcam.
#
# This code is based off the TensorFlow Lite image classification example at:
# https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/python/label_image.py
#
# I added my own method of drawing boxes and labels using OpenCV.
# Import packages
import os
import argparse
import cv2
import numpy as np
import sys
import time
from threading import Thread
import importlib.util
# Define VideoStream class to handle streaming of video from webcam in separate processing thread
# Source - Adrian Rosebrock, PyImageSearch: https://www.pyimagesearch.com/2015/12/28/increasing-raspberry-pi-fps-with-python-and-opencv/
class VideoStream:
    """Camera object that controls video streaming from the Picamera"""
    def __init__(self, resolution=(640, 480), framerate=30):
        # Initialize the PiCamera and the camera image stream
        self.stream = cv2.VideoCapture(0)
        ret = self.stream.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
        ret = self.stream.set(3, resolution[0])
        ret = self.stream.set(4, resolution[1])

        # Read first frame from the stream
        (self.grabbed, self.frame) = self.stream.read()

        # Variable to control when the camera is stopped
        self.stopped = False

    def start(self):
        # Start the thread that reads frames from the video stream
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # Keep looping indefinitely until the thread is stopped
        while True:
            # If the camera is stopped, stop the thread
            if self.stopped:
                # Close camera resources
                self.stream.release()
                return
            # Otherwise, grab the next frame from the stream
            (self.grabbed, self.frame) = self.stream.read()

    def read(self):
        # Return the most recent frame
        return self.frame

    def stop(self):
        # Indicate that the camera and thread should be stopped
        self.stopped = True
# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
default='detect.lite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
default='labelmap.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
default=0.5)
parser.add_argument('--resolution', help='Desired webcam resolution in WxH. If the webcam does not support the resolution entered, errors may occur.',
default='1280x720')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
action='store_true')
args = parser.parse_args()
MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
min_conf_threshold = float(args.threshold)
resW, resH = args.resolution.split('x')
imW, imH = int(resW), int(resH)
use_TPU = args.edgetpu
# Import TensorFlow libraries
# If tflite_runtime is installed, import interpreter from tflite_runtime, else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tflite_runtime')
if pkg:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'detect.lite'):
        GRAPH_NAME = 'edgetpu.tflite'
# Get path to current working directory
CWD_PATH = os.getcwd()
# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,GRAPH_NAME)
# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,MODEL_NAME,LABELMAP_NAME)
# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])
# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)
interpreter.allocate_tensors()
# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]
floating_model = (input_details[0]['dtype'] == np.float32)
input_mean = 127.5
input_std = 127.5
# Check output layer name to determine if this model was created with TF2 or TF1,
# because outputs are ordered differently for TF2 and TF1 models
outname = output_details[0]['name']
if ('StatefulPartitionedCall' in outname): # This is a TF2 model
    boxes_idx, classes_idx, scores_idx = 1, 3, 0
else: # This is a TF1 model
    boxes_idx, classes_idx, scores_idx = 0, 1, 2
# Initialize frame rate calculation
frame_rate_calc = 1
freq = cv2.getTickFrequency()
# Initialize video stream
videostream = VideoStream(resolution=(imW,imH),framerate=30).start()
time.sleep(1)
#for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):
while True:
    # Start timer (for calculating frame rate)
    t1 = cv2.getTickCount()

    # Grab frame from video stream
    frame1 = videostream.read()

    # Acquire frame and resize to expected shape [1xHxWx3]
    frame = frame1.copy()
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame_resized = cv2.resize(frame_rgb, (width, height))
    input_data = np.expand_dims(frame_resized, axis=0)

    # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
    if floating_model:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection by running the model with the image as input
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[boxes_idx]['index'])[0] # Bounding box coordinates of detected objects
    classes = interpreter.get_tensor(output_details[classes_idx]['index'])[0] # Class index of detected objects
    scores = interpreter.get_tensor(output_details[scores_idx]['index'])[0] # Confidence of detected objects

    # Loop over all detections and draw detection box if confidence is above minimum threshold
    for i in range(len(scores)):
        if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):
            # Get bounding box coordinates and draw box
            # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
            ymin = int(max(1, (boxes[i][0] * imH)))
            xmin = int(max(1, (boxes[i][1] * imW)))
            ymax = int(min(imH, (boxes[i][2] * imH)))
            xmax = int(min(imW, (boxes[i][3] * imW)))
            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (10, 255, 0), 2)

            # Draw label
            object_name = labels[int(classes[i])] # Look up object name from "labels" array using class index
            label = '%s: %d%%' % (object_name, int(scores[i]*100)) # Example: 'person: 72%'
            if object_name == 'person' and int(scores[i]*100) > 65:
                print("YES")
            else:
                print("NO")
            labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2) # Get font size
            label_ymin = max(ymin, labelSize[1] + 10) # Make sure not to draw label too close to top of window
            cv2.rectangle(frame, (xmin, label_ymin-labelSize[1]-10), (xmin+labelSize[0], label_ymin+baseLine-10), (255, 255, 255), cv2.FILLED) # Draw white box to put label text in
            cv2.putText(frame, label, (xmin, label_ymin-7), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 0), 2) # Draw label text

    # Draw framerate in corner of frame
    cv2.putText(frame, 'FPS: {0:.2f}'.format(frame_rate_calc), (30, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 0), 2, cv2.LINE_AA)

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)

    # Calculate framerate
    t2 = cv2.getTickCount()
    time1 = (t2 - t1) / freq
    frame_rate_calc = 1 / time1

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break

# Clean up
cv2.destroyAllWindows()
videostream.stop()
Easy, just downgrade to OpenCV version 3.4.16 and use TensorFlow 1.0 instead of 2.0, and that should solve all your problems. That will allow the use of .LITE files as well as .TFLITE files.
Also, try increasing the resolution to 720x1280; the wrong resolution most likely causes a ton of errors as well when working with TensorFlow.
Take a look here: https://www.tensorflow.org/tutorials/images/classification
This notebook sets up a new classification model, and ends with "Convert the Keras Sequential model to a TensorFlow Lite model"
https://www.tensorflow.org/tutorials/images/classification#convert_the_keras_sequential_model_to_a_tensorflow_lite_model
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
This reliably produces a tflite model from a standard tf model.
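If you also need a labelmap.txt (as the detection script above expects), you can write one next to the model. A minimal sketch, assuming the class names come from the dataset used to train the Keras model (for example the class_names attribute of a tf.keras.utils.image_dataset_from_directory dataset):
# Sketch: write one label per line, in the same order as the model's output indices.
class_names = ['class_a', 'class_b']  # hypothetical; replace with your dataset's class names
with open('labelmap.txt', 'w') as f:
    f.write('\n'.join(class_names))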

How to do face recognition in background in python?

Problem I am Facing
I am new to Python and I want to do face recognition on webcam footage, so I am using the code below (which I got from YouTube) to achieve this. My laptop has a 7th-gen Intel i7 quad-core processor, 8 GB of RAM, and a 4 GB graphics card (AMD R7 M445). When I start streaming I only get 20 fps even though my camera can deliver 30 fps. On top of that, when there is a person in the frame, the stream becomes practically unusable, dropping to around 0.5 or even 0.2 fps.
My theory
I am guessing that this is a linear, pretty basic program, so all the tasks, including streaming and face recognition, are being done on the same thread. That is why the fps drops.
Solution I want
I want to create a structure where the face recognition is done on a background thread and the stream is displayed by the ongoing foreground thread.
I still don't understand why my PC displays the stream at 20 fps and not 30 fps.
Updated Code (added perf_counter and user-defined functions)
Please refer to the link below for the old code.
Old py code
import cv2
from simple_facerec import SimpleFacerec
from datetime import datetime
from time import perf_counter

prev = 0
count = 0
vid_cod = ""
cap = ""
path = ""
vid_name = ""
output = ""
sfr = ""

def vidCapture():
    global cap, prev, count, sfr
    while cap.isOpened():
        ret, frame = cap.read()

        # Detect Faces
        startDetect = perf_counter()
        face_locations, face_names = sfr.detect_known_faces(frame)
        for face_loc, name in zip(face_locations, face_names):
            y1, x2, y2, x1 = face_loc[0], face_loc[1], face_loc[2], face_loc[3]

            # if the face is recognised then
            if name != "Unknown":
                print("known Person detected :) => " + name)
                stopDetect = perf_counter()
                print(f"Time taken to identify face : {stopDetect-startDetect}")
            # else if the face is unknown then
            else:
                stopDetect = perf_counter()
                print(f"Time taken to identify face : {stopDetect-startDetect}")
                print("Unknown Person detected :( => Frame captured...")

        # show the frames on the display
        cv2.imshow("Frame", frame)
        # write frames to output object so it can save the video
        output.write(frame)

        # to terminate the process press "x"
        if cv2.waitKey(1) & 0XFF == ord('x'):
            break

    cap.release()
    output.release()
    cv2.destroyAllWindows()

def __init__():
    global prev, count, vid_cod, vid_name, cap, path, output, sfr

    startSfr = perf_counter()
    # Encode faces from a folder
    sfr = SimpleFacerec()
    sfr.load_encoding_images("images/")
    stopSfr = perf_counter()
    loadSfr = stopSfr - startSfr
    print(f"Time taken to load sfr : {loadSfr}")

    startCam = perf_counter()
    # Load Camera
    cap = cv2.VideoCapture('https://192.xxx.xxx.xxx:8080/videofeed')
    # cap = cv2.VideoCapture(0)
    stopCam = perf_counter()
    loadCam = stopCam - startCam
    print(f"Time taken to load cam : {loadCam}")

    startCodec = perf_counter()
    # Use MPEG video codec
    vid_cod = cv2.VideoWriter_fourcc(*'MPEG')
    stopCodec = perf_counter()
    loadCodec = stopCodec - startCodec
    print(f"Time taken to load codec : {loadCodec}")

    startInit = perf_counter()
    prev = 0
    count = 0
    # defining the path where video will be saved
    path = 'C:/Users/Username/Documents/Recorded/'
    # Encode video name with date and time
    vid_name = str("recording_" + datetime.now().strftime("%b-%d-%Y_%H:%M").replace(":", "_") + ".avi")
    # initialize video saving process as an "output" object
    output = cv2.VideoWriter(str(path + vid_name), vid_cod, 20.0, (640, 480))
    stopInit = perf_counter()
    loadInit = stopInit - startInit
    print(f"Time taken to load rest : {loadInit}")

    vidCapture()

__init__()
The file below is simple_facerec, which is imported in the code above.
simple_facerec.py
Note:
When I change the fps of output from 20 to 25, the saved video gets shorter, and when I change it from 20 to 15, the video gets longer. This is how I worked out the streaming fps of the video.
My Logs
After putting perf_counter() I found that face recognition alone is taking about 0.5 seconds. This whole code ran for about 7 to 8 seconds in which only 5 frames were displayed.
Time taken to load sfr : 2.3252885000001697
Time taken to load cam : 0.3193597999998019
Time taken to load codec : 1.3199998647905886e-05
Time taken to load rest : 0.0018557999974291306
known Person detected :) => Anirudhdhsinh Jadeja
Time taken to identify face : 0.4715902999996615
known Person detected :) => Anirudhdhsinh Jadeja
Time taken to identify face : 0.5326913999997487
known Person detected :) => Anirudhdhsinh Jadeja
Time taken to identify face : 0.4969602000019222
known Person detected :) => Anirudhdhsinh Jadeja
Time taken to identify face : 0.4868558000016492
known Person detected :) => Anirudhdhsinh Jadeja
Time taken to identify face : 0.4679383000002417
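For what it's worth, one possible structure for the background-thread part (a rough sketch, not tested; sfr, detect_known_faces and cap follow the names in the question's code, everything else is illustrative): recognition runs in a single worker thread while the display loop keeps drawing the most recent result.
import cv2
from concurrent.futures import ThreadPoolExecutor

# Sketch only: offload detect_known_faces so the display loop keeps running at camera speed.
executor = ThreadPoolExecutor(max_workers=1)
pending = None            # Future for the in-flight recognition job
last_result = ([], [])    # most recent (face_locations, face_names)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # start a new recognition job only when the previous one has finished
    if pending is None or pending.done():
        if pending is not None:
            last_result = pending.result()
        pending = executor.submit(sfr.detect_known_faces, frame.copy())

    # draw the most recent (possibly slightly stale) result on the live frame
    for face_loc, name in zip(*last_result):
        y1, x2, y2, x1 = face_loc[0], face_loc[1], face_loc[2], face_loc[3]
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(frame, name, (int(x1), int(y1) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('x'):
        break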

Communication with Thorlabs uc480 camera

I was able to get the current image from a Thorlabs uc480 camera using instrumental. My issue is when I try to adjust the parameters for grab_image. I can change cx and left to any value and still get an image, but cy and top only work if cy=600 and top=300. The purpose is to create a GUI so that the user can select values for these parameters to zoom in/out of an image.
Here is my code:
import instrumental
from instrumental.drivers.cameras import uc480
from matplotlib.figure import Figure
import matplotlib.pyplot as plt
paramsets = instrumental.list_instruments()
cammer = instrumental.instrument(paramsets[0])
plt.figure()
framer = cammer.grab_image(timeout='1s', copy=True, n_frames=1, exposure_time='5ms',
                           cx=640, left=10, cy=600, top=300)
plt.pcolormesh(framer)
The above code does not give an image if I choose cy=600 and top=10. Is there a particular set of values to be used for these parameters? How can I get an image of the full sensor size?
Thorlabs has a Python programming interface available as a download on their website. It is very well documented, and can be installed locally via pip.
Link:
https://www.thorlabs.com/software_pages/ViewSoftwarePage.cfm?Code=ThorCam
Here is an example of a simple capture algorithm that might help get you started:
import numpy as np

from thorlabs_tsi_sdk.tl_camera import TLCameraSDK
from thorlabs_tsi_sdk.tl_mono_to_color_processor import MonoToColorProcessorSDK
from thorlabs_tsi_sdk.tl_camera_enums import SENSOR_TYPE

# open the TLCameraSDK dll
with TLCameraSDK() as sdk:
    cameras = sdk.discover_available_cameras()
    if len(cameras) == 0:
        print("Error: no cameras detected!")

    with sdk.open_camera(cameras[0]) as camera:
        #camera.disarm() # ensure any previous session is closed

        # setup the camera for continuous acquisition
        camera.frames_per_trigger_zero_for_unlimited = 0
        camera.image_poll_timeout_ms = 2000 # 2 second timeout
        camera.arm(2)

        # need to save the image width and height for color processing
        image_width = camera.image_width_pixels
        image_height = camera.image_height_pixels

        # initialize a mono to color processor if this is a color camera
        is_color_camera = (camera.camera_sensor_type == SENSOR_TYPE.BAYER)
        mono_to_color_sdk = None
        mono_to_color_processor = None
        if is_color_camera:
            mono_to_color_sdk = MonoToColorProcessorSDK()
            mono_to_color_processor = mono_to_color_sdk.create_mono_to_color_processor(
                camera.camera_sensor_type,
                camera.color_filter_array_phase,
                camera.get_color_correction_matrix(),
                camera.get_default_white_balance_matrix(),
                camera.bit_depth
            )

        # begin acquisition
        camera.issue_software_trigger()

        # get the next frame
        frame = camera.get_pending_frame_or_null()

        # initialize frame attempts and max limit
        frame_attempts = 0
        max_attempts = 10

        # if frame is null, try to get a frame until
        # successful or until max_attempts is reached
        if frame is None:
            while frame is None:
                frame = camera.get_pending_frame_or_null()
                frame_attempts += 1
                if frame_attempts == max_attempts:
                    raise TimeoutError("Timeout was reached while polling for a frame, program will now exit")

        image_data = frame.image_buffer

        if is_color_camera:
            # transform the raw image data into RGB color data
            color_data = mono_to_color_processor.transform_to_24(image_data, image_width, image_height)
            save_data = np.reshape(color_data, (image_height, image_width, 3))

        camera.disarm()
You can also process the image after capture with the PIL library.
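For example, a small sketch of that PIL step (assuming save_data is the 8-bit RGB array built above):
from PIL import Image
import numpy as np

# Sketch: wrap the RGB array in a PIL image and write it to disk.
img = Image.fromarray(save_data.astype(np.uint8), mode='RGB')
img.save('capture.png')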

How can I run a parallel thread for applying a function on each frame of a video stream?

I am trying to make an application with Flask that
Captures a video stream from a webcam
Applies an object detection algorithm on each frame
Concurrently displays the frames with bounding boxes and data provided by the above function in the form of a video
The problem is that, because of the detection function's processing time, the result renders with a bit of lag. To overcome this, I tried to run the function on another thread.
Can you please help me run the function on each frame of the video stream and display the resulting frames as a video?
My main function looks somewhat like this:
def detect_barcode():
    global vs, outputFrame, lock
    total = 0
    # loop over frames from the video stream
    while True:
        frame = vs.read()
        frame = imutils.resize(frame, width=400)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (7, 7), 0)
        timestamp = datetime.datetime.now()
        cv2.putText(frame, timestamp.strftime(
            "%A %d %B %Y %I:%M:%S%p"), (10, frame.shape[0] - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
        if total > frameCount:
            # detect barcodes in the image
            barcodes = md.detect(gray)
            # draw a box around each barcode that was found
            for barcode in barcodes:
                x, y, w, h = cv2.boundingRect(barcode)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        total += 1
        # lock
        with lock:
            outputFrame = frame.copy()

if __name__ == '__main__':
    # start a thread that will perform motion detection
    t = threading.Thread(target=detect_barcode)
    t.daemon = True
    t.start()
    app.run(debug=True)

vs.stop()
I literally just did something like this. My solution was the standard library's concurrent.futures module.
import concurrent.futures as futures
Basically, you create your executor and 'submit' your blocking method as a parameter. It returns a Future object, whose status and result (if any) you can check every frame.
executor = futures.ThreadPoolExecutor()
would initialize the executor
future_obj = executor.submit(YOUR_FUNC, *args)
would start execution. Sometime later on you could check its status
if future_obj.done():
    print(future_obj.result())
else:
    print("task running")
    # come back next frame and check again
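Applied to the frame loop in the question, the pattern could look roughly like this (a sketch only; vs, md, lock and outputFrame are the objects from the question's code):
import concurrent.futures as futures
import cv2

executor = futures.ThreadPoolExecutor(max_workers=1)
future_obj = None
barcodes = []

while True:
    frame = vs.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # only submit a new detection job once the previous one has finished
    if future_obj is None or future_obj.done():
        if future_obj is not None:
            barcodes = future_obj.result()   # most recent finished detection
        future_obj = executor.submit(md.detect, gray)
    # draw the latest available detections on the current frame
    for barcode in barcodes:
        x, y, w, h = cv2.boundingRect(barcode)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    with lock:
        outputFrame = frame.copy()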

Python + GStreamer: scale video to window

I'm having some trouble rescaling video output of GStreamer to the dimension of the window the video is displayed in (retaining aspect ratio of the video). The problem is that I first need to preroll the video to be able to determine its dimensions by retrieving the negotiated caps, and then calculate the dimensions it needs to be displayed in to fit the window. Once I have prerolled the video and got the dimension caps, I cannot change the video's dimension anymore. Setting the new caps still results in the video being output in its original size. What must I do to solve this?
Just to be complete. In the current implementation I cannot render to an OpenGL texture which would have easily solved this problem because you could simply render output to the texture and scale it to fit the screen. I have to draw the output on a pygame surface, which needs to have the correct dimensions. pygame does offer functionality to scale its surfaces, but I think such an implementation (as I have now) is slower than retrieving the frames in their correct size directly from GStreamer (am I right?)
This is my code for loading and displaying the video (I omitted the main loop stuff):
def calcScaledRes(self, screen_res, image_res):
    """Calculate image size so it fits the screen
    Args
        screen_res (tuple) - Display window size/Resolution
        image_res (tuple) - Image width and height
    Returns
        tuple - width and height of image scaled to window/screen
    """
    rs = screen_res[0]/float(screen_res[1])
    ri = image_res[0]/float(image_res[1])

    if rs > ri:
        return (int(image_res[0] * screen_res[1]/image_res[1]), screen_res[1])
    else:
        return (screen_res[0], int(image_res[1]*screen_res[0]/image_res[0]))

def load(self, vfile):
    """
    Loads a videofile and makes it ready for playback
    Arguments:
    vfile -- the uri to the file to be played
    """
    # Info required for color space conversion (YUV->RGB)
    # masks are necessary for correct display on unix systems
    _VIDEO_CAPS = ','.join([
        'video/x-raw-rgb',
        'red_mask=(int)0xff0000',
        'green_mask=(int)0x00ff00',
        'blue_mask=(int)0x0000ff'
    ])
    self.caps = gst.Caps(_VIDEO_CAPS)

    # Create videoplayer and load URI
    self.player = gst.element_factory_make("playbin2", "player")
    self.player.set_property("uri", vfile)

    # Enable deinterlacing of video if necessary
    self.player.props.flags |= (1 << 9)

    # Reroute frame output to Python
    self._videosink = gst.element_factory_make('appsink', 'videosink')
    self._videosink.set_property('caps', self.caps)
    self._videosink.set_property('async', True)
    self._videosink.set_property('drop', True)
    self._videosink.set_property('emit-signals', True)
    self._videosink.connect('new-buffer', self.__handle_videoframe)
    self.player.set_property('video-sink', self._videosink)

    # Preroll movie to get dimension data
    self.player.set_state(gst.STATE_PAUSED)

    # If movie is loaded correctly, info about the clip should be available
    if self.player.get_state(gst.CLOCK_TIME_NONE)[0] == gst.STATE_CHANGE_SUCCESS:
        pads = self._videosink.pads()
        for pad in pads:
            caps = pad.get_negotiated_caps()[0]
            self.vidsize = caps['width'], caps['height']
    else:
        raise exceptions.runtime_error("Failed to retrieve video size")

    # Calculate size of video when fit to screen
    self.scaledVideoSize = self.calcScaledRes((self.screen_width, self.screen_height), self.vidsize)

    # Calculate the top left corner of the video (to later center it vertically on screen)
    self.vidPos = ((self.screen_width - self.scaledVideoSize[0]) / 2, (self.screen_height - self.scaledVideoSize[1]) / 2)

    # Add width and height info to video caps and reload caps
    _VIDEO_CAPS += ", width={0}, heigh={1}".format(self.scaledVideoSize[0], self.scaledVideoSize[1])
    self.caps = gst.Caps(_VIDEO_CAPS)
    self._videosink.set_property('caps', self.caps) #??? not working, video still displayed in original size

def __handle_videoframe(self, appsink):
    """
    Callback method for handling a video frame
    Arguments:
    appsink -- the sink to which gst supplies the frame (not used)
    """
    buffer = self._videosink.emit('pull-buffer')
    img = pygame.image.frombuffer(buffer.data, self.vidsize, "RGB")

    # Upscale image to new surface if presented fullscreen
    # Create the surface if it doesn't exist yet and keep rendering to this surface
    # for future frames (should be faster)
    if not hasattr(self, "destSurf"):
        self.destSurf = pygame.transform.scale(img, self.destsize)
    else:
        pygame.transform.scale(img, self.destsize, self.destSurf)

    self.screen.blit(self.destSurf, self.vidPos)

    # Swap the buffers
    pygame.display.flip()

    # Increase frame counter
    self.frameNo += 1
I'm pretty sure that your issue was (as it is a very long time since you asked this question) that you never hooked up the bus to watch for the messages that were emitted.
The code for this is usually something like this:
def some_function(self):
    # code defining Gplayer (the pipeline)
    #
    # here
    Gplayer.set_property('flags', self.GST_VIDEO|self.GST_AUDIO|self.GST_TEXT|self.GST_SOFT_VOLUME|self.GST_DEINTERLACE)
    # more code
    #
    # finally
    # Create the bus to listen for messages
    bus = Gplayer.get_bus()
    bus.add_signal_watch()
    bus.enable_sync_message_emission()
    bus.connect('message', self.OnBusMessage)
    bus.connect('sync-message::element', self.OnSyncMessage)

# Listen for gstreamer bus messages
def OnBusMessage(self, bus, message):
    t = message.type
    if t == Gst.MessageType.ERROR:
        pass
    elif t == Gst.MessageType.EOS:
        print ("End of Audio")
    return True

def OnSyncMessage(self, bus, msg):
    if msg.get_structure() is None:
        return True
    message_name = msg.get_structure().get_name()
    if message_name == 'prepare-window-handle':
        imagesink = msg.src
        imagesink.set_property('force-aspect-ratio', True)
        imagesink.set_window_handle(self.panel1.GetHandle())
The key bit for your issue is setting up a call back for the sync-message and in that call-back, setting the property force-aspect-ratio to True.
This property ensures that the video fits the window it is being displayed in at all times.
Note that the self.panel1.GetHandle() call refers to the panel in which you are displaying the video.
I appreciate that you will have moved on but hopefully this will help someone else trawling through the archives.
