I'm making a slideshow app with that oh-so-noughties pan and zoom effect, using pygame.
The main display therefore needs to run in realtime at 30fps+, and I don't want it stuttering whenever it has to load a new image - loading alone takes longer than 1/30th of a second.
So I wanted to use some parallel process to prepare the images and feed the main process with these objects, which are instances of a class.
I've tried threading and multiprocessing. Threading 'works', but it's still jumpy (I blame Python's GIL) - the whole thing slows down while the thread is busy! So the code ran, but it didn't meet the goal of keeping the display continually smooth.
Multiprocessing, however, segfaults (pygame parachute) as soon as I call a method on the prepared image received from the child process. I've tried both Pipe and Queue communication - both hit the same problem. The method runs fine until it reaches

sized = pygame.transform.scale(self.image, newsize)

at which point it segfaults. The class has no dependencies on anything in the main process.
Does pygame just not play well with multiprocessing? Is there another approach that is compatible? Is there a way to 'nice' a secondary thread so it can't starve the main one?
Any help greatly appreciated. Happy to post more code, just ask in comments, but didn't want to dump a big listing here unless needed.
Thanks in advance!
EDIT
This is as short as I could make it. You need to provide three paths to jpeg files in the constructor at the bottom.
#!/usr/bin/env python2
import pygame
from multiprocessing import Process, Pipe


class Img:
    """The image objects I need to pass around"""
    def __init__(self, filename=None):
        image = pygame.image.load(filename).convert()
        self.image = pygame.transform.scale(image, (640, 480))

    def getSurface(self):
        """Get a surface, blit our image onto it in right place."""
        surface = pygame.Surface((640, 480))
        # xxx this next command fails
        sized = pygame.transform.scale(self.image, (640, 480))
        surface.blit(sized, (0, 0))
        return surface


class Floaty:
    """demo"""
    def __init__(self, fileList):
        self.fileList = fileList
        pygame.init()
        self.screen = pygame.display.set_mode((640, 480))
        # open the first image to get it going
        self.img = Img(self.fileList.pop())
        # Set up parallel process for opening images
        self.parent_conn, child_conn = Pipe()
        self.feeder = Process(target=asyncPrep, args=(child_conn,))
        self.feeder.start()

    def draw(self):
        """draw the image"""
        # collect image ready-prepared by other process
        if self.parent_conn.poll():
            self.img = self.parent_conn.recv()
            print ("received ", self.img)
        # request new image
        self.parent_conn.send(self.fileList.pop())
        self.screen.blit(self.img.getSurface(), (0, 0))
        pygame.display.flip()


def asyncPrep(conn):
    """load up the files"""
    while True:
        if conn.poll(1):
            filename = conn.recv()
            print ("doing ", filename)
            img = Img(filename)
            conn.send(img)


if __name__ == '__main__':
    fileList = ['/path/to/a.jpg', '/path/to/b.jpg', '/path/to/c.jpg']
    f = Floaty(fileList)
    clock = pygame.time.Clock()
    while 1:
        f.draw()
        clock.tick(4)
When I run this (Python 2.7.6) I get:
('doing ', '/path/to/a.jpg')
('received ', <__main__.Img instance at 0x7f2dbde2ce60>)
('doing ', '/path/to/b.jpg')
Fatal Python error: (pygame parachute) Segmentation Fault
zsh: abort (core dumped)
I solved this using multiprocessing by making the Img class load the image into a string buffer, stored in a property, then adding a postReceive method which loads it into a surface stored in the image property.
The postReceive method is called by the parent process after receiving the Img object from the child.
Therefore the object created by the child is not bound to anything pygame-y.
self.imageBuffer = pygame.image.tostring(
    pygame.transform.scale(image, (640, 480)),
    'RGB')
Then, the new method in Img is simply:
def postReceive(self):
    # frombuffer needs the image size as well as the format
    self.image = pygame.image.frombuffer(self.imageBuffer, (640, 480), 'RGB')
    return self  # returning self lets the parent chain the call
Add a call to this here:
# collect image ready-prepared by other process
if self.parent_conn.poll():
    self.img = self.parent_conn.recv().postReceive()
    print ("received ", self.img)
Related
I have constructed some code to load assets into a character class. This works fine in a single thread, but I want to show a loading screen while the assets are loading.
I put the asset loading into a function and tried to run it in a separate thread. The issue is that it now hangs, and seemingly never even executes the main thread.
I've kept an eye on memory usage, and it is well-behaved (never goes above 500M out of 1976M - whatever unit PyCharm reports in).
import threading
import os
import pygame
from pygame.locals import *
from typing import List


def load_assets(screen: pygame.Surface, results: List):
    print("thread: started to load assets in thread")
    appearances1 = {}  # some dictionary, Dict[str, Dict[str, str]]
    default_outfit = "office_wear"
    print("thread: instance of game class to be created")
    game = Game("lmao", screen)  # crashes somewhere here when running multithreaded
    print("thread: game initialized")  # this is never reached
    # rest of function not relevant
    results.append(game)


def main():
    # ====== INITIALIZE PYGAME ======
    pygame.init()
    pygame.font.init()
    screen = pygame.display.set_mode((1024, 768))
    clock = pygame.time.Clock()

    loading = create_loading_screen(screen)
    loading.display(screen)
    pygame.display.flip()

    # ====== START OTHER THREAD FOR LOADING ASSETS ======
    results = []
    x = threading.Thread(
        target=load_assets, args=(screen, results), daemon=True
    )
    x.start()
    while x.is_alive():
        # never see this print statement!
        print("x is fine")  # want to blit loading screen here

    # if I comment out the threading and run the following instead, my game
    # boots up just fine in under 3-4 seconds of waiting for the assets to load
    # load_assets(screen, results)

    game = results[0]
    # other code here to render the game, no more threading after this point
    while 1:
        for event in pygame.event.get():
            if event.type == QUIT:
                return -1
        game.main()
        pygame.display.flip()
        clock.tick(game.fps)


if __name__ == '__main__':
    main()
From the print statements you can see that everything hangs - we never even reach the main thread's loop!
I figured out the issue. I feel really dumb now... My Game initializer was touching the pygame display, which I already knew it shouldn't do from a worker thread... I need a nap...
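The pattern that fixed it, roughly (an illustrative sketch with made-up asset names, not my actual code): the worker thread does pure loading only, and every display call stays on the main thread, which also gets to animate the loading screen while it waits.

import threading
import pygame


def load_assets(results):
    # Worker does pure loading only: no pygame.display calls in here
    assets = {"hero": pygame.image.load("hero.png")}  # illustrative asset
    results.append(assets)


def main():
    pygame.init()
    screen = pygame.display.set_mode((1024, 768))
    clock = pygame.time.Clock()

    results = []
    loader = threading.Thread(target=load_assets, args=(results,), daemon=True)
    loader.start()

    while loader.is_alive():
        pygame.event.pump()     # keep the window responsive
        screen.fill((0, 0, 0))  # draw the loading screen here
        pygame.display.flip()   # display calls stay on the main thread
        clock.tick(30)

    # Safe to touch the display again: convert() the surfaces and build the
    # Game instance here, on the main thread
    assets = {name: surf.convert() for name, surf in results[0].items()}


if __name__ == '__main__':
    main()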
This is what I have at the moment, but it's getting really slow read times. It takes about 6-8 seconds just to read the frames with OpenCV, and for my project I need to be able to get a picture at specific intervals while it reads a pressure transducer. Is there a way to make this program faster with cv2, or is there a way, using arrays or whatnot, to do this much quicker?
import cv2
import timeit


def main():  # Define camera function
    start = timeit.default_timer()  # Start a runtime timer
    hud_Cam = cv2.VideoCapture(0)  # Open the camera resources
    gauge_Cam = cv2.VideoCapture(1)
    rval, hud_Img = hud_Cam.read()  # Read/grab a camera frame from each
    rval, gauge_Img = gauge_Cam.read()
    stop = timeit.default_timer()  # Stop the runtime timer
    print('Time: ', stop - start)  # Report the capture time

    start1 = timeit.default_timer()  # Start a second timer
    hud_name = 'HudPicture0.jpg'  # Initialize file names
    gauge_name = 'GaugePicture0.jpg'
    cv2.imwrite(hud_name, hud_Img)  # Write the camera frames to the files
    cv2.imwrite(gauge_name, gauge_Img)
    print("Hud Picture written!")  # Log to console
    print("Gauge Picture written!")
    hud_Cam.release()  # Release the camera resources to clear memory
    gauge_Cam.release()
    stop1 = timeit.default_timer()  # Stop the second timer
    print('Time: ', stop1 - start1)  # Report the write time


main()
As I understand your question, you want to be able to take two images from two cameras as soon as possible after Labview requests it.
On Linux or macOS, I would start capturing continuously as soon as possible and then signal the capturing process using Unix signals. Unfortunately, you are using Windows and signals don't work so well there. So, I am going to use the filesystem to signal - if Labview wants a picture taken, it just creates a file with or without content called capture.txt and that makes the Python process save the current image. There are other more sophisticated methods, but this demonstrates the concept and as you learn more, you may replace the signalling mechanism with a write on a socket, or an MQTT message or something else.
I put the two cameras in two independent threads so they can work in parallel, i.e. faster.
#!/usr/bin/env python3
import cv2
import threading
import logging
from pathlib import Path


def capture(stream, path):
    """Capture given stream and save to file on demand"""
    # Start capturing to RAM
    logging.info(f'[captureThread]: starting stream {stream}')
    cam = cv2.VideoCapture(stream, cv2.CAP_DSHOW)
    while True:
        # Read video continuously
        _, im = cam.read()
        # Check if Labview wants it
        if CaptureFile.exists():
            # Intermediate filename
            temp = Path(f'saving-{stream}.jpg')
            # Save image with temporary name
            cv2.imwrite(str(temp), im)
            # Rename so Labview knows it is complete and not still being written
            temp.rename(path)
            logging.info('[captureThread]: image saved')
            break
    logging.info('[captureThread]: done')


if __name__ == "__main__":
    # Set up logging - advisable when threading
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO, datefmt="%H:%M:%S")
    logging.info("[MAIN]: starting")

    # Labview will create this file when a capture is required; ensure it is not there already
    CaptureFile = Path('capture.txt')
    CaptureFile.unlink(missing_ok=True)

    # Create a thread for each camera and start them
    HUDthread = threading.Thread(target=capture, args=(0, Path('HUD.jpg')))
    Gaugethread = threading.Thread(target=capture, args=(1, Path('Gauge.jpg')))
    HUDthread.start()
    Gaugethread.start()

    # Wait for both to exit
    HUDthread.join()
    Gaugethread.join()
    logging.info("[MAIN]: done")
This got the time down to Time: 2.066412200000059.
import timeit
start = timeit.default_timer()
import cv2

# Open the cameras once, up front, so main() only has to grab frames
hud_Cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
gauge_Cam = cv2.VideoCapture(1, cv2.CAP_DSHOW)


def main(hud_Cam, gauge_Cam, start):  # Define camera function
    while True:
        rval, hud_Img = hud_Cam.read()  # Read/grab camera frames
        rval, gauge_Img = gauge_Cam.read()
        hud_name = 'HudPicture0.jpg'  # Initialize file names
        gauge_name = 'GaugePicture0.jpg'
        cv2.imwrite(hud_name, hud_Img)  # Write camera frames to the files
        cv2.imwrite(gauge_name, gauge_Img)
        print("Hud Picture written!")  # Log to console
        print("Gauge Picture written!")
        hud_Cam.release()  # Release camera resources to clear memory
        gauge_Cam.release()
        stop = timeit.default_timer()
        print('Time: ', stop - start)  # Report the total time
        break
    # =========================================================================
    while True:
        img1 = cv2.imread(hud_name)
        img2 = cv2.imread(gauge_name)
        cv2.imshow("Hud Image", img1)
        cv2.imshow("Gauge Image", img2)
        k = cv2.waitKey(60)
        if k % 256 == 27:  # ESC closes the preview windows
            cv2.destroyAllWindows()
            break


main(hud_Cam, gauge_Cam, start)  # Call camera function
Is it possible, instead of copying the stream content to a given location (stream.copy_to('motion.h264')), to save it in a variable for further processing? The reason I'd like to do so is to be able to downgrade the video resolution once the video buffer has been collected.
Thanks in advance!
import io
import random
import picamera


def motion_detected():
    # Randomly return True (like a fake motion detection routine)
    return random.randint(0, 10) == 0


camera = picamera.PiCamera()
stream = picamera.PiCameraCircularIO(camera, seconds=20)
camera.start_recording(stream, format='h264')
try:
    while True:
        camera.wait_recording(1)
        if motion_detected():
            # Keep recording for 10 seconds and only then write the
            # stream to disk
            camera.wait_recording(10)
            stream.copy_to('motion.h264')
finally:
    camera.stop_recording()
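For instance, something like this is what I'm after (a sketch; per the picamera docs, copy_to() accepts a file-like object as well as a filename):

import io

buf = io.BytesIO()
stream.copy_to(buf)            # same stream as above, captured to memory
video_bytes = buf.getvalue()   # raw h264 bytes, ready to re-encode at lower resolution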
I want to run multiple object detection (YOLOv3) algorithms with different parameters.
I need to use more CPU cores, because the detection and counting algorithms are too heavy for a single core.
So I made a class that can be set up with different video names and GPU numbers, but I'm new to multiprocessing in Python and can't get the class working with it.
Features
1. Draw baselines in PySide2 or PyQt.
2. Select the GPU that will run YOLOv3.
3. Count cars that cross a specific line on the road.
But before really implementing it, I want to test whether multiprocessing can run the class at all.
# main.py
from stream import video
import multiprocessing as mp

print(mp.current_process())
process1 = mp.Process(target=video, args=('DJI_0474_2.MOV', 0))
process2 = mp.Process(target=video, args=('DJI_0474_3.MOV', 1))
process1.daemon = True
process2.daemon = True
process1.start()
process2.start()

# stream.py
import cv2
from threading import Timer
import multiprocessing as mp
# from yolov3 import YOLO

class video(object):
    def __init__(self, name, gpu_num):
        print(mp.current_process())
        self.cap = cv2.VideoCapture(name)
        # self.detector = YOLO(gpu_num)
        self.timer = Timer(0.066, self.run)
        self.timer.start()

    def run(self):
        ret, frame = self.cap.read()
        if ret:
            # frame = self.detector(frame)
            cv2.imshow('frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                return  # stop without scheduling another frame
I made this simple code to test multiprocessing.
But when I initialize the video class with the two args ('DJI_0474_2.MOV', 0), main.py closes itself.
I tried initializing before running multiprocessing, but then the error says "can't pickle cv2.VideoCapture objects".
# main2.py
from stream2 import video
import multiprocessing as mp

print(mp.current_process())
play = video('DJI_0474_2.MOV')
# process = mp.Process(target=video, args=('DJI_0474_2.MOV',))
process = mp.Process(target=play.run)
process.daemon = True
process.start()

# stream2.py
import cv2
from threading import Timer
import multiprocessing as mp
# from yolov3 import YOLO

class video(object):
    def __init__(self, name):
        print(mp.current_process())
        self.cap = cv2.VideoCapture(name)

    def run(self):
        self.timer = Timer(0.066, self.update)
        self.timer.start()

    def update(self):
        ret, frame = self.cap.read()
        if ret:
            cv2.imshow('frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                return  # stop reading
I think that's because the video instance lives in the main process but run executes in another one, so they can't share state.
I will try a queue to communicate between processes, but I'm worried about the latency... ideally I'd launch the multiprocessing from inside the class itself.
Is there any solution that lets me run multiprocessing from inside the class?
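I sketched out another variant I'm considering (untested): store only picklable arguments on the class and create the cv2.VideoCapture inside run(), after the child process has started, so the unpicklable handle never crosses the process boundary.

# stream3.py - untested sketch: defer creating the VideoCapture to the child
import cv2
import multiprocessing as mp

class Video(object):
    def __init__(self, name, gpu_num=0):
        self.name = name        # plain strings/ints pickle fine
        self.gpu_num = gpu_num

    def run(self):
        # Runs inside the child process, so the capture handle is
        # never pickled
        cap = cv2.VideoCapture(self.name)
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            cv2.imshow(self.name, frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        cap.release()

if __name__ == '__main__':
    p1 = mp.Process(target=Video('DJI_0474_2.MOV', 0).run)
    p2 = mp.Process(target=Video('DJI_0474_3.MOV', 1).run)
    p1.start()
    p2.start()
    p1.join()
    p2.join()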
There's no easy way; what I can think of is ROS, Boost, multithreading and all kinds of mutexes. It's going to take a while.
The simplest (if crude) way is to build several YOLO binaries, say one with GPU support and various CPU versions, and call each one from a different terminal to start a new process.
Then split the trained model into different subsets by retraining N models, with each build calling its own sub-model.
Then have each worker write its findings to one of N txt files, which are concatenated into one afterwards, to avoid mutex lock issues.
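Roughly like this (a sketch; the detect() function is a stand-in for the real YOLO call, and the file names are made up):

import multiprocessing as mp

def detect(video, out_path):
    # Stand-in for the real YOLO detection-and-counting call
    with open(out_path, 'w') as f:
        f.write('results for %s\n' % video)

if __name__ == '__main__':
    jobs = [('DJI_0474_2.MOV', 'out0.txt'), ('DJI_0474_3.MOV', 'out1.txt')]
    procs = [mp.Process(target=detect, args=job) for job in jobs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Concatenate the per-process results; no lock needed, since each
    # worker wrote to its own file
    with open('combined.txt', 'w') as out:
        for _, path in jobs:
            with open(path) as f:
                out.write(f.read())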
I wanted to see the current CPU load on top of the video image (source is /dev/video0), and I thought textoverlay element would be perfect for this.
I have constructed a (seemingly) working pipeline, except that the textoverlay keeps showing the value originally set to it.
The pipeline is currently like this:
v4l2src > qtdemux > queue > ffmpegcolorspace > textoverlay > xvimagesink
And the code looks like this (I have removed a bunch of gtk window and thread handling code plus some other signal handling, and only left the relevant part):
#!/usr/bin/env python
import sys, os, time, signal
import pygtk, gtk, gobject
import pygst
pygst.require("0.10")
import gst

# For cpu load stats
import psutil
from multiprocessing import Process, Value, Lock  # For starting threads


class Video:
    def __init__(self):
        window = gtk.Window(gtk.WINDOW_TOPLEVEL)
        vbox = gtk.VBox()
        window.add(vbox)
        self.movie_window = gtk.DrawingArea()
        vbox.add(self.movie_window)
        window.show_all()

        # Set up the gstreamer pipeline
        self.pipeline = gst.Pipeline("pipeline")
        self.camera = gst.element_factory_make("v4l2src", "camera")
        self.camera.set_property("device", "/dev/video0")
        self.pipeline.add(self.camera)

        # Demuxer. It doesn't have static pads - they are created at
        # runtime - so we need a dynamic callback to link them
        self.demuxer = gst.element_factory_make("qtdemux", "demuxer")
        self.demuxer.connect("pad-added", self.demuxer_callback)
        self.pipeline.add(self.demuxer)

        self.videoqueue = gst.element_factory_make("queue", "videoqueue")
        self.pipeline.add(self.videoqueue)
        self.videoconverter = gst.element_factory_make("ffmpegcolorspace", "videoconverter")
        self.pipeline.add(self.videoconverter)

        ## Text overlay stuff
        self.textoverlay = gst.element_factory_make("textoverlay", "textoverlay")
        self.overlay_text = "cpu load, initializing"
        self.textoverlay.set_property("text", self.overlay_text)
        self.textoverlay.set_property("halign", "left")
        self.textoverlay.set_property("valign", "top")
        self.textoverlay.set_property("shaded-background", "true")
        self.pipeline.add(self.textoverlay)

        self.videosink = gst.element_factory_make("xvimagesink", "videosink")
        self.pipeline.add(self.videosink)

        self.camera.link(self.videoqueue)
        gst.element_link_many(self.videoqueue, self.videoconverter, self.textoverlay, self.videosink)

        bus = self.pipeline.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()

        # Start stream
        self.pipeline.set_state(gst.STATE_PLAYING)

        # CPU stats calculator thread
        cpu_load_thread = Process(target=self.cpu_load_calculator, args=())
        cpu_load_thread.start()

    def demuxer_callback(self, dbin, pad):
        if pad.get_property("template").name_template == "video_%02d":
            print "Linking demuxer & videopad"
            qv_pad = self.videoqueue.get_pad("sink")
            pad.link(qv_pad)

    def cpu_load_calculator(self):
        cpu_num = len(psutil.cpu_percent(percpu=True))
        while True:
            load = psutil.cpu_percent(percpu=True)
            self.parsed_load = ""
            for i in range(0, cpu_num):
                self.parsed_load = self.parsed_load + "CPU%d: %s%% " % (i, load[i])
            print self.textoverlay.get_property("text")  # Correctly prints the previous cycle's CPU load
            self.textoverlay.set_property("text", self.parsed_load)
            time.sleep(2)


c = Video()
gtk.threads_init()
gtk.main()
The cpu_load_calculator keeps running in the background, and before setting the new value I print out the previous one using get_property() - it is being set properly. However, the actual video output window keeps showing the initial value.
How can I make the textoverlay update properly on the video window as well?
The problem is that you are trying to update the textoverlay from a different process, and processes, unlike threads, run in separate address spaces.
You can switch to threads:
from threading import Thread
...
# CPU stats calculator thread
cpu_load_thread = Thread(target=self.cpu_load_calculator, args=())
cpu_load_thread.start()
Or you can run the cpu_load_calculator loop from the main thread. This works because self.pipeline.set_state(gst.STATE_PLAYING) starts its own thread in the background.
So this will be enough:
# Start stream
self.pipeline.set_state(gst.STATE_PLAYING)
# CPU stats calculator loop
self.cpu_load_calculator()
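A third option is to let the GTK main loop drive the update with gobject.timeout_add, so no extra thread or process is needed at all. A sketch (the helper name is mine):

import gobject
import psutil

def make_overlay_updater(textoverlay):
    """Return a callback that refreshes the overlay text each time it fires."""
    def update():
        load = psutil.cpu_percent(percpu=True)
        text = " ".join("CPU%d: %s%%" % (i, l) for i, l in enumerate(load))
        textoverlay.set_property("text", text)
        return True  # returning True keeps the timeout scheduled
    return update

# in Video.__init__, instead of starting a Process:
#     gobject.timeout_add(2000, make_overlay_updater(self.textoverlay))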