cv2 error during execution: Bad file descriptor - Python

I'm running some Python scripts in a loop (but not very fast).
Example:
import time
import cv2

stream = cv2.VideoCapture(0)
while True:
    ret, frame = stream.read()
    cv2.imshow("Frame", frame)
    cv2.waitKey(1)
    time.sleep(10)
After a few minutes of execution I get this error:
E1205 11:19:12.803174714 32302 backup_poller.cc:132] Run client channel backup poller: {"created":"#1607177952.802346313","description":"pollset_work","file":"src/core/lib/iomgr/ev_epollex_linux.cc","file_line":324,"referenced_errors":[{"created":"#1607177952.802226759","description":"Bad file descriptor","errno":9,"file":"src/core/lib/iomgr/ev_epollex_linux.cc","file_line":954,"os_error":"Bad file descriptor","syscall":"epoll_wait"}]}
Does anyone know what this is? I searched Google but didn't find any good solutions.

After creating a VideoCapture, always test stream.isOpened(). If that is False, you can't do anything with the video.
After ret, frame = stream.read(), always test whether ret is True. If it's False, the video has ended and you need to end the loop.
I would also recommend not simply sleeping for ten seconds. During that time the GUI event loop won't run (because you aren't running waitKey), which makes the imshow window unresponsive, and your OS might decide to kill the process.
Use waitKey(10000) instead, and check whether waitKey() waited the full interval. It may return earlier because of a key event; you can either accept that or call waitKey again with the remaining time.
A camera produces frames at regular intervals whether you read them or not. If you don't read them, they queue up; the driver won't like that and may decide to throw errors at you.
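For reference, a minimal sketch of the same loop with those checks added (same camera index and window name as in the question; this version simply accepts an early return from waitKey on a key press):

import cv2

stream = cv2.VideoCapture(0)
if not stream.isOpened():
    raise RuntimeError("could not open the camera")

while True:
    ret, frame = stream.read()
    if not ret:  # grab failed or the stream ended
        break
    cv2.imshow("Frame", frame)
    cv2.waitKey(10000)  # keeps the GUI event loop running; may return early on a key press

stream.release()
cv2.destroyAllWindows()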

Related

Moviepy write_videofile works the second time but not the first?

I'm concatenating a list of video objects and then writing them with write_videofile. Weirdly enough, the first time I write the file it plays fine for roughly the first half, then only the first few frames of each subsequent clip play before freezing. But here's the odd part: if I write the exact same video object again right after the first write, the second file writes just fine and plays perfectly.
Here's my code:
from moviepy.editor import VideoFileClip, concatenate_videoclips
clipslist = []
clips = ['https://clips-media-assets2.twitch.tv/AT-cm%7C787619651.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787628097.mp4', 'https://clips-media-assets2.twitch.tv/2222789345-offset-20860.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787624765.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787539697.mp4', 'https://clips-media-assets2.twitch.tv/39235981488-offset-3348.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C788412970.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787682495.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787962593.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787627256.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787573008.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C788543065.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787593688.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C788079881.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C788707738.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C788021727.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787595029.mp4', 'https://clips-media-assets2.twitch.tv/39233367648-offset-9536.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C788517651.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C788087743.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787497542.mp4', 'https://clips-media-assets2.twitch.tv/39233367648-offset-9154.mp4', 'https://clips-media-assets2.twitch.tv/7109626012888880881-offset-4818.mp4', 'https://clips-media-assets2.twitch.tv/72389234-offset-760.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787774924.mp4', 'https://clips-media-assets2.twitch.tv/AT-cm%7C787565708.mp4']
for clip in clips:
    dlclip = VideoFileClip(clip, target_resolution=(1080, 1920))  # Download clip
    clipslist.append(dlclip)
videofile = concatenate_videoclips(clipslist)
videofile.write_videofile("final1.mp4")  # Broken after roughly the first half
videofile.write_videofile("final2.mp4")  # Works entirely fine.
videofile.close()
Any ideas? Any suggestions appreciated.
Sometimes when the video is small enough it seems to write the first time just fine too.
There seems to be no set point where it breaks; each time I write it for the first time, it typically breaks at a different spot.
I've tried waiting for the thread to exit and sleeping after the concatenation and that doesn't seem to fix the issue.
If you cannot consistently replicate the issue, it's most likely not an issue with your code.
Try opening the produced clip with a different program, such as VLC.
I came across the same problem when writing multiple videos at the same time with write_videofile. It seems the later tasks corrupt the output of the earlier write_videofile tasks by stalling their writing processes; although those processes continue after the later tasks finish, the resulting videos from the earlier tasks break at the spots where they stalled. I haven't found a solution.

Python Flask web application hanging when called multiple times

I created a web application using Flask in order to trigger a detection routine through an HTTP request.
Basically, every time a GET request is sent to the endpoint URL, I want a function to be executed.
The code I'm using is:
import cv2
from flask import Flask

web_app = Flask(__name__)

@web_app.route('/test/v1.0/run', methods=['GET'])
def main():
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    while True:
        ret, frame = cap.read()
        # *** performing operations on each frame and running ML models on them ***
    return ('', 204)

if __name__ == '__main__':
    web_app.run(debug=True)
Everything works fine the first time, if I use:
curl -i http://localhost:5000/test/v1.0/run
the function main() is executed, and at the end the results are uploaded to an online database.
After this, the program keeps listening on the URL. If I send another GET request, main() is executed again, but it hangs after the first iteration of the while loop.
I tried simply running the same code multiple times by placing it in a for loop:
for i in range(10):
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    while True:
        ret, frame = cap.read()
        # *** performing operations on each frame and running ML models on them ***
and it works without any problem, so the hanging should not depend on anything I'm doing inside the code.
The problem must be caused by the fact that I'm using Flask to trigger the function, but in that case I don't understand why main() hangs after starting. I'm very new to Flask and web applications in general, so I'm probably missing something very simple here.
The problem was that I was also displaying the frames collected from the camera, using
cv2.imshow("window", frame)
and, even though at the end of the program I was closing everything with:
cap.release()
cv2.destroyAllWindows()
something remained hanging and got the process stuck at the next iteration.
I solved it by removing the cv2.imshow call. I don't really need to visualize the stream, so I can live with that.
Mostly out of curiosity though, I'll try to figure out how to make it work even when visualizing the video frames.
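For reference, a minimal sketch of the endpoint without the cv2.imshow call, releasing the capture so the next request can reopen the device (the processing step is a placeholder, and the loop here is assumed to end when the grab fails):

import cv2
from flask import Flask

web_app = Flask(__name__)

@web_app.route('/test/v1.0/run', methods=['GET'])
def main():
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    try:
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            # *** run the ML models on the frame here (placeholder) ***
    finally:
        cap.release()  # free the device for the next request
    return ('', 204)

if __name__ == '__main__':
    web_app.run(debug=True)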

How does image capturing with Pygame actually work?

My goal is to set up a webcam with Raspberry Pi to be active during a certain time (let's say 2 minutes). During this time I should be able to capture a still image at any time and save that image to a folder. I chose to work with pygame. My project is about capturing an image as soon as a sensor has been triggered, so it needs to be very responsive.
According to the documentation for Pygame camera here it says:
open()
Opens the camera device, attempts to initialize it, and begins
recording images to a buffer. The camera must be started before any of
the below functions can be used.
get_image()
Pulls an image off of the buffer as an RGB Surface. It can optionally
reuse an existing Surface to save time. The bit-depth of the surface
is either 24 bits or the same as the optionally supplied Surface.
So in my case, get_image() simply seems to pull out the first image captured after start() has been called. My question is: how can I reach the buffer with all the captured images, and how does the capturing actually work? I can't find a solution for capturing and saving a still image (at any time) between the calls to start() and stop() on the pygame camera. Since start() takes a few seconds to initialize, it is simply too slow to just call start(), get_image() and stop() one after another. Any help or suggestions would be appreciated.
See my python code below:
def activateCapturing():
    pygame.init()
    pygame.camera.init()
    cam = pygame.camera.Camera("/dev/video0", (320, 180))
    cam.start()
    pngdata = None
    imageAsbytes = []
    activated = True
    while activated:
        if readSensor():
            img = cam.get_image()
            pygame.image.save(img, "/tmp/image.png")
            activated = False
            with open("/tmp/image.png", 'rb') as f:
                pngdata = f.read()
            imageAsbytes = bytearray(pngdata)
    cam.stop()
    return imageAsbytes
Thanks in advance!
You simply do not stop the camera after capturing one image.
See https://www.pygame.org/docs/tut/CameraIntro.html.
get_image() gets the image the camera currently sees from the buffer - the buffer is NOT "all pictures since start()" but simply the currently viewed image.
You use stop() after your "capturing window" to stop acquiring more images.
If you are after performance, you might want to scroll down that page and review the section about Capturing a Live Stream - if you did the same (without displaying the stream) and just saved one image to disk when needed, you should get a decent framerate, as in the sketch below.
API: get_image()
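For illustration, a sketch of that live-stream approach: keep the camera running and save a single frame when the trigger fires. Here capture_window_open() and sensor_triggered() are hypothetical placeholders for your own timer and readSensor() logic.

import pygame
import pygame.camera

pygame.init()
pygame.camera.init()

cam = pygame.camera.Camera("/dev/video0", (320, 180))
cam.start()  # start once, before the capture window begins

snapshot = pygame.Surface((320, 180))
while capture_window_open():        # placeholder: e.g. a 2-minute timer
    # get_image() returns the frame the camera currently sees;
    # reusing the same Surface avoids allocating a new one per call
    snapshot = cam.get_image(snapshot)
    if sensor_triggered():          # placeholder for readSensor()
        pygame.image.save(snapshot, "/tmp/image.png")
        break

cam.stop()  # stop only after the capture window is over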

BeagleBone Black OpenCV Python is too slow

I'm trying to get images from a webcam with OpenCV and Python. The code is very basic:
import cv2
import time

cap = cv2.VideoCapture(0)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.cv.CV_CAP_PROP_FPS, 20)
a = 30
t = time.time()
while a > 0:
    now = time.time()
    print now - t
    t = now
    ret, frame = cap.read()
    # Some processes
    print a, ret
    print frame.shape
    a = a - 1
    k = cv2.waitKey(20)
    if k == 27:
        break
cv2.destroyAllWindows()
But it runs slowly. Output of the program:
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
VIDIOC_QUERYMENU: Invalid argument
HIGHGUI ERROR: V4L: Property <unknown property string>(5) not supported by device
8.82148742676e-06
select timeout
30 True
(480, 640, 3)
2.10035800934
select timeout
29 True
(480, 640, 3)
2.06729602814
select timeout
28 True
(480, 640, 3)
2.07144904137
select timeout
Configuration:
Beaglebone Black RevC
Debian wheezy
opencv 2.4
python 2.7
The "secret" to obtaining higher FPS when processing video streams with OpenCV is to move the I/O (i.e., the reading of frames from the camera sensor) to a separate thread.
When calling read() method along with cv2.VideoCapture function, it makes the entire process very slow as it has to wait for each I/O operation to be completed for it to move on to the next one (Blocking Process).
In order to accomplish this FPS increase/latency decrease, our goal is to move the reading of frames from a webcam or USB device to an entirely different thread, totally separate from our main Python script.
This will allow frames to be read continuously from the I/O thread, all while our root thread processes the current frame. Once the root thread has finished processing its frame, it simply needs to grab the current frame from the I/O thread. This is accomplished without having to wait for blocking I/O operations.
You can read Increasing webcam FPS with Python and OpenCV to know the steps in implementing threads.
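For illustration, a minimal sketch of such a threaded reader (the class name and structure are my own, not taken from the linked article): the background thread keeps grabbing frames so read() always returns the most recent one without blocking.

import threading
import cv2

class ThreadedCapture:
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.ret, self.frame = self.cap.read()
        self.stopped = False
        self.lock = threading.Lock()

    def start(self):
        t = threading.Thread(target=self._update)
        t.daemon = True
        t.start()
        return self

    def _update(self):
        # keep reading frames in the background until stop() is called
        while not self.stopped:
            ret, frame = self.cap.read()
            with self.lock:
                self.ret, self.frame = ret, frame

    def read(self):
        # return the most recently grabbed frame without blocking
        with self.lock:
            return self.ret, self.frame

    def stop(self):
        self.stopped = True
        self.cap.release()

# usage: process the latest frame while the I/O happens in the background
stream = ThreadedCapture(0).start()
ret, frame = stream.read()
# ... some processes ...
stream.stop()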
EDIT
Based on the discussions in our comments, I feel you could rewrite the code as follows:
import cv2

cv2.namedWindow("output")
cap = cv2.VideoCapture(0)

if cap.isOpened():  # Getting the first frame
    ret, frame = cap.read()
else:
    ret = False

while ret:
    cv2.imshow("output", frame)
    ret, frame = cap.read()
    key = cv2.waitKey(20)
    if key == 27:  # exit on Escape key
        break

cv2.destroyWindow("output")
I encountered a similar problem when I was working on a project using OpenCV 2.4.9 on the Intel Edison platform. Before doing any processing, it was taking roughly 80ms just to perform the frame grab. It turns out that OpenCV's camera capture logic for Linux doesn't seem to be implemented properly, at least in the 2.4.9 release. The underlying driver only uses one buffer, so it's not possible to use multi-threading in the application layer to work around it - until you attempt to grab the next frame, the only buffer in the V4L2 driver is locked.
The solution is not to use OpenCV's VideoCapture class. Maybe it was fixed to use a sensible number of buffers at some point, but as of 2.4.9 it wasn't. In fact, if you look at this article by the same author as the link provided by @Nickil Maveli, you'll find that as soon as he provides suggestions for improving the FPS on a Raspberry Pi, he stops using OpenCV's VideoCapture. I don't believe that is a coincidence.
Here's my post about it on the Intel Edison forum: https://communities.intel.com/thread/58544.
I basically wound up writing my own class to handle the frame grabs, directly using V4L2. That way you can provide a circular list of buffers and allow the frame grabbing and application logic to be properly decoupled. That was done in C++ though, for a C++ application. Assuming the above link delivers on its promises, that might be a far easier approach. I'm not sure whether it would work on BeagleBone, but maybe there's something similar to PiCamera out there. Good luck.
EDIT: I took a look at the source code for 2.4.11 of OpenCV. It looks like they now default to using 4 buffers, but you must be using V4L2 to take advantage of that. If you look closely at your error message HIGHGUI ERROR: V4L: Property..., you see that it references V4L, not V4L2. That means that the build of OpenCV you're using is falling back on the old V4L driver. In addition to the singular buffer causing performance issues, you're using an ancient driver that probably has many limitations and performance problems of its own.
Your best bet would be to build OpenCV yourself to make sure that it uses V4L2. If I recall correctly, the OpenCV configuration process checks whether the V4L2 drivers are installed on the machine and builds it accordingly, so you'll want to make sure that V4L2 and any related dev packages are installed on the machine you use to build OpenCV.
Try this one! I replaced some code in the cap.set() section.
import cv2
import time

cap = cv2.VideoCapture(0)
cap.set(3, 640)  # CV_CAP_PROP_FRAME_WIDTH
cap.set(4, 480)  # CV_CAP_PROP_FRAME_HEIGHT
cap.set(5, 20)   # CV_CAP_PROP_FPS
a = 30
t = time.time()
while a > 0:
    now = time.time()
    print now - t
    t = now
    ret, frame = cap.read()
    # Some processes
    print a, ret
    print frame.shape
    a = a - 1
    k = cv2.waitKey(20)
    if k == 27:
        break
cv2.destroyAllWindows()
Output (PC webcam). Your code was wrong for me.
>>0.0
>>30 True
>>(480, 640, 3)
>>0.246999979019
>>29 True
>>(480, 640, 3)
>>0.0249998569489
>>28 True
>>(480, 640, 3)
>>0.0280001163483
>>27 True
>>(480, 640, 3)
>>0.0320000648499

How to run two functions simultaneously

I am running a test but I want to run two functions at the same time. I have a camera that I am telling to move via suds, and I am then logging into the camera via SSH to check the speed it is set to. By the time I check the speed, the camera has already stopped, so no speed is available. Is there a way I can get these functions to run at the same time to test the speed of the camera? Sample code is below:
class VerifyPan(TestAbsoluteMove):
    def runTest(self):
        self.dest.PanTilt._x = 350
        # Runs soap move command
        threading.Thread(target=SudsMove).start()
        self.command = './ptzpanposition -c 0 -u degx10'
        # Logs into camera and checks speed
        TestAbsoluteMove.Ssh(self)
        # Position of the camera verified through Ssh (no decimal point added to the Ssh value)
        self.assertEqual(self.Value, '3500')
I have now tried the threading module as mentioned below. The thread does not run in sync with the function TestAbsoluteMove.Ssh(). Is there any other code I need to make this work?
I have looked at passing arguments into the thread statement so that the thread runs at the same time as the Ssh() function. Does anyone know what to enter in this statement?
Sorry if I haven't explained correctly. The 'SudsMove' function moves the camera and the 'Ssh' function logs into the camera and checks the speed the camera is currently moving at. The problem is that by the time the 'Ssh' function logs in the camera has stopped. I need both functions to run in parallel so I can check the camera speed while it is still moving.
Thanks
Import the threading module and run SudsMove() like so:
threading.Thread(target = SudsMove).start()
That will create and start a background thread which does the movement.
ANSWER TO EDITED QUESTION:
As far as I understand this, TestAbsoluteMove.Ssh(self) polls the speed once and stores the result in self.Value?! And you're testing the expected end tilt/rotation/position with self.assertEqual(self.Value, '3500')?!
If that's correct, you should wait for the camera to start its movement. You could probably poll the speed in a certain interval:
# Move camera in background thread
threading.Thread(target=SudsMove).start()

# What does this do?
self.command = './ptzpanposition -c 0 -u degx10'

# Poll the current speed in an interval of 250 ms
import time
measuredSpeedsList = []
for i in xrange(20):
    # Assuming that this call will put the result in self.Value
    TestAbsoluteMove.Ssh(self)
    measuredSpeedsList.append(self.Value)
    time.sleep(0.25)

print "Measured movement speeds: ", measuredSpeedsList
The movement speed will be the biggest value in measuredSpeedsList (i.e. max(measuredSpeedsList)). Hope that makes sense...
If you want to use the common Python implementation (CPython), you can certainly use the multiprocessing module, which does wonders (you can pass non-pickleable arguments to subprocesses, kill tasks,…), offers an interface similar to that of threads, and does not suffer from the Global Interpreter Lock.
The downside is that subprocesses are spawned, which takes more time than creating threads; this should only be a problem if you have many, many short tasks. Also, since data is passed (via serialization) between processes, large data both takes a long time to pass around and ends up having a large memory footprint (as it is duplicated between each process). In situations where each task takes a "long" time and the data in and out of each task is not too large, the multiprocessing module should be great.
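For illustration, a minimal sketch of running two functions in parallel processes with multiprocessing (move_camera and check_speed are stand-ins for the SudsMove call and the SSH speed check, not the asker's actual functions):

import multiprocessing

def move_camera():
    # stand-in for the SudsMove() soap call
    pass

def check_speed():
    # stand-in for TestAbsoluteMove.Ssh(self)
    pass

if __name__ == '__main__':
    mover = multiprocessing.Process(target=move_camera)
    checker = multiprocessing.Process(target=check_speed)
    mover.start()
    checker.start()  # both processes now run at the same time
    mover.join()
    checker.join()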
In CPython, only one thread can execute Python code at a time because of the Global Interpreter Lock; this has been answered extensively here. One solution is to use two separate processes. The answer above provides some tips.
If you can get your code to run under Jython or IronPython, then you can run several threads simultaneously; they don't have that goofy "Global Interpreter Lock" thing of CPython.
