PiCamera and continuous_capture with Python and Raspberry Pi

I'm using a Raspberry Pi and trying to create a video stream using Flask and the Pi camera library. My understanding is that I need to use continuous_capture to get the lowest latency within the system.
However, I can't find a way to preview the images that are supposedly being taken. Is there a way I can view them before I try to implement this in Flask, which has plenty of issues of its own, since I will also be using the Flask website to control a robot?
Any suggestions on how to do this are appreciated, as is telling me there is an easier way to do it. Please note that I am only an intermediate programmer, so nothing too complex for me to understand, as that is the whole idea of the project.

I believe you are looking for capture_continuous.
Here's the general process:
Import the necessary packages
Pause to allow the camera to warm up
Iterate through the camera frames
Capture and show the frame
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))

# allow the camera to warm up
time.sleep(0.1)

# capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array

    # show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
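Since the end goal is a Flask video stream, here is a minimal sketch (not from the original answer) of how those same capture_continuous frames could be served as an MJPEG stream; the route name, port, and single-viewer assumption are mine:

from flask import Flask, Response
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

app = Flask(__name__)

# same camera setup as above
camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(640, 480))
time.sleep(0.1)

def generate():
    # yield each frame as one part of a multipart (MJPEG) response
    for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
        ok, jpeg = cv2.imencode(".jpg", frame.array)
        rawCapture.truncate(0)
        if not ok:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

@app.route("/video_feed")   # hypothetical route name
def video_feed():
    return Response(generate(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, threaded=True)

Point a browser (or an img tag on your robot-control page) at /video_feed to view the stream.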

Related

Holding down a key freezes a video pipeline in OpenCV

I have a program which gets video from an industrial camera. I use OpenCV to display the video and do some image analysis. I have several circles displayed on an image and it is possible to move them with the keyboard. When I press a keyboard button and hold it, image acquisition freezes, but the circle still moves (you can see this by releasing the key: the circle has moved).
Here is a simplified example:
import numpy as np
import cv2

WINDOW_NAME = 'program'

cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
cv2.setWindowProperty(WINDOW_NAME, cv2.WND_PROP_FULLSCREEN,
                      cv2.WINDOW_GUI_EXPANDED)

while True:
    # Simplified image acquisition.
    frame = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)
    cv2.imshow(WINDOW_NAME, frame)

    pressed_key_code = cv2.waitKey(1)
    if pressed_key_code == ord('q'):
        cv2.destroyAllWindows()
        break
    # Simplified action on key press.
    elif pressed_key_code == ord('p'):
        print('moving action')
In this example, if one presses and holds p, the image acquisition stops, but one can see that the prints still happen.
I tried different key-waiting functions: cv2.pollKey(), cv2.waitKeyEx(). I used different waiting times. After researching the problem, I also found suggestions about using several cv2.waitKey() functions simultaneously.
I expect the key action not to freeze image acquisition. What can I do?
OpenCV version: 4.7.0
Platform: Windows 10.0.17763 AMD64
HighGUI backend: WIN32UI
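One workaround worth sketching (an assumption on my part, not from the post) is to move image acquisition to a background thread, so the GUI loop only displays the most recent frame; holding a key then stalls only the display loop, not the acquisition:

import threading
import numpy as np
import cv2

WINDOW_NAME = 'program'
latest_frame = None
running = True

def acquire():
    # Simplified image acquisition, now independent of the GUI/key loop.
    global latest_frame
    while running:
        latest_frame = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)

threading.Thread(target=acquire, daemon=True).start()

cv2.namedWindow(WINDOW_NAME, cv2.WINDOW_NORMAL)
while True:
    if latest_frame is not None:
        cv2.imshow(WINDOW_NAME, latest_frame)
    pressed_key_code = cv2.waitKey(1)
    if pressed_key_code == ord('q'):
        running = False
        cv2.destroyAllWindows()
        break
    elif pressed_key_code == ord('p'):
        print('moving action')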

Recording the real time face expression detection

I have coded face expression detection in a Jupyter notebook, detecting seven facial expressions (Anger, Sad, Disgust, Happy, ...), and tried real-time detection using my laptop's camera. Now I want to record the expressions detected by the model during real-time detection and create a figure of the detected expressions over time. First of all, is it possible to do so? If not, what other options do I have? For example, can I record the video taken by the camera, later detect the expressions from the video, and make a figure from all the expressions detected over time? Thank you all for helping me!
You could do something like this:
from tensorflow import keras
import numpy as np
import cv2

all_labels = ["Anger", "Sad", "Disgust", "Happy"]

# load the trained model, or train a model
model = keras.models.load_model('path/to/location')

# Open the camera
cap = cv2.VideoCapture(0)
# Or similarly open a saved video
cap = cv2.VideoCapture('path/to/video')

# Check if camera was opened correctly
if not cap.isOpened():
    print("Could not open video device")

# Fetch one frame at a time from your camera in real-time or from the video
i = 0
while True:
    # frame is a numpy array that you can predict on
    ret, frame = cap.read()
    if not ret:
        break

    # Obtain the prediction (you may have to reshape frame according to your model)
    prediction = model(frame, training=False)

    # obtain a label from prediction, depending on your label list
    # (for example, the highest-scoring class)
    label = all_labels[int(np.argmax(prediction))]

    # save the frame in a different folder depending on the label predicted
    if label in all_labels:
        cv2.imwrite('{}/frame_{}.jpg'.format(label, i), frame)
    i = i + 1

    # Waits for a user input to quit the application
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
I made an answer to a similar but not identical problem. Maybe you can draw inspiration from that. Also this is a great tutorial for capturing live videos made by OpenCV.
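For the "figure of detected expressions over time" part, a minimal sketch (assuming you log one predicted label per frame and that matplotlib is acceptable) is to collect (timestamp, label) pairs inside the loop above and plot them afterwards:

import matplotlib.pyplot as plt

all_labels = ["Anger", "Sad", "Disgust", "Happy"]

# collected inside the capture loop, e.g. log.append((time.time() - start, label));
# the values below are placeholder example data
log = [(0.0, "Happy"), (1.5, "Happy"), (3.0, "Sad"), (4.5, "Anger"), (6.0, "Happy")]

times = [t for t, _ in log]
indices = [all_labels.index(lbl) for _, lbl in log]

plt.step(times, indices, where='post')
plt.yticks(range(len(all_labels)), all_labels)
plt.xlabel('time (s)')
plt.ylabel('detected expression')
plt.title('Detected expressions over time')
plt.show()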

How to stream and grab frames from a video file to test real-time processing in Python

I'm working on a project that will eventually have to process webcam images in real time. I have some suitable test videos that I use to test my program. However, I don't know how to simulate real-time processing with a video file. I can read in each frame and process it, but this is not realistic, since the algorithm is too heavy to run on every frame. I would like to 'stream' the video separately and pull in a frame each time the algorithm starts, so that I can test with a realistic fps, but I don't know how to do this.
Here is the basic layout of the code you can use:
import cv2

cap = cv2.VideoCapture('path to video file')

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    ### YOUR CODE HERE ###

    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()  # destroy all opened windows
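If you want the file to behave more like a live camera, here is a sketch (assuming the file reports its frame rate) that, each time your heavy algorithm finishes, seeks to the frame that would be "current" in real time, so slower processing simply means more skipped frames:

import time
import cv2

cap = cv2.VideoCapture('path to video file')
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 if the file reports 0
start = time.time()

while cap.isOpened():
    # index of the frame that would be live right now
    target = int((time.time() - start) * fps)
    cap.set(cv2.CAP_PROP_POS_FRAMES, target)
    ret, frame = cap.read()
    if not ret:
        break

    ### YOUR (HEAVY) CODE HERE ###

    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()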

Python Opencv high resolution bug

I'm using OpenCV as part of a beam profiler software. For this I have a high resolution camera (5496x3672, Daheng Imaging MER-2000-19U3M). I'm using a basic program to show the captured frames. The program works fine for a normal webcam; however, when I connect my high resolution camera (through USB 3.0) it becomes buggy. Most of the frame is black and at the top there are three small instances of the recording (screenshot here). On the other hand, the camera's own software displays the image properly, so I assume there must be a problem in how OpenCV accesses the camera. Here is the code used to display the image:
import cv2

cap = cv2.VideoCapture(0)
cap.set(3, 5496)   # 3 = cv2.CAP_PROP_FRAME_WIDTH
cap.set(4, 3672)   # 4 = cv2.CAP_PROP_FRAME_HEIGHT

while True:
    ret, frame = cap.read()
    frame2 = cv2.resize(frame, (1280, 720))
    cv2.imshow('frame', frame2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
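A common thing to try with high-resolution USB cameras in OpenCV (an assumption here, not something confirmed for this Daheng model) is to request an MJPG stream and use the named property constants instead of the bare indices 3 and 4; on Windows, forcing the DirectShow backend is also worth a try:

import cv2

# optionally force the DirectShow backend on Windows (an assumption, not tested on this camera)
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)

# ask for MJPG so the full-resolution frames fit through the USB link
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 5496)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 3672)

# check what the driver actually granted
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))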

display video on python without opencv?

I am using OpenCV for video processing. What I do is read a video frame by frame, apply some processing to each frame, and then display the new, modified frame. My code looks like this:
video_capture = cv2.VideoCapture('video.mp4')

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()

    # Applying some processing to frame
    .
    .
    .

    # Displaying the new frame with processing
    img = cv2.imshow('title', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
This way I can display the processed video instantaneously. The problem is that the display lags a lot due to the 'waitKey'. Is there another way to display images in real time to form a video, using a module other than cv2?
Thank you
One option is Tkinter; you can find some information here. Along with Tkinter, it uses python-gstreamer and python-gobject. It is much more complicated to set up; however, it allows for more customization options.
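As a simpler variant of that idea, here is a minimal sketch (assuming Pillow is installed; this is not the gstreamer setup from the link) that displays OpenCV frames in a Tkinter window without cv2.imshow or waitKey:

import tkinter as tk
import cv2
from PIL import Image, ImageTk

video_capture = cv2.VideoCapture('video.mp4')

root = tk.Tk()
label = tk.Label(root)
label.pack()

def update_frame():
    ret, frame = video_capture.read()
    if not ret:
        root.destroy()
        return
    # OpenCV delivers BGR; Pillow expects RGB
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    photo = ImageTk.PhotoImage(Image.fromarray(rgb))
    label.configure(image=photo)
    label.image = photo  # keep a reference so Tkinter does not drop the image
    root.after(30, update_frame)  # roughly 33 fps; adjust to your video

root.after(0, update_frame)
root.mainloop()
video_capture.release()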
