How to display a picture server-side with Flask and OpenCV? - python

I have a Flask server that shows a black image via OpenCV on a monitor. When an HTTP request arrives, it must show another image in the same window, whose name arrives as a parameter. After another request arrives on the endpoint "/destroy", a black image must be shown in the same window again. I don't understand why this code doesn't work; do you have any suggestions? To avoid blocking the server, I need to use two different processes in the endpoints; is this the correct way to use them?
# (imports and app setup implied by the snippet)
import cv2
import screeninfo
from multiprocessing import Process
from flask import Flask, request

app = Flask(__name__)

# show black image before the server starts
image = cv2.imread("black.jpg", 1)
screen = screeninfo.get_monitors()[1]
cv2.moveWindow("image", screen.x - 1, screen.y - 1)
cv2.imshow("image", image)
cv2.waitKey(1)

def imageShower(name):
    # get the name parameter and show the image with that name
    image = cv2.imread(name + ".jpg", 1)
    cv2.imshow("image", image)
    cv2.waitKey(1)

def imageDestroyer():
    # return to the black image
    image = cv2.imread("black.jpg", 1)
    cv2.imshow("image", image)
    cv2.waitKey(1)

# at endpoint / start a process that invokes imageShower()
@app.route('/')
def showImg():
    # store the image name that arrived as a parameter
    name = request.args.get('name')
    # start the process
    p = Process(target=imageShower(name))
    p.start()
    # return back to the front-end
    return "image showed"

# return to the black image on the endpoint "/destroy"
@app.route('/destroy')
def destroy():
    # start the process
    pd = Process(target=imageDestroyer)
    pd.start()
    # return back to the front-end
    return "returned to black"
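One likely culprit in the code above is `Process(target=imageShower(name))`: this calls `imageShower(name)` immediately in the parent process and passes its return value (`None`) as the target, so the child process does nothing. The sketch below shows the corrected `target=`/`args=` form, with the cv2 calls replaced by a stub so it runs anywhere. Note also that even with this fixed, a HighGUI window created in the parent generally cannot be redrawn from a child process; each process needs its own window and event loop.

```python
from multiprocessing import Process, Queue

def show(name, q):
    # stand-in for cv2.imread()/cv2.imshow(); records the requested image
    q.put("showing " + name + ".jpg")

def demo():
    q = Queue()
    # WRONG: Process(target=show(name, q)) runs show() at once in the parent
    # and hands Process a target of None, so the child does nothing.
    # RIGHT: pass the callable and its arguments separately:
    p = Process(target=show, args=("cat", q))
    p.start()
    p.join()
    return q.get()

if __name__ == "__main__":
    print(demo())
```

The same `args=` form applies to the `/destroy` endpoint: `Process(target=imageDestroyer)` there is already correct, because no parentheses follow the name.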

OpenCV Python on Raspberry Pi can't open camera by index. Device is busy

I am trying to stream the video feed of my FLIR Lepton to a webpage. I got myself some code that I don't yet fully understand, as I am new to Flask. However, I got the camera working in VLC and in a simple Python script that captures 200 frames and saves them to a video file.
As stated in the title, I use OpenCV to do all of this, but I seem unable to "get" the camera: when trying to connect to the website, I get the "could not read" printout. I should also add that I am experiencing problems, although different ones, with the PiCam module when trying the same thing.
What makes this even more mysterious is that the camera does seem to be acquired: its LED starts to flash, much as it does when capturing video in the "simple" script.
This is the source of the file I am experiencing problems with:
from flask import Flask, render_template, Response
import cv2
from flirpy.camera.lepton import Lepton

fcamera = Lepton()

app = Flask(__name__)

camera = cv2.VideoCapture(1)  # use 0 for web camera
# for cctv camera use rtsp://username:password@ip_address:554/user=username_password='password'_channel=channel_number_stream=0.sdp' instead of camera
# for local webcam use cv2.VideoCapture(0)

def gen_frames():  # generate frame by frame from camera
    while True:
        # Capture frame-by-frame
        success, frame = camera.read()  # read the camera frame
        if not success:
            print("could not read")
            break
        else:
            ret, buffer = cv2.imencode('.jpg', frame)
            frame = buffer.tobytes()
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')  # concat frames one by one and show the result

@app.route('/video_feed')
def video_feed():
    # Video streaming route. Put this in the src attribute of an img tag
    return Response(gen_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')

@app.route('/')
def index():
    """Video streaming home page."""
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)
And this is the simple file that runs without problems:
from flirpy.camera.lepton import Lepton
import matplotlib.pyplot as plt
import cv2 as cv
from time import sleep

cap = cv.VideoCapture(1)
fourcc = cv.VideoWriter_fourcc(*'XVID')
out = cv.VideoWriter('output.avi', fourcc, 20.0, (int(cap.get(3)), int(cap.get(4))))
i = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        print("Can't receive frame (stream end?). Exiting ...")
        break
    # write the frame
    out.write(frame)
    if i > 200:
        break
    i = i + 1
# Release everything once the job is finished
cap.release()
out.release()
I did not attempt to use any of these scripts simultaneously. Also, VLC is not running.
Thanks for any help!
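Two things worth checking, offered as guesses rather than a definitive diagnosis: first, `fcamera = Lepton()` may already be holding the device when `cv2.VideoCapture(1)` tries to open it, and `fcamera` is never used in the Flask script, so try removing it. Second, `app.run(debug=True)` starts Werkzeug's reloader, which imports the module a second time in a child process, so the module-level `cv2.VideoCapture(1)` runs twice and the second open can find the device busy; `app.run(debug=True, use_reloader=False)` avoids that. The multipart framing itself can be tested without any hardware by injecting a fake frame source, as in this sketch:

```python
def mjpeg_parts(read_jpeg):
    """Wrap successive JPEG frames as multipart/x-mixed-replace body parts.
    read_jpeg is any callable returning JPEG bytes, or None to stop --
    in the real app it would wrap camera.read() + cv2.imencode."""
    while True:
        jpg = read_jpeg()
        if jpg is None:
            break
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg + b'\r\n')

# Exercise the generator with two fake frames instead of a camera.
frames = iter([b'\xff\xd8one\xff\xd9', b'\xff\xd8two\xff\xd9', None])
parts = list(mjpeg_parts(lambda: next(frames)))
```

If the framing checks out in isolation, the "could not read" printout points squarely at the device-open conflict rather than at the streaming code.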

Is there a way to use Python to read and process a camera's frame and then save it to a file, without using libraries such as OpenCV?

This is what I have at the moment, but it's getting really slow read times. It takes about 6-8 seconds just to read the frame with OpenCV, and for my project I need to be able to get a picture at specific intervals as it reads a pressure transducer. Is there a way to make this program faster with cv2, or is there a way using arrays or whatnot to do this much more quickly?
import cv2
import timeit

def main():  # Define camera function
    start = timeit.default_timer()  # Start a runtime timer
    hud_Cam = cv2.VideoCapture(0)  # Open camera resources
    gauge_Cam = cv2.VideoCapture(1)
    rval, hud_Img = hud_Cam.read()  # Read/grab camera frames
    rval, gauge_Img = gauge_Cam.read()
    stop = timeit.default_timer()  # Stop the runtime timer
    print('Time: ', stop - start)  # Report elapsed time
    start1 = timeit.default_timer()  # Start a runtime timer
    hud_name = 'HudPicture0.jpg'  # Initialize file names
    gauge_name = 'GaugePicture0.jpg'
    cv2.imwrite(hud_name, hud_Img)  # Write camera frames to the files
    cv2.imwrite(gauge_name, gauge_Img)
    print("Hud Picture written!")  # Log to console
    print("Gauge Picture written!")
    hud_Cam.release()  # Release camera resources to clear memory
    gauge_Cam.release()
    stop1 = timeit.default_timer()  # Stop the runtime timer
    print('Time: ', stop1 - start1)  # Report elapsed time

main()  # Call camera function
As I understand your question, you want to be able to take two images from two cameras as soon as possible after Labview requests it.
On Linux or macOS, I would start capturing continuously as soon as possible and then signal the capturing process using Unix signals. Unfortunately, you are using Windows and signals don't work so well there. So, I am going to use the filesystem to signal - if Labview wants a picture taken, it just creates a file with or without content called capture.txt and that makes the Python process save the current image. There are other more sophisticated methods, but this demonstrates the concept and as you learn more, you may replace the signalling mechanism with a write on a socket, or an MQTT message or something else.
I put the two cameras in two independent threads so they can work in parallel, i.e. faster.
#!/usr/bin/env python3
import cv2
import threading
import logging
from pathlib import Path

def capture(stream, path):
    """Capture given stream and save to file on demand"""
    # Start capturing to RAM
    logging.info(f'[captureThread]: starting stream {stream}')
    cam = cv2.VideoCapture(stream, cv2.CAP_DSHOW)
    while True:
        # Read video continuously
        _, im = cam.read()
        # Check if Labview wants it
        if CaptureFile.exists():
            # Intermediate filename
            temp = Path(f'saving-{stream}.jpg')
            # Save image with temporary name
            cv2.imwrite(str(temp), im)
            # Rename so Labview knows it is complete and not still being written
            temp.rename(path)
            logging.info(f'[captureThread]: image saved')
            break
    logging.info(f'[captureThread]: done')

if __name__ == "__main__":
    # Set up logging - advisable when threading
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO, datefmt="%H:%M:%S")
    logging.info("[MAIN]: starting")
    # Labview will create this file when a capture is required; ensure it is not there already
    CaptureFile = Path('capture.txt')
    CaptureFile.unlink(True)
    # Create a thread for each camera and start them
    HUDthread = threading.Thread(target=capture, args=(0, Path('HUD.jpg')))
    Gaugethread = threading.Thread(target=capture, args=(1, Path('Gauge.jpg')))
    HUDthread.start()
    Gaugethread.start()
    # Wait for both to exit
    HUDthread.join()
    Gaugethread.join()
    logging.info("[MAIN]: done")
This got the time down to: Time: 2.066412200000059
import timeit
start = timeit.default_timer()
import cv2

hud_Cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)  # Open camera resources
gauge_Cam = cv2.VideoCapture(1, cv2.CAP_DSHOW)

def main(hud_Cam, gauge_Cam, start):  # Define camera function
    while True:
        rval, hud_Img = hud_Cam.read()  # Read/grab camera frames
        rval, gauge_Img = gauge_Cam.read()
        hud_name = 'HudPicture0.jpg'  # Initialize file names
        gauge_name = 'GaugePicture0.jpg'
        cv2.imwrite(hud_name, hud_Img)  # Write camera frames to the files
        cv2.imwrite(gauge_name, gauge_Img)
        print("Hud Picture written!")  # Log to console
        print("Gauge Picture written!")
        hud_Cam.release()  # Release camera resources to clear memory
        gauge_Cam.release()
        stop = timeit.default_timer()
        print('Time: ', stop - start)  # Report elapsed time
        break
    # =========================================================================
    while True:
        img1 = cv2.imread(hud_name)
        img2 = cv2.imread(gauge_name)
        cv2.imshow("Hud Image", img1)
        cv2.imshow("Gauge Image", img2)
        k = cv2.waitKey(60)
        if k % 256 == 27:
            cv2.destroyAllWindows()
            break

main(hud_Cam, gauge_Cam, start)  # Call camera function

How to broadcast a webcam image stream to multiple devices using OpenCV and Flask?

I have the following code. It works well when one device connects to the server, but when two devices connect, only one works; the other freezes and gives the following error.
The goal is to broadcast a video stream to multiple devices. Also, is there a way to improve the FPS transmitted and reduce the latency with Flask?
CODE
Camera.py
import cv2

class VideoCamera(object):
    def __init__(self):
        # Using OpenCV to capture from device 0. If you have trouble capturing
        # from a webcam, comment the line below out and use a video file
        # instead.
        self.video = cv2.VideoCapture(0)
        # If you decide to use video.mp4, you must have this file in the same
        # folder as main.py.
        # self.video = cv2.VideoCapture('video.mp4')

    def __del__(self):
        self.video.release()

    def get_frame(self):
        success, image = self.video.read()
        # We are using Motion JPEG, but OpenCV defaults to capturing raw
        # images, so we must encode them into JPEG in order to correctly
        # display the video stream.
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()
Main.py
import os
install_opencv = os.system("pip install flask opencv-python")
print("Install OPENCV-PYTHON: ", install_opencv)
from flask import Flask, render_template, Response
from camera import VideoCamera

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', debug=True)
ERROR
Traceback (most recent call last):
cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv\modules\imgcodecs\src\grfmt_base.cpp:145: error: (-10:Unknown error code -10) Raw image encoder error: Empty JPEG image (DNL not supported) in function 'cv::BaseImageEncoder::throwOnEror'
127.0.0.1 - - [01/Nov/2018 16:16:21] "GET /video_feed HTTP/1.1" 200
I also tried it this way and could not access the stream from two devices.
There is a project on GitHub, https://github.com/miguelgrinberg/flask-video-streaming, and a better explanation on the developer's site: https://blog.miguelgrinberg.com/post/flask-video-streaming-revisited#commentform
With this code I was able to access the stream from multiple devices, but it only allows a single camera to be displayed. I am working to make it possible to view two or more cameras, as if it were a DVR.
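The freeze is consistent with each request creating its own `VideoCapture(0)`: the second open of an already-busy device fails, `read()` returns no image, and `imencode` raises the "Empty JPEG image" error shown above. The usual fix, and roughly what the Grinberg project linked above does, is a single background capture thread publishing the latest frame into a shared holder that every `/video_feed` generator reads. A minimal sketch of such a holder (names here are illustrative, not from the original code), exercised with a fake frame so it runs without a camera:

```python
import threading

class FrameHub:
    """One producer publishes frames; any number of consumers read the latest."""
    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None
        self._seq = 0  # increments on every published frame

    def publish(self, frame):
        with self._cond:
            self._frame = frame
            self._seq += 1
            self._cond.notify_all()

    def latest(self, after_seq=0):
        # Block until a frame newer than after_seq exists, then return it.
        with self._cond:
            self._cond.wait_for(lambda: self._seq > after_seq)
            return self._frame, self._seq

hub = FrameHub()
# Stand-in for the single camera thread, which would loop over camera.read().
producer = threading.Thread(target=hub.publish, args=(b'jpeg-bytes-1',))
producer.start()
frame, seq = hub.latest()  # any number of clients can do this concurrently
producer.join()
```

Each client's generator would then loop on `frame, seq = hub.latest(seq)` and yield the multipart part, so every connected device sees the same camera without reopening it.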

Python OpenCV: Returning cvBridge Image from ROS

I recently just started with OpenCV, so apologies for any dumb questions.
My ultimate aim is to stream video from my ZED camera to a VR headset. I've not had any luck in Unity or Unreal, as the relevant plugins fail to work on my Linux machine. I need to be able to isolate the dual cameras of my ZED device, but right now only ROS allows me to access the individual topics of either cam.
So I used a ZED wrapper to publish image data on a ROS node and found code to interact with ROS messages from here.
The code works perfectly and I am able to display the stream of captured images on my monitor. But my issue is: how do I save these images into a buffer/queue?
I modified the example code to try to return the images converted by CvBridge, but I'm not having any luck getting the returned image to show on screen. I think it may be because image is initialised to None at first, so cv2.imshow() cannot display an empty picture. But how do I check whether the rest of the images are being returned correctly? Here is my code:
import cv2
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError
from Queue import Queue  # Python 2; on Python 3 use "from queue import Queue"

class ImageConverter(object):
    def __init__(self, topic):
        self.topic = topic
        self.bridge = CvBridge()
        self.image_queue = Queue(maxsize=100)
        self.image_sub = rospy.Subscriber(self.topic, Image, self.callback, queue_size=100)
        self.image = None

    def callback(self, data):
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
        else:
            self.image = cv_image
            self.image_queue.put(cv_image)

    def get_image(self):
        try:
            image = self.image_queue.get(block=False)
        except:
            image = self.image
        cv2.imshow("Image window", image)
        cv2.waitKey(3)

def subscribe(position):
    ic = ImageConverter(position)
    ic.get_image()
    rospy.init_node('image_converter', anonymous=True)
    try:
        rospy.spin()
    except KeyboardInterrupt:
        print("goodbye")
        cv2.destroyAllWindows()
I have had such a difficult time trying to figure out how to do this all so any help would be very much appreciated. Thanks!
Your callback function has the following code:
try:
    cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
except CvBridgeError as e:
    print(e)
else:
    self.image = cv_image
    self.image_queue.put(cv_image)
Note that try/except/else is valid Python here: the else block runs whenever the try block completes without raising, so self.image and the queue are populated on every successful conversion. The more likely problem is in subscribe(): get_image() is called exactly once, before rospy.spin() has delivered any messages, so both the queue and self.image are still empty and cv2.imshow() is handed None. You would need to call get_image() repeatedly (for example from a loop or a timer) after messages start arriving.
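To see the try/except/else control flow in isolation, here is a stub with the CvBridge call replaced by a plain function (`fail=True` stands in for a CvBridgeError being raised), so it runs anywhere:

```python
def convert(data, fail=False):
    """Mimic the callback: the else branch runs only when try raises nothing."""
    events = []
    try:
        if fail:
            raise ValueError("simulated CvBridgeError")
        cv_image = "converted:" + data
    except ValueError as e:
        events.append("error:" + str(e))
    else:
        # Runs on every successful conversion.
        events.append("stored:" + cv_image)
    return events

ok = convert("msg")         # else branch taken
bad = convert("msg", True)  # except branch taken
```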

Raspberry Pi Python OpenCV Photo Booth

I'm working on a photo booth running on a Raspberry Pi using Python and OpenCV. I found a great code example of how to capture images from a webcam here: http://joseph-soares.com/programacao/python/python-opencv-salvar-imagem-da-camera-no-disco/
The code as provided works perfectly on both the Pi and on my Windows PC. When I start adding code to it, I run into issues: I can no longer see what the webcam is seeing on the Pi, and on Windows it is hit or miss. Both are still capturing pictures, though. On Windows it will actually display the taken image in the frame. If I revert to the original code, it works just fine again.
I'm basically using a loop to give a countdown before the picture is taken, flashing an LED on the Arduino to represent the digit output that will be added in. I originally thought it was a memory issue on the Pi, but it not working on my desktop makes me think otherwise. Any help would be appreciated.
import time, cv
from datetime import datetime
import pyfirmata
import serial

# board = pyfirmata.Arduino('/dev/ttyACM0')
board = pyfirmata.Arduino('COM7')
# arduino.open()
# start an iterator thread so the serial buffer doesn't overflow
iter8 = pyfirmata.util.Iterator(board)
iter8.start()
greenLED = board.get_pin('d:13:o')
debug = 1

def captureImage():
    snapped = cv.QueryFrame(capture)
    cv.SaveImage(datetime.now().strftime('%Y%m%d_%Hh%Mm%Ss%f') + '.jpg', snapped)

if __name__ == '__main__':
    capture = cv.CaptureFromCAM(0)
    cv.NamedWindow('Webcam')
    try:
        while True:
            for n in range(0, 4):
                frame = cv.QueryFrame(capture)
                cv.ShowImage('Webcam', frame)
                cv.WaitKey(10)
                if debug: print "count down"
                for i in range(0, 5):
                    print i
                    greenLED.write(1)
                    time.sleep(1)
                    greenLED.write(0)
                    time.sleep(0.2)
                greenLED.write(1)
                time.sleep(0.2)
                print "Say Cheese"
                captureImage()
                greenLED.write(0)
                if debug: print "Waiting for 5 seconds"
                time.sleep(5)
            break
        capture = None
        cv.DestroyAllWindows()
        board.exit()
    except KeyboardInterrupt:
        cv.DestroyAllWindows()
        board.exit()
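A plausible explanation for the frozen preview, offered as a guess: during the countdown the code sleeps for over six seconds without ever calling cv.WaitKey, and HighGUI only repaints its window and processes events inside WaitKey, so the preview looks dead and the camera's internal buffer goes stale. One way out is to keep pumping frames while waiting, sketched here with the camera/GUI work injected so it runs without hardware; `pump` is a hypothetical stand-in for the QueryFrame + ShowImage + WaitKey (and LED toggling) work:

```python
import time

def countdown_while_pumping(seconds, pump, tick=0.05):
    """Wait `seconds`, but keep calling pump() so a GUI event loop
    (e.g. cv.WaitKey) and the camera buffer stay serviced throughout."""
    deadline = time.monotonic() + seconds
    pumps = 0
    while time.monotonic() < deadline:
        pump()  # grab a frame, show it, process events, update the LED
        pumps += 1
        time.sleep(tick)
    return pumps

calls = countdown_while_pumping(0.2, lambda: None)
```

In the booth, the five one-second LED flashes would become five 1-second calls to a function like this, with the frame grab and WaitKey inside `pump`, so the window keeps updating right up to "Say Cheese".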
