The end goal is a program running on a Raspberry Pi that can stream video data to a remote client. To do this I believe I need to convert the frame data to raw bytes so it can be sent over a socket. Before deploying this in the real world, I'm simply checking that I can do the transformation to bytes and back. I do get output, and it is reading data from the camera in real time, but the frame is displayed as a 1-pixel-wide, left-aligned vertical line. (Using the default full-screen button on the OpenCV window widens it to about 5 pixels.) Also, just to clarify: the tostring() function apparently transforms the given data into raw bytes rather than a string? When I checked, Python reported the new variable's type as bytes.
My previous attempts focused on encoding and decoding the raw image data directly, but those were met with an error. I think I'm on the right track, but this is a bump in the road.
import cv2
import numpy as np

vid = cv2.VideoCapture(0)
while True:
    empty, frame = vid.read()
    frameString = frame.tostring()
    # Intermediary socket stuffs.
    newFrame = np.frombuffer(frameString)
    cv2.imshow("s", newFrame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
vid.release()
Since this all goes through NumPy, I would expect identical results on either end of the transformation, but something goes wrong and I'm not even sure where to start looking. (Standard and full-screen screenshots: https://imgur.com/a/BIPxr50)
You can use cv2.imencode() to encode the frame and get the compressed bytes. From there you can send them through your socket. On the receiving end, you can decode with np.frombuffer() (older code used the now-deprecated np.fromstring()) and cv2.imdecode().
import cv2
import numpy as np

vid = cv2.VideoCapture(0)
while True:
    if vid.isOpened():
        empty, frame = vid.read()
        # Encode the frame as JPEG and grab the raw bytes
        # (tobytes() is the current spelling of the deprecated tostring())
        data = cv2.imencode('.jpg', frame)[1].tobytes()
        # Intermediary socket stuffs
        nparr = np.frombuffer(data, np.uint8)
        newFrame = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
        cv2.imshow("s", newFrame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
vid.release()
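As a side note, the raw-bytes route from the question can also be made to display correctly. np.frombuffer() defaults to dtype=float64 and returns a flat 1-D array, which is exactly why the image collapses into a thin vertical strip. A minimal sketch of the fix, assuming the frame shape is known on the receiving end (or transmitted alongside the bytes):

import numpy as np

# Sender side: remember the shape before serializing
shape = frame.shape                 # e.g. (480, 640, 3)
frameString = frame.tobytes()

# Receiver side: restore both the dtype and the shape explicitly
newFrame = np.frombuffer(frameString, dtype=np.uint8).reshape(shape)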
I am trying to make this OpenCV VideoCapture Python script stream multiple videos from my laptop camera and USB cameras, and I succeeded (with help from YouTube). However, every time I add a camera I have to edit the code: another VideoCapture line, another frame, and another cv2.imshow. I want to rewrite the capture code so that it streams as many cameras as are detected, using a loop, without adding a line for each new camera. I'm obviously new here, so accept my apologies if the solution is too simple.
This is the code that lets me stream multiple cameras, but it needs an extra line for each camera.
import urllib.request
import time
import numpy as np
import cv2

# Defining URL for camera
video_capture_0 = cv2.VideoCapture(0)
video_capture_1 = cv2.VideoCapture(1)

while True:
    # Capture frame-by-frame
    ret0, frame0 = video_capture_0.read()
    ret1, frame1 = video_capture_1.read()
    if ret0:
        # Display the resulting frame
        cv2.imshow('Cam 0', frame0)
    if ret1:
        # Display the resulting frame
        cv2.imshow('Cam 1', frame1)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture_0.release()
video_capture_1.release()
cv2.destroyAllWindows()
I tried making a list, camlist = [i for i in range(100)], and adding it to a for loop that keeps passing the values to VideoCapture, but that turned into a mess, so I deleted the code; it didn't seem effective either.
If you want to work with many cameras, first keep them in a list; then you can use a for loop to read all the frames. Keep the frames in a list as well, so you can use another for loop to display them. Finally, use a for loop to release the cameras.
import cv2

# video_captures = [cv2.VideoCapture(x) for x in range(2)]
video_captures = [
    cv2.VideoCapture(0),
    #cv2.VideoCapture(1),
    cv2.VideoCapture('https://imageserver.webcamera.pl/rec/krupowki-srodek/latest.mp4'),
    cv2.VideoCapture('https://imageserver.webcamera.pl/rec/krakow4/latest.mp4'),
    cv2.VideoCapture('https://imageserver.webcamera.pl/rec/warszawa/latest.mp4'),
]

while True:
    # Read one frame from every camera
    results = []
    for cap in video_captures:
        ret, frame = cap.read()
        results.append([ret, frame])

    # Display every frame that was read successfully
    for number, (ret, frame) in enumerate(results):
        if ret:
            cv2.imshow(f'Cam {number}', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release all cameras
for cap in video_captures:
    cap.release()

cv2.destroyAllWindows()
And now, even if you add a new camera to the list, you don't have to change the rest of the code.
For many cameras, though, it is better to run every camera in a separate thread, while still keeping everything in lists (sketched below).
A few days ago there was a related question: How to display multiple videos with threading using tkinter in Python?
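A rough sketch of that threaded idea, using a hypothetical CameraThread helper and assuming plain cv2 plus the standard threading module (display stays in the main thread, since cv2.imshow should not be called from worker threads):

import threading
import cv2

class CameraThread(threading.Thread):
    # Continuously reads frames from one source and keeps only the latest one.
    def __init__(self, source):
        super().__init__(daemon=True)
        self.cap = cv2.VideoCapture(source)
        self.frame = None
        self.running = True

    def run(self):
        while self.running:
            ret, frame = self.cap.read()
            if ret:
                self.frame = frame

    def stop(self):
        self.running = False

cameras = [CameraThread(0)]  # add more indices or URLs here
for cam in cameras:
    cam.start()

while True:
    for number, cam in enumerate(cameras):
        if cam.frame is not None:
            cv2.imshow(f'Cam {number}', cam.frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

for cam in cameras:
    cam.stop()
    cam.join()
    cam.cap.release()
cv2.destroyAllWindows()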
I'm simply trying to read an IP camera's live stream through OpenCV's basic example code, as follows:
import numpy as np
import cv2

src = 'rtsp://id:pass#xx.xx.xx.xx'
cap = cv2.VideoCapture(src)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
The problem is that sometimes it works like a charm, showing the running live video, but other times it creates a lot of blank windows which keep popping up until the job is killed, like in the image below:
Why does this happen, and how can we avoid it?
Maybe you should cover the case where the video capture fails to establish a healthy stream.
Note that it is possible to receive no frame in some cases even though the video capture opens. This can happen for various reasons, such as congested network traffic, insufficient computational resources, or the power-saving mode of some IP cameras.
Therefore, I would suggest checking the frame size and making sure your VideoCapture object is receiving frames at the right shape. (You can debug and inspect the size of a visible frame to learn the camera's expected resolution.)
A change in your loop like the following might help:
min_expected_frame_size = [some integer]

while cap.isOpened():
    ret, frame = cap.read()
    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    if ret and (width * height) >= min_expected_frame_size:
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
I'm trying to write a program that cuts my gameplay clips for me. In order to achieve this, I need to detect a red "X" (which indicates a kill) in the middle of the frame, a 12px × 12px region, so I need full quality. While debugging, I realized that the frames read by OpenCV seem to be lower quality than what I see in a video player.
Here is an example:
This is the "X" I've cut from a video player
This is the "X" that OpenCV read in
Both are from the same frame.
Here is the code I was debugging with:
import cv2
import numpy as np

vid = cv2.VideoCapture(path)
success, frame = vid.read()
while success:
    cv2.imshow("Frame", frame)
    # Pause video
    if cv2.waitKey(1) & 0xFF == ord('s'):
        cv2.waitKey(0)
    # Quit video
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    success, frame = vid.read()
cv2.destroyAllWindows()
I've also checked the width and the height and they seem to be the same as the original video. I'm using OpenCV version 4.2.0. What could cause this issue?
Any help is appreciated.
If you are trying to read the frame from a file and want to change its resolution, you'd probably want to use the resize method as described here. This would need to be done inside the loop, right after you read in the frame. In C++ the call looks like:
resize(src, dst, Size(width, height), 0, 0, INTER_CUBIC);
In Python, for example:
b = cv2.resize(frame, (1280, 720), fx=0, fy=0, interpolation=cv2.INTER_CUBIC)
I hope this helps.
I figured it out. The only thing I had to do was convert the picture to JPEG (for some reason). I used this function to do the conversion in memory:
def convertToJpeg(img):
    result, encoded = cv2.imencode('.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, 100])
    return cv2.imdecode(encoded, 1)
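Presumably this gets called on each frame inside the read loop, e.g. frame = convertToJpeg(frame) right after vid.read() and before any colour checks.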
Thanks for all the help!
We're doing a project in school where we need to do basic image processing. Our goal is to use every video frame from the Raspberry Pi camera and do real-time image processing.
We've tried to include raspistill in our Python program, but so far nothing has worked. The goal of our project is to design an RC car which follows a blue/red/whatever coloured line with the help of image processing.
We thought it would be a good idea to write a Python program which does all the necessary image processing, but we currently struggle with getting the captured images into the Python program. Is there a way to do this with picamera, or should we try a different way?
For anyone curious, this is how our program currently looks:
while True:
    #camera = picamera.PiCamera()
    #camera.capture('image1.jpg')
    img = cv2.imread('image1.jpg')
    width = img.shape[1]
    height = img.shape[0]
    height = height - 1
    for x in range(0, width):
        if x >= 0 and x < (width // 2):
            blue = img.item(height, x, 0)
            green = img.item(height, x, 1)
            red = img.item(height, x, 2)
            if red > green and red > blue:
OpenCV already contains functions to process live camera data.
This OpenCV documentation provides a simple example:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Of course, you do not want to show the image, but all your processing can be done there.
Remember to sleep a few hundred milliseconds so the Pi does not overheat as much.
Edit:
"how exactly would I go about it though. I used "img = cv2.imread('image1.jpg')" all the time. What do I need to use instead to get the "img" variable right here? What do I use? And what is ret, for? :)"
ret indicates whether the read was successful. Exit the program if it isn't, for example with the check sketched below.
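A minimal sketch of that check, reusing the cap object from the example above:

ret, frame = cap.read()
if not ret:
    break  # no frame arrived, so leave the loop instead of processing None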
The frame that read() returns is nothing other than your img = cv2.imread('image1.jpg'), so your detection code should work exactly the same.
The only difference is that the image does not need to be saved and reopened. Also, for debugging purposes you can save the captured frame, like:
import cv2, time

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # imwrite needs a file extension to choose an encoder
    cv2.imwrite(time.strftime("%Y%m%d-%H%M%S") + ".jpg", frame)
cap.release()
You can use picamera to acquire images.
To make it "real time", you can acquire data every X milliseconds. You need to choose X depending on the power of your hardware (and the complexity of the OpenCV algorithm).
Here's an example (from http://picamera.readthedocs.io/en/release-1.10/api_camera.html#picamera.camera.PiCamera.capture_continuous) of how to capture 60 images at one-second intervals using picamera:
import time
import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    try:
        for i, filename in enumerate(camera.capture_continuous('image{counter:02d}.jpg')):
            print(filename)
            time.sleep(1)
            if i == 59:
                break
    finally:
        camera.stop_preview()
I have the following code:
# Import packages
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

X_RESOLUTION = 640
Y_RESOLUTION = 480

# Initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
camera.resolution = (X_RESOLUTION, Y_RESOLUTION)
camera.framerate = 10
rawCapture = PiRGBArray(camera, size=(X_RESOLUTION, Y_RESOLUTION))

# Allow camera to warm up
time.sleep(0.1)

# Capture frames from the camera
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    # Grab the raw NumPy array representing the image
    image = frame.array

    # Show the frame
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF

    # Clear the stream so it is ready to receive the next frame
    rawCapture.truncate(0)

    # If the 'q' key was pressed, break from the loop
    if key == ord('q'):
        break
It is all fine and dandy: it captures video, displays it on my screen, and exits when I press 'q'. However, what if I wanted to manipulate the frames somehow? Say, for example, I wanted to set every pixel's R value in each frame to 255 to make the image red. How would I do that?
My end goal is to write software that detects movement against a static background. I understand the theory and the data manipulation needed to make this happen; I just can't figure out how to access each frame's pixel data and operate on it. I attempted to change some values in image, but it says the array is immutable and can only be read from, not written to.
Thanks for your time.
I have accessed each pixel's value (R, G, and B separately) and changed it in the image. You can do the same on a video by extracting each of its frames. It is implemented in C++ with OpenCV; go through this link https://stackoverflow.com/a/32664968/3853072 and you will get an idea.
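In Python/NumPy terms, a minimal sketch of the red-channel change the question asks about could look like this (assuming the capture loop from the question; copying the array is what makes the read-only frame writable):

# Inside the capture_continuous loop, in place of image = frame.array:
image = frame.array.copy()  # copy() yields a writable array
image[:, :, 2] = 255        # frames are BGR, so channel index 2 is red
cv2.imshow("Frame", image)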