I am using the following Google API: https://developers.google.com/nest/device-access/traits/device/camera-live-stream
I have successfully been able to see a list of my devices and their relevant information, and I am also able to make a successful GenerateRtspStream request. I receive the following response, as documented in the API:
{
  "results" : {
    "streamUrls" : {
      "rtsp_url" : "rtsps://someurl.com/CjY5Y3VKaTZwR3o4Y19YbTVfMF...?auth=g.0.streamingToken"
    },
    "streamExtensionToken" : "CjY5Y3VKaTZwR3o4Y19YbTVfMF...",
    "streamToken" : "g.0.streamingToken",
    "expiresAt" : "2018-01-04T18:30:00.000Z"
  }
}
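For reference, the request that produces this response looks roughly like the following (a sketch; PROJECT_ID, DEVICE_ID, and ACCESS_TOKEN stand in for my real values):

import requests

# executeCommand call for the CameraLiveStream.GenerateRtspStream command
url = ("https://smartdevicemanagement.googleapis.com/v1/"
       "enterprises/PROJECT_ID/devices/DEVICE_ID:executeCommand")
body = {
    "command": "sdm.devices.commands.CameraLiveStream.GenerateRtspStream",
    "params": {},
}
resp = requests.post(url, json=body,
                     headers={"Authorization": "Bearer ACCESS_TOKEN"})
print(resp.json()["results"]["streamUrls"]["rtsp_url"])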
The problem, however, is that I am unable to access the video feed. I have tried players like VLC and Pot Player to view the live feed, but they say the URL does not exist. I have also tried using OpenCV in Python to access the live feed, and it does not work either (I have tested OpenCV on local files and they work just fine).
Am I doing something wrong with rtsps URLs? How do I access the live feed, either in Python or in a third-party application like VLC?
Here are some examples of what I have already tried:
import cv2 as cv

x = cv.VideoCapture(STREAM_URL)
# ret is False here --- on local files it returns True and I am able to view the media
ret, img = x.read()
My goal is to do processing on this video feed in Python, so ideally my solution would use OpenCV or something along those lines. I was mainly using VLC and the other players to debug the issue with the URL first.
UPDATE
I have tested using the following public link:
rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov
MYURL = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"
MYURL = STREAM_URL
import cv2 as cv
x = cv.VideoCapture(MYURL)
while(True):
ret, img = x.read()
if not ret:
print('URL not working')
break
cv.imshow('frame', img)
cv.waitKey(1)
And it works perfectly with OpenCV as well as Pot Player. So maybe the issue is with the Google Device Access API? Perhaps the URL they provide is not correct? Or am I missing something here? Maybe it has to do with rtsps URLs vs rtsp? How can I fix that?
Both ffmpeg and ffplay worked fine for me, no rebuild necessary. On macOS I just did:
brew install ffmpeg
ffplay -rtsp_transport tcp "rtsps://..."
Fill in the huge stream URL. Note the quotes: without them, zsh tries to glob-expand the ? in the URL's query string. Alternatively, to save the stream to a file:
ffmpeg -y -loglevel fatal -rtsp_transport tcp -i "rtsps://..." -acodec copy -vcodec copy /path/to/out.mp4
You could use different ffmpeg options to transform the stream into something other than rtsps for consumption by some other application.
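One way to get frames into OpenCV (the original goal above) without relying on OpenCV's own RTSP handling is to let ffmpeg do the decoding and pipe raw frames into Python. Here is a sketch, assuming the rtsps URL from the GenerateRtspStream response and the 1600x1200 frame size that ffplay reports below:

import subprocess

import cv2 as cv
import numpy as np

STREAM_URL = "rtsps://..."   # the full rtsp_url from GenerateRtspStream
WIDTH, HEIGHT = 1600, 1200   # must match the actual stream resolution
FRAME_BYTES = WIDTH * HEIGHT * 3

# ffmpeg decodes the stream and writes raw bgr24 frames to stdout
proc = subprocess.Popen(
    ['ffmpeg', '-rtsp_transport', 'tcp', '-i', STREAM_URL,
     '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-an', '-'],
    stdout=subprocess.PIPE)

while True:
    raw = proc.stdout.read(FRAME_BYTES)   # read one decoded frame
    if len(raw) < FRAME_BYTES:
        break                             # stream ended or ffmpeg exited
    frame = np.frombuffer(raw, np.uint8).reshape((HEIGHT, WIDTH, 3))
    cv.imshow('frame', frame)
    if cv.waitKey(1) == 27:               # Esc to quit
        break

proc.terminate()
cv.destroyAllWindows()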
Interestingly, despite the API telling me this:
"maxVideoResolution": {
"width": 640,
"height": 480
},
this is the info from ffplay:
Metadata:
  title : SDM
Duration: N/A, start: -0.110000, bitrate: N/A
  Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp
  Stream #0:1: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1600x1200 [SAR 1:1 DAR 4:3], 15 fps, 15 tbr, 90k tbn, 30 tbc
Indicating 1600x1200; I'm not sure why maxVideoResolution isn't actually the maximum resolution.
I'd suggest trying ffmpeg; however, you may need to build it from source.
If you're having trouble with ffmpeg, you can modify the ffmpeg source to increase the control_uri buffer (in libavformat/rtsp.h) from 1024 to 2048 bytes and recompile. Then ffmpeg should be able to play the RTSPS streams.
Related
I can open a stream in VLC, but in OpenCV I cannot capture frames (Python 2.7, OpenCV 3.4.3 binary distribution x86, Windows 10). I've been following this guide: https://medium.com/@tomgrek/hackers-guide-to-the-aws-deeplens-1b8281bc6e24, but I cannot seem to read from random streams online (not sure whether I should even be able to; I saw the question "opencv videocapture can't open MJPEG stream" about compiling with ffmpeg, but I just downloaded the binary available on SourceForge).
I am using an AWS Deeplens, updated to the latest version.
Installed ffmpeg, latest version.
Then, in /etc/ffserver.conf I added:
<Stream camera.h264>
  File "/opt/awscam/out/ch1_out.h264"
  VideoFrameRate 6
  VideoSize 320x240
  NoAudio
</Stream>

<Stream camera.mjpeg>
  File "/opt/awscam/out/ch2_out.mjpeg"
  VideoFrameRate 3
  VideoSize 640x480
  Format mjpeg
  NoAudio
</Stream>
I start ffserver with ffserver -f /etc/ffserver.conf
On my Windows machine, I use WSL and open an SSH tunnel into the AWS DeepLens: ssh -L 8090:localhost:8090 aws_cam@192.168.0.10
At this point, in my Windows machine I can open VLC and if I point to http://localhost:8090/camera.mjpeg I can see the stream from the camera.
But if I run the following code:
cam = cv2.VideoCapture("http://localhost:8090/camera.mjpeg")
success, frame = cam.read()
opened = cam.isOpened()
success, frame, opened
I get:
False, None, False
If I browse to http://localhost:8090/stat.html, I see:
Available Streams
Path          Served Conns  bytes   Format      Bit rate kbits/s  Video kbits/s  Codec       Audio kbits/s  Codec  Feed
test1.mpg     0             0       mpeg        96                64             mpeg1video  32             mp2    feed1.ffm
test.asf      0             0       asf_stream  320               256            msmpeg4     64             wmav2  feed1.ffm
stat.html     17            42150   -           -                 -              -
index.html    0             0       -           -                 -              -
camera.h264   3             6805k   h264        0                 0              libx264     0                     /opt/awscam/out/ch1_out.h264
camera.mjpeg  12            41073k  mjpeg       0                 0              mjpeg       0                     /opt/awscam/out/ch2_out.mjpeg
And every time I call VideoCapture(), I see the Served count for the camera.mjpeg stream increase by 2 or 3 and the byte count grow by a few megabytes, but I don't see anything in OpenCV. I have not tried any other video device on this Windows 10 machine, but I can read image files with no problem. I also tried a random stream online that also opens in VLC but not in OpenCV: http://136.176.70.200/mjpg/video.mjpg
Any ideas?
Looks like I need to compile OpenCV myself with ffmpeg support.
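Before recompiling, it can help to confirm that the prebuilt binary really lacks ffmpeg support. A quick check (a sketch; cv2.getBuildInformation() returns OpenCV's build report, and a build without ffmpeg shows "FFMPEG: NO" in its Video I/O section):

import cv2

# Print only the lines of the build report that mention FFMPEG
for line in cv2.getBuildInformation().splitlines():
    if 'FFMPEG' in line:
        print(line.strip())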
What is the most convenient way to capture a video streamed in the browser, at a frame rate of around 15? I would like to avoid capturing the raw screen, because then I would have to play with x, y, width, and height; I would like something less manual.
Edit: The URL is unavailable; I can only access the player that shows the stream in the browser.
If you want to simply capture a video from a given URL and save it to disk, you can do this:
import urllib2  # Python 2; see the Python 3 version below

link_to_movie = 'https://somemovie.com/themovie.mp4'
file_name = 'themovie.mp4'

response = urllib2.urlopen(link_to_movie)
with open(file_name, 'wb') as f:
    f.write(response.read())
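On Python 3, the same idea looks like this (a sketch; urllib2 was split into urllib.request, and shutil.copyfileobj streams to disk instead of buffering the whole file in memory):

import shutil
import urllib.request

link_to_movie = 'https://somemovie.com/themovie.mp4'
file_name = 'themovie.mp4'

# Stream the response body straight into the output file
with urllib.request.urlopen(link_to_movie) as response, open(file_name, 'wb') as f:
    shutil.copyfileobj(response, f)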
Then if you want to set the frame rate for that movie you just downloaded, use FFMPEG:
ffmpeg -y -r 24 -i seeing_noaudio.mp4 seeing.mp4
FFMPEG answer from here: https://stackoverflow.com/a/50673808/596841
On macOS I can record from my webcam and write a video file with the following simple script:
import cv2

camera = cv2.VideoCapture(0)

# Define the codec and create VideoWriter object to save the video
fourcc = cv2.VideoWriter_fourcc(*'XVID')
video_writer = cv2.VideoWriter('output.avi', fourcc, 25.0, (640, 480))

while True:
    try:
        (grabbed, frame) = camera.read()        # grab the current frame
        frame = cv2.resize(frame, (640, 480))   # resize the frame
        video_writer.write(frame)               # write the frame to the file system
    except KeyboardInterrupt:
        camera.release()
        break
The resulting AVI file is quite big, though. I want a smaller file, preferably an mp4. So I changed the filename to output.mp4 and the fourcc codec to H264. That writes a video file that works, but gives me the following error:
$ python write_video_file.py
OpenCV: FFMPEG: tag 0x34363248/'H264' is not supported with codec id 28 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x00000021/'!???'
Since I thought I was missing the H264 codec in ffmpeg, I decided to uninstall ffmpeg and OpenCV and reinstall them with H264 support. For this I used the following commands:
# First ffmpeg
brew install ffmpeg --with-fdk-aac --with-libvidstab --with-openh264 \
--with-openjpeg --with-openssl --with-tools --with-webp --with-x265 --with-zeromq
# then opencv3
brew tap homebrew/science
brew install opencv3 --with-contrib --with-ffmpeg --with-tbb
After this I ran the script again, using the following combinations:
output.mp4 with H264
output.mp4 with X264
Unfortunately I still get the OpenCV warnings/errors. The file is readable, but it still annoys me that I get these errors. Does anybody have any idea how I can make OpenCV write an mp4 video file with the H264 codec?
All tips are welcome!
I spent ages trying to find a list of video codecs on macOS, figuring out which codecs work with which containers, and then checking whether QuickTime can actually read the resulting files.
I can summarise my findings as follows:
.mov container and fourcc('m','p','4','v') work and QuickTime can read it (see the sketch after this list)
.mov container and fourcc('a','v','c','1') work and QuickTime can read it
.avi container and fourcc('F','M','P','4') works but QuickTime cannot read it without conversion
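For instance, the first working combination above translates to Python roughly like this (a sketch; it assumes a 640x480 webcam source at 25 fps):

import cv2

# .mov container with the 'mp4v' fourcc - the first combination above
fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
writer = cv2.VideoWriter('output.mov', fourcc, 25.0, (640, 480))

camera = cv2.VideoCapture(0)
while True:
    grabbed, frame = camera.read()
    if not grabbed:
        break
    frame = cv2.resize(frame, (640, 480))
    writer.write(frame)
    cv2.imshow('preview', frame)  # a window is needed for waitKey to see keys
    if cv2.waitKey(1) == 27:      # press Esc to stop recording
        break

camera.release()
writer.release()
cv2.destroyAllWindows()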
I did manage to write h264 video in an mp4 container, as you wanted, just not using OpenCV's VideoWriter module. Instead I changed the code (mine happens to be C++) to just output raw bgr24 format data - which is how OpenCV likes to store pixels anyway:
So the output of a frame of video stored in a Mat called Frame becomes:
cout.write(reinterpret_cast<const char *>(Frame.data),width*height*3);
and then you pipe the output of your program into ffmpeg like this:
./yourProgram | ffmpeg -y -f rawvideo -pixel_format bgr24 -video_size 1024x768 -i - -c:v h264 -pix_fmt yuv420p video.mp4
Yes, I know I have made some assumptions:
that the data are CV_8UC3,
that the sizes match in OpenCV and ffmpeg, and
that there is no padding or mad stride in the OpenCV data,
but the basic technique works and you can adapt it for other sizes and situations.
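The same pipe technique is easy to reproduce in Python (a sketch; it feeds raw bgr24 frames from OpenCV into ffmpeg's stdin, and assumes a 640x480 source at 25 fps to match -video_size and -framerate):

import subprocess

import cv2

cap = cv2.VideoCapture(0)

# ffmpeg reads raw bgr24 frames from stdin and encodes them into an mp4
proc = subprocess.Popen(
    ['ffmpeg', '-y', '-f', 'rawvideo', '-pixel_format', 'bgr24',
     '-video_size', '640x480', '-framerate', '25', '-i', '-',
     '-c:v', 'h264', '-pix_fmt', 'yuv420p', 'video.mp4'],
    stdin=subprocess.PIPE)

for _ in range(250):  # record roughly 10 seconds at 25 fps
    grabbed, frame = cap.read()
    if not grabbed:
        break
    frame = cv2.resize(frame, (640, 480))  # keep size in sync with -video_size
    proc.stdin.write(frame.tobytes())      # one raw bgr24 frame per write

proc.stdin.close()  # EOF lets ffmpeg finalize the mp4
proc.wait()
cap.release()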
I am looking for a way to integrate a webcam into my Python program.
I am running on a Raspberry Pi Model A overclocked to 900 MHz, so the solution needs to be ARM-compatible and (hopefully) lightweight.
Most posts I have seen recommend using the OpenCV module to read the webcam, but I am unable to get anything other than a black frame from mine. I assume that OpenCV is not compatible with my webcam; however, every other webcam application available for Linux can detect and display its feed.
I am wondering if there are any other lightweight or simple methods for capturing from my webcam in Python. Perhaps there is a way to interface directly with the video0 device that comes up under /dev/ for my webcam? I am open to any suggestions, because what I am doing now is not working.
Thanks
(As requested) here is the output of v4l2-ctl --all:
Driver Info (not using libv4l2):
    Driver name   : uvcvideo
    Card type     : UVC Camera (046d:081b)
    Bus info      : usb-bcm2708_usb-1.2
    Driver version: 3.2.27
    Capabilities  : 0x04000001
        Video Capture
        Streaming
Format Video Capture:
    Width/Height  : 640/480
    Pixel Format  : 'YUYV'
    Field         : None
    Bytes per Line: 1280
    Size Image    : 614400
    Colorspace    : SRGB
Crop Capability Video Capture:
    Bounds      : Left 0, Top 0, Width 640, Height 480
    Default     : Left 0, Top 0, Width 640, Height 480
    Pixel Aspect: 1/1
Video input : 0 (Camera 1: ok)
Streaming Parameters Video Capture:
    Capabilities     : timeperframe
    Frames per second: 30.000 (30/1)
    Read buffers     : 0
And this is the code snippet I'm using:
import cv

cv.NamedWindow("camera", 1)
capture = cv.CaptureFromCAM(0)

while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("camera", img)
    if cv.WaitKey(10) == 27:  # Esc to quit
        break

cv.DestroyWindow("camera")
Thanks for your help!
You could use gstreamer-0.10.
Get it to work on the command line first, e.g.:
gst-launch -v v4l2src ! decodebin ! ffmpegcolorspace ! pngenc ! filesink location=out.png
Then use the parse_launch function as a shortcut to a working pipeline in your Python code:
import sys
import gst

pipeline = gst.parse_launch("""
    v4l2src ! decodebin ! ffmpegcolorspace ! pngenc ! filesink location="%s"
""" % sys.argv[-1])
pipeline.set_state(gst.STATE_PLAYING)
I have tried several methods to capture single frames from a webcam:
uvccapture is one option and here's a command:
uvccapture -d /dev/video0 -o outfile.jpg
streamer is another and the command looks about like this:
streamer -c /dev/video0 -o outfile.jpeg
Yes, I realize this isn't the most high-performance approach, since you have to use Python's commands module (or subprocess) to execute the command and get the results, and then open the resulting file in OpenCV to do processing.
BUT it does work. I've been using it in production on several automation projects quite successfully. The lag I experience comes entirely from my image-processing software; the raw images can be captured and displayed VERY quickly.
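That workflow looks roughly like this in Python (a sketch; it shells out to streamer with subprocess rather than the old commands module, then loads the snapshot with cv2):

import subprocess

import cv2

# Grab a single frame from the webcam into a JPEG file
subprocess.check_call(['streamer', '-c', '/dev/video0', '-o', 'outfile.jpeg'])

# Load the snapshot into OpenCV for processing
img = cv2.imread('outfile.jpeg')
if img is not None:
    print(img.shape)  # e.g. (480, 640, 3)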
I can't seem to capture frames from a file using OpenCV. I've compiled from source on Ubuntu with all the necessary prerequisites according to: http://opencv.willowgarage.com/wiki/InstallGuide%20%3A%20Debian
#!/usr/bin/env python
import cv
import sys

files = sys.argv[1:]
for f in files:
    capture = cv.CaptureFromFile(f)
    print capture
    print cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH)
    print cv.GetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT)
    for i in xrange(10000):
        frame = cv.QueryFrame(capture)
        if frame:
            print frame
Output:
ubuntu@local:~/opencv$ ./test.py bbb.avi
<Capture 0xa37b130>
0.0
0.0
The frames are always None...
I've transcoded a video file to i420 format using:
mencoder $1 -nosound -ovc raw -vf format=i420 -o $2
Any ideas?
You don't have the gstreamer-ffmpeg, gstreamer-python, or gstreamer-python-devel packages installed. I installed all three of them, and the exact same problem was resolved.
I'm using OpenCV 2.2.0, compiled on Ubuntu from source. I can confirm that the source code you provided works as expected. So the problem is somewhere else.
I couldn't reproduce your problem using mencoder (installing it is a bit of a problem on my machine) so I used ffmpeg to wrap a raw video in the AVI container:
ffmpeg -s cif -i ~/local/sample-video/foreman.yuv -vcodec copy foreman.avi
(foreman.yuv is a standard CIF image sequence you can find on the net if you look around).
Running the AVI from ffmpeg through your source gives this:
misha@misha-desktop:~/Desktop/stackoverflow$ python ocv_video.py foreman.avi
<Capture 0xa71120>
352.0
288.0
<iplimage(nChannels=3 width=352 height=288 widthStep=1056 )>
<iplimage(nChannels=3 width=352 height=288 widthStep=1056 )>
...
So things work as expected. What you should check:
Do you get any errors on standard output/standard error? OpenCV uses ffmpeg libraries to read video files, so be on the lookout for informative messages. Here's what happens if you try to play a RAW video file without a container (sounds similar to your problem):
misha@misha-desktop:~/Desktop/stackoverflow$ python ocv_video.py foreman.yuv
[IMGUTILS @ 0x7fff37c8d040] Picture size 0x0 is invalid
[IMGUTILS @ 0x7fff37c8cf20] Picture size 0x0 is invalid
[rawvideo @ 0x19e65c0] Could not find codec parameters (Video: rawvideo, yuv420p)
[rawvideo @ 0x19e65c0] Estimating duration from bitrate, this may be inaccurate
GStreamer Plugin: Embedded video playback halted; module decodebin20 reported: Your GStreamer installation is missing a plug-in.
<Capture 0x19e3130>
0.0
0.0
Make sure your AVI file actually contains the information required to play back the video; at a minimum, that means the frame dimensions. RAW video typically doesn't contain any information besides the actual pixel data, so the frame dimensions and FPS must be supplied from outside. You can guess the FPS wrong and still get a viewable video, but if you get the dimensions wrong, the video will be unviewable.
Make sure the AVI file you're trying to open is actually playable. Try ffplay file.avi -- if that fails, then the problem is likely to be with the file. Try using ffmpeg to transcode instead of mencoder.
Make sure you can play other videos, using the same method as above. If you can't, then it's likely that your ffmpeg install is broken.