What is the most convenient way to capture a video streamed in the browser at a frame rate of around 15? I would like to avoid capturing the raw screen, because then I would have to play with x, y, width and height. I would like something less manual.
Edit: The URL is unavailable; I can only access the player that shows the stream in the browser.
If you want to simply capture a video from a given URL and save it to disk, you can do this:
import urllib2  # Python 2 only; in Python 3 use urllib.request

link_to_movie = 'https://somemovie.com/themovie.mp4'
file_name = 'themovie.mp4'

response = urllib2.urlopen(link_to_movie)
with open(file_name, 'wb') as f:
    f.write(response.read())
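urllib2 only exists on Python 2; on Python 3 the same idea looks like this (a minimal sketch using the same hypothetical URL; reading in chunks keeps memory use bounded for large movies):

```python
# Python 3 version of the snippet above; urllib2 was merged into urllib.request.
from urllib.request import urlopen

def download(link_to_movie, file_name, chunk_size=64 * 1024):
    # Stream the response to disk chunk by chunk instead of reading it
    # all into memory at once.
    with urlopen(link_to_movie) as response, open(file_name, 'wb') as f:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            f.write(chunk)

# download('https://somemovie.com/themovie.mp4', 'themovie.mp4')
```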
Then, if you want to set the frame rate for the movie you just downloaded, use FFmpeg:
ffmpeg -y -r 24 -i seeing_noaudio.mp4 seeing.mp4
FFMPEG answer from here: https://stackoverflow.com/a/50673808/596841
from moviepy.editor import *
clip = VideoFileClip("https://filelink/file.mp4")
clip.save_frame("frame.png", t = 3)
I am able to load the video using MoviePy, but it loads the complete video and then saves the frame. Is it possible to not load the complete video but only the first four seconds, and then save the frame at second 3?
Unless I missed something, it's not possible using MoviePy.
You may use ffmpeg-python instead.
Here is a code sample using ffmpeg-python:
import ffmpeg

stream_url = "https://file-examples-com.github.io/uploads/2017/04/file_example_MP4_480_1_5MG.mp4"

# Input seeking example: https://trac.ffmpeg.org/wiki/Seeking
(
    ffmpeg
    .input(stream_url, ss='00:00:03')  # Seek to the third second
    .output("frame.png", pix_fmt='rgb24', frames='1')  # Select PNG codec in RGB color space and one frame.
    .overwrite_output()
    .run()
)
Notes:
The solution may not work for all mp4 URLs, because the mp4 format is not very "web friendly" - I think the moov atom must be located at the beginning of the file.
You may need to manually install the FFmpeg command-line tool (but it is supposed to be installed together with MoviePy).
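The moov-atom note can actually be checked from Python alone: an MP4 file is a flat sequence of boxes, each starting with a 4-byte big-endian size and a 4-byte type. Here is a stdlib sketch of my own (not part of ffmpeg-python; it ignores 64-bit box sizes) that tells whether moov precedes mdat:

```python
import struct

def top_level_atoms(data):
    """Yield the type of each top-level MP4 box (32-bit sizes only)."""
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack('>I', data[pos:pos + 4])
        yield data[pos + 4:pos + 8].decode('ascii', 'replace')
        if size < 8:  # size 0 means "rest of file", 1 means 64-bit size; stop on both
            break
        pos += size

def moov_before_mdat(data):
    atoms = list(top_level_atoms(data))
    return ('moov' in atoms and 'mdat' in atoms
            and atoms.index('moov') < atoms.index('mdat'))
```

If this returns False, FFmpeg can relocate the atom to the front with `-movflags +faststart` when re-muxing the file.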
I am using the following API by Google, https://developers.google.com/nest/device-access/traits/device/camera-live-stream
I have successfully been able to see a list of my devices and relevant information. I am also able to make a successful GenerateRtspStream request, and I receive the following response, as documented in their API:
{
  "results" : {
    "streamUrls" : {
      "rtsp_url" : "rtsps://someurl.com/CjY5Y3VKaTZwR3o4Y19YbTVfMF...?auth=g.0.streamingToken"
    },
    "streamExtensionToken" : "CjY5Y3VKaTZwR3o4Y19YbTVfMF...",
    "streamToken" : "g.0.streamingToken",
    "expiresAt" : "2018-01-04T18:30:00.000Z"
  }
}
The problem however is I am unable to access the video feed. I have tried using things like VLC player and Pot Player to view the live feed, but they say that URL does not exist. I have also tried using OpenCV in python to try and access the live-feed as well and it also does not work ( I have tested opencv on local files and they work just fine ).
Am I doing something wrong with rtsps URLs? How do I access the live feed, either in Python or in some third-party application like VLC Player?
Here are some examples of what I have already tried:
import cv2 as cv
x = cv.VideoCapture(STREAM_URL)
# ret is False --- it works on local files as it returns True and I am able to view the media
ret, img = x.read()
Here is the attempt using Pot Player/VLC
My goal is to do processing on this video-feed/image in python, so ideally my solution would be using opencv or something along those lines. I was mainly using VLC and other players to debug the issue with this url first.
UPDATE
I have tested using the following public link:
rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov
import cv2 as cv

MYURL = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"
# MYURL = STREAM_URL  # swap in the Nest URL to reproduce the failure

x = cv.VideoCapture(MYURL)
while True:
    ret, img = x.read()
    if not ret:
        print('URL not working')
        break
    cv.imshow('frame', img)
    cv.waitKey(1)
And it works perfectly with opencv as well as Pot Player. So maybe the issue is with the Google Devices Access API? The URL they provide may not be correct? Or am I missing something here?
Maybe it has to do with the rtsps URL vs rtsp? How can I fix that?
Both ffmpeg and ffplay worked fine for me, no rebuild necessary. On macOS I just did:
brew install ffmpeg
ffplay -rtsp_transport tcp "rtsps://..."
Fill in the huge stream URL. Note the quotes: there was something about the URL without quotes that zsh didn't like. Alternatively, to save the stream to a file:
ffmpeg -y -loglevel fatal -rtsp_transport tcp -i "rtsps://..." -acodec copy -vcodec copy /path/to/out.mp4
You could use different options with ffmpeg to transform the stream to something other than rtsps for consumption by some other application.
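To drive the same recording from a Python script, the command line above can be assembled for subprocess; a sketch (the rtsps URL stays the placeholder from the answer, and ffmpeg must be on PATH):

```python
import subprocess

def build_ffmpeg_cmd(stream_url, out_path):
    # Mirrors the shell command above: force RTSP over TCP and copy both
    # streams to the output file without re-encoding.
    return [
        'ffmpeg', '-y', '-loglevel', 'fatal',
        '-rtsp_transport', 'tcp',
        '-i', stream_url,
        '-acodec', 'copy', '-vcodec', 'copy',
        out_path,
    ]

# subprocess.run(build_ffmpeg_cmd('rtsps://...', '/path/to/out.mp4'), check=True)
```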
Interestingly, despite the API telling me this:
"maxVideoResolution": {
    "width": 640,
    "height": 480
},
this is the info from ffplay:
Metadata:
title : SDM
Duration: N/A, start: -0.110000, bitrate: N/A
Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp
Stream #0:1: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1600x1200 [SAR 1:1 DAR 4:3], 15 fps, 15 tbr, 90k tbn, 30 tbc
Indicating 1600x1200. Not sure why maxVideoResolution isn't actually the max resolution?
I'd suggest trying ffmpeg; however, you may need to build it from source.
If you're having trouble with ffmpeg, you can modify the ffmpeg source to increase control_uri (in libavformat/rtsp.h) size from 1024 to 2048, and recompile. Then ffmpeg should be able to play the RTSPS streams.
I have a large set of videos that I need to process, but this is slow going, since the processing involves running through the entire video at about 0.6 FPS, and the vast majority of frames have little change between them.
Is there some way I could sample the video say every two seconds and save this as another video cutting the framerate and duration in two? I am not worried about losing information by doing this, I would gladly cut a 10 minute video down to a few hundred frames. I do need it to be a video file however and not a set of images.
You may solve it using cv2.VideoCapture and cv2.VideoWriter, or using FFmpeg.
I posted both an OpenCV solution and an FFmpeg solution; the FFmpeg one uses ffmpeg-python.
Solving using OpenCV:
Open input video file for reading using cv2.VideoCapture.
Open output video file for writing using cv2.VideoWriter.
Read all frames, but write only every 10th frame to the output file.
Solving using FFmpeg:
Solution is based on: Selecting one every n frames from a video using ffmpeg, but uses ffmpeg-python.
The following example is "self contained" - generates synthetic input video file for testing, and executes the frame skipping on the generated input video.
Here is the code (example using ffmpeg-python is at the bottom):
import ffmpeg # Note: import ffmpeg is not a must, when using OpenCV solution.
import cv2
import sys
out_filename = 'out.mp4'
# Build synthetic video and read binary data into memory (for testing):
#########################################################################
in_filename = 'in.mp4'
width, height = 320, 240
fps = 1 # 1Hz (just for testing)
# Build synthetic video, for testing:
# ffmpeg -y -f lavfi -i testsrc=size=320x240:rate=1 -c:v libx264 -crf 18 -t 50 in.mp4
(
    ffmpeg
    .input('testsrc=size={}x{}:rate={}'.format(width, height, fps), f='lavfi')
    .output(in_filename, vcodec='libx264', crf=18, t=50)
    .overwrite_output()
    .run()
)
#########################################################################
# Open video file for reading
in_vid = cv2.VideoCapture(in_filename)
# Exit if the video could not be opened.
if not in_vid.isOpened():
    print('Cannot open input video file')
    sys.exit()
# Read first image (for getting resolution).
ok, frame = in_vid.read()
if not ok:
    print('Cannot read video file')
    sys.exit()
width, height = frame.shape[1], frame.shape[0]
# Get frame rate of input video.
fps = in_vid.get(cv2.CAP_PROP_FPS)
# Create video writer
# H.264 doesn't come with the default installation of OpenCV, but I preferred using H.264 (supposed to be better than XVID).
# https://stackoverflow.com/questions/41972503/could-not-open-codec-libopenh264-unspecified-error
# (I had to download openh264-1.8.0-win64.dll)
out_vid = cv2.VideoWriter(out_filename, cv2.VideoWriter_fourcc(*'H264'), fps, (width, height))
frame_counter = 0
while True:
    # Write every 10th frame to the output video file.
    if (frame_counter % 10) == 0:
        out_vid.write(frame)

    # Read a new frame
    ok, frame = in_vid.read()
    if not ok:
        break

    frame_counter += 1
out_vid.release()
# Selecting one every n frames from a video using FFmpeg:
# https://superuser.com/questions/1274661/selecting-one-every-n-frames-from-a-video-using-ffmpeg
# ffmpeg -y -r 10 -i in.mp4 -vf "select=not(mod(n\,10))" -vsync vfr -vcodec libx264 -crf 18 1_every_10.mp4
out_filename = '1_every_10.mp4'
# Important: set input frame rate to fps*10
(
    ffmpeg
    .input(in_filename, r=str(fps*10))
    .output(out_filename, vf=r'select=not(mod(n\,10))', vsync='vfr', vcodec='libx264', crf='18')
    .overwrite_output()
    .run()
)
Note: The solution loses some of the video quality, due to decoding and re-encoding.
You may also try FFmpeg's decimate filter, if you are looking for a solution that skips frames according to a metric calculation.
I would like to be able to capture YouTube videos (both live and recorded) using open cv.
I have found the question below, but it seems, based on the comments and my own trials, that the code/solutions provided there do not work with recent OpenCV versions.
Is it possible to stream video from https:// (e.g. YouTube) into python with OpenCV?
Is there any way to stream YouTube video through the most recent open cv version: opencv-python 4.1.0.25?
My goal is to use this to test a facial recognition algorithm on several random video streams which have human faces (for example news shows) to test for false positives.
Below is the method that I used to stream the data into OpenCV. But I was using an older version of OpenCV and linked to Caffe myself.
Install pafy and youtube-dl:
pip install pafy
pip install youtube_dl
After installing, copy the URL of the video you want. Below is the sample code:
import cv2
import pafy

url = 'https://youtu.be/1AbfRENy3OQ'
urlPafy = pafy.new(url)
videoplay = urlPafy.getbest(preftype="webm")

cap = cv2.VideoCapture(videoplay.url)
while True:
    ret, src = cap.read()
    if not ret:  # stop when the stream ends
        break
    cv2.imshow('src', src)
    cv2.waitKey(1)
    # do your stuff here.
cap.release()
cv2.destroyAllWindows()
But if you want to auto-select a random video with a face in it, that will be a bit more complicated.
You need to use the YouTube API to get random video IDs from a set of search words (e.g. "pretty face", "deep fake faces").
Then loop through the queried results for your learning algorithm. Below is a short sample from another post:
import json
import urllib.request
import string
import random

count = 50
API_KEY = 'your_key'

# Build a random 3-character search term (named so it doesn't shadow the random module).
search_term = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(3))

urlData = "https://www.googleapis.com/youtube/v3/search?key={}&maxResults={}&part=snippet&type=video&q={}".format(API_KEY, count, search_term)
webURL = urllib.request.urlopen(urlData)
data = webURL.read()
encoding = webURL.info().get_content_charset('utf-8')
results = json.loads(data.decode(encoding))

for item in results['items']:
    videoId = item['id']['videoId']
    print(videoId)
    # store your ids
But without ground-truth labels, it is difficult to get a quantitative measure of your algorithm's performance. Thus, I would suggest taking one of the existing face video datasets and computing the score on that; you need a properly generated score for publication.
https://www.cs.tau.ac.il/~wolf/ytfaces/
I have been using MoviePy to combine several shorter video files into hour-long files. Some small files are "broken": they contain video but were not finalized correctly (i.e. they play in VLC, but there is no duration and you cannot skip around in the video).
I noticed this issue when I tried to create a clip using the VideoFileClip(file) function. The error that comes up is:
MoviePy error: failed to read the duration of file
Is there a way to still read the "good" frames from this video file and then add them to the longer video?
UPDATE
To clarify, my issue specifically is with the following function call:
clip = mp.VideoFileClip("/home/test/"+file)
Stepping through the code, it seems to be an issue when checking the duration of the file in ffmpeg_reader.py, where it looks for the duration parameter of the video file. However, since the file never finished recording properly, this information is missing. I'm not very familiar with the way video files are structured, so I am unsure how to proceed from here.
You're correct. This issue arises commonly when the video duration info is missing from the file.
Here's a thread on the issue: GitHub moviepy issue 116
One user proposed the solution of using MP4Box to convert the video using this guide: RASPIVID tutorial
The final solution that worked for me involved specifying the path to ImageMagick's binary file as WDBell mentioned in this post.
I had the path correctly set in my environment variables, but it wasn't until I specifically defined it in config_defaults.py that it started working.
I solved it in a simpler way: with the help of VLC I converted the file to the format "MPEG4 xxx TV/device", where
xxx = 720p or
xxx = 1080p
depending on your choice of output format. You can now use your new file with Python without any problem.
I already answered this question on GitHub: https://github.com/Zulko/moviepy/issues/116
This issue appears when the VideoFileClip(file) function from MoviePy looks for the duration parameter in the video file and it is missing. To avoid the error (in the case of such corrupted files) you should make sure that the total-frames parameter is not null before calling the function clip = mp.VideoFileClip("/home/test/"+file).
So, I handled it in a simpler way using cv2.
The idea:
find out the total frames
if frames is null, then call the cv2 writer and generate a temporary copy of the video clip.
mix the audio from the original video into the copy.
replace the original video with the copy and delete the copy.
then call the function clip = mp.VideoFileClip("/home/test/"+file)
Clarification: since the OpenCV VideoWriter does not encode audio, the new copy will not contain audio; it is therefore necessary to extract the audio from the original video and mix it into the copy before replacing the original video.
You must import cv2
import cv2
And then add something like this in your code before the evaluation:
cap = cv2.VideoCapture("/home/test/"+file)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
print(f'Checking video {file}: frames {frames} fps: {fps}')
For a corrupted file this will return 0 frames, but it should still return at least the frame rate (fps).
Now we can set the evaluation to avoid the error and handle it making a temp video:
if frames == 0:
    print(f'No frames data in video {file}, trying to convert this video..')
    writer = cv2.VideoWriter("/home/test/fixVideo.avi", cv2.VideoWriter_fourcc(*'DIVX'),
                             int(cap.get(cv2.CAP_PROP_FPS)),
                             (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))
    while True:
        ret, frame = cap.read()
        if ret is True:
            writer.write(frame)
        else:
            cap.release()
            print("Stopping video writer")
            writer.release()
            writer = None
            break
Mix the audio from the original video into the copy. I have created a function for this:
from moviepy.editor import VideoFileClip, CompositeAudioClip

def mix_audio_to_video(pathVideoInput, pathVideoNonAudio, pathVideoOutput):
    videoclip = VideoFileClip(pathVideoInput)
    audioclip = videoclip.audio
    new_audioclip = CompositeAudioClip([audioclip])
    videoclipNew = VideoFileClip(pathVideoNonAudio)
    videoclipNew.audio = new_audioclip
    videoclipNew.write_videofile(pathVideoOutput)

mix_audio_to_video("/home/test/"+file, "/home/test/fixVideo.avi", "/home/test/fixVideo.mp4")
Replace the original video and delete the copy:
import os

os.replace("/home/test/fixVideo.mp4", "/home/test/"+file)
I had the same problem and I found a solution.
I don't know why, but if we write the path as a raw string, path = r'<path>', instead of "F:\\path", we get no error.
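The likely reason: in an ordinary string literal, backslashes in a Windows path are interpreted as escape sequences, while a raw string keeps them literal. A small illustration (the path is made up):

```python
# '\n' and '\t' in an ordinary literal become newline and tab,
# silently corrupting the path before it ever reaches MoviePy.
plain = 'F:\new_clips\test.mp4'
raw = r'F:\new_clips\test.mp4'

print('\n' in plain)   # → True: the plain literal contains a real newline
print(plain == raw)    # → False: the two literals are NOT the same string
```

So either use raw strings or double every backslash ("F:\\new_clips\\test.mp4").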
Another option: open
C:\Users\gladi\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py
and replace its code with the edited version I provide on GitHub: https://github.com/dudegladiator/Edited-ffmpeg-for-moviepy
clip1 = VideoFileClip('path')
c = clip1.duration
print(c)