I would like to be able to capture YouTube videos (both live and recorded) using OpenCV.
I have found the question below, but it seems, based on the comments and my own trials, that the code/solutions provided there do not work with recent OpenCV versions.
Is it possible to stream video from https:// (e.g. YouTube) into python with OpenCV?
Is there any way to stream YouTube video through the most recent OpenCV version, opencv-python 4.1.0.25?
My goal is to test a facial recognition algorithm against several random video streams that contain human faces (for example, news shows) to check for false positives.
Below is the method I used to stream the data into OpenCV. However, I was using an older version of OpenCV and linking to Caffe myself.
Install pafy and youtube_dl:
pip install pafy
pip install youtube_dl
After installing, copy the URL of the video you want. Below is the sample code:
import cv2
import pafy

url = 'https://youtu.be/1AbfRENy3OQ'
urlPafy = pafy.new(url)
videoplay = urlPafy.getbest(preftype="webm")

cap = cv2.VideoCapture(videoplay.url)
while True:
    ret, src = cap.read()
    if not ret:
        break
    cv2.imshow('src', src)
    # do your stuff here
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
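A note on why the older answers stop working: pafy relies on youtube-dl, and both break whenever YouTube changes its internals; it is not tied to the OpenCV version. As an alternative, untested sketch, you could resolve the direct stream URL with yt-dlp instead and pass it to cv2.VideoCapture (the video URL and format selector below are placeholders):

import cv2
import yt_dlp

url = "https://youtu.be/1AbfRENy3OQ"  # placeholder video
# Restrict to a single mp4 format so the info dict exposes a direct "url" field
ydl_opts = {"format": "best[ext=mp4]"}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info(url, download=False)

cap = cv2.VideoCapture(info["url"])
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()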
But if you want to automatically select random videos with faces in them, that will be a bit more complicated.
You need to use the YouTube API to get random video IDs from a set of search words (e.g. "pretty face", "deep fake faces").
Then loop through the queried results automatically for your learning algorithm. Below is a short sample adapted from another post:
import json
import urllib.request
import string
import random

count = 50
API_KEY = 'your_key'

# Random three-character search string
query = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(3))

urlData = "https://www.googleapis.com/youtube/v3/search?key={}&maxResults={}&part=snippet&type=video&q={}".format(API_KEY, count, query)
webURL = urllib.request.urlopen(urlData)
data = webURL.read()
encoding = webURL.info().get_content_charset('utf-8')
results = json.loads(data.decode(encoding))

for item in results['items']:
    videoId = item['id']['videoId']
    print(videoId)
    # store your ids
But without ground-truth labels, it is difficult to get a quantitative measure of your algorithm's performance. Thus, I would suggest using one of the established face video datasets to compute the score; you need a properly generated score for publication.
https://www.cs.tau.ac.il/~wolf/ytfaces/
I'm trying to use Python to capture a photo from my IP cameras every minute and save it as a file for later use.
I'm still not good at Python and I don't understand why sometimes I get the right image and other times I get a corrupted gray image.
I'm using the Hikvision API to get the RTSP stream, and even while the stream is working, the images are sometimes still completely gray.
Here is the code I wrote:
import cv2
import time

count = 0
while True:
    for x in range(1, 9):
        count = count + 1
        RTSP_URL = f'rtsp://user:password@ip:port/ISAPI/Streaming/Channels/{x}01'
        cap = cv2.VideoCapture(RTSP_URL, cv2.CAP_FFMPEG)
        result, image = cap.read()
        cap.release()
        if result:
            cv2.imwrite(f"pictures/{x}{count}.png", image)
    time.sleep(60)
I would be happy to hear suggestions to find the best way to do this task.
We have a video stream from a camera via NDI. How can we get it into OpenCV?
import cv2

cap = cv2.VideoCapture("tcp://192.168.1.69")
while cap.isOpened():
    _, frame = cap.read()
    # frame processing
We have tried the following variations of the string:
tcp://192.168.1.69
tcp://192.168.1.69:8080
http://192.168.1.69
http://192.168.1.69:8080
udp://192.168.1.69:8080
But we get an error every time. What is the correct string to use for an NDI stream?
A bit late, and I'm sure you may have already come across a solution. You also did not state the platform requirements, so the solution I have is currently Windows-only.
The "NDI Virtual Input" driver allows an NDI network stream to be treated as a webcam source, so you can just set the video capture source to the ID of that device. This requires the driver to be installed on the client system.
import cv2

# The device index is system specific, but it's usually 0, 1, etc.
cap = cv2.VideoCapture(1)
while cap.isOpened():
    _, frame = cap.read()
    # frame processing
Have a look at PyNDI - I added some examples there to show you how to get NDI into OpenCV.
The SimpleSourceViewer is command-line based; the GUIExample uses Tkinter to give you an interface.
I'm using the following code to open a video stream:
import cv2
video = cv2.VideoCapture()
video.open("some_m3u8_link")
success, image = video.read()
However, even though the code works as intended locally, on Heroku success is always False.
I'm using cedar-14 stack with the following buildpacks:
heroku/python
https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git
(I tried several buildpack options for ffmpeg)
Running ffmpeg --version on the Heroku instance returns ffmpeg version 4.0-static (https://johnvansickle.com/ffmpeg/).
Is there any setting/configuration I missed in order to make it work on deployment? Thank you!
Later edit: I tried several links for "some_m3u8_link" including from twitch and other streaming services (including traffic streaming li
An example for reproducing:
python -c "import cv2; video=cv2.VideoCapture(); video.open('https://hddn01.skylinewebcams.com/live.m3u8?a=5tm6kfqrhqbpblan9j5d4bmua4'); success, image = video.read(); print(success)"
Returns True on local machine and False on Heroku.
(the link is taken from here)
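If it helps with debugging, the usual culprit in this situation is that the OpenCV build on the dyno lacks FFMPEG support, which is what handles m3u8/HLS input; the standalone ffmpeg binary from the buildpack is not used by OpenCV itself. A minimal check sketch (using the same placeholder link as above):

import cv2

# Look for "FFMPEG: YES" under "Video I/O" in the output; if it says NO,
# m3u8/HLS URLs cannot be opened regardless of the buildpack's ffmpeg binary.
print(cv2.getBuildInformation())

# Explicitly request the FFMPEG backend instead of letting OpenCV pick one.
video = cv2.VideoCapture("some_m3u8_link", cv2.CAP_FFMPEG)
print(video.isOpened())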
You can use the pafy module with cv2.
Try OpenCV 3 if it is not working with your current cv2 build.
import cv2
import pafy

url = "Some url to stream"
video = pafy.new(url)
best = video.getbest(preftype="webm")
video = cv2.VideoCapture(best.url)
pafy (PyPI)
You can try this:
import cv2
video = cv2.VideoCapture("some_m3u8_link")
success, image = video.read()
Specifying the mode to open it might work.
video.open("some_m3u8_link", "r")
If that doesn't work then specifying the file extension might help.
You might also need to assign the result of the function to a variable.
Ex:
""" replace .mp4 with the applicable file type,
I don't know if you need to specify file mode"""
import cv2
video = cv2.VideoCapture()
video = video.open("some_m3u8_link.mp4")
success, image = video.read()
If this doesn't work then I am out of ideas.
I have been using MoviePy to combine several shorter video files into hour-long files. Some small files are "broken": they contain video but were not finalized correctly (i.e. they play in VLC, but there is no duration and you cannot skip around in the video).
I noticed this issue when I try to create a clip using VideoFileClip(file) function. The error that comes up is:
MoviePy error: failed to read the duration of file
Is there a way to still read the "good" frames from this video file and then add them to the longer video?
UPDATE
To clarify, my issue specifically is with the following function call:
clip = mp.VideoFileClip("/home/test/"+file)
Stepping through the code, it seems to be an issue when checking the duration of the file in ffmpeg_reader.py, where it looks for the duration parameter in the video file. However, since the file never finished recording properly, this information is missing. I'm not very familiar with how video files are structured, so I am unsure how to proceed from here.
You're correct. This issue arises commonly when the video duration info is missing from the file.
Here's a thread on the issue: GitHub moviepy issue 116
One user proposed the solution of using MP4Box to convert the video using this guide: RASPIVID tutorial
The final solution that worked for me involved specifying the path to ImageMagick's binary file as WDBell mentioned in this post.
I had the path correctly set in my environment variables, but it wasn't until I specifically defined it in config_defaults.py that it started working:
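The snippet that followed is not shown here; in moviepy's config_defaults.py this amounts to pointing IMAGEMAGICK_BINARY at the executable, roughly like the following (the install path is a made-up example):

# moviepy/config_defaults.py
# Hypothetical Windows path -- point this at wherever magick.exe is actually installed.
IMAGEMAGICK_BINARY = r"C:\Program Files\ImageMagick-7.1.1-Q16\magick.exe"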
I solved it in a simpler way: with the help of VLC, I converted the file to the format MPEG4 xxx TV/device,
and you can then use your new file with Python without any problem,
where xxx = 720p or
xxx = 1080p;
everything depends on your choice of output format.
I already answered this question in the GitHub issue: https://github.com/Zulko/moviepy/issues/116
This issue appears when moviepy's VideoFileClip(file) function looks for the duration parameter in the video file and it is missing. To avoid this (in the case of corrupted files) you should make sure that the total-frames parameter is not null before calling the function clip = mp.VideoFileClip("/home/test/"+file).
So, I handled it in a simpler way using cv2.
The idea:
Find out the total number of frames.
If frames is null, call cv2's VideoWriter and generate a temporary copy of the video clip.
Mix the audio from the original video into the copy.
Replace the original video with the copy and delete the copy.
Then call clip = mp.VideoFileClip("/home/test/"+file).
Clarification: since OpenCV's VideoWriter does not encode audio, the new copy will not contain audio, so it is necessary to extract the audio from the original video and mix it into the copy before replacing the original video with it.
You must import cv2
import cv2
And then add something like this in your code before the evaluation:
cap = cv2.VideoCapture("/home/test/"+file)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fps = int(cap.get(cv2.CAP_PROP_FPS))
print(f'Checking Video {count} Frames {frames} fps: {fps}')
For a corrupted file this will return 0 frames, but it should at least return the frame rate (fps).
Now we can add the check to avoid the error and handle it by making a temp video:
if frames == 0:
    print(f'No frames data in video {file}, trying to convert this video..')
    writer = cv2.VideoWriter("/home/test/fixVideo.avi", cv2.VideoWriter_fourcc(*'DIVX'),
                             int(cap.get(cv2.CAP_PROP_FPS)),
                             (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))
    while True:
        ret, frame = cap.read()
        if ret is True:
            writer.write(frame)
        else:
            cap.release()
            print("Stopping video writer")
            writer.release()
            writer = None
            break
Mix the audio from the original video with the copy. I have created a function for this:
from moviepy.editor import VideoFileClip, CompositeAudioClip

def mix_audio_to_video(pathVideoInput, pathVideoNonAudio, pathVideoOutput):
    videoclip = VideoFileClip(pathVideoInput)
    audioclip = videoclip.audio
    new_audioclip = CompositeAudioClip([audioclip])
    videoclipNew = VideoFileClip(pathVideoNonAudio)
    videoclipNew.audio = new_audioclip
    videoclipNew.write_videofile(pathVideoOutput)

mix_audio_to_video("/home/test/"+file, "/home/test/fixVideo.avi", "/home/test/fixVideo.mp4")
Replace the original video and delete the copies:
import os

os.replace("/home/test/fixVideo.mp4", "/home/test/"+file)
os.remove("/home/test/fixVideo.avi")  # remove the temporary audio-less copy
I had the same problem and I found a solution.
I don't know why, but if we enter the path as a raw string, path = r'<path>', instead of ("F:\\path"), we get no error.
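For illustration, a minimal sketch of what that looks like (the file path is just a placeholder):

from moviepy.editor import VideoFileClip

# Raw string: the backslashes in the Windows path are taken literally, no escaping needed.
clip = VideoFileClip(r"F:\videos\myclip.mp4")
print(clip.duration)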
Just open
C:\Users\gladi\AppData\Local\Programs\Python\Python311\Lib\site-packages\moviepy\video\io\ffmpeg_reader.py
delete the existing code, and replace it with the one I provide on GitHub: https://github.com/dudegladiator/Edited-ffmpeg-for-moviepy
from moviepy.editor import VideoFileClip

clip1 = VideoFileClip('path')
c = clip1.duration
print(c)
I need to split a big video file into smaller pieces by time. Please give me your suggestions and, if you can, some tips on library usage. Thanks.
OpenCV has Python wrappers.
As you're interested in video IO, have a look at QueryFrame and its related functions there.
In the end, your code will look something like this (completely untested):
import cv

capture = cv.CaptureFromFile(filename)

while Condition1:
    # Need a frame to get the output video dimensions
    frame = cv.RetrieveFrame(capture)  # Will return None if there are no frames

    # New video file
    video_out = cv.CreateVideoWriter(output_filenameX,
                                     CV_FOURCC('M', 'J', 'P', 'G'),
                                     capture.fps, frame.size(), 1)

    # Write the frames
    cv.WriteFrame(video_out, frame)
    while Condition2:
        # Will return None if there are no frames
        frame = cv.RetrieveFrame(capture)
        cv.WriteFrame(video_out, frame)
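That old cv module has long since been removed; as a rough, untested sketch of the same idea with the modern cv2 API (the chunk length, the MJPG codec, and the AVI output naming are my own assumptions):

import cv2

def split_video(filename, chunk_seconds, output_pattern="chunk_{:03d}.avi"):
    cap = cv2.VideoCapture(filename)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if fps is not reported
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")
    frames_per_chunk = int(fps * chunk_seconds)

    chunk_index, frame_index, writer = 0, 0, None
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # Start a new output file every chunk_seconds worth of frames
        if frame_index % frames_per_chunk == 0:
            if writer is not None:
                writer.release()
            writer = cv2.VideoWriter(output_pattern.format(chunk_index),
                                     fourcc, fps, (width, height))
            chunk_index += 1
        writer.write(frame)
        frame_index += 1

    if writer is not None:
        writer.release()
    cap.release()

split_video("input.mp4", 600)  # e.g. ten-minute pieces; the filename is a placeholder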
By the way, there are also ways to do this without writing any code.
Check out youtube-upload; it splits the videos using ffmpeg.
Youtube-upload is a command-line script that uploads videos to Youtube. If a video does not comply with Youtube limitations (<2Gb and <15'), it will be automatically split before uploading. Youtube-upload should work on any platform (GNU/Linux, BSD, OS X, Windows, ...) that runs Python and FFmpeg.
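If you only need the splitting and not the upload, ffmpeg's segment muxer can cut by time without re-encoding; a small sketch driving it from Python (the file names and the 600-second segment length are placeholders):

import subprocess

# Stream-copy input.mp4 into ~10-minute pieces; -reset_timestamps 1 makes
# each piece start at t=0 so players report a proper duration.
subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-c", "copy", "-map", "0",
    "-f", "segment", "-segment_time", "600",
    "-reset_timestamps", "1",
    "out%03d.mp4",
], check=True)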