Redirecting the output of ffmpeg in subprocess - python

I have a folder named video_files where I have stored a bunch of video files, e.g. 100.mp4, 101.mp4, and so on. I have written a Python script that iterates over each video file, using subprocess to call ffmpeg to extract the frames, which should then be saved to an output directory named frames. Here is my sample code:
import glob
import subprocess

def frame_extractor(video_files_path):
    video_files = sorted(glob.glob(video_files_path + "**/*.mp4", recursive=True))
    print("Number of video files found: ", len(video_files))
    for i, video in enumerate(video_files):
        subprocess.call(["ffmpeg", "-i", video, "%04d.png"])
    print("Extracted frames from all the videos")
The problem is that the frames are extracted into the current directory, from where I run this script, but I want them to be extracted into the frames folder.
P.S.: frames/%04d.png doesn't work.
Can anyone please tell me how to do this?

You could add a call after your function finishes to move all of the output files.
import os
import glob

for img in glob.glob(video_files_path + '**/*.png', recursive=True):
    img_out = os.path.join('frames', os.path.split(img)[-1])
    os.rename(img, img_out)
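Alternatively, ffmpeg will happily write to frames/%04d.png, but it does not create missing directories, so the output folder has to exist before ffmpeg runs. A minimal sketch of that approach (the per-video subfolder is my own addition, to stop the %04d numbering from one video overwriting another's frames):

import os
import glob
import subprocess

def frame_extractor(video_files_path, out_dir="frames"):
    video_files = sorted(glob.glob(video_files_path + "**/*.mp4", recursive=True))
    for video in video_files:
        video_name = os.path.splitext(os.path.basename(video))[0]
        target = os.path.join(out_dir, video_name)
        os.makedirs(target, exist_ok=True)  # ffmpeg won't create this directory for us
        subprocess.call(["ffmpeg", "-i", video, os.path.join(target, "%04d.png")])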

Related

Is there a way to add a .gif or .mp4 before each image, MoviePy -Python

I'm pretty new to Python and trying to add a .gif file or .mp4 file
before every image inserted into an .mp4 video file using MoviePy.
The code below adds .jpg files from a folder and puts them together into an .mp4 video that shows each image for 10 seconds before moving to the next image in the folder. However, I would like to add a .gif or .mp4 file before each image.
Here's the code:
from moviepy.editor import *
from glob import glob

#MAKE MOVIE
def makeMovie():
    clips = [ImageClip(clip).set_duration(10) for clip in glob("imagesfolder\*.jpg")]  #This adds the images from the folder together
    #After every image added to the video, add a .gif or .mp4 file after it
    video_clip = concatenate_videoclips(clips, method="compose")
    video_clip.write_videofile("memes.mp4", fps=24, remove_temp=True, codec="libx264",
                               audio_codec="aac")

makeMovie()
I want to do something like ==> clips = [ImageClip(clip).set_duration(10) for clip in glob("imagesfolder/*.jpg") + "duckanim0000-0079.mp4"]
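One way this might be done (a sketch, assuming the separator is the duckanim0000-0079.mp4 file mentioned above and that a fresh VideoFileClip is created for every slot so the clips can be concatenated cleanly) is to build the list by alternating the separator clip with each ImageClip:

from moviepy.editor import ImageClip, VideoFileClip, concatenate_videoclips
from glob import glob

def makeMovie():
    clips = []
    for image in glob("imagesfolder/*.jpg"):
        # separator clip placed before each image; swap in your own .gif/.mp4 path
        clips.append(VideoFileClip("duckanim0000-0079.mp4"))
        clips.append(ImageClip(image).set_duration(10))
    video_clip = concatenate_videoclips(clips, method="compose")
    video_clip.write_videofile("memes.mp4", fps=24, codec="libx264", audio_codec="aac")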

How can I save the output from a subprocess to a dataframe?

I'm working on a script to extract EXIF data (Latitude, Longitude, and Altitude) from RTK drone images. I have more or less copied the code below from a YouTube video (Franchyze923), with a few modifications. [I've been coding for a very short time]. How can I get the results of the subprocess to save to a table/dataframe (eventually I want to save the information to a .csv)?
A different version of this script generated a .csv for every image; I then imported all the csv files and pd.concat() them into one dataframe. That works but seems clunky.
import os
import subprocess

#Extracting exif data for images in Agisoft folder
exiftool_location = #path to exiftool.exe
images_to_extract_exif = #path to images

for path, directories, files in os.walk(images_to_extract_exif):
    for images_to_extract_exif in files:
        if images_to_extract_exif.endswith("JPG"):
            full_jpg_path = os.path.join(path, images_to_extract_exif)
            exiftool_command = [exiftool_location, "-filename", "-gpslatitude", "-gpslongitude", "-gpsaltitude", "-T", "-n", full_jpg_path]
            subprocess.run(exiftool_command)
The output from the code looks great - I just have no clue how to save it to a table/dataframe.
DJI_0001.JPG 45.2405341666667 -95.3808298055556 354.427
DJI_0002.JPG 45.2405253333333 -95.3808253055556 354.434
DJI_0003.JPG 45.2404568888889 -95.3808200277778 354.447
DJI_0004.JPG 45.2403695277778 -95.3808205555556 354.431
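Since exiftool's -T flag prints one tab-separated line per image, one way (a sketch reusing the exiftool_location and images_to_extract_exif variables from above; the column names are my own) is to capture each run's stdout and collect the fields into rows for a DataFrame:

import os
import subprocess
import pandas as pd

rows = []
for path, directories, files in os.walk(images_to_extract_exif):
    for filename in files:
        if filename.endswith("JPG"):
            full_jpg_path = os.path.join(path, filename)
            exiftool_command = [exiftool_location, "-filename", "-gpslatitude",
                                "-gpslongitude", "-gpsaltitude", "-T", "-n", full_jpg_path]
            # capture_output=True collects stdout; text=True decodes it to str
            result = subprocess.run(exiftool_command, capture_output=True, text=True)
            # -T output is tab-separated: filename, latitude, longitude, altitude
            rows.append(result.stdout.strip().split("\t"))

df = pd.DataFrame(rows, columns=["filename", "latitude", "longitude", "altitude"])
df.to_csv("exif_data.csv", index=False)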

Append series of videos together in Python/OpenCV

I'm processing a video file and decided to split it up into equal chunks for parallel processing, with each chunk running in its own process. This generates a series of video files that I want to connect together to reconstruct the original video.
I'm wondering what's the most efficient way of stringing these videos together without having to append frame by frame? (and ideally deleting the video files after they are read so I'm only left with one big video).
I wanted a programmatic solution as opposed to a command. I found moviepy very useful for concatenating videos (it's based on ffmpeg). natsort is very useful for organizing the files in numerical order.
import os
from moviepy.editor import VideoFileClip, concatenate_videoclips
from natsort import natsorted

#path is path to folder of videos
def concatVideos(path):
    currentVideo = None
    #List all files in the directory and read them one by one
    for filePath in natsorted(os.listdir(path)):
        if filePath.endswith(".mov"):
            if currentVideo is None:
                currentVideo = VideoFileClip(path + filePath)
                continue
            video_2 = VideoFileClip(path + filePath)
            currentVideo = concatenate_videoclips([currentVideo, video_2])
    currentVideo.write_videofile("export.mp4")
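The question also mentions deleting the chunk files once they have been read. A sketch of that step (an addition of mine, and only safe after write_videofile has finished, because the clips stream from those files while the final video is being written):

import os
from moviepy.editor import VideoFileClip, concatenate_videoclips
from natsort import natsorted

def concatVideos(path, output="export.mp4"):
    files = [f for f in natsorted(os.listdir(path)) if f.endswith(".mov")]
    clips = [VideoFileClip(path + f) for f in files]
    concatenate_videoclips(clips).write_videofile(output)
    for clip in clips:
        clip.close()          # release each clip's reader before touching the source files
    for f in files:
        os.remove(path + f)   # delete the chunks now that the combined video exists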

Combine list of videos into one video

I have a list of videos (all .mp4) that I would like to combine into one large .mp4 video. The names of the files are as follows: vid0.mp4, vid1.mp4, vid2.mp4, .... After searching, I found a Quora question which explains that the main file should be opened, then all sub-files should be read (as bytes) and then written. So here is my code:
import os

with open("MainVideo.mp4", "wb") as f:
    for video in os.listdir("/home/timmy/sd/videos/"):
        temp = open('/home/timmy/sd/videos/%s' % video, 'rb')
        h = temp.read()
        '''
        for i in h:
            f.write(i) #Error
        '''
        f.write(h)
        temp.close()
This is only writing the first video. Is there a way to write it without using outside libraries? If not, please refer me to one.
I also tried the moviepy library but I get OSError.
code:
from moviepy.editor import VideoFileClip, concatenate_videoclips

li1 = []
for i in range(0, 30):
    name_of_file = "/home/timmy/sd/videos/vid%d.mp4" % i
    clip = VideoFileClip(name_of_file)
    #print(name_of_file)
    li1.append(clip)
I get OSError after the 9th clip. (I think this is because of the size of the list.)
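Simply concatenating the raw bytes generally can't work for MP4: each file carries its own container headers and index, so a player stops after the first video's data. Sticking with moviepy, a sketch of the rest of that approach (closing each clip afterwards is my addition; running out of file handles or reader processes is one possible cause of the OSError, though that is an assumption):

from moviepy.editor import VideoFileClip, concatenate_videoclips

clips = []
for i in range(0, 30):
    clips.append(VideoFileClip("/home/timmy/sd/videos/vid%d.mp4" % i))

final_clip = concatenate_videoclips(clips)
final_clip.write_videofile("MainVideo.mp4")

for clip in clips:
    clip.close()  # release the underlying ffmpeg readers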

How do I get the face_recognition encoding from many images in a directory and store them in a CSV File?

This is the code I have and it works for single images:
Loading the images and applying the encoding
import face_recognition
import numpy as np
import pandas as pd
from face_recognition.face_recognition_cli import image_files_in_folder

Image1 = face_recognition.load_image_file("Folder/Image1.jpg")
Image_encoding1 = face_recognition.face_encodings(Image1)
Image2 = face_recognition.load_image_file("Folder/Image2.jpg")
Image_encoding2 = face_recognition.face_encodings(Image2)
Face encodings are stored in the first array; after column_stack we have to resize
Encodings_For_File = np.column_stack(([Image_encoding1[0]],
                                      [Image_encoding2[0]]))
Encodings_For_File.resize((2, 128))
Convert array to pandas dataframe and write to csv
Encodings_For_File_Panda = pd.DataFrame(Encodings_For_File)
Encodings_For_File_Panda.to_csv("Celebrity_Face_Encoding.csv")
How do I loop over the images in 'Folder' and extract the encodings into a csv file? I have to do this with many images and cannot do it manually. I have tried several approaches, but none are working for me. Can cv2 be used instead of load_image_file?
Try this
Note: I am assuming you don't need to specify the folder path before the file name in your command. This code will show you how to iterate over the directory, list the files, and process them.
import os
import face_recognition
import numpy as np
from face_recognition.face_recognition_cli import image_files_in_folder

my_dir = 'folder/path/'  # Folder where all your image files reside. Ensure it ends with '/'
encoding_for_file = []  # Create an empty list for saving encoded files
for i in os.listdir(my_dir):  # Loop over the folder to list individual files
    image = my_dir + i
    image = face_recognition.load_image_file(image)  # Run your load command
    image_encoding = face_recognition.face_encodings(image)  # Run your encoding command
    encoding_for_file.append(image_encoding[0])  # Append the results to encoding_for_file list

encoding_for_file = np.array(encoding_for_file)  # A list has no resize; convert to a numpy array first
encoding_for_file.resize((len(encoding_for_file), 128))  # Resize using your command: one 128-value row per image
You can then convert to pandas and export to csv. Let me know how it goes
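For that last step, a minimal sketch (reusing the encoding_for_file array from above and the csv name from the question):

import pandas as pd

encodings_df = pd.DataFrame(encoding_for_file)      # one row per image, 128 columns
encodings_df.to_csv("Celebrity_Face_Encoding.csv")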
