I am trying to remove files after I'm done using them but I get an error all the time...
The error:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: '0.mp4'
I tried closing the objects, but it's not working...
import os
from moviepy.editor import VideoFileClip, concatenate_videoclips
DEFAULT_HEIGHT = 720
DEFAULT_WIDTH = 1280
clip_names = ["0.mp4", "1.mp4"]
clips = []
for name in clip_names:
    clips.append(VideoFileClip(name).resize(width=DEFAULT_WIDTH, height=DEFAULT_HEIGHT))

final_clip = concatenate_videoclips(clips)
final_clip.write_videofile("video.mp4")

for clip in clips:
    clip.__del__()

for name in clip_names:
    os.remove(name)
I want to remove the files with os.remove...
Did you try simply closing the clips?
for clip in clips:
    clip.close()
From the source here.
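Applied to the code in your question, the tail end would look something like this (a sketch; same names as in your snippet, with the sources deleted only after the clips are closed):
final_clip = concatenate_videoclips(clips)
final_clip.write_videofile("video.mp4")

# Release the readers moviepy keeps open on the source files...
for clip in clips:
    clip.close()

# ...then the sources can be removed.
for name in clip_names:
    os.remove(name)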
Or, if you want to do this cleanly in case of errors, use a with context:
import contextlib
with contextlib.ExitStack() as stack:
    clips = []
    for name in clip_names:
        clip = VideoFileClip(name)
        stack.enter_context(contextlib.closing(clip))
        clips.append(clip.resize(width=DEFAULT_WIDTH, height=DEFAULT_HEIGHT))
    final_clip = concatenate_videoclips(clips)
    final_clip.write_videofile("video.mp4")

# exiting the `with` block released the clips
for name in clip_names:
    os.remove(name)
This approach uses an ExitStack to track clips. When the program exits the with block, all contexts passed to enter_context are exited, releasing all the clips.
I've put together the following code to try and automate the tedious parts of my video editing process. The issue I'm running into is that I run the code in Visual Studio Code, but nothing happens and I receive no errors.
I want the specified folder to be monitored for unedited/new .mp4 files. When one or more are detected, each .mp4 should be formatted for TikTok upload. (As I would do in a video editor, the clip should keep its aspect ratio inside a 1080x1920/9:16 project, and a duplicate of the clip is placed behind it, stretched without keeping its aspect ratio and with a Gaussian blur applied.) Once the process is done, each edited clip would be put into a separate "edited" folder, or back in the original location but renamed as edited.
Here is my current code:
import os
import shutil
import ffmpeg
import watchdog
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
# path to monitor
monitor_path = "D:\Downloads\TikTok"
# path to output folder
output_folder = "D:\Downloads\Edited"
# define a function to be called when a new .mp4 file is detected
def detect_new_file(event):
    # input video file path
    input_video = event.src_path
    # output video file name
    output_video_name = os.path.basename(input_video)
    # output video file path
    output_video = os.path.join(output_folder, output_video_name)
    # start ffmpeg command
    ffmpeg_command = (
        ffmpeg
        .input(input_video)
        .trim(start_time=0, end_time='-10')
        .filter('scale', size='1080x1920')
        .filter('blur', size=5)
        .overlay(input_video, x=540, y=960)
        .output(output_video, format='mp4')
        .run()
    )
    # move the original file to a new folder
    shutil.move(input_video, os.path.join(monitor_path, "D:\Downloads\OriginalClips"))
# create a handler for the file event
class MP4Handler(watchdog.events.FileSystemEventHandler):
    def on_created(self, event):
        if event.src_path.endswith('.mp4'):
            detect_new_file(event)

handler = MP4Handler()
# create a watcher object
watcher = watchdog.observers.Observer()
# start monitoring the folder
watcher.schedule(handler, monitor_path, recursive=True)
watcher.start()
# wait for events
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    watcher.stop()

# close the watcher
watcher.join()
Current received error(s):
TypeError: Expected incoming stream(s) to be of one of the following types: ffmpeg.nodes.FilterableStream; got <class 'str'>
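Looking at the traceback, I think the problem is the .overlay(input_video, ...) call: it receives the plain path string where ffmpeg-python apparently expects a stream object built with ffmpeg.input(). A rough sketch of the stream-based form I believe it wants (the filter names and parameters here are my guesses, not something I have verified):
# Sketch only: both overlay inputs are streams, not path strings.
background = (
    ffmpeg
    .input(input_video)
    .filter('scale', 1080, 1920)   # fill the 9:16 frame, aspect ratio not preserved
    .filter('gblur', sigma=20)     # FFmpeg's gaussian blur filter
)
foreground = (
    ffmpeg
    .input(input_video)
    .filter('scale', 1080, -2)     # keep the aspect ratio at width 1080
)
(
    ffmpeg
    .overlay(background, foreground, x='(W-w)/2', y='(H-h)/2')
    .output(output_video, format='mp4')
    .run()
)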
Any help is appreciated.
I've been trying to get the hang of multithreading in Python. However, whenever I attempt to make it do something that might be considered useful, I run into issues.
In this case, I have 300 PDF files. For simplicity, we'll assume that each PDF only has a unique number on it (say 1 to 300). I'm trying to make Python open the file, grab the text from it, and then use that text to rename the file accordingly.
The non-multithreaded version I made works great, but it's a bit slow and I thought I'd see if I could speed it up. However, this version finds the very first file, renames it correctly, and then throws an error saying:
FileNotFoundError: [Errno 2] No such file or directory: './pdfPages/1006941.pdf'
Which is basically telling me that it can't find a file by that name. The reason it can't is that the file has already been renamed. And in my head that tells me that I've probably messed something up with this loop and/or multithreading in general.
Any help would be appreciated.
Source:
import PyPDF2
import os
from os import listdir
from os.path import isfile, join
from PyPDF2 import PdfFileWriter, PdfFileReader
from multiprocessing.dummy import Pool as ThreadPool
# Global
i=0
def readPDF(allFiles):
    print(allFiles)
    global i
    while i < l:
        i += 1
        pdf_file = open(path+allFiles, 'rb')
        read_pdf = PyPDF2.PdfFileReader(pdf_file)
        number_of_pages = read_pdf.getNumPages()
        page = read_pdf.getPage(0)
        page_content = page.extractText()
        pdf_file.close()
        Text = str(page_content.encode('utf-8')).strip("b").strip("'")
        os.rename(path+allFiles, path+pre+"-"+Text+".PDF")
pre = "77"
path = "./pdfPages/"
included_extensions = ['pdf','PDF']
allFiles = [f for f in listdir(path) if any(f.endswith(ext) for ext in included_extensions)] # Get all files in current directory
l = len(allFiles)
pool = ThreadPool(4)
doThings = pool.map(readPDF, allFiles)
pool.close()
pool.join()
Yes, you have indeed messed up the loop, as you say. The loop should not be there at all: pool.map(...) handles this implicitly and ensures that each function call receives a unique file name from your list to work with. You should not do any other looping.
I have updated your code below, removing the loop and making some other changes (minor, but still improvements, I think):
# Removed a number of imports
import PyPDF2
import os
from multiprocessing.dummy import Pool as ThreadPool

# Removed the global variable, which is not needed

def readPDF(allFiles):
    # The while loop is not needed, as pool.map will distribute the
    # different files to the worker threads anyway
    print(allFiles)
    pdf_file = open(path+allFiles, 'rb')
    read_pdf = PyPDF2.PdfFileReader(pdf_file)
    number_of_pages = read_pdf.getNumPages()
    page = read_pdf.getPage(0)
    page_content = page.extractText()
    pdf_file.close()
    Text = str(page_content.encode('utf-8')).strip("b").strip("'")
    os.rename(path+allFiles, path+pre+"-"+Text+".PDF")

pre = "77"
path = "./pdfPages/"
included_extensions = ('pdf', 'PDF')  # Tuple instead of list:
# a tuple allows for a simpler "f.endswith"
allFiles = [f for f in os.listdir(path) if f.endswith(included_extensions)]

pool = ThreadPool(4)

doThings = pool.map(readPDF, allFiles)
# doThings will be a list of "None"s since readPDF returns nothing

pool.close()
pool.join()
Thus, the global variable and the counter are not needed, since all of that is handled implicitly. But even with these changes it is not at all certain that this will speed up your execution much. Most likely, the bulk of your program's execution time is spent waiting for the disk. In that case, it is possible that even with multiple threads, they will still have to wait for the main resource, i.e. the hard drive. But to know for certain, you have to test.
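A quick way to test is to time the serial loop against the threaded pool on the same folder; a minimal sketch (timing code only, the helper names here are hypothetical):
import time
from multiprocessing.dummy import Pool as ThreadPool

def run_serial(files):
    start = time.perf_counter()
    for f in files:
        readPDF(f)
    return time.perf_counter() - start

def run_threaded(files, workers=4):
    start = time.perf_counter()
    pool = ThreadPool(workers)
    pool.map(readPDF, files)
    pool.close()
    pool.join()
    return time.perf_counter() - start

# Compare on copies of the same files, since readPDF renames its input:
# print(run_serial(allFiles), run_threaded(allFiles))
If the two times come out close, the work is disk-bound and more threads will not help.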
While experimenting with some of the code from the Reading Binary Data into a Mutable Buffer section of the O'Reilly website, I added a line at the end to remove the test file that was created.
However this always results in the following error:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'data'
I don't understand this behavior because the with memory_map(test_filename) as m: should implicitly close the associated file, but apparently doesn't. I can work around this by saving the file descriptor returned from os.open() and then explicitly closing it with an os.close(fd) after the block of statements in the with suite finishes.
Is this a bug or have I missed something?
Code (with the couple of commented-out lines showing my hacky workaround):
import os
import mmap
test_filename = 'data'
def memory_map(filename, access=mmap.ACCESS_WRITE):
    # global fd  # Save to allow closing.
    size = os.path.getsize(filename)
    fd = os.open(filename, os.O_RDWR)
    return mmap.mmap(fd, size, access=access)
# Create test file.
size = 1000000
with open(test_filename, 'wb') as f:
    f.seek(size - 1)
    f.write(b'\x00')

# Read and modify mmapped file in-place.
with memory_map(test_filename) as m:
    print(len(m))
    print(m[0:10])
    # Reassign a slice.
    m[0:11] = b'Hello World'
# os.close(fd) # Explicitly close the file descriptor -- WHY?
# Verify that changes were made
print('reading back')
with open(test_filename, 'rb') as f:
    print(f.read(11))
# Delete test file.
# Causes PermissionError: [WinError 32] The process cannot access the file
# because it is being used by another process: 'data'
os.remove(test_filename)
From the documentation:
close()
Closes the mmap. Subsequent calls to other methods of the object will result in a ValueError exception being raised. This will not close the open file.
The memory mapping is independent of the file handle. You can keep using the file handle as a normal file further on.
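So the descriptor returned by os.open() has to be closed separately from the mapping. One option is to close it inside memory_map() itself, right after the mapping is created; this sketch relies on CPython's mmap duplicating the underlying file handle, so the original descriptor is no longer needed once mmap.mmap() returns:
def memory_map(filename, access=mmap.ACCESS_WRITE):
    size = os.path.getsize(filename)
    fd = os.open(filename, os.O_RDWR)
    try:
        return mmap.mmap(fd, size, access=access)
    finally:
        # The mapping holds its own duplicated handle, so the original
        # descriptor can be closed here. Once the with block closes the
        # mapping itself, os.remove(test_filename) should succeed.
        os.close(fd)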
I have a set of system tests which fire up some processes, create files etc., then shut them all down and delete the files.
I am encountering two intermittent errors on the cleanup:
On a log file created by one of the processes:
os.remove(log_path)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process: <path_to_file>
When trying to delete the output directory with shutil.rmtree:
File "C:\Python27\lib\shutil.py", line 254, in rmtree
os.rmdir(path)
WindowsError: [Error 145] The directory is not empty: 'C:\\TestTarget\\xxx'
Both errors go away if I insert a 2-second delay before the cleanup, so I think the problem is the time Windows takes to release the files. Obviously I'd like to avoid putting delays in my tests; is there a way to wait until the filesystem has caught up?
I had a similar problem and searched for a proper solution for months, but found none. For me the problem only occurred while running my script on Windows with Python 2.7; on Python 3 there was usually no problem, and on GNU/Linux I could use the file operations without this dirty workaround.
I ended up routing every file operation on Windows through the function try_fail_wait_repeat (see below); you should do something similar. You can also set the sleep to a different value.
import sys
import shutil
import time
import os

IS_WINDOWS = (sys.platform == "win32")

if IS_WINDOWS:
    maximum_number_of_tries = 40

    def move_folder(src, dst):
        return try_fail_wait_repeat(maximum_number_of_tries, _move_dir, src, dst)

    def read_file(path):
        return try_fail_wait_repeat(maximum_number_of_tries, _read_file, path)
else:
    def move_folder(src, dst):
        return shutil.move(src, dst)

    def read_file(path):
        return _read_file(path)

def _move_dir(src, dst):
    # Plain move that the Windows wrappers retry (this helper was not
    # shown in the original post; a simple shutil.move is assumed here).
    return shutil.move(src, dst)

def _read_file(file_path):
    with open(file_path, "rb") as f_in:
        data = f_in.read().decode("ISO-8859-1")
    return data

def try_fail_wait_repeat(maximum_number_of_tries, func, *args):
    """A dirty solution for a dirty bug in windows python2"""
    i = 0
    while True:
        try:
            res = func(*list(args))
            return res
        except WindowsError as e:
            i += 1
            time.sleep(0.5)
            if i > maximum_number_of_tries:
                print("Too much trying to run {}({})".format(func, args))
                raise e
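For the two delete operations from your question, illustrative wrappers (not part of my original code, the names are made up) built on the same helper would be:
# Assumes the Windows branch above, where maximum_number_of_tries is defined.
def remove_file(path):
    return try_fail_wait_repeat(maximum_number_of_tries, os.remove, path)

def remove_tree(path):
    return try_fail_wait_repeat(maximum_number_of_tries, shutil.rmtree, path)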
The function you are using only deletes empty directories.
Try with:
import shutil
shutil.rmtree('/folder_path')
Also, try adding a sleep interval before you shut down the processes.
How can you create a temporary FIFO (named pipe) in Python? This should work:
import os
import tempfile

temp_file_name = tempfile.mktemp()
os.mkfifo(temp_file_name)
os.open(temp_file_name, os.O_WRONLY)
# ... some process, somewhere, will read it ...
However, I'm hesitant because of the big warning about mktemp() in Python Docs 11.6, and its potential removal since it's deprecated.
EDIT: It's noteworthy that I've tried tempfile.NamedTemporaryFile (and by extension tempfile.mkstemp), but os.mkfifo throws:
OSError -17: File already exists
when you run it on the files that mkstemp/NamedTemporaryFile have created.
os.mkfifo() will fail with exception OSError: [Errno 17] File exists if the file already exists, so there is no security issue here. The security issue with using tempfile.mktemp() is the race condition where it is possible for an attacker to create a file with the same name before you open it yourself, but since os.mkfifo() fails if the file already exists this is not a problem.
However, since mktemp() is deprecated you shouldn't use it. You can use tempfile.mkdtemp() instead:
import os, tempfile
tmpdir = tempfile.mkdtemp()
filename = os.path.join(tmpdir, 'myfifo')
print filename
try:
    os.mkfifo(filename)
except OSError, e:
    print "Failed to create FIFO: %s" % e
else:
    fifo = open(filename, 'w')
    # write stuff to fifo
    print >> fifo, "hello"
    fifo.close()
    os.remove(filename)
    os.rmdir(tmpdir)
EDIT: I should make it clear that, just because the mktemp() vulnerability is averted by this, there are still the other usual security issues that need to be considered; e.g. an attacker could create the fifo (if they had suitable permissions) before your program did which could cause your program to crash if errors/exceptions are not properly handled.
You may find it handy to use the following context manager, which creates and removes the temporary file for you:
import os
import tempfile
from contextlib import contextmanager
@contextmanager
def temp_fifo():
    """Context Manager for creating named pipes with temporary names."""
    tmpdir = tempfile.mkdtemp()
    filename = os.path.join(tmpdir, 'fifo')  # Temporary filename
    os.mkfifo(filename)  # Create FIFO
    try:
        yield filename
    finally:
        os.unlink(filename)  # Remove file
        os.rmdir(tmpdir)  # Remove directory
You can use it, for example, like this:
with temp_fifo() as fifo_file:
    # Pass the fifo_file filename e.g. to some other process to read from.
    # Write something to the pipe
    with open(fifo_file, 'w') as f:
        f.write("Hello\n")
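For completeness, the reading end could be as simple as this (a sketch; some other process has to open the FIFO for reading, otherwise the writer above blocks on open):
# Hypothetical reader, run in another process that was handed fifo_file:
with open(fifo_file, 'r') as f:
    print(f.read())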
How about using
d = tempfile.mkdtemp()
t = os.path.join(d, 'fifo')
If it's for use within your program, and not with any externals, have a look at the Queue module. As an added benefit, python queues are thread-safe.
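A minimal sketch of that in-process alternative, using the (Python 2) Queue module named above:
# Producer/consumer within one process; Queue handles the locking.
from Queue import Queue
import threading

q = Queue()

def consumer():
    while True:
        item = q.get()
        if item is None:  # sentinel: producer is done
            break
        print(item)

t = threading.Thread(target=consumer)
t.start()
for i in range(3):
    q.put("message %d" % i)
q.put(None)
t.join()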
Effectively, all that mkstemp does is run mktemp in a loop, attempting an exclusive create until it succeeds (see the stdlib source code here). You can do the same with os.mkfifo:
import os, errno, tempfile
def mkftemp(*args, **kwargs):
    for attempt in xrange(1024):
        tpath = tempfile.mktemp(*args, **kwargs)
        try:
            os.mkfifo(tpath, 0600)
        except OSError as e:
            if e.errno == errno.EEXIST:
                # let's try again
                continue
            else:
                raise
        else:
            # NOTE: we only return the path because opening with
            # os.open here would block indefinitely since there
            # isn't anyone on the other end of the fifo.
            return tpath
    else:
        raise IOError(errno.EEXIST, "No usable temporary file name found")
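Usage would then be along these lines (a sketch, Python 2 to match the snippet above; the suffix argument is only illustrative):
tpath = mkftemp(suffix='.fifo')
try:
    # hand tpath to the consumer process first, then open for writing
    # (opening a FIFO for write blocks until someone opens it for read)
    fifo = open(tpath, 'w')
    fifo.write("hello\n")
    fifo.close()
finally:
    os.remove(tpath)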
Why not just use mkstemp()?
For example:
import tempfile
import os
handle, filename = tempfile.mkstemp()
os.mkfifo(filename)
writer = open(filename, os.O_WRONLY)
reader = open(filename, os.O_RDONLY)
os.close(handle)