I want to convert every new wav file coming into an input dir to an mp3 in another dir. I've been looking at how to convert the files, but I don't know how to add a listener on the input dir, or whether that's even possible.
Edit:
Sorry, I forgot to share the code I already have. I use ffmpeg to convert the audio files:
import os, sys, glob

FFMPEG_PATH = "C:\\ffmpeg\\bin"

fileName = ""
fileExt = ""
wavdir = ""
mp3dir = ""

for file in glob.glob('wav/*.wav'):
    # get the name without the extension
    fileName = os.path.basename(file)
    fileName = fileName.split(".")[0]
    # verify that no mp3 file with the same name already exists
    if not os.path.isfile('./mp3/' + fileName + ".mp3"):
        # set the source and destination paths
        wavdir = file
        mp3dir = "mp3/" + fileName + ".mp3"
        # start the conversion with ffmpeg on the command line
        os.system("ffmpeg -i " + wavdir + " " + mp3dir)
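For the "listener" part of the question, polling (see the answers below) works, but an event-driven sketch is also possible, assuming the third-party watchdog package is installed (pip install watchdog); the wav/ and mp3/ folder names and the ffmpeg call are taken from the code above, everything else is an assumption:

import os
import subprocess
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class WavToMp3Handler(FileSystemEventHandler):
    def on_created(self, event):
        # react only to new .wav files, not directories or other types
        if event.is_directory or not event.src_path.lower().endswith(".wav"):
            return
        name = os.path.splitext(os.path.basename(event.src_path))[0]
        mp3_path = os.path.join("mp3", name + ".mp3")
        if not os.path.isfile(mp3_path):
            # pass the arguments as a list so paths with spaces survive
            subprocess.run(["ffmpeg", "-i", event.src_path, mp3_path])

observer = Observer()
observer.schedule(WavToMp3Handler(), "wav", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()

Note that on_created can fire before a large file has finished being written, so in practice you may want to wait until the file size stops changing before converting.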
I'm not sure if this is the best idea, but you can assign a process or a thread to check every X seconds whether a file was added to the directory:
import os
import time

wav_files_path = "/WAV_dir_path"
prev_files = os.listdir(wav_files_path)
x = 1  # time to sleep, in seconds

while True:
    files = os.listdir(wav_files_path)
    # if files are never deleted, a simple length check is enough to detect additions
    if len(files) > len(prev_files):
        # new file(s) added
        for f in files:
            if f not in prev_files:
                convert_file(f)  # convert_file is your own conversion routine
        prev_files = files
    time.sleep(x)
This is highly suboptimal, but it should do the job:
import os, time

SLEEPTIME = 0.5
TARGET_DIRECTORY = 'path_of_your_folder'

while True:
    time.sleep(SLEEPTIME)
    files = os.listdir(TARGET_DIRECTORY)
    for file in files:
        if file.endswith('.wav'):
            pass  # convert the file here, e.g. with ffmpeg as in the question
Make a while True loop. Inside it, make another loop with for item in os.listdir(yourdir), move (or convert) every item in there, and then call time.sleep(1) to reduce the load.
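A minimal sketch of that idea (the folder names are placeholders, and the move would be replaced by whatever conversion you need):

import os
import shutil
import time

src_dir = "input_dir"    # placeholder: folder to watch
dst_dir = "output_dir"   # placeholder: folder to move processed items into

while True:
    for item in os.listdir(src_dir):
        # move (or convert) every item found in the watched folder
        shutil.move(os.path.join(src_dir, item), os.path.join(dst_dir, item))
    time.sleep(1)  # avoid busy-waiting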
Consider I have 5 files in 5 different locations.
Example: fileA in location XYZ
fileB in location ZXC
fileC in location XBN, and so on
I want to check whether these files are actually saved in those locations; if they are not, re-run the code above that saves the files.
Ex:
if fileA, fileB, and so on are present in their particular locations, then proceed further with the code;
else:
re-run the file-saving code above.
How do I do this in Python? I am not able to figure it out.
You can store all your files with their locations in a list and then check each location for existence; after that you can decide what to do next.
A Python example:
from os.path import exists

# all files to check in different locations
locations = [
    '/some/location/xyz/fileA',
    '/other/location/fileB',
    '/yet/another/location/fileC',
]

# check each file for existence
status = [exists(location) for location in locations]

# check the status of all files:
# if any of the files doesn't exist, the else branch runs
if all(status):
    print('All files are present.')
else:
    print('One or more files do not exist.')
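To match the "re-run the saving code" part of the question, the check can be wrapped in a loop; save_files() below is a hypothetical placeholder for your own saving code:

import time
from os.path import exists

def save_files():
    # placeholder: put your file-saving code here
    pass

locations = [
    '/some/location/xyz/fileA',
    '/other/location/fileB',
    '/yet/another/location/fileC',
]

while not all(exists(location) for location in locations):
    save_files()   # re-run the saving code for the missing files
    time.sleep(1)  # small pause before checking again
print('All files are present.')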
I'm not a Python dev, and I just wanted to try to contribute to the community.
The first answer is way better than mine, but I'd like to share my solution to the question.
You could use sys to pass the file names, inside a try block to handle the case where the files are not found.
If you run the script from one location while the files are in another, you would need to provide their paths:
check.py ../test1.txt ../test2.txt ../test3.txt
#!/usr/bin/python3
import os.path
import sys

try:
    # one path per command-line argument; raises IndexError if an argument is missing
    for fpath in (sys.argv[1], sys.argv[2], sys.argv[3]):
        if os.path.isfile(fpath):
            print(fpath + " exists on system")
except IndexError:
    # not all paths were given: (re)create the files with some content
    file1 = "test1.txt"
    file2 = "test2.txt"
    file3 = "test3.txt"
    sys.stdout = open(file1, "w")
    print("Saving content to files")
    sys.stdout = open(file2, "w")
    print("Saving content to files")
    sys.stdout = open(file3, "w")
    print("Saving content to files")
    sys.stdout = sys.__stdout__  # restore normal output
The exception part would then "save" the files, by creating new ones, writing whatever content you desire.
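If you prefer not to touch sys.stdout at all, a small sketch of the same idea writes each file directly (same names and content as above):

for name in ("test1.txt", "test2.txt", "test3.txt"):
    with open(name, "w") as f:
        f.write("Saving content to files\n")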
I am trying to create a program that lists all the files in a directory in real time. If a file is deleted, its name is removed from the txt file. If a file is added, it is added to the txt file.
So far I have only managed to create a program that lists and exports the contents once. And as I am using a while(1) loop, it never stops writing to the file. I also need it to ignore duplicated names.
Can you help me with it? My code is as follows:
import os

Path = 'Mypath'
arr = os.listdir(Path)
print(arr)

file1 = open("File.txt", "a")
while (1):
    for file in arr:
        # if file not in file1:
        file1.writelines(file + "\n")
Simple solution using polling.
If something changed, replace the whole file.
import os
import time

path = 'mypath/'
cur_list = None

while True:
    new_list = os.listdir(path)
    if new_list != cur_list:
        cur_list = new_list
        with open("File.txt", "w") as f:
            f.write('\n'.join(cur_list))
    time.sleep(5)
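If you also want to know which files were added or removed (not just that something changed), comparing sets is a small variation on the same polling idea:

import os
import time

path = 'mypath/'
known = set(os.listdir(path))

while True:
    current = set(os.listdir(path))
    added = current - known      # files that appeared since the last check
    removed = known - current    # files that disappeared since the last check
    if added or removed:
        known = current
        with open("File.txt", "w") as f:
            f.write('\n'.join(sorted(known)))
    time.sleep(5)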
My code needs to look for 3 files with specific names in a directory. For example: 123_xy_report.xlsm, 789_ab_file.xlsm, 222_ftp_review.xlsm.
Only when all these files have arrived should it continue; otherwise it should keep looking for the files until they arrive.
I have tried glob and fnmatch, but I was unable to get them to look for these specific files.
while True:
    os.chdir(dir)
    filelist = ['08282019_xy_report.xlsm', '08282019_ab_file.xlsm', '08282019_ftp_review.xlsm']
    if all([os.path.isfile(f) for f in filelist]):
        print('All files exist')
        break
    else:
        print('waiting for files to arrive')
        time.sleep(60)
The files will have the datetime as a prefix, so the files come with a new date every day.
So the file names change with the date, but the files arrive at random times. Then you need to build a list of names formatted with the current day's date. This works. I am sure it can be made shorter, but given the filename requirements this is the best I came up with. Hope it helps!
import os
from datetime import date
import time

dir = ""
names = ["_xy_report.xlsm", "_ab_file.xlsm", "_ftp_review.xlsm"]

while True:
    # change to the needed directory
    os.chdir(dir)
    # today's date with the dashes removed
    days_date = str(date.today()).replace('-', '')
    # rebuild the list each pass so it doesn't grow and always uses today's date
    formatted_list = [days_date + n for n in names]
    # check whether all files have arrived yet
    if all([os.path.isfile(f) for f in formatted_list]):
        print("All files exist")
        break
    else:
        print(f"Waiting for {formatted_list} to arrive")
        time.sleep(60)
Almost there...
If you are looking for all of those files and then, and only then, start doing work, then it doesn't matter in which order you wait for them.
I suggest this simple solution:
import os.path
import time

filelist = ['1.txt', '2.txt', '3.txt']

for f in filelist:
    print('waiting for ' + f)
    while not os.path.isfile(f):
        time.sleep(2)
    print(f + ' found')
print('all files found')
This script needs to be in the same folder you are searching, OR you can prepend the absolute path to the file names, OR chdir to that directory.
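If the files live somewhere else, the same loop works with full paths; the directory below is a placeholder:

import os.path
import time

directory = '/path/to/watched/folder'  # placeholder
filelist = ['1.txt', '2.txt', '3.txt']

for f in filelist:
    full_path = os.path.join(directory, f)
    while not os.path.isfile(full_path):
        time.sleep(2)
    print(f + ' found')
print('all files found')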
I have more than 10000 JSON files which I have to convert to CSV for further processing. I am using the following code:
import json
import time
import os
import csv
import fnmatch

tweets = []
count = 0

search_folder = ('/Volumes/Transcend/Axiom/IPL/test/')
for root, dirs, files in os.walk(search_folder):
    for file in files:
        pathname = os.path.join(root, file)
for file in open(pathname):
    try:
        tweets.append(json.loads(file))
    except:
        pass
count = count + 1
This iterates over just one file and stops. I tried adding while True: before for file in open(pathname): and it just doesn't stop, nor does it create the CSV files. I want to read one file at a time, convert it to CSV, then move on to the next file. I tried adding count = count + 1 at the end, after finishing the CSV conversion. It still stops after converting the first file. Can someone help please?
Your indentation is off; you need to put the second for loop inside the first one.
Separate from your main problem, you should use a with statement to open the file. Also, you were reusing the variable name file, which you shouldn't be using anyway since it's the name of a built-in. I also made a few other minor edits.
import json
import os

tweets = []
count = 0

search_folder = '/Volumes/Transcend/Axiom/IPL/test/'
for root, dirs, filenames in os.walk(search_folder):
    for filename in filenames:
        pathname = os.path.join(root, filename)
        with open(pathname, 'r') as infile:
            for line in infile:
                try:
                    tweets.append(json.loads(line))
                except:  # Don't use bare except clauses
                    pass
        count += 1
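Since the end goal is CSV output, here is a hedged sketch of writing the collected tweets out afterwards; it assumes each parsed tweet is a dict, and the field names are only examples of what your tweet JSON might contain:

import csv

fieldnames = ['id', 'created_at', 'text']  # hypothetical keys

with open('tweets.csv', 'w', newline='') as outfile:
    writer = csv.DictWriter(outfile, fieldnames=fieldnames)
    writer.writeheader()
    for tweet in tweets:
        # missing keys are written as empty strings
        writer.writerow({key: tweet.get(key, '') for key in fieldnames})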
I'm currently looking to create a directory on Linux using Python v2.7 with the directory name as the date and time (i.e. 27-10-2011 23:00:01). My code for this is below:
import time
import os
dirfmt = "/root/%4d-%02d-%02d %02d:%02d:%02d"
dirname = dirfmt % time.localtime()[0:6]
os.mkdir(dirname)
This code works fine and generates the directory as requested. Nonetheless, what I'd also like to do then is, within this directory, create two csv files and a log file with the same name. Now, as the directory name is dynamically generated, I'm unsure how to move into this directory to create these files. I'd like the directory and the three files to all have the same name (the csv files will be prefixed with a letter). So for example, given the above, I'd like a directory created called "27-10-2011 23:00:01" and then, within this, two csv files called "a27-10-2011 23:00:01.csv" and "b27-10-2011 23:00:01.csv" and a log file called "27-10-2011 23:00:01.log".
My code for the file creations is as below:-
csvafmt = "a%4d-%02d-%02d %02d:%02d:%02d.csv"
csvbfmt = "b%4d-%02d-%02d %02d:%02d:%02d.csv"
logfmt = "%4d-%02d-%02d %02d:%02d:%02d.log"
csvafile = csvafmt % time.localtime()[0:6]
csvbfile = csvbfmt % time.localtime()[0:6]
logfile = logfmt % time.localtime()[0:6]
fcsva = open(csvafile, 'wb')
fcsvb = open(csvbfile, 'wb')
flog = open(logfile, 'wb')
Any suggestions on how I can do this so that the second remains the same throughout? I appreciate that this code would only take a split second to run, but within that time the second may change. I assume the key to this lies in altering time.localtime, but I remain unsure.
Thanks
Sure, just save the time in a variable and then use that variable for the substitutions:
now = time.localtime()[0:6]
dirname = dirfmt % now
csvafile = os.path.join(dirname, csvafmt % now)
csvbfile = os.path.join(dirname, csvbfmt % now)
logfile = os.path.join(dirname, logfmt % now)
Edited to include creating the complete path to your csv and log files.
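Putting the question's format strings together with the single time.localtime() call, a sketch of the full flow (kept in the question's Python 2 style) might look like this:

import os
import time

dirfmt = "/root/%4d-%02d-%02d %02d:%02d:%02d"
csvafmt = "a%4d-%02d-%02d %02d:%02d:%02d.csv"
csvbfmt = "b%4d-%02d-%02d %02d:%02d:%02d.csv"
logfmt = "%4d-%02d-%02d %02d:%02d:%02d.log"

now = time.localtime()[0:6]  # captured once, so the second cannot change mid-run

dirname = dirfmt % now
os.mkdir(dirname)

fcsva = open(os.path.join(dirname, csvafmt % now), 'wb')
fcsvb = open(os.path.join(dirname, csvbfmt % now), 'wb')
flog = open(os.path.join(dirname, logfmt % now), 'wb')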
Only call time.localtime once.
current_time = time.localtime()[0:6]
csvafile = csvafmt % current_time
csvbfile = csvbfmt % current_time
logfile = logfmt % current_time