I'm writing a Python 3 script and I need to delete all .xml files in a folder.
It's a huge folder with many nested folders inside it.
(In dossier1 I have 2 folders and 2 .xml files; inside each of those 2 folders there are again 2 folders and 2 .xml files, and so on.)
Is it possible to say: "in this folder, find all the .xml files and delete them"?
I tried this:
filelist = os.listdir(path)
for fichier in filelist[:]:  # filelist[:] makes a copy of filelist.
    if not fichier.endswith(".xml"):
        filelist.remove(fichier)
but Python doesn't go into the different subfolders.
Thanks for your help :)
You can use os.walk. This answer gives good info about it.
You can do something like this:
import os

for path, directories, files in os.walk('./'):  # change './' to your directory path
    for file in files:
        fname = os.path.join(path, file)
        if fname.endswith('.xml'):  # if the file ends with .xml
            print(fname)
            os.remove(os.path.abspath(fname))  # use the absolute path to remove the file
You can use pathlib.Path.glob:
>>> import pathlib
>>> list(pathlib.Path('.').glob("**/*.xml"))
[PosixPath('b.xml'), PosixPath('a.xml'), PosixPath('test/y.xml'), PosixPath('test/x.xml')]
>>> _[0].unlink() # delete some file
>>> for file in pathlib.Path('.').glob("**/*.xml"):
... print(file)
... file.unlink() # delete everything
...
a.xml
test/y.xml
test/x.xml
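As a minor variation, Path.rglob is shorthand for the same recursive "**/" pattern, so the deletion loop can also be written like this:
import pathlib

# rglob("*.xml") is equivalent to glob("**/*.xml")
for file in pathlib.Path('.').rglob("*.xml"):
    file.unlink()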
I have a folder structure where I have a few txt files in a few folders, and I have to copy all the folders where I found txt files to another folder. My root folder is always different, so to find it I use os.path.dirname(os.path.abspath(__file__)) and search all the folders from there. Unfortunately I'm stuck here, because I've never used any file handling in Python before.
From your question I'm assuming you have certain folders at a specific location, that those folders contain specific txt files, and that you want to move every folder containing text files to a specific location.
You can do this: walk through all the folders under the master folder, then walk through the files in each sub-folder, and if any file has a .txt extension, move that folder to the target location:
import os
import shutil

for dirpath, dirs, files in os.walk('path_to_master_folder'):
    for filename in files:
        if filename.endswith('.txt'):
            shutil.move(dirpath, 'path_to_destination_folder')
            break  # this folder has been moved; stop checking its remaining files
I want to move .csv files after reading them.
The code I've come up with is meant to find any .csv files in a source folder and move them to an archive folder.
src1 = "\\xxx\xxx\Source Folder"
dst1 = "\\xxx\xxx\Destination Folder"
for root, dirs, files in os.walk(src1):
for f in files:
if f.endswith('.csv'):
shutil.move(os.path.join(root,f), dst1)
Note: I imported shutil at the beginning of my code.
Note 2: The destination archive folder is within the source folder - will this have implications for the above code?
When I run this, nothing happens. I get no error messages and the file remains in the source folder.
Any insight is appreciated.
Edit (some context on my goal):
My overall code will be used to read .csv files that are moved manually into a source folder by users - I then want to archive these .csv files using Python once the data has been used. Every .csv file placed into the source folder by the users will have a different name - no .csv file name will be the same, which is why I want to search the source folder for .csv files and move them all.
You can use the pathlib module. I'm assuming you have the same folder structure in the destination directory.
from pathlib import Path

src1 = "<Path to source folder>"
dst1 = "<Path to destination folder>"

for csv_file in Path(src1).glob('**/*.csv'):
    relative_file_path = csv_file.relative_to(src1)
    destination_path = dst1 / relative_file_path
    csv_file.rename(destination_path)
Explanation:
for csv_file in Path(src1).glob('**/*.csv'):
glob (which returns a generator) captures all the CSV files in the directory as well as in its subdirectories, so we can iterate over the files one by one.
relative_file_path = csv_file.relative_to(src1)
Each csv_file is a pathlib Path object, so we can use the methods the library provides. One such method is relative_to, which gives the path of the file relative to the src folder. Say you have a CSV file like
src_folder/A/B/c.csv - it'll give A/B/c.csv
destination_path = dst1 / relative_file_path
As the folder structure is the same, the destination path now becomes
dst_folder/A/B/c.csv
csv_file.rename(destination_path)
At last, rename will just move the file from src to destination.
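If the destination folders don't already exist yet, a small variant of the same loop (a sketch, reusing src1 and dst1 from above) can create them before the rename:
from pathlib import Path

for csv_file in Path(src1).glob('**/*.csv'):
    destination_path = Path(dst1) / csv_file.relative_to(src1)
    # Create any missing folders in the destination tree first.
    destination_path.parent.mkdir(parents=True, exist_ok=True)
    csv_file.rename(destination_path)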
After a bunch of research I have found a solution:
import os
import shutil

source = r"\\xx\Source"
destination = r"\\xx\Destination"

files = os.listdir(source)
for file in files:
    new_path = shutil.move(f"{source}/{file}", destination)
    print(new_path)
I was making it more complicated than it needed to be - because all files in the folder would be .csv anyway, I just needed to move all files. Thanks Stack Overflow.
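If the source folder ever contains subfolders or other file types, a hedged variant of the same loop only moves the .csv files:
import os
import shutil

source = r"\\xx\Source"
destination = r"\\xx\Destination"

for file in os.listdir(source):
    full_path = os.path.join(source, file)
    # Skip subfolders (such as the archive folder itself) and non-csv files.
    if os.path.isfile(full_path) and file.endswith(".csv"):
        shutil.move(full_path, destination)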
Say I have a directory.
In this directory there are single files as well as folders.
Some of those folders could also have subfolders, etc.
What I am trying to do is find all of the files in this directory that start with "Incidences" and read each csv into a pandas data frame.
I am able to loop through all the files and get their names, but I cannot read them into data frames.
I am getting an error that "___.csv" does not exist, since the file might not be directly in the directory but rather in a folder inside another folder in that directory.
I have been trying the attached code.
inc_files2 = []
pop_files2 = []

for root, dirs, files in os.walk(directory):
    for f in files:
        if f.startswith('Incidence'):
            inc_files2.append(f)
        elif f.startswith('Population Count'):
            pop_files2.append(f)

for file in inc_files2:
    inc_frames2 = map(pd.read_csv, inc_files2)
for file in pop_files2:
    pop_frames2 = map(pd.read_csv, pop_files2)
You are adding only the file names to the lists, not their paths. You can use something like this to add the paths instead:
inc_files2.append(os.path.join(root, f))
You have to add the path from the root directory where you are
Append the entire pathname, not just the bare filename, to inc_files2.
You can use os.path.abspath(os.path.join(root, f)) to get the full path of a file; note that os.path.abspath(f) on its own would resolve the bare name against the current working directory, not against root.
You can make use of this by making the following changes to your code:
for root, dirs, files in os.walk(directory):
    for f in files:
        f_abs = os.path.abspath(os.path.join(root, f))  # full path of the file
        if f.startswith('Incidence'):
            inc_files2.append(f_abs)
        elif f.startswith('Population Count'):
            pop_files2.append(f_abs)
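With full paths in the lists, the read step from the question then works; a minimal sketch using list comprehensions in place of the extra for loops:
import pandas as pd

inc_frames2 = [pd.read_csv(path) for path in inc_files2]
pop_frames2 = [pd.read_csv(path) for path in pop_files2]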
I have the following directory layout: in the parent dir there are several folders, let's say A, B, C and D, and within each folder there are many zips whose names include the letter of the parent folder along with other info:
parent/
    A/
        xxxAxxxx_timestamp.zip
        xxxAxxxx_timestamp.zip
        xxxAxxxx_timestamp.zip
    B/
        xxxBxxxx_timestamp.zip
        xxxBxxxx_timestamp.zip
        xxxBxxxx_timestamp.zip
    C/
        xxxCxxxx_timestamp.zip
        xxxCxxxx_timestamp.zip
        xxxCxxxx_timestamp.zip
    D/
        xxxDxxxx_timestamp.zip
        xxxDxxxx_timestamp.zip
        xxxDxxxx_timestamp.zip
I need to unzip only selected zips in this tree and place them in the same directory with the same name without the .zip extension.
Output:
parent/
    A/
        xxxAxxxx_timestamp
        xxxAxxxx_timestamp
        xxxAxxxx_timestamp
    B/
        xxxBxxxx_timestamp
        xxxBxxxx_timestamp
        xxxBxxxx_timestamp
    C/
        xxxCxxxx_timestamp
        xxxCxxxx_timestamp
        xxxCxxxx_timestamp
    D/
        xxxDxxxx_timestamp
        xxxDxxxx_timestamp
        xxxDxxxx_timestamp
My effort:
for path in glob.glob('./*/xxx*xxxx*'):  ## walk the dir tree and find the files of interest
    zipfile = os.path.basename(path)  # save the zipfile path
    zip_ref = zipfile.ZipFile(path, 'r')
    zip_ref = extractall(zipfile.replace(r'.zip', ''))  # unzip to a folder without the .zip extension
The problem is that I don't know how to save the A, B, C, D, etc. to include them in the path where the files will be unzipped; as a result, the unzipped folders are created in the parent directory. Any ideas?
The code that you have is almost working; you just need to make sure that you are not overriding variable names (your zipfile variable shadows the zipfile module) and that you call extractall on the ZipFile object. The following code works for me; because path still contains the subfolder (e.g. ./A/...), extractall creates the output next to each zip:
import os
import zipfile
import glob

for path in glob.glob('./*/xxx*xxxx*'):  ## walk the dir tree and find the files of interest
    zf = os.path.basename(path)  # save the zip file name
    zip_ref = zipfile.ZipFile(path, 'r')
    zip_ref.extractall(path.replace(r'.zip', ''))  # unzip to a folder without the .zip extension
Instead of trying to do it in a single glob pattern, it would be much easier and more readable to first get the list of folders and then get the list of zip files inside each folder. For example:
import glob
import os.path

for folder in glob.glob("./*"):
    folder_name = os.path.basename(folder)
    # Using *.zip to only get zip files
    for path in glob.glob(os.path.join(".", folder, "*.zip")):
        filename = os.path.split(path)[1]
        if folder_name in filename:  # compare against the folder name, not the "./X" path
            pass  # Do your logic
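For example, the "Do your logic" placeholder could hold the same extractall call as in the previous answer; a self-contained sketch under that assumption:
import glob
import os.path
import zipfile

for folder in glob.glob("./*"):
    folder_name = os.path.basename(folder)
    for path in glob.glob(os.path.join(".", folder, "*.zip")):
        if folder_name in os.path.split(path)[1]:
            with zipfile.ZipFile(path, 'r') as zip_ref:
                # Unpack next to the archive, into a folder named without the .zip extension.
                zip_ref.extractall(path.replace('.zip', ''))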
I'm uploading a zipped folder that contains a folder of text files, but it's not detecting that the zipped-up folder is a directory. I think it might have something to do with requiring an absolute path in the os.path.isdir call, but I can't seem to figure out how to implement that.
zipped = zipfile.ZipFile(request.FILES['content'])
for libitem in zipped.namelist():
    if libitem.startswith('__MACOSX/'):
        continue
    # If it's a directory, open it
    if os.path.isdir(libitem):
        print "You have hit a directory in the zip folder -- we must open it before continuing"
        for item in os.listdir(libitem):
The file you've uploaded is a single zip file, which is simply a container for other files and directories. All of the Python os.path functions operate on files on your local file system, which means you must first extract the contents of your zip before you can use os.path or os.listdir.
Note that os.path.isdir won't work on a namelist() entry, because that entry doesn't exist on your file system; within the archive itself, directory entries are the ones whose names end with '/'.
A rewrite of your code that does the extraction first may look something like this:
import os
import shutil
import tempfile
import zipfile

# Create a temporary directory into which we can extract zip contents.
tmpdir = tempfile.mkdtemp()
try:
    zipped = zipfile.ZipFile(request.FILES['content'])
    zipped.extractall(tmpdir)

    # Walk through the extracted directory structure doing what you
    # want with each file.
    for (dirpath, dirnames, filenames) in os.walk(tmpdir):
        # Look into subdirectories?
        for dirname in dirnames:
            full_dir_path = os.path.join(dirpath, dirname)
            # Do stuff in this directory
        for filename in filenames:
            full_file_path = os.path.join(dirpath, filename)
            # Do stuff with this file.
finally:
    # Clean up the temporary directory recursively.
    shutil.rmtree(tmpdir)
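On Python 3, a slightly shorter variant of the same idea uses tempfile.TemporaryDirectory as a context manager, so the cleanup happens automatically (a sketch, keeping the request.FILES['content'] upload object from the question):
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmpdir:
    zipfile.ZipFile(request.FILES['content']).extractall(tmpdir)
    for dirpath, dirnames, filenames in os.walk(tmpdir):
        for filename in filenames:
            full_file_path = os.path.join(dirpath, filename)
            # Do stuff with this file.
# The temporary directory and everything in it is removed when the with block exits.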
Usually, to make scripts handle relative paths properly, you'd want to use os.path.
It seems to me that you're reading the item names from a ZipFile but you've not actually unzipped it, so why would you expect the files/dirs to exist?
Usually I'd print os.getcwd() to find out where I am, and also use os.path.join to join with the root of the data directory (whether that is the same as the directory containing the script I can't tell), using something like scriptdir = os.path.dirname(os.path.abspath(__file__)).
I'd expect you would have to do something like
libitempath = os.path.join(scriptdir, libitem)
if os.path.isdir(libitempath):
    ....
But I'm guessing at what you're doing, as it's a little unclear to me.