Python changing folder permissions [duplicate]

This question already has answers here:
Python changing file permissions when not wanted
(2 answers)
Closed 9 years ago.
For some reason this Python script no longer works. The script changes the folder permissions to read-only after it has been run. It runs once and deletes all the files in the folder, but when it runs again it gets a Windows error 5, Access denied, because the script has changed the permissions on the folder to read-only. I can't see why it does this or how to avoid it.
The thing is, I didn't write this script and know nothing about Python. How would you change it to avoid this issue? Please could you give an example with the code in the script, as I wouldn't know where to place it. Thanks for the help!
import os
import shutil

for root, dirs, files in os.walk(eg.globals.tvzip):
    for f in files:
        os.remove(os.path.join(root, f))
    for d in dirs:
        shutil.rmtree(os.path.join(root, d))

for root, dirs, files in os.walk(eg.globals.tvproc):
    for f in files:
        os.remove(os.path.join(root, f))
    for d in dirs:
        shutil.rmtree(os.path.join(root, d))

I don't care whether you wrote this code or not, it makes no sense, and trying to make it work without fixing it is a silly idea.
First, if you want to remove an entire directory tree, don't try to walk the tree and remove each subtree before walking it. Just remove the whole tree:
shutil.rmtree(eg.globals.tvzip)
shutil.rmtree(eg.globals.tvproc)
If you want to remove all of the contents of the tree, but not the root itself, don't use os.walk, just os.listdir:
for p in os.listdir(eg.globals.tvzip):
    shutil.rmtree(os.path.join(eg.globals.tvzip, p))
for p in os.listdir(eg.globals.tvproc):
    shutil.rmtree(os.path.join(eg.globals.tvproc, p))
That will remove any errors caused by your code stepping on its own toes, trying to keep a directory open for its walk and trying to delete it at the same time.
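One hedged refinement of that idea: os.listdir returns plain files as well as subdirectories, and shutil.rmtree only removes directories, so a clear-out that keeps the top folder but handles both could look like this sketch (the helper name clear_dir is just for illustration):
import os
import shutil

def clear_dir(top):
    # Remove everything inside `top`, but keep `top` itself.
    for name in os.listdir(top):
        p = os.path.join(top, name)
        if os.path.isdir(p):
            shutil.rmtree(p)  # a subdirectory: remove the whole subtree
        else:
            os.remove(p)      # a plain file
Called as clear_dir(eg.globals.tvzip) and clear_dir(eg.globals.tvproc), it empties both folders without touching the folders themselves.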
If you still get errors, it could be because some of the files are read-only, but it could just as easily be because some other program has them open. The only way you will be able to debug that is to know which files, so you can examine them.
The exceptions that you get should include the full pathname to the file that failed in their output—in fact, you showed one in one of your other questions:
WindowsError: [Error 5] Access is denied: 'C:\\zDump\\TVzip\\Elem.avi'
So, how do you know what the problem is?
You can open C:\zDump\TVzip in Explorer and look at Elem.avi and see if it's read-only. Or you can use the DOS prompt, if you know how to do that.
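If the read-only attribute does turn out to be the culprit, you can also check and clear it from Python; a minimal sketch using the standard os and stat modules, with the path taken from the traceback above:
import os
import stat

p = r'C:\zDump\TVzip\Elem.avi'
mode = os.stat(p).st_mode
print("read-only?", not (mode & stat.S_IWRITE))

# Clear the read-only flag so the file can be removed afterwards.
os.chmod(p, mode | stat.S_IWRITE)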
To determine whether it's being kept open by another program, you need a third-party tool. The GUI tool Process Explorer and the command-line tool Handle, both from Sysinternals and published by Microsoft, are probably the simplest.

If you want to delete a whole tree of files, shutil.rmtree will do it for you - you don't need to walk the list of files deleting them.
If you're trying to not delete the top level directory, you should add a check for that. According to the docs, you will be given the top-level directory:
os.walk(top, topdown=True, onerror=None, followlinks=False)
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (dirpath, dirnames, filenames).
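A minimal sketch of that check, assuming the eg.globals.* paths from the question: take only the first result that os.walk yields, so the top-level directory itself is never removed.
import os
import shutil

top = eg.globals.tvzip  # or eg.globals.tvproc, from the question

# os.walk yields the top directory first; its files are removed one by one
# and each subdirectory is removed as a whole subtree.
root, dirs, files = next(os.walk(top))
for f in files:
    os.remove(os.path.join(root, f))
for d in dirs:
    shutil.rmtree(os.path.join(root, d))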
Could something other than your script be setting these folders read-only? Perhaps you're deleting them and then getting Access Denied because they don't exist, or something else is re-creating them that way?

Related

Python on Linux: Iterating through a list with If Elif Elif Elif when the list is unknown

I am trying to make a Python script that automatically moves files from my internal drive to any USB drive that is plugged in. However, this destination path is unpredictable because I am not using the same USB drives every time. With the full version of Raspbian Buster, the best I can do so far is automount into /media/pi/xxxxx, where that xxxxx part is unpredictable. I am trying to make my script account for that. I can get the drive mounting points with
drives = os.listdir("/media/pi/")
but I am worried some will be invalid because of not being unmounted before they're yanked out (I need to run this without a monitor, keyboard, VNC, or any user input other than replacing USB drives). So I'd think I need to do a series of try/except statements, perhaps in an if/elif/elif/elif chain, to make sure that the destination is truly valid, but I don't know how to do that without hardcoding the names in. The only way I know how to iterate through a set of names I don't know is
for candidate_drive in drives:
but I don't know how to make it go onto the next candidate drive only if the current one is throwing an exception.
System: Raspberry Pi 4, Raspbian buster full, Python 3.7.
Side note: I am also trying this on Buster lite w/ usbmount, which does have predictable mounting names, but I can't get exfat and ntfs to mount and that is question for another post.
Update: I was thinking about this more and maybe I need a try/except/else statement, where the except is pass and the else is break? I am trying it out.
Update2: I rethought my problem and maybe instead of looking for the exception to determine when to try the next possible drive, perhaps I could instead look for a successful transfer and break the loop if so.
import os
import shutil

files_to_send = os.listdir("/home/outgoing_files/")
source_path = "/home/outgoing_files/"
possible_USB_drives = os.listdir("/media/")

for a_possible_drive in possible_USB_drives:
    try:
        destpath = os.path.join("/media/", a_possible_drive)
        for a_file in files_to_send:
            shutil.copy(source_path + a_file, destpath)
    except:
        pass  # transfer to possible drive didn't work
    else:
        break  # Stops this entire program bc the transfer worked!
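A hedged variation on that idea, assuming the drives really are mounted under /media/pi/ as described in the question: os.path.ismount can skip stale directories left behind by yanked drives before any copy is attempted.
import os
import shutil

source_path = "/home/outgoing_files/"
files_to_send = os.listdir(source_path)

media_root = "/media/pi/"  # mount root from the question
for name in os.listdir(media_root):
    dest = os.path.join(media_root, name)
    if not os.path.ismount(dest):  # leftover folder, nothing actually mounted there
        continue
    try:
        for a_file in files_to_send:
            shutil.copy(os.path.join(source_path, a_file), dest)
    except OSError:
        continue  # this drive failed, try the next candidate
    break  # transfer worked, stop looking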
If you have an unknown number of directories nested inside a directory, and so on, you cannot use a fixed number of nested for loops. You need to implement a recursive function. If you have directories inside a directory and want to visit all of them, you can do it by iterating over the list of directories with the same function that iterates over them, over and over, until it finds the file.
Let's see an example; you have a path structure like this:
path
    dir0
        dir2
            file3
    dir1
        file2
    file0
    file1
You have no way to know how many for loops are required to iterate over all the elements in this structure. You can run one iteration (a for loop) over all the elements of a single directory, and do the same for the elements inside those elements. In this structure, dirN (where N is a number) means a directory and fileN means a file.
You can use os.listdir() function to get the contents of a directory:
print(os.listdir("path/"))
returns:
['dir0', 'dir1', 'file0.txt', 'file1.txt']
Using this function on all the directories, you can get all the elements of the structure. You only need to iterate over the directories. Specifically directories, because if you use a file:
print(os.listdir("path/file0.txt"))
you get an error:
NotADirectoryError: [WinError 267]
But remember that Python has list comprehensions.
String work
If you have a main path, you need to access a directory inside it with a full string reference: "path/dirN". You cannot directly access a directory that is not in the working directory of the .py script:
print(os.listdir("dir0/"))
gets an error
FileNotFoundError: [WinError 3]
So you always need to join the initial main path with the current path; this way you can access all the elements of the structure.
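A minimal sketch of that joining step, assuming the same example layout as above; os.path.join is usually safer than formatting the strings by hand:
import os

mainpath = "path/"
for name in os.listdir(mainpath):
    full = os.path.join(mainpath, name)  # e.g. "path/dir0"
    kind = "directory" if os.path.isdir(full) else "file"
    print(full, "->", kind)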
Filtering
I said that you could use a list comprehension to get just the directories of the structure, recursively. Let's take a look at a function:
import os

def searchfile(paths: list, search: str):
    for path in paths:
        print("Actual path: {}".format(path))
        contents = os.listdir(path)
        print("Contents: {}".format(contents))
        dirs = ["{}{}/".format(path, x) for x in contents if os.path.isdir("{}{}/".format(path, x))]
        print("Directories: {} \n".format(dirs))
        searchfile(dirs, search)
In this function we get the contents of the current path with os.listdir() and then filter them with a list comprehension. We then make the recursive call with the directories of the current path: searchfile(dirs, search).
This algorithm can be applied to any file structure, because the paths argument is a list. So you can iterate over directories that contain directories, which in turn contain more directories, and so on.
If you want to find a specific file, you can use the second argument, search. You can also add a conditional inside the loop to get the specific path of the found file:
if search in contents:
    print("File found! \n")
    print("{}".format(os.path.abspath(os.path.join(path, search))))
    sys.exit()  # requires `import sys` at the top of the file
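For completeness, a hedged usage sketch, assuming the example layout above and that searchfile() is defined as shown, with the conditional placed inside its loop:
import os
import sys

# searchfile() defined as above

if __name__ == "__main__":
    # Build the example tree from the answer, then search it.
    for d in ("path/dir0/dir2", "path/dir1"):
        os.makedirs(d, exist_ok=True)
    for f in ("path/file0.txt", "path/file1.txt", "path/dir1/file2", "path/dir0/dir2/file3"):
        open(f, "w").close()

    # Paths are passed as a list, so several roots could be searched at once.
    searchfile(["path/"], "file3")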
I hope this helps.

Lost files while tried to move files using python shutil.move

I had 120 files in my source folder which I need to move to a new directory (destination). The destination is made in the function I wrote, based on the string in the filename. For example, here is the function I used.
import os
import fnmatch
import shutil
import pandas as pd

path = '/path/to/source'
dropbox = '/path/to/dropbox'
files = [os.path.join(path, i).split('/')[-1] for i in os.listdir(path) if i.startswith("SSE")]

sam_lis = list()
for sam in files:
    sam_list = sam.split('_')[5]
    sam_lis.append(sam_list)
sam_lis = pd.unique(sam_lis).tolist()

# Using the above list
ID = sam_lis

def filemover(ID, files, dropbox):
    """
    Function to move files from the common place to the destination folder
    """
    for samples in ID:
        for fs in files:
            if samples in fs:
                desination = dropbox + "/" + samples + "/raw/"
                if not os.path.isdir(desination):
                    os.makedirs(desination)
                for rawfiles in fnmatch.filter(files, pat="*"):
                    if samples in rawfiles:
                        shutil.move(os.path.join(path, rawfiles),
                                    os.path.join(desination, rawfiles))
In the function, I am creating the destination folders based on the IDs derived from the files list. When I tried to run this for the first time, it threw a FILE NOT exists error.
However, later when I checked the source, all files starting with SSE were missing. In the beginning, the files were there. I want some insights here:
Does shutil.move move the files to somewhere like a temp folder instead of the destination folder?
Does shutil.move delete the files from the source under any circumstance?
Is there any way I can test my script to find the potential reasons for the missing files?
Any help or suggestions are much appreciated.
It is late, but people don't seem to understand the OP's question. If you move a file to a destination folder that does not exist, the file does not land in a folder at all; it is renamed to that path (it looks like an odd binary file with the folder's name), and when several files are moved there, each one overwrites the previous, so they are lost. It has happened to me twice, once in git bash and the other time using shutil.move in Python. In Python it happens when your shutil.move destination points to a folder instead of to the full file path.
For example, if you run the code below, a similar situation to what the op described will happen:
import glob
import os
import shutil

src_folder = r'C:/Users/name'
dst_folder = r'C:/Users/name/data_images'
file_names = glob.glob(r'C:/Users/name/*.jpg')

for file in file_names:
    file_name = os.path.basename(file)
    shutil.move(os.path.join(src_folder, file_name), dst_folder)
Note that dst_folder in the shutil.move call is just a folder path; it should be the full file path, e.g. dst_folder + file_name (or better, os.path.join(dst_folder, file_name)). Passing just the folder is what causes the situation the OP described in his question. I found something similar at the link here, with a more detailed explanation of what went wrong: File moving mistake with Python
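A hedged sketch of the corrected version, under the same paths as the example above: create the destination folder first and always pass a full destination file path.
import glob
import os
import shutil

src_folder = r'C:/Users/name'
dst_folder = r'C:/Users/name/data_images'

os.makedirs(dst_folder, exist_ok=True)  # make sure the destination folder really exists

for file in glob.glob(os.path.join(src_folder, '*.jpg')):
    file_name = os.path.basename(file)
    # Move to an explicit file path, not just the folder name.
    shutil.move(file, os.path.join(dst_folder, file_name))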
shutil.move does not delete your files. If for any reason your files failed to move to a given location, check the directory where your code is stored for a '+' folder; your files are most likely stored there.

FileNotFound Error: [Errno 2] No such file or directory b when iterating through list of files in Python on Windows?

I am getting a FileNotFound error when iterating through a list of files in Python on Windows.
The specific error I get looks like:
FileNotFoundError: File b'fileName.csv' does not exist
In my code, I first ask for input on where the file is located and generate a list using os (though I also tried glob):
directory = input('In what directory are your files located?')
fileList = [s for s in os.listdir(directory) if s.endswith('.csv')]
When I print the list, it does not contain byte b before any strings, as expected (but I still checked). My code seems to break at this step, which generates the error:
for file in fileList:
    pd.read_csv(file)  # breaks at this line
I have tried everything I could find on Stack Overflow to solve this problem. That includes:
Putting an r or b or rb right before the path string
Using both the relative and absolute file paths
Trying different variations of path separators (/, \, \\, etc.)
I've been dealing with Windows-related issues lately (since I normally work in Mac or Linux) so that was my first suspicion. I'd love another set of eyes to help me figure out where the snag is.
A2A.
Although the list was generated correctly (because the full directory path was used for os.listdir), the working directory when the script was run was still '.'. You can verify this by running os.getcwd(). When later iterating through the list of file names, the program could not find those file names under '.', as expected: the list is just a list of bare file names with no directory tied to them, so the current directory was used.
The easiest fix here is to change the working directory right after the input. So,
directory = input('In what directory are your files located?')
os.chdir(directory) # Points os to given directory to avoid byte issue
If you need to access multiple directories, you could either do this right before you switch to each directory or save the full file path into your list instead.
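A minimal sketch of that second option, assuming the same input prompt: store full paths in the list so the working directory no longer matters.
import os
import pandas as pd

directory = input('In what directory are your files located?')

# Keep full paths instead of bare file names.
fileList = [os.path.join(directory, s) for s in os.listdir(directory) if s.endswith('.csv')]

for file in fileList:
    df = pd.read_csv(file)  # resolves correctly regardless of the current working directory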

Iterating over .wav files in subdirectories of parent directory

Cheers everybody,
I need help with something in python 3.6 exactly. So i have structure of data like this:
main directory
    subdirectories (plural)
        .wav files
I'm currently working from the directory where the main directory is placed, so I don't need to specify paths before that. So firstly I want to iterate over my main directory and find all subdirectories. Then in each of them I want to find the .wav files, and when done processing them I want to go to the next subdirectory, and so on until all of them are opened and all .wav files are processed. What exactly I want to do with those .wav files is load them into my program, process them so I can convert them to numpy arrays, and then convert those numpy arrays into some other object (working with TensorFlow, to be exact, and I want to convert to a TF object). I wrote about the whole process in case anybody has any quick advice on doing that too, so why not.
I tried doing it with for loops like:
for subdirectorys in open(data_path, "r"):
    for files in subdirectorys:
        # doing some processing stuff with the file
The problem is that it always raises error 13, Permission denied, on that data_path I gave it, but when I go to the folder's properties everything seems okay and all permissions are fine.
I tried some other ways, like os.open, or replacing the for loop with:
with open(data_path, "r") as data:
and it always raises the permission denied error.
os.walk works in some way but it's not what I need, and when I tried to modify it, it didn't give errors but it also didn't do anything.
Just to say I'm not any pro programmer in Python, so I may be missing an obvious thing, but I'm here to ask and learn. I also saw a lot of similar questions, but they mainly focus on .txt files and not specifically my case, so I need to ask it here.
Anyway thanks for help in advance.
Edit: If you want an example for glob (more sane), here it is:
from pathlib import Path

# The pattern "**" means all subdirectories recursively,
# with "*.wav" meaning all files with any name ending in ".wav".
for file in Path(data_path).glob("**/*.wav"):
    if not file.is_file():  # Skip directories
        continue
    with open(file, "rb") as f:  # "rb" reads the .wav data as bytes
        pass  # do stuff with f
For more info see Path.glob() in the documentation. Glob patterns are a useful thing to know.
Previous answer:
Try using either glob or os.walk(). Here is an example for os.walk().
from os import walk, path

# Recursively walk the directory data_path
for root, _, files in walk(data_path):
    # files is a list of files in the current root, so iterate them
    for file in files:
        # Skip the file if it is not *.wav
        if not file.endswith(".wav"):
            continue
        # os.path.join() will create the full path for the file
        file = path.join(root, file)
        # Do what you need with the file
        # You can also use a block context to open the files like this
        with open(file, "rb") as f:  # "rb" reads binary data such as .wav; use "wb" to write
            pass  # Do stuff
Note that you may be confused about what open() does. It opens a file for reading, writing, and appending. Directories are not files, and therefore cannot be opened.
I suggest that you Google for documentation and do more reading about the functions used. The documentation will help more than I can.
Another good answer explaining in more detail can be seen here.
import glob
import os
main = '/main_wavs'
wavs = [w for w in glob.glob(os.path.join(main, '*/*.wav')) if os.path.isfile(w)]
In terms of permissions on a path A/B/C... A, B and C must all be accessible. For files that means read permission. For directories, it means read and execute permissions (listing contents).
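If you want to check that from Python before opening anything, here is a hedged sketch using os.access; it is only a convenience check, since permissions can still change between the check and the actual open:
import os

data_path = "main_wavs"  # stand-in for your main directory

for root, dirs, files in os.walk(data_path):
    # Directories need read + execute permission to be listed and entered.
    if not os.access(root, os.R_OK | os.X_OK):
        print("Cannot list:", root)
    for name in files:
        p = os.path.join(root, name)
        # Files only need read permission.
        if not os.access(p, os.R_OK):
            print("Cannot read:", p)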

how to `shutil.move` with readonly files across drives

At least on Windows, shutil.move on a folder containing read-only files will fail when moving to another drive. It fails because move is implemented as a copy followed by an rmtree, and in the end it's the rmtree that tries to delete the non-writable files.
Currently I work around it by first setting stat.S_IWUSR for all (nested) files, but then I should still restore the original stat afterwards:
import os
import shutil
import stat
from os import path

def make_tree_writable(source_dir):
    for root, dirs, files in os.walk(source_dir):
        for name in files:
            make_writable(path.join(root, name))

def make_writable(path_):
    os.chmod(path_, stat.S_IWUSR)

def movetree_workaround(source_dir, target_dir):
    make_tree_writable(source_dir)
    shutil.move(source_dir, target_dir)
So I wonder: is this the way? Is there a shutil2 in the making that I could use? Can I be of any help there?
You can do that in two steps: first, use shutil.copytree() to copy the full directory and file structure with appropriate permissions. Then you can change permissions of the source to make sure you have rights to delete stuff, and use shutil.rmtree() to remove the old source.
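A hedged sketch of that two-step approach; the helper name move_tree_across_drives is just for illustration and error handling is kept minimal:
import os
import shutil
import stat

def move_tree_across_drives(source_dir, target_dir):
    # Step 1: copy the whole tree; copytree preserves the permission bits,
    # so read-only files stay read-only at the destination.
    shutil.copytree(source_dir, target_dir)

    # Step 2: make the source writable so it can be deleted, then remove it.
    for root, dirs, files in os.walk(source_dir):
        for name in dirs + files:
            p = os.path.join(root, name)
            os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)
    shutil.rmtree(source_dir)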
