Question: How can I change file permissions on a Windows 10 PC with a Python script?
I have written a Python script that takes folders, which are created by proprietary software, and moves them to a network drive with shutil.move().
It seems that the proprietary software creates folders that are read-only by default. I need to change the file permissions for these folders in order for shutil.move() to delete the folders after they are copied to the network drive.
I have searched on SO and discovered that os.chmod(path, 0o777) only grants access on Unix systems; on Windows it merely toggles the read-only attribute of a file or folder. This question seems to offer a solution, which I tried as follows:
import win32security
import ntsecuritycon as con

account = r"admin"
# Resolve the account name to a SID (security identifier).
userx, domain, type = win32security.LookupAccountName("", account)
# Read the security descriptor of the target path (path is the folder to fix).
sd = win32security.GetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION)
# Use the existing DACL instead of starting from an empty win32security.ACL().
dacl = sd.GetSecurityDescriptorDacl()
# Append an access-allowed entry granting read/write to the account.
dacl.AddAccessAllowedAce(win32security.ACL_REVISION, con.FILE_GENERIC_READ | con.FILE_GENERIC_WRITE, userx)
sd.SetSecurityDescriptorDacl(1, dacl, 0)  # may not be necessary
# Write the modified security descriptor back to the file/folder.
win32security.SetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION, sd)
But it does not seem to work. I also don't understand what I am doing with the win32security and ntsecuritycon modules; maybe someone can give a simple explanation.
Edit: I looked into this further. This is the exception that gets raised:
Traceback (most recent call last):
File "copyscript.py", line 108, in <module>
copyscript()# the loop needs to be called as a function to delete all assigned variables after each loop
File "copyscript.py", line 93, in copyscript
shutil.move(run, str(target_dir2))#move files renamed to user folder
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 550, in move
rmtree(src)
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 488, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 378, in _rmtree_unsafe
_rmtree_unsafe(fullname, onerror)
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 383, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Users\admin\AppData\Local\Programs\Python\Python35\lib\shutil.py", line 381, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 5] Access is denied: 'THG126.D\\AcqData\\sample_info.xml'
The full path of this file is D:\MSD_Data\THG126.D\AcqData\sample_info.xml.
The user account is named "admin" and belongs to the "Administrators" group.
"admin" is the owner and has "full control" according to the "advanced security settings" for MSD_Data, THG126.D, sample_info.xml and the Python script.
I have also tried running the script from the CLI using "run as administrator". The same error occurs.
I looked at all the files in the folders and found that only sample_info.xml has the RA attributes, whereas all the others have only A, so I added
path2 = os.path.join(r"D:\MSD_data", run, "AcqData", "sample_info.xml")
subprocess.check_call(["attrib", "-r", path2, "/S", "/D"])
to the script, and it seems to work now. I need to wait a little for new folders to be generated by the other software to see whether the script now runs correctly.
The problem seems to have been that a file had the attributes "RA", which mean "read-only" and "archive". Even though the user account was the owner of all files and folders, shutil.move() fails when it tries to delete such a file after copying it to the target location.
A workaround for this problem is to use
subprocess.check_call(["attrib", "-r", path])
to remove the read-only file attribute. This resolved my issue. If you still have trouble with shutil.move(), you could also try this solution.
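If you would rather stay in pure Python than shell out to attrib, a common equivalent (a sketch, not part of the original answer; src_dir is a placeholder for the folder that failed to delete) is to clear the read-only flag with os.chmod and stat.S_IWRITE from an onerror handler, which shutil.rmtree invokes for exactly these failures:
import os
import stat
import shutil

def handle_remove_readonly(func, path, exc_info):
    # On Windows, os.chmod with stat.S_IWRITE clears the read-only attribute.
    os.chmod(path, stat.S_IWRITE)
    # Retry the operation (os.unlink or os.rmdir) that just failed.
    func(path)

shutil.rmtree(src_dir, onerror=handle_remove_readonly)
Note that shutil.move() does not expose onerror, so with this approach you would copy first (e.g. with shutil.copytree) and then remove the source with rmtree as above.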
Related
I'm calling the Paramiko sftp_client.put(localpath, remotepath) method
This is throwing the [Errno 2] File not found error below.
01/07/2020 01:12:03 PM - ERROR - [Errno 2] File not found
Traceback (most recent call last):
File "file_transfer\TransferFiles.py", line 123, in main
File "paramiko\sftp_client.py", line 727, in put
File "paramiko\sftp_client.py", line 689, in putfo
File "paramiko\sftp_client.py", line 460, in stat
File "paramiko\sftp_client.py", line 780, in _request
File "paramiko\sftp_client.py", line 832, in _read_response
File "paramiko\sftp_client.py", line 861, in _convert_status
Having tried many of the other recommended fixes, I found that the error is due to the server having an automatic trigger that moves the file to another location immediately upon upload.
I've not seen another post relating to this issue and wanted to know if anyone else has fixed it, as the SFTP server is owned by a third party and I don't want to change the trigger attributes.
The file actually uploads correctly, so I could catch the Exception and ignore the error. But I'd prefer to handle it, if possible.
Paramiko by default verifies the size of the uploaded file after the upload.
If the file is moved away immediately after upload, the check fails.
To avoid the check, set confirm parameter of SFTPClient.put to False.
sftp_client.put(localpath, remotepath, confirm=False)
I believe the check is redundant anyway, see
How to perform checksums during a SFTP file transfer for data integrity?
For a similar question about pysftp (which is a wrapper around Paramiko), see:
Python pysftp.put raises "No such file" exception although file is uploaded
I also had this issue of the file automatically getting moved before Paramiko could stat the uploaded file and compare the local and uploaded file sizes.
@Martin_Prikryl's solution works fine for removing the error by passing confirm=False to sftp.put or sftp.putfo.
If you still want this check to run, as mentioned in the post, to verify that the file was uploaded fully, you can do something along these lines. For this to work you will need to know the location the file is moved to and have permission to read it.
import os

# Upload without Paramiko's built-in post-upload size check.
sftp.putfo(source_file_object, destination_file, confirm=False)

# Compare the remote file (at its moved location) with the local file object.
upload_size = sftp.stat(moved_path).st_size
local_size = os.fstat(source_file_object.fileno()).st_size
if upload_size != local_size:
    raise IOError(
        "size mismatch in put! {} != {}".format(upload_size, local_size)
    )
Both checks use stat calls: SFTPClient.stat for the remote file and os.fstat for the local file object.
I'm trying to download some files at regular intervals, remove the old ones and replace them with the new files. The first time it runs fine, but the second time it throws an error.
import shutil
import threading

import requests
import wget

import config  # local module with destination, api, videos and source_files settings

def check_update():
    print('looking for update')
    shutil.rmtree(config.destination)
    shutil.os.mkdir(config.destination)
    threading.Timer(60.0, check_update).start()

    def get_videos():
        response = requests.get(config.api)
        data = response.json()
        files = list()
        l = len(data)
        for i in range(l):
            files.append(data[i]['filename'])
        return files

    def get_newfiles(myfiles):
        for i in range(len(myfiles)):
            url = config.videos + myfiles[i]
            filename = wget.download(url)

    def move_files(myfiles):
        for i in range(len(myfiles)):
            file = myfiles[i]
            shutil.move(config.source_files + file, config.destination)

    def videos():
        files = set(get_videos())
        myfiles = list(files)
        get_newfiles(myfiles)
        move_files(myfiles)

    videos()
    print("files are updated")
    res = requests.get(config.api)
    data = res.json()
    return data

data = check_update()
Here is the error.
File "C:\Program Files (x86)\Python36-32\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Program Files (x86)\Python36-32\lib\threading.py", line 1182, in run
self.function(*self.args, **self.kwargs)
File "tornado.py", line 8, in check_update
shutil.rmtree(config.destination)
File "C:\Program Files (x86)\Python36-32\lib\shutil.py", line 494, in rmtree
return _rmtree_unsafe(path, onerror)
File "C:\Program Files (x86)\Python36-32\lib\shutil.py", line 389, in _rmtree_unsafe
onerror(os.unlink, fullname, sys.exc_info())
File "C:\Program Files (x86)\Python36-32\lib\shutil.py", line 387, in _rmtree_unsafe
os.unlink(fullname)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process:
How can I overcome this?
The error occurs when attempting to delete the config.destination dir. That happens because either the dir itself or one (or more) of its children (a dir or a file) is open in another process (which could also be the current one). Typical use cases that frequently lead to this situation:
A cmd console is open in that dir. Just cd outside the dir and try removing it again.
A running program has a file open:
An example would be Notepad (or an IDE) that has a source file located in that dir open.
But since it looks like you are working with videos, maybe you wanted to check whether a downloaded file works and opened it in a video player.
No matter the case, closing that program would fix the issue.
This one is specific to you: I don't know how wget.download works, but if it's not blocking (although according to the code that doesn't seem to be the case), maybe one video from your previous run is still downloading, hence it's open. Closing that Python process would do (either by waiting until it finishes or by killing it from Task Manager).
Note: When searching for the cause, you should try removing the dir from a file manager (e.g. Windows Explorer) to avoid the overhead introduced by the script.
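To find the offending entry from inside the script itself, one option (a sketch, not part of the original answer) is to pass an onerror callback to shutil.rmtree that reports the failing path before re-raising:
import shutil

def report_error(func, path, exc_info):
    # Report which path could not be removed, then re-raise the original error.
    print("rmtree failed on:", path)
    raise exc_info[1]

# config.destination as in the question's snippet.
shutil.rmtree(config.destination, onerror=report_error)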
I work for a photography studio that shoots and retouches thousands of images per day. The studio has been moving files manually for many years and we experience a lot of user error placing the files/folders in the wrong place. To help eliminate this I created a Python script (run via an Apple Automator droplet application) which:
1. moves the files from the photographer's computer to the file server,
2. creates new directories if they do not already exist, and
3. applies name versioning to the file names if files with the same name already exist.
Here is the script:
import sys
import os
import subprocess

destinationPath = '/Volumes/extensis/product'

def updateName(name, incrementCounter):
    bsc = name[:19]
    seq = name[19:20]
    ext = name[20:]
    try:
        seq_new = int(seq)
        if incrementCounter:
            seq_new = seq_new + 1
    except ValueError:
        seq_new = 1
    name = bsc + str(seq_new).zfill(1) + ext
    return name

def makeSureCounterIsNumeric(name):
    return updateName(name, False)

def incrementCounter(name):
    return updateName(name, True)

for f in sys.argv[1:]:
    name = os.path.basename(f)
    newdir = os.path.join(destinationPath, name[:1], name[:5], name[:10], name[:15])
    print newdir
    if not os.path.exists(newdir):
        os.makedirs(newdir)
    name = makeSureCounterIsNumeric(name)
    while os.path.isfile(os.path.join(newdir, name)):
        name = incrementCounter(name)
    subprocess.call(['cp', f, os.path.join(newdir, name)])
This works great on MOST computers without issue. Some computers can't run the application at all and get a Python makedirs permission-denied error. Some computers were able to run the application for a period of time and then suddenly started hitting the same error. Sometimes the error clears if we restart the computer, but not always. Traceback:
Traceback (most recent call last):
File "<string>", line 31, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/Volumes/extensis/product'
We have confirmed that all local machines run the same version of the OS/Python, all have admin permissions on their individual machines, all connect to the same file server address, and everyone has the correct permissions to access/modify the server location. They can even manually make the directories or move files to this location without issue.
I also found some people on Stack Overflow mentioning that the "/" used in the volume path can cause these issues because it refers to the root folder, but I have tried all variants of the path: with a trailing "/", without it, and the above is the only variation that I have been able to run successfully. The other variants either throw the permission error or appear to run successfully, but no files are moved.
We have done a bunch of research without any answers. We've talked with our local IT department and software devs, and nobody has suggested anything else to look at. The only thing they offered is that perhaps it is just a Mac-to-Windows issue and might not be anything we can fix. That doesn't help.
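One way to narrow this down (an editorial sketch, not from the thread; os.access against a network mount is only a heuristic) is to walk the destination path component by component and report where access breaks:
import os

destinationPath = '/Volumes/extensis/product'

# Walk the path from the volume root downwards and report access at each level.
current = '/'
for part in destinationPath.strip('/').split('/'):
    current = os.path.join(current, part)
    print '%s -> exists: %s, writable: %s' % (current, os.path.exists(current), os.access(current, os.W_OK))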
I'm learning web programming with Python, and am still basically going through lectures/tutorials.
I'm trying to upload a file to a server. This is my code:
import ftplib
import sys
filename = sys.argv[1]
connect = ftplib.FTP("***.**.***.**")
connect.login("testuser","pass")
file = open(filename, "rb")
connect.storbinary("STOR " + filename, file)
connect.quit()
and this is the error I have:
File "C:\Users\test\putfile.py", line 8, in <module>
connect.storbinary("STOR " + filename, file)
File "C:\Python27\lib\ftplib.py", line 471, in storbinary
conn = self.transfercmd(cmd, rest)
File "C:\Python27\lib\ftplib.py", line 376, in transfercmd
return self.ntransfercmd(cmd, rest)[0]
File "C:\Python27\lib\ftplib.py", line 339, in ntransfercmd
resp = self.sendcmd(cmd)
File "C:\Python27\lib\ftplib.py", line 249, in sendcmd
return self.getresp()
File "C:\Python27\lib\ftplib.py", line 224, in getresp
raise error_perm, resp
ftplib.error_perm: 550 Permission denied.
testuser should have permission to write files, since the folder is owned by him, and he has root privileges (he was added to the sudoers file).
The same thing happens if I add the line:
connect.cwd('/testfolder')
I then get error_perm: 550 Failed to change directory.
However I can still read the existing files just fine (with
connect.retrlines("RETR " + filename))
I'm pretty new to Python as well as Linux, so I don't really know what I'm doing. I need some help.
Perhaps this can help:
With FTP it is not sufficient to be the owner of the files and directories.
The FTP service/daemon must also be correctly configured to allow writing and creating files.
For example, in Ubuntu:
Edit /etc/vsftpd.conf and find the line
#write_enable=YES
Remove the leading # to uncomment it.
Finally, restart the service:
sudo service vsftpd restart
I would check whether you are in the right location. I had the same problem, and then I realised that I was in a different location than I intended: in the root folder, above "/public_html", so the folder I wanted to enter did not exist there and I didn't have permission to store any files.
You can check where you are this way:
print connect.pwd()
and what the contents of the current directory are:
connect.dir()
So, if you are in the root folder ("/"), above "/public_html", and you want to change the current directory to "/testfolder", you need to use:
connect.cwd('/public_html/testfolder')
Have you checked the access permissions on the FTP server? I just ran into this same problem. The issue happened because I did not have permission to read the folder that I wanted to upload my files into.
There are a few things one can check when encountering this error.
Check the current directory of the FTP server that you are trying to access, using connect.pwd(). Make sure you have write access to this directory. You can try copying a file there manually to verify this.
Make sure you provide only the filename and not the complete path. For me, this was causing the issue. For example, use filename = "upload_img.jpg" instead of filename = "D:/path/to/upload_img.jpg". A workaround is to split off the directory with os.path.split() and then change into it with os.chdir(), as in the sketch below.
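A minimal sketch of that workaround, assuming a hypothetical local path and placeholder server details:
import os
import ftplib

localpath = "D:/path/to/upload_img.jpg"  # hypothetical local file
directory, filename = os.path.split(localpath)
os.chdir(directory)  # so only the bare filename is sent in the STOR command

connect = ftplib.FTP("ftp.example.com")  # placeholder host
connect.login("testuser", "pass")
with open(filename, "rb") as f:
    connect.storbinary("STOR " + filename, f)
connect.quit()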
Yesterday I set up Apache to serve my Mercurial repositories and got everything working properly. I then tested pushing changes back to this repository and was presented with an error, and now that error pops up for every single operation I attempt, even just a simple GET request of the repositories! Here is the error:
mod_wsgi (pid=1771): Target WSGI script '/var/hg/hgweb.wsgi' cannot be loaded as Python module.
mod_wsgi (pid=1771): Exception occurred processing WSGI script '/var/hg/hgweb.wsgi'.
Traceback (most recent call last):
File "/var/hg/hgweb.wsgi", line 18, in ?
application = hgwebdir(config)
File "/usr/lib64/python2.4/site-packages/mercurial/hgweb/__init__.py", line 15, in hgwebdir
return hgwebdir_mod.hgwebdir(*args, **kwargs)
File "/usr/lib64/python2.4/site-packages/mercurial/hgweb/hgwebdir_mod.py", line 52, in __init__
self.refresh()
File "/usr/lib64/python2.4/site-packages/mercurial/hgweb/hgwebdir_mod.py", line 82, in refresh
self.repos = findrepos(paths)
File "/usr/lib64/python2.4/site-packages/mercurial/hgweb/hgwebdir_mod.py", line 36, in findrepos
for path in util.walkrepos(roothead, followsym=True, recurse=recurse):
File "/usr/lib64/python2.4/site-packages/mercurial/util.py", line 1164, in walkrepos
for hgname in walkrepos(fname, True, seen_dirs):
File "/usr/lib64/python2.4/site-packages/mercurial/util.py", line 1146, in walkrepos
for root, dirs, files in os.walk(path, topdown=True, onerror=errhandler):
File "/usr/lib64/python2.4/os.py", line 276, in walk
onerror(err)
File "/usr/lib64/python2.4/site-packages/mercurial/util.py", line 1127, in errhandler
raise err
OSError: [Errno 13] Permission denied: './dev/fd'
My repository directory is owned by apache, the user running Apache. I don't know why './dev/fd' is being operated on either. I've restarted the server numerous times and recreated the repository directory, but I still get this error no matter what! I don't have access to restart the machine, so that is not an option. But it seems to have gotten into a very bad persistent state, and I don't know how to fix it. Any help is appreciated!
This turned out to be a configuration error on my part, and rather than delete the question I'll post the resolution here in case someone has this problem in the future.
Here was the hgweb.config I was using:
[paths]
/ = /var/hg/repos/*
#[web]
style = gitweb
allow_archive = bz2 gz zip
maxchanges = 200
allow_push = *
push_ssl = false
Two problems here, one of them obvious. I had the [web] header commented out, and I assume that many of the options below it are not valid in the [paths] section. Also, after re-reading the Hg docs, the push_ssl directive does not belong in hgweb.config but rather in each repository's .hg/hgrc (or the ~/.hgrc of the user that runs Apache). After fixing these, things are working perfectly!
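For reference, the corrected configuration under those two fixes would look like this (reconstructed from the description above, so treat it as a sketch). hgweb.config keeps the [paths] section and an uncommented [web] section:
[paths]
/ = /var/hg/repos/*

[web]
style = gitweb
allow_archive = bz2 gz zip
maxchanges = 200
allow_push = *
and push_ssl moves into each repository's .hg/hgrc:
[web]
push_ssl = false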