I am just starting to learn Python. I'm trying to upload a file as follows:
import ftplib
myurl = 'ftp.example.com'
user = 'user'
password = 'password'
myfile = '/Users/mnewman/Desktop/requested.txt'
ftp = ftplib.FTP(myurl, user, password)
ftp.encoding = "utf-8"
ftp.cwd('/public_html/')
ftp.storbinary('STOR '+myfile, open(myfile, 'rb'))
But I get the following error:
Traceback (most recent call last):
File "/Users/mnewman/.spyder-py3/temp.py", line 39, in <module>
ftp.storbinary('STOR '+myfile, open(myfile, 'rb'))
File "ftplib.pyc", line 487, in storbinary
File "ftplib.pyc", line 382, in transfercmd
File "ftplib.pyc", line 348, in ntransfercmd
File "ftplib.pyc", line 275, in sendcmd
File "ftplib.pyc", line 248, in getresp
error_perm: 553 Can't open that file: No such file or directory
What does "that file" refer to and what do I need to do to fix this?
Reading the traceback, the error comes from deep in the ftplib stack while it processes a response from the server. FTP server messages aren't standardized, but from the text it's clear that the FTP server is unable to write the file on the remote side. This can happen for a variety of reasons: perhaps there is a permissions problem (the identity of the FTP server process does not have rights to the target), the write falls outside a sandbox set up on the server, or the file is already open in another program.
But in your case, you are using the full source file name in the "STOR" command, where the server expects the target path. Depending on whether you want to create subdirectories on the server, calculating the target name can get complicated. If you just want the file to land in the server's current working directory, you can strip the path down to its base name (os.path.split(myfile)[1], equivalently os.path.basename(myfile)):
import os

ftp.storbinary(f'STOR {os.path.split(myfile)[1]}', open(myfile, 'rb'))
"That file" refers to the file that you're trying to upload to the FTP. According to your code, it refers to the line: myfile = '/Users/mnewman/Desktop/requested.txt'. You get this error because Python can't find the file in the path. Check whether it exists in the correct path. If you want to test whether there is an error in the script, you can add a test file to the directory in which your Python script exists and then run the script with the path of that file.
Example Script for FTP Upload:
import ftplib
session = ftplib.FTP('ftp.example.com','user','password')
file = open('hello.txt','rb') # file to send
session.storbinary('STOR hello.txt', file) # send the file
file.close() # close file and FTP
session.quit()
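On Python 3.3+, ftplib.FTP also works as a context manager, which closes the connection even if the transfer raises; a variant of the same script (hypothetical host and credentials):

import ftplib

with ftplib.FTP('ftp.example.com', 'user', 'password') as session:
    with open('hello.txt', 'rb') as file:
        session.storbinary('STOR hello.txt', file)  # send the file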
Related
I'm calling the Paramiko sftp_client.put(localpath, remotepath) method.
This throws the [Errno 2] File not found error below.
01/07/2020 01:12:03 PM - ERROR - [Errno 2] File not found
Traceback (most recent call last):
File "file_transfer\TransferFiles.py", line 123, in main
File "paramiko\sftp_client.py", line 727, in put
File "paramiko\sftp_client.py", line 689, in putfo
File "paramiko\sftp_client.py", line 460, in stat
File "paramiko\sftp_client.py", line 780, in _request
File "paramiko\sftp_client.py", line 832, in _read_response
File "paramiko\sftp_client.py", line 861, in _convert_status
Having tried many of the other recommended fixes, I found that the error is due to the server having an automatic trigger that moves the file to another location immediately after it is uploaded.
I've not seen another post relating to this issue and wanted to know if anyone else has fixed it, as the SFTP server is owned by a third party and I'd rather not change the trigger attributes.
The file actually uploads correctly, so I could catch the exception and ignore the error, but I'd prefer to handle it properly if possible.
By default, Paramiko verifies the size of the uploaded file after the upload.
If the file is moved away immediately after upload, this check fails.
To avoid the check, set the confirm parameter of SFTPClient.put to False.
sftp_client.put(localpath, remotepath, confirm=False)
I believe the check is redundant anyway; see
How to perform checksums during a SFTP file transfer for data integrity?
For a similar question about pysftp (which is a wrapper around Paramiko), see:
Python pysftp.put raises "No such file" exception although file is uploaded
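For reference, pysftp's put accepts the same switch (a one-line sketch, assuming an open pysftp.Connection named sftp):

sftp.put(localpath, remotepath, confirm=False)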
I also had this issue of the file automatically getting moved before Paramiko could stat the uploaded file and compare the local and uploaded file sizes.
@Martin_Prikryl's solution works fine for removing the error: pass confirm=False when using sftp.put or sftp.putfo.
If you still want the check to run, as mentioned in the post, to verify that the file was uploaded fully, you can do something along these lines. For this to work you need to know the moved file's location and have permission to read it.
import os

# upload without Paramiko's automatic size confirmation
sftp.putfo(source_file_object, destination_file, confirm=False)

# re-implement the check against the file's post-move location
upload_size = sftp.stat(moved_path).st_size
local_size = os.fstat(source_file_object.fileno()).st_size
if upload_size != local_size:
    raise IOError(
        "size mismatch in put! {} != {}".format(upload_size, local_size)
    )
The remote size comes from sftp.stat, the local size from os.fstat on the still-open file object.
I have a question about the zipfile library in Python 2.7.12.
It seems that if I try to extract a .zip that is password protected, an exception is thrown.
I am able to extract a zip archive without password protection in Python, and I have confirmed that I am able to extract this one on my Linux system without Python, using the right passphrase.
Here is the unzipping attempt from my local console:
>>> import zipfile
>>> z = zipfile.ZipFile("folder.zip","r")
>>> z.extractall(pwd="taddel")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/zipfile.py", line 1040, in extractall
self.extract(zipinfo, path, pwd)
File "/usr/lib/python2.7/zipfile.py", line 1028, in extract
return self._extract_member(member, path, pwd)
File "/usr/lib/python2.7/zipfile.py", line 1082, in _extract_member
with self.open(member, pwd=pwd) as source, \
File "/usr/lib/python2.7/zipfile.py", line 1007, in open
raise RuntimeError("Bad password for file", name)
RuntimeError: ('Bad password for file', <zipfile.ZipInfo object at 0x7f31cd1c3370>)
>>>
I could show you the script I was working on, but broken down it does nothing more than try to dictionary-force the password and split the work into threads, as in the sketch below.
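A minimal sketch of that loop (wordlist.txt is a hypothetical candidate list; folder.zip is the archive from above):

import zipfile

z = zipfile.ZipFile("folder.zip", "r")
with open("wordlist.txt") as wordlist:       # hypothetical candidate list
    for line in wordlist:
        candidate = line.strip()
        try:
            z.extractall(pwd=candidate)
            print "Found password: %s" % candidate
            break
        except RuntimeError:                 # the "Bad password for file" error
            continue

Worth noting: the standard-library zipfile only supports the legacy ZipCrypto scheme, so an archive encrypted with AES (as 7-Zip or WinZip can produce) raises this same "Bad password for file" error even when the passphrase is correct.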
I know this IS the right password for this file, because I made it myself.
I have also tried .strip("\n") on the password and encoding it as UTF-8.
I also know I do not have to specify a folder to extract into, because with a non-password-protected zip archive this code works perfectly fine.
How can I fix this? Or are there updated Python libraries available if the standard ones do not work?
I'm learning web programming with Python, and am still basically going through lectures/tutorials.
I'm trying to upload a file to a server. This is my code:
import ftplib
import sys
filename = sys.argv[1]
connect = ftplib.FTP("***.**.***.**")
connect.login("testuser","pass")
file = open(filename, "rb")
connect.storbinary("STOR " + filename, file)
connect.quit()
and this is the error I have:
File "C:\Users\test\putfile.py", line 8, in <module>
connect.storbinary("STOR " + filename, file)
File "C:\Python27\lib\ftplib.py", line 471, in storbinary
conn = self.transfercmd(cmd, rest)
File "C:\Python27\lib\ftplib.py", line 376, in transfercmd
return self.ntransfercmd(cmd, rest)[0]
File "C:\Python27\lib\ftplib.py", line 339, in ntransfercmd
resp = self.sendcmd(cmd)
File "C:\Python27\lib\ftplib.py", line 249, in sendcmd
return self.getresp()
File "C:\Python27\lib\ftplib.py", line 224, in getresp
raise error_perm, resp
ftplib.error_perm: 550 Permission denied.
testuser should have permission to write files, since the folder is owned by him and he has root privileges (he was added to the sudoers file).
The same thing happens if I add the line:
connect.cwd('/testfolder')
I then get error_perm: 550 Failed to change directory.
However, I can still read the existing files just fine (with
connect.retrlines("RETR " + filename)).
I'm pretty new to Python as well as Linux, so I don't really know what I'm doing. I need some help.
Perhaps this can help:
With FTP, it is not sufficient to be the owner of the files and directories.
The FTP service/daemon must also be correctly configured to allow writing and creating files.
For example, on Ubuntu:
Edit /etc/vsftpd.conf and find the line
#write_enable=YES
Remove the leading # to uncomment it.
Finally restart the service:
sudo service vsftpd restart
I would check whether you are in the right location. I had the same problem, and then I realised that I was in a different location than I intended: in the root folder, above "/public_html", so the folder I wanted to enter did not exist there and I didn't have permission to store any files.
You can check where you are this way:
print connect.pwd()
and what the contents of the current directory are:
connect.dir()
So, if you are in the root folder ("/"), above "/public_html", and you want to change the current directory to "/testfolder", you need to use:
connect.cwd('/public_html/testfolder')
Have you checked the access permissions on the FTP server? I ran into this same problem; it happened because I did not have permission to read the folder into which I wanted to upload my files.
There are a few things one can check when encountering this error.
Check the current directory of the FTP server that you are trying to access using connect.pwd(), and make sure you have write access to it. You can try copying a file there manually to verify.
Make sure you provide only the filename and not the complete path. For me, this was the issue: for example, filename = "upload_img.jpg" instead of filename = "D:/path/to/upload_img.jpg". A workaround is to split off the directory with os.path.split() and change into it with os.chdir(), as in the sketch below.
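A minimal sketch of that workaround (hypothetical host, credentials, and path):

import ftplib
import os

localpath = "D:/path/to/upload_img.jpg"      # hypothetical full path
directory, filename = os.path.split(localpath)
os.chdir(directory)                          # work from the file's own folder
connect = ftplib.FTP("ftp.example.com")      # hypothetical server
connect.login("testuser", "pass")
with open(filename, "rb") as f:
    connect.storbinary("STOR " + filename, f)   # STOR gets the bare name only
connect.quit()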
I have a Python CGI script that runs an application via subprocess over and over again (several thousand times). I keep getting the same error...
Traceback (most recent call last):
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 413, in <module>
webpage()
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 406, in main
displayOmpResult(form['odfFile'].value)
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 342, in displayContainerDiv
makeSection(position,sAoiInput)
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 360, in displayData
displayTable(i,j,lAmpAndVars,dOligoSet[key],position)
File "/home/linuser/Webpages/cgi/SnpEdit.py", line 247, in displayTable
p = subprocess.Popen(['/usr/bin/pDat',sInputFileLoc,sOutputFileLoc],stdout=fh, stderr=fh)
File "/usr/lib/python2.6/subprocess.py", line 633, in __init__
errread, errwrite)
File "/usr/lib/python2.6/subprocess.py", line 1039, in _execute_child
errpipe_read, errpipe_write = os.pipe()
OSError: [Errno 24] Too many open files
The function causing it is below.
def displayTable(sData):
# convert the data to the proper format
sFormattedData = convertToFormat(sData)
# write the formatted data to file
sInputFile = tempfile.mkstemp(prefix='In_')[1]
fOpen = open(sInputFile,'w')
fOpen.write(sFormattedData)
fOpen.close()
sOutputFileLoc = sInputFile.replace('In_','Out_')
# run app, requires two files; an input and an output
# temp file to holds stdout stderr of subprocess
fh = tempfile.TemporaryFile(mode='w',dir=tempfile.gettempdir())
p = subprocess.Popen(['/usr/bin/pDat',sInputFile,sOutputFileLoc],stdout=fh, stderr=fh)
p.communicate()
fh.close()
# open output file and print parsed data into a list of dictionaries
sOutput = open(sOutputFileLoc).read()
lOutputData = parseOutput(sOutput)
displayTableHeader(lOutputData)
displaySimpleTable(lOutputData)
As far as I can tell, I'm closing the files properly. When I run...
import resource
print resource.getrlimit(resource.RLIMIT_NOFILE)
I get...
(1024, 1024)
Do I have to increase this value? I read that subprocess opens several file descriptors. I tried adding close_fds=True and I tried using the with statement when creating my files, but the result was the same. I suspect the problem may be with the application I'm running via subprocess, pDat, but that program was made by someone else. It requires two inputs: an input file and the location where you want the output file written. I suspect it may not be closing the output file that it creates. Aside from this, I can't see what I might be doing wrong. Any suggestions? Thanks.
EDIT:
I'm on Ubuntu 10.04 running Python 2.6.5 and Apache 2.2.14.
Instead of this...
sInputFile = tempfile.mkstemp(prefix='In_')[1]
fOpen = open(sInputFile,'w')
fOpen.write(sFormattedData)
fOpen.close()
I should have done this...
iFileHandle,sInputFile = tempfile.mkstemp(prefix='In_')
fOpen = open(sInputFile,'w')
fOpen.write(sFormattedData)
fOpen.close()
os.close(iFileHandle)
The mkstemp function creates an OS-level handle to the file, and I wasn't closing it. The solution is described in more detail here...
http://www.logilab.org/blogentry/17873
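A slightly tidier variant (a minimal sketch, reusing the names from the question) wraps the descriptor with os.fdopen, so closing the file object also closes the OS-level handle:

import os
import tempfile

iFileHandle, sInputFile = tempfile.mkstemp(prefix='In_')
fOpen = os.fdopen(iFileHandle, 'w')   # file object wraps the OS-level handle
fOpen.write(sFormattedData)
fOpen.close()                         # also closes the underlying descriptor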
You want to add close_fds=True to the popen call (just in case).
Then, here:
# open output file and print parsed data into a list of dictionaries
sOutput = open(sOutputFileLoc).read()
lOutputData = parseOutput(sOutput)
...I might be remembering wrong, but unless you use the with syntax, I do not think the output file descriptor gets closed.
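For reference, a with-based version of that snippet (same names as the question), so the descriptor is released deterministically:

# open output file and parse its contents, closing the descriptor promptly
with open(sOutputFileLoc) as fOutput:
    sOutput = fOutput.read()
lOutputData = parseOutput(sOutput)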
UPDATE: the main problem is that you need to know which files are open. On Windows this would require something like Process Explorer. On Linux it's a bit simpler: invoke the CGI from the command line, or make sure only one instance of the CGI is running, and fetch its pid with the ps command.
Once you have the pid, run ls -la on the /proc/<PID>/fd directory. All open file descriptors will be listed there, with the names of the files they point to. Knowing that file so-and-so is open 377 times goes a long way towards finding out where exactly that file is opened (but not closed).
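The same inspection can be scripted from Python itself (Linux-only; a minimal sketch that lists the current process's descriptors via /proc):

import os

pid = os.getpid()   # substitute the CGI's pid taken from ps
fd_dir = "/proc/%d/fd" % pid
for fd in sorted(os.listdir(fd_dir), key=int):
    print fd, "->", os.readlink(os.path.join(fd_dir, fd))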