FTP upload error with Python and ftplib - python

I'm trying to run a simple FTPS script to upload a file on a schedule from a Linux box to an FTPS instance running on Windows Server 2012. When I test the script on my desktop (OS X), it errors out:
Error uploading file: [Errno 54] Connection reset by peer
If I run the script on a Linux box, I get the same error, except with errno 104 instead of 54:
Error uploading file: [Errno 104] Connection reset by peer
The files I'm uploading are either empty or 8 bytes. I've verified that FTPS works with two other clients on my desktop. What am I missing or overlooking?
#!/usr/bin/env python
from ftplib import FTP_TLS
import fnmatch
import os
import ssl
import sys

server = '192.168.1.2'
user = 'myUsername'
passwd = 'myPassword'

def connect_ftp():
    ftp = FTP_TLS(server, user, passwd)
    ftp.set_pasv(True)
    ftp.prot_p()
    return ftp

def upload_file(ftp_connection, upload_file_path):
    try:
        upload_file = open("/tmp/test/" + upload_file_path, 'r')
        print('Uploading ' + upload_file_path + "...")
        ftp_connection.storbinary('STOR ' + upload_file_path, upload_file)
        ftp_connection.quit()
        ftp_connection.close()
        upload_file.close()
        print('Upload finished.')
    except Exception, e:
        print("Error uploading file: " + str(e))

ftp_conn = connect_ftp()
for file in os.listdir('/tmp/test'):
    if fnmatch.fnmatch(file, 'bt_*.txt'):
        upload_file(ftp_conn, file)

I think this problem only shows up with the Microsoft FTP server.
First of all, turn on debugging:
ftp.set_debuglevel(2)
In my case the transfer hangs on:
put 'STOR test.xml\r\n'
get '125 Data connection already open; Transfer starting.\n'
resp '125 Data connection already open; Transfer starting.'
Then I found this suggestion: http://www.sami-lehtinen.net/blog/python-32-ms-ftps-ssl-tls-lockup-fix
I tried it (commented out conn.unwrap() in storbinary) and it worked!
In my case it was line 513:
# shutdown ssl layer
if _SSLSocket is not None and isinstance(conn, _SSLSocket):
    pass  # conn.unwrap()
This is obviously a very bad hack, but I couldn't find anything better.
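If you can't (or don't want to) patch the installed ftplib, a less invasive variant of the same hack is to subclass FTP_TLS and reimplement storbinary without the unwrap() call. This is a sketch of that idea, not an official fix; the method body mirrors CPython's ftplib.FTP.storbinary, minus the SSL shutdown step:

```python
from ftplib import FTP_TLS

class PatchedFTP_TLS(FTP_TLS):
    """FTP_TLS whose storbinary skips conn.unwrap(), which hangs
    against some Microsoft FTP servers."""

    def storbinary(self, cmd, fp, blocksize=8192, callback=None, rest=None):
        self.voidcmd('TYPE I')
        with self.transfercmd(cmd, rest) as conn:
            while True:
                buf = fp.read(blocksize)
                if not buf:
                    break
                conn.sendall(buf)
                if callback:
                    callback(buf)
            # NOTE: deliberately no conn.unwrap() here -- closing the
            # data socket without the TLS shutdown is what unblocks
            # the affected Microsoft servers.
        return self.voidresp()
```

You would then use PatchedFTP_TLS in place of FTP_TLS in the script above; everything else stays the same.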

I had the same issue and managed to solve it with these lines:
ftps = FTP_TLS(server)
ftps.set_debuglevel(2)  # To show logs
ftps.ssl_version = ssl.PROTOCOL_TLS
ftps.set_pasv(True)
ftps.login(user="user", passwd="passwd")
And I switched from Python 3.5.1 to Python 3.8.3.

Related

Bad gateway error while using FTP on a server

We are using Python 3.6.8 on our server. We connect to the FTP server and push files to it through an API call. When we push the files from local, it runs fine and the files are pushed, but when the API is called on the server it returns a 502 Bad Gateway error after about 14.8 s (tried with Postman). The server is an AWS EC2 instance.
ftp = ftplib.FTP()
host = config.FTP_HOST
port = 21
ftp.connect(host, port)
try:
    ftp.login(config.FTP_USERNAME, config.FTP_PASSWORD)
    file = open(path_image, 'rb')
    ftp.cwd("/DailyDump/target/")
    ftp.storbinary("STOR sample_file_name" + str(yesterday_date) + ".csv", file)
    file.close()
    ftp.close()
except:
    pass
This problem was caused by the maximum timeout being reached on the API call, so I moved the code out of the API into a standalone script that can run for a longer time without erroring. There was no error in the FTP login itself.
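For reference, the bare except: pass in the snippet above also hides every FTP error. A rough standalone sketch of the same upload with an explicit timeout and error reporting might look like this (host, credentials, and paths are placeholders, and the 600 s timeout is an arbitrary example value):

```python
import ftplib

def upload_dump(host, user, password, local_path, remote_name):
    """Upload one file over plain FTP, surfacing errors instead of
    swallowing them. Returns True on success, False on any FTP error."""
    try:
        # A generous per-operation timeout keeps long transfers from
        # being cut off by the default (blocking) socket behaviour.
        with ftplib.FTP(host, timeout=600) as ftp:
            ftp.login(user, password)
            ftp.cwd('/DailyDump/target/')
            with open(local_path, 'rb') as f:
                ftp.storbinary('STOR ' + remote_name, f)
        return True
    except ftplib.all_errors as e:
        print('FTP upload failed:', e)
        return False
```

Run from cron or a systemd timer, this avoids the API gateway's request timeout entirely.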

FTP: 421 Data timeout on Azure VM

I have a simple script that successfully downloads a 75 MB file over FTP:
try:
    ftp = ftplib.FTP(host)
    ftp.login(username, password)
    ftp.cwd(source_dir)
except ftplib.all_errors as e:
    print('Ftp error = ', e)
    return False

# Check filename exists
if filename in ftp.nlst():
    local_filename = os.path.join(dest_dir, filename)
    lf = open(local_filename, "wb")
    ftp.retrbinary("RETR " + filename, lf.write)
    lf.close()
    print(filename, ' successfully downloaded')
else:
    print(filename, ' not found in the path ', source_dir)

ftp.quit()
This script works fine on both my home and work laptops when run from Spyder IDE or a Windows scheduled task.
I have deployed the exact same script to a Windows Virtual Machine on Azure.
Files smaller than 10 MB seem to download OK.
Files larger than 30 MB return an exception:
421 Data timeout. Reconnect. Sorry.
I get around 700 Mbps on Azure and only around 8 Mbps on my home network.
It looks like a timeout. I can see the file is partially downloaded.
I tried setting ftp.set_pasv(False), but this then returns me 500 Illegal Port, which is to be expected. I understand passive is the preferred approach anyhow.
What else can I do to troubleshoot and resolve this issue?
Just some suggestions for you.
According to the Wikipedia page for the File Transfer Protocol, FTP may run in active or passive mode. In active mode, the client must open a listening port for incoming data from the server. Because that client-side port is assigned randomly, you cannot add it to the NSG inbound rules in advance. So on an Azure VM you should use passive mode on the client side, either by calling FTP.set_pasv(True) or simply by not calling FTP.set_pasv(False) (passive is the default).
For the issue 421 Data timeout. Reconnect. Sorry., check the timeout setting on your FTP server, such as the data_connection_timeout property in the vsftpd.conf file of vsftpd, and set a sufficiently long value.
Try to set a timeout value longer than the global default via the timeout parameter of ftplib.FTP(host='', user='', passwd='', acct='', timeout=None, source_address=None).
Try to use FTP.set_debuglevel(level) to output more details from your script and find the possible cause.
Hope it helps.
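As a concrete sketch of the timeout and debug suggestions above (600 seconds is an arbitrary example value, not a recommendation):

```python
import ftplib

# Give each socket operation up to 10 minutes instead of the global
# default (blocking indefinitely unless socket.setdefaulttimeout was
# changed elsewhere).
ftp = ftplib.FTP(timeout=600)
ftp.set_debuglevel(2)   # print the full FTP protocol trace to stderr
# ftp.connect(host)     # then connect/login/retrbinary as before
# ftp.login(username, password)
```

If the 421 error persists even with a long client timeout, the limit is almost certainly on the server side.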

Python FTP object returns socket.error: [Errno 10060] behind HTTP Proxy?

from ftplib import FTP
import time

try:
    ftp = FTP('kcm.amazon-digital-ftp.com')  # removed "sftp://" from url.
except IOError, e:
    time.sleep(120)
    ftp = FTP('kcm.amazon-digital-ftp.com')
This returns socket.error: [Errno 10060]. I'm running this script from a VM which has a proxy on it. I'm able to connect to this FTP site using FileZilla on the same VM without changing FileZilla's proxy settings.
In that case, I don't understand how FileZilla is dealing with that proxy in the background and how to mimic that in my script.
Resolved. I was trying to use ftplib, which doesn't support SFTP.
This worked for me in what I was trying to achieve:
https://stackoverflow.com/a/3635163/3219210

How to download a directory from an FTPS server using Python on Windows

I need to log in to an FTPS server from Windows. I'm attaching the code I tried for making the FTPS connection.
from ftplib import FTP_TLS
ftps = FTP_TLS('my host address', 990)
ftps.login('username', 'password')
ftps.prot_p()
ftps.retrlines('LIST')
When I execute this code I get socket error 10060. I know my FTP connection is implicit. I am very new to Python, so please help me solve this issue.
Here is the answer to my question. In Python there is a module called chilkat; with it I was able to log in to my implicit FTPS server.
import sys
import chilkat

ftp = chilkat.CkFtp2()

# Any string unlocks the component for the first 30 days.
success = ftp.UnlockComponent("Anything for 30-day trial")
if (success != True):
    print(ftp.lastErrorText())
    sys.exit()

# If this example does not work, try using passive mode
# by setting this to True.
ftp.put_Passive(False)
ftp.put_Hostname("your host ip")
ftp.put_Username("username")
ftp.put_Password("password")
ftp.put_Port(990)

# We don't want AUTH SSL:
ftp.put_AuthTls(False)
# We want implicit SSL:
ftp.put_Ssl(True)

# Connect and login to the FTP server.
success = ftp.Connect()
if (success != True):
    print(ftp.lastErrorText())
    sys.exit()
else:
    # LastErrorText contains information even when
    # successful. This allows you to visually verify
    # that the secure connection actually occurred.
    print(ftp.lastErrorText())
    print("FTPS Channel Established!")

# Do whatever you're going to do ...
# upload files, download files, etc...
localFilename = "c:/temp/hamlet.xml"
remoteFilename = "hamlet.xml"  # the file to download from the FTPS server

# Download a file.
success = ftp.GetFile(remoteFilename, localFilename)
if (success != True):
    print(ftp.lastErrorText())

ftp.Disconnect()
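Chilkat is a commercial library. A common stdlib-only alternative, sketched here without a live server to test against, is to subclass FTP_TLS so the control socket is wrapped in TLS immediately on connect, which is what implicit FTPS on port 990 expects (plain FTP_TLS only supports explicit AUTH TLS):

```python
import ftplib
import ssl

class ImplicitFTP_TLS(ftplib.FTP_TLS):
    """FTP_TLS variant for implicit FTPS: the socket is wrapped in
    TLS as soon as it is created, before any FTP command is sent."""

    def __init__(self, *args, **kwargs):
        self._sock = None
        super().__init__(*args, **kwargs)

    @property
    def sock(self):
        return self._sock

    @sock.setter
    def sock(self, value):
        # Intercept the plain socket ftplib creates on connect()
        # and wrap it in TLS before the FTP handshake begins.
        if value is not None and not isinstance(value, ssl.SSLSocket):
            value = self.context.wrap_socket(value, server_hostname=self.host)
        self._sock = value
```

Usage (host and credentials are placeholders): ftp = ImplicitFTP_TLS(); ftp.connect('my host address', 990); ftp.login('username', 'password'); ftp.prot_p(); ftp.retrlines('LIST').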

Upload file via SFTP with Python

I wrote a simple script to upload a file to an SFTP server in Python. I am using Python 2.7.
import pysftp

srv = pysftp.Connection(host="www.destination.com", username="root",
                        password="password", log="./temp/pysftp.log")
srv.cd('public')  # chdir to public
srv.put('C:\Users\XXX\Dropbox\test.txt')  # upload file to nodejs/

# Closes the connection
srv.close()
The file did not appear on the server. However, no error message appeared. What is wrong with the code?
I have enabled logging and discovered that the file is uploaded to the root folder, not the public folder. It seems srv.cd('public') did not work.
I found the answer to my own question.
import pysftp

srv = pysftp.Connection(host="www.destination.com", username="root",
                        password="password", log="./temp/pysftp.log")
with srv.cd('public'):  # chdir to public
    srv.put('C:\Users\XXX\Dropbox\test.txt')  # upload file to nodejs/

# Closes the connection
srv.close()
Put the srv.put inside the with srv.cd block.
Do not use pysftp; it's dead. Use Paramiko directly. See also pysftp vs. Paramiko.
The code with Paramiko will be pretty much the same, except for the initialization part.
import paramiko

with paramiko.SSHClient() as ssh:
    ssh.load_system_host_keys()
    ssh.connect(host, username=username, password=password)
    sftp = ssh.open_sftp()
    sftp.chdir('public')
    sftp.put('C:\Users\XXX\Dropbox\test.txt', 'test.txt')
To answer the OP's literal question: the key point is that pysftp Connection.cd works as a context manager (so its effect is discarded without a with statement), while Paramiko SFTPClient.chdir does not.
import pysftp

with pysftp.Connection(host="www.destination.com", username="root",
                       password="password", log="./temp/pysftp.log") as sftp:
    sftp.cwd('/root/public')  # The full path
    sftp.put('C:\Users\XXX\Dropbox\test.txt')  # Upload the file
No sftp.close() is needed, because the connection is closed automatically at the end of the with block.
I made a minor change from cd to cwd.
Syntax -
# sftp.put('/my/local/filename') # upload file to public/ on remote
# sftp.get('remote_file') # get a remote file
