Knowing whether or not FTP is still connected with ftplib - python

I was wondering if there is an easy way of knowing whether a connection to an FTP server is still active using ftplib.
So if you have an active connection like this:
import ftplib
ftp = ftplib.FTP("ftp.myserver.com", "admin", "pass123")
is there something like the following pseudo code that can be queried to check if the connection is still active?
if ftp.is_connected() == True:
    print "Connection still active"
else:
    print "Disconnected"

You could try retrieving something from the server, catch any exception that occurs, and decide whether the connection is still alive based on that.
For example:
import socket

def is_connected(ftp_conn):
    try:
        # Issue a real request; a dead control connection will raise
        # a timeout or socket-level error here.
        ftp_conn.retrlines('LIST')
    except (socket.timeout, OSError):
        return False
    return True
This simple example prints the 'LIST' results to stdout; you can change that by passing your own callback to the retrlines method.
(Make sure you set a timeout in the initial FTP object construction, as the default is for it to be None.)
ftp = ftplib.FTP("ftp.gnu.org", timeout=5, user='anonymous', passwd='')

Related

Reconnect to server python

I'm trying to connect a client to a server. After I connect I want to do some validation (checking whether I got the message 'ready'), and if not, try to connect again until the server sends 'ready' and not something else.
import socket

tcp_sock = socket.socket()
tcp_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
tcp_sock.bind(('', 7865))

connected = False
while not connected:
    tcp_sock.connect(('192.168.0.111', 7865))
    if tcp_sock.recv(1024) == b'ready':
        connected = True
    else:
        tcp_sock.close()
If I put tcp_sock.close() after the else, I get this error when connecting again: "an operation was attempted on something that is not a socket".
If I put pass instead of tcp_sock.close() (i.e. do not close the socket), I get this error when connecting again: "Only one usage of each socket address (protocol/network address/port) is normally permitted".
PS. It works without errors if I leave out the tcp_sock.bind(('', 7865)), but I have to bind the client to a specific port (here 7865), and I'm using tcp_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) to be able to reconnect to the server.
How can I reconnect to the server without getting an error?
I think you are doing it correctly, but I would suggest you use normal receiving methods, as it'll be easier for Python to send and receive the data.
CLIENT.py
import socket
import sys
import select

my_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
my_socket.connect(("__YOUR_IP__", 7865))  # IMPORTANT
sys.stdout.write('your message: ')
sys.stdout.flush()

while True:
    # Wait until either stdin or the socket has data to read (this part was
    # elided in the original snippet; reconstructed with select, as the
    # import suggests).
    readable, _, _ = select.select([sys.stdin, my_socket], [], [])
    for source in readable:
        if source is sys.stdin:
            my_socket.send(sys.stdin.readline())
        else:
            data = my_socket.recv(1024)
            print "data", data
If a TCP connection is not successful, connect will raise an error, so handling exceptions from the connect call should be enough to check whether you are connected.
But if you want to fix it this way: close() also releases the socket, so you need to create the socket (and bind it) again inside the while loop before calling connect, as in the sketch below.
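A minimal sketch of that idea (mine, not from the original answer), reusing the asker's host, port, and b'ready' check:
import socket
import time

connected = False
while not connected:
    # Recreate the socket on every attempt: once close() has been called,
    # the old socket object cannot be used for another connect().
    tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_REUSEADDR lets us re-bind to port 7865 even if the previous
    # connection is still in TIME_WAIT.
    tcp_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    tcp_sock.bind(('', 7865))
    try:
        tcp_sock.connect(('192.168.0.111', 7865))
        if tcp_sock.recv(1024) == b'ready':
            connected = True
        else:
            tcp_sock.close()
            time.sleep(1)
    except socket.error:
        tcp_sock.close()
        time.sleep(1)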

Disconnection from an FTP server in Python with the library pyftpdlib

I would like to know if there is a way to detect, in Python, when someone disconnects from my FTP server built with pyftpdlib. I know there is a callback called 'on_logout' but I don't know how to use it.
Thank you for your answers !
Create your own implementation of FTPHandler and use it with your server instance.
Note that on_logout is triggered only in response to a user explicitly logging out, and most FTP clients do not do that; they simply disconnect. For that case, use on_disconnect.
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

class MyFTPHandler(FTPHandler):
    def on_logout(self, username):
        print("%s logged out" % username)

    def on_disconnect(self):
        print("disconnected")

handler = MyFTPHandler
# ...
server = FTPServer(('', 21), handler)
server.serve_forever()

python mysql.connector write failure on connection disconnection stalls for 30 seconds

I use python module mysql.connector for connecting to an AWS RDS instance.
Now, as we know, if we do not send a request to the SQL server for a while, the connection gets dropped.
To handle this, I reconnect to SQL in case a read/write request fails.
Now my problem is with the "request fails" part: it takes a significant amount of time to fail, and only then can I reconnect and retry my request. (I have pointed this out in a comment in the code snippet.)
For a real-time application such as mine, this is a problem. How could I solve this? Is it possible to find out if the disconnection has already happened so that I can try a new connection without having to wait on a read/write request?
Here is how I handle it in my code right now:
def fetchFromDB(self, vid_id):
    fetch_query = "SELECT * FROM <db>"
    success = False
    attempts = 0
    output = []
    while not success and attempts < self.MAX_CONN_ATTEMPTS:
        try:
            if self.cnx == None:
                self._connectDB_()
            if self.cnx:
                cursor = self.cnx.cursor()  # MY PROBLEM: This step takes too long to fail in case the connection has expired.
                cursor.execute(fetch_query)
                output = []
                for entry in cursor:
                    output.append(entry)
                cursor.close()
                success = True
            attempts = attempts + 1
        except Exception as ex:
            logging.warning("Error")
            if self.cnx != None:
                try:
                    self.cnx.close()
                except Exception as ex:
                    pass
                finally:
                    self.cnx = None
    return output
In my application I cannot tolerate a delay of more than 1 second while reading from mysql.
While configuring mysql, I'm doing just the following settings:
SQL.user = '<username>'
SQL.password = '<password>'
SQL.host = '<AWS RDS HOST>'
SQL.port = 3306
SQL.raise_on_warnings = True
SQL.use_pure = True
SQL.database = <database-name>
There are some contrivances, like generating an ALARM signal or similar if a function call takes too long. Those can be tricky with database connections or may not work at all; other SO questions cover that approach.
One approach would be to set the connection_timeout to a known value when you create the connection making sure it's shorter than the server side timeout. Then if you track the age of the connection yourself you can preemptively reconnect before it gets too old and clean up the previous connection.
Alternatively you could occasionally execute a no-op query like select now(); to keep the connection open. You would still want to recycle the connection every so often.
But if there are long enough periods between queries (where they might expire) why not open a new connection for each query?
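As a rough sketch of the first two suggestions (the connection_timeout value and the hypothetical ensure_connected() helper are illustrative, not from the question): mysql.connector lets you cap how long a connect blocks, and ping(reconnect=True) probes the server cheaply and reconnects if the link has already died.
import mysql.connector
from mysql.connector import Error

config = {
    "user": "<username>",
    "password": "<password>",
    "host": "<AWS RDS HOST>",
    "port": 3306,
    "database": "<database-name>",
    "connection_timeout": 5,   # fail fast instead of stalling
}

cnx = mysql.connector.connect(**config)

def ensure_connected(cnx):
    # ping() raises if the server is unreachable; with reconnect=True it
    # transparently re-establishes the connection instead of letting the
    # next cursor()/execute() call stall and fail.
    try:
        cnx.ping(reconnect=True, attempts=3, delay=1)
    except Error:
        cnx = mysql.connector.connect(**config)
    return cnx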

Python 'asynchat' chat server - make client wait till server is up

I created a simple chat server using the asynchat module in Python. My intention is to make the chat clients wait for a server to be up and running.
I tried doing this using the handle_connect_event by setting connected to True there like:
def handle_connect_event(self):
    self.connected = True
Then I am looping on connect command till connected becomes True:
while not self.connected:
    try:
        self.connect((host, port))
    except:
        time.sleep(1)
I read in the asyncore dispatcher code that handle_connect_event is called when the connection is successful:
def connect(self, address):
    self.connected = False
    err = self.socket.connect_ex(address)
    # XXX Should interpret Winsock return values
    if err in (EINPROGRESS, EALREADY, EWOULDBLOCK):
        return
    if err in (0, EISCONN):
        self.addr = address
        self.handle_connect_event()
    else:
        raise socket.error(err, errorcode[err])
So I believe that when the connection is created, handle_connect_event should be triggered, setting connected to True and breaking my loop. However, this does not happen.
Does anybody know why? And, if this method is wrong, how do we make chat clients wait for server?
I am new to these things, so please explain keeping in mind I am a newbie :)
I guess my machine was crazy for a while but my code works :)
I am able to launch 2 client machines, then launch server and get the tasks done.
Best feeling ever ! :)
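For anyone landing here with the same question: an alternative that avoids relying on asyncore's non-blocking connect is to probe the server with an ordinary blocking socket first, and only start the asynchat client once the probe succeeds. A sketch (not from the original thread; host and port are illustrative):
import socket
import time

def wait_for_server(host, port, delay=1):
    # Keep retrying a plain blocking connect until the server accepts it,
    # then close the probe socket; the real asynchat client starts afterwards.
    while True:
        probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            probe.connect((host, port))
            probe.close()
            return
        except socket.error:
            probe.close()
            time.sleep(delay)

wait_for_server('localhost', 9999)
# ...now construct the asynchat client and call asyncore.loop()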

detect python-xmpp timeouts

I have an application that needs to send XMPP messages. Those occasions are rare (sometimes none for days) but may then come in bunches. I have no need to receive anything, I just want to send. The straightforward approach runs into undetected timeouts: the last send() does not take place (the receiver does not get anything) but returns without reporting the problem (it returns a simple id as if everything worked fine). Only the next call to send() then raises an IOError('Disconnected from server.').
I could disconnect and reconnect for every message, but I don't like this because sometimes that would mean reconnecting very often (and I don't know whether servers appreciate being connected to multiple times in a second).
I could try the approach given as answer in this question here, but I do not really have a need for receiving the XMPP replies.
Question: Is there a simple way to detect the connection timeout before or after sending without trying to send a second message (which would spam the receiver in case everything worked fine)?
My straightforward approach:
import xmpp

def connectXmppClient(fromJidName, password):
    fromJid = xmpp.protocol.JID(fromJidName)
    xmppClient = xmpp.Client(fromJid.getDomain(), debug=[])
    connection = xmppClient.connect()
    if not connection:
        raise Exception("could not setup connection", fromJid)
    authentication = xmppClient.auth(
        fromJid.getNode(), password, resource=fromJid.getResource())
    if not authentication:
        raise Exception("could not authenticate")
    return xmppClient

def sendXmppMessage(xmppClient, toJidName, text):
    return xmppClient.send(xmpp.protocol.Message(toJidName, text))
if __name__ == '__main__':
    import sys, os, time, getpass
    if len(sys.argv) < 3:
        print "Syntax: xsend fromJID toJID"
        sys.exit(0)
    fromJidName = sys.argv[1]
    toJidName = sys.argv[2]
    password = getpass.getpass()
    xmppClient = connectXmppClient(fromJidName, password)
    while True:
        line = sys.stdin.readline()
        if not line:
            break
        print xmppClient.isConnected()
        id = sendXmppMessage(xmppClient, toJidName, line)
        print id
You need to register a disconnect handler using xmppClient.RegisterDisconnectHandler(). This lets you specify a function that will get called upon a disconnect.
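A small sketch of what that could look like with xmpppy (the handler body and the periodic Process() call are illustrative, not from the original answer):
def onDisconnect():
    print "xmpp connection lost"
    # reconnect / re-authenticate here before the next send()

xmppClient.RegisterDisconnectHandler(onDisconnect)

# Calling Process() now and then lets the client read pending data from
# the server, so a dropped connection is noticed (and the handler fired)
# instead of the next send() silently going nowhere.
xmppClient.Process(1)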
