Get "Ping Time" with Twisted Server - python

I have a Twisted game server and I want to implement a "ping" command server-side. (The client sends commands to the server, and the server does things and answers.)
But I can't think of any way to get the "ping time" of a connection between the server and the client. Is there a way to get it, for example through
self.transport
or something similar? I couldn't find one.
Any ideas, please?
Thanks for the help.

"ping time" is not an inherent property of a connection, but rather, the amount of time it takes for the client to send a do-nothing request to the server and have the server send an answer.
If you were using, for example, AMP, you could do something like this:
def pingTime(self):
    # Record the time just before issuing the round-trip request.
    then = reactor.seconds()
    def pung(ignored):
        now = reactor.seconds()
        return now - then
    # Fires with the elapsed time once the remote Ping command completes.
    return self.callRemote(Ping).addCallback(pung)
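For completeness, here is a minimal sketch of what the AMP side might look like; the Ping command and the PingableProtocol class are assumptions for illustration, not part of the answer above:
from twisted.internet import reactor
from twisted.protocols import amp


class Ping(amp.Command):
    # A do-nothing command: no arguments, no response payload.
    arguments = []
    response = []


class PingableProtocol(amp.AMP):
    @Ping.responder
    def ping(self):
        # Server side: just acknowledge the request.
        return {}

    def pingTime(self):
        # Client side: measure the round trip of the Ping command.
        then = reactor.seconds()
        def pung(ignored):
            return reactor.seconds() - then
        return self.callRemote(Ping).addCallback(pung)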

Related

How to handle socket.io broken connection in Flask?

I have a very simple Python (Flask socket.io) application which works as a server and another app written in AngularJS which is a client.
To handle connected and disconnected clients I use, respectively:
@socketio.on('connect')
def on_connect():
    print("Client connected")


@socketio.on('disconnect')
def on_disconnect():
    print("Client disconnected")
When a client connects to my app I get that information, but if a client disconnects (for example because of network problems) I don't get any notification.
What is the proper way to handle the situation in which a client disconnects unexpectedly?
There are two types of connections: long-polling and WebSocket.
When you use WebSocket, the client knows instantly that the server has disconnected.
In the case of long-polling, you need to set the ping_interval and ping_timeout parameters (I also found information about heartbeat_interval and heartbeat_timeout, but I don't know how they relate to the ping_* parameters).
From the server's perspective: it doesn't know that the client has disconnected, and the only way to get that information is to set ping_interval and ping_timeout.
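As a rough sketch of how those parameters could be wired up with Flask-SocketIO (the interval and timeout values here are arbitrary, and the disconnect handler still only fires once the timeout expires):
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# Ping every 10 seconds; treat the client as gone if no pong arrives
# within 5 seconds. The values are illustrative only.
socketio = SocketIO(app, ping_interval=10, ping_timeout=5)


@socketio.on('connect')
def on_connect():
    print("Client connected")


@socketio.on('disconnect')
def on_disconnect():
    # With ping_* set, this also fires (after the timeout) when a client
    # silently drops off the network.
    print("Client disconnected")


if __name__ == '__main__':
    socketio.run(app)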

Detect when Websocket is disconnected, with Python Bottle / gevent-websocket

I'm using the gevent-websocket module with Bottle Python framework.
When a client closes the browser, this code
$(window).on('beforeunload', function() { ws.close(); });
helps to close the websocket connection properly.
But if the client's network connection is interrupted, no "close" information can be sent to the server.
Then, often, even 1 minute later, the server still believes the client is connected, and the websocket is still open on the server.
Question: How to detect properly that a websocket is closed because the client is disconnected from network?
Is there a websocket KeepAlive feature available in Python/Bottle/gevent-websocket?
One answer from Web Socket: cannot detect client connection on internet disconnect suggests using a heartbeat/ping packet every x seconds to tell the server "I'm still alive". The other answer suggests using a setKeepAlive(true) feature. Would such a feature be available in gevent-websocket?
Example server code, taken from here:
from bottle import get, template, run
from bottle.ext.websocket import GeventWebSocketServer
from bottle.ext.websocket import websocket

users = set()


@get('/')
def index():
    return template('index')


@get('/websocket', apply=[websocket])
def chat(ws):
    users.add(ws)
    while True:
        msg = ws.receive()
        if msg is not None:
            for u in users:
                u.send(msg)
        else:
            break
    users.remove(ws)

run(host='127.0.0.1', port=8080, server=GeventWebSocketServer)
First, you need to add a timeout to the receive() call (this requires import gevent):
with gevent.Timeout(1.0, False):
    msg = ws.receive()
Then the loop will not block. If you then send even an empty packet and the client doesn't respond, a WebSocketError will be raised and you can close the socket.
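Put together, a sketch of the chat handler from the question with that timeout and a small keep-alive probe might look like this (it reuses get, websocket and users from the example above; the WebSocketError import path is an assumption about gevent-websocket):
import gevent
from geventwebsocket.exceptions import WebSocketError


@get('/websocket', apply=[websocket])
def chat(ws):
    users.add(ws)
    while True:
        msg = None
        # Give receive() one second; on timeout, msg stays None.
        with gevent.Timeout(1.0, False):
            msg = ws.receive()
        if msg is not None:
            for u in users:
                u.send(msg)
        else:
            try:
                # Empty keep-alive packet; a dead client raises WebSocketError.
                ws.send('')
            except WebSocketError:
                break
    users.remove(ws)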

send several messages in one connection, python socketIO

I'm using https://pypi.python.org/pypi/socketIO-client
ping = SocketIO(host, port)
ping.define(SIO)
ping.message(PING)
ping.wait(seconds=1)
namespace definition skipped.
This code works OK: I send one message and receive one from the server.
But I can't figure out how to send a few messages over one connection and analyze the responses in between; I need to make a short interactive session.
Maybe this will be useful for someone someday...
I've figured it out:
ping = SocketIO(host, port)
ping.define(SIO)
ping.message(PING)
ping.wait(seconds=1)
callResponceAnalyzer()
ping.message(MESSAGE1)
ping.wait(seconds=1)
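For what it's worth, here is a rough sketch of the same idea that also collects the server's replies between messages; host, port, PING, MESSAGE1 and analyze() are placeholders, and the on_message hook is an assumption about how socketIO-client dispatches incoming messages to the namespace:
from socketIO_client import SocketIO, BaseNamespace

responses = []


class SIO(BaseNamespace):
    def on_message(self, data):
        # Stash every incoming message for later analysis.
        responses.append(data)


ping = SocketIO(host, port)
ping.define(SIO)

ping.message(PING)
ping.wait(seconds=1)   # let the first response arrive
analyze(responses)     # inspect what came back so far

ping.message(MESSAGE1)
ping.wait(seconds=1)   # let the next response arrive
analyze(responses)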

How to receive an email over smtpd and sockets

I have some code on a client that tries to receive an email sent from a server. The email is definitely sent from the server to the client, and an SMTP server on the client should be able to receive it. Here is my test implementation:
import select
import smtpd

# define the SMTP server (with the real IP address of the client of course)
server = smtpd.PureProxy(('XXX.XXX.XXX.XXX', 25), None)

inputs = [server]
outputs = []
message_queues = {}
readable, writable, exceptional = select.select(inputs, outputs, inputs)

# Only one socket in the list returned (there is exactly one)
socket = readable[0]
# Accept the connection or get it or whatever
connection, client_address = socket.accept()
# get the data
data = connection.recv(1024)
print data
After a considerably long time some data is received, which in no way resembles the content of the email. It is always
EHLO YYY.YYY.YYY.YYY
with YYY.YYY.YYY.YYY being the address of the server. I am no expert in SMTP and sockets, so what am I doing wrong, and how do I correctly receive the email and its contents?
Thanks
Alex
The EHLO is part of the SMTP protocol exchange; it is the client sending its greeting to your server, which doesn't respond properly (because it doesn't respond at all). When the client gets tired of waiting for "a considerably long time", the session times out and your server shows what it received.
You seem to be confused as to which process is the server. The smtpd module creates servers or Mail Transport Agents, not clients. As noted in the smtpd documentation for SMTPServer:
Create a new SMTPServer object, which binds to local address localaddr. It will treat remoteaddr as an upstream SMTP relayer. It inherits from asyncore.dispatcher, and so will insert itself into asyncore's event loop on instantiation.
You also seem to have the sense of localaddr and remoteaddr confused. The localaddr is not (as your comment claims) the address of the client, but where that server should accept connections from. You might want to try in place of your code:
import asyncore
import smtpd
server = smtpd.DebuggingServer(('localhost', 2525), None)
asyncore.loop()
Which can be tested with client code (in a separate process) of:
import smtplib
client = smtplib.SMTP('localhost', 2525)
client.sendmail('from', 'to', 'body')
Finally, a PureProxy with a remoteaddr of None, if it works at all, would proxy mail into nowhere, which is probably not what you want in a proxy.
That is the proper start of the ESMTP protocol dialog. Your program needs to understand and handle at least the basic SMTP verbs; see RFC5321.
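If the goal is simply to see the message contents on the receiving side, one option (a sketch, not taken from the answers above) is to let smtpd handle the SMTP verbs and override process_message in a subclass:
import asyncore
import smtpd


class MessagePrinter(smtpd.SMTPServer):
    # smtpd speaks the SMTP dialog (EHLO, MAIL FROM, RCPT TO, DATA, ...)
    # and hands us the finished message here.
    def process_message(self, peer, mailfrom, rcpttos, data):
        print 'Message from %s to %s:' % (mailfrom, rcpttos)
        print data


server = MessagePrinter(('localhost', 2525), None)
asyncore.loop()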

Slow Python HTTP server on localhost

I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, where the server and all clients are run on the local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect.
Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of 7.
Below is my very simple server example, which simply returns 'Hello world' for any GET request.
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):

    def do_GET(self):
        print("Just received a GET request")
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write('Hello world')
        return

    def log_request(self, code=None, size=None):
        print('Request')

    def log_message(self, format, *args):
        print('Message')

if __name__ == "__main__":
    try:
        server = HTTPServer(('localhost', 80), MyHandler)
        print('Started http server')
        server.serve_forever()
    except KeyboardInterrupt:
        print('^C received, shutting down server')
        server.socket.close()
Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately setting them to just print a static message did not solve my problem.
Also, notice that I have put in a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs prior to this message being printed, meaning that none of the stuff that comes after it is causing a delay.
The request handler issues an inverse name lookup in order to display the client name in the log. My Windows 7 machine issues a first DNS lookup that fails with no delay, followed by 2 successive NetBIOS name queries to the HTTP client, and each one runs into a 2-second timeout, for a total delay of 4 seconds!
Have a look at https://bugs.python.org/issue6085
Another fix that worked for me is to override BaseHTTPRequestHandler.address_string() in my request handler with a version that does not perform the name lookup:
def address_string(self):
    host, port = self.client_address[:2]
    # return socket.getfqdn(host)
    return host
Philippe
This does not sound like a problem with the code. A nifty way of troubleshooting an HTTP server is to connect to it with telnet on port 80. Then you can type something like:
GET /index.html HTTP/1.1
host: www.blah.com
<enter> <enter>
and observe the server's response. See if you get a delay using this approach.
You may also want to turn off any firewalls to see if they are responsible for the slowdown.
Try replacing localhost with 127.0.0.1. If that solves the problem, then that is a clue that the FQDN lookup may indeed be the cause.
Replacing localhost with 127.0.0.1 can solve the problem :)
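Assuming that advice refers to the URL the client requests (the same substitution can be made in the HTTPServer bind address), a minimal check with the urllib2 call from the question would be:
import urllib2

# Use the IP literal so no name resolution happens for 'localhost'.
print urllib2.urlopen('http://127.0.0.1/').read()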
