Flask-Sockets keepalive - python

I recently started using flask-sockets in my Flask application, with the native WebSocket API on the client side. Is there a proper way to send ping requests from the server at regular intervals as a keepalive?
Going through the geventwebsocket library, I noticed the handle_ping(...) definition, but it's never called. Is there a way to set a ping interval on the WebSocket?
My sockets sometimes die after about a minute and a half, though not consistently.
@socket_blueprint.route('/ws', defaults={'name': ''})
def echo_socket(ws):
    ws_list.append(ws)  # keep track of connected clients
    while not ws.closed:
        msg = ws.receive()
        ws.send(msg)
I could probably spin up a separate thread and manually send ping opcodes to the clients every 30 seconds if I keep them in a list, but I feel like there should be a better way to handle this.

On the server, create a thread (or greenlet) that periodically sends some data, any data, to each client. If a client has already disconnected, the server will see the socket reported as closed after about 15 seconds.
I haven't found any ping-related method in gevent-websocket or flask-sockets, so I take this approach instead.
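A minimal sketch of that idea, assuming the app runs under gevent and that the ws_list from the question above holds every connected geventwebsocket socket (the names and the 30-second interval are just placeholders):

import gevent

def keepalive(interval=30):
    # Periodically push a tiny message to every client so idle connections
    # don't get dropped, and prune sockets that have already closed.
    while True:
        gevent.sleep(interval)
        for ws in list(ws_list):
            if ws.closed:
                ws_list.remove(ws)
                continue
            try:
                ws.send('ping')  # any payload works as an application-level keepalive
            except Exception:
                ws_list.remove(ws)  # client went away mid-send

gevent.spawn(keepalive)

If you want protocol-level pings instead of application data, gevent-websocket's WebSocket also has a send_frame method that accepts an opcode (e.g. ws.OPCODE_PING), but a plain send is usually enough to keep the connection alive.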

Related

Pika connection closed after 3 heartbeats

I'm writing a script which receives HTTP requests (using Tornado), parses them, and sends them to a RabbitMQ broker using pika.
The code looks like this:
def main():
    conn_params = pika.ConnectionParameters(
        host=BROKER_NAME,
        port=BROKER_PORT,
        ssl=True,
        virtual_host=VIRTUAL_HOST,
        credentials=pika.PlainCredentials(BROKER_USER, BROKER_PASS),
        heartbeat_interval=HEARTBEAT_INTERVAL
    )
    conn = pika.BlockingConnection(conn_params)
    channel = conn.channel()

    # Create the web server which handles application requests.
    application = tornado.web.Application([
        (URL_BILLING, SomeHandler, dict(channel=channel))
    ])

    # Start the server
    application.listen(LISTENING_PORT)
    tornado.ioloop.IOLoop.instance().start()
As you can see, I open a single connection and channel, and pass the channel to any instance of the handler which is created, the idea being to save traffic and avoid opening a new connection/channel for every request.
The issue I'm experiencing is that the connection is closed after 3 heartbeats. I used Wireshark to figure out what the problem is, but all I can see is that the server sends a PSH (I'm assuming this is the heartbeat) and my script replies with an ACK. This happens 3 times with HEARTBEAT_INTERVAL in between them, and then the server just sends a FIN and the connection dies.
Any idea why that happens? Also, should I keep the connection open or is it better to create a new one for every message I need to send?
Thanks for the help.
UPDATE: I looked in the RabbitMQ log, and it says:
Missed heartbeats from client, timeout: 10s
I thought the server was meant to send heartbeats to the client, to make sure it answers, and this agrees with what I observed using Wireshark, but from this log it seems it is the client which is meant to report to the server, not the other way around, and the client, evidently, doesn't report. Am I getting this right?
UPDATE: Figured it out, sort of. A blocking connection (which is what I used) is unable to send heartbeats because it's, well, blocking. As mentioned in this issue, the heartbeat_interval parameter is only used to negotiate the connection with the server, but the client doesn't actually send heartbeats. Since this is the case, what is the best way to keep a long-running connection with pika? Even if I don't specify heartbeat_interval, the server defaults to a heartbeat every 10 minutes, so the connection will die after 30 minutes...
For future visitors:
Pika has an async example which uses heartbeat:
http://pika.readthedocs.org/en/0.10.0/examples/asynchronous_publisher_example.html
Specifically for Tornado, this example shows how to use Tornado's IOLoop with pika's async model:
http://pika.readthedocs.org/en/0.10.0/examples/tornado_consumer.html
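For reference, here is a heavily trimmed sketch of that async model on Tornado's IOLoop, based on the pika 0.10 docs linked above (the callback names are made up, and BROKER_NAME/HEARTBEAT_INTERVAL are the question's placeholders); with an async adapter, pika services heartbeats itself, so the connection stays alive:

import pika
import tornado.ioloop
from pika.adapters.tornado_connection import TornadoConnection

def on_channel_open(channel):
    # The channel is ready; hand it to your Tornado request handlers here.
    pass

def on_connection_open(connection):
    connection.channel(on_open_callback=on_channel_open)

params = pika.ConnectionParameters(host=BROKER_NAME,
                                   heartbeat_interval=HEARTBEAT_INTERVAL)
connection = TornadoConnection(params, on_open_callback=on_connection_open)
tornado.ioloop.IOLoop.instance().start()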

Threading a UDP server

I would like to make a multi-threaded UDP server in Python.
The purpose is to be able to connect several clients to the server (not socket connections, but logins with a username and password), interact with each of them, and perform some actions on the server, all at the same time.
I am a little confused by all the different types of threading and I don't know what to use.
To be clearer, this is exactly what I want to be able to do at the same time:
Wait for clients to send data for the first time and register their IP in a database
Interact with "connected" clients by waiting for them to send datagrams and responding to them
Act on the server itself, for example to change a client's password in my database
I would have a look at a framework that is good at handling asynchronous I/O. The idea is not to have one thread per socket that blocks until data arrives, but to let one thread handle many sockets at once. This scales well if you want your server to handle many clients.
For example:
Gevent - "a coroutine-based Python networking library", example (see the sketch after this list)
Twisted - "an event-driven networking engine", example
Eventlet - "a concurrent networking library", example (TCP, but it uses a patched socket so you can also refer to the Python wiki page about UDP Communication)
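To make the gevent suggestion concrete, here is a minimal sketch with gevent's DatagramServer (the port, the in-memory clients dict and the ack payload are made up, and real username/password checking is left as a stub):

from gevent.server import DatagramServer

class AuthUDPServer(DatagramServer):
    def __init__(self, *args, **kwargs):
        super(AuthUDPServer, self).__init__(*args, **kwargs)
        self.clients = {}  # ip -> client state (stand-in for the database)

    def handle(self, data, address):  # each datagram runs in its own greenlet
        ip = address[0]
        if ip not in self.clients:
            self.clients[ip] = {'registered': True}  # first contact: register
        self.socket.sendto(b'ack: ' + data, address)

if __name__ == '__main__':
    AuthUDPServer('0.0.0.0:9000').serve_forever()

Because each datagram is handled in its own greenlet, one slow client doesn't block the others, and server-side actions (say, changing a password) can run in another greenlet that updates the same clients dict.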

Which web servers are compatible with gevent and how do the two relate?

I'm looking to start a web project using Flask and its SocketIO plugin, which depends on gevent (something something greenlets), but I don't understand how gevent relates to the webserver. Does using gevent restrict my server choice at all? How does it relate to the different levels of web servers that we have in python (e.g. Nginx/Apache, Gunicorn)?
Thanks for the insight.
First, let's clarify what we are talking about:
gevent is a library that makes it easy to program with event loops. It is a way to return responses immediately without "blocking" the requester.
socket.io is a JavaScript library for creating clients that maintain permanent connections to servers, which send them events; the library can then react to these events.
greenlet: think of this as a lightweight thread, a way to launch multiple workers that do some tasks.
A highly simplified overview of the entire process follows:
Imagine you are creating a chat client.
You need a way to notify the users' screens when anyone types a message. For this to happen, you need some way to tell all the users when a new message is there to be displayed. That's what socket.io does. You can think of it like a radio that is tuned to a particular frequency. Whenever someone transmits on this frequency, the code does something. In the case of the chat program, it adds the message to the chat box window.
Of course, if you have a radio tuned to a frequency (your client), then you need a radio station/dj to transmit on this frequency. Here is where your flask code comes in. It will create "rooms" and then transmit messages. The clients listen for these messages.
You can also write the server-side ("radio station") code in socket.io using node, but that is out of scope here.
The problem here is that traditionally - a web server works like this:
1. A user types an address into a browser and hits enter (or go).
2. The browser reads the web address, and then, using the DNS system, finds the IP address of the server.
3. It creates a connection to the server, and then sends a request.
4. The web server accepts the request.
5. It does some work, or launches some process (depending on the type of request).
6. It prepares (or receives) a response from the process.
7. It sends the response to the client.
8. It closes the connection.
Between steps 3 and 8, the client (the browser) is waiting for a response - it is blocked from doing anything else. So if there is a problem somewhere, say some server-side script is taking too long to process the request, the browser stays stuck on the white page with the loading icon spinning. It can't do anything until the entire process completes. This is just how the web was designed to work.
This kind of 'blocking' architecture works well for 1-to-1 communication. However, for multiple people to keep updated, this blocking doesn't work.
The event libraries (gevent) help with this because they accept the request without blocking the client: they return immediately and finish the work when the process is complete.
Your application, however, still needs to notify the client, but since the connection has been closed there is no way to contact the client again.
In order to notify the client and to make sure the client doesn't need to "refresh", a permanent connection should be open - that's what socket.io does. It opens a permanent connection, and is always listening for messages.
So a work request comes in from one end and is accepted.
The work is executed and a response is generated by something else (it could be the same program or another program).
Then, a notification is sent "hey, I'm done with your request - here is the response".
The person from step 1, listens for this message and then does something.
Underneath it all is WebSocket, a new full-duplex protocol that enables all this radio/DJ functionality.
Things common between WebSockets and HTTP:
Work on the same port (80)
WebSocket requests start off as HTTP requests for the handshake (an upgrade header), but then shift over to the WebSocket protocol - at which point the connection is handed off to a websocket-compatible server.
All your traditional web server has to do is listen for this handshake request, acknowledge it, and then pass the request on to a websocket-compatible server - just like any other normal proxy request.
For Apache, you can use mod_proxy_wstunnel
For nginx, versions 1.3+ have WebSocket support built in
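As a concrete, purely illustrative example, a gevent/WebSocket app such as the flask-sockets app from the first question is typically run on gevent's own pywsgi server with the gevent-websocket handler, and Nginx or Apache then just proxies the upgrade request to it as described above (app and the port are assumptions):

from gevent import pywsgi
from geventwebsocket.handler import WebSocketHandler

# app is your Flask application (with flask_sockets.Sockets(app) applied).
server = pywsgi.WSGIServer(('0.0.0.0', 8000), app,
                           handler_class=WebSocketHandler)
server.serve_forever()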

Sleep after ZMQ connect?

In a ROUTER-ROUTER setup, after I connect one ROUTER socket to another, the send() usually doesn't go through unless I sleep (for, say, 0.1 s) after the connect() (although it sometimes does, by chance).
Is there a way to make sure I am connected before I send?
Why aren't the send()s queued and properly executed until the connection is made?
Also, this is not about whether the server on the other end is alive but rather that I send() too soon after I connect() and somehow it fails. I am not sure why.
Is there a way to make sure I am connected before I send?
Not directly. The recommended approach is to use something like the Freelance Protocol and keep pinging until you receive a response. If you stop receiving responses to your pings, you should consider yourself disconnected.
Why aren't the send()s queued and properly executed until the connection is made?
A ROUTER cannot send a message to a peer until both sides have completed an internal ZeroMQ handshake. That's just the way it works, since the ROUTER requires the ID of its peer in order to "route". Apparently sleeping for 0.1 s is the right amount of time on your dev system. If you need the ability to connect and then send without sleeping or retrying, then you need to use a different pattern.
For example, with DEALER-ROUTER, a DEALER client can connect and immediately send, and ZeroMQ will queue the message until it can be delivered. The reason this works is that the DEALER does not require the ID of the peer, since it does not "route". When the ROUTER server receives the message, the handshake is already complete, so it can respond right away without sleeping.
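A small pyzmq sketch of that DEALER-ROUTER behaviour (endpoint and payloads made up): the DEALER can send immediately after connect() because ZeroMQ queues the message until the handshake completes, and the ROUTER learns the peer's identity from the first message it receives:

import zmq

ctx = zmq.Context.instance()

server = ctx.socket(zmq.ROUTER)
server.bind("tcp://127.0.0.1:5555")

client = ctx.socket(zmq.DEALER)
client.connect("tcp://127.0.0.1:5555")
client.send(b"hello")  # no sleep needed; queued until the connection is up

identity, msg = server.recv_multipart()      # ROUTER now knows the peer's ID
server.send_multipart([identity, b"world"])  # ...so it can route a reply back

print(client.recv())  # b"world"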

Writing a p2p client/server app [duplicate]

Possible Duplicate:
How to write a twisted server that is also a client?
How can I create a TCP client/server app with Twisted where the server can also send requests, not just answer them? Sort of like a p2p app, but where clients always initiate the connection. Since I don't know when the requests from the server will occur, I don't see how I can do this once the reactor is started.
The question you have to ask yourself is: why is the server sending a request?
Presumably something has happened in the world that would prompt the server to send a request; it wouldn't just do it at random. Even if it did it at random, the thing that has happened in the world would be "some random amount of time has passed". In other words, callLater(random(...), doSomething).
When you are writing a program with Twisted, you start off by setting up ways to react to events. Then you run the reactor - i.e. the "thing that reacts to events" - forever. At any time you can set up new ways to react to incoming network events (reactor.connectTCP, reactor.listenTCP, reactor.callLater) or tear down existing waiting things (protocol.loseConnection, port.stopListening, delayedCall.cancel). You don't need to re-start the reactor; in fact, really, the only thing you should do before the reactor runs is do reactor.callWhenRunning(someFunctionThatListensOrConnects), and write someFunctionThatListensOrConnects to do all your initial set-up. That set-up then happens once the reactor is already running, which demonstrates that you don't need to do anything in advance; the reactor is perfectly capable of changing its configuration as it runs.
If the event that causes the server to send a message to client B is the fact that client A sent it a message, then your question is answered by the FAQ, "how do I make input on one connection result in output on another?"
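As an illustration of that FAQ answer (the port, the 5-second timer and all names are made up), the factory can keep a list of connected protocols, and any event handler - here just a callLater loop - can write to them whenever the server decides to send something:

from twisted.internet import protocol, reactor

class PushProtocol(protocol.Protocol):
    def connectionMade(self):
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def dataReceived(self, data):
        self.transport.write(data)  # ordinary request/response handling

class PushFactory(protocol.ServerFactory):
    protocol = PushProtocol

    def __init__(self):
        self.clients = []

def broadcast(factory):
    # A server-initiated "request": push to every connected client.
    for client in list(factory.clients):
        client.transport.write(b"server says hello\r\n")
    reactor.callLater(5, broadcast, factory)

factory = PushFactory()
reactor.listenTCP(8000, factory)
reactor.callWhenRunning(broadcast, factory)
reactor.run()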
