Flask socket does not close - python

I have an application that streams video on the web using Flask, something like this example. But sometimes, when the user closes their connection, Flask does not recognize the disconnection and keeps the socket open. Each socket in Linux is a file descriptor, and the maximum number of open file descriptors in Linux is 1024 by default. After a while (e.g. 24 hours) new users cannot see the video stream, because Flask cannot create a new socket (which requires a file descriptor).
The same happens when I use Flask sockets: from flask_sockets import Sockets. I don't know what happens to these sockets. Most of the time, when the user refreshes the browser or closes it normally, the socket gets closed on the server.
As a test, I removed the network cable from my laptop (acting as a client) and saw that in this case the sockets stay open and Flask does not recognize this kind of disconnection.
Any help will be appreciated.
Update 1:
I put this code at the top of the main script, and it recognizes some of the disconnections, but the main problem is still there.
import socket
socket.setdefaulttimeout(1)
Update 2:
This seems to be a duplicate of this post, but that one has no solution. I checked the socket states (with sudo lsof -p your_process_id | egrep 'CLOSE_WAIT') and all of them are in CLOSE_WAIT.
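For the pulled-cable case (where no FIN ever arrives), TCP keepalive lets the kernel eventually detect the dead peer; note that sockets stuck in CLOSE_WAIT are a different symptom, meaning the application never called close() after the client's FIN did arrive. Here is a sketch, assuming you can reach the underlying accepted socket (with Flask's builtin server that usually means customizing the WSGI layer); the timing values are arbitrary:

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=3):
    """Ask the kernel to probe an idle connection so a vanished peer
    is eventually detected and the socket is closed with an error."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tuning knobs; guarded so the sketch stays portable.
    if hasattr(socket, 'TCP_KEEPIDLE'):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)       # secs idle before probing
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)  # secs between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)       # failed probes before reset
```

With the values above, a peer that silently disappears is torn down after roughly idle + interval * count seconds instead of lingering forever.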


View real-time console on a web page with Flask python [duplicate]

I am trying to create a web application with Flask.
The problem is that I have been stuck on it for 2 weeks.
I would like to run a python command that launches a server, retrieve its standard output, and display it in real time on the web site.
I do not know how to do this, because if I use "render_template" I do not see how to update the web site with the values sent to the console.
I use Python 2.7. Thank you very much.
It's going to take a lot of work to get this done, probably more than you think, but I'll still try to help you out.
To get any real-time updates to the browser, you're going to need something like a socket connection: something that allows you to send multiple messages at any time, not just when the browser requests them.
With a regular HTTP connection you can only send a response once. The moment you call return, the connection ends, and you cannot call return again to send another message.
So with a regular HTTP request you can deliver the log messages once, but when the log changes later you have no way to push those changes to the client, because the connection has already ended.
There is a way around this: a socket connection. A socket connection stays open between the user and the server, and both sides can send messages at any time for as long as it is open. The connection only closes when you close it manually.
Check this answer for ways you could have real-time updates with Flask. If you want to do it with sockets (which is what I suggest), use the WebSocket interface instead.
There are options like Socket.IO for Python, which let you write WebSocket applications in Python.
Overall this is going to be split into 5 parts:
1. Start a websocket server when the Flask application starts
2. Create a javascript file (one that the browser loads) that connects to the websocket server
3. Find the function that gets triggered whenever Flask logging occurs
4. Send a socket message with the log inside of it
5. Make the browser display the log whenever it receives a websocket message
Here's a sample application written in Flask and Socket.IO which should give you an idea of how to use it.
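The logging hook and the outgoing message (the middle steps of the list above) can be sketched with a custom logging.Handler; the send callback here is only a stand-in for a real websocket emit such as socketio.emit, which is an assumption about your setup:

```python
import logging

class SocketLogHandler(logging.Handler):
    """Forwards every log record to a callback, e.g. a websocket emit."""
    def __init__(self, send):
        super(SocketLogHandler, self).__init__()
        self.send = send  # callable that pushes one formatted message

    def emit(self, record):
        self.send(self.format(record))

# Demo: collect messages in a list instead of a real websocket.
received = []
logger = logging.getLogger('demo')
logger.setLevel(logging.INFO)
logger.addHandler(SocketLogHandler(received.append))
logger.info('server started')
print(received)  # ['server started']
```

In a real application you would attach this handler to whatever logger the server writes to, and pass a callback that broadcasts to the connected browsers.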
There's a lot to it, and there are parts that might be new to you, like websockets, but don't let that stop you from doing what you want to do.
I hope this helps. If any part confuses you, feel free to respond.
The simple part: server side, you could redirect the stdout and stderr of the server to a file:
import sys

print("output will be redirected")
# save the original stdout, then redirect it to a file
save_stdout = sys.stdout
fh = open("output.txt", "w")
sys.stdout = fh
The server itself would then read that file within a subprocess:
import select
import subprocess

f = subprocess.Popen(['tail', '-F', 'output.txt', '-n1'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p = select.poll()
p.register(f.stdout)
and run the following in a thread:
output = ""
while True:
    if p.poll(1):
        output += f.stdout.readline()
You can also use the tailhead or tailer libraries instead of the system tail
Now, the problem is that the standard output is a kind of active pipe and the output is going to grow forever, so you'll need to keep only a frame of that output buffer.
If only one user could connect to that window, the problem would be different, as you could flush the output as soon as it is sent to that single client. See the difference between a terminal window and a multiplexed, remote terminal window?
I don't know Flask, but client side you only need some javascript to poll the server every second with an ajax request asking for the complete log (or, in the unique-client case, for the buffer to be appended to the DOM). You could also use websockets, but they are not an absolute necessity.
A compromise between the two is possible (infinite log with real-time append, multiplexed at different rates), and it requires keeping a separate output buffer for each client.
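The polling client needs a matching server-side endpoint that returns the current frame of the buffer. A minimal Flask sketch (the route name and the buffer cap are arbitrary choices, not part of the original answer):

```python
from flask import Flask, jsonify

app = Flask(__name__)
log_buffer = []   # the tail-reading thread appends lines here
MAX_LINES = 200   # keep only a frame of the output, not the infinite log

@app.route('/log')
def log():
    # The browser polls this URL every second and re-renders the lines.
    return jsonify(lines=log_buffer[-MAX_LINES:])

# Quick check with Flask's test client:
log_buffer.extend(['line one', 'line two'])
with app.test_client() as client:
    print(client.get('/log').get_json())  # {'lines': ['line one', 'line two']}
```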

How to start clients from server itself in python?

I'm developing an automation framework with little manual intervention.
There is one server and 3 client machines.
The server sends a command to each client one by one, gets the output of that command, and stores it in a file.
But to establish the connection I have to manually start the clients on the different machines from the command line. Is there a way for the server itself to send a signal or something to start a client, send the command, store the output, and then start the next client, and so on, in Python?
Edit:
After the suggestion below, I used the spur module:
import spur
ss = spur.SshShell(hostname="172.16.6.58", username='username', password='some_password',
                   shell_type=spur.ssh.ShellTypes.minimal,
                   missing_host_key=spur.ssh.MissingHostKey.accept)
res = ss.run(['python','clientsock.py'])
I'm trying to start the clientsock.py file on one of the client machines (the server is already running on the current machine), but it hangs there and nothing happens. What am I missing here?
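One thing to check: spur's run waits for the remote command to finish, so launching a long-running clientsock.py this way will look like a hang; spur also provides spawn, which starts the command and returns immediately. The blocking-versus-background distinction can be sketched locally with subprocess standing in for the SSH call (the sleeping child is a stand-in for clientsock.py):

```python
import subprocess
import sys

# Stand-in for a long-running remote client such as clientsock.py.
child = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(30)'])

# Popen returns immediately (spur's spawn behaves similarly);
# poll() is None while the child is still running.
print(child.poll())  # None

# ss.run / child.wait() would block here until the command finishes.
child.terminate()
child.wait()
```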

python graypy simply not sending

import logging
import graypy
my_logger = logging.getLogger('test_logger')
my_logger.setLevel(logging.DEBUG)
handler = graypy.GELFHandler('my_graylog_server', 12201)
my_logger.addHandler(handler)
my_adapter = logging.LoggerAdapter(logging.getLogger('test_logger'),
                                   {'username': 'John'})
my_adapter.debug('Hello Graylog2 from John.')
is not working
I think the issue is the URL: it should send to /gelf, because when I curl my graylog server from the terminal, it works:
curl -XPOST http://my_graylog_server:12201/gelf -p0 -d '{"short_message":"Hello there", "host":"example1111.org", "facility":"test", "_foo":"bar"}'
Open your Graylog web app and click 'System'. You will see a list of links on the right; one of them is 'Inputs', click that one.
Now you have an overview of all running inputs, listening on different ports. At the top of the page you can create a new one. There should be a drop-down containing the available modes for the input listener (I think 'GELF AMQP' is the default). Change it to 'GELF UDP' and click 'Launch new input'; in the next dialogue you can specify a port for the service.
You also have to set the node where the messages are to be stored. This node has (or should have) the same IP as your whole graylog2 system.
Now you should be able to receive messages.
I think you've set up your input for GELF TCP instead of UDP. I set up a TCP input and it received the curl message, but my Python application wouldn't send to it. I then created a GELF UDP input and voila! I ran into this same issue configuring my Graylog EC2 appliance just a few moments ago.
Also make sure firewalls/security groups aren't blocking the UDP protocol on port 12201.
Good luck, and I hope this is your issue!
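For context on why the curl test can succeed while graypy stays silent: graypy's handler does not POST to a URL at all; it sends a zlib-compressed JSON (GELF) datagram over UDP, so it needs a GELF UDP input listening on the port. A self-contained sketch of that wire format, using a local UDP socket in place of Graylog:

```python
import json
import socket
import zlib

# Receiving side: stand-in for a Graylog GELF UDP input on localhost.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(('127.0.0.1', 0))  # any free port
recv.settimeout(5)
port = recv.getsockname()[1]

# Sending side: roughly what graypy does for a small message.
message = {'version': '1.1', 'host': 'example.org',
           'short_message': 'Hello Graylog2 from John.', 'level': 7}
payload = zlib.compress(json.dumps(message).encode('utf-8'))

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(payload, ('127.0.0.1', port))

# The input decompresses the datagram back into JSON.
data, _ = recv.recvfrom(8192)
print(json.loads(zlib.decompress(data))['short_message'])
# Hello Graylog2 from John.
```

Since the transport is UDP, a send to the wrong port or a blocked port fails silently, which matches the "simply not sending" symptom.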

Which web servers are compatible with gevent and how do the two relate?

I'm looking to start a web project using Flask and its SocketIO plugin, which depends on gevent (something something greenlets), but I don't understand how gevent relates to the webserver. Does using gevent restrict my server choice at all? How does it relate to the different levels of web servers that we have in python (e.g. Nginx/Apache, Gunicorn)?
Thanks for the insight.
First, let's clarify what we are talking about:
gevent is a library that makes it easy to program event loops. It is a way to return responses immediately, without "blocking" the requester.
socket.io is a javascript library for creating clients that maintain permanent connections to servers, which send events; the library can then react to these events.
greenlet: think of this as a lightweight thread, a way to launch multiple workers that do some tasks.
A highly simplified overview of the entire process follows:
Imagine you are creating a chat client.
You need a way to notify the users' screens when anyone types a message. For this to happen, you need some way to tell all the users when a new message is there to be displayed. That's what socket.io does. You can think of it like a radio that is tuned to a particular frequency. Whenever someone transmits on this frequency, the code does something. In the case of the chat program, it adds the message to the chat box window.
Of course, if you have a radio tuned to a frequency (your client), then you need a radio station/dj to transmit on this frequency. Here is where your flask code comes in. It will create "rooms" and then transmit messages. The clients listen for these messages.
You can also write the server-side ("radio station") code in socket.io using node, but that is out of scope here.
The problem here is that, traditionally, a web server works like this:
1. A user types an address into a browser, and hits enter (or go).
2. The browser reads the web address, and then, using the DNS system, finds the IP address of the server.
3. It creates a connection to the server, and then sends a request.
4. The webserver accepts the request.
5. It does some work, or launches some process (depending on the type of request).
6. It prepares (or receives) a response from the process.
7. It sends the response to the client.
8. It closes the connection.
Between steps 3 and 8, the client (the browser) is waiting for a response; it is blocked from doing anything else. So if there is a problem somewhere, like, say, a server-side script taking too long to process the request, the browser stays stuck on a white page with the loading icon spinning. It can't do anything until the entire process completes. This is just how the web was designed to work.
This kind of 'blocking' architecture works well for 1-to-1 communication. However, for multiple people to keep updated, this blocking doesn't work.
The event libraries (gevent) help with this because they accept the request without blocking the client, and send the response once the process is complete.
Your application, however, still needs to notify the client, but since the connection is closed, you have no way to contact the client back.
In order to notify the client without making it "refresh", a permanent connection should be kept open; that's what socket.io does. It opens a permanent connection and is always listening for messages.
1. A work request comes in from one end and is accepted.
2. The work is executed and a response is generated by something else (it could be the same program or another program).
3. A notification is sent: "hey, I'm done with your request, here is the response".
4. The client from step 1 listens for this message and then does something.
Underneath it all is WebSocket, a full-duplex protocol that enables all this radio/DJ functionality.
Things common between WebSockets and HTTP:
Work on the same port (80)
WebSocket requests start off as HTTP requests for the handshake (an upgrade header), but then shift over to the WebSocket protocol - at which point the connection is handed off to a websocket-compatible server.
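For reference, the upgrade exchange looks roughly like this (abridged; the example key and accept values are the ones used in the WebSocket RFC):

```http
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, the same TCP connection stops speaking HTTP and carries WebSocket frames in both directions.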
All your traditional web server has to do is listen for this handshake request, acknowledge it, and then pass the request on to a websocket-compatible server - just like any other normal proxy request.
For Apache, you can use mod_proxy_wstunnel.
nginx versions 1.3+ have WebSocket support built in.
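As an illustration, a minimal nginx proxy block for websocket traffic might look like this (the location path and upstream address are placeholders for your own setup):

```nginx
location /socket.io/ {
    proxy_pass http://127.0.0.1:5000;   # your Flask/gevent backend
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

The Upgrade/Connection headers are what let nginx pass the handshake through instead of treating it as a plain HTTP request.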

python - can't restart socket connection from client if server becomes unavailable temporarily

I am running a Graphite server to monitor instruments at remote locations. I have a "perpetual" ssh tunnel to the machines from my server (loving autossh) to map their local ports to my server's local ports. This works well; data comes through with no hassles. However, we use a flaky satellite connection to the sites, which goes down rather regularly. On the instrument I am running a "data crawler" written in Python that uses socket to send packets to the Graphite server. The problem is that if the link goes down temporarily (or the server gets rebooted, for testing mostly), I cannot re-establish the connection to the server. I trap the error, run socket.close(), and then re-open, but I just can't re-establish the connection. If I quit the python program and restart it, the connection comes up just fine. Any ideas how I can "refresh" my socket connection?
It's hard to answer this correctly without a code sample. However, it sounds like you might be trying to reuse a closed socket, which is not possible.
If the socket has been closed (or has experienced an error), you must re-create a new connection using a new socket object. For this to work, the remote server must be able to handle multiple client connections in its accept() loop.
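A sketch of the "new socket object per attempt" pattern this answer describes (the host, port, retry count, and delay are placeholders, not values from the question):

```python
import socket
import time

def send_with_retry(host, port, payload, retries=3, delay=1.0):
    """Send payload over TCP, building a brand-new socket for every
    attempt: a socket object that has been closed (or has errored)
    can never be reconnected, which is why re-opening the old one fails."""
    for _ in range(retries):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(5)
        try:
            sock.connect((host, port))
            sock.sendall(payload)
            return True
        except socket.error:
            time.sleep(delay)  # back off, then retry with a fresh socket
        finally:
            sock.close()
    return False
```

In the crawler's error handler, calling a function like this (instead of reusing the original socket object) lets the connection come back as soon as the satellite link does.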
