Is it possible to pretend to read the stream so I don't have to use a thread to keep the connection alive? If I don't read the stream, the server closes the websocket. (How this works is roughly explained by the comments in the code.)
My goal is to just receive messages from the websocket. I don't care about the stream itself.
Currently my code looks like this:
import requests
import threading
import websockets


def fake_read(response):
    """
    If the server finds out that I'm not reading the data, the websocket
    will be closed. This function is there to receive the data.
    """
    for _ in response.iter_content():
        pass  # do nothing with the data


def handle_websocket(wss_uri):
    """
    Uses the websockets lib; not relevant for this question. (Pass
    ping_interval=None if somebody wants to connect to the server.)
    You can also use https://www.piesocket.com/websocket-tester to
    receive the messages.
    """
    print(wss_uri)


response = requests.get("https://stream.antenne.de/rockantenne", stream=True)  # radio station stream; a cookie of the response contains the websocket URL
wss_addr = response.cookies["sm_websocket_url"]
threading.Thread(target=lambda: fake_read(response), daemon=True).start()  # instantly start the thread
handle_websocket(wss_addr)
I want to avoid threads and, if possible, save some bandwidth. Is there another way to handle streams, or any alternatives?
I planned to look into aiohttp, since async might be a better fit here, but that doesn't eliminate all my problems: a function is still needed to handle the stream, and the data is still downloaded as well.
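For what it's worth, here is a rough sketch of the aiohttp direction mentioned above. It still has to read the stream bytes (the server demands it), but the draining happens as an asyncio task rather than a dedicated thread; the URL and cookie name are taken from the code above, and the handler argument is a hypothetical coroutine standing in for the websocket logic:

```python
import asyncio
import aiohttp


async def keep_alive_and_handle(stream_url, wss_handler):
    """Drain the radio stream in a background task (no thread) while
    wss_handler (a coroutine taking the websocket URL) does the real work."""
    async with aiohttp.ClientSession() as session:
        async with session.get(stream_url) as response:
            wss_addr = response.cookies["sm_websocket_url"].value

            async def drain():
                # still reads the audio bytes, but as a task, not a thread
                async for _ in response.content.iter_chunked(1024):
                    pass

            drain_task = asyncio.create_task(drain())
            try:
                await wss_handler(wss_addr)  # e.g. a websockets client loop
            finally:
                drain_task.cancel()
```

This keeps everything on one event loop, but it does not save bandwidth: the stream is still downloaded, just discarded chunk by chunk.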
I'm now familiar with the general cause of this problem from another SO answer and from the uWSGI documentation, which states:
If an HTTP request has a body (like a POST request generated by a form), you have to read (consume) it in your application. If you do not do this, the communication socket with your webserver may be clobbered.
However, I don't understand what exactly is happening at the TCP level for this problem to occur. Not knowing the details of this process, I would assume the server can simply discard what remains in the stream, but that's obviously not the case.
If I consume only part of the request body in my application and ultimately return a 200 response, a web browser will report a connection reset error. Who reset the connection? The webserver or the client? It seems like all the data has been sent by the client already, but the application has just not exhausted the stream. Is there something that happens when the stream is exhausted in the application that triggers the webserver to indicate it has finished reading?
My application is Python/Flask, but I've seen questions about this from several languages and frameworks. For example, this fails if exhaust() is not called on the request stream:
from flask import request, jsonify
import pandas

@app.route('/upload', methods=['POST'])
def handle_upload():
    file = request.stream
    pandas.read_csv(file, nrows=100)
    response = ...  # do stuff
    file.exhaust()
    return jsonify(response)
While there is some buffering throughout the chain, large file transfers are not going to complete until the receiver has consumed them. The buffers will fill up, and packets will be dropped until the buffers are drained. Eventually, the browser will give up trying to send the file and drop the connection.
I am building a WebSocket API in Python using the websockets library. The services my API uses have to be wrapped in threads, so my asyncio WebSocket handler has to await while a thread produces the data for the response.
I found that there are many ways to build such a service: threads + the native Python socket module, multiplexing (the selectors module) + the socket module, or threads + async Python websockets.
I want the WebSocket service to work as follows. My client sends data to the server. The server starts thread_1, which somehow modifies the given data, then passes it to thread_2, which modifies it one more time; the twice-modified data is returned to the client as the response. The client should not wait for the server's response before sending the next batch of data, but it should handle any result the server returns. In other words, the client and server should work asynchronously. It would also be great if you could suggest some materials that help me achieve this goal.
One way to realize such an architecture is to combine multithreading with asyncio WebSockets. That is achieved by using asyncio executors.
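A minimal sketch of that idea: the two blocking stages below are placeholders for your real services, and the handler follows the websockets-library calling convention (each `run_in_executor` call runs its stage in a thread pool without blocking the event loop):

```python
import asyncio


def stage_one(data):
    # placeholder for the first blocking service (thread_1's work)
    return data.upper()


def stage_two(data):
    # placeholder for the second blocking service (thread_2's work)
    return data[::-1]


async def handler(websocket):
    """A websockets-style handler that offloads both stages to threads."""
    loop = asyncio.get_running_loop()
    async for message in websocket:
        once = await loop.run_in_executor(None, stage_one, message)
        twice = await loop.run_in_executor(None, stage_two, once)
        await websocket.send(twice)
```

Because the handler awaits between messages, the client can keep sending data while earlier responses are still being computed.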
This question already has answers here:
Display the contents of a log file as it is updated
(3 answers)
Closed 6 years ago.
I am trying to create a web application with Flask.
The problem is that I have been stuck on one issue for two weeks.
I would like to run a Python command that launches a server, capture its standard output, and display it in real time on the website.
I don't know how to do this at all, because with "render_template" I don't see how to update the website with the values printed to the console.
I use Python 2.7. Thank you very much.
It's going to take a lot of work to get this done, probably more than you think, but I'll still try to help you out.
To get any real-time updates to the browser you're going to need something like a socket connection: something which allows you to send multiple messages at any time, not just when the browser requests it.
Compare this with a regular HTTP connection: the server can respond only once, and the connection ends the moment you call return. So with a regular HTTP request you can send the log messages once, but once the log changes afterwards, you cannot push those changes to the client again, because the connection has already ended.
A socket connection fixes this: it opens a connection between the user and the server, and both can send messages at any time for as long as the connection is open. The connection only closes when you close it manually.
Check this answer for ways you could have real time updates with flask. If you want to do it with sockets (which is what I suggest you to use) then use the websocket interface instead.
There are options like socketio for Python which allow you to write websocket applications in Python.
Overall this is going to be split into 5 parts:
Start a websocket server when the Flask application start
Create a javascript file (one that the browser loads) that connects to the websocket server
Find the function that gets triggered whenever Flask logging occurs
Send a socket message with the log inside of it
Make the browser display the log whenever it receives a websocket message
Here's a sample application written in Flask and socketio which should give you an idea of how to use socketio.
There's a lot to it, and there are parts you might be new to, like websockets, but don't let that stop you from doing what you want to do.
I hope this helps; if any part confuses you, feel free to respond.
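Parts 3 and 4 of the list above (hooking into logging and broadcasting each record) can be sketched with a custom logging handler. The broadcast callback below is a stand-in: with Flask-SocketIO you would pass something like `lambda line: socketio.emit('log', line)` instead of the list-append used here for demonstration:

```python
import logging


class BroadcastLogHandler(logging.Handler):
    """Forward every formatted log record to a callback
    (e.g. a socketio emit) as soon as it is logged."""

    def __init__(self, broadcast):
        super().__init__()
        self.broadcast = broadcast

    def emit(self, record):
        self.broadcast(self.format(record))


# demonstration: collect the broadcasts in a list instead of a socket
captured = []
log = logging.getLogger("demo")
log.addHandler(BroadcastLogHandler(captured.append))
log.warning("server started")  # captured now holds ["server started"]
```

Attaching the same handler to `app.logger` (or to the root logger) would push every Flask log line to the browser the moment it occurs.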
The simple part: server-side, you could redirect the stdout and stderr of the server to a file:
import sys

print("output will be redirected")
save_stdout = sys.stdout  # stdout is saved
fh = open("output.txt", "w")
sys.stdout = fh
The server itself would then read that file within a subprocess:
import select
import subprocess

f = subprocess.Popen(['tail', '-F', 'output.txt', '-n1'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p = select.poll()
p.register(f.stdout)
and the following, running in a thread:
output = ""
while True:
    if p.poll(1):
        output += f.stdout.readline()
You can also use the tailhead or tailer libraries instead of the system tail.
Now, the problem is that standard output is a kind of active pipe and the output is going to grow forever, so you'll need to keep only a frame of that output buffer.
If only one user can connect to that window, the problem is different, as you could flush the output as soon as it is sent to that single client. See the difference between a terminal window and a multiplexed, remote terminal window?
I don't know Flask, but client-side you only need some JavaScript that polls the server every second with an AJAX request asking for the complete log (or, in the unique-client case, for the buffer to be appended to the DOM). You could also use websockets, but they are not an absolute necessity.
A compromise between the two is possible (infinite log with real-time append, multiplexed at different rates); it requires keeping a separate output buffer for each client.
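The "keep only a frame of that output buffer" idea maps naturally onto a bounded deque. This sketch (names are my own, not from the answer above) replaces the ever-growing `output += ...` string with a buffer that silently drops the oldest lines:

```python
import collections

# keep only the last 200 lines; older lines are discarded automatically
LOG_BUFFER = collections.deque(maxlen=200)


def append_line(line):
    # called from the tail-reading thread instead of output += ...
    LOG_BUFFER.append(line)


def current_frame():
    # what a polling AJAX endpoint would return to the client
    return list(LOG_BUFFER)
```

A Flask route could then simply `jsonify(current_frame())` on each poll; per-client buffers for the multiplexed case would be one such deque per connection.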
I've got a client-server app I'm making, and I'm having a bit of trouble when the server waits for data from the client.
After the client connects to the server socket, the server opens a new thread for it and receives data from the client (in JSON format).
So far, my code works for a single message. When I added a while loop to keep accepting messages, I ran into a problem: after some tests I found that the recv() function does not wait for new data and continues to the next line, and this is what creates the problem.
I would be happy if you could help me fix it.
My receive-data loop (the first iteration works, but in the second iteration recv() does not wait for data, so the next line gets nothing):
while True:
    data = self.client.recv(self.size)  # receive data
    message = self.JSON_parser(data)  # parse the data (it is in JSON format)
    process_message = processing.Processing(message[0]['key'], message[0]['user'], message[0]['data'])  # hand the received data to the processing step
    process_return = process_message.action()  # call the action function
    self.client.send(process_return)  # send a message back to the client
If recv() is returning an empty string, it means that the connection has been closed.
The socket may be closed either by the client, or by the server. In this case, looking at the server code you posted, I'm almost sure that the client is closing the connection, perhaps because it's exiting.
In general, your server code should look like this:
while True:
    data = self.client.recv(self.size)
    if not data:
        # client closed the connection
        break
    message = self.JSON_parser(data)
    ...
Bonus tip: a long JSON message may require more than one call to recv(). Similarly, more than one JSON message may be returned by recv().
Be sure to implement buffering appropriately. Consider wrapping your socket into a file-like object and reading your messages line-by-line (assuming that your messages are delimited by newline characters).
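A sketch of that file-like-object approach, assuming a newline-delimited JSON protocol (the protocol choice is an assumption, not something from the question); the socketpair below is only there to make the example self-contained:

```python
import json
import socket


def read_json_lines(sock):
    """Yield one parsed JSON object per newline-delimited message.
    makefile() buffers for us, so a message split across several
    recv() calls, or several messages in one recv(), both work."""
    with sock.makefile("r", encoding="utf-8") as stream:
        for line in stream:
            yield json.loads(line)


# demonstration with a local socket pair standing in for client/server
server, client = socket.socketpair()
client.sendall(b'{"key": 1}\n{"key": 2}\n')
client.close()
messages = list(read_json_lines(server))  # [{'key': 1}, {'key': 2}]
server.close()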
I have a CherryPy API that is intended to run for a long time on the server.
I have an unreliable client that can die or close connection for various reasons that are out of my control.
During the time my server api runs, I want to periodically check the status of the connection, making sure the client is still listening and abort my operation if the client has gone away.
I could not find any good place describing how to poll the connection status while serving a cherrypy request.
One example of such a long run is computing the MD5 of multiple big files (tens of GB each) in chunks, with a small buffer (limited memory).
I don't need any solutions that shorten the runtime since that is not my goal here. I want to keep this connection open for as long as I can, but abort if it is closed.
Here is the simple sample of my code:
@cherrypy.expose
def foo(self):
    cherrypy.response.headers['Content-Type'] = 'text/plain'

    def run():
        for result in get_results():  # get_results() is the heavy method mentioned above
            yield json.dumps(result)
    return run()
foo._cp_config = {'response.stream': True}
The only reliable way to know that the client has died is to try to write some data to the socket, which for CherryPy can be done with yield. You must yield non-empty strings, so you'd have to be returning a Content-Type that can handle some filler text, like some extra spaces after the opening <head> tag of an HTML document. If the client closes the connection, the CherryPy server will stop requesting additional yielded data from the handler (and call any close method on the generator so you can clean up).
As far as I know, CherryPy doesn't provide you with any mechanism to detect that a client died. It will only tell you if a response took too long to complete (and therefore be sent out).
You may refer to this SO thread for more information.