I have a Flask application with a route (webhook) that receives POST requests (webhooks) from an external phone application (incoming call = POST request). This route calls threading.Event.set(); based on this event, another route (eventsource) sends an event stream to an open EventSource connection on a web page served by yet another route (eventstream).
import json
import uuid
import xml.etree.ElementTree as ET
from threading import Event

from flask import Response, render_template, request

telfa_called = Event()
telfa_called.clear()
call = ""

@telfa.route('/webhook', methods=['GET', 'POST'])
def webhook():
    global call
    print('THE CALL IS HERE')
    x = request.data
    y = ET.fromstring(x.decode())
    caller_number = y.find('caller_number').text
    telfa_called.set()  # set the threading.Event for the eventsource route
    return Response(status=200)

@telfa.route('/eventstream', methods=['GET', 'POST'])
@login_required
def eventstream():
    jsid = str(uuid.uuid4())
    return render_template('telfa/stream.html', jsid=jsid)

def eventsource_gen():
    while True:
        if telfa_called.wait(10):
            telfa_called.clear()
            print('JE TO TADY')  # Czech: "it's here"
            yield "data: {}\n\n".format(json.dumps(call))

@telfa.route('/eventsource', methods=['GET', 'POST'])
def eventsource():
    return Response(eventsource_gen(), mimetype='text/event-stream')
Everything works great when testing in a pure Python application. The problems start when I move this to the production server, where I use uWSGI with nginx. (Other parts of this Python application work without any trouble.)
When the EventSource connection is open and an incoming webhook should be processed, the whole Flask server gets stuck (for all other users, too), the page stops loading, and I cannot find where the error is.
I only know that the POST request from the external application is received, but the response to the EventSource is never sent.
I suspect it has something to do with processes: the EventSource connection from JavaScript is handled by one process and the webhook route by another, and they do not communicate. Either way, I suppose this has a very trivial solution, but I haven't found it in the past 3 days and nights. Any hints, please? Thanks in advance.
To be complete, this is my uWSGI config file:
[uwsgi]
module = wsgi:app
enable-threads = true
master = true
processes = 5
threads = 2
uid = www-data
gid = www-data
socket = /tmp/myproject.sock
chmod-socket = 666
vacuum = true
die-on-term = true
limit-as=512
buffer-size = 512000
workers = 5
max-requests = 100
req-logger = file:/tmp/uwsg-req.log
logger = file:/tmp/uwsgi.log
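For illustration: with processes = 5, the webhook POST and the open EventSource stream will usually be handled by different uWSGI worker processes, and a threading.Event set in one worker is invisible to all the others. Below is a minimal sketch of a cross-process alternative using Redis pub/sub; the channel name telfa_calls, the local Redis instance, and the redis package are assumptions, not part of the question:

import json
import xml.etree.ElementTree as ET

import redis
from flask import Response, request

r = redis.StrictRedis(host='localhost', port=6379)

@telfa.route('/webhook', methods=['GET', 'POST'])
def webhook():
    y = ET.fromstring(request.data.decode())
    caller_number = y.find('caller_number').text
    # publish() reaches subscribers in every worker process,
    # unlike threading.Event, which is local to one process
    r.publish('telfa_calls', json.dumps(caller_number))
    return Response(status=200)

def eventsource_gen():
    p = r.pubsub(ignore_subscribe_messages=True)
    p.subscribe('telfa_calls')
    for message in p.listen():  # blocks until a message arrives
        yield "data: {}\n\n".format(message['data'].decode())

Because each worker holds its own subscription, it no longer matters which worker receives the webhook and which one holds the EventSource connection.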
In my Flask-based HTTP server, designed to remotely manage some services on an RPi, I've run into a problem I cannot solve alone, hence a kind request for a hint from you.
Concept:
Via Flask and gevent I can stop and start some (two) services running on the RPi. I use gevent and server-sent events, with the corresponding JavaScript, to listen for the HTML updates.
The HTML page shows the status (on/off/processing) of the services and provides buttons to switch them on/off. Additionally, it displays some system parameters (CPU, RAM, HDD, NET).
As long as there is only one user/page open, everything works as desired. As soon as more users access the Flask server, there is a race between the greenlets serving each user/page, and not all pages get reloaded.
Problem:
How can I send a message to all running sse_worker() greenlets and have each of them process it on top of its regular job?
Below is high-level code. The complete source can be found here: https://github.com/petervflocke/flasksse_rpi (check the sse.py file).
def sse_worker():  # never-ending task
    while True:
        if there_is_a_change_in_process_status:
            reload_page = True
        else:
            reload_page = False
        # Do some other tasks:
        # update some single parameters to be passed to the HTML page
        yield 'data: ' + json.dumps(all_parameters)
        gevent.sleep(1)

@app.route('/stream/', methods=['GET', 'POST'])
def stream():
    return Response(sse_worker(), mimetype="text/event-stream")

if __name__ == "__main__":
    gevent.signal(signal.SIGTERM, stop)
    http_server = WSGIServer(('', 5000), app)
    http_server.serve_forever()
...on the HTML page the streamed JSON data is processed accordingly. If the status of a service has changed, JavaScript reloads the complete page based on the reload_page variable (code extract below):
<script>
function listen() {
    var source = new EventSource("/stream/");
    var target1 = document.getElementById("time");
    ....
    source.onmessage = function(msg) {
        obj = JSON.parse(msg.data);
        target1.innerHTML = obj.time;
        ....
        if (obj.reload == "1") {
            location.reload();
        }
    }
}
listen();
</script>
My desired solution would be to extend the sse_worker() like this:
def sse_worker():
    while True:
        if there_is_a_change_in_process_status:
            reload_page = True
            # NEW: set a semaphore/flag that there is a change on the page
            message_set(reload)
        elif message_get(block=False) == reload:  # NEW: check the semaphore
            # issue: message_get must return "reload" for _all_ active
            # sse_workers, so that all of them can push the reload to
            # "their" pages
            reload_page = True
        else:
            reload_page = False
        # Do some other tasks:
        # update some single parameters to be passed to the HTML page
        yield 'data: ' + json.dumps(all_parameters)
        gevent.sleep(1)
I hope I could get my message across. Any idea from your side on how I can solve the synchronization? Please note that the producer and the consumer live in the same sse_worker function.
Any idea is very welcome!
best regards
Peter
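For illustration, one pattern that might give sse_worker() the broadcast behavior described above: each worker registers its own gevent queue in a shared list, and the producer puts the reload message into every registered queue instead of setting a single shared flag. This is only a sketch; the names subscriptions and broadcast are invented here, and all greenlets are assumed to live in a single gevent process:

import json

from gevent.queue import Queue, Empty

subscriptions = []  # one Queue per connected page/greenlet

def broadcast(message):
    # producer side: every listener gets its own copy of the message
    for q in list(subscriptions):
        q.put(message)

def sse_worker():
    q = Queue()
    subscriptions.append(q)
    try:
        while True:
            try:
                # waiting on the queue also paces the loop (1 s), replacing gevent.sleep(1)
                reload_page = (q.get(timeout=1) == 'reload')
            except Empty:
                reload_page = False
            all_parameters = {'reload': '1' if reload_page else '0'}
            # update the other parameters to be passed to the HTML page here
            yield 'data: ' + json.dumps(all_parameters) + '\n\n'
    finally:
        subscriptions.remove(q)  # drop the queue when the client disconnects

When a status change is detected, calling broadcast('reload') once makes every active sse_worker() push the reload to its own page; since each queue delivers its own copy, no worker can consume the message away from the others.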
For circumstances outside of my control, I need to use the Flask server to serve basic HTML files and the Flask-SocketIO wrapper to provide a WebSocket interface between any clients and the server. The async_mode has to be threading instead of gevent or eventlet; I understand that threading is less efficient, but I can't use the other two frameworks.
In my unit tests, I need to shut down and restart the WebSocket server. When I attempt to shut down the server, I get the RuntimeError 'Cannot stop unknown web server.' This is because the werkzeug.server.shutdown function is not found in the Flask request environment (the flask.request.environ object).
Here is how the server is started.
SERVER = flask.Flask(__name__)
WEBSOCKET = flask_socketio.SocketIO(SERVER, async_mode='threading')
WEBSOCKET.run(SERVER, host='127.0.0.1', port=7777)
Here is the short version of how I'm attempting to shut down the server.
client = WEBSOCKET.test_client(SERVER)

@WEBSOCKET.on('kill')
def killed():
    WEBSOCKET.stop()

try:
    client.emit('kill')
except:
    pass
The stop method must be called from within a Flask request context, hence the odd kill event callback. Inside the stop method, flask.request.environ has the value
'CONTENT_LENGTH' (40503696) = {str} '0'
'CONTENT_TYPE' (60436576) = {str} ''
'HTTP_HOST' (61595248) = {str} 'localhost'
'PATH_INFO' (60437104) = {str} '/socket.io'
'QUERY_STRING' (60327808) = {str} ''
'REQUEST_METHOD' (40503648) = {str} 'GET'
'SCRIPT_NAME' (60437296) = {str} ''
'SERVER_NAME' (61595296) = {str} 'localhost'
'SERVER_PORT' (61595392) = {str} '80'
'SERVER_PROTOCOL' (65284592) = {str} 'HTTP/1.1'
'flask.app' (65336784) = {Flask} <Flask 'server'>
'werkzeug.request' (60361056) = {Request} <Request 'http://localhost/socket.io' [GET]>
'wsgi.errors' (65338896) = {file} <open file '<stderr>', mode 'w' at 0x0000000001C92150>
'wsgi.input' (65338848) = {StringO} <cStringIO.StringO object at 0x00000000039902D0>
'wsgi.multiprocess' (65369288) = {bool} False
'wsgi.multithread' (65369232) = {bool} False
'wsgi.run_once' (65338944) = {bool} False
'wsgi.url_scheme' (65338800) = {str} 'http'
'wsgi.version' (65338752) = {tuple} <type 'tuple'>: (1, 0)
My question is: how do I set up the Flask server so that the werkzeug.server.shutdown method is available inside Flask request contexts?
Also, this is using Python 2.7.
I have good news for you: the testing environment does not use a real server. In that context the client and the server run inside the same process, so the communication between them does not go through the network as it does when you run things for real. In this situation there really is no server, so there is nothing to stop.
It seems you are starting a real server, though. For unit tests, that server is not used; all you need are your unit tests, which import the application and then use a test client to issue Socket.IO events. I think all you need to do is simply not start the server; the unit tests should run just fine without it, as long as all you use is the test client, as shown above.
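A minimal sketch of what such a test can look like; the echo event and the assertions are invented for illustration, and note that nothing here starts a real server:

import unittest

import flask
import flask_socketio

SERVER = flask.Flask(__name__)
WEBSOCKET = flask_socketio.SocketIO(SERVER, async_mode='threading')

@WEBSOCKET.on('echo')
def echo(data):
    flask_socketio.emit('echo_reply', data)

class SocketIOTestCase(unittest.TestCase):
    def test_echo(self):
        # the test client talks to the app in-process; there is no server to stop
        client = WEBSOCKET.test_client(SERVER)
        client.emit('echo', {'hello': 'world'})
        received = client.get_received()
        self.assertEqual(received[0]['name'], 'echo_reply')
        self.assertEqual(received[0]['args'][0], {'hello': 'world'})

if __name__ == '__main__':
    unittest.main()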
I have an internal website which is required to have file-sharing links that are direct links to a shared location on the PC that the table row represents.
When accessing the links, I would first like to test whether the remote PC is available, in the quickest possible fashion. I thought this would be a ping, but for some reason the timeout does not work with -w (yes, Windows).
This is not allowed to take time; for some reason it causes the web server to block on the ping, even though I am using Tornado to serve the Flask routes asynchronously.
Preferably, I would like the server to continuously update the front end with active/inactive links, allowing users to access only links to PCs that are online, and restricting them otherwise. Possibly even maintaining the value in a database.
Any and all advice is welcome; I've never really worked with file sharing before.
Backend is Python 3.4, Flask & Tornado.
The Ajax Call
function is_drive_online2(sender){
    hostname = sender.parentNode.parentNode.id;
    $.get('Media/test', {
        drive: hostname
    },
    function(returnedData){
        console.log(returnedData[hostname]);
        if (returnedData[hostname] == 0) {
            open("file://" + hostname + "/MMUsers");
        } else {
            alert("Server Offline");
        }
    });
}
The Response (Flask route)
@app.route('/Media/test', methods=['GET', 'POST'])
def ping_response():
    before = datetime.datetime.now()
    my_dict = dict()
    drive = request.args.get('drive')
    print(drive)
    response = os.system("ping -n 1 -w 1 " + drive)
    my_dict[drive] = response
    after = datetime.datetime.now()
    print(after - before)
    return json.dumps(my_dict), 200, {'Content-Type': 'application/json'}
The ping call takes 18 seconds to resolve, even with -w 1 (or 1000).
I only need to support Internet Explorer 11. Is this even a plausible scenario? Are there hardware limitations to something like this? Should the server have a long-running thread whose sole task is to continuously update the active/inactive links? I am not sure of the best approach.
Thanks for reading.
EDIT 1:
Trying to apply ping_response as a native Tornado asynchronous handler. The result is the same:
class PingHandler(RequestHandler):
    @asynchronous
    def get(self):
        dr = self.get_argument('drive')
        print(dr)
        b = datetime.datetime.now()
        myreturn = {self.get_argument('drive'):
                    os.system("ping -n 1 -w 1 " + self.get_argument('drive'))}
        a = datetime.datetime.now()
        print(a - b)
        self.write(myreturn)

wsgi = WSGIContainer(app)
application = Application([(r"/Media/test", PingHandler),
                           (r".*", FallbackHandler, dict(fallback=wsgi))])
application.listen(8080)
IOLoop.instance().start()
EDIT 2: Trying to use Celery. Still blocking.
def make_celery(app):
    celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(app)

@celery.task
def ping(drive):
    """
    Background task to test whether a computer is online.
    :param drive: The drive name to test
    :return: Non-zero status code for offline boxes.
    """
    response = os.system("ping -n 1 -w 1 " + drive)
    return json.dumps({drive: response}), 200, {'Content-Type': 'application/json'}

@app.route('/Media/test', methods=['GET', 'POST'])
def ping_response():
    before = datetime.datetime.now()
    my_dict = dict()
    drive = request.args.get('drive')
    print(drive)
    this_drive = temp_session.query(Drive).filter(Drive.name == drive).first()
    address = this_drive.computer.ip_address if this_drive.computer.ip_address else this_drive.name
    response = ping.apply_async(args=[address])
    return response
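An aside on Edit 2: the route returns the Celery AsyncResult object itself, which is not a valid Flask response. A common variant is to return the task id immediately and let the client poll for the result. A sketch, assuming a Celery result backend is configured; the /Media/test/result endpoint is invented for illustration:

@app.route('/Media/test', methods=['GET', 'POST'])
def ping_response():
    drive = request.args.get('drive')
    task = ping.apply_async(args=[drive])
    # respond immediately; the ping runs in the Celery worker
    return json.dumps({'task_id': task.id}), 202, {'Content-Type': 'application/json'}

@app.route('/Media/test/result/<task_id>')
def ping_result(task_id):
    result = ping.AsyncResult(task_id)
    if not result.ready():
        return json.dumps({'state': result.state}), 200, {'Content-Type': 'application/json'}
    return json.dumps({'result': result.get()}), 200, {'Content-Type': 'application/json'}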
Tornado isn't serving your Flask app asynchronously (that's impossible: asynchronousness is a property of the interface, and ping_response is a synchronous function). Tornado's WSGIContainer is a poor fit for what you're trying to do (see the warning in its docs).
You should either use Flask with a multi-threaded server like gunicorn or uWSGI, or use native Tornado asynchronous RequestHandlers.
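A sketch of the second option: run the blocking ping on a thread pool so the IOLoop stays free. Tornado coroutines can yield concurrent.futures futures directly; the pool size here is arbitrary:

import subprocess
from concurrent.futures import ThreadPoolExecutor

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.web import Application, RequestHandler

EXECUTOR = ThreadPoolExecutor(max_workers=4)

def ping(host):
    # blocking call, but it runs in a worker thread, not on the IOLoop
    return subprocess.call(['ping', '-n', '1', '-w', '1000', host])

class PingHandler(RequestHandler):
    @gen.coroutine
    def get(self):
        drive = self.get_argument('drive')
        rc = yield EXECUTOR.submit(ping, drive)  # yields the future without blocking
        self.write({drive: rc})  # Tornado serializes the dict to JSON

application = Application([(r"/Media/test", PingHandler)])
application.listen(8080)
IOLoop.instance().start()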
I am trying to respond to incoming web requests simultaneously, while the processing of a request includes a quite long IO call. I'm going to use gevent, as it's supposed to be "non-blocking".
The problem I found is that requests are processed sequentially, even though I have a lot of gevent threads. For some reason the requests get served by a single green thread.
I have nginx (with the default config, which I think isn't relevant here), plus uWSGI and a simple WSGI app that emulates an IO-blocking call with gevent.sleep(). Here they are:
uwsgi.ini
[uwsgi]
chdir = /srv/website
home = /srv/website/env
module = wsgi:app
socket = /tmp/uwsgi_mead.sock
#daemonize = /data/work/zx900/mob-effect.mead/logs/uwsgi.log
processes = 1
gevent = 100
gevent-monkey-patch = true
wsgi.py
import gevent
import time
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    t0 = time.time()
    gevent.sleep(10.0)
    t1 = time.time()
    return "{1} - {0} = {2}".format(t0, t1, t1 - t0)
Then I open two tabs in my browser (almost) simultaneously, and here is what I get as a result:
1392297388.98 - 1392297378.98 = 10.0021491051
# first tab, processing finished at 1392297388.98
1392297398.99 - 1392297388.99 = 10.0081849098
# second tab, processing started at 1392297388.99, right when the first one finished
As you can see, the first call blocked execution of the view. What did I do wrong?
Send the requests with curl or anything other than a browser, as browsers limit the number of simultaneous connections per site or per address. Or use two different browsers.
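For example, firing both requests from a single Python script sidesteps the browser's per-host connection limit entirely. A minimal sketch, assuming the app is reachable at http://localhost:8000/:

import threading
import time
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

def fetch(tag):
    t0 = time.time()
    body = urlopen('http://localhost:8000/').read()
    print(tag, body, 'took', time.time() - t0)

# both requests start at (almost) the same moment
threads = [threading.Thread(target=fetch, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

With gevent = 100 and monkey-patching active, both responses should come back roughly 10 seconds after the start, rather than after 10 and 20 seconds.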
I'm running a SocketIO server with a Flask app using gevent. My namespace code is here:
class ConversationNamespace(BaseNamespace):
    def __init__(self, *args, **kwargs):
        request = kwargs.get('request', None)
        if request:
            self.current_app = request['current_app']
            self.current_user = request['current_user']
        super(ConversationNamespace, self).__init__(*args, **kwargs)

    def listener(self):
        r = StrictRedis(host=self.current_app.config['REDIS_HOST'])
        p = r.pubsub()
        p.subscribe(self.current_app.config['REDIS_CHANNEL_CONVERSATION_KEY'] + self.current_user.user_id)
        conversation_keys = r.lrange(self.current_app.config['REDIS_CONVERSATION_LIST_KEY'] +
                                     self.current_user.user_id, 0, -1)

        # Reverse conversations so the newest is up top.
        conversation_keys.reverse()

        # Emit conversation history.
        pipe = r.pipeline()
        for key in conversation_keys:
            pipe.hgetall(self.current_app.config['REDIS_CONVERSATION_KEY'] + key)
        self.emit(self.current_app.config['SOCKETIO_CHANNEL_CONVERSATION'] + self.current_user.user_id, pipe.execute())

        # Listen for new conversations.
        for m in p.listen():
            conversation = r.hgetall(self.current_app.config['REDIS_CONVERSATION_KEY'] + str(m['data']))
            self.emit(self.current_app.config['SOCKETIO_CHANNEL_CONVERSATION'] +
                      self.current_user.user_id, conversation)

    def on_subscribe(self):
        self.spawn(self.listener)
What I'm noticing in my app is that when I first start the SocketIO server (code below), clients are able to connect via a WebSocket in Firefox and Chrome.
#!vendor/venv/bin/python
from gevent import monkey
monkey.patch_all()

from yellowtomato import app_instance
import werkzeug.serving
from socketio.server import SocketIOServer

app = app_instance('sockets')

@werkzeug.serving.run_with_reloader
def runServer():
    SocketIOServer(('0.0.0.0', app.config['SOCKET_PORT']), app, resource='socket.io').serve_forever()

runServer()
After some time (maybe an hour or so), when I try to connect to that namespace via the browser client, it no longer communicates over a WebSocket but rather via xhr-polling. Moreover, it takes about 20 seconds before the first response comes from the server. This gives the end user the perception that things have become very slow (but only when rendering the page on the first subscribe; the xhr polling happens frequently and events get pushed to clients in a timely fashion).
What is triggering this latency, and how can I ensure that clients connect quickly using WebSockets?
Figured it out: I was running it from the command line in an SSH session. Ending the session killed the parent process, which caused gevent to stop working properly.
Forking the SocketIOServer process in a screen session fixed the problem.