Trigger WebSocket in Flask from an external event - python

Is there a way to trigger the send() websocket command based on an external event? I am trying to push to the client every time a database is updated. I've tried using an SQL NOTIFY, a uwsgi file monitor decorator, etc. The basic code is:
import json

from flask.ext.uwsgi_websocket import GeventWebSocket
from uwsgidecorators import *

ws = GeventWebSocket(app)

@ws.route('/feed')
def socket(ws):
    ws_readystate = ws.receive()
    if ws_readystate == '1':
        ws.send(json.dumps('this message is received just fine'))
        # client is ready, start something to trigger sending a message here
        @filemon("../mydb.sqlite")
        def db_changed(x):
            print 'DB changed'
            ws.send(json.dumps('db changed'))
This will print "DB changed" in the output, but the client won't receive the 'db changed' message. I'm running the app as:
uwsgi --master --http :5000 --http-websockets --gevent 2 --wsgi my_app_name:app

Gevent queues are a great way to manage such patterns. This is an example you can adapt to your situation:
from uwsgidecorators import *
from gevent.queue import Queue

channels = []

@filemon('/tmp', target='workers')
def trigger_event(signum):
    for channel in channels:
        try:
            channel.put_nowait(True)
        except:
            pass

def application(e, sr):
    sr('200 OK', [('Content-Type', 'text/html')])
    yield "Hello and wait..."
    q = Queue()
    channels.append(q)
    q.get()
    yield "event received, goodbye"
    channels.remove(q)
If you do not plan to use multiple processes, feel free to remove target='workers' from the filemon decorator (this special target raises the uwsgi signal in all of the workers instead of only the first available one).
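Applied back to the question's code, the same queue pattern might look like this (a minimal sketch, assuming flask-uwsgi-websocket as in the question; the monitored path and route name are illustrative):

import json

from flask import Flask
from flask.ext.uwsgi_websocket import GeventWebSocket
from uwsgidecorators import filemon
from gevent.queue import Queue

app = Flask(__name__)
ws = GeventWebSocket(app)

channels = []

@filemon("../mydb.sqlite", target='workers')
def db_changed(signum):
    # runs as a uwsgi signal handler whenever the file changes
    for channel in channels:
        channel.put_nowait(True)

@ws.route('/feed')
def feed(ws):
    q = Queue()
    channels.append(q)
    try:
        while True:
            q.get()  # blocks only this greenlet until the next change
            ws.send(json.dumps('db changed'))
    finally:
        channels.remove(q)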

Related

implementing threading to consume queues from rabbitmq

This is an update to my previous question. I realized that I should have added some code to explain my issue further. I am currently trying to apply threading to queues being consumed from a RabbitMQ exchange. As I am new to both RabbitMQ and threading, I am finding it difficult to combine and apply them. I was wondering if anyone could provide any templates I could start from.
I am coding in Visual Studio, where a simulator is used to generate data, emulating the producer (a smart device).
The first part of the code assigns a few variables relevant to the project. I have imported the required libraries and a few extra scripts I have written myself. The imports in the second line are scripts that assist with communicating with the smart device:
import pika, sys, os
import SockAlertMessage_pb2, SockDataProcessedMessage_pb2, SockDataRawMessage_pb2, SockDataSessionEndMessage_pb2, SockDataSessionStartMessage_pb2, SockMessage_pb2
import numpy as np
import scipy
import ampd
import python_file_3
import heartpy
import time
import threading
from scipy.signal import detrend
from python_file_3 import filter_signal, get_hrv, get_rmssd, get_std, heart_rate
from ampd import find_peaks_original
The second part of the code declares the message queues and establishes the connection with the RabbitMQ server:
sock_data_session_start_queue = 'sock_data_session_start_queue'
sock_data_session_end_queue = 'sock_data_session_end_queue'
sock_data_raw_queue = 'sock_data_raw_queue'
tx_queue = 'tbd'

# establish connection with rabbitmq server
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# create rx message queues
channel.queue_declare(queue=sock_data_session_start_queue, durable=True)
channel.queue_declare(queue=sock_data_session_end_queue, durable=True)

# create tx message queue
channel.queue_declare(queue=tx_queue, durable=True)
def sock_data_session_start_callback(ch, method, properties, body):
    message = SockDataSessionStartMessage_pb2.SockDataSessionStartMessage()
    message.ParseFromString(body)
    new_dict['1'] = message
    print(new_dict)
    # todo: create thread w/ state
    print(" [x] Start session %r" % message)
    # send message

def sock_data_session_end_callback(ch, method, properties, body):
    message = SockDataSessionEndMessage_pb2.SockDataSessionEndMessage()
    message.ParseFromString(body)
    # todo: destroy thread w/ state
    print(" [x] End session %r" % message)

def sock_data_raw_callback(ch, method, properties, body):
    message = SockDataRawMessage_pb2.SockDataRawMessage()
    message.ParseFromString(body)
    print(message)
    # todo: destroy thread w/ state
    print(" [x] Sock data raw %r" % message)

if __name__ == '__main__':
    try:
        channel.basic_consume(queue=sock_data_session_start_queue, auto_ack=True, on_message_callback=sock_data_session_start_callback)
        channel.basic_consume(queue=sock_data_session_end_queue, auto_ack=True, on_message_callback=sock_data_session_end_callback)
        channel.basic_consume(queue=sock_data_raw_queue, auto_ack=True, on_message_callback=sock_data_raw_callback)
        print(' [*] Waiting for messages. To exit press CTRL+C')
        channel.start_consuming()
    except KeyboardInterrupt:
        print('Interrupted')
        # close connection
        connection.close()
        try:
            sys.exit(0)
        except SystemExit:
            os._exit(0)
The start and end data callbacks refer to data sessions, where a data session connection is acknowledged and started and later ended. I believe the raw data callback is where I may need to implement my threads, where data will be processed and then sent back to another queue. The challenge is to make each data session a thread, and then do the processing within that thread.
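One possible starting template (a hedged sketch, independent of the protobuf messages above): let the pika callbacks do nothing but route messages, and give each data session its own worker thread draining a queue.Queue. All names are illustrative:

import queue
import threading

sessions = {}  # session_id -> (thread, queue)

def session_worker(session_id, q):
    # process raw data for one session until a None sentinel arrives
    while True:
        body = q.get()
        if body is None:
            break
        # ... parse and process the raw data message, publish results ...
    print(" [x] Session %s worker finished" % session_id)

def start_session(session_id):
    q = queue.Queue()
    t = threading.Thread(target=session_worker, args=(session_id, q), daemon=True)
    sessions[session_id] = (t, q)
    t.start()

def end_session(session_id):
    t, q = sessions.pop(session_id)
    q.put(None)  # sentinel tells the worker to stop
    t.join()

def route_raw_data(session_id, body):
    # called from sock_data_raw_callback: hand off quickly so the
    # consuming thread is never blocked by processing work
    sessions[session_id][1].put(body)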

python zerorpc and multiprocessing issue

I'm implementing a bi-directional ping-pong demo app between an Electron app and a Python backend.
This is the code for the Python part, which causes the problems:
import sys
import zerorpc
import time
from multiprocessing import Process

def ping_response():
    print("Sleeping")
    time.sleep(5)
    c = zerorpc.Client()
    c.connect("tcp://127.0.0.1:4243")
    print("sending pong")
    c.pong()

class Api(object):
    def echo(self, text):
        """echo any text"""
        return text

    def ping(self):
        p = Process(target=ping_response, args=())
        p.start()
        print("got ping")
        return

def parse_port():
    port = 4242
    try:
        port = int(sys.argv[1])
    except Exception as e:
        pass
    return '{}'.format(port)

def main():
    addr = 'tcp://127.0.0.1:' + parse_port()
    s = zerorpc.Server(Api())
    s.bind(addr)
    print('start running on {}'.format(addr))
    s.run()

if __name__ == '__main__':
    main()
Each time ping() is called from the JavaScript side, it will start a new process that simulates some work (sleeping for 5 seconds) and replies by calling pong on the Node.js server to indicate the work is done.
The issue is that the pong() request never gets to the JavaScript side. If instead of spawning a new process I create a new thread using _thread and execute the same code in ping_response(), the pong request arrives at the JavaScript side. Also, if I manually run the bash command zerorpc tcp://localhost:4243 pong, I can see that the pong request is received by the Node.js script, so the server on the JavaScript side works fine.
What happens with the zerorpc client when I create a new process, such that it doesn't manage to send the request?
Thank you.
EDIT
It seems to get stuck in c.pong().
Try using gipc.start_process() from the gipc module (available via pip) instead of multiprocessing.Process(). It creates a new gevent context, which multiprocessing would otherwise accidentally inherit from the parent.
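For reference, a minimal sketch of that change, assuming gipc is installed (only ping() differs from the code above):

import gipc

class Api(object):
    def ping(self):
        # gipc spawns the child with a fresh gevent context, so the
        # zerorpc client created in ping_response() gets a working hub
        gipc.start_process(target=ping_response)
        print("got ping")
        return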

Attaching ZMQStream with existing tornado ioloop

I have an application where every websocket connection (within the tornado open callback) creates a zmq.SUB socket to an existing zmq.FORWARDER device. The idea is to receive data from zmq as callbacks, which can then be relayed to frontend clients over the websocket connection.
https://gist.github.com/abhinavsingh/6378134
ws.py
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream
ioloop.install()

from tornado.websocket import WebSocketHandler
from tornado.web import Application
from tornado.ioloop import IOLoop

ioloop = IOLoop.instance()

class ZMQPubSub(object):

    def __init__(self, callback):
        self.callback = callback

    def connect(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.SUB)
        self.socket.connect('tcp://127.0.0.1:5560')
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.callback)

    def subscribe(self, channel_id):
        self.socket.setsockopt(zmq.SUBSCRIBE, channel_id)

class MyWebSocket(WebSocketHandler):

    def open(self):
        self.pubsub = ZMQPubSub(self.on_data)
        self.pubsub.connect()
        self.pubsub.subscribe("session_id")
        print 'ws opened'

    def on_message(self, message):
        print message

    def on_close(self):
        print 'ws closed'

    def on_data(self, data):
        print data

def main():
    application = Application([(r'/channel', MyWebSocket)])
    application.listen(10001)
    print 'starting ws on port 10001'
    ioloop.start()

if __name__ == '__main__':
    main()
forwarder.py
import zmq

def main():
    try:
        context = zmq.Context(1)

        frontend = context.socket(zmq.SUB)
        frontend.bind('tcp://*:5559')
        frontend.setsockopt(zmq.SUBSCRIBE, '')

        backend = context.socket(zmq.PUB)
        backend.bind('tcp://*:5560')

        print 'starting zmq forwarder'
        zmq.device(zmq.FORWARDER, frontend, backend)
    except KeyboardInterrupt:
        pass
    except Exception as e:
        logger.exception(e)
    finally:
        frontend.close()
        backend.close()
        context.term()

if __name__ == '__main__':
    main()
publish.py
import zmq

if __name__ == '__main__':
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.connect('tcp://127.0.0.1:5559')
    socket.send('session_id helloworld')
    print 'sent data for channel session_id'
However, my ZMQPubSub class doesn't seem to be receiving any data at all.
I experimented further and realized that I need to call ioloop.IOLoop.instance().start() after registering the on_recv callback within ZMQPubSub. But that just blocks the execution.
I also tried passing the main.ioloop instance to the ZMQStream constructor, but that doesn't help either.
Is there a way to bind ZMQStream to the existing main.ioloop instance without blocking the flow within MyWebSocket.open?
In your now complete example, simply change frontend in your forwarder to a PULL socket and your publisher socket to PUSH, and it should behave as you expect.
The general principles of socket choice that are relevant here:
use PUB/SUB when you want to send a message to everyone who is ready to receive it (may be no one)
use PUSH/PULL when you want to send a message to exactly one peer, waiting for them to be ready
It may appear at first that you just want PUB-SUB, but once you look at each socket pair, you realize that they are very different. The frontend-websocket connection is definitely PUB-SUB - you may have zero-to-many receivers, and you just want to send messages to everyone who happens to be available when a message comes through. But the backend side is different - there is only one receiver, and it definitely wants every message from the publishers.
So there you have it - backend should be PULL and frontend PUB. All your sockets:
PUSH -> [PULL-PUB] -> SUB
publisher.py: socket is PUSH, connected to backend in device.py
forwarder.py: backend is PULL, frontend is PUB
ws.py: SUB connects and subscribes to forwarder.frontend.
The relevant behavior that makes PUB/SUB fail on the backend in your case is the slow joiner syndrome, which is described in The Guide. Essentially, subscribers take a finite time to tell publishers about their subscriptions, so if you send a message immediately after opening a PUB socket, the odds are it hasn't been told that it has any subscribers yet, so it's just discarding messages.
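Concretely, applied to the code above (a sketch showing only the changed lines; everything else stays as posted):

# forwarder.py: the socket facing the publishers becomes PULL,
# and the zmq.SUBSCRIBE setsockopt on it is dropped
frontend = context.socket(zmq.PULL)
frontend.bind('tcp://*:5559')

# publish.py: PUSH waits until a peer is ready, so the message is
# not silently discarded the way a just-opened PUB discards it
socket = context.socket(zmq.PUSH)
socket.connect('tcp://127.0.0.1:5559')
socket.send('session_id helloworld')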
ZeroMQ subscribers have to subscribe to the messages they wish to receive; I don't see that in your code. I believe the Python way is this:
self.socket.setsockopt(zmq.SUBSCRIBE, "")

How to stop flask application without using ctrl-c

I want to implement a command which can stop a Flask application by using flask-script.
I have searched for a solution for a while. Because the framework doesn't provide an app.stop() API, I am curious how to code this. I am working on Ubuntu 12.10 and Python 2.7.3.
If you are just running the server on your desktop, you can expose an endpoint to kill the server (read more at Shutdown The Simple Server):
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@app.get('/shutdown')
def shutdown():
    shutdown_server()
    return 'Server shutting down...'
Here is another approach that is more contained:
from multiprocessing import Process

server = Process(target=app.run)
server.start()
# ...
server.terminate()
server.join()
Let me know if this helps.
I did it slightly differently, using threads:
import logging
import threading

import flask
from werkzeug.serving import make_server

log = logging.getLogger(__name__)

class ServerThread(threading.Thread):

    def __init__(self, app):
        threading.Thread.__init__(self)
        self.server = make_server('127.0.0.1', 5000, app)
        self.ctx = app.app_context()
        self.ctx.push()

    def run(self):
        log.info('starting server')
        self.server.serve_forever()

    def shutdown(self):
        self.server.shutdown()

def start_server():
    global server
    app = flask.Flask('myapp')
    # App routes defined here
    server = ServerThread(app)
    server.start()
    log.info('server started')

def stop_server():
    global server
    server.shutdown()
I use it to do end-to-end tests for a RESTful API, where I can send requests using the Python requests library.
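For instance, a test might look like this (a sketch assuming the start_server/stop_server helpers above and a hypothetical /health route registered on the app):

import requests

def test_health_endpoint():
    start_server()
    try:
        resp = requests.get('http://127.0.0.1:5000/health')  # hypothetical route
        assert resp.status_code == 200
    finally:
        stop_server()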
This is a bit of an old thread, but if someone is experimenting with, learning, or testing a basic Flask app started from a script that runs in the background, the quickest way to stop it is to kill the process running on the port you are running your app on.
Note: I am aware the author is looking for a way not to kill or stop the app. But this may help someone who is learning.
sudo netstat -tulnp | grep :5001
You'll get something like this.
tcp 0 0 0.0.0.0:5001 0.0.0.0:* LISTEN 28834/python
To stop the app, kill the process
sudo kill 28834
My method can be performed via the bash terminal/console:
1) Run this and get the process number:
$ ps aux | grep yourAppKeywords
2a) Kill the process:
$ kill processNum
2b) Kill the process if the above does not work:
$ kill -9 processNum
As others have pointed out, you can only use werkzeug.server.shutdown from a request handler. The only way I've found to shut down the server at another time is to send a request to yourself. For example, the /kill handler in this snippet will kill the dev server unless another request comes in during the next second:
import requests
from threading import Timer
from flask import request
import time

LAST_REQUEST_MS = 0

@app.before_request
def update_last_request_ms():
    global LAST_REQUEST_MS
    LAST_REQUEST_MS = time.time() * 1000

@app.post('/seriouslykill')
def seriouslykill():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()
    return "Shutting down..."

@app.post('/kill')
def kill():
    last_ms = LAST_REQUEST_MS

    def shutdown():
        if LAST_REQUEST_MS <= last_ms:  # subsequent requests abort shutdown
            requests.post('http://localhost:5000/seriouslykill')
        else:
            pass

    Timer(1.0, shutdown).start()  # wait 1 second
    return "Shutting down..."
This is an old question, but googling didn't give me any insight into how to accomplish this - because I didn't read the code here properly! (Doh!)
What it does is raise a RuntimeError when there is no werkzeug.server.shutdown in request.environ...
So what we can do when there is no request is to raise a RuntimeError ourselves:
def shutdown():
    raise RuntimeError("Server going down")
and catch that when app.run() returns:
...
try:
    app.run(host="0.0.0.0")
except RuntimeError, msg:
    if str(msg) == "Server going down":
        pass  # or whatever you want to do when the server goes down
    else:
        # appropriate handling/logging of other runtime errors
        raise
...
No need to send yourself a request.
If you're working on the CLI and only have one flask app/process running (or rather, you just want to kill any flask process running on your system), you can kill it with:
kill $(pgrep -f flask)
You don't have to press CTRL+C; you can provide an endpoint which does it for you:
from flask import Flask, jsonify, request
import json, os, signal

@app.route('/stopServer', methods=['GET'])
def stopServer():
    os.kill(os.getpid(), signal.SIGINT)
    return jsonify({"success": True, "message": "Server is shutting down..."})
Now you can just call this endpoint to gracefully shutdown the server:
curl localhost:5000/stopServer
If you're outside the request-response handling, you can still:
import os
import signal
sig = getattr(signal, "SIGKILL", signal.SIGTERM)
os.kill(os.getpid(), sig)
request.environ.get('werkzeug.server.shutdown') is deprecated in newer versions of Werkzeug.
Pavel Minaev's solution is pretty clear:
import os
from flask import Flask

app = Flask(__name__)
exiting = False

@app.route("/exit")
def exit_app():
    global exiting
    exiting = True
    return "Done"

@app.teardown_request
def teardown(exception):
    if exiting:
        os._exit(0)
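Stopping a locally running instance is then a single request (assuming Flask's default port 5000):
curl http://localhost:5000/exit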
If someone else is looking for a way to stop a Flask server inside a win32 service - here it is. It's a somewhat weird combination of several approaches, but it works well. Key ideas:
There is a shutdown endpoint which can be used for a graceful shutdown. Note: it relies on request.environ.get, which is usable only inside a web request's context (inside an @app.route-ed function).
win32service's SvcStop method uses requests to make an HTTP request to the service itself.
myservice_svc.py
import win32service
import win32serviceutil
import win32event
import servicemanager
import time
import traceback
import os

import myservice

class MyServiceSvc(win32serviceutil.ServiceFramework):
    _svc_name_ = "MyServiceSvc"  # NET START/STOP the service by the following name
    _svc_display_name_ = "Display name"  # this text shows up as the service name in the SCM
    _svc_description_ = "Description"  # this text shows up as the description in the SCM

    def __init__(self, args):
        os.chdir(os.path.dirname(myservice.__file__))
        win32serviceutil.ServiceFramework.__init__(self, args)

    def SvcDoRun(self):
        # ... some code skipped
        myservice.start()

    def SvcStop(self):
        """Called when we're being shut down"""
        myservice.stop()
        # tell the SCM we're shutting down
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STOPPED,
                              (self._svc_name_, ''))

if __name__ == '__main__':
    os.chdir(os.path.dirname(myservice.__file__))
    win32serviceutil.HandleCommandLine(MyServiceSvc)
myservice.py
import sys

from flask import Flask, request, jsonify

# Workaround - otherwise doesn't work in windows service.
cli = sys.modules['flask.cli']
cli.show_server_banner = lambda *x: None

app = Flask('MyService')

# ... business logic endpoints are skipped.

@app.route("/shutdown", methods=['GET'])
def shutdown():
    shutdown_func = request.environ.get('werkzeug.server.shutdown')
    if shutdown_func is None:
        raise RuntimeError('Not running werkzeug')
    shutdown_func()
    return "Shutting down..."

def start():
    app.run(host='0.0.0.0', threaded=True, port=5001)

def stop():
    import requests
    resp = requests.get('http://0.0.0.0:5001/shutdown')
You can use the method below:
app.do_teardown_appcontext()
Google Cloud VM instance + Flask app
I hosted my Flask application on a Google Cloud Platform virtual machine. I started the app using python main.py, but the problem was that Ctrl+C did not stop the server.
The command $ sudo netstat -tulnp | grep :5000 finds the process listening on port 5000 (my Flask app runs on port 5000 by default), which you can then kill.
Note: My VM instance is running Linux 9. This works there; I haven't tested it on other platforms. Feel free to update or comment if it works for other versions too.
A Python solution
Run with: python kill_server.py.
This is for Windows only. It kills the server with taskkill, by PID, gathered with netstat.
# kill_server.py
import os
import subprocess
import re

port = 5000
host = '127.0.0.1'
cmd_newlines = r'\r\n'

host_port = host + ':' + str(port)
pid_regex = re.compile(r'[0-9]+$')

netstat = subprocess.run(['netstat', '-n', '-a', '-o'], stdout=subprocess.PIPE)
# Doesn't return correct PID info without precisely these flags
netstat = str(netstat)
lines = netstat.split(cmd_newlines)

for line in lines:
    if host_port in line:
        pid = pid_regex.findall(line)
        if pid:
            pid = pid[0]
            os.system('taskkill /F /PID ' + str(pid))

# And finally delete the .pyc cache
os.system('del /S *.pyc')
If you are having trouble with the favicon / changes to index.html loading (i.e. old versions are cached), then try "Clear Browsing Data > Images & Files" in Chrome as well.
Doing all of the above, I got my favicon to finally load upon running my Flask app.
app = MyFlaskSubclass()

...

app.httpd = MyWSGIServerSubclass()

...

@app.route('/shutdown')
def app_shutdown():
    from threading import Timer
    t = Timer(5, app.httpd.shutdown)
    t.start()
    return "Server shut down"
My bash script variant (Linux):
#!/bin/bash
portFind="$1"
echo "Finding process on port: $portFind"
pid=$(netstat -tulnp | grep :"$1" | awk '{print $7}' | cut -f1 -d"/")
echo "Process found: $pid"
kill -9 $pid
echo "Process $pid killed"
Usage example:
sudo bash killWebServer.sh 2223
Output:
Finding process on port: 2223
Process found: 12706
Process 12706 killed
If the port is known (e.g., 5000), a simple solution I have found is to enter:
fuser -k 5000/tcp
This will kill the process on port 5000.
See: How to kill a process running on a particular port in Linux?
For Windows, it is quite easy to stop/kill the flask server:
Go to Task Manager
Find flask.exe
Select it and click End Process

Redis pub/sub adding additional channels mid subscription

Is it possible to add additional subscriptions to a Redis connection? I have a listening thread, but it appears not to be influenced by new SUBSCRIBE commands.
If this is the expected behavior, what pattern should be used if users add a stock ticker feed to their interests or join a chatroom?
I would like to implement a Python class similar to:
import threading
import redis

class RedisPubSub(object):

    def __init__(self):
        self._redis_pub = redis.Redis(host='localhost', port=6379, db=0)
        self._redis_sub = redis.Redis(host='localhost', port=6379, db=0)
        self._sub_thread = threading.Thread(target=self._listen)
        self._sub_thread.setDaemon(True)
        self._sub_thread.start()

    def publish(self, channel, message):
        self._redis_pub.publish(channel, message)

    def subscribe(self, channel):
        self._redis_sub.subscribe(channel)

    def _listen(self):
        for message in self._redis_sub.listen():
            print message
The python-redis Redis and ConnectionPool classes inherit from threading.local, and this is producing the "magical" effects you're seeing.
Summary: your main thread's and worker thread's self._redis_sub clients end up using two different connections to the server, but only the main thread's connection has issued the SUBSCRIBE command.
Details: Since the main thread creates self._redis_sub, that client ends up placed into main's thread-local storage. Next, I presume the main thread does a client.subscribe(channel) call. Now the main thread's client is subscribed on connection 1. Then you start the self._sub_thread worker thread, which ends up with its own self._redis_sub attribute set to a new instance of redis.Client, which constructs a new connection pool and establishes a new connection to the redis server.
This new connection has not yet been subscribed to your channel, so listen() returns immediately. So with python-redis you cannot pass an established connection with outstanding subscriptions (or any other stateful commands) between threads.
Depending on how you plan to implement your app you may need to switch to using a different client, or come up with some other way to communicate subscription state to the worker threads, e.g. send subscription commands through a queue.
One other issue is that python-redis uses blocking sockets, which prevents your listening thread from doing other work while waiting for messages, and it cannot signal it wishes to unsubscribe unless it does so immediately after receiving a message.
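One way to send subscription state through a queue (a sketch, assuming a reasonably recent redis-py with PubSub.get_message(); the worker thread is the only one that ever touches the subscribing connection):

import queue
import threading

import redis

class RedisPubSub(object):

    def __init__(self):
        self._redis_pub = redis.Redis(host='localhost', port=6379, db=0)
        self._requests = queue.Queue()
        self._sub_thread = threading.Thread(target=self._listen, daemon=True)
        self._sub_thread.start()

    def publish(self, channel, message):
        self._redis_pub.publish(channel, message)

    def subscribe(self, channel):
        # hand the channel name to the thread that owns the connection
        self._requests.put(channel)

    def _listen(self):
        # the subscribing connection is created and used only in this thread
        pubsub = redis.Redis(host='localhost', port=6379, db=0).pubsub()
        pubsub.subscribe('__control__')  # so the loop has a channel to wait on
        while True:
            try:
                while True:
                    pubsub.subscribe(self._requests.get_nowait())
            except queue.Empty:
                pass
            message = pubsub.get_message(timeout=1.0)
            if message:
                print(message)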
The async way:
the Twisted framework and the txredisapi plugin.
Example code (subscriber):
import txredisapi as redis

from twisted.internet import reactor
from twisted.application import internet
from twisted.application import service

class myProtocol(redis.SubscriberProtocol):

    def connectionMade(self):
        print "waiting for messages..."
        print "use the redis client to send messages:"
        print "$ redis-cli publish chat test"
        print "$ redis-cli publish foo.bar hello world"
        self.subscribe("chat")
        self.psubscribe("foo.*")
        reactor.callLater(10, self.unsubscribe, "chat")
        reactor.callLater(15, self.punsubscribe, "foo.*")
        # self.continueTrying = False
        # self.transport.loseConnection()

    def messageReceived(self, pattern, channel, message):
        print "pattern=%s, channel=%s message=%s" % (pattern, channel, message)

    def connectionLost(self, reason):
        print "lost connection:", reason

class myFactory(redis.SubscriberFactory):
    # SubscriberFactory is a wrapper for the ReconnectingClientFactory
    maxDelay = 120
    continueTrying = True
    protocol = myProtocol

application = service.Application("subscriber")
srv = internet.TCPClient("127.0.0.1", 6379, myFactory())
srv.setServiceParent(application)
Only one thread, no headache :)
It depends on what kind of app you are coding, of course. For networking, go with Twisted.
