How to interpose RabbitMQ between REST client and (Python) REST server?

Suppose I develop a REST service hosted in Apache, with a Python plugin that services GET, PUT, DELETE, and PATCH, and this service is consumed by an Angular client (or another REST-capable browser technology). How do I then make it scalable with RabbitMQ (AMQP)?
Potential Solution #1
Multiple Apache instances still face off against the browser's HTTP calls.
Each Apache instance uses an AMQP plugin and posts a message to a queue.
Python microservices monitor the queue, pull a message, service it, and return a response.
The response is passed back to the Apache plugin, and Apache in turn generates the HTTP response.
Does this mean the Python microservice no longer has any HTTP server code at all? That would change the component a lot. It is probably best to decide upfront whether you want this pattern, as it seems it would be a real task to rip out the HTTP server code later.
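For what it's worth, a rough sketch of what such an HTTP-less worker might look like, using pika's request/reply (RPC) pattern; the queue name, message shape, and handle() logic are illustrative assumptions, not part of the question:

import json
import pika

def handle(request):
    # Dispatch on whatever verb/path the Apache-side plugin forwarded.
    return {'ok': True, 'method': request.get('method')}

def on_request(ch, method, props, body):
    response = handle(json.loads(body))
    ch.basic_publish(
        exchange='',
        routing_key=props.reply_to,  # reply queue the Apache-side plugin listens on
        properties=pika.BasicProperties(correlation_id=props.correlation_id),
        body=json.dumps(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='rest_requests')   # hypothetical queue name
channel.basic_qos(prefetch_count=1)            # one message per worker at a time
channel.basic_consume(queue='rest_requests', on_message_callback=on_request)
channel.start_consuming()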
Other potential solutions? I am genuinely puzzled as to how we're supposed to take a classic REST server component and upgrade it to be scalable with RabbitMQ/AMQP with minimal disruption.

I would recommend switching from WSGI to ASGI (nginx can help here). I'm not sure why you think RabbitMQ is the solution to your problem, as nothing you described sounds like it would be solved by that method.
ASGI is not supported by Apache as far as I know, but it allows the server to go off and do work, and while it's working it can continue to service new requests that come in. (A gross oversimplification.)
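To illustrate that point, a bare ASGI app is just an async callable; in this sketch the sleep stands in for slow work, and other requests keep being served while it waits (run it with any ASGI server, e.g. uvicorn):

import asyncio

async def app(scope, receive, send):
    # Each request runs as its own task, so this sleep (standing in for
    # real work) does not block other incoming requests.
    assert scope['type'] == 'http'
    await asyncio.sleep(5)
    await send({'type': 'http.response.start', 'status': 200,
                'headers': [(b'content-type', b'text/plain')]})
    await send({'type': 'http.response.body', 'body': b'done\n'})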
If for whatever reason you really want to use job workers (RabbitMQ, etc.), then I would suggest returning a "token" to the user (really just the job_id); they can then call back with that token, and the service will report either the current job status or the result.
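A minimal sketch of that token pattern, assuming Flask, with a thread pool standing in for real RabbitMQ workers to keep it self-contained (all endpoint names are made up):

import uuid
from concurrent.futures import ThreadPoolExecutor
from flask import Flask, jsonify, request

app = Flask(__name__)
executor = ThreadPoolExecutor(max_workers=4)  # stands in for queue workers
jobs = {}  # job_id -> Future

def expensive_work(payload):
    # Whatever the microservice actually does goes here.
    return {'echo': payload}

@app.post('/jobs')
def submit():
    job_id = str(uuid.uuid4())
    jobs[job_id] = executor.submit(expensive_work, request.get_json())
    return jsonify({'token': job_id}), 202  # 202 Accepted: work is queued

@app.get('/jobs/<job_id>')
def status(job_id):
    future = jobs.get(job_id)
    if future is None:
        return jsonify({'error': 'unknown token'}), 404
    if not future.done():
        return jsonify({'status': 'pending'})
    return jsonify({'status': 'done', 'result': future.result()})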

Related

Slack API - queue for retrieving slash commands

I am tasked with building a Slack slash command app in Python which will respond to incoming slash commands. However, for security reasons, I am not allowed to open the firewall for incoming webhooks from Slack. Is there instead a way to check a queue of sent slash commands?
For example, a user types "/myslashapp" in a specific channel. My app will need to do something like call an endpoint every 30 seconds and check if the "/myslashapp" command was sent. If it was, my app should trigger a Lambda function in AWS.
Based on reading the Slack API docs, I haven't found any way to do this other than perhaps the RTM API, though it seems like overkill and still requires an open socket.
No. The Slack API has no built-in support that allows you to pull requests after the fact from a queue instead of receiving them from Slack as they happen.
The RTM API might work for you, because the connection to Slack is initiated from your side, so, provided your firewall allows it, it would also work from within an intranet. However, you cannot do slash commands with the RTM API, or use any of the other interesting interactive Slack features like buttons; only simple messages and events.
You could implement your own bridging solution and pull from it. But I don't think a polling solution would work, because it creates a lot of latency for your app. Users expect an immediate response to their slash command, not a delay of 30 seconds or more.
So in summary I think you only have two valid options:
Host your app internally and use a secure tunnel like ngrok to expose a public URL to your app.
Run your app on the Internet and give it a secure connection to your intranet for accessing internal data (similar to how a shopping website works: it has a public app on the Internet, but can also transmit orders to the business applications on the company's intranet).

Client-Server framework for python

I'm currently working on a university project that needs to be implemented with a client-server model.
I've had experiences in the past where I managed the communication at the socket level, and that really sucked.
I was wondering if someone could suggest an easy-to-use Python framework that I can use for this purpose.
I don't know what kind of details you may need to answer so I'm just going to describe the project briefly.
Communication should happen over HTTP, possibly HTTPS.
The server does not need to send data back or invoke methods on the clients; it just collects data.
Many clients send data concurrently to the server, which needs to distinguish the sender, process the data accordingly, and put the result in a database.
You can use something like Flask or Django. Both frameworks are fairly easy to work with; Flask is much easier than Django IMO, although Django has a built-in authentication layer you can use, albeit one that is more difficult to apply in a client/server scenario like yours.
I would personally use Flask with JWT (JSON Web Tokens), which lets you give a token to each client for authentication with the server and also lets you differentiate between clients, and you can use HTTPS for your SSL/TLS requirement. It is tons easier to implement, and although I like Django better for what it brings to the table, it is probably overkill to learn it for a single assignment.
For Flask with SSL, here is a quick rundown of that.
For JWT with Flask, here is that.
You can use any database system you would like.
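A minimal sketch of the Flask + JWT idea, using the PyJWT package directly rather than a Flask extension; the route names, secret, and token payload are illustrative assumptions, not a definitive implementation:

import jwt
from flask import Flask, jsonify, request

app = Flask(__name__)
SECRET = 'change-me'  # placeholder signing key

@app.post('/login')
def login():
    creds = request.get_json()
    # ...verify creds against your user store here (omitted)...
    token = jwt.encode({'client_id': creds['client_id']}, SECRET, algorithm='HS256')
    return jsonify({'token': token})

@app.post('/data')
def collect():
    header = request.headers.get('Authorization', '')
    try:
        claims = jwt.decode(header.removeprefix('Bearer '), SECRET, algorithms=['HS256'])
    except jwt.InvalidTokenError:
        return jsonify({'error': 'bad token'}), 401
    # claims['client_id'] tells you which client sent the data;
    # store request.get_json() in your database keyed by it.
    return '', 204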
If I understood you correctly, you can use any web framework in Python. For instance, you can use Flask (I use it and I like it). Django is also a popular choice among the Python web frameworks. However, you shouldn't be limited to only these two; there are plenty of them out there. Just google for them.
The implementation of the client depends on what kind of communication there will be between the clients and the server - I don't have enough details here. I only know it's unidirectional.
The client can be a browser accessing your web application written in Flask, where users send only POST requests to the server. However, even here the communication will be bidirectional (the clients need to open the page, which means the server sends data back to the client), which violates your initial requirement.
Alternatively, it can be a dedicated client written in Python sending particular requests to your server over HTTP/HTTPS. For instance, your client can use the requests package to send HTTP requests.
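Something along these lines, for example (the URL and payload are placeholders):

import requests

resp = requests.post(
    'https://your-server.example/data',   # placeholder URL
    json={'sensor': 'A1', 'value': 42},   # placeholder payload
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code)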

Communicating between appengine and an application

I'm working on a web interface which currently runs on PHP and communicates locally with a Python script.
I'm moving the web side to App Engine, which so far is going well when used locally. I'm currently communicating from the App Engine app to the Python app via GET requests that are handled by the Python script.
The problem is that the machine running the Python script will obviously be behind a firewall. I've never needed to do this before and am not sure how best to implement it.
The only idea I have so far is for the Python script to send POST requests to App Engine with some data and receive other data back as the response. The only problem with this is that the web interface should update the client quite fast.
Any ideas?
Take a look at ProtoRPC Python API: https://developers.google.com/appengine/docs/python/tools/protorpc/overview
Though it is still marked as experimental, it seems to be a decent framework for what you are trying to do - send messages back and forth between the apps.
Since you said your local app runs behind a firewall, I'm assuming you cannot open up an endpoint and protect it with some form of authentication.
Once you have messages flowing, you can either use Channel API to keep the front-end updated: https://developers.google.com/appengine/docs/python/channel/overview
Or if you want to go more basic, just implement long/short polling through AJAX.
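On the script's side of such a polling setup, the loop could be as simple as this sketch (both endpoints and the process() helper are hypothetical):

import time
import requests

def process(task):
    # Hypothetical local handler for whatever work the script does.
    return {'id': task['id'], 'status': 'done'}

while True:
    resp = requests.get('https://your-app.appspot.com/pending-work', timeout=30)
    for task in resp.json().get('tasks', []):
        result = process(task)
        requests.post('https://your-app.appspot.com/results', json=result, timeout=30)
    time.sleep(5)  # poll interval; shorten it if the UI must update faster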
Sorry, with the limited amount of info you have provided, that's all I can think of right now. Please feel free to post more details and I'll try to help further.

How to control Apache via Django to connect to mongoose (another HTTP server)?

I have been doing lots of searching and reading to solve this.
The main goal is to let a Django-based web management system connect to a device which also runs an HTTP server. Django will handle the user's request, ask the device for the real data, then feed it back to the user.
Now I have a "kinda-work-in-concept" solution:
Browser -> Apache Server: the browser uses jQuery and HTML/CSS to collect the user's request.
Apache Server -> Device HTTP Server: Apache + mod_python (or, some say, Apache + mod_wsgi?), so I can control Apache to do things like build up a session and cookies to record the login.
But, this is the issue actually bugs me.
How do I make this work? What should I use to build the connection between these two servers?
You could use httplib or urllib2 (both supplied in the Python standard library) in your Django view to send HTTP requests to the device running mongoose.
Alternatively you could use the Requests library which provides a less verbose API for generating HTTP requests - see also this blog post.
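A minimal sketch of such a Django view using requests (the device address and JSON response shape are assumptions):

import json
import requests
from django.http import HttpResponse

def device_status(request):
    # The device address is an assumption; point it at the mongoose server.
    resp = requests.get('http://192.168.1.50/status', timeout=5)
    resp.raise_for_status()
    return HttpResponse(json.dumps(resp.json()), content_type='application/json')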
(Also, I would strongly recommend using mod_wsgi rather than mod_python, as mod_wsgi is actively maintained and performs better than mod_python, which was last updated in 2007.)
If you have control over what runs on the device side, consider using XML-RPC to talk from client to server.
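If XML-RPC fits, the standard library covers both ends. A rough sketch of the device side (Python 3 module paths; the method name is illustrative):

from xmlrpc.server import SimpleXMLRPCServer

def get_status():
    # Illustrative method; return whatever the device actually measures.
    return {'temperature': 21.5}

server = SimpleXMLRPCServer(('0.0.0.0', 9000), allow_none=True)
server.register_function(get_status)
server.serve_forever()

# On the Django side:
#   import xmlrpc.client
#   proxy = xmlrpc.client.ServerProxy('http://device-host:9000/')
#   status = proxy.get_status()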

Need help understanding Comet in Python (with Django)

After spending two entire days on this I'm still finding it impossible to understand all the choices and configurations for Comet in Python. I've read all the answers here as well as every blog post I could find. It feels like I'm about to hemorrhage at this point, so my utmost apologies for anything wrong with this question.
I'm entirely new to all of this, all I've done before were simple non-real-time sites with a PHP/Django backend on Apache.
My goal is to create a real-time chat application; hopefully tied to Django for users, auth, templates, etc.
Every time I read about a tool it says I need another tool on top of it, it feels like a never-ending chain.
First of all, can anybody categorize all the tools needed for this job?
I've read about different servers, networking libraries, engines, JavaScripts for the client side, and I don't know what else. I never imagined it would be this complex.
Twisted / Twisted Web seems to be popular, but I have no idea how to integrate it or what else I need (guessing I need client-side JS at least).
If I understand correctly, Orbited is built on Twisted, do I need anything else with it?
Are Gevent and Eventlet in the same category as Twisted? How much else do I need with them?
Where do things like Celery, RabbitMQ, or KV stores like Redis come into this? I don't really understand the concept of a message queue. Are they essential and what service do they provide?
Are there any complete chat app tutorials I should look at?
I'll be entirely indebted to anybody who helps me past this mental roadblock, and if I left anything out please don't hesitate to ask. I know it's a pretty loaded question.
You could use Socket.IO. There are gevent and tornado handlers for it. See my blog post on gevent-socketio with Django here: http://codysoyland.com/2011/feb/6/evented-django-part-one-socketio-and-gevent/
I feel your pain, having had to go through the same research over the past few months. I haven't had time to deal with proper documentation yet, but I have a working example of using Django with socket.io and tornadio at http://bitbucket.org/virtualcommons/vcweb - I was hoping to set up direct communication from the Django server side to the tornadio server process using queues (i.e., logic in a Django view pushes a message onto a queue that then gets handled by tornadio, which pushes a JSON-encoded version of that message out to all interested subscribers) but haven't implemented that part fully yet.
The way I've currently gotten it set up involves:
An external tornado (tornadio) server, running on another port, accepting socket.io requests and working with Django models. The only writes this server process makes to the database are the chat messages that need to be stored. It has full access to all Django models, etc., and all real-time interactions need to go directly through this server process.
Django template pages that require real-time access include the socket.io JavaScript and establish direct connections to the tornadio server.
I looked into orbited, hookbox, and gevent but decided to go with socket.io + tornado as it seemed to allow me the cleanest javascript + python code. I could be wrong about that though, having just started to learn Python/Django over the past year.
Redis is relevant as a persistence layer that also supports native publish/subscribe. So instead of a situation where you are polling the db looking for new messages, you can subscribe to a channel, and have messages pushed out to you.
I found a working example of the type of system you describe. The magic happens in the socketio view:
from gevent import Greenlet
from django.http import HttpResponse

# redis_client, username and room_channel are helpers from the example project.

def socketio(request):
    """The socket.io view."""
    io = request.environ['socketio']
    redis_sub = redis_client().pubsub()
    user = username(request.user)

    # Subscribe to incoming pubsub messages from redis.
    def subscriber(io):
        redis_sub.subscribe(room_channel())
        redis_client().publish(room_channel(), user + ' connected.')
        while io.connected():
            for message in redis_sub.listen():
                if message['type'] == 'message':
                    io.send(message['data'])
    greenlet = Greenlet.spawn(subscriber, io)

    # Listen to incoming messages from client.
    while io.connected():
        message = io.recv()
        if message:
            redis_client().publish(room_channel(), user + ': ' + message[0])

    # Disconnected. Publish disconnect message and kill subscriber greenlet.
    redis_client().publish(room_channel(), user + ' disconnected')
    greenlet.throw(Greenlet.GreenletExit)

    return HttpResponse()
Take the view step-by-step:
Set up socket.io, get a redis client and the current user
Use Gevent to register a "subscriber" - this takes incoming messages from Redis and forwards them on to the client browser.
Run a "publisher" which takes messages from socket.io (from the user's browser) and pushes them into Redis
Repeat until the socket disconnects
The Redis Cookbook gives a little more detail on the Redis side, as well as discussing how you can persist messages.
Regarding the rest of your question: Twisted is an event-based networking library; it could be considered an alternative to Gevent in this application. It's powerful, but difficult to debug in my experience.
Celery is a "distributed task queue" - basically, it lets you spread units of work out across multiple machines. The "distributed" angle means some sort of transport is required between the machines. Celery supports several types of transport, including RabbitMQ (and Redis too).
In the context of your example, Celery would only be appropriate if you had to do some sort of costly processing on each message, like scanning for profanity or something. Even then, something would have to initiate the Celery task, so there would still need to be code listening for the socket.io callback.
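A hypothetical sketch of that profanity-scan case with Celery (the broker URL and task body are placeholders):

from celery import Celery

app = Celery('chat', broker='amqp://guest@localhost//')

@app.task
def scan_message(text):
    # The costly processing runs on a worker machine, not in the web process.
    return text.replace('badword', '***')

# The socket.io callback would enqueue it with:
#   scan_message.delay(incoming_text)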
(Just in case you weren't totally confused, Celery itself can be made to use Gevent as its underlying concurrency library.)
Hope that helps!
