Why do we need to use RabbitMQ - python

Why do we need RabbitMQ when we already have a powerful network framework in Python called Twisted? I am trying to understand why someone would want to use RabbitMQ.
Could you please provide a scenario or an example using RabbitMQ?
Also, where can I find a tutorial on how to use RabbitMQ?

Let me give you a few reasons that make using MOM (Message Oriented Middleware) probably the best choice.
Decoupling:
It can decouple/separate the core components of the application. There is no need to list all the benefits of a decoupled architecture here; I just want to point out that this is one of the main requirements for writing quality, maintainable software.
Flexibility:
It is actually very easy to connect two totally different applications written in different languages by using the AMQP protocol. These applications talk to each other with the help of a "translator", which is the MOM.
Scalability:
By using MOM we can scale the system horizontally. One message producer can transmit a task, a command or a message to an unlimited number of message consumers for processing, and to scale the system all we need to do is add new message consumers. Let's say we are receiving 1000 pictures per second and we must resize them. Solving this problem with traditional methods could be a headache. With MOM we can hand the images to message consumers, which do their job asynchronously while keeping data integrity intact (see the sketch after this list).
There are other benefits of using MOM as well, but these three are the most significant in my opinion.
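To make the image-resizing scenario concrete, here is a minimal work-queue sketch using the pika client against a RabbitMQ broker assumed to run on localhost; the queue name and the resize_image helper are illustrative, not part of the original scenario.

    # Work-queue sketch with pika (assumes a RabbitMQ broker on localhost).
    import pika

    def resize_image(data: bytes) -> None:
        """Hypothetical placeholder for the actual resizing work."""
        ...

    def publish_image(image_bytes: bytes) -> None:
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="images")
        channel.basic_publish(exchange="", routing_key="images", body=image_bytes)
        connection.close()

    def consume_images() -> None:
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="images")

        def on_message(ch, method, properties, body):
            resize_image(body)
            ch.basic_ack(delivery_tag=method.delivery_tag)

        channel.basic_consume(queue="images", on_message_callback=on_message)
        channel.start_consuming()  # blocks; run one of these per consumer process

To scale, you simply start more processes running consume_images(); RabbitMQ distributes the messages among them.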

Twisted is not a queue implementation. Apart from that, RabbitMQ offers enterprise-level queuing features and implements the AMQP protocol, which is often needed in the enterprise world.

Twisted is a networking library that implements a number of network protocols as well as allowing you to create your own. One of the protocols that you can use with Twisted is AMQP https://launchpad.net/txamqp
RabbitMQ is an AMQP broker, i.e. a service that runs outside of your application, probably on a separate cluster of servers. AMQP is merely the protocol that is used to communicate with a message queueing broker like RabbitMQ. You get a lot of things from RabbitMQ. You can send messages persistently with guaranteed delivery, so they will arrive even if your app crashes, and even if the RabbitMQ broker ends up being restarted. You get load balancing between message consumers if you have multiple consumers on the same queue. You get interoperability with apps in other languages as long as you use a reasonably open serialization format for your message bodies. AMQP allows you to break up a monolithic app into many loosely coupled parts that can run on different servers. This is a big win for long-term maintenance of an application.
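As a rough illustration of the persistence point (not the only way to do it), marking the queue durable and the message persistent with pika is what lets messages survive a broker restart; the broker address and queue name below are assumptions.

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Durable queue: the queue definition survives a broker restart.
    channel.queue_declare(queue="tasks", durable=True)

    # delivery_mode=2 marks the message itself as persistent.
    channel.basic_publish(
        exchange="",
        routing_key="tasks",
        body=b"do something important",
        properties=pika.BasicProperties(delivery_mode=2),
    )
    connection.close()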

RabbitMQ is a bit more than mere messaging... It's a common platform with the ability to inter-connect applications. Using RabbitMQ, a Java application can speak to a Linux server and/or a .NET app, to a Ruby on Rails app, and to almost anything that finds its place in corporate web development. Most importantly, it implements the "fire and forget" model proposed by AMQP. It's a perfect replacement for JMS or an ESB, especially if you are dealing with a cross-platform architecture and need a guarantee of reliability. There is even a special RPC (Remote Procedure Call) feature that adds to the ease of development in a distributed architecture.
Apart from all this, in the world of financial services such as stock exchanges or share markets, where a lot of reliable and efficient routing is required (suppose you don't know the actual number of people subscribed to your services, but want to ensure that whoever is subscribed receives your pings whether they are connected at this moment or will connect later), RabbitMQ rules because it is based on Erlang and the Open Telecom Platform, which assures high performance while using minimal resources. For the most convenient introduction to RabbitMQ, see rabbitmq.com/getstarted.html for your native development language.

RabbitMQ is an implementation of AMQP, which defines an interoperable protocol for message oriented middleware. As such, it defines semantics for message creation, publication, routing and consumption that can be implemented on any platform.
Conceptually, it could be considered a specialization of a networking engine like Twisted, but one based on an industry-accepted standard.
Here is a blog post from Ross Mason that discusses the value of interoperable publish-subscribe with AMQP: http://blogs.mulesoft.org/inter-operable-publishsubscribe-with-amqp/

I use RabbitMQ as the message broker for Celery.
Also, I have worked with Twisted. It is different.
See here for more on AMQP: http://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol

RabbitMQ is built on message-queueing technology (the AMQP protocol), which helps keep your application decoupled and your request handling fast by moving slow work out of the request path.
And the best scenario for RabbitMQ is background processing of data that takes too long to be served over HTTP. For example, suppose you want to download a report from your web app, and generating that report takes 15-20 minutes. In that case you should push the download request onto a RabbitMQ queue and deliver the finished report to the user via email or a notification.
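A hedged sketch of how that report example might look with Celery on top of RabbitMQ; the task body, the build_report and send_report_email helpers, and the broker URL are all hypothetical.

    # tasks.py - background report generation handed off to a RabbitMQ-backed worker
    from celery import Celery

    app = Celery("reports", broker="amqp://guest:guest@localhost//")

    def build_report(user_id: int) -> bytes:
        """Hypothetical placeholder for the slow (15-20 min) report generation."""
        ...

    def send_report_email(user_id: int, report: bytes) -> None:
        """Hypothetical placeholder for delivering the finished report."""
        ...

    @app.task
    def generate_report(user_id: int) -> None:
        report = build_report(user_id)
        send_report_email(user_id, report)

    # In the web view, enqueue and return immediately instead of blocking the request:
    # generate_report.delay(user_id)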
To see exactly how RabbitMQ works and how it solves such use cases, check out these YouTube videos: https://youtu.be/vvxvlF6CJkg and https://youtu.be/0dXwJa7veI8

Related

Proper Celery Monitoring and File-Management with Web-Services

We are working on an Internet and intranet platform that serves client requests through web applications.
There are heavy-weight computations on database entries and files. We want to update the client on the state of those computations via push notifications and make changes to files without the risk of race conditions. The architecture is supposed to run on both low-scale single-server environments and high-scale cluster environments.
So far, we are running a Django web server with PostgreSQL, the Python library Channels, and RabbitMQ as the message broker.
Once an HTTP request from a client arrives in Django, we trigger the task via task.delay() and immediately return the task_id to the client. The client then opens a websocket to another Django route and hands over the task_ids it is interested in. Django then polls the state of the task via AsyncResult(task_id).state. Once the state changes, we read the result via AsyncResult(task_id).get() and push the task results to the client.
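For readers unfamiliar with that pattern, a rough sketch of what it might look like in code (the view names and the heavy_computation task are illustrative, not the actual project code):

    # views.py - illustrative sketch of the delay()/AsyncResult polling pattern
    import asyncio
    from celery.result import AsyncResult
    from django.http import JsonResponse
    from myapp.tasks import heavy_computation  # hypothetical Celery task

    def start_computation(request):
        result = heavy_computation.delay(request.POST["entry_id"])
        return JsonResponse({"task_id": result.id})

    async def wait_for_result(task_id: str):
        # Polling loop as described above; AsyncResult.state / .get() are Celery APIs.
        res = AsyncResult(task_id)
        while res.state in ("PENDING", "STARTED", "RETRY"):
            await asyncio.sleep(1)
        return res.get()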
Here is a similar sequence diagram from another project I found online (source, 18.09.21).
Something not shown on the diagram: the Channels workers have to fetch the file they are working on from Django. Part of the result is not for the client but is used to update the file. Django locks and updates the file locally as soon as the client asks for the result and Django receives the task results from Celery (the changes only add attributes and will not conflict with each other).
My thoughts about this architecture are:
- Monitoring of the Celery events is bad so far. It is only triggered by the client, which has to know about the tasks to begin with.
- Django is not suited for monitoring, and polling is not efficient in general.
- The file management seems fishy.
I would prefer proper monitoring, where events are pushed to Django and the client. The client has to be able to consume the events at any later time.
I have some thoughts about solutions, but I would like to hear your opinions first. I can bring them into the discussion later.
Greetings
Edit 1
From other sources I got helpful information regarding a good strategy.
Instead of Django "monitoring" the Celery tasks, we can use a dedicated websocket service, like FastAPI, that monitors task events and propagates them to the clients via websocket.
The client doesn't have to know about its running tasks per se. Instead we can have ownership of tasks, and the client only has to authenticate itself. The whole security part will be implemented anyway, and it is supported by Celery.
For file management, we should use a dedicated object store like MinIO. This service can become a subscriber to task events related to files.
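Picking up the dedicated-monitoring idea, here is a minimal sketch of a real-time Celery event consumer that could feed such a websocket service; the broadcast_to_clients helper and broker URL are hypothetical, and the workers must be started with task events enabled (celery worker -E).

    # monitor.py - consume Celery task events instead of polling AsyncResult
    from celery import Celery

    app = Celery(broker="amqp://guest:guest@localhost//")

    def broadcast_to_clients(event: dict) -> None:
        """Hypothetical: push the event to interested websocket clients."""
        ...

    def main() -> None:
        with app.connection() as connection:
            receiver = app.events.Receiver(connection, handlers={
                "task-succeeded": broadcast_to_clients,
                "task-failed": broadcast_to_clients,
            })
            receiver.capture(limit=None, timeout=None, wakeup=True)

    if __name__ == "__main__":
        main()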
We all like Python, but we don't have to reinvent the wheel whenever we want better monitoring or more control over the behavior of our systems.
That being said, I would recommend re-architecting the solution to decrease the complexity of your Django application by exploring what native cloud solutions offer in terms of micro-service architecture (API Gateway), AWS SQS and SNS, computation, and storage options for your files.
Such an approach will take care of a lot of the monitoring, configuration, and file-management activities, and most importantly your monolithic application could scale without code changes or additional configuration.

Server architecture depending on the capacity

I am new to server-side development,
but I have gotten a chance to design and implement a server that will serve around 2000-3000 clients.
I am thinking of using Python and WebSockets, though I don't know whether this choice is appropriate.
At this point, I am curious about how to design the server.
I think there must be some architectures normally in use depending on the capacity the server handles.
Otherwise, could I use a WebSocket server offered by some Python package like Tornado or Django?
I hope that I can get any information on this.
Any advice?
I've had good experiences using HAProxy in front of sockjs-tornado. Depending on how complex your server-side logic, routing, and persistence requirements are, you could write all your server endpoints using Tornado and use SQLAlchemy to handle writes to a relational database, or use a non-SQL data store like Redis.
If your main requirement is real-time interactivity, it might be worth investigating Meteor as well.
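If you go the Tornado route mentioned above, a bare-bones WebSocket handler looks roughly like this (the port and routing are illustrative):

    # Minimal Tornado WebSocket echo server (illustrative sketch)
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    class EchoHandler(tornado.websocket.WebSocketHandler):
        def open(self):
            print("client connected")

        def on_message(self, message):
            # Echo text frames straight back to the client.
            self.write_message("echo: " + message)

        def on_close(self):
            print("client disconnected")

    if __name__ == "__main__":
        app = tornado.web.Application([(r"/ws", EchoHandler)])
        app.listen(8888)
        tornado.ioloop.IOLoop.current().start()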
One possible stack could be Pyramid, SockJS, Gunicorn, and gevent. Nginx is probably better suited as a frontend than Apache, but of course if you do not have any lengthy processing on the backend, any decent asynchronous Python server with websocket and SockJS support (not sure about socket.io as an alternative) will work for you out of the box.
Lengthy processing should be offloaded to queue workers anyway, so an asynchronous server will fit the bill.
Just check whether all the datastore/database adapters you use are compatible with your server solution, be it asynchronous or multi-threaded.

Storage Backend based on Websockets

I have spent quite some time researching server backends/APIs/frameworks. I need a solution where I can store user content (JSON & binary data).
The obvious choice would be a REST API. The only missing element is a push feature for when data on the server changes and clients should be notified instantly. With more research on this matter I discovered the classic approaches (Comet, push, server-sent events, Bayeux, BOSH, ...) as well as the "new" league, WebSockets. I would definitely prefer the WebSocket method or using raw TCP sockets directly. But this post is not about the pros/cons of these two technologies, so please restrain yourself from getting side-tracked in the comments.
At the moment the following projects exist which are very similar to my needs:
- Simperium (simperium.com): this looks very promising, but the core/server is sadly not open source and god knows when, if ever, that step will happen
- Realtime.co (framework.realtime.co/storage): a hosted service, but the same principle
- Some frameworks for building servers, such as Atmosphere (Java, no WAMP), CometD (Java, project page looks like it is stuck in the 90's), Autobahn (Python, WAMP)
My actual favorite is the Autobahn framework (autobahn.ws), especially using the WAMP protocol (a WebSocket subprotocol), as it offers exactly what I need. So the idea would be to build a Python backend/server with Autobahn Python (based on the Twisted framework) which manages all socket (WAMP) connections and includes a PostgreSQL database for data storage. WAMP libraries already exist for all the desired clients. The server would need to be able to do the typical REST API things:
- Send, update and delete requested data (JSON/binary) from/to server/clients
- Synchronization & automatic conflict management
- Offline handling when the connection breaks, automatic reconnect when the connection is available again
So finally the questions:
- Have I missed an open source project which covers exactly my needs?
- If I would like to develop my own server with Autobahn and a database, could you point me in the right direction? I have a lot of concerns and not enough in-depth understanding. I know Autobahn already gives you a server, but that one is not very close to my final needs. How do I build the server efficiently so that it can handle all connected sockets? How do I handle it when a client needs a server push? Are there schemas, models or concepts for how such a server should look?
- Twisted is a very powerful Python framework but is not regarded as the most convenient for writing apps. But I guess a socket-based storage server with DB access should be possible? If I run Twisted as a web resource and develop the server components with another Python framework, would this compromise latency/performance much?
- Is such a server backend with lots of data storage (JSON fields and also binary data such as documents and images) reasonable for a single developer/small team to build with sockets, or is this something only bigger companies like Dropbox can do at the moment?
Thank you very much for your help & time!
So finally the questions:
Have I missed an open source project which covers exactly my needs?
No, you've covered the open source projects. Open source only gets you about halfway there, though. Implementing a global realtime network requires equal parts implementation and operations. You have to think about dropped messages, retries, what happens if a particular geography gets hot, how you scale your servers, etc. I would argue that an open source solution won't achieve what you want unless you're willing to invest significant resources into operations. I would recommend a service like PubNub: http://pubnub.com
If I would like to develop my own server with Autobahn and a database, could you point me in the right direction? I have a lot of concerns and not enough in-depth understanding. I know Autobahn already gives you a server, but that one is not very close to my final needs. How do I build the server efficiently so that it can handle all connected sockets? How do I handle it when a client needs a server push? Are there schemas, models or concepts for how such a server should look?
A good database to back a realtime framework would be Cassandra because it supports high write volumes and handles time series data well: http://cassandra.apache.org/.
Twisted is a very powerful Python framework but is not regarded as the most convenient for writing apps. But I guess a socket-based storage server with DB access should be possible? If I run Twisted as a web resource and develop the server components with another Python framework, would this compromise latency/performance much?
I would not use Twisted. I would use gevent: http://www.gevent.org/. It's coroutine-based, so you don't get into callback hell. To support more connections you just increase the greenlet pool that listens on the socket.
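A minimal sketch of that gevent approach, assuming a plain TCP service; the port and pool size are illustrative:

    # gevent TCP server with a bounded greenlet pool (illustrative sketch)
    from gevent.pool import Pool
    from gevent.server import StreamServer

    def handle(socket, address):
        # One greenlet per connection; no callback chains needed.
        socket.sendall(b"hello\n")
        data = socket.recv(1024)
        print("received from", address, data)
        socket.close()

    if __name__ == "__main__":
        pool = Pool(10000)  # raise this number to accept more concurrent connections
        server = StreamServer(("0.0.0.0", 8000), handle, spawn=pool)
        server.serve_forever()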
Is such a server backend with lots of data storage (JSON fields and also binary data such as documents and images) reasonable for a single developer/small team to build with sockets, or is this something only bigger companies like Dropbox can do at the moment?
Again, I would not build this on your own. A service like PubNub (http://pubnub.com), which takes care of all the operational issues for you and has a clean API, would serve your needs at minimal cost. PubNub takes care of the protocol for you, so if you're on a mobile device that doesn't support WebSockets it will use TCP, HTTP or whatever the best transport is for the device.

What is the best Pythonic way to communicate between distributed services/unix machines?

Mornink!
I need to design, write and implement a wide system consisting of multiple unix servers performing different roles and running different services. The system must be bulletproof, robust and fast. Yeah, I know. ;) Since I don't know how to approach this task, I've decided to ask for your opinion before I leave the design stage. Here is how the workflow is supposed to flow:
users interact with the website, where they set up demands for a service
this demand is stored (in a database?) and some kind of message about the new demand in the database/queue is sent to the central (clustered) system
the central system picks up the demand and sends signals to various other systems (clusters) to perform their duties (parts of the demanded service setup)
when they are done, they send a message up to the central system or the website that the service is now being served
Now, what is the modern, robust, clean and efficient way of storing these requests in some kind of queue and executing them? Should I send signals, or should I let all subsystems check the queue/DB for new data? What could that queue be, should it be a database? How should I deal with the messages? I thought about opening a single TCP connection and sending data over it, along with commands triggering actions/functions on the other end, but on closer inspection there has to be another, better way. Then I found Spring Python, which has been criticized for being so 90's-ish.
I know its a very wide question, but I really hope you can help me wrap my head around that design and not make something stupid here :)
Thanks in advance!
Some general ideas for you:
You could have a master-client approach. Requests would be inserted into the master and stored in a database. The master knows the state of each client (same DB). Whenever there is a request, the master redirects it to a free client. The client reports back when it has finished the task (including answers, if any), making it able to receive a new task from the master (this removes the need for polling).
Communication could be done using web services. An HTTP request/POST should cover every case; there is no need to actually go down to the TCP level (a rough sketch follows below).
Just general ideas, hope they're useful.
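A rough sketch of the master/client idea over plain HTTP, using Flask for the master and requests to push work to clients; the endpoint names, payloads and the clients registry are purely illustrative:

    # master.py - master receives demands and pushes them to a free client over HTTP
    import requests
    from flask import Flask, jsonify, request

    master = Flask(__name__)
    clients = {"http://client-1:5001": "free"}  # known clients and their current state

    @master.route("/demands", methods=["POST"])
    def submit_demand():
        demand = request.get_json()             # in practice: store it in a database
        for url, state in clients.items():
            if state == "free":
                clients[url] = "busy"
                requests.post(url + "/tasks", json=demand, timeout=5)
                return jsonify(status="dispatched"), 202
        return jsonify(status="queued"), 202

    @master.route("/results", methods=["POST"])
    def report_result():
        body = request.get_json()               # client reports back when finished
        clients[body["client"]] = "free"
        return jsonify(status="ok")

    if __name__ == "__main__":
        master.run(port=5000)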
There are a number of message-queue technologies out there which are Python-friendly and could serve quite well. The top two that I know of are ActiveMQ and RabbitMQ, which both play well with Python; plus I found this comparison which states that ActiveMQ currently (as of 18 months ago!) outperforms RabbitMQ.

Abstraction and client/server architecture questions for Python game program

Here is where I am at presently. I am designing a card game with the aim of utilizing major components for future work. The part that is hanging me up is creating a layer of abstraction between the server and the client(s). A server is started, and then one or more clients can connect (locally or remotely). I am designing a thick client but my friend is looking at doing a web-based client. I would like to design the server in a manner that allows a variety of different clients to call a common set of server commands.
So, for a start, I would like to create a 'server' which manages the game rules and player interactions, and a 'client' on the local CLI (I'm running Ubuntu Linux for convenience). I'm attempting to flesh out how the two pieces are supposed to interact, without mandating that future clients be CLI-based or on the local machine.
I've found the following two questions which are beneficial, but don't quite answer the above.
Client Server programming in python?
Evaluate my Python server structure
I don't require anything full-featured right away; I just want to establish the basic mechanisms for abstraction so that the resulting mock-up code reflects the relationship appropriately: there are different assumptions at play with a client/server relationship than with an all-in-one application.
Where do I start? What resources do you recommend?
Disclaimers:
I am familiar with code in a variety of languages and general programming/logic concepts, but have little real experience writing substantial amounts of code. This pet project is an attempt at rectifying this.
Also, I know the information is out there already, but I have the strong impression that I am missing the forest for the trees.
Read up on RESTful architectures.
Your fat client can use REST. It will use urllib2 to make RESTful requests of a server. It can exchange data in JSON notation.
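For example, a fat-client request might look roughly like this; the /moves endpoint and payload are made up for illustration (urllib2 is the Python 2 module; the Python 3 equivalent, urllib.request, is shown here):

    import json
    import urllib.request

    def send_move(server_url: str, move: dict) -> dict:
        # POST a JSON payload to the server and decode the JSON response.
        data = json.dumps(move).encode("utf-8")
        req = urllib.request.Request(
            server_url + "/moves",
            data=data,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))

    # Example: send_move("http://localhost:8000", {"player": "alice", "card": "QS"})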
A web client can use REST. It can make simple browser HTTP requests or a Javascript component can make more sophisticated REST requests using JSON.
Your server can be built as a simple WSGI application using any simple WSGI components. There are nice ones in the standard library, or you can use Werkzeug. Your server simply accepts REST requests and returns REST responses. Your server can respond with HTML (for a browser) or JSON (for a fat client or JavaScript client).
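A tiny sketch of such a WSGI server using only the standard library; the route and response payload are illustrative:

    # Minimal WSGI application serving JSON (standard library only)
    import json
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        if environ["PATH_INFO"] == "/state":
            body = json.dumps({"turn": "alice", "cards_left": 42}).encode("utf-8")
            start_response("200 OK", [("Content-Type", "application/json")])
            return [body]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()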
I would consider basing all server/client interactions on HTTP, probably with JSON payloads. This doesn't directly allow server-initiated interactions ("server push"), but the (newish but already traditional ;-) workaround for that is AJAX-y (even though the X makes little sense, as I suggest JSON payloads, not XML ones ;-): the client initiates an async request (via a separate thread or otherwise) to a special URL on the server, and the server responds to those requests to (in practice) do "pushes". From what you say, it looks like the limitations of this approach might not be a problem.
The key advantage of specifying the interactions in these terms is that they're entirely independent of the programming language, so the web-based client in JavaScript will be just as doable as your CLI one in Python, etc. Of course, the server can live on localhost as a special case, but there is no constraint for that, since the HTTP URLs can specify whatever host is running the server.
First of all, regardless of the locality or type of the client, you will be communicating through an established message-based interface. All clients will operate on a common set of requests and responses, and the server will handle or reject these based on their validity according to the game state. Whether you are dealing with local clients on the same machine or remote clients via HTTP does not matter whatsoever from an abstraction standpoint, as they will all communicate through the same set of requests/responses.
What this comes down to is your protocol. Your protocol should be a well-defined and technically sound language between client and server that allows clients to a) participate effectively, and b) participate fairly. This protocol should define what messages ("moves") a client can make, when it can make them, and how the server will react.
Your protocol should be fully fleshed out and documented before you even start on game logic - the two are intrinsically connected and you will save a lot of wasted time and effort by completely defining your protocol first.
Your protocol is the abstraction between client and server, and it will also serve as the design document and programming guide for both.
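As a hedged sketch, such a protocol definition in code might start with nothing more than an explicit list of message types and a serializable message container (the names are illustrative):

    # Illustrative sketch of defining the protocol messages up front
    import json
    from dataclasses import dataclass
    from enum import Enum

    class MessageType(Enum):
        JOIN = "join"
        PLAY_CARD = "play_card"
        DRAW = "draw"
        CHAT = "chat"

    @dataclass
    class Message:
        type: MessageType
        player_id: str
        payload: dict

        def to_json(self) -> str:
            return json.dumps({"type": self.type.value,
                               "player_id": self.player_id,
                               "payload": self.payload})

        @staticmethod
        def from_json(raw: str) -> "Message":
            data = json.loads(raw)
            return Message(MessageType(data["type"]), data["player_id"], data["payload"])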
Protocol design is all about state, state transitions, and validation. Game servers usually have a set of fairly common, generic states for each game instance, e.g. initialization, lobby, gameplay, pause, recap, close game, etc.
Each one of these states has important state data associated with it. For example, a "lobby" state on the server side might contain the known state of each player: how long since the last message or ping, what the player is doing (selecting an avatar, changing settings, going to the fridge, etc.). Organizing and managing state and substate data in code is important.
Managing these states and the associated data requirements for each is a process that should be exquisitely planned out, as they are directly related to the volume of work and project complexity - this is very important, and also great practice if you are using this project to step up to larger things.
Also, you must keep in mind that if you have a game, and you let people play, people will cheat. It's a fact of life. In order to minimize this, you must carefully design your protocol and state management to only ever allow valid state transitions. Never trust a single client packet.
For every permutation of client/server state, you must enforce a limited set of valid game messages, and you must be very careful in what you allow players to do, and when you allow them to do it.
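One hedged way to express that constraint in code is a small state table that rejects anything not explicitly allowed (the states follow the examples above; the particular transitions are illustrative):

    # Only ever allow valid state transitions, as argued above
    from enum import Enum, auto

    class GameState(Enum):
        INITIALIZATION = auto()
        LOBBY = auto()
        GAMEPLAY = auto()
        PAUSE = auto()
        RECAP = auto()
        CLOSED = auto()

    # Which transitions the server accepts; everything else is rejected.
    VALID_TRANSITIONS = {
        GameState.INITIALIZATION: {GameState.LOBBY},
        GameState.LOBBY: {GameState.GAMEPLAY, GameState.CLOSED},
        GameState.GAMEPLAY: {GameState.PAUSE, GameState.RECAP},
        GameState.PAUSE: {GameState.GAMEPLAY, GameState.CLOSED},
        GameState.RECAP: {GameState.LOBBY, GameState.CLOSED},
        GameState.CLOSED: set(),
    }

    def transition(current: GameState, requested: GameState) -> GameState:
        if requested not in VALID_TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current} -> {requested}")
        return requested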
Project complexity is generally exponential, not linear - client/server game programming is usually a good (and painful) way to learn this. Great question. Hope this helps, and good luck!
