I am looking for a message/queuing solution for my web based system running on Ubuntu.
The system was built on the following technologies:
Javascript (Extjs framework) - Frontend
PHP
Python (Daemon service which interacts with the encryption device)
Python pyserial - (Serial port interactions)
MySQL
Linux - Custom bash scripts (to update DB/mail reports)
The system serves the following purpose:
Capture client information on a distributed platform
Encrypt/decrypt sensitive transactions using a Hardware device
System breakdown:
The user gains access to the system using a web browser
The user captures client information and presses the "Submit" button
The data is sent to the encryption device and the system enters a wait state
The data is then encrypted on the device and sent back to the browser
The encrypted data is saved to the DB
System exits wait state and displays DONE message
Please note: I have already taken care of waiting/progress messages, so let's omit that.
What I have done so far:
I created a python daemon which monitors a DB view for any new requests
The daemon service executes new requests on the device using pyserial and updates the requests table with a "response", i.e. the encrypted content
I created a polling service in PHP which frequently checks whether a "response" exists in the requests table for the specific request
Created the Extjs frontend with appropriate wait/done status messages
The problem with the current setup:
Concurrency - We expect > 20 users at any time submitting encryption/decryption requests
Using a database as a message/queuing solution does not scale well, because of table locking and because there is only one listening process monitoring for requests
Daemon service - Relying on a single daemon service is a bit risky, and the DB overhead of polling the view for new requests every second seems high
Development - It would simplify my development tasks to just send requests to an encrypt/decrypt service instead of going through the whole process of inserting a request into the DB, polling for the response, and processing the request in the daemon service.
My Question:
What would be the ideal message/queuing solution in this situation? Please take into account that my system runs exclusively on Ubuntu.
I have done a few Google searches and came across something called a "STOMP" server, but it proved somewhat difficult to set up and lacked documentation. Also, I would prefer advice from individuals who have experience setting up something like this rather than a "how to" guide :)
Thank You for your time
I believe the popular RabbitMQ implementation of AMQP offers a PHP extension, and you can definitely access AMQP in Python, e.g. via Qpid. RabbitMQ is also easy to install on Ubuntu (or Debian).
Whether via RabbitMQ or otherwise, adopting an open messaging and queueing protocol such as AMQP has obvious and definite advantages in comparison to more "closed" solutions (even if technically open source, such solutions just won't offer as many implementations, and therefore flexibility, as a widely adopted open, standard protocol).
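To give a concrete feel for the flow, here is a rough sketch of what the Python daemon side could look like against RabbitMQ. It uses pika (another common Python AMQP client, chosen here purely for illustration); the queue name and the encrypt_on_device() helper are placeholders, not part of any existing API:

import pika

def encrypt_on_device(payload):
    # placeholder: your existing pyserial interaction with the encryption device
    return payload

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='encrypt_requests', durable=True)

def on_request(ch, method, properties, body):
    encrypted = encrypt_on_device(body)
    # send the result back on the reply queue named by the PHP publisher
    ch.basic_publish(exchange='',
                     routing_key=properties.reply_to,
                     properties=pika.BasicProperties(correlation_id=properties.correlation_id),
                     body=encrypted)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='encrypt_requests', on_message_callback=on_request)
channel.start_consuming()

The PHP side would publish to the request queue with a reply_to queue and block on that queue for the answer, which removes both the one-second polling and the per-request table locking.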
I would do:
The web component connects to the encryption daemon/service, sends the data and waits for the answer
The encryption daemon/service would:
On startup, start a thread (SerialThread) for each of the available serial devices
All 'serial threads' would then do a SerialQueue.get (blocking waiting for messages)
A multi-threaded TCP server; check ThreadingMixIn from http://docs.python.org/library/socketserver.html
The TCP Server threads would receive the plain data and put it on the SerialQueue
A random SerialThread (Python's Queue class handles the locking required between threads for you) would receive the request, encrypt it, and return the encrypted data to the TCP Server thread
The TCP Server thread would write the data back to the web component
I am using this logic in a project; you can check the source at http://bazaar.launchpad.net/~mirror-selector-devs/mirror-selector/devel/files/head:/mirrorselector/. In my case the input is a URL, the processing is scanning for an available mirror, and the output is a mirror URL.
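A minimal sketch of that design in Python 3 (the port number, SERIAL_PORTS list and encrypt_on_device() helper are placeholders, not taken from the linked project):

import queue
import socketserver
import threading

SERIAL_PORTS = ['/dev/ttyS0']            # placeholder list of serial devices
SerialQueue = queue.Queue()

def encrypt_on_device(port, plaintext):
    # placeholder: the pyserial interaction with the encryption device
    return plaintext

def serial_worker(port):
    # one thread per serial device, blocking on the shared queue
    while True:
        plaintext, reply_q = SerialQueue.get()
        reply_q.put(encrypt_on_device(port, plaintext))

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        plaintext = self.rfile.readline().strip()
        reply_q = queue.Queue()           # per-request reply channel
        SerialQueue.put((plaintext, reply_q))
        self.wfile.write(reply_q.get() + b'\n')   # wait for a serial thread to answer

class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

for port in SERIAL_PORTS:
    threading.Thread(target=serial_worker, args=(port,), daemon=True).start()

ThreadedTCPServer(('127.0.0.1', 9000), Handler).serve_forever()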
Related
I'm looking to start a web project using Flask and its SocketIO plugin, which depends on gevent (something something greenlets), but I don't understand how gevent relates to the webserver. Does using gevent restrict my server choice at all? How does it relate to the different levels of web servers that we have in python (e.g. Nginx/Apache, Gunicorn)?
Thanks for the insight.
First, let's clarify what we are talking about:
gevent is a library that makes it easy to program with event loops. It lets you return a response immediately instead of "blocking" the requester.
socket.io is a JavaScript library for creating clients that maintain permanent connections to servers, which send them events. The library can then react to these events.
greenlet: think of this as a thread; a way to launch multiple workers that each do some task.
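As a tiny illustration of what gevent buys you (purely a toy example, not tied to any particular web server):

import gevent

def worker(name):
    for i in range(3):
        print(name, i)
        gevent.sleep(0.1)   # a call that would normally block simply yields to the other greenlet

# both greenlets make progress "at the same time" inside a single thread
gevent.joinall([gevent.spawn(worker, 'a'), gevent.spawn(worker, 'b')])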
A highly simplified overview of the entire process follows:
Imagine you are creating a chat client.
You need a way to update the users' screens when anyone types a message. For this to happen, you need some way to tell all the users when a new message is there to be displayed. That's what socket.io does. You can think of it like a radio that is tuned to a particular frequency. Whenever someone transmits on this frequency, the code does something; in the case of the chat program, it adds the message to the chat box window.
Of course, if you have a radio tuned to a frequency (your client), then you need a radio station/dj to transmit on this frequency. Here is where your flask code comes in. It will create "rooms" and then transmit messages. The clients listen for these messages.
You can also write the server-side ("radio station") code in socket.io using node, but that is out of scope here.
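To make that "radio station" side concrete, here is a bare-bones sketch with Flask-SocketIO (the event name 'chat message' is invented for this example):

from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on('chat message')
def handle_chat(msg):
    # rebroadcast whatever one client sent to every connected client
    emit('chat message', msg, broadcast=True)

if __name__ == '__main__':
    socketio.run(app)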
The problem here is that traditionally - a web server works like this:
1. A user types an address into a browser, and hits enter (or go).
2. The browser reads the web address and then, using the DNS system, finds the IP address of the server.
3. It creates a connection to the server, and then sends a request.
4. The webserver accepts the request.
5. It does some work, or launches some process (depending on the type of request).
6. It prepares (or receives) a response from the process.
7. It sends the response to the client.
8. It closes the connection.
Between steps 3 and 8, the client (the browser) is waiting for a response - it is blocked from doing anything else. So if there is a problem somewhere, say a server-side script is taking too long to process the request, the browser stays stuck on the white page with the loading icon spinning. It can't do anything until the entire process completes. This is just how the web was designed to work.
This kind of 'blocking' architecture works well for 1-to-1 communication. However, for multiple people to keep updated, this blocking doesn't work.
The event libraries (gevent) help with this because they accept the request without blocking the client; they immediately send back a response, even though the actual work may only complete later.
Your application, however, still needs to notify the client when the work is done. But since the connection has been closed, you don't have a way to contact the client again.
In order to notify the client and to make sure the client doesn't need to "refresh", a permanent connection should be open - that's what socket.io does. It opens a permanent connection, and is always listening for messages.
1. So a work request comes in from one end and is accepted.
2. The work is executed and a response is generated by something else (it could be the same program or another program).
3. Then a notification is sent: "hey, I'm done with your request - here is the response".
4. The person from step 1 listens for this message and then does something.
Underneath it all is WebSocket, a newer full-duplex protocol that enables all this radio/DJ functionality.
Things common between WebSockets and HTTP:
Work on the same port (80)
WebSocket requests start off as HTTP requests for the handshake (an upgrade header), but then shift over to the WebSocket protocol - at which point the connection is handed off to a websocket-compatible server.
All your traditional web server has to do is listen for this handshake request, acknowledge it, and then pass the request on to a websocket-compatible server - just like any other normal proxy request.
For Apache, you can use mod_proxy_wstunnel
For nginx, versions 1.3+ have WebSocket support built in
The scenario is
I have multiple local computers running a Python application. These are on separate networks, waiting for data to be sent to them from a web server. These computers have no static IP and are generally behind firewalls and proxies.
On the other hand, I have a web server which gets updates from the user through a form and sends the update to the correct local computer.
Question
What options do I have to enable this? Currently I am sending CSV files over FTP to achieve this, but it is not real time.
The application is built in Python, using Django for the web part.
Appreciate your help
Use a REST API. Then you can post information to your Django app over HTTP, using an authentication key if necessary.
http://www.django-rest-framework.org/ should help you get started quickly
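Since the local machines are behind NAT/proxies, they would make the HTTP calls outbound themselves; a rough sketch of a polling client (the URL, token and payload shape are all invented for illustration, and handle_update() stands in for your existing processing):

import time
import requests

API = 'https://example.com/api/pending-updates/'       # hypothetical endpoint
HEADERS = {'Authorization': 'Token YOUR-API-KEY'}       # e.g. a DRF token

while True:
    resp = requests.get(API, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    for update in resp.json():
        handle_update(update)        # your existing processing code
    time.sleep(30)                   # still polling, but far fresher than FTP'd CSV files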
Sounds like you need a message queue.
You would run a separate broker server which is sent tasks by your web app. This could be on the same machine. On your two local machines you would run queue workers which connect to the broker to receive tasks (so no inbound connection required), then notify the broker in real time when they are complete.
Examples are RabbitMQ and Oracle Tuxedo. What you choose will depend on your platform & software.
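If you pick RabbitMQ, Celery is one common way to wire this up from Django (Celery is my suggestion here, not something named in the answer above); a minimal sketch with a placeholder broker URL and task body:

# tasks.py, run on each local machine with:  celery -A tasks worker
from celery import Celery

app = Celery('updates', broker='amqp://user:password@broker.example.com//')

@app.task
def apply_update(payload):
    # process the data that came in from the web form
    ...

The Django view then just calls apply_update.delay(form_data). The workers only make outbound connections to the broker, which is what makes this workable from behind firewalls and proxies; to target one specific machine you would give each worker its own queue.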
I have built a messaging/chat application for my local network (all Windows) using pyzmq, with PyQt for the UI; it is based on the Majordomo pattern. It's set up this way:
each machine on the network has a client/worker pair
they connect to a 'server' broker via pyzmq and register sessions
sessions are broadcast by the 'server' broker to clients
when a 'sender' client sends a message to a specific session, the broker routes the message to the corresponding worker destination, a reply is generated by the worker, and it gets routed by the broker back to the 'sender' client (ending the loop and confirming delivery)
Everything is working well; text messages are formed in the 'client' PyQt UI and received by the 'worker' PyQt UI.
Now I'm looking to build upon this skeleton to add video chat to my application... I have been looking into webRTC and would like to find a way to implement it.
This is how WebRTC works, from what I gather (I could be severely wrong here, please correct me):
Machine A's Chrome browser opens a local video/audio stream from the webcam/mic via the JavaScript function webkitGetUserMedia, then creates a (Machine A) URL for the stream via the JavaScript function webkitURL
Sends (Machine A) URL to Machine B's Chrome browser via signaling server
Machine B's Chrome browser accepts and loads the (Machine A) URL, sets up its own local video/audio stream from the webcam/mic via the previously mentioned JavaScript functions, and replies with a (Machine B) URL back to Machine A via the signaling server
Machine A's Chrome browser is displaying (Machine B) video/audio | Machine B's Chrome browser is displaying (Machine A) video/audio
Is that the process, or is this a totally wrong assumption of how peers connect to each other?
If correct, I would like to adapt my current pyzmq application to act as a signaling server for creating connections between machines. Since the IP addresses of my machines are known to me and I can configure my firewall to open the needed ports, I'm trying to eliminate any extra STUN/TURN servers for this setup; I am not planning to go outside of my LAN or access remote machines. I would also like to handle everything (as much as possible) with Python and its included batteries (avoiding Node.js).
So the main question is: how should I go about integrating WebRTC into my setup? Does WebRTC need specific prerequisite libraries or APIs to be built and running on the signaling server or peer machines? Any code examples/advice/links would be appreciated.
Status Quo:
I have two Python apps (frontend-server and data-collector); a database sits 'between' them.
Currently using redis as db and its publish/subscribe protocol to notify the frontend when new data is available.
But I may want to use a different database (and don't want to keep Redis on the system just for the pub/sub).
Are there any simple alternatives to notify my frontend if the data-collector has transacted new data to the database (without using an external message queue like beanstalkd or redis)?
ZeroMQ is a good option. It has good Python bindings, and it makes communicating between processes on the same machine and processes on different machines look almost identical.
Start by reading the guide: http://zguide.zeromq.org/page:all
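A minimal sketch of the notification channel with pyzmq (the port and topic are arbitrary choices, and the two halves below live in your two separate processes):

import zmq

ctx = zmq.Context()

# data-collector process: publish an event after each write to the database
pub = ctx.socket(zmq.PUB)
pub.bind('tcp://127.0.0.1:5556')
pub.send_string('new_data')              # call this whenever a row has been committed

# frontend-server process: block until the collector says something changed
sub = ctx.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:5556')
sub.setsockopt_string(zmq.SUBSCRIBE, 'new_data')
msg = sub.recv_string()                  # wake up, then re-query the database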
As I mentioned in my comment, if you want something that goes across a network, then other than setting up a web service (a Flask app?) or writing your own INET socket server, there is nothing built into the operating system for communicating between machines. Beanstalk has a very simple Python API and I've used it for this kind of thing very successfully.
import beanstalkc

try:
    beanstalk = beanstalkc.Connection(host="my.host.com")
    beanstalk.watch("update_queue")      # the tube the producer puts jobs on
except beanstalkc.SocketError:
    print("Error connecting to beanstalk")
    raise

while True:
    job = beanstalk.reserve()            # blocks until a job is available
    do_something_with_job(job)
    job.delete()                         # remove the job once it has been handled
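The producing side (whatever creates the work) is just as short; a sketch, assuming the same beanstalkc library and tube name as above:

import beanstalkc

beanstalk = beanstalkc.Connection(host="my.host.com")
beanstalk.use("update_queue")              # same tube the consumer above is watching
beanstalk.put("some serialized payload")   # e.g. a JSON string describing the job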
If you are only going to be working on the same machine, then read up on Linux IPC. A socket connection between processes is very fast and has practically zero overhead. Sockets can also be part of an asynchronous program when you take advantage of epoll callbacks.
I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is that the server first sends data to all the clients specifying a sending_interval, and then all the clients keep sending their data with a time gap of that interval (as specified by the server). How can I do this using Python socket programming? (I.e. I want multiple-client to single-server connectivity, with the clients sending data at the time gap specified by the server.) I will be grateful if anyone can help me. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified by the default/config channel/socket.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives; there's no connection errors because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher and if there's nothing there to listen to, it can just loop around and wait until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python (it requires a C compiler), but it is frighteningly fast, and the pub/sub example is a cut/paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
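A rough sketch of the client side under that layout (the addresses, topic names and the collect_data() call are all invented for illustration; the server would bind the matching sockets):

import time
import zmq

ctx = zmq.Context()

# subscribe to the server's control channel to learn the sending interval
control = ctx.socket(zmq.SUB)
control.connect('tcp://server-host:5557')
control.setsockopt_string(zmq.SUBSCRIBE, 'interval')

# publish this client's readings back towards the server
data = ctx.socket(zmq.PUB)
data.connect('tcp://server-host:5558')

topic, value = control.recv_string().split()   # e.g. the server sent "interval 5"
while True:
    data.send_string('reading %s' % collect_data())   # collect_data() is a placeholder
    time.sleep(float(value))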
Multi-client and single-server socket programming can be achieved with multithreading. I have implemented both methods:
Single Client and Single Server
Multi-client and Single Server
Both are in my GitHub repo: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading is the ability to execute multiple threads concurrently within a single process.
To understand it better, you can visit https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, which I wrote.
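For reference, a bare-bones multi-client echo server in the spirit of that tutorial (Python 3; the host and port are arbitrary):

import socket
import threading

def handle_client(conn, addr):
    # each client gets its own thread, so a slow client never blocks the others
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 5000))
server.listen()
while True:
    conn, addr = server.accept()
    threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()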