Background:
I have a GTK client that uses Twisted and Perspective Broker to perform remote object execution and server/client communication. This works great for me and was a breeze to start working with.
I have AMQP (message queue/MQ) services with which I also need to communicate from the client.
I have a security model in place around the client and server through Twisted, and I don't want the clients to talk to the message queue server directly, nor do I want an additional dependency on AMQP libraries for the clients.
Ideally, I would like the client to send a request to the server through Perspective Broker, the Perspective Broker server to send an AMQP request to another server on behalf of the client, and the client to receive an acknowledgment when the PB server receives a response from the message queue server.
Question:
On the server side, how do I defer the response inside one of the servers pb methods?
More importantly what's the most efficient way to connect an outgoing request back to an incoming request and still preserve the Twisted event driven paradigms?
On the server side, how do I defer the response inside one of the servers pb methods?
Easy. Return the Deferred from the remote_ method. Done.
Here is the architecture topology:
An IoT device that counts people and saves the data to its cloud platform. Data can be accessed via an API; more specifically, it requires me to provide a webserver endpoint where it can push the data every minute or so. This is a ready-made product, so I cannot change the data transfer method.
A webserver on my side that receives and stores the data.
As I am new to WebSockets, I interpret the above configuration as a WebSocket server installed on my webserver, waiting for data to be received from the IoT server (the client).
So I deployed a Linux server on DigitalOcean and started a WebSocket server to wait for incoming connections. The code I used for the server is:
import asyncio
import websockets

async def echo(websocket, path):
    async for message in websocket:
        print(message)

start_server = websockets.serve(echo, "MYSERVERIP", 80)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
All I need at this stage is to print all JSON packets that are pushed from the IoT server.
When I try to set the endpoint address in the IoT server, it refuses to accept ws://Myserver:80 and only accepts http://Myserver:80. Obviously, I don't have any HTTP server running on my server, and therefore I am guessing the connection is refused by my server.
Also, the IoT API requires X-Auth-Token authentication. I am using the websockets Python library, but I didn't set up authentication on my server; I left it null on both the IoT server API and my server.
If I were to add token authentication, what parameters or arguments would the websocket server require? I tried to search the websockets docs, but with no luck.
This is not for a production environment! I am only trying to learn.
Any thoughts are welcome.
So these are the requirements:
An IoT device that counts people and saves the data to its cloud platform. Data can be accessed via an API and more specific it requires to provide a webserver endpoint where it can push the data every minute or so.
A webserver on my side that receives and stores the data.
They need the data to be refreshed every minute or so. In my humble opinion, websockets are necessary only for real-time communication.
That said, my proposed solution is to use a message broker instead. I think it's easier to handle than websockets directly, and you do not have to worry about maintaining a live socket connection all the time (which is not energy-efficient in the IoT world).
In other words, use a pub/sub architecture instead. Your IoT devices publish data to the message broker (a common one is RabbitMQ), and then you build a server that subscribes to the broker, consuming its data and storing it.
Now every device connects to the cloud only when it has data available, which saves energy. The protocol may be MQTT or HTTP; MQTT is often used in the IoT world.
Related: Pub-sub messaging benefits
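As a minimal in-process sketch of that pub/sub flow, with a queue.Queue standing in for the real broker (so broker, device, and subscriber here are illustrative names, not a RabbitMQ or MQTT API):

```python
import queue
import threading

# Stand-in for the message broker (RabbitMQ, an MQTT topic, etc.).
broker = queue.Queue()

def device(device_id, count):
    # The IoT device publishes one reading and then "disconnects";
    # no long-lived socket has to be maintained.
    broker.put({"device": device_id, "people": count})

stored = []

def subscriber():
    # Your server subscribes to the broker, consuming readings
    # and storing them as they arrive.
    while True:
        reading = broker.get()
        if reading is None:     # sentinel to end the sketch
            break
        stored.append(reading)

t = threading.Thread(target=subscriber)
t.start()
device("entrance-1", 7)
device("entrance-2", 3)
broker.put(None)
t.join()
print(stored)
```

With a real broker the publisher and subscriber would be separate processes on separate machines; the decoupling is the same, which is what makes this easier to operate than a hand-rolled websocket endpoint.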
I use a TCP server in Python that is built on this class:
class ThreadedTCPServer(SocketServer.ThreadingTCPServer):
pass
The normal use of it works perfectly (initiating the server, handling requests, and so on).
Now I need to send a message to the clients from outside the handle function in the TcpRequestHandler(SocketServer.BaseRequestHandler) class.
I tried the following trick of using the server's internal socket (it works for UDP):
tcp_server.client_socket.send(message)
But I get this error message:
socket.error: [Errno 10057] A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied
So I assume it is not possible for TCP.
Is there any other way to do it?
I assume some servers need to send messages to their clients sometimes (messages that are not just responses to requests), but I couldn't find a good way to do it.
Thanks!
You have two general options with TCP:
Send a message to the client out of band (OOB). In this, the server connects separately to the client and the roles are reversed. The client has to listen on a port for OOB messages and acts as a server in this regard. From your problem description you don’t want to do this, which is fine.
Implement a protocol where the server can send messages to the client in response to incoming messages. You will need a way to multiplex the extra messages along with any expected return value to the initiating message. You could implement this with a shared queue on your server. You put messages into this queue outside of your handler and then when the handler is responding to messages you consume from the queue and insert them into the response.
If that sounds like something you are interested in I could write some example code later.
There are pros & cons between both approaches:
In (1) you have more socket connections to manage and you expose the client host to connections which you might not desire. The protocols are simpler because they are not multiplexed.
In (2) you only have a single TCP stream but you have to multiplex your OOB message. You also have increased latency if the client is not regularly contacting the server.
Hope that helps.
I would like to make a multi-threading UDP server in Python.
The purpose is to be able to connect several clients to the server (not socket connections, but username and password), interact with each of them, and perform some actions on the server, all at the same time.
I am a little confused by all the different types of threading and I don't know what to use.
To be clearer, this is exactly what I want to do at the same time:
Wait for clients to send data for the first time and register their IP in a database
Interact with "connected" clients by waiting for them to send datagrams and responding to them
Be able to act on the server. For example, change a client's password in my database
I would take a look at a framework that is good at handling asynchronous I/O. The idea is not to have a thread per socket that blocks until data arrives, but instead to let one thread handle many sockets at once. This scales well if you want your server to handle many clients.
For example:
Gevent - "a coroutine-based Python networking library", example
Twisted - "an event-driven networking engine", example
Eventlet - "a concurrent networking library", example (TCP, but it uses a patched socket so you can also refer to the Python wiki page about UDP Communication)
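The same idea is also available in the standard library's asyncio: one event loop services every client, with no thread per socket. A small sketch (the protocol classes and message framing are illustrative; a real server would dispatch on the datagram contents to register clients, answer them, and so on):

```python
import asyncio

class EchoServer(asyncio.DatagramProtocol):
    # One event loop handles every client's datagrams; no per-client thread.
    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        # addr identifies the client, so this is where you would register
        # the client in a database or route the message.
        self.transport.sendto(b"ack:" + data, addr)

class Client(asyncio.DatagramProtocol):
    def __init__(self, loop):
        self.reply = loop.create_future()

    def datagram_received(self, data, addr):
        self.reply.set_result(data)

async def main():
    loop = asyncio.get_running_loop()
    server_tr, _ = await loop.create_datagram_endpoint(
        EchoServer, local_addr=("127.0.0.1", 0))
    host, port = server_tr.get_extra_info("sockname")
    client_tr, client = await loop.create_datagram_endpoint(
        lambda: Client(loop), remote_addr=(host, port))
    client_tr.sendto(b"register alice")
    reply = await client.reply
    server_tr.close()
    client_tr.close()
    return reply

reply = asyncio.run(main())
print(reply)  # b'ack:register alice'
```

Administrative actions (like changing a password) can run on the same loop as just another coroutine, which avoids most of the locking questions that come with threads.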
I'm trying to design a system that will process large amounts of data and send updates to the client about its progress. I'd like to use nginx (which, thankfully, just started supporting websockets) and uWSGI for the web server, and I'm passing messages through the system with zeromq. Ideally the solution would be written in Python, but I'm also open to a Node.js or even a Go solution.
Here is the flow that I'd like to achieve:
Client visits a website and requests that a large amount of data be processed.
The server farms out the processing to another process/server [the worker] via zeromq, and replies to the client request explaining that processing has begun, including information about how to set up a websocket with the server.
The client sets up the websocket connection and waits for updates.
When the processing is done, the worker sends a "processing done!" message to the websocket process via zeromq, and the websocket process pushes the message down to the client.
Is what I describe possible? I guess I was thinking that I could run uwsgi in emperor mode so that it can handle one process (port) for the webserver and another for the websocket process. I'm just not sure if I can find a way to both receive zeromq messages and manage websocket connections from the same process. Maybe I have to initiate the final websocket push from the worker?
Any help/correct-direction-pointing/potential-solutions would be much appreciated. Any sample or snippet of an nginx config file with websockets properly routed would be appreciated as well.
Thanks!
Sure, that should be possible. You might want to look at zerogw.
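To show that one process can both consume worker messages and feed websocket connections, here is a sketch where asyncio.Queue objects stand in for the zeromq socket and the websocket connections alike (work_results, websocket_clients, and the job id are illustrative stand-ins, not real zmq or websocket APIs):

```python
import asyncio

async def main():
    # Stand-in for a zeromq PULL socket receiving worker messages.
    work_results = asyncio.Queue()
    # One queue per open websocket; a real handler would drain its
    # queue and push each item down the wire to the browser.
    websocket_clients = {}

    async def zmq_listener():
        # A single coroutine routes each worker message to the
        # websocket waiting on that job -- all inside one event loop.
        while True:
            job_id, message = await work_results.get()
            if job_id in websocket_clients:
                await websocket_clients[job_id].put(message)

    listener = asyncio.create_task(zmq_listener())
    # A client opened a websocket for job 42:
    websocket_clients[42] = asyncio.Queue()
    # The worker finishes and "sends" a zeromq message:
    await work_results.put((42, "processing done!"))
    update = await websocket_clients[42].get()
    listener.cancel()
    return update

update = asyncio.run(main())
print(update)  # processing done!
```

With pyzmq's asyncio support the zmq_listener coroutine would await a real zeromq socket instead of a queue, but the routing structure stays the same, so the worker never needs to touch the websocket itself.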
I am developing a group chat application to learn how to use sockets, threads (maybe), and the asyncore module (maybe).
My thought was to have a client-server architecture, so that when a client connects, the server sends the client a list of the other connected clients (user name, IP address); a person can then connect to one or more people at a time, and the server would set up a P2P connection between the clients. I have the socket part working, but the server can only handle one client connection at a time.
What would be the best, most common, practical way to go about handling multiple connections?
Do I create a new process/thread whenever a new connection comes into the server and then connect the different client connections together, or do I use the asyncore module, which from what I understand makes the server send the same data to multiple sockets (connections) while I just regulate where the data goes?
Any help/thoughts/advice would be appreciated.
For a group chat application, the general approach will be:
Server side (accept process):
Create the socket, bind it to a well known port (and on appropriate interface) and listen
While (app_running)
Client_socket = accept (using serverSocket)
Spawn a new thread and pass this socket to the thread. That thread handles the client that just connected.
Continue, so that server can continue to accept more connections.
Server-side client mgmt Thread:
while app_running:
read the incoming message, and store to a queue or something.
continue
Server side (group chat processing):
For all connected clients:
check their queues. If any message present, send that to ALL the connected clients (including the client that sent this message -- serves as ACK sort of)
Client side:
create a socket
connect to server via IP-address, and port
do send/receive.
There can be lots of improvements on the above. For example, the server could poll the sockets or use the "select" operation on a group of sockets. That would be more efficient, since having a separate thread for each connected client is overkill when there are many (think ~1 MB of stack per thread).
PS: I haven't really used the asyncore module, but I am guessing you would notice some performance improvement when you have lots of connected clients and very little processing.