Here is the architecture topology:
An IoT device that counts people and saves the data to its cloud platform. Data can be accessed via an API; more specifically, the platform requires a webserver endpoint to which it can push the data every minute or so. This is a ready-made product, so I cannot change the data transfer method.
A webserver on my side that receives and stores the data.
As I am new to WebSockets, I interpreted the above configuration as needing a WebSocket server installed on my webserver, waiting for data to arrive from the IoT server (the client).
So I deployed a Linux server on DigitalOcean and started a WebSocket server to wait for incoming connections. The code I used for the server is:
import asyncio
import websockets

async def echo(websocket, path):
    async for message in websocket:
        print(message)

start_server = websockets.serve(echo, "MYSERVERIP", 80)

asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
All I need at this stage is to print all JSON packets that are pushed from the IoT server.
When I try to set the endpoint address on the IoT server, it refuses to accept ws://Myserver:80 and only accepts http://Myserver:80. Obviously I don't have any HTTP server running on my server, so I am guessing the connection is being refused by my server.
Also, the IoT API supports X-Auth-Token authentication. I am using the Python websockets library, but I didn't set up any authentication on my server; I left the token empty on both the IoT server API and my server.
If I were to add token authentication, what parameters or arguments would the WebSocket server require? I searched the websockets docs but had no luck.
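For reference, here is what I now suspect I actually need: a plain HTTP endpoint that accepts the pushed JSON and checks the X-Auth-Token header. This is only a sketch based on my assumptions; the route name, the EXPECTED_TOKEN value, and the choice of Flask are mine, not anything the IoT vendor dictates:

from flask import Flask, abort, request

app = Flask(__name__)
EXPECTED_TOKEN = "my-secret-token"  # assumption: the same token configured on the IoT platform

@app.route("/people-counter", methods=["POST"])
def receive_counts():
    # Reject pushes that do not carry the expected X-Auth-Token header
    if request.headers.get("X-Auth-Token") != EXPECTED_TOKEN:
        abort(401)
    payload = request.get_json(force=True)  # the platform pushes JSON
    print(payload)                          # just print it for now
    return "", 204

if __name__ == "__main__":
    # Port 80 needs elevated privileges; any port the platform accepts would do
    app.run(host="0.0.0.0", port=80)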
This is not for a production environment! I am only trying to learn.
Any thoughts are welcome.
So these are the requirements:
An IoT device that counts people and saves the data to its cloud platform. Data can be accessed via an API; more specifically, the platform requires a webserver endpoint to which it can push the data every minute or so.
A webserver on my side that receives and stores the data.
They need the data to be refreshed every minute or so. In my humble opinion, WebSockets are necessary only for real-time communication.
That said, my proposed solution is to use a message broker instead. I think it's easier to handle than WebSockets directly, and you don't have to worry about maintaining a live socket connection all the time (which is not efficient in terms of energy in the IoT world).
In other words, use a pub/sub architecture instead. Your IoT devices publish data to the message broker (a common one is RabbitMQ), and then you build a server that subscribes to the broker, consumes its data, and stores it.
Now, every device connects to the cloud only when it has data available, which saves energy. The protocol may be MQTT or HTTP; MQTT is often used in the IoT world.
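For example, a minimal subscriber that consumes the broker's messages and could store them might look like the sketch below. It uses the paho-mqtt 1.x callback API; the broker host and the topic sensors/people-count are placeholders:

import json

import paho.mqtt.client as mqtt


def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is established
    client.subscribe("sensors/people-count")


def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    print(reading)  # this is where you would store the reading in your database


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)  # placeholder broker host, default MQTT port
client.loop_forever()                        # blocking network loop; handles reconnects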
Related: Pub-sub messaging benefits
I have a server which is supposed to stream an endless set of data to a web client when the client subscribes to it using grpc-web.
My problem is that the server continues sending data even after the user navigates to some other page and leaves the streaming area.
I'm looking for a way for the server to tell whether the user is still listening on the stream or not, so that I can cancel the streaming on the server side.
Note that I'm using grpc-web streaming and proto3.
The server is in Python and the client is Angular (TypeScript).
The server stubs were created with the betterproto plugin.
Any help or idea for handling this problem will be appreciated.
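One pattern worth sketching here, using the plain grpcio sync API rather than betterproto (so data_pb2 / data_pb2_grpc and read_sensor() are purely illustrative names): the streaming handler can poll context.is_active() and stop yielding once the client's cancellation has propagated through the grpc-web proxy. Whether this works depends on the Angular client actually cancelling the call when the user leaves the page.

import time

# data_pb2 / data_pb2_grpc stand in for your generated modules; read_sensor()
# is an illustrative helper -- neither exists in this snippet as written.


class DataStreamer(data_pb2_grpc.DataStreamerServicer):
    def Subscribe(self, request, context):
        # Server-side streaming handler: keep yielding while the RPC is alive.
        while context.is_active():
            yield data_pb2.Reading(value=read_sensor())
            time.sleep(1)
        # Once the client cancels the call (e.g. the Angular component
        # unsubscribes in ngOnDestroy) and the cancellation reaches the
        # server, is_active() returns False and the generator ends,
        # which stops the stream.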
How can the CLIENT continuously receive data from the SERVER? I think my sequence diagram is an overly complex solution. I just want the client to connect to the server once, and then have the server continuously send data to the client. Here I use a RESTful API + Mosquitto.
MQTT can run over a WebSocket connection, so it is possible to subscribe directly to the MQTT broker from within a webpage. This would remove the need for any REST calls.
The Paho JavaScript client supports MQTT over WebSockets.
The broker will need to be configured to support MQTT over WebSockets on a separate port from the normal native MQTT listener. Details of how to set up Mosquitto to do this can be found in the man page.
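As a rough sketch (the port numbers are just common choices, and exact directives can vary between Mosquitto versions), the mosquitto.conf additions look something like this:

# Normal native MQTT listener
listener 1883

# Additional listener speaking MQTT over WebSockets
listener 9001
protocol websockets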
I'm trying to find out if it is possible to have two paho.mqtt clients (https://eclipse.org/paho/clients/python/docs/) subscribing to the same server. Both clients and server are running on the same host. My aim is to have two clients subscribing with different credentials to the same server (which in my case is rabbitmq with mqtt plugin) so I can sort my payloads by vhosts (not by topic since I don't have control over topics).
My observation at the moment is that the clients just keep reconnecting, which suggests I'm either doing something wrong or that only one client can be connected to the MQTT server at a time...
So here is the question: were you able to run more than one client subscribed to the same server, where all clients and the server were running locally?
Edit:
It seems RabbitMQ with the MQTT plugin allows you to achieve this. You can configure two users with access to separate vhosts, and just by doing this the payloads get segregated. My scenario was to configure two clients so I could distinguish who had sent which payload, and locally I could spawn mirror clients to consume the payloads of the related users.
Many thanks to @hardillb, who helped with this question and with the related question.
Each client must have a unique client id; the broker will kick off the oldest client when a new one connects with the same client id. Other than that, you can run as many clients as you want, connecting from anywhere that can reach the broker.
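For illustration, here is a sketch of two clients with distinct client ids and distinct credentials running in one process against a local broker (paho-mqtt 1.x API; the usernames, passwords, and the catch-all "#" subscription are placeholders):

import paho.mqtt.client as mqtt


def make_client(client_id, username, password):
    # Each client needs a UNIQUE client id, otherwise the broker kicks off
    # the older session when the new one connects.
    client = mqtt.Client(client_id=client_id)
    client.username_pw_set(username, password)  # per-user (and hence per-vhost) credentials
    client.on_message = lambda c, u, msg: print(client_id, msg.topic, msg.payload)
    client.connect("localhost", 1883)
    client.subscribe("#")
    return client


c1 = make_client("consumer-alice", "alice", "secret1")
c2 = make_client("consumer-bob", "bob", "secret2")

c1.loop_start()  # background network threads, so both clients run side by side
c2.loop_start()

input("Press Enter to stop...\n")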
I'm using GAE + Python to create an application that needs to send real-time updates of sensitive data to clients and I wanted to know if the App Engine Channel API is secure or not. Will using HTTPS be enough or do channels require their own security protocol?
Also, what is the underlying implementation of the App Engine Channel API? Websockets, SSE? It seems like it really only provides one way communication from server to client through the channel, and then has the client use a standard HTTP request to communicate with the server.
Connections to the channel API are made over HTTPS, regardless of how your page was loaded, so it's not possible to eavesdrop on the contents of a channel API connection. As long as you keep the channel key secret, then, your channel is a secure communications channel between your app and the client.
Channels are implemented using long polling (comet).
Because channels are long-lived connections between the server and a client, they often cannot afford resource-consuming security mechanisms, for performance reasons. As the official documentation states, the server only receives update messages from clients via HTTP requests. As far as I know, even Dropbox delivers its long-lived notifications over HTTP, using a very short notification only to tell the client whether there is something new.
Fortunately, there are two ways to ensure your security.
First, only notify your client via the channel when some state changes. After that, let the client decide whether to make a further request, which can be a secure communication. This is the most common way channels are used.
Second, although this is not the way I personally recommend, you can encrypt the data yourself and send the encrypted data over the insecure channel.
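A sketch of the first approach with the legacy Python Channel API (notify_client() and the client_id handling are assumptions on my part; only a hint goes over the channel, never the sensitive data itself):

import json

from google.appengine.api import channel


def notify_client(client_id):
    # Send only a lightweight "something changed" hint over the channel.
    channel.send_message(client_id, json.dumps({"event": "data_updated"}))
    # The client then fetches the actual (sensitive) data with a normal
    # authenticated HTTPS request to the application.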
Background:
I have a gtk client that uses twisted and perspective broker to perform remote object execution and server/client communication. This works great for me and was a breeze to start working with.
I have AMQP (message queue/MQ) services that I also need to communicate with from the client.
I have a security model in place around the client and server through twisted, and I don't want the clients to talk to the Message Queue Server directly, nor do I want another dependency on amqp libraries for the clients.
Ideally I would like the client to send a request to the server through perspective broker, the Perspective Broker Server to send an amqp request to another server on behalf of the client, and the client to receive an acknowledgment when the PB server receives a response from the Message Queue Server.
Question:
On the server side, how do I defer the response inside one of the server's PB methods?
More importantly, what's the most efficient way to connect an outgoing request back to an incoming request while still preserving Twisted's event-driven paradigms?
On the server side, how do I defer the response inside one of the server's PB methods?
Easy. Return the Deferred from the remote_ method. Done.
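A minimal sketch of that, assuming a hypothetical publish_to_amqp() helper that returns a Deferred which fires when the message-queue server acknowledges the request:

from twisted.spread import pb


class Gateway(pb.Root):
    def remote_publish(self, payload):
        # publish_to_amqp() is a placeholder for your AMQP client call; it
        # must return a Deferred that fires with the broker's response.
        d = publish_to_amqp(payload)
        # Whatever the Deferred eventually fires with is what the PB client
        # receives as the result of its callRemote("publish", payload).
        d.addCallback(lambda ack: {"status": "ok", "ack": ack})
        return d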