I am considering using PyBluez, and my project requires quickly making a connection to a device. How long is the acquisition time before data can be received from the device?
In this case the device will be a remote control, which will very frequently be taken out of range. For Bluetooth and PyBluez to work for my application, I need to be able to detect a button press on the remote within a few seconds of it coming into range. I have read this similar answer. Does PyBluez introduce other overhead that makes constant discovery impractical? After the device is discovered (a minimum of 1.28 seconds, I assume), is there any further delay before it can send data?
Thanks in advance.
You are looking at the wrong part of the Bluetooth protocol.
You should be looking at connection times and client-to-server minimum/maximum times. Discovery is assumed to be over; you only do that once, to pair. Afterwards the remote control knows which device it controls, and the controlled device recognizes its paired remotes.
After that it is just a matter of connecting with a client-server model.
You need to decide the role of each device. However, constantly trying to connect is not a good pattern even for a PC. You should use on-demand connections, which can take a few seconds (1-12 seconds, with most attempts falling somewhere in the 0-5 second range).
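As a rough sketch of the on-demand pattern, the connection attempt can be wrapped in a retry loop. The `connect_with_retry` helper, the address, and the channel below are illustrative, not part of any library:

```python
import time

def connect_with_retry(make_socket, address, channel, attempts=3, delay=2.0):
    """Try an on-demand connection up to `attempts` times."""
    last_error = None
    for i in range(attempts):
        sock = make_socket()
        try:
            sock.connect((address, channel))  # may take 1-12 seconds over Bluetooth
            return sock
        except OSError as exc:
            last_error = exc
            sock.close()
            if i < attempts - 1:
                time.sleep(delay)  # back off before the next attempt
    raise ConnectionError(f"could not reach {address}: {last_error}")

# With PyBluez it might be used like this (placeholder address):
# import bluetooth
# sock = connect_with_retry(
#     lambda: bluetooth.BluetoothSocket(bluetooth.RFCOMM),
#     "00:11:22:33:44:55", 1)
```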
We can discuss this further on chat, if you can give more specific details about your project.
Related
I'm working on debugging a websocket latency problem.
I am trying to receive price information from a crypto-currency exchange through its websocket interface. The data packets we receive include a timestamp generated on the exchange server. I log the time when we receive the tick information on our computer (the "client box") and compare the arrival time with the server generation time. Most of the ticks show a latency of a few tens of milliseconds, which is more or less fine. But every day we see a few cases where the latency becomes several seconds or even more than ten seconds, and I would like to figure out where these large latencies come from.
The system is written in Python, and the websocket module I'm using is websocket-client (https://pypi.org/project/websocket_client/, https://github.com/websocket-client/websocket-client). I tried adding logs inside the module to see whether the latency is due to module processing time, but still no luck.
One idea currently in my mind is to use tcpdump to capture the network traffic and record the time each TCP packet arrives at my network card. If this time still shows the latency, I will have no option other than moving the program to a co-located server. However, I ran into a difficulty here: the websocket connection is SSL-encrypted, so I cannot see the tick generation time packed inside the message.
Does anyone have a solution here? In particular:
Is there any way to retrieve the SSL session keys used by the websocket-client Python package on the client end? (I assume the keys must be available somewhere on the local side, otherwise websocket-client could not decrypt the data itself, and Wireshark should be able to decrypt the messages for the TLS 1.2 protocol.)
If it is not easy to do this with the websocket-client package, I'm happy to try other websocket libraries written in Python or C/C++.
Can tcpdump get the timestamp at which the TCP data packet was sent from the server (even in server time)?
Any other advice is highly appreciated as well.
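On the first question: with Python 3.8+ you do not need any private key, because the ssl module can export the per-session TLS secrets to a key log file that Wireshark understands. A sketch (the URL is a placeholder, and passing a context through `sslopt` assumes a reasonably recent websocket-client):

```python
import ssl

# Export TLS session secrets in the NSS key log format.
# Point Wireshark at this file (Preferences -> Protocols -> TLS ->
# "(Pre)-Master-Secret log filename") to decrypt the capture.
ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/tls_keys.log"

# Hypothetical usage with websocket-client:
# import websocket
# ws = websocket.WebSocket(sslopt={"context": ctx})
# ws.connect("wss://example.com/stream")
```

`create_default_context()` will also enable key logging automatically if the `SSLKEYLOGFILE` environment variable is set, which avoids touching the application code at all.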
Thanks a lot!
Thanks @Eugène Adell
My tcpdump capture opened in Wireshark looks mostly like the below,
and I can see the TSval in TCP Option - Timestamps.
Can these indicate something?
Sorry for the probably basic questions; I really lack experience in this area. Thanks again.
EDIT
Can tcpdump get the timestamp when the TCP data packet was sent from the server (even in server time)?
Open your capture and see if the packets have the TCP timestamps option (as defined in RFC 1323 but better explained in RFC 7323). If so, the very first SYN packet should already mention it.
Unluckily, the TSval (timestamp value) given in these packets is not a real clock and does not always advance like one (it depends on the implementation used by your computers). If the conversation with your server lasts 60 s, for example, check whether this TSval also advances by 60 s; if so, maybe you can use this field to track when the packets were sent.
So I've been racking my brain trying to implement a system in which computers on a network (where there are always three or more computers on the network) are able to asynchronously communicate with each other by sending each other data.
So far, all I've been able to find as far as solutions go is sockets, which, to my knowledge, require a client script and a server script. My first problem is that I'd like to remove the client and server roles, since all of the computers on the network are decentralized and running the same script concurrently without a server. Secondly, all of the computers send other nodes (chosen at random) sensor data from a specific point in time. If, for example, I have 4 computers on the network and, since they're all running the same script, they all decide to send their data to another computer at the same time, wouldn't that cause a deadlock, since every node is trying to initiate a connection but none of them can accept one because they're all busy sending?
I've considered using multithreading to run my begin_sync and wait_sync functions concurrently, but I'm not sure whether or not that would work. Does anyone have any suggestions or ideas for solutions that I could look into?
Thanks for your time!
As per NotTheBatman's response, I was able to get this working using sockets on multiple ports. As for waiting for sensor data while querying other nodes, I simply used multithreading, with great success.
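For reference, the peer pattern described above (every node both listens and sends) can be sketched with one listener thread per node. The port handling and JSON payload below are made up for the example:

```python
import json
import socket
import threading

def start_listener(host, port, received):
    """Accept connections in a background thread and collect decoded messages."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()

    def loop():
        while True:
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(4096)
                received.append(json.loads(data.decode()))

    threading.Thread(target=loop, daemon=True).start()
    return srv

def send_to_peer(host, port, message):
    """Open a short-lived connection to a peer and send one JSON message."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(message).encode())
```

Because each node's listener runs in its own thread, a node that is busy sending can still accept incoming connections, which avoids the mutual-wait scenario described in the question.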
I have two PLC Modbus devices from two different companies.
Device A returns temperature and humidity; device B returns five values. Both devices connect over Modbus TCP.
The problem is that test software which can connect to one device cannot connect to the other. For example, one device can be reached using pyModbus, but the other cannot.
I also tested some software and Python libraries; the results are as follows:
Device A
Software
modbus Poll (OK)
ModScan32 (NO)
Python Lib
pyModbus (OK)
EasyModbus (NO)
Device B
Software
modbus Poll (NO)
ModScan32 (OK)
Python Lib
pyModbus (NO)
EasyModbus (OK)
I don't know the difference between the two devices.
I want to integrate these two into the web system for monitoring, and new devices can be added in the future.
If there is a third device, I will have to retest the connection to see which Python lib works.
How can I implement this?
Learn more about the communication format of PLC devices?
Write a set of Python Libs for all devices?
Thank You.
Use one library for each device, save the data somewhere centralized, and view it with a different app; I don't see the problem. Also, have you tried retrying the connection or the read after a few seconds? In my experience, Modbus devices fail to answer every so often (or maybe it is a library problem, I don't know) and a retry usually works.
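One way to keep the per-device library choice contained is a small adapter layer with a retry. Everything below (the class names, the `read()` contract) is just a sketch, not an existing API:

```python
import time

class DeviceAdapter:
    """Common interface; each concrete adapter wraps whichever library works."""
    def read(self):
        raise NotImplementedError

class DeviceA(DeviceAdapter):
    def read(self):
        # Would call pyModbus here for device A, e.g. read_holding_registers(...)
        raise NotImplementedError

def read_with_retry(device, attempts=3, delay=1.0):
    """Modbus devices occasionally fail to answer; retry before giving up."""
    for i in range(attempts):
        try:
            return device.read()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```

A new device then only needs a new adapter class; the web system reads through the common interface and never sees which library is underneath.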
It sounds like any good IoT framework would solve this: send the data from each device separately to one central point, such as an IoT cloud solution, then build a web app to view whatever data you have in the IoT framework, regardless of when or whether it came in. Most frameworks will also show you old data when a device is not connected, so you can still display data while indicating to your user that it is stale.
Key points:
I need to send roughly ~100 float numbers every 1-30 seconds from one machine to another.
The first machine is catching those values through sensors connected to it.
The second machine listens for them and passes them to an HTTP server (nginx), a Telegram bot, and another program that sends email alerts.
How would you do this and why?
Please be precise. It's the first time I've worked with sockets and with Python, but I'm confident I can do this. Just give me the crucial details; enlighten me!
A small portion (a few lines) of the core code would be appreciated if you think it's a delicate part, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and to choose a protocol, i.e. whether you will keep a persistent connection to your server or connect each time new data is ready.
Then, will you use HTTP POST, WebSockets, or ordinary sockets? Will you rely exclusively on nginx, or will your data catcher be a separate service?
This would be the most secure way if other people will be connecting to nginx to view sites, etc.
Write or use another server to run on another port. For example, another nginx process just for that. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then on client side, make a packet every x seconds of all data (pickle.dumps() or json or something), then connect to your port with your credentials and pass the packet.
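A sketch of that client loop (the URL is a placeholder; JSON is used instead of pickle because it is safe to parse on the receiving side):

```python
import json
import time
import urllib.request

def make_packet(readings):
    """Bundle the latest sensor readings with a timestamp into a JSON body."""
    return json.dumps({"ts": time.time(), "values": readings}).encode()

def post_packet(packet, url="https://example.com/ingest"):
    """POST one packet to the collecting server over HTTPS."""
    req = urllib.request.Request(
        url, data=packet, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Main loop sketch:
# while True:
#     post_packet(make_packet(read_sensors()))
#     time.sleep(5)
```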
A Python script may wait for it there.
Alternatively, you can write a socket server from scratch in Python (not especially hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it is much easier to maintain a persistent connection if you desire or need to. I don't think that is necessary, though, and coding break recovery can become bulky.
No, just wait on some port for a connection. The client must clearly identify itself (otherwise you instantly drop the connection), prove that it speaks your protocol, and then send the data.
Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You may even rely solely on keys built in advance for security and then pass only the data.
Do not worry about the speed. Sockets are handled by the OS, and if you are on a Unix-like system you may connect as many times as you want in as short an interval as you need. Nothing short of a DoS attack will impact it much.
If you are on Windows, it is better to use a finished server, because Windows sometimes does not release a socket on time, so you will be forced to wait or do some hackery to avoid this unfortunate behaviour (non-blocking sockets, SO_REUSEADDR, and some flow control will be needed).
As your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python, or modify and run one of the examples from the internet. That's me, though.
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, then multiple receivers could get the data if needed). You can safely send a maximum of 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
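A minimal sketch of that approach (the loopback address and port are placeholders):

```python
import socket
import struct

def send_floats(values, host="127.0.0.1", port=9999):
    """Pack the floats into one binary datagram and send it via UDP."""
    payload = struct.pack(f"!{len(values)}d", *values)  # network byte order doubles
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

def recv_floats(sock, bufsize=2048):
    """Receive one datagram and unpack it back into a list of floats."""
    data, _ = sock.recvfrom(bufsize)
    return list(struct.unpack(f"!{len(data) // 8}d", data))
```

Using `!` (network byte order) in the format string keeps the wire format unambiguous if the two machines have different endianness.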
Summary
I have set up a UDP packet listener in Python, and I would like to be able to identify the device that is broadcasting the data the listener receives.
The Goal
I have a PHP web page that reads the data from a database, which the listener populates as it receives data. My goal is to have a toggle switch that allows the user to select which device to hear data from. Currently, data is broadcast either by an MT4000 telemetry device or by manually sending data from the terminal across port 30000.
I don't want to identify it from a specific serial port, as described in: Identifying serial/usb device python
But rather wherever it is connected (any serial ports).
My Method
My idea at the moment is to somehow send a message from the listener back to the broadcasting device, acting both as an acknowledgement and as a scan asking what the device is. Is that a feasible approach?
Problems
It massively increases the amount of data being transmitted, with more back-and-forth packets.
It may not work for every connected device; the method of extracting an identity may differ from device to device.
Once the Python listener has identified the device, I will insert it into the database, and when the user selects a device, a modified query will be sent, i.e.
("SELECT * FROM table WHERE device = 'MT4000'");
I feel that this is not a clean method to use, and would be very open for different suggestions.
The solution
Unless it helps get across an answer, I'm not looking for specific code, but rather the theory of the task.
You may want to look into the way that nmap performs service detection. It is my understanding that it uses several different approaches and then takes the best match available. Those different approaches include:
What port the service is running on
What welcome banner the service provides for an initial connection
What OS the server runs (and thus what services could possibly run on that server)
You can read more about this in the service and application detection chapter.
Since you are also receiving data from these devices, you can look at that data to determine what type it is. The file command on Linux is a tool that performs a similar function; it can determine a file's type based on:
File extension (obviously inapplicable here)
Magic numbers that appear at or near the start of the file
The composition of the data (mostly binary, or mostly ASCII/Unicode/etc., byte endianness, and so on)
The underlying functionality of the file command is available as libmagic, a C library. It would be worth trying to use that directly rather than duplicating its functionality.
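The magic-number and composition checks can be sketched in a few lines. The signature table and the 0.95 printable threshold below are arbitrary examples, not values from any real tool:

```python
import string

# Hypothetical magic-number table: leading bytes -> payload label.
MAGIC = {
    b"\x89PNG": "png image",
    b"\x1f\x8b": "gzip data",
}

PRINTABLE = set(string.printable.encode())

def classify(payload):
    """Guess a payload type from magic bytes, falling back to composition."""
    for magic, label in MAGIC.items():
        if payload.startswith(magic):
            return label
    printable = sum(b in PRINTABLE for b in payload)
    if payload and printable / len(payload) > 0.95:
        return "text"
    return "binary"
```

In practice you would replace the table entries with whatever byte patterns your MT4000 and terminal-sent packets actually start with.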
It's worth pointing out that a lot of these techniques provide statistical probabilities rather than certain answers. This may mean that you have to deal with a degree of uncertainty in your results, leading to misclassifications. To mitigate this you can collect data until you are sure enough that the device providing the data has been correctly identified.