I am currently trying to capture serial data within a Python script. I want to log all the data arriving on a serial port while the rest of the script continues to interact with the system I am testing.
If I use pySerial directly, I believe the reads will end up blocking the rest of the tests I want to carry out until I finish logging.
The options I have considered are:
Writing another script to capture logs using pySerial, and calling that script with subprocess.Popen()
Using built-in Unix tools such as tail or cat and calling these with subprocess.Popen() (a rough sketch of this is shown after the question)
I am sure I could find a way to get either of these to work, but if anyone knows of a more direct way of doing it then I would love to know.
Thank you in advance.
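For reference, here is roughly what the second option might look like; this is only a sketch, and the device path, log file name, and the assumption that the port is already configured (e.g. with stty) are all mine, not from the question.

import subprocess

log_file = open('serial_capture.log', 'wb')            # hypothetical log file name
capture = subprocess.Popen(['cat', '/dev/ttyUSB0'],    # hypothetical device path, port assumed already configured
                           stdout=log_file)

# ... run the rest of the tests here while cat keeps logging ...

capture.terminate()    # stop the capture once the tests are done
log_file.close()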
Why create another process for reading data from pySerial?
To make the read non-blocking, you can configure a timeout on the Serial object.
e.g.
ser = serial.Serial()
ser.baudrate = 19200
ser.port = 0           # on newer pySerial versions this should be a device name, e.g. '/dev/ttyUSB0' or 'COM3'
ser.timeout = 2        # by default this is None, which makes read() block until the requested bytes arrive
ser.open()
Also look at the wrapper class for reference.
http://pyserial.sourceforge.net/examples.html#wrapper-class
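With a timeout set, a read returns after at most that many seconds even if nothing arrived, so the script is never stuck indefinitely. A minimal sketch of using it (the device name is an assumption):

import serial

ser = serial.Serial('/dev/ttyUSB0', baudrate=19200, timeout=2)   # device name is an assumption

data = ser.read(100)    # returns after 100 bytes or after 2 seconds, whichever comes first
print(len(data), 'bytes received')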
You can run a thread that keeps reading the data from the serial port and appends it to a buffer.
Creating another process involves the overhead of IPC and is not recommended for this task.
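A minimal sketch of that threaded approach (the port name, baud rate, and log file name are assumptions): a background thread keeps reading and appending everything to a log file while the main script carries on with the tests.

import threading
import serial

def log_serial(port_name, log_path, stop_event):
    # Keep reading from the port and appending everything to a log file.
    with serial.Serial(port_name, baudrate=19200, timeout=1) as ser, \
         open(log_path, 'ab') as log_file:
        while not stop_event.is_set():
            data = ser.read(1024)    # returns within 1 s even when idle, thanks to the timeout
            if data:
                log_file.write(data)

stop_event = threading.Event()
logger = threading.Thread(target=log_serial,
                          args=('/dev/ttyUSB0', 'serial.log', stop_event),
                          daemon=True)
logger.start()

# ... run the rest of the tests here ...

stop_event.set()    # tell the logging thread to finish
logger.join()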
You can always check whether there is data available to read with ser.inWaiting().
from serial import Serial

ser = Serial(port=0, baudrate=19200)    # set the parameters to what you want

while 1:
    if ser.inWaiting():
        temp = ser.read()
    # all the other code in the loop
If there is no data available to read, the loop skips the serial reading.
Or, if the data is too time-sensitive for polling like this, a dedicated reader thread (as in the other answer) is a better fit.
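As a small refinement (not part of the original answer), you can also drain everything that has accumulated in one call instead of reading a single byte at a time; a sketch:

import serial

ser = serial.Serial('/dev/ttyUSB0', baudrate=19200)   # device name is an assumption

buffer = bytearray()
while True:
    waiting = ser.in_waiting                 # in_waiting is the modern spelling of inWaiting()
    if waiting:
        buffer.extend(ser.read(waiting))     # read everything currently available in one call
    # all the other code in the loop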
I have a serial connection to a solar inverter over USB. Using a free serial device monitor I can spy on the connection and watch it read/write information.
The info my computer gets from the inverter can only be read and exported (to .xlsx) in their application. Their app has no automatic export functionality and no API. The end goal is to be able to export automatically, without me touching the computer, at designated time intervals. I'm trying to circumvent their app by reading the data myself and saving it to .csv so I can do anything else I want with it once I have it.
(Screenshot of what the serial monitor shows; the top is read, the bottom is write.)
I can't seem to decode what it's sending/receiving. I also don't receive anything when I send those same write messages, and when I just loop and try readline() I get empty bytes.
My code looks like this, though I have a few different iterations of it doing similar things, like looping through every baudrate, stopbits, and bytesize option, as well as looping through the five unique write messages I see in the monitor.
import serial

# Construct serial connection using baud and stopbit
serialPort = serial.Serial(port="COM3",
                           baudrate=115200,
                           bytesize=serial.EIGHTBITS,
                           timeout=0,
                           stopbits=serial.STOPBITS_ONE)

while 1:
    serialPort.write('01 03 00 2D 00 2D 15 DE'.encode())
    serialString = serialPort.readline()
    if serialString.decode() != '':
        print(serialString.decode())
Any help or advice for what I'm doing is appreciated.
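One observation, offered as a guess rather than a confirmed fix: if the monitor shows those write messages as hexadecimal bytes, then '01 03 00 2D 00 2D 15 DE'.encode() sends the ASCII text of that hex string rather than the eight raw bytes. A sketch of sending the captured message as raw bytes instead (the non-zero timeout is my addition so the read waits briefly):

import serial

serialPort = serial.Serial(port="COM3", baudrate=115200,
                           bytesize=serial.EIGHTBITS,
                           timeout=1,                       # give the device a moment to answer
                           stopbits=serial.STOPBITS_ONE)

frame = bytes.fromhex('01 03 00 2D 00 2D 15 DE')   # the captured write message as raw bytes
serialPort.write(frame)
response = serialPort.read(64)                     # binary protocols rarely terminate lines with '\n'
print(response.hex())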
I am using pySerial to read an Arduino that outputs data over serial to a Raspberry Pi. On the Pi, I'm running a Python script that reads the data using pySerial. I want to run another Python script that also reads the same data on the same serial port. Would they interfere with each other? I've tested it and both scripts seem to read data, and it does not crash or lock the port, but I wondered whether problems would occur. Both scripts perform different functions and I want to keep them separate. Both scripts start with the following code and then do something different when they get to ##do something##
import serial

ser = serial.Serial('/dev/ttyACM1', 115200)    # just change the port number to the appropriate one

while True:
    line = ser.readline().decode("utf-8").rstrip()
    ##do something##
If two scripts read from the same serial port, they will not both see the same data: once a byte has been read by one of them, it is removed from the RX buffer, so the other has nothing left to read.
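One common workaround (my suggestion, not part of the answer above) is to let a single reader own the port and fan the lines out to the different consumers, for example via queues; a sketch:

import queue
import threading
import serial

def reader(port_name, consumers):
    # Single owner of the serial port; copies every line to each consumer queue.
    ser = serial.Serial(port_name, 115200)
    while True:
        line = ser.readline().decode("utf-8").rstrip()
        for q in consumers:
            q.put(line)

queue_a, queue_b = queue.Queue(), queue.Queue()
threading.Thread(target=reader, args=('/dev/ttyACM1', [queue_a, queue_b]), daemon=True).start()

# Each "script" then becomes a consumer reading from its own queue:
# line = queue_a.get()    ## do something ##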
Is there a way to capture and write very fast serial data to a file?
I'm using a 32kSPS external ADC and a baud rate of 2000000 while printing in the following format: adc_value (32bits) \t millis()
This results in ~15 prints every 1 ms. Unfortunately every single solution I have tried fails to capture and store real-time data to a file. This includes: Processing sketches, TeraTerm, Serial Port Monitor, PuTTY and some Python scripts. All of them are unable to log the data in real time.
Arduino Serial Monitor on the other hand is able to display real time serial data, but it's unable to log it in a file, as it lacks this function.
(Screenshot of the Arduino Serial Monitor showing the incoming data.)
One problematic thing is probably that you perform a file write each time you receive a new record. That wastes a lot of time on writing data.
Instead, try to collect the data into buffers, and when a buffer is about to fill up, write the whole buffer out in a single, as low-level as possible, write call.
And so as not to stall the receiving of the data too much, you could use threads and double buffering: receive data in one thread and store it in a buffer; when the buffer is about to fill up, signal a second thread and switch to a second buffer. The other thread takes the full buffer, writes it to disk, and waits for the next buffer to become full.
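A minimal sketch of that buffering-plus-writer-thread idea in Python (the port name, baud rate, buffer size, and file name are assumptions):

import threading
import serial

PORT, BAUD = '/dev/ttyUSB0', 2000000    # assumptions
BUF_SIZE = 64 * 1024                    # hand data to the writer in 64 KiB chunks

full_buffers = []                       # buffers waiting to be written to disk
buffer_ready = threading.Condition()

def writer(path):
    # Takes full buffers and writes each one to disk in a single call.
    with open(path, 'wb') as f:
        while True:
            with buffer_ready:
                while not full_buffers:
                    buffer_ready.wait()
                buf = full_buffers.pop(0)
            f.write(buf)                # one large write per buffer

threading.Thread(target=writer, args=('capture.bin',), daemon=True).start()

ser = serial.Serial(PORT, BAUD)
current = bytearray()
while True:
    current.extend(ser.read(ser.in_waiting or 1))    # read whatever has arrived
    if len(current) >= BUF_SIZE:                     # buffer full: hand it off and switch
        with buffer_ready:
            full_buffers.append(bytes(current))
            buffer_ready.notify()
        current = bytearray()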
After trying more than 10 possible solutions for this problem, including dedicated serial capture software, Python scripts, Matlab scripts, and some alternatives in C, the only one that kinda worked for me proved to be MegunoLink Pro.
It does not achieve the full 32kSPS potential of the ADC, rather around 12-15kSPS, but it is still much better than anything I've tried.
Not achieving the full 32kSPS might also be down to the Serial.print() method I'm using for printing values to the serial console. By the way, the platform I've been using is an ESP32.
Later edit: don't forget to edit the MegunoLinkPro.exe.config file in the MegunoLink Pro install directory to add further baud rates, like 1000000 or 2000000. By default it is limited to 500000.
I have a Python script that is used to receive data associated with a radio station audio event (such as a song or commercial) from the machine playing the audio. The script will parse and process the data and then send portions of it to various other destinations.
First the socket is set up:
client_socket_1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    print 'trying to open socket 1'
    client_socket_1.connect((TCP_RCV_IP_CR1, TCP_RCV_PORT_CR1))
    client_socket_1.setblocking(0)
except socket.error, e:
    print 'Error', e, TCP_RCV_IP_CR1, '\n\n\n'
else:
    SOCK1 = 1
    print 'Successful connection to ', TCP_RCV_IP_CR1, '\n'
Now we wait until data is available to be read. I used select() and when the socket is ready to be read, the thread that parses and processes the data is spawned.
ready_1 = select.select([client_socket_1], [], [], 1)    # select tells us when data is available at the socket
if ready_1[0] and SOCK1:    # Don't run this code if there is no connection on client_socket_1 or no data available
    t1 = Thread(target=processdata1)    # Set up the thread
    t1.start()    # Call the process to process available data as a thread
It is important that the data be read as quickly as possible as it will be transported via TCP or UDP (depending on the particular data chunks and program specifications) along with the associated audio, and the function of one of the data items we are handling can create an on-air 'hiccup' in the audio if not received in a timely fashion. (TMI: It causes a 'replacement' commercial to play at the receiving end which is supposed to 'cover' the commercial audio we are sending. If the replacement spot doesn't start quickly enough listeners will hear the beginning of the commercial we are sending, then the local replacement one will start when our data is received and it sounds like a hiccup on the air.)
To confirm that my script is not always receiving the data quickly enough, I telnetted to the port it is listening on and watched the data arrive in the telnet window, then looked at the Python output (which writes received data to stdout as soon as it arrives). I see about a 1.5-second delay between the telnet output and the Python output. This is the same amount of delay we have observed in normal on-air operation.
I chose to use select() because I was asked to multi-thread the script and I thought that would be a good way to know when to trigger a data-processing thread. My original idea was to simply loop through attempting to read data from each of the three systems we are monitoring and, when data is found, process it.
The thought was that if data is being processed from one system when another system has data ready to be read, it might cause a delay in processing and sending out the data from that machine. However, I can't see that delay being as significant as what I am experiencing now. I am considering going back to the original plan.
I would rather stick with what I have which is working flawlessly as long as data is received in a timely fashion. Any thoughts on why the excessively long delay?
I think it has to do with your timeout parameter in combination with the wlist and xlist parameters.
Consider this piece of code
write_list = []
exception_list = []
select.select([client_socket_1], write_list, exception_list)
It takes an optional timeout parameter, as you use it. The documentation says:
select() also takes an optional fourth parameter which is the number
of seconds to wait before breaking off monitoring if no channels have
become active. Using a timeout value lets a main program call select()
as part of a larger processing loop, taking other actions in between
checking for network input.
It might be that the call will always wait one second before returning because of the empty lists. Try
ready_1 = select.select(
    [client_socket_1],
    [client_socket_1],
    [client_socket_1], 1
)
Or you can use a timeout value of 0, which
specifies a poll and never blocks.
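A small sketch of that polling variant, reusing client_socket_1, Thread, and processdata1 from the question (the sleep interval is my own assumption to keep the loop from spinning):

import select
import time

while True:
    # timeout=0 turns select() into a pure poll: it returns immediately,
    # and ready_1[0] is non-empty only if data can be read right now.
    ready_1 = select.select([client_socket_1], [], [], 0)
    if ready_1[0]:
        t1 = Thread(target=processdata1)
        t1.start()
    time.sleep(0.01)    # brief pause so the poll loop doesn't use 100% CPU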
I am doing serial communication in Python between a Raspberry Pi and a camera. I send some data from the RPi using ser.write() and read data from the camera on the RPi using ser.read(). I would like to know what ser.flush(), ser.flushInput() and ser.flushOutput() would do if I add them after the read command.
I assume ser.flush() will make the program wait until all data from the buffer memory is read, but I don't understand what the other two will do.
Can someone tell me what the difference is between these three when used in serial communication, and what will happen when I use each of them separately after ser.write() or ser.read()?
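For reference, a small sketch of how I understand the three calls behave in pySerial (flushInput() and flushOutput() are the pre-3.0 names for reset_input_buffer() and reset_output_buffer(); the port name is an assumption):

import serial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)    # port name is an assumption

ser.write(b'some command\n')
ser.flush()                   # wait until everything passed to write() has actually been transmitted

ser.reset_output_buffer()     # flushOutput(): discard data that was written but not yet sent
ser.reset_input_buffer()      # flushInput(): discard data that was received but not yet read

response = ser.read(64)       # reads only bytes that arrive after the input buffer was cleared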