Read multiple DS18B20 temperature sensors faster using Raspberry Pi - Python

My custom sensor dashboard requests new readings every second.
This worked well until I hooked up three DS18B20 temperature sensors (1-Wire protocol, so all on one pin), each of which takes 750 ms to provide new data.
This is the class I currently use to read the temperature of each sensor:
# ds18b20.py
# written by Roger Woollett
import os
import glob
import time

class DS18B20:
    # much of this code is lifted from Adafruit web site
    # This class can be used to access one or more DS18B20 temperature sensors
    # It uses OS supplied drivers and one wire support must be enabled
    # To do this add the line
    # dtoverlay=w1-gpio
    # to the end of /boot/config.txt
    #
    # The DS18B20 has three pins; looking at the flat side with the pins pointing
    # down, pin 1 is on the left
    # connect pin 1 to GPIO ground
    # connect pin 2 to GPIO 4 *and* GPIO 3.3V via a 4k8 (4800 ohm) pullup resistor
    # connect pin 3 to GPIO 3.3V
    # You can connect more than one sensor to the same set of pins
    # Only one pullup resistor is required
    def __init__(self):
        # Load required kernel modules
        os.system('modprobe w1-gpio')
        os.system('modprobe w1-therm')

        # Find file names for the sensor(s)
        base_dir = '/sys/bus/w1/devices/'
        device_folder = glob.glob(base_dir + '28*')
        self._num_devices = len(device_folder)
        self._device_file = []
        for folder in device_folder:
            self._device_file.append(folder + '/w1_slave')

    def _read_temp(self, index):
        # Issue one read to one sensor
        # You should not call this directly
        # First check if this index exists
        if index >= len(self._device_file):
            return False
        with open(self._device_file[index], 'r') as f:
            data = f.read()
        return data

    def tempC(self, index=0):
        # Call this to get the temperature in degrees C
        # detected by a sensor
        data = self._read_temp(index)
        retries = 3  # was 0, which meant the retry loop below could never run
        # Check for error
        if data is False:
            return None
        while ("YES" not in data) and (retries > 0):
            # Read failed so try again
            time.sleep(0.1)
            # print('Read Failed', retries)
            data = self._read_temp(index)
            retries -= 1
        if (retries == 0) and ("YES" not in data):
            return None
        (discard, sep, reading) = data.partition(' t=')
        temperature = float(reading) / 1000.0
        if temperature == 85.0:
            # 85 degC is the boot temperature of the sensor, so ignore that value
            # (the raw string must be converted first; comparing it to the
            # integer 85000, as before, could never be true)
            return None
        return temperature

    def device_count(self):
        # Call this to see how many sensors have been detected
        return self._num_devices
I already tried returning the previous temperature reading if the current one isn't finished yet; however, this didn't reduce the time it took to read a sensor, so I guess the only way is to do things asynchronously.
I could reduce the precision to reduce the time per reading, but ideally I would read all of the sensors simultaneously on separate threads.
How can I best implement this? Or are there other ways to improve the reading speed of multiple DS18B20 sensors?
Thanks for any insights!

You're facing a limitation introduced by the Linux kernel driver. When speaking the 1-Wire protocol directly, you can issue a single "convert temperature" command to all devices on the bus, as described here, and then read each sensor in turn, so all three sensors would share a single 750 ms conversion cycle rather than needing 3 * 750 ms.
The Linux driver explicitly doesn't support this mode of operation:
If none of the devices are parasite powered it would be possible to convert all the devices at the same time and then go back to read individual sensors. That isn’t currently supported. The driver also doesn’t support reduced precision (which would also reduce the conversion time) when reading values.
That means you're stuck with a read cycle of 750 ms per device. Your best option is probably placing the sensor-reading code in a separate thread, e.g.:
import glob
import threading
import time

# Note that we're inheriting from threading.Thread here;
# see https://docs.python.org/3/library/threading.html
# for more information.
class DS18B20(threading.Thread):
    default_base_dir = "/sys/bus/w1/devices/"

    def __init__(self, base_dir=None):
        super().__init__()
        self._base_dir = base_dir if base_dir else self.default_base_dir
        self.daemon = True
        self.discover()

    def discover(self):
        device_folder = glob.glob(self._base_dir + "28*")
        self._num_devices = len(device_folder)
        self._device_file: list[str] = []
        for i in range(self._num_devices):
            self._device_file.append(device_folder[i] + "/w1_slave")
        self._values: list[float | None] = [None] * self._num_devices
        self._times: list[float] = [0.0] * self._num_devices

    def run(self):
        """Thread entrypoint: read sensors in a loop.

        Calling DS18B20.start() will cause this method to run in
        a separate thread.
        """
        while True:
            for dev in range(self._num_devices):
                self._read_temp(dev)
            # Adjust this value as you see fit, noting that you will never
            # read actual sensor values more often than 750ms * self._num_devices.
            time.sleep(1)

    def _read_temp(self, index):
        for i in range(3):
            with open(self._device_file[index], "r") as f:
                data = f.read()
            if "YES" not in data:
                time.sleep(0.1)
                continue
            discard, sep, reading = data.partition(" t=")
            temp = float(reading) / 1000.0
            self._values[index] = temp
            self._times[index] = time.time()
            break
        else:
            print(f"failed to read device {index}")

    def tempC(self, index=0):
        return self._values[index]

    def device_count(self):
        """Return the number of discovered devices"""
        return self._num_devices
Because this is a thread, you need to .start() it first, so your code would look something like:
d = DS18B20()
d.start()

while True:
    for i in range(d.device_count()):
        print(f'dev {i}: {d.tempC(i)}')
    time.sleep(0.5)
You can call the tempC method as often as you want, because it just returns a value from the _values list. The actual update frequency is controlled by the loop in the run method (and by the minimum cycle time imposed by the sensors).
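The run loop also stores a timestamp for each reading in _times, though the class never exposes it. A small accessor along these lines (my addition, not part of the class above) would let the dashboard flag stale readings:

# Hypothetical extra method for the DS18B20 class above: report how
# stale a reading is, using the _times list that run() already updates.
def age(self, index=0):
    """Seconds since sensor `index` was last successfully read."""
    return time.time() - self._times[index]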

Related

Amending Apogee sensor open-source Python 2 code to operate on Raspberry Pi 4 through USB port

I'm working on a project to connect several sensors to a Raspberry Pi 4 Model B and stream the collected data to Google Cloud Platform for storage/reporting/analysis.
I have an Apogee SQ-520 PAR sensor which connects through USB and is designed to operate on Windows and Mac (but not Linux), with an accompanying GUI to view the data. I've connected the sensor to the Raspberry Pi through the USB port (since the IO pins are out of the picture) and now need to write a program for the Pi to read it. After reaching out to the Apogee support team, they shared the source code with me (below). However, as I am a complete beginner, I am having a hard time amending the code to run on the Raspberry Pi. I need all the help and support I can get.
Notes:
The code uses the pyserial library.
The code is written in Python 2.7 (according to the website); I am using Python 3. (A sketch of the changes needed for Python 3 follows the code below.)
When running the code as-is I receive a lot of syntax errors, regardless of whether I run it under Python 2 or 3.
The sensor connects through 5V USB.
More info on the sensor: https://www.apogeeinstruments.com/sq-520-full-spectrum-smart-quantum-sensor-usb/#product-tab-description
Code:
from serial import Serial
from time import sleep
import struct

GET_VOLT = '\x55!'
READ_CALIBRATION = '\x83!'
SET_CALIBRATION = '\x84%s%s!'
READ_SERIAL_NUM = '\x87!'
GET_LOGGING_COUNT = '\xf3!'
GET_LOGGED_ENTRY = '\xf2%s!'
ERASE_LOGGED_DATA = '\xf4!'

class Quantum(object):
    def __init__(self):
        """Initializes class variables, and attempts to connect to device"""
        self.quantum = None
        self.offset = 0.0
        self.multiplier = 0.0
        self.connect_to_device()

    def connect_to_device(self):
        """This function creates a Serial connection with the defined comport
        and attempts to read the calibration values"""
        port = 'COM1'  # you'll have to check your device manager and put the actual com port here
        self.quantum = Serial(port, 115200, timeout=0.5)
        try:
            self.quantum.write(READ_CALIBRATION)
            multiplier = self.quantum.read(5)[1:]
            offset = self.quantum.read(4)
            self.multiplier = struct.unpack('<f', multiplier)[0]
            self.offset = struct.unpack('<f', offset)[0]
        except (IOError, struct.Error), data:
            print data
            self.quantum = None

    def get_micromoles(self):
        """This function converts the voltage to micromoles"""
        voltage = self.read_voltage()
        if voltage == 9999:
            # you could raise some sort of Exception here if you wanted to
            return
        # this next line converts volts to micromoles
        micromoles = (voltage - self.offset) * self.multiplier * 1000
        if micromoles < 0:
            micromoles = 0
        return micromoles

    def read_voltage(self):
        """This function averages 5 readings over 1 second and returns
        the result."""
        if self.quantum == None:
            try:
                self.connect_to_device()
            except IOError:
                # you can raise some sort of exception here if you need to
                return
        # store the responses to average
        response_list = []
        # change to average more or less samples over the given time period
        number_to_average = 5
        # change to shorten or extend the time duration for each measurement
        # be sure to leave as floating point to avoid truncation
        number_of_seconds = 1.0
        for i in range(number_to_average):
            try:
                self.quantum.write(GET_VOLT)
                response = self.quantum.read(5)[1:]
            except IOError, data:
                print data
                # dummy value to know something went wrong. could raise an
                # exception here alternatively
                return 9999
            else:
                if not response:
                    continue
                # if the response is not 4 bytes long, this line will raise
                # an exception
                voltage = struct.unpack('<f', response)[0]
                response_list.append(voltage)
                sleep(number_of_seconds/number_to_average)
        if response_list:
            return sum(response_list)/len(response_list)
        return 0.0
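For reference, the Python 2-isms that break under Python 3 are the except ExcType, name: syntax and the print statements, and pyserial 3.x wants bytes rather than str for writes. Also, as far as I can tell, struct's exception class is struct.error (lowercase), so the struct.Error above looks wrong even for Python 2. A minimal sketch of just the connection part ported to Python 3, untested against the actual sensor (on a Raspberry Pi the port will be something like /dev/ttyUSB0 rather than COM1, which is an assumption to check with ls /dev/tty*):

from serial import Serial
import struct

# Command strings become bytes literals under Python 3 / pyserial 3.x
GET_VOLT = b'\x55!'
READ_CALIBRATION = b'\x83!'

port = '/dev/ttyUSB0'  # assumption: first USB-serial device on the Pi
quantum = Serial(port, 115200, timeout=0.5)
try:
    quantum.write(READ_CALIBRATION)
    multiplier = quantum.read(5)[1:]
    offset = quantum.read(4)
    multiplier = struct.unpack('<f', multiplier)[0]
    offset = struct.unpack('<f', offset)[0]
except (IOError, struct.error) as data:  # Python 3 exception syntax
    print(data)  # print is a function in Python 3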

pylibftdi Device.read skips some bytes

I have an FPGA that streams data on the USB bus through an FT2232H and I have observed that about 10% of the data has to be thrown away because some bytes in the frame are missing. Here are the technical details:
FPGA is an Artix 7. A batch of 4002 bytes is ready every 9 ms, so that works out to 444,667 bytes/s of data.
My laptop runs python 3.7 (from anaconda) on Ubuntu 18.04LTS
The FPGA/FT2232H is opened via the following initialization lines:
SYNCFF = 0x40
SIO_RTS_CTS_HS = (0x1 << 8)
self.device = pylibftdi.Device(mode='t', interface_select=pylibftdi.INTERFACE_A, encoding='latin1')
self.device.ftdi_fn.ftdi_set_bitmode(0xff, SYNCFF)
self.device.ftdi_fn.ftdi_read_data_set_chunksize(0x10000)
self.device.ftdi_fn.ftdi_write_data_set_chunksize(0x10000)
self.device.ftdi_fn.ftdi_setflowctrl(SIO_RTS_CTS_HS)
self.device.flush()
Then the data is read via this simple line:
raw_usb_data = my_fpga.device.read(0x10000)
I have observed the following:
I always get 0x10000 bytes of data per read, which is what I expect.
Reading 2**16 = 65,536 bytes at once using device.read should take 147.4 ms, given that a batch is ready every 9 ms. But timing that line gives a mean of 143 ms with a standard deviation of 6.6 ms.
My first guess is that there is no buffer/a tiny buffer somewhere and that some information is lost because the OS (priority issue?) or python (garbage collection?) does something else at some point for too long.
How can I reduce the amount of bytes lost while reading the device?
The FT2232H has internal FIFO buffers with a capacity of ~4 kbits. Chances are that you are limited by them. I'm not sure how pylibftdi deals with them, but an alternative approach might work if you can use the VCP driver, which allows you to address the FT2232H as a standard COM port, e.g. via pyserial.
Some excerpts from one of my projects, which actually works at baud rates >12 Mbps (UART is limited to 12 Mbps, but e.g. fast opto can reach ~25 Mbps):
import time
import traceback
import serial
import serial.tools.list_ports
import multiprocessing
import multiprocessing.connection

def IO_proc(cntr_pipe, data_pipe):
    try:
        search_str = "USB VID:PID=0403:6010 SER="
        ports = [x.device for x in serial.tools.list_ports.comports() if search_str in x.hwid]
        baud_rate = 12000000  # only matters for uart and not for fast opto or fifo mode
        ser = serial.Serial(ports[0], baud_rate)  # first matching port (the original used an undefined `port`)
        while not cntr_pipe.closed:
            time.sleep(0)
            in_data = ser.read(ser.inWaiting())
            # [...do some pattern matching, package identification etc...]
            data_pipe.send_bytes(in_data)
    except EOFError:
        ret_code = 2
    except Exception as e:
        cntr_pipe.send(traceback.format_exc())
        cntr_pipe.close()
        ret_code = 4
    finally:
        cntr_pipe.close()
        ser.close()

multiprocessing.connection.BUFSIZE = 2 ** 20  # only required for windows
child_cntr, parent_cntr = multiprocessing.Pipe()
child_data, parent_data = multiprocessing.Pipe()
process = multiprocessing.Process(target=IO_proc, args=(child_cntr, child_data))

# called frequently
def update():
    if child_cntr.poll():
        raise Exception("error", child_cntr.recv())
    buf = bytes()
    while parent_data.poll():
        buf += parent_data.recv_bytes()
    # [...do something fancy...]
I tried to copy and paste a minimal example. It is untested, so please forgive me if it does not work out of the box. To get this working, one actually needs to make sure that the VCP driver, and not the D2XX driver, is loaded.
P.S.: Actually, while scanning through my files, I realized that the pylibftdi way should work as well, since I use a "decorator" class for the case where the D2XX driver is loaded:
try:
    import pylibftdi
except ImportError:
    pylibftdi = None

class pylibftdi_device:
    def __init__(self, speed):
        self.dev = pylibftdi.Device(interface_select=2)
        self.dev.baudrate = speed
        self.buf = b''

    def write(self, data):
        self.dev.write(data)

    def read(self, bytecount):
        while bytecount > len(self.buf):
            self._read()
        ret = self.buf[:bytecount]
        self.buf = self.buf[bytecount:]
        return ret

    def flushInput(self):
        self.dev.flush_input()  # FT_PURGE_RX
        self.buf = b''

    def _read(self):
        self.buf += self.dev.read(2048)

    @property
    def in_waiting(self):
        self._read()
        return len(self.buf)

    def close(self):
        self.dev.close()

def find_device_UART(baudrate=12000000, index=1, search_string="USB VID:PID=0403:6010 SER="):
    if pylibftdi:
        return pylibftdi_device(baudrate), "pylibftdi_device"
    try:
        ports = [x.device for x in serial.tools.list_ports.comports() if search_string in x.hwid]
        module_logger.info(str(ports))
        if len(ports) == 0:
            return None, "no device found"
        else:
            ser = serial.Serial(ports[index], baudrate)
            return ser, "found device %s %d" % (ser.name, ser.baudrate)
    except serial.SerialException as e:
        return None, "error during device detection - \n" + str(e)
So the main difference from your example is that the receive buffer is read much more frequently and accumulated into a host-side buffer, which is then searched for the packets later on. And maybe this is all complete overkill for your application, and you just need to make smaller, more frequent read calls to ensure the chip's buffers never overflow; a sketch of that simpler approach follows.
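As a rough illustration of that last point, here is a sketch using pylibftdi's Device.read as in the question (binary mode rather than the question's mode='t', and the 2048-byte chunk size is a guess to tune, not a recommendation):

import pylibftdi

FRAME_SIZE = 4002  # one FPGA batch, per the question

# Drain the chip often with small reads so its internal FIFO never
# fills, then reassemble fixed-size frames on the host side.
dev = pylibftdi.Device(mode='b', interface_select=pylibftdi.INTERFACE_A)
buf = b''
while len(buf) < 0x10000:
    buf += dev.read(2048)  # returns at most 2048 bytes, possibly fewer
frames = [buf[i:i + FRAME_SIZE] for i in range(0, len(buf), FRAME_SIZE)]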

Multiprocessing function not writing to file or printing

I'm working on a Raspberry Pi (3 B+) making a data collection device, and I'm trying to spawn a process to record the data coming in and write it to a file. I have a function for the writing that works fine when I call it directly.
When I call it using the multiprocessing approach, however, nothing seems to happen. I can see in task monitors in Linux that the process does in fact get spawned, but no file gets written, and when I try to pass a flag to it to shut down it doesn't work, meaning I end up terminating the process and nothing seems to have happened.
I've been over this every which way and can't see what I'm doing wrong; can anyone else? In case it's relevant, these are functions inside a parent class, and one of the functions is meant to spawn another as a process.
Code I'm using:
from datetime import datetime, timedelta
import csv
from drivers.IMU_SEN0 import IMU_SEN0
import multiprocessing, os

class IMU_data_logger:
    _output_filename = ''
    _csv_headers = []
    _accelerometer_headers = ['Accelerometer X', 'Accelerometer Y', 'Accelerometer Z']
    _gyroscope_headers = ['Gyroscope X', 'Gyroscope Y', 'Gyroscope Z']
    _magnetometer_headers = ['Bearing']
    _log_accelerometer = False
    _log_gyroscope = False
    _log_magnetometer = False
    IMU = None
    _writer = []
    _run_underway = False
    _process = []
    _stop_value = 0

    def __init__(self, output_filename='/home/pi/blah.csv', log_accelerometer=True, log_gyroscope=True, log_magnetometer=True):
        """data logging device
        NOTE! Multiple instances of this class should not use the same IMU devices simultaneously!"""
        self._output_filename = output_filename
        self._log_accelerometer = log_accelerometer
        self._log_gyroscope = log_gyroscope
        self._log_magnetometer = log_magnetometer

    def __del__(self):
        # TODO Update this
        if self._run_underway:  # If there's still a run underway, end it first
            self.end_recording()

    def _set_up(self):
        self.IMU = IMU_SEN0(self._log_accelerometer, self._log_gyroscope, self._log_magnetometer)
        self._set_up_headers()

    def _set_up_headers(self):
        """Set up the headers of the CSV file based on the header substrings at top
        and the input flags on what will be measured"""
        self._csv_headers = []
        if self._log_accelerometer is not None:
            self._csv_headers += self._accelerometer_headers
        if self._log_gyroscope is not None:
            self._csv_headers += self._gyroscope_headers
        if self._log_magnetometer is not None:
            self._csv_headers += self._magnetometer_headers

    def _record_data(self, frequency, stop_value):
        """Record data function, which takes a recording frequency, in hertz, as an input"""
        self._set_up()  # Run setup in the spawned process
        previous_read_time = datetime.now() - timedelta(1, 0, 0)
        self._run_underway = True  # Note that a run is now going
        Period = 1 / frequency  # Period, in seconds, of a recording based on the input frequency
        print("Writing output data to", self._output_filename)
        with open(self._output_filename, 'w', newline='') as outcsv:
            self._writer = csv.writer(outcsv)
            self._writer.writerow(self._csv_headers)  # Write headers to file
            while stop_value.value == 0:  # While a run continues
                if datetime.now() - previous_read_time >= timedelta(0, 1, 0):
                    print("run underway value", self._run_underway)
                if datetime.now() - previous_read_time >= timedelta(0, Period, 0):
                    # If we've waited a period, collect the data; otherwise keep looping
                    previous_read_time = datetime.now()  # Update previous read time
                    next_row = []
                    if self._log_accelerometer:
                        # Get values in m/s^2
                        axes = self.IMU.read_accelerometer_values()
                        next_row += [axes['x'], axes['y'], axes['z']]
                    if self._log_gyroscope:
                        # Read gyro values
                        gyro = self.IMU.read_gyroscope_values()
                        next_row += [gyro['x'], gyro['y'], gyro['z']]
                    if self._log_magnetometer:
                        # Read magnetometer value
                        b = self.IMU.read_magnetometer_bearing()
                        next_row += b
                    self._writer.writerow(next_row)
            # Close the csv when done
            outcsv.close()

    def start_recording(self, frequency_in_hz):
        # Create recording process
        self._stop_value = multiprocessing.Value('i', 0)
        self._process = multiprocessing.Process(target=self._record_data, args=(frequency_in_hz, self._stop_value))
        # Start recording process
        self._process.start()
        print(datetime.now().strftime("%H:%M:%S.%f"), "Data logging process spawned")
        print("Logging Accelerometer:", self._log_accelerometer)
        print("Logging Gyroscope:", self._log_gyroscope)
        print("Logging Magnetometer:", self._log_magnetometer)
        print("ID of data logging process: {}".format(self._process.pid))

    def end_recording(self, terminate_wait=2):
        """Function to end the recording process that's been spawned.
        Args: terminate_wait: This is the time, in seconds, to wait after attempting
        to shut down the process before terminating it."""
        # Get process id
        id = self._process.pid
        # Set stop event for process
        self._stop_value.value = 1
        self._process.join(terminate_wait)  # Wait two seconds for the process to terminate
        if self._process.is_alive():  # If it's still alive after waiting
            self._process.terminate()
            print(datetime.now().strftime("%H:%M:%S.%f"), "Process", id, "needed to be terminated.")
        else:
            print(datetime.now().strftime("%H:%M:%S.%f"), "Process", id, "successfully ended itself.")
====================================================================
ANSWER: For anyone following up here, it turns out the problem was my use of the VS Code debugger, which apparently doesn't work with multiprocessing and was somehow preventing the success of the spawned process. Many thanks to Tomasz Swider below for helping me work through the issues and, eventually, find my idiocy. The help was very deeply appreciated!
I can see a few things wrong in your code:
First, stop_value == 0 will not work, because multiprocessing.Value('i', 0) != 0; change that line to
while stop_value.value == 0:
Second, you never update previous_read_time, so it will write readings as fast as it can and you will run out of disk space quickly.
Third, try using time.sleep(); what you are doing is called busy looping, and it is bad because it wastes CPU cycles needlessly (see the short sketch after this list).
Fourth, terminating with self._stop_value = 1 probably will not work; there must be another way to set that value, maybe self._stop_value.value = 1.
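For the third point, a minimal sketch of sleep-based pacing (read_row here is a hypothetical stand-in for the sensor-reading code):

import time

def record_loop(writer, stop_value, frequency):
    period = 1.0 / frequency  # seconds between samples
    while stop_value.value == 0:
        writer.writerow(read_row())  # hypothetical: collect one row of sensor data
        time.sleep(period)           # yield the CPU instead of busy-looping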
Well, here is a piece of example code, based on the code that you provided, that is working just fine:
import csv
import multiprocessing
import time
from datetime import datetime, timedelta
from random import randint

class IMU(object):
    @staticmethod
    def read_accelerometer_values():
        return dict(x=randint(0, 100), y=randint(0, 100), z=randint(0, 10))

class Foo(object):
    def __init__(self, output_filename):
        self._output_filename = output_filename
        self._csv_headers = ['xxxx', 'y', 'z']
        self._log_accelerometer = True
        self.IMU = IMU()

    def _record_data(self, frequency, stop_value):
        # self._set_up()  # Run setup functions for the data collection device and store it in the self.IMU variable
        """Record data function, which takes a recording frequency, in hertz, as an input"""
        previous_read_time = datetime.now() - timedelta(1, 0, 0)
        self._run_underway = True  # Note that a run is now going
        Period = 1 / frequency  # Period, in seconds, of a recording based on the input frequency
        print("Writing output data to", self._output_filename)
        with open(self._output_filename, 'w', newline='') as outcsv:
            self._writer = csv.writer(outcsv)
            self._writer.writerow(self._csv_headers)  # Write headers to file
            while stop_value.value == 0:  # While a run continues
                if datetime.now() - previous_read_time >= timedelta(0, 1, 0):
                    # If we've waited a period, collect the data; otherwise keep looping
                    print("run underway value", self._run_underway)
                if datetime.now() - previous_read_time >= timedelta(0, Period, 0):
                    # If we've waited a period, collect the data; otherwise keep looping
                    next_row = []
                    if self._log_accelerometer:
                        # Get values in m/s^2
                        axes = self.IMU.read_accelerometer_values()
                        next_row += [axes['x'], axes['y'], axes['z']]
                    previous_read_time = datetime.now()
                    self._writer.writerow(next_row)
            # Close the csv when done
            outcsv.close()

    def start_recording(self, frequency_in_hz):
        # Create recording process
        self._stop_value = multiprocessing.Value('i', 0)
        self._process = multiprocessing.Process(target=self._record_data, args=(frequency_in_hz, self._stop_value))
        # Start recording process
        self._process.start()
        print(datetime.now().strftime("%H:%M:%S.%f"), "Data logging process spawned")
        print("ID of data logging process: {}".format(self._process.pid))

    def end_recording(self, terminate_wait=2):
        """Function to end the recording process that's been spawned.
        Args: terminate_wait: This is the time, in seconds, to wait after attempting
        to shut down the process before terminating it."""
        # Get process id
        id = self._process.pid
        # Set stop event for process
        self._stop_value.value = 1
        self._process.join(terminate_wait)  # Wait two seconds for the process to terminate
        if self._process.is_alive():  # If it's still alive after waiting
            self._process.terminate()
            print(datetime.now().strftime("%H:%M:%S.%f"), "Process", id, "needed to be terminated.")
        else:
            print(datetime.now().strftime("%H:%M:%S.%f"), "Process", id, "successfully ended itself.")

if __name__ == '__main__':
    foo = Foo('/tmp/foometer.csv')
    foo.start_recording(20)
    time.sleep(5)
    print('Ending recording')
    foo.end_recording()

Read-Out of two channels of National Instrument USB 6211 with python

I'm trying to read out two channels of a USB 6211 simultaneously with Python. To that end, I tried to adapt the example from http://www.scipy.org/Cookbook/Data_Acquisition_with_NIDAQmx by changing the line
CHK(nidaq.DAQmxCreateAIVoltageChan(
    taskHandle,
    "Dev1/ai0",
    "",
    DAQmx_Val_Cfg_Default,
    float64(-10.0),
    float64(10.0),
    DAQmx_Val_Volts,
    None))
to
CHK(nidaq.DAQmxCreateAIVoltageChan(
    taskHandle,
    "Dev1/ai0:1",
    "",
    DAQmx_Val_Cfg_Default,
    float64(-10.0),
    float64(10.0),
    DAQmx_Val_Volts,
    None))
But then, I keep receiving the error message "nidaq call failed with error -200229: 'Buffer is too small to fit read data'". Adding the line CHK(nidaq.DAQmxCfgInputBuffer(taskHandle, uInt32(10000000))) or increasing the length of the data array did not help.
Could someone point me to the right variable to change?
I found an answer here: http://www.physics.oregonstate.edu/~hetheriw/whiki/py/topics/ni/files/ni-daq_ctypes_multichannel_adc_usb_6008.txt
In short, nidaq.DAQmxReadAnalogF64() needs the additional argument -1 after taskHandle. That parameter is the number of samples to read per channel; -1 (DAQmx_Val_Auto, as I understand it) lets the driver decide rather than forcing a fixed count. The line should then look like this:
CHK(nidaq.DAQmxReadAnalogF64(taskHandle, -1, float64(1.0),
    DAQmx_Val_GroupByScanNumber,  # or DAQmx_Val_GroupByChannel
    data.ctypes.data, max_num_samples,
    ctypes.byref(read), None))
Here is an object I use to do A-to-D with a USB-6009. Note: at the bottom there is an example of the calling procedure.
#-------------------------------------------------------------------------------
# Name:        This is an object that takes data from the AtoD board
# Purpose:
#
# Author:      Carl Houtman
#
# Created:     12/10/2012
# Copyright:   (c) Carl Houtman 2012
# Licence:     none
#-------------------------------------------------------------------------------
from PyDAQmx import *
import numpy

class DAQInput:
    def __init__(self, num_data, num_chan, channel, high, low):
        """ This is the init function that opens the channel"""
        # Declare variables passed by reference, stored on self so the
        # other methods can use them (the original assigned some of these
        # to locals, which would raise AttributeError later)
        self.taskHandle = TaskHandle()
        self.read = int32()
        self.data = numpy.zeros((10000,), dtype=numpy.float64)
        self.sumi = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

        # Get the passed variables
        self.num_data = num_data
        self.channel = channel
        self.high = high
        self.low = low
        self.num_chan = num_chan

        # Create a task and configure a channel
        DAQmxCreateTask(b"", byref(self.taskHandle))
        DAQmxCreateAIVoltageChan(self.taskHandle, self.channel, b"", DAQmx_Val_Cfg_Default,
                                 self.low, self.high, DAQmx_Val_Volts, None)
        # Start the task
        DAQmxStartTask(self.taskHandle)

    def getData(self):
        """ This function gets the data from the board and calculates the average"""
        DAQmxReadAnalogF64(self.taskHandle, self.num_data, 10.0, DAQmx_Val_GroupByChannel,
                           self.data, 10000, byref(self.read), None)
        # Calculate the average of the values in data (could be several channels)
        i = self.read.value
        for j in range(self.num_chan):
            self.sumi[j] = numpy.sum(self.data[j*i:(j+1)*i]) / self.read.value
        return self.sumi

    def killTask(self):
        """ This function kills the tasks"""
        # If the task is still alive, kill it
        if self.taskHandle != 0:
            DAQmxStopTask(self.taskHandle)
            DAQmxClearTask(self.taskHandle)

if __name__ == '__main__':
    myDaq = DAQInput(100, 2, b"Dev1/ai0:1", 10.0, -10.0)
    result = myDaq.getData()
    print("the average readings were {:.4f} and {:.4f} volts".format(result[0], result[1]))
    myDaq.killTask()

Use only the most recent value read from an Arduino in a python script, not the values stored in the buffer?

I have been working on a project using Python to read values from an Arduino and then control video cameras. The Arduino controls two ultrasonic sensors and reports distance in cm. The Python script then reads the distances from the Arduino using ser.readline(). When the script reads values outside the range, everything works fine. However, if it goes into the loop for a distance inside the required range, it works correctly once and then reads old values from the Arduino instead of current "live" values, which causes it to continue the record loop instead of exiting the loop. What can I do to get rid of the old values in the buffer and only read the most current value? I have found several methods and tested them, but so far no luck.
Here is the code I am using (I know it's not well written, but it's my first try using Python and writing code outside of MATLAB):
import sys, time
import serial
import cv
import os
from time import strftime

# Create window for Camera 0
cv.NamedWindow("Camera 0", cv.CV_WINDOW_AUTOSIZE)
capture0 = cv.CreateCameraCapture(2)
cv.ResizeWindow("Camera 1", 640, 480)
cv.MoveWindow("Camera 0", 0, 0)

# Create window for Camera 1
cv.NamedWindow("Camera 1", cv.CV_WINDOW_AUTOSIZE)
capture1 = cv.CreateCameraCapture(1)
cv.MoveWindow("Camera 1", 150, 150)

# Initialize connection to Arduino
arduino = serial.Serial('COM12', 9600)
connected = False

# Confirm that Arduino is connected and software is able to read inputs
while not connected:
    serin = arduino.readline()
    connected = True
    f = 'Sensor Connected'
    print f

'''#Dummy variables for testing
value1 = 145
value2 = 30'''

# Initialize video record on as false (0)
vid = 0

# Initialize counters
counter_vid = 0
counter = 0
Accel = 1

def Camera0():
    frame0 = cv.QueryFrame(capture0)
    cv.WriteFrame(writer0, frame0)
    cv.ShowImage("Camera 0", frame0)

def Camera1():
    frame1 = cv.QueryFrame(capture1)
    cv.WriteFrame(writer1, frame1)
    cv.ShowImage("Camera 1", frame1)

while True:
    status = arduino.readline()
    value1 = int((status[6:10])) - 1000
    value2 = int((status[17:21])) - 1000
    print(value1)
    print(value2)
    if value1 > 180 and value2 > 180 and vid == 0:
        vid = 0
    elif value1 > 180 and value2 > 180 and vid == 1:
        vid = 0
    elif value1 < 180 and vid == 0 or value2 < 180 and vid == 0:
        filename0 = strftime("OUTPUT\%Y_%m_%d %H_%M_%S") + " camera0.avi"
        writer0 = cv.CreateVideoWriter(filename0, 1, 15.0, (640, 480), is_color=1)
        filename1 = strftime("OUTPUT\%Y_%m_%d %H_%M_%S") + " camera1.avi"
        writer1 = cv.CreateVideoWriter(filename1, 1, 15.0, (640, 480), is_color=1)
        vid = 1
        while counter_vid < 25 and vid == 1:
            Camera0()
            Camera1()
            counter_vid += 1
            print(counter_vid)
            cv.WaitKey(10)
    else:
        while counter_vid < 25 and vid == 1:
            Camera0()
            Camera1()
            counter_vid += 1
            print(counter_vid)
            cv.WaitKey(10)
    cv.WaitKey(25)
    counter_vid = 0
    counter += 1
    print('End of Loop Counter')
    print(counter)
You're right about the buffer filling up. You need a way to always get the most recent value out of the buffer.
I would suggest replacing this:
status = arduino.readline()
with this:
status = getLatestStatus()
and then further up towards the top, by your camera functions:
def getLatestStatus():
    status = arduino.readline()     # block until at least one line has arrived
    while arduino.inWaiting() > 0:  # then drain anything queued behind it
        status = arduino.readline()
    return status
This function getLatestStatus drains the entire buffer every time it is called and returns only the latest status, disregarding all the statuses received in the meantime. The initial blocking readline ensures there is always a line to return, even if the buffer is momentarily empty.
Your other option is to modify the "firmware" for your Arduino so that it returns a distance sensor value every time it receives a command (say "M\n"); that way you don't have to worry about buffer problems, and the host side looks like the sketch below. That's what I did for an Arduino-powered ultrasonic distance device, and I felt it was cleaner than the "read through the entire buffer" solution. It will introduce a bit more latency into each distance measurement, though.
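A minimal sketch of that polled approach on the Python side (the "M\n" command and the one-line response format are assumptions; they are whatever you implement in the Arduino firmware):

def get_distance():
    arduino.reset_input_buffer()  # discard stale data (flushInput() in older pyserial versions)
    arduino.write(b'M\n')         # ask the Arduino for one fresh measurement
    return arduino.readline()     # read exactly the one response line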
