Serial communication between Teensy board and system using Python

I am fairly new to the field of IoT. I am setting up a sensor with a Teensy, reading its data and transmitting it over serial communication to a system where, using Python, I read the data and store it in a database.
The problem I am facing: when I check my program using the Arduino serial monitor I get an insane sample speed, like 10k readings done in 40 milliseconds, but when I try to read from the same program using Python it does not even give me more than 1000 readings per second, and that is without the database code; with it, it only reads 200 samples per second. Is there any way I can increase this sample rate, or do I have to set any extra parameters for communication over serial?
Here is my code for the Teensy:
int i;
elapsedMillis sinceTest1;

void setup()
{
    Serial.begin(2000000); // USB is always 12 Mbit/sec
    i = 0;
    delay(5000);
    Serial.println("Setup Called");
    Serial.flush();
}

void loop()
{
    if (i == 0 || i == 500000)
    {
        Serial.println(sinceTest1);
    }
    Serial.println(i);
    //Serial.println(Serial.baud());
    i++;
}
For Python:
import serial
import pymysql
from datetime import datetime
import time
import signal
import sys

class ReadLine:
    def __init__(self, s):
        self.buf = bytearray()
        self.s = s

    def readline(self):
        i = self.buf.find(b"\n")
        if i >= 0:
            r = self.buf[:i+1]
            self.buf = self.buf[i+1:]
            return r
        while True:
            i = max(1, min(2048, self.s.in_waiting))
            data = self.s.read(i)
            i = data.find(b"\n")
            if i >= 0:
                r = self.buf + data[:i+1]
                self.buf[0:] = data[i+1:]
                return r
            else:
                self.buf.extend(data)
ser = serial.Serial(
    port='COM5',
    baudrate=2000000,
    #baudrate=9600,
    #parity=serial.PARITY_NONE,
    #stopbits=serial.STOPBITS_ONE,
    #bytesize=serial.EIGHTBITS,
    #timeout=0
)
print("connected to: " + ser.portstr)
count=1
#this will store the line
line = []
#database connection
connection = pymysql.connect(host="localhost", user="root", passwd="", database="tempDatabase")
cursor = connection.cursor()
checker = 0
rl = ReadLine(ser)
while True:
time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(time)
print(checker)
print(rl.readline())
insert1 = ("INSERT INTO tempinfo(value,test,counter) VALUES('{}','{}','{}');".format(33.5, time,checker)) #.format(data[0])
insert2 = ("INSERT INTO urlsync(textvalue,sync) VALUES('http://www.myname.com/value.php?&value={}&time={}',0);".format(33.5,time)) #.format(data[0])
cursor.execute(insert1)
cursor.execute(insert2)
connection.commit()
checker += 1
connection.close()
time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
print(time )
ser.close()
P.S.: 1000 samples per second is the rate I get when I am not using the database commands; including them, I only get around 250 samples per second.
Any help or suggestion is appreciated, thank you.

First off, great question. The issue you are facing is loaded with learning opportunities.
Let's go one by one:
- You are now in a position to understand the difference between a microcontroller and a computer. The microcontroller in its most basic form (running bare-metal code, even if it's not very efficient code, like on an Arduino) will do just one thing, and particularly when it's hardware-related (like reading from or writing to UARTs) it will do it very efficiently. On a desktop computer, on the other hand, you have layer upon layer of tasks running simultaneously (operating system background tasks, updating the screen, and whatnot). With so many things happening at the same time, and without established priorities, it is very difficult to accurately predict what will happen and when. So it's not only your Python code that is running; many more things will come up and interrupt the flow of your user task. If you are hoping to read data from the UART buffer at a stable (or at least predictable) speed, that will never happen with the architecture you are using at the moment.
- Even if you manage to strip down your OS to the bare minimum, kill all processes, and go to a terminal with no graphics whatsoever... you still have to deal with the uncertainty of what you are doing in your own Python code (that's why you see better performance with the Arduino serial monitor, which does nothing other than remove data from the buffer). In your Python code, you are sequentially reading from the port, trying to find a particular character (line feed), and then attaching the data you read to a list. If you want to improve performance, you need to either just read data and store it for offline processing, or look at multithreading: if one thread of your program is dedicated to only reading from the buffer and you do further processing on a separate thread, you could significantly improve the throughput, particularly if you set priorities right.
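A minimal sketch of that reader/worker split, assuming pyserial and the 2 Mbaud COM5 port from the question (the names and the parsing step are illustrative, not taken from the question):

import queue
import threading
import serial

q = queue.Queue()
ser = serial.Serial('COM5', 2000000)

def reader():
    # Single job: drain the serial buffer as fast as possible.
    while True:
        q.put(ser.read(max(1, ser.in_waiting)))

def worker():
    # Parsing, printing and database writes live here, off the hot path.
    buf = bytearray()
    while True:
        buf.extend(q.get())
        while b"\n" in buf:
            line, _, buf = buf.partition(b"\n")
            # process(line) goes here

threading.Thread(target=reader, daemon=True).start()
worker()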
- Last, but actually most importantly, you should ask yourself: do I really need to read data from my sensor at 2 Mbps? If the answer is yes, and your sensor is not a video camera, I'm afraid you need to take a step back and look at the following concepts: sensor bandwidth and dynamic response. After you do that, the next question is: how fast is your sensor updating its output, and why? Is that update rate meaningful? I can give you a couple of references here. First, imagine you have a temperature sensor to read and record the temperature in an oven. Does it make sense to sample values from the sensor at 1 MHz (1 million readings per second) if the temperature in the oven is changing at a rate of 10 degrees C per minute, or even 100 degrees per second? Is your sensor even able to react so fast (that's where its dynamic response comes into play)? My guess: probably not. Many industrial devices integrate dozens of sensors to control critical processes and send all data through a 1.5 Mbps link (pretty standard for Profibus, for instance).
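As a quick back-of-the-envelope check on the oven example (the 0.1 degree resolution is a hypothetical figure, not from the question):

rate_c_per_s = 100.0    # the fast case above: 100 degrees C per second
resolution_c = 0.1      # hypothetical sensor resolution in degrees C
max_useful_rate = rate_c_per_s / resolution_c
print(max_useful_rate)  # 1000.0 -- 1 kHz already resolves every distinguishable step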

Related

Getting a wrong cpu_frequency from Raspberry Pi in Python

I want to use Python to get the cpu_freq value from a Raspberry Pi 4B:
def GetCpuInfo():
    # Get CPU frequency
    cpu_freq = open("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq").read()
    return cpu_freq
When I print the cpu_freq data, the output is always fixed at 1800000 (the maximum CPU frequency of the Raspberry Pi, 1.8 GHz), but each time I run the command
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
in a terminal, it gives me a dynamic value (600000-1800000).
So why do I get the wrong value when using Python? Is this a wrong way to read the file?
There's nothing wrong with your read().
The very act of starting Python can itself take enough cycles to cause the CPU to ramp up to full frequency, especially on a small system like a Pi.
To prevent that, add a delay to let it spool back down before you take your readings. For example:
import time

def GetCpuInfo():
    with open("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq") as f:
        return f.read()

for _ in range(20):
    time.sleep(1)
    print(GetCpuInfo())

Serial Communication between MATLAB and STM running Micropython

I was hoping I could get some pointers on how to set up communications between MATLAB and an STM32L476 Nucleo-64.
The idea behind this is essentially to navigate some pre-generated terrain, using the MATLAB script to create the map of the terrain and show position, while the STM sends the control action, acting as an 'onboard' processor for a robot navigating the path.
The MATLAB script is set up to generate 3 numbers (1 d.p.), corresponding to sensor measurements, which are then sent to the STM via serial communication; the STM will then calculate the control action, which should correspond to two integers returned.
This setup does work using an Arduino, as this is what was used previously (so I know there are no issues with MATLAB generating the sensor measurements or terrain); however, I need to change the hardware over to an STM running MicroPython, which is where I am running into trouble.
I am hoping that someone may be able to point me in the right direction. I have used the serial command in MATLAB to write to the board:
s = serial('COM3','BaudRate',115200,'timeout',1);
fopen(s);
.....
fprintf(s,y(1));
fprintf(s,y(2));
fprintf(s,y(3));
Where y is the sensor measurements. I was then hoping to read the response from the STM back in using fread(s) - this does give numbers for each iteration, however they are quite often repeated and are nowhere close to what was set on the STM (it also returns a 14x1 array when it should only be a 2x1). I think my mistake may be on the STM side of things, as I tried to read the data in by setting up another serial port.
ser = serial.Serial(
    port='COM3',
    baudrate=115200)
dat = []
while True:
    # Collect sensor data
    yLeft = (dat[1])
    yRight = (dat[2])
    yFront = (dat[3])
Any input would be greatly appreciated - I just seem to be going around in circles at the moment.
Edit - At the moment, to try and get it to work, I am trying to get it to react to one number and print the result to MATLAB:
import serial
import time
import sys

# Declare states
state_list = ['WAIT', 'TEN']
state = 'WAIT'  # Default state on init is wait

# Initialise variables
V = 0

ser = serial.Serial(
    port='COM3',
    baudrate=115200)

while True:
    ser.open()
    V = ser.read()
    ser.close()
    # Make decisions according to state machine
    if state == 'WAIT':
        print('WAITING')
        if V > 10:
            state = 'TEN'
        else:
            state = state
    elif state == 'TEN':
        print('TEN')
        if V < 10:
            state = 'WAIT'
        else:
            state = state
    else:
        print('END OF PROGRAM')
Then the corresponding MATLAB script is:
%% clean up
clear all
clc
%% Set up serial
sobj = 'COM3';
s = serial(sobj,'BaudRate',115200,'timeout',1);
fopen(s);
%% Run Simulation
% Create Data Array to be sent to STM
DatatoSend=15;
% Print data to STM
fprintf(s,DatatoSend)
% Read Response from STM
data = [];
% Receive response data
data = fread(s);
fclose(s)
I am not confident that I am even close to getting this working though.
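Worth noting: pyserial's serial.Serial is a PC-side library and is not available under MicroPython on the board itself. A sketch of what the STM side might look like instead, assuming a pyboard-style STM32 port that exposes pyb.USB_VCP, and assuming comma-separated, newline-terminated framing (the framing is an assumption, not taken from the question):

from pyb import USB_VCP

vcp = USB_VCP()
buf = b''

while True:
    data = vcp.read()      # whatever is waiting, or None on timeout
    if not data:
        continue
    buf += data
    while b'\n' in buf:
        line, buf = buf.split(b'\n', 1)
        y = [float(v) for v in line.decode().split(',')]  # assumed "a,b,c\n" framing
        left, right = 1, 2                                # placeholder control action
        vcp.write(('%d,%d\n' % (left, right)).encode())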

Arduino serial timeouts after several serial writes

I noticed a strange behavior with my board from DIY Drones when I use my custom firmware.
Here is an example function which is called in my firmware running on an Arduino board:
void send_attitude(float roll, float pitch, float yaw) {
    hal.console->printf("{\"type\":\"sens_attitude\",\"roll\":%.4f,\"pitch\":%.4f,\"yaw\":%.4f}\n",
                        roll, pitch, yaw);
}
As you can see, the code just writes a message to the serial port set in setup (hal.uartA).
I call this function every 0.5 s:
inline void medium_loop() {
    static int timer = 0;
    int time = hal.scheduler->millis() - timer;
    // send every 0.5 s
    if (time > 500) {
        send_attitude(OUT_PIT, OUT_ROL, OUT_YAW);
        timer = hal.scheduler->millis();
    }
}
Now to the strange thing. If I use the serial monitor or read the board with another program or script, everything is fine: every 0.5 s the proper LED blinks and the message is shown. But if I don't read it out, after approx. 10 s the LED lights up continuously and no connection/communication is possible anymore. I have to unplug the board then. The same behavior is observed the other way round: if I send to my board over the serial port (in my case USB) and don't flush the input buffer, the LED lights up continuously and I get a timeout. The following code works:
def send_data(line):
    # calc checksum
    chk = chksum(line)
    # concatenate msg and chksum
    output = "%s*%x\r\n" % (line, chk)
    try:
        bytes = ser.write(output)
    except serial.SerialTimeoutException as e:
        logging.error("Write timeout on serial port '{}': {}".format(com_port, e))
    # Flush input buffer, if there is still some unprocessed data left
    # Otherwise the APM 2.5 control board gets stuck after some commands
    ser.flush()       # Try to send old message
    ser.flushInput()  # Delete what is still inside the buffer
If I comment out this line:
ser.flushInput() # Delete what is still inside the buffer
I get (depending on the message interval) a timeout sooner or later. In my case I send a signal every 20 ms, which results in a timeout after ~10 s. It is also dependent on the length of the message: bigger messages cause it faster than smaller ones.
My settings are shown in the following snippets; I don't use more settings than these. Client-side Python code:
com_port = '/dev/ttyACM0'
baud_rate = '115200'
try:
    ser = serial.Serial(com_port, baud_rate, timeout=0.1, writeTimeout=0.1, rtscts=1)
If these timeouts happen, I also get one if I set the timeout to something like 2 s. In my case I need very low latency, which is indeed possible if I keep reading and flushing. Firmware code from my Arduino:
void setup() {
    // Set baud rate when connected to RPi
    hal.uartA->begin(115200);
    hal.console->printf("Setup device ..\n");
    // Followed by motor, compass, barometer initialization
My questions are:
What exactly happens with my board?
Why is it not reacting anymore if I just write to my serial port without reading or flushing the buffer?
Is it really a buffer or driver problem associated with this strange behavior, and is this problem related to all Arduino boards or maybe just my APM 2.5 from DIY Drones?
Last but not least: I found no functions in the library which target such problems. Are there maybe any I don't know of?
The complete source code is on Google Code: https://code.google.com/p/rpicopter/source/browse/
What board are you using and what processor does it have? My guess would be that your board is based on the ATmega32U4, or some other microcontroller that has a built-in USB module. If so, I have seen similar behavior before; here is what I think is happening:
There is a buffer on your microcontroller to hold serial data going to the computer, and a buffer in the computer's USB serial driver to hold data received from the chip. Since you are not reading bytes from the COM port, the buffer on the computer will fill up. Once the buffer on the computer fills up, it stops requesting data from the microcontroller; therefore, the buffer on the microcontroller will eventually fill up as well.
Once the microcontroller's buffer is full, how do you expect the printf command to behave? For simplicity, the printf you are using is probably designed to just wait in a blocking loop until buffer space is available and then send the next character, until the message is done. Since buffer space will never become available, your program gets stuck in an infinite loop.
A better strategy would be to check to see if enough buffer space is available before calling printf. The code might look something like this:
if (console_buffer_space() > 80)
{
    hal.console->printf(...);
}
I don't know if this is possible in the DIY drones firmware, and I don't know if the max buffer space can actually ever reach 80, so you will have to research this a bit.
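On the PC side, the complementary fix is simply to keep draining the COM port so the driver buffer never fills. A minimal sketch with pyserial (the port name and read size are illustrative):

import threading
import serial

ser = serial.Serial('/dev/ttyACM0', 115200, timeout=0.1)

def drain():
    # Keep the OS receive buffer empty so the device's TX path never backs up.
    while ser.is_open:
        data = ser.read(4096)   # returns whatever arrived within the timeout
        if data:
            pass                # hand the bytes to a queue here if they matter

threading.Thread(target=drain, daemon=True).start()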
I don't understand the use of:
ser.flush() # Try to send old message
ser.flushInput() # Delete what is still inside the buffer
Let's say your device is connected to the PC and the Python code is writing the (line, chk):
ser.flush() - why are you using it? (In pyserial it just blocks until all outgoing data has been written.)
ser.flushInput() - will "delete" the serial input buffer on the PC.
It looks like other people have the same problem. And thanks to the Mod-Braniac who deleted my minimal example. My bet is that it's a problem with the Arduino USB controller chip or the firmware on it.

Can I avoid a threaded UDP socket in Python dropping data?

First off, I'm new to Python and learning on the job, so be gentle!
I'm trying to write a threaded Python app for Windows that reads data from a UDP socket (thread-1), writes it to file (thread-2), and displays the live data (thread-3) to a widget (gtk.Image using a gtk.gdk.pixbuf). I'm using queues for communicating data between threads.
My problem is that if I start only threads 1 and 3 (so skipping the file writing for now), it seems that I lose some data after the first few samples. After this drop it looks fine. Even when letting thread 1 complete before running thread 3, this apparent drop is still there.
Apologies for the length of code snippet (I've removed the thread that writes to file), but I felt removing code would just prompt questions. Hope someone can shed some light :-)
import socket
import threading
import Queue
import numpy
import gtk
gtk.gdk.threads_init()
import gtk.glade
import pygtk

class readFromUDPSocket(threading.Thread):

    def __init__(self, socketUDP, readDataQueue, packetSize, numScans):
        threading.Thread.__init__(self)
        self.socketUDP = socketUDP
        self.readDataQueue = readDataQueue
        self.packetSize = packetSize
        self.numScans = numScans

    def run(self):
        for scan in range(1, self.numScans + 1):
            buffer = self.socketUDP.recv(self.packetSize)
            self.readDataQueue.put(buffer)
        self.socketUDP.close()
        print 'myServer finished!'

class displayWithGTK(threading.Thread):

    def __init__(self, displayDataQueue, image, viewArea):
        threading.Thread.__init__(self)
        self.displayDataQueue = displayDataQueue
        self.image = image
        self.viewWidth = viewArea[0]
        self.viewHeight = viewArea[1]
        self.displayData = numpy.zeros((self.viewHeight, self.viewWidth, 3), dtype=numpy.uint16)

    def run(self):
        scan = 0
        try:
            while True:
                if not scan % self.viewWidth: scan = 0
                buffer = self.displayDataQueue.get(timeout=0.1)
                self.displayData[:, scan, 0] = numpy.fromstring(buffer, dtype=numpy.uint16)
                self.displayData[:, scan, 1] = numpy.fromstring(buffer, dtype=numpy.uint16)
                self.displayData[:, scan, 2] = numpy.fromstring(buffer, dtype=numpy.uint16)
                gtk.gdk.threads_enter()
                self.myPixbuf = gtk.gdk.pixbuf_new_from_data(self.displayData.tostring(), gtk.gdk.COLORSPACE_RGB,
                                                             False, 8, self.viewWidth, self.viewHeight, self.viewWidth * 3)
                self.image.set_from_pixbuf(self.myPixbuf)
                self.image.show()
                gtk.gdk.threads_leave()
                scan += 1
        except Queue.Empty:
            print 'myDisplay finished!'
            pass

def quitGUI(obj):
    print 'Currently active threads: %s' % threading.enumerate()
    gtk.main_quit()

if __name__ == '__main__':
    # Create socket (IPv4 protocol, datagram (UDP)) and bind to address
    socketUDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    host = '192.168.1.5'
    port = 1024
    socketUDP.bind((host, port))
    # Data parameters
    samplesPerScan = 256
    packetsPerSecond = 1200
    packetSize = 512
    duration = 1  # For now, set a fixed duration to log data
    numScans = int(packetsPerSecond * duration)
    # Create array to store data
    data = numpy.zeros((samplesPerScan, numScans), dtype=numpy.uint16)
    # Create queue for displaying from
    readDataQueue = Queue.Queue(numScans)
    # Build GUI from Glade XML file
    builder = gtk.Builder()
    builder.add_from_file('GroundVue.glade')
    window = builder.get_object('mainwindow')
    window.connect('destroy', quitGUI)
    view = builder.get_object('viewport')
    image = gtk.Image()
    view.add(image)
    viewArea = (1200, samplesPerScan)
    # Instantiate & start threads
    myServer = readFromUDPSocket(socketUDP, readDataQueue, packetSize, numScans)
    myDisplay = displayWithGTK(readDataQueue, image, viewArea)
    myServer.start()
    myDisplay.start()
    gtk.gdk.threads_enter()
    gtk.main()
    gtk.gdk.threads_leave()
    print 'gtk.main finished!'
UDP doesn't verify the target received it (like TCP does) - you must implement retransmission and such in your applications if you want to ensure all of the data arrives. Do you control the sending UDP source?
UDP is, by definition, unreliable. You must not write programs that expect UDP datagrams to always get through.
Packets are dropped all the time in TCP too, but your program does not need to care, because TCP applications cannot process packets; the TCP stack shows your application a stream of bytes. There is a lot of machinery there to make sure that if you send bytes 'ABCD', you will see 'A' 'B' 'C' 'D' on the other end. You may get any possible collection of packets, of course: 'ABC', 'D', or 'AB', 'CD', etc. Or you may just see 'ABC', and then nothing.
TCP isn't "reliable" because it can magically make your network cables never fail or break; the guarantee that it provides is that up until the point where the stream breaks, you will see everything in order. And after the stream breaks, you'll see nothing.
In UDP there is no such guarantee. If you send four UDP datagrams, 'AB', 'CD', 'EF' 'GH', you may receive all of them, or none of them, or half of them, or just one of them. You may receive them in any order. The only guarantee that UDP tries to provide is that you won't see a message with 'ABCD' in it, because those bytes are in different datagrams.
To sum up: this has nothing to do with Python, or threads, or GTK. It's just a basic fact of life on networks based in physical reality: sometimes the electrical characteristics of your wires are not conducive to getting your messages all the way across them.
You may be able to reduce the complexity of your program by using Twisted, specifically, the listenUDP API, because then you won't be needing to juggle threads or their interaction with GTK: you can just call methods directly on the widget in question from your datagramReceived method. But this won't fix your underlying problem: UDP just drops data sometimes, period. The real solution is to convince your data source to use TCP instead.
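A minimal sketch of that Twisted shape (the port number is borrowed from the question; wiring the reactor into GTK, e.g. via a GTK-aware reactor, is left out):

from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

class ScanReceiver(DatagramProtocol):
    def datagramReceived(self, datagram, addr):
        # Runs in the reactor loop -- no thread juggling needed; with a
        # GTK-aware reactor the widget can be updated directly from here.
        print 'received %d bytes from %r' % (len(datagram), addr)

reactor.listenUDP(1024, ScanReceiver())
reactor.run()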
Firstly: can you set the recv buffer size for the socket? If so, set it to something very large, as this will let the UDP stack buffer more datagrams for you (see the sketch after this answer).
Secondly: if you can use asynchronous I/O, then post multiple recv calls at once (again, this allows the stack to service more datagrams before it starts to drop them).
Thirdly: you could try unrolling your loop a little and reading multiple datagrams before placing them in your queue; could the locking on the queue be causing the recv thread to run slowly?
Finally: the datagrams may be being dropped elsewhere on the network; there may be nothing that you can do. That's the U in UDP...
Edit - Struck out listen/accept sentence, thanks Daniel, I was just coming to remove it when I saw your comment :)
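A sketch of the receive-buffer suggestion (the 4 MB figure is arbitrary; host and port are from the question; the kernel may silently cap the value, hence the read-back):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
print 'granted receive buffer:', sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.bind(('192.168.1.5', 1024))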
I'd suggest that this is a network programming issue, rather than a Python one per se.
You've set a packets-per-second rate and a duration to define the number of recv calls you make to your UDP socket. I don't see a listen or accept call to the socket; I'll assume that recv handles that, as you say you receive some data. You've not mentioned how the data is generated.
You've defined how many reads you're expecting to make, so I'd assume that the code makes that many receives before exiting. My conclusion would be that your recv packetSize is insufficient, and therefore one read isn't pulling an entire datagram, and the subsequent recv is pulling the next part of the previous datagram.
Can't you look at the data you have received and determine what is missing? What data are you "losing"? How do you know it's lost?
Furthermore, you could use Wireshark to verify that your host is actually receiving the data, while also verifying the size of the datagrams. Match the capture against the data your recv thread is providing.
Update
You say that you're losing data, but not what it is. I see two possibilities for data-loss:
Truncating packets
Dropping packets
You've said that the payload size is the same size as that which you are passing to recv, so I'll take it that you're not truncating.
So the factors for dropping packets are a combination of rate of receipt, rate of read-from-receive-buffer and receive-buffer size.
Your calls to Queue.put may be slowing down your read rate.
So, first determine that you can read 1200 packets per second by modifying readFromUDPSocket to not Queue.put, but count the number of receives and report the time taken.
Once you've determined that you can call recv fast enough, the next step is working out what is slowing you down. I suspect it may be your use of Queue; I suggest batching payloads in N-sized groups before placing them on the Queue so that you're not trying to call put at 1200 Hz (see the sketch below).
Seeing as you want to sustain a rate of 1200 reads per second I don't think you'll get very far by increasing the receive buffer on the socket.
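A sketch of that batching change applied to the question's reader thread (N=50 is an arbitrary group size; the display thread would then have to iterate over each batch it gets from the queue):

def run(self):
    batch = []
    for scan in range(1, self.numScans + 1):
        batch.append(self.socketUDP.recv(self.packetSize))
        if len(batch) == 50:               # one Queue.put per 50 datagrams
            self.readDataQueue.put(batch)
            batch = []
    if batch:
        self.readDataQueue.put(batch)      # flush the remainder
    self.socketUDP.close()
    print 'myServer finished!'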
It seems that the problem is with the source. There are two issues:
Looking at Wireshark, the source is not consistently transmitting 1200 packets per second. Possibly, as Len pointed out, a problem with the outbound stack dropping data. BTW the source is a programmable card with an Ethernet port connected to my machine.
The other issue is that after the first 15 packets or so of data there is always a drop. What I discovered is that if I recv 20 packets in the initialisation part of the readFromUDPSocket thread, I can then read the data fine, e.g.
class readFromUDPSocket(threading.Thread):

    def __init__(self, socketUDP, readDataQueue, packetSize, numScans):
        threading.Thread.__init__(self)
        self.socketUDP = socketUDP
        self.readDataQueue = readDataQueue
        self.packetSize = packetSize
        self.numScans = numScans
        for i in range(0, 20):
            buffer = self.socketUDP.recv(self.packetSize)

    def run(self):
        for scan in range(1, self.numScans + 1):
            buffer = self.socketUDP.recv(self.packetSize)
            self.readDataQueue.put(buffer)
        self.socketUDP.close()
        print 'myServer finished!'
Not sure what this points to?! I think all of this rules out not being able to recv and put fast enough, though.

USB - sync vs async vs semi-async

Updates:
I wrote an asynchronous C version and it works as it should.
Turns out the speed issue was due to Python's GIL. There's a method to fine-tune its behavior:
sys.setcheckinterval(interval)
Setting the interval to zero (default is 100) fixes the slow speed issue. Now all that's left to figure out is what's causing the other issue (not all pixels are filled). This one doesn't make any sense: usbmon shows all the communications are going through, and libusb's debug messages show nothing out of the ordinary. I guess I need to take usbmon's output and compare sync vs async. The data that usbmon shows seems correct at a glance (the first byte should be 0x96 or 0x95).
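A side note on that knob: sys.setcheckinterval is Python 2 era; it was deprecated in Python 3.2 and removed in 3.9. The Python 3 counterpart is sys.setswitchinterval, which takes seconds rather than a bytecode-instruction count:

import sys
sys.setswitchinterval(0.0005)  # default is 0.005 s; smaller = more frequent thread switches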
As said below in the original question, S. Lott, it's for a USB LCD controller. There are three different versions of drv_send, which is the outgoing endpoint method; I've explained the differences below. Maybe it'll help if I outline the asynchronous USB operations. Note that synchronous USB operations work the same way, it's just that they're done synchronously.
We can view asynchronous I/O as a 5 step process:
Allocation: allocate a libusb_transfer (This is self.transfer)
Filling: populate the libusb_transfer instance with information about the transfer you wish to perform (libusb_fill_bulk_transfer)
Submission: ask libusb to submit the transfer (libusb_submit_transfer)
Completion handling: examine transfer results in the libusb_transfer structure (libusb_handle_events and libusb_handle_events_timeout)
Deallocation: clean up resources (Not shown below)
Original question:
I have three different versions: one is entirely synchronous, one is semi-asynchronous, and the last is fully asynchronous. The difference is that the synchronous version fully populates the LCD display I'm controlling with the expected pixels, and it's really fast. The semi-asynchronous version only populates a portion of the display, but it's still very fast. The asynchronous version is really slow and also only fills a portion of the display. I'm baffled why the pixels aren't fully populated, and why the asynchronous version is really slow. Any clues?
Here's the fully synchronous version:
def drv_send(self, data):
    if not self.Connected():
        return
    self.drv_locked = True
    buffer = ''
    for c in data:
        buffer = buffer + chr(c)
    length = len(buffer)
    out_buffer = cast(buffer, POINTER(c_ubyte))
    libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1,
                              out_buffer, length, self.cb_send_transfer, None, 0)
    lib.libusb_submit_transfer(self.transfer)
    while self.drv_locked:
        r = lib.libusb_handle_events(None)
        if r < 0:
            if r == LIBUSB_ERROR_INTERRUPTED:
                continue
            lib.libusb_cancel_transfer(transfer)
            while self.drv_locked:
                if lib.libusb_handle_events(None) < 0:
                    break
    self.count += 1
Here's the semi-asynchronous version:
def drv_send(self, data):
    if not self.Connected():
        return

    def f(d):
        self.drv_locked = True
        buffer = ''
        for c in data:
            buffer = buffer + chr(c)
        length = len(buffer)
        out_buffer = cast(buffer, POINTER(c_ubyte))
        libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1,
                                  out_buffer, length, self.cb_send_transfer, None, 0)
        lib.libusb_submit_transfer(self.transfer)
        while self.drv_locked:
            r = lib.libusb_handle_events(None)
            if r < 0:
                if r == LIBUSB_ERROR_INTERRUPTED:
                    continue
                lib.libusb_cancel_transfer(transfer)
                while self.drv_locked:
                    if lib.libusb_handle_events(None) < 0:
                        break
        self.count += 1

    self.command_queue.put(Command(f, data))
Here's the fully asynchronous version. device_poll is in a thread by itself.
def device_poll(self):
    while self.Connected():
        tv = TIMEVAL(1, 0)
        r = lib.libusb_handle_events_timeout(None, byref(tv))
        if r < 0:
            break

def drv_send(self, data):
    if not self.Connected():
        return

    def f(d):
        self.drv_locked = True
        buffer = ''
        for c in data:
            buffer = buffer + chr(c)
        length = len(buffer)
        out_buffer = cast(buffer, POINTER(c_ubyte))
        libusb_fill_bulk_transfer(self.transfer, self.handle, LIBUSB_ENDPOINT_OUT + 1,
                                  out_buffer, length, self.cb_send_transfer, None, 0)
        lib.libusb_submit_transfer(self.transfer)
        self.count += 1

    self.command_queue.put(Command(f, data))
And here's where the queue is emptied. It's the callback for a gobject timeout.
def command_worker(self):
    if self.drv_locked:  # or time.time() - self.command_time < self.command_rate:
        return True
    try:
        tmp = self.command_queue.get_nowait()
    except Queue.Empty:
        return True
    tmp.func(*tmp.args)
    self.command_time = time.time()
    return True
Here's the transfer's callback. It just changes the locked state back to false, indicating the operation's finished.
def cb_send_transfer(self, transfer):
    if transfer[0].status.value != LIBUSB_TRANSFER_COMPLETED:
        error("%s: transfer status %d" % (self.name, transfer.status))
    print "cb_send_transfer", self.count
    self.drv_locked = False
OK, I don't know if I get you right: you have some device with an LCD, you have some firmware on it to handle USB requests, and on the PC side you are using PyUSB, which wraps libusb.
A couple of suggestions if you are experiencing speed problems: try to limit the data you are transferring. Do not transfer the whole raw data; maybe transfer only the pixels that changed.
Second, have you measured the speed of the transfers using some USB analyzer software? If you don't have money for a hardware USB analyzer, maybe try a software version. I have never used that kind of analyzer, but I suspect the data they provide is not very reliable.
Thirdly, see what the device is really doing; maybe that is the bottleneck of your data transfers.
I don't have much time today to answer your question exactly, so I will get back to this later.
I have been watching this thread for some time, and there is dead silence around it, so I tried to spare some time and look deeper. Still not much time today, maybe later. Unfortunately I am no Python expert, but I know some stuff about C, C++, Windows and, most of all, USB. I think this may be a problem with the LCD device; what are you using? Because if the transfers work fine and the data was received by the device, that points to a device problem.
I looked at your code a little. Could you do some testing, sending a 1-byte, an 8-byte, and an endpoint-size transfer, and see how each looks in usbmon?
Endpoint size is the size of the hardware buffer used by the PICO LCD USB controller. I am not sure what it is for yours, but I am guessing that when you send an endpoint-size message, the next message should be 0 bytes in length. Maybe there is the problem.
Regarding the test, I assume you have seen the data which you programmed it to send.
The second thing could be that the data gets overwritten, or not received fast enough. By overwritten I mean the LCD could not see the data end and mixed one transfer with another.
I am not sure what usbmon is capable of showing, but according to the USB standard, after an endpoint-size-length packet there should be a 0-length packet sent, showing that it is the end of the transfer.
