Python: set parallel port data pins high/low

I am wondering how to set the data pins on a parallel port high and low. I believe I could use PyParallel for this, but I am unsure how to set a specific pin.
Thanks!

You're talking about a software-hardware interface here. The pins are usually set low and high by assigning a one-byte value to a register. A parallel port has 8 data pins. In a low-level language like C or C++, there would be a register, let's call it 'A', somewhere holding 8 bits corresponding to the 8 data pins. So, for example, assuming register A maps to the pins as [7,6,5,4,3,2,1,0]:
// C-like pseudocode
A = 0x00;  // all pins are set low
A = 0xFF;  // all pins are high
A = 0xF0;  // pins 0-3 are low, pins 4-7 are high
This same idea carries over to PyParallel:
import parallel
p = parallel.Parallel() # open LPT1
p.setData(0x55) #<--- this is your bread and butter here
p.setData is the function you're interested in. 0x55 converted to binary is
0b01010101
-or-
[L H L H L H L H]
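You can check that conversion in a Python shell, no hardware needed:
>>> format(0x55, '08b')
'01010101'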
So now you can set the data to a certain byte, but how would you send a series of bytes, say 0x00, 0x01, 0x02? You need to watch the ACK line to see when the receiving machine has confirmed receipt of whatever was just sent.
A naive implementation:
data = [0x00, 0x01, 0x02]
while data:
    onebyte = data.pop(0)        # take bytes in order (a bare pop() would send them reversed)
    p.setDataStrobe(0)           # strobe low: signal that we're sending data
    p.setData(onebyte)
    while p.getInAcknowledge():  # wait for this line to go low,
        pass                     # indicating the receiver has acknowledged
    p.setDataStrobe(1)           # OK, we're done sending that byte.
OK, that doesn't directly answer your question. Let's say I only want to set pin 4 high or low; maybe I have an LED on that pin. Then you just need a couple of bitwise operations.
portState = 0b01100000  # suppose the parallel port currently has this set
newportState = portState | 0b00010000  # <-- OR with a bitmask sets bit 4
print(format(newportState, '#010b'))
>>> 0b01110000
Now let's clear that bit...
newportState = 0b01110000
clearedPin4 = newportState & 0b11101111  # AND with the inverted mask clears bit 4
print(format(clearedPin4, '#010b'))
>>> 0b01100000
If these binary operations are foreign, I recommend the excellent tutorial over on AVR Freaks. I would become thoroughly familiar with them before progressing further; embedded software concepts like these are full of bitmasks and bit shifting.
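In the same spirit, the three basic operations look like this in Python (a generic sketch, not specific to PyParallel; PIN is just a bit position):
PIN = 4                       # bit position, counting from 0
state = 0b01100000
state |= (1 << PIN)           # set the bit
state &= ~(1 << PIN) & 0xFF   # clear the bit (mask back to 8 bits)
state ^= (1 << PIN)           # toggle the bit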

I've made this function to control the pins individually (code derived from here and here):

def setPin(pin, value):
    if pin == 1:
        p.setDataStrobe(value)
    elif 2 <= pin <= 9:  # data bits D0-D7 sit on physical pins 2-9
        pin = pin - 2
        if value == 0:
            # clear the bit
            p.setData(p.getData() & (255 - pow(2, pin)))
        else:
            # set the bit
            p.setData(p.getData() | pow(2, pin))
    elif pin == 14:
        p.setAutoFeed(value)
    elif pin == 16:
        p.setInitOut(value)
    elif pin == 17:
        p.setSelect(value)
    else:
        raise ValueError("invalid pin number")
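A quick usage sketch (assuming p = parallel.Parallel() is already open, as above; the pin choice and delay are arbitrary):

import time

setPin(5, 1)   # drive physical pin 5 (data bit D3) high
time.sleep(0.5)
setPin(5, 0)   # and back low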

Related

Using python to read 8-bit ADC outputs into a Raspberry Pi 4?

I'm using Python to read values from a high-speed 8-bit ADC (the ADS7885, linked here) and convert them into voltages using the SPI0 port on a Raspberry Pi 4. At this point I do receive values from the ADC on the Raspberry Pi, but the values I am reading are not at all accurate. I was hoping someone might be able to help me with my code so that I can accurately read values from the ADC at a sampling rate of 48 MHz and convert them into voltages.
I think the problem might have to do with the number of clock cycles it takes before the ADC can read/convert valid data. The datasheet says that this specific ADC requires 16 SCLK cycles before it is able to begin converting valid data, but I'm not sure how to enforce this in my code.
I followed sample code for a 10-bit ADC that uses the spidev Python module, but I'm open to other code solutions. This is what I'm currently running:

import spidev

spi404 = spidev.SpiDev(0, 0)

def read_adc404(adc_ch, vref=5):
    msg = 0b11
    msg = ((msg << 1) + 0) << 3
    msg = (msg, 0b000000)
    reply = spi404.xfer2(msg)
    adc = 0
    for n in reply:
        adc = (adc << 6) + n
    adc = adc >> 2
    voltage = (vref * adc) / 256
    return voltage
Any tips or help would be greatly appreciated!!
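One generic way to provide those lead-in clocks, going by the 16-SCLK note above, is a throwaway transfer before the first real read. This is only a sketch under that assumption; whether two dummy bytes actually satisfy the ADS7885's timing needs checking against the datasheet:

import spidev

spi404 = spidev.SpiDev(0, 0)
spi404.xfer2([0x00, 0x00])            # 2 bytes = 16 SCLK cycles of throwaway clocking
reading = spi404.xfer2([0x00, 0x00])  # subsequent transfers should then carry valid samples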

Why does my long-running python script crash with "invalid pointer" after running for about 3 days?

I wrote a Python 3 script which tests an SPI link to an FPGA. It runs on a Raspberry Pi 3. The test works like this: after putting the FPGA in test mode (a push switch), send the first byte, which can be any value. Further bytes are then sent indefinitely. Each one increments by the first value sent, truncated to 8 bits. Thus, if the first value is 37, the FPGA expects the following sequence:
37, 74, 111, 148, 185, 222, 4, 41 ...
Some additional I/O pins are used to signal between the devices: RUN (an RPi output) starts the test (necessary because the FPGA times out in about 15 ms while expecting a byte), and ERR (an FPGA output) signals an error. Errors can thus be counted at both ends.
In addition, the RPi script writes a one-line summary of bytes sent and number of errors every million bytes.
All of this works just fine. But after running for about 3 days, I get the following error on the RPi:
free(): invalid pointer: 0x00405340
I get this exact same error on two identical test setups, even the same memory address. The last report says
"4294M bytes sent, 0 errors"
I seem to have proved the SPI link, but I am concerned that this long-running program crashes for no apparent reason.
Here is the important part of my test code:
def _report(self, msg):
    now = datetime.datetime.now()
    os.system("echo \"{} : {}\" > spitest_last.log".format(now, msg))

def spi_test(self):
    global end_loop
    input("Put the FPGA board into SPI test mode (SW1) and press any key")
    self._set_run(True)
    self.END_LOOP = False
    print("SPI test is running, CTRL-C to end.")
    # first byte is sent without LOAD, this is the seed
    self._send_byte(self._val)
    self._next_val()
    end_loop = False
    err_flag = False
    err_cnt = 0
    byte_count = 1
    while not end_loop:
        mb = byte_count % 1000000
        if mb == 0:
            msg = "{}M bytes sent, {} errors".format(int(byte_count / 1000000), err_cnt)
            print("\r" + msg, end="")
            self._report(msg)
            err_flag = True
        else:
            err_flag = False
        #print("sending: {}".format(self._val))
        self._set_load(True)
        if self._errors and err_flag:
            self._send_byte(self._val + 1)
        else:
            self._send_byte(self._val)
        if self.is_error():
            err_cnt += 1
            msg = "{}M bytes sent, {} errors".format(int(byte_count / 1000000), err_cnt)
            print("\r{}".format(msg), end="")
            self._report(msg)
        self._set_load(False)
        # increase the value by the seed and truncate to 8 bits
        self._next_val()
        byte_count += 1
    # test is done
    input("\nSPI test ended ({} bytes sent, {} errors). Press ENTER to end.".format(byte_count, err_cnt))
    self._set_run(False)
(Note for clarification: there is a command-line option to artificially create an error every million bytes, hence the err_flag variable.)
I've tried using python3 in console mode, and there seems to be no issue with the size of the byte_count variable (there shouldn't be, according to what I have read about Python's integer size limits).
Anyone have an idea as to what might cause this?
This issue is connected to spidev versions older than 3.5 only. The comments below were made under the assumption that I was using the upgraded version of spidev.
I can confirm this problem. It persists on both the RPi3B and RPi4B, using Python 3.7.3 on both. The spidev versions I tried were 3.3, 3.4 and the latest, 3.5. I was able to reproduce this error several times by simply looping over this single line:
spidevice2.xfer2([0x00, 0x00, 0x00, 0x00])
It takes up to 11 hours, depending on the RPi version. After 1073014000 calls (rounded to 1000), the script crashes because of "invalid pointer". The total number of bytes sent is the same as in danmcb's case. It seems as if 2^32 bytes represent a limit.
I tried different approaches, for example calling close() from time to time, followed by open(). This did not help.
Then I tried to create the SpiDev object locally, so it would be re-created for every batch of data:
def spiLoop():
    spidevice2 = spidev.SpiDev()
    spidevice2.open(0, 1)
    spidevice2.max_speed_hz = 15000000
    spidevice2.mode = 1  # Data is clocked in on falling edge
    for j in range(100000):
        spidevice2.xfer2([0x00, 0x00, 0x00, 0x00])
    spidevice2.close()
It still crashed after approx. 2^30 calls of xfer2([0x00, 0x00, 0x00, 0x00]), which corresponds to approx. 2^32 bytes.
EDIT1
To speed up the process, I sent the data in blocks of 4096 bytes using the code below, again creating the SpiDev object locally each time. It took 2 hours to reach the 2^32-byte count.
def spiLoop():
    spidevice2 = spidev.SpiDev()
    spidevice2.open(0, 1)
    spidevice2.max_speed_hz = 25000000
    spidevice2.mode = 1  # Data is clocked in on falling edge
    to_send = [0x00] * 2**12  # 4096 bytes
    for j in range(100):
        spidevice2.xfer2(to_send)
    spidevice2.close()
    del spidevice2

def runSPI():
    for i in range(2**31 - 1):
        spiLoop()
        print((2**12 * 100 * (i + 1)) / 2**20, 'Mbytes')
EDIT2
Reloading spidev on the fly does not help either. I tried this code on both the RPi3 and RPi4 with the same result:
import importlib

def spiLoop():
    importlib.reload(spidev)
    spidevice2 = spidev.SpiDev()
    spidevice2.open(0, 1)
    spidevice2.max_speed_hz = 25000000
    spidevice2.mode = 1  # Data is clocked in on falling edge
    to_send = [0x00] * 2**12  # 4096 bytes
    for j in range(100):
        spidevice2.xfer2(to_send)
    spidevice2.close()
    del spidevice2

def runSPI():
    for i in range(2**31 - 1):
        spiLoop()
        print((2**12 * 100 * (i + 1)) / 2**20, 'Mbytes')
EDIT3
Executing the code snippet below via exec() did not isolate the problem either. It crashed after the 4th chunk of 1 GByte of data was sent.
program = '''
import spidev

spidevice = None

def configSPI():
    global spidevice
    # We only have SPI bus 0 available to us on the Pi
    bus = 0
    # Device is the chip select pin. Set to 0 or 1, depending on the connections
    device = 1
    spidevice = spidev.SpiDev()
    spidevice.open(bus, device)
    spidevice.max_speed_hz = 250000000
    spidevice.mode = 1  # Data is clocked in on falling edge

def spiLoop():
    to_send = [0xAA] * 2**12
    loops = 1024
    for j in range(loops):
        spidevice.xfer2(to_send)
    return len(to_send) * loops

configSPI()
bytes_total = 0
while True:
    bytes_sent = spiLoop()
    bytes_total += bytes_sent
    print(int(bytes_total / 2**20), "Mbytes", int(1000 * (bytes_total / 2**30)) / 10, "% finished")
    if bytes_total > 2**30:
        break
'''

for i in range(100):
    exec(program)
    print("program executed", i + 1, "times, bytes sent > ", (i + 1) * 2**30)
I believe the original asker's issue is a reference leak, specifically py-spidev issue 91. Said reference leak was fixed in the 3.5 release of spidev.
Python uses a shared pool of objects to represent small integer values*, rather than re-creating them each time. So when code leaks references to small numbers, the result is not a memory leak but a reference count that keeps increasing. The Python spidev library had an issue where it leaked references to small integers in this way.
On a 32-bit system**, the eventual result is that the reference count overflows. Then something decrements the overflowed reference count, and the reference-counting system frees the object.
What I can't explain is the other answer that claims the issue is still reproducible with 3.5; it was supposed to have been fixed in that version.
* Specifically, numbers in the range -5 to 256 inclusive: everything that can be represented in an unsigned byte, plus a few negative values (presumably because they are commonly used as error returns) and 256 (presumably because it is often used as a multiplier).
** On a 64-bit system the reference count will not overflow within a human lifetime.
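A leak of this kind can be observed from pure Python by watching the interpreter-wide reference count of a cached small integer (a diagnostic sketch; leaky_xfer is a hypothetical stand-in for whatever C-extension call is suspected of leaking):

import sys

before = sys.getrefcount(0)
for _ in range(100000):
    leaky_xfer([0x00, 0x00, 0x00, 0x00])  # hypothetical call that leaks refs to its byte values
after = sys.getrefcount(0)
print(after - before)  # keeps growing if references to the cached 0 object are leaked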

SPIDEV on raspberry pi for TI DAC8568 not behaving as expected

I have a Texas Instruments DAC8568 in their BOOST breakout-board package. The DAC8568 is an 8-channel, 16-bit DAC with an SPI interface. The BOOST package has headers to connect it to my Raspberry Pi, and it has LEDs connected to the output voltages so you can easily check whether your code is doing what you think it does. Links to the BOOST package and the DAC8568 datasheet are in my Python code below.
I have the package wired to the Raspberry Pi with the 3.3 V supply, the 5 V supply (needed for the LEDs), and ground. The DAC's SCLK goes to the Pi's SCLK, the DAC's /SYNC (which is really chip select) goes to the Pi's CE1, the DAC's /LDAC goes to Pi ground, and the DAC's MOSI goes to the Pi's MOSI. I did not wire the DAC's /CLR, but I can physically tie it to ground to reset the chip if I need to.
I believe my wiring is good, because I can light the LEDs with either a Python script or from the terminal using: sudo echo -ne "\xXX\xXX\xXX\xXX" > /dev/spidev0.1
I learned the terminal trick from this video: https://www.youtube.com/watch?v=iwzXh2V1SP4
My problem, though, is that the LEDs are not lighting as I would expect them to according to the datasheet. I should be lighting A, but instead I light B; I should light B, but instead I light D, and so on. I have tried to make sense of it all and can dim the LEDs and turn on new ones, but never in the way I would really expect it to work according to the datasheet.
Below is my Python script. In the comments I note where in the datasheet I am looking for the bits to send. I am very new to working with analog components and am not an EE, so maybe I am not handling the timing correctly, or am making some other silly error. Perhaps someone can look at the datasheet and see my error without having to actually have the chip in hand. Thanks for the help!
# -*- coding: utf-8 -*-
"""
Created on Sat Jul 8 16:33:05 2017
@author: pi
for Texas Instruments BOOST DAC8568
for BOOST schematic showing LEDs: http://www.ti.com/tool/boost-dac8568
for DAC8568 datasheet: http://www.ti.com/product/dac8568
"""
import spidev
import time

spi = spidev.SpiDev()  # create spi object
spi.open(0, 1)         # open spi port 0, device (CS) 1
#spi.bits_per_word = 8        # does not seem to matter
#spi.max_speed_hz = 50000000  # does not seem to matter

# you have to power the DAC; you can write to the buffer and power on later if you like
power_up = spi.xfer2([0x04, 0x00, 0x00, 0xFF])  # p.37, Table 11 in datasheet: powers all DACs

voltage_write = spi.xfer2([0x00, 0x0F, 0xFF, 0xFF])  # p.35, Table 11 in datasheet: supposed to write A -- but lights B
voltage_write = spi.xfer2([0x00, 0x1F, 0xFF, 0xFF])  # supposed to write B -- but lights D
voltage_write = spi.xfer2([0x00, 0x2F, 0xFF, 0xFF])  # supposed to write C -- but lights F
voltage_write = spi.xfer2([0x00, 0x3F, 0xFF, 0xFF])  # supposed to write D -- but lights H
voltage_write = spi.xfer2([0x00, 0x4F, 0xFF, 0xFF])  # supposed to write E -- but does nothing

spi.close()
Note for future readers: the power-up also needs to turn on the internal reference, which is
power_up = spi.xfer2([0x08, 0x00, 0x00, 0xFF])  # p.37 of the datasheet
Comment: the bits are shifted. ... how ... compensate for the shift or eliminate the shift?
This could be the case if the SpiDev mode is not in sync with the DAC.
DAC datasheet, pages 6-7:
This input is the frame synchronization signal for the input data. When SYNC goes low, it enables the input shift register, and data are sampled on subsequent falling clock edges. The DAC output updates following the 32nd clock.
Reference: clock polarity and phase
According to the above and the timing diagram, I come to the conclusion that spi.mode == 2 is right.
Check the actual spi.mode.
Change it to spi.mode = 2.
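A minimal sketch of that change (bus and chip-select numbers taken from the question; mode 2 means CPOL=1, CPHA=0, i.e. an idle-high clock with data sampled on the falling edge):

import spidev

spi = spidev.SpiDev()
spi.open(0, 1)  # bus 0, CE1, as in the question
spi.mode = 2    # CPOL=1, CPHA=0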
I can confirm the values you used by reading Table 11, page 35:
Write to Input Register - DAC Channel X
My example sets the feature bits to 0.
Bit positions (MSB first; one ruler serves both examples):
 3         2         1
10987654321098765432109876543210
RXXXCCCCAAAADDDDDDDDDDDDDDDDFFFF
A = 32-bit[00000000000011111111111111110000]:0xffff0 ('0x00', '0x0f', '0xff', '0xf0')
B = 32-bit[00000000000111111111111111110000]:0x1ffff0 ('0x00', '0x1f', '0xff', '0xf0')
Page 33:
DB31 (MSB) is the first bit that is loaded into the DAC shift register and must always be set to '0'.
The wiring seems straightforward and simple, but it is worth double-checking.
Code snippet from testing:

def writeDAC(command, address, data, feature=0x0):
    address = ord(address) - ord('A')
    b1 = command
    b2 = address << 4 | data >> 12       # 4 address bits and the 4 MSBs of the data
    b3 = (data >> 4) & 0xFF              # middle 8 bits of the data
    b4 = ((data << 4) & 0xF0) | feature  # 4 LSBs of the data and the feature bits
    voltage_write = spi.xfer2([b1, b2, b3, b4])

# Usage:
# Write Command=0 Channel=B Data=0xFFFF Default Features=0x0
writeDAC(0, 'B', 0xFFFF)

High-speed alternatives to replace byte array processing bottlenecks

>> See EDIT below <<
I am working on processing data from a special pixelated CCD camera over serial, using FTDI D2xx drivers via pyUSB.
The camera can operate at high bandwidth to the PC, up to 80 frames/sec. I would love that speed, but know that it isn't feasible with Python, due to it being a scripted language, but would like to know how close I can get - whether it be some optimizations that I missed in my code, threading, or using some other approach. I immediately think that breaking-out the most time consuming loops and putting them in C code, but I don't have much experience with C code and not sure the best way to get Python to interact inline with it, if that's possible. I have complex algorithms heavily developed in Python with SciPy/Numpy, which are already optimized and have acceptable performance, so I would need a way to just speed-up the acquisition of the data to feed-back to Python, if that's the best approach.
The difficulty, and the reason I used Python, and not some other language, is due to the need to be able to easily run it cross-platform (I develop in Windows, but am putting the code on an embedded Linux board, making a stand-alone system). If you suggest that I use another code, like C, how would I be able to work cross-platform? I have never worked with compiling a lower-level language like C between Windows and Linux, so I would want to be sure of that process - I would have to compile it for each system, right? What do you suggest?
Here are my functions, with current execution times:
ReadStream: 'RXcount' is 114733 for a device read; converts the string into its byte equivalents.
Returns a list of bytes (0-255), representing binary values.
Current execution time: 0.037 sec
def ReadStream(RXcount):
    global ftdi
    RXdata = ftdi.read(RXcount)
    RXdata = list(struct.unpack(str(len(RXdata)) + 'B', RXdata))
    return RXdata
ProcessRawData: reshapes the byte list into an array that matches the pixel orientation.
Results in a 3584x32 array, after trimming off some unneeded bytes.
The data is unique in that every block of 14 rows represents 14 bits of one row of pixels on the device (with 32 bytes across × 8 bits/byte = 256 bits across), which is 256x256 pixels. The processed array has 32 columns of bytes because each byte, in binary, represents 8 pixels (32 bytes × 8 bits = 256 pixels). Still working on how to do that one... I have already posted a question about that previously.
Current execution time: 0.01 sec ... not bad, it's just NumPy
def ProcessRawData(RawData):
    if len(RawData) == 114733:
        ProcessedMatrix = np.ndarray((1, 114733), dtype=int)
        np.copyto(ProcessedMatrix, RawData)
        ProcessedMatrix = ProcessedMatrix[:, 1:-44]
        ProcessedMatrix = np.reshape(ProcessedMatrix, (-1, 32))
        return ProcessedMatrix
    else:
        return None
Finally,
GetFrame: the device has a mode where it just outputs whether or not each pixel detected anything, using the lowest bit of the array (every 14th row). Get that data and convert it to an int for each pixel.
Results in a 256x256 array, after processing every 14th row, whose bytes are read as binary (32 bytes across ... 32 bytes × 8 bits = 256 pixels across).
Current execution time: 0.04 sec
def GetFrame(ProcessedMatrix):
    if np.shape(ProcessedMatrix) == (3584, 32):
        FrameArray = np.zeros((256, 256), dtype='B')
        DataRows = ProcessedMatrix[13::14]
        for i in range(256):
            RowData = ""
            for j in range(32):
                RowData = RowData + "{:08b}".format(DataRows[i, j])
            FrameArray[i] = [int(RowData[b:b+1], 2) for b in range(256)]
        return FrameArray
    else:
        return False
Goal:
I would like to target a total execution time of ~0.02 secs/frame by whatever suggestions you make (currently it's 0.25 secs/frame, with the GetFrame function being the weakest). The device I/O is not the limiting factor, as it outputs a data packet every 0.0125 secs. If I get the execution time down, can I then just run the acquisition and processing in parallel with some threading?
Let me know what you suggest as the best path forward - thank you for the help!
EDIT, thanks to @Jaime:
Functions are now:
def ReadStream(RXcount):
    global ftdi
    return np.frombuffer(ftdi.read(RXcount), dtype=np.uint8)
... time 0.013 sec

def ProcessRawData(RawData):
    if len(RawData) == 114733:
        return RawData[1:-44].reshape(-1, 32)
    return None
... time 0.000007 sec!

def GetFrame(ProcessedMatrix):
    if ProcessedMatrix.shape == (3584, 32):
        return np.unpackbits(ProcessedMatrix[13::14]).reshape(256, 256)
    return False
... time 0.00006 sec!
So, with pure Python, I am now able to acquire the data at the desired frame rate! After a few tweaks to the D2xx USB buffers and latency timing, I just clocked it at 47.6 FPS!
The last step is whether there is any way to make this run in parallel with my processing algorithms. I need some way to pass the result of GetFrame to another loop running in parallel.
There are several places where you can speed things up significantly. Perhaps the most obvious is rewriting GetFrame:
def GetFrame(ProcessedMatrix):
    if ProcessedMatrix.shape == (3584, 32):
        return np.unpackbits(ProcessedMatrix[13::14]).reshape(256, 256)
    return False
This requires that ProcessedMatrix be an ndarray of type np.uint8, but other than that, on my system it runs 1000x faster.
As for your other two functions, in ReadStream I think you should do something like:
def ReadStream(RXcount):
    global ftdi
    return np.frombuffer(ftdi.read(RXcount), dtype=np.uint8)
Even if it doesn't speed that function up much, because the read itself takes most of the time, it already gives you a NumPy array of bytes to work on. With that, you can then go on to ProcessRawData and try:
def ProcessRawData(RawData):
    if len(RawData) == 114733:
        return RawData[1:-44].reshape(-1, 32)
    return None
This is 10x faster than your version.
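For reference, the whole pipeline can be sanity-checked without hardware by pushing a synthetic buffer through the two NumPy steps (a sketch; the zero-filled array stands in for a real ftdi.read result):

import numpy as np

RawData = np.zeros(114733, dtype=np.uint8)        # stand-in for ftdi.read(114733)
ProcessedMatrix = RawData[1:-44].reshape(-1, 32)  # -> (3584, 32)
Frame = np.unpackbits(ProcessedMatrix[13::14]).reshape(256, 256)
print(ProcessedMatrix.shape, Frame.shape)         # (3584, 32) (256, 256)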

Read latest character sent from Arduino in Python

I'm a beginner with both Arduino and Python, and I have an idea I can't get to work. Basically, when a button is pressed on the Arduino, it sends "4" over the serial port. What I want in Python is that as soon as it reads a 4, it does something. This is what I have so far:
import serial

ser = serial.Serial('/dev/tty.usbserial-A900frF6', 9600)
var = 1
while var == 1:
    if ser.inWaiting() > 0:
        ser.readline(1)
        print "hello"
But obviously this prints hello no matter what. What I would need is something like this:
import serial

ser = serial.Serial('/dev/tty.usbserial-A900frF6', 9600)
var = 1
while var == 1:
    if ser.inWaiting() > 0:
        ser.readline(1)
        if last.read == "4":
            print "hello"
But how can I define last.read?
I don't know a good way of synchronising the comms with readline(), since it's not a blocking call. You can use ser.read(numBytes), which is a blocking call. You will need to know how many bytes the Arduino is sending, though, to decode the byte stream correctly. Here is a simple example in Python that reads 8 bytes and unpacks them into two unsigned shorts and a long (the '<HHL' part):
try:
    data = [struct.unpack('<HHL', handle.read(8)) for i in range(PACKETS_PER_TRANSMIT)]
except OSError:
    self.emit(SIGNAL("connectionLost()"))
    self.connected = False
Here's a reference for struct.unpack().
The Arduino code that goes with it reads two analog sensor values and the microsecond timestamp, and sends them over the serial port:
unsigned int SensA, SensB;
unsigned long micr;            // micros() returns an unsigned long
byte out_buffer[64];
unsigned int buffer_head = 0;
unsigned int buffer_size = 64;

SensA = analogRead(SENSOR_A);
SensB = analogRead(SENSOR_B);
micr = micros();

out_buffer[buffer_head++] = (SensA & 0xFF);
out_buffer[buffer_head++] = (SensA >> 8) & 0xFF;
out_buffer[buffer_head++] = (SensB & 0xFF);
out_buffer[buffer_head++] = (SensB >> 8) & 0xFF;
out_buffer[buffer_head++] = (micr & 0xFF);
out_buffer[buffer_head++] = (micr >> 8) & 0xFF;
out_buffer[buffer_head++] = (micr >> 16) & 0xFF;
out_buffer[buffer_head++] = (micr >> 24) & 0xFF;
Serial.write(out_buffer, buffer_size);
The Arduino playground and Processing Forums are good places to look around for this sort of code as well.
UPDATE
I think I might have misled you about readLine not blocking. Either way, the above code should work. I also found this other thread on SO about the same subject.
UPDATE You don't need to use the analog sensors; that's just what the project I did happened to use. You are of course free to pass whatever values over the serial port. What the Arduino code is doing is this: it has a buffer of type byte where the output is stored before being sent. The sensor values and micros are written to the buffer, and the buffer is sent over the serial port. The (SensA & 0xFF) is a bitmask operation that takes the bit pattern of the SensA value and masks it with the bit pattern of 0xFF, or 255 in decimal. Essentially this takes the first 8 bits of the 16-bit value of SensA, which is an Arduino short. The next line does the same thing but shifts the bits right by 8 positions, thus taking the last 8 bits.
You'll need to understand bit patterns, bit masking and bit shifting for this. Then the buffer is written to the serial port.
The Python code in turn reads from the serial port 8 bytes at a time. Have a look at the struct.unpack docs. The list comprehension is just there to allow sending more than one set of values: because the Arduino board and the Python code run out of sync, I added it to be able to send more than one "line" per transmit. You can just replace it with struct.unpack('<HHL', handle.read(8)). Remember that handle.read() takes a number of bytes, whereas the Arduino send code deals with bits.
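To see how that byte layout decodes end to end, here is a small self-contained sketch (the sensor values are made up; '<HHL' matches the Arduino buffer layout above):

import struct

# Pack SensA=513, SensB=1027, micr=100000 little-endian,
# exactly as the Arduino masking/shifting lays them out byte by byte.
packet = struct.pack('<HHL', 513, 1027, 100000)
sens_a, sens_b, micr = struct.unpack('<HHL', packet)
print(sens_a, sens_b, micr)  # -> 513 1027 100000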
I think it might work with these modifications:
import serial

ser = serial.Serial('/dev/tty.usbserial-A900frF6', 9600)
var = 1
while var == 1:
    if (ser.inWaiting() > 0):
        ser.readline(1)
        print "hello"
