Converting 8-bit values into 24-bit values - python

I am reading ADC values (assuming I am reading them correctly; I'm still new to this) from a [NAU7802](www.nuvoton.com/resource-files/NAU7802 Data Sheet V1.7.pdf) and the output arrives as three bytes, each an 8-bit integer (i.e. 0-255). How do I merge the three bytes together to get the output as a 24-bit value (0-16777215)?
Here is the code I am using (assuming I did this right; I am still new to I2C communication).
from smbus2 import SMBus
import time

bus = SMBus(1)
address = 0x2a

# Start the device / trigger a conversion
bus.write_byte_data(address, 0x00, 6)

# Read the three ADC output bytes starting at register 0x12
data = bus.read_i2c_block_data(address, 0x12, 3)
print(data)

adc1 = bin(data[2])
adc2 = bin(data[1])
adc3 = bin(data[0])
print(adc1)
print(adc2)
print(adc3)
When I convert the binary manually I get an output that corresponds to what I am inputting to the ADC.
Output:
[128, 136, 136]
0b10001001
0b10001000
0b10000000

try this:
data=[128, 136, 136]
data[0] + (data[1] << 8) + (data[2] << 16)
# 8947840
or
((data[2] << 24) | (data[1] << 16) | (data[0] << 8)) >> 8
# 8947840
(8947840 & 0xFF0000) >> 16
#136
(8947840 & 0x00FF00) >> 8
#136
(8947840 & 0x0000FF)
#128
Here's an example of unpacking 3 different numbers:
data=[118, 123, 41]
c = data[0] + (data[1] << 8) + (data[2] << 16)
#2718582
(c & 0xFF0000) >> 16
#41
(c & 0x00FF00) >> 8
#123
(c & 0x0000FF)
#118
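On Python 3 the same merge can be done without manual shifting via int.from_bytes; a minimal sketch, assuming the list is ordered least-significant byte first as in the examples above:

```python
data = [128, 136, 136]  # LSB first, as above

# "little" tells int.from_bytes that data[0] is the least significant byte
value = int.from_bytes(bytes(data), "little")
print(value)  # 8947840

# to_bytes reverses the operation and recovers the original bytes
print(list(value.to_bytes(3, "little")))  # [128, 136, 136]
```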

Related

Converting uint_32t integers to byte array

This is the Python code to get the byte array and calculate SHA1 after concatenating the previous (n-1) hash with ibytes.
for i in range(50000):
    ibytes = pack("<I", i)
    h = sha1(ibytes + h).digest()
What would be the best way to implement the above code in C++, where I already have C++ SHA1 code in place that accepts a void* parameter?
I tried the code below, but the SHA value being generated is wrong starting from iteration i == 0.
BYTE pBuffer[24];
char fbuf[4];
sprintf(fbuf, "%02X%02X%02X%02X", (unsigned)val & 0xFF,
        (unsigned)(val >> 8) & 0xFF,
        (unsigned)(val >> 16) & 0xFF,
        (unsigned)(val >> 24) & 0xFF);
pBuffer[0] = fbuf[0];
pBuffer[1] = fbuf[1];
pBuffer[2] = fbuf[2];
pBuffer[3] = fbuf[3];
memcpy(pBuffer + 4, hn, SHA_DIGEST_LENGTH);
Following the suggestions from @wohlstad and @john, I modified the code as
fbuf[0] = (unsigned)(val>>0) & 0xFF;
fbuf[1] = (unsigned)(val >> 8) & 0xFF;
fbuf[2] = (unsigned)(val >> 16) & 0xFF;
fbuf[3] = (unsigned)(val >> 24) & 0xFF;
This worked.
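A quick way to confirm the byte order the fixed C++ code must reproduce is to compare it against Python's own pack("<I", i); a minimal sketch (val is a hypothetical example value):

```python
from struct import pack

val = 50000  # hypothetical example value

# pack("<I", val) emits the 4 bytes little-endian: the same byte order
# the corrected C++ code writes into fbuf[0..3]
expected = pack("<I", val)
manual = bytes([(val >> shift) & 0xFF for shift in (0, 8, 16, 24)])
print(expected == manual)  # True
```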

Getting wrong values when I stitch 2 shorts back into an unsigned long

I am doing BLE communications with an Arduino Board and an FPGA.
I have a requirement which prevents me from changing the packet structure (the packet structure is basically short data types). Thus, to send a timestamp (from millis()) over, I have to split an unsigned long into 2 shorts on the Arduino side and stitch it back up on the FPGA side (Python).
This is the implementation which I have:
// Arduino code in c++
unsigned long t = millis();
// bitmask to get bits 1-16
short LSB = (short) (t & 0x0000FFFF);
// bitshift to get bits 17-32
short MSB = (short) (t >> 16);
// I then send the packet with MSB and LSB values
# FPGA python code to stitch it back up (I receive the packet and extract the MSB and LSB)
MSB = data[3]
LSB = data[4]
data = MSB << 16 | LSB
Now the issue is that my output for data on the FPGA side is sometimes negative, which tells me I must have missed something somewhere, as timestamps are not negative. Does anyone know why?
When I transfer other data in the packet (i.e. other short values, not the timestamp), I receive them as expected, so the problem most probably lies in the conversion I did rather than in the sending/receiving of data.
short defaults to signed, and for a negative number >> keeps the sign by shifting in one-bits from the left. See e.g. Microsoft.
From my earlier comment:
In Python avoid attempting that by yourself (by the way, short from the C perspective has no fixed size; you always have to look in the compiler manual or limits.h) and use the struct module instead.
You probably need/want to first convert the long to network byte order using htonl.
As guidot reminded, short is signed, and as the data are transferred to Python the code has an issue:
For t=0x00018000 the most significant short MSB = 1 and the least significant short LSB = -32768 (0x8000 in C++ and -0x8000 in Python), so the Python expression
time = MSB << 16 | LSB
returns time = -32768 (see the start of the Python code below).
So we have an incorrect sign, and we are losing MSB (any value, not only the 1 in our example).
MSB is lost because in the expression above LSB is sign-extended with one-bits to the left; those one-bits swallow whatever MSB contributes via the | operator, so the expression effectively returns just LSB.
Straightforward fix (1.1 Fix) would be fixing MSB, LSB to unsigned short. This could be enough without any changes in Python code.
To exclude bit operations we could use “union” as per 1.2 Fix.
Without access to C++ code we could fix in Python by converting signed LSB, MSB (2.1 Fix) or use “Union” (similar to C++ “union”, 2.2 Fix).
C++
#include <iostream>
using namespace std;

int main() {
    unsigned long t = 0x00018000;
    short LSB = (short)(t & 0x0000FFFF);
    short MSB = (short)(t >> 16);
    cout << hex << "t = " << t << endl;
    cout << dec << "LSB = " << LSB << " MSB = " << MSB << endl;

    // 1.1 Fix: use unsigned short instead of short
    unsigned short fixedLSB = (unsigned short)(t & 0x0000FFFF);
    unsigned short fixedMSB = (unsigned short)(t >> 16);
    cout << "fixedLSB = " << fixedLSB << " fixedMSB = " << fixedMSB << endl;

    // 1.2 Fix: use an (anonymous) union
    union {
        unsigned long t2;
        unsigned short unsignedShortArray[2];
    };
    t2 = 0x00018000;
    fixedLSB = unsignedShortArray[0];
    fixedMSB = unsignedShortArray[1];
    cout << "fixedLSB = " << fixedLSB << " fixedMSB = " << fixedMSB << endl;
}
Output
t = 18000
LSB = -32768 MSB = 1
fixedLSB = 32768 fixedMSB = 1
fixedLSB = 32768 fixedMSB = 1
Python
DATA = [0, 0, 0, 1, -32768]
MSB = DATA[3]
LSB = DATA[4]
data = MSB << 16 | LSB
print(f"MSB = {MSB} ({hex(MSB)})")
print(f"LSB = {LSB} ({hex(LSB)})")
print(f"data = {data} ({hex(data)})")
time = MSB << 16 | LSB
print(f"time = {time} ({hex(time)})")

# 2.1 Fix: convert the signed shorts back to unsigned
def twosComplement(short):
    if short >= 0:
        return short
    return 0x10000 + short

fixedTime = twosComplement(MSB) << 16 | twosComplement(LSB)

# 2.2 Fix: use a ctypes Union (the Python analogue of the C++ union)
import ctypes

class UnsignedIntUnion(ctypes.Union):
    _fields_ = [('unsignedInt', ctypes.c_uint),
                ('ushortArray', ctypes.c_ushort * 2),
                ('shortArray', ctypes.c_short * 2)]

unsignedIntUnion = UnsignedIntUnion(shortArray=(LSB, MSB))
print("unsignedIntUnion")
print("unsignedInt = ", hex(unsignedIntUnion.unsignedInt))
print("ushortArray[1] = ", hex(unsignedIntUnion.ushortArray[1]))
print("ushortArray[0] = ", hex(unsignedIntUnion.ushortArray[0]))
print("shortArray[1] = ", hex(unsignedIntUnion.shortArray[1]))
print("shortArray[0] = ", hex(unsignedIntUnion.shortArray[0]))
unsignedIntUnion.unsignedInt = twosComplement(unsignedIntUnion.shortArray[1]) << 16 | twosComplement(unsignedIntUnion.shortArray[0])

def toUInt(msShort: int, lsShort: int):
    return UnsignedIntUnion(ushortArray=(lsShort, msShort)).unsignedInt

fixedTime = toUInt(MSB, LSB)
print("fixedTime = ", hex(fixedTime))
print()
Output
MSB = 1 (0x1)
LSB = -32768 (-0x8000)
data = -32768 (-0x8000)
time = -32768 (-0x8000)
unsignedIntUnion
unsignedInt = 0x18000
ushortArray[1] = 0x1
ushortArray[0] = 0x8000
shortArray[1] = 0x1
shortArray[0] = -0x8000
fixedTime = 0x18000
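As the comments above suggest, the struct module sidesteps the sign handling entirely; a minimal sketch, assuming the two values arrive as signed Python ints with MSB first:

```python
import struct

MSB, LSB = 1, -32768  # the problematic values from the example above

# Pack the two signed shorts, then reinterpret the same 4 bytes as one
# unsigned int; ">" fixes the byte order (big-endian here) on both sides
raw = struct.pack(">hh", MSB, LSB)
(timestamp,) = struct.unpack(">I", raw)
print(hex(timestamp))  # 0x18000
```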

Separate Data From Gyroscope Serial Port, Python

I have a Bluetooth gyroscope that I only want accelerometer data from. When I open the port the data will come in as a single, jumbled stream, right? How do I grab the data that I want? I want to simulate a keypress if acceleration is over a certain value, if that helps.
Since the data packet is 11 bytes, read 11 bytes at a time from the port, then parse the message. Note, you may want to read 1 byte at a time until you get a start-message byte (0x55) then read the following 10.
# data is a byte array, len = 11
def parse_packet(data):
    # Verify header byte (0x55) and packet type (0x51 = acceleration)
    if data[0] == 0x55 and data[1] == 0x51 and len(data) == 11:
        # Verify checksum
        if ((sum(data) - data[10]) & 0xFF) == data[10]:
            g = 9.8  # gravity
            parsed = {}
            parsed["Ax"] = ((data[3] << 8) | data[2]) / 32768.0 * 16 * g
            parsed["Ay"] = ((data[5] << 8) | data[4]) / 32768.0 * 16 * g
            parsed["Az"] = ((data[7] << 8) | data[6]) / 32768.0 * 16 * g
            # Temp is in degrees Celsius
            parsed["Temp"] = ((data[9] << 8) | data[8]) / 340.0 + 36.53
            return parsed
    return None
The one thing you need to verify is the checksum calculation. I couldn't find it in the manual. The other calculations came from the manual I found here: https://www.manualslib.com/manual/1303256/Elecmaster-Jy-61-Series.html?page=9#manual
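The read-one-byte-until-0x55 approach described above can be sketched against any byte source; a minimal sketch where read_byte is a hypothetical callable returning one int per call (e.g. a thin wrapper around a serial read):

```python
def read_accel_packet(read_byte):
    """Scan a byte stream for a 0x55 0x51 header, then return the
    full 11-byte packet. read_byte() is a hypothetical helper that
    returns one byte (as int) per call."""
    while True:
        if read_byte() != 0x55:
            continue  # not at a packet boundary yet
        if read_byte() != 0x51:
            continue  # other packet type; keep scanning
        return bytes([0x55, 0x51] + [read_byte() for _ in range(9)])

# Usage with an in-memory stream standing in for the serial port
stream = iter([0x00, 0x55, 0x51] + list(range(9)))
packet = read_accel_packet(lambda: next(stream))
print(len(packet))  # 11
```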

How do I do circular shifting (rotation) of an int in Python?

In Java right rotation is done using:
(bits >>> k) | (bits << (Integer.SIZE - k))
But how to do similar thing in Python?
I tried to do (as described here):
n = 13
d = 2
INT_BITS = 4
print(bin(n))
print(bin((n >> d)|(n << (INT_BITS - d)) & 0xFFFFFFFF))
Output:
0b1101
0b110111
But I could not interpret this as a right rotation.
Also is it possible to perform the rotation by excluding leading zeroes, for example:
rightRotation of (...0001101) = 1110 not 1000...110
It was my mistake: if you want to change INT_BITS to 4, you also need to change 0xFFFFFFFF to 0xF (one hex digit equals 4 bits):
n = 13
d = 2
INT_BITS = 4
print(bin(n))
print(bin((n >> d)|(n << (INT_BITS - d)) & 0xF))
will output:
0b1101
0b111
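Wrapped in a reusable function (note the mask is applied to the whole expression, so inputs wider than the word size are also truncated correctly):

```python
def rotr(n, d, width=4):
    """Rotate the low `width` bits of n right by d positions."""
    mask = (1 << width) - 1
    n &= mask        # keep only the bits inside the word
    d %= width       # rotating by the word size is a no-op
    return ((n >> d) | (n << (width - d))) & mask

print(bin(rotr(0b1101, 2)))  # 0b111, i.e. 0111
```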

CRC32 calculation in Python without using libraries

I have been trying to get my head around CRC32 calculations without much success; the values I get do not match what I should be getting.
I am aware that Python has libraries capable of generating these checksums (namely zlib and binascii), but I do not have the luxury of using them, as the CRC functionality does not exist in MicroPython.
So far I have the following code:
import binascii
import zlib
from array import array
poly = 0xEDB88320
table = array('L')
for byte in range(256):
    crc = 0
    for bit in range(8):
        if (byte ^ crc) & 1:
            crc = (crc >> 1) ^ poly
        else:
            crc >>= 1
        byte >>= 1
    table.append(crc)

def crc32(string):
    value = 0xffffffffL
    for ch in string:
        value = table[(ord(ch) ^ value) & 0x000000ffL] ^ (value >> 8)
    return value

teststring = "test"
print "binascii calc: 0x%08x" % (binascii.crc32(teststring) & 0xffffffff)
print "zlib calc: 0x%08x" % (zlib.crc32(teststring) & 0xffffffff)
print "my calc: 0x%08x" % (crc32(teststring))
Then I get the following output:
binascii calc: 0xd87f7e0c
zlib calc: 0xd87f7e0c
my calc: 0x2780810c
The binascii and zlib calculations agree, whereas mine doesn't. I believe the calculated table of bytes is correct, as I have compared it to examples available on the net, so the issue must be in the routine where each byte is processed; could anyone point me in the right direction?
Thanks in advance!
I haven't looked closely at your code, so I can't pinpoint the exact source of the error, but you can easily tweak it to get the desired output:
import binascii
from array import array
poly = 0xEDB88320
table = array('L')
for byte in range(256):
    crc = 0
    for bit in range(8):
        if (byte ^ crc) & 1:
            crc = (crc >> 1) ^ poly
        else:
            crc >>= 1
        byte >>= 1
    table.append(crc)

def crc32(string):
    value = 0xffffffffL
    for ch in string:
        value = table[(ord(ch) ^ value) & 0xff] ^ (value >> 8)
    return -1 - value

# test
data = (
    '',
    'test',
    'hello world',
    '1234',
    'A long string to test CRC32 functions',
)
for s in data:
    print repr(s)
    a = binascii.crc32(s)
    print '%08x' % (a & 0xffffffffL)
    b = crc32(s)
    print '%08x' % (b & 0xffffffffL)
    print
output
''
00000000
00000000
'test'
d87f7e0c
d87f7e0c
'hello world'
0d4a1185
0d4a1185
'1234'
9be3e0a3
9be3e0a3
'A long string to test CRC32 functions'
d2d10e28
d2d10e28
Here are a couple more tests that verify that the tweaked crc32 gives the same result as binascii.crc32.
from random import seed, randrange
print 'Single byte tests...',
for i in range(256):
    s = chr(i)
    a = binascii.crc32(s) & 0xffffffffL
    b = crc32(s) & 0xffffffffL
    assert a == b, (repr(s), a, b)
print('ok')

seed(42)
print 'Multi-byte tests...'
for width in range(2, 20):
    print 'Width', width
    r = range(width)
    for n in range(1000):
        s = ''.join([chr(randrange(256)) for i in r])
        a = binascii.crc32(s) & 0xffffffffL
        b = crc32(s) & 0xffffffffL
        assert a == b, (repr(s), a, b)
print('ok')
output
Single byte tests... ok
Multi-byte tests...
Width 2
Width 3
Width 4
Width 5
Width 6
Width 7
Width 8
Width 9
Width 10
Width 11
Width 12
Width 13
Width 14
Width 15
Width 16
Width 17
Width 18
Width 19
ok
As discussed in the comments, the source of the error in the original code is that this CRC-32 algorithm inverts the initial crc buffer, and then inverts the final buffer contents. So value is initialised to 0xffffffff instead of zero, and we need to return value ^ 0xffffffff, which can also be written as ~value & 0xffffffff, i.e. invert value and then select the low-order 32 bits of the result.
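In Python 3 (which is what MicroPython follows), the same invert-at-start, invert-at-end scheme can be written as a self-contained sketch; the table construction below is the standard equivalent of the loop in the question:

```python
POLY = 0xEDB88320

# Build the 256-entry lookup table (equivalent to the question's loop)
TABLE = []
for byte in range(256):
    crc = byte
    for _ in range(8):
        crc = (crc >> 1) ^ POLY if crc & 1 else crc >> 1
    TABLE.append(crc)

def crc32(data: bytes) -> int:
    value = 0xFFFFFFFF          # invert the initial buffer
    for b in data:
        value = TABLE[(b ^ value) & 0xFF] ^ (value >> 8)
    return ~value & 0xFFFFFFFF  # invert again, keep low 32 bits

print(hex(crc32(b'test')))  # 0xd87f7e0c
```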
If using binary data where the crc is chained over multiple buffers I used the following (using the OPs table):
def crc32(data, crc=0xffffffff):
    for b in data:
        crc = table[(b ^ crc) & 0xff] ^ (crc >> 8)
    return crc
One can XOR the final result with -1 to agree with the online calculators.
crc = crc32(b'test')
print('0x{:08x}'.format(crc))
crc = crc32(b'te')
crc = crc32(b'st', crc)
print('0x{:08x}'.format(crc))
print('xor: 0x{:08x}'.format(crc ^ 0xffffffff))
output
0x278081f3
0x278081f3
xor: 0xd87f7e0c
