I'm trying to calculate CRC-CCITT (0xFFFF) for a hex string and get the result back as a hex string. I tried binascii and crc16, but I get int values, and when I convert them to hex it's not the value I expected. I need this:
hex_string = "AA01"
crc_string = crccitt(hex_string)
print("CRC: ", crc_string)
>>> CRC: FF9B
You can use str.format / format to convert the int value to hexadecimal (crc16 is used here to compute the CRC):
>>> import binascii
>>> import crc16
>>> hex_string = 'AA01'
>>> crc = crc16.crc16xmodem(binascii.unhexlify(hex_string), 0xffff)
>>> '{:04X}'.format(crc & 0xffff)
'FF9B'
>>> format(crc & 0xffff, '04X')
'FF9B'
or using the % operator:
>>> '%04X' % (crc & 0xffff)
'FF9B'
import binascii
import crc16

def crccitt(hex_string):
    byte_seq = binascii.unhexlify(hex_string)
    crc = crc16.crc16xmodem(byte_seq, 0xffff)
    return '{:04X}'.format(crc & 0xffff)
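If the third-party crc16 package isn't available, a bit-by-bit implementation is short enough to inline. This is a minimal sketch, assuming the same parameters as crc16xmodem with an initial value of 0xFFFF (polynomial 0x1021, MSB first, no final XOR):

import binascii

def crccitt(hex_string):
    # CRC-16/CCITT bit by bit: polynomial 0x1021, initial value 0xFFFF,
    # MSB first, no final XOR. bytearray() makes the byte iteration
    # behave the same on Python 2 and 3.
    crc = 0xFFFF
    for byte in bytearray(binascii.unhexlify(hex_string)):
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return '{:04X}'.format(crc)

print(crccitt('AA01'))  # FF9B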
I need to convert a tuple of hex strings in little-endian byte order into an integer. How do I do so?
Example:
myTuple = ['0xD4', '0x51', '0x1', '0x0']
I need to convert it into the integer 86484.
Just convert each hex string into an int with int(x, 16) and use the struct module to merge the 4 bytes into one unsigned int:
import struct
myTuple = ['0xD4', '0x51', '0x1', '0x0']
myResult = struct.unpack("<I", bytearray((int(x, 16) for x in myTuple)))
print(myResult[0])
Here < is the byte order (little-endian) and I is an unsigned int (4 bytes).
On Python 3, using int.from_bytes:
int.from_bytes(bytearray((int(x, 16) for x in myTuple)), byteorder='little')
# 86484
Or explicitly summing up each value after shifting it
sum(int(e, 16) << (i * 8) for i,e in enumerate(myTuple))
# 86484
Or using reduce
from functools import reduce  # needed on Python 3, where reduce is no longer a builtin
reduce(lambda x, y: (x<<8) + int(y,16), [0]+myTuple[::-1])
# 86484
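All four approaches give the same answer; a quick self-check (Python 3, reusing myTuple from above) ties them together:

import struct
from functools import reduce

myTuple = ['0xD4', '0x51', '0x1', '0x0']
raw = bytearray(int(x, 16) for x in myTuple)

assert struct.unpack('<I', bytes(raw))[0] == 86484
assert int.from_bytes(raw, byteorder='little') == 86484
assert sum(int(e, 16) << (i * 8) for i, e in enumerate(myTuple)) == 86484
assert reduce(lambda x, y: (x << 8) + int(y, 16), [0] + myTuple[::-1]) == 86484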
I have been trying to get my head around CRC32 calculations without much success; the values I get do not match what I should get.
I am aware that Python has libraries that are capable of generating these checksums (namely zlib and binascii), but I do not have the luxury of being able to use them, as the CRC functionality does not exist in MicroPython.
So far I have the following code:
import binascii
import zlib
from array import array

poly = 0xEDB88320
table = array('L')
for byte in range(256):
    crc = 0
    for bit in range(8):
        if (byte ^ crc) & 1:
            crc = (crc >> 1) ^ poly
        else:
            crc >>= 1
        byte >>= 1
    table.append(crc)

def crc32(string):
    value = 0xffffffffL
    for ch in string:
        value = table[(ord(ch) ^ value) & 0x000000ffL] ^ (value >> 8)
    return value

teststring = "test"
print "binascii calc: 0x%08x" % (binascii.crc32(teststring) & 0xffffffff)
print "zlib calc: 0x%08x" % (zlib.crc32(teststring) & 0xffffffff)
print "my calc: 0x%08x" % (crc32(teststring))
Then I get the following output:
binascii calc: 0xd87f7e0c
zlib calc: 0xd87f7e0c
my calc: 0x2780810c
The binascii and zlib calculations agree, whereas mine doesn't. I believe the calculated table of bytes is correct, as I have compared it to examples available on the net. So the issue must be in the routine that processes each byte; could anyone point me in the right direction?
Thanks in advance!
I haven't looked closely at your code, so I can't pinpoint the exact source of the error, but you can easily tweak it to get the desired output:
import binascii
from array import array

poly = 0xEDB88320
table = array('L')
for byte in range(256):
    crc = 0
    for bit in range(8):
        if (byte ^ crc) & 1:
            crc = (crc >> 1) ^ poly
        else:
            crc >>= 1
        byte >>= 1
    table.append(crc)

def crc32(string):
    value = 0xffffffffL
    for ch in string:
        value = table[(ord(ch) ^ value) & 0xff] ^ (value >> 8)
    return -1 - value

# test
data = (
    '',
    'test',
    'hello world',
    '1234',
    'A long string to test CRC32 functions',
)

for s in data:
    print repr(s)
    a = binascii.crc32(s)
    print '%08x' % (a & 0xffffffffL)
    b = crc32(s)
    print '%08x' % (b & 0xffffffffL)
    print
output
''
00000000
00000000
'test'
d87f7e0c
d87f7e0c
'hello world'
0d4a1185
0d4a1185
'1234'
9be3e0a3
9be3e0a3
'A long string to test CRC32 functions'
d2d10e28
d2d10e28
Here are a couple more tests that verify that the tweaked crc32 gives the same result as binascii.crc32.
from random import seed, randrange

print 'Single byte tests...',
for i in range(256):
    s = chr(i)
    a = binascii.crc32(s) & 0xffffffffL
    b = crc32(s) & 0xffffffffL
    assert a == b, (repr(s), a, b)
print('ok')

seed(42)
print 'Multi-byte tests...'
for width in range(2, 20):
    print 'Width', width
    r = range(width)
    for n in range(1000):
        s = ''.join([chr(randrange(256)) for i in r])
        a = binascii.crc32(s) & 0xffffffffL
        b = crc32(s) & 0xffffffffL
        assert a == b, (repr(s), a, b)
print('ok')
output
Single byte tests... ok
Multi-byte tests...
Width 2
Width 3
Width 4
Width 5
Width 6
Width 7
Width 8
Width 9
Width 10
Width 11
Width 12
Width 13
Width 14
Width 15
Width 16
Width 17
Width 18
Width 19
ok
As discussed in the comments, the source of the error in the original code is that this CRC-32 algorithm inverts the initial crc buffer, and then inverts the final buffer contents. So value is initialised to 0xffffffff instead of zero, and we need to return value ^ 0xffffffff, which can also be written as ~value & 0xffffffff, i.e. invert value and then select the low-order 32 bits of the result.
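Written out explicitly, the fix looks like this (a sketch using the table built above, Python 2 like the rest of the answer):

def crc32(string):
    value = 0xffffffffL          # inverted initial buffer
    for ch in string:
        value = table[(ord(ch) ^ value) & 0xff] ^ (value >> 8)
    return value ^ 0xffffffffL   # invert the final buffer contents

Note this returns the value already masked to 32 bits, so the callers' "& 0xffffffffL" becomes a no-op.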
For binary data where the CRC is chained over multiple buffers, I used the following (using the OP's table; it indexes the bytes directly, so it expects Python 3 bytes objects):
def crc32(data, crc=0xffffffff):
    for b in data:
        crc = table[(b ^ crc) & 0xff] ^ (crc >> 8)
    return crc
One can XOR the final result with 0xffffffff (i.e. invert it) to agree with the online calculators.
crc = crc32(b'test')
print('0x{:08x}'.format(crc))
crc = crc32(b'te')
crc = crc32(b'st', crc)
print('0x{:08x}'.format(crc))
print('xor: 0x{:08x}'.format(crc ^ 0xffffffff))
output
0x278081f3
0x278081f3
xor: 0xd87f7e0c
>>> a = -2147458560
>>> bin(a)
'-0b1111111111111111001111000000000'
My intention is to treat a as a 32-bit signed value and get its two's-complement binary representation. The correct conversion for -2147458560 would be '0b10000000000000000110001000000000'; how can I achieve that?
Bitwise AND (&) with 0xffffffff (2**32 - 1) first:
>>> a = -2147458560
>>> bin(a & 0xffffffff)
'0b10000000000000000110001000000000'
>>> format(a & 0xffffffff, '32b')
'10000000000000000110001000000000'
>>> '{:32b}'.format(a & 0xffffffff)
'10000000000000000110001000000000'
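If this is needed in more than one place, it can be wrapped in a small helper; to_twos_complement_bin is a hypothetical name, and the sketch assumes a default width of 32 bits:

def to_twos_complement_bin(value, bits=32):
    # Mask to the requested width, then zero-pad to that many binary digits.
    mask = (1 << bits) - 1
    return format(value & mask, '0%db' % bits)

print(to_twos_complement_bin(-2147458560))  # 10000000000000000110001000000000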
I wrote the following code in C
short a = 0xFFFE;
printf("hex = 0x%X, signed short = %d\n", a & 0xFFFF, a);
Output ---> hex = 0xFFFE, signed short = -2
Now trying to do the same in Python using ctypes:
from ctypes import *
mc = cdll.msvcrt
a = c_short(0xFFFE)
mc.printf("hex = 0x%X, signed short = %d\n", a, a)
Output ----> hex = 0xFFFE, signed short = 65534
I am not sure why the output is different. Any ideas?
printf isn't being called correctly. Use %hX and %hd for passing shorts.
>>> from ctypes import *
>>> mc = cdll.msvcrt
>>> a=c_short(0xFFFE)
>>> mc.printf('hex=0x%hX, signed short=%hd\n',a,a)
hex=0xFFFE, signed short=-2
28
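(The trailing 28 is printf's return value: the number of characters written, including the newline. The h length modifier matters because variadic arguments are promoted to int; %hd tells printf to interpret only the low 16 bits of the promoted value as a signed short, which is how -2 is recovered.)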
I need to use an embedded system running Python 1.5.2+ (!!!) with very few modules.
And there is no usable "struct" module...
Here is the list of usable modules:
marshal
imp
__main__
__builtin__
sys
md5
binascii
Yes that's it, no struct module...
So, I need to create a 4-byte representation of two unsigned short integers to send over serial...
With struct:
date = day + month * 32 + (year - 2000) * 512
time = 100 * hour + minute
data = struct.pack(b'HH', date, time)
date on 2 bytes, time on 2 bytes, and everybody's happy...
But without the struct module, how can I do that?
You can do something like this:
x = 0xabcd
packed_string = chr((x & 0xff00) >> 8) + chr(x & 0x00ff)
(Note this puts the high byte first; the complete translation below emits the low byte first, matching what struct.pack('HH') produces on a little-endian machine.)
Here is a complete translation for you
Before
>>> import struct
>>> day = 1; month = 2; year = 2003
>>> hour = 4; minute = 5
>>> date = day + month * 32 + (year - 2000) * 512
>>> time = 100 * hour + minute
>>> data = struct.pack(b'HH', date, time)
>>> data
'A\x06\x95\x01'
>>> data.encode("hex")
'41069501'
And after
>>> data2 = chr(date & 0xFF) + chr((date >> 8) & 0xFF) + chr(time & 0xFF) + chr((time >> 8) & 0xFF)
>>> data2
'A\x06\x95\x01'
>>> data2.encode("hex")
'41069501'
>>>
I was able to do it by passing a list of the byte values to bytes():
data = bytes([date % 256, date // 256, time % 256, time // 256])
(Note that bytes() accepting a list of ints is Python 3 behaviour; on a 1.5-era interpreter, the chr()-based approach above is the portable option.)
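For reuse, the same idea fits in a tiny helper; pack_u16le is a made-up name, and the sketch deliberately sticks to chr() so it should run even on a 1.5-era interpreter:

def pack_u16le(*values):
    # Replacement for struct.pack('HH', ...) on a little-endian machine:
    # each value becomes two bytes, low byte first.
    out = ''
    for v in values:
        out = out + chr(v & 0xFF) + chr((v >> 8) & 0xFF)
    return out

data = pack_u16le(date, time)  # 'A\x06\x95\x01' for the example above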