I'm trying to translate this C++ code into Python, but I'm having problems with the char* to unsigned short* conversion:
std::string sendAsciiCommand(std::string command) {
    unsigned int nchars = command.length() + 1;  // Char count of command string, incl. '\0'
    unsigned int nshorts = ceil(nchars / 2);     // Number of shorts to store the string
    std::vector<unsigned short> regs(nshorts);   // Vector of short registers

    // Transform char array to short array with endianness conversion
    unsigned short *ascii_short_ptr = (unsigned short *)(command.c_str());
    for (unsigned int i = 0; i < nshorts; i++)
        regs[i] = htons(ascii_short_ptr[i]);

    return std::string((char *)regs.data());
}
So far I have tried this code in Python 2.7:
from math import ceil
from array import array
command = "hello"
nchars = len(command) + 1
nshorts = ceil(nchars/2)
regs = array("H", command)
But it gives me the error:
ValueError: string length not a multiple of item size
Any help?
The exception text:
ValueError: string length not a multiple of item size
means what it says: the length of the string from which you are trying to create an array must be a multiple of the item size. In this case the item size is that of an unsigned short, which is 2 bytes, so the length of the string must be a multiple of 2. 'hello' has length 5, which is not a multiple of 2, so you can't create an array of 2-byte integers from it. It will work if the string is 6 bytes long, e.g. 'hello!':
>>> array("H", 'hello!')
array('H', [25960, 27756, 8559])
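If you want to keep your original command, a simple fix is to pad it to an even length yourself, mirroring the terminating '\0' that the C++ code includes via nchars = command.length() + 1 (a minimal Python 2.7 sketch; padding with '\x00' is an assumption about what your device expects):
from array import array

command = "hello"
# Append the terminating NUL, then pad to an even number of bytes if needed
padded = command + "\x00"
if len(padded) % 2:
    padded += "\x00"
regs = array("H", padded)
print(regs)
# array('H', [25960, 27756, 111])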
You might still need to convert to network byte order. array uses the native byte order on your machine, so if your native byte order is little endian you will need to convert it to big endian (network byte order). Use sys.byteorder to check and array.byteswap() to swap the byte order if required:
import sys
from array import array
s = 'hello!'
regs = array('H', s)
print(regs)
# array('H', [25960, 27756, 8559])
if sys.byteorder != 'big':
    regs.byteswap()
print(regs)
# array('H', [26725, 27756, 28449])
However, it's easier to use struct.unpack() to convert straight to network byte order if necessary:
import struct
s = 'hello!'
n = len(s) // struct.calcsize('H')
regs = struct.unpack('!{}H'.format(n), s)
print(regs)
#(26725, 27756, 28449)
If you really need an array:
regs = array('H', struct.unpack('!{}H'.format(n), s))
It's also worth pointing out that your C++ code contains an error: if the string length is odd, an extra byte at the end of the string will be read and included in the converted data. That extra byte will be '\0', as the C string should be null terminated, but the last unsigned short should either be ignored, or you should check that the length of the string is a multiple of the size of an unsigned short, just as Python does.
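Putting the pieces together, a rough Python 2.7 equivalent of the (corrected) C++ function might look like the sketch below. It pads with '\0' as discussed above and returns the register values in network byte order; returning a tuple of shorts rather than a byte string is my choice, so adapt it to whatever your register-writing code expects:
import struct

def ascii_command_to_regs(command):
    # Append the terminating NUL and pad to an even number of bytes
    data = command + "\x00"
    if len(data) % 2:
        data += "\x00"
    nshorts = len(data) // 2
    # '!' means network (big-endian) byte order, like htons() in the C++ code
    return struct.unpack("!{}H".format(nshorts), data)

print(ascii_command_to_regs("hello"))
# (26725, 27756, 28416)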
I receive a 32-bit number over the serial line, using num = ser.read(4). Checking the value of num in the shell returns something like a very unreadable b'\xcbu,\x0c'.
I can check against the ASCII table to find the values of "u" and ",", and determine that the hex value of the received number is actually equal to "cb 75 2c 0c", or in the format that Python outputs, it's b'\xcb\x75\x2c\x0c'. I can also type the value into a calculator and convert it to decimal (or run int(0xcb752c0c) in Python), which returns 3413453836.
How can I do this conversion from a binary string literal to an integer in Python?
I found two alternatives to solve this problem.
Using the int.from_bytes(bytes, byteorder, *, signed=False) method
Using struct.unpack(format, buffer) from the built-in struct module
Using int.from_bytes
Starting from Python 3.2, you can use int.from_bytes.
The second argument, byteorder, specifies the endianness of your bytestring. It can be either 'big' or 'little'. You can also use sys.byteorder to get your host machine's native byte order.
from the docs:
The byteorder argument determines the byte order used to represent the integer. If byteorder is "big", the most significant byte is at the beginning of the byte array. If byteorder is "little", the most significant byte is at the end of the byte array. To request the native byte order of the host system, use sys.byteorder as the byte order value.
int.from_bytes(bytes, byteorder, *, signed=False)
Code applied to your case:
>>> import sys
>>> int.from_bytes(b'\x11', byteorder=sys.byteorder)
17
>>> bin(int.from_bytes(b'\x11', byteorder=sys.byteorder))
'0b10001'
Here is the example code from the official docs:
>>> int.from_bytes(b'\x00\x10', byteorder='big')
16
>>> int.from_bytes(b'\x00\x10', byteorder='little')
4096
>>> int.from_bytes(b'\xfc\x00', byteorder='big', signed=True)
-1024
>>> int.from_bytes(b'\xfc\x00', byteorder='big', signed=False)
64512
>>> int.from_bytes([255, 0, 0], byteorder='big')
16711680
Using the struct.unpack method
The function you need to achieve your goal is struct.unpack.
To understand where you can use it, we need to understand the parameters to pass and their impact on the result.
struct.unpack(format, buffer)
Unpack from the buffer buffer (presumably packed by pack(format, ...)) according to the format string format. The result is a tuple even if it contains exactly one item. The buffer’s size in bytes must match the size required by the format, as reflected by calcsize().
The buffer is the bytes we pass in, and the format describes how that bytestring should be read.
The format is a string of format characters that specify things like the endianness, the C type, and the byte size of each field.
from the docs:
Format characters have the following meaning; the conversion between C and Python values should be obvious given their types. The ‘Standard size’ column refers to the size of the packed value in bytes when using standard size; that is, when the format string starts with one of '<', '>', '!' or '='. When using native size, the size of the packed value is platform-dependent.
This table shows the format characters currently available in Python 3.10.6:
Format | C type             | Standard size
------ | ------------------ | -------------
x      | pad byte           |
c      | char               | 1
b      | signed char        | 1
B      | unsigned char      | 1
?      | _Bool              | 1
h      | short              | 2
H      | unsigned short     | 2
i      | int                | 4
I      | unsigned int       | 4
l      | long               | 4
L      | unsigned long      | 4
q      | long long          | 8
Q      | unsigned long long | 8
n      | ssize_t            |
N      | size_t             |
f      | float              | 4
d      | double             | 8
s      | char[]             |
p      | char[]             |
P      | void*              |
and here is the table of characters used to control byte order, size and alignment:

Character | Byte order             | Size     | Alignment
--------- | ---------------------- | -------- | ---------
@         | native                 | native   | native
=         | native                 | standard | none
<         | little-endian          | standard | none
>         | big-endian             | standard | none
!         | network (= big-endian) | standard | none
Examples
Here is an example of how you can use it:
>>> import struct
>>> format_chars = '<i' #4 bytes, endianness is 'little'
>>> struct.unpack(format_chars,b"f|cs")
(1935899750,)
Check the built-in struct module:
https://docs.python.org/3/library/struct.html
In your case, it should probably be something like:
import struct
struct.unpack(">I", b'\xcb\x75\x2c\x0c')
but it depends on endianness and signed/unsigned, so do read the entire doc.
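For example, the same four bytes give different results depending on the format character (just a sketch to illustrate the point; only you can know whether your device sends signed or unsigned, big- or little-endian values):
>>> import struct
>>> struct.unpack(">I", b'\xcb\x75\x2c\x0c')   # big-endian, unsigned
(3413453836,)
>>> struct.unpack(">i", b'\xcb\x75\x2c\x0c')   # big-endian, signed
(-881513460,)
>>> struct.unpack("<I", b'\xcb\x75\x2c\x0c')   # little-endian, unsigned
(204240331,)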
I have a Numpy vector of bools and I'm trying to use the C API to get a bytes object as quickly as possible from it. (Ideally, I want to map the binary value of the vector to the bytes object.)
I can read in the vector successfully and I have the data in bool_vec_arr. I thought of creating an int and setting its bits in this way:
PyBytesObject *pbo;
int byte = 0;
int i = 0;
while (i < vec->dimensions[0])
{
    if (bool_vec_arr[i])
    {
        byte |= 1UL << i % 8;
    }
    i++;
    if (i % 8 == 0)
    {
        /* do something here? */
        byte = 0;
    }
}
return Py_BuildValue("S", pbo);
But I'm not sure how to use the value of byte in pbo. Does anyone have any suggestions?
You need to store each byte as you complete it. Your problem is that you haven't made an actual bytes object to populate, so do that first. You know how long the result must be (one-eighth the size of the bool vector, rounded up), so use PyBytes_FromStringAndSize to get a bytes object of the correct size, then populate it as you go.
You'd just allocate with:
// Preallocate enough bytes (one per eight bools, rounded up)
PyObject *pbo = PyBytes_FromStringAndSize(NULL, (vec->dimensions[0] + 7) / 8);
// Put check for NULL here
// Extract pointer to the underlying buffer
char *bytebuffer = PyBytes_AsString(pbo);
where adding 7 then dividing by 8 rounds up to ensure you have enough bytes for all the bits, then assign to the appropriate index when you've finished a byte, e.g.:
if (i % 8 == 0)
{
    bytebuffer[i / 8 - 1] = byte;  // Store the byte just completed
    byte = 0;
}
If the final byte might be incomplete, you'll need to decide how to handle this (do the pad bits appear on the left or right, is the final byte omitted and therefore you shouldn't round up the allocation, etc.).
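If it helps to sanity-check the C loop, here is a pure-Python sketch of the same packing scheme, i.e. bit i of the vector lands in bit i % 8 of byte i // 8, with any unused high bits of the final byte left as zero padding (whether that is the layout you actually want is an assumption):
def bools_to_bytes(bits):
    out = bytearray((len(bits) + 7) // 8)   # one byte per 8 bools, rounded up
    for i, b in enumerate(bits):
        if b:
            out[i // 8] |= 1 << (i % 8)     # LSB-first within each byte
    return bytes(out)

print(bools_to_bytes([True, False, True, True, False, False, False, False, True]))
# b'\r\x01'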
Input = 'FFFF' # 4 ASCII F's
desired result ... -1 as an integer
code tried:
hexstring = 'FFFF'
result = (int(hexstring,16))
print result #65535
Result: 65535
Nothing that I have tried seems to recognize that 'FFFF' is a representation of a negative number.
Python converts FFFF at 'face value', to decimal 65535
input = 'FFFF'
val = int(input,16) # is 65535
You want it interpreted as a 16-bit signed number. The line below takes the lower 16 bits of any number and 'sign-extends' them, i.e. interprets them as a 16-bit signed value and delivers the corresponding integer:
val16 = ((val + 0x8000) & 0xFFFF) - 0x8000
This is easily generalized:
def sxtn(x, bits):
    h = 1 << (bits - 1)
    m = (1 << bits) - 1
    return ((x + h) & m) - h
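For the original 'FFFF' input this gives -1, and the 16-bit boundary cases behave as expected:
>>> val = int('FFFF', 16)
>>> ((val + 0x8000) & 0xFFFF) - 0x8000
-1
>>> sxtn(0x7FFF, 16), sxtn(0x8000, 16)
(32767, -32768)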
In a language like C, 'FFFF' can be interpreted as either a signed (-1) or unsigned (65535) value. You can use Python's struct module to force the interpretation that you're wanting.
Note that there may be endianness issues that the code below makes no attempt to deal with, and it doesn't handle data that's more than 16 bits long, so you'll need to adapt it if either of those cases applies to you.
import struct
input = 'FFFF'
# first, convert to an integer. Python's going to treat it as an unsigned value.
unsignedVal = int(input, 16)
assert(65535 == unsignedVal)
# pack that value into a format that the struct module can work with, as an
# unsigned short integer
packed = struct.pack('H', unsignedVal)
assert('\xff\xff' == packed)
# ..then UNpack it as a signed short integer
signedVal = struct.unpack('h', packed)[0]
assert(-1 == signedVal)
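If you do need to pin down the byte order, or handle a 32-bit value, the same pack/unpack round trip works with explicit format characters (a sketch; '>I' and '>i' assume big-endian data, so switch to '<' if yours is little-endian):
import struct

hexstring = 'FFFFFFFF'
unsignedVal = int(hexstring, 16)   # 4294967295
# pack as a big-endian unsigned 32-bit value, then unpack it as signed
signedVal = struct.unpack('>i', struct.pack('>I', unsignedVal))[0]
assert(-1 == signedVal)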
I'm trying to pack integers as bytes in python and unpack them in C. So in my python code I have something like
testlib = ctypes.CDLL('/something.so')
testlib.process(repr(pack('B',10)))
which packs 10 as a byte and calls the function "process" in my C code.
What do I need in my C code to unpack this packed data? That is, what do I need to do to get 10 back from the given packed data.
Assuming you have a 10-byte string containing 10 integers, just copy the data:
char packed_data[10];
int unpacked[10];
int i;

for (i = 0; i < 10; ++i)
    unpacked[i] = packed_data[i];
... or using memcpy
On the other hand, if you're using 4 bytes per int when packing, you can split the char string in C and use atoi on it. How are you exchanging data from Python to C?
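For what it's worth, on the Python side you would normally pass the packed bytes themselves (not their repr()) to the C function. A minimal ctypes sketch, assuming a hypothetical C signature like void process(const char *buf, int n):
import ctypes
from struct import pack

testlib = ctypes.CDLL('/something.so')
data = pack('10B', *range(10))        # ten unsigned bytes: 0..9
testlib.process(data, len(data))      # the C side sees the raw 10-byte buffer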
Python says I need 4 bytes for a format code of "BH":
struct.error: unpack requires a string argument of length 4
Here is the code, I am putting in 3 bytes as I think is needed:
major, minor = struct.unpack("BH", self.fp.read(3))
"B" Unsigned char (1 byte) + "H" Unsigned short (2 bytes) = 3 bytes (!?)
struct.calcsize("BH") says 4 bytes.
EDIT: The file is ~800 MB and this is in the first few bytes of the file so I'm fairly certain there's data left to be read.
The struct module mimics C structures. It takes more CPU cycles for a processor to read a 16-bit word on an odd address or a 32-bit dword on an address not divisible by 4, so structures add "pad bytes" to make structure members fall on natural boundaries. Consider:
struct {                  11
    char a;     012345678901
    short b;    ------------
    char c;     axbbcxxxdddd
    int d;
};
This structure will occupy 12 bytes of memory (x being pad bytes).
Python works similarly (see the struct documentation):
>>> import struct
>>> struct.pack('BHBL',1,2,3,4)
'\x01\x00\x02\x00\x03\x00\x00\x00\x04\x00\x00\x00'
>>> struct.calcsize('BHBL')
12
Compilers usually have a way of eliminating padding. In Python, any of =<>! will eliminate padding:
>>> struct.calcsize('=BHBL')
8
>>> struct.pack('=BHBL',1,2,3,4)
'\x01\x02\x00\x03\x04\x00\x00\x00'
Beware of letting struct handle padding. In C, these structures:
struct A {        struct B {
    short a;          int a;
    char b;           char b;
};                };
are typically 4 and 8 bytes, respectively. The padding occurs at the end of the structure in case the structures are used in an array. This keeps the 'a' members aligned on correct boundaries for structures later in the array. Python's struct module does not pad at the end:
>>> struct.pack('LB',1,2)
'\x01\x00\x00\x00\x02'
>>> struct.pack('LBLB',1,2,3,4)
'\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04'
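If you do want struct to add that trailing padding (to match the C layout), you can end the format string with the code of the type whose alignment you need, with a repeat count of zero; on the same platform as the examples above (4-byte long), that gives:
>>> struct.pack('LB0L', 1, 2)
'\x01\x00\x00\x00\x02\x00\x00\x00'
>>> struct.calcsize('LB0L')
8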
By default, on many platforms the short will be aligned to an offset at a multiple of 2, so there will be a padding byte added after the char.
To disable this, use: struct.unpack("=BH", data). This will use standard alignment, which doesn't add padding:
>>> struct.calcsize('=BH')
3
The = character will use native byte ordering. You can also use < or > instead of = to force little-endian or big-endian byte ordering, respectively.
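So for your file read, something like the following should work; the '<' here is an assumption that the file stores these fields little-endian, which you'd need to confirm for your file format (the example bytes are made up):
>>> import struct
>>> struct.calcsize("<BH")
3
>>> major, minor = struct.unpack("<BH", '\x02\x0a\x00')
>>> major, minor
(2, 10)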