I know Python doesn't have unsigned types, but I need to convert one from a program that runs Python (Blender) to a Win32 application written in C++. I know I can convert an integer like so:
>>> i = -1
>>> i + 2**32
4294967295
How can I take a float like 0.2345f and convert it to a long type? I will need to convert to long in Python and then back to float in Win32 (C++)...
Typically in C++ it is done by:
float f = 0.2345f;
DWORD dw = *reinterpret_cast< DWORD* >( &f );
This produces an unsigned long, and converting it back is simply the reverse:
FLOAT f = *reinterpret_cast< FLOAT* >( &dw );
You can use struct.pack and struct.unpack for this. Note, though, that it is not a cast (i.e. a reinterpretation of the same memory) but a conversion (a copy to a new piece of memory).
import struct

def to_float(int_):
    return struct.unpack('d', struct.pack('q', int_))[0]

def to_long(float_):
    return struct.unpack('q', struct.pack('d', float_))[0]
data = 0.2345
long_data = to_long(data)       # 4597616773191482474
new_data = to_float(long_data)  # 0.2345
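Note that 'q'/'d' round-trip a 64-bit double. Since the question involves a 32-bit float and a DWORD, here is a sketch of the 32-bit variants, using the 'f' (4-byte float) and 'I' (4-byte unsigned int) format codes; keep in mind that Python floats are doubles, so packing with 'f' rounds to single precision first:
import struct

def float_to_dword(f):
    # reinterpret the 4 bytes of an IEEE 754 single as an unsigned 32-bit int
    return struct.unpack('I', struct.pack('f', f))[0]

def dword_to_float(dw):
    # the reverse: reinterpret the unsigned 32-bit int's bytes as a float
    return struct.unpack('f', struct.pack('I', dw))[0]

dw = float_to_dword(0.2345)  # an unsigned 32-bit value matching the C++ DWORD bits
assert abs(dword_to_float(dw) - 0.2345) < 1e-7  # equal up to float32 precision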
i = 0.2345
converted = long(i)  # note: long() truncates the value (this yields 0L); it converts the value, not the bit pattern
I am translating some NodeJS source code to Python. However, there is a function readUInt32BE whose workings I do not quite understand.
Original Code
const buf = Buffer.from("vgEAAwAAAA1kZXYubG9yaW90LmlvzXTUl6ESlOrvJST-gsL_xQ==", 'base64');
const appId = parseInt(buf.slice(0, 4).toString('hex'), 16);
const serverIdLength = buf.slice(4, 8).readUInt32BE(0);
Here is what I have tried so far in Python
encodeToken = base64.b64decode("vgEAAwAAAA1kZXYubG9yaW90LmlvzXTUl6ESlOrvJST-gsL_xQ==")
appId = encodeToken[:4]
appId = appId.hex()
serverIdLength = ......
If possible, can you write a function that works the same as readUInt32BE(0) and explain it for me? Thanks
I'm assuming from the name that the function interprets an arbitrary sequence of 4 bytes as an unsigned 32-bit (big-endian) integer.
The corresponding Python function would be struct.unpack with an appropriate format string.
import struct
appId = struct.unpack(">I", encodeToken[:4])[0]
serverIdLength = struct.unpack(">I", encodeToken[4:8])[0]
# ">" means "big-endian"
# "I" means 4-byte unsigned integer
No need to get a hex representation of the bytes first. unpack always returns a tuple, even if only one value is produced by the format string, so you need to take the first element of that tuple as the final value.
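On Python 3 the same conversion can also be done without struct, using int.from_bytes, which reads a byte string in a given byte order; a minimal equivalent of readUInt32BE(0):
serverIdLength = int.from_bytes(encodeToken[4:8], byteorder="big")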
I want to create a Python datatype using ctypes that matches the C datatype const char**, which resembles an array of pointers. However, I'm not able to get this working in Python.
The simplified C-function header looks like this:
int foo(int numOfProp, const char** propName, const char** propValue);
In C, the correct function call would look like this:
const char *PropName[2];
PropName[0] = "Prop_Index_1";
PropName[1] = "Prop_Index_2";
const char *PropValue[2];
PropValue[0] = "10";
PropValue[1] = "20";
stream_id = (*foo)(2, PropName, PropValue);
Basically, the function takes two arrays (pair of name and value) as well as the length of both arrays, and returns a stream ID. When the DLL is loaded, I can see that the function expects this ctypes datatype for the property arrays:
"LP_c_char_p"
However, I am really struggling to create this datatype based on lists of strings.
My first attempt (based on How do I create a Python ctypes pointer to an array of pointers) looks like this:
# set some dummy values
dummy_prop_values = [
    "10",
    "20",
]

# create property dict
properties = {
    f"Prop_Index_{i}": dummy_prop_values[i] for i in range(len(dummy_prop_values))
}
def first_try():
    # create the ctypes array type (c_char_p * 2)
    ctypes_array = ctypes.c_char_p * 2
    # create empty c-type arrays for the stream properties
    prop_names = ctypes_array()
    prop_values = ctypes_array()
    # fill the empty arrays with their corresponding values
    for i, (prop_name, prop_value) in enumerate(properties.items()):
        prop_names[i] = prop_name.encode()
        prop_values[i] = prop_value.encode()
    # get pointers to the property arrays
    ptr_prop_names = ctypes.pointer(prop_names)
    ptr_prop_values = ctypes.pointer(prop_values)
    return ptr_prop_names, ptr_prop_values
It throws the following error when I hand the returned values over to the function foo (which actually makes sense, since I explicitly created an array of length 2; I don't know how or why this worked for the author of the linked question):
ctypes.ArgumentError: argument 2: <class 'TypeError'>: expected LP_c_char_p instance instead of LP_c_char_p_Array_2
My second attempt (based more or less on my own thoughts) looks like this:
def second_try():
    # convert properties to lists
    prop_names = list(properties.keys())
    prop_values = list(properties.values())
    # concatenate list elements, zero-terminated
    # but I guess this is wrong anyway because it leads to an early string termination (on byte level)...?
    prop_names = ctypes.c_char_p("\0".join(prop_names).encode())
    prop_values = ctypes.c_char_p("\0".join(prop_values).encode())
    # get pointers to properties
    ptr_prop_names = ctypes.pointer(prop_names)
    ptr_prop_values = ctypes.pointer(prop_values)
    return ptr_prop_names, ptr_prop_values
This actually doesn't throw an error, but it returns -1 as the stream ID, which means that creating the stream wasn't successful. I double-checked all the other arguments of the function call, and these two properties are the only ones that can be wrong somehow.
For whatever reason I just can't figure out exactly where I make a mistake, but hopefully someone here can point me in the right direction.
To convert a list of some type into a ctypes array of that type, the straightforward idiom is:
(element_type * num_elements)(*list_of_elements)
In this case:
(c_char_p * len(array))(*array)
Note that (*array) expands the list as if each individual element were passed as a separate parameter, which is what the array constructor requires.
Full example:
test.c - To verify the parameters are passed as expected.
#include <stdio.h>
#ifdef _WIN32
# define API __declspec(dllexport)
#else
# define API
#endif
API int foo(int numOfProp, const char** propName, const char** propValue) {
    for(int i = 0; i < numOfProp; i++)
        printf("name = %s value = %s\n", propName[i], propValue[i]);
    return 1;
}
test.py
import ctypes as ct
dll = ct.CDLL('./test')
# Always define .argtypes and .restype to help ctypes error checking
dll.foo.argtypes = ct.c_int, ct.POINTER(ct.c_char_p), ct.POINTER(ct.c_char_p)
dll.foo.restype = ct.c_int
# helper function to build ctypes arrays
def make_charpp(arr):
    return (ct.c_char_p * len(arr))(*(s.encode() for s in arr))

def foo(arr1, arr2):
    if len(arr1) != len(arr2):
        raise ValueError('arrays must be same length')
    return dll.foo(len(arr1), make_charpp(arr1), make_charpp(arr2))
foo(['PropName1', 'PropName2'], ['10', '20'])
Output:
name = PropName1 value = 10
name = PropName2 value = 20
I have a C function
int * myfunc()
{
    int * ret = (int *) malloc(sizeof(int)*5);
    ...
    return ret;
}
in python I can call it as
ret = lib.myfunc()
but I can't seem to figure out how to actually use ret in the Python code (i.e. cast it to an int array of length 5).
I see lots of documentation (and questions here) about how to pass a python array into a C function, but not how one deals with an array returned from a C function.
The only thing I've figured out so far (which sort of works, but seems ugly) is:
buf = ffi.buffer(ret,ffi.sizeof("int")*5)
int_array = ffi.from_buffer("int *", buf)
is that what I'm supposed to do? or is there a better way?
In C, the type int * and the type int[5] are equivalent at runtime. That means that although you get an int * from calling the function, you can use it directly as if it were an int[5]. All the same operations work, with the exception of len(). For example:
ret = lib.myfunc()
my_python_list = [ret[i] for i in range(5)]
lib.free(ret) # see below
As the function called malloc(), you probably need to call free() yourself, otherwise you have a memory leak. Declare it by adding the line void free(void *); to the cdef() earlier.
There are some more advanced functions whose usage is optional here: you could cast the type from int * to int[5] with p = ffi.cast("int[5]", ret); or instead you could convert the C array into a Python list with the equivalent but slightly faster call my_python_list = ffi.unpack(ret, 5).
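Putting it together, a minimal end-to-end sketch in ABI mode; the library name libmylib.so is hypothetical:
from cffi import FFI

ffi = FFI()
ffi.cdef("""
    int *myfunc(void);
    void free(void *);
""")
lib = ffi.dlopen("./libmylib.so")  # hypothetical library containing myfunc

ret = lib.myfunc()
values = ffi.unpack(ret, 5)  # copy the 5 ints into a Python list
lib.free(ret)                # release the memory malloc'd in C
print(values)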
I have the following struct in my C program:
typedef struct
{
    void* str;
    DWORD str_length;
    DWORD count;
} mystruct;
I would like to create a buffer in Python, write it to a file, and then read it from my C program, and reference this buffer as a "mystruct".
What I tried in Python was:
from struct import *
str = raw_input("str: ")
count = raw_input("count: ")
s = Struct(str(len(str)) + 'sLL')
s.pack(str, len(str), int(count))
It returns a binary buffer, but not with my data.
Where have I got it wrong? Is there a better way to do it?
I prefer using ctypes for this kind of job, as it gives more options for handling pointers. Here is an example of how your issue could be resolved using ctypes (in Python 3.x, but you can easily convert it to Python 2.x).
Note that your structure still doesn't contain the data itself, only a pointer to a buffer that is accessible from C code. This buffer is created using create_string_buffer, which converts a Python string to a null-terminated C one.
from ctypes import *

class mystruct(Structure):
    _fields_ = [("str", c_char_p),
                ("str_length", c_long),
                ("count", c_long)]

s = b"ABC" * 10
c = 44

# convert python string to null terminated c char array
c_str = create_string_buffer(s)
# cast char array to char*
c_str_pointer = cast(c_str, c_char_p)

# create structure
a = mystruct(c_str_pointer, len(s), c)

# print byte representation of structure
print(bytes(a))
print(a.str)
print(a.str_length)
print(a.count)
If you don't need pointers and a fixed-size buffer is enough, you can also do this with ctypes:
from ctypes import *

class mystruct(Structure):
    _fields_ = [("str", c_char * 100),
                ("str_length", c_long),
                ("count", c_long)]

s = b"ABC" * 10
c = 44

a = mystruct(s, len(s), c)

# print byte representation of structure
print(bytes(a))
print(a.str)
print(a.str_length)
print(a.count)
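Since the goal is to write the buffer to a file and read it back from C, note that only the fixed-size variant serializes meaningfully; a pointer value written to disk is useless in another process. A minimal sketch of the write step, assuming the fixed-size struct above:
# dump the raw, flattened struct bytes so the C program can fread() them
# directly into a mystruct with a matching char str[100] layout
with open("mystruct.bin", "wb") as f:
    f.write(bytes(a))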
I am trying to implement a function in C (extending Python) that returns a numpy.float32 data type. Is it possible to actually create such an object and return it, so that in Python the object returned from calling the function is an instance of numpy.float32?
(C Extension)
PyObject *numpyFloatFromFloat(float d)
{
    PyObject *ret = SomeAPICall(d);
    return ret;
}
(in python)
>>> a = something_special()
>>> type(a)
numpy.float32
Right now, all attempts at using the API documented in the reference documentation illustrate how to make an array, which yields a numpy.ndarray; so far, using the scalar data types yields a C float that converts to a Python double. And for reasons outside my control, I really need an actual IEEE 754 float32 at the end of this function.
Solution thus far:
something.pyx:
import numpy

cdef extern from "float_gen.h":
    float special_action()

def numpy_float_interface():
    return numpy.float32(special_action())
float_gen.h
static inline float special_action() { return 1.0; }
I don't see any loss of data here, but I can't be certain. I know a numpy.float32 is treated as a C float (float32_t), so assuming that calling special_action in the .pyx file doesn't convert the result to a double (as Python would), it should be lossless.
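One way to convince yourself the round trip is lossless is to compare bit patterns; a quick sketch (not from the original post) using struct and numpy:
import struct
import numpy as np

x = np.float32(1.0)  # the value special_action() returns
# extract the 4 raw bytes of the float32 and rebuild the value from them
bits = struct.unpack("<I", x.tobytes())[0]
y = np.frombuffer(struct.pack("<I", bits), dtype=np.float32)[0]
assert x == y  # identical bit pattern, so no precision was lost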
Edit
The ultimate solution was very different, I just had to understand how to properly extend Python in C with the numpy library.
The code below just returns an np.float32(32):
#include <Python.h>
#include <numpy/arrayobject.h>

static PyObject *get_float(PyObject *self, PyObject *args) {
    float v = 32;
    PyObject *np_float32_val = NULL;
    PyArray_Descr *descr = NULL;

    if(! PyArg_ParseTuple(args, ""))
        return NULL;
    if(! (descr = PyArray_DescrFromType(NPY_FLOAT32))) {
        PyErr_SetString(PyExc_TypeError, "Improper descriptor");
        return NULL;
    }

    np_float32_val = PyArray_Scalar(&v, descr, NULL);
    printf("%zd\n", np_float32_val->ob_refcnt);

    return np_float32_val;
}
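Assuming the function above is exposed by a compiled extension module (the module name my_ext here is hypothetical), using it from Python would look like:
import my_ext  # hypothetical extension module built from the C code above

a = my_ext.get_float()
print(type(a))  # <class 'numpy.float32'>
print(a)        # 32.0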
This simple Cython module returns an np.float32 from a C float. The cdef float isn't strictly necessary, since np.float32() coerces whatever you give it to an np.float32.
test_mod.pyx
import numpy as np
def func():
    cdef float a
    a = 1
    return np.float32(a)
tester.py
import pyximport
pyximport.install()
import test_mod

a = test_mod.func()
print type(a)  # <type 'numpy.float32'>