Python code:
for b in range(4):
    for c in range(4):
        print myfunc(b/0x100000000, c*8)
C code:
unsigned int b,c;
for(b=0;b<4;b++)
    for(c=0;c<4;c++)
        printf("%L\n", b/0x100000000);
        printf("%L\n", myfunc(b/0x100000000, c*8));
I am getting an error saying:
error: integer constant is too large for "long" type
at both printf statements in the C code.
The 'myfunc' function returns a long.
I assumed this could be solved by giving 'b' a different type, but defining 'b' as 'long' or 'unsigned long' did not help.
Any pointers?
My bad... Here is a short version of the problem:
unsigned int b;
b = 1;
printf("%L", b/0x100000000L);
I am getting this error and these warnings:
error: integer constant is too large for "long" type
warning: conversion lacks type at end of format
warning: too many arguments for format
Your C code needs braces to create the scope that Python creates by indentation, so it should look like this:
unsigned int b,c;
for(b=0;b<4;b++)
{
    for(c=0;c<4;c++)
    {
        printf("%L\n", b/0x100000000);
        printf("%L\n", myfunc(b/0x100000000, c*8));
    }
}
Try long long. Python automatically uses a number representation that fits your constants, but C does not: 0x100000000L simply does not fit in a 32-bit unsigned int, unsigned long, and so on. Also, read your C textbook on the long long data type and working with it.
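For illustration, here is a minimal sketch of that fix (the myfunc stub is only a stand-in so the example compiles; the real function is assumed to return long, as stated in the question):

#include <stdio.h>

/* Stand-in for the real myfunc, assumed to return long */
long myfunc(unsigned long long a, unsigned int b) {
    return (long)(a + b);
}

int main(void) {
    unsigned int b, c;
    const unsigned long long big = 0x100000000ULL;  /* 2^32 needs more than 32 bits */
    for (b = 0; b < 4; b++) {
        for (c = 0; c < 4; c++) {
            /* b is promoted to unsigned long long, so the quotient is one too */
            printf("%llu\n", b / big);
            printf("%ld\n", myfunc(b / big, c * 8));
        }
    }
    return 0;
}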
unsigned int b,c;
const unsigned long long d = 0x100000000ULL; /* 33 bits is too big for a 32-bit int or long */
for(b=0;b<4;b++) {
    for(c=0;c<4;c++) { /* use braces and indent consistently */
        printf("%llu\n", b/d);             /* "%llu" prints an unsigned long long in decimal */
        printf("%ld\n", myfunc(b/d, c*8)); /* "l" is the matching modifier for myfunc's long result */
    }
}
To see how repr(x) works for float in CPython, I checked the source code for float_repr:
buf = PyOS_double_to_string(PyFloat_AS_DOUBLE(v),
                            'r', 0,
                            Py_DTSF_ADD_DOT_0,
                            NULL);
This calls PyOS_double_to_string with format code 'r', which seems to be translated to format code 'g' with the precision set to 17:
precision = 17;
format_code = 'g';
So I'd expect repr(x) and f'{x:.17g}' to return the same representation. However this doesn't seem to be the case:
>>> repr(1.1)
'1.1'
>>> f'{1.1:.17g}'
'1.1000000000000001'
>>>
>>> repr(1.225)
'1.225'
>>> f'{1.225:.17g}'
'1.2250000000000001'
I understand that repr only needs to return as many digits as are necessary to reconstruct the exact same object as represented in memory, and hence '1.1' is obviously sufficient to get back 1.1, but I'd like to know how (or why) this differs from the (internally used) .17g formatting option.
(Python 3.7.3)
It seems that you're looking at a fallback method:
/* The fallback code to use if _Py_dg_dtoa is not available. */
PyAPI_FUNC(char *) PyOS_double_to_string(double val,
                                         char format_code,
                                         int precision,
                                         int flags,
                                         int *type)
{
    char format[32];
The preprocessor variable that conditions the fallback method is PY_NO_SHORT_FLOAT_REPR. If it's set, then dtoa won't be compiled, as it would fail:
/* if PY_NO_SHORT_FLOAT_REPR is defined, then don't even try to compile
the following code */
It's probably not the case on most modern setups. This Q&A explains when/why Python selects either method: What causes Python's float_repr_style to use legacy?
Now, at line 947, you have the version where _Py_dg_dtoa is available:
/* _Py_dg_dtoa is available. */
static char *
format_float_short(double d, char format_code,
                   int mode, int precision,
                   int always_add_sign, int add_dot_0_if_integer,
                   int use_alt_formatting, const char * const *float_strings,
                   int *type)
and there you can see that 'g' and 'r' have subtle differences (explained in the comments):
case 'g':
    if (decpt <= -4 || decpt >
        (add_dot_0_if_integer ? precision-1 : precision))
        use_exp = 1;
    if (use_alt_formatting)
        vdigits_end = precision;
    break;
case 'r':
    /* convert to exponential format at 1e16. We used to convert
       at 1e17, but that gives odd-looking results for some values
       when a 16-digit 'shortest' repr is padded with bogus zeros.
       For example, repr(2e16+8) would give 20000000000000010.0;
       the true value is 20000000000000008.0. */
    if (decpt <= -4 || decpt > 16)
        use_exp = 1;
    break;
This seems to match the behaviour you're describing. Note that "{:.16g}".format(1.225) yields 1.225.
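To compare the two format codes directly from C, here is a minimal sketch using the public PyOS_double_to_string API (my own demo, not part of the quoted CPython sources; the expected outputs are the ones from the REPL session above, and format code 'r' requires precision 0):

#include <stdio.h>
#include <Python.h>

int main(void) {
    Py_Initialize();
    /* 'r' is the shortest-repr mode; 'g' with precision 17 matches .17g */
    char *r = PyOS_double_to_string(1.1, 'r', 0, Py_DTSF_ADD_DOT_0, NULL);
    char *g = PyOS_double_to_string(1.1, 'g', 17, 0, NULL);
    printf("r:   %s\n", r);  /* expected: 1.1 */
    printf("17g: %s\n", g);  /* expected: 1.1000000000000001 */
    PyMem_Free(r);  /* the caller owns the returned buffers */
    PyMem_Free(g);
    Py_Finalize();
    return 0;
}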
I have created the following class using pybind11:
py::class_<Raster>(m, "Raster")
.def(py::init<double*, std::size_t, std::size_t, std::size_t, double, double, double>());
However, I have no idea how I would call this constructor in Python. I see that Python expects a float in the place of the double*, but I cannot seem to call it.
I have tried ctypes.data_as(ctypes.POINTER(ctypes.c_double)), but this does not work...
Edit:
I have distilled the answer below from @Sergei's answer.
py::class_<Raster>(m, "Raster", py::buffer_protocol())
    .def("__init__", [](Raster& raster, py::array_t<double> buffer, double spacingX, double spacingY, double spacingZ) {
        py::buffer_info info = buffer.request();
        new (&raster) Raster(static_cast<double*>(info.ptr), info.shape[0], info.shape[1], info.shape[2], spacingX, spacingY, spacingZ);
    })
Pybind does automatic conversions. When you bind f(double*), the argument is assumed to be a pointer to a single value, not a pointer to the beginning of an array, because it would be quite unnatural to expect such input from the Python side. So pybind will convert the argument using this logic.
If you need to pass a raw array to C++, use py::buffer, as here:
py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
    .def("__init__", [](Matrix &m, py::buffer b) {
        typedef Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic> Strides;

        /* Request a buffer descriptor from Python */
        py::buffer_info info = b.request();

        /* Some sanity checks ... */
        if (info.format != py::format_descriptor<Scalar>::format())
            throw std::runtime_error("Incompatible format: expected a double array!");

        if (info.ndim != 2)
            throw std::runtime_error("Incompatible buffer dimension!");

        auto strides = Strides(
            info.strides[rowMajor ? 0 : 1] / (py::ssize_t)sizeof(Scalar),
            info.strides[rowMajor ? 1 : 0] / (py::ssize_t)sizeof(Scalar));

        auto map = Eigen::Map<Matrix, 0, Strides>(
            static_cast<Scalar *>(info.ptr), info.shape[0], info.shape[1], strides);

        new (&m) Matrix(map);
    });
To make it work, you need to pass a type that follows the Python buffer protocol.
I'm writing a Python 3 script with some computationally heavy sections in C, using the Python C API. When dealing with int64s, I can't figure out how to ensure that an input number is an unsigned int64, that is, to reject it if it's smaller than 0. As the official documentation suggests, I'm using PyArg_ParseTuple() with the K format unit, which does not check for overflow. Here is my C code:
static PyObject* from_uint64(PyObject* self, PyObject* args){
    uint64_t input;
    PyObject* output;
    if (!PyArg_ParseTuple(args, "K", &input)){
        return PyErr_Format(PyExc_ValueError, "Wrong input: expected unsigned 64-bit integer.");
    }
    return NULL;
}
However, calling the function with a negative argument throws no error, and the input number is cast to unsigned: e.g., from_uint64(-1) will result in input = 2^64-1. As expected, since there's no overflow check.
What would be the correct way of determining whether the input number is negative, possibly before parsing it?
You should use
unsigned long long input = PyLong_AsUnsignedLongLong(args);
You can then check with
if (PyErr_Occurred()) {
// handle out of range here
}
if the number was unsuitable for an unsigned long long.
See also the Python 3 API documentation on Integer Objects
With a little modification of @Ctx's answer: the solution is to first parse the input as a generic object (so, not directly from args) and then check its type:
static PyObject* from_uint64(PyObject* self, PyObject* args){
    PyObject* input_obj;
    if (!PyArg_ParseTuple(args, "O", &input_obj)){
        return PyErr_Format(PyExc_TypeError, "Wrong input: expected py object.");
    }
    unsigned long long input = PyLong_AsUnsignedLongLong(input_obj);
    if (input == (unsigned long long)-1 && PyErr_Occurred()) {
        PyErr_Clear();
        return PyErr_Format(PyExc_TypeError,
                            "Parameter must be an unsigned integer type, but got %s",
                            Py_TYPE(input_obj)->tp_name);
    }
    /* input is now a validated value in [0, 2^64-1]; hand it back as a demo */
    return PyLong_FromUnsignedLongLong(input);
}
This code, as expected, works on any input in [0, 2^64-1] and throws an error on integers outside those bounds, as well as on illegal types like float, string, etc.
My C program needs a char** input, which I store in Python as a numpy object array of strings.
a = np.empty(2, dtype=object)
a[0] = 'hi you'
a[1] = 'goodbye'
What is the correct way to pass this to my C program, considering that numpy.i only defines typemaps for char* arrays?
That's impossible AFAIK, and as far as the docs go:
Some data types are not yet supported, like boolean arrays and string arrays.
You'll either have to write an intermediary function that takes the strings as separate arguments, puts them in an array and passes that to your C function, or work out another way of doing things.
So it is doable, but you need to convert the numpy object array to a list of Python strings with a.tolist(). Then you can pass it to the C code as a char ** using the following tutorial code:
http://www.swig.org/Doc1.3/Python.html#Python_nn59
Edit: This turned out to be a real pain in the *** since the example above is for Python 2 and gives useless error messages in Python 3. Python 3 moved to unicode strings, and I had to do some doc reading to make it work. Here is the Python 3 equivalent of the above example.
// This tells SWIG to treat char ** as a special case
%typemap(in) char ** {
    /* Check if is a list */
    if (PyList_Check($input)) {
        Py_ssize_t size = PyList_Size($input);
        Py_ssize_t i = 0;
        $1 = (char **) malloc((size+1)*sizeof(char *));
        for (i = 0; i < size; i++) {
            PyObject *o = PyList_GetItem($input, i);
            if (PyUnicode_Check(o))
                $1[i] = (char *) PyUnicode_AsUTF8(o);
            else {
                PyErr_Format(PyExc_TypeError, "list must contain strings; element %zd of %zd was not a string.", i, size);
                free($1);
                return NULL;
            }
        }
        $1[i] = 0;
    } else {
        PyErr_SetString(PyExc_TypeError, "not a list");
        return NULL;
    }
}
// This cleans up the char ** array we malloc'd before the function call
%typemap(freearg) char ** {
    free((char *) $1);
}
Essentially, I just had to replace PyString_Check with PyUnicode_Check and PyString_AsString with PyUnicode_AsUTF8 (introduced in Python 3.3).
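For context, here is a minimal sketch of the kind of C function such a typemap can feed; print_strings is a hypothetical name, not something from the SWIG tutorial:

#include <stdio.h>

/* Hypothetical consumer: the typemap above hands it a NULL-terminated array */
void print_strings(char **strs) {
    int i;
    for (i = 0; strs[i] != 0; i++) {
        printf("%s\n", strs[i]);
    }
}

With the typemap applied, calling the wrapped function from Python as module.print_strings(a.tolist()) should print both strings.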
While attempting to read a Python list filled with float numbers and to populate real channels[7] with their values (I'm using F2C, so real is just a typedef for float), all I am able to retrieve are zero values. Can you point out the error in the code below?
static PyObject *orbital_spectra(PyObject *self, PyObject *args) {
    PyListObject *input = (PyListObject*)PyList_New(0);
    real channels[7], coefficients[7], values[240];
    int i;

    if (!PyArg_ParseTuple(args, "O!", &PyList_Type, &input)) {
        return NULL;
    }

    for (i = 0; i < PyList_Size(input); i++) {
        printf("%f\n", PyList_GetItem(input, (Py_ssize_t)i)); // <--- Prints zeros
    }
    //....
}
PyList_GetItem will return a PyObject*. You need to convert that to a number C understands. Try changing your code to this:
printf("%f\n", PyFloat_AsDouble(PyList_GetItem(input, (Py_ssize_t)i)));
A few things I see in this code:
You leak a reference; don't create that empty list at the beginning, it's not needed.
You don't need to cast to PyListObject.
PyList_GetItem returns a PyObject*, not a float. Use PyFloat_AsDouble to extract the value.
If PyList_GetItem returns NULL, then an exception has been thrown, and you should check for it.
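Putting those points together, here is a minimal sketch of a corrected function (assuming #include <Python.h>; the channels/coefficients logic from the question is elided, and the Py_RETURN_NONE is only an assumed completion of the sketch):

static PyObject *orbital_spectra(PyObject *self, PyObject *args) {
    PyObject *input;  /* borrowed reference, filled in by PyArg_ParseTuple */
    Py_ssize_t i;

    if (!PyArg_ParseTuple(args, "O!", &PyList_Type, &input)) {
        return NULL;
    }

    for (i = 0; i < PyList_Size(input); i++) {
        PyObject *item = PyList_GetItem(input, i);  /* borrowed reference */
        if (item == NULL) {
            return NULL;  /* exception already set by PyList_GetItem */
        }
        double value = PyFloat_AsDouble(item);
        if (value == -1.0 && PyErr_Occurred()) {
            return NULL;  /* element was not convertible to a C double */
        }
        printf("%f\n", value);  /* now prints the actual float values */
    }
    /* ... populate channels[] from the converted values ... */
    Py_RETURN_NONE;
}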