python pycparser fails at #pragma directive - python

The python C-parser pycparser fails at the following #pragma directive:
#pragma ghs section somestring="some_other_string"
generates error:
AssertionError: invalid #pragma directive
What's wrong with this #pragma?

Most likely nothing. The syntax, meaning and compiler behaviour of #pragma lines are implementation-defined. From N3797 §16.6:
A preprocessing directive of the form
# pragma pp-tokens(opt) new-line
causes the implementation to behave in an implementation-defined manner. The behavior might cause
translation to fail or cause the translator or the resulting program to behave in a non-conforming manner. Any pragma that is not recognized by the implementation is ignored.
The C standard has similar language.
If you want pycparser to do something other than throw an assertion error, you need to see what options are available to change its behaviour. Sorry, but that's beyond my scope.
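If pycparser gives you no suitable option, a common workaround is to preprocess the source yourself and blank out vendor-specific pragma lines before parsing. A minimal sketch (plain string processing, not a pycparser feature; the helper name strip_pragmas is invented here):

```python
import re

# Hypothetical workaround: blank out #pragma lines before parsing, since
# their syntax is implementation-defined anyway. Replacing each line with
# an empty line preserves line numbers for later error reporting.
def strip_pragmas(source):
    """Replace every #pragma line with an empty line."""
    return re.sub(r'^[ \t]*#[ \t]*pragma\b.*$', '', source, flags=re.MULTILINE)

code = '''#pragma ghs section somestring="some_other_string"
int x;
'''
print(strip_pragmas(code))
```

The cleaned text can then be fed to the parser as usual.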


confusion with cppyy for overloaded methods and error handling

I have a c++ class with several constructors:
MyClass(const std::string& configfilename);
MyClass(const MyClass& other);
I have python bindings for this class that were generated with cppyy - I don't do this myself, this is all part of the framework I'm using (CERN ROOT, in case you're wondering).
Now, I have a piece of python code that instantiates my class, with a nice try-except block:
try:
    obj = MyClass(args.config)
except ConfigFileNotFoundError:
    print("config file " + args.config + " was not found!")
    exit(0)
Now, to test, I'm executing this with a wrong config file. But what I get is roughly this:
TypeError: none of the 2 overloaded methods succeeded. Full details:
MyClass(const std::string&) => ConfigFileNotFoundError
MyClass::MyClass(const MyClass&) => TypeError
So I'm wondering:
Since cppyy seems to handle function overloading with a try/except block, is there any reasonable way to do error handling for such applications?
I'd love to actually get the ConfigFileNotFoundError to handle it properly, rather than getting this TypeError. Also, what determines the actual error class I get in the end - does it depend on the order in which the overloads appear in the header file?
Any help, suggestions or pointers on where to find more information on this would be highly appreciated.
cppyy doesn't use try/except for overload resolution, hence there are also no __context__ and __cause__ set. To be more precise: the C++ exception is not an error that occurs during a handler. Rather, as-yet unresolved overloads are prioritized, then tried in order, with no distinction made between a Python failure (e.g. from an argument conversion) or a C++ failure (any exception that was automatically converted into a Python exception). This is a historic artifact predating run-time template instantiation and SFINAE: it allowed for more detailed run-time type matching in pre-instantiated templates.
If all overloads fail (Python or C++), the collected errors are summarized. Python still requires an exception type, however, and if the exception types across the collected types differ, a generic TypeError is raised, with a message string made up of all the collected exceptions. This is what happens here: there is ConfigFileNotFoundError raised by C++ in one overload and TypeError from argument conversion failure in the other.
There's an improvement in the cppyy repo now, to be released with 2.3.0: in clear cases such as this one (a single overload succeeding in argument match but failing in the callee), you'll get the actual ConfigFileNotFoundError instance, as long as its class is publicly derived from std::exception (I think it already is, otherwise the error report you posted would have looked quite different).
(Note that CERN's ROOT contains an old fork of cppyy that has diverged quite a bit; you'll have to request a separate update from them if that fork matters to you.)
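Until that release, one pragmatic (if fragile) workaround is to parse the summarized TypeError message for the underlying C++ exception name. A sketch, assuming the message format shown in the question; cppyy does not guarantee this format, and find_cpp_error is a hypothetical helper:

```python
# Fragile sketch: pick the first non-TypeError exception name out of a
# cppyy overload-failure summary. Purely illustrative string matching.
def find_cpp_error(exc):
    """Return the first name after '=>' that isn't TypeError, or None."""
    for line in str(exc).splitlines():
        if "=>" in line:
            name = line.split("=>", 1)[1].strip()
            if name != "TypeError":
                return name
    return None

msg = ("none of the 2 overloaded methods succeeded. Full details:\n"
       "  MyClass(const std::string&) => ConfigFileNotFoundError\n"
       "  MyClass::MyClass(const MyClass&) => TypeError")
print(find_cpp_error(TypeError(msg)))  # ConfigFileNotFoundError
```

This at least lets an except TypeError handler distinguish "config file missing" from a genuine argument-conversion failure.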

C++ #include<XXX.h> equivalent of Python's import XXX as X

I work with Python most of the time, for some reasons now I also need to use C++.
I find Python's import XXX as X very neat in the following way, for example:
import numpy as np
a = np.array([1,2,3])
where I'm very clear by looking at my code that the array() function is provided by the numpy module.
However, when working with C++, if I do:
#include<cstdio>
std::remove(filename);
It's not clear to me at first sight that the remove() function under the std namespace is provided by <cstdio>.
So I'm wondering if there is a way to do it in C++ as the import XXX as X way in Python?
Nope.
It'll be slightly clearer if you write std::remove (which you should be doing anyway; there's no guarantee that the symbol is available in the global namespace) because then at least you'll know it comes from a standard header.
Beyond that, it's up to your memory. 😊
Some people try to introduce hacks like:
namespace SomeThing {
#include <cstdio>
}
// Now it's SomeThing::std::remove
That might work for your own headers (though I'd still discourage it even then). But it'll cause all manner of chaos with standard headers for sure and is not permitted:
[using.headers]/1: The entities in the C++ standard library are defined in headers, whose contents are made available to a translation unit when it contains the appropriate #include preprocessing directive.
[using.headers]/3: A translation unit shall include a header only outside of any declaration or definition, and shall include the header lexically before the first reference in that translation unit to any of the entities declared in that header. No diagnostic is required.
Recall that #include and import are fundamentally different things: #include pastes in source text and does nothing to the namespaces of the symbols that text declares. C++ modules may eventually go some way towards this sort of functionality.
No, there is no way to force this syntax. The person who developed the code that you include is free to organize it however they like. Generally people split their code into namespaces, which can result in this syntax:
#include <MyLibrary.h>
int main()
{
    MyLibrary::SayHello();
    return 0;
}
But you have no guarantee on how the code in the header is written.
C++ #include<XXX.h> equivalent of Python's import XXX as X
There is no equivalent in C++.
When you include a file into another, you get every single declaration from the included file, and you have no option of changing their names.
You can add aliases for types and namespaces though, and references to objects, as well as write wrapper functions to do some of what the as X part does in Python.
It's not clear to me at first sight that remove() is provided by <cstdio>.
The std namespace at least tells you that it is provided by the standard library.
What I like to do, is document which header provides the used declarations:
#include<cstdio> // std::remove
std::remove(filename);
That said, most IDEs can show you where an identifier is declared by ctrl-clicking or hovering over it (although this doesn't always work well when there are overloads in different headers). My primary use for such inclusion comments is checking which includes can be removed after refactoring.

swig: suppress warning about function being python keyword

I have a C++ library and I use swig to generate Python bindings for it. Many classes have a print function, for them I get a warning like this:
Foo.h:81: Warning 314: 'print' is a python keyword, renaming to '_print'
How can I suppress the warnings? I tried
%ignore print;
But it did not help. Thank you in advance...
I expected that using the warning filtering syntax:
%warnfilter(314) print;
would do the trick, however in this instance it didn't seem to work. I was however able to fix the warning by explicitly doing the rename myself using %rename:
%module test
%rename(_print) print;
void print();
%ignore also works with SWIG 3.0. My best guess is that you had the directive and the declaration in the wrong order, for example:
%module test
%ignore print;
void print();
This does not warn with 3.0.2.

CPython - Compile fails, PyDateTime_FromTimestamp not declared?

I'm writing a V8 add-on to convert javascript objects to python, and vice-versa. I'm able to convert all sorts of types, but PyDateTime_FromTimestamp (which is specified as existing in the cpython docs: https://docs.python.org/2/c-api/datetime.html#c.PyDateTime_FromTimestamp) is apparently undefined, causing compilation to fail.
../src/py_object_wrapper.cc:189:13: error: use of undeclared identifier
'PyDateTime_FromTimestamp'
return PyDateTime_FromTimestamp(value->NumberValue());
Anybody know what's going on?
Since you haven't given us enough information to debug anything, I'm going to take a wild guess at the most likely problem.
Notice that at the top of the documentation you linked to it says:
Various date and time objects are supplied by the datetime module. Before using any of these functions, the header file datetime.h must be included in your source (note that this is not included by Python.h), and the macro PyDateTime_IMPORT must be invoked, usually as part of the module initialisation function. The macro puts a pointer to a C structure into a static variable, PyDateTimeAPI, that is used by the following macros.
If you just forgot the macro, this would compile but then crash at runtime, as PyDateTimeAPI will be NULL.
But if you forgot to #include datetime.h, that would cause exactly what you're seeing.

Compiling lunatic python on windows

I'm trying to compile Lunatic Python on Windows with MinGW. The command is as follows:
gcc.exe -shared -DLUA_BUILD_AS_DLL src\luainpython.c src\pythoninlua.c liblua.a
libpython27.a -IC:\Python27\include -IC:\LUA\include
This gives me undefined reference errors, but I cannot find any Lua API change reference telling me what I should replace these with.
src\luainpython.c:350:14: warning: 'LuaObject_Type' redeclared without dllimport
attribute after being referenced with dll linkage [enabled by default]
C:\Users\Wiz\AppData\Local\Temp\cccm0nAN.o:luainpython.c:(.text+0x7a): undefined
reference to `lua_strlen'
C:\Users\Wiz\AppData\Local\Temp\cccm0nAN.o:luainpython.c:(.text+0x557): undefine
d reference to `_imp__LuaObject_Type'
C:\Users\Wiz\AppData\Local\Temp\cccm0nAN.o:luainpython.c:(.text+0xc3a): undefine
d reference to `luaL_getn'
C:\Users\Wiz\AppData\Local\Temp\cccm0nAN.o:luainpython.c:(.text+0x1036): undefin
ed reference to `luaopen_loadlib'
c:/mingw32/bin/../lib/gcc/i686-w64-mingw32/4.7.1/../../../../i686-w64-mingw32/bi
n/ld.exe: C:\Users\Wiz\AppData\Local\Temp\cccm0nAN.o: bad reloc address 0x0 in s
ection `.data'
collect2.exe: error: ld returned 1 exit status
The original Lunatic-Python codebase has many known problems -- the build issue you're running into above being one of them. Unfortunately, it doesn't seem like the original author is still maintaining this project -- if the last modification date here is any indication.
If you're still trying to get it to work, I would highly recommend going with one of the more recent forks. In particular, the Lunatic-Python fork on GitHub incorporates many of my fixes and improvements.
Getting back to your question, many of the undefined references are due to improper forward declaration in the headers or because of defined macros that cause the forward declare to be incorrect. For example, the original luainpython.h contains:
PyAPI_DATA(PyTypeObject) LuaObject_Type;
In windows, after preprocessing it expands into:
extern __declspec(dllimport) PyTypeObject LuaObject_Type;
In other words, the linker's going to try and find the definition of LuaObject_Type from an import library. This is of course wrong since that new type is created and implemented by lunatic in luainpython.c. The proper prototype should be extern PyTypeObject LuaObject_Type; instead.
Also note that luaopen_loadlib is deprecated in Lua 5.1, which explains another of the undefined references you're getting. (The same goes for lua_strlen and luaL_getn, which survive only as compatibility macros in newer Lua versions; lua_objlen covers both uses in 5.1.) In fact, lunatic-python's usage of the following calls is all deprecated:
luaopen_base(L);
luaopen_table(L);
luaopen_io(L);
luaopen_string(L);
luaopen_debug(L);
luaopen_loadlib(L);
and should be replaced with this instead:
luaL_openlibs(L);
