I'm porting an application from Linux to OS X and the Boost::Python integration is failing at run time.
I'm exposing my C++ classes like so:
using namespace scarlet;
BOOST_PYTHON_MODULE(libscarlet) {
    using namespace boost::python;
    class_<VideoEngine, boost::noncopyable>("VideoEngine", no_init)
        .def("play", &VideoEngine::play)
        .def("pause", &VideoEngine::pause)
        .def("isPaused", &VideoEngine::isPaused)
        [...]
        ;
}
I'm importing the library like so:
try {
    boost::python::import("libscarlet");
} catch (boost::python::error_already_set & e) {
    PyErr_Print();
}
Then I inject an instance into the global Python namespace like so:
void VideoEngine::init() {
    [...]
    try {
        auto main_module = boost::python::import("__main__");
        auto main_namespace = main_module.attr("__dict__");
        main_namespace["engine"] = boost::python::object(boost::python::ptr(this));
    } catch (boost::python::error_already_set & e) {
        PyErr_Print();
    }
    [...]
}
It works great on Linux, but on OS X an exception is thrown and PyErr_Print() prints TypeError: No Python class registered for C++ class scarlet::VideoEngine.
As far as I can tell, the module works without issue when imported via the Python interpreter. It is difficult to test fully, since it is designed to be injected as a pre-constructed instance, but the class and its functions are present, as shown below:
$ python
Python 2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import libscarlet
>>> libscarlet.VideoEngine
<class 'libscarlet.VideoEngine'>
>>> libscarlet.VideoEngine.play
<unbound method VideoEngine.play>
Any ideas as to where the incompatibility lies?
Edit: I'm starting to think it might be related to multithreading, since my OS X implementation uses a different threading structure, although all of the calls detailed here happen in the same thread. Could that be the cause of such an issue? Probably not, as it turns out: it also fails on MS Windows in single-threaded mode.
I have solved this now.
It was caused entirely by Boost::Python being statically linked; once I recompiled it as a shared library, the problem went away entirely, on all platforms.
The lesson: don't link Boost statically. I'm pretty sure there are warnings against it, and they should be heeded.
Related
I am trying to use a .NET Core library inside a Jupyter Notebook python script by using PythonNet. Support for .NET Core was added recently (see https://github.com/pythonnet/pythonnet/issues/984#issuecomment-778786164) but I am still getting a No module named 'TestAppCore' error.
I don't have an issue using a .NET Framework library with PythonNet, only .NET Core. Any help with diagnosing and fixing the issue would be greatly appreciated.
The C# library I'm trying to get working is a simple class library project with no dependencies at all. Below is the entirety of the code:
namespace TestAppCore
{
    public class Foo
    {
        public int ID { get; set; }

        public Foo(int id)
        {
            ID = id;
        }

        public int Add(int a, int b)
        {
            return a + b;
        }
    }
}
Here is the python script:
from clr_loader import get_coreclr
from pythonnet import set_runtime
rt = get_coreclr(r"D:\src\Test.runtimeconfig.json")
set_runtime(rt)
import clr
import sys
sys.path.append(r"D:\src\TestAppCore")
clr.AddReference(r"TestAppCore")
from TestAppCore import Foo
foo = Foo(5)
print(foo.ID)
res = foo.Add(1, 2)
print(res)
Here is the output (the import fails with the No module named 'TestAppCore' error):
Finally, here is the runtime config I am using:
{
  "runtimeOptions": {
    "tfm": "netcoreapp3.1",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "3.1.0"
    }
  }
}
.NET Core: 3.1
python version: 3.7
pythonnet: 3.0.0.dev1
clr-loader: 0.1.6
I suspect that you are getting the DLL path wrong.
This worked for me:
from clr_loader import get_coreclr
from pythonnet import set_runtime
set_runtime(get_coreclr("pythonnetconfig.json"))
import clr
clr.AddReference("C:/Path/To/Interface.dll")
from Interface import Foo
foo = Foo()
Using:
Python 3.8.10
pythonnet 3.0.0a1
clr-loader 0.1.7
C# DLL (Class Library) targeting .NET Core 3.1
pythonnetconfig.json exactly as you posted.
I never got it to work with .NET Core 3.1. For me it worked with .NET Framework 4.8 and pythonnet 2.5.2. See my other answer for more details.
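For reference, this is roughly what that route looks like with pythonnet 2.5.2 and a class library targeting .NET Framework 4.8; no clr_loader or runtime config is needed. The path and names below are placeholders, not taken from the original project:
import sys
import clr  # pythonnet 2.5.2 loads the .NET Framework CLR on import (on Windows)

# hypothetical build output folder of the .NET Framework 4.8 class library
sys.path.append(r"C:\Path\To\TestAppFramework\bin\Release")
clr.AddReference("TestAppFramework")  # assembly name, without the .dll extension

from TestAppFramework import Foo

foo = Foo(5)
print(foo.ID)         # 5
print(foo.Add(1, 2))  # 3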
I have a similar problem and found out that if the DLL and the namespace have the same name, then it fails. In my case, following the pythonnet tutorial:
Calculate.sln contains Calc.cs, with
namespace Calculate; // recommended by VS
class Calc { ... }
Then, in Python:
clr.AddReference("Calculate")  # assuming sys.path is set correctly
from Calculate import Calc
Then:
ImportError: cannot import name 'Calc' from 'Calculate' (unknown location)
But with:
namespace CalculateNS; // different name than Calculate.dll
class Calc { ... }
Then, in Python:
clr.AddReference("Calculate")  # assuming sys.path is set correctly
from CalculateNS import Calc
it works... side effect: Pylance knows the Calculate module but not CalculateNS :-(
Has anyone else experienced this? I suspect a lot of the answers I have seen were never actually tested, only guessed at.
Using the .NET 6.0 framework.
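To summarize the working combination above as a single sketch (the output path is a placeholder, and runtime setup is omitted as in the snippets above): the assembly is Calculate.dll, while the C# namespace is deliberately named CalculateNS:
import sys
import clr

# placeholder path to the folder containing Calculate.dll
sys.path.append(r"C:\Path\To\Calculate\bin\Release\net6.0")

clr.AddReference("Calculate")   # the assembly (DLL) name
from CalculateNS import Calc    # the namespace name, different from the assembly

calc = Calc()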
Introduction
Imagine you want to describe a combustion process.
In order to organize things, I created
classes describing the substance (e.g. class Fuel)
a class which describes the combustion (e.g. class Combustion)
main.py to run an example
Python Files
properties.py
class SolidProp:
    def __init__(self, ua):
        self._ultimate = Ultimate(ua)

    @property
    def ultimate(self):
        return self._ultimate


class Ultimate:
    def __init__(self, ua: dict):
        self._comp = ua

    @property
    def comp(self):
        return self._comp
combustion.py
from properties import *
class Combustion:
    def __init__(self, ultimate):
        self.fuel = SolidProp(ua=ultimate)
main.py
from combustion import *
burner = Combustion({'CH4':0.75, 'C2H4':0.25})
Problem Description
ipython console
In the IPython console (started from bash), the following is not autocompleted (although it can still be called):
Python 3.7.2 (default, Dec 29 2018, 06:19:36)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.2.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: run 'main.py'
In [2]: burner.fuel.ultimate.comp
Out[2]: {'CH4': 0.75, 'C2H4': 0.25}
This has something to do with the fact that *.ultimate is defined via a decorator in properties.py (see @property), but I would like *.ultimate.comp to be autocompleted in the IPython console, so people can work with it intuitively.
Example
burner.fuel.ultimate is recognized
burner.fuel.ultimate.comp is NOT recognized
I cannot see any methods or properties beyond burner.fuel.ultimate in the IPython console. This makes it unintuitive for people to work with when they do not know those methods exist.
Remark: the IPython console inside the PyCharm IDE works fine!?
python console
Running it in the python console:
Python 3.7.2 (default, Dec 29 2018, 06:19:36)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exec(open("main.py").read())
>>> burner.fuel.ultimate.comp
{'CH4': 0.75, 'C2H4': 0.25}
It works fine. But why not in the IPython console started from a terminal?
If I use placement new in the following way, my code appears to run reliably. If I use the commented-out code instead, I frequently get either a segmentation fault (11) or unexpected results. Returning the address of a statically allocated object also works.
I've verified that null is not returned and that this function is being called exactly once for the single Python object in my test script.
extern "C"
{
void * hail_detector_new()
{
alignas(alignof(hail::Detector)) static U8 allocation[sizeof(hail::Detector)];
// return new(std::nothrow) hail::Detector;
return new(allocation) hail::Detector;
}
}
Python arguments and return type declarations:
f = _library.hail_detector_new
f.restype = c_void_p
f.argtypes = None

class Detector(object):
    def __init__(self):
        self._object = _library.hail_detector_new()
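For context, the snippet above assumes _library has already been created; a self-contained version of that setup might look like the following sketch, where the library filename is a placeholder rather than the project's real name:
import ctypes
from ctypes import c_void_p

# hypothetical filename for the compiled shared library
_library = ctypes.CDLL("libhail.dylib")

# declare the prototype before the first call, so the returned pointer
# is handled as a void* rather than ctypes' default int return type
_library.hail_detector_new.restype = c_void_p
_library.hail_detector_new.argtypes = []

detector_handle = _library.hail_detector_new()  # opaque pointer kept on the Python side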
Versions
Squall: clang --version
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Squall: python --version
Python 2.7.13 :: Continuum Analytics, Inc.
I am facing an issue that I have run into a few times before.
Every time, the issue ends up solving itself without me understanding what is causing it.
What happens is that I start a Python virtual environment from my C++ code. That works; afterwards, using the write function, I am able to write commands into that environment, and this also works perfectly fine. However, I am unable to write my last command to the process.
I thought it might be some buffer being full, but I didn't really find anything about a buffer in the Qt docs.
This is the relevant piece of code:
static QStringList params;
QProcess *p = new QProcess();

params << "-f" << "-c" << "python2" << "/home/John/Desktop/python.log";
qDebug() << "parameters: " << params;
qDebug() << "going to write";

p->start("script", params);
qDebug() << "Turning on new user process...";
while (!p->waitForStarted()) {
    qDebug() << "waiting for virtualenv to be ready";
}

successFailWrite = p->write("import imp;\n");
while (!p->waitForBytesWritten());

successFailWrite = p->write("foo = imp.load_source('myTest', '/home/John/recognitionClass.py');\n");
while (!p->waitForBytesWritten());

successFailWrite = p->write("from myTest import recognitionClass;\n");
while (!p->waitForBytesWritten());

successFailWrite = p->write("myClassObj = recognitionClass();\n");
if (successFailWrite != -1) {
    qDebug() << "OK written";
}
while (!p->waitForBytesWritten());

successFailWrite = p->write("habelahabela\n");
if (successFailWrite != -1) {
    qDebug() << "OK written";
}

QString name = "John";
QString processNewUserParameter = "print myClassObj.addNewUser(" + name + ");\n";
QByteArray processNewUserParameterByteArr = processNewUserParameter.toUtf8();
p->write(processNewUserParameterByteArr);
I keep a log file which contains what is being written to the Python virtualenv and what is being printed:
Script started on Son 27 Aug 2017 20:09:52 CEST
import imp;
foo = imp.load_source('myTest', '/home/John/recognitionClass.py');
from myTest import recognitionClass;
myClassObj = recognitionClass();
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import imp;
>>> foo = imp.load_source('myTest', '/home/John/recognit
<myTest', '/home/John/recogniti onClass.py');
/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
/usr/local/lib/python2.7/dist-packages/sklearn/grid_search.py:43: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.
DeprecationWarning)
>>> from myTest import recognitionClass;
>>> myClassObj = recognitionClass();
>>>
It does print "OK written" twice, which on the one hand proves that I successfully wrote my commands to the process, yet I can't see anything.
As you can see, the test sentence "habelahabela" doesn't get written either.
Does anybody have an idea about what I may be doing wrong?
I know that I am writing my commands too quickly to the environment: as you can see, I start by writing "import imp", it gets buffered, and a little later the buffer gets flushed and the virtualenv executes the command (this is why you see it twice).
Does anybody see why I can't see the test sentence and, more importantly, my actual command "print myClassObj.addNewUser("+ name +");\n" being printed to the virtual environment?
Thanks
First of all, there is no sense in writing while(!p->waitForBytesWritten());. waitForBytesWritten already blocks your thread without a while loop and, as the name states, waits until the bytes are written. It returns false only if there is either a timeout or an error. In the first case you should give it more time to write the bytes; in the second case you should fix the error and only then try again.
The same holds for waitForStarted and all the other Qt functions starting with "waitFor...".
So the usage looks like:
if(!p->waitForBytesWritten(-1)) // waits forever until bytes ARE written
{
    qDebug() << "Error while writing bytes";
}
Regarding the question: I believe the problem (or at least part of it) is that you write your last two messages into p, but you neither wait for the bytesWritten() signal nor use the blocking waitForBytesWritten() function. There is probably no error occurring (because p->write(...) does not return -1 at that point), but that does not mean your message has been written yet. In a nutshell: wait for the bytesWritten() signal...
QProcess inherits from QIODevice, so I recommend looking at its docs and learning a bit more about it.
I'm compiling several different versions of Python for my system, and I'd like to know where in the source the startup banner is defined so I can change it for each version. For example, when the interpreter starts it displays
Python 3.3.1 (default, Apr 28 2013, 10:19:42)
[GCC 4.7.2 20121109 (Red Hat 4.7.2-8)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
I'd like to change the string default to other things to signal which version I'm using, but I'm also interested in how the whole shebang is assembled. Where is this defined?
Let's use grep to get in the ballpark. I'm not going to bother searching for default because I'll get too many results, but I'll try Type "help", which should not appear too many times. If it's a C string, the quotes will be escaped, so we should look for C strings first and Python strings later.
Python $ grep 'Type \\"help\\"' . -Ir
./Modules/main.c: "Type \"help\", \"copyright\", \"credits\" or \"license\" " \
It's in Modules/main.c, in Py_Main(). More digging gives us this line:
fprintf(stderr, "Python %s on %s\n",
        Py_GetVersion(), Py_GetPlatform());
Because "on" is in the format string, Py_GetPlatform() must be linux and Py_GetVersion() must give the string we want...
Python $ grep Py_GetVersion . -Irl
...
./Python/getversion.c
...
That looks promising...
PyOS_snprintf(version, sizeof(version), "%.80s (%.80s) %.80s",
              PY_VERSION, Py_GetBuildInfo(), Py_GetCompiler());
We must want Py_GetBuildInfo(), because it's inside the parentheses...
Python $ grep Py_GetBuildInfo . -Irl
...
./Modules/getbuildinfo.c
...
That looks a little too obvious.
const char *
Py_GetBuildInfo(void)
{
    static char buildinfo[50 + sizeof(HGVERSION) +
                          ((sizeof(HGTAG) > sizeof(HGBRANCH)) ?
                           sizeof(HGTAG) : sizeof(HGBRANCH))];
    const char *revision = _Py_hgversion();
    const char *sep = *revision ? ":" : "";
    const char *hgid = _Py_hgidentifier();
    if (!(*hgid))
        hgid = "default";
    PyOS_snprintf(buildinfo, sizeof(buildinfo),
                  "%s%s%s, %.20s, %.9s", hgid, sep, revision,
                  DATE, TIME);
    return buildinfo;
}
So, default is the name of the Mercurial branch. By examining the makefiles, we can figure out that this comes from the HGTAG macro: a makefile variable named HGTAG holds a command, and the output of running that command becomes the macro's value. So,
Simple solution
When building Python,
Python $ ./configure
Python $ make HGTAG='echo awesome'
Python $ ./python
Python 3.2.3 (awesome, May 1 2013, 21:33:27)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
Looks like if you add a Mercurial tag before you build, then default will be replaced with the name of your tag (source: Modules/getbuildinfo.c : _Py_hgidentifier()).
Basically, it seems like it chooses the name default because that is the name of the branch. It looks like the interpreter is built with the tag name, if one exists, or the name of the branch if no tag (besides tip) exists on the current working copy.
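As a side note, if you just want to check which build string a given interpreter was compiled with (rather than change it), the pieces of the banner are also exposed at runtime; the values in the comments below are illustrative, taken from the banner above:
import sys
import platform

# The whole first banner line, minus the leading "Python ", e.g.
# '3.3.1 (default, Apr 28 2013, 10:19:42) \n[GCC 4.7.2 20121109 (Red Hat 4.7.2-8)]'
print(sys.version)

# The "(...)" part split into (build tag, build date),
# e.g. ('default', 'Apr 28 2013 10:19:42')
print(platform.python_build())

# The bracketed compiler string, e.g. 'GCC 4.7.2 20121109 (Red Hat 4.7.2-8)'
print(platform.python_compiler())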