I have created the following simple C++ program using OpenMesh:
#include <string>
#include <OpenMesh/Core/IO/MeshIO.hh>
#include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
struct MyTraits : OpenMesh::DefaultTraits {
    typedef OpenMesh::Vec3d Point;
    typedef OpenMesh::Vec3d Normal;
};

typedef OpenMesh::TriMesh_ArrayKernelT<MyTraits> MyMesh;

int main(int argc, char *argv[]) {
    std::string filename = "filename.stl";
    MyMesh OM_mesh;

    OM_mesh.request_face_normals();
    OM_mesh.request_halfedge_normals();
    OM_mesh.request_vertex_normals();
    OM_mesh.request_face_status();
    OM_mesh.request_edge_status();
    OM_mesh.request_halfedge_status();
    OM_mesh.request_vertex_status();

    OpenMesh::IO::Options ropt;
    ropt += OpenMesh::IO::Options::Binary;
    ropt += OpenMesh::IO::Options::FaceNormal;

    OpenMesh::IO::read_mesh(OM_mesh, filename);

    for(int k=0; k<1000; k++){
        OM_mesh.update_face_normals();
    }
    return 0;
}
Also, I have developed the following simple Python script using the OpenMesh bindings:
import openmesh as OM
filename = "filename.stl"
OM_mesh = OM.TriMesh()
OM_mesh.request_face_normals()
OM_mesh.request_halfedge_normals()
OM_mesh.request_vertex_normals()
OM_mesh.request_face_status()
OM_mesh.request_edge_status()
OM_mesh.request_halfedge_status()
OM_mesh.request_vertex_status()
options = OM.Options()
options += OM.Options.Binary
options += OM.Options.FaceNormal
OM.read_mesh(OM_mesh, filename, options)
for k in range(1000):
    OM_mesh.update_face_normals()
Both programs update the face normals of the loaded mesh 1000 times. I expected the C++ version to be considerably faster than the Python version, but in fact it is just the opposite: the C++ version takes around 8 seconds, while the Python version takes only around 0.3 seconds.
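For reference, a minimal sketch of one way such a timing can be taken on the C++ side with std::chrono (the exact harness used for the numbers above is not shown, so treat this as an assumption):
#include <chrono>
#include <iostream>

// Sketch: wall-clock timing of the update loop.
// Assumes the same MyMesh typedef and mesh setup as in the program above.
static double time_updates(MyMesh &mesh, int iterations) {
    auto t0 = std::chrono::steady_clock::now();
    for (int k = 0; k < iterations; k++) {
        mesh.update_face_normals();
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();  // seconds
}

// usage inside main(), after read_mesh():
//     std::cout << time_updates(OM_mesh, 1000) << " s\n";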
How is this possible? Are the Python bindings doing something more than just wrapping the C++ update_face_normals method? Thanks.
I've found that I should use the reading options when I read the file in C++, like this:
OpenMesh::IO::read_mesh(OM_mesh, filename, ropt);
By doing so, the C++ version becomes faster than the Python one. However, for .off files this update is not correct, but that is another issue.
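For completeness, a minimal sketch of the corrected read path (same requests and options as above, now passing ropt through; it assumes the definitions from the program above plus <iostream>, and the check() call at the end is how one would verify whether the reader actually supplied face normals):
// Sketch of the corrected read: pass the options object to read_mesh.
MyMesh OM_mesh;
OM_mesh.request_face_normals();
OM_mesh.request_vertex_normals();

OpenMesh::IO::Options ropt;
ropt += OpenMesh::IO::Options::Binary;
ropt += OpenMesh::IO::Options::FaceNormal;

if (!OpenMesh::IO::read_mesh(OM_mesh, filename, ropt)) {
    std::cerr << "read_mesh failed\n";
    return 1;
}
// If the file did not provide face normals, compute them once here.
if (!ropt.check(OpenMesh::IO::Options::FaceNormal)) {
    OM_mesh.update_face_normals();
}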
I have a problem with pybind11's functional feature when I use it in a for-loop with OpenMP. I've done some research, and my problem sounds very similar to the one in this pull request from two years ago; although that PR is closed and the issue seems to be fixed, I still run into it. A code example I created will hopefully explain my problem better:
b.h
#include <pybind11/pybind11.h>
#include <pybind11/functional.h>
#include <omp.h>
namespace py = pybind11;
class B {
public:
    B(int n, const int& initial_value);
    void map(const std::function<int(int)> &f);
private:
    int n;
    int* elements;
};
b.cpp
#include <pybind11/pybind11.h>
#include <pybind11/functional.h>
#include "b.h"
namespace py = pybind11;
B::B(int n, const int& v)
    : n(n) {
    elements = new int[n];
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        elements[i] = v;
    }
}

void B::map(const std::function<int(int)> &f) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        elements[i] = f(elements[i]);
    }
}

PYBIND11_MODULE(m, handle) {
    handle.doc() = "Example Module";
    py::class_<B>(handle, "B")
        .def(py::init<int, int>())
        .def("map", &B::map)
        ;
}
CMakeLists.txt
cmake_minimum_required(VERSION 3.4...3.18)
project(example)
find_package(OpenMP)
add_subdirectory(pybind11)
pybind11_add_module(m b.cpp)
if(OpenMP_CXX_FOUND)
    target_link_libraries(m PUBLIC OpenMP::OpenMP_CXX)
else()
    message(FATAL_ERROR "Your compiler does not support OpenMP")
endif()
test.py
from build.m import *
def test(i):
    return i * 20
b = B(2, 2)
b.map(test)
I basically have an array to which I want to apply a Python function element by element using a for-loop. I know it is an issue with functional and OpenMP specifically, because in other parts of my project I use OpenMP successfully, and functional also works when I am not using OpenMP.
Edit: The program freezes in the map function and has to be terminated. I am using Ubuntu 21.10, Python 3.9, GCC 11.2.0, OpenMP 4.5, and the latest version of the pybind11 repo.
You're likely experiencing a deadlock between OpenMP's scheduler and Python's GIL (Global Interpreter Lock).
I suggest attaching gdb to your process and looking at where the threads are to verify that's really the problem.
IMHO mixing Python functions and OpenMP like that is asking for trouble. If you want multi-threading of Python functions you can use multiprocessing.pool.ThreadPool. But unless your functions release the GIL most of the time you won't benefit from multi-threading.
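To make the GIL interaction concrete, here is a hedged sketch (an illustration of the usual pattern, not a guaranteed fix for this exact setup): the binding releases the GIL before entering the OpenMP region, and pybind11's std::function wrapper re-acquires it for each call back into Python, so the callback still effectively runs one call at a time:
#include <pybind11/pybind11.h>
#include <pybind11/functional.h>
#include <omp.h>
#include "b.h"

namespace py = pybind11;

void B::map(const std::function<int(int)> &f) {
    // Drop the GIL held by the calling thread so it is not held while this
    // thread waits at the OpenMP barrier for the worker threads.
    py::gil_scoped_release release;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        // pybind11's functional wrapper re-acquires the GIL for every call
        // into Python, so the calls are safe but serialized by the GIL.
        elements[i] = f(elements[i]);
    }
}
An equivalent option is to keep B::map unchanged and add py::call_guard<py::gil_scoped_release>() to the .def("map", ...) binding.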
When I was trying to embed a Python script into my Qt C++ program, I ran into multiple problems when trying to include Python.h.
I would like to provide the following features:
Include Python.h
Execute Python Strings
Execute Python Scripts
Execute Python Scripts with Arguments
It should also work when Python is not installed on the deployed machine
Therefore I searched around the Internet to find a solution. I found a lot of questions and blog posts, but none of them covered all my problems, and it still took me multiple hours and a lot of frustration.
That's why I am writing down a Stack Overflow entry with my full solution, so it might help and speed up your work :)
(This answer and all its code examples also work in a non-Qt environment. Only steps 2 and 4 are Qt-specific.)
Download and install Python https://www.python.org/downloads/release
Alter the .pro file of your project and add the following lines (edit for your correct python path):
INCLUDEPATH = "C:\Users\Public\AppData\Local\Programs\Python\Python39\include"
LIBS += -L"C:\Users\Public\AppData\Local\Programs\Python\Python39\libs" -l"python39"
Example main.cpp code:
#include <QCoreApplication>
#pragma push_macro("slots")
#undef slots
#include <Python.h>
#pragma pop_macro("slots")
/*!
 * \brief runPy can execute a Python string
 * \param string (Python code)
 */
static void runPy(const char* string){
    Py_Initialize();
    PyRun_SimpleString(string);
    Py_Finalize();
}

/*!
 * \brief runPyScript executes a Python script
 * \param file (the path of the script)
 */
static void runPyScript(const char* file){
    FILE* fp;
    Py_Initialize();
    fp = _Py_fopen(file, "r");
    PyRun_SimpleFile(fp, file);
    Py_Finalize();
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    runPy("from time import time,ctime\n"
          "print('Today is', ctime(time()))\n");

    // uncomment the following line to run a script
    // runPyScript("test/decode.py");

    return a.exec();
}
Whenever you #include <Python.h>, use the following code instead (Python.h's use of the name slots would otherwise conflict with Qt's slots keyword):
#pragma push_macro("slots")
#undef slots
#include <Python.h>
#pragma pop_macro("slots")
After compiling, add python3.dll and python39.dll, as well as Python's DLLs and Lib folders, to your build output folder. You can find them in the root directory of your Python installation. This allows the embedded C++ code to run even when Python is not installed on the target machine.
With these steps, I was able to get Python running in Qt with both the 64-bit MinGW and MSVC compilers. Only MSVC in debug mode still has a problem.
FURTHER:
If you want to pass arguments to the Python script, you need the following function (it can easily be copy-pasted into your code):
/*!
 * \brief runPyScriptArgs executes a Python script and passes arguments
 * \param file (the path of the script)
 * \param argc amount of arguments
 * \param argv array of arguments with size of argc
 */
static void runPyScriptArgs(const char* file, int argc, char *argv[]){
    FILE* fp;
    wchar_t** wargv = new wchar_t*[argc];
    for(int i = 0; i < argc; i++)
    {
        wargv[i] = Py_DecodeLocale(argv[i], nullptr);
        if(wargv[i] == nullptr)
        {
            return;
        }
    }

    Py_SetProgramName(wargv[0]);
    Py_Initialize();
    PySys_SetArgv(argc, wargv);
    fp = _Py_fopen(file, "r");
    PyRun_SimpleFile(fp, file);
    Py_Finalize();

    for(int i = 0; i < argc; i++)
    {
        PyMem_RawFree(wargv[i]);
        wargv[i] = nullptr;
    }
    delete[] wargv;
    wargv = nullptr;
}
To use this function, call it like this (For example in your main):
int py_argc = 2;
char* py_argv[2];
py_argv[0] = (char*)"Progamm";
py_argv[1] = (char*)"Hello";
runPyScriptArgs("test/test.py", py_argc, py_argv);
Together with the test.py script in the test folder:
import sys

if len(sys.argv) != 2:
    sys.exit("Not enough args")

ca_one = str(sys.argv[0])
ca_two = str(sys.argv[1])
print("My command line args are " + ca_one + " and " + ca_two)
you get the following output:
My command line args are Progamm and Hello
I'm very new to CMake. Following this and this post, I now want to call a MAXON function from Python using pybind11. Here is what I have done so far:
The library can be downloaded from this page (direct download link).
wget https://www.maxongroup.com/medias/sys_master/root/8837358518302/EPOS-Linux-Library-En.zip
Unzip it:
unzip EPOS-Linux-Library-En.zip
Make the install shell script executable and run it:
chmod +x ./install.sh
sudo ./install.sh
Then go to the example folder:
cd /opt/EposCmdLib_6.6.1.0/examples/HelloEposCmd/
Now I combine the CMakeLists.txt files from here:
# CMakeLists.txt
cmake_minimum_required(VERSION 2.8.12)
project (HelloEposCmd)
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -Wall")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -Wall")
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
find_package(pybind11 REQUIRED)
pybind11_add_module(${PROJECT_NAME} HelloEposCmd.cpp)
add_executable(${PROJECT_NAME} HelloEposCmd.cpp)
target_link_libraries(${PROJECT_NAME} -lEposCmd)
and in HelloEposCmd.cpp this line is added right after the other header includes:
#include <pybind11/pybind11.h>
the main function is renamed to:
int run(int argc, char** argv)
and the pybind11 syntax to add the module is written at the end:
PYBIND11_MODULE(HelloEposCmd, m) {
    m.def("run", &run, "runs the HelloEposCmd");
}
However, when I run cmake . I get the error:
CMake Error at CMakeLists.txt:13 (add_executable):
add_executable can not create target "HelloEposCmd" because another target with the same name already exists. The existing target is a module library created in source directory "/opt/EposCmdLib_6.6.1.0/examples/HelloEposCmd" See documentation for policy CMP0002 for more details.
...
I was wondering if you could kindly help me get the right CMakeLists.txt file. Ideally, I should be able to call the compiled module from Python:
# HelloEposCmd.py
import HelloEposCmd
HelloEposCmd.run()
Thanks for your support in advance.
pybind11_add_module already creates a target for you, so you don't need add_executable anymore. Just remove that line, and when you build you will get a library with the name HelloEposCmd.
add_executable is needed if you are building an executable (.exe), which I believe is not what you want.
The pybind11 documentation says:
This function behaves very much like CMake’s builtin add_library (in fact, it’s a wrapper function around that command).
Thanks to abhilb's post and his kind follow-up in the comments, I was able to figure the problem out, or at least find a temporary workaround:
According to this post, the last two lines of the CMakeLists.txt file should change to
# this line can be removed
# add_executable(${PROJECT_NAME} HelloEposCmd.cpp)
target_link_libraries(${PROJECT_NAME} PRIVATE -lEposCmd)
and then, because according to this post pybind11 doesn't support double pointers, we change the run function to:
int run() {
    int argc = 1;
    char* argv[] = {"./HelloEposCmd"};
    ...
}
which I suppose is a horrible workaround (inspired by information from this page). Now running cmake ., make, and python3 HelloEposCmd.py should work properly (except for a small C++ warning!).
P.S.1. Maybe someone could use std::vector<std::string> as suggested here. This idea was proposed here and there are already some answers worth investigating.
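For illustration, a minimal sketch of that idea (run_impl is a made-up name for the renamed EPOS example entry point; with pybind11/stl.h a Python list of strings converts automatically to std::vector<std::string>):
// Sketch: expose run() taking a list of strings instead of (int, char**).
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>   // enables list[str] <-> std::vector<std::string>
#include <string>
#include <vector>

int run_impl(int argc, char **argv);   // the original main(), renamed (hypothetical name)

int run(std::vector<std::string> args) {
    std::vector<char*> argv;
    for (auto &s : args) {
        argv.push_back(&s[0]);         // the EPOS example expects mutable char*
    }
    return run_impl(static_cast<int>(argv.size()), argv.data());
}

PYBIND11_MODULE(HelloEposCmd, m) {
    m.def("run", &run, "runs HelloEposCmd with a list of command-line arguments");
}
From Python this would then be called as HelloEposCmd.run(["./HelloEposCmd"]).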
P.S.2. Following this discussion, another workaround could be something like:
#include <stdio.h>
#include <stdlib.h>
void myFunc(int argc, char* argv[]) {
    for (int i = 0; i < argc; ++i) {
        printf("%s\n", argv[i]);
    }
}

int run(int argc, long* argv_) {
    char** argv = (char**)malloc(argc * sizeof(char*));
    for (int i = 0; i < argc; ++i) {
        argv[i] = (char*)(argv_[i]);
    }
    myFunc(argc, argv);
    free(argv);
    return 0;
}
I am working on packaging and distributing some Python applications on Windows by bundling together a Python runtime, the Python packages for the applications, and some executables to run the applications. The approach is to modify the source for python.exe so that it launches the applications but still accepts command-line arguments for things like data file names.
Below is an example C++ source for one of the executables:
// source for my_python_application1
#include "stdafx.h"
#include "Windows.h"
#include "Python.h"
wchar_t SWITCH[] = L"-m";
wchar_t APP[] = L"my_python_application1.main";
int wmain(int argc, wchar_t **argv) {
    int newargc;
    newargc = argc + 2;

    // can use this to modify the PythonPath for specific distributions
    // _putenv("PYTHONPATH=\"\"");

    wchar_t **newargv = new wchar_t*[newargc];
    newargv[0] = argv[0];
    newargv[1] = SWITCH;
    newargv[2] = APP;
    for (int i = 1; i < argc; i++) {
        newargv[i + 2] = argv[i];
    }

    return Py_Main(newargc, newargv);
    // return Py_Main(argc, argv);
}
Functionally this achieves everything I need, but I suffer from a certain OCD nature that makes me want things organized in a particular way. I'd like to have a structure like the following:
/application_suite
    /python_runtime
        python.exe
        python36.dll
        (and everything else in a python dir)
    /python_applications
        my_python_application1.exe
        my_python_application2.exe
However, since my_python_application1/2.exe are basically modified python.exe files, in order for them to work properly (load the Python DLL, import modules, and resolve all the paths the modules need to find each other) they need to be located in the /python_runtime directory.
I'm wondering: is there a way to compile these executables so that they can be arranged in the directory structure I presented, but know that the python_runtime directory and everything in it live at a relative path of './python_runtime' (or similar), so that this all behaves well no matter where the end user installs the distribution?
Pre-answer warning: I am not a C/C++ programmer. It is possible there are bad C++ practices in here, so please take what you find in this answer with a grain of salt.
The requirements to achieve this behavior are the following:
We must get the directory of the custom executable
We must set the PYTHONHOME environment variable to %executable_dir%\runtime
We must set the PYTHONPATH environment variable to %executable_dir%\apps so that Python knows where our packages live. This also overrides any system-wide setting, so the distribution doesn't pick up another Python environment.
I don't know if it's necessary, but I am also adding the runtime directory at the front of the PATH.
We have to load the Py_Main function dynamically from the desired DLL. Since we are not expecting the runtime to be on the path before execution, we must load the DLL from %executable_dir%\runtime\python36.dll.
The following source code works when compiled in Visual Studio 2017, with no Python header files and no DLL specified in the linker:
// source code for custom my_python_application1
// requires _CRT_SECURE_NO_WARNINGS flag to compile deprecated path operations
#include "stdafx.h"
#include <string>
#include <sstream>
#include <iostream>
#include "Windows.h"
#include "Shlwapi.h"
// #include "Python.h" // don't need this as we are dynamically loading the library of choice
#pragma comment(lib, "Shlwapi.lib")
__pragma(warning(disable:4996)) // # _CRT_SECURE_NO_DEPRECIATE
wchar_t SWITCH[] = L"-m";
wchar_t APP[] = L"my_python_application1.main";
typedef int(__stdcall *py_main_function)(int, wchar_t**);
int wmain(int argc, wchar_t **argv) {
    int newargc;
    newargc = argc + 2;

    // determine the path of the executable so we know the absolute path
    // of the python runtime and application directories
    wchar_t executable_dir[MAX_PATH];
    if (GetModuleFileName(NULL, executable_dir, MAX_PATH) == 0)
        return -1;
    PathRemoveFileSpec(executable_dir);
    std::wstring executable_dir_string(executable_dir);

    // now set the relevant environment variables so that the environment works as it is supposed to
    std::wstring python_home(L"PYTHONHOME=" + executable_dir_string + L"\\runtime");
    _wputenv(python_home.c_str());

    std::wstring python_path(L"PYTHONPATH=" + executable_dir_string + L"\\apps");
    _wputenv(python_path.c_str());

    // put the python runtime at the front of the path
    std::wstringstream ss;
    ss << "PATH=" << executable_dir << "\\runtime;" << getenv("PATH");
    std::wstring path_string(ss.str());
    _wputenv(path_string.c_str());

    wchar_t **newargv = new wchar_t*[newargc];
    newargv[0] = argv[0];
    newargv[1] = SWITCH;
    newargv[2] = APP;
    for (int i = 1; i < argc; i++) {
        newargv[i + 2] = argv[i];
    }

    // dynamically load the python dll
    std::wstring python_dll(executable_dir_string + L"\\runtime\\python36.dll");
    HINSTANCE hGetProcIDDLL = LoadLibrary(python_dll.c_str());
    py_main_function Py_Main = (py_main_function)GetProcAddress(hGetProcIDDLL, "Py_Main");

    // now call Py_Main with our arguments
    return Py_Main(newargc, newargv);
    // return Py_Main(argc, argv);
}
I am trying to create a Python library from a class which uses OpenCV 2.3. I want to be able to pass NumPy arrays into the class, where they will be converted into cv::Mat objects, processed, and then converted back to NumPy arrays and returned.
Here is a simple test class I am working on to get this working before wrapping my own class. Currently I am just trying to receive a NumPy array, convert it to a cv::Mat, process it, and then write it to a file. Once this is working I will work on returning the processed array to Python.
Here is the simple class:
foo.h :
#include <opencv2/core/core.hpp>
class Foo {
public:
    Foo();
    ~Foo();

    cv::Mat image;
    void bar( cv::Mat in );
};
foo.cpp :
#include "foo.h"
Foo::Foo(){}
Foo::~Foo(){}
void Foo::bar( cv::Mat in ) {
    image = in;
    cv::Canny( image, image, 50, 100 );
    cv::imwrite("image.png", image);
}
And here is where I have attempted to wrap this class using boost::python (I am using code from the OpenCV source for the NumPy-to-Mat conversion):
wrap_foo.cpp
#include <boost/python.hpp>
#include <numpy/arrayobject.h>
#include <opencv2/core/core.hpp>
#include "foo.h"

using namespace cv;
namespace bp = boost::python;

//// Wrapper Functions
void bar(Foo& f, bp::object np);

//// Converter Functions
cv::Mat convertNumpy2Mat(bp::object np);

//// Wrapper Functions
void bar(Foo& f, bp::object np)
{
    Mat img = convertNumpy2Mat(np);
    f.bar(img);
    return;
}

//// Boost Python Class
BOOST_PYTHON_MODULE(lib)
{
    bp::class_<Foo>("Foo")
        .def("bar", bar)
        ;
}

//// Converters
cv::Mat convertNumpy2Mat(bp::object np)
{
    Mat m;
    numpy_to_mat(np.ptr(), m);
    return m;
}
The numpy_to_mat function is from the OpenCV source (modules/python/src2/cv2.cpp); the full file contains that function below the code I wrote above. This code compiles with bjam just fine, but when I import the module into Python it crashes. The error is: libFoo.so: undefined symbol: _ZN2cv3Mat10deallocateEv. I have tried a number of different things but I can't get this to work.
Help is most appreciated.
I think this is probably a bit late but it may be useful to others who experienced the same problem...
I think you need to add the path to the newly created library to your LD_LIBRARY_PATH for your program to locate it.
Assuming the current directory '.' is where your library is at, type the following in your terminal before running your program:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
NOTE: The above export is temporary. You might want to copy your libraries to standard library paths such as /usr/local/lib, or add the path permanently by including the above command in your .profile (or any other shell startup configuration file).