I have a Python wrapper around some C++ code. In it is a function that I start as a separate process from my Python code. It contains a while loop, and I need to set up a condition that tells it when to shut down.
For this situation, the while statement is simple.
while(TERMINATE == 0)
I have data being sent back from within the while loop. I'm using Pipe() to create 'in' and 'out' connection objects, and I send the 'out' object to the function when I create the process.
fxn = self.FG.do_videosequence
(self.inPipe, self.outPipe) = Pipe()
self.stream = Process(target=fxn, args=(self.outPipe,))
self.stream.start()
As I mentioned, from inside the wrapper I am able to send data back to the Python script with
PyObject *send = Py_BuildValue("s", "send_bytes");
PyObject_CallMethodObjArgs(pipe, send, temp, NULL);
This works just fine. However, I'm having issues sending a message to the C++ code in the wrapper that tells the loop to stop.
What I figured I would do is just check poll(), as that is what I do on the Python script side. I want to keep it simple: when the loop sees that there is an incoming message from the Python script, it sets TERMINATE = 1. So I wrote this:
PyObject *poll = Py_BuildValue("p", "poll");
I'm expecting a True or False from the Python function poll(), so I figured "p" would be ideal, as it would convert True to 1 and False to 0.
In the loop I have:
if(PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL))
TERMINATE = 1;
I wanted to use poll() because it's non-blocking, unlike recv(). That way I can go about my other work and check poll() once a cycle.
However, when I send a message from the Python script, it never trips.
self.inPipe.send("Hello")
I'm not sure where the disconnect is. When I print the result of the poll() call, I get 0 the entire time. Either I'm not calling it correctly and it's just defaulting to 0, or I'm not actually generating a signal that trips the poll() call, so it's always 0.
Does anyone have any insight into what I am doing wrong?
*****UPDATE******
I found some other information.
PyObject *poll = Py_BuildValue("p", "poll");
should be
PyObject *poll = Py_BuildValue("s", "poll");
Since I'm passing the name of the method I'm calling as a string, it needs to be built as a string; it has nothing to do with the return type.
From there, the return value of
PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL)
is a PyObject, so it needs to be handled as one, for example by calling
PyObject_IsTrue
to determine whether it's true or false. I'll make changes to my code, and if I have a solution I'll update the post with an answer.
So I've been able to find the solution. In the end I was making two mistakes.
The first mistake was in how I created the PyObject reference to the Python method I was calling. I misread the documentation and used "p" without considering the context. So
PyObject *poll = Py_BuildValue("p", "poll");
should be
PyObject *poll = Py_BuildValue("s", "poll");
The second mistake was how I was handling the return value of
PyObject_CallMethodObjArgs(pipe, poll, NULL, NULL)
While it's true that it calls the Python method, it does not return a simple true/false value, but rather a PyObject. So I specifically needed to handle that PyObject by calling
PyObject_IsTrue(PyObject *o)
with the return value of the poll() call as the argument. I now have the ability to send/receive from both the Python script and the C API contained in the wrapper.
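Putting the two fixes together, the check inside the C++ loop ends up looking roughly like this (just a sketch based on the snippets above, with the pipe and TERMINATE variables from my code and error handling omitted):
PyObject *poll = Py_BuildValue("s", "poll");
while (TERMINATE == 0)
{
    // ... do the regular video/frame work ...
    PyObject *result = PyObject_CallMethodObjArgs(pipe, poll, NULL);
    if (result != NULL)
    {
        if (PyObject_IsTrue(result))
            TERMINATE = 1;   // the Python side sent something, so shut down
        Py_DECREF(result);
    }
}
Py_DECREF(poll);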
I have looked at other questions similar to this one, but their solutions don't work well for me.
My question is about this Node.js code:
function pyInput(){
    const buffers = [];
    proc.stdout.on('data', (chunk) => buffers.push(chunk));
    proc.stdout.on('end', () => {
        const result = JSON.parse(Buffer.concat(buffers));
        console.log('Python process exited, result:', result);
    });
    proc.stdin.write(JSON.stringify([['a','b',1],['b','c',-6],['c','a',4],['b','d',5],['d','a', -10]]));
    proc.stdin.end();
}
The Python function I'm trying to pass this to:
def createGraph(listOfAttr):
    for i in range(len(listOfAttr)):
        G.add_edge(listOfAttr[i][0], listOfAttr[i][1], weight = listOfAttr[i][2])

#createGraph([['a','b',1],['b','c',-6],['c','a',4],['b','d',5],['d','a', -10]])
my_list = json.load(sys.stdin)
json.dump(my_list, sys.stdout)
The code is basically for finding negative cycles in a graph, and I want to load that data in from Node.js. However, my Python program never finishes executing; it just gets stuck, and I don't know why. For now I won't pass the list from Node into the Python function, but I am trying to at least print it out to see if it's being passed to Python.
The json.load() from sys.stdin is probably the problem. Since sys.stdin is a pipe, it never actually closes until you tell it to, so json.load() just sits on the stream waiting for more data. You can instead read the stream piece by piece, for example with input(), until you receive some sort of sentinel string telling you you've hit the end, and then move on. Make sure to update your Node.js script as well, so that it feeds each piece of data as a line terminated by a \n and sends the end signal you chose once it's done.
I'm trying to execute a Python callback when a certain function is called. It works if the function is called by running the process, but it fails when I call the function with SBTarget.EvaluateExpression.
Here's my C code:
#include <stdio.h>
int foo(void) {
printf("foo() called\n");
return 42;
}
int main(int argc, char **argv) {
foo();
return 0;
}
And here's my Python script:
import lldb
import os
def breakpoint_cb(frame, bpno, err):
print('breakpoint callback')
return False
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)
target = debugger.CreateTargetWithFileAndArch('foo', 'x86_64-pc-linux')
assert target
# Break at main and start the process.
main_bp = target.BreakpointCreateByName('main')
process = target.LaunchSimple(None, None, os.getcwd())
assert process.state == lldb.eStateStopped
foo_bp = target.BreakpointCreateByName('foo')
foo_bp.SetScriptCallbackFunction('breakpoint_cb')
# Callback is executed if foo() is called from the program
#process.Continue()
# This causes an error and the callback is never called.
opt = lldb.SBExpressionOptions()
opt.SetIgnoreBreakpoints(False)
v = target.EvaluateExpression('foo()', opt)
err = v.GetError()
if err.fail:
print(err.GetCString())
else:
print(v.value)
I get the following error:
error: Execution was interrupted, reason: breakpoint 2.1.
The process has been left at the point where it was interrupted, use "thread
return -x" to return to the state before expression evaluation
I get the same error when the breakpoint has no callback, so it's really the breakpoint that is causing problems, not the callback. The expression is evaluated when opt.SetIgnoreBreakpoints(True) is set, but that doesn't help in my case.
Is this something that can be fixed or is it a bug or missing feature?
Operating system is Arch Linux, LLDB version is 6.0.0 from the repository.
The IgnoreBreakpoints setting doesn't mean you don't hit breakpoints while running. For instance, you will notice that the breakpoint hit count will get updated either way. Rather it means:
True: that if we hit a breakpoint we will auto-resume
False: if we hit a breakpoint we will stop regardless
The False setting is intended for when you are calling a function because you want to stop in it, or in some function it calls, for the purposes of debugging that function. So overriding the breakpoint conditions and commands is the right thing to do.
For your purposes, I think you want IgnoreBreakpoints to be True, since you also want the expression evaluation to succeed.
OTOH, if I understand your intent, the thing that's causing you a problem is that when IgnoreBreakpoints is true, lldb doesn't call the breakpoint's commands. It should only skip that bit of work when we are forcing the stop.
I wrote this snippet of Python code (using PyBluez) to send raw BNEP Bluetooth packets over L2CAP. The purpose is to do some fuzzing-like testing.
BNEP_PSM = 0x000F
btSock = bluetooth.BluetoothSocket(bluetooth.L2CAP)
btSock.connect(('<some BDADDR>', BNEP_PSM))
for i in range(10):
    btSock.send('<some payload>')
This works quite well and, as expected, creates multiple BNEP packets even if the payload is malformed.
Now, I'm trying to write the same function in C++ using Qt, but it is not working the same way. An excerpt of the code is the following:
QBluetoothSocket btSock(QBluetoothServiceInfo::L2capProtocol);
btSock.connectToService(QBluetoothAddress("<some BDADDR>"), QBluetoothUuid::Bnep);
QObject::connect(&btSock, &QBluetoothSocket::connected, [&btSock](){
    int i = 10;
    while (i--)
        btSock.write("<some payload>");
});
Running it with i = 1 works just fine, sending a single packet with the specified payload.
Running it with i = 10 results in a single packet whose payload is the specified payload repeated ten times.
For instance, with a payload of "AAAA" and a loop of 3, the first case (Python) results in
+------------+----+ +------------+----+ +------------+----+
|L2CAP Header|AAAA| --> |L2CAP Header|AAAA| --> |L2CAP Header|AAAA|
+------------+----+ +------------+----+ +------------+----+
and the second case (Qt) results in
+------------+------------+
|L2CAP Header|AAAAAAAAAAAA|
+------------+------------+
How could I force Qt socket's write to behave like Python socket's send?
UPDATE:
Looking at the documentation, it says that
The bytes are written when control goes back to the event loop
How could I force the buffer to flush before going back to the event loop?
How could I force the buffer to flush before going back to the event loop?
You can't, because the sending can only be done asynchronously, not synchronously.
But we can queue a flush the same way the packets are queued. Namely: send each packet after the previous one has been sent. Thus we shall send it every time the event loop has processed all other work. The idiom for that is zero-duration timers - note that this has nothing at all to do with timers, it's a weird overloading of the timer concept that really makes no sense otherwise.
int i = 10;
while (i--)
    QTimer::singleShot(0, this, [this]{ m_btSocket.write("<some payload>"); });
m_btSocket must be a member of the class, and must be a value member - otherwise the code will be unsafe.
If you wish to ensure that stale packets are dumped in case of a disconnection and won't affect any subsequent connections, keep track of their generation and send only if it's current:
class Foo : public QObject {
    unsigned int m_generation = {}; // unsigned: modulo math w/o overflows
    QBluetoothSocket m_btSocket{QBluetoothServiceInfo::L2capProtocol};
    ...
    bool isBtConnected() const { return m_btSocket.state() == QBluetoothSocket::ConnectedState; }
    void sendSinglePacket(const QByteArray & data) {
        if (!isBtConnected()) return;
        auto gen = m_generation;
        QTimer::singleShot(0, this, [this, gen, data] {
            if (m_generation == gen)
                m_btSocket.write(data);
        });
    }
    Foo(QObject * parent = {}) : QObject(parent) {
        connect(&m_btSocket, &QBluetoothSocket::disconnected, this, [this]{
            m_generation++; // drops all in-flight packets
        });
        ...
    }
};
I did not find a proper solution using QBluetoothSocket's methods, but I made it work with a little hack.
I just used the C header sys/socket.h (I only need to support POSIX-compliant OSs) and changed
btSock.write("<some payload>");
to
send(btSock.socketDescriptor(), "<some payload>", <payload length>, 0);
I'm writing a Python extension in C++, wrapping a third-party library I do not control. That library creates a thread Python knows nothing about, and from that thread, calls a C++ callback I provide to the library. I want that callback to call a Python function, but I get a deadlock using the approach I read from the docs. Here's my interpretation of those.
void Wrapper::myCallback()
{
    PyGILState_STATE gstate = PyGILState_Ensure();
    PyObject *result = PyObject_CallMethod(_pyObj, "callback", nullptr);
    if (result) Py_DECREF(result);
    PyGILState_Release(gstate);
}
My code does nothing else related to threads, though I've tried a number of other things that do. Based on this, for example, I tried calling PyEval_InitThreads(), but it's not obvious where that call should be made in an extension; I put it in the PyMODINIT_FUNC. These attempts have all led to deadlocks, crashes, or mysterious fatal errors from Python, e.g., PyEval_ReleaseThread: wrong thread state.
This is on Linux with Python 3.6.1. Any ideas how I can get this "simple" callback to work?
Likely Culprit
I didn't realize that in another thread, the library was in a busy/wait loop waiting on the callback's thread. In gdb, info threads made this apparent. The only solution I can see is to skip those particular calls to the callback; I don't see a way to make them safe, given the busy/wait loop. In this case, that's acceptable, and doing so eliminates the deadlock.
Also, it appears that I do also need to call PyEval_InitThreads() before any of this, though in a C++ extension it's not clear where that should go. One of the replies suggested doing it indirectly in Python by creating and deleting a throwaway threading.Thread. That didn't seem to fix it, instead triggering a Fatal Python error: take_gil: NULL tstate, which I think means there's still no GIL. My guess, based on this and the issue it refers to, is that PyEval_InitThreads() causes the current thread to become the main thread for the GIL. If that call is made in the short-lived throwaway thread, maybe that's a problem. Yeah, I'm only guessing and would appreciate an explanation from someone who doesn't have to.
This answer is only for Python >= 3.0.0. I don't know if it would work for earlier Pythons or not.
Wrap your C++ module in a Python module that looks something like this:
import threading
t = threading.Thread(target=lambda: None, daemon=True)
t.start()  # actually spawn (and immediately finish) a thread so Python's threading machinery gets initialized
t.join()
del t
from your_cpp_module import *
From my reading of the documentation, that should force threading to be initialized before your module is imported. Then the callback function you have written up there should work.
I'm less confident of this working, but your module init function could instead do this:
if (!PyEval_ThreadsInitialized())
{
    PyEval_InitThreads();
}
That should work, because if PyEval_ThreadsInitialized() isn't true, your module init function is being executed by the only Python thread in existence, and that thread already holds the GIL, which is what PyEval_InitThreads() requires.
These are guesses on my part. I've never done anything like this as is evidenced by my clueless comments on your question. But from my reading of the documentation, both of these approaches should work.
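For illustration, a minimal Python 3 module init along the lines of that second approach might look like this (a sketch only; your_cpp_module is just the placeholder name used above):
static struct PyModuleDef your_cpp_module_def = {
    PyModuleDef_HEAD_INIT, "your_cpp_module", NULL, -1, NULL
};

PyMODINIT_FUNC PyInit_your_cpp_module(void)
{
    if (!PyEval_ThreadsInitialized())
        PyEval_InitThreads();   // safe here: the importing thread holds the GIL
    return PyModule_Create(&your_cpp_module_def);
}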
I'm new to Stack Overflow, but I've been working on embedding Python in a multithreaded C++ system for the last few days and have run into a fair number of situations where the code has deadlocked itself. Here's the solution that I've been using to ensure thread safety:
class PyContextManager {
private:
    static volatile bool python_threads_initialized;
public:
    static std::mutex pyContextLock;

    PyContextManager(/* if python_threads_initialized is false, call PyEval_InitThreads and set the variable to true */);
    ~PyContextManager();
};
#define PY_SAFE_CONTEXT(expr)                                              \
{                                                                          \
    std::unique_lock<std::mutex> lock(PyContextManager::pyContextLock);   \
    PyGILState_STATE gstate;                                               \
    gstate = PyGILState_Ensure();                                          \
    expr;                                                                  \
    PyGILState_Release(gstate);                                            \
}
The boolean and the mutex are initialized in the .cpp file.
I've noticed that without the mutex, the PyGILState_Ensure() call can cause a thread to deadlock. Likewise, calling PY_SAFE_CONTEXT within the expr of another PY_SAFE_CONTEXT will cause the thread to hang while it waits on its own mutex.
Using these functions, I believe your callback function would look like this:
void Wrapper::myCallback()
{
    PyContextManager cm;
    PY_SAFE_CONTEXT(
        PyObject *result = PyObject_CallMethod(_pyObj, "callback", nullptr);
        if (result) Py_DECREF(result);
    );
}
If you don't believe that your code is likely to ever need more than one multithreaded call to Python, you can easily expand the macro and take the static variables out of a class structure. This is just how I've handled an unknown thread starting and determining whether it needs to start up the system, and dodging the tedium of writing out the GIL functions repeatedly.
Hope this helps!
I have wrapped C++ observers in Python. If you are using Boost.Python, you can call PyEval_InitThreads() in BOOST_PYTHON_MODULE:
BOOST_PYTHON_MODULE(eapipy)
{
    boost::shared_ptr<Python::InitialisePythonGIL> gil(new Python::InitialisePythonGIL());
    ....
}
Then I use a class to control calling back into Python from C++.
struct PyLockGIL
{
    PyLockGIL()
    : gstate(PyGILState_Ensure())
    {
    }

    ~PyLockGIL()
    {
        PyGILState_Release(gstate);
    }

    PyLockGIL(const PyLockGIL&) = delete;
    PyLockGIL& operator=(const PyLockGIL&) = delete;

    PyGILState_STATE gstate;
};
If you are calling into C++ for any length of time you can also relinquish the GIL:
struct PyRelinquishGIL
{
    PyRelinquishGIL()
    : _thread_state(PyEval_SaveThread())
    {
    }

    ~PyRelinquishGIL()
    {
        PyEval_RestoreThread(_thread_state);
    }

    PyRelinquishGIL(const PyRelinquishGIL&) = delete;
    PyRelinquishGIL& operator=(const PyRelinquishGIL&) = delete;

    PyThreadState* _thread_state;
};
Our code is multi-threaded and this approach works well.
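For example, the callback from the question could use the RAII guard like this (a sketch; Wrapper::myCallback and _pyObj are taken from the question above):
void Wrapper::myCallback()
{
    PyLockGIL lock;   // PyGILState_Ensure() now, PyGILState_Release() on scope exit
    PyObject *result = PyObject_CallMethod(_pyObj, "callback", nullptr);
    if (result)
        Py_DECREF(result);
}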
I have some C code that calls a Python function. This Python function accepts an address and uses WINFUNCTYPE to eventually convert it to a function that Python can call. The C function sent as a parameter to the Python function will eventually call another Python function. It is this last step that causes a crash. So, in short, I go from C -> Python -> C -> Python, and the last C -> Python causes a crash. I've been trying to understand the problem, but I have been unable to.
Can someone point out my problem?
C code compiled with Visual Studio 2010 and run with the args "c:\...\crash.py" and "func1":
#include <stdlib.h>
#include <stdio.h>
#include <Python.h>
PyObject* py_lib_mod_dict; //borrowed
void __stdcall cfunc1()
{
    PyObject* py_func;
    PyObject* py_ret;
    int size;
    PyGILState_STATE gil_state;
    gil_state = PyGILState_Ensure();
    printf("Hello from cfunc1!\n");
    size = PyDict_Size(py_lib_mod_dict);
    printf("The dictionary has %d items!\n", size);
    printf("Calling with GetItemString\n");
    py_func = PyDict_GetItemString(py_lib_mod_dict, "func2"); //fails here when cfunc1 is called via callback... will not even go to the next line!
    printf("Done with GetItemString\n");
    py_ret = PyObject_CallFunction(py_func, 0);
    if (py_ret)
    {
        printf("PyObject_CallFunction from cfunc1 was successful!\n");
        Py_DECREF(py_ret);
    }
    else
        printf("PyObject_CallFunction from cfunc1 failed!\n");
    printf("Goodbye from cfunc1!\n");
    PyGILState_Release(gil_state);
}
int wmain(int argc, wchar_t** argv)
{
    PyObject* py_imp_str;
    PyObject* py_imp_handle;
    PyObject* py_imp_dict; //borrowed
    PyObject* py_imp_load_source; //borrowed
    PyObject* py_dir; //stolen
    PyObject* py_lib_name; //stolen
    PyObject* py_args_tuple;
    PyObject* py_lib_mod;
    PyObject* py_func;
    PyObject* py_ret;
    Py_Initialize();
    //import our python script
    py_dir = PyUnicode_FromWideChar(argv[1], wcslen(argv[1]));
    py_imp_str = PyString_FromString("imp");
    py_imp_handle = PyImport_Import(py_imp_str);
    py_imp_dict = PyModule_GetDict(py_imp_handle); //borrowed
    py_imp_load_source = PyDict_GetItemString(py_imp_dict, "load_source"); //borrowed
    py_lib_name = PyUnicode_FromWideChar(argv[2], wcslen(argv[2]));
    py_args_tuple = PyTuple_New(2);
    PyTuple_SetItem(py_args_tuple, 0, py_lib_name); //stolen
    PyTuple_SetItem(py_args_tuple, 1, py_dir); //stolen
    py_lib_mod = PyObject_CallObject(py_imp_load_source, py_args_tuple);
    py_lib_mod_dict = PyModule_GetDict(py_lib_mod); //borrowed
    printf("Calling cfunc1 from main!\n");
    cfunc1();
    py_func = PyDict_GetItem(py_lib_mod_dict, py_lib_name);
    py_ret = PyObject_CallFunction(py_func, "(I)", &cfunc1);
    if (py_ret)
    {
        printf("PyObject_CallFunction from wmain was successful!\n");
        Py_DECREF(py_ret);
    }
    else
        printf("PyObject_CallFunction from wmain failed!\n");
    Py_DECREF(py_imp_str);
    Py_DECREF(py_imp_handle);
    Py_DECREF(py_args_tuple);
    Py_DECREF(py_lib_mod);
    Py_Finalize();
    fflush(stderr);
    fflush(stdout);
    return 0;
}
Python code:
from ctypes import *

def func1(cb):
    print "Hello from func1!"
    cb_proto = WINFUNCTYPE(None)
    print "C callback: " + hex(cb)
    call_me = cb_proto(cb)
    print "Calling callback from func1."
    call_me()
    print "Goodbye from func1!"

def func2():
    print "Hello and goodbye from func2!"
Output:
Calling cfunc1 from main!
Hello from cfunc1!
The dictionary has 88 items!
Calling with GetItemString
Done with GetItemString
Hello and goodbye from func2!
PyObject_CallFunction from cfunc1 was successful!
Goodbye from cfunc1!
Hello from func1!
C callback: 0x1051000
Calling callback from func1.
Hello from cfunc1!
The dictionary has 88 items!
Calling with GetItemString
PyObject_CallFunction from wmain failed!
I added a PyErr_Print() to the end and this was the result:
Traceback (most recent call last):
File "C:\Programming\crash.py", line 9, in func1
call_me()
WindowsError: exception: access violation writing 0x0000000C
EDIT: Fixed a bug that abarnert pointed out. Output is unaffected.
EDIT: Added in the code that resolved the bug (acquiring the GIL lock in cfunc1). Thanks again abarnert.
The problem is this code:
py_func = PyDict_GetItemString(py_lib_mod_dict, "func2"); //fails here when cfunc1 is called via callback... will not even go to the next line!
printf("Done with GetItemString\n");
py_ret = PyObject_CallFunction(py_func, 0);
Py_DECREF(py_func);
As the docs say, PyDict_GetItemString returns a borrowed reference. So, the first time you call here, you borrow the reference, and decref it, causing it to be destroyed. The next time you call, you get back garbage, and try to call it.
So, to fix it, just remove the Py_DECREF(py_func) (or add Py_INCREF(py_func) after the py_func = line).
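In other words, the fixed fragment would look something like this (a sketch of the Py_INCREF variant):
py_func = PyDict_GetItemString(py_lib_mod_dict, "func2");  // borrowed reference
Py_INCREF(py_func);   // take our own reference before using it
printf("Done with GetItemString\n");
py_ret = PyObject_CallFunction(py_func, 0);
// ...
Py_DECREF(py_func);   // release only the reference we took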
Actually, you will usually get back a special "dead" object, so you can test this pretty easily: put a PyObject_Print(py_func, stdout, 0) after the py_func = line and after the Py_DECREF line, and you'll probably see something like <function func2 at 0x10b9f1230> the first time and <refcnt 0 at 0x10b9f1230> the second and third times (you won't see the fourth, because it'll crash before you get there).
I don't have a Windows box handy, but changing wmain, wchar_t, PyUnicode_FromWideChar, WINFUNCTYPE, etc. to main, char, PyString_FromString, CFUNCTYPE, etc., I was able to build and run your code, and I get a crash in the same place… and the fix works.
Also… shouldn't you be holding the GIL inside cfunc1? I don't often write code like this, so maybe I'm wrong. And I don't get a crash with the code as-is. Obviously, spawning a thread to run cfunc1 does crash, and PyGILState_Ensure/Release solves that crash… but that doesn't prove you need anything in the single-threaded case. So maybe this isn't relevant… but if you get another crash after fixing the first one (in the threaded case, mine looked like Fatal Python error: PyEval_SaveThread: NULL tstate), look into this.
By the way, if you're new to Python extending and embedding: A huge number of unexplained crashes are, like this one, caused by manual refcounting errors. That's the reason things like boost::python, etc. exist. It's not that it's impossible to get it right with the plain C API, just that it's so easy to get it wrong, and you will have to get used to debugging problems like this.
abarnert's answer provided the correct functions to call; however, the explanation bothered me, so I came home early and poked around some more.
Before I go into the explanation, I want to mention that when I say GIL, I strictly mean the mutex, semaphore, or whatever that the Global Interpreter Lock uses to do the thread synchronization. This does not include any other housekeeping that Python does before/after it acquires and releases the GIL.
Single-threaded programs do not initialize the GIL because you never call PyEval_InitThreads(); thus there is no GIL. Even if there were locking going on, it shouldn't matter because the program is single-threaded. However, the functions that acquire and release the GIL also do some extra housekeeping, like messing with the thread state, in addition to acquiring/releasing the GIL. The documentation on WINFUNCTYPE objects explicitly states that ctypes releases the GIL before making the jump to C. So when the C callback was called from Python, I suspect something like PyEval_SaveThread() is called (maybe in error, since from my understanding it's only supposed to be called in threaded operation). That would release the GIL (if it existed) and set the thread state to NULL; however, there's no GIL in single-threaded Python programs, so all it really does is set the thread state to NULL. This causes the majority of Python functions in the C callback to fail hard.
Really, the only benefit of calling PyGILState_Ensure/Release is to tell Python to set the thread state to something valid before running off and doing things. There's no GIL to acquire (it was never initialized, because I never called PyEval_InitThreads()).
To test my theory: in the main function I use PyThreadState_Swap(NULL) to grab a copy of the thread state object. I restore it during the callback and everything works fine. If I keep the thread state at NULL, I get pretty much the same access violation even without doing a Python -> C callback. Inside cfunc1, I restore the thread state and there are no more problems with cfunc1 itself during the Python -> C callback.
There is an issue when cfunc1 returns into Python code, but that's probably because I messed with the thread state and the WINFUNCTYPE object is expecting something totally different. If you keep the thread state without setting it back to NULL when returning, Python just sits there and does nothing. If you do set it back to NULL, it crashes. However, it does successfully execute cfunc1, so I'm not sure I care too much.
I may eventually go poke around in the Python source code to be 100% sure, but I'm sure enough to be satisfied.
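For anyone who wants to reproduce the experiment, the relevant changes to the question's code are roughly these (a sketch showing only the two swap calls; what to restore when cfunc1 returns into the ctypes trampoline is the unresolved part described above):
static PyThreadState *saved_tstate;   // file scope, like py_lib_mod_dict

// in wmain, before handing &cfunc1 to Python:
saved_tstate = PyThreadState_Swap(NULL);   // grab the thread state and detach it

// at the top of cfunc1, instead of PyGILState_Ensure():
PyThreadState_Swap(saved_tstate);   // restore it so PyDict_GetItemString/PyObject_CallFunction work again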