I am embedding Python 3.2 in a C++ application and I have several sub-interpreters that run at various times in the program (created by Py_NewInterpreter). They acquire and release the GIL at various times, but I have run into a problem when I want to destroy one of the sub-interpreters.
To destroy a sub-interpreter, you have to hold the GIL. So I do this:
PyEval_AcquireThread(threadstate);
Then I destroy the interpreter with
Py_EndInterpreter(threadstate);
And you would think it would release the GIL because the thing that held it was destroyed. However, the documentation for Py_EndInterpreter says:
The given thread state must be the current thread state. See the discussion of thread states below. When the call returns, the current thread state is NULL. (The global interpreter lock must be held before calling this function and is still held when it returns.)
So: I have to hold the GIL when I destroy a sub-interpreter, destroying the sub-interpreter sets the current thread state to NULL, and only the thread that acquired the GIL can release it. How, then, do I release the GIL after destroying a sub-interpreter?
What happens if you call PyEval_ReleaseLock() directly after you call Py_EndInterpreter()? That's what the docs tell you to do anyway. :)
My program consists of a main core function and a thread, started inside that function, that performs some tasks (checking order status and updating the ledger). In my core program, I use objects of multiple classes hundreds of times, and I need my thread to be able to make changes and add data to those objects, which are shared with the core function. In that thread, I have implemented a thread lock to make sure everything runs smoothly. My question is whether I need to take the lock every time I use the objects in the core function, or whether it is sufficient to lock the resources only in the thread.
My apologies in advance for not sharing the code; I can't.
The effect of holding a lock (i.e. having acquired and not yet released it) is that any other thread that tries to acquire the same lock will be blocked until the first thread has released the lock. In other words, at any given time at most one thread can hold the lock.
If only one thread ever acquires the lock, nothing is achieved. To protect the shared objects, every thread that touches them must acquire the same lock, so yes: the core function must lock around its accesses too, not just the worker thread.
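A minimal sketch of that discipline, using a hypothetical shared counter standing in for the question's ledger objects (the names here are illustrative only). A read-modify-write like `balance += 1` is not atomic in Python, so the lock must be taken on both sides:

```python
import threading

# Illustrative stand-in for the shared ledger in the question.
balance = 0
balance_lock = threading.Lock()

def worker():
    global balance
    for _ in range(100_000):
        with balance_lock:          # the worker thread locks...
            balance += 1

t = threading.Thread(target=worker)
t.start()

# ...and the core code must take the SAME lock around its own
# updates; locking only inside the worker would not protect these.
for _ in range(100_000):
    with balance_lock:
        balance += 1

t.join()
assert balance == 200_000
```

If the core function skipped the `with balance_lock:` lines, the two threads could interleave their read-modify-write steps and lose updates, even though the worker dutifully locks.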
The deprecation of Python's PyEval_ReleaseLock has introduced a problem in our codebase: we want to terminate a Python interpreter from a C callback function using Py_EndInterpreter.
To do that, the Python docs say you must hold the GIL when calling this function:
void Py_EndInterpreter(PyThreadState *tstate)
Destroy the (sub-)interpreter represented by the given thread state. The given thread state must be the current thread state. See the discussion of thread states below. When the call returns, the current thread state is NULL. All thread states associated with this interpreter are destroyed. (The global interpreter lock must be held before calling this function and is still held when it returns.)
Py_FinalizeEx() will destroy all sub-interpreters that haven’t been explicitly destroyed at that point.
Great! So we call PyEval_RestoreThread to restore our thread state to the thread we're about to terminate, and then call Py_EndInterpreter.
// Acquire the GIL
PyEval_RestoreThread(thread);
// Tear down the interpreter.
Py_EndInterpreter(thread);
// Now what? We still hold the GIL and we no longer have a valid thread state.
// Previously we did PyEval_ReleaseLock here, but that is now deprecated.
The documentation for PyEval_ReleaseLock says that we should either use PyEval_SaveThread or PyEval_ReleaseThread.
PyEval_ReleaseThread's documentation says that the input thread state must not be NULL. Okay, but we can't pass in the recently deleted thread state.
PyEval_SaveThread will hit a debug assertion if you try to call it after calling Py_EndInterpreter, so that's not an option either.
So, we've currently implemented a hack to get around this issue - we save the thread state of the thread that calls Py_InitializeEx in a global variable, and swap to it after calling Py_EndInterpreter.
// Acquire the GIL
PyEval_RestoreThread(thread);
// Tear down the interpreter.
Py_EndInterpreter(thread);
// Swap to the main thread state.
PyThreadState_Swap(g_init.thread_state_);
PyEval_SaveThread(); // Release the GIL. Probably.
What's the proper solution here? It seems that embedded Python is an afterthought for this API.
Similar question: PyEval_InitThreads in Python 3: How/when to call it? (the saga continues ad nauseam)
I have a C++ program that uses the C API to call a Python library of mine.
Both the Python library AND the C++ code are multithreaded.
In particular, one thread of the C++ program instantiates a Python object that inherits from threading.Thread. I need all my C++ threads to be able to call methods on that object.
From my very first tries (I naively instantiate the object from the main thread, wait some time, then call the method) I noticed that the Python thread associated with the newly created object stops executing as soon as control returns to the C++ program.
If the execution stays with Python (for example, if I call PyRun_SimpleString("time.sleep(5)");) the execution of the Python thread continues in background and everything works fine until the wait ends and the execution goes back to C++.
I am evidently doing something wrong. What should I do to make both my C++ and Python multithreaded and capable of working with each other nicely? I have no previous experience in the field so please don't assume anything!
A correct order of steps to perform what you are trying to do is:
In the main thread:
Initialize Python using Py_Initialize*.
Initialize Python threading support using PyEval_InitThreads().
Start the C++ thread.
At this point, the main thread still holds the GIL.
In a C++ thread:
Acquire the GIL using PyGILState_Ensure().
Create a new Python thread object and start it.
Release the GIL using PyGILState_Release().
Sleep, do something useful or exit the thread.
Because the main thread holds the GIL, this thread will be waiting to acquire the GIL. If the main thread calls the Python API it may release the GIL from time to time allowing the Python thread to execute for a little while.
Back in the main thread:
Release the GIL, enabling threads to run using PyEval_SaveThread()
Before attempting to use other Python calls, reacquire the GIL using PyEval_RestoreThread()
I suspect that you are missing the last step - releasing the GIL in the main thread, allowing the Python thread to execute.
I have a small but complete example that does exactly that at this link.
You probably do not release the Global Interpreter Lock when you call back from Python's threading.Thread.
Well, if you are using the bare Python C API you have some documentation here about how to release/acquire the GIL. But since you are using C++, I must warn you that it might break down if an exception is thrown in your C++ code. See here.
In general, any C++ function of yours that runs for a long time should release the GIL, and re-acquire it whenever it uses the Python C API again.
In a multi-threaded Python process I have a number of non-daemon threads, by which I mean threads which keep the main process alive even after the main thread has exited / stopped.
My non-daemon threads hold weak references to certain objects in the main thread, but when the main thread ends (control falls off the bottom of the file) these objects do not appear to be garbage collected, and my weak reference finaliser callbacks don't fire.
Am I wrong to expect the main thread to be garbage collected? I would have expected that the thread-locals would be deallocated (i.e. garbage collected)...
What have I missed?
Supporting materials
Output from pprint.pprint( threading.enumerate() ) showing the main thread has stopped while others soldier on.
[<_MainThread(MainThread, stopped 139664516818688)>,
<LDQServer(testLogIOWorkerThread, started 139664479889152)>,
<_Timer(Thread-18, started 139663928870656)>,
<LDQServer(debugLogIOWorkerThread, started 139664437925632)>,
<_Timer(Thread-17, started 139664463103744)>,
<_Timer(Thread-19, started 139663937263360)>,
<LDQServer(testLogIOWorkerThread, started 139664471496448)>,
<LDQServer(debugLogIOWorkerThread, started 139664446318336)>]
And since someone always asks about the use-case...
My network service occasionally misses its real-time deadlines (which causes a total system failure in the worst case). This turned out to be because logging of (important) DEBUG data would block whenever the file-system has a tantrum. So I am attempting to retrofit a number of established specialised logging libraries to defer blocking I/O to a worker thread.
Sadly the established usage pattern is a mix of short-lived logging channels which log overlapping parallel transactions, and long-lived module-scope channels which are never explicitly closed.
So I created a decorator which defers method calls to a worker thread. The worker thread is non-daemon to ensure that all (slow) blocking I/O completes before the interpreter exits, and holds a weak reference to the client-side (where method calls get enqueued). When the client-side is garbage collected the weak reference's callback fires and the worker thread knows no more work will be enqueued, and so will exit at its next convenience.
This seems to work fine in all but one important use-case: when the logging channel is in the main thread. When the main thread stops / exits the logging channel is not finalised, and so my (non-daemon) worker thread lives on keeping the entire process alive.
It's a bad idea for your main thread to end without calling join on all non-daemon threads, or to make any assumptions about what happens if you don't.
If you don't do anything very unusual, CPython (at least 2.0-3.3) will cover for you by automatically calling join on all non-daemon threads as part of _MainThread._exitfunc. This isn't actually documented, so you shouldn't rely on it, but it's what's happening to you.
Your main thread hasn't actually exited at all; it's blocking inside its _MainThread._exitfunc trying to join some arbitrary non-daemon thread. Its objects won't be finalized until the atexit handler is called, which doesn't happen until after it finishes joining all non-daemon threads.
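This ordering can be observed directly. The sketch below (CPython-specific, undocumented behavior, as noted above) runs a child interpreter in which the main module falls off the end while a non-daemon thread is still sleeping; the thread's message appears before the atexit handler fires, showing that the interpreter joins non-daemon threads first:

```python
import subprocess
import sys
import textwrap

# Child script: main "ends", a non-daemon thread keeps running,
# and an atexit handler reports when exit processing reaches it.
script = textwrap.dedent("""
    import atexit, threading, time
    atexit.register(lambda: print("atexit"))
    def work():
        time.sleep(0.2)
        print("thread done")
    threading.Thread(target=work).start()
    print("main falls off the end")
""")

out = subprocess.run(
    [sys.executable, "-c", script],
    capture_output=True, text=True,
).stdout.splitlines()
print(out)
```

On CPython the observed order is "main falls off the end", then "thread done", then "atexit": the main thread is still blocked joining the worker when its code is long finished, and only afterwards do atexit callbacks run.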
Meanwhile, if you avoid this (e.g., by using thread/_thread directly, or by detaching the main thread from its object or forcing it into a normal Thread instance), what happens? It isn't defined. The threading module makes no reference to it at all, but in CPython 2.0-3.3, and likely in any other reasonable implementation, it falls to the thread/_thread module to decide. And, as the docs say:
When the main thread exits, it is system defined whether the other threads survive. On SGI IRIX using the native thread implementation, they survive. On most other systems, they are killed without executing try ... finally clauses or executing object destructors.
So, if you manage to avoid joining all of your non-daemon threads, you have to write code that can handle both having them hard-killed like daemon threads, and having them continue running until exit.
If they do continue running, at least in CPython 2.7 and 3.3 on POSIX systems, the main thread's OS-level thread handle, and the various higher-level Python objects representing it, may still be retained, and never get cleaned up by the GC.
On top of that, even if everything were released, you can't rely on the GC ever deleting anything. If your code depends on deterministic GC, there are many cases you can get away with it in CPython (although your code will then break in PyPy, Jython, IronPython, etc.), but at exit time is not one of them. CPython can, and will, leak objects at exit time and let the OS sort 'em out. (This is why writable files that you never close may lose the last few writes—the __del__ method never gets called, and therefore there's nobody to tell them to flush, and at least on POSIX the underlying FILE* doesn't automatically flush either.)
If you want something to be cleaned up when the main thread finishes, you have to use some kind of close function rather than relying on __del__, and you have to make sure it gets triggered via a with block around the main block of code, an atexit function, or some other mechanism.
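A minimal sketch of that pattern, with a hypothetical LogChannel standing in for the question's logging channel (the class and its names are illustrative, not from the original code): cleanup goes through an explicit close() registered with atexit, rather than relying on __del__:

```python
import atexit

class LogChannel:
    # Illustrative stand-in for a logging channel. Cleanup is an
    # explicit, idempotent close(); registering it with atexit
    # ensures it runs even when the main thread simply falls off
    # the bottom of the file without closing the channel.
    def __init__(self):
        self.closed = False
        atexit.register(self.close)

    def close(self):
        if not self.closed:
            self.closed = True
            # flush buffers / signal the worker thread to wind down here

channel = LogChannel()
channel.close()        # explicit close; calling it again is a no-op
assert channel.closed
```

The same close() also works as the exit half of a `with` block for the short-lived channels; the atexit registration is the safety net for the long-lived, never-explicitly-closed ones.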
One last thing:
I would have expected that the thread-locals would be deallocated (i.e. garbage collected)...
Do you actually have thread locals somewhere? Or do you just mean locals and/or globals that are only accessed in one thread?
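The distinction matters because threading.local gives each thread an independent namespace, whereas a plain global touched by one thread is not a "thread local" at all. A small sketch of the difference:

```python
import threading

# threading.local(): each thread sees its own copy of the attributes.
store = threading.local()
store.value = "main"

seen = []

def worker():
    # A new thread starts with an empty threading.local namespace;
    # it does not see the main thread's "value".
    seen.append(hasattr(store, "value"))
    store.value = "worker"
    seen.append(store.value)

t = threading.Thread(target=worker)
t.start()
t.join()

assert seen == [False, "worker"]
assert store.value == "main"   # the worker's assignment did not leak back
```

A module-level global accessed from only one thread, by contrast, is an ordinary object whose lifetime follows the usual module-teardown rules, not per-thread storage.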
I am trying to write a C++ class that calls Python methods of a class that does some I/O operations (file, stdout). The problem I have run into is that my class is called from different threads: sometimes the main thread, sometimes others. Obviously I tried to apply the standard approach for Python calls in multi-threaded native applications; basically everything revolves around PyEval_AcquireLock and PyEval_ReleaseLock, i.e. global locks. According to the documentation here, acquiring the lock from a thread that already holds it results in a deadlock, and that is exactly what I get when my class is called from the main thread, or from another thread that is blocking Python execution.
Python> Cfunc1() - a C++ function that creates threads internally, which lead to calls into "my class".
It gets stuck in PyEval_AcquireLock; obviously Python is already locked, i.e. it is waiting for the C++ Cfunc1 call to complete. Everything completes fine if I omit those locks. It also completes fine when the Python interpreter is ready for the next user command, i.e. when the thread makes its calls in the background rather than inside a native call.
I am looking for a workaround. I need to distinguish whether or not taking the global lock is allowed, i.e. whether Python is not already locked and is ready to receive the next command. I tried PyGILState_Ensure; unfortunately it hangs.
Any known API or solution for this ?
(Python 2.4)
Unless you have wrapped your C++ code quite peculiarly, when any Python thread calls into your C++ code, the GIL is held. You may release it in your C++ code (if you want to do some time-consuming task that doesn't require any Python interaction), and then will have to acquire it again when you want to do any Python interaction -- see the docs: if you're just using the good old C API, there are macros for that, and the recommended idiom is
Py_BEGIN_ALLOW_THREADS
...Do some blocking I/O operation...
Py_END_ALLOW_THREADS
the docs explain:
The Py_BEGIN_ALLOW_THREADS macro opens a new block and declares a hidden local variable; the Py_END_ALLOW_THREADS macro closes the block. Another advantage of using these two macros is that when Python is compiled without thread support, they are defined empty, thus saving the thread state and GIL manipulations.
So you just don't have to acquire the GIL (and shouldn't) until after you've explicitly released it (ideally with that macro) and need to interact with Python in any way again. (Where the docs say "some blocking I/O operation", it could actually be any long-running operation with no Python interaction whatsoever).