Is there a Python equivalent of vfork in Unix?

I'm reading APUE (Advanced Programming in the UNIX Environment) and came across this example:
#include "apue.h"
int globvar = 6; /* external variable in initialized data */
int main(void)
{
int var; /* automatic variable on the stack */
pid_t pid;
var = 88;
printf("before vfork\n"); /* we don′t flush stdio */
if ((pid = vfork()) < 0) {
err_sys("vfork error");
} else if (pid == 0) { /* child */
globvar++; /* modify parent′s variables */
var++;
_exit(0); /* child terminates */
}
/* parent continues here */
printf("pid = %ld, glob = %d, var = %d\n", (long)getpid(), globvar, var);
exit()
}
The book says that vfork() creates a child process without copying the parent's address space, and that the parent waits until the child calls exit or exec.
Is there anything similar in Python that gives this kind of low-level control? How can this be achieved in Python, if it's possible at all?

vfork is a mostly-obsolete optimization of fork, intended solely for uses where it is immediately followed by an exec. It was designed back when fork didn't use copy-on-write, and has been rendered almost entirely pointless by copy-on-write.
Your use of vfork is undefined behavior; modifying variables like globvar or var in the child is not allowed. Almost the only thing the child is allowed to do is call one of the exec* functions. The details are in the man page.
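If what you want is vfork's one legitimate use, spawning a child that immediately execs, Python already covers it at a higher level: subprocess, and on Python 3.8+ os.posix_spawn (whether the C library implements that with vfork, clone, or plain fork is an internal detail). A minimal sketch:

import os

# Spawn-and-exec without an explicit fork in Python code.
pid = os.posix_spawn('/usr/bin/echo', ['echo', 'hello', 'world'], os.environ)
os.waitpid(pid, 0)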
If your goal is to share memory between Python processes, you should do that with the multiprocessing module and multiprocessing.sharedctypes.
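For example, here is a rough equivalent of the APUE program using shared memory (a sketch; multiprocessing.Value is the top-level shortcut for multiprocessing.sharedctypes):

import multiprocessing

def child(globvar, var):
    # These writes are visible to the parent because both values live
    # in shared memory rather than in a copied address space.
    globvar.value += 1
    var.value += 1

if __name__ == '__main__':
    globvar = multiprocessing.Value('i', 6)
    var = multiprocessing.Value('i', 88)
    p = multiprocessing.Process(target=child, args=(globvar, var))
    p.start()
    p.join()
    print("glob = %d, var = %d" % (globvar.value, var.value))  # glob = 7, var = 89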
If your goal is to call vfork, go ahead and call it with ctypes, and watch your program immediately fall apart because it's impossible to call vfork safely from Python:
$ cat asdf.py
import ctypes
import os
lib = ctypes.CDLL('libc.so.6')
vfork = lib.vfork
if not vfork():
    os.execv('/usr/bin/echo', ['echo', 'hello', 'world'])
print("Should print in parent, but doesn't due to memory corruption.")
$ python3.6 asdf.py
hello world
Segmentation fault (core dumped)

Related

PyGILState_Ensure() Causing Deadlock

I'm writing a Python extension in C++, wrapping a third-party library I don't control. That library creates a thread Python knows nothing about and, from that thread, calls a C++ callback I provide. I want that callback to call a Python function, but I get a deadlock with the approach I read from the docs. Here's my interpretation of them:
void Wrapper::myCallback()
{
    PyGILState_STATE gstate = PyGILState_Ensure();
    PyObject *result = PyObject_CallMethod(_pyObj, "callback", nullptr);
    if (result) Py_DECREF(result);
    PyGILState_Release(gstate);
}
My code does nothing else related to threads, though I've tried a number of other things that do. For example, I tried calling PyEval_InitThreads(), though it's not obvious where that call should be made in an extension; I put it in the PyMODINIT_FUNC. These attempts have all led to deadlocks, crashes, or mysterious fatal errors from Python, e.g., PyEval_ReleaseThread: wrong thread state.
This is on Linux with Python 3.6.1. Any ideas how I can get this "simple" callback to work?
Likely Culprit
I didn't realize that in another thread, the library was in a busy/wait loop waiting on the callback's thread. In gdb, info threads made this apparent. The only solution I can see is to skip those particular calls to the callback; I don't see a way to make them safe, given the busy/wait loop. In this case, that's acceptable, and doing so eliminates the deadlock.
Also, it appears that I do need to call PyEval_InitThreads() before any of this. In a C++ extension, though, it's not clear where that call should go. One of the replies suggested doing it indirectly in Python by creating and deleting a throwaway threading.Thread. That didn't seem to fix it, triggering instead a Fatal Python error: take_gil: NULL tstate, which I think means there's still no GIL. My guess, based on this and the issue it refers to, is that PyEval_InitThreads() makes the current thread the main thread for the GIL. If that call is made in the short-lived throwaway thread, maybe that's the problem. Yeah, I'm only guessing, and I would appreciate an explanation from someone who doesn't have to.
This answer is only for Python >= 3.0.0. I don't know if it would work for earlier Pythons or not.
Wrap your C++ module in a Python module that looks something like this:
import threading
t = threading.Thread(target=lambda: None, daemon=True)
t.start()  # start(), not run(): run() would just call the target in this thread
t.join()
del t
from your_cpp_module import *
From my reading of the documentation, that should force threading to be initialized before your module is imported. Then the callback function you have written up there should work.
I'm less confident of this working, but your module init function could instead do this:
if (!PyEval_ThreadsInitialized())
{
    PyEval_InitThreads();
}
That should work because, if PyEval_ThreadsInitialized() is false, your module init function is being executed by the only Python thread in existence, and holding the GIL is the right thing to do then.
These are guesses on my part. I've never done anything like this as is evidenced by my clueless comments on your question. But from my reading of the documentation, both of these approaches should work.
I'm new to StackOverflow, but I've been working on embedding python in a multithreaded C++ system for the last few days and run into a fair number of situations where the code has deadlocked itself. Here's the solution that I've been using to ensure thread safety:
class PyContextManager {
private:
    static volatile bool python_threads_initialized;
public:
    static std::mutex pyContextLock;

    PyContextManager(/* if python_threads_initialized is false, call PyEval_InitThreads and set the variable to true */);
    ~PyContextManager();
};
#define PY_SAFE_CONTEXT(expr) \
{ \
    std::unique_lock<std::mutex> lock(PyContextManager::pyContextLock); \
    PyGILState_STATE gstate; \
    gstate = PyGILState_Ensure(); \
    expr; \
    PyGILState_Release(gstate); \
}
The boolean and the mutex are initialized in the .cpp file. Note that the lock object must be named (std::unique_lock<std::mutex> lock(...)); an unnamed temporary would lock and release the mutex within the same statement, guarding nothing. I've noticed that without the mutex, the PyGILState_Ensure() call can deadlock a thread. Likewise, invoking PY_SAFE_CONTEXT within the expr of another PY_SAFE_CONTEXT will hang the thread while it waits on the (non-recursive) mutex.
Using these functions, I believe your callback function would look like this:
void Wrapper::myCallback()
{
    PyContextManager cm;  // note: "PyContextManager cm();" would declare a function
    PY_SAFE_CONTEXT(
        PyObject *result = PyObject_CallMethod(_pyObj, "callback", nullptr);
        if (result) Py_DECREF(result);
    );
}
If you don't believe that your code is likely to ever need more than one multithreaded call to Python, you can easily expand the macro and take the static variables out of a class structure. This is just how I've handled an unknown thread starting and determining whether it needs to start up the system, and dodging the tedium of writing out the GIL functions repeatedly.
Hope this helps!
I have wrapped C++ observers in Python. If you are using boost then you can call PyEval_InitThreads() in BOOST_PYTHON_MODULE:
BOOST_PYTHON_MODULE(eapipy)
{
    boost::shared_ptr<Python::InitialisePythonGIL> gil(new Python::InitialisePythonGIL());
    ....
}
Then I use a class to control calling back into Python from C++.
struct PyLockGIL
{
    PyLockGIL()
    : gstate(PyGILState_Ensure())
    {
    }

    ~PyLockGIL()
    {
        PyGILState_Release(gstate);
    }

    PyLockGIL(const PyLockGIL&) = delete;
    PyLockGIL& operator=(const PyLockGIL&) = delete;

    PyGILState_STATE gstate;
};
If you are calling into C++ for any length of time you can also relinquish the GIL:
struct PyRelinquishGIL
{
    PyRelinquishGIL()
    : _thread_state(PyEval_SaveThread())
    {
    }

    ~PyRelinquishGIL()
    {
        PyEval_RestoreThread(_thread_state);
    }

    PyRelinquishGIL(const PyRelinquishGIL&) = delete;
    PyRelinquishGIL& operator=(const PyRelinquishGIL&) = delete;

    PyThreadState* _thread_state;
};
Our code is multi-threaded and this approach works well.

Cannot create more than 10 mqueues

I am using a python module that wraps the posix real time extensions to get MessageQueues.
This is the python code
#!/usr/bin/env python
import uuid
import posix_ipc

def spawn():
    return posix_ipc.MessageQueue("/%s" % uuid.uuid4(), flags=posix_ipc.O_CREAT)

i = 0
while True:
    i += 1
    spawn()
    print(i)
This will create about 10 mqs before reporting OSError: This process already has the maximum number of files open
I looked into mq limits and rlimit and checked that they are all set very high. E.g.
fs.file-max = 2097152
fs.mqueue.msg_max = 1000
fs.mqueue.queues_max = 1000
And even for privileged users it will still only create about 10 queues.
The equivalent C using the realtime extensions directly is as follows
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <fcntl.h>      /* for the O_* flags used by mq_open */
#include <sys/stat.h>
#include <sys/types.h>
#include <errno.h>
#include <mqueue.h>

#define handle_error(msg) \
    do { perror(msg); exit(EXIT_FAILURE); } while (0)

int main(int argc, char **argv)
{
    mqd_t mq;
    struct mq_attr attr;

    /* initialize the queue attributes */
    attr.mq_flags = 0;
    attr.mq_maxmsg = 10;
    attr.mq_msgsize = 1024;
    attr.mq_curmsgs = 0;

    /* create message queues until mq_open fails */
    int count = 0;
    char name[16];      /* room for "/" plus a large counter */
    while (1) {
        sprintf(name, "/%d", count);
        mq = mq_open(name, O_CREAT | O_RDONLY, 0644, &attr);
        if (mq == (mqd_t) -1)
            handle_error("mq_open");
        count++;
    }
    return 0;
}
(compile with gcc test.c -lrt; note that -lrt must come after the source file)
But this only gets me 20 mqs open at one time. Realistically I want to have a few hundred maybe a thousand open at a time.
Anyone got any ideas or suggestions?
EDIT: Better error checking in the C version. It still maxes out.
The parameter fs.mqueue.queues_max is only the global number of message queues allowed on the system. The limit you are hitting is the number of message queues per process. The mq_open man page says about this error code:
[EMFILE] Too many message queue descriptors or file descriptors
are currently in use by this process.
You should normally be able to read (and set) that per-process limit with getrlimit/setrlimit. The man page for rlimit says:
RLIMIT_MSGQUEUE (Since Linux 2.6.8)
Specifies the limit on the number of bytes that can be allocated for POSIX message queues for the real user ID of the calling process. This limit is enforced for mq_open(3). Each message queue that the user creates counts (until it is removed) against this limit according to the formula:
bytes = attr.mq_maxmsg * sizeof(struct msg_msg *) +
attr.mq_maxmsg * attr.mq_msgsize
where attr is the mq_attr structure specified as the fourth argument to mq_open(3).
The first addend in the formula, which includes sizeof(struct msg_msg *) (4 bytes on Linux/i386), ensures that the user cannot create an unlimited number of zero-length messages (such messages nevertheless each consume some system memory for bookkeeping overhead).
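Plugging numbers into that formula suggests why the Python example stops at about ten queues: posix_ipc's default attributes are 10 messages of 8192 bytes each, and the default RLIMIT_MSGQUEUE on Linux is 819200 bytes (both values assumed here for illustration; check your system):

# per-queue charge under the formula above, assuming 64-bit pointers
per_queue = 10 * 8 + 10 * 8192
print(819200 // per_queue)  # -> 9, i.e. roughly ten queues per user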
You can also read the current value and multiply it by what you need:
struct rlimit limit;
getrlimit(RLIMIT_MSGQUEUE, &limit);
limit.rlim_cur *= 20;
limit.rlim_max *= 20;
setrlimit(RLIMIT_MSGQUEUE, &limit);
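The same adjustment can be made from Python with the resource module (a sketch; resource.RLIMIT_MSGQUEUE is Linux-only, and raising the hard limit requires privileges):

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_MSGQUEUE)
# Raise the soft limit up to the hard limit; raising the hard limit
# itself would require CAP_SYS_RESOURCE or root.
resource.setrlimit(resource.RLIMIT_MSGQUEUE, (hard, hard))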

Embedding Python in C: Error when attempting to call Python code in a C callback called by Python code

I have some C code that calls a Python function. That Python function accepts an address and uses WINFUNCTYPE to convert it into a function that Python can call. The C function passed as a parameter to the Python function will eventually call another Python function, and it is this last step that crashes. In short, I go from C -> Python -> C -> Python, and the last C -> Python transition causes the crash. I've been trying to understand the problem, but I have been unable to.
Can someone point out my problem?
C code compiled with Visual Studio 2010 and run with the args "c:\...\crash.py" and "func1":
#include <stdlib.h>
#include <stdio.h>
#include <Python.h>

PyObject* py_lib_mod_dict; //borrowed

void __stdcall cfunc1()
{
    PyObject* py_func;
    PyObject* py_ret;
    int size;
    PyGILState_STATE gil_state;

    gil_state = PyGILState_Ensure();
    printf("Hello from cfunc1!\n");
    size = PyDict_Size(py_lib_mod_dict);
    printf("The dictionary has %d items!\n", size);
    printf("Calling with GetItemString\n");
    py_func = PyDict_GetItemString(py_lib_mod_dict, "func2"); //fails here when cfunc1 is called via callback... will not even go to the next line!
    printf("Done with GetItemString\n");
    py_ret = PyObject_CallFunction(py_func, 0);
    if (py_ret)
    {
        printf("PyObject_CallFunction from cfunc1 was successful!\n");
        Py_DECREF(py_ret);
    }
    else
        printf("PyObject_CallFunction from cfunc1 failed!\n");
    printf("Goodbye from cfunc1!\n");
    PyGILState_Release(gil_state);
}

int wmain(int argc, wchar_t** argv)
{
    PyObject* py_imp_str;
    PyObject* py_imp_handle;
    PyObject* py_imp_dict; //borrowed
    PyObject* py_imp_load_source; //borrowed
    PyObject* py_dir; //stolen
    PyObject* py_lib_name; //stolen
    PyObject* py_args_tuple;
    PyObject* py_lib_mod;
    PyObject* py_func;
    PyObject* py_ret;

    Py_Initialize();

    //import our python script
    py_dir = PyUnicode_FromWideChar(argv[1], wcslen(argv[1]));
    py_imp_str = PyString_FromString("imp");
    py_imp_handle = PyImport_Import(py_imp_str);
    py_imp_dict = PyModule_GetDict(py_imp_handle); //borrowed
    py_imp_load_source = PyDict_GetItemString(py_imp_dict, "load_source"); //borrowed
    py_lib_name = PyUnicode_FromWideChar(argv[2], wcslen(argv[2]));
    py_args_tuple = PyTuple_New(2);
    PyTuple_SetItem(py_args_tuple, 0, py_lib_name); //stolen
    PyTuple_SetItem(py_args_tuple, 1, py_dir); //stolen
    py_lib_mod = PyObject_CallObject(py_imp_load_source, py_args_tuple);
    py_lib_mod_dict = PyModule_GetDict(py_lib_mod); //borrowed

    printf("Calling cfunc1 from main!\n");
    cfunc1();

    py_func = PyDict_GetItem(py_lib_mod_dict, py_lib_name);
    py_ret = PyObject_CallFunction(py_func, "(I)", &cfunc1);
    if (py_ret)
    {
        printf("PyObject_CallFunction from wmain was successful!\n");
        Py_DECREF(py_ret);
    }
    else
        printf("PyObject_CallFunction from wmain failed!\n");

    Py_DECREF(py_imp_str);
    Py_DECREF(py_imp_handle);
    Py_DECREF(py_args_tuple);
    Py_DECREF(py_lib_mod);
    Py_Finalize();
    fflush(stderr);
    fflush(stdout);
    return 0;
}
Python code:
from ctypes import *

def func1(cb):
    print "Hello from func1!"
    cb_proto = WINFUNCTYPE(None)
    print "C callback: " + hex(cb)
    call_me = cb_proto(cb)
    print "Calling callback from func1."
    call_me()
    print "Goodbye from func1!"

def func2():
    print "Hello and goodbye from func2!"
Output:
Calling cfunc1 from main!
Hello from cfunc1!
The dictionary has 88 items!
Calling with GetItemString
Done with GetItemString
Hello and goodbye from func2!
PyObject_CallFunction from cfunc1 was successful!
Goodbye from cfunc1!
Hello from func1!
C callback: 0x1051000
Calling callback from func1.
Hello from cfunc1!
The dictionary has 88 items!
Calling with GetItemString
PyObject_CallFunction from wmain failed!
I added a PyErr_Print() to the end and this was the result:
Traceback (most recent call last):
File "C:\Programming\crash.py", line 9, in func1
call_me()
WindowsError: exception: access violation writing 0x0000000C
EDIT: Fixed a bug that abarnert pointed out. Output is unaffected.
EDIT: Added in the code that resolved the bug (acquiring the GIL lock in cfunc1). Thanks again abarnert.
The problem is this code:
py_func = PyDict_GetItemString(py_lib_mod_dict, "func2"); //fails here when cfunc1 is called via callback... will not even go to the next line!
printf("Done with GetItemString\n");
py_ret = PyObject_CallFunction(py_func, 0);
Py_DECREF(py_func);
As the docs say, PyDict_GetItemString returns a borrowed reference. So, the first time you call here, you borrow the reference, and decref it, causing it to be destroyed. The next time you call, you get back garbage, and try to call it.
So, to fix it, just remove the Py_DECREF(py_func) (or add Py_INCREF(py_func) after the py_func = line).
Actually, you will usually get back a special "dead" object, so you can test this pretty easily: put a PyObject_Print(py_func, stdout) after the py_func = line and after the Py_DECREF line, and you'll probably see something like <function func2 at 0x10b9f1230> the first time, <refcnt 0 at 0x10b9f1230> the second and third times (and you won't see the fourth, because it'll crash before you get there).
I don't have a Windows box handy, but changing wmain, wchar_t, PyUnicode_FromWideChar, WINFUNCTYPE, etc. to main, char, PyString_FromString, CFUNCTYPE, etc., I was able to build and run your code, and I get a crash in the same place… and the fix works.
Also… shouldn't you be holding the GIL inside cfunc1? I don't often write code like this, so maybe I'm wrong. And I don't get a crash with the code as-is. Obviously, spawning a thread to run cfunc1 does crash, and PyGILState_Ensure/Release solves that crash… but that doesn't prove you need anything in the single-threaded case. So maybe this isn't relevant… but if you get another crash after fixing the first one (in the threaded case, mine looked like Fatal Python error: PyEval_SaveThread: NULL tstate), look into this.
By the way, if you're new to Python extending and embedding: A huge number of unexplained crashes are, like this one, caused by manual refcounting errors. That's the reason things like boost::python, etc. exist. It's not that it's impossible to get it right with the plain C API, just that it's so easy to get it wrong, and you will have to get used to debugging problems like this.
abarnert's answer provided the correct functions to call, however the explanation bothered me so I came home early and poked around some more.
Before I go into the explanation, I want to mention that when I say GIL, I strictly mean the mutex, semaphore, or whatever that the Global Interpreter Lock uses to do the thread synchronization. This does not include any other housekeeping that Python does before/after it acquires and releases the GIL.
Single-threaded programs do not initialize the GIL because you never call PyEval_InitThreads(); thus there is no GIL. Even if there were locking going on, it shouldn't matter, because it's single-threaded. However, functions that acquire and release the GIL also do some funny stuff, like messing with the thread state, in addition to acquiring/releasing the GIL. The documentation on WINFUNCTYPE objects explicitly states that they release the GIL before making the jump to C. So when the C callback was called from Python, I suspect something like PyEval_SaveThread() was called (maybe in error, because from my understanding it's only supposed to be called in threaded operations). This would release the GIL (if it existed) and set the thread state to NULL; but there's no GIL in single-threaded Python programs, so all it really does is set the thread state to NULL. That causes the majority of Python functions in the C callback to fail hard.
Really the only benefit of calling PyGILState_Ensure/Release is to tell Python to set the thread state to something valid before running off and doing things. There's not a GIL to acquire (not initialized because I never called PyEval_InitThreads()).
To test my theory: in the main function, I use PyThreadState_Swap(NULL) to grab a copy of the thread state object. I restore it during the callback and everything works fine. If I keep the thread state at NULL, I get pretty much the same access violation, even without doing a Python -> C callback. Inside cfunc1, I restore the thread state, and there are no more problems in cfunc1 itself during the Python -> C callback.
There is an issue when cfunc1 returns into Python code, but that's probably because I messed with the thread state and the WINFUNCTYPE object is expecting something totally different. If you keep the thread state without setting it back to NULL on return, Python just sits there and does nothing. If you restore it back to NULL, it crashes. However, cfunc1 itself executes successfully, so I'm not sure I care too much.
I may eventually go poke around in the Python source code to be 100% sure, but I'm sure enough to be satisfied.

Python C extension for a function that sleeps

I'm writing a Python extension for the functions provided by a GPIO driver. I made progress pretty easily on the simple functions like set_bit() and clear_bit(). But now I need to implement wait_int(), which sleeps until an event is sensed on an input pin, and I'm not sure of the right way to orchestrate this between C and Python. Here's a stripped-down example of using the function in C:
main(int argc, char *argv[])
{
    int c;

    //some setup like testing port availability, clearing interrupts, etc
    ...
    while (1)
    {
        printf("**\n");
        c = wait_int(1); //sleeps until an interrupt occurs on chip 1
        if (c > 0) {
            printf("Event sense occured on Chip 1 bit %d\n", c);
            ++event_count;
        }
        else
            break;
    }
    printf("Event count = %05d\r", event_count);
    printf("\nExiting Now\n");
}
Do I just expose wait_int more or less directly and then write the Python equivalent of the infinite loop? There's also some debouncing that needs to be done; I've done it in C, but maybe it could move to the Python side.
You don't need to do anything on the Python side, you can just treat it as a synchronous function. On the C side, you just block until the event occurs, possibly allowing interrupts. For example, take a look at the implementation of the time.sleep function:
/* LICENSE: http://docs.python.org/license.html */

/* Implement floatsleep() for various platforms.
   When interrupted (or when another error occurs), return -1 and
   set an exception; else return 0. */

static int
floatsleep(double secs)
{
/* XXX Should test for MS_WINDOWS first! */
#if defined(HAVE_SELECT) && !defined(__BEOS__) && !defined(__EMX__)
    struct timeval t;
    double frac;
    frac = fmod(secs, 1.0);
    secs = floor(secs);
    t.tv_sec = (long)secs;
    t.tv_usec = (long)(frac*1000000.0);
    Py_BEGIN_ALLOW_THREADS
    if (select(0, (fd_set *)0, (fd_set *)0, (fd_set *)0, &t) != 0) {
#ifdef EINTR
        if (errno != EINTR) {
#else
        if (1) {
#endif
            Py_BLOCK_THREADS
            PyErr_SetFromErrno(PyExc_IOError);
            return -1;
        }
    }
    Py_END_ALLOW_THREADS
#elif defined(__WATCOMC__) && !defined(__QNX__)
...
All it does is use the select function to sleep for the given period of time. select is used so that if any signal is received (such as SIGINT from hitting Ctrl+C at the terminal), the system call is interrupted and control returns to Python.
Hence, your implementation can just call the C wait_int function. If it supports being interrupted by signals, great: that allows the user to interrupt it by hitting Ctrl+C. Just make sure to react appropriately so that an exception is raised; I'm not certain of the details, but it looks like returning NULL from the top-level function (time_sleep in this example) will do the trick.
Likewise, for better multithreaded behavior, surround the wait call with a pair of Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS macros so other Python threads can run while you block; this is not strictly required, especially if you're not using multithreading at all.
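On the Python side, the loop then looks much like the C one. A sketch, assuming the extension is built as a module named gpio exposing wait_int() (both names are placeholders):

import gpio  # hypothetical extension module wrapping the GPIO driver

event_count = 0
while True:
    c = gpio.wait_int(1)  # blocks in C until an interrupt occurs on chip 1
    if c <= 0:
        break
    event_count += 1
    print("Event sense occurred on chip 1 bit %d" % c)
print("Event count = %05d" % event_count)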

Fast way to determine if a PID exists on Windows?

I realize "fast" is a bit subjective so I'll explain with some context. I'm working on a Python module called psutil for reading process information in a cross-platform way. One of the functions is a pid_exists(pid) function for determining if a PID is in the current process list.
Right now I'm doing this the obvious way: using EnumProcesses() to pull the process list, then iterating through the list looking for the PID. However, some simple benchmarking shows this is dramatically slower than the pid_exists function on UNIX-based platforms (Linux, OS X, FreeBSD), where we use kill(pid, 0) with a 0 signal to determine if a PID exists. Additional testing shows it's EnumProcesses that's taking up almost all the time.
Anyone know a faster way than using EnumProcesses to determine if a PID exists? I tried OpenProcess() and checking for an error opening the nonexistent process, but this turned out to be over 4x slower than iterating through the EnumProcesses list, so that's out as well. Any other (better) suggestions?
NOTE: This is a Python library intended to avoid third-party lib dependencies like pywin32 extensions. I need a solution that is faster than our current code, and that doesn't depend on pywin32 or other modules not present in a standard Python distribution.
EDIT: To clarify, we're well aware that there are race conditions inherent in reading process information. We raise exceptions if the process goes away during data collection or if we run into other problems. The pid_exists() function isn't intended to replace proper error handling.
UPDATE: Apparently my earlier benchmarks were flawed. I wrote some simple test apps in C, and EnumProcesses consistently comes out slower, while OpenProcess (in conjunction with GetExitCodeProcess in case the PID is valid but the process has stopped) is actually much faster, not slower.
OpenProcess could tell you without enumerating all processes. I have no idea how fast it is.
EDIT: Note that you also need GetExitCodeProcess to verify the state of the process even if you get a handle from OpenProcess.
Turns out that my benchmarks evidently were flawed somehow, as later testing reveals OpenProcess and GetExitCodeProcess are much faster than using EnumProcesses after all. I'm not sure what happened but I did some new tests and verified this is the faster solution:
int pid_is_running(DWORD pid)
{
    HANDLE hProcess;
    DWORD exitCode;

    //Special case for PID 0 System Idle Process
    if (pid == 0) {
        return 1;
    }

    //skip testing bogus PIDs
    if (pid < 0) {
        return 0;
    }

    //handle_from_pid() is a small wrapper around OpenProcess()
    hProcess = handle_from_pid(pid);
    if (NULL == hProcess) {
        //invalid parameter means PID isn't in the system
        if (GetLastError() == ERROR_INVALID_PARAMETER) {
            return 0;
        }
        //some other error with OpenProcess
        return -1;
    }

    if (GetExitCodeProcess(hProcess, &exitCode)) {
        CloseHandle(hProcess);
        return (exitCode == STILL_ACTIVE);
    }

    //error in GetExitCodeProcess()
    CloseHandle(hProcess);
    return -1;
}
Note that you do need to use GetExitCodeProcess() because OpenProcess() will succeed on processes that have died recently so you can't assume a valid process handle means the process is running.
Also note that OpenProcess() succeeds for PIDs that are within 3 of any valid PID (See Why does OpenProcess succeed even when I add three to the process ID?)
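Since the library has to avoid pywin32, the same strategy can be written with ctypes alone. A rough sketch of the OpenProcess/GetExitCodeProcess approach (constants written out by hand; this is not psutil's actual code):

import ctypes
from ctypes import wintypes

PROCESS_QUERY_INFORMATION = 0x0400
STILL_ACTIVE = 259
ERROR_INVALID_PARAMETER = 87

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = (wintypes.DWORD, wintypes.BOOL, wintypes.DWORD)

def pid_exists(pid):
    if pid == 0:  # System Idle Process
        return True
    handle = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    if not handle:
        # ERROR_INVALID_PARAMETER means the PID isn't in the system;
        # any other error (e.g. access denied) means the process exists.
        return ctypes.get_last_error() != ERROR_INVALID_PARAMETER
    try:
        exit_code = wintypes.DWORD()
        if kernel32.GetExitCodeProcess(handle, ctypes.byref(exit_code)):
            return exit_code.value == STILL_ACTIVE
        return False
    finally:
        kernel32.CloseHandle(handle)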
There is an inherent race condition in the use of pid_exists function: by the time the calling program gets to use the answer, the process may have already disappeared, or a new process with the queried id may have been created. I would dare say that any application that uses this function is flawed by design and that optimizing this function is therefore not worth the effort.
I'd code Jay's last function this way.
int pid_is_running(DWORD pid)
{
    HANDLE hProcess;

    //Special case for PID 0 System Idle Process
    if (pid == 0) {
        return 1;
    }

    //skip testing bogus PIDs
    if (pid < 0) {
        return 0;
    }

    hProcess = handle_from_pid(pid);
    if (NULL == hProcess) {
        //invalid parameter means PID isn't in the system
        if (GetLastError() == ERROR_INVALID_PARAMETER) {
            return 0;
        }
        //some other error with OpenProcess
        return -1;
    }

    DWORD dwRetval = WaitForSingleObject(hProcess, 0);
    CloseHandle(hProcess); // otherwise you'll be leaking handles

    switch (dwRetval) {
    case WAIT_OBJECT_0:  // the process object is signaled: it has ended
        return 0;
    case WAIT_TIMEOUT:   // still running
        return 1;
    default:
        return -1;
    }
}
The main difference is closing the process handle (important when the client of this function is running for a long time) and the process termination detection strategy. WaitForSingleObject gives you the opportunity to wait for a while (changing the 0 to a function parameter value) until the process ends.
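The wait-based strategy translates to ctypes the same way (a sketch; SYNCHRONIZE access is enough for WaitForSingleObject, and for brevity this version treats any OpenProcess failure as "not running"):

import ctypes
from ctypes import wintypes

SYNCHRONIZE = 0x00100000
WAIT_TIMEOUT = 0x00000102

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = (wintypes.DWORD, wintypes.BOOL, wintypes.DWORD)

def pid_is_running(pid):
    handle = kernel32.OpenProcess(SYNCHRONIZE, False, pid)
    if not handle:
        return False
    try:
        # Zero timeout: just poll whether the process object is signaled.
        return kernel32.WaitForSingleObject(handle, 0) == WAIT_TIMEOUT
    finally:
        kernel32.CloseHandle(handle)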
