This question already has an answer here:
Embedded python: multiprocessing not working
(1 answer)
Closed 8 years ago.
I'm trying to embed a Python 3.3 x64 script that uses 'multiprocessing' in C++ code under Windows 7 x64.
A simple script like:
from multiprocessing import Process

def spawnWork(fileName, index):
    print("spawnWork: Entry... ")
    process = Process(target=execute, args=(fileName, index))
    process.start()
    print("spawnWork: ... Exit.")

def execute(fileName, index):
    print("execute: Entry... ")
    # Do some long processing
    print("execute: ... Exit.")
works fine when run from Python, but when embedded it gets stuck at .start() and locks.
I'm using all the relevant API calls to ensure safe GIL processing for Python. It works pretty well when not dealing with 'multiprocessing' package but locks when attempting to start another 'Process'.
Is it possible to use both Python/C++ mix and 'multiprocessing'?
Thanks
I wouldn't expect this to work, as the way multiprocessing works on Windows (where there's no fork) is to CreateProcess another copy of the same executable. And since that executable is your embedding C++ app, not the Python interpreter, you will probably have to cooperate very closely with it to make that work. You can see the relevant code in multiprocessing/forking.py (popen_spawn_win32.py in later versions).
Another potential problem is that on Windows, multiprocessing relies on a C extension module that fakes POSIX semaphores on top of Windows kernel semaphores; I haven't read through the code, but that could easily be doing something funky to GIL/threadstate and/or relying on something under the covers to share the semaphores with child Python executables.
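If you control the embedding host, one commonly suggested workaround (a sketch, not verified against the 3.3 embedding API) is to point multiprocessing at a standalone interpreter rather than the host executable, using multiprocessing.set_executable(). The path is an assumption; substitute your actual installation:

```python
import multiprocessing
import sys

# Sketch: tell multiprocessing to spawn a real python.exe instead of
# re-launching the embedding C++ host. sys.executable stands in here for
# an explicit path such as r"C:\Python33\python.exe".
multiprocessing.set_executable(sys.executable)
```

Whether this is sufficient depends on how your embedding app initializes the interpreter; child processes still need to be able to import your target module.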
Related
How to close (terminate) Windows applications using a Python script? When I switch on my PC, I find many applications like MSOSYNC.exe and ONENOTEM.exe running, along with many others, which are not very useful. I want to close them. I tried the "subprocess" module and some others, but they're not working. Which method should I use?
You're already using the Popen class to construct a new object in your example. It has methods to deal with this; read the documentation:
import subprocess
proc = subprocess.Popen(['c:\\windows\\system32\\notepad.exe','C:\\file1.txt'])
proc.terminate()
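To confirm the child actually exited, you can wait on it after terminating. Here is a portable sketch using a throwaway Python child in place of notepad:

```python
import subprocess
import sys

# Spawn a child that would sleep for a minute, then kill it.
proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])
proc.terminate()   # ask the OS to end the process
proc.wait()        # reap it; returncode is set once it is gone
```

After wait() returns, proc.returncode holds the exit status (negative signal number on POSIX, the exit code passed to TerminateProcess on Windows).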
Can my python script spawn a process that will run indefinitely?
I'm not too familiar with Python, nor with spawning daemons, so I came up with this:
si = subprocess.STARTUPINFO()
si.dwFlags = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.CREATE_NEW_CONSOLE
subprocess.Popen(executable, close_fds = True, startupinfo = si)
The process continues to run past python.exe, but is closed as soon as I close the cmd window.
Using the answer Janne Karila pointed out, this is how you can run a process that doesn't die when its parent dies; there is no need to use the win32process module.
DETACHED_PROCESS = 8
subprocess.Popen(executable, creationflags=DETACHED_PROCESS, close_fds=True)
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying CreateProcess function.
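For reference, the flag's value is 0x00000008, and newer Python versions (3.7+, to my knowledge) also expose it as subprocess.DETACHED_PROCESS so the magic number can be avoided. A Windows-only sketch, where 'worker.py' is a placeholder script path:

```python
import subprocess
import sys

DETACHED_PROCESS = 0x00000008  # CreateProcess dwCreationFlags bit

if sys.platform == 'win32':
    # 'worker.py' is a placeholder; the child keeps running after the
    # parent exits because it is not attached to the parent's console.
    subprocess.Popen([sys.executable, 'worker.py'],
                     creationflags=DETACHED_PROCESS, close_fds=True)
```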
This question was asked 3 years ago, and though the fundamental details of the answer haven't changed, given its prevalence in "Windows Python daemon" searches, I thought it might be helpful to add some discussion for the benefit of future Google arrivals.
There are really two parts to the question:
Can a Python script spawn an independent process that will run indefinitely?
Can a Python script act like a Unix daemon on a Windows system?
The answer to the first is an unambiguous yes: as already pointed out, using subprocess.Popen with the creationflags=subprocess.CREATE_NEW_PROCESS_GROUP keyword will suffice:
import subprocess

independent_process = subprocess.Popen(
    'python /path/to/file.py',
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)
Note that, at least in my experience, CREATE_NEW_CONSOLE is not necessary here.
That being said, the behavior of this strategy isn't quite the same as what you'd expect from a Unix daemon. What constitutes a well-behaved Unix daemon is better explained elsewhere, but to summarize:
Close open file descriptors (typically all of them, but some applications may need to protect some descriptors from closure)
Change the working directory for the process to a suitable location to prevent "Directory Busy" errors
Change the file access creation mask (os.umask in the Python world)
Move the application into the background and make it dissociate itself from the initiating process
Completely divorce from the terminal, including redirecting STDIN, STDOUT, and STDERR to different streams (often DEVNULL), and prevent reacquisition of a controlling terminal
Handle signals, in particular, SIGTERM.
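On a POSIX system those steps translate into the classic double-fork idiom. A minimal sketch (names are illustrative; this will not run on Windows, where os.fork does not exist):

```python
import os
import sys

def daemonize():
    """Minimal Unix double-fork daemonization sketch."""
    if os.fork() > 0:              # first fork: original parent exits
        sys.exit(0)
    os.setsid()                    # new session; drop controlling terminal
    if os.fork() > 0:              # second fork: can never reacquire a tty
        sys.exit(0)
    os.chdir('/')                  # avoid holding any directory busy
    os.umask(0)                    # reset the file-creation mask
    # Divorce from the terminal: redirect stdio to /dev/null.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
```

A real daemon would also close inherited descriptors, write a PID file, and install a SIGTERM handler, per the checklist above.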
The reality of the situation is that Windows, as an operating system, really doesn't support the notion of a daemon: applications that start from a terminal (or in any other interactive context, including launching from Explorer, etc) will continue to run with a visible window, unless the controlling application (in this example, Python) has included a windowless GUI. Furthermore, Windows signal handling is woefully inadequate, and attempts to send signals to an independent Python process (as opposed to a subprocess, which would not survive terminal closure) will almost always result in the immediate exit of that Python process without any cleanup (no finally:, no atexit, no __del__, etc).
Rolling your application into a Windows service, though a viable alternative in many cases, also doesn't quite fit. The same is true of using pythonw.exe (a windowless version of Python that ships with all recent Windows Python binaries). In particular, they fail to improve the situation for signal handling, and they cannot easily launch an application from a terminal and interact with it during startup (for example, to deliver dynamic startup arguments to your script, say, perhaps, a password, file path, etc), before "daemonizing". Additionally, Windows services require installation, which -- though perfectly possible to do quickly at runtime when you first call up your "daemon" -- modifies the user's system (registry, etc), which would be highly unexpected if you're coming from a Unix world.
In light of that, I would argue that launching a pythonw.exe subprocess using subprocess.CREATE_NEW_PROCESS_GROUP is probably the closest Windows equivalent for a Python process to emulate a traditional Unix daemon. However, that still leaves you with the added challenge of signal handling and startup communications (not to mention making your code platform-dependent, which is always frustrating).
That all being said, for anyone encountering this problem in the future, I've rolled a library called daemoniker that wraps both proper Unix daemonization and the above strategy. It also implements signal handling (for both Unix and Windows systems), and allows you to pass objects to the "daemon" process using pickle. Best of all, it has a cross-platform API:
from daemoniker import Daemonizer

with Daemonizer() as (is_setup, daemonizer):
    if is_setup:
        # This code is run before daemonization.
        do_things_here()

    # We need to explicitly pass resources to the daemon; other variables
    # may not be correct
    is_parent, my_arg1, my_arg2 = daemonizer(
        path_to_pid_file,
        my_arg1,
        my_arg2
    )

    if is_parent:
        # Run code in the parent after daemonization
        parent_only_code()

# We are now daemonized, and the parent just exited.
code_continues_here()
For that purpose you could daemonize your Python process, or, since you are in a Windows environment, run it as a Windows service.
I hate posting only web links, but for more information relevant to your requirement:
A simple way to implement a Windows service (read all the comments; they resolve most doubts)
If you really want to learn more, first read this:
What is a daemon process, or creating-a-daemon-the-python-way
Update:
subprocess is not the right way to achieve this kind of thing.
From my understanding, os.popen() opens a pipe within Python and initiates a new sub process. I have a problem when I run a for loop in conjunction with os.popen(). I can't seem to CTRL+C out of the loop. Here is my code:
for FILE in os.popen("ls $MY_DIR/"):
    os.system("./processFile " + FILE)
Whenever I try to CTRL+C, Python will stop the ./processFile program but NOT the python program itself!
I have Google'd around and couldn't seem to find the correct answer. Some people recommend using SIGNALS (I tried... it didn't work). Another tried to use PIDs and killing child PIDs but I couldn't seem to get it.
Can someone lead me to a better example so I can stop the programming when I use CTRL+C (SIGINT) ?
I see some answers correctly recommended subprocess.check_call, and the OP in a comment said:
I'm getting this error:
AttributeError: 'module' object has no
attribute 'check_call'
Per the docs I just linked to, check_call is marked as:
New in version 2.5.
so it looks like the OP is using some ancient version of Python -- 2.4 or earlier -- without mentioning the fact (the current production-ready version is 2.7, and 2.4 is many years old).
The best one can recommend, therefore, is to upgrade! If 2.7 is "too new" for your tastes (as it might be considered in a conservative "shop"), 2.6's latest micro-release should at least be fine -- and it won't just give you subprocess.check_call, but also many additional features, bug fixes, and optimizations.
The behavior is correct: Ctrl+C stops the foreground process, not its parent process. Calling the shell and using ls is inappropriate here; your code would be better written as follows (untested):
import os
import subprocess
for fname in os.listdir(directory):
    path = os.path.join(directory, fname)
    subprocess.check_call(["./processFile", path])
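If you also want Ctrl+C to stop the loop itself (not just the current child), catch KeyboardInterrupt around it. A self-contained sketch, with a trivial Python child standing in for ./processFile:

```python
import os
import subprocess
import sys

directory = '.'  # placeholder for $MY_DIR

try:
    for fname in sorted(os.listdir(directory)):
        path = os.path.join(directory, fname)
        # Placeholder child command; substitute ["./processFile", path].
        subprocess.check_call([sys.executable, '-c', 'pass'])
except KeyboardInterrupt:
    # SIGINT reached the parent: abort the whole loop, not just the child.
    sys.exit('Interrupted')
```

check_call raises CalledProcessError if the child exits nonzero, so a failing file also stops the loop rather than being silently skipped.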
I think this is a pretty basic question, but here it is anyway.
I need to write a Python script that checks to make sure a process, say notepad.exe, is running. If the process is running, do nothing. If it is not, start it. How would this be done?
I am using Python 2.6 on Windows XP
The process creation functions of the os module are apparently deprecated in Python 2.6 and later, with the subprocess module being the module of choice now, so...
if 'notepad.exe' not in subprocess.Popen('tasklist', stdout=subprocess.PIPE).communicate()[0]:
    subprocess.Popen('notepad.exe')
Note that in Python 3, the string being checked will need to be a bytes object, so it'd be
if b'notepad.exe' not in [blah]:
    subprocess.Popen('notepad.exe')
(The name of the file/process to start does not need to be a bytes object.)
There are a couple of options,
1: The cruder but more obvious option is to do some text processing against:
os.popen('tasklist').read()
2: A more involved option would be to use pywin32 and research the win32 APIs to figure out what processes are running.
3: WMI (I just found this), and here is a VBScript example of how to query the machine for processes through WMI.
This question already has answers here:
What's the best way to duplicate fork() in windows?
(7 answers)
Closed 3 years ago.
This code works well in Mac/Linux, but not in Windows.
import mmap
import os

map = mmap.mmap(-1, 13)
map.write("Hello world!")
pid = os.fork()

if pid == 0:  # In a child process
    print 'child'
    map.seek(0)
    print map.readline()
    map.close()
else:
    print 'parent'
What's the equivalent function of os.fork() on Windows?
Depending on your use case, and whether you can use Python 2.6 or not, you might be able to use the multiprocessing module.
The answers here may not answer the question. The problem is due to fork(). From the example, you seem to want to share data between two Python scripts. Let me explain my view.
As of Python 3.8, there is no true Unix fork() implementation on the Windows platform. For the example above to work, a child process must inherit all of the environment and open file descriptors.
I understand that Windows now supports the Windows Subsystem for Linux, but last I checked it still does not fully implement fork. Cygwin actually does, but it is a bit slow.
I do not know of a way, as of now, to pass information between two Python scripts using mmap on the Windows platform. Use one of the following instead:
multiprocessing.Queue, or
multiprocessing.Pipe, or
multiprocessing.Manager, or
multiprocessing's shared memory (Value and Array).
I believe you could make each Python script read the content of the to-be-mapped file into an Array of characters, then use your own structure to map the shared memory onto structured data, as you would for the to-be-mapped file.