Python spawn detached non-python process on linux? - python

I want to make a launcher application, but I haven't found a way to detach a sub-process entirely from the spawning Python process.
When I launch a program with my desktop's (cinnamon's) launcher the process tree goes:
/sbin/init -> mdm -> mdm -> cinnamon-session -> cinnamon -> the-app-i-launched
Of the threads I read, this one was the most insightful/helpful: Launch a completely independent process. But it gets muddied answers, as the OP there is looking to run Python code, which can often be achieved in better ways than spawning an independent process.
From other Stack Overflow posts that do not answer how to launch a detached process:
Running daemonized python code: Applicable to running Python code/modules as a daemon (not another process/application) detached from the python instance.
subprocess.call: Process spawns as a child of the python process.
os.system: Process spawns as a child of the python process.
close_fds: (Apparently) a Windows(R)-only solution; I need a portable solution (primary target is Debian Linux). Attempting to use close_fds=True on Linux, the process still spawns as a child of the python process.
creationflags: Windows(R)-only solution. On linux raises: ValueError: creationflags is only supported on Windows platforms.
prefix launched process with nohup: Process spawns as a child of the python process. As far as I know, nohup or an equivalent is not available on all platforms, making it a Linux-only solution.
os.fork: Same as "Running daemonized python code".
multiprocessing: Same problem as "Running daemonized python code": useful only for running Python code/modules.
os.spawnl* + os.P_NOWAIT: Deprecated functions are undesirable to use for new code. In my testing I was not able to see my process actually spawned at all.
os.spawnl* + os.P_DETACH: Windows(R)-only, seemingly removed in current python 2.X versions: AttributeError: 'module' object has no attribute 'P_DETACH'.
os.system + shell fork: I was able to actually see my process run detached from the python process with this, however I worry that it has these faults:
Relies on running commands in a shell, which is more vulnerable to malicious input, intentional or otherwise.
Relies on possibly non-portable POSIX/shell syntax that may not be interpreted on non-Linux platforms, and I haven't dug up any good reference for its portability (Partial Ref).
subprocess.Popen Alt: I still only observed the sub-process running as a child of the python process.

A working solution can be found in JonMc's answer here. I use it to open documents using 'xdg-open'.
You can change the stderr argument to stderr=open('/dev/null', 'w') if you do not want a logfile.
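For reference, a minimal sketch in the spirit of that answer, assuming xdg-open is on PATH; the document name and logfile name are placeholders, and os.setsid is what detaches the child into its own session:
#!/usr/bin/env python2
import os
import subprocess

# Start the child in its own session so it is not tied to this launcher.
subprocess.Popen(['xdg-open', 'document.pdf'],
                 stdout=open(os.devnull, 'w'),
                 stderr=open('launcher.log', 'a'),  # or os.devnull for no logfile
                 preexec_fn=os.setsid)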

The only workable solution I have, and one that may be non-portable (Linux-only), is to use shell evaluation with the shell's detach-ampersand syntax:
#!/usr/bin/env python2
import os
os.system('{cmd} &'.format(cmd='firefox'))
This might go too far up the process tree, though, outside of the window-manager session, and potentially will not exit with your desktop session:
/sbin/init -> firefox
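For completeness, the classic double-fork-then-exec pattern achieves the same detachment without a shell; a POSIX-only sketch (the 'firefox' target is just an example):
import os

def spawn_detached(argv):
    # First fork: the launcher reaps the short-lived intermediate child.
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)
        return
    os.setsid()  # new session, no controlling terminal
    # Second fork: the grandchild is re-parented to init, fully detached.
    if os.fork() > 0:
        os._exit(0)
    os.execvp(argv[0], argv)  # replace the grandchild with the target program

spawn_detached(['firefox'])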

Related

Kill an MPI process in all machines

Suppose that I run an MPI program involving 25 processes on 25 different machines. The program is initiated on one of them, called the "master", with a command like
mpirun -n 25 --hostfile myhostfile.txt python helloworld.py
This is executed on Linux with some bash script, and it uses mpi4py. Sometimes, in the middle of execution, I want to stop the program on all machines. I don't care whether this is done gracefully or not, since the data I might need is already saved.
Usually, I press Ctrl + C in the terminal of the "master" and I think it works as described above. Is this true? In other words, will it stop this specific MPI program on all machines?
Another method I tried is to get the PID of the process on the "master" and kill it. I am not sure about this either.
Do the above methods work as described? If not, what else do you suggest? Note that I want to avoid MPI calls for this purpose, like the MPI_Abort that some other discussions here and here suggest.
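For instance, the kind of fallback I have in mind would simply kill the matching processes on every host over SSH; a sketch where the host names and the match pattern are placeholders:
import subprocess

hosts = ['node01', 'node02', 'node03']  # the machines from myhostfile.txt
for host in hosts:
    # pkill -f matches against the full command line of the remote ranks
    subprocess.call(['ssh', host, 'pkill', '-f', 'helloworld.py'])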

Python os.execlp() in Cygwin returns different child pid

In Cygwin, calling Python 3's os.execlp() creates a new process to run external Python code, and the child's PID is different from the one returned by the preceding os.fork().
I do not know why Cygwin gives this weird result.
Running environment:
Cygwin under win10
Python 3.6.4
Code:
parent.py
import os

pid = os.fork()
if pid == 0:
    os.execlp('python', 'python', 'child.py')
else:
    print('Child is', pid)
child.py
import os

print('Hello from child,', os.getpid())
When running the parent code in Cygwin, the PID numbers returned by the two print calls are different.
# running result
$ python fork-exec.py
Child is 6104
Hello from child, 9428
This program runs perfectly on Linux.
First, fork is a native Unix/Linux system primitive with no equivalent on Windows, and os.fork doesn't exist in native Windows Python for that very reason.
But Python built for Cygwin is able to make it available, because Cygwin emulates fork (how to run python script with os.fork on windows?).
Now, the actual reason the PIDs are different is that os.execlp doesn't behave the same on Windows as on Linux. On Windows, execlp is also an emulation and does NOT replace the current process: it just spawns a new process using CreateProcess underneath. Cygwin is able to emulate fork properly, but not exec.
So fork+exec is effectively replaced by fork+CreateProcess (which is what os.exec* amounts to on Windows), and CreateProcess does create a new process, hence the different PIDs.
The PID is different because Cygwin hosts one of them, which is already a process, and Windows handles PIDs differently than most Linux distros do.
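One way to sidestep the emulation difference is to drop fork+exec and let subprocess report the child's PID directly; a sketch (I have not verified under Cygwin that this PID matches what the child sees in os.getpid(), so treat that as an assumption):
import subprocess
import sys

# parent.py without fork+exec: Popen spawns child.py and reports its PID.
p = subprocess.Popen([sys.executable, 'child.py'])
print('Child is', p.pid)
p.wait()  # reap the child so it does not linger as a zombie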

Python script produces zombie processes only in Docker

I have quite a complicated setup with
Luigi https://github.com/spotify/luigi
https://github.com/kennethreitz/requests-html
and https://github.com/miyakogi/pyppeteer
But long story short - everything works fine on my local Ubuntu (17.10) desktop, yet when run in Docker (18.03.0-ce) via docker-compose (1.18.0) (config version 3.3; some related details here: https://github.com/miyakogi/pyppeteer/issues/14), it bloats up with zombie chrome processes spawned from python.
Any ideas why it might happen and what to do with it?
Try installing dumb-init: https://github.com/Yelp/dumb-init (available both in alpine and debian) inside your container and use it as entry point (more reading here: https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/)
It seems this happens because a Python process is not meant to be a root-level process - the topmost one in the process tree - since it does not reap zombies properly. So after a few hours of struggling I ended up with the following ugly docker-compose config entry:
entrypoint: /bin/bash -c
command: "(luigid) &"
where luigid is the Python process. This makes bash the root process, which handles zombies properly.
It would be great to know a more straightforward way of doing this.
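If neither dumb-init nor a shell wrapper is an option, a Python process running as PID 1 can also reap zombies itself; a generic sketch, not specific to Luigi or pyppeteer:
import os
import signal

def reap_children(signum, frame):
    # Collect every exited child without blocking, so none linger as zombies.
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except OSError:  # no children left to wait for
            return
        if pid == 0:     # children exist, but none have exited yet
            return

signal.signal(signal.SIGCHLD, reap_children)
Be aware that a global reaper like this can race with Popen.wait() calls elsewhere in the program, which is one reason a dedicated init such as dumb-init is the cleaner fix.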

Using daemontools with a Python script that spawns subprocesses

I am trying to set up daemontools with a large python program that spawns various subprocesses, and I'm having issues where the subprocesses are not spawning correctly. The subprocess just appears as a zombified process when launched via daemontools.
I have provided a simplified example to demonstrate this.
/service/test/run:
#!/bin/sh
cd /script_directory/
exec envdir /service/test/env /usr/bin/python3 test_subprocess.py
/script_directory/test_subprocess.py
import subprocess
from time import sleep

subprocess.Popen("xterm")
while True:
    sleep(1)
test_subprocess.py simply launches a GUI terminal and stays alive, so I can see if it is still running in top/htop.
If I run the script either as root or a non-root user, the script properly executes and the window is displayed. When run via daemontools/supervise, the xterm is zombified and no window is shown.
Setting the env/DISPLAY and env/XAUTHORITY variables as described here doesn't seem to work for me.
On further investigation, the subprocess is zombified even if it does not use the GUI. For example, if the subprocess launched in test_subprocess.py is "top", it will not run.
I've used daemontools successfully on various other projects that don't spawn subprocesses so I don't think the issue is with the basic setup here.
Can daemontools be used with scripts that spawn other processes?
If not, what are some other recommended tools for daemonising complex Python applications?
I can't quite understand what you want to do, but try this program:
import subprocess

p = subprocess.Popen(['xterm', '-hold'], stdin=subprocess.PIPE)
p.communicate()
If you want to give it an argument, use -e followed by the command. If there is another problem, please let me know. Thanks.
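A hedged debugging sketch for the original test_subprocess.py: waiting on the child both removes the zombie and exposes why it died (e.g. a missing DISPLAY under supervise):
import subprocess
from time import sleep

p = subprocess.Popen(['xterm'])
while True:
    status = p.poll()  # reaps the child once it has exited
    if status is not None:
        print('xterm exited with status', status)
        break
    sleep(1)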

Optionally daemonize a Python process

I am looking into daemonization of my Python script and I have found some libraries that can help: daemonic, daemonize and daemon. Each of them has some issues:
daemonic and daemonize will terminate when they cannot create the PID file. daemonic will not even log or print anything. Looking into the code, they actually call os._exit(). I would like an exception or other error message, so I can fall back to running my code in the foreground.
daemon doesn't even install correctly for Python 3. And seeing that the last commit was in 2010, I don't expect any updates soon (if ever).
How can I portably (both Python 2 and 3) and optionally (falling back to running in the foreground) create a daemonized Python script? Of course I could fall back to using the & operator when starting it, but I would like to implement PEP 3143.
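The behaviour I am after would look roughly like this, using python-daemon (the PEP 3143 reference implementation) purely for illustration; whether it installs cleanly is exactly the problem described above:
import sys

def main():
    pass  # the actual long-running program goes here

try:
    import daemon  # python-daemon, PEP 3143 reference implementation
    context = daemon.DaemonContext()
except Exception as exc:
    sys.stderr.write('daemonization unavailable (%s); running in foreground\n' % exc)
    context = None

if context is not None:
    with context:
        main()
else:
    main()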
I am using two solutions:
one based on zdaemon
one based on supervisor
Both packages are written in Python and can daemonize anything that can be run from the command line. The requirement is that the command to be run stays in the foreground and does not try to daemonize itself.
supervisor is even part of Linux distributions, and even though it comes in a somewhat outdated version there, it is very usable.
Note that, as it controls a general command-line-driven program, it does not require the Python version to match that of the controlled code.
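As a concrete illustration, a minimal supervisord program entry might look like this (the program name and paths are placeholders):
[program:myscript]
command=/usr/bin/python /opt/myapp/myscript.py
directory=/opt/myapp
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myscript.log
supervisord then runs myscript.py in the foreground and takes care of the daemonization, restarts, and logging itself.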
