Sorry if the title is strange. Let me explain.
Let's say there is handler.py:
import funcs
import requests

def initialize_calculate(data):
    check_data(data)
    funcs.calculate(data)  # takes a lot of time, like 30 minutes
    print('Calculation launched')
    requests.get('hostname', params={'func': 'calculate', 'status': 'launched'})
and here is funcs.py:
import requests

def calculate(data):
    result = make_calculations(data)
    requests.get('hostname', params={'func': 'calculate', 'status': 'finished', 'result': result})
What I want is for the handler to be able to start another function, wherever it lives, without waiting for it to finish, because I want to notify the client side that the process has started; when the calculation is done, that process itself will send the result.
How can I launch an independent process running calculate from initialize_calculate?
I'd like to know if it's possible without third-party libraries or frameworks.
If you don't want to use a 3rd-party lib like daemonocle implementing a "well-behaved" Unix daemon, you could
use subprocess.Popen() to create an independent process. Another option would be to modify multiprocessing.Process to prevent auto-joining of the child when the parent exits.
subprocess.Popen()
With subprocess.Popen() you start the new process by specifying commands and arguments, as you would manually from a terminal. This means you need to make funcs.py (or another file) a top-level script that parses string arguments from the command line and then calls funcs.calculate() with those arguments.
I boiled your example down to the essence so we don't have to read too much code.
funcs.py
#!/usr/bin/env python3
# UNIX: enable executable from terminal with: chmod +x filename
import os
import sys
import time

import psutil  # 3rd party for demo

def print_msg(msg):
    print(f"[{time.ctime()}, pid: {os.getpid()}] --- {msg}")

def calculate(data, *args):
    print_msg(f"parent pid: {psutil.Process().parent().pid}, start calculate()")
    for _ in range(int(500e6)):
        pass
    print_msg(f"parent pid: {psutil.Process().parent().pid}, end calculate()")

if __name__ == '__main__':
    if len(sys.argv) > 1:
        calculate(*sys.argv[1:])
subp_main.py
#!/usr/bin/env python3
# UNIX: enable executable from terminal with: chmod +x filename
if __name__ == '__main__':

    import time
    import logging
    import subprocess
    import multiprocessing as mp

    import funcs

    mp.log_to_stderr(logging.DEBUG)

    filename = funcs.__file__
    data = ("data", 42)

    # in case filename is an executable you don't need "python" before `filename`:
    subprocess.Popen(args=["python", filename, *[str(arg) for arg in data]])
    time.sleep(1)  # keep parent alive a bit longer for demo
    funcs.print_msg(f"exiting")
Important for testing: run from a terminal, not e.g. via PyCharm's Run, because it won't show what the child prints. In the last line below you can see that the child process' parent-id changed to 1 because the child was adopted by systemd (Ubuntu) after the parent exited.
$> ./subp_main.py
[Fri Oct 23 20:14:44 2020, pid: 28650] --- parent pid: 28649, start calculate()
[Fri Oct 23 20:14:45 2020, pid: 28649] --- exiting
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] running the remaining "atexit" finalizers
$> [Fri Oct 23 20:14:54 2020, pid: 28650] --- parent pid: 1, end calculate()
class OrphanProcess(multiprocessing.Process)
If you're looking for something more convenient: you can't use the high-level multiprocessing.Process as is, because it doesn't let the parent process exit before the child, which is what you asked for. Regular child processes are either joined (awaited) or terminated (if you set the daemon flag on Process) when the parent shuts down. This still happens within Python. Note that the daemon flag does not make a process a Unix daemon; the naming is a fairly frequent source of confusion.
I subclassed multiprocessing.Process to switch off the auto-joining and spent some time with the source, observing whether zombies might become an issue. Because the modification turns off automatic joining in the parent, I recommend using "forkserver" as the start-method for new processes on Unix (always a good idea if the parent is already multi-threaded) to prevent zombie children from sticking around as long as the parent is still running. When the parent process terminates, its child zombies eventually get reaped by systemd/init. Running multiprocessing.log_to_stderr() shows everything shutting down cleanly, so nothing seems broken so far.
Consider this approach experimental, but it's probably a lot safer than using raw os.fork() to re-invent part of the extensive multiprocessing machinery, just to add this one feature. For error-handling in the child, write a try-except block and log to file.
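For example, a minimal sketch of such a wrapper, assuming a writable log path of your choosing (the child can't report exceptions to a parent that may already be gone):

import logging
from funcs import calculate

def calculate_logged(data, *args):
    # hypothetical wrapper; use it as the process target instead of calculate
    # and adjust the log path to your environment
    logging.basicConfig(filename='/tmp/calculate.log', level=logging.INFO)
    try:
        calculate(data, *args)
    except Exception:
        # the parent may have exited already, so record the error instead of raising
        logging.exception("calculate() failed in the child process")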
orphan.py
import multiprocessing.util
import multiprocessing.process as mpp
import multiprocessing as mp

__all__ = ['OrphanProcess']

class OrphanProcess(mp.Process):
    """Process which won't be joined by parent on parent shutdown."""

    def start(self):
        super().start()
        mpp._children.discard(self)

    def __del__(self):
        # Finalizer won't `.join()` the child because we discarded it,
        # so this is the last chance to reap a possible zombie from within Python.
        # Otherwise systemd/init will reap it eventually.
        self.join(0)
orph_main.py
#!/usr/bin/env python3
# UNIX: enable executable from terminal with: chmod +x filename
if __name__ == '__main__':

    import time
    import logging
    import multiprocessing as mp
    from orphan import OrphanProcess
    from funcs import print_msg, calculate

    mp.set_start_method("forkserver")
    mp.log_to_stderr(logging.DEBUG)

    p = OrphanProcess(target=calculate, args=("data", 42))
    p.start()
    time.sleep(1)
    print_msg(f"exiting")
Again, test from a terminal so the child's output goes to stdout. If the shell appears to hang after everything was printed past the second prompt, hit Enter to get a new prompt. The parent-id stays the same here because the parent, from the OS's point of view, is the forkserver process, not the initial main process of orph_main.py.
$> ./orph_main.py
[INFO/MainProcess] created temp directory /tmp/pymp-bd75vnol
[INFO/OrphanProcess-1] child process calling self.run()
[Fri Oct 23 21:18:29 2020, pid: 30998] --- parent pid: 30997, start calculate()
[Fri Oct 23 21:18:30 2020, pid: 30995] --- exiting
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] running the remaining "atexit" finalizers
$> [Fri Oct 23 21:18:38 2020, pid: 30998] --- parent pid: 30997, end calculate()
[INFO/OrphanProcess-1] process shutting down
[DEBUG/OrphanProcess-1] running all "atexit" finalizers with priority >= 0
[DEBUG/OrphanProcess-1] running the remaining "atexit" finalizers
[INFO/OrphanProcess-1] process exiting with exitcode 0
You may use the Process class from the multiprocessing module to do that.
Here is an example:
from multiprocessing import Process
import requests

def calculate(data):
    result = make_calculations(data)
    requests.get('hostname', params={'func': 'calculate', 'status': 'finished', 'result': result})

def initialize_calculate(data):
    check_data(data)
    p = Process(target=calculate, args=(data,))
    p.start()
    print('Calculation launched')
    requests.get('hostname', params={'func': 'calculate', 'status': 'launched'})
Related
I have a script "run.py" that must print "Hello", launch another script "run2.py", and then terminate (do not wait for run2.py to return).
run2.py is not in the local directory and is only required to print "Hello again".
How can I do this?
# run_path = "C:/Program Files (x86)/xxx/run.py"
# run2_path = "//network_share/folder/run2.py"
**run.py**
import os
print("Hello")
# What do I do here?
# os.execl("//network_share/folder/run2.py")
exit()
**run2.py**
print("Hello again")
This seems to work for a script I have in the same folder I'm running this one in.
This should verify that the first script finishes and doesn't linger while the second script runs in its own process. It is possible that on some systems, due to their configuration, the child process terminates when the parent does, but not in this case...
I put more time into this post to add code that shows how to check whether the parent process is still running; this would be a good way for the child to confirm that the parent has exited. It also shows how to pass parameters to the child process.
# launch.py
import subprocess as sp
import os

if __name__ == '__main__':
    sp.Popen(['ps'])  # Print out running processes.
    print("launch.py's process id is %s." % os.getpid())
    # Give the child process this one's process ID in the parameters.
    sp.Popen(['python3', 'runinproc.py', str(os.getpid())])
    # ^^^ This line above answers the main question of how to kick off a
    # child Python script.
    print("exiting launch.py")
The other script:
# runinproc.py
import time
import subprocess as sp
import sys
import os

def is_launcher_running():
    try:
        # This only checks the status of the process. It doesn't
        # kill it, or otherwise affect it.
        os.kill(int(sys.argv[1]), 0)
    except OSError:
        return False
    else:
        return True

if __name__ == '__main__':
    print("runinproc.py was launched by process ID %s" % sys.argv[1])
    for i in range(100):
        if is_launcher_running():
            # Is launch.py still running?
            print("[[ launch.py is still running... ]]")
        sp.Popen(['ps'])  # Print out the running processes.
        print("going to sleep for 2 seconds...")
        time.sleep(2)
Bash output:
Todds-iMac:pyexperiments todd$ python3 launch.py
launch.py process id is 40975.
exiting launch.py
Todds-iMac:pyexperiments todd$ runinproc.py was launched by process ID 40975
going to sleep for 2 seconds...
PID TTY TIME CMD
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
PID TTY TIME CMD
40866 ttys000 0:00.09 -bash
40977 ttys000 0:00.04 /Library/Frameworks/Python.framework/Versions/3.8/Resources/Python.app/C
going to sleep for 2 seconds...
Note that the first call to the shell, the ps from launch.py, is executed after launch.py has exited; that's why launch.py doesn't show up in the printed process list.
subprocess is your friend, but if you need to not wait, check out the P_NOWAIT-replacing example code in https://docs.python.org/3/library/subprocess.html
E.g.:
pid = Popen(["/bin/mycmd", "myarg"]).pid
I don't think .communicate is what you need this time around - isn't it more for waiting?
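For context, a sketch of that P_NOWAIT replacement from the subprocess docs, using the hypothetical command /bin/mycmd from above; both calls return immediately without waiting for the child:

import os
from subprocess import Popen

# older fire-and-forget style
pid = os.spawnlp(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg")

# subprocess equivalent: Popen returns at once; just don't call wait()/communicate()
pid = Popen(["/bin/mycmd", "myarg"]).pid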
The cleanest way to do this (since both scripts are written in pure Python) is to import the other script as a module and execute its content, placed within a function:
run.py
import os
import sys
sys.path.append("//network_share/folder/")
import run2
print("Hello")
run2.main()
exit()
run2.py
def main():
    print("Hello again")
I have a python script: zombie.py
from multiprocessing import Process
from time import sleep
import atexit

def foo():
    while True:
        sleep(10)

@atexit.register
def stop_foo():
    p.terminate()
    p.join()

if __name__ == '__main__':
    p = Process(target=foo)
    p.start()
    while True:
        sleep(10)
When I run this with python zombie.py & and kill the parent process with kill -2, stop_foo() is correctly called and both processes terminate.
Now, suppose I have a bash script zombie.sh:
#!/bin/sh
python zombie.py &
echo "done"
And I run ./zombie.sh from the command line.
Now, stop_foo() never gets called when the parent gets killed. If I run kill -2 on the parent process, nothing happens. kill -15 or kill -9 both just kill the parent process, but not the child:
[foo@bar ~]$ ./zombie.sh
done
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27220 1 0 17:57 pts/3 00:00:00 python zombie.py
foo 27221 27220 0 17:57 pts/3 00:00:00 python zombie.py
[foo@bar ~]$ kill -2 27220
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27220 1 0 17:57 pts/3 00:00:00 python zombie.py
foo 27221 27220 0 17:57 pts/3 00:00:00 python zombie.py
[foo@bar ~]$ kill 27220
[foo@bar ~]$ ps -ef | grep zombie | grep -v grep
foo 27221 1 0 17:57 pts/3 00:00:00 python zombie.py
What is going on here? How can I make sure the child process dies with the parent?
Neither atexit nor p.daemon = True will truly ensure that the child process dies with its parent. Receiving a SIGTERM will not trigger the atexit routines.
To make sure the child gets killed upon its parent's death, you have to install a signal handler in the parent. That way you can react to most signals (SIGQUIT, SIGINT, SIGHUP, SIGTERM, ...), but not to SIGKILL; there is simply no way to react to that signal from within the process which receives it.
Install a signal handler for all useful signals and, in that handler, kill the child process.
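A minimal sketch of what such handlers could look like in zombie.py's __main__ block, assuming p is the child Process from the question:

import signal
import sys

def kill_child_and_exit(signum, frame):
    # terminate the child, reap it, then exit the parent
    p.terminate()
    p.join()
    sys.exit(0)

for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT):
    signal.signal(sig, kill_child_and_exit)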
Update: This solution doesn't work for processes killed by a signal.
Your child process is not a zombie. It is alive.
If you want the child process to be killed when its parent exits normally then set p.daemon = True before p.start(). From the docs:
When a process exits, it attempts to terminate all of its daemonic child processes.
Looking at the source code, it is clear that multiprocessing uses an atexit callback to kill its daemonic children, i.e., it won't work if the parent is killed by a signal. For example:
#!/usr/bin/env python
import logging
import os
import signal
import sys
from multiprocessing import Process, log_to_stderr
from threading import Timer
from time import sleep
def foo():
    while True:
        sleep(1)

if __name__ == '__main__':
    log_to_stderr().setLevel(logging.DEBUG)

    p = Process(target=foo)
    p.daemon = True
    p.start()

    # either kill itself or exit normally in 5 seconds
    if '--kill' in sys.argv:
        Timer(5, os.kill, [os.getpid(), signal.SIGTERM]).start()
    else:  # exit normally
        sleep(5)
Output
$ python kill-orphan.py
[INFO/Process-1] child process calling self.run()
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[INFO/MainProcess] calling terminate() for daemon Process-1
[INFO/MainProcess] calling join() for process Process-1
[DEBUG/MainProcess] running the remaining "atexit" finalizers
Notice "calling terminate() for daemon" line.
Output (with --kill)
$ python kill-orphan.py --kill
[INFO/Process-1] child process calling self.run()
The log shows that if the parent is killed by a signal then "atexit" callback is not called (and ps shows that the child is alive in this case). See also Multiprocess Daemon Not Terminating on Parent Exit.
I am running, on a Linux machine, a Python script which creates a child process using subprocess.check_output(), as follows:
subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)
The problem is that even if the parent process dies, the child is still running.
Is there any way I can kill the child process as well when the parent dies?
Yes, you can achieve this by two methods. Both of them require you to use Popen instead of check_output. The first is a simpler method, using try..finally, as follows:
import subprocess
from contextlib import contextmanager

@contextmanager
def run_and_terminate_process(*args, **kwargs):
    try:
        p = subprocess.Popen(*args, **kwargs)
        yield p
    finally:
        p.terminate()  # send sigterm, or ...
        p.kill()       # send sigkill

def main():
    with run_and_terminate_process(args) as running_proc:
        # Your code here, such as running_proc.stdout.readline()
        ...
This will catch sigint (keyboard interrupt) and sigterm, but not sigkill (if you kill your script with -9).
The other method is a bit more complex, and uses ctypes' prctl PR_SET_PDEATHSIG. The system will send a signal to the child once the parent exits for any reason (even sigkill).
import signal
import ctypes
import subprocess

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)
    return callable

p = subprocess.Popen(args, preexec_fn=set_pdeathsig(signal.SIGTERM))
Your problem is with using subprocess.check_output - you are correct, you can't get the child PID using that interface. Use Popen instead:
import subprocess
from subprocess import PIPE

proc = subprocess.Popen(["ls", "-l"], stdout=PIPE, stderr=PIPE)

# Here you can get the PID
global child_pid
child_pid = proc.pid

# Now we can wait for the child to complete
(output, error) = proc.communicate()

if error:
    print "error:", error
print "output:", output
To make sure you kill the child on exit:
import os
import signal
def kill_child():
    if child_pid is None:
        pass
    else:
        os.kill(child_pid, signal.SIGTERM)
import atexit
atexit.register(kill_child)
I don't know the specifics, but the best way is still to catch terminations (and perhaps even all errors) with signal and terminate any remaining processes there.
import signal
import sys
import subprocess
import os

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

a = subprocess.check_output(["ls", "-l"], stderr=subprocess.STDOUT)

while 1:
    pass  # Press Ctrl-C (breaks the application and is caught by signal_handler())
This is just a mockup; you'd need to catch more than just SIGINT, but the idea might get you started, and you'd still need to keep track of the spawned processes somehow.
http://docs.python.org/2/library/os.html#os.kill
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.pid
http://docs.python.org/2/library/subprocess.html#subprocess.Popen.kill
I'd recommend rewriting a personalized version of check_output, because, as I just realized, check_output is really just for simple debugging etc., since you can't interact much with it during execution.
Rewrite check_output:
from subprocess import Popen, PIPE, STDOUT
from time import sleep, time

def checkOutput(cmd):
    a = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    print(a.pid)
    start = time()
    while a.poll() is None and time() - start <= 30:  # 30 sec grace period
        sleep(0.25)
    if a.poll() is None:
        print('Still running, killing')
        a.kill()
    else:
        print('exit code:', a.poll())
    output = a.stdout.read()
    a.stdout.close()
    a.stdin.close()
    return output
And do whatever you'd like with it; perhaps store the active executions in a variable and kill them upon exit with signal or other means of intercepting errors/shutdowns of the main loop, as in the sketch below.
In the end, you still need to catch terminations in the main application in order to safely kill any children; the best way to approach this is with try/except or signal.
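A minimal sketch of that bookkeeping, assuming Popen objects are appended to active_procs as they are started (the names here are illustrative, not from the answer above):

import atexit
from subprocess import Popen

active_procs = []

def spawn(cmd):
    p = Popen(cmd)
    active_procs.append(p)
    return p

def cleanup():
    for p in active_procs:
        if p.poll() is None:  # still running
            p.terminate()

atexit.register(cleanup)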
As of Python 3.2 there is a ridiculously simple way to do this:
from subprocess import Popen
with Popen(["sleep", "60"]) as process:
    print(f"Just launched server with PID {process.pid}")
I think this will be best for most use cases because it's simple and portable, and it avoids any dependence on global state.
If this solution isn't powerful enough, then I would recommend checking out the other answers and discussion on this question or on Python: how to kill child process(es) when parent dies?, as there are a lot of neat ways to approach the problem that provide different trade-offs around portability, resilience, and simplicity. 😊
Manually you could do this:
ps aux | grep <process name>
get the PID (second column) and
kill -9 <PID>
-9 forces the kill.
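The same thing can be scripted from Python; a rough sketch, assuming pgrep is available and "<process name>" is replaced with the name you're looking for:

import os
import signal
import subprocess

# pgrep -f matches against the full command line; raises if nothing matches
pids = subprocess.check_output(["pgrep", "-f", "<process name>"]).split()
for pid in pids:
    os.kill(int(pid), signal.SIGKILL)  # equivalent of kill -9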
I'm running a daemon on a remote machine: mydaemon. This daemon should be persistently running at all times.
When I'm running a job on my remote machine, it also launches a lightweight python server process, my_remote_server.py.
One of the commands I can send to my_remote_server.py is to restart mydaemon, which I'm trying to do like this:
os.system("killall mydaemon")
subprocess.Popen(["mydaemon"], stdin=None, stdout=None, stderr=None, close_fds=True)
When my job ends, my_remote_server.py is supposed to terminate, but mydaemon should keep running. However, I see my_remote_server.py stuck as a zombie process (this is causing the system to not see my job as terminated):
820 root Z [my_remote_serve]
834 root 552 S /usr/sbin/telnetd -l /bin/sh
835 root 836 S /bin/sh
844 root 672 S mydaemon
I want to detach Parent (my_remote_server.py) from the child (mydaemon), but I can't figure out how.
--
My python version is 2.5.4
edit:
I think I understand daemonization a bit better now, but I'm still having some trouble getting the daemon to separate
I'm leaving out the error handling here for brevity
os.system("killall mydaemon")
if(os.fork() > 0):
return True # my_remote_server.py returns to handle additional commands
os.setsid()
if(os.fork() > 0):
exit(0) # first child exits after becoming session leader
os.execlp("mydaemon") # have the second child run as the daemon
This is my ps list before I call the restart_mydaemon function
252 root 672 S mydaemon
286 root 4552 S /usr/bin/python my_remote_server.py
This is after restart_mydaemon, first child is zombied (shouldn't it be gone?)
286 root 4552 S /usr/bin/python my_remote_server.py
300 root Z [my_remote_serve]
304 root 672 S [mydaemon]
This is when the job terminates (my_remote_server.py should have exited, but it's a zombie; however, the first child has exited by this point):
286 root Z [my_remote_serve]
304 root 1012 S [mydaemon]
Turning a process into a daemon is a multi-step process. Part of your problem is that your parent process is not waiting for child termination correctly (read about wait/waitpid). In outline:
Your process must close any open file descriptors (stdin, stdout, stderr)
Your process needs to change directory, set its umask, and whatever else you want/need
You must fork a process
That process must become a process group leader
You must fork again, to fully detach from the (grand)parent
try:
    # free parent, detach from process group
    pid = os.fork()
    if pid > 0:
        exit(0)  # parent exits
except OSError, e:
    raise Exception("%s [%d]" % (e.strerror, e.errno))

# become session leader, process group leader, detach from controlling terminal
os.setsid()

try:
    # prevent zombie process, make init cleanup
    pid = os.fork()
    if pid > 0:
        exit(0)  # parent exits
except OSError, e:
    raise Exception("%s [%d]" % (e.strerror, e.errno))

# change directories, close stdin/out/err, etc
os.chdir(MYDIR)
os.umask(MYMASK)
# close files (including stdin, stdout, stderr)
# (re)open new stdin/stdout, if desired
Take a look here for reference/good explanation of how daemonization works:
http://code.activestate.com/recipes/278731/
Overall this is pretty non-trivial; I would try using the python-daemon library or a similar library:
https://pypi.python.org/pypi/python-daemon/
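For instance, a rough sketch with python-daemon (pip install python-daemon), assuming run_mydaemon() is a hypothetical function containing the long-running work:

import daemon

def run_mydaemon():
    ...  # the long-running work goes here

# DaemonContext handles the double fork, setsid, file-descriptor closing, etc.
with daemon.DaemonContext():
    run_mydaemon()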
I wouldn't use subprocess to do this. subprocess, in my opinion, should only be used to run a command that you can't import.
So in this case your my_remote_server.py could import mydaemon and run it.
You have two ways of doing this: you could make my_remote_server the parent and mydaemon the child, or have a new script that is the parent and spawns two children (mydaemon and my_remote_server).
For example, below I create a child process in global scope, since I need to access it in the Tornado HTTP handler. I set up the web server, start the child process, and then start the web server. When the endpoint is hit, the handler runs and the child process is pulled from the global scope; we terminate the child process and print. You could add more endpoints to Tornado to start the child again, or stop and then restart it.
remote.py:
import multiprocessing
import deamon
import tornado.ioloop
import tornado.web

child_pro = multiprocessing.Process(target=deamon.run_deamon)
child_pro.daemon = True

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        global child_pro
        child_pro = child_pro  # type: multiprocessing.Process
        child_pro.terminate()
        print("Child killed")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    global child_pro
    app = make_app()
    app.listen(8888)
    child_pro.start()
    tornado.ioloop.IOLoop.current().start()
deamon.py:
import time

def run_deamon():
    while True:
        time.sleep(2)
        print('child alive')
I want a script to start a new process, such that the new process continues running after the initial script exits. I expected that I could use multiprocessing.Process to start a new process, and set daemon=True so that the main script may exit while the created process continues running.
But it seems that the second process is silently terminated when the main script exits. Is this expected behavior, or am I doing something wrong?
From the Python docs:
When a process exits, it attempts to terminate all of its daemonic child processes.
This is the expected behavior.
If you are on a unix system, you could use os.fork:
import os
import time

pid = os.fork()

if pid:
    # parent
    while True:
        print("I'm the parent")
        time.sleep(0.5)
else:
    # child
    while True:
        print("I'm just a child")
        time.sleep(0.5)
Running this creates two processes. You can kill the parent without killing the child.
For example, when you run script you'll see something like:
% script.py
I'm the parent
I'm just a child
I'm the parent
I'm just a child
...
Stop the script with ctrl-Z:
^Z
[1]+ Stopped script.py
Find the process ID number for the parent. It will be the smaller of the two process ID numbers since the parent came first:
% ps axuw | grep script.py
unutbu 6826 0.1 0.1 33792 6388 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6827 0.0 0.1 33792 4352 pts/24 T 15:09 0:00 python /home/unutbu/pybin/script.py
unutbu 6832 0.0 0.0 17472 952 pts/24 S+ 15:09 0:00 grep --color=auto script.py
Kill the parent process:
% kill 6826
Restore script.py to the foreground:
% fg
script.py
Terminated
You'll see the child process is still running:
% I'm just a child
I'm just a child
I'm just a child
...
Kill the child (in a new terminal) with
% kill 6827
Simply use the subprocess module:
import subprocess
subprocess.Popen(["sleep", "60"])
Here is a related question on SO, where one of the answers gives a nice solution to this problem:
"spawning process from python"
If you are on a Unix system (see the docs):
#!/usr/bin/env python3
import os
import sys
import time
import subprocess
import multiprocessing
from multiprocessing import Process

def to_use_in_separate_process(*args):
    print(args)
    # check args before using them:
    if len(args) > 1:
        subprocess.call((args[0], args[1]))
    print('subprocess called')

def main(apathtofile):
    print('checking os')
    if os.name == 'posix':
        print('os is posix')
        multiprocessing.get_context('fork')
        p = Process(target=to_use_in_separate_process, args=('xdg-open', apathtofile))
        p.start()  # start the child and return immediately
    print('exiting def main')

if __name__ == '__main__':
    # parameter [1] must be some file that can be opened by xdg-open that this
    # program uses.
    if len(sys.argv) > 1:
        main(sys.argv[1])
        print('we can exit now.')
    else:
        print('no parameters...')
    print('mother program will end now!')
    sys.exit(0)
In Ubuntu, the following commands keep working even though the Python app exits.
url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)
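Roughly the same fire-and-forget is possible with subprocess instead of os.system; a sketch, assuming mpv and zenity are installed (start_new_session puts the child in its own session, so signals aimed at the parent's process group, e.g. Ctrl-C, don't reach it):

import shlex
import subprocess

url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
subprocess.Popen(
    ["sh", "-c", f"mpv {shlex.quote(url)} && zenity --info --text 'you have watched the video'"],
    start_new_session=True,
)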