Killing a Python script from another script spawned from it - python

I have two Python scripts on a Linux system. Let's call them service and killer. Service runs as a systemd service and killer as a plain script. Killer exists to perform certain tasks that can't be executed while service is running, due to limited hardware resources and the desire to keep the code simple.
What I need is to be able to start killer from service and then have killer kill service without dying itself (since it is a child process of service). How can I do that?
This is what I have tried so far (without success):
# Attempt 1: subprocess.call (blocks until killer.py exits)
import subprocess
subprocess.call("killer.py")

# Attempt 2: subprocess.Popen (returns immediately)
import subprocess
subprocess.Popen(["killer.py"])

# Attempt 3: the sh library
import sh
killer = sh.Command("killer.py")
killer()
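
Because service runs under systemd, any child started with plain subprocess lives inside the service's cgroup and is normally killed together with the unit when it stops (the default KillMode is control-group). A minimal sketch of one way around that: launch killer as a transient unit via systemd-run, and have killer stop the parent by unit name. The interpreter path, script path and unit names below are assumptions, and both calls need suitable privileges (root or a polkit rule).

# In service.py: start killer outside the service's own cgroup.
import subprocess

subprocess.Popen([
    "systemd-run", "--unit=killer",        # transient unit, own cgroup
    "/usr/bin/python3", "/opt/killer.py",  # assumed interpreter and path
])

# In killer.py: stop the parent service by its unit name (assumed).
import subprocess

subprocess.run(["systemctl", "stop", "service.service"], check=False)
# ... perform the tasks that need the hardware to themselves ...
subprocess.run(["systemctl", "start", "service.service"], check=False)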

Related

Linux/pm2 is killing my Flask service using Python's multiprocessing library

I have a Flask service running on a particular port xxxx. Inside this flask service is an endpoint:
/buildGlobalIdsPool
This endpoint uses Python's multiprocessing library's Pool object to run parallel processes of a function:
from multiprocessing import Pool

with Pool() as p:
    p.starmap(api.build_global_ids_with_recordlinkage, args)
We use the pm2 process manager on a Linux server to manage our services. I am hitting the endpoint from Postman and everything works fine up until the code above is reached. As soon as the processes are supposed to spawn, pm2 kills the main Flask process, but the spawned processes persist (I check using lsof -i:xxxx and see multiple python3 processes running on this port). This happens whether I run the service under pm2 or simply run python3 app.py. The program works on my local Windows 10 machine.
I'm just curious what I could be missing, native to Linux or pm2, that is killing the main process or not allowing multiple processes on the same port, while my local machine handles the program just fine.
Thanks!
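
One concrete difference between the Windows machine where this works and the Linux server is multiprocessing's default start method: spawn on Windows, fork on Linux. As a diagnostic (not a confirmed fix), you can force spawn on Linux so the behaviour matches Windows; note that with spawn the workers re-import the main module, so module-level startup code such as app.run() must sit behind an if __name__ == "__main__" guard.

from multiprocessing import get_context

# Use the "spawn" start method (the Windows default) instead of
# Linux's default "fork", to rule out fork-related interactions
# with the Flask parent process managed by pm2.
ctx = get_context("spawn")
with ctx.Pool() as p:
    p.starmap(api.build_global_ids_with_recordlinkage, args)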

Is the python subprocess blocking the IO?

I am using CherryPy as a web server. After a request comes in, it may kick off a very long-running process. I don't want the web server to stay busy handling that process, so I separated the execution into a separate script and use a subprocess to call this script. But it seems that the subprocess waits for the process to finish. Can I make the subprocess run in the background on its own after it has been started? Thanks.
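
The key difference here is that subprocess.call waits for the command to finish, while subprocess.Popen returns as soon as the child has started. A minimal sketch, assuming the long-running work lives in a script called long_task.py (the name is illustrative):

import subprocess

# Popen does not wait; it returns immediately after starting the child.
# Redirecting the streams and starting a new session detaches the child
# from the CherryPy worker so nothing blocks on its output.
subprocess.Popen(
    ["python3", "long_task.py"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)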

Difference between python-daemon and multiprocessing libraries

I need to run a daemon process from a Python Django module; the daemon will be running an XML-RPC server, and the main process will host an XML-RPC client. I am a bit confused about creating, starting, stopping and terminating daemons in Python. I have seen two libraries, the standard multiprocessing module and python-daemon (https://pypi.python.org/pypi/python-daemon/1.6), but I don't quite understand which would be effective in my case. Also, when and how do I need to handle SIGTERM for my daemons? Can anybody help me understand this, please?
The multiprocessing module is designed as a drop-in replacement for the threading module. It's meant for the same kinds of tasks you'd normally use threads for: speeding up execution by running across multiple cores, background polling, and any other task that you want running concurrently with some other task. It's not designed to launch standalone daemon processes, so I don't think it's appropriate for your use case.
The python-daemon library is designed to "daemonize" the currently running Python process. I think what you want is to use the subprocess library from your main process (the xmlrpc client) to launch your daemon process (the xmlrpc server), using subprocess.Popen. Then, inside the daemon process, you can use the python-daemon library to become a daemon.
So in the main process, something like this:
import subprocess
subprocess.Popen(["my_daemon.py", "-o", "some_option"])
And in my_daemon.py:
import daemon

...

def main():
    # Do normal startup stuff
    ...

if __name__ == "__main__":
    with daemon.DaemonContext():  # This makes the process a daemon
        main()
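
As for handling SIGTERM: a minimal sketch using the standard signal module inside the daemon context (the handler and its cleanup are illustrative, and main() is the function from the snippet above). DaemonContext also accepts a signal_map argument if you would rather declare handlers there.

import signal
import sys

import daemon

def handle_sigterm(signum, frame):
    # Release resources (sockets, lock files, in-flight work), then exit.
    sys.exit(0)

if __name__ == "__main__":
    with daemon.DaemonContext():
        signal.signal(signal.SIGTERM, handle_sigterm)
        main()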

How to design a resilient and highly available service in python?

I am trying to design a resilient and highly available Python API back-end service. The core service is designed to run continuously and has to run independently for each of my tenants. This is required because the core service is blocking, and each tenant's execution needs to be independent of any other tenant's.
The core service is started by a provisioning service. The provisioner is also a continuously running service and is responsible for the housekeeping functions, i.e. starting the core service on tenant sign-up, checking for the required environment and attributes, stopping the core service, and so on.
Currently I am using the multiprocessing module to spawn child instances of the core service from the provisioner service. Having a multi-threaded service with one thread per tenant is also an option, but it has the drawback of disrupting service for the other tenants if any of the threads crashes. Ideally I would like all of these to run as background processes. The problems are:
If I daemonize the provisioner service, multiprocessing will not let that daemon create child processes. This is written here.
If the provisioner service dies, then all the children become orphans. How do I recover from this situation?
Obviously, I am open to solutions that do not follow this multiprocessing usage model.
I would recommend you take a different approach. Use the system tools available in your distribution to manage the life-cycle of your processes instead of spawning them yourself. The provisioner would be much simpler as well, as it will not have to reproduce what your operating system can do with little effort.
On Ubuntu/CentOS 6 systems you can use Upstart, which has a number of advantages over the old sysvinit (aggressive parallelisation, respawning, simple init config syntax, etc).
There is also systemd, which is similar to Upstart in design and is the default in openSUSE.
The provisioner could then be used only to create the needed init config for each service, and to start or stop them using the subprocess module. You could then monitor your instances in case Upstart is not able to respawn one, send an alert, or try to start the service again.
Using this approach, you isolate all instances of user services from one another. If the provisioner crashes, the rest of the services will remain up.
For example, say your provisioner is running in the background. It gets a message via AMQP or some other means to create a user and start services for that user. One possible flow would be:
Create the user
Do any bootstrapping needed for new users
Create /etc/init/[username]_service.conf
Start [username]_service
The init script could look similar to:
description "start Service for [username]"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
# Run before process
pre-start script
end script
exec /bin/su -c "/path/to/your/app" <username>
This way you offload process management from your provisioner to the system upstart daemon. You only need to do job management in a simple way (create/destroy services when a user is created or deleted).
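
A rough sketch of that job management step on the provisioner side, writing the per-user Upstart job and driving it with subprocess. The template is a trimmed version of the config above, and start/stop are Upstart's own job control commands; paths are assumptions.

import subprocess

UPSTART_TEMPLATE = """\
description "start Service for {username}"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /bin/su -c "/path/to/your/app" {username}
"""

def create_service(username):
    # Write the per-user job definition, then ask Upstart to start it.
    with open("/etc/init/{0}_service.conf".format(username), "w") as f:
        f.write(UPSTART_TEMPLATE.format(username=username))
    subprocess.check_call(["start", "{0}_service".format(username)])

def destroy_service(username):
    # Stop the job; removing the .conf file is left out for brevity.
    subprocess.call(["stop", "{0}_service".format(username)])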
On Debian-like systems you can wrap a non-daemonized service with:
start-stop-daemon --start --quiet --background --make-pidfile --pidfile $PIDFILE --exec $DAEMON --chuid $USER --chdir $DIR -- \
$DAEMON_ARGS
Children must die after processing their task. The parent process must be as simple as possible: just "receive task, spawn child" in the main loop, as in the sketch below.
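
A sketch of that "receive task, spawn child" loop; get_next_task() is a placeholder for however tasks arrive (a queue, a socket, AMQP, ...), and worker.py is an assumed one-shot script that processes a single task and exits.

import subprocess

def get_next_task():
    # Placeholder: block until a task arrives (queue, socket, AMQP, ...).
    raise NotImplementedError

def main_loop():
    while True:
        task = get_next_task()
        # Each child handles exactly one task and then exits, so the
        # parent never accumulates state or long-lived children.
        subprocess.Popen(["python3", "worker.py", str(task)])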

Starting a python script on a remote machine which starts a bash script

I have what I believe to be a fairly unusual problem with a script I use to stand up web servers on remote machines.
I have a controller script which, after checking a ledger, initiates a "builder" script on a remote machine. Part of this builder script calls a bash script which starts a process I want to keep running after both scripts are finished.
My only problem is that the builder script seems to finish (it gets to the last line) but doesn't return control to the controller script.
For the record, I am using subprocess.call in the controller script (to make the ssh call) to start the builder script on the remote machine. I have toyed with various ways of initiating the bash script from the builder script, but it seems the builder won't return control to the controller until I kill the processes spawned by the bash script.
Things I have tried:
pid = os.spawnl(os.P_NOWAIT, dest + '/start_background_script.sh')
pid = subprocess.Popen([dest + '/start_background_script.sh'])
os.system(dest + '/start_background_script.sh &')
The bash script is written so that when you execute it, it backgrounds two processes and then returns control.
Any recommendations?
Sounds like a job for Fabric to me.
Fabric wraps the handling of shell calls on remote (and also local) machines for you.
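
The usual reason the builder never returns is that the two backgrounded processes inherit the ssh session's stdout/stderr, so ssh keeps waiting for those streams to close. Redirecting their output and detaching them with nohup lets the call come back immediately. A minimal sketch with the Fabric 2 API suggested above (the hostname and script path are assumptions):

from fabric import Connection  # Fabric 2.x

c = Connection("webserver-host")
# Redirect the script's output and background it so the remote shell
# (and therefore the ssh call) is not held open by the two processes.
c.run("nohup /path/to/start_background_script.sh > /dev/null 2>&1 &")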
