Get the PID of a process started with fabric run command - python

How can I get the PID of a process started by the Fabric run command? I want to keep track of the PID in case I want to kill the process later.
Is there a better way of dealing with this case?

This should work because the shell opened by the run command is still open when the echo portion of the command is run:
run("mycommand & echo $!")

Not able to send commands to shell I logged into

Originally, I wrote a Python script. It was able to send commands like
subprocess.run(['kubectl', 'config', 'get-context'], shell=True)
but when it came time to get into the child shell, in this case bash, the commands wouldn't run until I exited that shell, and then they would fail with errors like "command not found".
I then tried to do it with the "sh" module, but was also unsuccessful.
I thought maybe using Python was the problem. I also realized my ultimate goal was to use a different shell (cypher-shell), so I skipped straight to that, with bash as the parent shell. There I have a line that is sometimes successful, sometimes not:
kubectl run -it --rm cypher-shell --image=gcr.io/cloud-marketplace/neo4j-public/causal-cluster-k8s:3.4 --restart=Never --namespace=default --command -- ./bin/cypher-shell -u neo4j -p "password" -a "domain.name"
But even when it successfully logs in, it just hangs until I manually exit, and only then does it run the next commands.
Note: I saw this, so perhaps it's not a child shell? Run shell command from child shell
I can't say I know exactly what you are doing, but if I understand your objective correctly, you want the Python program to keep logging while the script continues to run? The problem is that the logger keeps running and holds up your program. The way I would deal with that is to run the logger as a background process.
With bash, that would be ./script.sh &, which lets it run without holding back the rest of the program.
Hopefully that gives you an idea! Good luck.
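On the Python side, the equivalent of bash's & is launching the child with subprocess.Popen and not waiting for it. A minimal sketch, with ./script.sh standing in for whatever long-running command is holding things up:

import subprocess

# Unlike subprocess.run(), Popen returns immediately; the child
# keeps running in the background while this program continues
proc = subprocess.Popen(['./script.sh'],
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)

# ... do other work ...

proc.terminate()  # stop the background process when finished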

Cannot find daemon after daemonizing python script

I daemonized a Python script using the daemonize Python library, but now I cannot find the daemon it spawned. I want to find the daemon and kill it so I can make some changes to the script.
I used the following to daemonize:
from daemonize import Daemonize

pidfile = '/tmp/filename.pid'
daemon = Daemonize(app='filename', pid=pidfile, action=main)
print("daemon started")
daemon.start()
Open a terminal window and try the following:
ps ax | grep <ScriptThatStartedTheDaemon>.py
It should return the PID and the name of the process. Once you have the PID, do:
kill <pid>
Depending on how many times you've run your script, you may have multiple daemons running, in which case you'd want to kill all of them.
To make sure the process was terminated, run the first line of code again. The process with the PID that you killed shouldn't show up if it was successfully terminated.
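Since the script handed Daemonize a pidfile, you can also skip the ps search and read the PID straight from the file the daemon wrote, using the /tmp/filename.pid path from the question:
kill $(cat /tmp/filename.pid)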

Run python script with supervisord

I have a simple Python script (a Discord bot), and it works well when I run it with python3 discord_bot.py & or sh start_bot.sh.
But how can I run it with supervisord?
Update:
I have installed supervisord, but when I try to run the process, I get this error:
exit status 0; not expected
My supervisord config:
[program:AFI]
command=/home/maksymov/www/Bots/discord_bots/afi/start_bot.sh
autostart=true
autorestart=true
stderr_logfile=/var/log/afi.err.log
stdout_logfile=/var/log/afi.out.log
user=www-data
You probably need to use one of the "supervisors", like systemd or Ramona.
The first one is more general; the second is more Python-specific.
I guess your program tries to run as a daemon. I pasted the most relevant part from the documentation:
Supervisord subprocess
Programs meant to be run under supervisor should not daemonize themselves. Instead, they should run in the foreground. They should not detach from the terminal from which they are started.
The easiest way to tell if a program will run in the foreground is to run the command that invokes the program from a shell prompt. If it gives you control of the terminal back, but continues running, it’s daemonizing itself and that will almost certainly be the wrong way to run it under supervisor.
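In this case the likely culprit is start_bot.sh: if it backgrounds the bot (e.g. python3 discord_bot.py &), the wrapper script itself exits immediately with status 0, which is exactly the error supervisord reports. A hedged fix, assuming discord_bot.py lives in the same directory as start_bot.sh, is to point supervisord directly at the foreground python3 command:

[program:AFI]
command=python3 /home/maksymov/www/Bots/discord_bots/afi/discord_bot.py
directory=/home/maksymov/www/Bots/discord_bots/afi
autostart=true
autorestart=true
stderr_logfile=/var/log/afi.err.log
stdout_logfile=/var/log/afi.out.log
user=www-data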

Python hangs when executing a shell script that runs a process as a daemon

I am trying to use os.system (soon to be replaced with subprocess) to call a shell script (which runs a process as a daemon):
os.system('/path/to/shell_script.sh')
The shell script looks like:
nohup /path/to/program &
If I execute this shell script in my local environment, I have to hit Enter before being returned to the console, since the shell script runs a process as a daemon. If I run the above command in an interactive Python session, I also have to hit Enter before being returned to the console.
However, if I run it from a Python program, it just hangs forever.
How can I get the Python program to resume execution after calling a shell script that runs a process as a daemon?
From here -
Within a script, running a command in the background with an ampersand (&)
may cause the script to hang until ENTER is hit. This seems to occur with
commands that write to stdout.
You should try redirecting the output to a file, or to /dev/null if you really do not need it:
nohup /path/to/program > /dev/null &
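Since you mention moving to subprocess anyway, the same idea works on the Python side: detach the child's output so nothing is left writing to your terminal, and use Popen so Python does not wait for the script. A minimal sketch, keeping the /path/to/shell_script.sh placeholder from the question:

import subprocess

# Discard the script's output and return immediately instead of
# blocking the way os.system() does
subprocess.Popen(['/path/to/shell_script.sh'],
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL)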
Why don't you try using a separate thread?
Wrap your process up in something like:
from threading import Thread

def run(my_arg):
    my_process(my_arg)

thread = Thread(target=run, args=(my_arg,))
thread.start()
Check out join and locks for more control over thread execution.
https://docs.python.org/2/library/threading.html

Python popen process does not stay running

I have a Python process that uses os.popen to run tcpdump in the background. It then reads and processes the output from tcpdump. The process runs in the background as a daemon. When I execute this process from the command line, it runs just fine--it fires up tcpdump and reads the output properly. However, I want this process to run automatically at boot and I've directed it to do so in cron. When I do this, my process is running (per the ps command) but tcpdump is not.
Is there some reason the behavior is different starting a process in cron vs starting it from the command line? My code looks something like this:
import os

p = os.popen('/usr/sbin/tcpdump -l -i eth0')
while True:
    data = p.readline()
    # do something with data
cron will send you an email when there is a problem, so the first thing to do is look in your mailbox (run mailx to access it).
If there is no mail, make sure the processes write messages to stdout/stderr when there is a problem.
Also: check that you're using the correct user. On some systems, tcpdump needs to be run as root, so you need to install the job into root's crontab (instead of your normal user's).
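If root turns out to be what's missing, a minimal sketch of moving the job into root's crontab; the script path is a placeholder, and @reboot matches the run-at-boot goal from the question:

sudo crontab -e
# then add a line such as:
@reboot /usr/bin/python /path/to/your_script.py

Note also that cron runs jobs with a much sparser environment than an interactive shell, which is another common reason behavior differs between the two; absolute paths (as in /usr/sbin/tcpdump above) help here.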
