How do I stop a process running a certain Python script? - python

I am running two different Python scripts on an Ubuntu VPS. One of them is the production version, and the other is for testing purposes.
From time to time, I need to kill or restart one of them. However, ps does not show which script each Python process is running.
What is the conventional way to do this?

ps -AF will give you All processes (not only the ones in your current terminal or run as your current user), in Full detail, including their arguments, so you can see which script each Python process is running.
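For example, a short sketch combining that with grep and pkill (the script names here are placeholders for your two files):

# List full command lines and filter for the script you care about.
# The [p] trick stops grep from matching its own command line.
ps -AF | grep '[p]rod_bot.py'

# Kill only the process whose command line matches that script;
# pkill -f matches against the full argument list, not just "python3".
pkill -f 'prod_bot.py'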

The easiest way to keep it simple would be to create a screen for each: screen -S prod and screen -S test, then run the Python script inside each one and detach the screen (using Ctrl+A then D). When you need to stop one, reattach with screen -r prod, kill or restart the script, then detach again.
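A rough sketch of that workflow (session and script names are only examples):

# one named session per script
screen -S prod        # inside it: python3 prod.py, then Ctrl+A D to detach
screen -S test        # inside it: python3 test.py, then Ctrl+A D to detach

# later: reattach, stop or restart the script, detach again with Ctrl+A D
screen -r prod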

Related

How can I keep my python-daemon process running or restart it on fail?

I have a python3.9 script I want to have running 24/7. In it, I use python-daemon to keep it running like so:
import daemon
with daemon.DaemonContext():
    %%script%%
And it works fine but after a few hours or days, it just crashes randomly. I always start it with sudo but I can't seem to figure out where to find the log file of the daemon process for debugging. What can I do to ensure logging? How can I keep the script running or auto-restart it after crashing?
You can find the full code here.
If you really want to run a script 24/7 in the background, the cleanest and easiest way to do it would surely be to create a systemd service.
There are already many descriptions of how to do that, for example here.
One of the advantages of systemd, in addition to being able to launch a service at startup, is to be able to restart it after failure.
Restart=on-failure
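A minimal sketch of such a unit file, assuming a hypothetical service name and script path (adjust both to your setup); systemd's journal also answers the question of where the logs go:

# write the unit file (name and paths are placeholders)
sudo tee /etc/systemd/system/myscript.service > /dev/null <<'EOF'
[Unit]
Description=My always-on Python script
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/user/myscript.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# reload systemd, start the service now and at every boot
sudo systemctl daemon-reload
sudo systemctl enable --now myscript.service

# follow the service's output in the journal
journalctl -u myscript.service -f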
If all you want to do is automatically restart the program after a crash, the easiest method would probably be to use a bash script.
You can use the until loop, which is used to execute a given set of commands as long as the given condition evaluates to false.
#!/bin/bash
until python /path/to/script.py; do
    echo "The program crashed at $(date +%H:%M:%S). Restarting the script..."
done
If the command returns a non-zero exit status, the script is restarted.
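To keep that wrapper itself alive after you log out, you could save it and start it detached, roughly like this (the file name restart_loop.sh is just an example):

chmod +x restart_loop.sh
nohup ./restart_loop.sh > restart.log 2>&1 &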
I would start with familiarizing myself with those two questions:
How to make a Python script run like a service or daemon in Linux
Run a python script with supervisor
Looks like you need a supervisor that will make sure that your script/daemon is still running. You can take a look at supervisord.
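As a rough idea of what a supervisord program entry can look like (program name, paths and log locations are placeholders), dropped into /etc/supervisor/conf.d/ on Ubuntu:

# sketch of a program entry for supervisord
sudo tee /etc/supervisor/conf.d/myscript.conf > /dev/null <<'EOF'
[program:myscript]
command=/usr/bin/python3 /home/user/myscript.py
autostart=true
autorestart=true
stdout_logfile=/var/log/myscript.out.log
stderr_logfile=/var/log/myscript.err.log
EOF

# pick up the new config and check the process
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status myscript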

How to run a python script always in linux (ubuntu)

I have a Linux server that I connect to with SSH. Now I have a Python script that I want to run all the time. I used this command:
ubuntu:~$ nohup python3 -u ~/test/main.py > test.outs 2>&1 &
but when I exit the SSH connection, the Python script exits at the same time.
What should I do?
You could run the script regularly, or check that it is still running, with a cron job. This would also allow you to run the script at system start-up, so it would keep running in the event of a reboot.
There are a few suggestions here:
https://superuser.com/questions/448445/run-bash-script-in-background-and-exit-terminal
Although that page also suggests that nohup should stop the child process from being killed when you exit the session. How do you know that the script stops running when you exit?
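For the "check it is running" part, one hedged sketch of a crontab entry (using the path from the question) is to test for the process every few minutes and restart it if it has gone away:

# every 5 minutes: if nothing matches the script, start it again
# the [m] character class stops the cron shell from matching itself
*/5 * * * * pgrep -f '[m]ain.py' > /dev/null || nohup python3 -u $HOME/test/main.py >> $HOME/test/test.outs 2>&1 &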
There are multiple ways to do this and they depend on your use-case.
One way to do it is to install 'screen' on your server.
sudo apt-get install screen
Now, whenever you connect to your server with SSH, you can type screen and then start your code there. (The first time you do this you get an introduction to screen; press space to skip it.)
With this code executing, you type Ctrl + A and then Ctrl + D. This 'detaches' the screen. You can now disconnect and this 'screen' will keep existing and the code will still run.
When reconnecting, you might want to go back to this screen. Type
screen -ls
to get an overview of the screens that you have running. (In this case, there'll only be one.) They can be identified by the number (the process ID) each name starts with. So go back to this screen by typing
screen -r XXXXX
and now you're back. More information on this here:
https://www.howtogeek.com/662422/how-to-use-linuxs-screen-command/
Again, I don't know what your script does so it might not be the best solution.
You may use screen
After installation, you can start screen by typing screen in the console, and then run your script. When you press Ctrl+A then D, the screen will detach and you can exit your SSH connection.
To re-open the screen you opened before, just type screen -r
If you have multiple screen instances, screen will list them when you type screen -r. You just need to find the id of the screen you want and type screen -r id, e.g. screen -r 2643

Ways to run python script in the background on ubuntu VPS and save the logs

So, I have a Python script which outputs some data to the terminal from time to time. I'm trying to keep it running on the Ubuntu VPS even after I close the SSH connection, and still keep the logs somewhere.
I'm saving the logs by using:
python3 my_script.py >>file.txt
and it works perfectly. However, when I try to run this process using
nohup python3 my_script.py >>file.txt &
so that it runs in the background, then after the SSH connection is closed it seems to save only the first log output from my_script.py. I've also tried running this from crontab but the result is similar - only the first log is saved.
Any tips? What am I doing wrong?
I could not understand what you mean by "the first log". Maybe the first line of the logs?
To run something in the background after the SSH connection is closed, I prefer Linux screen, a terminal multiplexer that helps you run your command in a sub-process. With it, you can choose to view your output in the foreground at any time, or leave your process running in the background.
Usage (short)
screen may not be installed by default. Install it (Ubuntu):
$ sudo apt-get install screen
Run your script in the foreground:
$ screen python3 my_script.py
You'll see it running. Now detach from this screen: press Ctrl-A followed by Ctrl-D. You'll be back in the shell where you ran the previous screen command. If you need to switch back to the running context, use the screen -r command.
This tool supports multiple parallel running processes, too.
Something weird
I've tried to redirect stdout or stderr to a file with the > or >> symbol, but it didn't work for me. I am not an expert on this either, and maybe you need to check its manual page. However, I tend to write directly to a file from within my Python scripts, with only some essential output lines printed on the console.
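One possible explanation, offered only as an assumption, is Python's output buffering: when stdout is redirected to a file rather than a terminal, prints are flushed in large chunks, so only the first chunk shows up before you disconnect. Running the interpreter unbuffered (as the nohup example in the previous question already does with -u) is one thing to try:

# -u disables buffering, so each print reaches file.txt immediately
nohup python3 -u my_script.py >> file.txt 2>&1 &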

Is it possible to run SLURM jobs in the background using SRUN instead of SBATCH?

I was trying to run Slurm jobs with srun in the background. Unfortunately, because right now I have to run things through Docker, it's a bit annoying to use sbatch, so I am trying to find out if I can avoid it altogether.
From my observations, whenever I run srun, say:
srun docker image my_job_script.py
and close the window where I was running the command (to avoid receiving all the print statements), then open another terminal window to see if the command is still running, it seems that my running script gets cancelled for some reason. Since it isn't run through sbatch, it doesn't give me a file with the error log (as far as I know), so I have no idea why it closed.
I also tried:
srun docker image my_job_script.py &
to give control back to me in the terminal. Unfortunately, if I do that it still keeps printing things to my terminal screen, which I am trying to avoid.
Essentially, I log into a remote computer through ssh and then do a srun command, but it seems that if I terminate the communication of my ssh connection, the srun command is automatically killed. Is there a way to stop this?
Ideally I would like to essentially send the script to run and not have it be cancelled for any reason unless I cancel it through scancel and it should not print to my screen. So my ideal solution is:
keep running srun script even if I log out of the ssh session
keep running my srun script even if I close the window from where I sent the command
keep running my srun script and let me leave the srun session without printing to my screen (i.e. essentially run in the background)
this would be my ideal solution.
For the curious crowd that want to know the issue with sbatch, I want to be able to do (which is the ideal solution):
sbatch docker image my_job_script.py
however, as people will know it does not work because sbatch receives the command docker which isn't a "batch" script. Essentially a simple solution (that doesn't really work for my case) would be to wrap the docker command in a batch script:
#!/usr/bin/sh
docker image my_job_script.py
Unfortunately, I am actually using my batch script to encode a lot of information (sort of like a config file) about the task I am running. So doing that might affect jobs I run because their underlying file is changing. That is avoided by sending the job directly to sbatch, since it essentially creates a copy of the batch script (as noted in this question: Changing the bash script sent to sbatch in slurm during run a bad idea?). So the real solution to my problem would be to have my batch script contain all the information that my script requires and then somehow call docker from Python and at the same time pass it all the information. Unfortunately, some of the information consists of function pointers and objects, so it's not even clear to me how I would pass such a thing to a docker command run from Python.
Or maybe being able to pass the docker command directly to sbatch instead of using a batch script would also solve the problem.
The outputs can be redirected with the option -o for stdout and -e for stderr.
So, the job can be launched in background and with the outputs redirected:
$ srun -o file.out -e file.err docker image my_job_script.py &
Another approach is to use a terminal multiplexer like tmux or screen.
For example, create a new tmux window by typing tmux. In that window, use srun with your script. From there, you can then detach the tmux window, which returns you to your main shell so you can go about your other business, or you can log off entirely. When you want to check in on your script, just reattach to the tmux window. See the tmux documentation (tmux -h) for how to detach and reattach on your OS.
Any output redirects using the -o or -e will still work with this technique and you can run multiple srun commands concurrently in different tmux windows. I’ve found this approach useful to run concurrent pipelines (genomics in this case).
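A rough sketch of that workflow (the session name is arbitrary):

tmux new -s myjob        # start a named tmux session
srun -o file.out -e file.err docker image my_job_script.py
# detach with Ctrl-b then d; you can now log out of the SSH session

tmux attach -t myjob     # later: reattach to check on the job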
I was wondering this too because the differences between sbatch and srun are not very clearly explained or motivated. I looked at the code and found:
sbatch
sbatch pretty much just sends a shell script to the controller, tells it to run it and then exits. It does not need to keep running while the job is happening. It does have a --wait option to stay running until the job is finished but all it does is poll the controller every 2 seconds to ask it.
sbatch can't run a job across multiple nodes - the code simply isn't in sbatch.c. sbatch is not implemented in terms of srun, it's a totally different thing.
Also its argument must be a shell script. Bit of a weird limitation but it does have a --wrap option so that it can automatically wrap a real program in a shell script for you. Good luck getting all the escaping right with that!
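For instance, --wrap lets you submit a single command without writing the batch file yourself; applied to the docker command from the question it would look roughly like this:

# sbatch generates a throwaway shell script around the quoted command
sbatch --wrap "docker image my_job_script.py"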
srun
srun is more like an MPI runner. It directly starts tasks on lots of nodes (one task per node by default though you can override that with --ntasks). It's intended for MPI so all of the jobs will run simultaneously. It won't start any until all the nodes have a slot free.
It must keep running while the job is in progress. You can send it to the background with & but this is still different from sbatch. If you need to start a million sruns you're going to have a problem. A million sbatches should (in theory) work fine.
There is no way to have srun exit and leave the job still running like there is with sbatch. srun itself acts as a coordinator for all of the nodes in the job, and it updates the job status etc. so it needs to be running for the whole thing.

Autostart Python script and run in background with Ubuntu

I'm running Ubuntu server 16.04 and still getting to grips with it. I have a python script that runs in an endless loop, performing a task related to fetching data from an external source.
What I'm trying to do, is make this python script start after (or during) boot and then run in the background.
I've tried editing rc.local but the boot sequence just hangs since the script keeps running.
Any advice would be greatly appreciated.
As one of the comments mentions, you can use cron jobs to start scripts at certain times, such as at startup (as you would like to do). It also would not hang the boot sequence like you mentioned happens with rc.local.
The line that you need to add to the crontab is:
@reboot python /home/MyPythonScript.py
Here are a couple of useful tutorials that show you how to do this: http://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/
https://help.ubuntu.com/community/CronHowto
If you would like to do it with python itself there is this handy python library - https://pypi.python.org/pypi/python-crontab/
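As a quick sketch of how this might look in practice (the log path is an assumption), edit your crontab and add the @reboot line, redirecting output to a file so you can see it later:

crontab -e    # opens your user crontab in an editor

# then add, on one line:
@reboot python /home/MyPythonScript.py >> /home/cron_script.log 2>&1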
tmux is a great utility for background desktops. You can use it for this:
sudo apt-get install tmux
Then add it to your rc.local:
/usr/bin/tmux new-session -d 'python /path/to/your/script'
After boot you can use it as follow:
tmux attach
And your console will be attached to the last session running in the background.
