Running a Python script in the background with nohup and timing it - python

I'm running a Python script on a remote server with the time command as follows:
time python myscript.py
The SSH connection to the server times out after a while, so I also need to run it with nohup. So I have the following two questions:
Is nohup time python myscript.py & the right command to execute my Python script?
If the script runs in the background, how will I see the output of the time command?

nohup will usually write STDOUT and STDERR to a file called "nohup.out" in the current directory. You'll be able to see the output of time at the end of that file.
Another way of solving this is to redirect the output yourself, like this:
nohup time python myscript.py > myoutput &

To 1.: Yes.
To 2.: You can redirect the output to a file:
nohup time python myscript.py > ~/time_output &
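Note that nohup needs an external utility, so time in these commands is the external time binary (e.g. /usr/bin/time), not the shell's keyword, and it prints its report to stderr. If GNU time is available, a variant worth knowing (file names here are just examples) sends the timing report to its own file with -o, keeping it separate from the script's output:
nohup /usr/bin/time -o time_output.txt python myscript.py > script_output.log 2>&1 &
The timing summary then lands in time_output.txt, while everything the script prints goes to script_output.log.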

Related

How to wrap a python script into a bash script so that, when using nohup, the output is redirected in an online manner?

I have a Python script script.py:
from time import sleep

for i in range(30):
    print(i)
    sleep(1)
I wrap this script into a bash script script.sh:
#!/bin/bash
python3 script.py
I want to run the bash script with nohup and redirect the output to output.out, so I run this Linux command:
nohup bash script.sh > output.out &
However, the output is written to output.out only when the Python script script.py ends, not in an online manner. Hence the question:
Question. How can I redirect the output to output.out in an online manner?
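A likely fix, consistent with the buffering answers further down this page: Python flushes stdout lazily when it is redirected to a file, so run the interpreter unbuffered inside script.sh (a minimal sketch):
#!/bin/bash
python3 -u script.py
With that change, nohup bash script.sh > output.out & writes each line to output.out as soon as it is printed.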

nohup multiple sequential commands not logging output

I have a Python script (which takes a long time to finish) that I need to run several times with varying parameters, and it is executed on a remote machine. For instance, and for test purposes, take the following script test.py:
import time
print("\nStart time: {}".format(time.ctime()))
time.sleep(10)
print("End time: {}".format(time.ctime()))
For this I normally use nohup. It works just fine with one execution, using the following command:
nohup python test.py &
The outputs are correctly saved in the nohup.out file. To run the executions in sequence I did some research, found this question, and came up with the following command:
nohup $(python test.py; python test.py) &
This works, in a sense: I ran the command, quickly logged out and in again, and watched through htop as the first execution ran, then finished, and then the second one started. But the problem is that the output isn't being saved to the nohup.out file. If I wait in the terminal for both executions to finish, the following error is shown:
nohup: failed to run command 'Start': No such file or directory
What am I doing wrong here?
PS: I need to log the outputs because I need to see the current progress of the script and to know which error happened if it doesn't finish properly. So if there is some other command to use instead of nohup that can log Python's print output, it will be welcome too.
The command you have:
nohup $(python test.py; python test.py) &
will attempt to execute the output of the scripts: $(...) is command substitution, so the shell runs both commands first and then hands their combined stdout to nohup as the command to run. That is where nohup: failed to run command 'Start' comes from, and it's likely not what you wanted.
What you wanted here is to have nohup start a command that executes the two commands in sequence. The most straightforward way to do this is to run a child shell:
nohup bash -c "python one.py; python two.py" &
As for a better way to do this, you might want to investigate tmux or screen. If you start a command in a tmux/screen session, not only can you detach the session from the currently running shell, you can also reconnect to the session later on to resume and interact with the program.
The nohup command is passed a utility and arguments for that utility.
If you'd like to run your scripts in sequence via nohup, using bash as that utility isn't a bad idea:
nohup bash -c "python ./test.py; python ./test.py" &
However, I recommend looking into Python's logging package, as I consider nohup a workaround (nohup only appends to nohup.out if the standard output is the terminal).
Also, there is the approach of using a queue to manage running your tasks sequentially; with a queue you needn't be so verbose just to run the same script twice. Then again, between what you have and writing a worker to consume the queue, I think what you have is simpler ¯\_(ツ)_/¯
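A minimal sketch of the logging approach mentioned above (the file name and format are illustrative); the file handler flushes each record as it is emitted, so you can follow progress with tail -f while the script runs:
import logging

logging.basicConfig(
    filename='test.log',
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s')

logging.info('Start')
# ... long-running work goes here ...
logging.info('End')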

no error messages with nohup and python?

I run a Python script on a Linux server with nohup like this:
nohup python3 ./myscript.py > ./mylog.log &
It works, and the script writes the log file, but the problem is that Python error messages/thrown exceptions don't seem to get written into the log file. How can I achieve this?
Has this something to do with stderr? (but it says: "nohup: redirecting stderr to stdout" when I start the script.)
It is a long running script and after a while sometimes the script stops working because of some problem and with missing error messages I have no clue why. The problems always happen after a few days so this is really a pain to debug.
Edit:
Could it have something to do with flushing? My own prints use flush, but maybe Python's error output doesn't, so it doesn't show up in the file once the script aborts?
I have found the reason. It really was the buffering problem (see my edit above). :)
nohup python3 -u ./myscript.py > ./mylog.log &
With the python -u parameter it works. It disables buffering.
Now I can go bug hunting...
You are only redirecting stdout. Error messages are given on stderr. Rerun your script like this:
nohup python3 ./myscript.py &> ./mylog.log &
The &> redirects all output (stdout and stderr) to your log file.
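Note that &> is a bash extension; the portable spelling with the same effect is:
nohup python3 ./myscript.py > ./mylog.log 2>&1 &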
With nohup, errors will not be logged unless you specifically redirect them to a second file, as shown below:
nohup python myscript.py > out.log 2> err.log
You can also redirect both the output and the errors to the same file:
nohup python myscript.py > out.log 2>&1
Python buffers output before writing it to the log. To get a real-time log, use the -u flag for "unbuffered":
nohup python -u myscript.py > out.log 2>&1
To run in the background and get your prompt back, add & at the end:
nohup python -u myscript.py > out.log 2>&1 &
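Independently of the shell redirection, you can also catch crashes inside the script itself so the traceback always ends up in a log file. A defensive sketch (the logger configuration and file name are assumptions):
import logging
import sys

logging.basicConfig(filename='mylog.log', level=logging.INFO)

def main():
    ...  # your long-running work

if __name__ == '__main__':
    try:
        main()
    except Exception:
        # logs the message plus the full traceback at ERROR level
        logging.exception('Unhandled exception, aborting')
        sys.exit(1)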

Nohup is not writing log to output file

I am using the following command to run a python script in the background:
nohup ./cmd.py > cmd.log &
But it appears that nohup is not writing anything to the log file. cmd.log is created but is always empty. In the python script, I am using sys.stdout.write instead of print to print to standard output. Am I doing anything wrong?
You can run Python with the -u flag to avoid output buffering:
nohup python -u ./cmd.py > cmd.log &
It looks like you need to flush stdout periodically (e.g. sys.stdout.flush()). In my testing Python doesn't automatically do this even with print until the program exits.
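For example, a toy script body (hypothetical, just to illustrate the pattern) that pushes each line out to cmd.log as it is written:
import sys
import time

for i in range(30):
    sys.stdout.write('tick %d\n' % i)
    sys.stdout.flush()  # force the buffered text out to the log now
    time.sleep(1)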
Using -u with nohup worked for me. Using -u forces the stdout and stderr streams to be unbuffered. It does not affect stdin. Everything will be saved in the nohup.out file, like this:
nohup python -u your_code.py &
You can also save it to a location of your choice, like this:
nohup python -u your_code.py > your_directory/nohup.out &
Also, you can use PYTHONUNBUFFERED. If you set it to a non-empty string it works the same as the -u option. To use it, run one of the commands below before running your Python code:
export PYTHONUNBUFFERED=1
or
export PYTHONUNBUFFERED=TRUE
P.S.: I suggest using tools like cron to run things in the background on a schedule.
export PYTHONUNBUFFERED=1
nohup ./cmd.py > cmd.log &
or
nohup python -u ./cmd.py > cmd.log &
https://docs.python.org/2/using/cmdline.html#cmdoption-u
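The variable can also be set for a single invocation only, without exporting it into your shell session:
PYTHONUNBUFFERED=1 nohup ./cmd.py > cmd.log &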
Python 3.3 and above support a flush argument to print, and this is the only method that worked for me:
print("number to train = " + str(num_train), flush=True)
print("Using {} evaluation batches".format(num_evals), flush=True)
I had a similar issue, but not connected with a Python process. I was running a script which did a nohup and the script ran periodically via cron.
I was able to resolve the problem by:
redirecting stdin, stdout and stderr
ensuring that the script being invoked via nohup didn't run anything else in the background
PS: my scripts were written in ksh running on RHEL
I run my scripts in the following way and I have no problem at all:
nohup python my_script.py &> my_script.out &
Comparing with your syntax, it looks like you are only missing an "&" in the redirection (&> instead of >)...

How to run a script in the background even after I logout SSH?

I have a Python script bgservice.py and I want it to run all the time, because it is part of a web service I'm building. How can I make it run continuously even after I log out of SSH?
Run nohup python bgservice.py & to get the script to ignore the hangup signal and keep running. Output will be put in nohup.out.
Ideally, you'd run your script with something like supervise so that it can be restarted if (when) it dies.
If you've already started the process and don't want to kill it and restart it under nohup, you can send it to the background and then disown it, as in the sample session after these steps:
Ctrl+Z (suspend the process)
bg (restart the process in the background)
disown %1 (assuming this is job #1; use jobs to determine)
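An illustrative session (the job number and status lines will vary with your shell):
$ python bgservice.py
^Z
[1]+  Stopped                 python bgservice.py
$ bg
[1]+ python bgservice.py &
$ disown %1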
Running a Python Script in the Background
First, you need to add a shebang line in the Python script which looks like the following:
#!/usr/bin/env python3
This path is necessary if you have multiple versions of Python installed, and /usr/bin/env will ensure that the first Python interpreter in your $PATH environment variable is taken. You can also hardcode the path of your Python interpreter (e.g. #!/usr/bin/python3), but this is not flexible and not portable to other machines. Next, you'll need to set the permissions of the file to allow execution:
chmod +x test.py
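For example, a throwaway test.py might look like this (the body is illustrative; flush=True sidesteps the buffering issues discussed earlier on this page):
#!/usr/bin/env python3
import time

for i in range(60):
    print('still alive:', i, flush=True)
    time.sleep(1)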
Now you can run the script with nohup which ignores the hangup signal. This means that you can close the terminal without stopping the execution. Also, don’t forget to add & so the script runs in the background:
nohup /path/to/test.py &
If you did not add a shebang to the file you can instead run the script with this command:
nohup python /path/to/test.py &
The output will be saved in the nohup.out file, unless you specify an output file as shown here:
nohup /path/to/test.py > output.log &
nohup python /path/to/test.py > output.log &
If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
# doesn't create nohup.out
nohup command >/dev/null 2>&1
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
# runs in background, still doesn't create nohup.out
nohup command >/dev/null 2>&1 &
You can find the process and its process ID with this command:
ps ax | grep test.py
# or
# list running Python processes
ps -fA | grep python
ps stands for process status
If you want to stop the execution, you can kill it with the kill command:
kill PID
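If you'd rather not copy the PID by hand, pkill matches against the command line; just make sure the pattern matches only your script:
pkill -f test.py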
You could also use GNU screen which just about every Linux/Unix system should have.
If you are on Ubuntu/Debian, its enhanced variant byobu is rather nice too.
You might also consider turning your Python script into a proper daemon:
python-daemon is a good tool that can be used to run Python scripts as a background daemon process rather than a forever-running script. You will need to modify the existing code a bit, but it's plain and simple.
If you are facing problems with python-daemon, there is another utility, supervisor, that will do the same for you; in this case you won't have to write any code (or modify existing code), as it is an out-of-the-box solution for daemonizing processes.
Alternate answer: tmux
ssh into the remote machine
type tmux in the terminal
start the process you want inside tmux, e.g. python3 main.py
leave the tmux session with Ctrl+b then d
It is now safe to exit the remote machine. When you come back, use tmux attach to re-enter the tmux session.
If you want to start multiple sessions, name each session using Ctrl+b then $, then type your session name.
To list all sessions, use tmux list-sessions.
To attach to a running session, use tmux attach-session -t <session-name>.
You can nohup it, but I prefer screen.
Here is a simple solution inside Python using a decorator:
import os, time

def daemon(func):
    def wrapper(*args, **kwargs):
        if os.fork():            # parent: fork() returns the child's pid, so just return
            return
        func(*args, **kwargs)    # child: run the wrapped function...
        os._exit(os.EX_OK)       # ...then exit without running cleanup handlers
    return wrapper

@daemon
def my_func(count=10):
    for i in range(count):
        print('parent pid: %d' % os.getppid())
        time.sleep(1)

my_func(count=10)

# still in the parent thread
time.sleep(2)
# after 2 seconds the function my_func lives on its own
You can of course replace the content of your bgservice.py file in place of my_func. Note that a single fork is not full daemonization: the child still belongs to the terminal's session, so pair this with nohup (or call os.setsid() in the child) if it must survive logout.
Try this:
nohup python -u <your file name>.py >> <your log file>.log &
You can run the above command inside screen and then detach from the screen session.
Now you can tail the logs of your Python script with: tail -f <your log file>.log
To kill your script, you can use the ps -aux and kill commands.
The zsh shell has an option to make all background processes run with nohup.
In ~/.zshrc add the lines:
setopt nocheckjobs #don't warn about bg processes on exit
setopt nohup #don't kill bg processes on exit
Then you just need to run a process like so: python bgservice.py &, and you no longer need to use the nohup command.
I know not many people use zsh, but it's a really cool shell which I would recommend.
If what you need is that the process should run forever no matter whether you are logged in or not, consider running the process as a daemon.
supervisord is a great out of the box solution that can be used to daemonize any process. It has another controlling utility supervisorctl that can be used to monitor processes that are being run by supervisor.
You don't have to write any extra code or modify existing scripts to make this work. Moreover, verbose documentation makes this process much simpler.
After scratching my head for hours around python-daemon, supervisor is the solution that worked for me in minutes.
Hope this helps someone trying to make python-daemon work
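As a sketch, a minimal program block in supervisord's configuration might look like this (the program name, paths and log locations are assumptions for illustration):
[program:bgservice]
command=python3 /path/to/bgservice.py
autostart=true
autorestart=true
stdout_logfile=/var/log/bgservice.out.log
stderr_logfile=/var/log/bgservice.err.log
supervisorctl status then shows whether the process is running and lets you start, stop, or restart it.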
You can also use Yapdi:
Basic usage:
import yapdi

daemon = yapdi.Daemon()
retcode = daemon.daemonize()

# This would run in daemon mode; output is not visible
if retcode == yapdi.OPERATION_SUCCESSFUL:
    print('Hello Daemon')
