I'm trying to run a python script with the nice level set.
nice -n 5 python3 blah.py
runs as expected and sends its text output to the screen. However, I would like to redirect the output to a text file and run all of this in the background so I can go and check on the progress remotely.
However,
nice -n 5 python3 blah.py > log.txt &
creates the log file log.txt but doesn't write anything to it, so I'm not sure where the standard output is going or how to direct it to my text file.
I eventually solved this using the command
nice -n 5 python3 -u blah.py >log.txt &
-u forces the stdout and stderr streams to be unbuffered (in Python 3 it has no effect on stdin). This allows the output of the Python script to be written to the text file whilst the process is running.
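If you can't add -u to the command line (say, the invocation is buried in another script), the PYTHONUNBUFFERED environment variable has the same effect; Python documents any non-empty value as equivalent to -u:
# same effect as python3 -u: PYTHONUNBUFFERED set to any non-empty value
nice -n 5 env PYTHONUNBUFFERED=1 python3 blah.py > log.txt &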
I'm guessing you're running the command via ssh and want to log out between running and checking the log. To do this run:
nohup nice -n 5 python3 blah.py > log.txt &
This will prevent the program from being killed on logout. nohup also redirects stderr to stdout (when stderr is a terminal), so error messages will end up in log.txt as well, which might reveal why the file was staying empty.
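Combining both answers gives a sketch that survives logout, disables buffering, and captures errors in the same log (assuming you want stdout and stderr in one file):
nohup nice -n 5 python3 -u blah.py > log.txt 2>&1 &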
I have a question about bash syntax for launching scripts from within a bash script.
My questions are:
I've seen the following syntax:
#!/bin/bash
python do_something.py > /dev/null 2>&1 &
Can you please explain what is redirected to /dev/null, and what 2>&1 means when /dev/null has already been given before it?
In addition if I have a line defined like:
python do_something.py > 2>&1 &
how is that different?
If I have the same Python file in many paths, how can I differentiate between the processes after running ps -ef | grep python?
When I do so, I get a list of processes that are all called do_something.py; it would be nice to see the full execution path for each PID. How can I do that?
NOTE: The python file launched is writing its own log files.
OK, disclaimer: I don't have access to a bash shell right now, so I might be wrong.
Let's break your command: python do_something.py > /dev/null 2>&1 &
python do_something.py will run your command
> /dev/null will redirect stdout to /dev/null
2>&1 will redirect stderr to stdout
& will fork your process and run in background
So your command will discard both stdout and stderr and run in the background, which is equivalent to the command python do_something.py >& /dev/null & [1][2]
python do_something.py > 2>&1 &:
> 2 will redirect stdout to a file named 2
>&1 will redirect stdout to stdout (yes stdout to stdout)
& will fork your process and run in background
So this command is almost equivalent to python do_something.py >2 &: it redirects the output to a file named 2 (e.g. echo 'yes' > 2>&1)
Note: the behavior of >&1 is probably unspecified.
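You can check this in an interactive shell; the word yes ends up in a file literally named 2:
echo 'yes' > 2>&1   # parsed as: echo 'yes' >2 >&1, so stdout goes to a file named 2
cat 2               # prints: yes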
Since you ran your command with &, it is forked and runs in the background, so I'm not aware of any direct way to do it in that case. You can still look in the /proc directory [3] to see which directory your command was run from, though; see the loop sketched below.
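For example, here is a small loop over the matching PIDs; pgrep -f matches against the full command line, and readlink resolves the cwd symlink from [3]:
for pid in $(pgrep -f do_something.py); do
    echo "$pid: $(readlink /proc/$pid/cwd)"
done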
[1]: What is the difference between &> and >& in bash?
[2]: In the shell, what does “ 2>&1 ” mean?
[3]: ls -l /proc/$PROCESSID/cwd
1) stdout (standard output) is redirected to /dev/null, and stderr (error messages) is redirected to wherever stdout points at that moment, which is /dev/null as well, so both are discarded.
1>filename : Redirect stdout to file "filename."
1>>filename: Redirect and append stdout to file "filename."
2>filename : Redirect stderr to file "filename."
2>>filename: Redirect and append stderr to file "filename."
&>filename : Redirect both stdout and stderr to file "filename."
3) Using ps auxww, you will see the full command path both in your terminal window and from shell scripts. From the ps manual:
-w Wide output. Use this option twice for unlimited width.
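For example, to see the full command line of every python process while keeping grep itself out of the results (the [p] bracket trick stops the pattern from matching grep's own command line):
ps auxww | grep '[p]ython'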
Answers:
1, 2. > redirects whatever is printed to stdout as a result of executing the command (in your case python do_something.py) to a file called /dev/null. /dev/null is a kind of black hole: whatever you write to it disappears.
2>&1 redirects the output of stderr (whose fd is 2) to stdout (whose fd is 1).
Refer to I/O redirection for more info about redirections.
Refer to this link for more info about /dev/null
I'm trying to run a script from crontab on my Linux system, but it leaves an empty file. When I run the script from a terminal session, it works fine.
Cron line:
@reboot sleep 1m; /bin/bash /root/start_reader_services
The script "start_reader_services" calls a Python script as below:
/root/java/tag_output >> $TAGS_PATH/tags_$DATE_FILE.log
tag_output basically prints out a series of IDs. The same mechanism used to work when I was sending stdout to my serial port (tag_output > /dev/ttyO0), but now, writing to the file from cron, the file is created but stays empty.
As I mentioned, running start_reader_services, or any piece of it, on the command line works as expected.
What I have done:
- Set bash as the cron shell
- Set the Java environment variables in cron
As requested:
ls -l /root/java/tag_output
-rwxr-xr-x 1 root root 1981 Aug 6 12:06 /root/java/tag_output
First line of tag_output:
#!/usr/bin/python
Any help?
Is it possible that tag_output is writing to stderr instead of stdout? Try redirecting stderr too:
/root/java/tag_output >> $TAGS_PATH/tags_$DATE_FILE.log 2>&1
As an aside, you might also want to quote and use braces with the shell variable expansion; that makes it easier to read and probably a little safer:
/root/java/tag_output >> "${TAGS_PATH}/tags_${DATE_FILE}.log" 2>&1
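A quick way to test that theory interactively: throw stdout away and see whether the IDs still appear on screen; if they do, they are coming from stderr:
/root/java/tag_output > /dev/null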
After initialising and updating the crontab with python-crontab, and before issuing cron.write(), give the command:
cron.__reduce__()[2]['lines'] = list(filter(None, cron.__reduce__()[2]['lines']))
(the list() wrapper matters on Python 3, where filter returns an iterator). Doing this will remove the empty lines being written to the Linux crontab.
I have a problem: I need to run a backup script with no output to the screen at all, but only when it's run from the crontab in Linux.
So if a user opens the script it should load the UI menu, but from the crontab I want to pass an argument so it runs without any output, something like:
07 00 * * * /root/idan/python nw_backup.py -s
s for silent :)
From my search here I only found how to run a single command with the subprocess module.
Thanks!
You can just dump all output (stdout and stderr) to /dev/null.
/root/idan/python nw_backup.py -s > /dev/null 2>&1
2>&1 basically means: dump stderr (2) to the same place you dump stdout (&1).
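If you'd rather keep a record of each run than throw everything away, point the same redirection at a log file instead (the log path here is just an example):
07 00 * * * /root/idan/python nw_backup.py -s >> /root/idan/nw_backup.log 2>&1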
As the title suggests, how do I write a bash script that will execute, for example, 3 different Python programs as separate processes? And am I then able to access each of these processes to see what is being logged to the terminal?
Edit: Thanks again. I forgot to mention that I'm aware of appending &, but I'm not sure how to access what each process outputs to the terminal. For example, I could run all 3 of these programs separately in different tabs and see the output of each.
You can run a job in the background like this:
command &
This allows you to start multiple jobs in a row without having to wait for the previous one to finish.
If you start multiple background jobs like this, they will all share the same stdout (and stderr), which means their output is likely to get interleaved. For example, take the following script:
#!/bin/bash
# countup.sh
for i in $(seq 3); do
    echo "$i"
    sleep 1
done
Start it twice in the background:
./countup.sh &
./countup.sh &
And what you see in your terminal will look something like this:
1
1
2
2
3
3
But could also look like this:
1
2
1
3
2
3
You probably don't want this, because it would be very hard to figure out which output belonged to which job. The solution? Redirect stdout (and optionally stderr) for each job to a separate file. For example
command > file &
will redirect only stdout and
command > file 2>&1 &
will redirect both stdout and stderr for command to file while running command in the background. This page has a good introduction to redirection in Bash. You can view the command's output "live" by tailing the file:
tail -f file
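tail can also follow several files at once, printing a ==> file <== header before each file's output, which keeps the jobs distinguishable:
tail -f file1 file2 file3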
I would recommend running background jobs with nohup or screen, as user2676075 mentioned, to let your jobs keep running after you close your terminal session, e.g.
nohup command1 > file1 2>&1 &
nohup command2 > file2 2>&1 &
nohup command3 > file3 2>&1 &
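Putting it together, a minimal wrapper script might look like this (prog1.py, prog2.py and prog3.py stand in for your three programs):
#!/bin/bash
# start three programs as separate background processes, one log file each
nohup python3 prog1.py > prog1.log 2>&1 &
nohup python3 prog2.py > prog2.log 2>&1 &
nohup python3 prog3.py > prog3.log 2>&1 &
wait   # optional: block here until all three have finished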
Try something like:
command1 2>&1 | tee commandlogs/command1.log ;
command2 2>&1 | tee commandlogs/command2.log ;
command3 2>&1 | tee commandlogs/command3.log
...
Then you can tail the files as the commands run. Remember, you can tail them all by being in the directory and doing a "tail *.log"
Alternatively, you can setup a script to generate a screen for each command with:
screen -S CMD1 -d -m command1 ;
screen -S CMD2 -d -m command2 ;
screen -S CMD3 -d -m command3
...
Then reconnect to them later with screen -ls and screen -r [screen name]
Enjoy
Another option is to use a terminal emulator to run the three processes. You could use xterm (or rxvt etc.) if you are using X.
xterm -e <program1> [arg] ... &
xterm -e <program2> [arg] ... &
xterm -e <program3> [arg] ... &
It depends on what you want. This approach pops up the terminal windows, so you can see the output in real time. You can also combine it with redirection to save the output as well.
I have a python script in /usr/share/myscript.py
I want to execute this script from a cron job such that, if the script produces any errors, they are emailed to a specific user (and root is not notified).
I do not want to over-ride any of the cron settings - other cron jobs should still notify root.
Currently I am using a shell wrapper, which should redirect errors to a log file and then email them to me. The cron job executes this .sh file rather than the Python script directly.
#!/bin/sh
python /usr/share/scripts/myscript.py 2>&1 > /home/me/logs/myscript.log
test -s /home/me/logs/myscript.log && cat /home/me/logs/myscript.log | mail -s "myscript errors" bob@myplace.com
In production, if nothing goes wrong, the script executes correctly and nobody is emailed. However, if there is an error in the execution of the Python script, it is still emailed to the root user by cron.
How should I change the .sh script to suppress this and report to me instead?
This command performs the stderr redirection in the wrong order:
python /usr/share/scripts/myscript.py 2>&1 > /home/me/logs/myscript.log
instead you need to redirect stdout first, and stderr second, like so:
python /usr/share/scripts/myscript.py > /home/me/logs/myscript.log 2>&1
Also, have you appended >/dev/null 2>&1 to the end of the wrapped script call in crontab?
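Putting both fixes together, a corrected wrapper might look like this (same paths as in your script; note that ordinary stdout from the Python script will now also land in the log and trigger an email):
#!/bin/sh
# stdout goes to the log first, then stderr is pointed at the same place,
# so nothing reaches cron and root gets no mail
python /usr/share/scripts/myscript.py > /home/me/logs/myscript.log 2>&1
test -s /home/me/logs/myscript.log && mail -s "myscript errors" bob@myplace.com < /home/me/logs/myscript.log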