Launching scripts from bash and directing outputs - python

I have a question about bash syntax for launching scripts from within a bash script.
My questions are:
I've seen the following syntax:
#!/bin/bash
python do_something.py > /dev/null 2>&1 &
Can you please explain what is redirected to /dev/null, and what 2>&1 means if /dev/null has already been mentioned before it?
In addition if I have a line defined like:
python do_something.py > 2>&1 &
how is that different?
If I have the same python file in many paths, how can I differentiate between the processes after running ps -ef | grep python?
When I do so, I get a list of processes that are all called do_something.py; it would be nice if I could see the full execution path for each PID. How can I do that?
NOTE: The python file launched is writing its own log files.

Ok, disclaimer: I don't have access to a bash shell right now, so I might be wrong.
Let's break your command: python do_something.py > /dev/null 2>&1 &
python do_something.py will run your command
> /dev/null will redirect stdout to /dev/null
2>&1 will redirect stderr to stdout
& will fork your process and run in background
So your command discards stdout/stderr and runs in the background, which
is equivalent to the command python do_something.py >& /dev/null & [1][2]
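The order of the redirections matters here; a quick demo (any command that writes to stderr will do, ls with a bogus path is used as a stand-in):
ls /nonexistent > /dev/null 2>&1   # silent: stderr follows stdout into /dev/null
ls /nonexistent 2>&1 > /dev/null   # error still shown: stderr was duplicated
                                   # from stdout before stdout was redirected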
python do_something.py > 2>&1 &:
> 2 will redirect stdout to a file named 2
>&1 will redirect stdout to stdout (yes stdout to stdout)
& will fork your process and run in background
So this command is almost equivalent to python do_something.py >2 &;
it will redirect the output to a file named 2 (e.g. echo 'yes' > 2>&1).
Note: the behavior of >&1 is probably unspecified.
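A quick way to see the file-named-2 behaviour for yourself:
echo 'yes' > 2>&1   # creates (or truncates) a file literally named 2
cat 2               # prints: yes
rm 2                # clean up the oddly named file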
Since you have run your command using &, it will be forked and run in the
background, so I'm not aware of a way to do it in that case. You can
still look in the /proc directory [3] to see from which directory your
command was run, though.
[1]: What is the difference between &> and >& in bash?
[2]: In the shell, what does “ 2>&1 ” mean?
[3]: ls -l /proc/$PROCESSID/cwd
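For example, a sketch assuming a Linux /proc filesystem and that pgrep is installed:
# print each matching PID together with the directory it was launched from
for pid in $(pgrep -f do_something.py); do
    echo "$pid -> $(readlink /proc/$pid/cwd)"
done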

1) stdout (Standard Output) is redirected to /dev/null, and stderr (error messages) is redirected to wherever stdout currently points, i.e. also to /dev/null (not the console), because the 2>&1 comes after the > /dev/null.
1>filename : Redirect stdout to file "filename."
1>>filename: Redirect and append stdout to file "filename."
2>filename : Redirect stderr to file "filename."
2>>filename: Redirect and append stderr to file "filename."
&>filename : Redirect both stdout and stderr to file "filename."
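A quick demo of the operators above (writes three small log files to the current directory; /nonexistent is just a stand-in to force an error):
ls /etc /nonexistent 1>out.log 2>err.log   # listing to out.log, error to err.log
ls /etc /nonexistent 2>>err.log            # append the error; stdout still on screen
ls /etc /nonexistent &>both.log            # both streams into both.log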
3) Using the ps auxww flags, you will see the full command line of each process, both in your terminal window and from shell scripts. From the ps manual:
-w Wide output. Use this option twice for unlimited width.
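For example:
ps auxww | grep '[p]ython'   # full, unwrapped command lines; the [p] trick
                             # keeps the grep process itself out of the results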

Answers:
1, 2. > redirects whatever is printed on stdout as a result of executing the command (in your case python do_something.py) to a file called /dev/null. /dev/null is a kind of black hole: whatever you write to it disappears.
2>&1 redirects the output of stderr (whose fd is 2) to stdout (whose fd is 1).
Refer to I/O redirection for more info about redirections.
Refer to this link for more info about /dev/null.
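A tiny illustration:
echo 'discarded' > /dev/null   # the text vanishes; nothing is stored anywhere
ls -l /dev/null                # a character device, not a regular file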

Related

Running background process with kubectl exec

I am trying to execute a Python program as a background process inside a container with kubectl as below (kubectl issued on local machine):
kubectl exec -it <container_id> -- bash -c "cd some-dir && (python xxx.py --arg1 abc &)"
When I log in to the container and check ps -ef, I do not see this process running. Also, there is no output from the kubectl command itself.
Is the kubectl command issued correctly?
Is there a better way to achieve the same?
How can I see the output/logs printed by the background process?
If I need to stop this background process after some duration, what is the best way to do this?
The nohup Wikipedia page can help; you need to redirect all three IO streams (stdout, stdin and stderr) - an example with yes:
kubectl exec pod -- bash -c "yes > /dev/null 2> /dev/null &"
nohup is not required in the above case because I did not allocate a pseudo terminal (no -t flag) and the shell was not interactive (no -i flag) so no HUP signal is sent to the yes process on session termination. See this answer for more details.
Redirecting /dev/null to stdin is not required in the above case since stdin already refers to /dev/null (you can see this by running ls -l /proc/YES_PID/fd in another shell).
To see the output you can instead redirect stdout to a file.
To stop the process you'd need to identity the PID of the process you want to stop (pgrep could be useful for this purpose) and send a fatal signal to it (kill PID for example).
If you want to stop the process after a fixed duration, timeout might be a better option.
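A sketch of the identify-and-stop idea, reusing the pod and script names from the question (pgrep being available inside the container is an assumption):
# identify the PID from outside the container, then signal it
PID=$(kubectl exec pod -- pgrep -f xxx.py)
kubectl exec pod -- kill "$PID"
# or cap the runtime up front (here: one hour), in the same style as the yes example
kubectl exec pod -- bash -c "timeout 3600 python xxx.py --arg1 abc > /dev/null 2> /dev/null &"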
Actually, the best way to do this kind of thing is to add an entrypoint to your container and execute the commands there.
Like:
entrypoint.sh:
#!/bin/bash
set -e
# launch the python program in the background
cd some-dir && (python xxx.py --arg1 abc &)
./somethingelse.sh
# hand control over to whatever command the container was started with
exec "$@"
This way you wouldn't need to manually go inside every single container and run the command.

Python3 http.server : save log to a file

I use Python 3.6 to write a simple HTTP server to redirect all requests.
The file I have written can be found here.
I can see output in both the Win 8.1 CMD and Ubuntu 16.04.3 bash.
However, whatever method I try from those below, it doesn't work; the log is not saved to the file.
nohup python3 ./filename.py > ./logfile 2>&1 &
python3 ./filename.py > ./logfile 2>&1 &
setsid ./filename.py > ./logfile 2>&1 &
I tried to use:
import sys
logfile = open('logfile.log','w')
sys.stdout = logfile
sys.stdin = logfile
sys.stderr = logfile
It didn't work.
By default, Python's stdout and stderr are buffered. As other responders have noted, if your log files are empty then (assuming your logging is correct) the output has not been flushed.
The link to the script is no longer valid, but you can try running your script either as python3 -u filename.py or the equivalent PYTHONUNBUFFERED=x python3 filename.py. This causes the stdout and stderr streams to be unbuffered.
A full example that uses the standard library's http.server module to serve files from the current directory:
PYTHONUNBUFFERED=x python3 -m http.server &> http.server.log & echo $! > http.server.pid
All output (stdout & stderr) is redirected to http.server.log, which can be tailed, and the process ID of the server is written to http.server.pid so that you can kill the process by kill $(cat http.server.pid).
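Typical usage afterwards:
tail -f http.server.log       # follow the log as requests come in
kill $(cat http.server.pid)   # stop the server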
I've tried your code on Ubuntu 16.04 and it worked like a charm.
import sys
logfile = open('logfile.log','w')
sys.stdout = logfile
sys.stdin = logfile
sys.stderr = logfile

run python script with and without output

I have a problem: I need to run a backup script with no output to the screen at all, but only when it is run from the crontab in Linux.
So if a user opens the script, it will load the UI menu,
but from the crontab I want to add an argument so it will run without any output, something like:
07 00 * * * /root/idan/python nw_backup.py -s
s for silent :)
From my search here I found how to run only one command with the subprocess module.
Thanks!
You can just dump all output (stdout and stderr) to /dev/null.
/root/idan/python nw_backup.py -s > /dev/null 2>&1
2>&1 basically means: dump stderr (fd 2) to the same place you dump stdout (&1).
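Put together, the crontab entry from the question would become:
07 00 * * * /root/idan/python nw_backup.py -s > /dev/null 2>&1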

Direct output of a niced command to a text file

I'm trying to run a python script with the nice level set.
nice -n 5 python3 blah.py
runs as expected and sends text output to the screen. However, I would like to pipe the output to a text file and run this all in the background so I can go and check on the progress remotely.
However,
nice -n 5 python3 blah.py > log.txt &
creates the log file log.txt but doesn't write anything to it, so I'm not sure where the standard output is being sent or how to direct it to my text file.
I eventually solved this using the command
nice -n 5 python3 -u blah.py >log.txt &
-u forces the binary layer of the stdout and stderr streams to be unbuffered. This allows the output of the python script to be written to the text file while the process is running.
I'm guessing you're running the command via ssh and want to log out between running and checking the log. To do this run:
nohup nice -n 5 python3 blah.py > log.txt &
This will prevent the program from being killed on logout. As well, nohup redirects stderr to stdout (when stderr is a terminal), which might also be part of what's causing an empty log.txt file.
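Combining both answers gives a sketch that is unbuffered, survives logout, and captures stderr as well:
nohup nice -n 5 python3 -u blah.py > log.txt 2>&1 &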

how do i redirect the output of nosetests to a textfile?

I've tried "nosetests p1.py > text.txt" and it is not working.
What is the proper way to pipe this console output?
Try:
nosetests -s p1.py > text.txt 2>&1
Last (obvious) tip: if you are not in the test file's directory, add the path before the .py file.
I want to add more detail here.
On my version (v1.3.7), nose logs on stderr instead of the expected stdout. Why nose logs on stderr instead of stdout is beyond me. So the solution is to redirect the stderr stream to your file; the redirect character sends stdout by default. The --nocapture (-s) flag is used to stop nose from capturing your own print statements.
$ nosetests -s -v test/ > stdout.log 2> stderr.log
One thing to note: although stderr seems to be flushed on each output, stdout does not get flushed, so if you are tailing the stdout.log file, you will not see output until nose or the OS decides to flush. If you think it's not working, try exiting the test runner, which will cause stdout to flush.
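For example, to watch both streams while the run is in progress:
tail -f stdout.log stderr.log   # stderr.log updates promptly; stdout.log may lag until Python flushes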
See this answer on redirecting linux streams.
The -s parameter stops nose from capturing stdout.
