Return prompt after accessing nohup.out (bash) - python

I'm running a Python script from bash using nohup. The script is executed via my bashrc as part of a shell function. If I run it like this:
function timer {
  nohup python path/timer.py $1 $2 > path/nohup.out 2>&1 &
  echo 'blah'
}
Everything works and I get my prompt back. However, if instead of echo I call tail to access the end of the nohup output file, like this:
function timer {
  nohup python path/timer.py $1 $2 > path/nohup.out 2>&1 &
  tail -f path/nohup.out
}
my prompt is not returned. I would like to see the contents of nohup.out and get back to the prompt without having to use CTRL-c.
I have followed the advice here, but adding </dev/null yields the same results as above.

You won't get the prompt back, because tail -f watches the file (path/nohup.out) forever, printing new data as it is appended. Use tail -n 10 path/nohup.out instead (plain tail also defaults to the last 10 lines); it prints the end of the file and exits, returning you to the prompt.
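A runnable sketch of the idea, with a short sh one-liner standing in for the asker's python script, and nohup.out written to the current directory (both are assumptions for the sake of a self-contained example):

```shell
# timer sketch: background the long-running command, print the log once, return
timer() {
  # stand-in for: nohup python path/timer.py "$1" "$2" > path/nohup.out 2>&1 &
  nohup sh -c 'echo started; sleep 2; echo done' > nohup.out 2>&1 &
  sleep 1               # give the background job a moment to write something
  tail -n 10 nohup.out  # prints the last lines and exits, unlike tail -f
}
timer
```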

Related

How to run my python script parallely with another Java application on the same Linux box in Gitlab CI?

For one GitLab CI runner
I have a jar file which needs to run continuously on the GitLab Linux box, but since this application runs continuously, the python script on the next line never gets executed. How do I start the jar application and then run the python script alongside it?
.gitlab-ci.yml file:
pwd && ls -l
unzip ZAP_2.8.0_Core.zip && ls -l
bash scan.sh
python3 Report.py
The scan.sh file contains the command java -jar app.jar.
Since this application runs continuously, the 4th line, python3 Report.py, never gets executed.
How do I make both of these run simultaneously without stopping the .jar application?
The immediate solution would probably be:
pwd && ls -l
echo "ls OK"
unzip ZAP_2.8.0_Core.zip && ls -l
echo "unzip + ls OK"
bash scan.sh &
scanpid=$!
echo "started scanpid with pid $scanpid"
ps axuf | grep $scanpid || true
echo "ps + grep OK"
( python3 Report.py ; echo $? > report_status.txt ) || true
echo "report script OK"
kill $scanpid
echo "kill OK"
echo "REPORT STATUS = $(cat report_status.txt)"
test $(cat report_status.txt) -eq 0
Start the java process in the background,
run your python code, remember its return status, and always return true,
kill the background process after running python,
then check the status code of the python script.
Killing may not even be necessary, as I never checked how GitLab CI deals with background processes spawned by its runners.
This is a conservative approach:
- I remember the process id of the bash script so that I can kill it later
- I ensure that the line running the python script always returns a 0 exit code, so that GitLab CI does not stop executing the next lines, but I remember the real status code
- then I kill the bash script
- then I check whether the exit code of the python script was 0 or not, so that GitLab CI can properly determine whether the job succeeded
Another minor comment (not related to your question)
I don't really understand why you write
unzip ZAP_2.8.0_Core.zip && ls -l
instead of
unzip ZAP_2.8.0_Core.zip ; ls -l
If you are worried that unzip might fail, you could just write the commands on separate lines:
unzip ZAP_2.8.0_Core.zip
ls -l
and GitLab CI would abort automatically before executing ls -l.
I also added many echo statements for better debugging and error analysis; you might remove them in your final solution.
To keep the blocking command from holding up the next line, append & to it. That makes it run in the background.
Either do
bash scan.sh &
or add & to the end of the line calling the jar file within scan.sh...
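If you take the second route, scan.sh would end up looking something like this sketch; sleep stands in for the asker's java -jar app.jar so the snippet is self-contained:

```shell
#!/bin/bash
# scan.sh sketch: start the long-running process in the background
sleep 5 &            # stand-in for: java -jar app.jar &
scanpid=$!           # remember its pid so a later step can kill it
echo "scan started with pid $scanpid"
```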

Calling python script from bash stops at command prompt

I am calling a python script:
/bin/sh
python ~/Documents/Projects/Programming/Python/svg/svg2dxf.py $1 0
After running the script, I get a python command prompt ($) and it's only when I type "exit" at the command prompt that the script runs.
What am I doing wrong?
Change the following line:
/bin/sh
with (shebang interpreter directive):
#!/bin/sh
Otherwise, a new shell instance is invoked; until that shell exits, the next line is not executed.
You should remove the leading:
/bin/sh
Your current script does two things:
1) executes a new instance of /bin/sh
==> which gives you the shell $ prompt
2) after that shell exits, executes the python script
Your script should be:
python ~/Documents/Projects/Programming/Python/svg/svg2dxf.py $1 0
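To see the difference yourself, here is a hypothetical two-line demo script; with the shebang, /bin/sh merely interprets the file rather than being launched as an interactive child shell:

```shell
# Create and run a tiny script whose first line is a proper shebang
printf '#!/bin/sh\necho script ran\n' > demo.sh
chmod +x demo.sh
./demo.sh   # runs immediately, no stray $ prompt
```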

Executing output of python from loop in bash

I wrote a bash that has python command included in loop: (part of script)
#!/bin/bash
ARG=( $(echo "${#:3}"))
for (( i=1; i<=$#-3; i++ ))
do
python -c "print('<Command with variables>' * 1)"
done
When I run it, depends on number of my args for example I have this output:
nohup command-a &
nohup command-b &
nohup command-c &
How do I execute the output lines from bash immediately?
Can I tell python command to run them in each iteration? How?
Thank you
You can achieve that by executing the python code in a sub-shell and evaluating the content of that shell afterwards.
eval $(python -c ...)
$() returns a string you can evaluate with eval
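A minimal sketch of that pattern, with a trivial python one-liner standing in for the asker's command-generating code:

```shell
# The python process prints a shell command as text; eval then executes it
cmd=$(python3 -c "print('echo hello from python')")
eval "$cmd"   # runs: echo hello from python
```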
You are asking two questions. I can only answer the first one:
If you want to run commands coming to the standard output, just redirect them to bash:
python -c 'print("ls")' | bash

Have bash script execute multiple programs as separate processes

As the title suggests, how do I write a bash script that will execute, for example, 3 different Python programs as separate processes? And am I then able to access what each of these processes is logging to the terminal?
Edit: Thanks again. I forgot to mention that I'm aware of appending &, but I'm not sure how to access what each process is writing to the terminal. For example, I could run all 3 of these programs separately in different tabs and see what each one outputs.
You can run a job in the background like this:
command &
This allows you to start multiple jobs in a row without having to wait for the previous one to finish.
If you start multiple background jobs like this, they will all share the same stdout (and stderr), which means their output is likely to get interleaved. For example, take the following script:
#!/bin/bash
# countup.sh
for i in $(seq 3); do
  echo "$i"
  sleep 1
done
Start it twice in the background:
./countup.sh &
./countup.sh &
And what you see in your terminal will look something like this:
1
1
2
2
3
3
But could also look like this:
1
2
1
3
2
3
You probably don't want this, because it would be very hard to figure out which output belonged to which job. The solution? Redirect stdout (and optionally stderr) for each job to a separate file. For example
command > file &
will redirect only stdout and
command > file 2>&1 &
will redirect both stdout and stderr for command to file while running command in the background. This page has a good introduction to redirection in Bash. You can view the command's output "live" by tailing the file:
tail -f file
I would recommend running background jobs with nohup or screen as user2676075 mentioned to let your jobs keep running after you close your terminal session, e.g.
nohup command1 > file1 2>&1 &
nohup command2 > file2 2>&1 &
nohup command3 > file3 2>&1 &
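Putting per-job redirection and backgrounding together, here is a small self-contained sketch; the echo commands stand in for real jobs, the file names are illustrative, and wait keeps the script alive until every job finishes:

```shell
#!/bin/bash
# Start three background jobs, each writing to its own log file
for i in 1 2 3; do
  bash -c "echo output from job $i; sleep 1" > "file$i.log" 2>&1 &
done
wait   # block until every background job has finished
```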
Try something like:
command1 2>&1 | tee commandlogs/command1.log ;
command2 2>&1 | tee commandlogs/command2.log ;
command3 2>&1 | tee commandlogs/command3.log
...
Then you can tail the files as the commands run. Remember, you can tail them all by being in the directory and doing a "tail *.log"
Alternatively, you can setup a script to generate a screen for each command with:
screen -S CMD1 -d -m command1 ;
screen -S CMD2 -d -m command2 ;
screen -S CMD3 -d -m command3
...
Then reconnect to them later with screen -ls and screen -r [screen name]
Enjoy
Another option is to use a terminal emulator to run the three processes. You could use xterm (or rxvt etc.) if you are using X.
xterm -e <program1> [arg] ... &
xterm -e <program2> [arg] ... &
xterm -e <program3> [arg] ... &
Depends on what you want. This approach lets you pop up the terminal windows, so you can see the output in real time. You can also combine it with redirection to save the output as well.

Strange behavior redirecting to a file while using runuser

I'm currently executing a python file with runuser and redirecting the output to a file. This is the command:
runuser -l "user" -c "/path/python-script.py parameter1 > /path/file.log &"
This runs the python script correctly but creates an empty log file. If I run it without the redirect:
runuser -l "user" -c "/path/python-script.py parameter1 &"
it runs the python script correctly, and all of the script's output goes to the screen. The output from the python script is produced with print, which writes to stdout.
I don't understand why the script's output is not dumped to the file. File permissions are correct; the log file is created but never filled.
But, if I remove the "parameter1", then the error message reported by the python script is correctly dumped to the log file:
runuser -l "user" -c "/path/python-script.py > /path/file.log &"
The error message is produced with print too, so I don't understand why some messages are dumped and others are not.
Maybe runuser interprets the "parameter1" as a command or something. But, the parameter is correctly passed to the script, as I can see with ps:
/usr/bin/python /path/python-script.py connect
I've tried adding 2>&1 but still don't work.
Any idea ?
I encountered a similar problem in startup scripts, where I needed to log output. In the end I came up with the following:
USR=myuser
PRG=myprog
WKD="/path/to/workdir"
BIN="/path/to/binary"
ARG="--my arguments"
PID="/var/run/myapp/myprog.pid"
su -l $USR -s /bin/bash -c "exec > >( logger -t $PRG ) 2>&1 ; cd $WKD; { $BIN $ARG & }; echo \$! > $PID "
Handy as you can also have PID of the process available. Example writes to syslog, but if it is to write to the file use cat:
LOG="/path/to/file.log"
su -l $USR -s /bin/bash -c "exec > >( cat > $LOG ) 2>&1 ; cd $WKD; { $BIN $ARG & }; echo \$! > $PID "
It starts a new shell and, via exec, ties all of the shell's output to the command inside the process substitution.
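The exec-plus-process-substitution part on its own, reduced to a runnable sketch (out.log is an illustrative path; process substitution requires bash, hence the bash -c wrapper, and the sleep gives cat a moment to flush before the shell exits):

```shell
# Everything the inner shell writes after exec goes through cat into out.log
bash -c 'exec > >( cat > out.log ) 2>&1
         echo "hello from the redirected shell"
         sleep 1'
```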
