As the title suggests, how do I write a bash script that will execute, for example, 3 different Python programs as separate processes? And am I then able to access each of these processes to see what is being logged to the terminal?
Edit: Thanks again. I forgot to mention that I'm aware of appending &, but I'm not sure how to access what each process outputs to the terminal. For example, I could run all 3 of these programs separately in different tabs and see what each one outputs.
You can run a job in the background like this:
command &
This allows you to start multiple jobs in a row without having to wait for the previous one to finish.
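For the case in the question, a minimal sketch might look like this (the script names prog1.py, prog2.py and prog3.py are placeholders):
python3 prog1.py &
python3 prog2.py &
python3 prog3.py &
wait   # optional: block here until all three background jobs have finished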
If you start multiple background jobs like this, they will all share the same stdout (and stderr), which means their output is likely to get interleaved. For example, take the following script:
#!/bin/bash
# countup.sh
for i in `seq 3`; do
    echo $i
    sleep 1
done
Start it twice in the background:
./countup.sh &
./countup.sh &
And what you see in your terminal will look something like this:
1
1
2
2
3
3
But could also look like this:
1
2
1
3
2
3
You probably don't want this, because it would be very hard to figure out which output belonged to which job. The solution? Redirect stdout (and optionally stderr) for each job to a separate file. For example
command > file &
will redirect only stdout and
command > file 2>&1 &
will redirect both stdout and stderr for command to file while running command in the background. This page has a good introduction to redirection in Bash. You can view the command's output "live" by tailing the file:
tail -f file
I would recommend running background jobs with nohup or screen, as user2676075 mentioned, to let your jobs keep running after you close your terminal session, e.g.
nohup command1 > file1 2>&1 &
nohup command2 > file2 2>&1 &
nohup command3 > file3 2>&1 &
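Putting it together for the original question, a launcher script might look like the sketch below (the script names myprog1.py etc. and the log file names are placeholders):
#!/bin/bash
# start each Python program in the background with its own log file
nohup python3 myprog1.py > log1.txt 2>&1 &
nohup python3 myprog2.py > log2.txt 2>&1 &
nohup python3 myprog3.py > log3.txt 2>&1 &
# follow all three logs at once; tail labels each chunk of output with its file name
tail -f log1.txt log2.txt log3.txt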
Try something like:
command1 2>&1 | tee commandlogs/command1.log &
command2 2>&1 | tee commandlogs/command2.log &
command3 2>&1 | tee commandlogs/command3.log &
...
Then you can tail the files as the commands run. Remember, you can tail them all at once from inside the directory with tail *.log.
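For example, assuming the commandlogs directory from above:
cd commandlogs && tail -f *.log   # follows every log; tail labels each chunk of output with its file name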
Alternatively, you can set up a script that starts a screen session for each command:
screen -S CMD1 -d -m command1 ;
screen -S CMD2 -d -m command2 ;
screen -S CMD3 -d -m command3
...
Then reconnect to them later with screen -ls and screen -r [screen name]
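If you also want the output saved to files, reasonably recent versions of GNU screen can log each session for you; a sketch (the -Logfile option is not available in older releases):
screen -dmS CMD1 -L -Logfile commandlogs/command1.log command1
screen -dmS CMD2 -L -Logfile commandlogs/command2.log command2
screen -dmS CMD3 -L -Logfile commandlogs/command3.log command3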
Enjoy
Another option is to use a terminal emulator to run the three processes. You could use xterm (or rxvt etc.) if you are using X.
xterm -e <program1> [arg] ... &
xterm -e <program2> [arg] ... &
xterm -e <program3> [arg] ... &
It depends on what you want. This approach pops up a terminal window for each program, so you can see the output in real time. You can also combine it with redirection to save the output.
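For example, a sketch that shows each program's output in its own window and also saves a copy to a log file (program and log names are placeholders):
xterm -e bash -c 'python3 prog1.py 2>&1 | tee prog1.log' &
xterm -e bash -c 'python3 prog2.py 2>&1 | tee prog2.log' &
xterm -e bash -c 'python3 prog3.py 2>&1 | tee prog3.log' &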
Related
For one GitLab CI runner
I have a jar file which needs to run continuously on the GitLab Linux box, but since that application runs continuously, the Python script on the next line never gets executed. How can I start the jar application and then also execute the Python script?
.gitlab-ci.yml file:
pwd && ls -l
unzip ZAP_2.8.0_Core.zip && ls -l
bash scan.sh
python3 Report.py
The scan.sh file contains the line java -jar app.jar.
Since this application runs continuously, the fourth line, python3 Report.py, never gets executed.
How do I make both of these run simultaneously without the .jar application stopping?
The immediate solution would probably be:
pwd && ls -l
echo "ls OK"
unzip ZAP_2.8.0_Core.zip && ls -l
echo "unzip + ls OK"
bash scan.sh &
scanpid=$!
echo "started scanpid with pid $scanpid"]
ps axuf | grep $scanpid || true
echo "ps + grep OK"
( python3 Report.py ; echo $? > report_status.txt ) || true
echo "report script OK"
kill $scanpid
echo "kill OK"
echo "REPORT STATUS = $(cat report_status.txt)"
test $(cat report_status.txt) -eq 0
- Start the java process in the background.
- Run your Python code, remember its return status, and always return true.
- Kill the background process after running Python.
- Check the status code of the Python script.
Perhaps this is not necessary, as I never checked how GitLab CI deals with background processes spawned by its runners.
I take a conservative approach here:
- I remember the process id of the bash script so that I can kill it later.
- I ensure that the line running the Python script always returns a 0 exit code, so that GitLab CI does not stop executing the next lines, but I remember the status code.
- Then I kill the bash script.
- Then I check whether the exit code of the Python script was 0 or not, so that GitLab CI can properly determine whether the job succeeded.
Another minor comment (not related to your question)
I don't really understand why you write
unzip ZAP_2.8.0_Core.zip && ls -l
instead of
unzip ZAP_2.8.0_Core.zip ; ls -l
If you expect the unzip command to fail, you could just write
unzip ZAP_2.8.0_Core.zip
ls -l
and GitLab CI would abort automatically before executing ls -l.
I also added many echo statements for easier debugging and error analysis; you might remove them in your final solution.
To keep the jar from blocking the Python script, you can add & to the end of the line that is blocking. That will make it run in the background.
Either do
bash scan.sh &, or add & to the end of the line calling the jar file within scan.sh...
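A minimal sketch of the second variant (assuming scan.sh only starts the jar):
#!/bin/bash
# scan.sh
java -jar app.jar &   # start the jar in the background so scan.sh returns immediately
Whether the background process survives for the rest of the job depends on how the CI runner handles it, as the earlier answer notes.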
I have a python program like this:
import signal, time

def cleanup(*_):
    print("cleanup")
    # do stuff ...
    exit(1)

# trap ctrl+c and hide the traceback message
signal.signal(signal.SIGINT, cleanup)

time.sleep(20)
I run the program through a script:
#!/bin/bash
ARG1="$1"

trap cleanup INT TERM EXIT

cleanup() {
    echo "\ncleaning up..."
    killall -9 python >/dev/null 2>&1
    killall -9 python3 >/dev/null 2>&1
    # some more killing here ...
}

mystart() {
    echo "starting..."
    export PYTHONPATH=$(pwd)
    python3 -u myfolder/myfile.py $ARG1 2>&1 | tee "myfolder/log.txt"
}

mystart &&
cleanup
My problem is that the cleanup message appears neither on the terminal nor in the log file.
However, if I run the program without redirecting the output, it works fine.
Pressing ^C sends SIGINT to the entire foreground process group (the current pipeline or shell "job"), killing tee before it can write the output from your handler anywhere. If you don't want this to happen, put tee in the background so it isn't part of the process group getting a SIGINT. For example, with bash 4.1 or newer, you can start a process substitution with an automatically-allocated file descriptor providing a handle:
#!/usr/bin/env bash
# ^^^^ NOT /bin/sh; >(...) is a bashism, likewise automatic FD allocation.
exec {log_fd}> >(exec tee log.txt) # run this first as a separate command
python3 -u myfile >&"$log_fd" 2>&1 # then here, ctrl+c will only impact Python...
exec {log_fd}>&- # here we close the file & thus the copy of tee.
Of course, if you put those three commands in a script, that entire script becomes your foreground process, so different techniques are called for. Thus:
python3 -u myfile > >(trap '' INT; exec tee log.txt) 2>&1
The trap '' INT in the process substitution is what immunizes tee against that SIGINT, although using trap to immunize a command against SIGINT comes with obvious risks.
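Applied to the script in the question, that last form would look roughly like this (a sketch, reusing the asker's paths):
mystart() {
    echo "starting..."
    export PYTHONPATH=$(pwd)
    # tee runs inside a process substitution that ignores SIGINT,
    # so ctrl+c only interrupts the Python process
    python3 -u myfolder/myfile.py "$ARG1" > >(trap '' INT; exec tee "myfolder/log.txt") 2>&1
}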
Simply use the -i or --ignore-interrupts option of tee.
Documentation says:
-i, --ignore-interrupts
ignore interrupt signals
https://helpmanual.io/man1/tee/
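Applied to the script in the question, that would be (a sketch):
python3 -u myfolder/myfile.py "$ARG1" 2>&1 | tee -i "myfolder/log.txt"   # tee ignores SIGINT and keeps writing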
I am trying to execute a Python program as a background process inside a container with kubectl as below (kubectl issued on local machine):
kubectl exec -it <container_id> -- bash -c "cd some-dir && (python xxx.py --arg1 abc &)"
When I log in to the container and check ps -ef, I do not see this process running. Also, there is no output from the kubectl command itself.
Is the kubectl command issued correctly?
Is there a better way to achieve the same?
How can I see the output/logs printed by the background process?
If I need to stop this background process after some duration, what is the best way to do this?
The nohup Wikipedia page can help; you need to redirect all three IO streams (stdout, stdin and stderr) - an example with yes:
kubectl exec pod -- bash -c "yes > /dev/null 2> /dev/null &"
nohup is not required in the above case because I did not allocate a pseudo terminal (no -t flag) and the shell was not interactive (no -i flag) so no HUP signal is sent to the yes process on session termination. See this answer for more details.
Redirecting /dev/null to stdin is not required in the above case since stdin already refers to /dev/null (you can see this by running ls -l /proc/YES_PID/fd in another shell).
To see the output you can instead redirect stdout to a file.
To stop the process you'd need to identify the PID of the process you want to stop (pgrep could be useful for this purpose) and send a fatal signal to it (kill PID for example).
If you want to stop the process after a fixed duration, timeout might be a better option.
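A sketch for the example from the question (the pod name and paths are placeholders; the output goes to a file inside the container):
# start the script in the background and capture its output in a file inside the container
kubectl exec <pod> -- bash -c "cd some-dir && python xxx.py --arg1 abc > xxx.log 2>&1 &"
# follow the log from the local machine
kubectl exec <pod> -- tail -f some-dir/xxx.log
# stop the process later
kubectl exec <pod> -- pkill -f "python xxx.py"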
Actually, the best way to do this kind of thing is to add an entrypoint to your container and execute the commands there.
Like:
entrypoint.sh:
#!/bin/bash
set -e
# start the Python program in the background
cd some-dir && (python xxx.py --arg1 abc &)
./somethingelse.sh
# hand control over to whatever command the container was asked to run
exec "$@"
That way you wouldn't need to go into every single container manually and run the command.
I'm running a Python script from bash using nohup. The script is executed via my bashrc as part of a shell function. If I run it like this:
function timer {
    nohup python path/timer.py $1 $2 > path/nohup.out 2>&1 &
    echo 'blah'
}
Everything works and I get my prompt back. However, if instead of echo I call tail to access the end of the nohup output file, like this:
function timer {
    nohup python path/timer.py $1 $2 > path/nohup.out 2>&1 &
    tail -f path/nohup.out
}
my prompt is not returned. I would like to see the contents of nohup.out and get back to the prompt without having to use CTRL-c.
I have followed the advice here, but adding </dev/null yields the same results as above.
You won't get the prompt back, because tail -f keeps watching the file (path/nohup.out) and prints data appended to it as the file grows. You can use tail -n instead to print the last 10 lines of path/nohup.out and return immediately.
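A sketch of the function with that change (the sleep just gives the background job a moment to produce some output first):
function timer {
    nohup python path/timer.py "$1" "$2" > path/nohup.out 2>&1 &
    sleep 1                       # let the background job write its first lines
    tail -n 20 path/nohup.out     # print the last lines and return to the prompt
}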
Stating the problem in a simplified form:
I'm ssh'ing to two servers using two bash terminals and running programs on the servers whose outputs I need to continuously view. Server1's output appears on terminal1 and Server2's output on terminal2.
Is there a way to run a script which is aware of how many terminals are open, and be able to cycle through them and execute bash commands on them?
Pseudocode:
open terminal1
run program1
open terminal2
run program2
switch to terminal1
run program3 on terminal1
Looked at the man page for xterm, but there was no option to switch between terminals.
The closest I could get was this and this, but neither helped.
In [5]: import subprocess
In [6]: import shlex
In [7]: subprocess.Popen(shlex.split('gnome-terminal -x bash -c "ls; read -n1"'))
Out[7]: <subprocess.Popen object at 0x9480a2c>
screen
An alternative to screen would be tmux. Once you have split your window into the panes you need, you can send commands to either one from a separate terminal, something like:
tmux send-keys -t sessionname:0.0 "ls -al" "Enter"
tmux send-keys -t sessionname:0.1 "ls -al" "Enter"
The -t option references "sessionname":"window number"."pane number". I believe you can do a similar thing with screen but I've never used it.
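For completeness, a sketch of setting up such a session before sending keys (session and server names are placeholders):
tmux new-session -d -s sessionname          # create a detached session (window 0, pane 0)
tmux split-window -h -t sessionname:0       # add a second pane to window 0
tmux send-keys -t sessionname:0.0 "ssh server1" Enter
tmux send-keys -t sessionname:0.1 "ssh server2" Enter
tmux attach -t sessionname                  # attach to watch both panes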
Another option you might consider, if having two separate screens is not highly pertinent, is the python utility fabric. You can script commands to multiple servers and fetch results.
Creating a bash script that runs screen was the solution for me in a similar case.
You can use screen to create a screen session and, inside it, create multiple numbered windows and execute commands in them.
I am running a script on a cluster of 8 computers, so I ssh into each of them and run htop to check whether anyone is using them.
The flag -S names a screen session, -p selects a window in that session by number, and -X stuff types a command into it. Note that in order to run a command, the closing " must go on a new line to simulate a carriage return (Enter).
Here is the script:
#!/bin/bash
screen -d -m -S dracos
# window 0 is created by default; the closing " goes on a new line to simulate pressing Enter
screen -S dracos -p 0 -X stuff "ssh draco1
"
screen -S dracos -p 0 -X stuff "htop
"
for n in {2..8}; do
    # create a new window using the `screen` command
    screen -S dracos -X screen $n
    # ssh into each draco
    screen -S dracos -p $n -X stuff "ssh draco$n
"
    # run htop on each draco
    screen -S dracos -p $n -X stuff "htop
"
    # run any additional command you need in each window
    screen -S dracos -p $n -X stuff "<your_new_command_here>
"
done
If you want to run the commands in a different order, you can put the line below after the for loop instead:
screen -S dracos -p $n -X stuff "<your_new_command_here>
"