I am executing a python script from a bash script as follows:
python -O myscript.pyo &
After launching the python script I need to press "enter" manually to get the prompt back.
Is there a way to avoid this manual intervention?
Thanks in advance!
pipe a blank input to it:
echo "" | python -O myscript.pyo
you might want to create a bash alias to save keystrokes: alias run_myscript="echo '' | python -O myscript.pyo"
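For instance, the launcher script could look like this sketch (assuming myscript.pyo waits for a single line on stdin before continuing):
#!/bin/bash
# launch.sh -- hypothetical wrapper; feeds a blank line so no manual "enter" is needed
echo "" | python -O myscript.pyo &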
placing wait after the line to run the process in background seems to work.
Source:
http://www.iitk.ac.in/LDP/LDP/abs/html/abs-guide.html#WAITHANG
Example given:
#!/bin/bash
# test.sh
ls -l &
echo "Done."
wait
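Applied to the command from the question, the same pattern would be (a sketch):
#!/bin/bash
# run the python script in the background, then wait,
# which blocks until the background job finishes
python -O myscript.pyo &
wait
echo "Done."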
Many thanks
For one GitLab CI runner, I have a jar file which needs to be continuously running on the GitLab Linux box, but since this application runs continuously, the python script on the next line never gets executed. How can I run the jar application and then execute the python script as well, one after the other?
.gitlab-ci.yml file:
pwd && ls -l
unzip ZAP_2.8.0_Core.zip && ls -l
bash scan.sh
python3 Report.py
The scan.sh file contains the command java -jar app.jar.
Since this application runs continuously, the python3 Report.py line never gets executed.
How do I make both of these run simultaneously, without the .jar application stopping?
The immediate solution would probably be:
pwd && ls -l
echo "ls OK"
unzip ZAP_2.8.0_Core.zip && ls -l
echo "unzip + ls OK"
bash scan.sh &
scanpid=$!
echo "started scanpid with pid $scanpid"]
ps axuf | grep $scanpid || true
echo "ps + grep OK"
( python3 Report.py ; echo $? > report_status.txt ) || true
echo "report script OK"
kill $scanpid
echo "kill OK"
echo "REPORT STATUS = $(cat report_status.txt)"
test $(cat report_status.txt) -eq 0
- Start the java process in the background.
- Run your python code, remember its return status, and always return true.
- Kill the background process after running python.
- Check the status code of the python script.
Perhaps this is not necessary, as I never checked how GitLab CI deals with background processes spawned by its runners.
I take a conservative approach here:
- I remember the process id of the bash script, so that I can kill it later.
- I ensure that the line running the python script always returns a 0 exit code, so that GitLab CI does not stop executing the next lines, but I remember the status code.
- Then I kill the bash script.
- Then I check whether the exit code of the python script was 0 or not, so that GitLab CI can properly determine whether the runner executed successfully. Put together, the job could look like the sketch below.
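A sketch of the relevant part of the .gitlab-ci.yml, with the debugging echos removed (the job name is made up; this assumes all script lines run in the same shell, as they do with the usual shell-based runners):
scan_and_report:
  script:
    - pwd && ls -l
    - unzip ZAP_2.8.0_Core.zip && ls -l
    - bash scan.sh &
    - scanpid=$!
    - ( python3 Report.py ; echo $? > report_status.txt ) || true
    - kill $scanpid
    - test $(cat report_status.txt) -eq 0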
Another minor comment (not related to your question)
I don't really understand why you write
unzip ZAP_2.8.0_Core.zip && ls -l
instead of
unzip ZAP_2.8.0_Core.zip ; ls -l
If you expect that the unzip command might fail, you could just write
unzip ZAP_2.8.0_Core.zip
ls -l
and GitLab CI would abort automatically before executing ls -l.
I also added many echo statements for easier debugging and error analysis; you might remove them in your final solution.
To run the two scripts one after the other, you can add & to the end of the line that is blocking. That will make it run in the background.
Either do
bash scan.sh &
or add & to the end of the line calling the jar file within scan.sh, as in the sketch below...
I am using a python script to restrict the commands usage using the command argument in the authorized_keys file.
command:
ssh host-name bash --login -c 'exec $0 "$@"' mkdir -p hello
My script performs the required actions to restrict the commands. After filtering, the python script does sys.exit(1) on error and sys.exit(0) on success. After it returns, the command at the end of the above ssh invocation is not executed. Is there something else I need to send from the python script to the SSH daemon?
The command modifier in the authorized_keys is not (only) used to validate the users command, but that command is run instead of the command provided by the user. This means calling sys.exit(0) from there prevents running the user-provided command.
In that script, after you validate the command, you need to run it too!
I think changing it to
ssh host-name bash --login -c 'exec $0 "$@" && mkdir -p hello'
should do the trick; otherwise bash assumes that only the part in the single quotes is the command to execute.
If the second part should be executed even if the first part fails, replace the && with ;
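The same idea as a bash sketch (the filter in the question is Python, but the structure is identical; the wrapper name and the allowed command are made up for illustration):
#!/bin/bash
# restrict.sh -- hypothetical forced command, set via
# command="/usr/local/bin/restrict.sh" in authorized_keys;
# sshd puts the user's original command into SSH_ORIGINAL_COMMAND
case "$SSH_ORIGINAL_COMMAND" in
    "mkdir -p hello")
        exec $SSH_ORIGINAL_COMMAND   # validation passed: actually run it
        ;;
    *)
        echo "command rejected" >&2
        exit 1                       # validation failed: nothing is run
        ;;
esac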
I am building an interactive installer using a nifty command line:
curl -L http://install.example.com | bash
The bash script then rapidly delegates to a python script:
# file: install.sh
[...]
echo "-=- Welcome -=-"
[...]
/usr/bin/env python3 deploy_p3k.py
And the python script itself prompts the user for input:
# file: deploy_py3k.py
[...]
input('====> Confirm or enter installation directory [/srv/vhosts/project]: ')
[...]
input('====> Confirm installation [y/n]: ')
[...]
PROBLEM: Because the python script is run from a bash script that is itself piped from curl, when the prompt comes up it is automatically "skipped" and everything ends like so:
$ curl -L http://install.example.com | bash
-=- Welcome ! -=-
We have detected you have python3 installed.
====> Confirm or enter installation directory [/srv/vhosts/project]: ====> Confirm installation [y/n]: Installation aborted.
As you can see, the script doesn't wait for user input, because the pipe ties its input to the curl output. Thus, we have the following problem:
curl [STDOUT] => [STDIN] bash (which executes the python script)
= the [STDIN] of the python script is the [STDOUT] of curl (which ends with an EOF)!
How can I keep this very useful and short command line (curl -L http://install.example.com | bash) and still be able to prompt the user for input? I should somehow detach the stdin of the python script from curl, but I didn't find how to do it.
Thanks very much for your help!
Things I have also tried:
Starting the python script in a subshell: $(/usr/bin/env python3 deploy.py)
You can always redirect standard input from the controlling tty, assuming there is one:
/usr/bin/env python3 deploy_p3k.py < /dev/tty
or, since standard output is usually still connected to the terminal, duplicate it onto stdin:
/usr/bin/env python3 deploy_p3k.py <&1
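Inside install.sh, the /dev/tty variant could look like this sketch, with a guard in case there is no terminal at all:
# file: install.sh (sketch)
if [ -t 1 ]; then
    # stdout is a terminal, so /dev/tty should be readable for prompts
    /usr/bin/env python3 deploy_p3k.py < /dev/tty
else
    echo "No terminal available, cannot prompt." >&2
    exit 1
fi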
I wrote a bash script that has a python command inside a loop (part of the script):
#!/bin/bash
ARG=( $(echo "${@:3}") )
for (( i=1; i<=$#-3; i++ ))
do
python -c "print('<Command with variables>' * 1)"
done
When I run it, depending on the number of my args, I get for example this output:
nohup command-a &
nohup command-b &
nohup command-c &
How do I execute the output lines from bash immediately?
Can I tell the python command to run them in each iteration? How?
Thank you
You can achieve that by executing the python code in a command substitution and evaluating its output afterwards:
eval $(python -c ...)
$(...) returns a string which you can then evaluate with eval.
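Applied to the loop from the question, the sketch would be (assuming the python one-liner prints exactly one complete shell command per iteration; the placeholder is kept from the question):
#!/bin/bash
ARG=( $(echo "${@:3}") )
for (( i=1; i<=$#-3; i++ ))
do
    # run the generated command right away instead of only printing it
    eval "$(python -c "print('<Command with variables>' * 1)")"
done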
You are asking two questions. I can only answer the first one:
If you want to run commands coming to the standard output, just redirect them to bash:
python -c 'print("ls")' | bash
I'm currently executing a python file with runuser and redirecting the output to a file. This is the command:
runuser -l "user" -c "/path/python-script.py parameter1 > /path/file.log &"
This runs the python script correctly but creates an empty log file. If I run it without the redirect:
runuser -l "user" -c "/path/python-script.py parameter1 &"
it runs the python script correctly and all the output from the python script flows to the screen. The output from the python script is produced with print, which writes to stdout.
I don't understand why the output from the python script is not dumped to the file. File permissions are correct. The log file is created, but not filled.
But if I remove "parameter1", then the error message reported by the python script is correctly dumped to the log file:
runuser -l "user" -c "/path/python-script.py > /path/file.log &"
The error message is produced with print too, so I don't understand why one message is dumped and the others are not.
Maybe runuser interprets "parameter1" as a command or something. But the parameter is correctly passed to the script, as I can see with ps:
/usr/bin/python /path/python-script.py connect
I've tried adding 2>&1 but it still doesn't work.
Any idea?
I encountered a similar problem in startup scripts, where I needed to log output. In the end I came up with the following:
USR=myuser
PRG=myprog
WKD="/path/to/workdir"
BIN="/path/to/binary"
ARG="--my arguments"
PID="/var/run/myapp/myprog.pid"
su -l $USR -s /bin/bash -c "exec > >( logger -t $PRG ) 2>&1 ; cd $WKD; { $BIN $ARG & }; echo \$! > $PID "
Handy, as you also have the PID of the process available. The example writes to syslog, but if it should write to a file instead, use cat:
LOG="/path/to/file.log"
su -l $USR -s /bin/bash -c "exec > >( cat > $LOG ) 2>&1 ; cd $WKD; { $BIN $ARG & }; echo \$! > $PID "
It starts a new shell and ties all output to the command inside the exec.
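Adapted back to the original command from the question, the same pattern would be roughly (a sketch; it forces bash via -s because the >( ... ) process substitution is a bashism, and it drops the PID bookkeeping):
LOG="/path/file.log"
su -l user -s /bin/bash -c "exec > >( cat > $LOG ) 2>&1 ; { /path/python-script.py parameter1 & }"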