Current Situation
I created a PHP script to start a Python script.
Here is the script:
$python_file = "/var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null &";
$command = "nohup python3 ".$python_file;
exec($command);
Problem:
After triggering the PHP script, the request keeps on running and finally returns a 504 error page.
Expected Solution
After triggering the above script, it needs to return immediately after the exec statement. Is that possible?
Add & to the end of the command so it runs in the background:
$python_file = "/var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null";
$command = "nohup python3 " . $python_file . " &";
exec($command);
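For exec() to return immediately, the spawned command must be backgrounded and must not keep PHP's output pipes open, which is what nohup plus the redirects achieve. As a sanity check, this is the effective shell command the snippet builds (same paths as above); you can try it in a shell first:
nohup python3 /var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null &
tail -f /tmp/mylog   # confirm the script keeps logging after the prompt returns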
Related
For a GitLab CI runner
I have a jar file which needs to run continuously on the Linux box, but since this application runs continuously, the Python script on the next line never gets executed. How can I run the jar application and then execute the Python script alongside it?
The .gitlab-ci.yml file:
pwd && ls -l
unzip ZAP_2.8.0_Core.zip && ls -l
bash scan.sh
python3 Report.py
The scan.sh file contains the command java -jar app.jar.
Since this application runs continuously, the fourth line, python3 Report.py, never gets executed.
How do I make both of these run simultaneously, without the .jar application stopping?
The immediate solution would probably be:
pwd && ls -l
echo "ls OK"
unzip ZAP_2.8.0_Core.zip && ls -l
echo "unzip + ls OK"
bash scan.sh &
scanpid=$!
echo "started scanpid with pid $scanpid"]
ps axuf | grep $scanpid || true
echo "ps + grep OK"
( python3 Report.py ; echo $? > report_status.txt ) || true
echo "report script OK"
kill $scanpid
echo "kill OK"
echo "REPORT STATUS = $(cat report_status.txt)"
test $(cat report_status.txt) -eq 0
In words:
- Start the Java process in the background.
- Run your Python code, remember its return status, and always return true.
- Kill the background process after running Python.
- Check the status code of the Python script.
Perhaps this is not necessary, as I never checked how GitLab CI deals with background processes spawned by its runners.
I take a conservative approach here:
- I remember the process id of the bash script, so that I can kill it later.
- I ensure that the line running the Python script always returns a 0 exit code, so that GitLab CI does not stop executing the next lines, but I remember the status code.
- Then I kill the bash script.
- Then I check whether the exit code of the Python script was 0 or not, so that GitLab CI can properly determine whether the job succeeded.
Another minor comment (not related to your question)
I don't really understand why you write
unzip ZAP_2.8.0_Core.zip && ls -l
instead of
unzip ZAP_2.8.0_Core.zip ; ls -l
If you expect that the unzip command might fail, you could just write
unzip ZAP_2.8.0_Core.zip
ls -l
and GitLab CI would abort automatically before executing ls -l.
I also added many echo statements for easier debugging and error analysis; you might remove them in your final solution.
To run the two scripts one after the other, you can add & to the end of the line that is blocking. That will make it run in the background.
Either do
bash scan.sh &
or add & to the end of the line calling the jar file within scan.sh.
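For the second option, a minimal sketch of scan.sh, assuming it contains nothing but the java call from the question:
#!/bin/bash
# scan.sh: background the jar so this script, and the CI line calling it, returns immediately
java -jar app.jar &
echo "started app.jar with PID $!"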
I am executing a Python script from a bash script as follows:
"python -O myscript.pyo &"
After launching the Python script, I need to press Enter manually to get the prompt back.
Is there a way to avoid this manual intervention?
Thanks in advance!
Pipe a blank input to it:
echo "" | python -O myscript.pyo
You might want to create a bash alias to save keystrokes: alias run_myscript="echo '' | python -O myscript.pyo"
Placing wait after the line that runs the process in the background seems to work.
Source:
http://www.iitk.ac.in/LDP/LDP/abs/html/abs-guide.html#WAITHANG
Example given:
#!/bin/bash
# test.sh
ls -l &
echo "Done."
wait
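If you only want to wait for that one background job rather than for all of them, wait also accepts a PID. A small sketch using the command from the question above:
python -O myscript.pyo &
pid=$!                 # PID of the job just sent to the background
wait "$pid"            # block until exactly that job finishes
echo "myscript exited with status $?"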
Many thanks
I have this simple code for running shell scripts, and it sometimes works, sometimes not. When it does not work, the console log is:
Please edit the vars script to reflect your configuration, then
source it with "source ./vars". Next, to start with a fresh PKI
configuration and to delete any previous certificates and keys, run
"./clean-all". Finally, you can run this tool (pkitool) to build
certificates/keys.
This is strange to me, because when I run the commands in a console they work as they should.
import subprocess

def cmds(*args):
    cd1 = "cd /etc/openvpn/easy-rsa && source ./vars"
    cd2 = "cd /etc/openvpn/easy-rsa && ./clean-all"
    cd3 = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
    runcd1 = subprocess.Popen(cd1, shell=True)
    runcd2 = subprocess.Popen(cd2, shell=True)
    runcd3 = subprocess.Popen(cd3, shell=True)
    return (runcd1, runcd2, runcd3)
I've changed it like this:
import subprocess
from subprocess import PIPE

def pass3Cmds(*args):
    commands = "cd /etc/openvpn/easy-rsa && source ./vars && ./clean-all && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
    runCommands = subprocess.Popen(commands, shell=True, stdout=PIPE)
    return runCommands
but the console prints:
source: not found
You need to combine the three commands into one.
The "source ./vars" only affects the shell from which it's run. When you use three separate Popen commands, you're getting three separate shells.
Run all the commands in one Popen with &&s between them.
The reason this works "sometimes" as written is that you're sometimes running python in a shell where you already sourced the vars script.
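The follow-up error "source: not found" has a separate cause: Popen with shell=True runs /bin/sh, and on Debian-like systems /bin/sh is dash, which does not know the bash-only spelling source; the POSIX equivalent is the . builtin. A quick demonstration, assuming /bin/sh is dash:
sh -c 'source ./vars'     # dash: "source: not found"
sh -c '. ./vars'          # POSIX dot builtin, works in sh and bash
bash -c 'source ./vars'   # or run the whole command string under bash instead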
I'm currently executing a Python file with runuser and redirecting the output to a file. This is the command:
runuser -l "user" -c "/path/python-script.py parameter1 > /path/file.log &"
This runs the Python script correctly but creates an empty log file. If I run it without the redirect:
runuser -l "user" -c "/path/python-script.py parameter1 &"
it runs the Python script correctly, and all the output from the Python script flows to the screen. The output from the Python script is produced with print, which writes to stdout.
I don't understand why the output from the Python script is not dumped to the file. File permissions are correct. The log file is created, but never filled.
But if I remove "parameter1", the error message reported by the Python script is correctly dumped to the log file:
runuser -l "user" -c "/path/python-script.py > /path/file.log &"
The error message is also produced with print, so I don't understand why one message is dumped and the others are not.
Maybe runuser interprets "parameter1" as a command or something. But the parameter is passed to the script correctly, as I can see with ps:
/usr/bin/python /path/python-script.py connect
I've tried adding 2>&1 but it still doesn't work.
Any ideas?
I encountered a similar problem in startup scripts, where I needed to log output. In the end I came up with the following:
USR=myuser
PRG=myprog
WKD="/path/to/workdir"
BIN="/path/to/binary"
ARG="--my arguments"
PID="/var/run/myapp/myprog.pid"
su -l $USR -s /bin/bash -c "exec > >( logger -t $PRG ) 2>&1 ; cd $WKD; { $BIN $ARG & }; echo \$! > $PID "
This is handy because you also have the PID of the process available. The example writes to syslog, but if you want it to write to a file, use cat:
LOG="/path/to/file.log"
su -l $USR -s /bin/bash -c "exec > >( cat > $LOG ) 2>&1 ; cd $WKD; { $BIN $ARG & }; echo \$! > $PID "
It starts a new shell and ties all output to the command inside the exec.
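A quick way to verify the pattern after starting it, using the variables defined above:
cat "$PID"                  # the PID recorded for the background process
kill -0 "$(cat "$PID")"     # exit status 0 means it is still running
tail -f "$LOG"              # watch the output arrive in the log file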
I'm working on a Django website where I have various compilation programs that need to run (Compass/Sass, CoffeeScript, hamlpy), so I made this shell script for convenience:
#!/bin/bash
SITE=/home/dev/sites/rmx
echo "RMX using siteroot=$SITE"
$SITE/rmx/manage.py runserver &
PIDS[0]=$!
compass watch $SITE/media/compass/ &
PIDS[1]=$!
coffee -o $SITE/media/js -cw $SITE/media/coffee &
PIDS[2]=$!
hamlpy-watcher $SITE/templates/hamlpy $SITE/templates/templates &
PIDS[3]=$!
trap "echo PIDS: ${PIDS[*]} && kill ${PIDS[*]}" SIGINT
wait
Everything except the Django server shuts down nicely on Ctrl+C, because the PID of the server process isn't the PID of the python manage.py runserver command. That means every time I stop the script, I have to find the running process's PID and shut it down by hand.
Here's an example:
$> ./compile.sh
RMX using siteroot....
...
[ctrl+c]
PIDS: 29725 29726 29728 29729
$> ps -A | grep python
29732 pts/2 00:00:00 python
The first PID, 29725, is the initial python manage.py runserver call, but 29732 is the actual dev server process.
Edit: it looks like this is due to Django's auto-reload feature, which can be disabled with the --noreload flag. Since I'd like to keep the auto-reload feature, the question now becomes how to kill the child processes from the bash script. I would have thought killing the initial python runserver command would do it...
SOLVED
Thanks to this SO question, I've changed my script to this:
#!/bin/bash
SITE=/home/dev/sites/rmx
echo "RMX using siteroot=$SITE"
$SITE/rmx/manage.py runserver &
compass watch $SITE/media/compass/ &
coffee -o $SITE/media/js -cw $SITE/media/coffee &
hamlpy-watcher $SITE/templates/hamlpy $SITE/templates/templates &
trap "kill -TERM -$$" SIGINT
wait
A PID preceded by a dash makes kill operate on the whole process group, and $$ refers to the PID of the bash script itself.
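A small sketch of the group mechanics, assuming the script was started from an interactive shell (job control makes it a process group leader, so its background children share its PGID):
sleep 100 &
sleep 200 &
ps -o pid,pgid,cmd    # both sleeps show this script's PGID
kill -TERM -$$        # the negative PID signals every process in the group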
Thanks for the help, me!
No problem, self, and hey -- you're awesome.
You can execute this to kill the process or server listening on a given port; set the PORT number:
$ netstat -tulpn | grep PORT | awk '{print $7}' | cut -d/ -f 1 | xargs kill
OR
$ sudo lsof -i tcp:PORT
$ sudo lsof -i tcp:PORT | awk '{print $2}' | cut -d/ -f 1 | xargs kill