I have a python script that calls a bash script, which calls another bash script that hangs only when called from python.
test.py
#!/usr/bin/python
import subprocess
print("Python")
subprocess.call(["/home/user/test/bash1.sh"])
bash1.sh
#!/bin/bash
echo "Bash 1"
var=$(echo "Bash 1 var")
echo $var
/home/user/test/bash2.sh
bash2.sh
#!/bin/bash
echo "Bash 2"
var=$(echo "Bash 2 var")
echo $var
randomkey=$(cat /dev/urandom | tr -dc 'a-z' | fold -w 8 | head -n 1)
echo $randomkey
When I run ./bash1.sh, everything works just fine. When I run test.py, bash2.sh hangs at:
randomkey=$(cat /dev/urandom | tr -dc 'a-z' | fold -w 8 | head -n 1)
I have a stinking feeling the pipes (|) aren't reaching their destinations. Any ideas how to make this work from test.py?
EDIT: Ubuntu VM with Python 2.7
I solved this by upgrading from Python 2.7 to 3.7.
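If upgrading is not an option, a workaround that follows from the SIGPIPE discussion further down this page is to restore the default SIGPIPE handler in the child process. A minimal sketch against the test.py above (not the OP's code; preexec_fn runs in the child between fork and exec):
#!/usr/bin/python
import signal
import subprocess

# Restore the default SIGPIPE handler in the child only, so the pipeline in
# bash2.sh can terminate normally once head exits.
subprocess.call(["/home/user/test/bash1.sh"],
                preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL))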
Related
Current Situation
I created a PHP script to start the Python script.
The script is as follows:
$python_file = "/var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null &";
$command = "nohup python3 ".$python_file;
exec($command);
Problem:
After triggering the PHP script, it keeps on running and finally returns a 504 error page.
Expected Solution
After triggering the above script, it needs to return immediately after the exec statement. Is that possible?
Add & at the end of the command so the shell puts it in the background; with the output already redirected, exec() can return immediately:
$python_file = "/var/www/web/test.py 2>&1 | tee -a /tmp/mylog 2>/dev/null >/dev/null &";
$command = "nohup python3 ".$python_file . " &";
exec($command);
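The same principle applies if the long-running job is started from Python rather than PHP; a rough Python 3 sketch (the paths are taken from the question, and start_new_session plays roughly the role nohup plays above):
import subprocess

# Launch test.py without waiting for it: redirect its output to the log and
# return immediately, the Python analogue of "nohup ... &" above.
with open("/tmp/mylog", "ab") as log:
    subprocess.Popen(["python3", "/var/www/web/test.py"],
                     stdout=log, stderr=subprocess.STDOUT,
                     start_new_session=True)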
I have a short bash script foo.sh
#!/bin/bash
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
When I run it directly from the shell, it runs fine, exiting when it is done
$ ./foo.sh
m1un
$
but when I run it from Python
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
ygs9
it outputs the line but then just hangs forever. What is causing this discrepancy?
Adding the trap -p command to the bash script, stopping the hung python process and running ps shows what's going on:
$ cat foo.sh
#!/bin/bash
trap -p
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
trap -- '' SIGPIPE
trap -- '' SIGXFSZ
ko5o
^Z
[1]+ Stopped python -c "import subprocess; subprocess.call(['./foo.sh'])"
$ ps -H -o comm
COMMAND
bash
python
foo.sh
cat
tr
fold
ps
Thus, subprocess.call() executes the command with SIGPIPE ignored. When head does its job and exits, the remaining processes never receive the broken-pipe signal and so do not terminate.
Having the explanation of the problem at hand, it was easy to find the bug in the python bugtracker, which turned out to be issue#1652.
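An illustrative check (not part of the original answer) shows both halves of this: the interpreter itself ignores SIGPIPE, and what differs between Python versions is whether the child inherits that disposition:
import signal
import subprocess

# CPython installs SIG_IGN for SIGPIPE at startup, in Python 2 and 3 alike.
print(signal.getsignal(signal.SIGPIPE) == signal.SIG_IGN)  # True
# Python 2 lets the child inherit the ignored disposition; Python 3's subprocess
# restores default handlers in the child (restore_signals=True), so this prints
# the SIGPIPE/SIGXFSZ traps under Python 2 and nothing under Python 3.
subprocess.call(["bash", "-c", "trap -p"])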
The problem of Python 2 handling SIGPIPE in a non-standard way (i.e., ignoring it) is already explained in Leon's answer, and the fix is given in the link: set SIGPIPE to the default handler (SIG_DFL) with, e.g.,
import signal
signal.signal(signal.SIGPIPE,signal.SIG_DFL)
You can try to unset SIGPIPE from within your script with, e.g.,
#!/bin/bash
trap SIGPIPE # reset SIGPIPE
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
but, unfortunately, it doesn't work, as per the Bash reference manual:
Signals ignored upon entry to the shell cannot be trapped or reset.
A final comment: you have a useless use of cat here; it's better to write your script as:
#!/bin/bash
tr -dc 'a-z1-9' < /dev/urandom | fold -w 4 | head -n 1
Yet, since you're using Bash, you might as well use the read builtin as follows (this will advantageously replace fold and head):
#!/bin/bash
read -n4 a < <(tr -dc 'a-z1-9' < /dev/urandom)
printf '%s\n' "$a"
It turns out that with this version, you'll have a clear idea of what's going on (and the script will not hang):
$ python -c "import subprocess; subprocess.call(['./foo'])"
hcwh
tr: write error: Broken pipe
tr: write error
$
$ # script didn't hang
(Of course, it works with no errors under Python 3.) And telling Python to use the default handler for SIGPIPE works well too:
$ python -c "import signal; import subprocess; signal.signal(signal.SIGPIPE,signal.SIG_DFL); subprocess.call(['./foo'])"
jc1p
$
(and also works with Python3).
I'm new to bash scripting and found some code on Stack Overflow. I merged it with my script, but it doesn't run the Python scripts; when I echo, it always goes to "Good". When I run ps -ef | grep runserver*, the following line always shows up and the Python scripts still don't run.
root 1133 0.0 0.4 11988 2112 pts/0 S+ 02:58 0:00 grep --color=auto runserver.py
Here is my code:
#!/bin/sh
SERVICE='runserver*'
if ps ax | grep -v grep | grep $SERVICE > /dev/null
then
    python /var/www/html/rest/runserver.py
    python /var/www/html/rest2/runserver.py
else
    echo "Good"
fi
If you are more familiar with python, try this instead:
#!/usr/bin/python
import os
import sys
process = os.popen("ps aux | grep -v grep | grep WHATEVER").read().splitlines()
if len(process) == 2:
    print "WHATEVER is running - nothing to do"
else:
    os.system("WHATEVER &")
Is the following code what you want?
#!/bin/sh
SERVER='runserver*'
CC=`ps ax|grep -v grep|grep "$SERVER"`
if [ "$CC" = "" ]; then
    python /var/www/html/rest/runserver.py
    python /var/www/html/rest2/runserver.py
else
    echo "good"
fi
@NickyMan, the problem is the logic in your shell script. Your program doesn't find runserver and always says "Good".
In the code below, if the server is not found, runserver is started.
#!/bin/sh
SERVICE='runserver*'
if ps ax | grep -v grep | grep $SERVICE > /dev/null
then
    echo "Good"
else
    python /var/www/html/rest/runserver.py
    python /var/www/html/rest2/runserver.py
fi
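If you would rather do the check from Python after all, here is a sketch along the lines of the earlier Python answer, but using pgrep so grep never matches itself (Python 3, paths taken from the question):
import subprocess

def is_running(pattern):
    # pgrep -f exits with 0 when some process command line matches the pattern
    return subprocess.call(["pgrep", "-f", pattern],
                           stdout=subprocess.DEVNULL) == 0

if is_running("runserver.py"):
    print("Good")
else:
    subprocess.call(["python", "/var/www/html/rest/runserver.py"])
    subprocess.call(["python", "/var/www/html/rest2/runserver.py"])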
Currently, I do the following steps:
a. Grep for pid of a process and kill it.
ps -aux | grep foo.bar # process of interest
kill -9 pid_of_foo.bar # kill the process
b. start virtualenv
cd {required_folder}
sudo virtualenv folder/
cd {folder2}
source bin/activate
c. Start the manage.py in shell mode
cd {required folder}
sudo python manage.py shell
d. In the interactive manage shell, execute the following commands:
from core import *
foo.bar.bz.clear.state()
exit
e. Execute a script
/baz/maz/foo
In bash we can write down a series of commands; however, is it possible to run the interactive Django shell from bash and execute commands in it? I was wondering if the above steps can be scripted.
Thanks
You need a script like this one:
#!/bin/bash
# kill all foo.bar's instances
for pid in $(ps -aux | grep foo.bar | grep -v grep | awk '{print $2;}'); do
    kill $pid
done
# start virtualenv
cd {required_folder}
...
# Start the manage.py in shell mode
cd {required folder}
cat << EOF | sudo python manage.py shell
from core import *
foo.bar.bz.clear.state()
exit
EOF
# Execute a script
/baz/maz/foo
The key point of the script is the heredoc Python snippet. Take a look at the example I've just tried in a console:
[alex#galene ~]$ cat <<EOF_MARK | python -
> import sys
> print "Hello, world from python %s" % sys.version
> exit
> EOF_MARK
Hello, world from python 2.7.6 (default, Nov 22 2013, 22:57:56)
[GCC 4.7.2 20121109 (ALT Linux 4.7.2-alt7)]
[alex#galene ~]$ _
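If you prefer to drive it from Python rather than a heredoc, the same commands can be fed to manage.py shell on stdin; a sketch (Python 3.5+, with {required folder} standing in for the question's directory):
import subprocess

# Feed the interactive commands on stdin, just like the heredoc above.
commands = b"from core import *\nfoo.bar.bz.clear.state()\nexit()\n"
subprocess.run(["python", "manage.py", "shell"],
               input=commands, cwd="{required folder}", check=True)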
I have some python-2.x scripts which I copy between different systems, Debian and Arch linux.
Debian installs python as '/usr/bin/python' while Arch installs it as '/usr/bin/python2'.
A problem is that on Arch linux '/usr/bin/python' also exists which refers to python-3.x.
So every time I copy a file I have to correct the shebang line, which is a bit annoying.
On Arch I use
#!/usr/bin/env python2
While on debian I have
#!/usr/bin/env python
Since 'python2' does not exist on Debian, is there a way to pass a preferred application? Maybe with some shell expansion? I don't mind if it depends on '/bin/sh' existing for example.
The following would be nice, but none of them work.
#!/usr/bin/env python2 python
#!/usr/bin/env python{2,}
#!/bin/sh python{2,}
#!/bin/sh -c python{2,}
The frustrating thing is that 'sh -c python{2,}' works on the command line: i.e. it calls python2 where available and otherwise python.
I would prefer not to make a link 'python2->python' on Debian, because then if I give the script to someone else it will not run. Neither would I like to make 'python' point to python2 on Arch, since that breaks with updates.
Is there a clean way to do this without writing a wrapper?
I realize similar questions have been asked before, but I didn't see any answers meeting my boundary conditions :)
Conditional shebang line for different versions of Python
--- UPDATE
I hacked together an ugly shell solution, which does the job for now.
#!/bin/bash
pfound=false; v0=2; v1=6
for p in /{usr/,}bin/python*; do
    v=($($p -V 2>&1 | cut -c 7- | sed 's/\./ /g'))
    if [[ ${v[0]} -eq $v0 && ${v[1]} -eq $v1 ]]; then pfound=true; break; fi
done
if ! $pfound; then echo "no suitable python version (2.6.x) found."; exit 1; fi
$p - $* <<EOF
PYTHON SCRIPT GOES HERE
EOF
explanation:
get version number (v is a bash array) and check
v=($($p -V 2>&1 | cut -c 7- | sed 's/\./ /g'))
if [[ ${v[0]} -eq $v0 && ${v[1]} -eq $v1 ]]; then pfound=true; break; fi
launch the found program $p with the script on stdin (-) and pass the arguments ($*)
$p - $* <<EOF
...
EOF
#!/bin/sh
''''which python2 >/dev/null 2>&1 && exec python2 "$0" "$@" # '''
''''which python >/dev/null 2>&1 && exec python "$0" "$@" # '''
''''exec echo "Error: I can't find python anywhere" # '''
import sys
print sys.argv
This is first run as a shell script. You can put almost any shell code in between '''' and # '''. Such code will be executed by the shell. Then, when python runs on the file, python will ignore the lines as they look like triple-quoted strings to python.
The shell script tests if the binary exists in the path with which python2 >/dev/null and then executes it if so (with all arguments in the right place). For more on this, see Why does this snippet with a shebang #!/bin/sh and exec python inside 4 single quotes work?
Note: each line starts with four ' characters, and there must be no space between the fourth ' and the start of the shell command (which...).
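For example, assuming the file is saved as myscript.py (a name chosen only for illustration), made executable, and python2 is on the PATH, the arguments end up where you would expect:
$ ./myscript.py one two
['./myscript.py', 'one', 'two']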
Something like this:
#!/usr/bin/env python
import sys
import os
if sys.version_info >= (3, 0):
    # Re-exec the script under the first Python 2 interpreter that exists;
    # execvp raises OSError when the interpreter is not found, so fall through.
    for name in ("python2.7", "python2.6", "python2"):
        try:
            os.execvp(name, [name, __file__])
        except OSError:
            continue
    print("No suitable version of Python found")
    exit(2)
Update: Below is a more robust version of the same.
#!/bin/bash
ok=bad
for pyth in python python2.7 python2.6 python2; do
    pypath=$(type -P $pyth)
    if [[ -x $pypath ]] ; then
        ok=$(
$pyth <<##
import sys
if sys.version_info < (3, 0):
    print ("ok")
else:
    print("bad")
##
        )
        if [[ $ok == ok ]] ; then
            break
        fi
    fi
done
if [[ $ok != ok ]]; then
    echo "Could not find suitable python version"
    exit 2
fi
$pyth <<##
<<< your python script goes here >>>
##
I'll leave this here for future reference.
All of my own scripts are usually written for Python 3, so I'm using a modified version of Aaron McDaid's answer to check for Python 3 instead of 2:
#!/usr/bin/env sh
''''which python3 >/dev/null 2>&1 && exec python3 "$0" "$@" # '''
''''test $(python --version 2>&1 | cut -c 8) -eq 3 && exec python "$0" "$@" # '''
''''exec echo "Python 3 not found." # '''
import sys
print(sys.argv)
Here is a more concise version of the highest-voted answer:
#!/bin/sh
''''exec $(which python3 || which python2 || echo python) $0 "$@" #'''
import sys
print(sys.argv)
print(sys.version_info)
You'll get this if none of them is found:
'./test.py: 2: exec: python: not found'
Also, get rid of warnings from linter:
'module level import not at top of file - flake8(E402)'
In my case I don't need the python path or other information (like version).
I have an alias set up to point "python" to "python3", but my scripts are not aware of the shell alias, so I needed a way for the scripts to determine the program name programmatically.
If you have more than one version installed, the sort and tail are going to give you the latest version:
#!/bin/sh
command=$( (which python3 || which python2 || which python) | sort | tail -n1 | awk -F "/" '{ print $NF }')
echo $command
Would give a result like: python3
This solution is original but similar to @Pamela's.
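For comparison, the same lookup can be done from inside Python (an illustrative sketch using shutil.which, available since Python 3.3):
import os
import shutil

# Take the first interpreter found on PATH and report only its basename,
# mirroring the awk '{ print $NF }' step above.
path = shutil.which("python3") or shutil.which("python2") or shutil.which("python")
print(os.path.basename(path) if path else "python not found")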