shebang env preferred python version

I have some python-2.x scripts which I copy between different systems, Debian and Arch Linux.
Debian installs Python as '/usr/bin/python' while Arch installs it as '/usr/bin/python2'.
A problem is that on Arch Linux '/usr/bin/python' also exists, and it refers to python-3.x.
So every time I copy a file I have to correct the shebang line, which is a bit annoying.
On Arch I use
#!/usr/bin/env python2
While on debian I have
#!/usr/bin/env python
Since 'python2' does not exist on Debian, is there a way to pass a preferred application? Maybe with some shell expansion? I don't mind if it depends on '/bin/sh' existing for example.
The following would be nice but don't work.
#!/usr/bin/env python2 python
#!/usr/bin/env python{2,}
#!/bin/sh python{2,}
#!/bin/sh -c python{2,}
The frustrating thing is that 'sh -c python{2,}' works on the command line: i.e. it calls python2 where available and otherwise python.
I would prefer not to make a link 'python2->python' on Debian, because then if I give the script to someone else it will not run. Neither would I like to make 'python' point to python2 on Arch, since that breaks with updates.
Is there a clean way to do this without writing a wrapper?
I realize similar questions have been asked before, but I didn't see any answers meeting my boundary conditions :)
Conditional shebang line for different versions of Python
--- UPDATE
I hacked together an ugly shell solution, which does the job for now.
#!/bin/bash
pfound=false; v0=2; v1=6
for p in /{usr/,}bin/python*; do
    v=($("$p" -V 2>&1 | cut -c 7- | sed 's/\./ /g'))
    if [[ ${v[0]} -eq $v0 && ${v[1]} -eq $v1 ]]; then pfound=true; break; fi
done
if ! $pfound; then echo "no suitable python version (2.6.x) found."; exit 1; fi
$p - $* <<EOF
PYTHON SCRIPT GOES HERE
EOF
explanation:
get version number (v is a bash array) and check
v=($("$p" -V 2>&1 | cut -c 7- | sed 's/\./ /g'))
if [[ ${v[0]} -eq $v0 && ${v[1]} -eq $v1 ]]; then pfound=true; break; fi
launch found program $p with input from stdin (-) and pass arguments ($*)
$p - $* <<EOF
...
EOF

#!/bin/sh
''''which python2 >/dev/null 2>&1 && exec python2 "$0" "$@" # '''
''''which python >/dev/null 2>&1 && exec python "$0" "$@" # '''
''''exec echo "Error: I can't find python anywhere" # '''
import sys
print sys.argv
This is first run as a shell script. You can put almost any shell code in between '''' and # '''; such code will be executed by the shell. Then, when Python runs the file, it ignores those lines because they look like triple-quoted strings to Python.
The shell script tests if the binary exists in the path with which python2 >/dev/null and then executes it if so (with all arguments in the right place). For more on this, see Why does this snippet with a shebang #!/bin/sh and exec python inside 4 single quotes work?
Note: The line starts with four ' and there must be no space between the fourth ' and the start of the shell command (which...)
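For example, saving the file above as polyglot.py (a hypothetical name) and making it executable gives, on a machine that only has python2:
$ chmod +x polyglot.py
$ ./polyglot.py one two
['./polyglot.py', 'one', 'two']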

Something like this:
#!/usr/bin/env python
import sys
import os

if sys.version_info >= (3, 0):
    # os.execvp raises OSError when the interpreter is missing,
    # so fall through and try the next one.
    for interpreter in ("python2.7", "python2.6", "python2"):
        try:
            os.execvp(interpreter, [interpreter, __file__])
        except OSError:
            pass
    print("No suitable version of Python found")
    exit(2)
Update: Below is a more robust version of the same.
#!/bin/bash
ok=bad
for pyth in python python2.7 python2.6 python2; do
    pypath=$(type -P $pyth)
    if [[ -x $pypath ]] ; then
        ok=$(
$pyth <<##
import sys
if sys.version_info < (3, 0):
    print("ok")
else:
    print("bad")
##
        )
        if [[ $ok == ok ]] ; then
            break
        fi
    fi
done
if [[ $ok != ok ]]; then
    echo "Could not find suitable python version"
    exit 2
fi
$pyth <<##
<<< your python script goes here >>>
##

I'll leave this here for future reference.
All of my own scripts are usually written for Python 3, so I'm using a modified version of Aaron McDaid's answer to check for Python 3 instead of 2:
#!/usr/bin/env sh
''''which python3 >/dev/null 2>&1 && exec python3 "$0" "$@" # '''
''''test $(python --version 2>&1 | cut -c 8) -eq 3 && exec python "$0" "$@" # '''
''''exec echo "Python 3 not found." # '''
import sys
print(sys.argv)

Here is a more concise version of the highest-voted answer:
#!/bin/sh
''''exec $(which python3 || which python2 || echo python) "$0" "$@" #'''
import sys
print(sys.argv)
print(sys.version_info)
You'll get this if none of them is found:
'./test.py: 2: exec: python: not found'
Also, you may want to get rid of the linter warning: 'module level import not at top of file' (flake8 E402).
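If you use flake8, one way to silence that warning is a per-line noqa directive on the import, e.g.:
import sys  # noqa: E402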

In my case I don't need the python path or other information (like version).
I have an alias set up to point "python" to "python3". But my scripts are not aware of the shell alias, so I needed a way for the scripts to determine the program name programmatically.
If you have more than one version installed, the sort and tail are going to give you the latest version:
#!/bin/sh
command=$( (which python3 || which python2 || which python) | sort | tail -n1 | awk -F "/" '{ print $NF }')
echo $command
This would give a result like: python3
This solution is original, but similar to @Pamela's.

Related

Best way to run script via a /bin/bash shell script in multiple environments?

I have a Python script which I run on localhost and on development from the command line with an argument, e.g. python script.py development on development and python script.py localhost on localhost.
Now I want to run this script from a /bin/bash shell script.
I added #!/usr/bin/env python in the header of the sh script.
In what way can I achieve this?
do
if [ $1 == "local" ]; then
python script.py $1
elif [ $1 == "development" ]; then
python script.py $1
What can I do to improve this script?
Since $1 already contains what you want, the conditional is unnecessary.
If your script is a Bash script, you should put #!/bin/bash (or your local equivalent) in the shebang line. However, this particular script uses no Bash features, and so might usefully be coded to run POSIX sh instead.
#!/bin/sh
case $1 in
    local|development) ;;
    *) echo "Syntax: $0 local|development" >&2; exit 2;;
esac
exec python script.py "$1"
A more useful approach is to configure your local system to run the script directly with ./script.py or similar, and let the script itself take care of parsing its command-line arguments. How exactly to do that depends on your precise environment, but on most U*x-like systems, you would put #!/usr/bin/env python as the first line of script.py itself, and chmod +x the file.
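For instance, a quick sketch of that setup (the argument handling inside script.py is assumed to already exist):
$ head -n 1 script.py
#!/usr/bin/env python
$ chmod +x script.py
$ ./script.py development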
I assume this is what you wanted...
#!/bin/bash
if [ "$#" -eq 0 ]; then
    echo "Usage: $0 (local|development)"
    exit
fi
if [ "$1" == "local" ]; then
    python script.py "$1"
    echo "$1"
elif [ "$1" == "development" ]; then
    python script.py "$1"
    echo "$1"
fi
Save the bash code above into a file named, let's say, script.sh. Then make it executable: chmod +x script.sh. Then run it:
./script.sh
If no argument is specified, the script will just print information about how to use it.
./script.sh local - executes python script.py local
./script.sh development - executes python script.py development
You can comment out the echo lines; they were left there just for debugging purposes (add a # in front of them).

Script works differently when run from the terminal and when run from Python

I have a short bash script foo.sh
#!/bin/bash
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
When I run it directly from the shell, it runs fine, exiting when it is done
$ ./foo.sh
m1un
$
but when I run it from Python
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
ygs9
it outputs the line but then just hangs forever. What is causing this discrepancy?
Adding the trap -p command to the bash script, stopping the hung python process and running ps shows what's going on:
$ cat foo.sh
#!/bin/bash
trap -p
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
$ python -c "import subprocess; subprocess.call(['./foo.sh'])"
trap -- '' SIGPIPE
trap -- '' SIGXFSZ
ko5o
^Z
[1]+ Stopped python -c "import subprocess; subprocess.call(['./foo.sh'])"
$ ps -H -o comm
COMMAND
bash
python
foo.sh
cat
tr
fold
ps
Thus, subprocess.call() executes the command with the SIGPIPE signal ignored. When head does its job and exits, the remaining processes do not receive the broken pipe signal and do not terminate.
Having the explanation of the problem at hand, it was easy to find the bug in the python bugtracker, which turned out to be issue#1652.
The problem with Python 2 handling SIGPIPE in a non-standard way (i.e., being ignored) is already covered in Leon's answer, and the fix is given in the link: set SIGPIPE to its default (SIG_DFL) with, e.g.,
import signal
signal.signal(signal.SIGPIPE,signal.SIG_DFL)
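Alternatively (a sketch, not taken from the answers above), the default handler can be restored only in the child process via Popen's preexec_fn, leaving the parent's signal disposition untouched:
import signal
import subprocess

# Restore the default SIGPIPE disposition in the child, just before exec.
subprocess.call(['./foo.sh'], preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL))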
You can try to unset SIGPIPE from within your script with, e.g.,
#!/bin/bash
trap SIGPIPE # reset SIGPIPE
cat /dev/urandom | tr -dc 'a-z1-9' | fold -w 4 | head -n 1
but, unfortunately, it doesn't work, as per the Bash reference manual
Signals ignored upon entry to the shell cannot be trapped or reset.
A final comment: you have a useless use of cat here; it's better to write your script as:
#!/bin/bash
tr -dc 'a-z1-9' < /dev/urandom | fold -w 4 | head -n 1
Yet, since you're using Bash, you might as well use the read builtin as follows (this will advantageously replace fold and head):
#!/bin/bash
read -n4 a < <(tr -dc 'a-z1-9' < /dev/urandom)
printf '%s\n' "$a"
It turns out that with this version, you'll have a clear idea of what's going on (and the script will not hang):
$ python -c "import subprocess; subprocess.call(['./foo'])"
hcwh
tr: write error: Broken pipe
tr: write error
$
$ # script didn't hang
(Of course, it works well with no errors with Python3). And telling Python to use the default signal for SIGPIPE works well too:
$ python -c "import signal; import subprocess; signal.signal(signal.SIGPIPE,signal.SIG_DFL); subprocess.call(['./foo'])"
jc1p
$
(and also works with Python3).

running shell command in python

I have this simple code for running shell scripts and it sometimes works, sometimes not. When it does not work, the console log is:
Please edit the vars script to reflect your configuration, then
source it with "source ./vars". Next, to start with a fresh PKI
configuration and to delete any previous certificates and keys, run
"./clean-all". Finally, you can run this tool (pkitool) to build
certificates/keys.
It is strange to me, because when I run the commands in the console they work as they should:
def cmds(*args):
    cd1 = "cd /etc/openvpn/easy-rsa && source ./vars"
    cd2 = "cd /etc/openvpn/easy-rsa && ./clean-all"
    cd3 = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
    runcd1 = subprocess.Popen(cd1, shell=True)
    runcd2 = subprocess.Popen(cd2, shell=True)
    runcd3 = subprocess.Popen(cd3, shell=True)
    return (runcd1, runcd2, runcd3)
I've changed it like this:
def pass3Cmds(*args):
    commands = "cd /etc/openvpn/easy-rsa && source ./vars && ./clean-all && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
    runCommands = subprocess.Popen(commands, shell=True, stdout=PIPE)
    return runCommands
but the console prints:
source: not found
You need to combine the three commands into one.
The "source ./vars" only affects the shell from which it's run. When you use three separate Popen commands, you're getting three separate shells.
Run all the commands in one Popen with &&s between them.
The reason this works "sometimes" as written is that you're sometimes running python in a shell where you already sourced the vars script.
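For example, a minimal sketch of such a combined call (the paths are the ones from the question; executable='/bin/bash' is an assumption added here because source is a bash builtin that a plain /bin/sh may not provide):
import subprocess

def build_ca():
    # One shell for all three steps, so that "source ./vars" affects the later commands.
    commands = ("cd /etc/openvpn/easy-rsa && source ./vars && ./clean-all && "
                "printf '\\n\\n\\n\\n\\n\\n\\n\\n\\n' | ./build-ca")
    return subprocess.call(commands, shell=True, executable='/bin/bash')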

Makefile: use $exe1 if exists else $exe2

In bash I can do something like this in order to check if a program exists:
if type -P vim > /dev/null; then
echo "vim installed"
else
echo "vim not installed"
fi
I would like to do the same thing in a Makefile.
In detail, I would like to choose "python3" if installed, else "python" (2). My Makefile looks like this:
PYTHON = python
TSCRIPT = test/test_psutil.py
test:
	$(PYTHON) $(TSCRIPT)
Is there anything I can do to use a conditional around that PYTHON = python line? I understand Makefiles can be told to use bash syntax somehow (SHELL:=/bin/bash?) but I'm no expert.
The easiest thing is probably to use $(shell) to figure out if python3 is callable:
ifeq ($(shell which python3),)
PYTHON = python
else
PYTHON = python3
endif
$(shell which python3) runs which python3 in a shell and expands to the output of that command. That is the path of python3 if it is available, and otherwise it is empty. This can be used in a conditional.
Addendum: About the portability concerns in the comments: the reason that $(shell type -P python3) does not work is that GNU make attempts to optimize away the shell call and fork/exec itself, which does not work with a shell builtin. I found this out from here. If your /bin/sh knows type -P, then
# note the semicolon -------v
ifeq ($(shell type -P python3;),)
works. My /bin/sh is dash, though, so that didn't work for me (it complained about -P not being a valid command). What did work was
ifeq ($(shell type python3;),)
because dash's type sends the error message about unavailable commands to stderr, not stdout (so the $(shell) expands to the empty string). If you can depend on which, I think doing that is the cleanest way. If you can depend on bash, then
ifeq ($(shell bash -c 'type -P python3'),)
also works. Alternatively,
SHELL = bash
ifeq ($(shell type -P python3;),)
has the same effect. If none of those are an option, desperate measures like #MadScientist's answer become attractive.
Or, if all else fails, you can resort to searching the path yourself:
PYTHON = $(shell IFS=:; for dir in $$PATH; do if test -f "$$dir/python3" && test -x "$$dir/python3"; then echo python3; exit 0; fi; done; echo python)
This is lifted from the way autoconf's AC_CHECK_PROG is implemented. I'm not sure whether I'd want this, though.

If you want to be more portable, you can try invoking the command itself to see if it works or not:
PYTHON := $(shell python3 --version >/dev/null 2>&1 && echo python3 || echo python)
PYTHON := $(shell type -P python3 || echo "python")

You could use command -v:
PYTHON := $(shell command -v python3 2> /dev/null || echo python)
In Bash, command is a builtin command.
The example above is for GNU Make. Other Make programs may have a different syntax for running shell commands.
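Putting it together with the Makefile from the question, a minimal sketch (this uses the command -v variant; remember that the recipe line must start with a tab):
PYTHON := $(shell command -v python3 2> /dev/null || echo python)
TSCRIPT = test/test_psutil.py

test:
	$(PYTHON) $(TSCRIPT)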

How can a shell function know if it is running within a virtualenv?

How should a bash function test whether it is running inside a Python virtualenv?
The two approaches that come to mind are:
[[ "$(type -t deactivate)" != function ]]; INVENV=$?
or
[[ "x$(which python)" != "x$VIRTUAL_ENV/bin/python" ]]; INVENV=$?
(Note: wanting $INVENV to be 1 if we're inside a virtualenv, and 0 otherwise, is what forces the backward-looking tests above.)
Is there something less hacky?
if [[ "$VIRTUAL_ENV" != "" ]]
then
INVENV=1
else
INVENV=0
fi
// or shorter if you like:
[[ "$VIRTUAL_ENV" == "" ]]; INVENV=$?
EDIT: as #ThiefMaster mentions in the comments, in certain conditions (for instance, when starting a new shell – perhaps in tmux or screen – from within an active virtualenv) this check may fail (however, starting new shells from within a virtualenv may cause other issues as well, I wouldn't recommend it).
Actually, I just found a similar question, from which one can easily derive an answer to this one:
Python: Determine if running inside virtualenv
E.g., a shell script can use something like
python -c 'import sys; print (sys.real_prefix)' 2>/dev/null && INVENV=1 || INVENV=0
(Thanks to Christian Long for showing how to make this solution work with Python 3 also.)
EDIT: Here's a more direct (hence clearer and cleaner) solution (taking a cue from JuanPablo's comment):
INVENV=$(python -c 'import sys; print ("1" if hasattr(sys, "real_prefix") else "0")')
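For instance, a small bash helper built on the same idea (the function name in_venv is made up here; the sys.base_prefix comparison is an extra assumption to also cover Python 3 venvs, which do not set real_prefix):
in_venv() {
    # exit status 0 when running inside a virtualenv/venv, 1 otherwise
    python -c 'import sys; sys.exit(0 if hasattr(sys, "real_prefix") or sys.prefix != getattr(sys, "base_prefix", sys.prefix) else 1)'
}

if in_venv; then echo "inside a virtualenv"; else echo "outside"; fi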
If you use virtualenvwrapper, there are pre/post scripts that run which could set INVENV for you.
Or, what I do: put the following in your .bashrc, and make a file called .venv in your working directory (for Django) so that the virtualenv is automatically loaded when you cd into the directory:
export PREVPWD=`pwd`
export PREVENV_PATH=

handle_virtualenv(){
    if [ "$PWD" != "$PREVPWD" ]; then
        PREVPWD="$PWD";
        if [ -n "$PREVENV_PATH" ]; then
            if [ "`echo "$PWD" | grep -c $PREVENV_PATH`" = "0" ]; then
                deactivate
                unalias python 2> /dev/null
                PREVENV_PATH=
            fi
        fi
        # activate virtualenv dynamically
        if [ -e "$PWD/.venv" ] && [ "$PWD" != "$PREVENV_PATH" ]; then
            PREVENV_PATH="$PWD"
            workon `basename $PWD`
            if [ -e "manage.py" ]; then
                alias python='python manage.py shell_plus'
            fi
        fi
    fi
}
export PROMPT_COMMAND=handle_virtualenv
