Is there a way to write the shebang line such that it will find the Python3 interpreter, if present?
Naively, from PEP 394 I would expect that #!/usr/bin/env python3 should work.
However, I've noticed that on some systems where python is Python3, they don't provide a python3 alias. On these systems, you'd need to use #!/usr/bin/env python to get Python3.
Is there a robust way to handle this ambiguity? Is there some way to write the shebang line such that it will use python3 if present, but try python if not? (Requiring that end users manually fix their systems to add a python3 alias is not ideal.)
The only way I can see to do this is to provide your own shebang wrapper to call the correct version of python. If you can reliably place the wrapper in a set location you can do this:
Create wrapper script, e.g. /usr/local/bin/python3_wrapper
#!/bin/bash
cmd="$1"
shift
if which python3 >/dev/null; then
    exec python3 "$cmd" "$@"
elif which python >/dev/null; then
    version=$(python --version 2>&1 | cut -d' ' -f2 | cut -d. -f1)
    if [[ "$version" == "3" ]]; then
        exec python "$cmd" "$@"
    else
        echo "python is version $version (python3 not found)" >&2
    fi
else
    echo "neither python3 nor python found" >&2
fi
exit 1
Then use the following shebang in your script:
#!/usr/local/bin/python3_wrapper
Your other option would be to call a python script that works in both version 2 and 3 that then calls your python3 script using the correct executable. If your script is called script.py then rename it to script.py3 and create script.py as follows:
#!/usr/bin/env python
import os
import sys

if sys.version_info.major == 3:
    exe = "python"   # python is already version 3.x
else:
    exe = "python3"  # python is not version 3.x, so try python3

try:
    os.execvp(exe, [exe, sys.argv[0] + '3'] + sys.argv[1:])
except OSError:
    sys.stderr.write(exe + " not found\n")
    sys.exit(1)
I'm trying to run a Python script inside a perl script with the following command:
system("python3 script.py -d http:\/\/site.com --no-interaction");
qx/python3 script.py -d http:\/\/site.com --no-interaction/;
On the operating system's command line, the Python script executes, but when I make a call from a PHP application, the Perl part works while the Python script does not.
Do you get any error message from Perl side?
Likely where your PHP/Perl script runs from isn't the same location as where script.py is at. Try by using full path to Python script. Also double check that python3 is in your $PATH.
For example:
-> cat /home/me/python/script.py
print("This line will be printed.")
-> cat /home/me/perl/pytest.pl
#!/bin/env perl
print "From perl:\n";
system ("python3 /home/me/python/script.py");
-> cd /home/me/perl/
-> ksh
-> whence python3
/usr/bin
-> pytest.pl
From perl:
This line will be printed.
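The "is python3 in your $PATH" check can also be done from Python itself; a minimal sketch using the standard library (shutil.which mirrors the shell's which/whence lookup):

```python
import shutil

# returns the full path to the executable, or None if it is not on PATH
path = shutil.which("python3")
print(path if path else "python3 not found on PATH")
```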
I am working with shell scripting and trying to learn Python scripting, any suggestion is welcome.
I want to achieve something like below:
Usage1:
ps_result=`ps -eaf|grep -v "grep" | grep "programmanager"`
then can we use the ps_result variable straight away in Python code? If yes, how?
Usage2:
matched_line=`cat file_name | grep "some string"`
can we use the matched_line variable in Python code as a list? If yes, how?
PS: If possible, assume I am writing the bash and Python code in one file; if that is not possible, please suggest a way. TIA
Yes, you can do it via environment variables.
First, define the environment variable for the shell using export:
$ export ps_result=`ps -eaf|grep -v "grep" | grep "programmanager"`
Then, import the os module in Python and read the environment variable:
$ python -c 'import os; ps_result=os.environ.get("ps_result"); print(ps_result)'
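For the second question, the exported variable arrives in Python as a single string, so splitting it into lines yields the list the question asks for; a minimal sketch (variable names follow the question, and the export is assumed to have happened in the parent shell):

```python
import os

# matched_line would have been exported by the parent shell, e.g.
#   export matched_line=`grep "some string" file_name`
raw = os.environ.get("matched_line", "")
matched_lines = raw.splitlines()  # one list element per matched line
print(matched_lines)
```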
To the second question first: if you run python -, Python will run the script piped to it on stdin. The subprocess module provides several functions for running other programs. So you could write:
test.sh
#!/bin/sh
python - << endofprogram
import subprocess as subp
# capture_output requires Python 3.7+
result = subp.run('ps -eaf | grep -v "grep" | grep "python"',
                  shell=True, capture_output=True)
if result.returncode == 0:
    matched_lines = result.stdout.decode().split("\n")
    print(matched_lines)
endofprogram
In this example we pipe via the shell, but Python can also chain stdout/stdin itself, albeit more verbosely.
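As a sketch of that more verbose route, the same pipeline can be built by chaining Popen objects instead of using shell=True (the process names are just the example's):

```python
import subprocess as subp

# equivalent of: ps -eaf | grep python, with the pipe built in Python
ps = subp.Popen(["ps", "-eaf"], stdout=subp.PIPE)
grep = subp.Popen(["grep", "python"], stdin=ps.stdout, stdout=subp.PIPE)
ps.stdout.close()  # lets ps receive SIGPIPE if grep exits first
out, _ = grep.communicate()
# drop the grep process itself, as the shell version did with grep -v
matched_lines = [l for l in out.decode().split("\n") if l and "grep" not in l]
print(matched_lines)
```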
I have a python script, which I want to be able to run from bash.
This is simply solved by shebang.
The next step is to implement the time command into the shebang.
My best, though not completely successful, idea was to use
#!/usr/bin/env -vS bash -c "time /usr/bin/python3 -OO"
which sadly does not make Python interpret the script file and instead ends in an interactive Python session.
The output is
split -S: ‘bash -c "time /usr/bin/python3 -OO"’
into: ‘bash’
& ‘-c’
& ‘time /usr/bin/python3 -OO’
executing: bash
arg[0]= ‘bash’
arg[1]= ‘-c’
arg[2]= ‘time /usr/bin/python3 -OO’
arg[3]= ‘./mypycheck.py’
Python 3.7.3 (default, Apr 3 2019, 05:39:12)
How can I do the job? Thanks in advance.
Summing up all the helpful details from here, I was able to reach my goal with the following solution:
Install the time utility by running sudo apt install time
Use the shebang #!/usr/bin/env -S /usr/bin/time /usr/bin/python3 -OO
And now everything runs the way I was looking for.
You can solve this by creating a secondary bash script, and just invoking it as the shebang.
Kamori@Kamori-PC:/tmp# ./timed.py
hello
real 0m0.028s
user 0m0.016s
sys 0m0.000s
Kamori@Kamori-PC:/tmp# cat timed.py
#!/bin/bash startup.sh
print("hello")
Kamori@Kamori-PC:/tmp# cat startup.sh
#!/usr/bin/env bash
time python3.7 timed.py
You cannot do that with a shebang, because its format (on Linux) is:
#!interpreter [optional-arg]
And this argument is passed as a single string (see the "Interpreter scripts" section in the linked document). In other words, you cannot pass multiple arguments to an interpreter (unless they can be concatenated into a single string). This is down to the kernel's implementation of how code gets executed.
Using env -S is also not helpful here, because as you can see in your debugging output:
arg[0]= ‘bash’
arg[1]= ‘-c’
arg[2]= ‘time /usr/bin/python3 -OO’
arg[3]= ‘./mypycheck.py’
It runs the shell, tells it to run a command (-c) starting python wrapped in time, and then passes ./mypycheck.py to bash (not python) as its last argument, the meaning of which is (quoting the bash documentation):
-c
If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, the first argument is assigned to $0 and any remaining arguments are assigned to the positional parameters. The assignment to $0 sets the name of the shell, which is used in warning and error messages.
As for your objective: you could create a wrapper that is used as the interpreter in place of env, which performs the desired actions and then passes the script on to the actual interpreter.
I guess you already tried simply
#!/usr/bin/time python3
Was it not ok?
(i.e. is the -OO in your tests mandatory?)
Example:
$ cat test.py
#!/usr/bin/time python3
import sys
print (sys.argv)
$ ./test.py
['./test.py']
0.01user 0.00system 0:00.02elapsed 95%CPU (0avgtext+0avgdata 9560maxresident)k
0inputs+0outputs (0major+1164minor)pagefaults 0swaps
Although this doesn't solve the -OO part yet.
I am trying to pipe output from a command written in the terminal to a Python script.
For example:
ls | ./foo.py
I wrote a Python script to do the same:
# foo.py
import fileinput

with fileinput.input() as f_input:
    for line in f_input:
        print(line, end='')
But this does not seem to work when I run the following command:
$ ls | sudo ./foo.py
I get an error that says:
$ ./foo.py: command not found
I have checked the working directory and I can see the foo.py when I use the ls command, so what am I doing wrong here?
It seems like you forgot the shebang:
#!/usr/bin/env python3
import fileinput

with fileinput.input() as f_input:
    for line in f_input:
        print(line, end='')
Also remember to make it executable via the command:
chmod +x foo.py
Then run your command again.
You have to pipe it to the Python executable, not to the name of a file. As the error says, that filename doesn't represent a command it knows.
ls | py ./foo.py
Use py or python or however you run the Python interpreter on your particular system.
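If the script only ever consumes a pipe, plain sys.stdin works as an alternative to fileinput; a minimal sketch (the isatty guard simply avoids hanging when nothing is piped in):

```python
import sys

def read_piped_lines(stream):
    # collect piped lines, stripping the trailing newlines
    return [line.rstrip("\n") for line in stream]

if __name__ == "__main__" and not sys.stdin.isatty():
    for line in read_piped_lines(sys.stdin):
        print(line)
```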
I've tried googling the answer but with no luck.
I need to use my work's supercomputer server, but for my Python script to run, it must be executed via a shell script.
For example I want job.sh to execute python_script.py
How can this be accomplished?
Just make sure the python executable is in your PATH environment variable, then add this line to your script:
python path/to/the/python_script.py
Details:
In the file job.sh, put this
#!/bin/sh
python python_script.py
Execute this command to make the script runnable for you : chmod u+x job.sh
Run it : ./job.sh
Method 1 - Create a shell script:
Suppose you have a python file hello.py
Create a file called job.sh that contains
#!/bin/bash
python hello.py
mark it executable using
$ chmod +x job.sh
then run it
$ ./job.sh
Method 2 (BETTER) - Make the Python script itself executable from the shell:
Modify your script hello.py and add this as the first line
#!/usr/bin/env python
mark it executable using
$ chmod +x hello.py
then run it
$ ./hello.py
Save the following program as print.py:
#!/usr/bin/python3
print('Hello World')
Then in the terminal type:
chmod +x print.py
./print.py
You should be able to invoke it as python scriptname.py e.g.
#!/bin/bash
python /home/user/scriptname.py
Also make sure the script has permissions to run.
You can make it executable by using chmod u+x scriptname.py.
IMHO, writing
python /path/to/script.py
is quite wrong, especially these days. Which python? python2.6? 2.7? 3.0? 3.1? Most of the time you need to specify the python version in the shebang tag of the python file. I encourage using #!/usr/bin/env python2 (or python2.6, python3, or even python3.1) for compatibility.
In such a case, it is much better to make the script executable and invoke it directly:
#!/bin/bash
/path/to/script.py
This way the version of python you need is written in only one file. Most systems these days have both python2 and python3, and it happens that the symlink python points to python3, while most people expect it to point to python2.
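When a script may still be launched with the wrong interpreter despite the shebang, a defensive check at the top fails fast with a clear message; a minimal sketch (the function name is just illustrative):

```python
import sys

def require_python3(version=None):
    # fail fast with a readable message instead of an obscure SyntaxError later
    if version is None:
        version = sys.version_info
    if version < (3,):
        raise SystemExit("this script requires Python 3, found %d.%d" % version[:2])
    return True

require_python3()
```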
This works for me:
Create a new shell file, job.sh. So let's say:
touch job.sh, then add the command to run the python script (you can even add command-line arguments to that python script; I usually predefine my command-line arguments).
chmod +x job.sh
Inside job.sh, add the py files to run, let's say:
python python_file.py argument1 argument2 argument3 >> testpy-output.txt && echo "Done with python_file.py"
python python_file1.py argument1 argument2 argument3 >> testpy-output.txt && echo "Done with python_file1.py"
Output of job.sh should look like this:
Done with python_file.py
Done with python_file1.py
I use this usually when I have to run multiple python files with different arguments, pre defined.
Note: Just a quick heads up on what's going on here:
python python_file.py argument1 argument2 argument3 >> testpy-output.txt && echo "completed with python_file.py"
Here shell script will run the file python_file.py and add multiple command-line arguments at run time to the python file.
This does not necessarily mean you have to pass command-line arguments; you can just use it like python python_file.py, plain and simple.
Next, the >> will append the output of this .py file to the testpy-output.txt file.
&& is a logical operator: the part after it runs only if the preceding command executed successfully, so the optional echo "completed with python_file.py" is printed to your CLI/terminal at run time.
This works best for me:
Add this at the top of the script:
#!c:/Python27/python.exe
(C:\Python27\python.exe is the path to the python.exe on my machine)
Then run the script via:
chmod +x script-name.py && ./script-name.py
I use this and it works fine
#!/bin/bash
/usr/bin/python python_script.py
Since the other posts say everything (and I stumbled upon this post while looking for the following), here is a way to execute a python script from another python script:
Python 2:
execfile("somefile.py", global_vars, local_vars)
Python 3:
with open("somefile.py") as f:
    code = compile(f.read(), "somefile.py", 'exec')
exec(code, global_vars, local_vars)
and you can supply args by providing some other sys.argv
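On Python 3 the standard runpy module does the compile/exec dance for you and returns the executed script's globals; a minimal sketch (the temp-file path and its contents are just for the demo):

```python
import runpy
import sys

# create a throwaway script to execute
with open("/tmp/somefile.py", "w") as f:
    f.write("result = 6 * 7\n")

# swap sys.argv temporarily if the target script reads arguments
old_argv = sys.argv
sys.argv = ["/tmp/somefile.py", "--flag"]
try:
    ns = runpy.run_path("/tmp/somefile.py")  # returns the script's globals
finally:
    sys.argv = old_argv

print(ns["result"])  # → 42
```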
Here I demonstrate how to run a python script within a shell script. For different purposes you may need to read the output of a shell command, or execute both a python script and a shell command within the same file.
To execute a shell command from python, use the os.system() method. To read output from a shell command, use os.popen().
Following is an example that greps all processes containing the text sample_program.py. Then, after collecting the process IDs (using python), it kills them all.
#!/usr/bin/python3
import os
# listing all matched processes and taking the output into a variable s
s = os.popen("ps aux | grep 'sample_program.py'").read()
s = '\n'.join([l for l in s.split('\n') if "grep" not in l]) # avoiding killing the grep itself
print("To be killed:")
print(s)
# now manipulating this string s and finding the process IDs and killing them
os.system("kill -9 " + ' '.join([x.split()[1] for x in s.split('\n') if x]))
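The PID-extraction step in that one-liner can be pulled out into a small function and checked without shelling out; a sketch (the sample text only mimics ps output):

```python
def extract_pids(ps_output, needle, exclude="grep"):
    # return the PID column (second field) of every line mentioning `needle`,
    # skipping the grep process itself, as the script above does
    pids = []
    for line in ps_output.splitlines():
        if needle in line and exclude not in line:
            pids.append(line.split()[1])
    return pids

sample = (
    "user  101  0.0  python3 sample_program.py\n"
    "user  202  0.0  grep sample_program.py\n"
)
print(extract_pids(sample, "sample_program.py"))  # → ['101']
```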
References:
Execute a python program from within a shell script
Assign output of os.system to a variable and prevent it from being displayed on the screen
If you have a bash script and you need to run a python3 script (with external modules) inside it, I recommend pointing to your python path in the bash script, like this:
#!/usr/bin/env bash
-- bash code --
/usr/bin/python3 your_python.py
-- bash code --