I have a program with two different versions on this computer. I can't get rid of the older one because I don't have root access, but I put the newer one in 'bin' in my home directory (which is the first entry in my $PATH).
I tried calling it with Python's Popen
Popen(['clingo'...]...)
and it worked fine
But then I needed to set an environment variable before calling it, so I renamed 'clingo' to 'run_clingo' in the bin directory and replaced it with a script:
# File: ~/bin/clingo
export LD_LIBRARY_PATH=/usr/local/gcc4.8.1/lib64:$LD_LIBRARY_PATH
run_clingo "$@"
When I run 'clingo' from the terminal, it works fine, but Python's Popen calls the older version (which is later on the PATH).
How can I get Popen to call the right one? Why does changing an executable to a script cause Popen to search elsewhere?
Add a shebang line, #!/bin/sh, at the very top of ~/bin/clingo and run:
$ chmod +x ~/bin/clingo
to make the file executable.
Popen() calls os.execvp() to execute the program. Its behaviour differs from that of your shell: if the script has no shebang (#!... at the top), the shell falls back to running the file as a shell script, but Python's os.execvp() moves on to the next matching file in PATH instead.
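As an aside, the wrapper script isn't strictly needed just to set an environment variable: Popen accepts an env mapping for the child process. A minimal sketch, assuming the renamed binary lives at ~/bin/run_clingo and the rest of the environment should be inherited:
import os
import subprocess

# Copy the current environment and prepend the extra library path for the child only.
env = dict(os.environ)
env['LD_LIBRARY_PATH'] = '/usr/local/gcc4.8.1/lib64:' + env.get('LD_LIBRARY_PATH', '')
subprocess.Popen([os.path.expanduser('~/bin/run_clingo')], env=env)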
A wild guess is that Popen does not use the same PATH as your terminal. I would also check the shebang line (#!): sometimes people put env -i python there, which explicitly asks for an empty, i.e. default, environment.
Note that subprocess.call is the recommended interface here, rather than using Popen directly.
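For completeness, a minimal sketch of that form (the real clingo arguments are elided in the question, so none are shown here):
import subprocess

# Blocks until the program exits and returns its exit status.
ret = subprocess.call(['clingo'])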
Related
I am using Python and I am trying to run a shell script that is located in another folder. I am trying
subprocess.call(['source','../Apps/appName/run'])
Where 'run' is a shell script I wrote and made an executable. But it keeps giving errors such as
No such file or directory or No such file or directory: "source"
I have also tried the following
subprocess.call(['source ../Apps/appName/run'])
subprocess.call(['.','../Apps/appName/run'])
I am trying to run the script and leave it alone (as in, I do not want any return value or to see what the output of the shell script is).
Thank you in advance
source is a shell builtin that reads a file and interprets the commands in the current shell instance. It's similar to C #include or Python import. You should not be using it to run shell scripts.
The correct way to run a script is to add a shebang like #!/bin/bash to the first line, and chmod +x yourscriptfile. The OS will then consider it a real executable, which can be executed robustly from any context. In Python, you would do:
subprocess.call(['../Apps/appName/run'])
If for whatever reason this is not an option, you can instead explicitly invoke bash on the file, since this is similar to what would happen if you're in bash and type source:
subprocess.call(['bash', '../Apps/appName/run'])
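Since the question mentions not wanting the return value or the output, a minimal fire-and-forget sketch (same relative path as above; subprocess.DEVNULL needs Python 3.3+) would be:
import subprocess

# Launch the script without waiting for it and discard everything it prints.
subprocess.Popen(['../Apps/appName/run'],
                 stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)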
I have a virtualenv named 'venv' and it is activated:
(venv)>
and I wrote code (main.py) that I'll run inside the virtualenv:
import subprocess
result = subprocess.run('python other.py', stdout=subprocess.PIPE)
but when I run main.py file:
(venv)> python main.py
subprocess does not execute the command (python other.py) in the virtualenv, i.e. venv.
How to run subprocess command in the current virtualenv session?
A child process can't run commands in its parent process without that process's involvement.
This is why ssh-agent requires usage as eval "$(ssh-agent -s)" to invoke the shell commands it emits on output, for example. Thus, the literal thing you're asking for here is impossible.
Fortunately, it's also unnecessary.
virtualenvs use environment variables inherited by child processes.
This means that you don't actually need to use the same shell that has a virtualenv activated to start a new Python interpreter intended to use the interpreter/libraries/etc. from that virtualenv.
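A quick way to see that inheritance in action, as a sketch (it assumes a POSIX-style venv whose bin/ directory sits first on PATH, as activation arranges):
import os
import subprocess

# With the venv active, VIRTUAL_ENV is set and 'python' resolves to the venv's
# interpreter, so the child prints the venv's prefix rather than the system one.
print(os.environ.get('VIRTUAL_ENV'))
subprocess.run(['python', '-c', 'import sys; print(sys.prefix)'])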
subprocess.run must be passed a list, or shell=True must be used.
Either do this (which is better!)
import subprocess
result = subprocess.run(['python', 'other.py'], stdout=subprocess.PIPE)
Or this (which is worse!)
import subprocess
result = subprocess.run('python other.py', stdout=subprocess.PIPE, shell=True)
If you want to run a script with the same Python executable being used to run the current script, don't use python and rely on the PATH being set up properly; just use sys.executable:
A string giving the absolute path of the executable binary for the Python interpreter, on systems where this makes sense.
This works if you executed the script with python myscript.py relying on the active virtualenv's PATH. It also works if you executed the script with /usr/local/bin/python3.6 to ignore the PATH and test your script with a specific interpreter. Or if you executed the script with myscript.py, relying on a shebang line created at installation time by setuptools. Or if the script was run as a CGI depending on your Apache configuration. Or if you sudo'd the executable, or did something else that scraped down your environment. Or almost anything else imaginable.1
As explained in Charles Duffy's answer, you still need to use a list of arguments instead of a string (or use shell=True, but you rarely want to do that). So:
result = subprocess.run([sys.executable, 'other.py'], stdout=subprocess.PIPE)
1. Well, not quite… Examples of where it doesn't work include custom C programs that embed a CPython interpreter, some smartphone mini-Python environments, old-school Amiga Python, … The one most likely to affect you—and it's a pretty big stretch—is that on some *nix platforms, if you write a program that execs Python by passing incompatible names for the process and arg0, sys.executable can end up wrong.
Is it possible to use a "more complex" shell than just a single command shell? We have written a python shell that is a command loop, and it works fine in /etc/passwd like this:
user:x:1000:1000::/home/user:/usr/bin/ourshell.py
Of course the Python file has the shebang line for /usr/bin/python in it. However, we'd like to compile the Python shell into a .pyc file to save a bit of time on execution at login. So, after compiling, I've been trying to "quote" the shell line in /etc/passwd as "python ourshell.pyc", and I even tried making the shell a bash script which simply executes that same command (with the initial arguments).
Of course none of this has worked. When we SSH in, there is always some kind of error. Is there any special trick to what I am trying to do?
CPython's .pyc files are not text, and do not allow use of a shebang line. The traditional method is to have your called script be tiny; it would simply import a module with the rest of the program, which can then be precompiled. For instance, here is the main script of xonsh:
#!/usr/bin/env python3 -u
from xonsh.main import main
main()
This script takes negligible time to compile. It is also possible to run installed modules using -m, but that takes module names, not filenames, so is not suitable for a shebang script.
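Applied to the question, a hypothetical thin launcher could look like this; it assumes the shell logic is packaged as an importable module named ourshell with a main() entry point, so only this stub is parsed at login while ourshell itself can be byte-compiled:
#!/usr/bin/env python
# Hypothetical stub: all of the real work lives in the (precompiled) ourshell module.
from ourshell import main
main()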
I suggest coding a small C wrapper program that runs your Python shell.
(Notice that execve(2) forbids nested shebang interpreters; I don't know whether that applies to your case.)
Look into your log files, probably /var/log/messages and /var/log/auth.log.
You may also need to explicitly add the compiled C wrapper executable to /etc/shells; see shells(5).
Look also into scsh.
Your sshd daemon is probably using Pluggable Authentication Modules (PAM), so read more about PAM.
Create a file /usr/bin/shell_wrapper that contains this one line:
#!/usr/bin/python /usr/bin/ourshell.pyc
The compiled bytecode ourshell.pyc has to live in /usr/bin; otherwise, change the path accordingly. The Python path should point to the same interpreter version that compiled the bytecode.
Then make sure to have your /etc/passwd use /usr/bin/shell_wrapper for the shell executable:
user:x:1000:1000::/home/user:/usr/bin/shell_wrapper
I am trying to write a Python script that will execute a bash script I have on my Windows machine. Up until now I have been using the Cygwin terminal, so executing the bash script RunModels.scr has been as easy as ./RunModels.scr. Now I want to be able to use Python's subprocess, but because Windows doesn't have the built-in functionality to handle bash I'm not sure what to do.
I am trying to emulate ./RunModels.scr < validationInput > validationOutput
I originally wrote this:
os.chdir(atm)
vin = open("validationInput", 'r')
vout = open("validationOutput", 'w')
subprocess.call(['./RunModels.scr'], stdin=vin, stdout=vout, shell=True)
vin.close()
vout.close()
os.chdir(home)
But after spending a while trying to figure out why my access was denied, I realized my issue wasn't the file permissions but the fact that I was trying to execute a bash file on Windows in general. Can someone please explain how to execute a bash script with redirected input/output on Windows using a Python script?
Edit (Follow up Question):
Thanks for the responses. I needed the full path to my bash.exe as the first param. Now, however, commands called from within RunModels.scr come back in the Python output as command not found, for example ls, cp, and make. Any suggestions for this?
Follow up #2:
I updated my call to this:
subprocess.call(['C:\\cygwin64\\bin\\bash.exe', '-l', 'RunModels.scr'], stdin=vin, stdout=vout, cwd='C:\\path\\dir_where_RunModels\\')
The error I now get is /usr/bin/bash: RunModels.scr: No such file or directory.
Using cwd does not seem to have any effect on this error; either way the subprocess is looking in /usr/bin/bash for RunModels.scr.
SELF-ANSWERED
I needed to specify the path to RunModels.scr in the call as well as using cwd.
subprocess.call(['C:\\cygwin64\\bin\\bash.exe', '-l', 'C:\\path\\dir_where_RunModels\\RunModels.scr'], stdin=vin, stdout=vout, cwd='C:\\path\\dir_where_RunModels\\')
But another problem...
Regardless of specifying cwd, the commands executed by RunModels.scr are throwing errors as if RunModels.scr is in the wrong directory. The script executes, but cp and cd throw the error no such file or directory. If I navigate to where RunModels.scr is through the command line and execute it the old fashioned way I don't get these errors.
Python 3.4 and below
Just put bash.exe first in your list of subprocess.call arguments. You can remove shell=True; that's not necessary in this case.
subprocess.call(['C:\\cygwin64\\bin\\bash.exe', '-l', 'RunModels.scr'],
                stdin=vin, stdout=vout,
                cwd='C:\\path\\dir_where_RunModels\\')
Depending on how bash is installed (is it in the PATH or not), you might have to use the full path to the bash executable.
Python 3.5 and above
subprocess.call() has been effectively replaced by subprocess.run().
subprocess.run(['C:\\cygwin64\\bin\\bash.exe', '-l', 'RunModels.scr'],
               stdin=vin, stdout=vout,
               cwd='C:\\path\\dir_where_RunModels\\')
Edit:
With regard to the second question, you might need to add the -l option to the shell invocation to make sure it reads all the startup files (rc files) such as /etc/profile. I presume these files contain the $PATH settings for bash.
Edit 2:
Add something like pwd to the beginning of RunModels.scr so you can verify that you are really in the right directory. Check that there is no cd command in the rc-files!
Edit 3:
The error /usr/bin/bash: RunModels.scr: No such file or directory can also be generated if bash cannot find one of the commands that are called in the script. Try adding the -v option to bash to see if that gives more info.
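For instance, a debugging variant of the call above (same assumed paths and the same vin/vout file objects from the question; bash's -v echoes each script line as it is read) might be:
subprocess.run(['C:\\cygwin64\\bin\\bash.exe', '-l', '-v',
                'C:\\path\\dir_where_RunModels\\RunModels.scr'],
               stdin=vin, stdout=vout,
               cwd='C:\\path\\dir_where_RunModels\\')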
A better solution than the accepted answer is to use the executable keyword argument to specify the path to your shell. Behind the curtain, Python does something like
exec([executable, '-c', subprocess_arg_string])
So, concretely, in this case,
subprocess.call(
    './RunModels.scr',
    stdin=vin, stdout=vout,
    shell=True,
    executable="C:/cygwin64/bin/bash.exe")
(Windows thankfully lets you use forward slashes instead of backslashes, so you can avoid the requirement to double the backslashes or use a raw string.)
Yes, I know I can do
python2 cal.py
What I am asking for is a way to execute it on the command line such as:
calpy
and then the command afterwards. I put it in a directory on my PATH, and when I write cal.py on the command line I get:
/usr/bin/cal.py: line 5: print: command not found
I don't want to type cal.py to run my script; I want it to be invoked as calpy.
I'm running Arch Linux if that helps, thanks. Sorry for my English.
In order for bash to know to run your script via the Python interpreter, you need to put an appropriate shebang at the start. For example:
#!/usr/bin/python
tells bash to run /usr/bin/python with your script as the first argument. I personally prefer
#!/usr/bin/env python
which is compatible with virtualenv. You also need to ensure that the permissions on your script allow it to be executed:
~$ chmod +x path/to/cal.py
Finally, in order to call calpy rather than path/to/cal.py, you need to rename the file to calpy (dropping the .py extension) and make sure that the directory containing it is in your command search path. I prefer to add ~/bin to the search path by modifying the $PATH environment variable in ~/.bashrc:
export PATH=$HOME/bin:$PATH
then put my own executables in ~/bin. You could also copy (or symlink) the script to one of the system-wide binary directories (/bin or /usr/bin), but I consider it bad practice to mess with system-wide directories unnecessarily.
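Putting it together, a hypothetical minimal ~/bin/calpy (the real contents of cal.py aren't shown in the question) would look like this:
#!/usr/bin/env python
# Hypothetical body; the shebang above is what lets the shell hand the file to Python.
print("hello from calpy")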
Ok, you need a couple of things to achieve what you want.
First, you have to tell your script how it is going to be executed/interpreted. You can do this by writing
#!/usr/bin/env python
at the very beginning of the file.
The problem you have is that the system is trying to execute the script using bash, and in bash there is no print command.
Second, you need to give execution privileges to your script. And of course, if you want to call your script with the command calpy, the script has to be named calpy.
Put this (exactly this) as the first line of your script:
#!/usr/bin/env python