I stumbled over some strange behavior in a #!/bin/bash script.
I have the variables SD=/SOME_VALID_PATH and SCRIPT=SOME_PYTHON_SCRIPT.py. What I want to do in my script is eval python ${SD}/${SCRIPT}.
What happens is that the result looks like python /SOME_VALID_PATH/ SOME_PYTHON_SCRIPT.py. Note the space between the path and the script's name; naturally, this command fails to execute.
echo ${SD} gives /SOME_VALID_PATH and echo ${SCRIPT} gives SOME_PYTHON_SCRIPT.py, no spaces whatsoever.
Any ideas what I (generally) might do wrong?
I am not really sure why you're trying to use eval the way you mentioned (as mentioned in the comments, a sample of your code would be helpful). The error is likely because eval concatenates the strings given as its arguments, and it is your responsibility to make sure the resulting command syntax is correct. Python is probably receiving the path (SD) and SCRIPT as two separate arguments (separated by a space), so it fails trying to execute the path, which is presumably a directory.
You can use command substitution with eval and echo, for example, to make sure that the strings are properly represented. An example is given below where I have defined two variables for the path and the script name. The script name contains a space, so a plain eval will throw an error.
$ echo $var1
/home/user/Documents/Temp
$ echo $var2
temp.py
$ echo \"$var1/$var2\"
"/home/user/Documents/Temp/ temp.py"
$ cat "$(echo $var1/$var2)"
print "Hello World"
$ eval python $var1/$var2
/home/user/bin/python: can't find '__main__' module in '/home/user/Documents/Temp/'
$ eval python $(echo \"$var1/$var2\")
Hello World
But this is probably not the best way to do things. Spaces should be avoided in paths.
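The word-splitting that bites here can be reproduced from Python with shlex, which tokenizes strings the way a POSIX shell (and hence eval) does; the path and name below are just illustrative stand-ins for the values in the transcript:

```python
import shlex

path = "/home/user/Documents/Temp"
name = " temp.py"  # note the stray leading space, as in the transcript above

# eval sees the concatenated string and splits it on whitespace,
# yielding the path and the script name as separate words:
print(shlex.split(f"python {path}/{name}"))

# quoting the whole expansion keeps it a single word:
print(shlex.quote(f"{path}/{name}"))
```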
Related
I have a long bash script that at the end exports an environment variable, let's call it myscript.sh.
I need to call this shell script from Python code. As far as I know, the exported environment variable will be local to the script's process and won't be visible in Python.
Is there a proper way to make it exported in the python environment as well?
You can use env -0 in your script to print all environment variables separated with a null character, as a newline might be problematic if it's contained in some variable's value.
Then from python you can set the process environment using this
import os
import subprocess

# env -0 output is NUL-separated; partition on the first '=' so values
# that themselves contain '=' are kept intact
out = subprocess.run('./myscript.sh', check=True, capture_output=True).stdout
for entry in out.split(b'\x00'):
    key, sep, value = entry.decode().partition('=')
    if sep:
        os.environ[key] = value
Generally, the easiest way to get information back from a child process is via the stdout pipe, which is very easily captured from Python.
Get your script to print the info you want in a way that is easy to parse.
If you really want to see the env vars then you will need to print them out. If you don't want to modify the script you are trying to run then you could exec the script then dump the env, which is parseable.
. myscript && env
or if there is a particular env var you want:
. myscript && echo MYVAR="$MYVAR"
Note that this will be executed by Python's default shell. If you want bash in particular, or you want the hashbang of your script to be honoured, that takes more effort. You could scrape the hashbang yourself, for example, but the env-printing trick will only work trivially with shells; hashbangs may refer to other interpreters such as sed or awk.
To avoid nesting shell invocations, you could call subprocess.Popen with, e.g., ["/bin/bash", "-c", ". myscript && echo MYVAR=\"$MYVAR\""] and shell=False.
I added quotes to cover cases where the variable contains wildcard characters. Depending on the content of the variables printed, this may be good enough. If you expect the variable to contain multiple lines, or perhaps the string MYVAR=, or feel that the value set by the script is unconstrained, then you might need to do something more complex to ensure the output is preserved, as described by Charles Duffy. It will also be marginally more difficult to parse from Python.
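As a rough Python-side sketch of the Popen approach (using subprocess.run for brevity): the export line below stands in for sourcing your actual script, and env -0 NUL-separates the entries so values containing newlines survive parsing.

```python
import subprocess

# "export MYVAR=hello" stands in for ". myscript" here.
out = subprocess.run(
    ["/bin/bash", "-c", "export MYVAR=hello && env -0"],
    check=True, capture_output=True,
).stdout

env = {}
for entry in out.split(b"\x00"):
    if entry:
        # partition on the first '=' so values containing '=' stay intact
        key, _, value = entry.decode().partition("=")
        env[key] = value

print(env["MYVAR"])
```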
Context:
I'm working with a few developers, some on OSX and some on Linux. We use a collection of bash scripts to load environment variables, build docker images etc. The OSX users are using zsh instead of bash, so some tricks (specifically how we're getting the running script's directory) aren't compatible. What works on Linux (but not OSX) is:
$ cat ./scripts/env_script.sh
THIS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# ... more variables ...
$ source ./scripts/env_script.sh
$ echo $THIS_DIR
# the absolute path to the scripts directory
Some cursory testing shows this doesn't work on OSX, presumably because BASH_SOURCE doesn't work the same with zsh. Fine.
Goal:
We're all working with Python, so we would like to replace our bash scripts with Python.
The same pattern is very easy in Python.
from pathlib import Path
THIS_DIR = str(Path(__file__).parent)
Not the cleanest, but it works anywhere. Ideally we could run this and share THIS_DIR with the shell that called the script. Something like
$ <answer to my question> ./scripts/env_script.py
$ echo $THIS_DIR
# absolute path to the scripts directory
Question:
Suppose we rewrote our ./scripts/env_script.sh into ./scripts/env_script.py and collected a list of variables we would like to share with the calling environment.
That is, we wrote the Python script above.
Is it possible to do something akin to source ./scripts/env_script.py so that we can export the variables from the Python script into the calling shell?
What I've tried:
I found a potential duplicate here, though the answer is basically "Python runs in a sub-process, so source would only work if your shell were Python", which makes sense but doesn't answer the question.
If there isn't a nice way to exchange these variables, then another option would be to have the Python script print the variables to stdout and then eval them within my shell, i.e. eval $(./scripts/env_script.py). I guess this could work, but it feels wrong, not least because I was always taught that eval is evil.
I spent a little more time on the second solution since I could imagine it working (although I'm not happy with it). Basically, I rewrote ./scripts/env_script.py to be
FOO='whatever'
THIS_DIR=str(Path(__file__).parent)
# and so on...
# prints all "normal" variables to stdout
def_vars = list(locals().items())
for name, val in def_vars:
    if name.startswith("_"):
        continue
    print(f'{name}="{val}"')
So running it normally would output
$ ./scripts/env_script.py
FOO="whatever"
THIS_DIR="..."
...
$ echo $FOO
# nothing
Then, to jam those variables into the calling shell, we wrap that with eval and it does the job
$ eval $(./scripts/env_script.py)
$ echo $FOO
whatever
I guess with some further tweaking, the footer at the end of env_script.py could be made a little more robust and then moved outside of the script. Some sort of alias could be defined to make it syntactically nicer, so that calling it could look like pysource ./scripts/env_script.py. Still, I'm not satisfied with this answer.
Edit: I did the tweaking and wrote a downloadable script here.
If I have a program written in a language other than bash (say python), how can I change environment variables or the current working directory inside it such that it reflects in the calling shell?
I want to use this to write a 'command line helper' that simplifies common operations. For example, a smart cd. When I simple type in the name of a directory into my prompt, it should cd into it.
[~/]$ Downloads
[~/Downloads]$
or even
[~/]$ project5
[~/projects/project5]$
I then found How to change current working directory inside command_not_found_handle (which is exactly one of the things I wanted to do), which introduced me to shopt -s autocd. However, this still doesn't handle the case where the supplied directory is not in ./.
In addition, if I want to do things like setting the http_proxy variable from a python script, or even update the PATH variable, what are my options?
P. S. I understand that there probably isn't an obvious way to write a magical command inside a python script that automatically updates environment variables in the calling shell. I'm looking for a working solution, not necessarily one that's elegant.
This can only be done with the parent shell's involvement and assistance. For a real-world example of a program that does this, you can look at how ssh-agent is supposed to be used:
eval "$(ssh-agent -s)"
...reads the output from ssh-agent and runs it in the current shell (-s specifies Bourne-compatible output, vs csh).
If you're using Python, be sure to use pipes.quote() (or, for Python 3.x, shlex.quote()) to process your output safely:
import pipes
dirname='/path/to/directory with spaces'
foo_val='value with * wildcards * that need escaping and \t\t tabs!'
print 'cd %s; export FOO=%s;' % (pipes.quote(dirname), pipes.quote(foo_val))
...as careless use can otherwise lead to shell injection attacks.
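On Python 3, where the pipes module is deprecated, the same sketch uses shlex.quote:

```python
import shlex

dirname = '/path/to/directory with spaces'
foo_val = 'value with * wildcards * that need escaping and \t\t tabs!'

# Quote each value so the emitted command is safe for the parent shell's eval.
print('cd %s; export FOO=%s;' % (shlex.quote(dirname), shlex.quote(foo_val)))
```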
By contrast, if you're writing this as an external script in bash, be sure to use printf %q for safe escaping (though note that its output is targeted for other bash shells, not for POSIX sh compliance):
#!/bin/bash
dirname='/path/to/directory with spaces'
foo_val='value with * wildcards * that need escaping and \t\t tabs!'
printf 'cd %q; export FOO=%q;' "$dirname" "$foo_val"
If, as it appears from your question, you want your command to appear to be written as a native shell function, I would suggest wrapping it in one (this practice can also be used with command_not_found_handle). For instance, installation can involve putting something like the following in one's .bashrc:
my_command() {
    eval "$(command /path/to/my_command.py "$@")"
}
...that way users aren't required to type eval.
Essentially, Charles Duffy hit the nail on the head, I present here another spin on the issue.
What you're basically asking about is interprocess communication: You have a process, which may or may not be a subprocess of the shell (I don't think that matters too much), and you want that process to communicate information to the original shell (just another process, btw), and have it change its state.
One possibility is to use signals. For example, in your shell you could have:
trap 'cd /tmp; pwd;' SIGUSR2
Now:
Type echo $$ in your shell, this will give you a number, PID
cd to a directory in your shell (any directory other than /tmp)
Go to another shell (in another window or what have you), and type: kill -s SIGUSR2 PID
You will find that you are in /tmp in your original shell.
So that's an example of the communication channel. The devil of course is in the details. There are two halves to your problem: How to get the shell to communicate to your program (the command_not_found_handle would do that nicely if that would work for you), and how to get your program to communicate to the shell. Below, I cover the latter issue:
You could, for example, have a trap statement in the original shell:
trap 'eval $(/path/to/my/fancy/command $(pwd) $$)' SIGUSR2
...your fancy command will be given the current working directory of the original shell as the first argument, and the process id of the shell (so it knows who to signal), and it can act upon it. If your command sends an executable shell command string to the eval command, it will be executed in the environment of the original shell.
For example:
trap 'eval $(/tmp/doit $$ $(pwd)); pwd;' SIGUSR2
/tmp/doit is the fancy command. It could be any executable type (Python, C, Perl, etc.); the key is that it prints a string that the shell can evaluate. In /tmp/doit, I have provided a bash script:
#!/bin/bash
echo "echo PID: $1 original directory: $2; cd /tmp"
(I make sure the file is executable with: chmod 755 /tmp/doit). Now if I type:
cd; echo $$
Then, in another shell, take the number output ("NNNNN") by the above echo and do:
kill -s SIGUSR2 NNNNN
...then suddenly I will see something like this pop up in the original shell:
PID: NNNNN original directory: /home/myhomepath
/tmp
and if I type "pwd" in my original shell, I will see that I'm in /tmp.
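The same /tmp/doit could just as well be written in Python; all it has to do is print a command string for the trap's eval to run in the original shell:

```python
#!/usr/bin/env python3
import sys

def doit(pid, origdir):
    # Emit a command string for the original shell's eval to execute.
    return f'echo "PID: {pid} original directory: {origdir}"; cd /tmp'

if len(sys.argv) >= 3:
    print(doit(sys.argv[1], sys.argv[2]))
```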
The guy who wanted command_not_found_handle to do something in the current shell environment could have used signals to get the effect he wanted. Here I was running the kill manually but there's no reason why a shell function couldn't do it.
Doing fancy work on the frontend, whereby you re-interpret or pre-interpret the user's input to the shell, may require that the user runs a frontend program that could be pretty complicated, depending on what you want to do. The old school "expect" program is ideal for something like this, but not too many youngsters pick up TCL these days :-) .
What I'd like to have is a mechanism so that all commands I enter in a Bash terminal are wrapped by a Python script. The Python script executes the entered command, but adds some additional magic (for example, setting "dynamic" environment variables).
Is that possible somehow?
I'm running Ubuntu and Debian Squeeze.
Additional explanation:
I have a property file which changes dynamically (some scripts may alter it at any time). I need the properties from that file as environment variables in all my shell scripts. Of course I could parse the property file somehow from shell, but I prefer an object-oriented style for that (especially for writing), as can be done with Python (and ConfigObject).
Therefore I want to wrap all my scripts with that Python script (without having to modify the scripts themselves), which hands these properties down to all the shell scripts.
This is my current use case, but I can imagine I'll find additional cases to extend my wrapper to later on.
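Roughly the kind of thing I have in mind, sketched here with the stdlib configparser instead of ConfigObject (the file contents and keys are made up; configparser lowercases keys by default):

```python
import configparser
import shlex

# Parse the property file and emit export lines a shell could eval.
cfg = configparser.ConfigParser()
cfg.read_string("[env]\nhttp_proxy = http://proxy:8080\napp_mode = dev mode\n")
# in practice this would be: cfg.read("app.properties")

for key, val in cfg["env"].items():
    # shlex.quote keeps values with spaces or metacharacters eval-safe
    print(f"export {key}={shlex.quote(val)}")
```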
The perfect way to wrap every command typed into a Bash shell is to change the variable PROMPT_COMMAND inside .bashrc. For example, if I want to do some Python stuff before every command, like asked in my question:
.bashrc:
# ...
PROMPT_COMMAND="python mycoolscript.py; $PROMPT_COMMAND;"
export PROMPT_COMMAND
# ...
Now the script mycoolscript.py is run before every command.
Use Bash's DEBUG trap. Let me know if you need me to elaborate.
Edit:
Here's a simple example of the kinds of things you might be able to do:
$ cat prefix.py
#!/usr/bin/env python
print "export prop1=foobar"
print "export prop2=bazinga"
$ cat propscript
#!/bin/bash
echo $prop1
echo $prop2
$ trap 'eval "$(prefix.py)"' DEBUG
$ ./propscript
foobar
bazinga
You should be aware of the security risks of using eval.
I don't know of a direct way, but two things might help:
http://sourceforge.net/projects/pyshint/
The IPython shell has some functionality to execute shell commands in the interpreter.
There is no direct way to do it.
But you can write a Python script that emulates a bash terminal, using Python's subprocess module to execute commands the way you like.
I needed to have a directly executable Python script, so I started the file with #!/usr/bin/env python. However, I also need unbuffered output, so I tried #!/usr/bin/env python -u, but that fails with python -u: no such file or directory.
I found out that #!/usr/bin/python -u works, but I need the python found in PATH so that virtualenv environments are supported.
What are my options?
On some systems, everything after the interpreter name on the shebang line is passed to env as a single argument.
So your env is looking for a program literally named python -u in your PATH.
We can use sh to work around this.
Replace your shebang with the following code lines and everything will be fine.
#!/bin/sh
''''exec python -u -- "$0" ${1+"$@"} # '''
# vi: syntax=python
p.s. we need not worry about the path to sh, right?
This might be a little bit outdated, but the env(1) manual says you can use -S for this case:
#!/usr/bin/env -S python -u
It seems to work pretty good on FreeBSD.
It is better to use an environment variable to enable this. See the Python docs: http://docs.python.org/2/using/cmdline.html
for your case:
export PYTHONUNBUFFERED=1
script.py
When you use a shebang on Linux, the entire rest of the line after the interpreter name is passed as a single argument. The python -u gets passed to env as if you'd typed /usr/bin/env 'python -u'. /usr/bin/env then searches for a binary called python -u, which doesn't exist.
Passing arguments on the shebang line is not standard and, as you have found, does not work in combination with env on Linux. With bash the solution is to use the builtin set command to set the required options. I think you can do the same to get unbuffered stdout with a Python call at the top of the script.
my2c
Here is a script alternative to /usr/bin/env, that permits passing of arguments on the hash-bang line, based on /bin/bash and with the restriction that spaces are disallowed in the executable path. I call it "envns" (env No Spaces):
#!/bin/bash
ARGS=( $1 )          # separate $1 into multiple space-delimited arguments.
shift                # consume $1
PROG=$(which "${ARGS[0]}")
unset 'ARGS[0]'      # discard executable name
ARGS+=( "$@" )       # remainder of arguments preserved "as-is".
exec "$PROG" "${ARGS[@]}"
Assuming this script is located at /usr/local/bin/envns, here's your shebang line:
#!/usr/local/bin/envns python -u
Tested on Ubuntu 13.10 and cygwin x64.
This is a kludge and requires bash, but it works:
#!/bin/bash
python -u <(cat <<"EOF"
# Your script here
print "Hello world"
EOF
)
Building off of Larry Cai's answer, env allows you to set a variable directly in the command line. That means that -u can be replaced by the equivalent PYTHONUNBUFFERED setting before python:
#!/usr/bin/env PYTHONUNBUFFERED="YESSSSS" python
Works on RHEL 6.5. I am pretty sure that feature of env is just about universal.
I recently wrote a patch for the GNU Coreutils version of env to address this issue:
http://lists.gnu.org/archive/html/coreutils/2017-05/msg00018.html
If you have this, you can do:
#!/usr/bin/env :lang:--foo:bar
env will split :lang:--foo:bar into the fields lang, --foo and bar. It will search PATH for the interpreter lang, and then invoke it with the arguments --foo and bar, plus the path to the script and that script's arguments.
There is also a feature to pass the name of the script in the middle of the options. Suppose you want to run lang -f <thescriptname> other-arg, followed by the remaining arguments. With this patched env, it is done like this:
#!/usr/bin/env :lang:-f:{}:other-arg
The leftmost field which is equivalent to {} is replaced with the first argument that follows, which, under hash bang invocation, is the script name. That argument is then removed.
Here, other-arg could be something processed by lang or perhaps something processed by the script.
To understand better, see the numerous echo test cases in the patch.
I chose the : character because it is an existing separator used in PATH on POSIX systems. Since env does PATH searching, it's vanishingly unlikely to be used for a program whose name contains a colon. The {} marker comes from the find utility, which uses it to denote the insertion of a path into the -exec command line.
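The splitting and {} substitution described above can be sketched in Python; this is a model of the behavior, not the patched C code itself:

```python
def build_argv(spec, script_path, script_args):
    # Model of the patched env: split the spec on ':'; if a '{}' field is
    # present, the leftmost one is replaced by the script path, otherwise
    # the script path is appended after the fields.
    fields = spec.lstrip(":").split(":")
    if "{}" in fields:
        fields[fields.index("{}")] = script_path
        return fields + list(script_args)
    return fields + [script_path] + list(script_args)

print(build_argv(":lang:--foo:bar", "myscript", []))
print(build_argv(":lang:-f:{}:other-arg", "myscript", ["x"]))
```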