If I have a program written in a language other than bash (say python), how can I change environment variables or the current working directory inside it such that it reflects in the calling shell?
I want to use this to write a 'command line helper' that simplifies common operations. For example, a smart cd: when I simply type the name of a directory into my prompt, it should cd into it.
[~/]$ Downloads
[~/Downloads]$
or even
[~/]$ project5
[~/projects/project5]$
I then found How to change current working directory inside command_not_found_handle (which is exactly one of the things I wanted to do), which introduced me to shopt -s autocd. However, this still doesn't handle the case where the supplied directory is not in ./.
In addition, if I want to do things like setting the http_proxy variable from a python script, or even update the PATH variable, what are my options?
P.S. I understand that there probably isn't an obvious way to write a magical command inside a Python script that automatically updates environment variables in the calling shell. I'm looking for a working solution, not necessarily one that's elegant.
This can only be done with the parent shell's involvement and assistance. For a real-world example of a program that does this, you can look at how ssh-agent is supposed to be used:
eval "$(ssh-agent -s)"
...reads the output from ssh-agent and runs it in the current shell (-s specifies Bourne-compatible output, vs csh).
If you're using Python, be sure to use pipes.quote() (or, for Python 3.x, shlex.quote()) to process your output safely:
import pipes
dirname='/path/to/directory with spaces'
foo_val='value with * wildcards * that need escaping and \t\t tabs!'
print 'cd %s; export FOO=%s;' % (pipes.quote(dirname), pipes.quote(foo_val))
...as careless use can otherwise lead to shell injection attacks.
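On Python 3, the same thing with shlex.quote() would look like this (a direct transliteration of the snippet above):

import shlex

dirname = '/path/to/directory with spaces'
foo_val = 'value with * wildcards * that need escaping and \t\t tabs!'
print('cd %s; export FOO=%s;' % (shlex.quote(dirname), shlex.quote(foo_val)))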
By contrast, if you're writing this as an external script in bash, be sure to use printf %q for safe escaping (though note that its output is targeted for other bash shells, not for POSIX sh compliance):
#!/bin/bash
dirname='/path/to/directory with spaces'
foo_val='value with * wildcards * that need escaping and \t\t tabs!'
printf 'cd %q; export FOO=%q;' "$dirname" "$foo_val"
If, as it appears from your question, you want your command to look like a native shell command, I would suggest wrapping it in a shell function (this practice can also be used with command_not_found_handle). For instance, installation can involve putting something like the following in one's .bashrc:
my_command() {
  eval "$(command /path/to/my_command.py "$@")"
}
...that way users aren't required to type eval.
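For illustration, a minimal my_command.py that the wrapper above might call; the ~/projects search root is an assumption to match the smart-cd example from the question:

#!/usr/bin/env python3
# Hypothetical helper: resolve a directory name and emit a `cd` command
# for the calling shell to eval.
import os
import shlex
import sys

name = sys.argv[1]
candidates = [name, os.path.expanduser(os.path.join('~/projects', name))]
for path in candidates:
    if os.path.isdir(path):
        print('cd %s' % shlex.quote(path))
        break
else:
    # No match; emit a command that reports the failure instead.
    print('echo %s: no such directory >&2' % shlex.quote(name))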
Essentially, Charles Duffy hit the nail on the head; I present here another spin on the issue.
What you're basically asking about is interprocess communication: you have a process, which may or may not be a subprocess of the shell (I don't think that matters too much), and you want that process to communicate information to the original shell (just another process, by the way) and have the shell change its state.
One possibility is to use signals. For example, in your shell you could have:
trap 'cd /tmp; pwd;' SIGUSR2
Now:
Type echo $$ in your shell; this will print a number, the shell's PID.
cd to a directory in your shell (any directory other than /tmp).
Go to another shell (in another window or what have you), and type: kill -s SIGUSR2 PID
You will find that you are in /tmp in your original shell.
So that's an example of the communication channel. The devil of course is in the details. There are two halves to your problem: How to get the shell to communicate to your program (the command_not_found_handle would do that nicely if that would work for you), and how to get your program to communicate to the shell. Below, I cover the latter issue:
You could, for example, have a trap statement in the original shell:
trap 'eval $(/path/to/my/fancy/command $$ $(pwd))' SIGUSR2
...your fancy command will be given the process id of the original shell (so it knows whom to signal) as its first argument and that shell's current working directory as its second, and it can act upon them. If your command prints an executable shell command string, eval will run that string in the environment of the original shell.
For example:
trap 'eval $(/tmp/doit $$ $(pwd)); pwd;' SIGUSR2
/tmp/doit is the fancy command. It could be any type of executable (Python, C, Perl, etc.); the key is that it spits out a string that the shell can evaluate. In /tmp/doit, I have provided a bash script:
#!/bin/bash
echo "echo PID: $1 original directory: $2; cd /tmp"
(I make sure the file is executable with: chmod 755 /tmp/doit). Now if I type:
cd; echo $$
Then, in another shell, take the number output ("NNNNN") by the above echo and do:
kill -s SIGUSR2 NNNNN
...then suddenly I will see something like this pop up in the original shell:
PID: NNNNN original directory: /home/myhomepath
/tmp
and if I type "pwd" in my original shell, I will see that I'm in /tmp.
The guy who wanted command_not_found_handle to do something in the current shell environment could have used signals to get the effect he wanted. Here I was running the kill manually but there's no reason why a shell function couldn't do it.
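In Python, for instance, the helper could send the signal itself (a sketch; it assumes the shell's PID is passed as the first argument):

#!/usr/bin/env python3
# Nudge the shell whose PID we were given; its SIGUSR2 trap does the rest.
import os
import signal
import sys

os.kill(int(sys.argv[1]), signal.SIGUSR2)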
Doing fancy work on the frontend, whereby you re-interpret or pre-interpret the user's input to the shell, may require the user to run a frontend program that could be pretty complicated, depending on what you want to do. The old-school "expect" program is ideal for something like this, but not too many youngsters pick up TCL these days :-) .
Related
I have a long bash script that at the end exports an environment variable, let's call it myscript.sh.
I need to call this shell script from Python code. As far as I know, the exported environment variable will be local to that shell, and won't be visible in Python.
Is there a proper way to make it exported in the python environment as well?
You can use env -0 in your script to print all environment variables separated with a NUL character, since a newline might be problematic if it appears in some variable's value.
Then, from Python, you can set the process environment like this:
import os
import subprocess

# myscript.sh is expected to end with `env -0`, printing NUL-separated NAME=VALUE pairs.
out = subprocess.run('./myscript.sh', check=True, capture_output=True).stdout
for entry in filter(None, out.split(b'\x00')):
    name, sep, value = entry.decode().partition('=')  # split on the first '=' only
    if sep:
        os.environ[name] = value
The general easiest way to get any information back from a child process is via the stdout pipe, which is very easily captured from Python (for example with the subprocess module).
Get your script to print the info you want in a way that is easy to parse.
If you really want to see the env vars then you will need to print them out. If you don't want to modify the script you are trying to run, you can source the script and then dump the environment, which is parseable:
. myscript && env
or if there is a particular env var you want:
. myscript && echo MYVAR="$MYVAR"
Note that this will be executed by Python's default shell (/bin/sh). If you want bash in particular, or you want the shebang of your script to be honoured, that takes more effort. You could scrape the shebang yourself, for example, but the env-printing trick will only work trivially with shells; a shebang may refer to other clevernesses such as sed or awk.
To avoid nesting shell invocations, you could call subprocess.Popen with, e.g., ["/bin/bash", "-c", ". myscript && echo MYVAR=\"$MYVAR\""] and shell=False.
I added quotes to cover some cases where the variable contains wildcard characters. Depending on the content of the variables printed, this may be good enough. If you expect the variable to contain multiple lines, or perhaps the string MYVAR=, or feel that the value set by the script is unconstrained, then you might need to do something more complex to ensure the output is preserved, as described by Charles Duffy above. It will also be marginally more difficult to parse from Python.
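Putting that together, a minimal sketch (using subprocess.run for brevity; myscript is assumed to set MYVAR):

import subprocess

# Source the script under bash and print the variable we care about.
out = subprocess.run(
    ['/bin/bash', '-c', '. ./myscript && echo MYVAR="$MYVAR"'],
    check=True, capture_output=True, text=True,
).stdout
myvar = out.rstrip('\n').split('=', 1)[1]
print(myvar)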
Our environment has a shell script to setup the working area. setup.sh looks like this:
export BASE_DIR=$PWD
export PATH=$BASE_DIR/bin
export THIS_VARIABLE=THAT_VALUE
The user does the following:
% . setup.sh
Some of our users are looking for a csh version and that would mean having two setup files.
I'm wondering if there is a way to do this work with a common python file. In The Hitchhiker's Guide to Python Kenneth Reitz suggests using a setup.py file in projects, but I'm not sure if Python can set environment variables in the shell as I do above.
Can I replace this shell script with a python script that does the same thing? I don't see how.
(There are other questions that ask this more broadly with many many comments, but this one has a direct question and direct single answer.)
No, Python (or generally any process on Unix-like platforms) cannot change its parent's environment.
A common solution is to have your script print the output in a format suitable for the user's shell. E.g. ssh-agent will print out sh-compatible global assignments with -s or when it sees that it is being invoked from a Bourne-compatible shell; and csh syntax if invoked from csh or tcsh or when explicitly invoked with -c.
The usual invocation in sh-compatible shells is eval "$(ssh-agent)", so the text that the program prints is evaluated by the shell where the user invoked this command.
eval is a well-known security risk, so you want to make this code very easy to vet even for people who don't speak much Python (or shell, or anything much else).
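For illustration, a small Python sketch that emits assignments in the caller's dialect; the --csh flag and the variable values are made up for the example:

#!/usr/bin/env python3
# Hypothetical setup_env.py: print shell assignments for the caller to eval.
import os
import shlex
import sys

env = {'BASE_DIR': os.getcwd(), 'THIS_VARIABLE': 'THAT_VALUE'}

csh = '--csh' in sys.argv[1:]
for name, value in env.items():
    if csh:
        # shlex.quote targets Bourne shells, but is fine for simple values in csh too.
        print('setenv %s %s;' % (name, shlex.quote(value)))
    else:
        print('%s=%s; export %s;' % (name, shlex.quote(value), name))

An sh user would then run eval "$(setup_env.py)", and a csh user eval `setup_env.py --csh`.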
If you are, eh cough, skeptical of directly supporting Csh users, perhaps you can convince them to run your sh-compatible script in a Bourne-compatible shell and then exec csh to get their preferred interactive environment. This also avoids the slippery slope of having an ever-growing pile of little maintenance challenges for supporting Csh, Fish, rc, Powershell etc users.
I'm writing a simple script which ideally will help me conveniently change directories around my system.
The details of the implementation don't matter, but let's say ideally I will place this script in /usr/bin and call it with an argument denoting where I want to go to on the system: goto project1
I would expect that when the script exits, my terminal's current working directory would have changed to that of Project 1.
In order to accomplish this, I tried:
os.chdir('/')
subprocess.call('cd /', shell=True)
Neither of which work. The first changes the working directory in Python and the second spawns a shell at /.
Then I realized how naive I was being. When a program is run, the shell just forks a process and reads from its stdout, which my program is writing to. Whatever the child does, it won't affect the state of the shell.
But then I thought "I've been using cd for years, surely someone wrote code for that", thinking there might be something to go off of (system call or something?).
But cd is not even in coreutils. Instead, the source of cd (on systems that ship a /usr/bin/cd at all) is just this wrapper:
builtin `echo ${0##*/} | tr \[:upper:] \[:lower:]` ${1+"$@"}
So, a couple of questions come to mind:
What's actually going on behind the scenes when a user calls cd? (Meaning, how is the terminal and the system actually interacting?)
Is it possible to have something like a Python script alter the terminal location?
Thanks so much for your help!
You could do it as a pair of scripts:
directory.py
#!/usr/bin/env python3
import sys

directory = sys.argv[1]
# do something interesting to manipulate directory...
print(directory + "tmp")
directory.csh
#!/bin/csh -f
cd `python directory.py $1`
Result:
> pwd
/Users/cdl
> source directory.csh "/"
> pwd
/tmp
>
Substitute your favorite shell and variant of Python as desired. Turn on execute for the Python script to simplify further.
Clearly the shell is changing the directory but Python can do all the clever logic you want to figure out where to send the shell.
The Issue
I have a Python script and, when I run it from the command line, I do not want anything recorded in .bash_history.
The reason for this is that the script uses the Python argparse library which allows me to pass in arguments to the python code directly from the command line.
For example I could write the script so that it would use "123456" as a value in the script:
$ ./scriptname.py -n 123456
The issue is that I don't want the value 123456 stored in .bash_history. In fact, I'd rather the entire command was never stored into the .bash_history file in the first place.
What I've Tried
Subprocess & history -c
I've added the subprocess library to the top of my script and then included this directly after to attempt to proactively clear the current history of the shell I am working in:
subprocess.call("history -c", shell=True)
Theoretically this should clear the history of the current shell. I don't see any errors from it, so I'm assuming it actually runs in some other (child) shell. When I run the same command outside of the script (directly after invoking the script), it works properly.
Subprocess & unset HISTFILE
I have also used subprocess with the following with no success:
subprocess.call("unset HISTFILE", shell=True)
os.system & history -c
I've also used the os library for Python and included the following in the script:
os.system("history -c")
os.system and unset HISTFILE
I've also tried unset HISTFILE with os.system to no avail.
os.system("unset HISTFILE")
Preferred Solution Characteristics
I realize that I could simply type in unset HISTFILE or history -c after using the command. But I want this to be as much as possible a self-contained script.
Ideally the solution would prevent the ./scriptname.py command from ever being recorded within .bash_history.
I need this script to output text to the terminal based on the input so I can't close the terminal immediately afterwards either.
I imagine there must be a way to do this from within the python script itself - this is my preference.
This really isn't very feasible: the history entry is added by the interactive shell itself after the command has completed, and the history file is (by default) only written out when the shell exits. It is, strictly speaking, possible if you make your Python program spawn a hacky background process that reads the history file in a loop and rewrites it. I really can't advocate anything like this, but you could append something like the following to your script:
os.system("nohup bash -ic 'while :; do read -d \"\" history < \"$HISTFILE\"; echo \"$history\" | sed -e\"s#^%s.*##\" -e\"/^$/d\" > \"$HISTFILE\"; sleep 1; done &' >/dev/null 2>&1" % sys.argv[0])
I think a much better way to accomplish your goal of not recording any arguments would be to read the value at runtime, with something like var = raw_input("") (input("") on Python 3), instead of passing the sensitive argument on the command line.
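For example, with getpass the value stays out of the history and off the screen as well (a minimal sketch):

#!/usr/bin/env python3
import getpass

# Read the sensitive value interactively instead of from argparse; it never
# appears on the command line, so it never reaches .bash_history.
value = getpass.getpass('Enter value: ')
print('got %d characters' % len(value))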
You could also perhaps create a shell function to wrap your script, something like my_script() { set +o history; python_script.py "$@"; set -o history; }?
What I'd like to have is a mechanism whereby all commands I enter in a Bash terminal are wrapped by a Python script. The Python script executes the entered command, but adds some additional magic (for example, setting "dynamic" environment variables).
Is that possible somehow?
I'm running Ubuntu and Debian Squeezy.
Additional explanation:
I have a property file which changes dynamically (some scripts alter it at any time). I need the properties from that file as environment variables in all my shell scripts. Of course I could parse the property file somehow from shell, but I prefer using an object-oriented style for that (especially for writing), as it can be done with Python (and ConfigObject).
Therefore I want to wrap all my scripts with that Python script (without having to modify the scripts themselves), which hands these properties down to all shell scripts.
This is my current use case, but I can imagine that I'll find additional cases to extend my wrapper to later on.
A simple way to hook every command typed into a Bash shell is to change the PROMPT_COMMAND variable in .bashrc. For example, to run some Python stuff around every command, like I asked in my question:
.bashrc:
# ...
PROMPT_COMMAND="python mycoolscript.py; $PROMPT_COMMAND"
export PROMPT_COMMAND
# ...
now the script mycoolscript.py is run each time the prompt is about to be displayed, i.e. after every command. Note that the script runs as a child process, so on its own it cannot change the shell's environment; to actually set variables, have it print shell code and eval its output, as the DEBUG trap answer below does.
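A sketch of such a mycoolscript.py, assuming a simple key=value property file at ~/.myprops (both the path and the format are assumptions):

#!/usr/bin/env python3
# Hypothetical mycoolscript.py: turn a property file into export statements
# for the shell to eval.
import os
import shlex

with open(os.path.expanduser('~/.myprops')) as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, value = line.split('=', 1)
        print('export %s=%s' % (key.strip(), shlex.quote(value.strip())))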
Use Bash's DEBUG trap. Let me know if you need me to elaborate.
Edit:
Here's a simple example of the kinds of things you might be able to do:
$ cat prefix.py
#!/usr/bin/env python3
print("export prop1=foobar")
print("export prop2=bazinga")
$ cat propscript
#!/bin/bash
echo $prop1
echo $prop2
$ trap 'eval "$(./prefix.py)"' DEBUG
$ ./propscript
foobar
bazinga
You should be aware of the security risks of using eval.
I don't know of a direct solution, but two things might help:
http://sourceforge.net/projects/pyshint/
The IPython shell has some functionality to execute shell commands in the interpreter.
There is no direct way you can do it.
But you can write a Python script that emulates a bash terminal, using the beautiful subprocess module to execute commands the way you like.
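A bare-bones sketch of such a wrapper; the property refresh is a placeholder assumption:

#!/usr/bin/env python3
# Minimal shell-wrapper sketch: read a command, refresh the environment,
# then hand the command to bash.
import os
import subprocess

while True:
    try:
        cmd = input('wrapped$ ')
    except EOFError:
        break
    # ...refresh os.environ from your property file here (assumption)...
    os.environ['EXAMPLE_PROP'] = 'dynamic value'  # placeholder
    if cmd.strip():
        subprocess.run(['bash', '-c', cmd])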