I am trying to call a shell script that sets a bunch of environment variables on our server from a mercurial hook. The shell script gets called fine when a new changegroup comes in, but the environment variables aren't carrying over past the call to the shell script.
My hgrc file on the repository looks like this:
[hooks]
changegroup = shell_script
changegroup.env = env
I can see the output of the shell script, and then the output of the env command, but the env command doesn't include the new environment variables set by the shell script.
I have verified that the shell script works fine when run by itself, but when run in the context of the mercurial hook it does not properly set the environment.
Shell scripts can't modify their parent's environment.
http://tldp.org/LDP/abs/html/gotchas.html
A script may not export variables back to its parent process, the shell, or to the environment. Just as we learned in biology, a child process can inherit from a parent, but not vice versa.
$ cat > eg.sh
export FOO="bar";
^D
$ bash eg.sh
$ echo $FOO;
$
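The same lesson from Python, as a quick self-contained check:
import os
import subprocess

# The child shell exports FOO, but the export dies with the child;
# the parent process's environment is untouched.
subprocess.call(['bash', '-c', 'export FOO=bar'])
print(os.environ.get('FOO'))  # prints None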
Also, the problem is compounded here, because you have multiple bash processes:
bash 1 -> hg -> bash 2 (shell script)
             -> bash 3 (env call)
It would be like expecting to set a variable in one PHP script and then magically read it in another, simply by running one after the other.
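If the goal is to make the variables exported by shell_script visible to a later step, the usual workaround is to source the script and dump the environment in the same shell, then parse it back, for instance from Python. A rough sketch (shell_script is the path from the question; the naive NAME=value parsing breaks on values that contain newlines):
import os
import subprocess

# Source the script and run `env` in the SAME shell process, then copy
# the resulting variables back into this process's environment.
out = subprocess.check_output(['bash', '-c', 'source ./shell_script && env'])
for line in out.decode().splitlines():
    key, sep, value = line.partition('=')
    if sep:
        os.environ[key] = value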
Related
I have a Python module that should run Python scripts (let's call it the launcher).
I have a list of scripts. Each of them has its own virtual environment.
Launcher's input:
name of the script to launch
path to script
arguments to pass to script
I need to come up with a solution so that the launcher is able to run the scripts without creating new processes.
I tried to use the __import__() function, but the main problem is that I don't know how to use each script's own virtual environment.
Based on Lie Ryan's answer to activate-a-virtualenv-with-a-python-script:
You could try to alter the current interpreter and import the scripts:
# Doing execfile() on this file will alter the current interpreter's
# environment so you can import libraries in the virtualenv
activate_this_file = "/path/to/virtualenv/bin/activate_this.py"
execfile(activate_this_file, dict(__file__=activate_this_file))
# On Python 3, where execfile() is gone, the equivalent is:
# exec(open(activate_this_file).read(), dict(__file__=activate_this_file))
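A rough sketch of how the launcher might combine this with __import__() (the function and parameter names are hypothetical; it assumes the venvs were created with virtualenv, which ships activate_this.py, and note that activating several venvs in one interpreter can leave sys.path in a mixed state):
import sys

def run_in_venv(script_name, script_path, venv_path):
    # Activate the script's virtualenv inside the current interpreter
    # (the Python 3 spelling of the execfile() call above).
    activate_this = venv_path + "/bin/activate_this.py"
    exec(open(activate_this).read(), dict(__file__=activate_this))
    # Make the script importable, then import it in-process
    # instead of spawning a new interpreter.
    sys.path.insert(0, script_path)
    return __import__(script_name)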
If every script is going to need a different venv, then your best choice is to create a bash file with the pipeline and connect the scripts through output files. That's what I would do.
You can use pickle to transfer numpy arrays, dictionaries, or other Python objects between the scripts, but be sure that the pickle protocol is the same on both sides.
For example:
#!/usr/bin/env bash
# General conf (no spaces around = in bash assignments)
var1=1.0
var2="text"
# For each script: enter its directory, activate its venv, run, deactivate
cd mypath1/
conda activate venv1
python script1.py -a 3.141592 -b "$var1" # Outputs some file
conda deactivate
cd mypath2/
conda activate venv2
python script2.py -a "$var2" -b "text" # Takes the previous output
conda deactivate
...
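For the pickle hand-off mentioned above, a minimal sketch (the file name and payload are made up; pinning the protocol keeps differently versioned Pythons in the two venvs compatible):
import pickle

# In script1.py: dump the result for the next stage.
result = {"alpha": 3.141592, "label": "text"}
with open("stage1_output.pkl", "wb") as f:
    pickle.dump(result, f, protocol=2)  # a protocol both venvs understand

# In script2.py: load it back.
with open("stage1_output.pkl", "rb") as f:
    result = pickle.load(f)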
I need to run a terminal from my Python script and execute a command in clean environment, as if I just opened terminal emulator and typed the command there. But I have some exported variables in my script that should not be available in this terminal. There can be an arbitrary number of variables (some of them may be even set outside the script using bash 'export' command), so I can't delete them manually before running the terminal.
I tried the common solution that is claimed to reset the environment to its defaults, but it did not work. The code looks like this:
import subprocess
import os
os.environ['X'] = 'Y'
cmd = 'gnome-terminal -x env -i bash -c "echo $X; bash --noprofile --norc"'
subprocess.Popen([cmd], stdout=subprocess.PIPE, shell=True)
The output still prints "Y". When I try to do the same thing using only terminal, the result is the same:
$ export X=Y
$ gnome-terminal -x env -i bash -c "echo $X; bash --noprofile --norc"
The new terminal is opened and "Y" is printed.
Is there any solution that could solve the problem?
Use the env argument when calling subprocess.Popen:
subprocess.Popen([cmd], stdout=subprocess.PIPE, shell=True, env={})
This will run it in as clean an environment as possible, but many environment variables the command may need will be missing. You may want to cache os.environ when your script starts and then populate the env argument from that cache, so the sub-process gets the same environment variables you had at startup. As for why your test still printed "Y": in both versions, $X sits inside double quotes, so it is expanded by the shell that parses the command (the shell=True shell, or your interactive shell) before env -i ever runs.
Update (for clarity's sake): keep in mind that the current environment is always copied to any sub-process (and sub-processes cannot access or change the environment of their parents), so the above essentially takes the current environment and blanks it out, handing the sub-process a copy of an empty environment. If the sub-process cannot establish a new environment, it will never know the variables from your script's environment. One way to partially mitigate that is to let bash (or whatever shell your sub-process runs) load the profile and other user scripts, but it still won't see the variables your script exported.
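A minimal self-contained demonstration of the env argument (no gnome-terminal involved, just bash):
import os
import subprocess

os.environ['X'] = 'Y'

# With env={} the child gets an empty environment, so $X is unset there.
out = subprocess.check_output(
    ['/bin/bash', '--noprofile', '--norc', '-c', 'echo "X is: $X"'],
    env={})
print(out)  # b'X is: \n'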
I saw this in the instructions for a Python templating language (specifically, the tutorial for Genshi):
$ PYTHONPATH=. python geddit/controller.py geddit.db
I understand what it means to source a script, but I don't understand the reason behind assigning the sourced script to a variable, here, "PYTHONPATH".
Running . python doesn't work, but PYTHONPATH=. python does.
But so does test=. python, so I know what's in my PYTHONPATH has nothing to do with it.
What you are seeing is not what you are thinking. :) The command does not assign the sourced file or even source a file. In bash, you can do something like this:
ENVIRONMENT_VAR=VALUE command
This will set an environment variable to a value and then execute the command with the modified environment. After the call, the environment variable will have its old value again.
So the line above just sets the environment variable PYTHONPATH to . - the current directory - and executes the command python geddit/controller.py geddit.db
Bash lets you assign environment variables for one call only.
$ VAR1=one VAR2=two someprogram param1 param2
sets VAR1 and VAR2 before running someprogram. Your command
$ PYTHONPATH=. python geddit/controller.py geddit.db
sets PYTHONPATH to the current directory, so that .py files in the current directory can be imported, and then runs "python geddit/controller.py geddit.db".
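For comparison, a sketch of the same one-call override done from Python with the env argument of subprocess (the script paths are the ones from the question):
import os
import subprocess

# Copy the current environment and override one variable for this call only.
env = dict(os.environ, PYTHONPATH='.')
subprocess.call(['python', 'geddit/controller.py', 'geddit.db'], env=env)
# os.environ itself is unchanged afterwards, just as in the shell.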
I have a bash backup script, run as root (from cron), that delegates certain tasks to other bash scripts owned by different users. (Simplified example; the principle is that some things have to be done as root, while different tasks are delegated to users with the appropriate environment: oracle, amazon, ...)
mkdir -p /tmp/backup$NAME
su - oracle -c "~/.backups/export-test.sh"
tar cf /tmp/backup/$NOW.tar /tmp/backup$NAME
su - amazon -c "upload_to_amazon.sh /tmp/backup/$NOW.tar"
This script then does some tasks as user oracle:
mkdir -p $TMP_LOCATION
cd ~/.backups
exp $TMP_LOCATION/$NAME-$NOW
When I tried to mimic this behaviour in Python, I came up with the following (started from cron as root):
import os
import pwd
import subprocess

# tmp_backup, tmp_location and now are defined earlier in the script
name = "oracle"
# part run as root
os.makedirs(tmp_backup + name)
pw = pwd.getpwnam(name)
os.setegid(pw.pw_gid)  # set the group first, while still root
os.seteuid(pw.pw_uid)
# part run as oracle
os.makedirs(tmp_location)
os.chdir(os.path.expanduser("~{user}/.backups".format(user=name)))
subprocess.check_call(["exp",
                       os.path.join(tmp_location, name + '-' + now)])
In bash when using su -, a real new shell is invoked and all environment variables of that user are set.
How can I improve this for my python script? Is there a standard recipe I can follow? I'm thinking of environment variables, umask, ...
The environment is Solaris, if that matters.
all environment variables of that user are set
Usually because a shell runs a .profile file when it starts up.
You have several choices.
Create a proper subprocess with subprocess.Popen to execute the shell .profile -- same as su -.
Carefully locate the environment variable settings and mimic them in Python. The issue is that a .profile can do all kinds of crazy things, making it a potential problem to determine the exact effects of the .profile.
Or you can extract the relevant environment variables to make them accessible to both the shell environment and your Python programs.
First. Read the .profile for each user to be clear on what environment variables it sets (as distinct from things like aliases or other craziness that doesn't apply to your Python script). Some of these environment variables are relevant to the scripts you're running; some aren't.
Second. Split the "relevant" environment variables into a tidy env_backups.sh script or env_uploads.sh script.
Once you have those environment variable scripts, update your .profile files to replace the environment variable settings with source env_backups.sh or source env_uploads.sh.
Third. Source the relevant env_this and env_that scripts before running the Python program. Now your Python environment shares the variables with your shell environment and you only maintain them in one place.
my_script.sh
source ~oracle/env_backups.sh
source ~amazon/env_uploads.sh
python my_script.py
That seems best to me. (Since that's how we do it.)
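And if you go with option 1 instead, a minimal sketch: let su - start a real login shell so oracle's .profile runs, and execute the backup script inside it (this assumes your Python script itself runs as root, so su does not prompt for a password):
import subprocess

# The login shell started by `su -` loads oracle's .profile, so the
# backup script sees the same environment it would get interactively.
subprocess.check_call(['su', '-', 'oracle', '-c', '~/.backups/export-test.sh'])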
I can run the amazon part as root, without needing environment variables after all.
I used boto for that.
As for the oracle environment variables, I used this piece of code:
if "ORACLE_HOME" not in os.environ or os.environ["ORACLE_HOME"] != ORACLE_HOME:
logger.debug("setting ORACLE_HOME='{oh}'".format(oh=ORACLE_HOME))
os.environ['ORACLE_HOME'] = ORACLE_HOME
if ORACLE_HOME + "/bin" not in os.environ["PATH"].split(":"):
logger.debug("setting PATH='{p}'".format(p=os.path.expandvars(ORACLE_PATH)))
os.environ['PATH'] = os.path.expandvars(ORACLE_PATH)
if "NLS_LANG" not in os.environ or os.environ["NLS_LANG"] != NLS_LANG:
logger.debug("setting NLS_LANG='{n}'".format(n=NLS_LANG))
os.environ['NLS_LANG'] = NLS_LANG
I am writing a Python script (Linux) that adds some shell aliases (it writes them to $HOME/.bash_aliases).
In order to make an alias available immediately after it is written, I should issue the following bash built-in:
source $HOME/.bashrc
source is a bash built-in, so I cannot just do:
os.system('source $HOME/.bashrc')
If I try something like:
os.system('/bin/bash -s source $HOME/.bashrc')
...it freezes the script (as if it is waiting for something).
Any suggestions?
What you want is not possible. A program (your script) cannot modify the environment of the caller (the shell you run it from).
Another approach that would get you close is to write it as a bash function, which runs in the same process and can modify the caller's environment. Note that sourcing at runtime can have negative side effects, depending on what the user has in their bashrc.
What you are trying to do is impossible. Or better: how you are trying to do it is impossible.
Your bash command is wrong. bash -s command does not execute command; it just stores the string "command" in the positional parameter $1 and then waits for commands on standard input. That is why the Python script seems to freeze. What you meant to do is bash -c command.
Why do you source .bashrc? Would it not be enough to just source .bash_aliases?
Even if you got your bash command right, the changes would only take effect in the bash session started from Python. Once that bash session is closed and your Python script is done, you are back in your original bash session, and all changes made in the bash session started from Python are lost.
Every time you want to change something in the current bash session, you have to do it from inside the current bash session. Most of the commands you run from bash (system commands, Python scripts, even bash scripts) spawn another process, and nothing you do in that other process will affect your first bash session.
source is a bash builtin which allows you to execute commands inside the currently running bash session, instead of spawning another process and running them there. Defining a bash function is another way to execute commands inside the currently running bash session.
See this answer for more information about sourcing and executing.
What you can do to achieve what you want:
Modify your Python script to just make the changes necessary to .bash_aliases.
Prepare a bash script to run your Python script and then source .bash_aliases:
# I am a bash script, but you have to source me, do not execute me.
modify_bash_aliases.py "$@"
source ~/.bash_aliases
Then add an alias to your .bashrc to source that script:
alias add_alias='source modify_bash_aliases.sh'
Now when you type add_alias some_alias at your bash prompt, it will be replaced with source modify_bash_aliases.sh and then executed. Since source is a bash builtin, the commands inside the script run inside the currently running bash session. The Python script will still run in another process, but the subsequent source command will run inside your currently running bash session.
Another way:
Modify your Python script to just make the changes necessary to .bash_aliases.
Prepare a bash function to run your Python script and then source .bash_aliases:
add_alias() {
    modify_bash_aliases.py "$@"
    source ~/.bash_aliases
}
Now you can call the function like this: add_alias some_alias
I had an interesting issue where I needed to source an RC file to get the correct output in my Python script.
I eventually used the following inside my function to bring the variables from the bash file I needed to source into os.environ.
import os

# Assumes every relevant line has the form: export VAR=value
# (simple parsing; it breaks if a value itself contains '=' or spaces)
with open('overcloudrc') as data:
    lines = data.readlines()
    for line in lines:
        var = line.split(' ')[1].split('=')[0].strip()
        val = line.split(' ')[1].split('=')[1].strip()
        os.environ[var] = val
A working solution, from Can I use an alias to execute a program from a python script:
import subprocess

# -i starts bash as an interactive shell, so it reads ~/.bashrc and
# expands aliases; "scriptpath" is a placeholder for the actual path.
sp = subprocess.Popen(["/bin/bash", "-i", "-c", "nuke -x scriptpath"])
sp.communicate()