Setting an environment variable in a different folder in Python

I'm in folder A and need to programmatically set the environment variable ENV_VAR in folder B/C.
I'm doing this right now
command = "cd B/C; export ENV_VAR=/Folder1/Folder2; "
fip = open('NUL','wb+')
subprocess.Popen(command, stdout = fip, stderr =fip, shell=True)
I'm getting the following error
/bin/sh:: ENV_VAR=/Folder1/Folder2 is not an identifier
UPDATE: I guess I just want to know how to set environment variables in Python such that they're visible to processes residing in different folders. I always thought environment variables, once set, could be seen from anywhere. But I am on Solaris and that doesn't seem to be the case.
How do I fix this?

/bin/sh is not required to support all the features that you're probably used to from bash; in particular, the old Bourne shell found on Solaris does not accept the one-step export ENV_VAR=value form, which is exactly what the error message is complaining about.
Use ENV_VAR=/foo; export ENV_VAR, or else use command = ['bash', '-c', command] and shell=False.
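For example, either of these should run cleanly (a sketch built from the command in the question; the folder names are the asker's):

import os
import subprocess

# Bourne-compatible form: assign first, then export the name.
command = "cd B/C; ENV_VAR=/Folder1/Folder2; export ENV_VAR"
with open(os.devnull, 'wb') as devnull:
    subprocess.Popen(command, stdout=devnull, stderr=devnull, shell=True)

# Or hand the original command string to bash explicitly:
subprocess.Popen(['bash', '-c', "cd B/C; export ENV_VAR=/Folder1/Folder2"])

Either way, the variable only lives inside that short-lived shell process; to make it visible to processes your script launches afterwards, set it in os.environ before spawning them.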

Related

How to add a directory to Fish shell PATH from within a Python script?

From within a Python-based installation script running on Fish shell and supporting Python 2.7 and 3.4+, I want to add a directory to the PATH environment variable if not already present. When the script exits, PATH should contain the new directory, both for the current Fish shell session as well as future logins.
To achieve this, I can think of two potential approaches:
Create a localpath.fish file in ~/.config/fish/conf.d/ containing: set PATH ~/.local/bin $PATH
Use Python’s subprocess to run: set -U fish_user_paths ~/.local/bin $fish_user_paths
I would prefer the latter approach, but I have tried invoking Fish via subprocess several different ways, all to no avail. For example:
>>> subprocess.check_output(["fish", "-c", "echo", "Hello", "World!"], shell=True)
On Fish 3.0.2 and Python 3.7.4, the above yields:
<W> fish: Current terminal parameters have rows and/or columns set to zero.
<W> fish: The stty command can be used to correct this (e.g., stty rows 80 columns 24).
… and the Python REPL prompt does not re-appear. Tapping CTRL-C does not exit properly, leaving the Python REPL in a mostly unusable state until quit and relaunched. (This appears to be a known issue.)
Similarly, using subprocess.run() with Python 3.7’s capture_output parameter fails to yield the expected output:
>>> subprocess.run(["fish", "-c", "echo", "Hello", "World!"], capture_output=True)
CompletedProcess(args=['fish', '-c', 'echo', 'Hello', 'World!'], returncode=0, stdout=b'\n', stderr=b'')
My questions:
Is there a better strategy than either of the two methods above?
If using the first approach, how would I adjust PATH for the current Fish shell session from within a Python script?
If using the set -U fish_user_paths […] approach, what would be the appropriate way to invoke that from within a Python script?
You're on the right track, but the whole fish command has to be passed as a single argument. Also, you don't want shell=True, which routes the invocation through the default shell before fish ever sees it.
So you could write:
subprocess.check_output(["fish", "-c", "echo hello world"])
and it will do what you expect. For the PATH case:
subprocess.check_output(["fish", "-c", "set -U fish_user_paths ~/.local/bin $fish_user_paths"])
and, because fish_user_paths is a universal variable, the change persists and is picked up by the running Fish session as well as future ones, so $PATH is updated there too.
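A quick way to check (a sketch, assuming fish is on PATH and that fish_user_paths feeds into $PATH as usual):

import subprocess

# Universal variables are persisted, so a brand-new fish instance
# should already see the new entry.
print(subprocess.check_output(["fish", "-c", "echo $PATH"]).decode())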

Python Popen fails in compound command (PowerShell)

I am trying to use Python's Popen to change my working directory and execute a command.
pg = subprocess.Popen("cd c:/mydirectory ; ./runExecutable.exe --help", stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
buff,buffErr = pg.communicate()
However, PowerShell returns "The system cannot find the path specified." The path does exist.
If I run
pg = subprocess.Popen("cd c:/mydirectory ;", stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
it returns the same thing.
However, if I run this (without the semicolon):
pg = subprocess.Popen("cd c:/mydirectory",stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
The command returns without an error. This leads me to believe that the semicolon is the issue. What is the cause of this behavior, and how can I get around it?
I know I can just do c:/mydirectory/runExecutable.exe --help, but I would like to know why this is happening.
UPDATE:
I have tested passing the path to PowerShell as the argument for Popen's executable parameter. Just powershell.exe may not be enough: to find the true absolute path of PowerShell, run where.exe powershell, then pass that into Popen. Note that shell is still True; the default shell machinery is used, but the command is handed to powershell.exe.
powershell = r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
pg = subprocess.Popen("cd c:/mydirectory ; ./runExecutable.exe",
                      stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                      shell=True, executable=powershell)
buff, buffErr = pg.communicate()
# It works!
In your subprocess.Popen() call, shell=True means that the platform's default shell should be used.
While the Windows world is - commendably - moving from CMD (cmd.exe) to PowerShell, Python determines what shell to invoke based on the COMSPEC environment variable, which still points to cmd.exe, even in the latest W10 update that has moved toward PowerShell in terms of what the GUI offers as the default shell.
For backward compatibility, this will not change anytime soon, and will possibly never change.
Therefore, your choices are:
Use cmd syntax, as suggested in Maurice Meyer's answer.
Do not use shell = True and invoke powershell.exe explicitly - see below.
Windows only: Redefine environment variable COMSPEC before using shell = True - see below.
A simple Python example of how to invoke the powershell binary directly, with command-line switches followed by a single string containing the PowerShell source code to execute:
import subprocess
args = 'powershell', '-noprofile', '-command', 'set-location /; $pwd'
subprocess.Popen(args)
Note that I've deliberately used powershell instead of powershell.exe, because that opens up the possibility of the command working on Unix platforms too, once PowerShell Core is released.
Windows only: An example with shell = True, after redefining environment variable COMSPEC to point to PowerShell first:
import os, subprocess
os.environ["COMSPEC"] = 'powershell'
subprocess.Popen('Set-Location /; $pwd', shell=True)
Note:
COMSPEC is only consulted on Windows; on Unix platforms, the shell executable is invariably /bin/sh
As of Windows PowerShell v5.1 / PowerShell Core v6-beta.3, invoking powershell with just -c (interpreted as -Command) still loads the profiles files by default, which can have unexpected side effects (with the explicit invocation of powershell used above, -noprofile suppresses that).
Changing the default behavior to not loading the profiles is the subject of this GitHub issue, in an effort to align PowerShell's CLI with that of POSIX-like shells.
You can concatenate multiple commands using the '&' operator (cmd.exe syntax) instead of a semicolon. Try this:
pg = subprocess.Popen("cd c:/mydirectory & ./runExecutable.exe --help",
                      stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
buff, buffErr = pg.communicate()

Setting environment variables does not work [duplicate]

I was hoping to write a Python script to create some appropriate environment variables by running the script in whatever directory I'll be executing some simulation code in, and I've read that I can't write a script to make these env vars persist in the macOS terminal. So two things:
Is this true?
and
It seems like it would be a useful things to do; why isn't it possible in general?
You can't do it from python, but some clever bash tricks can do something similar. The basic reasoning is this: environment variables exist in a per-process memory space. When a new process is created with fork() it inherits its parent's environment variables. When you set an environment variable in your shell (e.g. bash) like this:
export VAR="foo"
What you're doing is telling bash to set the variable VAR in its process space to "foo". When you run a program, bash uses fork() and then exec() to run the program, so anything you run from bash inherits the bash environment variables.
Now, suppose you want to create a bash command that sets some environment variable DATA with content from a file in your current directory called ".data". First, you need to have a command to get the data out of the file:
cat .data
That prints the data. Now, we want to create a bash command to set that data in an environment variable:
export DATA=`cat .data`
That command takes the contents of .data and puts it in the environment variable DATA. Now, if you put that inside an alias command, you have a bash command that sets your environment variable (note the single quotes, which keep the backticks from being expanded when the alias is defined rather than when it is run):
alias set-data='export DATA=`cat .data`'
You can put that alias command inside the .bashrc or .bash_profile files in your home directory to have that command available in any new bash shell you start.
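Usage then looks like this (a sketch; the current directory must contain a .data file):

$ cd /directory/containing/data/file
$ set-data
$ echo "$DATA"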
One workaround is to output export commands, and have the parent shell evaluate this..
thescript.py:
import pipes
import random
r = random.randint(1,100)
print("export BLAHBLAH=%s" % (pipes.quote(str(r))))
..and the bash alias (the same can be done in most shells.. even tcsh!), again single-quoted so the command substitution runs when the alias is used rather than when it is defined:
alias setblahblahenv='eval $(python thescript.py)'
Usage:
$ echo $BLAHBLAH
$ setblahblahenv
$ echo $BLAHBLAH
72
You can output any arbitrary shell code, including multiple commands like:
export BLAHBLAH=23 SECONDENVVAR='something else' && echo 'everything worked'
Just remember to be careful about escaping any dynamically created output (the pipes.quote function is good for this).
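On Python 3 the same helper lives in shlex (shlex.quote, added in 3.3; the pipes module was removed in Python 3.13), so a version-agnostic import might look like:

try:
    from shlex import quote  # Python 3.3+
except ImportError:
    from pipes import quote  # Python 2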
If you set environment variables within a python script (or any other script or program), it won't affect the parent shell.
Edit clarification:
So the answer to your question is yes, it is true.
You can, however, export from within a shell script and source it using the dot invocation.
in fooexport.sh
export FOO="bar"
at the command prompt
$ . ./fooexport.sh
$ echo $FOO
bar
It's not generally possible. The new process created for Python cannot affect its parent process's environment. Neither can the parent affect the child, but the parent gets to set up the child's environment as part of new process creation.
Perhaps you can set them in .bashrc, .profile or the equivalent "runs on login" or "runs on every new terminal session" script on macOS.
You can also have Python start the simulation program with the desired environment (use the env parameter to subprocess.Popen; see http://docs.python.org/library/subprocess.html):
import subprocess, os
os.chdir('/home/you/desired/directory')
subprocess.Popen(['desired_program_cmd', 'args', ...], env=dict(SOMEVAR='a_value'))
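One caveat worth knowing: env replaces the child's entire environment rather than extending it, so if the program also needs the usual variables (PATH, HOME, and so on), merge them in. A minimal sketch, reusing the hypothetical command name from above:

import os
import subprocess

# Keep the inherited environment and add/override a single variable.
child_env = dict(os.environ, SOMEVAR='a_value')
subprocess.Popen(['desired_program_cmd'], env=child_env)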
Or you could have Python write out a shell script like this to a file with a .sh extension:
export SOMEVAR=a_value
cd /home/you/desired/directory
./desired_program_cmd
and then chmod +x it and run it from anywhere.
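If you'd rather have Python generate that wrapper itself, a minimal sketch (run_sim.sh is a made-up name):

import os
import stat

# Write out the shell script described above.
script = """#!/bin/sh
export SOMEVAR=a_value
cd /home/you/desired/directory
./desired_program_cmd
"""
with open('run_sim.sh', 'w') as f:
    f.write(script)
# The chmod +x step, done from Python:
os.chmod('run_sim.sh', os.stat('run_sim.sh').st_mode | stat.S_IXUSR)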
What I like to do is use /usr/bin/env in a shell script to "wrap" my command line when I find myself in similar situations:
#!/bin/bash
# "$@" (rather than $*) keeps quoted arguments intact.
/usr/bin/env NAME1="VALUE1" NAME2="VALUE2" "$@"
So let's call this script "myappenv". I put it in my $HOME/bin directory which I have in my $PATH.
Now I can invoke any command using that environment by simply prepending "myappenv" as such:
myappenv dosometask -xyz
Other posted solutions work too, but this is my personal preference. One advantage is that the environment is transient, so if I'm working in the shell only the command I invoke is affected by the altered environment.
Modified version based on new comments
#!/bin/bash
/usr/bin/env G4WORKDIR="$PWD" "$@"
You could wrap this all up in an alias too. I prefer the wrapper script approach since I tend to have other environment prep in there too, which makes it easier for me to maintain.
Building on Benson's answer, the best hack-around is to create a simple bash function that preserves arguments:
upsert-env-var () { eval "$(python upsert_env_var.py "$@")"; }
You can do whatever you want in your Python script with the arguments. To simply add a variable, use something like:
import os
import sys

var = sys.argv[1]
val = sys.argv[2]
if os.environ.get(var, None):
    print("export %s=%s:%s" % (var, val, os.environ[var]))
else:
    print("export %s=%s" % (var, val))
Usage:
upsert-env-var VAR VAL
As others have pointed out, the reason this doesn't work is that environment variables live in per-process memory space and thus die when the Python process exits.
They point out that a solution is to define an alias in .bashrc to do what you want, such as this (single-quoted so the backticks run when the alias is used, not when it is defined):
alias export_my_program='export MY_VAR=`my_program`'
However, there's another (a tad hacky) method which does not require you to modify .bashrc, nor requires you to have my_program in $PATH (or specify the full path to it in the alias). The idea is to run the program in Python if it is invoked normally (./my_program), but in Bash if it is sourced (source my_program). (Using source on a script does not spawn a new process and thus does not kill environment variables created within.) You can do that as follows:
my_program.py:
#!/usr/bin/env python3
_UNUSED_VAR=0
_UNUSED_VAR=0 \
<< _UNUSED_VAR
#=======================
# Bash code starts here
#=======================
'''
_UNUSED_VAR
export MY_VAR=`$(dirname $0)/my_program.py`
echo $MY_VAR
return
'''
#=========================
# Python code starts here
#=========================
print('Hello environment!')
Running this in Python (./my_program.py), the first 3 lines will not do anything useful and the triple-quotes will comment out the Bash code, allowing Python to run normally without any syntax errors from Bash.
Sourcing this in bash (source my_program.py), the heredoc (<< _UNUSED_VAR) is a hack used to "comment out" the first-triple quote, which would otherwise be a syntax error. The script returns before reaching the second triple-quote, avoiding another syntax error. The export assigns the result of running my_program.py in Python from the correct directory (given by $(dirname $0)) to the environment variable MY_VAR. echo $MY_VAR prints the result on the command-line.
Example usage:
$ source my_program.py
Hello environment!
$ echo $MY_VAR
Hello environment!
However, if run normally, the script will still do everything it did before except exporting the environment variable:
$ ./my_program.py
Hello environment!
$ echo $MY_VAR
<-- Empty line
As noted by other authors, the environment is thrown away when the Python process exits. But while the Python process is running, you can edit its environment, and child processes will inherit the changes. For example:
>>> import os
>>> os.environ["foo"] = "bar"
>>> import subprocess
>>> subprocess.call(["printenv", "foo"])
bar
0
>>> os.environ["foo"] = "foo"
>>> subprocess.call(["printenv", "foo"])
foo
0

Python and environment variables

In the following code snippet (meant to work in an init.d environment) I would like to execute test.ClassPath. However, I'm having trouble setting and passing the CLASSPATH environment variable as defined in the user's .bashrc.
Here is the source of my frustration:
When the script below is run as a normal user, it prints the CLASSPATH fine (from $HOME/.bashrc).
When I run it as root, it also displays the CLASSPATH fine (I've set up /etc/bash.bashrc with CLASSPATH).
BUT when I do "sudo script.py" (to simulate what happens at init.d startup time), the CLASSPATH is missing!
The CLASSPATH is quite large, so I'd like to read it from a file .. say $HOME/.classpath
#!/usr/bin/python
import subprocess
import os.path as osp
import os
user = "USERNAME"
logDir = "/home/USERNAME/temp/"
print os.environ["HOME"]
if "CLASSPATH" in os.environ:
print os.environ["CLASSPATH"]
else:
print "Missing CLASSPATH"
procLog = open(osp.join(logDir, 'test.log'), 'w')
cmdStr = 'sudo -u %s -i java test.ClassPath'%(user, ) # run in user
proc = subprocess.Popen(cmdStr, shell=True, bufsize=0, stderr=procLog, stdout=procLog)
procLog.close()
sudo will not pass environment variables by default. From the man page:
By default, the env_reset option is enabled. This causes commands to be executed with a minimal environment containing TERM, PATH, HOME, MAIL, SHELL, LOGNAME, USER and USERNAME in addition to variables from the invoking process permitted by the env_check and env_keep options. This is effectively a whitelist for environment variables.
There are a few ways of addressing this.
You can edit /etc/sudoers to explicitly pass the CLASSPATH variable using the env_keep configuration directive. That might look something like:
Defaults env_keep += "CLASSPATH"
You can run your command using the env command, which lets you set the environment explicitly. A typical command line invocation might look like this:
sudo env CLASSPATH=/path1:/path2 java test.ClassPath
The obvious advantage to option (2) is that it doesn't require mucking about with the sudoers configuration.
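Applied to the script in the question, option (2) might look like this (a sketch; USERNAME, the log path, and the $HOME/.classpath file are the asker's own placeholders):

import subprocess

# Read the large CLASSPATH from the file proposed in the question ...
with open('/home/USERNAME/.classpath') as f:
    classpath = f.read().strip()

# ... and pass it through sudo explicitly via env.
log = open('/home/USERNAME/temp/test.log', 'w')
cmd = ['sudo', '-u', 'USERNAME', 'env', 'CLASSPATH=%s' % classpath,
       'java', 'test.ClassPath']
subprocess.Popen(cmd, stdout=log, stderr=log)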
You could run source ~/.bashrc before starting your Python script to get the environment variables set.

Invoke make from different directory with python script

I need to invoke make (build a makefile) in a directory different from the one I'm in, from inside a Python script. If I simply do:
build_ret = subprocess.Popen("../dir1/dir2/dir3/make",
shell = True, stdout = subprocess.PIPE)
I get the following:
/bin/sh: ../dir1/dir2/dir3/make: No such file or directory
I've tried:
build_ret = subprocess.Popen("(cd ../dir1/dir2/dir3/; make)",
shell = True, stdout = subprocess.PIPE)
but the make command is ignored. I don't even get the "Nothing to build for" message.
I've also tried using "communicate" but without success. This is running on Red Hat Linux.
I'd go with @Philipp's solution of using cwd, but as a side note you could also use the -C option to make:
make -C ../dir1/dir2/dir3
-C dir, --directory=dir
Change to directory dir before reading the makefiles or doing anything else. If multiple -C options are specified, each is interpreted relative to the previous one: -C / -C etc is equivalent to -C /etc. This is typically used with recursive invocations of make.
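From Python, the same -C invocation might look like this (a sketch using the question's path):

import subprocess

build = subprocess.Popen(["make", "-C", "../dir1/dir2/dir3"],
                         stdout=subprocess.PIPE)
out, _ = build.communicate()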
Use the cwd argument, and use the list form of Popen:
subprocess.Popen(["make"], stdout=subprocess.PIPE, cwd="../dir1/dir2/dir3")
Invoking the shell is almost never required and is likely to cause problems because of the additional complexity involved.
Try using the full path. You can get the location of the Python script by calling:
sys.argv[0]
To get just the directory:
os.path.dirname(sys.argv[0])
You'll find plenty of path-manipulation helpers in the os.path module.
