Executing bash script and setting environment variables using Python

I have been trying to use the subprocess module to run a sequence of shell commands in succession, to automate certain tasks using Python (for the first time). The task consists of the following steps:
1. Go to location 1
2. Execute the csh script at that location
3. Go to location 2
4. Execute another csh script at the new location
I have written the following code for the purpose:
#!/usr/bin/python
import subprocess
import os

path = r'/AA/BB/CC/DD/EE/FF/GG/HH/'
os.chdir(path)

# Run the first script and capture its stdout
p1 = subprocess.Popen('./xyz.csh', stdout=subprocess.PIPE, shell=True)
outdata = p1.communicate()[0]

# Move to the second location and run the second script
os.chdir('../XX/YY/')
subprocess.call("./abc.csh")
print("Simulation complete")
The xyz.csh script contains a few setenv commands which set the environment variables required for the execution of the abc.csh script.
I have the following query about the usage of the subprocess.Popen() routine: since subprocess.Popen() creates a sub-shell for the execution of xyz.csh, the environment variables set by that script vanish once the execution is completed, and abc.csh (which requires those environment variables) throws errors during execution. Is there a way to set the environment variables in the main shell so that abc.csh can be executed without failing?
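One common workaround (a sketch, not part of the original question; it assumes xyz.csh can be sourced, csh is on the PATH, and Python 3.7+ for text=True) is to source the first script in a single child shell, dump the resulting environment, and hand that environment to the second script explicitly:
import os
import subprocess

path = '/AA/BB/CC/DD/EE/FF/GG/HH/'

# Source the first script in one csh, then print the environment it produced.
out = subprocess.check_output(
    ['csh', '-c', 'source ./xyz.csh && env'],
    cwd=path, text=True)

# Parse the KEY=VALUE lines into a dict (values containing newlines
# would need a more careful format, e.g. env -0).
env = dict(line.split('=', 1) for line in out.splitlines() if '=' in line)

# Run the second script with that environment.
subprocess.call('./abc.csh', cwd=os.path.join(path, '../XX/YY'), env=env)
print("Simulation complete")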

Related

How to run "source" command from python?

I need to set up the ROS2 Galactic environment by sourcing the following file from Python:
"source /opt/ros/galactic/setup.bash"
If I write the above line in a terminal it is sourced, but I need to do this from a Python script.
I tried:
import subprocess
subprocess.call("source /opt/ros/galactic/setup.bash", shell=True)
and
import os
os.system('source /opt/ros/galactic/setup.bash')
But neither of them sources the environment. I am working on Ubuntu 20.04, Python 3.8.10.
This will not impact your Python runtime environment. subprocess starts a shell wherein it runs your script (setup.bash) and then terminates.
Consider this:
import subprocess
import os
subprocess.run('export FOO=1', shell=True)
print(os.environ['FOO'])
This tries to set an environment variable in the sub-shell. That actually works, but when run() returns, the shell no longer exists; so when we try to access the environment variable, we get a KeyError.
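If you need the variables that setup.bash exports back in your Python process, one workaround (a sketch, assuming bash and GNU env are available; ROS_DISTRO is one of the variables the ROS setup scripts typically set) is to source the file in a child bash, print the resulting environment, and merge it into os.environ:
import os
import subprocess

SETUP = '/opt/ros/galactic/setup.bash'

# Source the file in a child bash and print the resulting environment,
# NUL-separated so values containing newlines survive the round trip.
out = subprocess.check_output(
    ['bash', '-c', 'source "$1" && env -0', '_', SETUP])

# Merge the child's environment back into this process.
for entry in out.split(b'\0'):
    if b'=' in entry:
        key, _, value = entry.partition(b'=')
        os.environ[key.decode()] = value.decode()

print(os.environ.get('ROS_DISTRO'))  # e.g. 'galactic' once the merge worked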

How do I export environment variables using a script in the same shell in python?

My Django project fetches credentials from environment variables; now I want to automate this process and store the credentials in the vault (HashiCorp).
I have a Python and shell script which fetches data from an API and exports it as environment variables. When I run it using an os.system command it runs the shell script, but since it runs in a subprocess, I can't access the variables in the main (parent) process/shell. The only way of doing it is by inserting the shell script in the settings.py file.
Is there any way I can do it so that I get those variables in the main process?
P.S.: I did try sourcing; os.system didn't recognise it as a command.
Here's the code I'm running:
import os
os.environ['ENV'] = 'Demo'
os.system('python3 /home/rishabh/export.py')
print(os.environ.get('RDS_DB_NAME'))
output:
None
The Python file and shell script work just fine.
One way to do it is to run export.py in the same process, as user1934428 suggested:
import os
import sys
os.environ['ENV'] = 'Demo'
sys.path.append('/home/rishabh/')
import export # runs export.py in the same process
print(os.environ.get('RDS_DB_NAME'))
This assumes there are no __name__ == '__main__' checks inside export.py.
You only need the sys.path line if export.py is in a different directory than your current script.
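If export.py does guard its logic behind a __name__ == '__main__' check, the standard library's runpy module can still run it inside the current process (a sketch using the paths from the question):
import os
import runpy

os.environ['ENV'] = 'Demo'
# run_path executes the file in this process (with __name__ set to
# '__main__'), so any os.environ changes it makes remain visible here.
runpy.run_path('/home/rishabh/export.py', run_name='__main__')
print(os.environ.get('RDS_DB_NAME'))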

Linux, Python open terminal run global python command

Not sure if this is possible. I have a set of Python scripts and have modified the Linux PATH in ~/.bashrc so that whenever I open a terminal, the Python scripts are available to run as a command.
export PATH=$PATH:/home/user/pythonlib/
my_command.py resides in the above path.
I can run my_command.py (args) from anywhere in terminal and it will run the python scripts.
I'd like to control this functionality from a different Python script, as this will be the quickest way to automate my processing routines. So I need it to open a terminal and run my_command.py (args) from within the Python script I'm working on.
I have tried subprocess:
import subprocess
test = subprocess.Popen(["my_command.py"], stdout=subprocess.PIPE)
output = test.communicate()[0]
While my_command.py is typically available in any terminal I launch, here I have no access to it; it returns file not found.
I can start a new terminal using os, then type in my_command.py, and it works:
os.system("x-terminal-emulator -e /bin/bash")
So, is there a way to get the second method to accept a script you want to run from python with args?
Ubuntu 16
Thanks :)
Popen inherits the environment of your Python process; it does not re-read ~/.bashrc, so PATH additions made there may be missing from the session you create in a Python script. You have to modify the PATH in that session to include the directory of your project, like so:
import os
import shlex
import subprocess

someterminalcommand = "my_command.py (args)"

my_env = os.environ.copy()
my_env["PATH"] = "/home/usr/mypythonlib/:" + my_env["PATH"]

combine = subprocess.Popen(shlex.split(someterminalcommand), env=my_env)
combine.wait()
This allows me to run my "my_command.py" file from a different python session just like I had a terminal window open.
If you're using Gnome, the gnome-terminal command is rather useful in this situation.
As an example of very basic usage, the following code will spawn a terminal, and run a Python REPL in it:
import subprocess
subprocess.Popen(["gnome-terminal", "-e", "python"])
Now, if you want to run a specific script, you will need to concatenate its path with python, as the last element of that list is the line that will be executed in the new terminal.
For instance:
subprocess.Popen(["gnome-terminal", "-e", "python my_script.py"])
If your script is executable, you can omit python:
subprocess.Popen(["gnome-terminal", "-e", "my_script.py"])
If you want to pass parameters to your script, simply add them to the python command:
subprocess.Popen(["gnome-terminal", "-e", "python my_script.py var1 var2"])
Note that if you want to run your script with a particular version of Python, you should specify it by explicitly calling "python2" or "python3".
A small example:
# my_script.py
import sys
print(sys.argv)
input()
# main.py
import subprocess
subprocess.Popen(["gnome-terminal", "-e", "python3 my_script.py hello world"])
Running python3 main.py will spawn a new terminal, with ['my_script.py', 'hello', 'world'] printed, waiting for input.

Use Python in ubuntu to show contents of a directory

I have a .py file in the home directory which contains these three lines:
import os
os.system("cd Desktop/")
os.system("ls")
and I want it to "ls" from the "Desktop" directory but it shows contents of the /home directory.
I looked at these pages:
Calling an external command in Python
http://ubuntuforums.org/showthread.php?t=729192
but I could not understand what to do. Can anybody help me?
The two calls are separate from each other. There is no context kept between successive invocations of os.system because a new shell is spawned for every call. First os.system("cd Desktop/") switches directories to Desktop and exits. Then a new shell executes ls in the original folder.
Try chaining your commands with &&:
import os
os.system("cd Desktop/ && ls")
This will show the contents of directory Desktop.
Fabric
If your application is going to be heavy on os usage, you might consider using python-fabric. It allows you to use higher-level language constructs like context managers to make command-line invocations easier:
from fabric.operations import local
from fabric.context_managers import lcd
with lcd("Desktop/"):  # Prefixes all commands with `cd Desktop && `
    contents = local("ls", capture=True)
You have to consider that os.system executes the commands in a sub-shell. Hence: 1) Python starts a sub-shell, 2) the directory is changed, 3) the sub-shell completes, 4) you return to the previous state.
To force the current directory change you should do:
os.chdir("Desktop")
Whenever possible, prefer other means than os.system (for example os.listdir), or use subprocess, which is an excellent module for command control in the shell.
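For completeness, a minimal sketch of that native approach (assuming a Desktop directory exists in your home folder):
import os

# Change into Desktop in this very process, then list it natively.
os.chdir(os.path.expanduser("~/Desktop"))
for name in sorted(os.listdir(".")):
    print(name)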

Setting environment variables does not work [duplicate]

I was hoping to write a Python script to create some appropriate environment variables by running the script in whatever directory I'll be executing some simulation code in, and I've read that I can't write a script to make these env vars persist in the macOS terminal. So two things:
Is this true?
and
It seems like it would be a useful thing to do; why isn't it possible in general?
You can't do it from python, but some clever bash tricks can do something similar. The basic reasoning is this: environment variables exist in a per-process memory space. When a new process is created with fork() it inherits its parent's environment variables. When you set an environment variable in your shell (e.g. bash) like this:
export VAR="foo"
What you're doing is telling bash to set the variable VAR in its process space to "foo". When you run a program, bash uses fork() and then exec() to run the program, so anything you run from bash inherits the bash environment variables.
Now, suppose you want to create a bash command that sets some environment variable DATA with content from a file in your current directory called ".data". First, you need to have a command to get the data out of the file:
cat .data
That prints the data. Now, we want to create a bash command to set that data in an environment variable:
export DATA=`cat .data`
That command takes the contents of .data and puts it in the environment variable DATA. Now, if you put that inside an alias command, you have a bash command that sets your environment variable:
alias set-data="export DATA=`cat .data`"
You can put that alias command inside the .bashrc or .bash_profile files in your home directory to have that command available in any new bash shell you start.
One workaround is to output export commands and have the parent shell evaluate them:
thescript.py:
import pipes
import random
r = random.randint(1,100)
print("export BLAHBLAH=%s" % (pipes.quote(str(r))))
...and the bash alias (the same can be done in most shells, even tcsh!):
alias setblahblahenv="eval $(python thescript.py)"
Usage:
$ echo $BLAHBLAH
$ setblahblahenv
$ echo $BLAHBLAH
72
You can output any arbitrary shell code, including multiple commands like:
export BLAHBLAH=23 SECONDENVVAR='something else' && echo 'everything worked'
Just remember to be careful about escaping any dynamically created output (the pipes.quote function is good for this).
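For instance, a small sketch of safe quoting (shlex.quote is the Python 3 name for pipes.quote):
import shlex

# Quote a value containing spaces and shell metacharacters before
# emitting it as shell code for the parent shell to eval.
value = "something else; $dangerous"
print("export SECONDENVVAR=%s" % shlex.quote(value))
# prints: export SECONDENVVAR='something else; $dangerous'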
If you set environment variables within a python script (or any other script or program), it won't affect the parent shell.
Edit clarification:
So the answer to your question is yes, it is true.
You can however export from within a shell script and source it by using the dot invocation
in fooexport.sh
export FOO="bar"
at the command prompt
$ . ./fooexport.sh
$ echo $FOO
bar
It's not generally possible. The new process created for Python cannot affect its parent process's environment. Neither can the parent affect the child, but the parent gets to set up the child's environment as part of new-process creation.
Perhaps you can set them in .bashrc, .profile or the equivalent "runs on login" or "runs on every new terminal session" script in macOS.
You can also have python start the simulation program with the desired environment. (use the env parameter to subprocess.Popen (http://docs.python.org/library/subprocess.html) )
import subprocess, os

os.chdir('/home/you/desired/directory')
# Note: env= replaces the child's entire environment (see the note below).
subprocess.Popen(['desired_program_cmd', 'args', ...], env=dict(SOMEVAR='a_value'))
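Note that env= replaces the child's environment wholesale, so the program above would lose PATH, HOME and everything else. A common pattern (a sketch, not part of the original answer) is to merge the extra variable into a copy of the current environment:
import os
import subprocess

# Keep the current environment and add/override one variable.
child_env = dict(os.environ, SOMEVAR='a_value')
subprocess.Popen(['desired_program_cmd'], env=child_env)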
Or you could have python write out a shell script like this to a file with a .sh extension:
export SOMEVAR=a_value
cd /home/you/desired/directory
./desired_program_cmd
and then chmod +x it and run it from anywhere.
What I like to do is use /usr/bin/env in a shell script to "wrap" my command line when I find myself in similar situations:
#!/bin/bash
/usr/bin/env NAME1="VALUE1" NAME2="VALUE2" ${*}
So let's call this script "myappenv". I put it in my $HOME/bin directory which I have in my $PATH.
Now I can invoke any command using that environment by simply prepending "myappenv" as such:
myappenv dosometask -xyz
Other posted solutions work too, but this is my personal preference. One advantage is that the environment is transient, so if I'm working in the shell only the command I invoke is affected by the altered environment.
Modified version based on new comments
#!/bin/bash
/usr/bin/env G4WORKDIR=$PWD ${*}
You could wrap this all up in an alias too. I prefer the wrapper script approach since I tend to have other environment prep in there too, which makes it easier for me to maintain.
Building on Benson's answer, a neat hack-around is to create a simple bash function to preserve the arguments:
upsert-env-var (){ eval $(python upsert_env_var.py $*); }
You can do whatever you want in your Python script with the arguments. To simply add a variable, use something like:
import os
import sys

var = sys.argv[1]
val = sys.argv[2]
if os.environ.get(var, None):
    print("export %s=%s:%s" % (var, val, os.environ[var]))
else:
    print("export %s=%s" % (var, val))
Usage:
upsert-env-var VAR VAL
As others have pointed out, the reason this doesn't work is that environment variables live in a per-process memory space and thus die when the Python process exits.
They point out that a solution to this is to define an alias in .bashrc to do what you want, such as this:
alias export_my_program="export MY_VAR=`my_program`"
However, there's another (a tad hacky) method which does not require you to modify .bashrc, nor requires you to have my_program in $PATH (or to specify the full path to it in the alias). The idea is to run the program in Python if it is invoked normally (./my_program), but in Bash if it is sourced (source my_program). (Using source on a script does not spawn a new process and thus does not kill environment variables created within.) You can do that as follows:
my_program.py:
#!/usr/bin/env python3
_UNUSED_VAR=0
_UNUSED_VAR=0 \
<< _UNUSED_VAR
#=======================
# Bash code starts here
#=======================
'''
_UNUSED_VAR
export MY_VAR=`$(dirname $0)/my_program.py`
echo $MY_VAR
return
'''
#=========================
# Python code starts here
#=========================
print('Hello environment!')
Running this in Python (./my_program.py), the first 3 lines will not do anything useful and the triple-quotes will comment out the Bash code, allowing Python to run normally without any syntax errors from Bash.
Sourcing this in bash (source my_program.py), the heredoc (<< _UNUSED_VAR) is a hack used to "comment out" the first triple-quote, which would otherwise be a syntax error. The script returns before reaching the second triple-quote, avoiding another syntax error. The export assigns the result of running my_program.py in Python from the correct directory (given by $(dirname $0)) to the environment variable MY_VAR. echo $MY_VAR prints the result on the command line.
Example usage:
$ source my_program.py
Hello environment!
$ echo $MY_VAR
Hello environment!
However, if run normally, the script will still do everything it did before, except exporting the environment variable:
$ ./my_program.py
Hello environment!
$ echo $MY_VAR
<-- Empty line
As noted by other authors, the memory is thrown away when the Python process exits. But while the Python process is running, you can edit its environment, and child processes will inherit it. For example:
>>> import os
>>> os.environ["foo"] = "bar"
>>> import subprocess
>>> subprocess.call(["printenv", "foo"])
bar
0
>>> os.environ["foo"] = "foo"
>>> subprocess.call(["printenv", "foo"])
foo
0
