Set an environment variable from a Python script - python

I am trying to automate build, test, and deployment via CI/CD. I have a Python script that queries my git remote repository, selects the most recent tag in semantic-versioning format (x.x.x), and increments it depending on the type of change.
I would like to set an environment variable (GIT_NEW_VERSION) so that it can be used within my Makefile and the generated binary will have the version available. The problem is that the Python script runs in a child process: it can modify the environment for itself and for any processes it spawns afterwards, but never for the parent process that called it.
I could call make from the Python script instead, but that is not ideal for error management and logging with my CI tool.

bash:
export LD_LIBRARY_PATH=my_path
sqsub -np $1 /path/to/executable
Similarly, in Python:
import os
import subprocess
import sys

# Visible in this process and in every child spawned from now on.
os.environ['LD_LIBRARY_PATH'] = "my_path"

# An extra variable can be passed to a single child via the env argument.
subprocess.check_call(
    ['sqsub', '-np', sys.argv[1], '/path/to/executable'],
    env=dict(os.environ, SQSUB_VAR="visible in this subprocess"),
)
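For the Makefile case in the question, the usual workaround is to invert the flow: rather than the child trying to export GIT_NEW_VERSION into the parent, the script prints the new version and the parent captures it. Below is a minimal sketch under that assumption; the file name bump_version.py, the tag listing via git tag --sort=-v:refname, and the patch-only bump are illustrative choices, not details from the question:
import subprocess

def latest_tag():
    # List tags newest-first by semantic version; assumes tags like "1.2.3".
    tags = subprocess.check_output(
        ['git', 'tag', '--sort=-v:refname'], text=True
    ).split()
    return tags[0] if tags else '0.0.0'

def bump_patch(version):
    major, minor, patch = (int(part) for part in version.split('.'))
    return f'{major}.{minor}.{patch + 1}'

if __name__ == '__main__':
    print(bump_patch(latest_tag()))
The Makefile can then capture the output itself, for example GIT_NEW_VERSION := $(shell python bump_version.py), and no cross-process environment writes are needed.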

Related

Run bash source command within a python script

I am using Python to run a bash script which contains a source command. For some reason the source command does not seem to take effect.
Bash script code:
#!/bin/bash
SETTINGSFILE=/apps/settings
if test -f "$SETTINGSFILE"; then
    source "$SETTINGSFILE"  # This is not working
    echo "settings file exists"  # reaches here
else
    echo "settings file not found"
fi
...
Python code:
import subprocess
rc = subprocess.call(["./<name-of-bash-script>.sh"])
Basically, I want to run the command source /apps/settings from a script. Control reaches the echo statement (marked with the comment), but the source command still appears to have no effect. How can I get it working?
I don't know why running source file.sh in a terminal would not load your environment variables (I am not on a Linux system right now and can't really test this). In general, however, if you want to manipulate your environment, you should do it directly in Python rather than calling a child process. For example,
import os
os.environ['THING'] = 'WORD'
I believe that if you wish to change your environment variables from a child process, you need to embed that child process in your current parent process (so that you become the child with the new environment), and you need to keep that child process open (you don't want it to terminate). You probably need something like /bin/bash -i at the end of your source file (to keep the subprocess running). You can then use something like the pexpect module to embed the subprocess in the current process.
import pexpect

# Spawn the script, then hand terminal control over to it, so you end up
# interacting with the child that has the sourced environment.
child = pexpect.spawn('file.sh')
child.interact()
# Check os.environ to see whether it was updated from your source file.
Sorry I can't be of more help.
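A more common pattern, not part of the answer above, is to let a throwaway shell do the sourcing and then read the resulting environment back into Python. A rough sketch, assuming /apps/settings only sets simple single-line variables:
import os
import subprocess

# Source the settings file in a disposable bash, then print the resulting
# environment so this Python process can absorb it.
output = subprocess.check_output(
    ['bash', '-c', 'source /apps/settings && env'], text=True
)
for line in output.splitlines():
    if '=' not in line:
        continue  # naive: skips continuation lines of multi-line values
    key, _, value = line.partition('=')
    os.environ[key] = value
The naive parse breaks on multi-line values; null-separated output (env -0) is sturdier if the settings file exports anything unusual.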

Using python subprocess with module load

I'm currently using Python 2.7 in a Unix environment.
I need to run R scripts from my Python scripts, but I can't manage to make it work because my R module needs to be loaded first (using module load).
Here's my Python script:
import os
import subprocess as sp
os.system('module load R/3.2.3')
out = sp.check_output(['Rscript','test.R'], universal_newlines=True)
I keep getting the same error: "[Errno 2] No such file or directory"
Any idea?
I looked here and here but couldn't make it work.
Thank you for your help!
So what "module load" actually does is set some environment variables in the calling shell. So when you do this:
os.system('module load R/3.2.3')
Python creates a process, runs /bin/sh in it, and passes that command to the shell. The module environment variables are set in that shell. Then that shell exits, and the job is done!
The environment variables do not (and cannot) propagate back to the Python process. So when you do this:
sp.check_output(['Rscript','test.R'])
It's totally irrelevant that you ran module load before.
So how can you fix this? Well, one possibility would be to explicitly specify the path to Rscript:
sp.check_output(['/your/full/path/to/Rscript','test.R'])
Another would be to combine your commands:
sp.check_output('module load R/3.2.3 && Rscript test.R', shell=True)
Finally, you could simply run module load before running your Python script in the first place. The environment variables it sets can propagate all the way to the R invocation within Python.
By the way, it is also possible to invoke R directly from Python using rpy2: http://rpy.sourceforge.net/rpy2/doc-dev/html/introduction.html
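For illustration, a minimal rpy2 sketch (assuming rpy2 is installed and R was already usable when Python started):
import rpy2.robjects as robjects

# Run the script inside the embedded R interpreter instead of a subprocess.
robjects.r('source("test.R")')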

Create a process that runs in background using python

I want to write a Python program that runs in the background.
I mean, like when we install the Python package: afterwards we can run any script by putting python in front of the script name. To me this suggests that some Python process is running in the background which can take inputs and perform actions.
And on Linux you can call grep from anywhere, so grep also seems to be running in the background somehow.
I want to write something like that in Python: when I call a certain function with a name and arguments at any time, it should perform the intended action without caring about the original code. But I am not able to find out how to achieve that.
Can anyone please help me here?
Thanks in advance.
Clarification: the fact that you can run python or grep in a console just by typing its name does not mean that it runs in the background. It means that an executable file exists in some location, and this location is listed in the environment variable PATH.
For example, on my system I can run Python by typing python. The python executable is installed at /usr/local/bin/python and has the execute permission bit set.
$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin
and yes, /usr/local/bin is contained in PATH.
You can do the same with Python scripts (a minimal example follows these steps):
ensure that the very first line of your script contains #!/usr/bin/python or #!/usr/bin/env python
give your script execute permissions: chmod a+x yourScript
either move your script to one of the directories contained in $PATH, or add the directory where your script is located to PATH: export PATH=$PATH:/home/you/scripts
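For instance, a hypothetical ~/scripts/greet (the name and location are made up for illustration):
#!/usr/bin/env python
# Once this file is executable and its directory is on PATH, typing
# "greet Alice" anywhere runs it. No background process is involved:
# the shell simply finds and executes the file on demand.
import sys

name = sys.argv[1] if len(sys.argv) > 1 else "world"
print("Hello, %s!" % name)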
Have a look at
http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
You can roll your own daemon by inheriting from its Daemon class and overriding the run method:
import subprocess
import sys

from daemon import Daemon

class run_daemon(Daemon):
    def run(self):
        # The daemon's main task: execute the shell command given on the
        # command line.
        run_daemon.execute_shell_command(sys.argv[1])

    @staticmethod
    def execute_shell_command(shell_command):
        process = subprocess.Popen(shell_command, shell=True,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        process.communicate()

Using subprocess under django to call a scala program

I have a Scala program that I would like to call from within Django using subprocess:
import subprocess

encode_cmd = "/usr/local/share/scala/scala-2.10.0/bin/scala -cp /home/django/code/classes conn {}".format(self.id)
output = subprocess.Popen(encode_cmd, shell=True, stdout=subprocess.PIPE).stdout.read()
This code runs fine in the Python shell, but when run as part of the normal webserver process it fails to find the Scala dependencies (the Scala class references the Slick libraries, for example), dying with a java.lang.NoClassDefFoundError.
I've tried specifying particular users as part of the mod_wsgi daemon process, but this makes no difference.
You should add the jars to your command, like this: -cp /home/django/code/classes:/path/to/slick.jar. Otherwise the classpath only includes the loose .class files and the folders containing class files, laid out per package.
If you have many jars, you can always rely on shell expansion: /path/to/dependencies/*.jar
Another option is to use SBT's xsbt-start-script-plugin or Maven's appassembler plugin to generate a launcher shell script with the classpath baked in.
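As a Python sketch of the first suggestion (every path is hypothetical, and record_id stands in for the self.id used in the question):
import subprocess

# Put the compiled classes *and* every jar the program needs (Slick, etc.)
# on the classpath; entries are separated by ':' on Unix.
classpath = '/home/django/code/classes:/home/django/lib/slick.jar'
record_id = 42  # stands in for self.id from the question
cmd = ['/usr/local/share/scala/scala-2.10.0/bin/scala',
       '-cp', classpath, 'conn', str(record_id)]
output = subprocess.check_output(cmd, universal_newlines=True)
Passing a list of arguments instead of a shell=True string also sidesteps quoting problems, which makes differences between the interactive shell and the mod_wsgi environment easier to isolate.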

How do I create a custom python interpreter, i.e. with certain modules already included?

If you've used Ruby on Rails, I'm thinking of the feature where the user types
'rails console'
and instantly gets a Ruby console with Rails and the current app already loaded.
I want to make something like this for a Python program I'm working on. Does anyone know how I could get to type, say,
'python myPythonConsole.py'
and open up a regular Python interpreter, but with my program and all its dependencies loaded?
If I understand you correctly, then you might want python -i myPythonConsole.py. It gives you a console when the script has finished, so you have to run your application in a different thread.
To create a console inside a script you would use the code module.
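A minimal sketch with the code module (myapp is a hypothetical stand-in for your own package):
import code

import myapp  # hypothetical: whatever you want preloaded

# Open an interactive console whose namespace already contains myapp.
code.interact(banner='myapp console', local={'myapp': myapp})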
If you are using IPython (if you are not, you should; it is an awesome Python shell with TAB completion and many shortcuts), it is possible to set up profiles, which are basically named configurations.
Each profile can import modules (and do other stuff) at startup, as sketched below.
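For example, any .py file dropped into a profile's startup directory is executed each time IPython launches with that profile (the file name below is arbitrary, and myapp is hypothetical):
# ~/.ipython/profile_default/startup/00-myapp.py
# Everything in this file runs automatically when IPython starts with
# this profile, so the names defined here are available in the shell.
import myapp  # hypothetical: the package you want preloaded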
Django does this with its "shell" command:
./manage.py shell
will open a Python shell with your Django settings loaded so you can import your project code interactively.
Source: http://code.djangoproject.com/browser/django/trunk/django/core/management/commands/shell.py
The real answer here is to use the PYTHONSTARTUP environment variable. See the tutorial section The interactive startup file.
Do your custom imports in a file interpreter.py, and configure
PYTHONSTARTUP=/path/to/interpreter.py
The next time you start Python, the custom code will be executed before you're dropped into the REPL shell.
Here's my customization:
import os
import sys
from pathlib import Path
from pprint import pprint

# Short aliases for interactive use.
pp = pprint
P = Path

version = ".".join(str(number) for number in sys.version_info[0:3])
print(f"\nCustomized Python interpreter v{version} running from {sys.prefix}")
