How to use export with Python on Linux

I need to make an export like this in Python:
# export MY_DATA="my_export"
I've tried this:
# -*- python-mode -*-
# -*- coding: utf-8 -*-
import os
os.system('export MY_DATA="my_export"')
But when I list the exports, "MY_DATA" does not appear:
# export
How can I do an export with Python without saving "my_export" to a file?

export is a command that you give directly to the shell (e.g. bash) to tell it to add or modify one of its environment variables. You can't change your shell's environment from a child process (such as Python); it's just not possible.
Here's what's happening when you try os.system('export MY_DATA="my_export"')...
/bin/bash process, command `python yourscript.py` forks python subprocess
  |_ /usr/bin/python process, command `os.system()` forks /bin/sh subprocess
       |_ /bin/sh process, command `export ...` changes its local environment
When the bottom-most /bin/sh subprocess finishes running your export ... command, it's discarded, along with the environment that you have just changed.

You actually want to do
import os
os.environ["MY_DATA"] = "my_export"

Another way to do this, if you're in a hurry and don't mind the hacky-aftertaste, is to execute the output of the python script in your bash environment and print out the commands to execute setting the environment in python. Not ideal but it can get the job done in a pinch. It's not very portable across shells, so YMMV.
$(python -c 'print "export MY_DATA=my_export"')
(you can also enclose the statement in backticks in some shells ``)
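On Python 3 the same trick could look like this (a minimal sketch; eval makes the quoting a little more predictable than bare command substitution):
eval "$(python3 -c 'print("export MY_DATA=my_export")')"
echo "$MY_DATA"   # my_export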

Not that simple:
python -c "import os; os.putenv('MY_DATA','1233')"
$ echo $MY_DATA # <- empty
But:
python -c "import os; os.putenv('MY_DATA','123'); os.system('bash')"
$ echo $MY_DATA #<- 123
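A related subtlety, documented in the os module: os.putenv passes the variable on to child processes but does not update os.environ, while assigning to os.environ does both. A small sketch:
import os

os.putenv('MY_DATA', '123')          # visible to children spawned from here...
print(os.environ.get('MY_DATA'))     # ...but not reflected in os.environ (prints None)

os.environ['MY_DATA'] = '123'        # updates os.environ and is inherited by children
os.system('echo "$MY_DATA"')         # prints: 123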

I have an excellent answer.
#!/bin/bash
output=$(git diff origin/master..origin/develop | \
python -c '
# DO YOUR HACKING
variable1_to_be_exported="Yo Yo"
variable2_to_be_exported="Honey Singh"
# ... and so on
magic=""
magic+="export onShell_var1=\""+str(variable1_to_be_exported)+"\"\n"
magic+="export onShell_var2=\""+str(variable2_to_be_exported)+"\""
print(magic)
'
)
eval "$output"
echo "$onShell_var1"   # Output will be Yo Yo
echo "$onShell_var2"   # Output will be Honey Singh
Mr Alex Tingle is correct about the process and sub-process behaviour; how it can be worked around is shown above.
Key concept:
Whatever Python prints is captured in the bash variable output
We can execute any command given as a string using eval
So, prepare your Python print output as meaningful bash commands
Use eval to execute it in bash
And you can see your results
NOTE
Always run eval with the output variable in double quotes, or else bash will mess up your \n's and the output will be strange
PS: I don't like bash, but you have to use it

I've had to do something similar on a CI system recently. My options were to do it entirely in bash (yikes) or use a language like python which would have made programming the logic much simpler.
My workaround was to do the programming in python and write the results to a file.
Then use bash to export the results.
For example:
# do calculations in python
with open("./my_export", "w") as f:
f.write(your_results)
# then in bash
export MY_DATA="$(cat ./my_export)"
rm ./my_export # if no longer needed
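A variation on the same idea (only a sketch; the file name env.sh is arbitrary) is to have Python write a sourceable script rather than a raw value:
# in python
with open("./env.sh", "w") as f:
    f.write('export MY_DATA="my_export"\n')

# then in bash
. ./env.sh
rm ./env.sh  # if no longer needed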

You could try os.environ["MY_DATA"] instead.

Kind of a hack because it's not really python doing anything special here, but if you run the export command in the same sub-shell, you will probably get the result you want.
import os
cmd = "export MY_DATA='1234'; echo $MY_DATA" # or whatever command
os.system(cmd)

In the hope of providing clarity over common confusion...
I have written many python <--> bash <--> elfbin toolchains and the proper way to see it is such as this:
Each process (originator) has an environment state inherited from whatever invoked it. Any change remains local to that process. Transferring an environment state is a function by itself and runs in two directions, each with its own caveats. The most common thing is to modify the environment before running a sub-process. To go down to the metal, look at the exec() call in C. There is a variant that takes a pointer to environment data. This is the only actually supported transfer of environment in typical OSes.
Shell scripts will create a state to pass to children when you do an export; otherwise they just pass on what they got in the first place.
In all other cases it will be some generic mechanism used to pass a set of data to allow the calling process itself to update its environment based on the child process's output.
Ex:
ENVUPDATE=$(CMD_THAT_OUTPUTS_KEYVAL_LISTS)
echo "$ENVUPDATE" > "$TMPFILE"
source "$TMPFILE"
The same can of course be done using json, xml or other things as long as you have the tools to interpret and apply.
The need for this may be (50% chance) a sign of misconstruing the basic primitives, and that you need a better config or parameter interchange in your solution.
Oh, in python I would do something like...
(need improvement depending on your situation)
import os
import re

RE_KV = re.compile(r'([a-z]\w*)\s*=\s*(.*)')
OUTPUT = RunSomething(...)  # assuming it returns something like 'k1=v1 k2=v2'
for kv in OUTPUT.split(' '):
    try:
        k, v = RE_KV.match(kv).groups()
        os.environ[k] = str(v)
    except AttributeError:
        # not a key=value property...
        pass

One line solution:
eval `python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'`
echo $python_include_path # prints /home/<usr>/anaconda3/include/python3.6m in my case
Breakdown:
Python call
python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'
It's launching a python script that
imports sysconfig
gets the python include path corresponding to this python binary (use "which python" to see which one is being used)
prints the string "python_include_path={0}" with {0} being the path from step 2
Eval call
eval `python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'`
It's executing in the current bash instance the output from the python script. In my case, it's executing:
python_include_path=/home/<usr>/anaconda3/include/python3.6m
In other words, it's setting the environment variable "python_include_path" with that path for this shell instance.
Inspired by:
http://blog.tintoy.io/2017/06/exporting-environment-variables-from-python-to-bash/

import os
import shlex
from subprocess import Popen, PIPE
os.environ.update(MY_DATA="my_export")   # add/override variables for the child process
res = Popen(shlex.split("cmd xxx -xxx"), stdin=PIPE, stdout=PIPE, stderr=PIPE,
            env=os.environ).communicate('y\ny\ny\n'.encode('utf8'))
stdout = res[0]
stderr = res[1]

os.system('/home/user1/exportPath.ksh')
exportPath.ksh:
export MY_DATA="my_export"

Related

Add environment variables in python [duplicate]

I was hoping to write a python script to create some appropriate environmental variables by running the script in whatever directory I'll be executing some simulation code, and I've read that I can't write a script to make these env vars persist in the mac os terminal. So two things:
Is this true?
and
It seems like it would be a useful thing to do; why isn't it possible in general?
You can't do it from python, but some clever bash tricks can do something similar. The basic reasoning is this: environment variables exist in a per-process memory space. When a new process is created with fork() it inherits its parent's environment variables. When you set an environment variable in your shell (e.g. bash) like this:
export VAR="foo"
What you're doing is telling bash to set the variable VAR in its process space to "foo". When you run a program, bash uses fork() and then exec() to run the program, so anything you run from bash inherits the bash environment variables.
Now, suppose you want to create a bash command that sets some environment variable DATA with content from a file in your current directory called ".data". First, you need to have a command to get the data out of the file:
cat .data
That prints the data. Now, we want to create a bash command to set that data in an environment variable:
export DATA=`cat .data`
That command takes the contents of .data and puts it in the environment variable DATA. Now, if you put that inside an alias command, you have a bash command that sets your environment variable:
alias set-data='export DATA=`cat .data`'
You can put that alias command inside the .bashrc or .bash_profile files in your home directory to have that command available in any new bash shell you start.
One workaround is to output export commands, and have the parent shell evaluate this..
thescript.py:
import pipes
import random
r = random.randint(1,100)
print("export BLAHBLAH=%s" % (pipes.quote(str(r))))
..and the bash alias (the same can be done in most shells.. even tcsh!):
alias setblahblahenv="eval $(python thescript.py)"
Usage:
$ echo $BLAHBLAH
$ setblahblahenv
$ echo $BLAHBLAH
72
You can output any arbitrary shell code, including multiple commands like:
export BLAHBLAH=23 SECONDENVVAR='something else' && echo 'everything worked'
Just remember to be careful about escaping any dynamically created output (the pipes.quote function is good for this)
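On Python 3, shlex.quote is the usual replacement for pipes.quote (the pipes module has since been deprecated and removed); a minimal sketch of the same script:
import random
import shlex

r = random.randint(1, 100)
print("export BLAHBLAH=%s" % shlex.quote(str(r)))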
If you set environment variables within a python script (or any other script or program), it won't affect the parent shell.
Edit clarification:
So the answer to your question is yes, it is true.
You can however export from within a shell script and source it by using the dot invocation
in fooexport.sh
export FOO="bar"
at the command prompt
$ . ./fooexport.sh
$ echo $FOO
bar
It's not generally possible. The new process created for python cannot affect its parent process' environment. Neither can the parent affect a child that is already running, but the parent gets to set up the child's environment as part of new process creation.
Perhaps you can set them in .bashrc, .profile or the equivalent "runs on login" or "runs on every new terminal session" script in MacOS.
You can also have python start the simulation program with the desired environment. (use the env parameter to subprocess.Popen (http://docs.python.org/library/subprocess.html) )
import subprocess, os
os.chdir('/home/you/desired/directory')
subprocess.Popen(['desired_program_cmd', 'args', ...], env=dict(SOMEVAR='a_value') )
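Note that env=dict(SOMEVAR='a_value') replaces the child's entire environment. If you only want to add a variable on top of the current one, a common pattern (a sketch, reusing the hypothetical desired_program_cmd) is:
import os
import subprocess

child_env = dict(os.environ, SOMEVAR='a_value')   # inherit everything, add/override SOMEVAR
subprocess.Popen(['desired_program_cmd'], env=child_env)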
Or you could have python write out a shell script like this to a file with a .sh extension:
export SOMEVAR=a_value
cd /home/you/desired/directory
./desired_program_cmd
and then chmod +x it and run it from anywhere.
What I like to do is use /usr/bin/env in a shell script to "wrap" my command line when I find myself in similar situations:
#!/bin/bash
/usr/bin/env NAME1="VALUE1" NAME2="VALUE2" ${*}
So let's call this script "myappenv". I put it in my $HOME/bin directory which I have in my $PATH.
Now I can invoke any command using that environment by simply prepending "myappenv" as such:
myappenv dosometask -xyz
Other posted solutions work too, but this is my personal preference. One advantage is that the environment is transient, so if I'm working in the shell only the command I invoke is affected by the altered environment.
Modified version based on new comments
#!/bin/bash
/usr/bin/env G4WORKDIR=$PWD ${*}
You could wrap this all up in an alias too. I prefer the wrapper script approach since I tend to have other environment prep in there too, which makes it easier for me to maintain.
Building on Benson's answer, the best hack-around is to create a simple bash function to preserve arguments:
upsert-env-var (){ eval $(python upsert_env_var.py $*); }
You can do whatever you want in your python script with the arguments. To simply add a variable, use something like:
import os
import sys

var = sys.argv[1]
val = sys.argv[2]
if os.environ.get(var, None):
    print("export %s=%s:%s" % (var, val, os.environ[var]))
else:
    print("export %s=%s" % (var, val))
Usage:
upsert-env-var VAR VAL
As others have pointed out, the reason this doesn't work is that environment variables live in a per-process memory space and thus die when the Python process exits.
They point out that a solution to this is to define an alias in .bashrc to do what you want such as this:
alias export_my_program='export MY_VAR=`my_program`'
However, there's another (a tad hacky) method which does not require you to modify .bashrc, nor does it require you to have my_program in $PATH (or to specify the full path to it in the alias). The idea is to run the program in Python if it is invoked normally (./my_program), but in Bash if it is sourced (source my_program). (Using source on a script does not spawn a new process and thus does not kill environment variables created within.) You can do that as follows:
my_program.py:
#!/usr/bin/env python3
_UNUSED_VAR=0
_UNUSED_VAR=0 \
<< _UNUSED_VAR
#=======================
# Bash code starts here
#=======================
'''
_UNUSED_VAR
export MY_VAR=`$(dirname $0)/my_program.py`
echo $MY_VAR
return
'''
#=========================
# Python code starts here
#=========================
print('Hello environment!')
Running this in Python (./my_program.py), the first 3 lines will not do anything useful and the triple-quotes will comment out the Bash code, allowing Python to run normally without any syntax errors from Bash.
Sourcing this in bash (source my_program.py), the heredoc (<< _UNUSED_VAR) is a hack used to "comment out" the first-triple quote, which would otherwise be a syntax error. The script returns before reaching the second triple-quote, avoiding another syntax error. The export assigns the result of running my_program.py in Python from the correct directory (given by $(dirname $0)) to the environment variable MY_VAR. echo $MY_VAR prints the result on the command-line.
Example usage:
$ source my_program.py
Hello environment!
$ echo $MY_VAR
Hello environment!
However, if run normally the script will still do everything it did before, except exporting the environment variable:
$ ./my_program.py
Hello environment!
$ echo $MY_VAR
<-- Empty line
As noted by other authors, the memory is thrown away when the Python process exits. But during the python process, you can edit the running environment. For example:
>>> os.environ["foo"] = "bar"
>>> import subprocess
>>> subprocess.call(["printenv", "foo"])
bar
0
>>> os.environ["foo"] = "foo"
>>> subprocess.call(["printenv", "foo"])
foo
0

Calling alias Command from python script

I need to run an OpenFOAM command from an automated python script.
My python code contains the lines
subprocess.Popen(['OF23'], shell=True)
subprocess.Popen(['for i in *; do surfaceConvert $i file_path/$i.stlb; done'], shell=True)
where OF23 is a shell alias defined as
alias OF23='export PATH=/usr/lib64/openmpi/bin/:$PATH;export LD_LIBRARY_PATH=/usr/lib64/openmpi/lib/:$LD_LIBRARY_PATH;source /opt/OpenFOAM/OpenFOAM-2.3.x/etc/bashrc'
This script runs the OpenFOAM command in the terminal, and file_path is the destination for the stl files that are converted to binary format.
But when I run the script, I get an error that 'OF23' is not defined.
How do I make my script run the alias command and then perform the OpenFOAM file conversion command?
That's not going to work, even once you've resolved the alias problem. Each Python subprocess.Popen is run in a separate subshell, so the effects of executing OF23 won't persist to the second subprocess.Popen.
Here's a brief demo:
import subprocess
subprocess.Popen('export ATEST="Hello";echo "1 $ATEST"', shell=True)
subprocess.Popen('echo "2 $ATEST"', shell=True)
output
1 Hello
2
So whether you use the alias, or just execute the aliased commands directly, you'll need to combine your commands into one subprocess.Popen call.
Eg:
subprocess.Popen('''export PATH=/usr/lib64/openmpi/bin/:$PATH;
export LD_LIBRARY_PATH=/usr/lib64/openmpi/lib/:$LD_LIBRARY_PATH;
source /opt/OpenFOAM/OpenFOAM-2.3.x/etc/bashrc;
for i in *;
do surfaceConvert $i file_path/$i.stlb;
done''', shell=True)
I've used a triple-quoted string so I can insert linebreaks, to make the shell commands easier to read.
Obviously, I can't test that exact command sequence on my machine, but it should work.
You need to issue shopt -s expand_aliases to activate alias expansion. From bash(1):
Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt [...]
If that does not help, check if the shell executed from your Python program is actually Bash (e.g. by echoing $BASH).
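A quick check from Python itself (just a sketch): with shell=True the command goes through /bin/sh, and $BASH is only set when that shell is actually bash.
import subprocess
subprocess.call('echo "shell: $0, BASH: $BASH"', shell=True)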
If your command may use bash-isms then you could pass the executable parameter, otherwise /bin/sh is used. To expand aliases, you could use @Michael Jaros' suggestion:
#!/usr/bin/env python
import subprocess
subprocess.check_call("""
shopt -s expand_aliases
OF23
for i in *; do surfaceConvert $i file_path/$i.stlb; done
"""], shell=True, executable='/bin/bash')
If you already have a working bash-script then just call it as is.
Though to make it more robust and maintainable, you could convert to Python parts that provide the most benefit e.g., here's how you could emulate the for-loop:
#!/usr/bin/env python
import os
import subprocess

for entry in os.listdir('.'):
    subprocess.check_call(['/path/to/surfaceConvert', entry,
                           'file_path/{entry}.stlb'.format(entry=entry)])
It allows filenames to contain shell meta-characters such as spaces.
To configure the environment for a child process, you could use Popen's env parameter e.g., env=dict(os.environ, ENVVAR='value').
It is possible to emulate the source bash command in Python, but you should probably leave the parts that depend on it in the bash script.
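One way to emulate source (a hedged sketch, using env -0 from GNU coreutils): run the script in a bash child, dump the resulting environment, and copy it back into os.environ:
import os
import subprocess

# source the script in a bash child, then dump its environment;
# env -0 separates entries with NUL bytes so values containing newlines survive
out = subprocess.check_output(
    ['bash', '-c', 'source /opt/OpenFOAM/OpenFOAM-2.3.x/etc/bashrc && env -0'])

for entry in out.split(b'\0'):
    if b'=' in entry:
        key, _, value = entry.partition(b'=')
        os.environ[key.decode()] = value.decode()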


Error in check_call() subprocess, executing 'mv' unix command: "Syntax error: '(' unexpected"

I'm making a python script for Travis CI.
.travis.yml
...
script:
- support/travis-build.py
...
The python file travis-build.py is something like this:
#!/usr/bin/env python
from subprocess import check_call
...
check_call(r"mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder", shell=True)
...
When the Travis build reaches that line, I get an error:
/bin/sh: 1: Syntax error: "(" unexpected
I've tried a lot of different ways to write it, but I get the same result. Any ideas?
Thanks in advance!
Edit
My current directory layout:
- my_project/final_folder/
- cmake-3.0.2-Darwin64-universal/
- fileA
- fileB
- fileC
With this command I'm trying to move all the current files fileA, fileB and fileC, excluding the my_project and cmake-3.0.2-Darwin64-universal folders, into ./my_project/final_folder. If I execute this command in a Linux shell it does what I want, but not through check_call().
Note: I can't move the files one by one, because there are many others
I don't know which shell Travis is using by default because I don't specify it; I only know that if I write the command in my .travis.yml:
.travis.yml
...
script:
# Here is the previous Travis code
- mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder
...
It works. But if I use the script, it fails.
I found this command from the following issue:
How to use 'mv' command to move files except those in a specific directory?
You're using the bash feature extglob, to try to exclude the files that you're specifying. You'll need to enable it in order to have it exclude the two entries you're specifying.
The python subprocess module explicitly uses /bin/sh when you use shell=True, which doesn't enable the use of bash features like this by default (it's a compliance thing to make it more like original sh).
If you want to get bash to interpret the command; you have to pass it to bash explicitly, for example using:
subprocess.check_call(["bash", "-O", "extglob", "-c", "mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder"])
I would not choose to do the job in this manner, though.
Let me try again: in which shell do you expect your syntax !(...) to work? Is it bash? Is it ksh? I have never used it, and a quick search for a corresponding bash feature led nowhere. I suspect your syntax is just wrong, which is what the error message is telling you. In that case, your problem is entirely independent from python and the subprocess module.
If a special shell you have on your system supports this syntax, you need to make sure that Python is using the same shell when invoking your command. It tells you which shell it has been using: /bin/sh. This is usually just a link to the real shell executable. Does it point to the same shell you have tested your command in?
Edit: the SO solution you referenced contains the solution in the comments:
Tip: Note however that using this pattern relies on extglob. You can
enable it using shopt -s extglob (If you want extended globs to be
turned on by default you can add shopt -s extglob to .bashrc)
Just to demonstrate that different shells might deal with your syntax in different ways, first using bash:
$ !(uname)
-bash: !: event not found
And then, using /bin/dash:
$ !(uname)
Linux
The argument to a subprocess.something method must be a list of command line arguments. Use e.g. shlex.split() to make the string be split into correct command line arguments:
import shlex, subprocess
subprocess.check_call( shlex.split("mv !(...)") )
EDIT:
So, the goal is to move files/directories, with the exception of some file(s)/directory(ies). By playing around with bash, I could get it to work like this:
mv `ls | grep -v -e '\(exclusion1\|exclusion2\)'` my_project
So in your situation that would be:
mv `ls | grep -v -e '\(myproject\|cmake-3.0.2-Darwin64-universal\)'` my_project
This could go into the subprocess.check_call(..., shell=True) and it should do what you expect it to do.
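If you'd rather skip the shell entirely, the same move-everything-except-some-entries job can be done in Python with the standard library (a sketch, using the directory names from the question):
import os
import shutil

EXCLUDE = {'my_project', 'cmake-3.0.2-Darwin64-universal'}

for entry in os.listdir('.'):
    if entry not in EXCLUDE:
        shutil.move(entry, os.path.join('my_project', 'final_folder', entry))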

How to call a shell script function/variable from python?

Is there any way to call a shell script and use the functions/variable defined in the script from python?
The script is unix_shell.sh
#!/bin/bash
function foo
{
...
}
Is it possible to call this function foo from python?
Solution:
For functions: Convert Shell functions to python functions
For shell local variables (non-exported), run this command in the shell just before calling the python script:
export $(set | tr '\n' ' ')
For shell global variables (exported from the shell), in python you can:
import os
print(os.environ["VAR1"])
Yes, in a similar way to how you would call it from another bash script:
import subprocess
subprocess.check_output(['bash', '-c', 'source unix_shell.sh && foo'])
This can be done with subprocess. (At least this was what I was trying to do when I searched for this)
Like so:
output = subprocess.check_output(['bash', '-c', 'source utility_functions.sh; get_new_value 5'])
where utility_functions.sh looks like this:
#!/bin/bash
function get_new_value
{
let "new_value=$1 * $1"
echo $new_value
}
Here's how it looks in action...
>>> import subprocess
>>> output = subprocess.check_output(['bash', '-c', 'source utility_functions.sh; get_new_value 5'])
>>> print(output)
b'25\n'
No, that's not possible. You can execute a shell script, pass parameters on the command line, and it could print data out, which you could parse from Python.
But that's not really calling the function. That's still executing bash with options and getting a string back on stdio.
That might do what you want, but it's probably not the right way to do it. There isn't much that Bash can do that Python can't. Implement the function in Python instead.
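For example, the get_new_value helper used in the answer above is trivial to port (a sketch):
def get_new_value(x):
    # Python equivalent of the get_new_value bash function: square the argument
    return x * x

print(get_new_value(5))  # 25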
With the help of above answer and this answer, I come up with this:
import subprocess
command = 'bash -c "source ~/.fileContainingTheFunction && theFunction"'
stdout = subprocess.getoutput(command)
print(stdout)
I'm using Python 3.6.5 in Ubuntu 18.04 LTS.
I do not know too much about python, but if you use export -f foo after the shell script function definition, then if you start a sub-bash, the function can be called. Without export, you need to run the shell script as . script.sh inside the sub-bash started from python, but that will run everything in it and define all the functions and all the variables.
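A small sketch of that idea, assuming the unix_shell.sh from the question: source the script and export -f the function in the parent shell, then any bash child started by Python can call it directly.
# in the parent shell, before starting python:
#   source unix_shell.sh
#   export -f foo
import subprocess
subprocess.call(['bash', '-c', 'foo'])   # this bash inherits the exported function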
You could separate each function into their own bash file. Then use Python to pass the right parameters to each separate bash file.
This may be easier than just re-writing the bash functions in Python yourself.
You can then call these functions using
import subprocess
subprocess.call(['bash', 'function1.sh'])
subprocess.call(['bash', 'function2.sh'])
# etc. etc.
You can use subprocess to pass parameters too.
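For instance (a sketch; function1.sh is the hypothetical script from above), positional parameters show up as $1, $2, ... inside the script:
import subprocess
# inside function1.sh the arguments are available as "$1" and "$2"
subprocess.call(['bash', 'function1.sh', 'first-arg', 'second-arg'])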
