Call a bash function within a python script

I have a python script (e.g. test.py) and a commands.txt file which contains a custom bash function (e.g. my_func) along with its parameters, e.g.
my_func arg1 arg2; my_func arg3 arg4; my_func arg5 arg6;
This function is included in the ~/.bash_profile.
What I have tried to do is:
subprocess.call(['.', 'path/to/commands.txt'], shell=True)
I know this is not ideal, in terms of setting the shell argument to True, but I cannot seem to make it work even this way. What I get when I run test.py is:
my_func: command not found

You will need to invoke bash directly, and instruct it to process both files.
At the command-line, this is:
bash -c '. ~/.bash_profile; . commands.txt'
Wrapping it in python:
subprocess.call(['bash', '-c', '. ~/.bash_profile; . commands.txt'])
You could also source ~/.bash_profile at the top of commands.txt. Then you'd be able to run a single file.
It may make sense to extract the function to a separate file, since .bash_profile is intended to be used by login shells, not like this.
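If you go that route, the Python call might look like this (a minimal sketch; ~/my_funcs.sh is a hypothetical file holding the my_func definition):
import subprocess

# Assumption: my_func has been moved from ~/.bash_profile into ~/my_funcs.sh.
# Source the function definitions first, then run the commands file in the same shell.
subprocess.call(['bash', '-c', '. ~/my_funcs.sh; . path/to/commands.txt'])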

If the first line of your commands.txt file has the correct shebang (for example #!/bin/bash), making your file executable (chmod +x commands.txt) will be enough:
subprocess.call("path/to/commands.txt")

Related

Add environment variables in python [duplicate]

I was hoping to write a python script to create some appropriate environmental variables by running the script in whatever directory I'll be executing some simulation code, and I've read that I can't write a script to make these env vars persist in the mac os terminal. So two things:
Is this true?
and
It seems like it would be a useful thing to do; why isn't it possible in general?
You can't do it from python, but some clever bash tricks can do something similar. The basic reasoning is this: environment variables exist in a per-process memory space. When a new process is created with fork() it inherits its parent's environment variables. When you set an environment variable in your shell (e.g. bash) like this:
export VAR="foo"
What you're doing is telling bash to set the variable VAR in its process space to "foo". When you run a program, bash uses fork() and then exec() to run the program, so anything you run from bash inherits the bash environment variables.
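You can see the same inheritance from the Python side (a small sketch; the variable name is arbitrary):
import os
import subprocess
import sys

# Set a variable in this (parent) process; any child spawned afterwards inherits a copy of it.
os.environ["VAR"] = "foo"
subprocess.call([sys.executable, "-c", "import os; print(os.environ['VAR'])"])  # prints: foo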
Now, suppose you want to create a bash command that sets some environment variable DATA with content from a file in your current directory called ".data". First, you need to have a command to get the data out of the file:
cat .data
That prints the data. Now, we want to create a bash command to set that data in an environment variable:
export DATA=`cat .data`
That command takes the contents of .data and puts it in the environment variable DATA. Now, if you put that inside an alias command, you have a bash command that sets your environment variable:
alias set-data="export DATA=`cat .data`"
You can put that alias command inside the .bashrc or .bash_profile files in your home directory to have that command available in any new bash shell you start.
One workaround is to output export commands and have the parent shell evaluate them.
thescript.py:
import pipes
import random
r = random.randint(1,100)
print("export BLAHBLAH=%s" % (pipes.quote(str(r))))
...and the bash alias (the same can be done in most shells, even tcsh!):
alias setblahblahenv="eval $(python thescript.py)"
Usage:
$ echo $BLAHBLAH
$ setblahblahenv
$ echo $BLAHBLAH
72
You can output any arbitrary shell code, including multiple commands like:
export BLAHBLAH=23 SECONDENVVAR='something else' && echo 'everything worked'
Just remember to be careful about escaping any dynamically created output (pipes.quote is good for this; in Python 3, shlex.quote is the modern equivalent, as the pipes module has since been deprecated).
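A Python 3 version of thescript.py might look like this (a sketch, keeping the BLAHBLAH name from the example above):
import random
import shlex

# Emit an export statement for the parent shell to eval; shlex.quote guards
# against shell-special characters in the value.
r = random.randint(1, 100)
print("export BLAHBLAH=%s" % shlex.quote(str(r)))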
If you set environment variables within a python script (or any other script or program), it won't affect the parent shell.
Edit clarification:
So the answer to your question is yes, it is true.
You can, however, export from within a shell script and source it by using the dot invocation.
in fooexport.sh
export FOO="bar"
at the command prompt
$ . ./fooexport.sh
$ echo $FOO
bar
It's not generally possible. The new process created for python cannot affect its parent process' environment. Neither can the parent affect a child that is already running, but the parent gets to set up the child's environment as part of new process creation.
Perhaps you can set them in .bashrc, .profile or the equivalent "runs on login" or "runs on every new terminal session" script in MacOS.
You can also have python start the simulation program with the desired environment (use the env parameter to subprocess.Popen: http://docs.python.org/library/subprocess.html).
import subprocess, os
os.chdir('/home/you/desired/directory')
subprocess.Popen(['desired_program_cmd', 'args', ...], env=dict(SOMEVAR='a_value') )
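Note that env= replaces the child's entire environment; if you only want to add one variable on top of the current environment, a common pattern is to copy it first (a sketch; the command name is the placeholder from above):
import os
import subprocess

# Copy the current environment and add/override a single variable,
# so PATH and the rest remain available to the child process.
child_env = os.environ.copy()
child_env['SOMEVAR'] = 'a_value'
subprocess.Popen(['desired_program_cmd', 'args'], env=child_env)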
Or you could have python write out a shell script like this to a file with a .sh extension:
export SOMEVAR=a_value
cd /home/you/desired/directory
./desired_program_cmd
and then chmod +x it and run it from anywhere.
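The writing step itself could also be done from Python, along these lines (a sketch; run_simulation.sh is a made-up name, and the script contents are the lines from above):
import os
import stat

script = """#!/bin/bash
export SOMEVAR=a_value
cd /home/you/desired/directory
./desired_program_cmd
"""

# Write the wrapper script and make it executable (the chmod +x step).
with open("run_simulation.sh", "w") as f:
    f.write(script)
os.chmod("run_simulation.sh", os.stat("run_simulation.sh").st_mode | stat.S_IEXEC)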
What I like to do is use /usr/bin/env in a shell script to "wrap" my command line when I find myself in similar situations:
#!/bin/bash
/usr/bin/env NAME1="VALUE1" NAME2="VALUE2" ${*}
So let's call this script "myappenv". I put it in my $HOME/bin directory which I have in my $PATH.
Now I can invoke any command using that environment by simply prepending "myappenv" as such:
myappenv dosometask -xyz
Other posted solutions work too, but this is my personal preference. One advantage is that the environment is transient, so if I'm working in the shell only the command I invoke is affected by the altered environment.
Modified version based on new comments
#!/bin/bash
/usr/bin/env G4WORKDIR=$PWD ${*}
You could wrap this all up in an alias too. I prefer the wrapper script approach since I tend to have other environment prep in there too, which makes it easier for me to maintain.
As answered by Benson, the best hack-around is to create a simple bash function to preserve the arguments:
upsert-env-var (){ eval $(python upsert_env_var.py $*); }
You can do whatever you want in your python script with the arguments. To simply add a variable, use something like:
import os
import sys

var = sys.argv[1]
val = sys.argv[2]
if os.environ.get(var, None):
    print("export %s=%s:%s" % (var, val, os.environ[var]))
else:
    print("export %s=%s" % (var, val))
Usage:
upsert-env-var VAR VAL
As others have pointed out, the reason this doesn't work is that environment variables live in per-process memory space and thus die when the Python process exits.
They point out that a solution to this is to define an alias in .bashrc to do what you want, such as this:
alias export_my_program="export MY_VAR=`my_program`"
However, there's another (a tad hacky) method which does not require you to modify .bashrc, nor does it require you to have my_program in $PATH (or to specify the full path to it in the alias). The idea is to run the program in Python if it is invoked normally (./my_program), but in Bash if it is sourced (source my_program). (Using source on a script does not spawn a new process and thus does not kill environment variables created within.) You can do that as follows:
my_program.py:
#!/usr/bin/env python3
_UNUSED_VAR=0
_UNUSED_VAR=0 \
<< _UNUSED_VAR
#=======================
# Bash code starts here
#=======================
'''
_UNUSED_VAR
export MY_VAR=`$(dirname $0)/my_program.py`
echo $MY_VAR
return
'''
#=========================
# Python code starts here
#=========================
print('Hello environment!')
Running this in Python (./my_program.py), the first 3 lines will not do anything useful and the triple-quotes will comment out the Bash code, allowing Python to run normally without any syntax errors from Bash.
Sourcing this in bash (source my_program.py), the heredoc (<< _UNUSED_VAR) is a hack used to "comment out" the first triple-quote, which would otherwise be a syntax error. The script returns before reaching the second triple-quote, avoiding another syntax error. The export assigns the result of running my_program.py in Python from the correct directory (given by $(dirname $0)) to the environment variable MY_VAR. echo $MY_VAR prints the result on the command-line.
Example usage:
$ source my_program.py
Hello environment!
$ echo $MY_VAR
Hello environment!
However, if run normally, the script will still do everything it did before except exporting the environment variable:
$ ./my_program.py
Hello environment!
$ echo $MY_VAR
<-- Empty line
As noted by other authors, the memory is thrown away when the Python process exits. But during the python process, you can edit the running environment. For example:
>>> import os
>>> os.environ["foo"] = "bar"
>>> import subprocess
>>> subprocess.call(["printenv", "foo"])
bar
0
>>> os.environ["foo"] = "foo"
>>> subprocess.call(["printenv", "foo"])
foo
0

bash wrap a piped command with a python script

Is there a way to create a python script which wraps an entire bash command, including the pipes?
For example, if I have the following simple script
import sys
print sys.argv
and call it like so (from bash or ipython), I get the expected outcome:
[pkerp@pendari trell]$ python test.py ls
['test.py', 'ls']
If I add a pipe, however, the output of the script gets redirected to the pipe sink:
[pkerp@pendari trell]$ python test.py ls > out.txt
And the > out.txt portion is not in sys.argv. I understand that the shell automatically processes this output, but I'm curious if there's a way to force the shell to ignore it and pass it to the process being called.
The point of this is to create something like a wrapper for the shell. I'd like to run the commands regularly, but keep track of the strace output for each command (including the pipes). Ideally I'd like to keep all of the bash features, such as tab-completion and up and down arrows and history search, and then just pass the completed command through a python script which invokes a subprocess to handle it.
Is this possible, or would I have to write my own shell to do this?
Edit
It appears I'm asking the exact same thing as this question.
The only thing you can do is pass the entire shell command as a string, then let Python pass it back to a shell for execution.
$ python test.py "ls > out.txt"
Inside test.py, something like
subprocess.call("strace " + sys.argv[1], shell=True, executable="/bin/bash")
to ensure the entire string is passed to the shell (and bash, specifically).
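Put together, a minimal test.py along these lines would do it (a sketch of the approach above, not a full shell replacement):
import subprocess
import sys

# Everything after the script name is treated as a single shell command line,
# so pipes and redirections survive intact and the whole pipeline runs under strace.
command = " ".join(sys.argv[1:])
subprocess.call("strace " + command, shell=True, executable="/bin/bash")
Invoked as python test.py "ls | wc -l", the entire pipeline is traced.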
Well, I don't quite see what you are trying to do. The general approach would be to give the desired output destination to the script using command line options: python test.py ls --output=out.txt. Incidentally, strace writes to stderr. You could capture everything using strace python test.py > out 2> err if you want to save everything...
Edit: If your script writes to stderr as well you could use strace -o strace_out python test.py > script_out 2> script_err
Edit2: Okay, I understand better what you want. My suggestion is this: Write a bash helper:
function process_and_evaluate()
{
    strace -o /tmp/output/strace_output "$@"
    /path/to/script.py /tmp/output/strace_output
}
Put this in a file like ~/helper.sh. Then open a bash shell and source it using . ~/helper.sh.
Now you can run it like this: process_and_evaluate ls -lA.
Edit3:
To capture output / error you could extend the macro like this:
function process_and_evaluate()
{
    out=$1
    err=$2
    shift 2
    strace -o /tmp/output/strace_output "$@" > "$out" 2> "$err"
    /path/to/script.py /tmp/output/strace_output
}
You would have to use the (less obvious) process_and_evaluate out.txt err.txt ls -lA.
This is the best that I can come up with...
At least in your simple example, you could just run the python script as an argument to echo, e.g.
$ echo $(python test.py ls) > test.txt
$ more test.txt
['test.py','ls']
Enclosing a command in parentheses with a dollar sign first executes the contents, then passes the output as an argument to echo.

running python script for multiple files from windows cmd

I am trying to run a python script from Windows cmd. When I run it under Linux I put
python myscript.py filename??.txt
it goes through files with numbers from filename01.txt to filename18.txt and it works.
I tried to run it from cmd like
python myscript.py filename*.txt
or
python myscript.py filename**.txt
but it didn't work. If I try the script on one single file in Windows cmd, it works.
Do you have any clue where the problem could be?
Thanks!
The Unix shell converts the file path pattern to actual file names, then passes the result to the program (python myscript.py).
But in Windows cmd, this does not happen.
See glob.glob if you want to get the list of files that match the pattern.
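For example, inside myscript.py you could expand the pattern yourself (a small sketch):
import sys
from glob import glob

# On Windows the pattern arrives unexpanded, e.g. "filename*.txt";
# glob() turns it into the matching file names.
for pattern in sys.argv[1:]:
    for filename in glob(pattern):
        print(filename)  # process each file here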
Those wildcards are expanded at "shell (i.e. bash) level" before running your python script.
So the problem doesn't reside in python, but in the "shell" that you are using on Windows.
You could probably try PowerShell for Windows or bash via Cygwin.
try this:
FOR %X IN (filename*.txt) DO (python myscript.py %X)
Edit: you can create a .bat with this and try it.
setlocal EnableDelayedExpansion
set files=
FOR %%X IN (filename*.txt) DO set files=!files! %%X
echo %files%
python myscript.py %files%
From batch file
for %%f in ("filename*.txt") do python myscript.py "%%~nxf"
%%f will get a reference to each of the files. For each of them execute your script. %%~nxf will expand to name and extension of file.
From command line, replace %% with a single %
EDITED - I misunderstood the problem. Next try.
In Windows, there is no default expansion of wildcard arguments (see here). So, to get the same result you will need a batch file. It will concatenate the list of files and pass it to your python script:
@echo off
setlocal enabledelayedexpansion
set "fileList="
for %%f in ("*.txt") do set "fileList=!fileList! "%%f""
python myscript.py !fileList!
endlocal
For more reusable code, use something like the following (script calls are only echoed to the screen to show the effect of the parameters and to avoid unneeded execution; remove the echo when it works as intended):
@echo off
setlocal enableextensions
call :glob "*.txt" true fileList
echo python myscript.py %fileList%
echo.
call :glob "*.txt" false fileList
echo python myscript.py %fileList%
exit /b
:glob pattern useFullPath outputList
setlocal enabledelayedexpansion
if /i "%~2"=="true" (set "_name=%%%%~ff") else (set "_name=%%%%~nxf")
set "_list="
for %%f in ("%~1") do set "_list=!_list! "%_name%""
endlocal & if not "%~3"=="" set "%~3=%_list%"
As falsetru notes, on Windows the shell doesn't expand the wildcards for you, so the correct answer is glob.glob(). You should iterate over all the command line arguments and expand each. This works fine in Linux/UNIX too, because the expansion of an argument without any wildcards in it (which is what the shell gives you) is the unchanged filename. So something like this, using lazy evaluation to handle a potentially large number of args:
from sys import argv
from glob import glob
from itertools import chain, islice
for name in chain.from_iterable(glob(name) for name in islice(argv, 1, None)):
    print(name)  # do something with each file

Shell Script: Execute a python program from within a shell script

I've tried googling the answer but with no luck.
I need to use my work's supercomputer server, but for my python script to run, it must be executed via a shell script.
For example I want job.sh to execute python_script.py
How can this be accomplished?
Just make sure the python executable is in your PATH environment variable, then add this to your script:
python path/to/the/python_script.py
Details:
In the file job.sh, put this
#!/bin/sh
python python_script.py
Execute this command to make the script runnable for you: chmod u+x job.sh
Run it : ./job.sh
Method 1 - Create a shell script:
Suppose you have a python file hello.py
Create a file called job.sh that contains
#!/bin/bash
python hello.py
mark it executable using
$ chmod +x job.sh
then run it
$ ./job.sh
Method 2 (BETTER) - Make the python script itself runnable from the shell:
Modify your script hello.py and add this as the first line
#!/usr/bin/env python
mark it executable using
$ chmod +x hello.py
then run it
$ ./hello.py
Save the following program as print.py:
#!/usr/bin/python3
print('Hello World')
Then in the terminal type:
chmod +x print.py
./print.py
You should be able to invoke it as python scriptname.py e.g.
#!/bin/bash
python /home/user/scriptname.py
Also make sure the script has permissions to run.
You can make it executable by using chmod u+x scriptname.py.
Imho, writing
python /path/to/script.py
Is quite wrong, especially these days. Which python? python2.6? 2.7? 3.0? 3.1? Most of the time you need to specify the python version in the shebang tag of the python file. I encourage using #!/usr/bin/env python2 (or python2.6, python3, or even python3.1) for compatibility.
In that case, it is much better to make the script executable and invoke it directly:
#!/bin/bash
/path/to/script.py
This way the version of python you need is written in only one file. Most systems these days have both python2 and python3, and it can happen that the symlink python points to python3, while most people expect it to point to python2.
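For instance, the Python file itself might pin its interpreter (a trivial sketch; script.py is a made-up name):
#!/usr/bin/env python3
# script.py -- the required interpreter is declared here, once,
# so the calling shell script does not need to know which python to use.
import sys

print(sys.version)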
This works for me:
Create a new shell file, job.sh. Let's say:
touch job.sh and add the command to run the python script (you can even add command line arguments to it; I usually predefine my command line arguments).
chmod +x job.sh
Inside job.sh, add the following calls to your py files, let's say:
python_file.py argument1 argument2 argument3 >> testpy-output.txt && echo "Done with python_file.py"
python_file1.py argument1 argument2 argument3 >> testpy-output.txt && echo "Done with python_file1.py"
Output of job.sh should look like this:
Done with python_file.py
Done with python_file1.py
I usually use this when I have to run multiple python files with different, predefined arguments.
Note: Just a quick heads up on what's going on here:
python_file.py argument1 argument2 argument3 >> testpy-output.txt && echo "completed with python_file.py" .
Here the shell script runs the file python_file.py and passes multiple command-line arguments to it at run time.
This does not necessarily mean you have to pass command line arguments as well.
You can just use it like: python python_file.py, plain and simple.
Next, the >> redirects (appends) the output of this .py file to the testpy-output.txt file.
&& is a logical operator that runs the next command only after the previous one has executed successfully, so the optional echo "completed with python_file.py" will be printed to your cli/terminal at run time.
This works best for me:
Add this at the top of the script:
#!c:/Python27/python.exe
(C:\Python27\python.exe is the path to the python.exe on my machine)
Then run the script via:
chmod +x script-name.py && ./script-name.py
I use this and it works fine
#!/bin/bash
/usr/bin/python python_script.py
Since the other posts say everything (and I stumbled upon this post while looking for the following).
Here is a way to execute a python script from another python script:
Python 2:
execfile("somefile.py", global_vars, local_vars)
Python 3:
with open("somefile.py") as f:
    code = compile(f.read(), "somefile.py", 'exec')
    exec(code, global_vars, local_vars)
and you can supply args by providing some other sys.argv:
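For the argument part, that could look like this (a sketch; the file name and arguments are made up):
import sys

# Pretend somefile.py was called as: somefile.py --input data.txt
saved_argv = sys.argv
sys.argv = ["somefile.py", "--input", "data.txt"]
try:
    with open("somefile.py") as f:
        code = compile(f.read(), "somefile.py", "exec")
    exec(code, {"__name__": "__main__"})
finally:
    sys.argv = saved_argv  # restore the caller's arguments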
Here I have demonstrated an example of running a python script within a shell script. For different purposes you may need to read the output of a shell command, or execute both a python script and a shell command within the same file.
To execute a shell command from python, use the os.system() method. To read the output of a shell command, use os.popen().
The following is an example which greps all processes containing the text sample_program.py. Then, after collecting the process IDs (using python), it kills them all.
#!/usr/bin/python3
import os
# listing all matched processes and taking the output into a variable s
s = os.popen("ps aux | grep 'sample_program.py'").read()
s = '\n'.join([l for l in s.split('\n') if "grep" not in l]) # avoiding killing the grep itself
print("To be killed:")
print(s)
# now manipulating this string s and finding the process IDs and killing them
os.system("kill -9 " + ' '.join([x.split()[1] for x in s.split('\n') if x]))
References:
Execute a python program from within a shell script
Assign output of os.system to a variable and prevent it from being displayed on the screen
If you have a bash script and you need to run a python3 script (with external modules) inside it, I recommend pointing your bash script at the python3 interpreter's full path, like this:
#!/usr/bin/env bash
-- bash code --
/usr/bin/python3 your_python.py
-- bash code --
