How to source a script via Python

I can source a bash script (without a shebang) easily enough with the bash command in a terminal, but when I try to do the same via Python with
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell = True)
or
sourcevars = [". /etc/openvpn/easy-rsa/vars"]
runSourcevars = subprocess.Popen(sourcevars, shell = True)
I receive:
Please source the vars script first (i.e. "source ./vars")
Make sure you have edited it to reflect your configuration.
What's the matter, and how do I do it correctly? I've read some topics here (e.g. here) but could not solve my problem using the given advice. Please explain with examples.
UPDATED:
# os.chdir = ('/etc/openvpn/easy-rsa')
initvars = "cd /etc/openvpn/easy-rsa && . ./vars && ./easy-rsa ..."
# initvars = "cd /etc/openvpn/easy-rsa && . ./vars"
# initvars = [". /etc/openvpn/easy-rsa/vars"]
cleanall = ["/etc/openvpn/easy-rsa/clean-all"]
# buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
# buildkey = ["printf '\n\n\n\n\n\n\n\n\n\nyes\n ' | /etc/openvpn/easy-rsa/build-key AAAAAA"]
# buildca = "cd /etc/openvpn/easy-rsa && printf '\n\n\n\n\n\n\n\n\n' | ./build-ca"
runInitvars = subprocess.Popen(initvars, shell = True)
# runInitvars = subprocess.Popen(initvars,stdout=subprocess.PIPE, shell = True, executable="/bin/bash")
runCleanall = subprocess.Popen(cleanall , shell=True)
# runBuildca = subprocess.Popen(buildca , shell=True)
# runBuildca.communicate()
# runBuildKey = subprocess.Popen(buildkey, shell=True )
UPDATE 2
buildca = ["printf '\n\n\n\n\n\n\n\n\n' | /etc/openvpn/easy-rsa/build-ca"]
runcommands = subprocess.Popen(initvars+cleanall+buildca, shell = True)

There's absolutely nothing wrong with this in and of itself:
# What you're already doing -- this is actually fine!
sourcevars = "cd /etc/openvpn/easy-rsa && . ./vars"
runSourcevars = subprocess.Popen(sourcevars, shell=True)
# ...*however*, it won't have any effect at all on this:
runOther = subprocess.Popen('./easy-rsa build-key yadda yadda', shell=True)
However, if you subsequently try to run a second subprocess.Popen(..., shell=True) command, you'll see that it doesn't have any of the variables set by sourcing that configuration.
This is entirely normal and expected behavior: The entire point of using source is to modify the state of the active shell; each time you create a new Popen object with shell=True, it's starting a new shell -- their state isn't carried over.
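To see this concretely, here is a minimal sketch (with a throwaway FOO variable) showing that a variable set in one shell=True call is invisible to the next:
import subprocess
# The first shell sets a variable, then exits -- taking its state with it.
subprocess.Popen('FOO=bar', shell=True).wait()
# The second call is a brand-new shell, so FOO is empty here.
subprocess.Popen('echo "FOO is: $FOO"', shell=True).wait()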
Thus, combine into a single call:
prefix = "cd /etc/openvpn/easy-rsa && . ./vars && "
cmd = "/etc/openvpn/easy-rsa/clean-all"
runCmd = subprocess.Popen(prefix + cmd, shell=True)
...such that you're using the results of sourcing the script in the same shell invocation as that in which you actually source the script.
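If you need those variables across several separate calls, another workaround is to source the script once in a throwaway shell, dump the resulting environment, and fold it into os.environ so that every later child process inherits it. A sketch, assuming a GNU env that supports -0 and that /etc/openvpn/easy-rsa/vars exists:
import os
import subprocess

# Source the script once and print the resulting environment,
# NUL-separated so values containing newlines survive.
dump = subprocess.check_output(
    "cd /etc/openvpn/easy-rsa && . ./vars && env -0",
    shell=True, executable="/bin/bash")
for pair in dump.split(b"\0"):
    if b"=" in pair:
        key, _, value = pair.partition(b"=")
        os.environ[key.decode()] = value.decode()

# Child processes started from now on inherit the sourced variables.
subprocess.Popen("/etc/openvpn/easy-rsa/clean-all", shell=True)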
Alternately (and this is what I'd do), require your Python script to be invoked by a shell which already has the necessary variables in its environment. Thus:
# ask your users to do this
set -a; . ./vars; ./yourPythonScript
...and you can error out very easily if people don't do so:
import os, sys
if 'EASY_RSA' not in os.environ:
    print >>sys.stderr, "ERROR: Source vars before running this script"
    sys.exit(1)

run cmake command via subprocess.Popen or subprocess.run [duplicate]

If I run echo a; echo b in bash, both commands are run. However, if I use subprocess, then only the first command is run, printing out the whole of the rest of the line as its arguments.
The code below echoes a; echo b instead of a b. How do I get it to run both commands?
import subprocess, shlex
def subprocess_cmd(command):
    process = subprocess.Popen(shlex.split(command), stdout=subprocess.PIPE)
    proc_stdout = process.communicate()[0].strip()
    print proc_stdout
subprocess_cmd("echo a; echo b")
You have to use shell=True in subprocess, and skip the shlex.split:
import subprocess
command = "echo a; echo b"
ret = subprocess.run(command, capture_output=True, shell=True)
# before Python 3.7:
# ret = subprocess.run(command, stdout=subprocess.PIPE, shell=True)
print(ret.stdout.decode())
returns:
a
b
I just stumbled on a situation where I needed to run a bunch of lines of bash code (not separated by semicolons) from within Python. In this scenario the proposed solutions do not help. One approach would be to save the code to a file and then run it with Popen, but that wasn't possible in my situation.
What I ended up doing is something like:
commands = '''
echo "a"
echo "b"
echo "c"
echo "d"
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = process.communicate(commands)
print out
So I first create the child bash process, and afterwards I tell it what to execute. This approach removes the limitations of passing the command directly to the Popen constructor.
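For Python 3, a sketch of the same stdin-feeding idea using subprocess.run (text= and capture_output= need Python 3.7; on older 3.x use universal_newlines=True and stdout=subprocess.PIPE):
import subprocess

commands = '''
echo "a"
echo "b"
'''
# Start bash with no arguments and hand it the whole script on stdin.
result = subprocess.run(['/bin/bash'], input=commands,
                        capture_output=True, text=True)
print(result.stdout)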
Join commands with "&&".
os.system('echo a > outputa.txt && echo b > outputb.txt')
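The same one-liner works with subprocess if you prefer it over os.system (a small sketch using subprocess.run, which needs Python 3.5+):
import subprocess
# '&&' runs the second command only when the first exits with status 0.
subprocess.run('echo a > outputa.txt && echo b > outputb.txt', shell=True)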
If you're only running the commands in one shot, then you can just use the subprocess.check_output convenience function:
def subprocess_cmd(command):
    output = subprocess.check_output(command, shell=True)
    print output
>>> command = "echo a; echo b"
>>> shlex.split(command)
['echo', 'a;', 'echo', 'b']
So the problem is that the shlex module does not handle ";": it splits a single command line into an argument list, it does not split a string into separate commands.
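By contrast, shlex.split is the right tool when you have one command and want its argv list for a call without shell=True; a minimal sketch:
import shlex
import subprocess

# One command line, one argv list: echo receives the single argument "a b".
subprocess.run(shlex.split('echo "a b"'))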
I got errors like this when I used capture_output=True (the argument was only added in Python 3.7):
TypeError: __init__() got an unexpected keyword argument 'capture_output'
After making the changes below, it works fine:
import subprocess
command = '''ls'''
result = subprocess.run(command, stdout=subprocess.PIPE,shell=True)
print(result.stdout.splitlines())
import subprocess
cmd = "vsish -e ls /vmkModules/lsom/disks/ | cut -d '/' -f 1 | while read diskID ; do echo $diskID; vsish -e cat /vmkModules/lsom/disks/$diskID/virstoStats | grep -iE 'Delete pending |trims currently queued' ; echo '====================' ;done ;"
def subprocess_cmd(command):
    process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
    proc_stdout = process.communicate()[0].strip()
    for line in proc_stdout.decode().split('\n'):
        print(line)
subprocess_cmd(cmd)

Output not getting redirected properly

I am running this command on a bash console through iTerm:
{ cd /usr/local/path/to/code; echo "hi1"; sudo chmod 777 /tmp/dissolve.log; echo "hi2"; python someapp/runner.py dissolve; echo "hi3"; } > /tmp/dissolve.log &
On tailing the file I get:
tail: /tmp/dissolve.log: file truncated
hi1
hi2
I am not able to figure out why I am not getting the output of python someapp/runner.py dissolve; when I do cmd + c, the expected output appears in the tail log.
code snippet from runner.py:
if __name__ == '__main__':
    program_name = sys.argv[1]
    if program_name == 'dissolve':
        obj = SomeClass()    # this is properly imported
        obj.some_function()  # this has lots of `print` statements, which I intended to catch in '/tmp/dissolve.log'
Is the initial print inside some_function() sending the values somewhere other than /tmp/dissolve.log?
Any suggestion why this could be happening?
This seems like a buffering issue: because the output is going to a file rather than a terminal, Python block-buffers its stdout instead of line-buffering it. You can force line buffering with stdbuf (or pass -u to python, which disables its output buffering entirely), like this:
{ cd /usr/local/path/to/code;
echo "hi1";
sudo chmod 777 /tmp/dissolve.log;
echo "hi2";
stdbuf -oL python someapp/runner.py dissolve;
echo "hi3"; } > /tmp/dissolve.log &

Python: subprocess with pipes, exit on failure

I'm trying to prevent uploads to S3 if any previous pipelined command fails; unfortunately, neither of these two methods works as expected:
Shell pipeline
for database in sorted(databases):
    cmd = "bash -o pipefail -o errexit -c 'mysqldump -B {database} | gpg -e -r {GPGRCPT} | gof3r put -b {S3_BUCKET} -k {database}.sql.e'".format(database = database, GPGRCPT = GPGRCPT, S3_BUCKET = S3_BUCKET)
    try:
        subprocess.check_call(cmd, shell = True, executable="/bin/bash")
    except subprocess.CalledProcessError as e:
        print e
Popen with PIPEs
for database in sorted(databases):
    try:
        cmd_mysqldump = "mysqldump {database}".format(database = database)
        p_mysqldump = subprocess.Popen(shlex.split(cmd_mysqldump), stdout=subprocess.PIPE)

        cmd_gpg = "gpg -a -e -r {GPGRCPT}".format(GPGRCPT = GPGRCPT)
        p_gpg = subprocess.Popen(shlex.split(cmd_gpg), stdin=p_mysqldump.stdout, stdout=subprocess.PIPE)
        p_mysqldump.stdout.close()

        cmd_gof3r = "gof3r put -b {S3_BUCKET} -k {database}.sql.e".format(S3_BUCKET = S3_BUCKET, database = database)
        p_gof3r = subprocess.Popen(shlex.split(cmd_gof3r), stdin=p_gpg.stdout, stderr=open("/dev/null"))
        p_gpg.stdout.close()
    except subprocess.CalledProcessError as e:
        print e
I tried something like this with no luck:
....
if p_gpg.returncode == 0:
    cmd_gof3r = "gof3r put -b {S3_BUCKET} -k {database}.sql.e".format(S3_BUCKET = S3_BUCKET, database = database)
    p_gof3r = subprocess.Popen(shlex.split(cmd_gof3r), stdin=p_gpg.stdout, stderr=open("/dev/null"))
    p_gpg.stdout.close()
...
Basically gof3r is streaming data to S3 even if there are errors, for instance when I intentionally change mysqldump -> mysqldumpp to generate an error.
I had the exact same question, and I managed it with:
cmd = "cat file | tr -d '\\n'"
subprocess.check_call( [ '/bin/bash' , '-o' , 'pipefail' , '-c' , cmd ] )
Thinking back, and searching in my code, I used another method too:
subprocess.check_call( "ssh -c 'make toto 2>&1 | tee log.txt ; exit ${PIPESTATUS[0]}'", shell=True )
All commands in a pipeline run concurrently, e.g.:
$ nonexistent | echo it is run
The echo is always run even though the nonexistent command does not exist.
pipefail affects the exit status of the pipeline as a whole -- it does not make gof3r exit any sooner.
errexit has no effect because there is a single pipeline here.
If you mean that you don't want to start the next pipeline when the one from the previous iteration fails, then put break after print e in the exception handler.
p_gpg.returncode is None while gpg is running. If you don't want gof3r to run when gpg fails, then you have to save gpg's output somewhere else first, e.g., in a file:
filename = 'gpg.out'
for database in sorted(databases):
    pipeline_no_gof3r = ("bash -o pipefail -c 'mysqldump -B {database} | "
                         "gpg -e -r {GPGRCPT}'").format(**vars())
    with open(filename, 'wb', 0) as file:
        if subprocess.call(shlex.split(pipeline_no_gof3r), stdout=file):
            break  # don't upload to S3, don't run the next database pipeline
    # upload the file on success
    gof3r_cmd = 'gof3r put -b {S3_BUCKET} -k {database}.sql.e'.format(**vars())
    with open(filename, 'rb', 0) as file:
        if subprocess.call(shlex.split(gof3r_cmd), stdin=file):
            break  # don't run the next database pipeline

Run tcsh script without interruption after being called from Python

I am calling a tcsh script from my Python program. The tcsh script takes 10-12 minutes to complete, but when I call it from Python, Python seems to interrupt the script before it has executed completely. Here is the code snippet:
import subprocess
import os
os.chdir(dir_path_forCD)
subprocess.call('/home/sdcme/bin/nii_mdir_sdcme %s %s' % (a, a), shell=True)
print(a+1);
Can someone point out how I can call the nii_mdir_sdcme script from Python without interrupting (killing) it before it has executed completely?
The complete script is as follows:
#!/usr/bin/python
import subprocess
import os
import dicom
import time

dire = '.'
directories = subprocess.check_output(
    ['find', '/Users/sdb99/Desktop/dicom', '-maxdepth', '1', '-type', 'd', '-mmin', '-660', '-type', 'd', '-mmin', '+5']
).splitlines()

number_of_directories = len(directories)

b_new = '.'
for n in range(1, number_of_directories):
    dire_str = (directories[n])
    dire_str = str(dire_str)  # [2:-1]
    print(dire_str)
    for dirpath, dirnames, filenames in os.walk(dire_str, topdown=True):
        a = 1
        for filename in filenames:
            print(dirpath)
            if filename[-4:] == '.dcm':
                firstfilename = os.path.join(dirpath, filename)
                dir_path_forCD = dirpath
                dcm_info = dicom.read_file(firstfilename, force=True)
                if dcm_info[0x0019, 0x109c].value == 'epiRTme':
                    os.chdir(dir_path_forCD)
                    subprocess.call('/home/sdcme/bin/nii_mdir_sdcme %s %s' % (a, a), shell=True)
                    print(a+1)
                break
            break
        break
tcsh script: nii_mdir_sdcme
#!/bin/tcsh
if ($#argv < 2) then
    echo "Usage: nii_mdir_sdcme start_dir# end_dir#"
    exit
else
    set start = $argv[1]
    set end = $argv[2]

    if ( ! -d ./medata ) then
        sudo mkdir ./medata
    endif
    sudo chown sdcme ./medata
    sudo chgrp users ./medata

    set i = $start
    while ( $i <= $end )
        echo " "
        if ( $i < 10 ) then
            echo "Entering 000$i..."
            cd 000$i
            sudo chmod 777 .
            niidicom_sdcme run0$i
            #mv *+orig.* ../medata
            sudo chmod 755 .
        else
            echo "Entering 00$i..."
            cd 00$i
            sudo chmod 777 .
            niidicom_sdcme run$i
            #mv *+orig.* ../medata
            sudo chmod 755 .
        endif
        cd ..
        @ i++
    end
endif
The problem was with the counter a that I was passing to the tcsh script.
It now seems the problem was never Python interrupting the tcsh script: subprocess lets the tcsh script run without interruption with shell=True.

Print executed command for Python subprocess.Popen

I have a script that is automating author re-writes on a number of git repositories.
def filter_history(old, new, name, repoPath):
    command = """--env-filter '
an="$GIT_AUTHOR_NAME"
am="$GIT_AUTHOR_EMAIL"
cn="$GIT_COMMITTER_NAME"
cm="$GIT_COMMITTER_EMAIL"
if [[ "$GIT_COMMITTER_NAME" = "|old|" ]]
then
    cn="|name|"
    cm="|new|"
fi
if [[ "$GIT_AUTHOR_NAME" = "|old|" ]]
then
    an="|name|"
    am="|new|"
fi
export GIT_AUTHOR_NAME="$an"
export GIT_AUTHOR_EMAIL="$am"
export GIT_COMMITTER_NAME="$cn"
export GIT_COMMITTER_EMAIL="$cm"
'
"""
    # Do string replace
    command = command.replace("|old|", old)
    command = command.replace("|new|", new)
    command = command.replace("|name|", name)

    print "git filter-branch -f " + command
    process = subprocess.Popen(['git filter-branch -f', command], cwd=os.path.dirname(repoPath), shell=True)
    process.wait()
The command executes fine, but tells me that nothing changed in the repo history. However, if I take the command that is printed out (which should be what is being executed), drop it in a shell script, and execute it, it changes the history fine. I think that the command is somehow not being executed correctly. Is there any way for me to see exactly what command the subprocess module is executing?
When you use shell=True, subprocess.Popen expects a string as its first argument. It is better not to use shell=True if you can help it, since it can be a security risk (see the warning in the docs).
When you omit shell = True, or use shell = False, subprocess.Popen expects a list of arguments. You can generate that list of arguments from a string using shlex.split:
import os
import shlex
import subprocess

def filter_history(old, new, name, repoPath):
    """Change author info
    """
    # http://help.github.com/change-author-info/
    # http://stackoverflow.com/a/3880493/190597
    command = """git filter-branch -f --env-filter '
an="$GIT_AUTHOR_NAME"
am="$GIT_AUTHOR_EMAIL"
cn="$GIT_COMMITTER_NAME"
cm="$GIT_COMMITTER_EMAIL"
if [[ "$GIT_COMMITTER_NAME" = "{old}" ]]
then
    cn="{name}"
    cm="{new}"
fi
if [[ "$GIT_AUTHOR_NAME" = "{old}" ]]
then
    an="{name}"
    am="{new}"
fi
export GIT_AUTHOR_NAME="$an"
export GIT_AUTHOR_EMAIL="$am"
export GIT_COMMITTER_NAME="$cn"
export GIT_COMMITTER_EMAIL="$cm"
'
""".format(old = old, new = new, name = name)
    process = subprocess.Popen(
        shlex.split(command),
        cwd = os.path.dirname(repoPath))
    process.communicate()
If your application is running in a Windows environment, as stated in the following answer, subprocess has an undocumented function called subprocess.list2cmdline which you could use. subprocess.list2cmdline translates a sequence of arguments into a command line string, using the same rules as the MS C runtime.
If you are using Python 3.3 or later, you can also get the args list directly from the subprocess object using .args:
import subprocess
process = subprocess.Popen(...)
subprocess.list2cmdline(process.args)
Since Python 3.8 there is also the possibility to use the shlex.join() function.
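A small sketch (Python 3.8+, with a made-up echo command) of using it for logging:
import shlex
import subprocess

process = subprocess.Popen(['echo', 'hello world'])
# Render the argv list back into a copy-pasteable, shell-escaped command line.
print(shlex.join(process.args))   # echo 'hello world'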
Keep in mind though that subprocess does everything via IPC, so the best approach would be to simply examine the args list, as they will be passed to argv in the called program.
