I have a Python script that checks my language translation files (in CSV format): it verifies that every line contains a translation for each of the languages listed in the CSV's first, header line. The script lists the files/lines with missing translations; if no problem is found, it prints OK.
My question is the following: how do I call the script from within the makefile and check whether the output was OK? If the script printed anything other than OK, I want make to stop.
Any ideas?
make checks the exit status of each command, not the text it prints. The simplest solution is to call sys.exit(1) in your Python script whenever the check doesn't come out OK.
for example:
targetfile: dependsonfile
	python pythonscript.py -o targetfile dependsonfile
Of course the actual syntax will depend critically on how pythonscript.py is meant to be called.
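For instance, the checking script itself might end with something like this (a minimal sketch; the CSV layout, the file names, and the function name are my assumptions, since the original script wasn't shown):

#!/usr/bin/env python
# check_translations.py -- hypothetical sketch of the checker described in
# the question, reworked to signal failure through its exit status.
import csv
import sys

def missing_translations(path):
    problems = []
    with open(path, newline='') as f:
        rows = csv.reader(f)
        languages = next(rows)   # header line: one column per language
        for lineno, row in enumerate(rows, start=2):
            if len(row) < len(languages) or '' in row:
                problems.append('%s:%d: missing translation' % (path, lineno))
    return problems

if __name__ == '__main__':
    problems = []
    for path in sys.argv[1:]:
        problems.extend(missing_translations(path))
    if problems:
        print('\n'.join(problems))
        sys.exit(1)   # non-zero exit status is what makes make stop
    print('OK')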
If your pythonscript just does a check, you can accomplish that as follows:
makethis: didcheck
	echo "made it" > makethis

didcheck: # dependencies here
	python -c 'import sys; sys.exit(1)'
	touch didcheck
then you call make as make makethis.
If modifying the Python script so that it reports its result via an exit code is not possible, you can implement the check in make as follows (assuming you work on *nix):
check-i18n:
	@if [ "`python your_script.py`" = 'OK' ]; \
	then echo "good enough"; \
	else echo "too bad"; exit 1; \
	fi
Another option, less readable, but cross-platform (so it will work on Windows as well):
# String equality check.
eq = $(findstring $1,$(findstring $2,$1))
check-i18n:
	@$(if $(call eq,OK,$(shell python your_script.py)), \
	echo "good enough", \
	echo "too bad"; exit 1)
As in my answer to this question:
import os
import copy
import subprocess

def command(command):
    # Run the command through a shell and return whatever it printed to stdout.
    env = copy.deepcopy(os.environ)
    proc = subprocess.Popen(command, shell=True, env=env,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    result = proc.stdout.read()
    return result

ret = command(YOUR COMMAND HERE)
if ret.strip() == "OK":   # strip the trailing newline before comparing
    print "yay"
else:
    print "oh no!: %s" % ret
If you showed your code, I could fit my answer to it better.
Related
I have written Python code to check whether a file exists in the Hadoop file system. The Python function receives a location passed from another function, and the bash code within it checks whether the location exists.
def check_file_exists_in_hadoop(loc):
    yourdir = "/somedirectory/inhadoop/" + loc
    cmd = '''
hadoop fs -test -d ${yourdir};
if [ $? -eq 0 ]
then
    echo "Directory exists!"
else
    echo "Directory does not exist!"
fi
'''
    res = subprocess.check_output(cmd, shell=True)
    output = str(res, "utf-8").strip()
    print(output)
    if output == "Directory exists!":
        print("Yay!!!!")
    else:
        print("Oh no!!!!")
How do I pass the 'yourdir' variable into the bash portion of the code?
All that playing around in shells looks awkward, why not just do:
import subprocess

def check_file_exists_in_hadoop(loc):
    path = "/somedirectory/inhadoop/" + loc
    res = subprocess.run(["hadoop", "fs", "-test", "-d", path])
    return res.returncode == 0
You can execute as:
if check_file_exists_in_hadoop('foo.txt'):
    print("Yay!!!!")
else:
    print("Oh noes!!!!")
When you execute/run a process/program on a Unix-like system, it receives an array of arguments (exposed as, e.g., sys.argv in Python). You can construct these in various ways, but passing them to run gives you the most direct control. You can of course use a shell to do this, but starting up a shell just for that seems unnecessary. Given that this argument list is just a list of strings in Python, you can use normal list/string manipulation to construct whatever you need.
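For example, a sketch (the hadoop path is carried over from the thread above; the directory name is made up):

import subprocess

# The argument vector is an ordinary Python list, so plain string/list
# operations build it; no shell parses it, hence no quoting worries.
base = "/somedirectory/inhadoop/"
loc = "dir with spaces"   # hypothetical input; spaces are harmless here
res = subprocess.run(["hadoop", "fs", "-test", "-d", base + loc])
print(res.returncode)     # 0 means the directory exists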
Using a shell can be useful, but as Gilles says you need to be careful to sanitise/escape your input — not everybody loves little bobby tables!
Pass the string as an argument to the shell. Instead of using shell=True, which runs ['sh', '-c', cmd] under the hood, invoke a shell explicitly. After the shell code, the first argument becomes the shell's $0 (its script name, unused here); the next argument is then available as "$1" in the shell snippet, the one after that as "$2", etc.
cmd = '''
hadoop fs -test -d "$1";
…
'''
res = subprocess.check_output(['sh', '-c', cmd, 'sh', yourdir])
Alternatively, pass the string as an environment variable.
cmd = '''
hadoop fs -test -d "$yourdir";
…
'''
env = os.environ.copy()
env['yourdir'] = yourdir
res = subprocess.check_output(cmd, shell=True, env=env)
In the shell snippet, note the double quotes around $1 or $yourdir.
Do not interpolate the string into the shell command directly, i.e. don't use things like 'test -d {}'.format(yourdir). That doesn't work if the string contains shell special characters: it's a gaping security hole. For example if yourdir is a; rm -rf ~ then you've just kissed your data goodbye.
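If you really do need to build a command string, quote every interpolated piece with shlex.quote first; a minimal sketch (Python 3; on Python 2 the equivalent is pipes.quote):

import shlex
import subprocess

yourdir = "a; rm -rf ~"   # the hostile input from the example above

# BAD: the value is parsed as shell code, and the rm would actually run:
#   subprocess.call('hadoop fs -test -d {}'.format(yourdir), shell=True)

# Safer: shlex.quote turns the value into a single shell word, so the
# whole string is only ever treated as one directory name.
cmd = 'hadoop fs -test -d {}'.format(shlex.quote(yourdir))
subprocess.call(cmd, shell=True)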
I am trying to pass a sequence of shell script commands to a python function.
I used to call the script.sh file inside subprocess, but now I would like to have the script itself in the function, so I don't have to keep a separate script file around.
An oversimplified example to give an idea:
def myfunc():
    script = "ls -la; cd /; ls -la"
    runscript_output = subprocess.call(script)
    print runscript_output
Is this the correct way to do so?
I saw some examples where the shell commands were wrapped in EOF (here-document) markers, although I am not really sure why.
Use shell=True in this use case. Also, I'd strongly suggest making your script text constant, and pushing any variable substitutions as separate arguments:
import subprocess

script_text = r'''
ls -la
cd "$1" || exit
ls -la
'''

# for /bin/sh
def myfunc_sh():
    runscript_output = subprocess.check_output(
        [script_text,   # /bin/sh script text here
         '_',           # with shell=True, the first extra item becomes $0
         '/'],          # values for $1 and on as further list elements
        shell=True)
    print runscript_output
# for /bin/bash
def myfunc_bash():
    runscript_output = subprocess.check_output(
        ['bash',             # interpreter to run
         '-c', script_text,  # script to invoke
         '_',                # value for $0
         '/'])               # values for $1 and on as extra list elements
    print runscript_output
Following this practice -- keeping the content passed to the shell as script text constant -- avoids opening you up to shell injection vulnerabilities, wherein data is parsed as code.
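For instance, the same pattern with a caller-supplied directory threaded through as $1 (a sketch; list_dir is a made-up name):

import subprocess

# The script text never changes; the directory travels as $1,
# so no data is ever parsed as shell code.
def list_dir(path):
    return subprocess.check_output(
        ['bash', '-c', 'cd "$1" || exit; ls -la', '_', path])

print(list_dir('/tmp').decode())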
I am working on using bash commands in Python. I found this example in Red Hat Magazine, but the tail command does not produce any output; I would like to understand why it is failing. I have also tried to import subprocess, but that just hangs.
#!/usr/bin/env bash
#Create commands
SPACE=`df -h`
MESSAGES=`tail /var/log/messages`
#Assign to an array(list in Python)
cmds=("$MESSAGES" "$SPACE")
#iteration loop
count=0
for cmd in "${cmds[@]}"; do
    count=$((count + 1))
    printf "Running Command Number %s \n" $count
    echo "$cmd"
done
Printing the command doesn't mean executing it: your loop only echoes the text already stored in the variables. Look at Python's subprocess library for the API and examples for doing what you want.
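If it helps, here is roughly what the same report could look like in Python with subprocess (a sketch; reading /var/log/messages may still require the right permissions, which could be why tail printed nothing):

import subprocess

# Run each command and capture its output, rather than echoing stored text.
cmds = [["df", "-h"], ["tail", "/var/log/messages"]]
for count, cmd in enumerate(cmds, start=1):
    print("Running Command Number %s" % count)
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    print(result.stdout or result.stderr)   # show the error if stdout is empty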
I've seen some shell scripts that pass input to a command by writing the contents of the file inline, in the same shell script. For instance:
if [[ $uponly -eq 1 ]]; then
    mysql $tabular -h database1 -u usertrack -pabulafia usertrack << END
select host from host_status where host like 'ld%' and status = 'up';
END
    exit 0
fi
I've been able to do something similar in which I do:
python << END
print 'hello world'
END
If the name of the script is, say, myscript.sh, then I can run it by executing sh myscript.sh, and I obtain my desired output.
The question is: can we do something similar in a Makefile? I've been looking around, and they all say that I have to do something like:
target:
	python @<<
print 'hello world'
<<
But that doesn't work.
Here are the links where I've been looking:
http://www.opussoftware.com/tutorial/TutMakefile.htm#Response%20Files
http://www.scribd.com/doc/2369245/Makefile-Memo
You can do something like this:
define TMP_PYTHON_PROG
print 'hello'
print 'world'
endef
export TMP_PYTHON_PROG
target:
	@python -c "$$TMP_PYTHON_PROG"
First, you're defining a multi-line variable with define and endef. Then you need to export it to the shell's environment; otherwise make would treat each new line of the variable as a new command. Finally, you reference it as a shell variable by writing $$ (make passes the doubled $ through to the shell as a single $).
The reason your @<< thing didn't work is that it appears to be a feature of a non-standard make variant. Similarly, the define command that mVChr mentions is specific (as far as I'm aware) to GNU Make. While GNU Make is very widely distributed, this trick won't work in a BSD make, nor in a POSIX-only make.
I feel it's good, as a general principle, to keep makefiles as portable as possible; and if you're writing a Makefile in the context of an autoconf-ed system, it's more important still.
A fully portable technique for doing what you're looking for is:
target:
	{ echo "print 'First line'"; echo "print 'second'"; } | python
or equivalently, if you want to lay things out a bit more tidily:
target:
	{ echo "print 'First line'"; \
	  echo "print 'second'"; \
	} | python
What I'd like to do is something like
$ echo $PATH | python --remain-interactive "x = raw_input().split(':')"
>>>
>>> print x
['/usr/local/bin', '/usr/bin', '/bin']
I suppose an ipython solution would be best. If this isn't achievable, what would be your solution for the situation where I want to process output from various other commands? I've used subprocess before to do it when I was desperate, but it is not ideal.
UPDATE: So this is getting closer to the end result:
echo $PATH > /tmp/stdout.txt; ipython -i -c 'stdout = open("/tmp/stdout.txt").read()'
Now how can we go about bending this into a form
echo $PATH | pyout
where pyout is the "magic solution to all my problems". It could be a shell script that writes out the piped output and then runs ipython. Everything I've tried fails for the same reasons bp describes.
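For what it's worth, a pyout wrapper along those lines is possible on *nix by reopening the terminal once the pipe is drained; a minimal sketch (the name stdout for the captured text is just an assumption carried over from the UPDATE above):

#!/usr/bin/env python
# pyout -- hypothetical sketch of the wrapper described above (*nix only).
import code
import os
import sys

piped = sys.stdin.read()                    # capture everything from the pipe
tty = open("/dev/tty")                      # reconnect the keyboard...
os.dup2(tty.fileno(), sys.stdin.fileno())   # ...so the REPL can prompt us
code.interact(local={"stdout": piped})      # piped text is bound to 'stdout'

Then echo $PATH | python pyout should land you in a session where stdout.split(':') works.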
In IPython you can do this
x = !echo $$PATH
The double escape of $ is a pain though
You could do this I guess
PATH="$PATH"
x = !echo $PATH
x[0].split(":")
The --remain-interactive switch you are looking for is -i. You can also use the -c switch to specify the command to execute, such as __import__("sys").stdin.read().split(":"). So what you would try is (do not forget the quoting and escaping!):
echo $PATH | python -i -c "x = __import__(\"sys\").stdin.read().split(\":\")"
However, this is all that will be displayed:
>>>
So why doesn't it work? Because you are piping. The Python interpreter is trying to read commands interactively from the same sys.stdin that your code reads its data from. Since echo has finished executing, sys.stdin is closed and no further input can happen.
For the same reason, something like:
echo $PATH > spam
python -i -c "x = __import__(\"sys\").stdin.read().split(\":\")" < spam
...will fail.
What I would do is:
echo $PATH > spam.bar
python -i my_app.py spam.bar
After all, open("spam.bar") is a file object just like sys.stdin is :)
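And my_app.py could be as simple as (a sketch; it assumes the file holds a single PATH-like line):

# my_app.py -- hypothetical companion script for `python -i my_app.py spam.bar`
import sys

with open(sys.argv[1]) as f:
    x = f.read().strip().split(":")
# Because of -i, the interpreter stays interactive and `x` remains in scope.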
Due to the Python axiom of "There should be one - and preferably only one - obvious way to do it" I'm reasonably sure that there won't be a better way to interact with other processes than the subprocess module.
It might help if you could say why something like the following "is not ideal":
>>> process = subprocess.Popen(['cmd', '/c', 'echo %PATH%'], stdout=subprocess.PIPE)
>>> print process.communicate()[0].split(';')
(In your specific example you could use os.environ but I realise that's not really what you're asking.)
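That is, for this particular case, a sketch:

import os

# No subprocess needed: read the variable from the current environment.
# os.pathsep is ':' on *nix and ';' on Windows.
print(os.environ["PATH"].split(os.pathsep))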