What I'd like to do is something like
$ echo $PATH | python --remain-interactive "x = raw_input().split(':')"
>>>
>>> print x
['/usr/local/bin', '/usr/bin', '/bin']
I suppose an IPython solution would be best. If this isn't achievable, what would be your solution for the situation where I want to process output from various other commands? I've used subprocess to do it before when I was desperate, but it is not ideal.
UPDATE: So this is getting closer to the end result:
echo $PATH > /tmp/stdout.txt; ipython -i -c 'stdout = open("/tmp/stdout.txt").read()'
Now how can we go about bending this into a form
echo $PATH | pyout
where pyout is the "magic solution to all my problems". It could be a shell script that writes the piped output to a file and then runs IPython. Everything I have tried so far fails for the same reasons bp describes.
In IPython you can do this:
x = !echo $$$$PATH
The double escaping of $ is a pain, though. You could do this instead, I guess:
PATH="$PATH"
x = !echo $PATH
x[0].split(":")
The --remain-interactive switch you are looking for is -i. You can also use the -c switch to specify the command to execute, such as __import__("sys").stdin.read().split(":"). So what you would try is (do not forget about escaping strings!):
echo $PATH | python -i -c "x = __import__(\"sys\").stdin.read().split(\":\")"
However, this is all that will be displayed:
>>>
So why doesn't it work? Because you are piping. The Python interpreter is trying to read its interactive commands from the same sys.stdin you just read the piped data from. Since echo is done executing, sys.stdin is closed and no further input can happen.
For the same reason, something like:
echo $PATH > spam
python -i -c "x = __import__(\"sys\").stdin.read().split(\":\")" < spam
...will fail.
What I would do is:
echo $PATH > spam.bar
python -i my_app.py spam.bar
After all, open("spam.bar") is a file object just like sys.stdin is :)
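For concreteness, a minimal sketch of what that my_app.py could look like (the file name and the variable x are just the ones used above; adjust to taste):

# my_app.py - read the dumped output back in and split it
import sys

with open(sys.argv[1]) as f:
    x = f.read().strip().split(":")

print(x)

Running python -i my_app.py spam.bar then drops you into a prompt where x is already defined.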
Due to the Python axiom of "There should be one - and preferably only one - obvious way to do it" I'm reasonably sure that there won't be a better way to interact with other processes than the subprocess module.
It might help if you could say why something like the following "is not ideal":
>>> process = subprocess.Popen(['cmd', '/c', 'echo %PATH%'], stdout=subprocess.PIPE)
>>> print process.communicate()[0].split(';')
(In your specific example you could use os.environ but I realise that's not really what you're asking.)
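For reference, the os.environ route mentioned above would look roughly like this; os.pathsep picks the right separator (: or ;) for the platform:

import os

print(os.environ['PATH'].split(os.pathsep))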
I have the following code in Python 3:
>>> os.system('echo -n abc')
-n abc
0
The output contains -n, which is not correct. If I run the same command in a terminal, it prints abc without the -n and without a newline at the end. It seems the os.system method doesn't understand the -n parameter. How can I solve this issue?
I have tried the subprocess module but got the same result:
>>> subprocess.call('echo -n abc', shell=True)
-n abc
0
Is this correct behavior for echo to have?
This is legal, standards-compliant behavior for an echo implementation to have. Quoting from the POSIX echo standard (the APPLICATION USAGE section of which is also strongly recommended reading):
If the first operand is -n, or if any of the operands contain a backslash character, the results are implementation-defined.
Thus, echo is allowed to print -n literally -- or to do anything else when it's given as the first operand.
Why does my interactive shell do something different?
As the above-quoted specification says, the behavior is implementation-defined when -n is given as the first operand. This means any of these behaviors is equally legal, so not printing a trailing newline (and not echoing the -n) is also legal behavior. Presumably the echo built into your interactive shell does that (on the system at hand), whereas the echo used by your /bin/sh does not.
How can I get reliable behavior when starting a shell to print text without a trailing newline?
If you want a command with well-specified behavior, use printf instead:
os.system('printf %s abc')
...will always print the exact string abc with no trailing newline, with any POSIX-compliant /bin/sh.
But of course, don't do any of this in Python; sys.stdout.write('abc') is far more sensible.
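For completeness, a minimal sketch of the pure-Python alternatives, with no shell involved at all:

import sys

sys.stdout.write('abc')   # writes exactly 'abc', no trailing newline
sys.stdout.flush()

# or, in Python 3:
print('abc', end='')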
I would like to know if a Python script could launch the Python interpreter at runtime, making variables accessible from the interpreter.
Let me explain myself. Let's say I have the following script:
x = 20
LaunchInterpreter() #Imaginary way to launch the Interpreter
Now, the interpreter is launched and we can play around with variables.
>>> x  # the value defined by the script
20
>>> x*x
400
If you're looking for a dynamic interpreter you can use pdb. It is just a debugger, though, so it should really only be used for that purpose, but it can be used in the following way:
x = 20
import pdb
pdb.set_trace()
Now you will have an interpreter and you can play around with the variables.
I don't know if this is suitable in your situation but it's the closest thing I can think of with the information provided.
Edit 1:
As stated in the comments by skishore, you can also use code.interact(local=locals()), so:
x = 20
import code
code.interact(local=locals())
The -i command line option to Python forces the launching of a command interpreter after a script completes:
python --help
usage: /Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
< ... >
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
So, given a file test.py that contains:
x = 7
y = "a banana"
you can launch python with the -i option to do exactly what you'd like to do.
python -i test.py
>>> x
7
>>> y
'a banana'
>>>
I have a Python script which checks my language translation files (in CSV format) to verify that all lines contain translations for all the languages listed in the CSV's first (header) line. The script lists files/lines with missing translations. If no problem is found, it outputs OK.
My question is the following:
how do I call the script from within the makefile and check whether the output was OK? If the script printed anything other than OK, I want the makefile to stop.
Any ideas?
make checks the exit status of the commands it runs, not the text they print. The simplest solution is to call sys.exit(1) in your Python script if the check does not pass.
For example:
targetfile: dependsonfile
	python pythonscript.py -o targetfile dependsonfile
Of course the actual syntax will depend critically on how pythonscript.py is meant to be called.
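As a rough sketch, the checker script only needs to exit non-zero on failure for make to stop; check_translations below is just a placeholder for the real CSV check described in the question:

import sys

def check_translations():
    # placeholder for the real CSV completeness check
    return True

if __name__ == "__main__":
    if check_translations():
        print("OK")
    else:
        sys.exit(1)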
If your pythonscript just does a check, you can accomplish that as follows:
makethis: didcheck
	echo "made it" > makethis

didcheck: # dependencies here
	python -c 'import sys; sys.exit(1)'
	touch didcheck
then you call make as make makethis.
If modifying the Python script so that it would indicate its result using an exit code is not possible, you can implement it in Make as follows (assuming you work on *nix):
check-i18n:
	@if [ "`python your_script.py`" = 'OK' ]; \
	then echo "good enough"; \
	else echo "too bad"; exit 1; \
	fi
Another option, less readable, but cross-platform (so it will work on Windows as well):
# String equality check.
eq = $(findstring $1,$(findstring $2,$1))
check-i18n:
	@$(if $(call eq,OK,$(shell python your_script.py)), \
	echo "good enough", \
	echo "too bad"; exit 1)
As in my answer to this question:
import os
import copy
import subprocess

def command(command):
    env = copy.deepcopy(os.environ)
    proc = subprocess.Popen(command,
                            shell=True, env=env,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    result = proc.stdout.read()
    return result

ret = command("YOUR COMMAND HERE")
if ret.strip() == "OK":
    print "yay"
else:
    print "oh no!: %s" % ret
If you showed your code, I could fit my answer into it better.
I've seen some shell scripts that feed a command the contents of a file by writing those contents directly in the script itself (a here-document). For instance:
if [[ $uponly -eq 1 ]]; then
    mysql $tabular -h database1 -u usertrack -pabulafia usertrack << END
select host from host_status where host like 'ld%' and status = 'up';
END
    exit 0
fi
I've been able to do something similar, in which I do:
python << END
print 'hello world'
END
If the script is named, say, myscript.sh, then I can run it by executing sh myscript.sh, and I obtain my desired output.
The question is: can we do something similar in a Makefile? I've been looking around, and the references all say that I have to do something like:
target:
python #<<
print 'hello world'
<<
But that doesn't work.
Here are the links where I've been looking:
http://www.opussoftware.com/tutorial/TutMakefile.htm#Response%20Files
http://www.scribd.com/doc/2369245/Makefile-Memo
You can do something like this:
define TMP_PYTHON_PROG
print 'hello'
print 'world'
endef
export TMP_PYTHON_PROG
target:
	@python -c "$$TMP_PYTHON_PROG"
First, you define a multi-line variable with define and endef. Then you export it to the shell as an environment variable; otherwise make would treat each new line of the expansion as a new command. Finally, you reference the shell variable using $$, which make passes through to the shell as a single $.
The reason your #<< thing didn't work is that it appears to be a feature of a non-standard make variant. Similarly, the define command that mVChr mentions is specific (as far as I'm aware) to GNU Make. While GNU Make is very widely distributed, this trick won't work in a BSD make, nor in a POSIX-only make.
I feel it's good, as a general principle, to keep makefiles as portable as possible; and if you're writing a Makefile in the context of an autoconf-ed system, it's more important still.
A fully portable technique for doing what you're looking for is:
target:
	{ echo "print 'First line'"; echo "print 'second'"; } | python
or equivalently, if you want to lay things out a bit more tidily:
target:
	{ echo "print 'First line'"; \
	  echo "print 'second'"; \
	} | python
How can I reverse the results of a shlex.split? That is, how can I obtain a quoted string that would "resemble that of a Unix shell", given a list of strings I wish quoted?
Update0
I've located a Python bug, and made corresponding feature requests here.
We now (since Python 3.3) have a shlex.quote function. It's none other than pipes.quote, moved and documented (code using pipes.quote will still work). See http://bugs.python.org/issue9723 for the whole discussion.
subprocess.list2cmdline is a private function that should not be used. It could, however, be moved to shlex and made officially public. See also http://bugs.python.org/issue1724822.
How about using pipes.quote?
import pipes
strings = ["ls", "/etc/services", "file with spaces"]
" ".join(pipes.quote(s) for s in strings)
# "ls /etc/services 'file with spaces'"
There is a feature request for adding shlex.join(), which would do exactly what you ask. As of now there does not seem to be much progress on it, though, mostly because it would just forward to shlex.quote(). In the bug report, a suggested implementation is mentioned:
' '.join(shlex.quote(x) for x in split_command)
See https://bugs.python.org/issue22454
It's shlex.join() in Python 3.8.
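For example (the list of strings is just an illustration):

import shlex

print(shlex.join(["ls", "/etc/services", "file with spaces"]))
# ls /etc/services 'file with spaces'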
subprocess uses subprocess.list2cmdline(). It's not an official public API, but it's mentioned in the subprocess documentation and I think it's pretty safe to use. It's more sophisticated than pipes.quote() (for better or worse).
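A quick sketch of what that looks like; note the Windows-style double quotes it produces, since list2cmdline follows the MS C runtime quoting rules:

import subprocess

print(subprocess.list2cmdline(["ls", "/etc/services", "file with spaces"]))
# ls /etc/services "file with spaces"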
While shlex.quote is available in Python 3.3 and shlex.join is available in Python 3.8, they will not always serve as a true "reversal" of shlex.split. Observe the following snippet:
import shlex
command = "cd /home && bash -c 'echo $HOME'"
print(shlex.split(command))
# ['cd', '/home', '&&', 'bash', '-c', 'echo $HOME']
print(shlex.join(shlex.split(command)))
# cd /home '&&' bash -c 'echo $HOME'
Notice that after splitting and then joining, the && token now has single quotes around it. If you tried running the command now, you'd get an error: cd: too many arguments
If you use subprocess.list2cmdline() as others have suggested, it handles shell operators like && more gracefully:
import subprocess
print(subprocess.list2cmdline(shlex.split(command)))
# cd /home && bash -c "echo $HOME"
However, you may notice that the quotes are now double instead of single. This results in $HOME being expanded by the shell rather than being printed verbatim, as it would have been with single quotes.
In conclusion, there is no 100% fool-proof way of undoing shlex.split, and you will have to choose the option that best suits your purpose and watch out for edge cases.