I have a simple bash command here for a script that I am rewriting in Python, and I've done a lot of searching without finding a simple answer. I am trying to echo the output of print to a file, making sure there are no line breaks and that I can pass a variable into it. Here is just a little snippet (there are a lot of lines like this):
echo " ServerName www.${hostName}" >> $prjFile
Now I know it would end up looking something like:
print ("ServerName www.", hostName) >> prjFile
Right? But that doesn't work. Mind you, this is Python 2.6 (the machine this script will run on uses that version, and other dependencies rely on sticking with it).
The syntax is:
print >>myfile, "ServerName www.", hostName,
where myfile is a file object opened in mode "a" (for "append").
The trailing comma prevents line breaks.
You might also want to set myfile.softspace = False to prevent the spaces that Python adds between comma-separated arguments to print (the softspace attribute lives on the file being printed to), and/or to print things as a single string:
print >>myfile, "ServerName www.%s" % hostName,
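For illustration, here is a minimal Python 2 sketch of how softspace behaves (the attribute no longer exists in Python 3); it prints to stdout, but the same applies to any file object:
import sys

print "a",                    # trailing comma: no newline, but softspace is now set
sys.stdout.softspace = False  # clear it so the next print adds no separating space
print "b"                     # output is "ab", not "a b"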
You can try a simple:
myFile = open('/tmp/result.file', 'w') # or 'a' to add text instead of truncate
myFile.write('whatever')
myFile.close()
In your case:
myFile = open(prjFile, 'a')  # 'a' because you want to add to the existing file
myFile.write('ServerName www.{hostname}'.format(hostname=hostName))  # write() adds no newline
myFile.close()
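A slightly more idiomatic variant (works on Python 2.6) uses a with block, so the file is closed even if the write fails:
with open(prjFile, 'a') as myFile:
    myFile.write('ServerName www.{hostname}'.format(hostname=hostName))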
I can successfully redirect my output to a file; however, this appears to overwrite the file's existing data:
import subprocess
outfile = open('test', 'w')  # same with "w" or "a" as opening mode
outfile.write('Hello')
subprocess.Popen('ls', stdout=outfile)
will remove the 'Hello' line from the file.
I guess a workaround is to store the output elsewhere as a string or something (it won't be too long), and append this manually with outfile.write(thestring) - but I was wondering if I am missing something within the module that facilitates this.
You certainly can append the output of subprocess.Popen to a file; I make daily use of it. Here's how I do it:
log = open('some file.txt', 'a') # so that data written to it will be appended
c = subprocess.Popen(['dir', '/p'], stdout=log, stderr=log, shell=True)
(Of course, this is a dummy example; I'm not using subprocess to list files...)
By the way, other file-like objects (anything with a write() method, in particular) could replace this log item, so you can buffer the output and do whatever you want with it (write to a file, display it, etc.) [but this seems not so easy; see my comment below].
Note: what may be misleading is that, because of buffering, subprocess writes its output to the file before what you asked Python to write. So, here's the way to use this:
log = open('some file.txt', 'a')
log.write('some text, as header of the file\n')
log.flush() # <-- here's something not to forget!
c = subprocess.Popen(['dir', '/p'], stdout=log, stderr=log, shell=True)
So the hint is: do not forget to flush the output!
Well, the problem is that if you want the header to actually be a header, you need to flush it before the rest of the output is written to the file :D
Is the data in the file really overwritten? On my Linux host I have the following behavior:
1) running your code in a separate directory gives:
$ cat test
test
test.py
test.py~
Hello
2) if I add outfile.flush() after outfile.write('Hello'), the result is slightly different:
$ cat test
Hello
test
test.py
test.py~
But the output file has Hello in both cases. Without an explicit flush() call, the file object's buffer is flushed only when the Python process terminates.
Where is the problem?
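Putting the pieces together, here is a minimal corrected sketch of the code from the question; the flush() and wait() calls are the additions, and 'a' mode keeps the existing contents:
import subprocess

outfile = open('test', 'a')  # append mode, so existing data is kept
outfile.write('Hello\n')
outfile.flush()              # push the buffered write out before the child starts writing
p = subprocess.Popen('ls', stdout=outfile)
p.wait()                     # let the child finish before anything else touches the file
outfile.close()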
For those who are curious as to why I'm doing this: I need specific files in a tarball, no more, no less. I have to write unit tests for make check, but since I'm constrained to having "no more" files, I have to write the check within the make check itself. This way, I have to write bash (but I don't want to).
I dislike using bash for unit testing (sorry to all those who like bash; I just dislike it so much that I would rather go with an extremely hacky approach than write many lines of bash code), so I wrote a Python file. I later learned that I have to use bash because of some unknown strict rule. I figured there must be a way to cache the entire content of the Python file as a single string in the bash file, so I could take the string literal in bash, write it to a Python file, and then execute it.
I made the following attempt (in the script and result below, I used another Python file that's not unit_test.py, so don't worry if it doesn't actually look like a unit test):
toStr.py:
with open("unit_test.py", 'r+') as f:
    s = f.read()
s = s.replace("\n", "\\n")  # escapes newlines only; embedded quotes are not handled
print(s)
And then I piped the results out using:
python toStr.py > temp.txt
It looked something like:
#!/usr/bin/env python\n\nimport os\nimport sys\n\n#create number of bytes as specified in the args:\nif len(sys.argv) != 3:\n print("We need a correct number of args : 2 [NUM_BYTES][FILE_NAME].")\n exit(1)\nn = -1\ntry:\n n = int(sys.argv[1])\nexcept:\n print("Error casting number : " + sys.argv[1])\n exit(1)\n\nrand_string = os.urandom(n)\n\nwith open(sys.argv[2], 'wb+') as f:\n f.write(rand_string)\n f.flush()\n f.close()\n\n
I tried taking this as a string literal and echoing it into a new file to see whether I could run it as a Python file, but it failed.
echo '{insert that giant string above here}' > new_unit_test.py
I wanted to take this statement above and copy it into my "bash unit test" file so I can just execute the python file within the bash script.
The resulting file looked exactly like {insert giant string here}. What am I doing wrong in my attempt? Are there other, much easier ways to hold a Python file as a string literal in a bash script?
Your echo attempt kept the \n sequences literal because echo does not expand backslash escapes by default (echo -e or printf '%b' would). The easiest way is to use only double quotes in your Python code; then, in your bash script, wrap all of your Python code in one pair of single quotes, e.g.,
#!/bin/bash
python -c 'import os
import sys
# create number of bytes as specified in the args:
if len(sys.argv) != 3:
    print("We need a correct number of args : 2 [NUM_BYTES][FILE_NAME].")
    exit(1)
n = -1
try:
    n = int(sys.argv[1])
except:
    print("Error casting number : " + sys.argv[1])
    exit(1)
rand_string = os.urandom(n)
# i changed the quotes to double quotes below so they survive the single-quote wrapper -webb
with open(sys.argv[2], "wb+") as f:
    f.write(rand_string)
    f.flush()
    f.close()'
I have a text file /etc/default/foo which contains one line:
FOO="/path/to/foo"
In my python script, I need to reference the variable FOO.
What is the simplest way to "source" the file /etc/default/foo into my python script, same as I would do in bash?
. /etc/default/foo
Same answer as @jil; however, that answer is specific to a historical version of Python.
In modern Python (3.x):
exec(open('filename').read())
replaces execfile('filename') from 2.x
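A minimal sketch with the file from the question (FOO="/path/to/foo" happens to be valid Python, so exec defines FOO in the current namespace):
with open('/etc/default/foo') as f:
    exec(f.read())  # defines FOO

print(FOO)  # /path/to/foo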
You could use execfile:
execfile("/etc/default/foo")
But please be aware that this will evaluate the contents of the file as-is into your program source. It is a potential security hazard unless you can fully trust the source.
It also means that the file needs to be valid python syntax (your given example file is).
Keep in mind that if you have a "text" file with this content that has a .py as the file extension, you can always do:
import mytextfile
print(mytextfile.FOO)
Of course, this assumes that the text file is syntactically correct as far as Python is concerned. On a project I worked on we did something similar to this. Turned some text files into Python files. Wacky but maybe worth consideration.
Just to give a different approach, note that if your original file is set up as
export FOO=/path/to/foo
you can do source /etc/default/foo; python myprogram.py (or . /etc/default/foo; python myprogram.py), and within myprogram.py all the values exported in the sourced file are visible in os.environ, e.g.
import os
os.environ["FOO"]
If you know for certain that it only contains VAR="QUOTED STRING" style variables, like this:
FOO="some value"
Then you can just do this:
>>> with open('foo.sysconfig') as fd:
... exec(fd.read())
Which gets you:
>>> FOO
'some value'
(This is effectively the same thing as the execfile() solution
suggested in the other answer.)
This method has substantial security implications; if instead of FOO="some value" your file contained:
os.system("rm -rf /")
Then you would be In Trouble.
Alternatively, you can do this:
>>> import shlex
>>> with open('foo.sysconfig') as fd:
...     settings = {var: shlex.split(value) for var, value in [line.split('=', 1) for line in fd]}
Which gets you a dictionary settings that has:
>>> settings
{'FOO': ['some value']}
That settings = {...} line uses a dictionary comprehension. You could accomplish the same thing in a few more lines with a for loop and so forth (see the sketch below).
And of course if the file contains shell-style variable expansion like ${somevar:-value_if_not_set} then this isn't going to work (unless you write your very own shell style variable parser).
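The for-loop version alluded to above might look like this (a sketch; the handling of blank and comment lines is an addition not present in the one-liner):
import shlex

settings = {}
with open('foo.sysconfig') as fd:
    for line in fd:
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue  # skip blanks, comments, and non-assignment lines
        var, value = line.split('=', 1)
        settings[var] = shlex.split(value)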
There are a couple of ways to do this sort of thing.
You can indeed import the file as a module, as long as its contents are valid Python syntax. But either the file in question is a .py file in the same directory as your script, or you have to use imp (or importlib, depending on your version), as in the sketch below.
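Here is a minimal sketch of the importlib route (assuming Python 3.5+; foo_config is an arbitrary module name, and the file must be valid Python despite lacking a .py extension):
import importlib.machinery
import importlib.util

loader = importlib.machinery.SourceFileLoader("foo_config", "/etc/default/foo")
spec = importlib.util.spec_from_loader(loader.name, loader)
module = importlib.util.module_from_spec(spec)
loader.exec_module(module)
print(module.FOO)  # /path/to/foo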
Another solution (and the one I prefer) is to use a data format that Python can parse with a standard library module (JSON comes to mind as an example).
/etc/default/foo :
{"FOO":"path/to/foo"}
And in your python code :
import json
with open('/etc/default/foo') as f:  # the with block closes the file automatically
    data = json.load(f)
FOO = data["FOO"]
## ...
This way, you don't risk executing some uncertain code...
You have the choice, depending on what you prefer. If your data file is auto-generated by some script, it might be easier to keep a simple syntax like FOO="path/to/foo" and use imp.
Hope that helps!
The Solution
Here is my approach: parse the bash file myself and process only variable assignment lines such as:
FOO="/path/to/foo"
Here is the code:
import shlex

def parse_shell_var(line):
    """
    Parse such lines as:
        FOO="My variable foo"
    :return: a tuple of var name and var value, such as
        ('FOO', 'My variable foo')
    """
    return shlex.split(line, posix=True)[0].split('=', 1)

if __name__ == '__main__':
    with open('shell_vars.sh') as f:
        shell_vars = dict(parse_shell_var(line) for line in f if '=' in line)
    print(shell_vars)
How It Works
Take a look at this snippet:
shell_vars = dict(parse_shell_var(line) for line in f if '=' in line)
This line iterates through the lines of the shell script, processing only those that contain an equals sign (not a fool-proof way to detect variable assignment, but the simplest). Next, each of those lines is run through the function parse_shell_var, which uses shlex.split to correctly handle the quotes (or the lack thereof). Finally, the pieces are assembled into a dictionary. The output of this script is:
{'MOO': '/dont/have/a/cow', 'FOO': 'my variable foo', 'BAR': 'My variable bar'}
Here is the contents of shell_vars.sh:
FOO='my variable foo'
BAR="My variable bar"
MOO=/dont/have/a/cow
echo $FOO
Discussion
This approach has a couple of advantages:
It does not execute the shell (either in bash or in Python), which avoids any side effects
Consequently, it is safe to use, even if the origin of the shell script is unknown
It correctly handles values with or without quotes
This approach is not perfect; it has a few limitations:
The method of detecting variable assignment (looking for the presence of an equals sign) is primitive and not accurate. There are ways to detect these lines more reliably, but that is a topic for another day.
It does not correctly parse values which are built from other variables or commands. That means it will fail for lines such as:
FOO=$BAR
FOO=$(pwd)
Based on the answer using exec(open(...).read()): value = eval(open(...).read()) will instead evaluate the file's contents as a single expression and return only its value. E.g.
1 + 1: 2
"Hello World": 'Hello World'
float(2) + 1: 3.0
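A minimal sketch (expr.txt is a hypothetical file containing a single expression; eval carries the same security caveats as exec):
with open('expr.txt') as f:  # e.g. the file contains: 1 + 1
    value = eval(f.read())

print(value)  # 2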
I have added this line to my .vimrc:
map <F4> :w !python<cr>
When I open gvim to edit an unnamed Python file, there are only two lines in it:
x=input("which one do you like ? ")
print(x)
I press F4 and get EOF when reading a line. How can I solve this?
When I add map <F4> :w<cr>:!python %<cr> or imap <F4> <Esc>:w <cr>:!python %<cr>, it only works for a named file; if the buffer has no name, the mapping does not work. How can I run an unnamed file?
@benjifisher's answer is correct: the input() function is the problem.
The :w !python pipes the program to python through stdin (basically the same as
echo 'input("x")' | python
which also fails if run in the shell). However, input() tries to read from stdin, which it cannot do: Python read the program from stdin, and stdin is still attached to the pipe, but the pipe is already at its end and will never contain new data. So input() just reads EOF.
To see that python reads from stdin, look at :h :w_c, which shows that the buffer is passed as standard input to {cmd}, which in this case is python.
:w_c :write_c
:[range]w[rite] [++opt] !{cmd}
Execute {cmd} with [range] lines as standard input
(note the space in front of the '!'). {cmd} is
executed like with ":!{cmd}", any '!' is replaced with
the previous command :!.
If the buffer had contained something that wasn't reading from stdin, your mapping would have worked.
For example if the unnamed buffer contains
print(42)
running the command :w !python in vim prints 42.
So the problem isn't that the mapping fails. The problem is that your program doesn't know how to get input. The solution is use either a named file or don't write interactive programs in vim. Use the python interpreter for that.
Since you have :w in your mapping, I am assuming you either want to run the script directly from insert mode or you want to save it anyway before running it.
Multiple commands in vim require a separator such as <bar> (i.e. |) between them, or a <CR> (carriage return) between and after the commands.
You can put both of the mappings below in your .vimrc and they should meet your requirement on hitting F4, whether you are in normal mode or insert mode.
If you are in normal mode, you are fine with map:
map <F4> :w<cr>:!python %<cr>
While for insert mode, you would need imap and an Esc to get out of insert mode:
imap <F4> <Esc>:w <cr>:!python %<cr>
I think the problem is the input() line. Python is looking for input and not finding any. All it finds (wherever it looks) is an EOF.
One way to do this without a temp file would be to do something like this (with a small helper function):
function! GetContentsForPython()
    let contents = join(getline(1,'$'), "\n")
    let res = ''
    for l in split(contents, '\n')
        if len(l)
            let res = res . l . ';'
        endif
    endfor
    let res = '"' . escape(res, '"') . '"'
    return res
endfunction
noremap <f4> :!python -c <c-r>=GetContentsForPython()<cr><cr>
This gets the contents of the current buffer and replaces the newlines with semicolons, so you can execute it with
python -c "print 'hello'"
There may be better ways accomplishing this, but this seems to work for me.
I would like to read the next logical line from a file into python, where logical means "according to the syntax of python".
I have written a small command which reads a set of statements from a file, and then prints out what you would get if you typed the statements into a python shell, complete with prompts and return values. Simple enough -- read each line, then eval. Which works just fine, until you hit a multi-line string.
I'm trying to avoid doing my own lexical analysis.
As a simple example, say I have a file containing
2 + 2
I want to print
>>> 2 + 2
4
and if I have a file with
"""Hello
World"""
I want to print
>>> """Hello
... World"""
'Hello\nWorld'
The first of these is trivial -- read a line, eval, print. But then I need special support for comment lines. And now triple quotes. And so on.
You may want to take a look at the InteractiveInterpreter class from the code module.
The runsource() method shows how to deal with incomplete input.
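For illustration, a minimal sketch using code.compile_command, the same machinery runsource() relies on; it returns None while the input is still incomplete:
import code

buf = []
for line in ['"""Hello', 'World"""']:
    buf.append(line)
    source = '\n'.join(buf)
    if code.compile_command(source) is None:
        print('incomplete, keep reading...')
    else:
        print('complete statement: %r' % source)
        buf = []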
Okay, so resi had the correct idea. Here is my trivial code which does the job.
#!/usr/bin/python
import sys
import code

class Shell(code.InteractiveConsole):
    def write(self, data):  # 'self' was missing in the original
        print(data)

cons = Shell()
file_contents = sys.stdin
prompt = ">>> "
for line in file_contents:
    print prompt + line,          # the line keeps its newline; the comma avoids doubling it
    if cons.push(line.rstrip()):  # rstrip(), not strip(), so indentation is preserved
        prompt = "... "
    else:
        prompt = ">>> "