update /etc/sysctl.conf with Python's ConfigParser

I am able to use Python's ConfigParser library to read /etc/sysctl.conf by adding a [dummy] section and overriding ConfigParser's read() method as follows:
import ConfigParser
import StringIO

class SysctlConfigParser(ConfigParser.ConfigParser):
    def read(self, fn):
        text = open(fn).read()
        contents = StringIO.StringIO("[dummy]\n" + text)
        self.readfp(contents, fn)
Now the tricky part is writing back the configuration updates my Python program made, because if I now called ConfigParser.write() directly, it would add back this [dummy] section as well:
[dummy]
net.netfilter.nf_conntrack_max = 313
net.netfilter.nf_conntrack_expect_max = 640
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 5
Here are my questions:
Is there an elegant way to make ConfigParser not add this [dummy] section? It seems odd to have to open the file again just to remove the first line containing this dummy section.
Maybe ConfigParser is not the right tool to edit sysctl.conf? If so, are there any other Python libraries that would allow updating sysctl.conf in a convenient way from Python?

ConfigParser is designed for parsing INI-style configuration files. /etc/sysctl.conf is not this sort of file.
You could use the Augeas bindings for Python if you want a parser that works out-of-the-box:
import augeas
aug = augeas.Augeas()
aug.set('/files/etc/sysctl.conf/net.ipv4.ip_forwarding', '1')
aug.set('/files/etc/sysctl.conf/fs.inotify.max_user_watches', '8192')
aug.save()
The format of the file is pretty trivial (just a collection of <name> = <value> lines with optional comments).
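Because the format is that simple, a small hand-rolled updater also sidesteps the [dummy] section problem entirely. Here is a minimal sketch (not from the original answer; update_sysctl_conf and the sample key are illustrative) that rewrites matching name = value lines in place and appends any keys not already present:
def update_sysctl_conf(path, updates):
    """Rewrite path, replacing the values of keys present in updates."""
    lines = []
    seen = set()
    with open(path) as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith('#') and '=' in stripped:
                key = stripped.split('=', 1)[0].strip()
                if key in updates:
                    lines.append('%s = %s\n' % (key, updates[key]))
                    seen.add(key)
                    continue
            lines.append(line)
    for key, value in sorted(updates.items()):
        if key not in seen:  # append keys that were not in the file yet
            lines.append('%s = %s\n' % (key, value))
    with open(path, 'w') as f:
        f.writelines(lines)

update_sysctl_conf('/etc/sysctl.conf', {'net.netfilter.nf_conntrack_max': '313'})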

How to read "-" (dash) as standard input with Python without writing extra code?

I am using Python 3.5.x, not any later version.
https://stackoverflow.com/a/30254551/257924 is the right answer, but it doesn't provide a solution built into Python; it requires writing code from scratch:
I need to have a string that has a value of "-" to represent stdin, or its value is a path to a text file I want to read from. I want to use the with operator to open up either type of those files, without using conditional logic to check for "-" in my scripts. I have something that works, but it seems like it should be something that is built into Python core and not requiring me to roll my own context-manager, like this:
from contextlib import contextmanager

@contextmanager
def read_text_file_or_stdin(path):
    """Return a file object from stdin if path is '-', else read from path as a text file."""
    if path == '-':
        with open(0) as f:  # fd 0 is stdin
            yield f
    else:
        with open(path, 'r') as f:
            yield f

# path = '-'  # Means read from stdin
path = '/tmp/paths'  # Means read from a text file given by this value

with read_text_file_or_stdin(path) as g:
    paths = [path for path in g.read().split('\n') if path]

print("paths", paths)
I plan to pass in the argument to a script via something like -p - to mean "read from standard-input" or -p some_text_file meaning "read from some_text_file".
Does this require me to do the above, or is there something built into Python 3.5.x that provides this already? This seems like such a common need for writing CLI utilities, that it could have already been handled by something in the Python core or standard libraries.
I don't want to install any module/package from repositories outside of the Python standard library in 3.5.x, just for this.
The argparse module provides a FileType factory which knows about the - convention.
import argparse
p = argparse.ArgumentParser()
p.add_argument("-p", type=argparse.FileType("r"))
args = p.parse_args()
Note that args.p is an open file handle, so there's no need to open it "again". While you can still use it with a with statement:
with args.p:
    for line in args.p:
        ...
this only ensures the file is closed in the event of an error in the with statement itself. Also, you may not want to use with, as this will close the file, even if you meant to use it again later.
You should probably use the atexit module to make sure the file gets closed by the end of the program, since it was already opened for you at the beginning.
import atexit
...
args = p.parse_args()
atexit.register(args.p.close)
Check https://docs.python.org/3/library/argparse.html#filetype-objects, where you can do this:
>>> import sys
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('infile', nargs='?', type=argparse.FileType('r'),
...                     default=sys.stdin)
>>> parser.add_argument('outfile', nargs='?', type=argparse.FileType('w'),
...                     default=sys.stdout)
So infile & outfile will support read & write streams for stdin & stdout by default.
Or use my favorite library, click.
Check more details at the library docs website
https://click.palletsprojects.com/en/7.x/api/#click.File
https://click.palletsprojects.com/en/7.x/arguments/#file-arguments
File Arguments
Since all the examples have already worked with filenames, it makes sense to explain how to deal with files properly. Command line tools are more fun if they work with files the Unix way, which is to accept - as a special file that refers to stdin/stdout.
Click supports this through the click.File type which intelligently handles files for you. It also deals with Unicode and bytes correctly for all versions of Python so your script stays very portable.
I believe this library is the friendliest to BASH/DASH-style conventions like -.
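As a minimal sketch along the lines of the click documentation (the cat command and the cat.py script name are made up):
import click

@click.command()
@click.argument('input', type=click.File('r'))
def cat(input):
    """Print a file's contents; pass - to read from stdin."""
    for line in input:
        click.echo(line, nl=False)

if __name__ == '__main__':
    cat()
Invoked as python cat.py - it reads from standard input; invoked as python cat.py some_text_file it reads from that file.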

Dealing with à in generating code for a string literal using Python 3.5's AST module, need to open with right coding

To generate JavaScript from Python in the Transcrypt Python to JS compiler, Python 3.5's ast module is used in combination with the following code:
class Generator (ast.NodeVisitor):
    ...
    ...
    def visit_Str (self, node):
        self.emit (repr (node.s))  # Simplified to need less context on StackOverflow
    ...
    ...
This works fine e.g. for the following line of Python:
test = "âäéèêëiîïoôöùüû"
which is correctly translated to:
var test = 'âäéèêëiîïoôöùüû';
Only the character à gives problems:
test = "àâäéèêëiîïoôöùüû"
is translated to:
var test = 'Ã\xa0âäéèêëiîïoôöùüû';
Is there any way to have the ast module read the source file respecting coding directives like:
# coding=<encoding name>
To open a Python file for parsing, use tokenize.open rather than the ordinary open function.
It will open the file, read the PEP 263 coding hint, and return the open file object as if it had been opened by the ordinary open with the right encoding.
Quite hard to find, not currently in the Green Tree Snakes doc. Actually found it by searching for 'coding' in the CPython sources on GitHub.
Have created an issue for Green Tree Snakes doc to add this.
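Putting it together, a minimal sketch (the filename is made up):
import ast
import tokenize

# tokenize.open() honors the PEP 263 coding cookie, so the source
# reaches ast.parse() correctly decoded.
with tokenize.open('module_with_coding_cookie.py') as f:
    tree = ast.parse(f.read())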

how to "source" file into python script

I have a text file /etc/default/foo which contains one line:
FOO="/path/to/foo"
In my python script, I need to reference the variable FOO.
What is the simplest way to "source" the file /etc/default/foo into my python script, same as I would do in bash?
. /etc/default/foo
Same answer as @jil's; however, that answer is specific to a historical version of Python.
In modern Python (3.x):
exec(open('filename').read())
replaces execfile('filename') from 2.x
You could use execfile:
execfile("/etc/default/foo")
But please be aware that this will evaluate the contents of the file as-is into your program source. It is a potential security hazard unless you can fully trust the source.
It also means that the file needs to be valid python syntax (your given example file is).
Keep in mind that if you have a "text" file with this content and a .py file extension, you can always do:
import mytextfile
print(mytextfile.FOO)
Of course, this assumes that the text file is syntactically correct as far as Python is concerned. On a project I worked on we did something similar to this. Turned some text files into Python files. Wacky but maybe worth consideration.
Just to give a different approach, note that if your original file is set up as
export FOO=/path/to/foo
You can do source /etc/default/foo; python myprogram.py (or . /etc/default/foo; python myprogram.py) and within myprogram.py all the values that were exported in the sourced file are visible in os.environ, e.g.:
import os
os.environ["FOO"]
If you know for certain that it only contains VAR="QUOTED STRING" style variables, like this:
FOO="some value"
Then you can just do this:
>>> with open('foo.sysconfig') as fd:
... exec(fd.read())
Which gets you:
>>> FOO
'some value'
(This is effectively the same thing as the execfile() solution
suggested in the other answer.)
This method has substantial security implications; if instead of FOO="some value" your file contained:
os.system("rm -rf /")
Then you would be In Trouble.
Alternatively, you can do this:
>>> import shlex
>>> with open('foo.sysconfig') as fd:
...     settings = {var: shlex.split(value) for var, value in [line.split('=', 1) for line in fd]}
Which gets you a dictionary settings that has:
>>> settings
{'FOO': ['some value']}
That settings = {...} line is using a dictionary comprehension. You could accomplish the same thing in a few more lines with a for loop and so forth.
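For instance, a sketch of that spelled-out version, using the same foo.sysconfig file:
import shlex

settings = {}
with open('foo.sysconfig') as fd:
    for line in fd:
        var, value = line.split('=', 1)
        settings[var] = shlex.split(value)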
And of course if the file contains shell-style variable expansion like ${somevar:-value_if_not_set} then this isn't going to work (unless you write your very own shell style variable parser).
There are a couple ways to do this sort of thing.
You can indeed import the file as a module, as long as its contents correspond to Python syntax. But then either the file in question must be a .py in the same directory as your script, or you have to use imp (or importlib, depending on your version) like here.
Another solution (the one I prefer) is to use a data format that any Python library can parse; JSON comes to mind as an example.
/etc/default/foo :
{"FOO":"path/to/foo"}
And in your Python code:
import json

with open('/etc/default/foo') as f:
    data = json.load(f)
FOO = data["FOO"]
## ...
This way, you don't risk executing some uncertain code...
You have the choice, depending on what you prefer. If your data file is auto-generated by some script, it might be easier to keep a simple syntax like FOO="path/to/foo" and use imp.
Hope that helps!
The Solution
Here is my approach: parse the bash file myself and process only variable assignment lines such as:
FOO="/path/to/foo"
Here is the code:
import shlex

def parse_shell_var(line):
    """
    Parse such lines as:
        FOO="My variable foo"

    :return: a tuple of var name and var value, such as
        ('FOO', 'My variable foo')
    """
    return shlex.split(line, posix=True)[0].split('=', 1)

if __name__ == '__main__':
    with open('shell_vars.sh') as f:
        shell_vars = dict(parse_shell_var(line) for line in f if '=' in line)
    print(shell_vars)
How It Works
Take a look at this snippet:
shell_vars = dict(parse_shell_var(line) for line in f if '=' in line)
This line iterates through the lines of the shell script and processes only those that contain an equal sign (not a fool-proof way to detect variable assignment, but the simplest). Each of those lines is then run through the function parse_shell_var, which uses shlex.split to correctly handle the quotes (or the lack thereof). Finally, the pieces are assembled into a dictionary. The output of this script is:
{'MOO': '/dont/have/a/cow', 'FOO': 'my variable foo', 'BAR': 'My variable bar'}
Here is the contents of shell_vars.sh:
FOO='my variable foo'
BAR="My variable bar"
MOO=/dont/have/a/cow
echo $FOO
Discussion
This approach has a couple of advantages:
It does not execute the shell (either in bash or in Python), which avoids any side-effect
Consequently, it is safe to use, even if the origin of the shell script is unknown
It correctly handles values with or without quotes
This approach is not perfect, it has a few limitations:
The method of detecting variable assignment (by looking for the presence of the equal sign) is primitive and not accurate. There are ways to better detect these lines but that is the topic for another day
It does not correctly parse values which are built upon other variables or commands. That means, it will fail for lines such as:
FOO=$BAR
FOO=$(pwd)
Based on the answer using exec(open(...).read()): with value = eval(open(...).read()) you get back just the value of the expression in the file. E.g.
1 + 1: 2
"Hello World": 'Hello World'
float(2) + 1: 3.0
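A minimal sketch of that eval() variant (value.txt is made up; the same security caveats as with exec() apply):
# value.txt contains a single Python expression, e.g.:  1 + 1
with open('value.txt') as f:
    value = eval(f.read())
print(value)  # 2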

Is there a version of ConfigParser that deals with files with no section headers?

I have a config file which is mainly used in shell scripts, and therefore has the following format:
# Database parameters (MySQL only for now)
DBHOST=localhost
DATABASE=stuff
DBUSER=mypkguser
DBPASS=zbxhsxhg
# Storage locations
STUFFDIR=/var/mypkg/stuff
GIZMODIR=/var/mypkg/gizmo
Now I need to read its values from a Python (2.6) script. I would like not to reinvent the wheel by parsing it with descriptor.readlines(), looking for equal signs, skipping lines beginning with '#', dealing with quoted values, and blah blah blah boring. I tried using ConfigParser, but it doesn't like files that don't have section headers. Do I have any options, or will I have to do the boring thing?
Oh, by the way, wrapping a shell script around the Python script is not an option. It has to run within Apache.
I'm not aware of such a module, but as a quick and dirty hack: just add a [section] line before the file content and you can use ConfigParser as intended!
from io import StringIO
import ConfigParser

filename = 'ham.egg'
vfile = StringIO(u'[Pseudo-Section]\n%s' % open(filename).read())
config = ConfigParser.ConfigParser()
config.readfp(vfile)

Make Sphinx generate RST class documentation from pydoc

I'm currently migrating all existing (incomplete) documentation to Sphinx.
The problem is that the documentation uses Python docstrings (the module is written in C, but it probably does not matter) and the class documentation must be converted into a form usable for Sphinx.
There is sphinx.ext.autodoc, but it automatically puts the current docstrings into the document. I want to generate a source file in RST based on the current docstrings, which I could then edit and improve manually.
How would you transform docstrings into RST for Sphinx?
autodoc does generate RST; there is just no official way to get it out. The easiest hack to get at it was changing the sphinx.ext.autodoc.Documenter.add_line method to emit the lines it receives.
As all I want is one time migration, output to stdout is good enough for me:
def add_line(self, line, source, *lineno):
    """Append one line of generated reST to the output."""
    print(self.indent + line)
    self.directive.result.append(self.indent + line, source, *lineno)
Now autodoc prints generated RST on stdout while running and you can simply redirect or copy it elsewhere.
Monkey patching autodoc so it works without needing to edit anything:
import sphinx.ext.autodoc

rst = []

def add_line(self, line, source, *lineno):
    """Append one line of generated reST to the output."""
    rst.append(line)
    self.directive.result.append(self.indent + line, source, *lineno)

sphinx.ext.autodoc.Documenter.add_line = add_line

try:
    sphinx.main(['sphinx-build', '-b', 'html', '-d', '_build/doctrees', '.', '_build/html'])
except SystemExit:
    with file('doc.rst', 'w') as f:
        for line in rst:
            print >>f, line
As far as I know there are no automated tools to do this. My approach would therefore be to write a small script that reads the relevant modules (based on sphinx.ext.autodoc) and writes the docstrings into a file, formatted appropriately.
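For instance, such a script might look like this minimal sketch (module_to_rst is made up; it walks a module with inspect and emits each docstring under an RST heading):
import importlib
import inspect

def module_to_rst(module_name):
    """Return a crude RST document built from a module's docstrings."""
    mod = importlib.import_module(module_name)
    lines = [module_name, '=' * len(module_name), '', inspect.getdoc(mod) or '']
    for name, obj in inspect.getmembers(mod):
        if inspect.isclass(obj) or inspect.isfunction(obj):
            lines += ['', name, '-' * len(name), '', inspect.getdoc(obj) or '']
    return '\n'.join(lines)

print(module_to_rst('json'))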
