python ruamel.yaml package, how to get header comment lines?

I want to get YAML file comments on header lines, like
# 11111111111111111
# 11111111111111111
# 22222222222222222
# bbbbbbbbbbbbbbbbb
---
start:
....
And I used the ca attribute on the loaded data, but found that these comments are not on it. Is there any other way to get these comments?

Currently (ruamel.yaml==0.17.17) the comments that occur
before the document start token (---) are not passed on from the
DocumentStartToken to the DocumentStartEvent, so these comments are
effectively lost during parsing. Even if they were passed on, it is
non-trivial to preserve them as the DocumentStartEvent is silently
dropped during composition.
You can either put the comments after the end of directives indicator
(---) which allows you to get at the comments using the .ca
attribute without a problem, or remove that indicator altogether as it
is superfluous (at least in your example). Alternatively you will have to
write a small wrapper around the loader:
import sys
import pathlib
import ruamel.yaml

fn = pathlib.Path('input.yaml')

def load_with_pre_directives_comments(yaml, path):
    comments = []
    text = path.read_text()
    if '\n---\n' not in text and '\n--- ' not in text:
        return yaml.load(text), comments
    for line in text.splitlines(True):
        if line.lstrip().startswith('#'):
            comments.append(line)
        elif line.startswith('---'):
            return yaml.load(text), comments

yaml = ruamel.yaml.YAML()
yaml.explicit_start = True
data, comments = load_with_pre_directives_comments(yaml, fn)
print(''.join(comments), end='')
yaml.dump(data, sys.stdout)
which gives:
# 11111111111111111
# 11111111111111111
# 22222222222222222
# bbbbbbbbbbbbbbbbb
---
start: 42
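If you take the first alternative instead and move the comments after the --- indicator, they survive round-trip loading and can be reached through the .ca attribute. A minimal sketch (the exact structure of .ca is an implementation detail, so inspect what you get rather than relying on its shape):
import ruamel.yaml

yaml = ruamel.yaml.YAML()
data = yaml.load("""\
---
# 11111111111111111
# bbbbbbbbbbbbbbbbb
start: 42
""")
# the comments now sit on the loaded data; inspect data.ca to see how
print(data.ca)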


Force Sphinx to interpret Markdown in Python docstrings instead of reStructuredText

I'm using Sphinx to document a python project. I would like to use Markdown in my docstrings to format them. Even if I use the recommonmark extension, it only covers the .md files written manually, not the docstrings.
I use autodoc, napoleon and recommonmark in my extensions.
How can I make sphinx parse markdown in my docstrings?
Sphinx's Autodoc extension emits an event named autodoc-process-docstring every time it processes a doc-string. We can hook into that mechanism to convert the syntax from Markdown to reStructuredText.
Unfortunately, Recommonmark does not expose a Markdown-to-reST converter. It maps the parsed Markdown directly to a Docutils object, i.e., the same representation that Sphinx itself creates internally from reStructuredText.
Instead, I use Commonmark for the conversion in my projects. Because it's fast — much faster than Pandoc, for example. Speed is important as the conversion happens on the fly and handles each doc-string individually. Other than that, any Markdown-to-reST converter would do. M2R2 would be a third example. The downside of any of these is that they do not support Recommonmark's syntax extensions, such as cross-references to other parts of the documentation. Just the basic Markdown.
To plug in the Commonmark doc-string converter, make sure that package is installed (pip install commonmark) and add the following to Sphinx's configuration file conf.py:
import commonmark

def docstring(app, what, name, obj, options, lines):
    md = '\n'.join(lines)
    ast = commonmark.Parser().parse(md)
    rst = commonmark.ReStructuredTextRenderer().render(ast)
    lines.clear()
    lines += rst.splitlines()

def setup(app):
    app.connect('autodoc-process-docstring', docstring)
Meanwhile, Recommonmark was deprecated in May 2021. The Sphinx extension MyST, a more feature-rich Markdown parser, is the replacement recommended by Sphinx and by Read-the-Docs. MyST does not yet support Markdown in doc-strings either, but the same hook as above can be used to get limited support via Commonmark.
A possible alternative to the approach outlined here is using MkDocs with the MkDocStrings plug-in, which would eliminate Sphinx and reStructuredText entirely from the process.
Building on john-hen's answer, the following will keep reStructuredText fields like :py:attr:, :py:class:, etc. This allows you to reference other classes and more.
import re
import commonmark

py_attr_re = re.compile(r"\:py\:\w+\:(``[^:`]+``)")

def docstring(app, what, name, obj, options, lines):
    md = '\n'.join(lines)
    ast = commonmark.Parser().parse(md)
    rst = commonmark.ReStructuredTextRenderer().render(ast)
    lines.clear()
    lines += rst.splitlines()
    for i, line in enumerate(lines):
        while True:
            match = py_attr_re.search(line)
            if match is None:
                break
            start, end = match.span(1)
            line_start = line[:start]
            line_end = line[end:]
            line_modify = line[start:end]
            line = line_start + line_modify[1:-1] + line_end
        lines[i] = line

def setup(app):
    app.connect('autodoc-process-docstring', docstring)
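To illustrate what that post-processing does (a hedged sketch; the docstring line is made up): the Markdown-to-reST conversion turns `MyClass` into ``MyClass``, which breaks roles such as :py:class:, and the regex strips one backtick from each side again:
import re

# regex from the answer above
py_attr_re = re.compile(r"\:py\:\w+\:(``[^:`]+``)")

line = "See :py:class:``MyClass`` for details."  # made-up converted line
match = py_attr_re.search(line)
start, end = match.span(1)
line = line[:start] + line[start:end][1:-1] + line[end:]
print(line)  # See :py:class:`MyClass` for details.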
I had to extend the accepted answer by john-hen to allow multi-line descriptions of Args: entries to be considered a single parameter:
import commonmark

def docstring(app, what, name, obj, options, lines):
    wrapped = []
    literal = False
    for line in lines:
        if line.strip().startswith(r'```'):
            literal = not literal
        if not literal:
            line = ' '.join(x.rstrip() for x in line.split('\n'))
        indent = len(line) - len(line.lstrip())
        if indent and not literal:
            wrapped.append(' ' + line.lstrip())
        else:
            wrapped.append('\n' + line.strip())
    ast = commonmark.Parser().parse(''.join(wrapped))
    rst = commonmark.ReStructuredTextRenderer().render(ast)
    lines.clear()
    lines += rst.splitlines()

def setup(app):
    app.connect('autodoc-process-docstring', docstring)
The current answer by john-hen is great, but seems to fail for multi-line Args: in Python style. Here was my fix:
import commonmark

def docstring(app, what, name, obj, options, lines):
    md = "\n".join(lines)
    ast = commonmark.Parser().parse(md)
    rst = commonmark.ReStructuredTextRenderer().render(ast)
    lines.clear()
    lines += _normalize_docstring_lines(rst.splitlines())

def _normalize_docstring_lines(lines: list[str]) -> list[str]:
    """Fix an issue with multi-line args which are incorrectly parsed.

    ```
    Args:
      x: My multi-line description which fit on multiple lines
        and continue in this line.
    ```

    Is parsed as (missing indentation):

    ```
    :param x: My multi-line description which fit on multiple lines
    and continue in this line.
    ```

    Instead of:

    ```
    :param x: My multi-line description which fit on multiple lines
        and continue in this line.
    ```
    """
    is_param_field = False
    new_lines = []
    for l in lines:
        if l.lstrip().startswith(":param"):
            is_param_field = True
        elif is_param_field:
            if not l.strip():  # Blank line resets the param field
                is_param_field = False
            else:  # Restore indentation
                l = "    " + l.lstrip()
        new_lines.append(l)
    return new_lines

def setup(app):
    app.connect("autodoc-process-docstring", docstring)

Loading and dumping multiple yaml files with ruamel.yaml (python)

Using python 2 (atm) and ruamel.yaml 0.13.14 (RedHat EPEL)
I'm currently writing some code to load yaml definitions, but they are split up in multiple files. The user-editable part contains e.g.:
users:
  xxxx1:
    timestamp: '2018-10-22 11:38:28.541810'
    << : *userdefaults
  xxxx2:
    << : *userdefaults
    timestamp: '2018-10-22 11:38:28.541810'
the defaults are stored in another file, which is not editable:
userdefaults: &userdefaults
  # Default values for user settings
  fileCountQuota: 1000
  diskSizeQuota: "300g"
I can process these together by loading both and concatenating the strings, and then running them through merged_data = list(yaml.load_all("{}\n{}".format(defaults_data, user_data), Loader=yaml.RoundTripLoader)) which correctly resolves everything. (When not using RoundTripLoader I get errors that the references cannot be resolved, which is normal.)
Now, I want to do some updates via python code (e.g. update the timestamp), and for that I need to write back just the user part. And that's where things get hairy. I so far haven't found a way to write just that yaml document, not both.
First of all, unless there are multiple documents in your defaults file, you
don't have to use load_all, as you don't concatenate two documents into a
multiple-document stream. If you had, by using a format string with a document-end
marker ("{}\n...\n{}") or with a directives-end marker ("{}\n---\n{}"),
your aliases would not carry over from one document to another, as per the
YAML specification:
It is an error for an alias node to use an anchor that does not
previously occur in the document.
The anchor has to be in the document, not just in the stream (which can consist of multiple
documents).
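A small sketch of that rule (using the current ruamel.yaml API; the stream contents are made up): composing the second document fails because the anchor only occurs in the first one:
import ruamel.yaml

yaml = ruamel.yaml.YAML()
stream = """\
defaults: &defaults
  quota: 1000
---
user:
  <<: *defaults
"""
try:
    print(list(yaml.load_all(stream)))
except ruamel.yaml.YAMLError as exc:
    print('as expected:', exc)  # "found undefined alias 'defaults'"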
I tried some hocus pocus, pre-populating the already represented dictionary
of anchored nodes:
import sys
import datetime
from ruamel import yaml

def load():
    with open('defaults.yaml') as fp:
        defaults_data = fp.read()
    with open('user.yaml') as fp:
        user_data = fp.read()
    merged_data = yaml.load("{}\n{}".format(defaults_data, user_data),
                            Loader=yaml.RoundTripLoader)
    return merged_data

class MyRTDGen(object):
    class MyRTD(yaml.RoundTripDumper):
        def __init__(self, *args, **kw):
            pps = kw.pop('pre_populate', None)
            yaml.RoundTripDumper.__init__(self, *args, **kw)
            if pps is not None:
                for pp in pps:
                    try:
                        anchor = pp.yaml_anchor()
                    except AttributeError:
                        anchor = None
                    node = yaml.nodes.MappingNode(
                        u'tag:yaml.org,2002:map', [], flow_style=None, anchor=anchor)
                    self.represented_objects[id(pp)] = node

    def __init__(self, pre_populate=None):
        assert isinstance(pre_populate, list)
        self._pre_populate = pre_populate

    def __call__(self, *args, **kw):
        kw1 = kw.copy()
        kw1['pre_populate'] = self._pre_populate
        myrtd = self.MyRTD(*args, **kw1)
        return myrtd

def update(md, file_name):
    ud = md.pop('userdefaults')
    MyRTD = MyRTDGen([ud])
    yaml.dump(md, sys.stdout, Dumper=MyRTD)
    with open(file_name, 'w') as fp:
        yaml.dump(md, fp, Dumper=MyRTD)

md = load()
md['users']['xxxx2']['timestamp'] = str(datetime.datetime.utcnow())
update(md, 'user.yaml')
Since the PyYAML based API requires a class instead of an object, you need to
use a class generator that actually adds the data elements to pre-populate on
the fly from within yaml.dump().
But this doesn't work, as a node only gets written out with an anchor once it is
determined that the anchor is used (i.e. there is a second reference). So actually the
first merge key gets written out as an anchor. And although I am quite familiar
with the code base, I could not get this to work properly in a reasonable amount of time.
So instead, I would just rely on the fact that there is only one key that matches
the first key of user.yaml at the root level of the dump of the combined updated
file, and strip anything before that.
import sys
import datetime
from ruamel import yaml

with open('defaults.yaml') as fp:
    defaults_data = fp.read()
with open('user.yaml') as fp:
    user_data = fp.read()
merged_data = yaml.load("{}\n{}".format(defaults_data, user_data),
                        Loader=yaml.RoundTripLoader)

# find the key
for line in user_data.splitlines():
    line = line.split('# ')[0].rstrip()  # end of line comment, not checking for strings
    if line and line[-1] == ':' and line[0] != ' ':
        split_key = line
        break

merged_data['users']['xxxx2']['timestamp'] = str(datetime.datetime.utcnow())
buf = yaml.compat.StringIO()
yaml.dump(merged_data, buf, Dumper=yaml.RoundTripDumper)
document = split_key + buf.getvalue().split('\n' + split_key)[1]
sys.stdout.write(document)
which gives:
users:
  xxxx1:
    <<: *userdefaults
    timestamp: '2018-10-22 11:38:28.541810'
  xxxx2:
    <<: *userdefaults
    timestamp: '2018-10-23 09:59:13.829978'
I had to make a virtualenv to make sure I could run the above with ruamel.yaml==0.13.14.
That version is from the time I was still young (I won't claim to have been innocent).
There have been over 85 releases of the library since then.
I can understand that you might not be able to run anything but
Python2 at the moment and cannot compile/use a newer version. But what
you really should do is install virtualenv (can be done using EPEL, but also without
further "polluting" your system installation), make a virtualenv for the
code you are developing and install the latest version of ruamel.yaml (and
your other libraries) in there. You can also do that if you need
to distribute your software to other systems, just install virtualenv there as well.
I have all my utilities under /opt/util, and manage them with
virtualenvutils, a wrapper around virtualenv.
For writing the user part, you will have to manually split the multi-file output of yaml.dump() and write the appropriate part back to the users yaml file.
import datetime
import StringIO
import ruamel.yaml

yaml = ruamel.yaml.YAML(typ='rt')
data = None
with open('defaults.yaml', 'r') as defaults:
    with open('users.yaml', 'r') as users:
        raw = "{}\n{}".format(''.join(defaults.readlines()), ''.join(users.readlines()))
        data = list(yaml.load_all(raw))

data[0]['users']['xxxx1']['timestamp'] = datetime.datetime.now().isoformat()

with open('users.yaml', 'w') as outfile:
    sio = StringIO.StringIO()
    yaml.dump(data[0], sio)
    out = sio.getvalue()
    outfile.write(out.split('\n\n')[1])  # write the second part, as this is the contents of users.yaml

lxml, xi:include, and original file

I'm using lxml to parse a file that contains xi:include elements, and I'm resolving the includes using xinclude().
Given an element, is there any way to identify the file and source line that the element originally appeared in?
For example:
from lxml import etree
doc = etree.parse('file.xml')
doc.xinclude()
xpath_expression = ...
elt = doc.xpath(xpath_expression)
# Print file name and source line of `elt` location
The xinclude expansion will add an xml:base attribute to the top level expanded element,
and elt.base and elt.sourceline are also updated for the child nodes as well, so:
print elt.base, elt.sourceline
will give you what you want.
If elt is not part of the xinclude expansion, then elt.base will point to the base
document ('file.xml') and elt.sourceline will be the line number in that file.
(Note that sourceline usually seems to point to the line where the element tag
ends rather than the line where it begins, if the element spans multiple lines, just as
validation error messages usually point to the closing tag where the error occurs.)
You can find the initial xincluded elements and check this with:
xels = doc.xpath('//*[@xml:base]')
for x in xels:
    print x.tag, x.base, x.sourceline
    for c in x.getchildren():
        print c.tag, c.base, c.sourceline
Sadly, current versions of lxml no longer include this ability. However, I've developed a workaround using a simple custom loader. Here's a test script which demonstrates the bug in the approach above along with the workaround. Note that this approach only updates the xml:base attribute of the root tag of the included document.
The output of the program (using Python 3.9.1, lxml 4.6.3):
Included file was source.xml; xinclude reports it as document.xml
Included file was source.xml; workaround reports it as source.xml
Here's the sample program.
# Includes
# ========
from pathlib import Path
from textwrap import dedent
from lxml import etree as ElementTree
from lxml import ElementInclude
# Setup
# =====
# Create a sample document, taken from the `Python stdlib
# <https://docs.python.org/3/library/xml.etree.elementtree.html#id3>`_...
Path("document.xml").write_text(
dedent(
"""\
<?xml version="1.0"?>
<document xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include href="source.xml" parse="xml" />
</document>
"""
)
)
# ...and the associated include file.
Path("source.xml").write_text("<para>This is a paragraph.</para>")
# Failing xinclude case
# =====================
# Load and xinclude this.
tree = ElementTree.parse("document.xml")
tree.xinclude()
# Show that the ``base`` attribute refers to the top-level
# ``document.xml``, instead of the xincluded ``source.xml``.
root = tree.getroot()
print(f"Included file was source.xml; xinclude reports it as {root[0].base}")
# Workaround
# ==========
# As a workaround, define a loader which sets the ``xml:base`` of an
# xincluded element. While lxml evidently used to do this, a change
# eliminated this ability per some `discussion
# <https://mail.gnome.org/archives/xml/2014-April/msg00015.html>`_,
# which included a rejected patch fixing this problem. `Current source
# <https://github.com/GNOME/libxml2/blob/master/xinclude.c#L1689>`_
# lacks this patch.
def my_loader(href, parse, encoding=None, parser=None):
    ret = ElementInclude._lxml_default_loader(href, parse, encoding, parser)
    ret.attrib["{http://www.w3.org/XML/1998/namespace}base"] = href
    return ret
new_tree = ElementTree.parse("document.xml")
ElementInclude.include(new_tree, loader=my_loader)
new_root = new_tree.getroot()
print(f"Included file was source.xml; workaround reports it as {new_root[0].base}")

Yajl parse error with githubarchive.org JSON stream in Python

I'm trying to parse a GitHub archive file with yajl-py. I believe the basic format of the file is a stream of JSON objects, so the file itself is not valid JSON, but it contains objects which are.
To test this out, I installed yajl-py and then used their example parser (from https://github.com/pykler/yajl-py/blob/master/examples/yajl_py_example.py) to try to parse a file:
python yajl_py_example.py < 2012-03-12-0.json
where 2012-03-12-0.json is one of the GitHub archive files that's been decompressed.
It appears this sort of thing should work from their reference implementation in Ruby. Do the Python packages not handle JSON streams?
By the way, here's the error I get:
yajl.yajl_common.YajlError: parse error: trailing garbage
9478bbc3","type":"PushEvent"}{"repository":{"url":"https://g
(right here) ------^
You need to use a stream parser to read the data. Yajl supports stream parsing, which allows you to read one object at a time from a file/stream. Having said that, it doesn't look like Python has working bindings for Yajl.
py-yajl has iterload commented out, not sure why: https://github.com/rtyler/py-yajl/commit/a618f66005e9798af848c15d9aa35c60331e6687#L1R264
Not a Python solution, but you can use Ruby bindings to read in the data and emit it in a format you need:
# gem install yajl-ruby
require 'open-uri'
require 'zlib'
require 'yajl'

gz = open('http://data.githubarchive.org/2012-03-11-12.json.gz')
js = Zlib::GzipReader.new(gz).read

Yajl::Parser.parse(js) do |event|
  print event
end
The example does not enable any of the Yajl extra features, for what you are looking for you need to enable allow_multiple_values flag on the parser. Here is what you need to modify to the basic example to have it parse your file.
--- a/examples/yajl_py_example.py
+++ b/examples/yajl_py_example.py
@@ -37,6 +37,7 @@ class ContentHandler(YajlContentHandler):
 def main(args):
     parser = YajlParser(ContentHandler())
+    parser.allow_multiple_values = True
     if args:
         for fn in args:
             f = open(fn)
Yajl-Py is a thin wrapper around yajl, so you can use all the features Yajl provides. Here are all the flags that yajl provides that you can enable:
yajl_allow_comments
yajl_dont_validate_strings
yajl_allow_trailing_garbage
yajl_allow_multiple_values
yajl_allow_partial_values
To turn these on in yajl-py you do the following:
parser = YajlParser(ContentHandler())
# enabling these features, note that to make it more pythonic, the prefix `yajl_` was removed
parser.allow_comments = True
parser.dont_validate_strings = True
parser.allow_trailing_garbage = True
parser.allow_multiple_values = True
parser.allow_partial_values = True
# then go ahead and parse
parser.parse()
I know this has been answered, but I prefer the following approach and it does not use any packages. The github dictionary is on a single line for some reason, so you cannot assume a single dictionary per line. This looks like:
{"json-key":"json-val", "sub-dict":{"sub-key":"sub-val"}}{"json-key2":"json-val2", "sub-dict2":{"sub-key2":"sub-val2"}}
I decided to create a function which fetches one dictionary at a time. It returns the JSON as a string.
def read_next_dictionary(f):
    depth = 0
    json_str = ""
    while True:
        c = f.read(1)
        if not c:
            break  # EOF
        json_str += str(c)
        if c == '{':
            depth += 1
        elif c == '}':
            depth -= 1
            if depth == 0:
                break
    return json_str
I used this function to loop through the Github archive with a while loop:
import json
import pprint

arr_of_dicts = []
f = open(file_path)  # path to the decompressed archive file
while True:
    json_as_str = read_next_dictionary(f)
    try:
        json_dict = json.loads(json_as_str)
        arr_of_dicts.append(json_dict)
    except ValueError:
        break  # exception on loading json to end loop

pprint.pprint(arr_of_dicts)
This works on the dataset posted here: http://www.githubarchive.org/ (after gunzip)
As a workaround you can split the GitHub Archive files into lines and then parse each line as json:
import json

with open('2013-05-31-10.json') as f:
    lines = f.read().splitlines()

for line in lines:
    rec = json.loads(line)
    ...

Writing comments to files with ConfigParser

How can one write comments to a given file within sections?
If I have:
import ConfigParser

with open('./config.ini', 'w') as f:
    conf = ConfigParser.ConfigParser()
    conf.set('DEFAULT', 'test', 1)
    conf.write(f)
I will get the file:
[DEFAULT]
test = 1
But how can I get a file with comments inside [DEFAULT] section, like:
[DEFAULT]
; test comment
test = 1
I know I can write comments to files by doing:
import ConfigParser

with open('./config.ini', 'w') as f:
    conf = ConfigParser.ConfigParser()
    conf.set('DEFAULT', 'test', 1)
    conf.write(f)
    f.write('; test comment')  # but this gets printed after the section key-value pairs
Is this a possibility with ConfigParser? And I don't want to try another module because I need to keep my program as "stock" as possible.
You can use the allow_no_value option if you have Version >= 2.7
This snippet:
import ConfigParser

config = ConfigParser.ConfigParser(allow_no_value=True)
config.add_section('default_settings')
config.set('default_settings', '; comment here')
config.set('default_settings', 'test', 1)
with open('config.ini', 'w') as fp:
    config.write(fp)

config = ConfigParser.ConfigParser(allow_no_value=True)
config.read('config.ini')
print config.items('default_settings')
will create an ini file like this:
[default_settings]
; comment here
test = 1
Update for 3.7
I've been dealing with configparser lately and came across this post. Figured I'd update it with information relevant to 3.7.
Example 1:
config = configparser.ConfigParser(allow_no_value=True)
config.set('SECTION', '; This is a comment.', None)
Example 2:
config = configparser.ConfigParser(allow_no_value=True)
config['SECTION'] = {'; This is a comment': None, 'Option': 'Value'}
Example 3: If you want to keep your letter case unchanged (default is to convert all option:value pairs to lowercase)
config = configparser.ConfigParser(allow_no_value=True)
config.optionxform = str
config.set('SECTION', '; This Comment Will Keep Its Original Case', None)
Where "SECTION" is the case-sensitive section name you want the comment added to. Using "None" (no quotes) instead of an empty string ('') will allow you to set the comment without leaving a trailing "=".
You can create a variable that starts with a # or ; character:
conf.set('default_settings', '; comment here', '')
conf.set('default_settings', 'test', 1)
The created conf file is:
[default_settings]
; comment here =
test = 1
The ConfigParser.read function won't parse the comment line back as a value:
config = ConfigParser.ConfigParser()
config.read('config.ini')
print config.items('default_settings')
gives
[('test','1')]
You could also use ConfigUpdater. It has many more convenience options to update configuration files in a minimal invasive way.
You would basically do:
from configupdater import ConfigUpdater

updater = ConfigUpdater()
updater.add_section('DEFAULT')
updater.set('DEFAULT', 'test', 1)
updater['DEFAULT']['test'].add_before.comment('test comment', comment_prefix=';')
with open('./config.ini', 'w') as f:
    updater.write(f)
Freaky solution for the above :)
Note there is a side effect; see if that suits you.
config = configparser.ConfigParser(comment_prefixes='///')
config.set('section', '# cmt', 'comment goes here')
configparser will treat comments as variables, but real software would not.
This would even preserve the comments on writes done after a read of the same ini file, which is a real game changer (disappearing comments are just horrible) :) and you don't need allow_no_value=True to allow an empty value, just minor visual candy :)
so the ini file would look like:
[section]
# cmt = comment goes here
which pretty much gets the job done :)
Please make sure to initialize comment_prefixes with a string that would never appear in your ini file, just in case.
This worked for me in 3.9.
There is a side effect on writing already existing comments: they do not disappear (the normal default), but are converted to a similar form # first = <remaining>, where first is the first word of the comment and remaining is the rest of it. This changes how the file looks, so be careful...
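Here is a short sketch of that round-trip behaviour (Python 3; the /// prefix and the file contents are made up):
import configparser
import io

config = configparser.ConfigParser(comment_prefixes='///')
config.read_string('[section]\n# kept = this comment survives\nkey = value\n')
buf = io.StringIO()
config.write(buf)
print(buf.getvalue())
# [section]
# # kept = this comment survives
# key = value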
