How does the automatic Ansible documentation work? - python

So this is my situation:
I am writing Ansible playbooks and roles in a huge project directory. Currently my documentation is stored in .md files, but I want to use Ansible's documentation mechanism, so I looked into the Sphinx documentation system. It seems pretty neat to me, so I installed it and dug into its mechanisms.
But what I can't figure out is: how does Ansible include the documentation that is located in the Python modules into the Sphinx documentation?
I am sorry I can't be more specific, but I assume I am currently just scratching the surface.
So here is what I want to achieve:
I have the typical roles directory
In it there are several roles
The files are mainly .yaml or .yml files
I at least want to include documentation from one file within these role directories
Ideally I want to pull documentation from several files within the directory
If too much is unclear, please tell me and I will try to improve the question; I have been unable to figure out for hours how to achieve this, or even how to ask precisely.
Thanks in advance for every comment and answer!

Ansible's automatic documentation only covers Ansible modules, not playbooks.
If you want to document your playbooks, you are on your own.
There is a small project on GitHub, ansible-docgen: it fetches the list of tasks into Markdown files and adds a couple of headers.
You can achieve a comparable result by calling ansible-playbook --list-tasks myplaybook.yml
In my personal opinion, reading playbooks with comments is very convenient.
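To illustrate the comment-driven approach, here is a sketch of a self-documenting playbook; the playbook name, paths, and tasks are made up for the example:

```yaml
---
# Playbook: webserver.yml
# Purpose: install and configure nginx on the web hosts.
# Note: assumes the 'common' role has already run on these hosts.
- hosts: webservers
  become: true
  tasks:
    # Install the package first so configuration templating has defaults to work with.
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    # Template path is illustrative; adjust it to your roles layout.
    - name: Deploy nginx configuration
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx
```

With descriptive task names like these, ansible-playbook --list-tasks already produces a usable outline of what the playbook does.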

Related

Extract dependencies and versions from a gradle file with Python

I need to extract the dependencies and versions from a build.gradle file. I do not have access to the project folder, only to the file, so this answer does not work in my case. I am using Python to do the parsing, but it has not worked for me, especially since the file does not have a predefined structure such as JSON.
I'm using these files to test my parsing:
Twidere gradle
votling gradle
Thanks in advance
Unfortunately, you can't do what you want.
As you can see from the answer given to the SO post you linked, a gradle build file is a script. That script is written in either Kotlin or Groovy, and the version and dependencies can be defined programmatically in a multitude of ways. For instance, to set a version, you can hard-code it in the script, reference a system property, or get it through an applied plugin, and more. In your first example, it is set through an extension property, and in the second it is not even defined, likely leaving it up to the individual sub-projects if they even use it. In both examples, the build files are just a small part of a larger multi-project build, and each individual project potentially has its own defined dependencies and version.
So there is really no way to tell without actually evaluating the script. And you can't do that unless you have access to the full project structure.
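A best-effort sketch makes the limitation concrete: a regex can pick out dependencies declared as literal strings, but anything set through a variable, extension property, or plugin is invisible to it. The pattern and sample text below are illustrative, not a complete Gradle grammar.

```python
import re

# Naive pattern for literal declarations like:
#   implementation 'com.squareup.okhttp3:okhttp:4.9.0'
# The version part rejects '$' so interpolated versions deliberately fail to match.
DEP_RE = re.compile(
    r"""(?:implementation|api|compile|testImplementation)\s*
        ['"](?P<group>[^:'"]+):(?P<name>[^:'"]+):(?P<version>[^'"$]+)['"]""",
    re.VERBOSE,
)

def extract_literal_deps(gradle_text):
    """Return (group, name, version) tuples for literal declarations only."""
    return [m.group("group", "name", "version")
            for m in DEP_RE.finditer(gradle_text)]

sample = """
dependencies {
    implementation 'com.squareup.okhttp3:okhttp:4.9.0'
    // The version below lives in a variable, so no regex can resolve it:
    implementation "androidx.core:core-ktx:$ktx_version"
}
"""
print(extract_literal_deps(sample))
```

This finds the okhttp line but silently skips the core-ktx line, which is exactly the problem: without evaluating the script you cannot know what $ktx_version is.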

Where to place example data in python package?

I'm setting up my first Python package, and I want to install it with some example data so that users can run the code straight off. In case it's relevant my package is on github and I'm using pip.
At the moment my example data is being installed with the rest of the package into site-packages/, by setting include_package_data=True in setup.py and referencing the files I want to include in MANIFEST.in. However, while this makes sense to me for files used by the code as part of its processing, it doesn't seem especially appropriate for example data.
What is best/standard practice for deploying example data with a python package?
You can put your example data in the repository, in an examples folder next to your project sources, and exclude it from the package with prune examples in your manifest file.
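A sketch of what that manifest could look like; the package and directory names are illustrative:

```
# MANIFEST.in at the project root (paths are examples)
# Data the code needs at runtime ships inside the package:
recursive-include mypackage/data *.json
# Example data lives in examples/ next to the sources and stays out of the sdist:
prune examples
```

Users who want the examples then clone the repository, while pip installs stay lean.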
There is actually no universal and standard advice for that. Do whatever suits your needs.

Releasing a python package - should you include doc and tests?

So, I've released a small library on pypi, more as an exercise (to "see how it's done") than anything else.
I've uploaded the documentation on readthedocs, and I have a test suite in my git repo.
Since I figure anyone who might be interested in running the test will probably just clone the repo, and the doc is already available online, I decided not to include the doc and test directories in the released package, and I was just wondering if that was the "right" thing to do.
I know answers to this question will be rather subjective, but I felt it was a good place to ask in order to get a sense of what the community considers to be the best practice.
Including documentation as well as unit tests in the package is not required, but it is recommended.
Regarding documentation:
Old-fashioned, or better to say old-school, source releases of open source software contain documentation; this is a (de facto?) standard (have a look at GNU software, for example). Documentation is part of the code and should be part of the release, simply because once you download the source release you are independent. Ever been on a train somewhere, where you needed a quick look into the documentation of module X but didn't have internet access? And then you realized with relief that the docs were already there, locally.
Another important point in this regard is that the documentation that you bundle together with the code for sure applies to the code version. Code and docs are in sync.
One more thing especially regarding Python: you can write your docs using Sphinx and then build beautiful HTML output based on the documentation source in the process of installing the package. I have seen various Python packages doing exactly this.
Regarding tests:
Imagine the tests are bundled in the source release and are easy for the user to run (you should document how to do this). Then, if the user observes a problem with your code which is not so easy to track down, they can simply run the unit tests in their environment and see if at least those are passing. If not, you've probably made a wrong assumption when specifying the behavior of your code, which is good to know about. What I want to say is: it can be very good for you as a developer if you make it very simple for the user to execute unit tests.
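A minimal sketch of what "easy to run" can mean in practice, assuming the release ships a tests/ directory: with the standard library's unittest module, users need nothing beyond Python itself. The module and function names here are invented for the example.

```python
# tests/test_basics.py -- an example test module shipped in the source release.
# Users can run the whole suite from the unpacked release with:
#   python -m unittest discover tests
import unittest

def slugify(text):
    """Toy stand-in for a function the package might export."""
    return "-".join(text.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_dashes(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase(self):
        self.assertEqual(slugify("ready"), "ready")
```

Documenting that one discovery command in the README is usually all the user-facing instruction needed.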

What is the best practice for bundling third party libraries with your python project? (for users with no Internet)

I'm trying to build a project which includes a few open source third party libraries, but I want to bundle my distribution with said libraries - because I expect my users to want to use my project without an Internet connection. Additionally, I'd like to leave their code 100% untouched and even leave their directory structure untouched if I can. My approach so far has been to extract the tarballs and place the entire folder in MyProject/lib, but I've had to put __init__.py in every sub-directory to be able to reference the third-party code in my own modules.
What's the common best practice for accomplishing this task? How can I best respect the apps' developers by retaining their projects' structure? How can I make importing these libraries in my code less painful?
I'm new to making a distributable project in Python, so I've no clue if there's something I can do in __init__.py or in setup.py to keep myself from having to type from lib.app_name.app_name.app_module import * and whatnot.
For what it's worth, I will be distributing this on OS X and possibly *nix. I'd like to avoid using another library (e.g. setuptools) to accomplish this. (Weirdly, it seems to already be installed on my system, but I didn't install it, so I've no idea what's going on.)
I realize that this question seems to be a duplicate of this one, but I don't think it is because I'm asking for the best practice for the "distribute-the-third-party-code-with-your-own" approach. Please forgive me if I'm asking a garbage question.
buildout is a good solution for building and distributing Python software.
You could have a look at the inner workings of virtualenv to get some inspiration on how to go about this. Maybe you can reuse code from there.

Project structuring and file implementation in Python

I am having a bit of trouble getting a grasp on how to structure my Python projects. I have read jcalderone: Filesystem structure of a Python project and been looking at the source code of CouchApp, but I'm still feeling very puzzled.
I understand how the files should be structured, but I don't understand why. I would love if somebody could hook me up with a detailed walk-through of this, or could explain it to me. Simply how to set up a basic python project, and how the files would interact with each other.
I think this is definitely something people coming from other languages like C, C++, Erlang ... or people who have never been programming before, could benefit from.
name the directory something related to your project. When you do releases, you should include a version number suffix: Twisted-2.5.
Not sure why this is unclear. It seems obvious. It all has to be in one directory.
Why do things have to be in one directory? Because everyone says so, that's why.
create a directory Twisted/bin and put your executables there.
This is the way Linux works. Executables are in a bin directory. It makes it easy to put this specific directory in your PATH environment variable.
If your project is expressable as a single Python source file, then put it into the directory and name it something related to your project. For example, Twisted/twisted.py.
Right. You have /Twisted, /Twisted/bin and /Twisted/twisted.py with your actual, running code in it. Where else would you put it?
There's no "why" to this. Where else could you possibly put it?
If you need multiple source files, create a package instead (Twisted/twisted/, with an empty Twisted/twisted/__init__.py) and place your source files in it. For example, Twisted/twisted/internet.py.
This is just the way Python packages work. They're directories with __init__.py files. The tutorial is pretty clear on this.
put your unit tests in a sub-package of your package Twisted/twisted/test/.
Where else would you put your tests? Seriously. There's no "why?" to this. There's no sensible alternative.
add Twisted/README and Twisted/setup.py to explain and install your software, respectively
Right. Where else would you put them? Again. There's no "why?" They go in the top directory because -- well -- that's what a directory is for. It contains files.
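Put together, the layout the rules above describe looks like this, using the Twisted example from the quoted guide:

```
Twisted/                      top-level directory, named after the project
Twisted/bin/                  executables
Twisted/twisted/              the package itself
Twisted/twisted/__init__.py   makes the directory a package
Twisted/twisted/internet.py   a source file inside the package
Twisted/twisted/test/         unit tests as a sub-package
Twisted/README                explains the software
Twisted/setup.py              installs the software
```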
I am no expert in Python, but reading these lines from the first link makes sense if you consider that:
There might be computers/programs involved with the project
There might be other people involved with the project
If you have consistent names and file structures, both humans and computers might understand your complex program much better.
This involves topics like: testing, building, deploying, re-usability, searching, structure, consistency...
Standards enable connectivity.
Let's try to answer to each rule:
1) You should have a root dir with a good name. If you make a tarball of your package, it's not considered good behavior to have files at the root. I feel really angry when I unpack something and the current folder ends up cluttered with junk.
2) You should separate your executables from your modules. They are different beasts. And if you plan to use distutils, it will make your life easier.
3) If you have a single module, the reasons above don't apply, so you can simplify your tree.
4) Unit tests should be closely tied to the package they test. But they are not the package itself, so they are the perfect case for a sub-package.
