How do I configure sphinx to document modules intended for a MicroPython interpreter?
The fundamental problem I'm facing is that sphinx gets the information it documents by importing the module. Therefore the module being documented must be importable by the Python interpreter running sphinx-build.
First Problem
I'm using a pyboard, so naturally
import pyb
cannot find module pyb...
So I added to conf.py
import sys
from unittest.mock import MagicMock

sys.modules['pyb'] = MagicMock()  # and many more
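Expanded a little, the same mocking approach in conf.py might look like the sketch below (the extra module names are just placeholders for whatever your code imports; newer Sphinx releases also offer the autodoc_mock_imports option, which does much the same thing declaratively):

import sys
from unittest.mock import MagicMock

# Mock every MicroPython-only module that the documented code imports,
# so autodoc can import the code under CPython.
MOCK_MODULES = ['pyb', 'machine', 'micropython']  # placeholder list
for mod_name in MOCK_MODULES:
    sys.modules[mod_name] = MagicMock()

# Alternatively, with Sphinx >= 1.3:
# autodoc_mock_imports = ['pyb', 'machine', 'micropython']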
Second Problem
One of my MicroPython libraries is called cmd
Exception occurred:
File "/usr/lib/python3.5/pdb.py", line 135, in <module>
class Pdb(bdb.Bdb, cmd.Cmd):
AttributeError: module 'cmd' has no attribute 'Cmd'
So that makes sense... I changed the name of the module to ucmd, and that appears to be working... but it's suuuuuper dodgy.
Question
Is there a proper way to do this?
That is, to use Sphinx to document a module not designed for the platform running the sphinx-build command?
Phrased more practically: if I wanted to document a MicroPython module called collections, subprocess, or io (all of which are used by the sphinx library), is it possible to use sphinx to do so?
Or would I simply have to be content with naming them ucollections, usubprocess, and uio respectively?
Below is not a Sphinx solution, but it does provide partial autocompletion in most modern editors.
To generate stubs for a (custom) MicroPython module you could use MicroPython-Stubber.
For configuration of a custom module, see section 4.4.
Alternatively, in that same repo, in various tests I import the MicroPython-CPython stubs (sourced from micropython-lib and pycopy-lib) by inserting their location into CPython's sys.path.
This works very well for my testing purposes, allowing me to run and debug (hardware-agnostic) MicroPython code with little or no alteration on CPython.
Perhaps it suits your documentation needs as well.
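As a rough sketch of that sys.path approach (the stubs directory is an assumption; point it at wherever the CPython-compatible stubs live in your checkout):

import sys
from pathlib import Path

# Hypothetical location of the CPython-compatible MicroPython stubs
STUBS_DIR = Path(__file__).parent / "stubs" / "cpython_core"
sys.path.insert(0, str(STUBS_DIR))

# Imports of MicroPython-only modules can now resolve to the stub
# implementations instead of failing under CPython.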
Related
I wrote a custom python package for Ansible to handle business logic for some servers I manage. I have multiple files and they reference each other by re-importing the package.
So my package named <MyCustomPackage> has functions <Function1>, <Function2>, <Function3>, etc., all in their own files... Some of these functions reference functions in the same package, so to do that each file has:
import MyCustomPackage
at the top. I did it this way instead of a relative import because I'm also unit testing these functions, and mocking would not work with relative paths due to an __init__.py file in the test directory that was needed for test discovery. The only way I could mock was by importing the package itself. Seemed simple enough.
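To make the layout concrete, here is a minimal sketch of what I'm describing (all names are placeholders):

# module_utils/MyCustomPackage/function1.py
import MyCustomPackage.function2  # absolute self-import instead of a relative one;
# this only resolves while the directory containing MyCustomPackage is on sys.path

def function1():
    return MyCustomPackage.function2.function2()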
The problem is with Ansible. These packages are in module_utils. I import them with:
from ansible.module_utils.MyCustomPackage import MyCustomPackage
but when I use the commands I get module-not-found errors, and I traced them back to the import MyCustomPackage statement in the package itself.
So - how should I be structuring my package? Should I try again with relative file imports, or have the package modify the path so it's found with the friendly name?
Any tips would be helpful! Or if someone has a module they've written with Python modules in module_utils and unit tests that they'd be willing to share, that'd be great also!
Many people have problems with relative imports and imports in general in Python because they are ambiguous and surprisingly depend on your current working directory (and other things).
Thus I've created an experimental, new import library: ultraimport
It gives you more control over your imports and lets you do file system based, relative imports.
Given that you have a file function1.py, to import a function from function2.py, you would then write:
import ultraimport
Function2 = ultraimport('__dir__/function2.py', 'Function2')
This will always work, no matter how you run your code. It also does not force you to a specific package structure. You can just have any files you like.
I am trying to follow the tutorial for creating python packages from shared objects compiled from C++ via boost::python, but I am running into some problems I need clarification about.
Assume I have a local $installdir into which I install the compiled shared objects in the form of a python package via CMake. Parallel to the tutorial linked above, my structure is:
$installdir/
    my_package/
        __init__.py
        module/
            __init__.py
            _module.so
I have added $installdir to my $PYTHONPATH.
$installdir/my_package/__init__.py is empty.
$installdir/my_package/module/__init__.py contains:
from _module import *
When I then import my_package.module I get ModuleNotFoundError: No module named '_module' raised from $installdir/my_package/module/__init__.py.
The issue seems to be that _module.so is not found from $installdir/my_package/module/__init__.py.
Why is the approach from the tutorial not working?
If I add $installdir/my_package/module to $PYTHONPATH directly everything works fine, but it feels like that should not be necessary, as $installdir/my_package/module/__init__.py should find _module.so locally.
I implemented the following portable workaround for now within $installdir/my_package/module/__init__.py:
import sys, pathlib

# Make the directory containing this __init__.py importable, so that
# 'from _module import *' can find the adjacent _module.so
sys.path.insert(0, str(pathlib.Path(__file__).parent.absolute()))

from _module import *
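As an aside, and purely as an assumption I have not verified against this particular CMake build: under Python 3 an explicit relative import in the same __init__.py may make the sys.path manipulation unnecessary:

# $installdir/my_package/module/__init__.py
from ._module import *  # package-relative import finds the adjacent _module.so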
Bonus Question:
Changing the file name extension from .so to .pyd breaks the import (ModuleNotFoundError), even without any packaging and with the .pyd file accessible directly via $PYTHONPATH. I define the extension via CMake's SUFFIX target property. This is obviously mostly cosmetic, but I would still like to understand the reason and how to fix it.
Edit:
This is Ubuntu 20.04 with python 3.8 and boost 1.71
I am using the following command to run tests:
nosetests --with-coverage --cover-html --cover-package mypackage
I would like the coverage report to be updated, even if a developer adds new, untested, code to the package.
For example, imagine a developer adds a new module to the package but forgets to write tests for it. Since the tests may not import the new module, the code coverage may not reflect the uncovered code. Obviously this is something which could be prevented at the code review stage but it would be great to catch it even earlier.
My solution was to write a simple test which dynamically imports all modules under the top-level package. I used the following code snippet to do this:
import os
import pkgutil

for loader, name, is_pkg in pkgutil.walk_packages([pkg_dirname]):
    mod = loader.find_module(name).load_module(name)
Dynamically importing sub-packages and sub-modules like this does not get picked up by the code coverage plugin in nose.
Can anyone suggest a better way to achieve this type of thing?
The problem seems to be the method for dynamically importing all packages/modules under the top-level package.
Using the method defined here seems to work; the key difference is the use of importlib instead of pkgutil. However, importlib was introduced in Python 2.7 and 3.1, so this solution is not appropriate for older versions of Python.
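For reference, a minimal sketch of that style of approach (the linked code isn't reproduced here; mypackage is a placeholder, and this variant still uses pkgutil for the walk but importlib for the actual import):

import importlib
import pkgutil

import mypackage  # hypothetical top-level package under test

# Import every sub-package and sub-module by its fully qualified name
for _, name, _ in pkgutil.walk_packages(mypackage.__path__, prefix=mypackage.__name__ + '.'):
    importlib.import_module(name)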
I have updated the original code snippet to use __import__ instead of the ImpLoader.load_module method. This also seems to do the trick.
import os
import pkgutil

for loader, name, is_pkg in pkgutil.walk_packages([pkg_dirname]):
    mod = loader.find_module(name)
    __import__(mod.fullname)
Maybe it's not possible (I'm more used to Ruby, where this sort of thing is fine). I'm writing a library that provides additional functionality to docker-py, which provides the docker package, so you just import docker and then you get access to docker.Client etc.
Because it seemed a logical naming scheme, I wanted users to pull in my project with import docker.mymodule, so I've created a directory called docker with an __init__.py, and mymodule.py inside it.
When I try to access docker.Client, Python can't see it, as if my docker package has hidden it:
import docker
import docker.mymodule
docker.Client() # AttributeError: 'module' object has no attribute 'Client'
Is this possible, or do all top-level package names have to differ between source trees?
This would only be possible if docker was set up as a namespace package (which it isn't).
See zope.schema, zope.interface, etc. for an example of a namespace package (zope is the namespace package here). Because zope is declared as a namespace package in setup.py, it means that zope doesn't refer to a particular module or directory on the file system, but is a namespace shared by several packages. This also means that the result of import zope is pretty much undefined - it will simply import the top-level module of the first zope.* package found in the import path.
Therefore, when dealing with namespace packages, you need to explicitly import a specific one with import zope.schema or from zope import schema.
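For illustration, a pkgutil-style namespace declaration looks roughly like this (a sketch only; docker-py itself is not set up this way, which is the point above):

# docker/__init__.py in each distribution that shares the 'docker' namespace
__path__ = __import__('pkgutil').extend_path(__path__, __name__)

On Python 3.3+ you can instead omit the __init__.py entirely and rely on implicit (PEP 420) namespace packages, per the links below.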
Unfortunately, namespace packages aren't that well documented. As noted by @Bakuriu in the comments, these are some resources that contain helpful information:
Stackoverflow: How do I create a namespace package in Python?
Built-in support for namespace packages in Python 3.3
Namespace packages in the setuptools documentation
Post about namespace packages at sourceweaver.com
The only thing I can get python omnicomplete to work with are system modules. I get nothing for help with modules in my site-packages or modules that I'm currently working on.
Once I generated ctags for one of my site-packages, it started working for that package -- so I'm guessing that the omnicomplete function depends on ctags for non-sys modules.
EDIT: Not true at all.
Here's the problem -- poor testing on my part -- omnicomplete WAS working for parts of my project, just not most of it.
The issue was that I'm working on a django project, and in order to import django.db, you need to have an environment variable set. Since I couldn't import django.db, any class that inherited from django.db, or any module that imported a class that inherited from django.db wouldn't complete.
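For reference, the environment variable in question is DJANGO_SETTINGS_MODULE; something along these lines (the settings module name is just an example and must itself be importable) lets the import succeed in the same interpreter:

import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')  # placeholder settings module
import django.db  # with the variable set, this import (and hence completion) has a chance of working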
I get completion for my own modules in my PYTHONPATH or site-packages. I'm not sure what version of the pythoncomplete.vim script you're using, but you may want to make sure it's the latest.
EDIT: Here are some examples of what I'm seeing on my system...
This file (mymodule.py) I put in a directory on my PYTHONPATH, and then in site-packages. Both times I was able to get the completion shown in the screenshot below.
myvar = 'test'

def myfunction(foo='test'):
    pass

class MyClass(object):
    pass
Just ran across this on Python reddit tonight: PySmell. Looks like what you're looking for.
PySmell is a python IDE completion helper.
It tries to statically analyze Python source code, without executing it, and generates information about a project’s structure that IDE tools can use.
While it's important to note that you must properly set your PYTHONPATH environment variable, per the previous answer, there is a notable bug in Vim which prevents omnicompletion from working when an import fails. As of Vim 7.2.79, this bug hasn't been fixed.
Trouble-shooting tip: verify that the module you are trying to omni-complete can be imported by VIM. I had some syntactically correct Python that VIM didn't like:
:python import {module-name}
Traceback (most recent call last):
File "<string>", line 1, in ?
File "modulename/__init__.py", line 9
class empty_paranthesis():
^
SyntaxError: invalid syntax
Case in point: removing the parentheses from my class definition allowed VIM to import the module, and subsequently OmniComplete on that module started to work.
I think you're after the pydiction script. It lets you add your own modules and site-packages to omni complete.
While you're at it, add the following to your python.vim file...
set iskeyword+=.
This will let you auto-complete package functions e.g. if you enter...
os.path.
and then [CTRL][N], you'll get a list of the functions for os.path.