Re-opening a package in Python

Maybe it's not possible (I'm more used to Ruby, where this sort of thing is fine). I'm writing a library that provides additional functionality to docker-py, which provides the docker package, so you just import docker and then you get access to docker.Client etc.
Because it seemed a logical naming scheme, I wanted users to pull in my project with import docker.mymodule, so I've created a directory called docker with an __init__.py, and mymodule.py inside it.
When I try to access docker.Client, Python can't see it, as if my docker package has hidden it:
import docker
import docker.mymodule
docker.Client() # AttributeError: 'module' object has no attribute 'Client'
Is this possible, or do all top-level package names have to differ between source trees?

This would only be possible if docker was set up as a namespace package (which it isn't).
See zope.schema, zope.interface, etc. for an example of a namespace package (zope is the namespace package here). Because zope is declared as a namespace package in setup.py, it means that zope doesn't refer to a particular module or directory on the file system, but is a namespace shared by several packages. This also means that the result of import zope is pretty much undefined - it will simply import the top-level module of the first zope.* package found in the import path.
Therefore, when dealing with namespace packages, you need to explicitly import a specific one with import zope.schema or from zope import schema.
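For illustration, this is roughly what a shared namespace would require; it is only a sketch, since docker-py does not actually declare docker as a namespace package, and every distribution sharing the name would have to cooperate (the distribution name docker-mymodule is hypothetical):

# docker/__init__.py in each distribution that shares the "docker" namespace
# (classic setuptools/pkg_resources-style declaration)
__import__('pkg_resources').declare_namespace(__name__)

# setup.py of the add-on distribution
from setuptools import setup

setup(
    name='docker-mymodule',         # hypothetical distribution name
    packages=['docker', 'docker.mymodule'],
    namespace_packages=['docker'],  # declares "docker" as a shared namespace
)

On Python 3.3+ the same effect can be achieved with native namespace packages by simply omitting the __init__.py, but again only if every participating distribution does so.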
Unfortunately, namespace packages aren't that well documented. As noted by @Bakuriu in the comments, the following resources contain some helpful information:
Stackoverflow: How do I create a namespace package in Python?
Built-in support for namespace packages in Python 3.3
Namespace packages in the setuptools documentation
Post about namespace packages at sourceweaver.com

Related

Creating package from boost::python modules

I am trying to follow the tutorial for creating python packages from shared objects compiled from C++ via boost::python, but I am running into some problems I need clarification about.
Assume I have a local $installdir into which I install the compiled shared objects in the form of a Python package via CMake. Parallel to the tutorial linked above, my structure is:
$installdir/
    my_package/
        __init__.py
        module/
            __init__.py
            _module.so
I have added $installdir to my $PYTHONPATH.
$installdir/my_package/__init__.py is empty.
$installdir/my_package/module/__init__.py contains:
from _module import *
When I then import my_package.module I get ModuleNotFoundError: No module named '_module' raised from $installdir/my_package/module/__init__.py.
The issue seems to be that _module.so is not found from $installdir/my_package/module/__init__.py.
Why is the approach from the tutorial not working?
If I add $installdir/my_package/module to $PYTHONPATH directly everything works fine, but it feels like that should not be necessary, as $installdir/my_package/module/__init__.py should find _module.so locally.
I implemented the following portable workaround for now within $installdir/my_package/module/__init__.py:
import sys, pathlib

# Make this package's own directory searchable so the bare "from _module import *" resolves
sys.path.insert(0, str(pathlib.Path(__file__).parent.absolute()))
from _module import *
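As an aside, on Python 3 the bare from _module import * is an absolute import (implicit relative imports were removed), which is most likely why _module is not found. An explicit relative import should make the sys.path manipulation unnecessary, assuming _module.so sits next to the __init__.py:

# $installdir/my_package/module/__init__.py
from ._module import *   # explicit relative import: look inside this package, not on sys.path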
Bonus Question:
Changing the file name extension from .so to .pyd breaks the import (ModuleNotFoundError), even without any packaging and with the .pyd accessible directly via $PYTHONPATH. I define the extension via CMake's SUFFIX target property. This is obviously mostly cosmetic, but I would still like to understand the reason and how to fix it.
Edit:
This is Ubuntu 20.04 with Python 3.8 and Boost 1.71.

How does Python handle subpackages?

Say Ansible was installed by means of "pip install ansible". Right after the install the following import statement succeeds:
from ansible.module_utils.basic import AnsibleModule
Now, a local package named "ansible.module_utils.custom" is created. The directory structure:
ansible/
    __init__.py
    module_utils/
        __init__.py
        custom/
            __init__.py
            utils.py
As soon as this is put in place, the aforementioned import statement fails, claiming "basic" is undefined. The local package indeed does not declare a "basic" subpackage; only the installed Ansible library does. It seems Python limited its search to the local package only.
I was under the impression Python would consider the complete system path before giving up on finding code. That it would backtrack out of the local package and finally hit the installed Ansible library.
Is this an incorrect assumption? If so, is it possible at all to make the local package coexist with the installed package?
How Import works
import abc
The first thing Python will do is look up the name abc in sys.modules. This is a cache of all modules that have been previously imported.
If the name isn't found in the module cache, Python will proceed to search through the built-in modules, which are compiled into the interpreter. If the name still isn't found there, Python then searches the list of directories defined by sys.path. This list usually starts with the directory of the script being run (or the current directory in interactive mode), so that entry is searched first.
When Python finds the module, it binds it to a name in the local scope. This means that abc is now defined and can be used in the current file without throwing a NameError.
If the name is never found, you'll get a ModuleNotFoundError. You can find out more in the Python documentation on the import system.
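A short sketch of those lookup steps (using the standard-library fractions module as an arbitrary example):

import sys

print('fractions' in sys.modules)  # typically False on a fresh interpreter: not cached yet
import fractions                   # found via the normal search and cached
print('fractions' in sys.modules)  # True: a second "import fractions" is just a cache lookup
print(sys.path[0])                 # first search entry, usually the script's directory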

How to build sphinx docs for micropython

How do I configure sphinx to document modules intended for a MicroPython interpreter?
The fundamental problem I'm facing is that Sphinx gets the information it documents by importing the module, so the module being documented must be importable by the Python interpreter running sphinx-build.
First Problem
I'm using a pyboard, so naturally
import pyb
cannot find module pyb...
So I added the following to conf.py:
import sys
from unittest.mock import MagicMock

sys.modules['pyb'] = MagicMock()  # and many more
Second Problem
One of my MicroPython libraries is called cmd
Exception occurred:
File "/usr/lib/python3.5/pdb.py", line 135, in <module>
class Pdb(bdb.Bdb, cmd.Cmd):
AttributeError: module 'cmd' has no attribute 'Cmd'
So that makes sense... I changed the name of the module to ucmd, and that appears to be working... but it's suuuuuper dodgy.
Question
Is there a proper way to do this?
To sphinx document a module not designed for the platform running the sphinx-build command?
Phrased more practically: if I wanted to document a MicroPython module called collections, subprocess, or io (all of which are used by the sphinx library), is it possible to use sphinx to do so?
Or would I simply have to be content with naming them ucollections, usubprocess, and uio respectively?
Below is not a Sphinx solution, but it does provide partial autocompletion in most modern editors.
To generate stubs for a (custom) MicroPython module you could use the MicroPython-Stubber.
For configuring a custom module, see section 4.4 of its documentation.
Alternatively, in that same repo's various tests I import the MicroPython-CPython stubs (sourced from micropython-lib and pycopy-lib) by inserting their location into CPython's sys.path.
This works very well for my testing purposes, allowing me to run and debug (hardware-agnostic) MicroPython code on CPython with little or no alteration.
Perhaps it suits your documentation needs as well.
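As a rough sketch of that sys.path approach inside conf.py (the stubs directory name is hypothetical and depends on where the stubs were generated or checked out):

import sys
from pathlib import Path

# Prepend a directory of CPython-compatible MicroPython stubs so that
# autodoc imports the stub versions of pyb, machine, etc. instead of failing.
stub_dir = Path(__file__).parent / 'micropython-stubs'  # hypothetical location
sys.path.insert(0, str(stub_dir))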

python modules in gcloud deployment manager template

Is it possible to use modules installed via python pip in gcloud deployment manager templates (python templates, not jinja)?
I have only been able to find references to importing .py files through a deployment manager schema file, e.g.:
app.py.schema
info:
  title: app
  author: me
  description: this is a description
imports:
- path: helper.py
I.e., I can only import a single .py file at a time, which is not useful for importing pip modules.
This link explains that to use libraries that are not explicitly supported, we need to import the full library source. However, it does not mention whether this full library source can actually be a pip module, or whether it only refers to single .py files.
The module I'm trying to use inside my Python templates is netaddr, for manipulating IP addresses and subnets.
Any help is appreciated.
What you are looking for is not possible: you cannot install a module with pip when interacting with the API. The alternative is to import the whole netaddr module as source code in your *.yaml config file (by adding the path for every file belonging to the module) and then importing the functions you need in your *.py template. As Google mentions in the documentation, only some libraries are supported, and even then some sys and network calls will be rejected. You may also think about using template_module.
Original Answer:
Yes, you can check the link here for importing multiple Python files and using multiple templates.

Project structure leads to redundant dot notation

I have created a Python package which builds on the structure indicated in Kenneth Reitz' "Repository Structure and Python" (1). The main package path is:
/projects-folder  (not site-packages)
    /package
        /package
            __init__.py
            Datasets.py
            Draw.py
            Gmaps.py
            ShapeSVG.py
            project.py
        __init__.py
        setup.py
With the current structure, I must use the following module import syntax:
import package.package.Datasets
I would prefer to type the following:
import package.Datasets
I am capable of typing the same word twice, of course, but it feels wrong in a deeper sense, i.e., I am structuring my package incorrectly or misunderstanding how Python interprets that structure.
The outer __init__.py is required for Python to detect this package at all, per the docs (2). But that sets up /package/ as the top level of the package and /package/package/ as a sub-package, forcing me into the unwieldy import syntax above.
To avoid this, it seems that my options are to:
Create a package in which the outer folder contains the top level of package modules.
Add the inner folder to my PYTHONPATH environment variable.
Yet both of these seem like suboptimal workarounds for something that shouldn't be an issue in the first place. What should I do?
You've misunderstood. You have two nested folders named package for some reason, but the source you cite never said to do that. The outer folder, the one with setup.py, is not supposed to be a package.
It sounds like you're running Python in projects-folder and trying to import your package from there. That's not what you should be doing. You have several options to get your package into the import system. (I'll refer to the folder with setup.py in it as setupfolder, to distinguish it from the inner folder):
Build your package with setup.py, for example python setup.py bdist_wheel --universal, and install the built package with pip.
Skip the build step and just run pip install path/to/setupfolder. Building the package produces an installer useful if you want to distribute your package, but maybe you don't want to do that.
"Install" the package's source tree in development mode with pip install -e path/to/setupfolder, so the Python import system will locate the package's source tree when performing imports. This is handy because you don't have to rebuild and reinstall if you edit the source repository, although you'll still want to restart any running Python processes that are using the package.
Run Python from directly inside the setupfolder.
Any of these options will cause your package to be importable directly as package instead of package.package, as it should be.
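For reference, a minimal setup.py for that layout could look something like this (a sketch using plain setuptools; the name, version, and any other metadata are placeholders):

from setuptools import setup, find_packages

setup(
    name='package',            # hypothetical distribution name
    version='0.1.0',
    packages=find_packages(),  # picks up the single inner "package" folder
)

After pip install -e path/to/setupfolder, import package.Datasets works as expected.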
While I do not entirely agree with your package structure, you can make use of __all__ and possibly the one legitimate use for star imports I've seen so far. __init__.py can serve more purposes than just identifying your folder as a package or sub-package.
Using a Star Import
In package/package/__init__.py, add a variable __all__ that declares all the public elements you want to export:
__all__ = ['Datasets', 'Draw', 'Gmaps', 'ShapeSVG', 'project']
In package/__init__.py do from package.package import *. Now all the attributes that were available as package.package.x will also be available as package.x.
If you want to directly copy package.package.__all__ to package.__all__ (which is optional, but will allow you to do from package import * properly), you can do something like
from package.package import *
from package.package import __all__ as _all
__all__ = _all
del _all
Not Using a Star Import
You can accomplish the same thing without using package.package.__all__ at all. Just add __all__ directly to package/__init__.py and use from package.package import x-style imports:
from package.package import (
    Datasets, Draw, Gmaps, ShapeSVG, project
)
# As before, package.__all__ is optional
__all__ = ['Datasets', 'Gmaps', 'ShapeSVG', 'project']
I would still recommend having a package.package.__all__ variable, but it is optional for this particular purpose.
Pros and Cons
Both approaches are pretty legitimate and I have seen both used in major projects. The first approach reduces redundancy. You only define the public exports in one place: package.package.__all__. The star imports and package.__all__ reference that definition directly, leading to one place that you really have to maintain. On the other hand, there are times when you want to separate the "full" package.package.x API from what you expose via package.x to the casual user. In that case, go with the second option. The only downside here is that you have to be more careful to keep package.__all__ and the corresponding imports synchronized properly.
Note
A number of projects I've seen (numpy especially comes to mind) export the attributes of the individual modules to the top level using this technique. For example, if you had a function package.package.Datasets.get_data, it would be listed in package.package.Datasets.__all__, which would be imported into package.package.__init__, appended to package.package.__all__, and then be referenced by the top-level package and package.__all__.
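A sketch of that pattern, using the hypothetical get_data function from above:

# package/package/Datasets.py
__all__ = ['get_data']

def get_data():
    ...

# package/package/__init__.py (in addition to the imports shown earlier)
from .Datasets import *
from .Datasets import __all__ as _datasets_all

__all__ = ['Datasets', 'Draw', 'Gmaps', 'ShapeSVG', 'project'] + _datasets_all
# get_data is now reachable as package.package.get_data and, via the star
# import in the outer __init__.py, as package.get_data too.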
