Filled __init__ seems to block access to modules in same package - python

In a project one module moduleA requires access to module moduleB within the same sub-package packageA (which is in package project).
This access fails when the __init__.py of sub-package packageA is filled with an import .. as .. statement, while the __init__.py of package project is empty.
Why does a filled __init__.py (seemingly) block access from a module in the same package, while PyCharm still accepts it from an autocomplete and highlighting perspective?
The thrown AttributeError suggests that the import .. as .. statement makes the interpreter believe that the sub-package is an attribute, not a package, despite an existing __init__.py.
File structure
├── ProjectA
│   ├── src
│   │   ├── project
│   │   │   ├── __init__.py
│   │   │   ├── packageA
│   │   │   │   ├── __init__.py
│   │   │   │   ├── moduleA.py
│   │   │   │   ├── moduleB.py
Code sample 1
# ProjectA / src / project / __init__.py
(empty)
# packageA / __init__.py
(empty)
# packageA / moduleA.py
import project.packageA.moduleB as dummy
class A:
    pass

class B:
    pass

# packageA / moduleB.py
def method():
    pass
Code execution 1
# jupyter started in 'C:\\Users\\username\\Desktop\\devenv\\'
# notebook located in 'C:\\Users\\username\\Desktop\\devenv\\dev\\'
import sys
sys.path
# output:
# ['C:\\src\\ProjectA',
# 'C:\\src\\ProjectA\\src',
# 'C:\\Users\\username\\Desktop\\devenv\\dev',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv\\python36.zip',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv\\DLLs',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv\\lib',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv',
# '',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv\\lib\\site-packages',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv\\lib\\site-packages\\win32',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv\\lib\\site-packages\\win32\\lib',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv\\lib\\site-packages\\Pythonwin',
# 'C:\\ProgramData\\Anaconda3\\envs\\myenv\\lib\\site-packages\\IPython\\extensions',
# 'C:\\Users\\username\\.ipython']
from project.packageA.moduleA import A, B
# no error(s)
Code sample 2
First alternative filling of packageA / __init__.py
# packageA / __init__.py
from .moduleA import A, B
import .moduleB as dummy
Second alternative filling of packageA / __init__.py
# packageA / __init__.py
from project.packageA.moduleA import A, B
import project.packageA.moduleB as dummy
Code execution 2
from project.packageA.moduleA import A, B
AttributeError Traceback (most recent call last)
<ipython-input-1-61a791f79421> in <module>
----> 1 import project.packageA.moduleA.moduleB
C:\src\ProjectA\src\project\packageA\__init__.py in <module>
----> 1 from .moduleA import A, B
2 from .moduleB import *
C:\src\ProjectA\src\project\packageA\moduleA.py in <module>
---> 1 import project.packageA.moduleB as dummy
2
3 class A:
AttributeError: module 'project' has no attribute 'packageA'
Solution
I've found the solution on Stack Overflow: Imports in __init__.py and import as statement
Changing the import in packageA / __init__.py from import .. as .. to from .. import .. as .. did the trick:
# packageA / __init__.py
from project.packageA.moduleA import A, B
from project.packageA import moduleB as dummy
Can anyone help me to understand, why import xx as and from xx import xx as work differently, when it comes to sub-packages - specifically in this situation where the package's __init__.py is empty, but the sub-package's __init__.py is filled?

This behavior is not explained by the documented import machinery: it doesn't match the documentation, e.g. PEP 221, so the import .. as .. statement was effectively nonfunctional here.
This bug seems to have been fixed with Python 3.7 (I've been running 3.6.9).
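To see the difference concretely: in Python 3.6 and earlier, import pkg.sub as alias binds the alias through attribute lookups (pkg, then pkg.sub), and during a circular import the sub-package has not yet been set as an attribute of its parent, hence the AttributeError; the from pkg import sub as alias form instead falls back to importing the submodule directly, and Python 3.7 added a sys.modules fallback for the import ... as ... form too. Below is a self-contained sketch of the working from variant; the temp-directory layout mirrors the structure above, and all file contents are illustrative.

```python
import os
import sys
import tempfile
import textwrap

# Build the layout from the question in a temp directory, but use the
# 'from ... import ... as ...' form that the accepted fix recommends.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "project", "packageA")
os.makedirs(pkg)

open(os.path.join(root, "project", "__init__.py"), "w").close()
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from project.packageA.moduleA import A, B\n"
            "from project.packageA import moduleB as dummy\n")
with open(os.path.join(pkg, "moduleA.py"), "w") as f:
    f.write(textwrap.dedent("""\
        # works even on 3.6: the 'from' form falls back to importing the
        # submodule instead of only doing an attribute lookup on the package
        from project.packageA import moduleB as dummy

        class A: pass
        class B: pass
    """))
with open(os.path.join(pkg, "moduleB.py"), "w") as f:
    f.write("def method():\n    pass\n")

sys.path.insert(0, root)
from project.packageA.moduleA import A, B  # no AttributeError
print(A.__name__, B.__name__)
```

Replacing the from line in moduleA.py with import project.packageA.moduleB as dummy reproduces the AttributeError on 3.6 and succeeds on 3.7+.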

Related

Where to place shared test annotation/code when using pytest?

I'm using pytest to write some unit tests and I have some tests that can only be run when the tests are running in cloud under some special runtime (Databricks cluster).
I want to automatically skip these tests when I run the tests locally. I know how to find if I'm running locally or not programmatically.
This is my project structure.
.
├── poetry.lock
├── poetry.toml
├── pyproject.toml
├── README.md
└── src
    ├── pkg1
    │   ├── __init__.py
    │   ├── conftest.py
    │   ├── module1.py
    │   ├── module2.py
    │   ├── test_module1.py
    │   ├── test_module2.py
    │   └── utils
    │       ├── aws.py
    │       └── common.py
    └── pkg2
        ├── __init__.py
        ├── ...
test_module1.py:
from pkg1 import module1
from common import skip_if_running_locally
def test_everywhere(module1_instance):
    pass  # do test..

@skip_if_running_locally
def test_only_in_cloud(module1_instance):
    pass  # do test..
common.py:
import pytest
from pyspark.sql import SparkSession

my_spark = SparkSession.getActiveSession()
running_locally = my_spark is None or \
    my_spark.conf.get('spark.app.name') != 'Databricks Shell'
skip_if_running_locally = pytest.mark.skipif(running_locally, reason='running locally')
And I do the same in test_module2.py to mark tests that should be skipped locally.
I don't really like to put this in common.py because it contains the common application code (not test code).
I thought about putting it in a base class, but then it has to be a Class attribute (not self. instance attr).
If I put it in a test_common.py then it'll be picked up by pytest as a file containing test cases.
If I put it in conftest.py how do I import it? from conftest import skip_... ?
What is the right way of doing this? Where do I store common code/annotations dedicated to testing and how do I use it?
Generally, conftest.py is the place to put common test logic. There is nothing wrong with using util/common modules, but the conftest.py has two advantages:
It is executed automatically by pytest.
It is the standard place, so developers would often check it.
With that said, I believe that you can use the approach mentioned here to have custom markers enabled/disabled according to the environment.
Your tests would look like so (note that there is no import, just using the locally vs cloud markers):
import pytest

@pytest.mark.locally
def test_only_runs_locally():
    pass

@pytest.mark.cloud
def test_only_runs_on_the_cloud():
    pass

def test_runs_everywhere():
    pass
Then inside the conftest.py you enable/disable the proper tests:
from pyspark.sql import SparkSession
import pytest

ALL = set("locally cloud".split())

my_spark = SparkSession.getActiveSession()
running_on = "locally" if (
    my_spark is None
    or my_spark.conf.get('spark.app.name') != 'Databricks Shell'
) else "cloud"

# runs before every test
def pytest_runtest_setup(item):
    # look for all the relevant markers of the test
    supported_platforms = ALL.intersection(mark.name for mark in item.iter_markers())
    if supported_platforms and running_on not in supported_platforms:
        pytest.skip(
            f"We're running on {running_on}, cannot run {supported_platforms} tests")
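One optional refinement, not in the original answer: register the custom marker names, otherwise recent pytest versions warn about unknown marks (and --strict-markers fails). A sketch, added to the same conftest.py:

```python
# conftest.py (addition): declare the custom markers used above so pytest
# recognizes them instead of warning about unknown marks.
def pytest_configure(config):
    config.addinivalue_line("markers", "locally: test only runs locally")
    config.addinivalue_line("markers", "cloud: test only runs in the cloud")
```

The same declarations can instead live under a markers key in pytest.ini or pyproject.toml.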

pdoc3 or Sphinx for directory with nested module

My code directory looks like below. I need to generate documentation for all the modules, e.g. sub1, sub2, submoduleA1, submoduleB1, and so on.
Also, as shown for submoduleB2.py, all the modules import from other modules/submodules.
<workspace>
└── toolbox (main folder)
    ├── __init__.py
    │
    ├── sub
    │   ├── __init__.py
    │   ├── sub1.py
    │   └── sub2.py
    │
    ├── subpackageA
    │   ├── __init__.py
    │   ├── submoduleA1.py
    │   └── submoduleA2.py
    │
    └── subpackageB
        ├── __init__.py
        ├── submoduleB1.py
        └── submoduleB2.py  # imports: from sub import sub1, from subpackageA import submoduleA2, and so on
code structure for submoduleB2.py
from __future__ import absolute_import, division
import copy
import logging
import numpy as np
import pandas as pd
from dc.dc import DataCleaning
from sub.sub1 import ToolboxLogger
from subpackageA import pan
LOGGER = ToolboxLogger(
    "MATH_FUNCTIONS", enableconsolelog=True, enablefilelog=False, loglevel=logging.DEBUG
).logger

"""
Calculations also take into account units of the tags that are passed in
"""

def spread(tag_list):
    """
    Returns the spread of a set of actual tag values

    :param tag_list: List of tag objects
    :type tag_list: list
    :return: Pandas Series of spreads
    :rtype: Pandas Series
    :example:

    >>> tag_list = [tp.RH1_ogt_1,
                    tp.RH1_ogt_2,
                    tp.RH1_ogt_3,
                    tp.RH1_ogt_4,
                    tp.RH1_ogt_5,
                    tp.RH1_ogt_6]
    >>> spread = pan.spread(tag_list)
    """
    # use the same units for everything
    units_to_use = tag_list[0].units
    idxs = tag_list[0].actuals.index
    spread_df = pd.DataFrame(index=idxs)
    spread_series = spread_df.max(axis=1).copy()
    return Q_(spread_series, units_to_use)
I tried to run the pdoc command using the Anaconda prompt by navigating to the toolbox folder and executing the command below:
pdoc --html --external-links --all-submodules preprocess/toolbox/subpackageA
After executing this command, a "subpackageA" folder was created under toolbox with an index.html file, but it was all blank.
Then I tried to generate documentation by providing a specific module name:
pdoc --html --external-links --all-submodules preprocess/toolbox/submoduleB2.py
but received this below error:
File "C:\Users\preprocess/toolbox/submoduleB2.py", line 16, in
from sub import sub1
ImportError: No module named sub.sub1
Can you please tell me how to generate documentation using pdoc for the complete directory?
Or is there any other package which will auto-generate the documentation?
I even tried Sphinx, but faced issues adding the module/submodule paths in the config file.
It appears that pdoc3 throws that kind of error for a module when it cannot resolve one of that module's imports on the Python path. One solution is to put
import os, sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
into the __init__.py files in each of the subdirectories.
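For the Sphinx attempt mentioned in the question, the equivalent fix usually goes in conf.py rather than in the package itself: put the directory that contains toolbox on sys.path before autodoc runs. A minimal sketch, assuming conf.py sits in a docs/ folder directly under <workspace> (adjust the relative path for your layout):

```python
# docs/conf.py (layout is an assumption; adjust ".." so it points at the
# directory that contains the 'toolbox' package)
import os
import sys

sys.path.insert(0, os.path.abspath(".."))
# with the path set, autodoc can import toolbox.sub, toolbox.subpackageA, ...
```

After that, `sphinx-apidoc -o docs toolbox` can generate stub pages for the whole tree.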

Pytest and submodules

I am trying to run pytest tests on my Python modules but am running into an error. It looks like the main script ircFriend.py can't find the modules I import inside of it. This is the error I get, on every test.
______________________________________________ ERROR collecting test/configuration_test.py ____________________________________
ImportError while importing test module 'C:\Users\munded\Desktop\irc-friend\test\configuration_test.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
..\..\appdata\local\programs\python\python38\lib\importlib\__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
test\configuration_test.py:2: in <module>
from src import ircFriend
E ModuleNotFoundError: No module named 'configuration'
This is the file structure I am using for my tests. The __init__.py files are empty.
├───src
│ │ configuration.py
│ │ ircFriend.py
│ │ ircSync.py
│ │ logbook.py
│ │ networkdrive.py
│ │ server.py
│ │ tree.py
│ │ workspace.py
│ │ __init__.py
└───test
│ configuration_test.py
│ fileIO_test.py
│ sandbox_test.py
│ server_test.py
│ sync_test.py
│ __init__.py
If we look at the imports in ircFriend.py they look like this.
import sys
import getopt
import logging
from configuration import Configuration
from logbook import LogBook
from networkdrive import NetworkDrive
from ircSync import IRCSync
from workspace import Workspace
from server import Server
Finally, these are what my tests look like.
from src import ircFriend
from unittest import mock
from src import configuration
from src import server

@mock.patch('builtins.input', side_effect=['X'])
def testPropertiesFileExists(mockInput):
    conf = Configuration()
    assert conf.propertiesFileExists() is True

@mock.patch('builtins.input', side_effect=['X'])
def testIrcConfigExists(mockInput):
    conf = Configuration()
    assert conf.ircConfigExists() is True

@mock.patch('builtins.input', side_effect=['devsite.dev', 'user'])
@mock.patch('src.ircFriend.getpass.getpass', return_value="IDK")
def testServerCreation(mock_input, mock_getpass):
    dev = Server()
    if isinstance(dev, ircFriend.Server):
        assert True
    else:
        assert False
Any guidance on this subject would do me a world of good.
Best Regards,
E
You should not create __init__.py files in src and test, because src and test are not packages; they are just root directories for your source and test code.
In the test code, remove the from src prefix, because src is not a package.
Finally, run pytest with src added to PYTHONPATH; otherwise pytest can't find the modules under the src directory.
$ PYTHONPATH=src pytest test
Or, you can create test/conftest.py, a special file for pytest.
I checked these codes.
# test/conftest.py
import sys
sys.path.append("./src")

# src/a.py
from b import say

def func():
    return say()

# src/b.py
def say():
    return "Hello"

# test/test_a.py
import a

def test_a():
    assert a.func() == "Hello"
$ pytest test
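If you are on pytest 7.0 or newer, a third option (not in the original answer) is the pythonpath ini setting, which removes the need for both the environment variable and the sys.path manipulation in conftest.py:

```ini
# pytest.ini, placed next to src/ and test/
# (the 'pythonpath' option requires pytest >= 7.0)
[pytest]
pythonpath = src
testpaths = test
```

With this file in place, a bare `pytest` run finds both the tests and the modules under src.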

Module imports from another folder

I wanted to write a test program and for this I need to include my DPS310.py file. But somehow it doesn't work
I tried:
• from myPackage import DPS310
• import ..DPS310
• import myPackage.DPS310
My structure:
Projekt
├── myPackage
│   ├── __init__.py
│   └── DPS310.py
├── tests
│   ├── __init__.py
│   └── test_module1.py
├── README.md
├── LICENSE
└── setup.py
test_module1.py File
import myPackage.DPS310
msg = "Hello World"
print(msg)
dps = DPS310()
#y = DPS310.getTemperature
print(dps.getTemperature())
DPS310.py File (Extraction. Just to show that the getTemperature method is in here)
...
class DPS310():
    def __init__(self, *args):
        self.i2c = smbus2.SMBus(1)
        if len(args) > 0:
            self.addr = args[0]
            print("Man addr set")
        else:
            # default sensor's address
            self.addr = 0x77

    def getTemperature(self):
        r = self.i2c.read_i2c_block_data(self.addr, DPS310_TMP_B2, 3)
        # reads out the temperature that is stored in the register
        # r[0]=TMP0-7 r[1]=TMP8-15 r[2]=TMP16-23
        temp = (r[2] << 16) | (r[1] << 8) | (r[0])  # r[0] << 0
        # deploys this function to 24 bits
        return self.twos_comp(temp, 24)
...
If I run the test_module1.py file:
Exception has occurred: ModuleNotFoundError
No module named 'myPackage'
File "C:\Julian\Projects\PhytonLib_Raspberry\DPS310_Library_RPi\tests\test_module1.py", line 1, in <module>
import myPackage.DPS310
Most likely Python doesn't know where to find myPackage, so you could try setting it as a source root (look up the relevant info for your IDE). Also, check out this Stack Overflow post.

How to load module with same name as other module in Python?

Let me explain the problem. We have this project:
model/__init__.py
model/abstract.py
task/__init__.py
task/model.py
How do I load model.abstract in task/model.py? What is the syntax for it?
# task/model.py
import model # loads task/model.py, not the top-level model package
from model.abstract import test # fails: 'test' cannot be found
# model/abstract.py
test = 1
How to do such import?
More details, as requested.
Google App Engine application:
- main is main.py
Directory structure:
└───src
    │   app.yaml
    │   index.yaml
    │   main.html
    │   main.py
    │   task_master_api.py
    │
    ├───circle
    │       model.py
    │       __init__.py
    │
    ├───model
    │       abstract.py
    │       xxx.py
    │       __init__.py
    │
    ├───task
    │       model.py
    │       __init__.py
    │
    ├───user
    │       model.py
    │       __init__.py
Exception (note that it is task.model, not the model package in the root):
from .. import model
logging.critical((type(model), model.__name__))
from model.abstract import AbstractNamed, AbstractForgetable
-
CRITICAL 2014-02-17 21:23:36,828 model.py:8] (<type 'module'>, 'task.model')
from model.abstract import AbstractNamed, AbstractForgetable
ImportError: No module named abstract
More context relevant to the answer:
from .. import model
Gives exception.
ValueError: Attempted relative import beyond toplevel package
While the relative imports in ndpu's answer should work, the answer to this question that is burning in my mind is simply this: change the name of your files to avoid this error.
If you have model.py inside the circle directory, how about changing the name to circle_model.py?
Then, you should be able to import modules without any of the relative import .. business.
Edit - knowing now that you don't want to rename
Make sure you have an __init__.py file in your src directory, then try the relative import from .model.abstract import test
The relative import given in the other answer should work fine, but it is not working here because you have a name conflict: you have both a package and a module named model. Try to use another name for either your package or your module.
I found two tricks to force import name to resolve to the top-level module name instead of the local one:
First, force absolute imports only:
from __future__ import absolute_import
import name
Second, like the previous one, but with more code and a more local impact:
import sys

save_path = sys.path[:]
sys.path.remove('')
import name
sys.path = save_path
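A runnable version of the second trick, with the stdlib json standing in for the shadowed module name and a try/finally so sys.path is restored even when the import fails:

```python
import sys

# Drop '' (the current directory) from sys.path so a shadowing local module
# can no longer be found, then restore the path afterwards.
save_path = sys.path[:]            # copy, not an alias, so restore is exact
if '' in sys.path:
    sys.path.remove('')
try:
    import json                    # resolved from the stdlib, not locally
finally:
    sys.path = save_path           # restore even if the import fails
```

Note this only affects imports executed while the entry is removed; modules already cached in sys.modules are unaffected.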
