Project structure
I have the following folder structure:
|
|- src
| |- mypackage
| | |- __init__.py
| | |- mymodule.py
| |- utils.egg
|- main.py
In mymodule.py I can import the egg by adding it to sys.path:
import sys
sys.path.append('src/utils.egg')
import utils
When running main.py everything works fine (python -m main).
Problem
The problem comes from Pylint. First, it shows the following message in the mymodule.py file:
Unable to import 'utils' pylint(import-error)
If I ask for suggestions (CTRL + Space) when importing, I get:
utils.build
.dist
.utils
.setup
# |- suggestions
And from utils.utils I can access the actual classes / functions in the utils module. Of course, if I import utils.utils, an import error pops up when executing the main script.
How can I configure my VS Code settings in order to fix Pylint?
Should I install the egg instead of copying it to the working folder?
Is my project's folder structure OK, or does it go against recommended practices?
Extra info
In case you wonder, the EGG-INFO/SOURCES.txt file looks like:
setup.py
utils/__init__.py
utils/functions.py
utils.egg-info/PKG-INFO
utils.egg-info/SOURCES.txt
utils.egg-info/dependency_links.txt
utils.egg-info/top_level.txt
utils/internals/__init__.py
utils/internals/somemodule.py
utils/internals/someothermodule.py
Also, there are neither build nor dist folders in the egg.
This is an issue with Pylint itself and not the Python extension, so it comes down to how you configure Pylint.
As for whether you should copy an egg around or install it: you should install it into your virtual environment, or at least copy over the appropriate .pth file so the egg directory is picked up correctly.
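The .pth variant mentioned above can be sketched as follows. This is a minimal sketch, not a definitive recipe: the helper name register_egg, the .pth filename utils-egg.pth, and the assumption that the egg sits at src/utils.egg (and that you have write access to a site directory) are all mine.

```python
import os
import site
import sys

def register_egg(egg_path, site_dir):
    """Write a .pth file naming egg_path into site_dir, then rescan
    site_dir so the egg lands on sys.path without per-module hacks."""
    egg_path = os.path.abspath(egg_path)
    with open(os.path.join(site_dir, "utils-egg.pth"), "w") as fh:
        fh.write(egg_path + "\n")
    # addsitedir() processes every .pth file found in site_dir and
    # appends each existing path listed there to sys.path.
    site.addsitedir(site_dir)
    return egg_path in sys.path
```

In a virtual environment, site_dir would typically be site.getsitepackages()[0]; actually pip-installing the egg achieves the same effect more robustly.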
Related
I created a Python package for brain preprocessing, named bpt. I would also like to use the same package as a sub-module in another project named brain_seg.
As a result, I am having problems with import statements, which produce errors when the package runs either stand-alone or nested.
The structure of the bpt package is as follows:
bpt
|--__init__.py
|--module1.py
|--module2.py
|--run.py
Import statements in the file run.py look something like:
import os
import module1
import module2
.
.
.
The structure of the brain_seg package is as follows:
brain_seg
|--bpt
| |--__init__.py
| |--module1.py
| |--module2.py
| |--run.py
|
|--package2
| |--__init__.py
| |--module1a.py
|
|--__init__.py
When running the run.py script as part of the stand-alone bpt package, everything runs as expected.
If I try to run the same script as part of the brain_seg package, the following error is raised:
ModuleNotFoundError: No module named 'module1'
I tried to use relative imports so the nested project will work as expected, resulting in the following run.py file:
import os
from . import module1
from . import module2
.
.
.
Following the change, the brain_seg project worked as expected, but when I tried to call the run.py script from the stand-alone bpt package, the following error was raised:
ImportError: attempted relative import with no known parent package
How should I handle the imports such that both options will be supported?
Imports in bpt.run must be absolute or relative:
from bpt.module1 import ... # absolute
from .module1 import ... # relative
To run bpt.run from your repository root with sys.path set up correctly, use -m:
python bpt/run.py # bad
python -m bpt.run # good
If brain_seg is not truly meant to be a package itself, but just a project consisting of multiple packages and modules, get rid of brain_seg/__init__.py.
If you do mean to use it as a package, then move it below the repo root, i.e.
- README.md (etc.)
- .gitignore (etc.)
- brain_seg
  - __init__.py
  - bpt
    - __init__.py
    - run.py
and use e.g. python -m brain_seg.bpt.run from the repository root.
If you also intend to use bpt as a standalone thing, I wouldn't recommend keeping it in the brain_seg hierarchy at all, but making it pip-installable (see https://packaging.python.org/tutorials/packaging-projects/) and then running pip install -e ../somewhere/bpt to have pip set up an editable link between your projects.
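The difference between the two invocations can be seen with a short, self-contained demo. It builds a throwaway bpt package in a temporary directory (the file names follow the question's layout; the module contents are placeholders) and runs run.py both ways:

```python
import os
import subprocess
import sys
import tempfile

# Build a throwaway bpt package whose run.py uses a relative import.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "bpt")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "module1.py"), "w") as f:
    f.write("OK = True\n")
with open(os.path.join(pkg, "run.py"), "w") as f:
    f.write("from . import module1\nprint('ok', module1.OK)\n")

# Direct execution: run.py has no parent package, so the relative
# import fails with "attempted relative import with no known parent package".
direct = subprocess.run([sys.executable, os.path.join(pkg, "run.py")],
                        capture_output=True, text=True)
# -m execution from the repo root: bpt is imported as a package first,
# so __package__ is set and the relative import resolves.
as_module = subprocess.run([sys.executable, "-m", "bpt.run"],
                           cwd=root, capture_output=True, text=True)
print("direct failed:", direct.returncode != 0)
print("as module:", as_module.stdout.strip())
```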
Cloud Functions does not support private dependencies.
Following their recommendation (https://cloud.google.com/functions/docs/writing/specifying-dependencies-python#using_private_dependencies) I downloaded the package via:
pip install -t vendor foo
touch vendor/__init__.py
Which results in a directory:
vendor
|- foo-0.0.1.dist-info
|  |- INSTALLER
|  |- METADATA
|  |- RECORD
|  |- REQUESTED
|  |- top_level.txt
|  |- WHEEL
|- __init__.py
Now trying to import vendor.foo results in an error:
ModuleNotFoundError: No module named 'vendor.foo'
Is there an import subtlety I am missing or how is this supposed to work?
There are multiple ways to do this.
1. Relative imports, see here; the guide you linked also uses relative imports. If you write import vendor.foo, it will only work if you run the Python script from the folder that also contains the vendor folder.
2. Adding the package location to the PYTHONPATH environment variable: export PYTHONPATH="$PYTHONPATH:/path/to/vendor". This can also be done inside a Python script using sys.path. I'm not sure how this is done in the case of GCloud.
3. Installing the package into a virtual environment. As shown here, you can set up a virtual environment; once you have activated it, you should be able to pip install packages into it.
The first approach should definitely work for you.
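The sys.path variant mentioned above could look like this; a minimal sketch, where the helper name add_vendor_dir is mine and the vendor layout is taken from the question:

```python
import os
import sys

def add_vendor_dir(project_root):
    """Prepend project_root/vendor to sys.path so vendored packages
    (e.g. vendor/foo) are found and shadow any globally installed
    copy of the same distribution."""
    vendor = os.path.join(os.path.abspath(project_root), "vendor")
    if vendor not in sys.path:
        sys.path.insert(0, vendor)
    return vendor
```

Note that with vendor/ itself on sys.path you then write import foo, not import vendor.foo.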
The problem was in foo itself: I forgot a top-level __init__.py, so package auto-detection failed and no sources were added to the wheel.
Thus, when downloading it later, no package directory was created.
I am trying to make my own package so that I can use the files in a different folder. This package contains a few different modules and then the main module that imports all the others inside it. For example:
Folder
|- main.py
|- other.py
|- something.py
|- __init__.py
Inside the main.py I have the imports:
import other
import something
and it works just fine when running the file itself; however, I added the __init__.py file and tried to import the package from a different folder. The package is recognized, but main.py gives me the following error:
Exception has occurred: ModuleNotFoundError No module named
'univariate'
File "C:...\stats.py", line 8, in
import univariate
File "F:...\testing.py", line 7, in
from stats import stats
For clarification, the actual main file is called stats.py. This is my first experience trying to make a package so I might be missing something. Thank you.
You need to change your imports into relative imports:
from . import other
from . import something
or change them to absolute imports rooted at your project folder:
import x.y.other
import x.y.something
You can read about imports here.
When you have a module that you're trying to import, you don't need the ".py" part.
Having a folder with an __init__.py file (even a blank one) means that a project containing that folder can import from it:
/myproject
| - /mymodule
| - |- stats.py
| - |- other.py
| - |- something.py
| - |- __init__.py
| - main.py
Then in main.py all you need to do is import mymodule or from mymodule import stats.
I always hate to RTFM someone, but here's a link to how to build packages from the official documentation. Where this really starts to shine is when you need to package your module so that someone else can run it; Digital Ocean has a pretty good tutorial here.
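A tiny self-contained check of that layout, built in a temporary directory (the package and module names follow the tree above; the mean function inside stats.py is a placeholder of mine):

```python
import os
import sys
import tempfile

# Recreate the /myproject layout from the answer in a temp dir.
proj = tempfile.mkdtemp()
pkg = os.path.join(proj, "mymodule")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()  # blank is enough
with open(os.path.join(pkg, "stats.py"), "w") as f:
    f.write("def mean(xs):\n    return sum(xs) / len(xs)\n")

sys.path.insert(0, proj)       # what running from inside myproject gives you
from mymodule import stats     # note: no ".py" in the import
print(stats.mean([1, 2, 3]))   # → 2.0
```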
I'm looking for a solution for designing my program.
My program consists of 3 blocks:
Classes
Functions
Other utilities
I want to structure my program this way:
program_folder/
    main.py
    classes_folder/
        class_1.py
        class_2.py
    functions_folder/
        set_of_func_1.py
        set_of_func_2.py
    utilities_folder/
        set_of_utilities_1.py
        set_of_utilities_2.py
I want:
- any script in «classes_folder» to be able to import any of the scripts in «functions_folder»;
- any script in «functions_folder» to be able to import any of the scripts in «utilities_folder»;
- all scripts to be usable normally from main.py;
- all scripts in «classes_folder», «functions_folder» and «utilities_folder» to be testable when run as «main» (if __name__ == "__main__": some tests);
- «program_folder» to work from any place on my computer (there shouldn't be a dependency on the exact path to «program_folder»).
From all the above I thought I have to:
- change the import search path for all scripts in «classes_folder», «functions_folder» and «utilities_folder»;
- set the current working directory to «program_folder» for all scripts?
Is there a way I can do this?
Does my idea look good, or does it hide some unexpected problems?
You can create a skeleton project like the following:
/path/to/project/
setup.py
my_project/
__init__.py
a/
__init__.py
b/
__init__.py
==> ./my_project/__init__.py <==
print('my_project/__init__.py')
==> ./my_project/a/__init__.py <==
import my_project
print('my_project/a/__init__.py')
==> ./my_project/b/__init__.py <==
import my_project.a
print('my_project/b/__init__.py')
==> ./setup.py <==
from distutils.core import setup
setup(name='my_project',
      version='1.0',
      description='my_project',
      author='author',
      packages=['my_project'])
Then you can install the project locally using pip install -e /path/to/project/ (the project folder is not copied, just gets registered; there's a dependency on the exact path, but this dependency is not hard-coded in project files themselves).
As a result, import my_project, import my_project.a etc. do what they mean:
$ python my_project/b/__init__.py
my_project/__init__.py
my_project/a/__init__.py
my_project/b/__init__.py
A common Python project structure could look like this:
project_name/
setup.py
requirements.txt
project_name/
__main__.py
classes/
__init__.py
class1.py
class2.py
functions/
__init__.py
functions.py
utils/
__init__.py
utils.py
Then, you could modify your imports from absolute to relative and run your package using something like:
$ /path/to/project_name> python -m project_name
Note that setup.py is only required if you want to install your package under one of your interpreters.
I am working on a package with the following structure.
Package
|- __init__.py
|- dir
|  |- subdir
|  |  |- moduleB.py
|  |  |- __init__.py
|  |- __init__.py
|- moduleA.py
main.py
main.py tries to import moduleA, which in turn imports moduleB. However, it runs into an error when it tries to import moduleA, citing an error at a line of code that has since been changed.
I figured this was a caching issue, so I deleted all of the __pycache__ files in the package, but it still fails.
What can I do to fix this, and what can I do to ensure that this does not remain a problem?
The actual code is
import tensorflow as tf
from UROP.data_structure.default_dictionary import DefaultDictionary
def default_distribution(shape, variation, name=''):
return tf.truncated_normal(
shape=shape,
stddev=variation,
name=name
)
@tdelaney was correct: stepping through with a debugger revealed that the kernel I was using redirected me to its own private cache. I was using Hydrogen in Atom, and restarting the computer cleared the cache and solved the problem.
However, I was unable to find a long-term solution to the cached dependencies that would not require restarting my computer.
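For what it's worth, when a long-lived interpreter (such as a notebook kernel) holds on to a stale module object, importlib.reload re-executes the module's source without restarting anything. A sketch, using a standard-library module as a stand-in for the stale one:

```python
import importlib
import json  # stand-in for the module whose source changed on disk

# reload() re-executes the module's current source file and updates
# the existing module object in place, so importers see the new code.
json = importlib.reload(json)
print(json.dumps({"ok": True}))  # → {"ok": true}
```

Whether this helps in Hydrogen depends on how its kernel caches modules; it is a standard-CPython mechanism, not a Hydrogen-specific fix.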