I currently have the following project structure in Python:
project
├── module_1
│   ├── __init__.py
│   └── class_1.py
├── module_2
│   ├── __init__.py
│   └── class_2.py
├── main_1.py
├── main_2.py
├── ...
├── main_n.py
└── set_up.py
Because of the vast number of main files we use to run our scripts, I would like to organise these into different modules within the project (a different module for each usage of the endpoints). I have tried something like the below:
project
├── module_1
│   ├── __init__.py
│   └── class_1.py
├── module_2
│   ├── __init__.py
│   └── class_2.py
├── endpoints_1
│   ├── __init__.py
│   ├── main_1.py
│   └── set_up.py
└── endpoints_2
    ├── __init__.py
    ├── main_2.py
    └── set_up.py
The problem that arises, however, is that I would have to make a separate set_up.py file for each of the endpoint modules. In this set_up file I would add the project path to sys.path so that all imports work as expected.
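Such a set_up.py boils down to something like this (a sketch, assuming it sits inside an endpoints folder one level below the project root):

import os
import sys

# Make the project root importable so that module_1 and module_2
# resolve no matter where the script is launched from.
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))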
So, to conclude, I have the following two questions:
Is there any way I can have this project structure without having to mess too much with sys.path and without a custom set_up.py for each module that contains endpoints?
Should I even use this kind of project structure to begin with, or should I just keep the original structure and not bundle the endpoints into modules?
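For context, one way to avoid per-package set_up.py files altogether is to launch the endpoints as modules from the project root, because python -m prepends the current working directory to sys.path (a sketch; the class name is an assumption):

# Launched from the project root as:
#   python -m endpoints_1.main_1
# The current directory is already on sys.path, so main_1.py can use
# plain absolute imports with no path manipulation:
from module_1.class_1 import Class1   # hypothetical class name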
Related
Based on a project with the following structure:
.
└── src/
    ├── main.py
    ├── PackageA/
    │   ├── __init__.py
    │   ├── logic.py
    │   ├── SubPackageA1/
    │   │   ├── __init__.py
    │   │   └── util.py
    │   └── SubPackageA2/
    │       ├── __init__.py
    │       └── otherUtil.py
    └── PackageB/
        ├── __init__.py
        └── helpers.py
Would it be possible to import the module otherUtil.py in the file helpers.py?
All the combinations I have tried so far fail.
If your program is executed from main.py, the import in helpers.py should work like this:
from PackageA.SubPackageA2 import otherUtil
Yes, I checked it. main.py:
from PackageB import helpers
print(helpers.HELPERS_UTIL)
otherUtil.py:
OTHER_UTIL = 'test'
helpers.py:
from PackageA.SubPackageA2 import otherUtil
HELPERS_UTIL = otherUtil.OTHER_UTIL
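For completeness, running it from the project root then behaves as expected (output follows from the constants above):

$ python src/main.py
test

This works because Python prepends the directory containing main.py (src/) to sys.path, so both PackageA and PackageB are importable as top-level packages.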
My project (written in Python 2.7) has a complex structure and most modules are interlinked. There is no direct entry point to execute; it works as a toolbox for other projects.
When I tried to use Sphinx to create the documentation, it gives an "unable to import module" error.
sample structure:
<workspace>
└── toolbox (main folder)
    ├── __init__.py
    │
    ├── sub
    │   ├── __init__.py
    │   ├── sub1.py
    │   └── sub2.py
    │
    ├── subpackageA
    │   ├── __init__.py
    │   ├── submoduleA1.py
    │   └── submoduleA2.py
    │
    └── subpackageB
        ├── __init__.py
        ├── submoduleB1.py
        └── submoduleB2.py

The code in these modules contains imports like from sub import sub1, from subpackageA import submoduleA2, and so on.
Is there a way to configure index.rst or conf.py to ignore the module import errors and produce an output document tree like the one below:
└── toolbox
    │
    ├── sub
    │   ├── index
    │   ├── sub1.m
    │   └── sub2.m
    │
    ├── subpackageA
    │   ├── index
    │   ├── submoduleA1.m
    │   └── submoduleA2.m
    │
    └── subpackageB
        ├── index
        ├── submoduleB1.m
        └── submoduleB2.m
I tried adding the system path in conf.py:

import os
import sys
sys.path.insert(0, os.path.abspath('../'))

I also tried ('../..') and ('..'), and even hardcoded the project path. I even tried to use sphinx.ext.autodoc, but I get the same import error.
commands used:
sphinx-apidoc -o doc project/toolbox
make html
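For context, a typical conf.py for this layout combines the sys.path insert with autodoc's mock-import facility, so that unavailable third-party dependencies no longer abort the build (a sketch; the relative path and the mocked package names are assumptions):

import os
import sys

# Put the directory that CONTAINS the 'toolbox' package on sys.path so
# that 'import toolbox.sub.sub1' etc. resolve during the autodoc run;
# adjust the relative path to wherever conf.py lives.
sys.path.insert(0, os.path.abspath('..'))

# Stub out third-party dependencies that are not installed at
# doc-build time instead of failing the import (Sphinx >= 1.3):
autodoc_mock_imports = ['numpy', 'pandas']   # hypothetical dependency names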
I have a project where there is a Python (.py) file inside a directory, say A. Under folder A, I have a file containing the code:
from utils.utils import csv_file_transform
where utils is a sibling directory of A:
/A
    file.py
/utils
    __init__.py
    utils.py
but I am getting an error: No module named 'utils.utils'; 'utils' is not a package.
I tried adding the path using sys, but it's still giving the same error:
import sys
sys.path.append('C:/Users/Sri/Documents/newmod/folder/utils')
The code works fine on Ubuntu but not on Windows; I am using a Windows system.
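A hedged aside on the likely cause: appending .../folder/utils makes utils.py importable as a top-level module named utils, so utils.utils then fails because a module, unlike a package, has no submodules. A sketch of the fix, assuming the path above:

import sys
# Append the folder that CONTAINS the 'utils' package, not the
# 'utils' directory itself, so 'utils' resolves to the package:
sys.path.append('C:/Users/Sri/Documents/newmod/folder')

from utils.utils import csv_file_transform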
As much as I don't support link-only / reference-only answers, I reckon that your question is best answered by referencing the existing discussions and providing broader context on Python's importing mechanism.
Your Requirements
Are you actually working on creating a Python module? If this is the case, it would be good to establish a rigorous structure for your module and follow those guidelines.
Consider the following article by Jason C. McDonald on structuring a hypothetical module, omission, in Python. Given the following structure:
omission-git
├── LICENSE.md
├── omission
│   ├── app.py
│   ├── common
│   │   ├── classproperty.py
│   │   ├── constants.py
│   │   ├── game_enums.py
│   │   └── __init__.py
│   ├── data
│   │   ├── data_loader.py
│   │   ├── game_round_settings.py
│   │   ├── __init__.py
│   │   ├── scoreboard.py
│   │   └── settings.py
│   ├── game
│   │   ├── content_loader.py
│   │   ├── game_item.py
│   │   ├── game_round.py
│   │   ├── __init__.py
│   │   └── timer.py
│   ├── __init__.py
│   ├── __main__.py
│   ├── resources
│   └── tests
│       ├── __init__.py
│       ├── test_game_item.py
│       ├── test_game_round_settings.py
│       ├── test_scoreboard.py
│       ├── test_settings.py
│       ├── test_test.py
│       └── test_timer.py
├── pylintrc
├── README.md
└── .gitignore
Jason would import the modules in the following manner: from omission.common.game_enums import GameMode.
Relative Imports
@am5 suggests adding
import os, sys; sys.path.append(os.path.dirname(os.path.realpath(__file__)))
to your __init__.py files. If you study the related discussion, you will observe that views on modifying sys.path are diverse. Some may argue that failed imports and errors like:
ImportError: attempted relative import with no known parent package
are actually an indication of code smell. As a solution, which I hope you won't find patronising, I would suggest that you solve your importing challenge by:
a) Deciding how "deep" you want to go into the modular architecture.
b) Adopting a corresponding recommended architecture with a rigorous approach to module structure.
c) Refraining from hard-coding any path via sys.path. Appending relative paths, as outlined above, is not a flawless solution, but it has some merits and may be worth considering (a short sketch follows below).
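To make b) and c) concrete with the omission layout above, here is a sketch (illustrative only): a module inside the package can use an explicit relative or absolute import instead of touching sys.path, provided the code is executed as part of the package:

# omission/data/game_round_settings.py -- illustrative sketch
from ..common.game_enums import GameMode   # explicit relative import
# equivalent absolute form:
# from omission.common.game_enums import GameMode

Since the package ships a __main__.py, the entry point is launched as a module (python -m omission), which is what lets both forms resolve without any sys.path manipulation.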
Other worthy discussions/artefacts
Best structure for the Python project
Sample project by PyPa (GitHub)
Importing files from different folders
I have this structure for my project:
├── Dockerfile
├── app
│   ├── __init__.py
│   ├── __pycache__
│   ├── config
│   ├── database
│   ├── logging.py
│   ├── main.py
│   ├── routers
│   ├── services
│   ├── static
│   ├── templates
│   ├── utils
│   └── worker
├── k6.js
├── poetry.lock
├── prestart.sh
├── pyproject.toml
├── pytest.ini
└── run.py
Inside app, I have this worker folder, which I also open as a kind of separate project.
├── __init__.py
├── database
│   ├── __init__.py
│   └── conn.py
├── engine
│   ├── __init__.py
│   ├── core
│   ├── data
│   ├── main.py
│   └── utils
├── main.py
├── poetry.lock
├── pyproject.toml
└── run.sh
The issue I have is that when I open the worker project, which uses code from the upper directory, Pylance gives me an "import could not be resolved" error. However, this code runs perfectly fine.
I created a .vscode/settings.json for the worker project and added these options:
"python.analysis.extraPaths": ["../../app"],
"python.autoComplete.extraPaths": ["../../app"]
But I am still getting these errors! How can I fix this?
These paths fixed my issue:
"python.analysis.extraPaths": ["${workspaceFolder}\\..\\.."],
"python.autoComplete.extraPaths": ["${workspaceFolder}\\..\\.."]
I'm struggling to figure out how to properly set up a Python Lambda in a subdirectory and have imports work correctly. For example, instead of putting your Python code in the root folder as recommended by AWS, you put it in a src folder, with a lambda_handler.py as the main handler and packages/folders inside that, so you might have src/api, as an example. I am using the new SAM Accelerate, and they acknowledge a bug where it doesn't ignore the .aws-sam folder, so it will loop infinitely with the project in root; they therefore recommend a subfolder, but that greatly complicates things with Python, apparently.
I think I figured out how to get it to read my own packages and modules in subfolders using __init__.py, but I can't get my requirements.txt to install, so the dependencies don't show up in the local build or the cloud build. I have found quite a few Stack Overflow posts that are seemingly on the subject, but none of them seem to work for me or give an example I can follow. The following is my structure:
/
├── .aws-sam
└── src
    ├── app_folder_1
    │   ├── __init__.py
    │   └── example1.py
    ├── lambda_handler.py
    ├── app_folder_2
    │   ├── __init__.py
    │   └── example2.py
    ├── requirements.txt
    └── template.yml
I have an import of pymysql as an example, and my dependencies from requirements.txt are never installed, so pymysql is never found. I feel like things shouldn't be this difficult. Can anyone assist?
UPDATE: I may have figured out the issue, which this post gave me a clue to: https://github.com/aws/serverless-application-model/issues/1927. It appears that sam local invoke has the same issue with custom templates (something I was utilizing), and that isn't very intuitive, even though they claim it is working as intended.
UPDATE 2: That definitely helped my progress, but it still isn't working as intended.
I forgot to return to this, so I figured I would post here to assist others who may be struggling, as Python is really finicky.
The following is the structure of a functional project in Lambda:
├── README.md
├── buildspec.yml
├── lambda_handler.py
├── locales
│   ├── en
│   │   └── LC_MESSAGES
│   │       └── base.po
│   ├── es
│   │   └── LC_MESSAGES
│   │       └── base.po
│   └── fr
│       └── LC_MESSAGES
│           └── base.po
├── my_project_name
│   ├── __init__.py
│   ├── python_file_1.py
│   ├── python_file_2.py
│   ├── python_file_3.py
│   └── repositories
│       ├── __init__.py
│       ├── resolvers_1.py
│       └── resolvers_2.py
├── requirements-dev.txt
├── requirements.txt
├── template.yml
└── tests
    ├── __init__.py
    ├── tests_file_1.py
    ├── tests_file_2.py
    ├── tests_file_3.py
    └── tests_file_4.py
Note that the base.mo files are generated during the build in the same location as the base.po files, and that requirements-dev.txt holds dev-only libraries, such as pep8 for formatting, and is only used within my local conda environments (I set up a virtual environment for each project using miniconda). The purpose of the resolvers in the nested repositories folder is to separate the controller-like Python files from the calls to the repositories that retrieve data.
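For context, a function definition in template.yml consistent with this layout might look like the following (a sketch; the resource name, handler function name, and runtime are assumptions). sam build installs the dependencies because requirements.txt sits in the same directory as the CodeUri it builds:

Resources:
  MyFunction:                                  # hypothetical resource name
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: lambda_handler.lambda_handler   # assumes a lambda_handler() function in lambda_handler.py
      Runtime: python3.9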