My project has the following structure:
server/ (root of project)
|
|--- __init__.py
|--- requirements.txt
|--- env/ (virtual environment)
|--- app/ (main app folder)
     |--- __init__.py (defines an 'app = Flask(__name__)' object)
     |--- app.py (runs the app on a local server)
     |--- models.py
     |--- views.py
The way I import the different modules in app.py on my local machine is:
# /server/app/app.py
from server.app import app
from server.app.models import *
from server.app.views import *
It works fine on my local machine (using the PyCharm IDE and the Python binary inside the virtual environment folder /server/env/bin/).
However, when I push this to the production server running Ubuntu, where I install all dependencies globally, it keeps throwing the error no module named server.app when I run:
python server/app/app.py
Does anyone know why?
An IDE usually sets the PYTHONPATH for you. E.g. in Eclipse, right-click your project and look at its properties: you will see that your main project is listed in the PYTHONPATH. This path is used to locate modules.
Now in production you are not inside your IDE, so the plain Python interpreter cannot find your modules. Hence you need to specify the path yourself.
One way is to add sys.path.append('/path/to/the/project') before you do your imports (this should be done in the first script that gets executed, in this case app.py; that way you only need to do it once).
You can also add your path permanently to your production environment. See this post.
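For example, a minimal sketch of what the top of app.py could look like (the path string is a placeholder for whatever directory actually contains server/ on the production machine):
# /server/app/app.py
import sys

# Placeholder: the directory that contains the server/ package on the server.
sys.path.append('/path/to/the/project')

from server.app import app
from server.app.models import *
from server.app.views import *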
As @Rash mentioned, your IDE very probably adds the directory containing /server to your Python path. You can check this in your app.py by adding
import sys
print("\n".join(sys.path))
before your imports.
When you're manually running your app, i.e. python server/app/app.py, the parent of your server directory is obviously not in your Python path, so you have to add it yourself.
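If you prefer not to hard-code that path, here is a sketch of deriving it from __file__ inside app.py (same idea as the answer above, just computed at runtime):
# /server/app/app.py
import os
import sys

# app.py lives in <something>/server/app/, so three dirname() calls
# give the directory that contains the server/ package.
project_parent = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
if project_parent not in sys.path:
    sys.path.append(project_parent)

from server.app import app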
I have an airflow environment locally deployed on WSL, and I am using VScode to debug and code.
My app folder structure is as follows:
~/workspaces
|--- .env
|--- organization/gcp/datalake
     |--- dags
     |    |--- // My dags
     |--- plugins
          |--- __init__.py
          |--- operators
          |    |--- __init__.py
          |    |--- facebook_operators.py
          |--- hooks
               |--- __init__.py
               |--- facebook_hooks.py
I am having trouble understanding the behavior of VSCode regarding the imports.
I added the dags and plugins folders to the PYTHONPATH via .env file. My VSCode is opened directly on the workspaces directory.
The problem :
I get import errors, although I can successfully go to definition of the class I want to import.
Example : In my facebook_operators.py
from hooks.facebook_hooks import FacebookAdsHook
raises the following error :
No name 'facebook_hooks' in module 'hooks'
The contents of my .env file:
PROJECTDIR=~/workspaces/organization/gcp/datalake
PYTHONPATH=${PROJECTDIR}/plugins
PYTHONPATH=${PROJECTDIR}/dags:${PYTHONPATH}
Where did I go wrong? I'd like to understand and solve this error please.
Add the following statement at the beginning of the file "facebook_operators.py" to help VSCode find the file that needs to be imported:
import os, sys
# Add the plugins directory (two levels above this file) to the module search path
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
(By default, VSCode looks for the files to import starting from the parent folder of the currently opened file. The statement above adds the path of the files that need to be imported to the system path, so VSCode can find them.)
Project structure
I have the following folder structure
|
|- src
| |- mypackage
| | |- __init__.py
| | |- mymodule.py
| |- utils.egg
|- main.py
In the mymodule.py file I can import the egg by adding it to sys.path:
import sys
sys.path.append('src/utils.egg')
import utils
When calling main.py everything works fine (python -m main).
Problem
The problem comes from pylint. First, it shows the following message in the mymodule.py file:
Unable to import 'utils' pylint(import-error)
If I ask for suggestions (Ctrl + Space) when importing, I get
utils.build
.dist
.utils
.setup
# |- suggestions
And from utils.utils I can access the actual classes / functions in the utils module. Of course, if I import utils.utils, an import error pops up when executing the main script.
How can I configure my VSCode settings in order to fix pylint?
Should I install the egg instead of copying it to the working folder?
Is my project's folder structure OK, or does it go against recommended practices?
Extra info
In case you wonder, the EGG-INFO/SOURCES.txt file looks like:
setup.py
utils/__init__.py
utils/functions.py
utils.egg-info/PKG-INFO
utils.egg-info/SOURCES.txt
utils.egg-info/dependency_links.txt
utils.egg-info/top_level.txt
utils/internals/__init__.py
utils/internals/somemodule.py
utils/internals/someothermodule.py
Also, there are no build or dist folders in the egg.
This is an issue with Pylint itself and not the Python extension, so it will come down to how you configure Pylint.
As for whether you should copy an egg around or install it, you should be installing it into your virtual environment, or at least copying over the appropriate .pth file to make the egg directory work appropriately.
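For example, one way to configure Pylint (a sketch, assuming Pylint is invoked from the project root that contains src/) is an init-hook in a .pylintrc file, mirroring what mymodule.py does at runtime:
[MASTER]
# Make 'import utils' resolvable for Pylint by putting the egg on sys.path,
# just like mymodule.py does when it runs.
init-hook='import sys; sys.path.append("src/utils.egg")'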
I have a project where I want to use VS Code's test discovery and other testing features to make testing easier. The problem is that imports in test files break when I try to discover tests.
I have a file structure like so:
project\
    __init__.py
    package1\
        module1.py
        __init__.py
    tests\
        test.py
        __init__.py
In test.py I have a line:
import project.package1.module1 as module1
I run my project by calling python -m project in the root folder, and I am able to run tests successfully by calling python -m pytest project from the root folder.
When I run VS Code's "discover tests" feature or try to step through a file with the debugger, I receive an error 'ModuleNotFoundError: No module named project'.
Does anyone know how to solve this problem?
I had the same issue. The solution that worked for me was to introduce a .env file that holds my PYTHONPATH entries, relative to my workspace folder.
PYTHONPATH="path1:path2:pathN"
Then I added a line to my workspace settings that specifies the location of my .env file.
// ...
"python.envFile": "${workspaceFolder}/.env",
// ...
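For the layout in this question (a sketch, assuming the VS Code workspace is opened at the root folder that contains project/ and that relative entries resolve against that folder), the .env could be as small as:
PYTHONPATH="."
With that, import project.package1.module1 resolves the same way it does when you run python -m project from the root folder.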
I had the same issue where I was able to run pytest and python -m pytest successfully in the terminal within VSCode but the discovery was failing. My solution was to implement the failing import in the following way
import sys
sys.path.insert(0, '/full/path/to/package1/')
from package1.module1 import module1
Note that VSCode was opened with the project folder being the root.
The following solution works for Linux and Windows:
import sys
from pathlib import Path
sys.path.insert(0, str(Path('package1/').resolve()))
It's based on @Chufolon's answer; my Stack Overflow reputation doesn't allow me to just comment on it. I prefer his solution because the .env file could contain sensitive information (passwords, ...) that shouldn't be shared (it should be excluded via the .gitignore file) for security reasons, and also because __init__.py is shared by default through Git.
My project contains three Python applications. Application 1 is a web app. Applications 2 and 3 contain scripts downloading some data.
All three apps need to use a module Common containing a "model" (classes that are saved to database) and common settings.
I have no clue how to structure this project. I could create three directories, one for each application, and copy Common three times into their directories (doesn't seem right).
Another idea that comes to mind is to create a main directory and put all files from Common there, including __init__.py. Then, create three subdirectories (submodules), one for each application.
Another way would be installing Common using pip, but that means I would have to reinstall every time I change something in that module.
In previous projects I used .NET - the equivalent in that world would be a Solution with four projects, one of them being Common.
Any suggestions?
I have a similar project that is set up like this
project_root/
    App1/
        __init__.py
    FlaskControlPanel/
        app.py
        static/
        templates/
    models/
        __init__.py
        mymodels.py
Then, I run everything from project_root. I have a small script (either batch or shell depending on my environment) that sets PYTHONPATH=. so that imports work correctly. This is done because I usually develop using PyCharm, where the imports "just work", but when I deploy the final product the path doesn't match what it did in my IDE.
Once the PYTHONPATH is set to include everything from your project root, you can do standard imports.
For example, from my FlaskControlPanel app.py, I have this line:
from models.mymodels import Model1, Model2, Model3
From the App1 __init__.py I have the exact same import statement:
from models.mymodels import Model1, Model2, Model3
I can start the Flask application by running this from my command line (in Windows) while I am in the project_root directory:
setlocal
SET PYTHONPATH=.
python FlaskControlPanel\app.py
The setlocal is used to ensure the PYTHONPATH is only modified for this session.
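The shell variant mentioned above could look like this (a sketch; run it from project_root on Linux/macOS):
#!/bin/sh
# Make project_root the import root so "from models.mymodels import ..." works
export PYTHONPATH=.
python FlaskControlPanel/app.py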
I like this approach
projects/
    __init__.py
    project1/
        __init__.py
    project2/
        __init__.py
    lib1/
        __init__.py
        libfile.py
    lib2/
        __init__.py
So, I need to cd into the projects folder.
To start a project, use
python -m project_name
This allows me to easily import from any external lib like
from lib1.libfile import [what you want]
or
from lib1 import libfile
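One detail worth noting: running a package with python -m requires a __main__.py inside that package, which is not shown in the tree above. A minimal sketch for project1:
# projects/project1/__main__.py
# Executed by "python -m project1" when run from the projects/ folder.
from lib1 import libfile  # sibling libs resolve because projects/ is the working directory

if __name__ == "__main__":
    print("project1 started")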
Make standard Python packages from your apps. I recommend a structure like this:
apps/
    common/
        setup.py
        common/
            __init__.py
            models.py
    app1/
        setup.py
        app1/
            __init__.py
            models.py
project/
    requirements.txt
Basic setup.py for app common:
#!/usr/bin/env python
from setuptools import setup, find_packages

setup(
    name='common',
    version='1.0.0',
    packages=find_packages(),
    zip_safe=False,
)
Make a similar setup.py for the other apps.
Set the editable "-e" option for your apps in requirements.txt:
-e apps/common
-e apps/app1
Install requirements with pip:
$ pip install -r requirements.txt
The editable option means that the source files will be linked into the Python environment. Any change in the source files of your apps will have immediate effect without reinstalling them.
Now you can import models from your common app (or any other app) anywhere (in other apps, project files, ...).
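For example (a sketch; MyCommonModel is just a hypothetical name, not something defined above), code in app1 could then simply do:
# e.g. in apps/app1/app1/models.py, after "pip install -r requirements.txt"
from common.models import MyCommonModel  # hypothetical class from the common app

class Report(MyCommonModel):
    pass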
I would create a structure like this:
project_root/
    app1/
        __init__.py
        script.py
    common/
        __init__.py
        models.py (all "common" models)
app1/script.py
import os, sys

# add the parent directory (project_root) to the Python path
basepath = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..')
if basepath not in sys.path:
    sys.path.append(basepath)

from common.models import VeryCommonModel

print(VeryCommonModel)
If you don't want to set the python path at runtime, set the python path before running the script:
$ export PYTHONPATH=$PYTHONPATH:/path/to/project_root
And then you can do:
python app1/script.py
I plan to organize my python project the following way:
<my_project>/
    webapp/
        mymodulea.py
        mymoduleb.py
        mymodulec.py
        mylargemodule/
            __init__.py
            mysubmodule1.py
            mysubmodule2.py
    backend/
        mybackend1.py
        mybackend2.py
    lib/
        python_external_lib1.py
        python_external_large_lib2/
            __init__.py
            blabla.py
        python_external_lib2.py
In my development IDE (PyDev), I have set up webapp/, backend/ and lib/ as source folders to make everything work, and it all works, of course.
How can I deploy it on a remote server? Do I have to set PYTHONPATH in a startup script? Or do I have to do it programmatically?
If you are treating webapp, backend, and lib as source folders, then you are importing (for example) mymodulea, mybackend1, and python_external_large_lib2.
Then on the server, you must put webapp, backend, and lib into your python path. Doing it in some kind of startup script is the usual way to do it. Doing it programmatically is complicated because now your code needs to know what environment it's running in to configure the path correctly.
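A sketch of such a startup script, assuming the project is deployed to /opt/my_project and webapp/mymodulea.py is the entry point (both placeholders):
#!/bin/sh
# Hypothetical deployment path and entry point; adjust to the real server layout.
PROJECT=/opt/my_project
export PYTHONPATH="$PROJECT/webapp:$PROJECT/backend:$PROJECT/lib"
exec python "$PROJECT/webapp/mymodulea.py"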