I have a Flask app that I am trying to deploy to Heroku. I have a requirements.txt file listing my project's requirements, and Heroku's docs say this should be enough for Heroku to detect Python, but it does not. I can manually set the buildpack to Python like so:
heroku buildpacks:set heroku/python
but then I get this error (from running git push heroku master):
remote: -----> App not compatible with buildpack: https://buildpack-registry.s3.amazonaws.com/buildpacks/heroku/python.tgz
remote: More info: https://devcenter.heroku.com/articles/buildpacks#detection-failure
remote: No default language could be detected for this app.
What is wrong with my project layout?
Here is my file tree:
.
├── faceParser
│ ├── __init__.py
│ ├── recolor.py
│ ├── static
│ │ ├── libs
│ │ │ ├── bootstrap.min.css
│ │ │ ├── bootstrap.min.js
│ │ │ ├── jquery.min.js
│ │ │ ├── notify.js
│ │ │ └── webcam.min.js
│ │ ├── sketch.js
│ │ └── style.css
│ └── templates
│ ├── base.html
│ └── index.html
├── main.sh
├── README.md
├── requirements.txt
└── venv
(virtual environment files omitted because there are a lot of them)
This is how I run it locally:
export FLASK_APP=faceParser
export FLASK_ENV=development
flask run
Thanks!
I changed my project layout to put an app.py file in the outermost directory, and added a Procfile and a runtime.txt (containing python-3.5.2).
I think Heroku needs these files to recognize the project as a Python app.
Now it works.
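Roughly, the two files I added look like this. The gunicorn line is a guess at a typical setup, not something Heroku mandates: it assumes gunicorn is listed in requirements.txt and that app.py exposes a Flask object named app.

Procfile:

```
web: gunicorn app:app
```

runtime.txt:

```
python-3.5.2
```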
I am having some trouble with local modules when deploying on Cloud Functions - any ideas or best practices would be appreciated!
I am trying to deploy a piece of my project as a Cloud Function. It uses some local code that is shared with other modules in the project, which I pull in with an absolute import. I deploy from a Cloud Repository, where I specify the folder the function resides in (parent/cloud_function/). The problem is that the parent package is not available with that setup.
This is an example of the project structure:
├── parent_repo
│ ├── parent
│ │ ├── __init__.py
│ │ ├── config.conf
│ │ ├── config.py
│ │ ├── cloud_function
│ │ │ ├── __init__.py
│ │ │ ├── main.py
│ │ │ └── requirements.txt
│ │ ├── shared_module
│ │ │ ├── __init__.py
│ │ │ ├── package1.py
│ │ │ └── package2.py
│ │ ├── other_module
│ │ │ ├── __init__.py
│ │ │ ├── some_script.py
│ │ │ └── another_script.py
│ │ └── utils.py
Inside parent/cloud_function/main.py AND in parent/other_module/some_script.py I use:
from parent.shared_module.package1 import some_func
from parent.shared_module.package2 import some_class
to access the shared code. However, when deploying the function to Cloud Functions, the parent package is unavailable - I assume the service only looks at the specified folder.
Of course I could simply nest all required code inside the cloud_function folder - but from a project perspective that isn't ideal - as that code is shared across other resources, and does not logically belong there.
Does anyone have a good idea how to do this better?
Thanks in advance!
Very shortly - it is difficult.
The Python runtime documentation, Specifying dependencies, describes the two supported options: a requirements.txt file, or packaging local dependencies alongside your function.
You can probably also experiment with private dependencies in Cloud Build.
Some further ideas are provided in the SO question Deploy a Python Cloud Function with all package dependencies.
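The "packaging local dependencies alongside your function" option boils down to copying the shared code into the function folder as a pre-deploy step, so it is uploaded together with main.py. A hedged sketch, using the folder names from the question (the toy layout is recreated here only to make the step concrete):

```shell
# Toy reconstruction of the layout from the question (names are
# assumptions), to demonstrate the vendoring step:
mkdir -p parent/shared_module parent/cloud_function
touch parent/shared_module/__init__.py parent/shared_module/package1.py

# Pre-deploy step: copy the shared code into the function folder so it
# is uploaded with the function; the import in main.py then becomes
#   from shared_module.package1 import some_func
cp -r parent/shared_module parent/cloud_function/shared_module
```

This keeps the shared code in one place in the repo; the copy can live in a deploy script or a Cloud Build step so it never has to be committed inside cloud_function/.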
I have deployed a Flask app to Heroku with the structure below. In the app folder, you'll notice a SQLite database file, site.db, which I created using flask_sqlalchemy. The database contains a table that stores login credentials (email and password) when people register an account on my web app (think of it like a social media site).
My app is successfully deployed to Heroku. When I register a user account on my web app, I am able to use that email/password to log into the web app. However, when I do heroku git:clone -a my-app-name to pull the app to my local machine and run it locally, I'm unable to log in with the same email/password that I registered on the deployed version (my-app-name.herokuapp.com). Does anyone know why this may be the case? git clone seems to pull all files, including site.db, but the email/password I registered on the deployed app doesn't appear in the login table.
I ask because if I need to make changes to my site in the future, I will use git clone to pull the app locally, make my changes, and then push it back to heroku master. I don't want existing users to lose their login credentials just because I had to update the app. Thanks!
├── Procfile
├── app
│ ├── __init__.py
│ ├── __pycache__
│ │ ├── __init__.cpython-36.pyc
│ │ ├── forms.cpython-36.pyc
│ │ ├── models.cpython-36.pyc
│ │ └── routes.cpython-36.pyc
│ ├── forms.py
│ ├── models.py
│ ├── routes.py
│ ├── site.db
│ ├── static
│ │ ├── main.css
│ │ └── profile_pics
│ │ ├── 04e2e4347d6947a2.png
│ │ ├── 0bca0a62856cdc6e.png
│ │ └── default.jpg
│ └── templates
│ ├── about.html
│ ├── account.html
│ ├── home.html
│ ├── layout.html
│ ├── login.html
│ └── register.html
├── requirements.txt
└── run.py
I think the only solution is to use Heroku's Postgres add-on: a dyno's filesystem is ephemeral, so rows written to a bundled SQLite file are lost on every restart and never make it back into your git repo, which is why the clone doesn't contain the registered users. Then you can set app.config['SQLALCHEMY_DATABASE_URI'] to the provided database URL.
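A minimal sketch of the wiring, assuming the Heroku Postgres add-on is attached (Heroku exposes its URL in the DATABASE_URL env var) and falling back to SQLite for local runs:

```python
import os

def database_uri(default="sqlite:///site.db"):
    """Return the SQLAlchemy URI, preferring Heroku's DATABASE_URL."""
    uri = os.environ.get("DATABASE_URL", default)
    # SQLAlchemy 1.4+ rejects the legacy "postgres://" scheme that
    # Heroku's DATABASE_URL uses, so rewrite it to "postgresql://"
    if uri.startswith("postgres://"):
        uri = "postgresql://" + uri[len("postgres://"):]
    return uri

# In app/__init__.py you would then set (hypothetical wiring):
# app.config['SQLALCHEMY_DATABASE_URI'] = database_uri()
```

With this in place, the deployed app writes to Postgres while local clones use their own SQLite file, so redeploys no longer touch user data.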
I have an IoT device (an Android device) and some Python scripts that interact with a gRPC server running on the same device (as an Android app). I've cross-compiled Python 3 and in general it's working - I was able to run a Python script with it from the Android app (using Runtime.getRuntime().exec(...) and passing the PATH, PWD, PYTHONHOME, PYTHONPATH, ANDROID_DATA, ANDROID_ROOT env vars).
The Python script that uses gRPC looks as follows:
...
import grpc
...
channel = grpc.insecure_channel(url)
When the script is executed I get the following error:
import error: No module named 'grpc'
Here is the structure of the Python directories on the IoT device (that I've prepared):
├── bin
├── lib
│ ├── pkgconfig
│ ├── python3.9
│ │ ├── asyncio
│ │ ├── collections
│ │ ├── concurrent
│ │ ├── ctypes
│ │ ├── curses
│ │ ├── dbm
│ │ ├── distutils
│ │ ├── encodings
│ │ ├── ensurepip
│ │ ├── html
│ │ ├── http
│ │ ├── idlelib
│ │ ├── importlib
│ │ ├── json
│ │ ├── lib-dynload
│ │ ├── lib2to3
│ │ ├── logging
│ │ ├── multiprocessing
│ │ ├── pydoc_data
│ │ ├── site-packages
│ │ ├── sqlite3
│ │ ├── tkinter
│ │ ├── turtledemo
│ │ ├── unittest
│ │ ├── urllib
│ │ ├── venv
│ │ ├── wsgiref
│ │ ├── xml
│ │ └── xmlrpc
│ └── site-packages
│ ├── google
│ ├── grpc
│ ├── grpcio-1.30.0.dist-info
│ └── protobuf-3.12.2.dist-info
└── share
├── man
│ └── man1
└── terminfo
...
As you can see, I've put the relevant packages into site-packages (by just copying the same files from my mac machine to the IoT device, which may well be incorrect).
What's the right way to do it (where and what exactly should I put the relevant libs in the Python directory tree)? Can I put dirs/files in the same directory as the script (locally)? Is there any lite gRPC implementation in Python (perhaps with limited functionality) that could easily be prepared for distribution (e.g. copy/pasted)?
FYI I've tried python -m pip install grpcio --target resources and then python -m zipapp resources -m "grpc_serial:main", but it doesn't work even locally because the module cygrpc is not found either (it does work when using the grpc package installed globally):
ImportError: cannot import name 'cygrpc' from 'grpc._cython' (../python3/lib/python3.9/grpc/_cython/__init__.py)
If I run python -m pip install cygrpc --target resources to get a standalone dist for cygrpc, I end up with ~30 directories (probably transitive deps), about 50 MB, which is just crazy heavy.
I can provide a tree output for site-packages if it helps.
Solved it (as a workaround) by using Thrift RPC, which seems to be more lightweight on the Python side - it required only the thrift and six deps installed locally (in the same directory).
I guess the root cause of the gRPC failure was missing transitive deps (not sure whether it is a gRPC or a pip issue).
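For anyone hitting this later: grpcio is not pure Python - it bundles the compiled extension grpc._cython.cygrpc, so site-packages copied from a mac can never load on an ARM/Android device; the architecture of the wheel must match the device. A hedged sketch of fetching wheels built for a specific target instead (the platform tag here is an assumption and must match the device's actual ABI; this needs network access):

```shell
# Download binary wheels for a foreign platform instead of the host's;
# --only-binary=:all: is required when --platform is given.
python3 -m pip download grpcio \
    --only-binary=:all: \
    --platform manylinux2014_aarch64 \
    --python-version 3.9 \
    -d ./wheels
```

Whether a matching wheel exists is another matter - PyPI generally has no grpcio wheel for stock Android, which is consistent with falling back to a pure-Python stack like Thrift.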
I am trying to learn how to use Alembic with Flask, and I want to import a method from my Flask app:
tree .
.
├── README.md
├── alembic
│ ├── README
│ ├── env.py
│ ├── env.pyc
│ ├── script.py.mako
│ └── versions
│ ├── 8f167daabe6_create_account_table.py
│ └── 8f167daabe6_create_account_table.pyc
├── alembic.ini
├── app
│ ├── __init__.py
│ ├── main
│ │ ├── __init__.py
│ │ ├── errors.py
│ │ ├── forms.py
│ │ └── views.py
│ ├── models.py
│ └── templates
│ ├── 404.html
│ ├── 500.html
│ ├── base.html
│ ├── index.html
│ └── user.html
├── config.py
├── data.sqlite
├── manage.py
└── requirements.txt
in app/__init__.py:
def create_app(config_name):
    app = Flask(__name__)
I want to import create_app in env.py:
from app import create_app
but the error shows as below when I run the command alembic upgrade head:
File "alembic/env.py", line 5, in <module>
from app import create_app
ImportError: No module named app
Any idea for this?
I guess you are trying to run
python env.py
In this case, your app directory is not in PYTHONPATH.
solution 1
Run the app from parent dir:
python alembic/env.py
solution 2
Set the PYTHONPATH before running the app
PYTHONPATH=/path/to/parent/dir python env.py
edit
I read about alembic. As @mrorno said, just set PYTHONPATH before running alembic:
PYTHONPATH=. alembic upgrade head
alembic just tries to load your env.py source code. It's not in your package, so it can't access your app module.
Use solution 2 that @tomasz-jakub-rup suggested: you can execute
$ PYTHONPATH=. alembic upgrade head
and you should get your result.
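An alternative that avoids setting PYTHONPATH on every invocation is to patch sys.path at the top of env.py itself. A sketch, assuming env.py sits in alembic/ one level below the project root (the directory containing app/):

```python
# Hypothetical top of alembic/env.py: prepend the project root to
# sys.path so "from app import create_app" resolves no matter which
# directory alembic is invoked from.
import os
import sys

PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)
```

After this block, the rest of env.py can import from app normally.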
Create a file .env and insert PYTHONPATH=.
I read a lot about this problem but could not find any solution, so I'll ask yet another question about it, since I'm not even sure if I use the correct folder structure for my Python package.
So basically I'm developing an application which uses the Tornado web-server framework and I want to package it, so the users can install it via pip and get access to a basic script to start the web server.
The directory structure is the following:
├── MANIFEST.in
├── README.md
├── config
│ └── default.cfg
├── docs
│ ├── Makefile
│ ├── _build
│ ├── _static
│ ├── _templates
│ ├── conf.py
│ ├── index.rst
├── foopackage
│ ├── __init__.py
│ ├── barmodule.py
│ └── bazmodule.py
├── setup.py
├── static
│ ├── css
│ │ ├── menu.css
│ │ └── main.css
│ ├── img
│ │ └── logo.png
│ ├── js
│ │ ├── ui.js
│ │ └── navigation.js
│ └── lib
│ ├── d3.v3.min.js
│ └── jquery-1.11.0.min.js
└── templates
├── index.html
└── whatever.html
The Python code is as you can see in the package foopackage.
The MANIFEST.in file recursively includes the directories config, static, templates, and docs.
This is my setup.py (only the relevant parts):
from setuptools import setup

setup(
    name='foo',
    version='0.1.0',
    packages=['foopackage'],
    include_package_data=True,
    install_requires=[
        'tornado>=3.2.2',
    ],
    entry_points={
        'console_scripts': [
            'foo=foopackage.barmodule:main',
        ],
    },
)
If I run python setup.py sdist, everything gets packaged nicely: the docs, templates, config files, etc. are included. However, if I run pip install ..., only foopackage gets installed and everything else is ignored.
How do I include those additional files in the install procedure? Is my directory structure OK? I also read about "faking a package", i.e. putting everything in a directory and touching an __init__.py file, but that seems pretty odd to me :-\
I solved the problem by moving the static directory into the actual Python package directory (foopackage). It seems that top-level "non-package" folders are otherwise ignored.
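For reference, a hedged sketch of what the revised setup.py could look like after the move; the package_data glob patterns are assumptions based on the tree above, and include_package_data still covers the sdist via MANIFEST.in:

```python
from setuptools import setup

setup(
    name='foo',
    version='0.1.0',
    packages=['foopackage'],
    include_package_data=True,  # honor MANIFEST.in when building sdists
    # With static/ and templates/ now inside foopackage/, declare them
    # as package data so pip installs them from wheels as well:
    package_data={
        'foopackage': [
            'static/css/*',
            'static/img/*',
            'static/js/*',
            'static/lib/*',
            'templates/*',
        ],
    },
    install_requires=['tornado>=3.2.2'],
    entry_points={
        'console_scripts': ['foo=foopackage.barmodule:main'],
    },
)
```

The code then locates these files relative to the package (e.g. via os.path.dirname(__file__)) rather than the current working directory.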