I created a simple Flask web app with CRUD operations and deployed it to AWS Elastic Beanstalk with the requirements.txt file below.
Flask==1.1.1
Flask-MySQLdb==0.2.0
Jinja2==2.11.1
mysql==0.0.2
mysqlclient==1.4.6
SQLAlchemy==1.3.15
Werkzeug==1.0.0
Flask-Cors==3.0.8
Flask-Mail==0.9.1
Flask-SocketIO==4.3.0
It worked fine. Then I added the function below:
import tensorflow as tf
import keras
from keras.models import load_model
import cv2
import os
def face_shape_model():
    # Load the trained Keras model and predict the class of the input image
    classifier = load_model('face_shape_recog_model.h5')
    image = cv2.imread('')  # image path elided in the question
    res = str(classifier.predict_classes(image, 1, verbose=0)[0])
    return {"prediction": res}
after adding the packages below to the requirements.txt file:
keras==2.3.1
tensorflow==1.14.0
opencv-python==4.2.0.32
The whole Flask application works fine in my local environment, so I zipped it and deployed it to AWS Elastic Beanstalk. After deployment it logged the error below:
Unsuccessful command execution on instance id(s) 'i-0a2a8a4c5b3e56b81'. Aborting the operation.
Your requirements.txt is invalid. Snapshot your logs for details.
As instructed, I checked my logs, and they show the error below:
distutils.errors.CompileError: command 'gcc' failed with exit status 1
I searched for the error above and found a suggested solution: I created a config file with the content below and added it to the .ebextensions directory.
packages:
  yum:
    gcc-c++: []
But I still get the same error. How can I solve this, or did I take a wrong step above?
Thank you.
I finally solved it with a Docker container. I created a Docker environment in AWS Elastic Beanstalk and deployed to it, and now it works fine. My Dockerfile and config file are shown below.
Dockerfile
FROM python:3.6.8
RUN mkdir -p /usr/src/flask_app/
# Install dependencies first so Docker can cache this layer
COPY src/requirements.txt /usr/src/flask_app/
WORKDIR /usr/src/flask_app/
RUN pip install -r requirements.txt
# Copy the rest of the application code
COPY . /usr/src/flask_app
ENTRYPOINT ["python", "src/app.py"]
EXPOSE 5000
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "5000",
      "HostPort": "80"
    }
  ]
}
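As a quick sanity check before zipping everything up for Elastic Beanstalk, the image can also be built and run locally with plain Docker, mirroring the same port mapping as the Dockerrun file (the tag name here is just an example):
docker build -t flask_app .
docker run -p 80:5000 flask_app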
I have migrated my Flask application to a FastAPI application, and I'm trying to deploy the new FastAPI application to Cloud Run using a Dockerfile, but I got an error due to a port issue.
I have tried all the solutions previously given for this error, but nothing works. I have also tried many different ways to write the Dockerfile, yet I still get the same issue.
As a last try I used the FastAPI documentation to create my Dockerfile, and that didn't work either.
My Dockerfile:
# Start from the official slim Python base image.
FROM python:3.9-slim
# Set the current working directory to /code.
# This is where we'll put the requirements.txt file and the app directory.
WORKDIR /code
# Copy the file with the requirements to the /code directory.
# Copy only the file with the requirements first, not the rest of the code.
# As this file doesn't change often, Docker will detect it and use the cache for this step, enabling the cache for the next step too.
COPY ./requirements.txt /code/requirements.txt
# Install the package dependencies in the requirements file.
# The --no-cache-dir option tells pip to not save the downloaded packages locally,
# as that is only if pip was going to be run again to install the same packages,
# but that's not the case when working with containers.
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
# As this copies in all the code, which is what changes most frequently, the Docker
# cache won't easily be used for this or any following steps.
COPY ./app /code/app
# Because the program will be started at /code and inside of it is the directory ./app with your code,
# Uvicorn will be able to see and import app from app.main.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
I have also tried to configure the running port in main:
if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=int(os.environ.get('PORT', 8080)), log_level="info")
I'm deploying my application using a cloudbuild.yaml file:
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/cog-dev/new-serving', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/cog-dev/new-serving']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', 'new-serving', '--image', 'gcr.io/cog-dev/new-serving', '--region', 'europe-west1', '--allow-unauthenticated', '--platform', 'managed']
# Store images in Google Artifact Registry
images:
- gcr.io/cog-dev/new-serving
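Alternatively, if the container is meant to keep listening on port 80, the deploy step has to tell Cloud Run about it via the --port flag on gcloud run deploy. A sketch of the adjusted args line for the deploy step, with everything else unchanged:
  args: ['run', 'deploy', 'new-serving', '--image', 'gcr.io/cog-dev/new-serving', '--port', '80', '--region', 'europe-west1', '--allow-unauthenticated', '--platform', 'managed']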
I have tried most of the solutions on Stack Overflow, even changing the port numbers.
Update:
After following "Use Google Cloud user credentials when testing containers locally", I tested the Docker image locally and I get this error:
File "/code/./app/endpoints/campaign.py", line 11, in <module>
from app.services.recommend_service import RecommendService
File "/code/./app/services/recommend_service.py", line 19, in <module>
datastore_client = datastore.Client()
File "/usr/local/lib/python3.9/site-packages/google/cloud/datastore/client.py", line 301, in __init__
super(Client, self).__init__(
File "/usr/local/lib/python3.9/site-packages/google/cloud/client/__init__.py", line 320, in __init__
_ClientProjectMixin.__init__(self, project=project, credentials=credentials)
File "/usr/local/lib/python3.9/site-packages/google/cloud/client/__init__.py", line 271, in __init__
raise EnvironmentError(
OSError: Project was not passed and could not be determined from the environment.
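Note that this last error is about the Datastore client rather than the port: inside the container there are no gcloud defaults, so the client cannot infer a project. A hedged sketch of one way around it, passing the project explicitly (the GOOGLE_CLOUD_PROJECT variable and the cog-dev fallback are assumptions based on the image names above):
import os
from google.cloud import datastore
# Pass the project explicitly so the client does not have to infer it
# from gcloud defaults that do not exist inside the container.
datastore_client = datastore.Client(project=os.environ.get("GOOGLE_CLOUD_PROJECT", "cog-dev"))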
I am deploying a model on Azure Machine Learning Studio using Azure Kubernetes Service (AKS).
from azureml.core import Environment
from azureml.core.compute import AksCompute
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice

# ws is an existing Workspace handle (e.g. from Workspace.from_config())
env = Environment(name='ocr')
aks_name = 'ocr-compute-2'
# Get a handle to the existing AKS cluster
aks_target = AksCompute(ws, aks_name)
env.python.conda_dependencies.add_pip_package('google-cloud-vision')
env.python.conda_dependencies.add_pip_package('Pillow')
env.python.conda_dependencies.add_pip_package('Flask == 2.2.2')
env.python.conda_dependencies.add_pip_package('azureml-defaults')
inference_config = InferenceConfig(environment=env, source_directory='./', entry_script='./run1.py')
deployment_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
                                                       autoscale_target_utilization=20,
                                                       autoscale_min_replicas=1,
                                                       autoscale_max_replicas=4)
I am getting this error:
"statusCode": 400,
"message": "Kubernetes Deployment failed",
"details": [
{
"code": "CrashLoopBackOff",
"message": "Your container application crashed as it does not have AzureML serving stack.
Make sure you have 'azureml-defaults>=1.0.45' package in your pip dependencies, it contains requirements for the AzureML serving stack."
}
It would be great to know what I am missing here.
When the packages required to run the pipeline are listed in requirements.txt, we shouldn't rely on manually updating the pod with the libraries, e.g. via
RUN pip install -r requirements.txt
When the pod fails to find a library it needs from requirements.txt, it throws the CrashLoopBackOff error, depending on the missing dependencies.
The dependencies must be available in requirements.txt:
azureml-defaults>=1.0.45 -> install the package at the pod level and include it in requirements.txt.
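In this particular setup, the same pin can also be applied directly on the Environment object from the question; a one-line sketch (the version floor comes from the error message):
env.python.conda_dependencies.add_pip_package('azureml-defaults>=1.0.45')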
I'm super new to Python and have never used Docker before. I want to host my Python script on Google Cloud Run, but I need to package it into a Docker container to submit to Google.
What exactly needs to go in the Dockerfile to upload to Google?
Current info:
Python: v3.9.1
Flask: v1.1.2
Selenium Web Driver: v3.141.0
Firefox Geckodriver: v0.28.0
Beautifulsoup4: v4.9.3
Pandas: v1.2.0
Let me know if further information about the script is required.
I have found the following snippets of code to use as a starting point from here. I just don't know how to adjust them to fit my specifications, nor do I know what 'gunicorn' is used for.
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.7
# Install manually all the missing libraries
RUN apt-get update
RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
# Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
# Install Python dependencies.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app
# requirements.txt
Flask==1.0.2
gunicorn==19.9.0
selenium==3.141.0
chromedriver-binary==77.0.3865.40.0
Gunicorn is an application server for running your Python application instance; it is a pure-Python HTTP server for WSGI applications. It allows you to run a Python application concurrently by running multiple worker processes within a single instance.
Please have a look at the following tutorial, which explains Gunicorn in detail.
Regarding Cloud Run, to deploy there please follow the next steps or the Cloud Run official documentation:
1) Create a folder
2) In that folder, create a file named main.py and write your Flask code
Example of simple Flask code
import os
from flask import Flask
app = Flask(__name__)
#app.route("/")
def hello_world():
name = os.environ.get("NAME", "World")
return "Hello {}!".format(name)
if __name__ == "__main__":
app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
3) Now your app is finished and ready to be containerized and uploaded to Container Registry
3.1) So to containerize your app, you need a Dockerfile in the same directory as the source files (main.py); a sketch adapted to the Firefox/geckodriver stack from the question is included after these steps.
3.2) Now build your container image using Cloud Build by running the following command from the directory containing the Dockerfile:
gcloud builds submit --tag gcr.io/PROJECT-ID/FOLDER_NAME
where PROJECT-ID is your GCP project ID. You can get it by running gcloud config get-value project
4) Finally you can deploy to Cloud Run by executing the following command:
gcloud run deploy --image gcr.io/PROJECT-ID/FOLDER_NAME --platform managed
You can also have a look at the official Google Cloud Run GitHub repository for a Cloud Run Hello World sample.
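As referenced in step 3.1, here is a rough Dockerfile sketch adapted to the Firefox/geckodriver stack from the question rather than Chrome. It is untested; the geckodriver download URL, the version pins, and the main:app entry point are assumptions:
FROM python:3.9-slim
# System packages: Firefox for Selenium, plus wget to fetch geckodriver
RUN apt-get update && apt-get install -y --no-install-recommends firefox-esr wget ca-certificates && rm -rf /var/lib/apt/lists/*
# Install geckodriver (v0.28.0 to match the question; assumed URL)
RUN wget -q https://github.com/mozilla/geckodriver/releases/download/v0.28.0/geckodriver-v0.28.0-linux64.tar.gz && tar -xzf geckodriver-v0.28.0-linux64.tar.gz -C /usr/local/bin && rm geckodriver-v0.28.0-linux64.tar.gz
# Python dependencies (Flask, gunicorn, selenium, beautifulsoup4, pandas)
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copy local code to the container image
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
# Cloud Run sends traffic to $PORT; gunicorn serves the Flask app from main.py
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app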
I started Sentry using the recommended method for aiohttp as follows. When I start my script with "python [script name]", it works like a charm. However, when I start the same server inside a minimal Docker environment (FROM python:3.8), it never captures errors. Is there a problem with Sentry's official recommended setup?
import sentry_sdk
from sentry_sdk.integrations.aiohttp import AioHttpIntegration

# Sentry
sentry_sdk.init(
    dsn="https://xxxx.ingest.sentry.io/12345",
    integrations=[AioHttpIntegration()]
)
The server is running correctly, so it can't be that the library is missing. Indeed, it's in requirements.txt:
sentry-sdk==0.14.3
The Dockerfile couldn't be simpler:
FROM python:3.8
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "file.py" ]
I'm working on a Flask application based on the Microblog app from Miguel Grinberg's mega-tutorial. Code lives here: https://github.com/dnilasor/quickgig . I have a working Docker implementation with a linked MySQL 5.7 container. Today I added an admin view function using the Flask-Admin module. It works beautifully served locally (OSX) via 'flask run', but when I build and run the new Docker image (based on python:3.8-alpine), it crashes on boot with an OSError: libc not found error, the code for which seems to indicate an unknown library.
It looks to me like Gunicorn is unable to serve the app following my additions. My classmate and I are stumped!
I originally got the error using the python:3.6-alpine base image, so I tried 3.7 and 3.8 to no avail. I also noticed that I was redundantly adding PyMySQL, once in requirements.txt with a version number and again explicitly in the Dockerfile with no version spec, so I removed the requirements.txt entry. I also tried incrementing the Flask-Admin version number up and down, and cleaning up my database migrations, as I have seen multiple migration files cause the container to fail to boot (admittedly that was when using SQLite). Now there is only a single migration file, and based on the stack trace it seems like flask db upgrade works just fine.
One thing I have yet to try is a different (less minimal?) base image; I can try that soon and update this. But the issue is so mysterious to me that I thought it was time to ask if anyone else has seen it :)
I did find this socket bug, which seemed potentially relevant, but it was supposed to be fully fixed in Python 3.8.
Also FYI I followed some of the advice here on circular imports and imported my admin controller function inside create_app.
Dockerfile:
FROM python:3.8-alpine
RUN adduser -D quickgig
WORKDIR /home/quickgig
COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn pymysql
COPY app app
COPY migrations migrations
COPY quickgig.py config.py boot.sh ./
RUN chmod +x boot.sh
ENV FLASK_APP quickgig.py
RUN chown -R quickgig:quickgig ./
USER quickgig
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
boot.sh:
#!/bin/sh
source venv/bin/activate
while true; do
flask db upgrade
if [[ "$?" == "0" ]]; then
break
fi
echo Upgrade command failed, retrying in 5 secs...
sleep 5
done
# flask translate compile
exec gunicorn -b :5000 --access-logfile - --error-logfile - quickgig:app
Implementation in __init__.py:
from flask import Flask
from flask_admin import Admin

from config import Config

app_admin = Admin(name='Dashboard')

def create_app(config_class=Config):
    app = Flask(__name__)
    app.config.from_object(config_class)
    ...
    app_admin.init_app(app)
    ...
    from app.admin import add_admin_views
    add_admin_views()
    ...
    return app

from app import models
admin.py:
from flask_admin.contrib.sqla import ModelView

from app.models import User, Gig, Neighborhood
from app import db

# Add views to app_admin
def add_admin_views():
    from . import app_admin
    app_admin.add_view(ModelView(User, db.session))
    app_admin.add_view(ModelView(Neighborhood, db.session))
    app_admin.add_view(ModelView(Gig, db.session))
requirements.txt:
alembic==0.9.6
Babel==2.5.1
blinker==1.4
certifi==2017.7.27.1
chardet==3.0.4
click==6.7
dominate==2.3.1
elasticsearch==6.1.1
Flask==1.0.2
Flask-Admin==1.5.4
Flask-Babel==0.11.2
Flask-Bootstrap==3.3.7.1
Flask-Login==0.4.0
Flask-Mail==0.9.1
Flask-Migrate==2.1.1
Flask-Moment==0.5.2
Flask-SQLAlchemy==2.3.2
Flask-WTF==0.14.2
guess-language-spirit==0.5.3
idna==2.6
itsdangerous==0.24
Jinja2==2.10
Mako==1.0.7
MarkupSafe==1.0
PyJWT==1.5.3
python-dateutil==2.6.1
python-dotenv==0.7.1
python-editor==1.0.3
pytz==2017.2
requests==2.18.4
six==1.11.0
SQLAlchemy==1.1.14
urllib3==1.22
visitor==0.1.3
Werkzeug==0.14.1
WTForms==2.1
When I run the container in an interactive terminal, I see the following stack trace:
(venv) ****s-MacBook-Pro:quickgig ****$ docker run -ti quickgig:v7
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 1f5feeca29ac, test
Traceback (most recent call last):
File "/home/quickgig/venv/bin/gunicorn", line 6, in <module>
from gunicorn.app.wsgiapp import run
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 9, in <module>
from gunicorn.app.base import Application
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/app/base.py", line 12, in <module>
from gunicorn.arbiter import Arbiter
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/arbiter.py", line 16, in <module>
from gunicorn import sock, systemd, util
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/sock.py", line 14, in <module>
from gunicorn.socketfromfd import fromfd
File "/home/quickgig/venv/lib/python3.8/site-packages/gunicorn/socketfromfd.py", line 26, in <module>
raise OSError('libc not found')
OSError: libc not found
I'd like the app to boot and be served by Gunicorn inside the container so I can continue developing with my team using the Docker implementation, leveraging Dockerized MySQL instead of the pain of local MySQL for development. Can you advise?
In your Dockerfile:
RUN apk add binutils libc-dev
Yes, Gunicorn 20.0.0 requires the libc-dev package.
So this works for me:
RUN apk --no-cache add libc-dev
This was an issue with gunicorn 20.0.0, tracked here:
https://github.com/benoitc/gunicorn/issues/2160
The issue is fixed in 20.0.1 and later. So, change this:
RUN venv/bin/pip install gunicorn pymysql
to this:
RUN venv/bin/pip install 'gunicorn>=20.0.1,<21' pymysql
If upgrading is not an option, as a workaround you can add the following line:
RUN apk --no-cache add binutils musl-dev
Unfortunately this adds about 20MB to the resulting docker container, but there isn't any other known workaround at the moment.
This problem seems to be related to the new version of Gunicorn, 20.0.0. Try using the previous one, 19.9.0.
I have solved this problem:
Dockerfile: remove this installation: RUN venv/bin/pip install gunicorn
requirements.txt: add this line: gunicorn==19.7.1