I'm having trouble getting Flask and Gunicorn to work properly in Docker using Docker Compose.
Dockerfile:
FROM ubuntu:latest
MAINTAINER Kyle Calica "Kyle Calica"
RUN apt-get update -y
RUN apt-get install -y python3-dev build-essential python-pip gunicorn
RUN pip install --upgrade setuptools
RUN pip install ez_setup
COPY . /app
WORKDIR /app
RUN pip install -r ./app/requirements.txt
CMD [ "gunicorn", "-b", ":8000", "run" ]
docker-compose.yml:
version: '2'
services:
  web:
    build: .
    volumes:
      - ./:/var/www/crypto
    ports:
      - "5000:5000"
run.py:
from app import app
app.run()
From my understanding, the Gunicorn master will run on port 8000 on all interfaces in the container,
and then it'll spawn a worker to run on port 5000 in the container at 127.0.0.1/localhost.
From there I map port 5000 in the container to port 8000 on my host.
I expected to see my application from my host computer at http://127.0.0.1:8000,
but instead nothing happened and nothing seemed to be connecting.
I have done this before but can't remember what I did differently.
(env) paper-street:CoinSlack kyle$ gunicorn -b :8000 run
[2017-09-16 17:43:59 -0700] [15402] [INFO] Starting gunicorn 19.7.1
[2017-09-16 17:43:59 -0700] [15402] [INFO] Listening at: http://0.0.0.0:8000 (15402)
[2017-09-16 17:43:59 -0700] [15402] [INFO] Using worker: sync
[2017-09-16 17:43:59 -0700] [15405] [INFO] Booting worker with pid: 15405
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
^ The reason seems to be that it spawned a worker and is running it on port 5000; I can't access my app through port 8000.
app.run() and gunicorn are two ways to run a webserver. The first is the Flask development server, and it's useful for development but shouldn't be deployed in production. You shouldn't run both at the same time.
gunicorn should be pointed to the app object so that it can import it and use it to run the webserver itself. That's all it needs.
Instead of CMD [ "gunicorn", "-b", ":8000", "run" ]
Do CMD ["gunicorn", "app:app", "-b", "0.0.0.0:8000"]
You can see that instead of telling the gunicorn process what to run, you tell it where to look. The application that you want gunicorn to serve is app. You can also add more options to the gunicorn command, such as reload, the number of workers, timeout, log levels, etc.
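For example, a hypothetical variant of the CMD above with a few of those options spelled out (the specific values here are just illustrative, not taken from the question):
CMD ["gunicorn", "app:app", "-b", "0.0.0.0:8000", "--workers", "3", "--timeout", "120", "--log-level", "info", "--reload"]
--reload is convenient during development but is normally left out in production.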
To expand on Alex Hall's answer, you don't want to run the Flask development server in production, because its ability to scale is very limited. The Flask docs mention that:
Flask’s built-in server is not suitable for production as it doesn’t
scale well and by default serves only one request at a time
Related
I am able to run a Python FastAPI app locally (connecting to http://127.0.0.1:8000/), but when I try to run it through a container, I get no response in the browser. No error message either.
content of main.py
from typing import Optional
from fastapi import FastAPI
app = FastAPI()
#app.get("/")
def read_root():
return {"Hello": "World"}
#app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
return {"item_id": item_id, "q": q}
content of Dockerfile
FROM python:3.9.5
WORKDIR /code
COPY ./docker_req.txt /code/docker_req.txt
RUN pip install --no-cache-dir --upgrade -r /code/docker_req.txt
COPY ./app /code/app
CMD ["uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0"]
Output on the command line when running the container:
docker run --name my-app1 python-fastapi:1.5
INFO: Will watch for changes in these directories: ['/code']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1] using statreload
INFO: Started server process [7]
INFO: Waiting for application startup.
INFO: Application startup complete.
docker_req.txt --
fastapi==0.73.0
pydantic==1.9.0
uvicorn==0.17.4
Add -d to your docker run command to run the container in detached mode. Try running:
docker run -d --name my-app5 -p 8000:8000 python-fastapi:1.5
Simpler: you can map local port 8000 to the container's port 8000
docker run -p 8000:8000 ...
and now you can access it using one of
http://0.0.0.0:8000
http://localhost:8000
http://127.0.0.1:8000
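To sanity-check from the host (assuming the container was started with -p 8000:8000 as above), something like
curl http://localhost:8000/
should return the JSON from the root route in main.py, i.e. {"Hello":"World"}.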
Longer: you can expose port 8000 and access the app using the container's IP address
docker run --expose 8000 ...
or in Dockerfile you can use EXPOSE 8000
Next you have to find the container ID
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e7c6c543246 furas/fastapi "uvicorn app.main:ap…" 3 seconds ago Up 2 seconds 8000/tcp stupefied_nash
or
docker ps -q
0e7c6c543246
And use (part of) the container ID to get the container's IP address
docker inspect 0e7c | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",
or
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 0e7c
172.17.0.2
And now you can access it using
http://172.17.0.2:8000
EDIT:
In one line (on Bash)
docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(docker ps -q)
172.17.0.2
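As a further shorthand, and assuming a single running container on a Linux host (where container IPs are reachable from the host), you can combine the two commands with curl:
curl http://$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(docker ps -q)):8000/
On Docker Desktop for Mac or Windows the container IP is generally not reachable from the host, so the -p port-mapping approach above is the more portable option.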
I solved this problem by creating a YAML file and running it via docker-compose:
version: '3'
services:
  my-app:
    image: python-fastapi:1.5
    ports:
      - 8000:8000
In the YAML file, I also tried without the ports section and it still worked, whereas the docker run command below did not work:
docker run --name my-app5 -p 8000:8000 python-fastapi:1.5
Can anyone please explain why it's not working from the docker run command but works from the YAML file?
I am dockerizing a React-Flask web application with separate containers for the frontend and the backend Flask API. Up to now I have only run this on my localhost using the default Flask development server. I then installed Gunicorn to prep the application for deployment with Docker later, and that also ran smoothly on my localhost.
After I ran docker compose up, the two images built successfully, the containers are attached to the same network, and I got this in the logs:
Logs For backend:
Starting gunicorn 20.1.0
Listening at: http://0.0.0.0:5000 (1)
Using worker: gthread
Logs For frontend:
react-flask-app#0.1.0 start
react-scripts start
Project is running at http://172.21.0.2/
Starting the development server...
But when I try to access the site at http://172.21.0.2/, localhost:5000, or localhost:3000, it is not accessible. Do I maybe need to add the name of the frontend or backend service?
In Docker Desktop it shows that the frontend is running on port 5000, but there is no port listed for the backend; it just says it's running.
This is what my files and setup look like:
I added a gunicorn_config.py file as I read it is a good practice, rather than adding all of the arguments to the CMD in the Dockerfile:
bind = "0.0.0.0:5000"
workers = 4
threads = 4
timeout = 120
Then in my Flask backend Dockerfile I have the following CMD for Gunicorn:
FROM python:3.8-alpine
EXPOSE 5000
WORKDIR /app
COPY requirements.txt requirements.txt
ADD requirements.txt /app
RUN pip install --upgrade pip
ADD . /app
COPY . .
RUN apk add build-base
RUN apk add libffi-dev
RUN pip install -r requirements.txt
CMD ["gunicorn", "--config", "gunicorn_config.py", "main:app"]
Here I use "main:app", where my Flask app file is called main.py and app is my Flask app object.
I'm generally confused about ports and how they will interact with Gunicorn. I specified port 5000 in the EXPOSE of both of my Dockerfiles.
This is my frontend Dockerfile:
WORKDIR /app
COPY . /app
RUN npm install --legacy-peer-deps
COPY package*.json ./
EXPOSE 3000
ENTRYPOINT [ "npm" ]
CMD ["start"]
And I used 5000 in the bind value of my Gunicorn config file. Also, I previously added port 5000 as a proxy in package.json.
I will initially want to run the application using Docker on my localhost but will deploy it to a public host service like Digital Ocean later.
This is my Docker compose file:
services:
middleware:
build: .
ports:
- "5000:5000"
frontend:
build:
context: ./react-flask-app
dockerfile: Dockerfile
ports:
- "3000:3000"
The other thing to mention is that I also created a wsgi.py file, and I was wondering whether I need to add this to the Gunicorn CMD in my Dockerfile:
from main import app
if __name__ == "__main__":
    app.run()
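For reference, if you did want gunicorn to load the app through wsgi.py instead of main.py, the CMD would simply point at that module (a sketch, not taken from the original setup):
CMD ["gunicorn", "--config", "gunicorn_config.py", "wsgi:app"]
Functionally this is equivalent to main:app, since wsgi.py just imports app from main.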
I have this Python command and am not sure how to pass it in a Dockerfile.
Command
python3 app/main.py start --config config.yml
I am writing a Dockerfile but am not sure how to pass the above command in it. In my main.py file I have defined start and stop conditions in the form of actions.
config.yaml file
host: 127.0.0.1
port: 9000
db: elastic
elastic:
  port: 9200
  host: localhost
  user: null
  secret: null
  ssl: false
sqlserver:
  port: 1433
  host: localhost
  instance: MSSQLSERVER
  ssl: false
  user: null
  password: null
kafka:
  port: null
  host: null
api-docs: true
rocketchat:
  host: null
  port: null
auth-backend:
  - basic
  - bearer
  - oauth
name: Smartapp
Dockerfile
FROM python:3.8
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
RUN python3 app/main.py start --config config.yml
When I build the Dockerfile, it gets stuck indefinitely at the RUN step.
Step 7/7 : RUN python3 smartinsights/main.py start --config config.yml
---> Running in 8a81bfe608d6
/usr/src/app/smartinsights/system/DB.py:27: SyntaxWarning: "is" with a literal. Did you mean "=="?
if self.database_name is 'elastic':
/usr/src/app/smartinsights/system/DB.py:29: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif self.database_name is 'sqlserver':
Setting /usr/src/app/smartinsights as project folder
Running...
registering ActionIncident
registering ActionIncidentTag
registering MemoryCount
registering MemoryCloseCount
registering MemoryOpenCount
registering AutoCloseCount
registering AgeingAnalysisData
[2021-04-14 09:57:19 +0000] [9] [INFO] Goin' Fast @ http://127.0.0.1:8000
[2021-04-14 09:57:19 +0000] [9] [INFO] Starting worker [9]
The error below can also be seen at server startup:
[2021-04-14 10:17:37 +0000] [9] [INFO] Goin' Fast @ http://localhost:8000
[2021-04-14 10:17:37 +0000] [9] [ERROR] Unable to start server
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/sanic/server.py", line 891, in serve
http_server = loop.run_until_complete(server_coroutine)
File "uvloop/loop.pyx", line 1494, in uvloop.loop.Loop.run_until_complete
File "uvloop/loop.pyx", line 1768, in create_server
OSError: [Errno 99] error while attempting to bind on address ('::1', 8000, 0, 0): cannot assign requested address
[2021-04-14 10:17:37 +0000] [9] [INFO] Server Stopped
The Dockerfile 'builds' an image; you must not run your application during the build process. You want your application to run only when the container runs.
Change your Dockerfile to look like this:
FROM python:3.8
WORKDIR /pyapp/
COPY app/* app/
COPY . .
RUN pip install -r requirements.txt
CMD ["python3", "app/main.py", "start", "--config", "config.yml"]
This CMD line tells docker that when it runs the container, it should run this command within it. You can build it like this:
docker build --tag myPythonApp .
And run it like this
docker run -it --rm myPythonApp
You have added some output in the comments that suggests that this container is listening on port 9000. You can expose this port on the host like this:
docker run -it --rm -p 9000:9000 myPythonApp
And maybe access it in your browser at http://localhost:9000/.
That command will run the container in the current shell process. When you hit ctrl+c then the process will stop and the container will exit. If you want to keep the container running in the background try this:
docker run -it --rm -p 9000:9000 -d myPythonApp
And, if you're sure that you'll only be running one container at a time, it may help to give it a name.
docker run -it --rm -p 9000:9000 -d --name MyPythonApp myPythonApp
That will allow you to kill a background container with:
docker rm -f MyPythonApp
Btw, if you're in a mess, and you're running bash, you can remove all running and stopped containers with:
docker rm -f $(docker ps -qa)
1. Create any Python script (e.g. test.py).
2. Create the Dockerfile using the following code:
FROM python:3
WORKDIR /usr/src/app
COPY . .
CMD ["test.py"]
ENTRYPOINT ["python3"]
3. Build the Docker image:
docker build -t hello .
4. Run the container:
docker run -it hello test.py
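The same idea applies to the original question: arguments given after the image name replace the Dockerfile's CMD while the ENTRYPOINT stays, so, as a sketch (assuming the build context contains app/main.py and config.yml), the config-driven command could be run as:
docker run -it hello app/main.py start --config config.yml
Everything after the image name is handed to python3 as its arguments.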
Sorry, I am a newbie to Django, so please comment if I have forgotten any important information.
I have set up my Django app following this VSCode tutorial. In short, it teaches how to build a Docker image of a Django app in a venv.
When I started to run it with VSCode, an error occurred saying:
Exception has occurred: ImproperlyConfigured Requested setting DEBUG, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings..
However, I have os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'yiweis_blog.settings') in both my wsgi.py and manage.py.
Meanwhile, when I attach a shell directly to the container, and run python manage.py runserver, it prints Django version 3.1.1, using settings 'yiweis_blog.settings'.
I have also tried assigning yiweis_blog.settings to DJANGO_SETTINGS_MODULE in the Dockerfile and exporting the variable in the terminal, but neither worked.
Any help is appreciated. Thanks!
Append:
Directory Tree
yiweis_blog/yiweis_blog_env/lib/python3.8/site-packages/django/Dockerfile
(where yiweis_blog is my root folder where manage.py exists)
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE 1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED 1
# Install pip requirements
ADD requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
ADD . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "yiweis_blog.wsgi"]
yiweis_blog/manage.py
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'yiweis_blog.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)

if __name__ == '__main__':
    main()
docker-compose.yml
version: '3.4'
services:
  yiweisblog:
    image: yiweisblog
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
The result of gunicorn --bind 0.0.0.0:8000 yiweis_blog.wsgi is that:
appuser#7a4e96306663:/app$ gunicorn --bind 0.0.0.0:8000 yiweis_blog.wsgi
[2020-10-19 07:11:24 +0000] [40] [INFO] Starting gunicorn 20.0.4
[2020-10-19 07:11:24 +0000] [40] [INFO] Listening at: http://0.0.0.0:8000 (40)
[2020-10-19 07:11:24 +0000] [40] [INFO] Using worker: sync
[2020-10-19 07:11:24 +0000] [42] [INFO] Booting worker with pid: 42
[2020-10-19 07:11:41 +0000] [40] [INFO] Handling signal: winch
After this, it just hung and did not respond anymore.
Use this when pressing F5 in VSCode to run the server:
In manage.py
os.environ['DJANGO_SETTINGS_MODULE'] = '<appname>.settings'
instead of
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<appname>.settings")
I'm developing an app in Python using Flask and I'm getting this error while trying to deploy it to Heroku:
Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
I can see this line in the Heroku logs.
A few possibilities that I have tried:
In my Procfile I have written this web: python hello-mysql.py
I have also tried web: python hello-mysql.py runserver 0.0.0.0=$PORT
Replace "web" with "worker" in your Procfile.
To @damien's point, it looks like you're not binding to the $PORT env var. Here's some documentation that may help: https://devcenter.heroku.com/articles/getting-started-with-python#define-a-procfile and https://devcenter.heroku.com/articles/dynos#web-dynos
Also, do not rename your process to "worker" since only processes named web will be accessible via http/https.
Simply use gunicorn to ease the burden.
Within the project directory, with the virtual environment activated, install gunicorn as follows:
pip install gunicorn
If you're using pipenv you can try:
pipenv install gunicorn
Update the requirements.txt file to include the newly installed gunicorn module by running:
pip freeze > requirements.txt
Update the Procfile as follows:
web: gunicorn your_django_project_name.wsgi --log-file -
N.B.:
There should be a space between web: and gunicorn, as well as between --log-file and the - that follows it.
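Note that the line above targets a Django project's wsgi module; since the original question is a Flask app, a hedged equivalent (assuming the Flask object is named app in an importable module such as app.py) would be:
web: gunicorn app:app --bind 0.0.0.0:$PORT --log-file -
Heroku sets $PORT at runtime, and binding to it is exactly what the R10 boot-timeout error is complaining about.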
Lastly, add, commit and push the changes