I'm running Linux Mint with Python 3.6.
I have read through every related link on here but can't figure out what is wrong. I am running a simple Flask app that works fine when I run it locally on my machine, but when I run it with Docker I can't access the container's IP in my browser.
I have set the flask app to run on host 0.0.0.0, with app.run(host='0.0.0.0').
Dockerfile:
FROM python:3.7
RUN mkdir -p /var/app
WORKDIR /var/app
COPY . /var/app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["pytest", "-v", "tests/test_flask_api.py"]
# CMD ["python3", "app.py"]
CMD ["python3", "-m", "Flask", "run", "--host=0.0.0.0"]
docker-compose.yml:
web:
  build: ./app
  ports:
    - "5000:5000"
  volumes:
    - .:/code
After running the command docker-compose up -d to build and run the container, I run docker inspect --format '{{ .NetworkSettings.IPAddress }}' on the container and get its IP address, 172.17.0.2.
I try to access the site via 172.17.0.2:5000 and localhost:5000, but both just hang and don't load.
Finally, I ran docker exec -it restapimma_web_1 /bin/bash to get into the container. Then I ran curl localhost:5000 and was able to get the correct response. So the flask app is running inside the container I just can't access it outside the container.
I had a similar problem. To get it working:
Allow your Flask app to accept a host from an environment variable:
if __name__ == "__main__":
app.run(
host=os.environ.get("BACKEND_HOST", "172.0.0.1"),
port=your_port,
debug=True,
)
Set the host environment variable in your docker-compose file:
services:
  [your service name]:
    image: [your image]
    environment:
      - BACKEND_HOST=[your service name]
    ports:
      - "[etc]"
Basically, Flask needs to listen on a hostname/interface that is reachable from outside the container.
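Applied to the compose file in the question, a minimal sketch might look like this (assuming the service keeps the name web and the app listens on port 5000; inside the compose network the service name resolves to the container's own address, so binding to it, or simply to 0.0.0.0, makes the app reachable through the published port):
web:
  build: ./app
  ports:
    - "5000:5000"
  environment:
    - BACKEND_HOST=web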
I am developing a FastAPI app inside a Docker container on Windows/Ubuntu (code below). When I test the app outside the container by running python -m uvicorn app:app --reload in the terminal and then navigating to 127.0.0.1:8000/home, everything works fine:
{
Data: "Test"
}
However, when I run docker-compose up, I can neither run python -m uvicorn app:app --reload in the container (the port is already in use) nor see anything returned in the browser. I have tried 127.0.0.1:8000/home, host.docker.internal:8000/home and localhost:8000/home, and I always receive:
{
detail: "Not Found"
}
What step am I missing?
Dockerfile:
FROM python:3.8-slim
EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
RUN adduser -u nnnn --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "-k", "uvicorn.workers.UvicornWorker", "app:app"]
Docker-compose:
version: '3.9'
services:
  fastapitest:
    image: fastapitest
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 8000:8000
    extra_hosts:
      - "host.docker.internal:host-gateway"
app.py:
from fastapi import FastAPI
app = FastAPI()
#app.get("/home") #must be one line above the function fro the route
def home():
return {"Data": "Test"}
if __name__ == '__main__':
import uvicorn
uvicorn.run(app, host="127.0.0.1", port=8000)
The issue here is that when you specify host="127.0.0.1" to uvicorn, that means you can only access that port from that same machine. Now, when you run outside docker, you are on the same machine, so everything works. But since a docker container is (at least to some degree) a different computer, you need to tell it to allow connections from outside the container as well. To do this, switch to host="0.0.0.0", and then you should be able to access your dockerized API on http://localhost:8000.
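For the app.py above, only the host argument in the __main__ block needs to change (a minimal sketch):
if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)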
I am able to run a Python FastAPI app locally (connecting to http://127.0.0.1:8000/), but when I try to run it through a container I get no response in the browser, and no error message either.
Content of main.py:
from typing import Optional
from fastapi import FastAPI
app = FastAPI()
#app.get("/")
def read_root():
return {"Hello": "World"}
#app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
return {"item_id": item_id, "q": q}
Content of Dockerfile:
FROM python:3.9.5
WORKDIR /code
COPY ./docker_req.txt /code/docker_req.txt
RUN pip install --no-cache-dir --upgrade -r /code/docker_req.txt
COPY ./app /code/app
CMD ["uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0"]
Output on the command line when running the container:
docker run --name my-app1 python-fastapi:1.5
INFO: Will watch for changes in these directories: ['/code']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1] using statreload
INFO: Started server process [7]
INFO: Waiting for application startup.
INFO: Application startup complete.
docker_req.txt:
fastapi==0.73.0
pydantic==1.9.0
uvicorn==0.17.4
Add -d to docker run to run your container in detached mode, and -p to publish the port. Try running:
docker run -d --name my-app5 -p 8000:8000 python-fastapi:1.5
Simpler: you can publish local port 8000 to the container's port 8000
docker run -p 8000:8000 ...
and now you can access it using one of
http://0.0.0.0:8000
http://localhost:8000
http://127.0.0.1:8000
Longer: you can expose port 8000 and access it via the container's IP
docker run --expose 8000 ...
or in Dockerfile you can use EXPOSE 8000
Next you have to find Container ID
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e7c6c543246 furas/fastapi "uvicorn app.main:ap…" 3 seconds ago Up 2 seconds 8000/tcp stupefied_nash
or
docker ps -q
0e7c6c543246
And use (part of) CONTAINER ID to get container IP address
docker inspect 0e7c | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "172.17.0.2",
"IPAddress": "172.17.0.2",
or
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 0e7c
172.17.0.2
And now you can access it using
http://172.17.0.2:8000
EDIT:
In one line (on Bash)
docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(docker ps -q)
172.17.0.2
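Either way, a quick curl from the host (using the published port, or the container IP with the second approach) should return the root route defined in main.py, for example:
curl http://localhost:8000/
{"Hello":"World"}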
I solved this problem by creating a YAML file and running it via docker-compose:
version: '3'
services:
  my-app:
    image: python-fastapi:1.5
    ports:
      - 8000:8000
In the YAML file I also tried it without ports and it still worked, whereas the docker run command below did not:
docker run --name my-app5 -p 8000:8000 python-fastapi:1.5
Can anyone please explain why it doesn't work with the docker run command but works with the YAML file?
I am dockerizing a React-Flask web application with separate containers for the frontend and the backend Flask API. Up to now I have only run this on my localhost using the default Flask development server. I then installed Gunicorn to prep the application for deployment with Docker later, and that also ran smoothly on my localhost.
After I ran docker compose up, the two images built successfully, the containers are attached to the same network, and I got this in the logs:
Logs For backend:
Starting gunicorn 20.1.0
Listening at: http://0.0.0.0:5000 (1)
Using worker: gthread
Logs For frontend:
react-flask-app@0.1.0 start
react-scripts start
Project is running at http://172.21.0.2/
Starting the development server...
But when I try to access the site at http://172.21.0.2/, localhost:5000 or localhost:3000 it is not accessible. Do I maybe need to add the name of the frontend or backend service?
In Docker Desktop it's showing that the frontend is running at port 5000 but there is no port listed for the backend, it just says it's running.
This is what my files and setup look like:
I added a gunicorn_config.py file as I read it is a good practice, rather than adding all of the arguments to the CMD in the Dockerfile:
bind = "0.0.0.0:5000"
workers = 4
threads = 4
timeout = 120
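(For comparison, the same settings passed directly as command-line flags, without a config file, would look roughly like this:)
CMD ["gunicorn", "-b", "0.0.0.0:5000", "-w", "4", "--threads", "4", "-t", "120", "main:app"]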
Then in my Flask backend Dockerfile I have the following CMD for Gunicorn:
FROM python:3.8-alpine
EXPOSE 5000
WORKDIR /app
COPY requirements.txt requirements.txt
ADD requirements.txt /app
RUN pip install --upgrade pip
ADD . /app
COPY . .
RUN apk add build-base
RUN apk add libffi-dev
RUN pip install -r requirements.txt
CMD ["gunicorn", "--config", "gunicorn_config.py", "main:app"]
Here I do "main:app" where my Flask app file is called main.pyand then app is my Flask app object.
I'm generally confused about ports and how this will interact with Gunicorn and in general. I specified port 5000 in the EXPOSE of both of my Dockerfiles.
This is my frontend Dockerfile:
WORKDIR /app
COPY . /app
RUN npm install --legacy-peer-deps
COPY package*.json ./
EXPOSE 3000
ENTRYPOINT [ "npm" ]
CMD ["start"]
And used 5000 in the bind value of my Gunicorn config file. Also, I previously added port 5000 as a proxy in package.json.
I will initially want to run the application using Docker on my localhost but will deploy it to a public host service like Digital Ocean later.
This is my Docker compose file:
services:
middleware:
build: .
ports:
- "5000:5000"
frontend:
build:
context: ./react-flask-app
dockerfile: Dockerfile
ports:
- "3000:3000"
The other thing to mention is that I also created a wsgi.py file, and I was wondering whether I need to reference it in the Gunicorn CMD in my Dockerfile:
from main import app

if __name__ == "__main__":
    app.run()
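(For reference, Gunicorn's target is module_name:app_object, so main:app imports main.py; if the CMD were pointed at the wsgi.py above instead, it would be written as:)
CMD ["gunicorn", "--config", "gunicorn_config.py", "wsgi:app"]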
I am trying to test a simple server endpoint on my local machine when running docker compose up, but the ports do not seem to be published when I run Docker this way. If I just do a docker build and docker run, I can use localhost to make a successful endpoint call, but not when I use my docker-compose file.
docker-compose.yml file:
version: '3'
services:
  simple:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: simple
    ports:
      - 3000:80
    environment:
      - SOMEKEY=ABCD
      - ANOTHERKEY=EFG
Dockerfile
FROM python:3.9.5
ARG VERSION
ARG SERVICE_NAME
ENV PYTHONPATH=/app
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY app /app/app
COPY main.py /app/
CMD ["python", "./app/main.py"]
And then my main.py file
import uvicorn
from fastapi import FastAPI
app = FastAPI()
#app.get("/")
def read_root():
return {"Hello": "World"}
if __name__ == '__main__':
uvicorn.run(app, port=3000, host="0.0.0.0")
docker compose up does not seem to publish the port to localhost.
What I use with build and run that does expose:
docker build -t test-test .
docker run -p 3000:3000 test-test
Is there a way to expose the port to localhost with docker compose up?
The syntax for ports is HOST:CONTAINER. The port on the container is 3000, so you've got it backwards.
version: '3'
services:
  simple:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: simple
    ports:
      - 80:3000
    environment:
      - SOMEKEY=ABCD
      - ANOTHERKEY=EFG
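With that mapping, the container's port 3000 is published on host port 80, so from the host the endpoint should answer at, for example:
curl http://localhost:80/
{"Hello":"World"}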
I have a Flask application that I'm trying to dockerize, but the ports are not getting exposed properly.
Dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.7
LABEL Name=testAPP Version=0.0.1
EXPOSE 5000
ADD . /app
WORKDIR /app
# Using pip:
RUN python3 -m pip install -r requirements.txt
ENTRYPOINT [ "python3" ]
CMD ["application.py" ,"runserver","-h 0.0.0.0"]
Docker Build is successful:
docker build --rm -f "Dockerfile" -t testAPP .
Docker run starts the container successfully:
docker run -device -expose 5000:5000 testAPP
Also tried,
docker run --rm -d -p 443:443/tcp -p 5000:5000/tcp -p 80:80/tcp testAPP
But when I try to access the site, it throws a "site can't be reached" error.
Flask app (inside the app):
if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000)
On execution of the command docker container ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8724cdb38e14 testAPP "/entrypoint.sh pyth…" 15 seconds ago Up 13 seconds 80/tcp, 443/tcp, 0.0.0.0:5000->5000/tcp funny_galois
Defining a port as exposed doesn't publish the port by itself. Try the -p flag, like:
-p local_port:container_port
example:
docker run -p 8080:8080 -v ~/Code/PYTHON/ttftt-recipes-manager:/app python_dev
But before running, check whether something else is already listening on the specified port with something like:
lsof -i :PORTNUM
and afterwards check the container logs with something like:
docker logs my_container
Make sure you're mapping your localhost port to the container's port
docker run -p 127.0.0.1:8000:8000 your_image
And once your application is in the container, you want to run your app with the host set to 0.0.0.0.
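For the Flask app in this question, that means changing the run call (a minimal sketch; port 5000 matches the EXPOSE and -p mappings already used):
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)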