I have two files that depend on each other when Docker starts up. One is a Flask file and the other is a file with a few functions. When Docker starts, only the functions file is executed, but it imports Flask variables from the Flask file. Example:
Flaskfile
import flask
from flask import Flask, request
import json
_flask = Flask(__name__)
@_flask.route('/', methods=['POST'])
def flask_main():
    s = str(request.form['abc'])
    ind = global_fn_main(param1, param2, param3)
    return ind

def run(fn_main):
    global global_fn_main
    global_fn_main = fn_main
    _flask.run(debug=False, port=8080, host='0.0.0.0', threaded=True)
Main File
import flaskfile
# a few functions, then
if __name__ == '__main__':
    flaskfile.run(main_fn)
The script runs fine without needing gunicorn.
Dockerfile
FROM python-flask
ADD *.py *.pyc /code/
ADD requirements.txt /code/
WORKDIR /code
EXPOSE 8080
CMD ["python","main_file.py"]
On the command line I usually do docker run -it -p 8080:8080 my_image_name, and then Docker starts and listens.
Now to use gunicorn:
I tried to modify the CMD parameter in my Dockerfile to
["gunicorn", "-w", "20", "-b", "127.0.0.1:8083", "main_file:flaskfile"]
but it just keeps exiting. Am I not writing the Docker gunicorn command right?
I just went through this problem this week and stumbled on your question along the way. It's fair to say you've either resolved this or changed approaches by now, but for posterity's sake:
The command in my Dockerfile is:
CMD ["gunicorn" , "-b", "0.0.0.0:8000", "app:app"]
Where the first "app" is the module and the second "app" is the name of the WSGI callable. In your case it should be _flask from your code, although you have some other stuff going on that makes me less certain.
Gunicorn takes the place of all the run statements in your code; if Flask's development web server and Gunicorn try to take the same port, they can conflict and crash Gunicorn.
Note that when run by Gunicorn, __name__ is not "__main__". In my example it is equal to "app".
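A minimal sketch of that layout (a hypothetical app.py; the module name app and the callable name app are what gunicorn's app:app refers to):
# app.py -- minimal module gunicorn can import as "app:app"
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hi'

# This guard only fires when the file is run directly (python app.py).
# Under gunicorn, __name__ is "app", so the development server never
# starts and cannot fight gunicorn for the port.
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)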
At my admittedly junior level of Python, Docker, and Gunicorn, the fastest way to debug is to comment out the CMD in the Dockerfile and get the container up and running:
docker run -it -d -p 8080:8080 my_image_name
Hop onto the running container:
docker exec -it container_name /bin/bash
And start Gunicorn from the command line until you've got it working, then test with curl. I keep a basic route in my app.py file that just returns "Hi" and has no dependencies, for validating that the server is up before worrying about the port binding to the host machine.
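A sketch of that debug session, assuming the module and callable are both named app as above:
# inside the container, start gunicorn by hand
gunicorn -b 0.0.0.0:8080 app:app &

# then confirm the server answers before worrying about the host port mapping
curl http://localhost:8080/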
After struggling with this issue over the last 3 days, I found that all you need to do is to bind to the non-routable meta-address 0.0.0.0 rather than the loopback IP 127.0.0.1:
CMD ["gunicorn" , "--bind", "0.0.0.0:8000", "app:app"]
And don't forget to expose the port. One option to do that is to use EXPOSE in your Dockerfile:
EXPOSE 8000
Now:
docker build -t test .
Finally you can run:
docker run -d -p 8000:8000 test
This is the last part of my Dockerfile for a Django app:
EXPOSE 8002
COPY entrypoint.sh /code/
WORKDIR /code
ENTRYPOINT ["sh", "entrypoint.sh"]
then in entrypoint.sh
#!/bin/bash
# Prepare log files and start outputting logs to stdout
mkdir -p /code/logs
touch /code/logs/gunicorn.log
touch /code/logs/gunicorn-access.log
tail -n 0 -f /code/logs/gunicorn*.log &
export DJANGO_SETTINGS_MODULE=django_docker_azure.settings
exec gunicorn django_docker_azure.wsgi:application \
--name django_docker_azure \
--bind 0.0.0.0:8002 \
--workers 5 \
--log-level=info \
--log-file=/code/logs/gunicorn.log \
--access-logfile=/code/logs/gunicorn-access.log \
"$#"
Hope this can be useful.
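Because the script ends with exec gunicorn ... "$@", any arguments given after the image name on docker run are forwarded to gunicorn. A usage sketch (the image name django_docker_azure is an assumption):
# run with the defaults baked into entrypoint.sh
docker run -p 8002:8002 django_docker_azure

# extra arguments reach gunicorn via "$@", e.g. to raise the worker count
# (gunicorn uses the last occurrence of a repeated option)
docker run -p 8002:8002 django_docker_azure --workers 10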
This works for me:
FROM docker.io/python:3.7
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
ENV GUNICORN_CMD_ARGS="--bind=0.0.0.0 --chdir=./src/"
COPY . .
EXPOSE 8000
CMD [ "gunicorn", "app:app" ]
I was trying to run a Flask app as well. I found out that you can just use
ENTRYPOINT ["gunicorn", "-b", ":8080", "app:APP"]
This will take the file you have specified and run it on the Docker instance. Also, don't forget the shebang at the top, #!/usr/bin/env python, if you are running with the debug log level.
gunicorn main:app --workers 4 --bind :3000 --access-logfile '-'
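For reference, the same invocation in exec-form Dockerfile syntax would be (a sketch, assuming the app lives in main.py with a callable named app, per main:app; --access-logfile - sends access logs to stdout so docker logs can show them):
CMD ["gunicorn", "main:app", "--workers", "4", "--bind", ":3000", "--access-logfile", "-"]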
I am developing a FastAPI app inside a Docker container on Windows/Ubuntu (code below). When I test the app outside the container by running python -m uvicorn app:app --reload in the terminal and then navigating to 127.0.0.1:8000/home, everything works fine:
{
Data: "Test"
}
However, when I docker-compose up, I can neither run python -m uvicorn app:app --reload in the container (since the port is already in use), nor see anything returned in the browser. I have tried 127.0.0.1:8000/home, host.docker.internal:8000/home, and localhost:8000/home, and I always receive:
{
detail: "Not Found"
}
What step am I missing?
Dockerfile:
FROM python:3.8-slim
EXPOSE 8000
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
RUN adduser -u nnnn --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "-k", "uvicorn.workers.UvicornWorker", "app:app"]
Docker-compose:
version: '3.9'
services:
fastapitest:
image: fastapitest
build:
context: .
dockerfile: ./Dockerfile
ports:
- 8000:8000
extra_hosts:
- "host.docker.internal:host-gateway"
app.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/home")  # the route decorator must be on the line directly above the function
def home():
    return {"Data": "Test"}

if __name__ == '__main__':
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000)
The issue here is that when you specify host="127.0.0.1" to uvicorn, you can only access that port from that same machine. When you run outside Docker, you are on the same machine, so everything works. But since a Docker container is (at least to some degree) a different computer, you need to tell it to allow connections from outside the container as well. To do this, switch to host="0.0.0.0", and then you should be able to access your dockerized API at http://localhost:8000.
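Concretely, only the host argument in the question's app.py needs to change; a sketch:
if __name__ == '__main__':
    import uvicorn
    # 0.0.0.0 listens on all interfaces, so the port published by
    # docker-compose (8000:8000) is reachable from the host machine
    uvicorn.run(app, host="0.0.0.0", port=8000)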
I'm running Linux Mint with Python 3.6.
I have read through every link on here but can't figure out what is wrong. I am running a simple Flask app that works fine when I run it locally on my machine, but when I run it with Docker I can't access the IP in my browser.
I have set the flask app to run on host 0.0.0.0, with app.run(host='0.0.0.0').
Dockerfile:
FROM python:3.7
RUN mkdir -p /var/app
WORKDIR /var/app
COPY . /var/app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["pytest", "-v", "tests/test_flask_api.py"]
# CMD ["python3", "app.py"]
CMD ["python3", "-m", "Flask", "run", "--host=0.0.0.0"]
docker-compose.yml:
web:
build: ./app
ports:
- "5000:5000"
volumes:
- .:/code
After running docker-compose up -d to build and run the container, I run docker inspect --format '{{ .NetworkSettings.IPAddress }}' to get the container's IP address, which is 172.17.0.2.
I try to access the site via 172.17.0.2:5000 and localhost:5000, but both just hang and don't load.
Finally, I ran docker exec -it restapimma_web_1 /bin/bash to get into the container. Then I ran curl localhost:5000 and got the correct response. So the Flask app is running inside the container; I just can't access it from outside.
I had a similar problem. To get it working:
Allow your Flask app to accept a host argument from the environment:
import os

if __name__ == "__main__":
    app.run(
        host=os.environ.get("BACKEND_HOST", "127.0.0.1"),
        port=your_port,
        debug=True,
    )
Set the host environment variable in your composition:
services:
  [your service name]:
    image: [your image]
    environment:
      - BACKEND_HOST=[your service name]
    ports:
      - "[etc]"
Basically, Flask wants to be called using the right hostname.
I'm trying to make my first Django container with uwsgi. It works as follows:
FROM python:3.5
RUN apt-get update && \
apt-get install -y && \
pip3 install uwsgi
COPY ./projects.thux.it/requirements.txt /opt/app/requirements.txt
RUN pip3 install -r /opt/app/requirements.txt
COPY ./projects.thux.it /opt/app
COPY ./uwsgi.ini /opt/app
COPY ./entrypoint /usr/local/bin/entrypoint
ENV PYTHONPATH=/opt/app:/opt/app/apps
WORKDIR /opt/app
ENTRYPOINT ["entrypoint"]
EXPOSE 8000
#CMD ["--ini", "/opt/app/uwsgi.ini"]
entrypoint here is a script that decides whether to call uwsgi (when there are no args) or python manage.py in all other cases.
I'd like to use this container both as an executable (dj migrate, dj shell, ...; dj here is python manage.py, the handler for Django interaction) and as a long-running container (uwsgi --ini uwsgi.ini). I use docker-compose as follows:
web:
image: thux-projects:3.5
build: .
ports:
- "8001:8000"
volumes:
- ./projects.thux.it/web/settings:/opt/app/web/settings
- ./manage.py:/opt/app/manage.py
- ./uwsgi.ini:/opt/app/uwsgi.ini
- ./logs:/var/log/django
And I do in fact manage to serve the project correctly, but to interact with Django to run "check" I need to issue:
docker-compose exec web entrypoint check
while from reading the docs I would have imagined I just needed the arguments (without entrypoint):
Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d argument to the entry point.
The working situation with "repeated" entrypoint:
$ docker-compose exec web entrypoint check
System check identified no issues (0 silenced).
The failing one if I avoid 'entrypoint':
$ docker-compose exec web check
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"check\": executable file not found in $PATH": unknown
docker exec never uses a container's entrypoint; it just directly runs the command you give it.
When you docker run a container, the entrypoint and command you give to start it are combined to produce a single command line, and that command becomes the main container process. On the other hand, when you docker exec a command in a running container, it's interpreted literally; there aren't two parts of the command line to assemble, and the container's entrypoint isn't considered at all.
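To illustrate with a hypothetical image (not the asker's exact setup): given ENTRYPOINT ["entrypoint"] and CMD ["--ini", "uwsgi.ini"], the two commands behave quite differently:
# docker run assembles entrypoint + command into one command line
docker run myimage                         # runs: entrypoint --ini uwsgi.ini
docker run myimage check                   # runs: entrypoint check

# docker exec runs its argument literally, ignoring the entrypoint
docker exec mycontainer check              # runs: check (not in $PATH, fails)
docker exec mycontainer entrypoint check   # runs: entrypoint check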
For the use case you describe, you don't need an entrypoint script to process the command in an unusual way. You can create a symlink to the manage.py script to give it a shorter alias to run it with, but make the default command the uwsgi runner:
RUN chmod +x manage.py
RUN ln -s /opt/app/manage.py /usr/local/bin/dj
CMD ["uwsgi", "--ini", "/opt/app/uwsgi.ini"]
# Runs uwsgi:
docker run -p 8000:8000 myimage
# Manually trigger database migrations:
docker run --rm myimage dj migrate
I have a Flask application that I'm trying to dockerize, but the ports are not getting exposed properly.
Dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.7
LABEL Name=testAPP Version=0.0.1
EXPOSE 5000
ADD . /app
WORKDIR /app
# Using pip:
RUN python3 -m pip install -r requirements.txt
ENTRYPOINT [ "python3" ]
CMD ["application.py" ,"runserver","-h 0.0.0.0"]
Docker Build is successful:
docker build --rm -f "Dockerfile" -t testAPP .
Docker run also executes successfully:
docker run -device -expose 5000:5000 testAPP
I also tried:
docker run --rm -d -p 443:443/tcp -p 5000:5000/tcp -p 80:80/tcp testAPP
But when I try to access the site, I get a "site can't be reached" error.
Flask app (inside the app):
if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000)
On execution of the command docker container ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8724cdb38e14 testAPP "/entrypoint.sh pyth…" 15 seconds ago Up 13 seconds 80/tcp, 443/tcp, 0.0.0.0:5000->5000/tcp funny_galois
Defining a port as exposed doesn't publish the port by itself. Try the -p flag, like:
-p local_port:container_port
example:
docker run -p 8080:8080 -v ~/Code/PYTHON/ttftt-recipes-manager:/app python_dev
But before running, try to check whether something else is already running on the specified port, with something like:
lsof -i :PORTNUM
and afterwards check the container logs with something like:
docker logs my_container
Make sure you're mapping your localhost port to the container's port:
docker run -p 127.0.0.1:8000:8000 your_image
And once your application is in the container, you want to run your app with the host set to 0.0.0.0.
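A minimal sketch of that binding, assuming a standard Flask app object:
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # 0.0.0.0 accepts connections arriving through Docker's published port;
    # 127.0.0.1 would only accept connections from inside the container itself
    app.run(host='0.0.0.0', port=5000)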
I am trying to get a Django project that I have built to run on Docker, creating an image and a container for the project so that I can push it to my Docker Hub profile.
Now I have everything set up and I've created the initial image of my project. However, when I run it, no port number is attached to the container, and I need that to test whether the container is actually working.
Here is what I have:
Successfully built a047506ef54b
Successfully tagged test_1:latest
(MySplit) omars-mbp:mysplit omarjandali$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
test_1 latest a047506ef54b 14 seconds ago 810MB
(MySplit) omars-mbp:mysplit omarjandali$ docker run --name testing_first -d -p 8000:80 test_1
01cc8173abfae1b11fc165be3d900ee0efd380dadd686c6b1cf4ea5363d269fb
(MySplit) omars-mbp:mysplit omarjandali$ docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
You can see there is no port number, so I don't know how to access the container through the web browser on my local machine.
Dockerfile:
FROM python:3
WORKDIR tab/
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0"]
This line from the question helps reveal the problem:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01cc8173abfa test_1 "python manage.py ru…" 13 seconds ago Exited (1) 11 seconds ago testing_first
Exited (1) (from the STATUS column) means that the main process has already exited with a status code of 1, which usually indicates an error. This frees up the ports, as a Docker container stops running when its main process finishes for any reason.
You need to view the logs in order to diagnose why.
docker logs 01cc will show the logs of the container whose ID starts with 01cc. Reading them should help you on your way. Knowing this command will help you immensely in debugging weirdness in Docker, whether the container is running or stopped.
An alternative quick way is to drop the -d from your run command. This makes your container run in the foreground rather than as a daemon, so errors print straight to your terminal.
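A sketch of both approaches, reusing the names from the question (testing_second is a hypothetical name, since testing_first is already taken):
# inspect why the existing container exited
docker logs testing_first

# or re-run in the foreground (no -d) so the error prints to the terminal
docker run --name testing_second -p 8000:80 test_1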
Create a Dockerized Django seed project:
django-admin.py startproject djangoapp
You need a requirements.txt file outlining the Python dependencies.
cd djangoapp/
Run the following command to create the files required for dockerization:
cat <<EOF > requirements.txt
Django
psycopg2
EOF
Dockerfile
cat <<EOF > Dockerfile
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
EOF
docker-compose.yml
cat <<EOF > docker-compose.yml
version: "3.2"
services:
web:
image: djangoapp
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
EOF
Run the application with:
docker-compose up -d
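Once it is up, a quick sanity check that the published port answers (assuming the dev server started cleanly):
curl http://localhost:8000/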
When you created the container you published the ports, so your container would be accessible via port 8000 if it had started successfully. However, as Shadow pointed out, your container exited with an error. That is why you must add the -a flag to your docker container ls command: without -a, docker container ls only shows running containers.
I recommend forgoing the detached flag -d to see what is causing the error, then creating a new container after you have successfully launched the one you are working on. Or simply run the following commands once you fix the issue: docker stop testing_first, then docker container rm testing_first, and finally the same command you ran before, docker run --name testing_first -d -p 8000:80 test_1 (sketched below).
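That cleanup sequence as a sketch, using the names from the question:
# remove the failed container so the name can be reused
docker stop testing_first
docker container rm testing_first

# then relaunch
docker run --name testing_first -d -p 8000:80 test_1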
I ran into similar problems with the first docker instances I attempted to run as well.