Unable to connect to server when running Docker Django container - Python

I have looked through the questions on this site, but I have not been able to fix this problem.
I created and ran an image of my Django app, but when I try to view the app in the browser, the page does not load (the browser can't establish a connection to the server).
I am using Docker Toolbox on OS X El Capitan, on a 2009 MacBook.
The container IP is 192.168.99.100.
The Django project root is called "Web app" and is the directory containing manage.py. My Dockerfile and requirements.txt are in this directory.
My Dockerfile is:
FROM python:3.5
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My requirements.txt contains django and mysqlclient.
My Django app uses MySQL, and I tried to view the dockerized Django app in the browser both with and without linking it to the standard mysql image. In both cases, I only see the following error:
problem loading page couldn't establish connection to server
When I tried linking the Django container to the MySQL container, I used:
docker run --link mysqlapp:mysql -d app
where mysqlapp is my MySQL image and 'app' is my Django image.
In my Django settings.py, the allowed hosts are:
ALLOWED_HOSTS = ['localhost', '127.0.0.1', '0.0.0.0', '192.168.99.100']
Again, the image is created successfully by docker build, and it runs successfully as a container. Why is the page not loading in the browser?

I suggest using a yml file and Docker Compose. Below is a template to get you started:
[Dockerfile]
FROM python:2.7
RUN pip install Django
RUN mkdir /code
WORKDIR /code
COPY code/ /code/
where your files are located in the code directory.
[docker-compose.yml]
version: '2'
services:
  db:
    image: mysql
  web0:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    depends_on:
      - db
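To try the template, build and start both services from the directory containing the compose file (a minimal usage sketch, assuming the file names above):
docker-compose up --build
Compose will then publish the app on port 8000 of the Docker host.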
There might be a problem with the working directory path defined in your Dockerfile. Hope the above helps.

The solution provided by salehinejad seems good enough, although I have not tested it personally. If you do not want to use a yml file and want to go your own way, then you should publish the port by adding
-p 8000:8000
to your run command.
So your run command should look like this:
docker run -p 8000:8000 --link mysqlapp:mysql -d app
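Note that with Docker Toolbox the published port is reachable on the Docker Machine VM's IP rather than on localhost. To confirm that address (assuming the default machine name):
docker-machine ip default
Then browse to http://<that-ip>:8000.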

I suspect you have not told Docker to talk to your VM, and that your containers are running on your host machine (if you can access the app at localhost, this is the issue).
Please see this post for resolution:
Connect to docker container using IP
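For a quick check of which daemon your shell is pointed at, and to point it at the Toolbox VM (a sketch assuming the default machine name):
docker-machine env default
eval $(docker-machine env default)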


How to access host port after running docker compose up

I am dockerizing a React-Flask web application with separate containers for the frontend and the backend Flask API. Up to now I have only run this on localhost using the default Flask development server. I then installed Gunicorn to prep the application for deployment with Docker later, and that also ran smoothly on localhost.
After I ran docker compose up, the two images built successfully, the containers are attached to the same network, and I got this in the logs:
Logs For backend:
Starting gunicorn 20.1.0
Listening at: http://0.0.0.0:5000 (1)
Using worker: gthread
Logs For frontend:
react-flask-app#0.1.0 start
react-scripts start
Project is running at http://172.21.0.2/
Starting the development server...
But when I try to access the site at http://172.21.0.2/, localhost:5000, or localhost:3000, it is not accessible. Do I maybe need to add the name of the frontend or backend service?
In Docker Desktop it's showing that the frontend is running at port 5000 but there is no port listed for the backend, it just says it's running.
This is what my files and setup look like:
I added a gunicorn_config.py file as I read it is a good practice, rather than adding all of the arguments to the CMD in the Dockerfile:
bind = "0.0.0.0:5000"
workers = 4
threads = 4
timeout = 120
Then in my Flask backend Dockerfile I have the following CMD for Gunicorn:
FROM python:3.8-alpine
EXPOSE 5000
WORKDIR /app
RUN apk add build-base
RUN apk add libffi-dev
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "--config", "gunicorn_config.py", "main:app"]
Here I do "main:app", where my Flask app file is called main.py and app is my Flask app object.
I'm generally confused about ports and how they will interact with Gunicorn and Docker in general. I specified port 5000 in the EXPOSE of both of my Dockerfiles.
This is my frontend Dockerfile:
# Node base image (assumed)
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --legacy-peer-deps
COPY . /app
EXPOSE 3000
ENTRYPOINT [ "npm" ]
CMD ["start"]
And used 5000 in the bind value of my Gunicorn config file. Also, I previously added port 5000 as a proxy in package.json.
I will initially want to run the application using Docker on my localhost but will deploy it to a public host service like Digital Ocean later.
This is my Docker compose file:
services:
  middleware:
    build: .
    ports:
      - "5000:5000"
  frontend:
    build:
      context: ./react-flask-app
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
The other thing to mention is that I also created a wsgi.py file, and I was wondering whether I need to add this to the Gunicorn CMD in my Dockerfile:
from main import app

if __name__ == "__main__":
    app.run()
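i.e., something like this, assuming wsgi.py sits next to main.py:
CMD ["gunicorn", "--config", "gunicorn_config.py", "wsgi:app"]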

How to make FastAPI pick up changes in an API routing file automatically while running inside a Docker container?

I am running FastAPI via Docker by creating a service called ingestion-data in docker-compose. My Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
# Environment variable for directory containing our app
ENV APP /var/www/app
ENV PYTHONUNBUFFERED 1
# Define working directory
RUN mkdir -p $APP
WORKDIR $APP
COPY . $APP
# Install missing dependencies
RUN pip install -r requirements.txt
And my docker-compose.yml file:
version: '3.8'
services:
  ingestion-service:
    build:
      context: ./app
      dockerfile: Dockerfile
    ports:
      - "80:80"
    volumes:
      - .:/app
    restart: always
I am not sure why this is not picking up any change automatically when I make any change in any endpoint of my application. I have to rebuild my images and container every time.
Quick answer: Yes :)
In the Dockerfile, you are copying your app into /var/www/app.
The instructions from the Dockerfile are executed when you build your image (docker build -t <imgName>:<tag>).
If you change the code later on, how could the image be aware of that?
However, you can mount a volume (a directory) from your host machine into the container, right under /var/www/app, when you execute the docker run / docker-compose up command. You'll then be able to change the code in your local directory, and the changes will automatically be visible in the container as well.
Perhaps you want to mount the current working directory (the one containing your app) at /var/www/app:
volumes:
  - .:/var/www/app
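Note that mounting the code only makes file changes visible inside the container; the server process still has to reload them. The tiangolo/uvicorn-gunicorn-fastapi base image ships a development entrypoint for exactly that; a sketch of the service, assuming the code lives in ./app on the host (check your image version's docs for the script name):
services:
  ingestion-service:
    build:
      context: ./app
      dockerfile: Dockerfile
    # development-only entrypoint with auto-reload, provided by the base image
    command: /start-reload.sh
    ports:
      - "80:80"
    volumes:
      # mount the code at the path the image runs it from
      - ./app:/var/www/app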

Connection Refused on MongoDB Docker Container from Flask Docker Container

I have two docker containers.
Flask app
MongoDB
The Flask app has a Dockerfile that looks like this:
FROM alpine:latest
# python3-dev alone does not provide pip on Alpine; py3-pip supplies pip3
RUN apk add --no-cache python3 python3-dev py3-pip \
    && pip3 install --upgrade pip
WORKDIR /app
COPY . /app
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3"]
CMD ["app.py"]
This is how I am connecting to my local Mongo (not a container) from Flask:
mongo_uri = "mongodb://host.docker.internal:27017/myDB"
appInstance.config["MONGO_URI"] = mongo_uri
mongo = PyMongo(appInstance)
MongoDB is running in its container at mongodb://0.0.0.0:2717/myDB.
Obviously, when I run the Flask container with the local Mongo URI, mongodb://host.docker.internal:27017/myDB, everything works. But it doesn't work when I try to connect to the Mongo container in the same way, because the Flask container doesn't know anything about that Mongo container.
My question is: how do I connect this Mongo container with the Flask container so that I can query the Mongo container from the Flask container?
Thanks in advance.
If I were you, I would use docker-compose.
Solution just using docker
You'd have to find out the IP address of your mongo container and put this IP in the flask configuration file. Keep in mind that the IP address of the container can change - for example if you use a newer image.
Find IP address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
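Alternatively, containers attached to the same user-defined bridge network can resolve each other by container name, which avoids hard-coding IPs. A sketch with hypothetical network and image names:
docker network create appnet
docker run -d --network appnet --name mongo mongo
docker run -d --network appnet -p 5000:5000 flask-app
# Flask can then use mongodb://mongo:27017/myDB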
Solution using docker-compose
In your docker-compose file you'd define two services - one for flask and one for mongo. In the flask configuration file you can then access the mongo container with its service name as both services run in the same network.
docker-compose.yml:
services:
  mongo:
    ...
  flask:
    ...
flask configuration:
mongo_uri = "mongodb://mongo/myDB"
In this example mongo is the name for your mongo service.
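Fleshing out the skeleton (a minimal sketch; the flask image is assumed to build from the Dockerfile above, and Mongo listens on its default port 27017):
version: '3.8'
services:
  mongo:
    image: mongo
  flask:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - mongo
With this, the Flask container resolves the hostname mongo to the Mongo service, so mongodb://mongo:27017/myDB works without knowing any container IPs.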

View Docker Swarm CMD Line Output

I am trying to incorporate a Python container and a DynamoDB container into one stack file to experiment with Docker Swarm. I have done tutorials on Docker Swarm and seen web apps running across multiple nodes before, but I have never built anything independently. I am able to run docker-compose up with no issues, but I am struggling with swarm.
My docker-compose.yml looks like
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    links:
      - "dynamodb:localhost"
Running docker stack deploy -c docker-compose.yml trial_stack brings up no errors; however, the 'hello world' printed by the first line of my Python code is not displayed in the terminal. I get the following command-line output:
Ignoring unsupported options: links
Creating network trial_stack_default
Creating service trial_stack_dynamodb
Creating service trial_stack_track-count
My questions are:
1) Why is the deploy ignoring the links? I have noticed this is mentioned in the docs (https://docs.docker.com/engine/reference/commandline/stack_deploy/), but I am unsure whether it will cause my stack to fail.
2) Assuming the links issue is fixed, where will any command-line output be shown, to confirm the system is running? Currently I only have one node, my local machine, which is the manager.
For reference, my python image is being built by the following Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
WORKDIR /app
RUN pip install --upgrade pip
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
COPY / /
COPY /resources/secrets.py /resources/secrets.py
CMD [ "python", "/main.py" ]
You can update docker-compose.yaml to enable tty for the services for which you want to see the stdout on console.
The updated docker-compose.yaml should look like this:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    tty: true
    links:
      - "dynamodb:localhost"
and then, once you have the task deployed, you can check the service logs by running:
# get the service name
docker stack services <STACK_NAME>
# display the service logs
docker service logs --follow --raw <SERVICE_NAME>
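As for the ignored links: swarm mode drops the legacy links option, but services in the same stack are attached to a shared overlay network and can reach each other by service name, so the link is not needed. A sketch of the idea (the environment variable name is hypothetical; pass the endpoint to your app however you prefer):
  track-count:
    image: "my-app"
    environment:
      # reach DynamoDB by service name on the stack network
      - DYNAMODB_ENDPOINT=http://dynamodb:8000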

How to run a server using a Docker container?

The Django server runs well on localhost. However, when I try to run the server in a Docker container, it does not find the manage.py file when using the docker-compose file, and even when I run the container manually and start the server, the app does not appear in the browser. How can I solve this problem?
I wrote all the code testing on my local server, and using the Dockerfile I built the image of my project.
Then I tried to run the server in the Docker container, and suddenly it doesn't run.
What's worse, if I use docker-compose to run the server, it doesn't find the manage.py file, though I already checked the file is there with 'docker run -it $image_name sh'.
Here is the code of my project.
I am new to Docker and new to programming.
Hope you can give me some help. Thanks!
file structure
current directory
└─example
└─db.sqlite3
└─docker-compose.yml
└─Dockerfile
└─manage.py
└─Pipfile
└─Pipfile.lock
Dockerfile
# Base image - Python version
FROM python:3.6-alpine
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Copy Pipfile
COPY Pipfile /code
COPY Pipfile.lock /code
# Install dependencies
RUN pip install pipenv
RUN pipenv install --system
# Copy files
COPY . /code/
docker-compose.yml
# docker-compose.yml
version: '3.3'
services:
  web:
    build: .
    command: python /code/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
Expected result: the server running in a web browser like Chrome.
Actual result:
When using docker-compose, an error like this in the prompt: web_1 | python: can't open file '/code/manage.py': [Errno 2] No such file or directory
When running the container manually with 'docker run -it $image_name sh' and 'python manage.py runserver' in the shell: the server is running, but it cannot be reached from the web browser (it doesn't show up in a browser like Chrome).
You have done the same thing in two ways: you copied the source files using a COPY command, and then you also mounted a host volume over them in your docker-compose.yml file. In the first place, you don't need a volume here, because volume mounts are for persisting data generated by and used by Docker containers.
The following simplified Dockerfile and docker-compose file should fix the problem.
# Base image - Python version
FROM python:3.6-alpine
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Copy files
COPY . /code/
# Set work directory
WORKDIR /code
# Install dependencies
RUN pip install pipenv
RUN pipenv install --system
docker-compose.yml:
# docker-compose.yml
version: '3.3'
services:
  web:
    build: .
    command: python ./manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
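Since the code is now baked into the image instead of mounted, rebuild whenever the source changes so compose does not reuse a stale image:
docker-compose up --build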
