Docker executes the Python script only when I stop the container - python

I'm trying to build a Docker image for my Python app (a small API on aiohttp with a couple of endpoints):
FROM python:3
WORKDIR /home/emil/Projects/elastic_simple_engine
COPY . .
RUN pip3 install -r requirements.txt
EXPOSE 5000/tcp
CMD ["python3", "entry.py"]
The last line of the Dockerfile runs a Python script which starts an aiohttp.web.Application():
# entry.py
# ...a few dozens of code lines above...
if __name__ == '__main__':
    print('Initializing...')
    aiohttp.web.run_app(app, host='127.0.0.1', port=5000)
After building an image I'm trying to run the container:
$ docker run -p 5000:5000 myapp
Docker runs the container silently, with no output in the shell, but I can't reach my app at 127.0.0.1:5000 (everything works perfectly when I launch it without Docker).
Only when I stop the container does it print to the console the lines that should have appeared during the app's launch, and then it shuts down:
Initializing...
======== Running on http://127.0.0.1:5000 ========
(Press CTRL+C to quit)
Please help me figure out what I'm doing wrong.

TLDR
Set host to 0.0.0.0
127.0.0.1 is the IP address of the loopback interface; it can only be reached from within the same host.
0.0.0.0 means the server (in this context) listens on every available network interface (including 127.0.0.1).
Here, since you are not sharing the host's network, 127.0.0.1 is only reachable from inside the container, not from outside it. Bind to 0.0.0.0 so the app is reachable from outside the container, or pass --network="host" to docker run, but that can have other complications with port publishing.
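Applied to the aiohttp app above, a minimal sketch of entry.py with the suggested change (assuming the app and its routes are set up as in the original script) would be:
# entry.py (sketch) -- bind to all interfaces instead of the loopback address
from aiohttp import web

app = web.Application()
# ... route registrations from the original script go here ...

if __name__ == '__main__':
    print('Initializing...')
    # 0.0.0.0 makes the server reachable through the port published with -p 5000:5000
    web.run_app(app, host='0.0.0.0', port=5000)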

Related

Container didn't respond to HTTP pings on port: 80, failing site start

I have deployed the python:3.8-slim-buster image to the App Service. Generally it runs correctly, as I can see the processing in the logs; however, the health-check mechanism tries to ping the hosted server, which does not respond because the code only runs in a loop and processes messages from the queue.
It would be fine, but the application is being killed with the error:
Container didn't respond to HTTP pings on port: 80, failing site start.
Stopping site because it failed during startup.
Is there a way to either disable this "Waiting for response to warmup request for container" check or specify in the Dockerfile that those requests should be answered with OK?
Currently my Dockerfile is a two-liner that only copies the scripts and then runs the Python script.
The code that is inside this script is copied from https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-python-get-started-send#create-a-python-script-to-receive-events
The Dockerfile:
FROM python:3.8-slim-buster
COPY ./Scripts .
CMD [ "python3","-u","./calculate.py"]
The fix is to either host the script behind a web server (e.g. Node.js or the equivalent for the given language), or to create a separate process that returns something on port 80.
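As a rough illustration of the second option, here is a minimal sketch (not from the original post) that answers the warmup pings using Python's standard http.server from a background thread rather than a literal separate process, while the existing queue loop keeps running; start_health_server is a hypothetical helper name:
# Sketch: reply 200 OK to the App Service pings on port 80 in the background.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

def start_health_server(port=80):  # hypothetical helper, not part of calculate.py
    server = HTTPServer(("0.0.0.0", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

if __name__ == "__main__":
    start_health_server()
    # ... the existing Event Hubs receive loop from calculate.py runs here ...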
There might also be a problem with the default port configuration; here is an answer covering that case:
https://serverfault.com/questions/1003418/azure-docker-app-error-site-did-not-start-within-expected-time-limit-and-co
Step 1: add EXPOSE 8080 inside the Dockerfile.
Step 2: build the image from the Dockerfile:
docker build . -t python-calculator
Step 3: list the images and find the one with the tag you mentioned earlier, i.e. python-calculator:
docker images
Step 4: run the container with the port published:
docker run -p 8080:8080 -d python-calculator
Step 5: open localhost:8080.

How to access the Django app running inside the Docker container?

I am currently running my Django app inside a Docker container using the command below
docker-compose run app sh -c "python manage.py runserver"
but I am not able to access the app with the localhost URL (not using any additional DB server, nginx, or gunicorn, just running the Django development server inside Docker).
Please let me know how to access the app.
docker-compose run is intended to launch a utility container based on a service in your docker-compose.yml as a template. It intentionally does not publish the ports: declared in the Compose file, and you shouldn't need it to run the main service.
docker-compose up should be your go-to call for starting the services. Just docker-compose up on its own will start everything in the docker-compose.yml, concurrently, in the foreground; you can add -d to start the processes in the background, or a specific service name docker-compose up app to only start the app service and its dependencies.
The python command itself should be the main CMD in your image's Dockerfile. You shouldn't need to override it in your docker-compose.yml file or to provide it at the command line.
A typical Compose YAML file might look like:
version: '3.8'
services:
  app:
    build: .        # from the Dockerfile in the current directory
    ports:
      - 5000:5000   # make localhost:5000 forward to port 5000 in the container
While Compose supports many settings, you do not need to provide most of them. Compose provides reasonable defaults for container_name:, hostname:, image:, and networks:; expose:, entrypoint:, and command: will generally come from your Dockerfile and don't need to be overridden.
Try 0.0.0.0:<PORT_NUMBER> (typically 80 or 8000). If you still have trouble reaching the server, use the Docker Machine IP instead of localhost. Enter the following in a terminal and navigate to the URL it prints:
docker-machine ip

Flask + Docker app - The container dies when I run docker run

I have an application with Dockerfile + docker-compose.
Dockerfile
docker-compose.yml
I have a CI pipeline which builds an image from my Dockerfile and pushes it to Docker Hub.
Travis.yaml
When I pull this image onto my cloud server, I cannot run it with the command below:
docker run -d -p 80:80 flask-example
because the container dies.
Besides the image downloaded from Docker Hub after being built by Travis, will I need docker-compose on my server, executing the command:
docker-compose up -d
To run the application? Or is there another way to do it?
Thanks guys.
Running docker with the -d flag detaches your container, which means that it runs in the background.
Thus, you cannot see the error. Just remove this flag and you will see why it is dying.
From the link to your docker-compose file, it seems that port 80 is already in use (by the frontend container), so maybe you can try using a different port?
(for example: docker run -d -p 8080:80 flask-example)
Second, you are right.
docker-compose is just another way to run your container. You don't have to use both.

Debug a Docker container that quits immediately?

I am following the official docker tutorial:
https://docs.docker.com/get-started/part2/#build-the-app
I can successfully build the Docker image (after creating the Dockerfile, app.py and requirements.txt) and see it:
docker build -t friendlyhello .
docker ps -a
However, it quits immediately when running
docker run -p 4000:80 friendlyhello
I cannot find a way to figure out why it did not work:
1) "docker ps -a" - says the container exited
2) docker logs "container name" returns no information about logs
3) I can attach the shell to it:
docker run -p 4000:80 friendlyhello /bin/sh
but I did not manage to find (grep) any logging information there (in /var/log)
4) running it in the foreground with -t and detached with -d did not help
What else could I do?
Note: a docker exec on an exited (stopped) container should not be possible (see moby issue 30361)
docker logs and docker inspect on a stopped container should still be possible, but docker exec indeed not.
You should see
Error response from daemon: Container a21... is not running
So a docker inspect of the image you are running should reveal the entrypoint and cmd, as in this answer.
The normal behavior is the one described in this answer.
I had this exact same issue... and it drove me nuts. I am using Docker Toolbox as I am running Windows 7. I ran docker events& prior to my docker run -p 4000:80 friendlyhello. It showed me nothing more than that the container starts and exits pretty much straight away. docker logs <container id> showed nothing.
I was just about to give up when I came across a troubleshooting page suggesting to remove the docker machine and re-create it. I know that might sound like a sledgehammer-type solution, but the examples seemed to show that re-creating it downloads the latest release. I followed the steps shown and it worked! If it helps anyone, the steps I ran were:
docker-machine stop default
docker-machine rm default
docker-machine create --driver virtualbox default
Re-creating the example files, building the image and then running it now gives me:
$ docker run -p 4000:80 friendlyhello
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
And with Docker Toolbox running, I can access this at http://192.168.99.100:4000/ and now I get;
Hello World!
Hostname: ca4507de3f48
Visits: cannot connect to Redis, counter disabled

Can't manage to build and run a bottle.py app

I've been trying to set up a container to run an app with the Bottle framework. I've read everything I could find about it, but even so I can't get it working. Here's what I did:
Dockerfile:
# Use an official Python runtime as a parent image
FROM python:2.7
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 8080 available to the world outside this container
EXPOSE 8080
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
app.py:
import os
from bottle import route, run, template
@route('/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)
run(host='localhost', port=8080)
requirements.txt
bottle
By running the command docker build -t testapp . I build the image.
Then by running the command docker run -p 8080:8080 testapp I get this terminal output:
Bottle v0.12.13 server starting up (using WSGIRefServer())...
Listening on http://localhost:8080/
Hit Ctrl-C to quit.
But when I go to localhost:8080/testing I get a "localhost refused to connect" error.
Can anyone point me in the right direction?
Problem is this line:
run(host='localhost', port=8080)
It is exposing the app on "localhost" inside the container where your code runs. You could use the Python library netifaces to get the container's external interface if you wanted to, but I suggest you set 0.0.0.0 as the host, like:
run(host='0.0.0.0', port=8080)
Then you will be able to access http://localhost:8080/ (assuming your Docker engine is at localhost).
EDIT: note that your previous container might still be listening on 8080/tcp. Remove or stop the previous container first.
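For reference, a minimal sketch of the corrected app.py with that change applied (everything else as in the question) might look like:
# app.py (sketch) -- Bottle app listening on all interfaces
from bottle import route, run, template

@route('/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

# 0.0.0.0 so the server is reachable through the published port (-p 8080:8080)
run(host='0.0.0.0', port=8080)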
