Docker container is not created - Python

I work with a Python tool that uses Docker for project management. I run the setup process with the command:
$ bin/butler.py setup
The setup went through seamlessly, but when I try to install new PHP plugins using Composer, the tool can't find the container itself.
So my conclusion is that the tool is not creating the container properly in the first place.
I describe the setup process below. After the initial configuration, this is where it starts:
# all done
print("pull docker images")
self.docker.compose_pull(self.local_yml)
print("create containers")
self.docker.compose_setup(self.local_yml)
print("setup completed")
This is the general wrapper for executing docker-compose. I know it has a security bug, but that is not the concern at the moment.
def compose(self, params, yaml_path="docker-compose.yml"):
    """Execute a docker-compose command."""
    cmd = f"docker-compose -f {yaml_path} {params}"
    print(cmd)
    try:
        subprocess.run(cmd, shell=True, check=True)
    except Exception:
        pass

def compose_pull(self, yaml_path):
    self.compose("pull --ignore-pull-failures", yaml_path)

def compose_setup(self, yaml_path):
    self.compose(f"--project-name {self.project_name} up --no-start", yaml_path)
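As written, compose() swallows every failure: subprocess.run(..., check=True) raises CalledProcessError on a non-zero exit code, but the bare except hides it. A minimal sketch of surfacing the exit code instead (run_compose is a hypothetical helper name, not part of the tool):

```python
import subprocess

def run_compose(cmd):
    """Run a shell command and report failure instead of silently swallowing it."""
    try:
        subprocess.run(cmd, shell=True, check=True)
        return True
    except subprocess.CalledProcessError as err:
        # a non-zero exit code from docker-compose lands here
        print(f"command failed with exit code {err.returncode}: {cmd}")
        return False
```

With something like this in place, a failing docker-compose call would at least print its exit code instead of letting setup report success.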
The printout shows the commands:
pull docker images
# We use a docker-compose.yml and perform the pull operation
docker-compose -f /Users/chaklader/PycharmProjects/Welance-Craft-Starter/build/docker-compose.yml pull --ignore-pull-failures
Pulling database ...
Pulling craft ...
create containers
# We use a docker-compose.yml and perform the up operation for the project
docker-compose -f /Users/chaklader/PycharmProjects/Welance-Craft-Starter/build/docker-compose.yml --project-name p13-27 up --no-start
Creating network "p13-27_default" with the default driver
Creating p13-27_database ...
Creating p13-27_craft ...
setup completed
The docker-compose.yml file is provided below:
services:
  craft:
    container_name: p13-27_craft
    environment:
      CRAFT_ALLOW_UPDATES: 'false'
      CRAFT_DEVMODE: 1
      CRAFT_EMAIL: admin#welance.de
      CRAFT_ENABLE_CACHE: 0
      CRAFT_LOCALE: en_us
      CRAFT_PASSWORD: welance
      CRAFT_SITENAME: Welance
      CRAFT_SITEURL: //localhost
      CRAFT_USERNAME: admin
      DB_DATABASE: craft
      DB_DRIVER: mysql
      DB_PASSWORD: craft
      DB_PORT: '3306'
      DB_SCHEMA: public
      DB_SERVER: database
      DB_TABLE_PREFIX: craft_
      DB_USER: craft
      ENVIRONMENT: dev
      HTTPD_OPTIONS: ''
      LANG: C.UTF-8
      SECURITY_KEY: some_key_:)
    image: welance/craft:3.1.17.2
    links:
      - database
    ports:
      - 80:80
    volumes:
      - /var/log
      - ./docker/craft/conf/apache2/craft.conf:/etc/apache2/conf.d/craft.conf
      - ./docker/craft/conf/php/php.ini:/etc/php7/php.ini
      - ./docker/craft/logs/apache2:/var/log/apache2
      - ./docker/craft/adminer:/data/adminer
      - ../config:/data/craft/config
      - ../templates:/data/craft/templates
      - ../web:/data/craft/web
  database:
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci
      --init-connect='SET NAMES UTF8;'
    container_name: p13-27_database
    environment:
      MYSQL_DATABASE: xyz
      MYSQL_PASSWORD: xyz
      MYSQL_ROOT_PASSWORD: xyz
      MYSQL_USER: xyz
    image: mysql:5.7
    volumes:
      - /var/lib/mysql
version: '3.1'
In summary, my base image is welance/craft:3.1.17.2, and I use it to create the container named p13-27_craft. The additional configuration is provided in the docker-compose.yml file, and I run the pull and up commands with docker-compose.
I think the container itself is not created. For example, I provided the data for customer ID 15 and project ID 55, and the printout says Creating p15-55_craft ... done.
When I run the command to see if the container is created from the terminal, I find,
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf2ea4638772 welance/craft:3.1.17.1 "/data/scripts/run-c…" 37 minutes ago Up 37 minutes 0.0.0.0:80->80/tcp p13-17_craft
4504ae62035f mysql:5.7 "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp p13-17_database
518e3535859b mysql:5.7
So the information from the printout is not correct, and the container is not created in the first place.
How do I investigate what the issue is here and why the container is not being created?
Thank you.

Get rid of the --no-start option, and add the -d flag to run the containers in the background (detached). Reproducing this on my own project:
docker-compose up --no-start
Creating alerts-cache ... done
Creating mongoClientTemp ... done
Creating apilayer_alerts-api_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Nothing is found, even though my containers are created.
docker-compose up -d
Starting alerts-cache ... done
Starting mongoClientTemp ... done
Starting apilayer_alerts-api_1 ... done
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
af557a2add73 bdsdev.azurecr.io/rva_flask "python app.py alert…" 2 minutes ago Up 1 second 0.0.0.0:5000->5000/tcp apilayer_alerts-api_1
829da0fabe62 bdsdev.azurecr.io/temp_mongo "docker-entrypoint.s…" 2 minutes ago Up 2 seconds 27017/tcp mongoClientTemp
cdb67a305233 mongo
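Applied to the asker's tool, that change would mean dropping --no-start from compose_setup. A sketch of the command string it would then build, mirroring the compose() wrapper above (build_up_command is a made-up standalone helper for illustration):

```python
def build_up_command(project_name, yaml_path):
    """Build a docker-compose command that creates AND starts the containers."""
    # -d (detached) starts the containers in the background,
    # whereas --no-start only creates them without running them
    return f"docker-compose -f {yaml_path} --project-name {project_name} up -d"

print(build_up_command("p13-27", "build/docker-compose.yml"))
# → docker-compose -f build/docker-compose.yml --project-name p13-27 up -d
```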

How do I investigate what is the issue here and why the container is
not creating?
Your configuration seems correct, and docker-compose does not report any error. It's probably that your container was created but either was not started or exited right after being started. You are using docker ps, which only shows running containers; you will probably see your missing container by running docker ps -a.
docker-compose won't report any error if the container is created (and started) successfully but exits right after starting. If you can see your container with docker ps -a, try running docker logs <container name> to see why it exited. The steps to solve the issue afterwards will depend on how your container works.
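To fold that check into the tool itself, the output of docker ps -a could be scanned for the expected container name. A hedged sketch that keeps the parsing separate from the docker call (both function names are made up; the wrapper requires docker on the PATH):

```python
import subprocess

def container_listed(name, ps_output):
    """Return True if a container name appears in docker ps output.

    ps_output is expected to be the text produced by
    `docker ps -a --format '{{.Names}}'`, i.e. one name per line.
    """
    return name in ps_output.splitlines()

def find_container(name):
    # hypothetical wrapper around the actual docker call
    out = subprocess.run(
        "docker ps -a --format '{{.Names}}'",
        shell=True, capture_output=True, text=True,
    ).stdout
    return container_listed(name, out)
```

The setup code could call something like find_container("p13-27_craft") right after compose_setup and fail loudly if it returns False.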

Related

Python process in docker crashes when run non-interactively

I face an odd problem with a tool I found on GitHub; it's a script that relays an MJPEG stream: https://github.com/OliverF/mjpeg-relay
I create a docker image with the provided command (after initializing the submodule with git submodule update --init):
docker build -t relay .
When I run the container as follows (with the -it flag), the script runs fine; when I remove the flag, the container exits after a few seconds.
docker run -it -p 54017:54321 relay "http://192.0.2.1:1234/?action=stream"
Since I want to be able to start the script for multiple streams in a docker-compose file, adding restart: unless-stopped leads to an endless loop of restarting containers.
services:
  mjpeg:
    image: relay
    command: "http://192.0.2.1:1234/?action=stream"
    ports:
      - "54017:54017"
    restart: unless-stopped
I thought about encapsulating the command in tmux sessions, but I had no success with it. Can you help me find what causes the script to crash when it runs non-interactively?
Thank you very much!
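A common cause of this exact pattern is a script that touches stdin, which only exists as a terminal when the container runs with -it. This is a guess about the relay script, not a confirmed diagnosis, but it is cheap to check from Python:

```python
import sys

def stdin_is_interactive():
    """True when a real terminal is attached (as with `docker run -it`)."""
    return sys.stdin.isatty()

# A script that unconditionally reads stdin would fail or see EOF
# immediately without -it; guarding the read avoids that:
if stdin_is_interactive():
    line = sys.stdin.readline()
```

If the script logs why it exits, docker logs <container> on the stopped container should show it.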

Send logs from docker container to Windows host

I've seen other posts with similar questions, but I can't find a solution for my scenario. I hope someone can help.
Here is the thing: I have a Python script that listens for UDP traffic and stores the messages in a log file. I've put this script in a docker image so I can run it in a container.
I need to map the generated logs (the Python script's logs) from inside the container to a folder outside the container, on the host machine (a Windows host).
If I use docker-compose, everything works fine, but I can't find a way to make it work using a docker run command.
Here is my docker-compose.yml file
version: '3.3'
services:
  '20001':
    image: udplistener
    container_name: '20001'
    ports:
      - 20001:20001/udp
    environment:
      - UDP_PORT=20001
      - BUFFER_SIZE=1024
    volumes:
      - ./UDPLogs/20001:/usr/src/app/logs
And here is the corresponding docker run command:
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 -v "/C/Users/kgonzale/Desktop/UDPLogs/20001":/usr/src/app/logs -p 20001:20001 udplistener
I think the problem may be related to the way I'm creating the volumes. I know it's different (docker-compose uses a relative path, the docker command an absolute path), but I can't find a way to use relative paths with the docker run command.
To summarize: the Python script creates logs inside the container, and I want to map those logs outside the container. I can see the logs on the host machine if I use docker-compose up -d, but I need the corresponding docker run command.
Container: python:3.7-alpine
Host: Windows 10
Thanks in advance for your help!
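docker run does not resolve relative host paths the way Compose does (Compose resolves them relative to the file's directory), but the absolute path can be computed before building the command. A sketch using pathlib (the container path is taken from the question; bind_mount is a made-up helper):

```python
from pathlib import Path

def bind_mount(rel_host_path, container_path):
    """Turn a relative host path into a docker run -v argument."""
    # resolve() produces the absolute path, much like Compose does internally
    abs_host = Path(rel_host_path).resolve()
    return f'-v "{abs_host}":{container_path}'

# run from the directory that holds docker-compose.yml:
print(bind_mount("./UDPLogs/20001", "/usr/src/app/logs"))
```

The printed argument can then be dropped into the docker run command in place of the hand-written absolute path.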

How to access the Django app running inside the Docker container?

I am currently running my Django app inside a docker container using the command below:
docker-compose run app sh -c "python manage.py runserver"
but I am not able to access the app with the localhost URL (I'm not using any additional DB server, nginx, or gunicorn, just running the Django development server inside docker).
Please let me know how to access the app.
docker-compose run is intended to launch a utility container based on a service in your docker-compose.yml as a template. It intentionally does not publish the ports: declared in the Compose file, and you shouldn't need it to run the main service.
docker-compose up should be your go-to call for starting the services. Just docker-compose up on its own will start everything in the docker-compose.yml, concurrently, in the foreground; you can add -d to start the processes in the background, or a specific service name docker-compose up app to only start the app service and its dependencies.
The python command itself should be the main CMD in your image's Dockerfile. You shouldn't need to override it in your docker-compose.yml file or to provide it at the command line.
A typical Compose YAML file might look like:
version: '3.8'
services:
  app:
    build: .        # from the Dockerfile in the current directory
    ports:
      - 5000:5000   # make localhost:5000 forward to port 5000 in the container
While Compose supports many settings, you do not need to provide most of them. Compose provides reasonable defaults for container_name:, hostname:, image:, and networks:; expose:, entrypoint:, and command: will generally come from your Dockerfile and don't need to be overridden.
Try 0.0.0.0:<PORT_NUMBER> (typically 80 or 8000). If you are still having trouble connecting to the server, use the Docker Machine IP instead of localhost. Enter the following in the terminal and navigate to the URL it prints:
docker-machine ip

View Docker Swarm CMD Line Output

I am trying to incorporate a Python container and a DynamoDB container into one stack file to experiment with Docker swarm. I have done tutorials on docker swarm, seeing web apps running across multiple nodes, but I have never built anything independently. I can run docker-compose up with no issues, but I'm struggling with swarm.
My docker-compose.yml looks like
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    links:
      - "dynamodb:localhost"
Running docker stack deploy -c docker-compose.yml trial_stack brings up no errors; however, the 'hello world' printed by the first line of my Python code is not displayed in the terminal. I get the following command-line output:
Ignoring unsupported options: links
Creating network trial_stack_default
Creating service trial_stack_dynamodb
Creating service trial_stack_track-count
My questions are:
1) Why is the deploy ignoring the links? I have noticed this is mentioned in the docs (https://docs.docker.com/engine/reference/commandline/stack_deploy/), but I'm unsure whether it will cause my stack to fail.
2) Assuming the links issue is fixed, where will any command-line output be shown to confirm the system is running? Currently I only have one node, my local machine, which is the manager.
For reference, my python image is being built by the following Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
WORKDIR /app
RUN pip install --upgrade pip
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
COPY / /
COPY /resources/secrets.py /resources/secrets.py
CMD [ "python", "/main.py" ]
You can update docker-compose.yaml to enable tty for the services whose stdout you want to see on the console.
The updated docker-compose.yaml should look like this:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    tty: true
    links:
      - "dynamodb:localhost"
and then, once you have the stack deployed, you can check the service logs by running:
# get the service name
docker stack services <STACK_NAME>
# display the service logs
docker service logs --follow --raw <SERVICE_NAME>
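A separate reason a Python service's print output never shows up in docker service logs is stdout buffering: inside a container stdout is a pipe, so Python block-buffers it, and an early print can sit in the buffer indefinitely if the process then blocks or crashes. A hedged sketch of forcing the line out (this is a general Python behavior, not confirmed as this asker's issue):

```python
# Inside a container stdout is a pipe, so Python block-buffers it; an early
# print may never reach `docker service logs` if the process crashes before
# the buffer flushes. flush=True pushes the line out immediately:
print("hello world", flush=True)

# Alternatives: run the script with `python -u /main.py`, or set
# ENV PYTHONUNBUFFERED=1 in the Dockerfile, so every write is flushed.
```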

Debug Docker that quits immediately?

I am following the official docker tutorial:
https://docs.docker.com/get-started/part2/#build-the-app
I can successfully build the Docker image (after creating the Dockerfile, app.py and requirements.txt) and see it:
docker build -t friendlyhello .
docker ps -a
However, it quits immediately when running:
docker run -p 4000:80 friendlyhello
I cannot figure out why it did not work:
1) docker ps -a says the container exited.
2) docker logs "container name" returns no log information.
3) I can attach a shell to it:
docker run -p 4000:80 friendlyhello /bin/sh
but I did not manage to find (grep) any logging information there (in /var/log).
4) Attaching foreground and detached mode with -t and -d did not help.
What else could I do?
Note: a docker exec on an exited (stopped) container is not possible (see moby issue 30361).
docker logs and docker inspect on a stopped container should still be possible, but docker exec indeed is not.
You should see:
Error response from daemon: Container a21... is not running
So a docker inspect of the image you are running should reveal its entrypoint and cmd, as in this answer.
The normal behavior is the one described in this answer.
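docker inspect emits JSON, so the entrypoint and cmd can be pulled out programmatically instead of eyeballing the dump. A sketch parsing a shortened sample of that JSON (the structure matches docker inspect output; the values here are invented for illustration):

```python
import json

# shortened, invented sample of `docker inspect <image>` output
sample = '[{"Config": {"Entrypoint": null, "Cmd": ["python", "app.py"]}}]'

config = json.loads(sample)[0]["Config"]
print("entrypoint:", config["Entrypoint"])  # → entrypoint: None
print("cmd:", config["Cmd"])                # → cmd: ['python', 'app.py']
```

In practice the sample string would come from subprocess.run("docker inspect <image>", ...) instead of a literal.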
I had this exact same issue... and it drove me nuts. I am using Docker Toolbox as I am running Windows 7. I ran docker events & prior to my docker run -p 4000:80 friendlyhello. It showed me nothing more than the container starting and exiting pretty much straight away. docker logs <container id> showed nothing.
I was just about to give up when I came across a troubleshooting page suggesting removing the docker machine and re-creating it. I know that might sound like a sledgehammer solution, but the examples seemed to show that re-creating downloads the latest release. I followed the steps shown and it worked! If it helps anyone, the steps I ran were:
docker-machine stop default
docker-machine rm default
docker-machine create --driver virtualbox default
Re-creating the example files, building the image, and then running it now gives me:
$ docker run -p 4000:80 friendlyhello
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
And with Docker Toolbox running, I can access this at http://192.168.99.100:4000/ and now I get;
Hello World!
Hostname: ca4507de3f48
Visits: cannot connect to Redis, counter disabled
