Python process in Docker crashes when run non-interactively - python

I am facing an odd problem with a tool I found on GitHub; it is a script that distributes an MJPEG stream: https://github.com/OliverF/mjpeg-relay
I created a Docker image with the provided command (after initializing the submodule with git submodule update --init):
docker build -t relay .
When I run the container as follows (with the -it flag), the script runs fine; when I remove the flag, the container exits after a few seconds.
docker run -it -p 54017:54321 relay "http://192.0.2.1:1234/?action=stream"
Since I want to start the script for multiple streams from a docker-compose file, adding restart: unless-stopped leads to an endless loop of restarting containers.
services:
  mjpeg:
    image: relay
    command: "http://192.0.2.1:1234/?action=stream"
    ports:
      - "54017:54017"
    restart: unless-stopped
I thought about wrapping the command in a tmux session, but I had no success with that. Alternatively, can you help me find out what makes the script crash when it runs non-interactively?
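For reference, docker-compose can allocate a pseudo-TTY and keep stdin open, which is what the -it flags do on the command line; a minimal sketch of the service with those options added (I have not confirmed that this is actually what mjpeg-relay needs):
services:
  mjpeg:
    image: relay
    command: "http://192.0.2.1:1234/?action=stream"
    tty: true          # compose equivalent of -t
    stdin_open: true   # compose equivalent of -i
    ports:
      - "54017:54321"  # mirrors the port mapping of the working docker run command
    restart: unless-stopped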
Thank you very much!

Related

Send logs from docker container to Windows host

I've seen other posts with similar questions, but I can't find a solution for my particular case. I hope someone can help.
Here is the thing: I have a Python script that listens for UDP traffic and stores the messages in a log file. I've put this script in a Docker image so I can run it in a container.
I need to map the generated logs (the Python script's logs) FROM inside the container TO a folder outside the container, on the host machine (a Windows host).
If I use docker-compose, everything works fine, but I can't find a way to make it work using a "docker run ..." command.
Here is my docker-compose.yml file
version: '3.3'
services:
  '20001':
    image: udplistener
    container_name: '20001'
    ports:
      - 20001:20001/udp
    environment:
      - UDP_PORT=20001
      - BUFFER_SIZE=1024
    volumes:
      - ./UDPLogs/20001:/usr/src/app/logs
And here is the corresponding docker run command:
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 -v "/C/Users/kgonzale/Desktop/UDPLogs/20001":/usr/src/app/logs -p 20001:20001 udplistener
I think the problem may be related to the way I'm creating the volumes. I know the two are different (docker-compose -> relative path, docker run -> absolute path), but I can't find a way to use relative paths with the docker run command.
To summarize: the Python script creates logs inside the container, and I want to map those logs outside the container. I can see the logs on the host machine if I use "docker-compose up -d", but I need the corresponding "docker run ..." command.
Container: python:3.7-alpine
Host: Windows 10
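For reference, a sketch of the same docker run command with the host path expanded by the shell instead of hard-coded, assuming it is launched from the directory containing the docker-compose.yml (note the compose file also publishes the port as UDP):
# Git Bash / WSL: let the shell build the absolute path
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 \
  -v "$(pwd)/UDPLogs/20001:/usr/src/app/logs" -p 20001:20001/udp udplistener
# cmd.exe:     -v "%cd%\UDPLogs\20001:/usr/src/app/logs"
# PowerShell:  -v "${PWD}\UDPLogs\20001:/usr/src/app/logs"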
Thanks in advance for your help!

Docker container is not created

I work with a Python tool that uses Docker for project management. I run the setup process with the command:
$ bin/butler.py setup
The setup went through seamlessly, but when I try to install new PHP plugins using Composer, the tool doesn't find the container.
So, my conclusion is the tool is not creating the container properly in the first place.
I describe the setup process below. After the initial configuration, this is where it starts:
# all done
print("pull doker images images")
self.docker.compose_pull(self.local_yml)
print("create containers")
self.docker.compose_setup(self.local_yml)
print("setup completed")
This is the general method for executing docker-compose. I know it has a security issue, but at this moment that is not the concern.
def compose(self, params, yaml_path="docker-compose.yml"):
    """Execute a docker-compose command."""
    cmd = f"docker-compose -f {yaml_path} {params}"
    print(cmd)
    try:
        subprocess.run(cmd, shell=True, check=True)
    except Exception:
        pass

def compose_pull(self, yaml_path):
    self.compose("pull --ignore-pull-failures", yaml_path)

def compose_setup(self, yaml_path):
    self.compose(f"--project-name {self.project_name} up --no-start ", yaml_path)
The printout shows the commands:
pull docker images
# We use a docker-compose.yml and perform the pull operation
docker-compose -f /Users/chaklader/PycharmProjects/Welance-Craft-Starter/build/docker-compose.yml pull --ignore-pull-failures
Pulling database ...
Pulling craft ...
create containers
# We use a docker-compose.yml and perform the up operation for the project
docker-compose -f /Users/chaklader/PycharmProjects/Welance-Craft-Starter/build/docker-compose.yml --project-name p13-27 up --no-start
Creating network "p13-27_default" with the default driver
Creating p13-27_database ...
Creating p13-27_craft ...
setup completed
The docker-compose.yml file is provided below:
services:
  craft:
    container_name: p13-27_craft
    environment:
      CRAFT_ALLOW_UPDATES: 'false'
      CRAFT_DEVMODE: 1
      CRAFT_EMAIL: admin#welance.de
      CRAFT_ENABLE_CACHE: 0
      CRAFT_LOCALE: en_us
      CRAFT_PASSWORD: welance
      CRAFT_SITENAME: Welance
      CRAFT_SITEURL: //localhost
      CRAFT_USERNAME: admin
      DB_DATABASE: craft
      DB_DRIVER: mysql
      DB_PASSWORD: craft
      DB_PORT: '3306'
      DB_SCHEMA: public
      DB_SERVER: database
      DB_TABLE_PREFIX: craft_
      DB_USER: craft
      ENVIRONMENT: dev
      HTTPD_OPTIONS: ''
      LANG: C.UTF-8
      SECURITY_KEY: some_key_:)
    image: welance/craft:3.1.17.2
    links:
      - database
    ports:
      - 80:80
    volumes:
      - /var/log
      - ./docker/craft/conf/apache2/craft.conf:/etc/apache2/conf.d/craft.conf
      - ./docker/craft/conf/php/php.ini:/etc/php7/php.ini
      - ./docker/craft/logs/apache2:/var/log/apache2
      - ./docker/craft/adminer:/data/adminer
      - ../config:/data/craft/config
      - ../templates:/data/craft/templates
      - ../web:/data/craft/web
  database:
    command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci
      --init-connect='SET NAMES UTF8;'
    container_name: p13-27_database
    environment:
      MYSQL_DATABASE: xyz
      MYSQL_PASSWORD: xyz
      MYSQL_ROOT_PASSWORD: xyz
      MYSQL_USER: xyz
    image: mysql:5.7
    volumes:
      - /var/lib/mysql
version: '3.1'
In summary, my base image is welance/craft:3.1.17.2 and I use it to create the container named p13-27_craft. The additional configuration is provided in the docker-compose.yml file, and I run the pull and up commands through docker-compose.
I think the container itself is not created. For example, I provided the data for customer ID 15 and project ID 55, and the printout reports Creating p15-55_craft ... done.
When I run the command in the terminal to see whether the container was created, I find:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cf2ea4638772 welance/craft:3.1.17.1 "/data/scripts/run-c…" 37 minutes ago Up 37 minutes 0.0.0.0:80->80/tcp p13-17_craft
4504ae62035f mysql:5.7 "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp p13-17_database
518e3535859b mysql:5.7
So the information from the printout is not correct, and the container is not created in the first place.
How do I investigate what the issue is here and why the container is not being created?
Thank you.
Get rid of the --no-start option, and add the -d flag to run as a daemon (background process). For comparison, if I run my own setup with --no-start:
docker-compose up --no-start
Creating alerts-cache ... done
Creating mongoClientTemp ... done
Creating apilayer_alerts-api_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Nothing is found, even though my containers are created.
docker-compose up -d
Starting alerts-cache ... done
Starting mongoClientTemp ... done
Starting apilayer_alerts-api_1 ... done
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
af557a2add73 bdsdev.azurecr.io/rva_flask "python app.py alert…" 2 minutes ago Up 1 second 0.0.0.0:5000->5000/tcp apilayer_alerts-api_1
829da0fabe62 bdsdev.azurecr.io/temp_mongo "docker-entrypoint.s…" 2 minutes ago Up 2 seconds 27017/tcp mongoClientTemp
cdb67a305233 mongo
How do I investigate what the issue is here and why the container is not being created?
Your configuration seems correct and docker-compose does not report any error; it's probable that your container was created but either was not started or exited right after being started. You are using docker ps, which only shows running containers; you will probably see your missing container by running docker ps -a.
docker-compose won't report any error if a container is created (and started) successfully but exits right after starting. If you can see your container with docker ps -a, try running docker logs <container name> to see why it exited. The steps to solve the issue afterwards will depend on how your container works.
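For example, a quick check along those lines (container names taken from the compose file shown above):
docker ps -a                  # lists exited containers as well as running ones
docker logs p13-27_craft      # shows why the craft container stopped, if it was created
docker logs p13-27_database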

docker-compose: Why is my python application being invoked here?

I've been scratching my head over this for a while. I have the following Dockerfile for my Python application:
# Use an official Python runtime as a parent image
FROM frankwolf/rpi-python3
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
RUN chmod 777 docker-entrypoint.sh
# Install any needed packages specified in requirements.txt
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
# Run __main__.py when the container launches (not sure if sudo is needed here)
CMD ["sudo", "python3", "__main__.py", "-debug"]
docker-compose file:
version: "3"
services:
mongoDB:
restart: unless-stopped
volumes:
- "/data/db:/data/db"
ports:
- "27017:27017"
- "28017:28017"
image: "andresvidal/rpi3-mongodb3:latest"
mosquitto:
restart: unless-stopped
ports:
- "1883:1883"
image: "mjenz/rpi-mosquitto"
FG:
privileged: true
network_mode: "host"
depends_on:
- "mosquitto"
- "mongoDB"
volumes:
- "/home/pi:/home/pi"
#image: "arkfreestyle/fg:v1.8"
image: "test:latest"
entrypoint: /app/docker-entrypoint.sh
restart: unless-stopped
And this is what docker-entrypoint.sh looks like:
#!/bin/sh
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    echo "Creating .initialized"
    # Create .initialized hidden file
    touch /home/pi/.initialized
else
    echo "Initialized already!"
    sudo python3 __main__.py -debug
fi
Here's what I am trying to do:
(This stuff already works)
1) I need a Docker image which runs my Python application when I run it in a container. (This works.)
2) I need a docker-compose file which runs two services plus my Python application, BUT before running my Python application I need to do some initialization work; for this I created the shell script docker-entrypoint.sh. I want to do this initialization work ONLY ONCE, when I deploy my application on a machine for the first time. So I'm creating a .initialized hidden file which I'm using as a check in my shell script.
I read that using entrypoint in a docker-compose file overrides any entrypoint/cmd given in the Dockerfile. That is why, in the else portion of my shell script, I'm manually running my code using "sudo python3 __main__.py -debug"; this else portion works fine.
(This is the main question)
In the if portion, I do not run my application in the shell script. I've tested the shell script itself separately; both the if and else branches work as I expect. But when I run "sudo docker-compose up", the first time my shell script hits the if portion it echoes the two statements, creates the hidden file, and THEN RUNS MY APPLICATION. The console output for the application appears in purple/pink/mauve, while the other two services print their logs in yellow and cyan. I'm not sure if the colors matter, but normally my application logs are always green; in fact the first two echoes, "Initializing" and "Creating .initialized", are also green, so I thought I'd mention this detail. After those two echoes, my application mysteriously starts and logs console output in purple...
Why/how is my application being invoked in the if statement of the shell script?
(This only happens if I run it through docker-compose, not if I just run the shell script with sh docker-entrypoint.sh.)
Problem 1
Using ENTRYPOINT and CMD at the same time has some effects that are easy to get wrong: in exec form, the CMD value is appended to the ENTRYPOINT as its default arguments, and overriding the entrypoint in docker-compose also discards the image's default CMD.
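For illustration, a generic sketch of that interaction (not the poster's exact files):
ENTRYPOINT ["/app/docker-entrypoint.sh"]
CMD ["-debug"]
# The container then effectively runs: /app/docker-entrypoint.sh -debug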
Problem 2
This happens to your container:
It is started the first time. The .initialized file does not exist.
The if case is executed. The file is created.
The script and therefore the container ends.
The restart: unless-stopped option restarts the container.
The .initialized file exists now, the else case is run.
python3 __main__.py -debug is executed.
By the way, the USER instruction in the Dockerfile or the user option in Docker Compose is a better choice than sudo.
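A minimal sketch of an entrypoint that keeps the one-time initialization but always starts the application afterwards, so the first run does not exit and trigger the restart loop (it assumes the application should start on every run):
#!/bin/sh
# One-time initialization guarded by a marker file
if [ ! -f /home/pi/.initialized ]; then
    echo "Initializing..."
    touch /home/pi/.initialized
else
    echo "Initialized already!"
fi
# exec replaces the shell so the Python process receives signals directly
exec python3 __main__.py -debug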

Flask + Docker app - The container dies when I run docker run

I have an application with Dockerfile + docker-compose.
Dockerfile
docker-compose.yml
I have a CI, which creates an image from my dockerfile and send it to the hub.docker
Travis.yaml
When I download this image on my cloud server, I cannot run it with the command below:
docker run -d -p 80:80 flask-example
because the container dies.
Besides the image downloaded from hub.docker after it is built by Travis, will I need docker-compose on my server, executing the command:
docker-compose up -d
To run the application? Or is there another way to do it?
Thanks guys.
Running docker with the -d flag detaches your container, which means it runs in the background.
Thus, you cannot see the error. Just remove this flag and you will see why it is dying.
From the link to your docker-compose file, it seems that port 80 is already in use (by the frontend container), so maybe you can try using a different port?
(for example: docker run -d -p 8080:80 flask-example)
Second, you are right.
docker-compose is just another way to run your container. You don't have to use both.
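For example, a hedged way to see the failure on the server (using the image name from the question):
docker run -p 8080:80 flask-example   # run in the foreground to watch it die
docker ps -a                          # find the exited container afterwards
docker logs <container-id>            # read its output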

Debug Docker that quits immediately?

I am following the official docker tutorial:
https://docs.docker.com/get-started/part2/#build-the-app
I can successfully build the Docker image (after creating the Dockerfile, app.py and requirements.txt) and see it:
docker build -t friendlyhello .
docker ps -a
However, it quits immediately when running
docker run -p 4000:80 friendlyhello
I cannot find a way to figure out why it did not work:
1) "docker ps -a" - says the container exited
2) docker logs "container name" returns no information about logs
3) I can attach the shell to it:
docker run -p 4000:80 friendlyhello /bin/sh
but I did not manage to find (grep) any logging information there (in /var/log)
4) running it in foreground mode with -t and in detached mode with -d did not help
What else could I do?
Note: a docker exec on an exited (stopped) container should not be possible (see moby issue 30361)
docker logs and docker inspect on a stopped container should still be possible, but docker exec indeed not.
You should see
Error response from daemon: Container a21... is not running
So a docker inspect of the image you are running should reveal the entrypoint and cmd, as in this answer.
The normal behavior is the one described in this answer.
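For example, a sketch of that inspection (image name taken from the tutorial):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' friendlyhello
docker logs <container-name-or-id>    # still works on a stopped container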
I had this exact same issue... and it drove me nuts. I am using Docker Toolbox, as I am running Windows 7. I ran docker events& prior to my docker run -p 4000:80 friendlyhello. It showed me nothing more than that the container starts and exits pretty much straight away. docker logs <container id> showed nothing.
I was just about to give up when I came across a troubleshooting page with the suggestion to remove the docker machine and re-create it. I know that might sound like a sledgehammer type of solution, but the examples seemed to show that the re-create downloads the latest release. I followed the steps shown and it worked! If it helps anyone, the steps I ran were:
docker-machine stop default
docker-machine rm default
docker-machine create --driver virtualbox default
Re-creating the example files, building the image, and then running it now gives me:
$ docker run -p 4000:80 friendlyhello
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
And with Docker Toolbox running, I can access this at http://192.168.99.100:4000/ and now I get;
Hello World!
Hostname: ca4507de3f48
Visits: cannot connect to Redis, counter disabled
