I have an application with Dockerfile + docker-compose.
Dockerfile
docker-compose.yml
I have a CI pipeline, which builds an image from my Dockerfile and pushes it to Docker Hub.
Travis.yaml
When I pull this image onto my cloud server, I cannot run it with the command below:
docker run -d -p 80:80 flask-example
because the container dies.
Besides the image that Travis builds and pushes to Docker Hub, will I need docker-compose on my server? That is, would I start the application with:
docker-compose up -d
Or is there another way to do it?
Thanks guys.
Running docker with the -d flag detaches your container, which means it runs in the background.
Thus, you cannot see the error. Just remove this flag and you will see why the container is dying.
From the link to your docker-compose file, it seems that port 80 is already in use (by the frontend container), so maybe you can try using a different port?
(for example: docker run -d -p 8080:80 flask-example)
Second, you are right.
docker-compose is just another way to run your container. You don't have to use both.
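To make the port-conflict part concrete, here is a small plain-Python sketch (standard library only, no Docker involved): binding a TCP port that another process already holds raises OSError, which is the same failure the second container hits when both ask for host port 80.

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Try to bind the port; a failure means some process already holds it."""
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        probe.bind((host, port))
        return False
    except OSError:
        return True
    finally:
        probe.close()

# Occupy an OS-chosen free port, then probe it while it is held.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))
taken_port = holder.getsockname()[1]
result = port_in_use(taken_port)
print(result)  # True: the port is held by `holder`
holder.close()
```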
I'm facing an odd problem with a tool I found on GitHub: a script that relays an MJPEG stream: https://github.com/OliverF/mjpeg-relay
I created a Docker image with the provided command (after initializing the submodule with git submodule update --init):
docker build -t relay .
When I run the container as follows (with the -it flag), the script runs fine; when I remove the flag, the container exits after a few seconds.
docker run -it -p 54017:54321 relay "http://192.0.2.1:1234/?action=stream"
Since I want to start the script for multiple streams from a Docker Compose file, I added restart: unless-stopped, which leads to an endless loop of restarting containers.
services:
  mjpeg:
    image: relay
    command: "http://192.0.2.1:1234/?action=stream"
    ports:
      - "54017:54017"
    restart: unless-stopped
I thought about wrapping the command in a tmux session, but I had no success with that. Can you help me find out what makes the script crash when it runs non-interactively?
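If the script only survives with an attached stdin/TTY, one thing worth trying (an untested guess, not a confirmed fix): Compose can reproduce docker run's -i and -t flags with stdin_open and tty:

```yaml
services:
  mjpeg:
    image: relay
    command: "http://192.0.2.1:1234/?action=stream"
    stdin_open: true   # compose equivalent of docker run -i
    tty: true          # compose equivalent of docker run -t
    ports:
      - "54017:54017"
    restart: unless-stopped
```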
Thank you very much!
I have a dockerfile
FROM python:3
WORKDIR /app
ADD ./venv ./venv
ADD ./data/file1.csv.gz ./data/file1.csv.gz
ADD ./data/file2.csv.gz ./data/file2.csv.gz
ADD ./requirements.txt ./venv/requirements.txt
WORKDIR /app/venv
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "./src/script.py", "/app/data/file1.csv.gz", "/app/data/file2.csv.gz"]
After building an image from it and running it, the app runs as it should, but then the container shuts down immediately after finishing. This is definitely problematic since I can't inspect the output file.
I have tried docker run -d -t <imgname>, and docker ps shows the app for a few seconds, but once again, as soon as the process finishes, the container shuts itself down.
So it's impossible to get in; even trying docker exec -it <containerid> /bin/bash fails, because the container has already exited.
I've also tried adding a final RUN /bin/bash after the CMD, but that doesn't help either.
What can I do to actually be able to log into the container and inspect the file?
As long as the container hasn't been removed, you will be able to get at the data. You can find the name of the container with docker ps -a.
Then, if you know the location of the file, you can copy it to your host using
docker cp <container name>:<file> .
Alternatively, you can commit the contents of the container to a new image and run a shell in that using
docker commit <container name> newimagename
docker run --rm -it newimagename /bin/bash
Then you can look around in the container and find your files.
Unfortunately there's no way to start the container up again and look around in it. docker start will start the container, but will run the same command again as was run when you did docker run.
I'm trying to build a Docker image for my Python app (a small API on aiohttp with a couple of endpoints):
FROM python:3
WORKDIR /home/emil/Projects/elastic_simple_engine
COPY . .
RUN pip3 install -r requirements.txt
EXPOSE 5000/tcp
CMD ["python3", "entry.py"]
The last line of the Dockerfile runs a python script which starts aiohttp.web.Application():
# entry.py
# ...a few dozen lines of code above...
if __name__ == '__main__':
    print('Initializing...')
    aiohttp.web.run_app(app, host='127.0.0.1', port=5000)
After building an image I'm trying to run the container:
$ docker run -p 5000:5000 myapp
Docker runs the container silently, without any output in the shell, but I can't reach my app at 127.0.0.1:5000 (everything works perfectly when I launch it without Docker).
Only when I stop the container does it print to the console the lines that should have been shown at startup, and then it shuts down:
Initializing...
======== Running on http://127.0.0.1:5000 ========
(Press CTRL+C to quit)
Please help me figure out what I'm doing wrong.
TL;DR
Set host to 0.0.0.0
127.0.0.1 is the IP address of the loopback interface; it can only be reached from within the same host.
0.0.0.0 means a server (in this context) would listen to every available network interface (including 127.0.0.1).
Here, since you are not sharing the host's network, 127.0.0.1 is only reachable inside the container, not from outside it. You should bind to 0.0.0.0 so the app is reachable from outside the container, or pass --network="host" to docker run, but that can have other complications with port sharing.
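The loopback-vs-all-interfaces distinction can be demonstrated with nothing but Python's standard socket module; a self-contained sketch (no Docker or aiohttp involved). In entry.py the analogous fix would be aiohttp.web.run_app(app, host='0.0.0.0', port=5000).

```python
import socket

# Bind to all interfaces; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))
server.listen(1)
port = server.getsockname()[1]

# A client arriving via loopback still gets through, because a
# 0.0.0.0 listener covers 127.0.0.1 along with every other interface.
client = socket.create_connection(("127.0.0.1", port), timeout=2)
conn, _ = server.accept()
conn.sendall(b"ok")
reply = client.recv(2).decode()
print(reply)  # ok
client.close(); conn.close(); server.close()
```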
I've seen other posts with similar questions but I can't find a solution for my case scenario. I hope someone can help.
Here is the thing: I have a python script that listens for UDP traffic and stores the messages in a log file. I've put this script in a docker image so I can run it in a container.
I need to map the generated logs (python script logs) FROM inside the container TO a folder outside the container, on the host machine (Windows host).
If I use docker-compose, everything works fine, but I can't find a way to make it work using a "docker run ..." command.
Here is my docker-compose.yml file
version: '3.3'
services:
  '20001':
    image: udplistener
    container_name: '20001'
    ports:
      - 20001:20001/udp
    environment:
      - UDP_PORT=20001
      - BUFFER_SIZE=1024
    volumes:
      - ./UDPLogs/20001:/usr/src/app/logs
And here is the corresponding docker run command:
docker run -d --name 20001 -e UDP_PORT=20001 -e BUFFER_SIZE=1024 -v "/C/Users/kgonzale/Desktop/UDPLogs/20001":/usr/src/app/logs -p 20001:20001 udplistener
I think the problem may be related to the way I'm creating the volumes. I know the syntax is different (docker-compose uses a relative path, the docker command an absolute one), but I can't find a way to use relative paths with the docker run command.
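One workaround sketch (volume_arg is a hypothetical helper, not part of Docker): docker run does insist on an absolute host path, but it can be computed from the same relative path that docker-compose resolves against the project directory.

```python
import os
import shlex

def volume_arg(relative_host_path, container_path):
    """Hypothetical helper: expand a compose-style relative host path
    (resolved against the current directory, as docker-compose does)
    into the absolute HOST:CONTAINER form that `docker run -v` expects."""
    host = os.path.abspath(relative_host_path)
    return f"{host}:{container_path}"

arg = volume_arg("UDPLogs/20001", "/usr/src/app/logs")
print("docker run -v", shlex.quote(arg), "... udplistener")
```

In a POSIX shell, -v "$(pwd)/UDPLogs/20001:/usr/src/app/logs" has the same effect; PowerShell's ${PWD} plays a similar role on Windows.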
To summarize: the Python script creates logs inside the container, and I want to map those logs outside the container. I can see the logs on the host machine if I use "docker-compose up -d", but I need the corresponding "docker run ..." command.
Container: python:3.7-alpine
Host: Windows 10
Thanks in advance for your help!
I am following the official docker tutorial:
https://docs.docker.com/get-started/part2/#build-the-app
I can successfully build the Docker image (after creating the Dockerfile, app.py and requirements.txt) and see it:
docker build -t friendlyhello .
docker ps -a
However, it quits immediately when running
docker run -p 4000:80 friendlyhello
I cannot find a way to figure out why it did not work:
1) "docker ps -a" - says the container exited
2) docker logs "container name" returns no information about logs
3) I can attach the shell to it:
docker run -p 4000:80 friendlyhello /bin/sh
but I did not manage to find (grep) any logging information there (in /var/log)
4) running in foreground and detached mode with -t and -d did not help either
What else could I do?
Note: a docker exec on an exited (stopped) container should not be possible (see moby issue 30361)
docker logs and docker inspect on a stopped container should still be possible, but docker exec indeed not.
You should see
Error response from daemon: Container a21... is not running
So a docker inspect of the image you are running should reveal the entrypoint and cmd, as in this answer.
The normal behavior is the one described in this answer.
I had this exact same issue... and it drove me nuts. I am using Docker Toolbox as I am running Windows 7. I ran docker events & prior to my docker run -p 4000:80 friendlyhello. It showed nothing more than that the container starts and exits pretty much straight away. docker logs <container id> showed nothing.
I was just about to give up when I came across a troubleshooting page suggesting removing the docker machine and re-creating it. I know that might sound like a sledgehammer type of solution, but the examples seemed to show that re-creating it downloads the latest release. I followed the steps shown and it worked! If it helps anyone, the steps I ran were:
docker-machine stop default
docker-machine rm default
docker-machine create --driver virtualbox default
Re-creating the example files, building the image, and then running it now gives me:
$ docker run -p 4000:80 friendlyhello
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
And with Docker Toolbox running, I can access this at http://192.168.99.100:4000/, and now I get:
Hello World!
Hostname: ca4507de3f48
Visits: cannot connect to Redis, counter disabled