Dockerfile - Can't use a name other than 'app.py' - python

I've been searching a lot for the past few days regarding Dockerfiles. I'm using cx_Oracle in Python 2.7. Here's what my Dockerfile looks like:
FROM sbanal/python-oracle-xe12.1-latest
WORKDIR /code/app
COPY generate_distance.py /code/app/app.py
COPY generate_values.py /code/app/app2.py
To make it easier to explain, I've made a method to print out the name of the file. In generate_distance.py:
def test():
    print "Generate distance"

test()
In generate_values.py:
def test():
    print "Generate values"

test()
Then I'm running docker build with a tag:
docker build -t gen .
Sending build context to Docker daemon 13.82kB
Step 1/4 : FROM sbanal/python-oracle-xe12.1-latest
---> 723335924016
Step 2/4 : WORKDIR /code/app
---> Using cache
---> 9fde6fb3ac02
Step 3/4 : COPY generate_distance.py /code/app/app.py
---> 1dbf7ef85ee3
Removing intermediate container ae626dcef48c
Step 4/4 : COPY generate_values.py /code/app/app2.py
---> 7a54500b88a3
Removing intermediate container f496edfc237d
Successfully built 7a54500b88a3
Successfully tagged gen:latest
When running 'docker images', I can see the 'gen' image. But when I run the 'gen' image, only app.py is working:
>docker run -p 5500:5000 gen
>Generate distance
I can't see what mistake I've made. I also don't know why it has to be called app.py. If I use a different file name in the COPY in the Dockerfile, I get a 'No such file or directory' error. That is:
FROM sbanal/python-oracle-xe12.1-latest
WORKDIR /code/app
COPY generate_relation_distance.py /code/app/generate_relation_distance.py
COPY generate_ten_values.py /code/app/generate_ten_values.py
Build and run as in the part above:
docker run -p 5500:5000 gen
python: can't open file 'app.py': [Errno 2] No such file or directory
Hope someone can help me :)

Your image is based on sbanal/python-oracle-xe12.1-latest (the first line of your Dockerfile).
That base image's Dockerfile defines a CMD, which specifies the first command run in your container. Here, that is
CMD python app.py
(see the last line of your base image's Dockerfile).
The command is executed as sh -c "python app.py".
This is why your container runs python app.py when it is created.
You need to override the "CMD" part in your Dockerfile, e.g.
CMD ["python", "app2.py"]
See the official Docker docs to understand CMD.
You should have only one CMD in your Dockerfile, containing the first command, which is executed automatically when the container starts.
If you want to start multiple services, you should consider whether they really belong in one image. Alternatively, follow the official docs and use a supervisor or a script that starts your desired services.
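Putting it together, a minimal sketch of a corrected Dockerfile that keeps your original file names and overrides the inherited CMD (which of the two scripts to run is your choice):
FROM sbanal/python-oracle-xe12.1-latest
WORKDIR /code/app
# keep the scripts under their own names instead of renaming to app.py
COPY generate_distance.py /code/app/generate_distance.py
COPY generate_values.py /code/app/generate_values.py
# override the base image's CMD so the container runs your script by name
CMD ["python", "generate_distance.py"]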

If anyone is wondering how I solved the problem: I used a bash script instead.
script.sh:
#!/bin/bash
IMAGE_NAME=image_name
# Build the image
docker build -t "$IMAGE_NAME" .
# Run the image in a container, kept alive by tail
docker run -d -p 550:500 "$IMAGE_NAME" tail -f /dev/null
# Get the container id
container_id="$(docker ps | grep "$IMAGE_NAME" | grep -Eo '^[^ ]+')"
# Run the programs in the container
docker exec -it "$container_id" /bin/sh -c "python generate_distance.py"
docker exec -it "$container_id" /bin/sh -c "python generate_values.py"
My goal was to store the output in text files and copy those from the docker container to the host.
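For that copy step, docker cp works from the host; a sketch with a placeholder path (output.txt is just an example file name, not from my actual setup):
docker cp "$container_id":/code/app/output.txt ./output.txt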

Related

Is it possible to share a volume with 2 docker containers?

I can't run 2 containers together, whereas I can run each one of them separately.
I have this 1st container/image built from this Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test1.py /app/container1/test1.py
WORKDIR /app/
CMD python3 container1/test1.py
I have this 2nd container/image built from this Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test2.py /app/container2/test2.py
WORKDIR /app/
CMD python3 container2/test2.py
Both images build without issues:
docker image build ./authentif -t test1:latest
docker image build ./authoriz -t test2:latest
When I run the 1st container with this command:
docker container run -it --network my_network --name test1_container \
  --mount type=volume,src=my_volume,dst=/app -e LOG=1 \
  --rm test1:latest
it works.
And if I want to check my volume:
sudo ls /var/lib/docker/volumes/my_volume/_data
I can see data in my volume
However, when I want to run the 2nd container:
docker container run -it --network my_network --name test2_container \
  --mount type=volume,src=my_volume,dst=/app -e LOG=1 \
  --rm test2:latest
I have this error:
python3: can't open file '/app/container2/test2.py': [Errno 2] No such file or directory
If I delete everything and start over: if I run the 2nd container first, it works, but then when I want to run the 1st container, I get the error again.
Why is that?
In container1, let's assume my Python script writes data to a file, for example:
import os

print("test111111111")
if os.environ.get('LOG') == "1":
    print("1111111")
    with open('record.log', 'a') as file:
        file.write("file11111")
I can't reproduce your issue. When I start 2 containers using
docker run -d --rm -v myvolume:/app --name container1 debian tail -f /dev/null
docker run -d --rm -v myvolume:/app --name container2 debian tail -f /dev/null
and then do
docker exec container1 /bin/sh -c 'echo hello > /app/hello.txt'
docker exec container2 cat /app/hello.txt
it prints out 'hello' as expected.
You are mounting the volume over /app, the directory that contains your application code. That hides the code and replaces it with something else.
The absolute best approach here, if you can handle it, is to avoid sharing files at all. Keep the data somewhere like a relational database (which may be stateful). Don't mount anything onto your containers. Especially if you're looking ahead to a clustered environment like Kubernetes, sharing files can be surprisingly tricky.
If you can't get rid of the shared directory, then put it somewhere other than /app. You might need to configure the alternate directory using an environment variable.
docker container run ... \
  --mount type=volume,src=my_volume,dst=/data \
  ...
(note dst=/data here, not /app)
What's actually happening in your setup is that Docker has a feature that copies the contents of the image into an empty named volume on first use. This only happens if the volume is completely empty, it only happens with a named Docker volume and not bind mounts, and it doesn't happen on other container systems like Kubernetes. (I'd discourage relying on this behavior.)
So when you run the first container, it sees that my_volume is empty and copies the test1 image's files into it; the container then sees the code it expects in /app and apparently works fine. The second container sees that my_volume is non-empty, so the volume contents (with the first image's code) hide what was in the second image (its own code). I'd expect that, starting from scratch, whichever of the two containers you started first would work but not the other, and if you change the code in the working image, a new container won't see that change (it will use the code from the volume).
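To go with the relocated mount, a minimal Python sketch of reading the data directory from an environment variable (DATA_DIR and its /data default are names I'm assuming, not from the question):
import os

# where to write shared data; defaults to the volume mount point /data
DATA_DIR = os.environ.get('DATA_DIR', '/data')

with open(os.path.join(DATA_DIR, 'record.log'), 'a') as f:
    f.write('file11111\n')
You would then pass -e DATA_DIR=/data alongside the --mount option shown above.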

Python docker container shuts down immediately after finishing running the app, even when run with -d -t

I have a Dockerfile:
FROM python:3
WORKDIR /app
ADD ./venv ./venv
ADD ./data/file1.csv.gz ./data/file1.csv.gz
ADD ./data/file2.csv.gz ./data/file2.csv.gz
ADD ./requirements.txt ./venv/requirements.txt
WORKDIR /app/venv
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "./src/script.py", "/app/data/file1.csv.gz", "/app/data/file2.csv.gz"]
After building an image from it and running it, the container runs the app as it should, but then it shuts down immediately after finishing. This is definitely problematic, since I can't inspect the output file.
I have tried using docker run -d -t <imgname> and docker ps shows the app for a few seconds, but once again, as soon as it finishes the process, the container shuts itself down.
So it's impossible to access; even with docker exec <imgid> -it --entrypoint /bin/bash, it just immediately exits.
I've also tried adding a last RUN /bin/bash after the last CMD, but it doesn't help either.
What can I do to actually be able to log into the container and inspect the file?
As long as the container hasn't been removed, you will be able to get at the data. You can find the name of the container using docker ps -a.
Then, if you know the location of the file, you can copy it to your host using
docker cp <container name>:<file> .
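For example, assuming the container is named mycontainer and the script wrote /app/data/output.csv (both names are hypothetical):
docker cp mycontainer:/app/data/output.csv .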
Alternatively, you can commit the contents of the container to a new image and run a shell in that using
docker commit <container name> newimagename
docker run --rm -it newimagename /bin/bash
Then you can look around in the container and find your files.
Unfortunately there's no way to start the container up again and look around in it. docker start will start the container, but will run the same command again as was run when you did docker run.
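For a future run, though, you can override the image's CMD when creating a new container and get a shell instead of the script (a sketch, with <imgname> the same placeholder as above):
docker run --rm -it <imgname> /bin/bash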

How to run a simple Python script without writing a complete Dockerfile?

I have set up Docker Toolbox on a Win 10 machine. I have some simple single-file Python scripts that I want to run in Docker, just for learning purposes.
Started learning Docker today, and Python 3 days ago.
I assume I have set up Docker correctly, I can run the example hello-world image. No error messages during setup.
I am following the instructions from https://runnable.com/docker/python/dockerize-your-python-application,
which say:
If you only need to run a simple script (with a single file), you can avoid writing a complete Dockerfile. In the examples below, assume you store my_script.py in /usr/src/widget_app/, and you want to name the container my-first-python-script:
docker run -it --rm --name my-first-python-script -v "$PWD":/usr/src/widget_app python:3 python my_script.py
If I type pwd, it shows:
/c/Program Files/Docker Toolbox
And the script I want to run is located here:
C:\Docker\Python\my_script.py
This is what I think should work:
docker run -it --rm --name my-first-python-script -v "$PWD":/c/Docker/Python python:3 python my_script.py
No matter how I try to specify the file directory, I get an error:
python: can't open file 'my_script.py': [Errno 2] No such file or directory
When you run -v "$PWD":/c/Docker/Python, you are saying you want to link your current working directory to the path /c/Docker/Python in the container, which isn't what you want to do. What you are trying to do is link C:\Docker\Python\ on your host to the container folder /usr/src/widget_app.
This command will put your script inside the container path /usr/src/widget_app, then run it:
docker run -it --rm --name my-first-python-script -v /c/Docker/Python:/usr/src/widget_app python:3 python /usr/src/widget_app/my_script.py
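Equivalently, you could set the container's working directory with -w instead of spelling out the script's full path (a sketch, using the same assumed paths as above):
docker run -it --rm --name my-first-python-script -v /c/Docker/Python:/usr/src/widget_app -w /usr/src/widget_app python:3 python my_script.py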

How to run my python script on docker?

I am trying to run my Python script on Docker. I have tried different ways but haven't been able to run it on Docker. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac. But I want to know how I can build an image and push it to Docker, and after that pull it and run my script on Docker itself.
Going by the question title: if one doesn't want to create a Docker image but just wants to run a script using a standard Python Docker image, it can be run with the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your dockerfile and script in there and change the current context to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the Docker engine does on a build is read the whole current context.
Next we'll take a look at your dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
There's a different architecture beneath Raspberry Pis (ARM instead of x86_64), which COULD'VE BEEN the problem, but wasn't. If that had been the problem, a switch of the parent image to FROM armhf/python would've been enough.
Source
BUT! The error kept occurring.
The solution to this problem was a simple missing shebang at the top of the Python script. The first line of the script needs to be #!/usr/bin/env python, and that should solve the problem.
Source
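For completeness, a sketch of what the top of the script would look like (the script body here is hypothetical); note that the file also needs its executable bit set inside the image (e.g. RUN chmod +x capturing.py) for this interpreter-less CMD form to work:
#!/usr/bin/env python
# capturing.py -- the shebang lets Docker exec the file directly
# when CMD names the script without an explicit interpreter
print("capturing...")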
You need to create a dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want to upload it to Docker Hub, you need to log in to Docker Hub with docker login, then upload the image with docker push.
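A minimal sketch of that upload flow (yourhubuser stands in for your Docker Hub account name):
docker login
docker tag pulkit/scriptname:1.0 yourhubuser/scriptname:1.0
docker push yourhubuser/scriptname:1.0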
I followed #samprog's (most accepted) answer on my machine running Ubuntu 14.04.6
and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
I fixed the error by changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports other modules, then you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
Hope this will be useful for others.
Another way to run a Python script in Docker:
Copy the local Python script into the container:
docker cp yourlocalscript.path container_id:/dst_path/
The container id can be found using:
docker ps
Run the Python script in the container:
docker exec -it container_id python /dst_path/yourlocalscript.py
It's very simple:
1- Go to your Python script's directory and create a file with this name, without any extension:
Dockerfile
2- Now open the Dockerfile and write your script's name in place of sci.py
(contents of Dockerfile)
# I chose the slim version; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script's name, here and in CMD below
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it. Now build an image from this Dockerfile and your script, and then run it.
and next run it
3- In the folder's address bar, type cmd and press Enter:
4- When the cmd window opens, type the following to create an image in Docker named my-python-app:
docker build -t my-python-app .
5- And finally, run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; this dependency HELL between python2 and python3 got me. Here is the solution:
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker container.
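After reloading your shell configuration (e.g. source ~/.bashrc), usage looks like this, assuming a my_script.py in your current directory:
python2 my_script.py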

Using iPython with remote docker container in a local project directory

I am completely new to Docker and I followed the instructions found on this page https://cmusatyalab.github.io/openface/setup/:
docker pull bamos/openface
docker run -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface
./demos/compare.py images/examples/{lennon*,clapton*}
And I was able to run an openface example by following those steps. However, I normally develop in iPython and would like to do so here, but I cannot import openface from iPython, since presumably it is not installed locally. Similarly, I do not know how to cd into my project directory, which is at /Users/name/documents/my-project.
What is the idiomatic way to proceed?
My recommended approach is using docker volumes. As you already have the project outside the container, you can start the container with a volume to map your project directory to a directory inside the container. That's idiomatic.
For instance:
docker run -v /Users/name/Documents/my-project:/root/my-project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
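If you then want iPython inside the container, one approach (assuming pip is available in the bamos/openface image, which I haven't verified) is to install and launch it from the container's shell:
cd /root/my-project
pip install ipython   # only needed if the image doesn't already ship it
ipython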
