I have some Python code, and to build it into a Docker image I can use the command below:
sudo docker build -t customdocker .
This builds the Python code into a Docker image. To do so I use a Dockerfile with the following commands:
FROM python:3
ADD my_script.py /
ADD user.conf /srv/config/conf.d/
RUN pip3 install <some-package>
CMD [ "python3", "./my_script.py" ]
In this we have the RUN command, which installs the required packages. Let's say we have deleted the image for some reason and want to build it again: is there any way we can skip this RUN step to save some time, since I think these packages are already installed?
Also, my code uses a file user.conf which is in another directory. To handle that I am including it in the Dockerfile and also saving a copy of it in the current directory. Is there a way in Docker to define my working directory so that the Docker image searches for the file inside those directories?
Thanks
No, you cannot remove the RUN or other statements from the Dockerfile if you want to build the Docker image again after deleting it.
You can use the WORKDIR command in your Dockerfile, but its scope is within the Docker image, i.e. when you create a container from the image, the working directory will be set to the one mentioned in WORKDIR.
For example:
WORKDIR /srv/config/conf.d/
This sets /srv/config/conf.d/ as the working directory, but you still have to use the line below in the Dockerfile while building, in order to copy that file to the specified location:
ADD user.conf /srv/config/conf.d/
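For example, a minimal sketch combining the two; once WORKDIR is set, a relative destination in ADD resolves under it:
FROM python:3
WORKDIR /srv/config/conf.d/
# "." resolves to /srv/config/conf.d/ because of the WORKDIR above
ADD user.conf .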
Answering your first question: a Docker image holds everything related to your Python environment, including the packages you install. When you delete the image, the packages are deleted along with it. Therefore, no, you cannot skip that step.
Now on to your second question: you can bind-mount a directory when starting the container with:
docker run -v /directory-you-want-to-mount:/srv/config/conf.d/ customdocker
You can also set the working directory with the -w flag:
docker run -w /path/to/dir/ -i -t customdocker
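Combining the two, a sketch (the host path /path/on/host is a placeholder for your own config directory):
docker run -w /srv/config/conf.d/ -v /path/on/host:/srv/config/conf.d/ -i -t customdocker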
https://docs.docker.com/v1.10/engine/reference/commandline/run/
I'm trying to build a docker image from inside a container using the Python docker SDK. The build command
client.build(dockerfile="my.Dockerfile", path=".", tag="my-tag")
fails with
OSError: Can not read file in context: /proc/1/mem
The issue was that Docker cannot build from the container's root directory, which was implicit due to the build context path=".". This can easily be fixed by setting a working directory in the Dockerfile of the container performing the build operation, e.g.:
FROM python:3.9-slim
RUN apt-get update -y
WORKDIR my-workdir <-- ADD TO FIX
COPY . .
CMD python -m my-script
I'm trying to create a Docker image from a Dockerfile, and while doing so I encounter the following errors with the COPY steps:
failed to compute cache key: not found: not found when using relative paths, and
error from sender: Create file .......\Temp\empty-dir347165903\C:: The filename, directory name, or volume label syntax is incorrect when using absolute ones
The exact command I'm trying is COPY main.py ./
Some important notes: there is no .dockerignore file whatsoever, no container is set up, and both main.py and the Dockerfile are located in the same directory.
Here's what the Dockerfile itself looks like:
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY main.py ./
RUN mkdir chrome
RUN curl -SL (chromedriver link here) > chromedriver.zip
RUN unzip chromedriver.zip -d chrome/
RUN rm chromedriver.zip
The command I'm running is docker build - < Dockerfile
This syntax is only valid if your build doesn't use the context. The docker build command expects one argument, and that's not the Dockerfile, rather it's the build context. Typically it's a directory, could be a remote git repo, or you can pass a tar file of the directory on stdin with the - syntax. There is an exception for passing a Dockerfile instead of the build context, but when this is done, you can't have any COPY or ADD steps that pull files from the build context. Instead, you almost certainly want:
docker build .
To perform the build using the current directory as your build context, which also contains the Dockerfile. And after that, you'll likely want to add a tag to your resulting image:
docker build -t your-image:latest .
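If the Dockerfile has a non-default name or location, you can still pass the directory as the build context and select the file with the -f flag (my.Dockerfile is a placeholder name):
docker build -f my.Dockerfile -t your-image:latest .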
(Thanks to David for the pointer to the Dockerfile-as-input syntax.)
I am running a Python application that reads two paths from Windows environment variables and uses the executables in those paths to do OCR on some documents. Since the POPPLER and TESSERACT environment variables are already set in Windows, this Python snippet works for me:
popplerPath = os.environ.get('POPPLER')
tesseractPath = os.environ.get('TESSERACT')
Now I am trying to dockerize the app, and to my understanding, since my container will need access to those paths, I need to mount them using a volume during run. My Dockerfile looks like this:
FROM python:3.7.7-slim
WORKDIR ./
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY documents/ .
COPY src/ ./src
CMD [ "python", "./src/run.py" ]
I build the image using:
docker build -t ocr .
And I try to run my container using:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% ocr
... but my app still gets a None value for these paths and can't use the executable files. Is my approach correct, and beyond that, is it good dev practice?
See the docs; the switch for environment variables is -e:
$ docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash
and in a Dockerfile you can use
ENV FOO=/bar
If I understand your statement correctly, your paths are mounted in the container at the same path as on the host. The only problem is your Python script, which expects the paths to be provided by environment variables. These will not exist unless you pass them on from your host system to your container.
Once you have verified that your volume mounted with -v is there correctly, you can try:
docker run -v %POPPLER%:%POPPLER% -v %TESSERACT%:%TESSERACT% --env POPPLER=%POPPLER% --env TESSERACT=%TESSERACT% ocr
or, if you always run it this way, you can consider putting them in your Dockerfile to save some keystrokes.
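For example, something like this in the Dockerfile (the values are assumptions and must match the container-side paths you actually mount or install to):
ENV POPPLER=/srv/poppler
ENV TESSERACT=/srv/tesseract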
Any executable you call must be built into the image. Containers can't usually call executables on the host or in other containers. In the specific example you show, a Linux container can't run a Windows executable, even if you do use a bind mount to inject it into the container.
The "slim" python images are built on Debian GNU/Linux, and you need to use its APT tool to install these executable dependencies in your Dockerfile. (https://www.debian.org/distrib/packages has a search box to help you find the right package name; Ubuntu Linux also uses Debian packages.)
FROM python:3.7-slim
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install -y \
poppler-utils \
tesseract-ocr-all
COPY requirements.txt .
...
I'd suggest putting reasonable defaults in your code if these environment variables aren't set. The apt-get install command will put them in the system path inside the image.
popplerPath = os.environ.get('POPPLER', 'poppler')
tesseractPath = os.environ.get('TESSERACT', 'tesseract')
If you really need them as environment variables you could use the Dockerfile ENV directive
ENV POPPLER=poppler TESSERACT=tesseract
Environment variables from the host don't automatically get passed through to the container; you need a Dockerfile ENV or docker run -e option. Also remember that the container has an isolated filesystem (and Windows-syntax paths don't make sense in Linux containers) so these environment variables would need to be container paths, the second half of your proposed docker run -v option.
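Concretely, a sketch of the run command with container-side values (these mirror the ENV suggestion above and are assumptions; they must match what's installed in the image):
docker run -e POPPLER=poppler -e TESSERACT=tesseract ocr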
FROM python:3
WORKDIR /Users/vaibmish/Documents/new/graph-report
RUN pip install graphreport==1.2.1
CMD [ cd /Users/vaibmish/Documents/new/graph-report/graphreport_metrics ]
CMD [ graphreport ]
This is part of the Dockerfile.
I wish to remove the hard-coded paths from the file and have something like -v there, so that whoever runs it can give his or her own volume path.
The line
CMD [ cd /Users/vaibmish/Documents/new/graph-report/graphreport_metrics ]
is wrong. You achieve the same with WORKDIR:
WORKDIR /Users/vaibmish/Documents/new/graph-report/graphreport_metrics
WORKDIR creates the path if it doesn't exist and then changes the current directory to that path (same as mkdir -p /path/new && cd /path/new)
You can also declare the path as a volume and instruct who runs the container to provide their own path (docker run -v host_path:container_path ...)
VOLUME /Users/vaibmish/Documents/new/graph-report
A final note: It looks like these paths are from the host. Remember that the paths in the Dockerfile are not host paths. They are paths inside the container.
Typical practice here is to pick some fixed path inside the Docker container. It should be a different path from where your application is installed; it does not need to match any particular host path at all.
FROM python:3
RUN pip3 install graphreport==1.2.1
WORKDIR /data
CMD ["graphreport"]
docker build -t me/graphreport:1.2.1 .
docker run --rm \
-v /Users/vaibmish/Documents/new/graph-report:/data \
me/graphreport:1.2.1
(Remember that only the last CMD has an effect, and if it's not a well-formed JSON array, Docker will interpret it as a shell command. What you show in the question would run the test(1) command and not the program you're installing.)
If you're trying to install a single package from PyPI and just run it on local files, a Python virtual environment will be much easier to set up than anything based on Docker, and will essentially work as you expect:
python3 -m venv graphreport
. graphreport/bin/activate
pip3 install graphreport==1.2.1
cd /Users/vaibmish/Documents/new/graph-report
graphreport
deactivate # switch back to system Python/pip
All of the installed Python code is inside the graphreport virtual environment directory, and if you don't need this application again, you can just delete the directory tree.
I am trying to run my Python script on Docker. I tried different ways to do it but was not able to run it on Docker. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac. But I want to know how I can make an image and push it to Docker; after that I want to pull and run my script on Docker itself.
Going by the question title: if one doesn't want to create a Docker image but just wants to run a script using a standard Python Docker image, it can be run with the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your Dockerfile and script into it and change the current context to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the Docker engine does on a build is read the whole current context.
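As a side note, a .dockerignore file in the same directory keeps that context small. A sketch with example entries (adjust to your project):
.git
__pycache__/
*.log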
Next we'll take a look at your Dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
There's a different architecture beneath Raspberry Pis (ARM instead of x86_64), which COULD HAVE BEEN the problem, but wasn't. If that had been the problem, switching the parent image to FROM armhf/python would have been enough.
Source
BUT! The error kept occurring.
So the solution to this problem is simply a missing shebang at the top of the Python script. The first line of the script needs to be #!/usr/bin/env python, and that should solve the problem.
Source
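For example, the top of capturing.py would then start like this (the rest of the script is unchanged; note the file also needs execute permission inside the image):
#!/usr/bin/env python
# ...rest of capturing.py unchanged...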
You need to create a Dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want it to upload to the DockerHub, you need to log into the DockerHub with docker login, then upload the image with docker push.
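For example (yourhubuser is a placeholder for your Docker Hub username):
docker login
docker tag pulkit/scriptname:1.0 yourhubuser/scriptname:1.0
docker push yourhubuser/scriptname:1.0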
I followed @samprog's (most accepted) answer on my machine running Ubuntu 14.04.6
and was getting "standard_init_linux.go:195: exec user process caused "exec format error"
None of the solutions mentioned above worked for me.
I fixed the error after changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports other modules, then you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
Hope this will be useful for others.
Another way to run a Python script on Docker:
Copy the local Python script into the container:
docker cp yourlocalscript.path container_id:/dst_path/
The container ID can be found using:
docker ps
Run the Python script inside the container:
docker exec -it container_id python /container_script_path.py
It's very simple.
1- Go to your Python script's directory and create a file with this title, without any extension:
Dockerfile
2- Now open the Dockerfile and write your script name instead of sci.py.
(content of the Dockerfile)
# I chose the slim version; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script's name here and in the CMD below
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it; now you can create an image from this Dockerfile and the script, and then run it.
3- In the folder's address bar, type cmd and press the Enter key.
4- When the cmd window opens, type into it:
docker build -t my-python-app . # this creates an image in Docker with the title my-python-app
5- And finally, run the image:
docker run -it --rm --name my-running-app my-python-app
I encountered this problem recently; the dependency HELL between python2 and python3 got me. Here is the solution.
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the Docker image:
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker container.
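For example, to run a script from the current directory through the containerized interpreter (my_script.py is a placeholder name):
python2 my_script.py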