docker tensorflow linking with folder - python

I am trying Docker for TensorFlow on Windows 10 Education. I have installed Docker successfully and can run/pull/import images. I linked my Docker container using:
C:\User\xyz_folder> docker run -it tensorflow/tensorflow:latest-devel
root@23433215319e:~# cd /tensorflow
root@23433215319e:/tensorflow# git pull
From https://github.com/tensorflow/tensorflow
* [new tag] v1.11.0 -> v1.11.0
Already up-to-date.
Up to here it ran fine without errors. The following is the problem:
root@23433215319e:/tensorflow# cd abc_folder
bash: cd: abc_folder: No such file or directory
abc_folder exists in the linked folder on the host, but it cannot be seen when I list the contents using 'ls':
root@23433215319e:/tensorflow# ls
ACKNOWLEDGMENTS CODEOWNERS LICENSE WORKSPACE bazel-out configure.py tools ADOPTERS.md CODE_OF_CONDUCT.md README.md arm_compiler.BUILD bazel-tensorflow models.BUILD AUTHORS CONTRIBUTING.md RELEASE.md bazel-bin bazel-testlogs tensorflow BUILD ISSUE_TEMPLATE.md SECURITY.md bazel-genfiles configure third_party
Please suggest how to link this properly so that I can see the shared folder's contents.
Thanks in advance.

To make a directory outside the container visible inside the container, you have to use the option -v or --volume, as stated here.
So, your command would have to be:
docker run -v c:\local\directory:/container/directory -it tensorflow/tensorflow:latest-devel
With that, you should be able to see the directory inside the container.
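For the original question, assuming the folder containing abc_folder is C:\User\xyz_folder (the path from the prompt above), the command might look like:
docker run -v C:\User\xyz_folder:/shared -it tensorflow/tensorflow:latest-devel
Inside the container, abc_folder would then show up under /shared (a mount point name chosen here for illustration) rather than under /tensorflow.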

Related

docker build from inside container

I'm trying to build a docker image from inside a container using the Python docker SDK. The build command
client.build(dockerfile="my.Dockerfile", path=".", tag="my-tag")
fails with
OSError: Can not read file in context: /proc/1/mem
The issue was that docker cannot build from the container's root directory, which was implicit due to the build context path='.'. This can easily be fixed by using a working directory in the Dockerfile of the container performing the build operation, e.g.
FROM python:3.9-slim
RUN apt-get update -y
# the WORKDIR line below is the one to add to fix the error
WORKDIR my-workdir
COPY . .
CMD python -m my_script
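For reference, a sketch of the same build using the SDK's high-level API with that working directory in place (the context path /my-workdir mirrors the WORKDIR above and is an assumption):
import docker

client = docker.from_env()
# build with an explicit context directory that is not the container's root
image, logs = client.images.build(
    path="/my-workdir",          # assumed context directory, not "/"
    dockerfile="my.Dockerfile",
    tag="my-tag",
)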

Best way to handle multiple python projects in one directory?

Suppose I have a main directory:
main/
|
|--- requirements.txt # requirements for ALL python subprojects
|--- PythonProject1/
|--- PythonProject2/
|--- PythonProject3/
So there are 3 python projects under main/.
What I want is to have only one requirements.txt file, so that the sub-projects don't have to do dependency management on their own.
I am wondering what the best way to accomplish this is. Is having main/requirements.txt good enough, or is this not ideal?
I would recommend creating Docker images for your projects if you have the relevant experience.
This way you can easily share your image with your co-workers without having to maintain a requirements.txt file.
It is quite easy to install Docker.
After installation, you can either (1) build via a Dockerfile or (2) pull down official images from Docker Hub:
(1) docker build -t project1 .
(2) docker pull [Official_Image_name_from_Docker_Hub]
If you are new to Docker, run these commands from the CLI after starting Docker:
docker container ls -a     # lists all your recent containers
docker image ls            # lists all your images
docker run hello-world     # pulls and runs a test image
Once you create your base image, you can run it as a container after mounting a volume, as follows:
docker run -it -v "%cd%":/data [yourImageName] bash
After completing your projects you can commit these containers as new images and push them to Docker Hub, from where they can be pulled down by your team members.
docker commit [containerID/Name] [newImageName]:[TAG]
docker push [newImageName]:[TAG]
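For the layout above, a minimal sketch of such a Dockerfile for one subproject (the entry point PythonProject1/main.py is an assumption) could be:
FROM python:3.9-slim
WORKDIR /app
# install the shared dependencies once for all subprojects
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy in only the subproject this image is for
COPY PythonProject1/ ./PythonProject1/
CMD [ "python", "PythonProject1/main.py" ]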

Docker Run: Mounted Volume not showing change in files

I am struggling with running the latest changes. Below are the details.
Dockerfile
FROM python:3.7.3
RUN mkdir -p /usr/apps
COPY test.py /usr/apps
RUN pip install mindsdb
CMD [ "python","test.py" ]
Build
docker build -t py37:custom .
Run
docker run -it -v /Development/PetProjects/mindsdb:/usr/apps/ py37:custom
But it only picks up the changes that were present at build time.
First of all, while starting your container you are not using volumes but bind mounts: you mount the directory /Development/PetProjects/mindsdb on your host machine to the /usr/apps/ directory in the container. Every change made to files in this directory on your host machine will be visible in the container, and vice versa.
If you wanted to use volumes, you could create one using the docker volume create command and then run the container with this volume: docker container run -v volume_name:path_in_container image_name. You would then be able to stop the container and run it again, passing the same volume to the run command, and changes to the path_in_container directory would be preserved across container creations.
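A sketch of that flow with the image from the question (the volume name mindsdb-data is arbitrary):
docker volume create mindsdb-data
docker run -it -v mindsdb-data:/usr/apps py37:custom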
Another thing is that you are trying to mount /usr/apps/ in your container, and you copied a Python script there using the Dockerfile. Note that in your current docker run command, the contents of /Development/PetProjects/mindsdb will replace the contents of /usr/apps/ in your container, and if your script is not in /Development/PetProjects/mindsdb, it will not be visible in the container.
Moreover, your CMD does not work because of the relative path. You should change your CMD to CMD [ "python", "/usr/apps/test.py" ] or use the WORKDIR instruction (WORKDIR /usr/apps/) so that your python command is executed from that directory and the script is visible there.
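Putting both fixes together, a corrected version of your Dockerfile might look like:
FROM python:3.7.3
WORKDIR /usr/apps
COPY test.py .
RUN pip install mindsdb
CMD [ "python", "test.py" ]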
More information about the differences between volumes and bind mounts can be found in the Docker documentation.

Docker- Do we need to include RUN command in Dockerfile

I have Python code, and to convert it to a Docker image I can use the command below:
sudo docker build -t customdocker .
This converts the Python code to a Docker image. For the conversion I use a Dockerfile with the commands below:
FROM python:3
ADD my_script.py /
ADD user.conf /srv/config/conf.d/
RUN pip3 install <some-package>
CMD [ "python3", "./my_script.py" ]
In this, we have the RUN command, which installs the required packages. Let's say we have deleted the image for some reason and want to build it again: is there any way we can skip this RUN step to save some time, since these packages were already installed?
Also, in my code I am using a file user.conf which is in another directory. For that I am including it in the Dockerfile and also saving a copy of it in the current directory. Is there a way in Docker to define my working directory so that the Docker image searches for the file inside those directories?
Thanks
No, you cannot skip the RUN or other statements in the Dockerfile if you want to build the Docker image again after deleting it.
You can use the WORKDIR instruction in your Dockerfile, but its scope is within the Docker image, i.e., when you create a container from the image, the working directory will be set to the one mentioned in WORKDIR.
For example:
WORKDIR /srv/config/conf.d/
This sets /srv/config/conf.d/ as the working directory, but you have to use the line below in the Dockerfile while building, in order to copy that file to the specified location:
ADD user.conf /srv/config/conf.d/
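Combined with the question's Dockerfile, a sketch might look like this (the package name stays a placeholder from the question, and CMD now uses an absolute path since the working directory changed):
FROM python:3
ADD my_script.py /
WORKDIR /srv/config/conf.d/
ADD user.conf .
RUN pip3 install <some-package>
CMD [ "python3", "/my_script.py" ]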
Answering your first question: a Docker image holds everything related to your Python environment, including the packages you install. When you delete the image, the packages are deleted with it. Therefore, no, you cannot skip that step.
Now on to your second question: you can bind a directory while starting the container with:
docker run -v /directory-you-want-to-mount:/src/config/ customdocker
You can also set the working directory with the -w flag:
docker run -w /path/to/dir/ -i -t customdocker
https://docs.docker.com/v1.10/engine/reference/commandline/run/
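Both flags can be combined; reusing the paths from the examples above:
docker run -w /src/config/ -v /directory-you-want-to-mount:/src/config/ -it customdocker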

How to run my python script on docker?

I am trying to run my Python script on Docker. I tried different ways to do it but was not able to run it on Docker. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac, but I want to know how I can make images and push them to Docker Hub, and after that pull and run my script on Docker itself.
Going by the question title: if one doesn't want to create a Docker image but just wants to run a script using a standard Python Docker image, it can be run with the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your Dockerfile and script in there and change your current directory to it:
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the Docker engine does on build is read the whole current context.
Next we'll take a look at your Dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
Raspberry Pis have a different architecture underneath (ARM instead of x86_64), which COULD have been the problem, but wasn't. If that had been the problem, switching the parent image to FROM armhf/python would have been enough.
Source
BUT! The error kept occurring.
So the solution to this problem was a simple missing shebang at the top of the Python script. The first line of the script needs to be #!/usr/bin/env python, and that should solve the problem.
Source
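Since the exec-form CMD launches the script file directly, the file also needs the executable bit inside the image. A sketch of the Dockerfile with an explicit chmod as a safeguard (COPY otherwise just preserves the permissions from the build context):
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
# make sure the script is executable so CMD can launch it via its shebang
RUN chmod +x capturing.py
CMD ["capturing.py", "-OPTIONAL_FLAG"]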
You need to create a Dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want it to upload to the DockerHub, you need to log into the DockerHub with docker login, then upload the image with docker push.
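The upload flow then looks roughly like this (replace yourhubuser with your Docker Hub username):
docker login
docker tag pulkit/scriptname:1.0 yourhubuser/scriptname:1.0
docker push yourhubuser/scriptname:1.0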
I followed @samprog's (most accepted) answer on my machine running Ubuntu 14.04.6
and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
I fixed the error after changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: if your script imports other modules, then you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
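To verify the fix, rebuild and run (the image name here is arbitrary):
docker build -t capturing-fixed .
docker run --rm capturing-fixed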
Hope this will be useful for others.
Another way to run a Python script on Docker can be:
Copy the local Python script into the container:
docker cp yourlocalscript.path container_id:/dst_path/
The container ID can be found using:
docker ps
Run the Python script inside the container:
docker exec -it container_id python /dst_path/yourlocalscript.py
It's very simple.
1. Go to your Python script's directory and create a file with this name, without any extension:
Dockerfile
2. Now open the Dockerfile and write your script name instead of sci.py.
(content of the Dockerfile)
# I chose the slim version; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script name in the two lines below
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it. Now you should create an image from this Dockerfile and your Python script, and then run it.
3. In the folder's address bar, type cmd and press Enter.
4. When the cmd window opens, type:
docker build -t my-python-app .    # this creates an image in Docker named my-python-app
5. And finally run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; this dependency HELL between python2 and python3 got me. Here is the solution.
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the Docker image:
docker pull frolvlad/alpine-python2
Add this alias to /home/user/.zshrc or /home/user/.bashrc:
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be thrown into the Docker instance.
