I am on Pop!_OS 20.04 LTS and I want to use Tensorman for TensorFlow/Python. I'm new to Docker and I want to install additional dependencies. For example, using the default image I can run a Jupyter notebook with these commands:
tensorman run -p 8888:8888 --gpu --python3 --jupyter bash
jupyter notebook --ip=0.0.0.0 --no-browser
But now I have to install additional dependencies. For example, if I want to install jupyterthemes, how can I do that? I have tried installing it directly inside the Docker container, but it doesn't work that way.
This issue looks similar to my problem, but there was no explanation of exactly how to make a custom image in Tensorman.
There are two ways to install dependencies.
Create a custom image, install dependencies and save it.
Use the --root flag to gain root access to the container, install dependencies, and use them.
Build your own custom image
If you are working on a project and need some dependencies for it, or just want to save all your favourite dependencies, you can create a custom image for that project, save it, and use it later.
Make a list of all the packages you need. Once you are ready, use this command:
tensorman run -p 8888:8888 --root --python3 --gpu --jupyter --name CONTAINER_NAME bash
Here CONTAINER_NAME is the name of the container (you can give it any name you want), and -p publishes a container port on the host (you can read about port forwarding in Docker).
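For instance, if host port 8888 were already taken, you could publish the container's notebook port on a different host port (the numbers here are just an illustration):
tensorman run -p 8889:8888 --gpu --python3 --jupyter bash
# the notebook started inside the container on port 8888 is then reachable at localhost:8889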
Now you are running the container as root. In the container shell, run:
# its always a good idea to update the apt, you can install packages from apt also
apt update
# install jupyterthemes
pip install jupyterthemes
# check if all your desired packages are installed
pip list
Now it's time to save your image. Open a new terminal and use this command to save it:
tensorman save CONTAINER_NAME IMAGE_NAME
CONTAINER_NAME should be the one used earlier, and you can choose IMAGE_NAME according to your preference.
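For example, if you named the container tf-custom and want to call the image my-tensorflow (both names are just placeholders):
tensorman save tf-custom my-tensorflow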
Now you can close the terminals. Use tensorman list to check whether your custom image is there. To use your custom image, run:
tensorman =IMAGE_NAME run -p 8888:8888 --gpu bash
# to use jupyter
jupyter notebook --ip=0.0.0.0 --no-browser
Use --root and install dependencies
Now you might be wondering why, in a normal Jupyter notebook, you can install dependencies even from inside the notebook, but that's not the case with Tensorman. It's because we're not running it as root; if we ran it as root, files exported to the host machine would also carry root permissions, which is why it's good to avoid the --root flag in general, but we can still use it to install dependencies. After installing, you have to save that image, otherwise the installed dependencies will be lost (it's not strictly necessary, though; you can also reinstall them every time).
In the last step of the custom image building, use these commands instead
# notice --root
tensorman =IMAGE_NAME run -p 8888:8888 --gpu --root bash
# to use jupyter, notice --allow-root
jupyter notebook --allow-root --ip=0.0.0.0 --no-browser
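If you do end up with root-owned files in your project directory after running with --root, you can take ownership back from the host side (a general Linux fix, not specific to Tensorman; the path is just a placeholder):
sudo chown -R $USER:$USER /path/to/your/project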
I am trying to create a standalone Python app using PyInstaller that should run, and be reproducibly recreated, on Windows, Linux, and Mac.
The idea is to use Docker to have a fixed environment that creates the app and exports it again. For Linux and Windows, I could make it work using https://github.com/cdrx/docker-pyinstaller.
The idea is to let Docker create a fully functioning PyInstaller app (with GUI) inside the container and export it. Since PyInstaller depends on the package versions etc., these should be fixed in the Docker image, and only the new source code should be supplied to compile and export a new version of the software.
In the ideal scenario (and this is how it already works for Linux and Windows), the user can build the Docker image and compile the software themselves:
docker build -t docker_pyinstaller_linux https://raw.githubusercontent.com/loipf/calimera_docker_export/main/linux/Dockerfile
docker run --rm -v "/path/to/app_source_code:/code/" -v "${PWD}:/code/dist/" docker_pyinstaller_linux
For Mac, however, there is no simple, straightforward solution. There is one Mac Docker image out there, https://github.com/sickcodes/Docker-OSX, but the image creation code is not that simple.
My idea:
Take https://github.com/sickcodes/Docker-OSX/blob/master/Dockerfile.auto
Add the download of Miniconda to the end:
RUN chmod -R 777 /Users/user
### install miniconda to /miniconda
RUN curl -LO "http://repo.continuum.io/miniconda/Miniconda3-4.4.10-MacOSX-x86_64.sh"
RUN bash Miniconda3-4.4.10-MacOSX-x86_64.sh -p /miniconda -b
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
### install packages from conda
RUN conda install -c anaconda -y python=3.8
RUN apt-get install -y python3-pyqt5
...
But already the first command fails with chmod: cannot access '/Users/user': No such file or directory, since I am in a different "environment" than the interactive session (/home/arch/OSX-KVM). Can somebody help me out?
I know these "asking for code" questions are not the best, and I could show you all the things I tried, but I doubt they would help anybody. I would like to have a minimal Mac example without user login or GUI etc. (which should be possible using https://github.com/sickcodes/osx-optimizer). It should only have Miniconda installed (with PyInstaller and a few packages).
Other info:
I can run the previous commands interactively in the Mac environment inside the Docker container, but I would like to bake these commands permanently into the image:
docker pull sickcodes/docker-osx:auto ### replace with docker with pyinstaller and packages
docker run -it \
--device /dev/kvm \
-p 50922:10022 \
sickcodes/docker-osx:auto
Ideally, in the end, we could run the above command with OSX_commands pyinstaller file.spec.
Related questions, but without a solution:
Using Docker and Pyinstaller to distribute my application
How to create OS X app with Python on Windows
TL;DR: Is there a way to check whether a volume /foo built into a Docker image has been overridden at runtime by a remount using -v host/foo:/foo?
I have designed an image that runs a few scripts on container initialization using s6-overlay to dynamically generate a user, transfer permissions, and launch a few services under that uid:gid.
One of the things I need to do is pip install -e /foo a python module mounted at /foo. This also installs a default version of /foo contained in the docker image if the user doesn't specify a volume. The reason I am doing this install at runtime is because this container is designed to contain the entire environment for development and experimentation of foo, so if a user mounts a system version of foo, e.g. -v /home/user/foo:/foo, the user can develop by updating host:/home/user/foo or in container:/foo and all changes will persist and the image won't need to be rebuilt to get new changes. It needs to be an editable install so that new changes don't require reinstallation.
I have this working now.
I would like to speed up container initialization by moving this pip install into the image build, and then only install the module at runtime if the user has mounted a new /foo using -v /home/user/foo:/foo.
Of course, there are other ways to do this. For example, I could build foo into the image by copying it to /bar at build time and installing it with pip install /bar. Then at runtime I would just check whether /foo exists; if it doesn't, create a symlink /foo -> /bar, and if it does, pip uninstall foo and pip install -e /foo. But this isn't the cleanest solution. I could also just mv /bar /foo at runtime if /foo doesn't exist, but I'm not sure how pip would handle the change in module path.
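A minimal sketch of that entrypoint logic, assuming the baked-in copy was installed from /bar at build time and that the package is named foo (both are assumptions for illustration):
#!/bin/sh
# if no user volume was mounted, /foo does not exist in this variant; point it at the baked-in copy
if [ ! -d /foo ]; then
    ln -s /bar /foo
else
    # a user-supplied /foo was mounted: switch to an editable install of it
    pip uninstall -y foo
    pip install -e /foo
fi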
The only way to do this, that I can think of, is to map the docker socket into the container, so you can do docker inspect from inside the container and see the mounted volumes. Like this
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker
and then inside the container
docker inspect $(hostname)
I've used the 'docker' image, since that has docker installed. You can, of course, use another image. You just have to install docker in it.
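If you go the docker-socket route, you can narrow the inspect output to just the mount destinations with a Go template and grep for /foo (a sketch; as above, this assumes the container's hostname is its id):
# list the destination of every mount in the current container
docker inspect -f '{{ range .Mounts }}{{ .Destination }}{{ "\n" }}{{ end }}' $(hostname)
# or check specifically whether something was mounted at /foo
docker inspect -f '{{ range .Mounts }}{{ .Destination }}{{ "\n" }}{{ end }}' $(hostname) | grep -qx /foo && echo "/foo was mounted"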
I have a RHEL host with Docker installed; it has the default Python 2.7. My Python scripts need a few more modules, which
I can't install due to lack of sudo access. Moreover, I don't want to mess with the default Python, which is needed for the host to function.
Now I am trying to get Python in a Docker container where I can add a few modules and do the needful.
Issue: the RHEL host with Docker installed is not connected to the internet and cannot be connected.
The laptop I have doesn't have Docker either, and I can't install Docker on it (no admin access) to create the Docker image and copy it to the RHEL host.
I was hoping that if a Docker image with Python could be downloaded from the internet, I might be able to use it as is.
Any pointers in the appropriate direction would be appreciated.
What have I done: tried searching for Python images and been through the Docker documentation on creating an image.
Apologies if the above question sounds silly; I am getting better with Docker over time :)
If your environment is restricted enough that you can't use sudo to install packages, you won't be able to use Docker: if you can run any docker run command at all you can trivially get unrestricted root access on the host.
My Python scripts need a few more modules, which I can't install due to lack of sudo access. Moreover, I don't want to mess with the default Python, which is needed for the host to function.
That sounds like a perfect use for a virtual environment: it gives you an isolated local package tree that you can install into as an unprivileged user and doesn't interfere with the system Python. For Python 2 you need a separate tool for it, with a couple of steps to install:
export PYTHONUSERBASE=$HOME    # user-level installs will go under $HOME
pip install --user virtualenv  # puts the virtualenv script in ~/bin
~/bin/virtualenv vpy           # create the virtual environment
. vpy/bin/activate             # activate it in the current shell
pip install ... # installs into vpy/lib/python2.7/site-packages
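Once the environment is active, pip and python use its package tree, with no sudo required; a quick illustration (the package name is just an example):
pip install requests                                       # installs into the venv, no sudo needed
python -c "import requests; print(requests.__version__)"
deactivate                                                 # leave the virtual environment when done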
You can create a Docker image on any standalone machine and push the final image to a Docker registry (Docker Hub). Then on your laptop you can pull that image and start working :)
Below are some key commands that will be required for the same.
To create an image, you will need to write a Dockerfile that installs all the packages.
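A minimal sketch of such a Dockerfile, assuming a Python 2.7 base image and a couple of example modules (adjust both to your needs):
FROM python:2.7
# install the extra modules your scripts need
RUN pip install requests numpy
# start an interactive Python shell by default
CMD ["python"]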
Or you can do sudo docker run -it ubuntu:16.04, then install Python and other packages as required.
then sudo docker commit container_id name
sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
sudo docker push IMAGE_NAME
Then you pull this image on your laptop and start working.
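For example, with placeholder names (replace the container id, image name, and Docker Hub username with your own):
sudo docker commit 3f2a1b9c my-python-env
sudo docker tag my-python-env:latest myuser/my-python-env:latest
sudo docker push myuser/my-python-env:latest
# on the machine that needs the image
docker pull myuser/my-python-env:latest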
You can refer to this link for more docker commands https://github.com/akasranjan005/docker-k8s/blob/master/docker/basic-commands.md
Hope this helps. Thanks
I am a beginner in using Docker.
I'm using Docker Toolbox for Windows 7. I built an image for my Python web app and everything works fine.
However, for this app I use the nltk module, which also needs Java and JAVA_HOME pointing to the Java installation.
When running on my computer, I can manually set JAVA_HOME, but how do I do it in the Dockerfile so that it won't error when running on another machine?
Here is my error:
P.S.: Answer below.
When you are running a container you have the option of passing in environment variables that will be set in your container using the -e flag. This answer explains environment variables nicely: How do I pass environment variables to Docker containers?
docker container run -e JAVA_HOME='/path/to/java' <your image>
Make sure your image actually contains Java as well. You might want to look at something like the openjdk:8 image on docker hub.
It sounds like you need a Dockerfile to build your image. Have a look at the ENV instruction documented here to set the JAVA_HOME variable: https://docs.docker.com/engine/reference/builder/#env and then build your image with docker build from the directory containing the Dockerfile.
I see you've already tried that and didn't have much luck. Run the container and, instead of running your application process, just run something along the lines of echo $JAVA_HOME so you can at least verify that part is working.
Also make sure you copy whatever files/binaries are needed into the appropriate directories within the image in the Dockerfile, as noted below.
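For a quick sanity check, as suggested above, you can override the container's command so it just prints the variable (the image name here is a placeholder):
docker container run -e JAVA_HOME='/path/to/java' your-image /bin/sh -c 'echo $JAVA_HOME'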
I finally found the way to install Java in the Dockerfile: use the Java installation commands for the Ubuntu image.
Below is the Dockerfile. Thanks for reading.
# these lines are appended to the app's existing Ubuntu-based Dockerfile
RUN apt-get update
RUN apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository -y ppa:openjdk-r/ppa
RUN apt-get update
RUN apt-get install -y openjdk-8-jdk
# ENV persists into the image and at runtime, so no separate "RUN export JAVA_HOME" is needed
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
I am using Windows and learning to use tensorflow, so I need to run it under Docker (Toolbox).
Following the usual instruction:
$ docker run -it gcr.io/tensorflow/tensorflow
I can launch a Jupyter notebook on my browser at 192.168.99.100:8888 and run the tutorial notebooks without problems.
Now when I try to import pandas as pd, which is installed on my computer with pip, Jupyter just says ImportError: No module named pandas.
Any idea how I can get this library to work inside the tensorflow images launched from docker?
The Docker image is built on a Linux operating system. You should launch a shell inside the Docker image gcr.io/tensorflow/tensorflow to install the requisite Python dependencies.
See Docker quickstart for using
docker run -it gcr.io/tensorflow/tensorflow /bin/bash
and then
apt-get install python-pandas # no sudo needed, you are root inside the container
according to pandas docs.
To avoid doing this every time you launch the image, you need to commit the change to create a new image.
To commit the change, you need to get the container id (after run and installation steps above):
sudo docker ps -a # Get list of all containers previously started with run command
Then, commit your changes git style using the container_id displayed in the container list you just got and giving it an image_name of your choosing:
sudo docker commit container_id image_name
The new image will now show up in the list displayed by sudo docker images.
If you get a free docker account you can push and pull your updated image to your docker repo, or just keep it locally.
See docs under 'Updating and Committing your image'.
For windows users:
docker run -d -p 8888:8888 -v /c/Users/YOUR_WIN_FOLDER:/home/ds/notebooks gcr.io/tensorflow/tensorflow
Then use the following command to see the name of your container for easy execution commands later on (the last column will be the name):
docker ps
Then run:
docker exec <NAME OF CONTAINER> apt-get update
And finally to install pandas:
docker exec <NAME OF CONTAINER> apt-get install -y python-pandas
(the -y is an automatic 'yes' to stop a prompt from appearing for you to agree to the installation taking up additional disk space)
Here is an image with pandas installed:
https://hub.docker.com/r/zavolokas/tensorflow-udacity/
Or pull it with docker pull zavolokas/tensorflow-udacity:pandas