I pulled a Python image from Docker Hub with docker pull python:3.9-alpine.
Then I tried to launch a container from this image with docker run -p 8888:8888 --name test_container d4d6be1b90ec.
The container never stays up: with docker ps I can't find it.
Do you know why?
Thank you,
Your container exits immediately because there is no long-running server (nginx, Apache, etc.) inside the image to keep it alive; it contains only Python and its dependencies. The image's default command is the interactive Python interpreter, which exits as soon as it has no TTY attached.
To run that image interactively, try the following command:
docker run --name test_python -it [id_image]
If you then open another terminal and run docker ps, you will see that the container is up.
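For example, with the python:3.9-alpine tag from the question, a session might look like this (a sketch; the tag is used in place of the image ID from the question):

```shell
# The default command of python:3.9-alpine is the interactive Python
# interpreter, so the container needs a TTY (-it) to stay alive:
docker run --name test_python -it python:3.9-alpine

# In a second terminal, the container now shows as running:
docker ps --filter "name=test_python"
```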
I am following a Docker-with-Python course on Udemy. I am supposed to create a custom Docker image by pulling down the pre-existing jupyter/tensorflow-notebook image (which does not have PyTorch installed), installing torch and torchvision, and then running it in a Jupyter notebook.
I get a 403 Forbidden error when I run it and I can't figure out why. Here are the steps in order:
docker pull jupyter/tensorflow-notebook
docker run -d --name temp jupyter/tensorflow-notebook
docker exec -it temp pip install torch torchvision
docker stop temp
docker commit temp myjupyter:torch
docker run -it -rm -p 8888:8888 -v ${PWD}:/home/joyvan/notebooks myjupyter:torch (my working directory is the folder containing the .ipynb file the course will have me working in once the image is running successfully)
I've seen some other SO posts that say to check docker container ps, and that returns:
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
(base) justinbenfit@MacBook-Pro-3 manual-build %
so I'm assuming stopped containers don't show up, and I'm not really sure what to do next to find out why the server is saying Forbidden. Thanks.
I need to use a docker image which has compiled versions of certain programs that are very hard to compile from scratch.
I need to run a program in that environment.
I installed docker and pulled the image (john/custom:py2).
But how do I run a program (a Python program that works in the Docker environment) using the environment provided by Docker, passing a local folder in as input and getting the output back on my local system?
So far, all the tutorials show me how to work inside Docker, not how to solve this problem.
Thanks for your help.
The technical issue is:
docker run -it -v /tmp:/home/ubuntu/myfolder/ john/custom:py2
This drops me into a root shell, but I see none of the folders or files of myfolder there, i.e. the ls command gives empty results.
How can I run a program inside this Docker environment that reads from that folder as input and writes its output back into the same folder?
It sounds like you've reversed the order of the volume syntax. The first half is the host location (the volume source), while the second half is the target directory inside the container where the volume is mounted. Try:
docker run -it -v /home/ubuntu/myfolder/:/tmp john/custom:py2
to mount myfolder at the container's /tmp directory.
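With the mount in the right order, running a program against the folder and getting its output back on the host looks like this (a sketch; process.py is a hypothetical placeholder for the actual program):

```shell
# Mount the host folder read-write at /tmp inside the container, then
# run a program against it; anything it writes under /tmp appears in
# /home/ubuntu/myfolder on the host after the container exits.
docker run --rm -v /home/ubuntu/myfolder/:/tmp john/custom:py2 \
    python /tmp/process.py
```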
I tried a little variation using the ubuntu container and it works for me.
$ docker pull ubuntu
$ docker run -it -v /tmp:/home/ubuntu/myfolder ubuntu:latest
$ ls /home/ubuntu/myfolder
Try this and see whether or not it works for you.
I would also try mounting other directories besides /tmp to a directory in the docker container. For example:
$ mkdir /home/john/foo
$ docker run -it -v /home/john/foo:/home/ubuntu/foo ubuntu:latest
/tmp is a little special, and I don't know whether it's a good idea to mount over that directory inside a container.
I installed Docker and tested it with hello-world just to make sure it's fine. Docker is set to Windows Containers. Then I ran the following:
docker run -it gcr.io/tensorflow/tensorflow:latest-devel
I got this error:
C:\Users\pubud>docker run -it gcr.io/tensorflow/tensorflow:latest-devel
Unable to find image 'gcr.io/tensorflow/tensorflow:latest-devel' locally
docker: Error response from daemon: manifest for gcr.io/tensorflow/tensorflow:latest-devel not found.
See 'docker run --help'.
Any help? I am following this link for the setup. I am using Docker for Windows since I am on Windows 10 Pro.
You don't need the gcr.io/ prefix; the image is published on Docker Hub, so just use tensorflow/tensorflow in your docker run command:
docker run -it -p 8888:8888 tensorflow/tensorflow
Note also that the TensorFlow images are Linux-based, so Docker must be switched from Windows Containers to Linux Containers to run them.
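If the tutorial specifically calls for the latest-devel tag, the same substitution should work, since that tag is also published under the Docker Hub name:

```shell
# Same image and tag as the tutorial, just without the gcr.io/ registry
# prefix; Docker Hub is the default registry, so no prefix is needed.
docker run -it tensorflow/tensorflow:latest-devel
```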
I have just installed Docker Toolbox on Windows 10 and created a Linux Python container. I have cloned a project from GitHub inside that container. How can I access the project directories created in the Linux container from my Windows OS? Are they saved on any drive or in any directory?
I have obtained the docker image container using the command
docker pull floydhub/dl-docker:cpu
and running it using the command
docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash
Also, I am running my project from the Docker Quickstart Terminal; is there another (GUI-based) way to manage and run my projects?
More details about how your Linux Python container was launched (i.e. the build file / run command) would be helpful. That said, you can copy files out of a container using docker cp, as described here:
https://docs.docker.com/engine/reference/commandline/cp/
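For example, assuming the container is named my_python and the repo was cloned to /app/myproject inside it (both names are hypothetical):

```shell
# Copy the cloned project out of the container into the current
# directory on the host; this works on stopped containers too.
docker cp my_python:/app/myproject ./myproject
```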
Also, if you use volumes you can view the cloned repo directly on your Windows machine, but a volume must be set up when the container is created (i.e. on docker run); it cannot be added to an already-running container. More details on volumes and how data persistence works in Docker:
https://docs.docker.com/engine/admin/volumes/volumes/
To answer your question about a GUI Docker tool: I've used Kitematic (https://kitematic.com/), but I personally found the CLI faster and easier to use.
Edit
According to your edits, you do in fact have a volume that persists data outside the container: the -v /sharedfolder:/root/sharedfolder flag in your run command. (Docker's documentation now suggests --mount instead, but if you ever need more than one volume you simply repeat the -v flag, as discussed in this SO post: Mounting multiple volumes on a docker container?)
Any data stored in the container at /root/sharedfolder should be available at /sharedfolder on the Docker host. Note that with Docker Toolbox the host is the Linux VM, not Windows itself, so /sharedfolder lives inside that VM unless you share the folder with it.
Edit 2
Your original docker run command is correct as written; the single volume flag it contains is all that is needed:
docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash
With the Kaggle/python Docker image on Ubuntu 14.04, my browser is not starting. Has anyone faced this issue and found a resolution? I am using the command below from the terminal:
(sleep 3 && sensible-browser "http://127.0.0.1:8888") & docker run -v $PWD:/tmp/working -w=/tmp/working -p 8888:8888 --rm -it kaggle/python jupyter notebook --no-browser --ip=0.0.0.0 --notebook-dir=/tmp/working
and when I go to localhost:8888 in Firefox, nothing shows up.
You can use the datmo/kaggle:python image to run Kaggle projects with a Jupyter notebook. After pulling the image, you can run the following in a shell:
docker run --rm -it -p 8888:8888 -v ~/.:/home/ datmo/kaggle:python 'jupyter notebook'
This mounts the local directory into the container so the notebook has access to it.
Then go to http://localhost:8888 in the browser; when I open a new kernel it runs Python 3.5. I don't recall doing anything special when pulling the image or setting up Docker.
You can find more information here.
You can also try using datmo to easily set up the environment and track machine learning projects so that experiments are reproducible. You can run the datmo task command as follows to set up a Jupyter notebook:
datmo task run 'jupyter notebook' --port 8888
This sets up your project and files inside the environment to keep track of your progress.