Jupyter does not show folder in my working directory - python

I am running a Dockerized Jupyter notebook on a MacBook Pro. On startup, the Jupyter home page only shows some of the folders in the working directory. When I cd into a folder and use it as the working directory, I get the message "The notebook list is empty."
See examples below.
My directory:
LewIss-MacBook-Pro:MyTensorFlow lewleib$ ls
Gorner_tensorflow-mnist Tensor2018 models
Gorner_tensorflow-rnn Untitled.ipynb tensorflow
MyDeepTest generate_hmb3.py tensorflow-without-a-phd-master
My_tensor1.html guided testgen
NeuralNet1.ipynb install.sh
README.md mnist
One level down:
LewIss-MacBook-Pro:MyDeepTest lewleib$ ls
README.md guided models
generate_hmb3.py install.sh testgen
And one level more:
LewIss-MacBook-Pro:guided lewleib$ ls
chauffeur_guided.py epoch_guided.py ncoverage.py rambo_guided.py
When I try to start the Jupyter notebook:
LewIss-MacBook-Pro:guided lewleib$ docker run -it -p 8888:8888 -p 6006:6006 -v ~/lewleib/MyTensorFlow/MyDeepTest/guided:/notebooks tensorflow/tensorflow
I get the following:
guided
Name            Last Modified
..              seconds ago
The notebook list is empty.

It ran with the following:
LewIss-MacBook-Pro:guided lewleib$ docker run -it -p 8888:8888 -p 6006:6006 -v /Users/lewleib/MyTensorFlow/guided:/notebooks tensorflow/tensorflow
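A note on the difference between the two commands (my reading, not stated in the original post): `~` already expands to /Users/lewleib, so `~/lewleib/MyTensorFlow/MyDeepTest/guided` actually points at /Users/lewleib/lewleib/..., a path that does not exist; Docker then mounts an empty directory at /notebooks, which would explain the empty notebook list. Using the home-relative path correctly should work:

```
docker run -it -p 8888:8888 -p 6006:6006 -v ~/MyTensorFlow/MyDeepTest/guided:/notebooks tensorflow/tensorflow
```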

Related

Serving Universal Sentence Encoder Model using tensorflow serving and docker

I have a Universal Sentence Encoder model saved on my local drive. I am trying to serve the model in a Docker container using TensorFlow Serving.
Command:
sudo docker run -p 8502:8502 --name tf-serve -v /home/ubuntu/first/models:/models -t tensorflow/serving --model_base_path=/models/test
where /home/ubuntu/first/models is the folder path to my model files.
After starting, it goes into an infinite loop.
What is happening here?! Any help will be appreciated.
Edit your command like this:
sudo docker run -p 8502:8502 --name tf-serve -v /home/ubuntu/first/models:/models/test -t tensorflow/serving --model_base_path=/models/test
Your --model_base_path is /models/test, so your -v mount has to put the model files into that folder.
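For reference (this layout is not shown in the original answer; the file names follow the standard SavedModel convention): TensorFlow Serving also expects numeric version subdirectories under the model base path, so the host directory mounted to /models/test should look roughly like:

```
/home/ubuntu/first/models/
└── 1/                  # version number
    ├── saved_model.pb
    └── variables/
```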

How to run a simple Python script without writing a complete Dockerfile?

I have set up Docker Toolbox on a Win 10 machine. I have some simple single file Python scripts that I want to run in Docker, just for learning purpose.
Started learning Docker today, and Python 3 days ago.
I assume I have set up Docker correctly, I can run the example hello-world image. No error messages during setup.
I am following an instruction from here https://runnable.com/docker/python/dockerize-your-python-application,
which says:
If you only need to run a simple script (with a single file), you can avoid writing a complete Dockerfile. In the examples below, assume you store my_script.py in /usr/src/widget_app/, and you want to name the container my-first-python-script:
docker run -it --rm --name my-first-python-script -v "$PWD":/usr/src/widget_app python:3 python my_script.py
If I type pwd, it shows:
/c/Program Files/Docker Toolbox
And the script I want to run is located here:
C:\Docker\Python\my_script.py
This is what I think should work:
docker run -it --rm --name my-first-python-script -v "$PWD":/c/Docker/Python python:3 python my_script.py
No matter how I try to specify the file directory, I get an error:
python: can't open file 'my_script.py': [Errno 2] No such file or directory
When you run -v "$PWD":/c/Docker/Python, you are saying you want to link your current working directory to the path /c/Docker/Python in the container, which isn't what you want to do. What you are trying to do is link C:\Docker\Python\ on your host to the container folder /usr/src/widget_app.
This command will put your script inside the container path /usr/src/widget_app, then run it:
docker run -it --rm --name my-first-python-script -v /c/Docker/Python:/usr/src/widget_app python:3 python /usr/src/widget_app/my_script.py
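One caveat worth adding (an assumption about Docker Toolbox defaults, not part of the original answer): Docker Toolbox only shares C:\Users with its VirtualBox VM out of the box, so a path like /c/Docker/Python may need to be added as a VirtualBox shared folder, or the script moved somewhere under C:\Users. You can also pass -w to set the working directory, so the script can be referenced by its bare name:

```
docker run -it --rm -w /usr/src/widget_app -v /c/Docker/Python:/usr/src/widget_app python:3 python my_script.py
```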

docker tensorflow linking with folder

I am trying Docker for TensorFlow on Windows 10 Education. I have installed Docker successfully and can run/pull/import images. I started my Docker container using
C:\User\xyz_folder> docker run -it tensorflow/tensorflow:latest-devel
root@23433215319e:~# cd /tensorflow
root@23433215319e:/tensorflow# git pull
From https://github.com/tensorflow/tensorflow
* [new tag] v1.11.0 -> v1.11.0
Already up-to-date.
Up to here it ran fine without error. The following is the problem:
root@23433215319e:/tensorflow# cd abc_folder
bash: cd: abc_folder: No such file or directory
abc_folder exists in the linked folder on the host, but cannot be seen when I list the directory with 'ls':
root@23433215319e:/tensorflow# ls
ACKNOWLEDGMENTS CODEOWNERS LICENSE WORKSPACE bazel-out configure.py tools ADOPTERS.md CODE_OF_CONDUCT.md README.md arm_compiler.BUILD bazel-tensorflow models.BUILD AUTHORS CONTRIBUTING.md RELEASE.md bazel-bin bazel-testlogs tensorflow BUILD ISSUE_TEMPLATE.md SECURITY.md bazel-genfiles configure third_party
Please suggest how to link this properly so that I can see the shared folder's contents.
Thanks in advance.
To make a directory outside the container visible inside the container, you have to use the option -v or --volume as is stated here.
So, your command would have to be:
docker run -v c:\local\directory:/container/directory -it tensorflow/tensorflow:latest-devel
With that, you should be able to see the directory inside the container.
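Applied to the setup in the question (the host path is taken from the prompt shown above and may need adjusting to your real folder), this might look like:

```
docker run -v C:\User\xyz_folder\abc_folder:/tensorflow/abc_folder -it tensorflow/tensorflow:latest-devel
```

Inside the container, `ls /tensorflow/abc_folder` should then show the host folder's contents.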

Dockerfile - Can't use another name than 'app.py'

I've been searching a lot for the past days regarding Dockerfiles. I'm using cx_Oracle in Python 2.7. Here's what my Dockerfile looks like:
FROM sbanal/python-oracle-xe12.1-latest
WORKDIR /code/app
COPY generate_distance.py /code/app/app.py
COPY generate_values.py /code/app/app2.py
To make it easier to explain, I've made a method to print out the name of the file. In generate_distance.py:
def test():
    print "Generate distance"

test()
In generate_values.py:
def test():
    print "Generate values"

test()
Then I'm running docker build with a tag:
docker build -t gen .
Sending build context to Docker daemon 13.82kB
Step 1/4 : FROM sbanal/python-oracle-xe12.1-latest
---> 723335924016
Step 2/4 : WORKDIR /code/app
---> Using cache
---> 9fde6fb3ac02
Step 3/4 : COPY generate_distance.py /code/app/app.py
---> 1dbf7ef85ee3
Removing intermediate container ae626dcef48c
Step 4/4 : COPY generate_values.py /code/app/app2.py
---> 7a54500b88a3
Removing intermediate container f496edfc237d
Successfully built 7a54500b88a3
Successfully tagged gen:latest
When running 'docker images', I can see the 'gen' image. But when I run the 'gen' image, only app.py runs:
>docker run -p 5500:5000 gen
>Generate distance
I can't see what mistake I've made. I also don't know why it has to be called app.py. If I use a different file name in the COPY instructions in the Dockerfile, I get a 'No such file or directory' error. That is:
FROM sbanal/python-oracle-xe12.1-latest
WORKDIR /code/app
COPY generate_relation_distance.py /code/app/generate_relation_distance.py
COPY generate_ten_values.py /code/app/generate_ten_values.py
Build and run like the part over:
docker run -p 5500:5000 gen
python: can't open file 'app.py': [Errno 2] No such file or directory
Hope someone can help me :)
Your image is based on sbanal/python-oracle-xe12.1-latest (first line of your Dockerfile).
That Dockerfile defines a "CMD", which specifies the default command of your container. Here, that is
CMD python app.py
(see the last line of your base image).
The command is executed as sh -c "python app.py".
This is why, on container creation, your container runs python app.py.
You need to override the "CMD" part in your Dockerfile, e.g.
CMD ["python", "app2.py"]
See the official docker docs to understand CMD.
You should have only one CMD in your Dockerfile, containing the default command that the container executes automatically.
If you want to start multiple services, consider whether they should really be packed into one image; otherwise, follow the official docs and use a supervisor or a script that starts your desired services.
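Putting the answer together, a minimal corrected Dockerfile (a sketch based on the files named in the question, assuming generate_values.py is the script to run by default) would be:

```
FROM sbanal/python-oracle-xe12.1-latest
WORKDIR /code/app
COPY generate_distance.py /code/app/
COPY generate_values.py /code/app/
# Override the base image's CMD ("python app.py") with the script we actually want
CMD ["python", "generate_values.py"]
```

With this, the COPY targets can keep their real names, because the base image's CMD no longer dictates the file name.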
If anyone is wondering how I solved the problem: I used a bash script instead.
script.sh:
#Build the image
docker build -t image_name .
#Run image_name in a container, kept alive so we can exec into it
docker run -d -p 5500:5000 image_name tail -f /dev/null
#Get the container id
container_id="$(docker ps | grep image_name | grep -Eo '^[^ ]+')"
#Run your programs in the container
docker exec -it "$container_id" /bin/sh -c "python generate_distance.py"
docker exec -it "$container_id" /bin/sh -c "python generate_values.py"
My goal was to store the output in text files and copy those from docker container to localhost.

Using iPython with remote docker container in a local project directory

I am completely new to Docker and I followed the instructions found on this page https://cmusatyalab.github.io/openface/setup/:
docker pull bamos/openface
docker run -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface
./demos/compare.py images/examples/{lennon*,clapton*}
And I was able to run the openface example by following those steps. However, I normally develop in IPython and would like to keep doing so, but I cannot import openface from IPython, since presumably it is not installed locally. Similarly, I do not know how to cd into my project directory, which is in /Users/name/documents/my-project.
What is the idiomatic way to proceed?
My recommended approach is to use Docker volumes. As you already have the project outside the container, you can start the container with a volume that maps your project directory to a directory inside the container. That's idiomatic.
For instance:
docker run -v /Users/name/Documents/my-project:/root/my-project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
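Once the container is running with the volume mounted, you can work on the project from the shell inside the container (an assumption: IPython may need to be installed there first if the image does not already ship it):

```
# inside the container's shell
pip install ipython    # only if it is not already present in the image
cd /root/my-project
ipython                # openface is importable here, and your project files are visible
```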
