Running a Docker Image on Local Folder - python

I need to use a docker image which has compiled versions of certain programs that are very hard to compile from scratch.
I need to run a program in that environment.
I installed docker and pulled the image.
But how do I run a program (a Python program that works in the Docker environment) using the environment provided by Docker, passing a local folder in as input and getting the output back onto my local system?
So far, all the resources and tutorials I have found show me how to work inside Docker, not how to solve this particular problem.
Thanks for your help.

You should look at bind mounts. Here’s the Docker documentation of those.
Essentially, that will mount a folder in the host as a folder in the container.
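For example, a minimal sketch, assuming the program is a my_script.py sitting in your current directory and <image-name> is the image you pulled (both names are placeholders):
# /data inside the container is really your local folder on the host
docker run --rm -v "$(pwd)":/data -w /data <image-name> python3 my_script.py
Anything the script writes under /data then shows up in your local folder, because the bind mount makes both paths point at the same files.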

Related

Can't add interpreter from Docker image within PyCharm

I have a project which contains a Dockerfile. It is normally used to build a Docker container on a server when it comes to deployment. Now I am trying to debug with Docker locally.
I built the image from the Dockerfile inside PyCharm. After the build ran successfully, I see a Docker container and a few new images.
Following this instruction, I was trying to add Python Interpreter from Docker.
Configure an interpreter using Docker
However, in the dropdown menu there are only three images to choose from, and my understanding is that the pycharm_helpers image should be the one I'm looking for:
python:3
ubuntu:18.04
busybox:latest
I also read this post and chose busybox:latest, but I don't know what the Python interpreter path should be for that either.
Is there a repository for pycharm_helpers?
Does anyone have an idea of how to add a Python interpreter from the image I just built and run my code inside the container?
Thanks.

Docker Container for Custom Python Setup to Run Tests

I'm new to using Docker, and I was wondering how I could create a Docker image for others to use to replicate the exact same setup I have for Python when testing code. Currently, what I've done is create a Docker image with the Dockerfile below:
FROM python:3.7
COPY requirements.txt ./
RUN pip install -r requirements.txt
where the requirements text file contains the libraries and the versions I am using. I then ran this image on my local machine and things looked good, so I pushed the image to a public repository on my Docker Hub account so other people could pull it. This makes sense so far, but I'm still a little confused about what to do next.
If someone wants to test a Python script on their local machine using my configuration, what exactly would they do? I think they would pull my image from Docker Hub and then create a container from it on their local machine. However, I tried this on my machine, testing a Python file that runs and saves a file, and it isn't really working how I anticipated. I can put a Python file in WORKDIR and have it run, but I think that's all it does. I'd really like to be able to navigate the container to find the file the Python program created and then save it back on my local machine. Is there any way to do this, or am I going about this the wrong way?
You can execute bash in the container and check the files created inside of it, or you can share a volume between host and container so that you can see the files created from your host machine. Check this link, it might be helpful for you :-)
Volume
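A quick sketch of both options; the container, image, and file names below are placeholders:
# option 1: open a shell in the running container and look around
docker exec -it <container-name> bash
# option 2: share the current host directory, so files the script creates end up on the host
docker run --rm -v "$(pwd)":/work -w /work <your-image> python my_test.py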

How to mount a docker container so that I can run python scripts, which are stored inside the container

I am using a docker image (not mine) created through this dockerfile.
ROS kinetic, ROS2 and some important packages are already installed on this image.
When I run the docker image with docker run -it <image-hash-code>, ROS kinetic works well and the packages, like gym, can be found by python3.
So, all in all the docker image is a great starting point for my own project.
However, I would like to change the python scripts, which are stored on the docker image. The python scripts are using the installed packages and are interacting with ROS kinetic as well as with ROS2.
I do not want to install on my Ubuntu system all these programs and packages, which are already installed on the docker image in order to test my own python scripts.
Is there a way to mount the docker image so that I can test my python scripts?
Of course, I can use vim to edit python scripts, but I am thinking more of IntelliJ.
So, how can an IDE (e.g. IntelliJ) access and run a Python script that is stored in the docker image, with the same result as if I executed the script directly in the running container?
The method by Lord Johar (mount the container, edit the scripts with an IDE, save the image, and then run the image) works, but it is not what I would like to achieve.
My goal is to use the docker container as a development environment on which an IDE has access to and can use the installed programs and packages.
In other words: I would like to use an IDE on my host system in order to test my python scripts in the same way as the IDE would be installed on the docker image.
You can use docker commit.
Use this command: docker commit <your python container>.
Now type docker images to see the image.
You should rename and tag the image like this: docker tag <image ID> mypython:v1
Then use docker run and enjoy your code.
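Spelled out, that commit workflow might look like the following; the container and image names are made up for illustration:
docker commit <your-python-container>     # save the container's current state as a new image
docker images                             # note the ID of the image you just committed
docker tag <image ID> mypython:v1         # give the new image a readable name and tag
docker run -it mypython:v1                # run a fresh container from the committed image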
It's better to mount a volume to your container to persist your code and data: Docker volume.
However, I would like to change the python scripts, which are stored on the docker image. The python scripts are using the installed packages and are interacting with ROS kinetic as well as with ROS2.
You must mount a volume to your container and edit your files there.
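For the workflow you describe (edit on the host with IntelliJ, run inside the container), a minimal sketch; the host path, image reference, and script name are placeholders:
# the container sees live copies of the host files, so every edit made in the IDE is immediately testable
docker run -it --rm -v ~/ros_scripts:/scripts <image-hash-code> python3 /scripts/my_script.py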
A better way is to make your own image:
Install Docker on your Ubuntu machine, pull a Python image, and use a Dockerfile to create your own image. Every time you change your code, build a new image with a new tag, then run the image and enjoy the docker container.
For the second way:
Copy your Python app to /path/to/your/app (my main file is index.py).
Change your directory to /path/to/your/app.
Create a file named Dockerfile:
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD python ./index.py
Also note the RUN directive, which calls pip and points to the requirements.txt file. This file contains the list of dependencies that the application needs to run.
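For reference, a hypothetical requirements.txt; the packages and versions are purely illustrative, not taken from the question:
flask==1.1.2
requests==2.25.1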
Build Your Image.
docker build --tag my-app .
Note: the dot at the end of the command is important; it tells Docker to use the current directory as the build context. Also, you must run the command from /path/to/your/app, the directory that contains the Dockerfile.
Now you can run your container:
docker run --name python-app -p 5000:5000 my-app
What you are looking for is tooling that can communicate with a local or remote Docker daemon.
I know that Eclipse can do that; the tooling is called Docker Tooling. It can explore Docker images and containers on a machine running a Docker daemon in your network. It can start and stop containers, commit containers to images, and create images.
What you require (as I understand it) is the ability to commit containers, since you are asking to change scripts inside your container. If you want to persist your work on those containers, committing is indispensable.
Since I am not familiar with IntelliJ, I would suggest having a look at the Eclipse Docker Tooling wiki to get a clue whether it is what you are looking for. Once you have an idea, look for analogies in your favorite IDE, such as IntelliJ.
Another IDE that supports Docker exploration is CLion.

Setting up docker container so that I can access python packages on ubuntu server

I'm new to using Docker, so I'm either looking for direct help or a link to a relevant guide. I need to train some deep learning models on my school's linux server, but I can't manually install pytorch and other python packages since I don't have root access (sudo). Another student said that he uses docker and has everything ready to go in his container.
I'm wondering how to wrap up my code and relevant packages into a container that I can push to the linux server and then run.
To address your specific problem, the easiest way I have found to get code into a container is to use git.
Start the container in interactive mode, or ssh to it if it's attached to a network.
git clone <your awesome deep learning code>. In your git repo, have a requirements.txt file. Change directories into your local clone of the repo and run pip install -r requirements.txt.
Run whatever script you need to run your code (a sketch of the full sequence is below). Note that you can easily put the pip install command in one of your run scripts.
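A rough sketch of those steps; the image name, repository URL, and script name are all placeholders:
# start an interactive container, mapping a host directory so the results survive (see the next point)
docker run -it -v "$(pwd)/results":/results <your-image> bash
# then, inside the container:
git clone https://github.com/<you>/<your-deep-learning-repo>.git
cd <your-deep-learning-repo>
pip install -r requirements.txt
python train.py    # hypothetical training script that writes its output under /results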
It's important to remember that docker containers are stateless/ephemeral. You should not expect the container nor its contents to exist in some durable fashion. This specific issue is addressed by mapping a directory on the host system to a directory in the container.
Side note: I first recommend starting with the docker tutorial. You can easily skip over the installation parts if you are working on system that already has docker installed and where you have permissions to build, start, and stop containers.
I don't have root access (sudo). Another student said that he uses docker
I would like to point out that docker requires sudo permissions.
Instead, I think you should look at using something like Google Colab or JupyterLab. This gives you the added benefit of having your code backed up on a remote server.

How can I create an image and containerize it from my own personal computer

I am currently new to Docker. I have been through a good number of online tutorials and still haven't grasped the entire process. I understand that most of the tutorials have you pull from online public repositories.
But for my application I feel I need to create these images and build the containers from my local or SSH'd computer. So I guess my overall question is: how can I create an image and turn it into a container from nothing? I want to try it on something such as Python before I move on to my big project, which will be containerizing a weather research model.
I do not want someone to do it for me; I just have not had any luck finding documentation that I understand and that gives me a basis for doing this without pulling from online repositories. Any links or documentation would be greatly appreciated.
What I found confusing about Docker, and important to understand, was the difference between images and containers. Docker uses collections of files with your existing kernel to create systems within your existing system. Containers are collections of files that are updated as you run them. Images are saved copies of files that cannot be manipulated. There's more to it than that, based on what commands you can use on them, but you can learn that as you go.
First you should download an existing image that has the base files for an operating system. You can use docker search to look for one. I wanted a small operating system that was 32 bit. I decided to try Debian, so I used docker search debian32. Once you find an image, use docker pull to get it. I used docker pull hugodby/debian32 to get my base image.
Next you'll want to build a container using the image. You'll want to create a 'Dockerfile' that has all of your commands for creating the image. However, if you're not certain about what you want in the system, you can use the image that you downloaded to create a container, make the changes (while writing down what you've done), and then create your 'Dockerfile' with the commands that perform those tasks afterward.
If you create a 'Dockerfile', you would then move into the directory containing the 'Dockerfile' and build the image by running docker build -t TAG . (the trailing dot is part of the command; it tells Docker to use the current directory as the build context).
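As a hedged illustration of what such a 'Dockerfile' might contain (the base image, installed package, and tag names below are examples, not recommendations):
# Dockerfile (illustrative sketch only)
# base image: whichever one you pulled, e.g. hugodby/debian32
FROM debian
# install whatever software your project needs
RUN apt-get update && apt-get install -y python3
# copy your own project files into the image
COPY . /app
WORKDIR /app
Building it with docker build -t weather-model . produces an image tagged weather-model that you can start with the run command below.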
You can now create a container from the image and run it using:
docker run -it --name=CONTAINER_NAME TAG
CONTAINER_NAME is what you want to reference the container as and TAG was the tag from the image that you downloaded or the one that you previously assigned to the image created from the 'Dockerfile'.
Once you're inside the container you can install software and much of what you'd do with a regular Linux system.
Some additional commands that may be useful are:
CTRL-p CTRL-q # Exits the terminal of a running container without stopping it
docker --help # For a list of docker commands and options
docker COMMAND --help # For help with a specific docker command
docker ps -a # For a list of all containers (running or not)
docker ps # For a list of running containers
docker start CONTAINER_NAME # To start a container that isn't running
docker images # For a list of images
docker rmi TAG # To remove an image
