Suppose I have a main directory:
main/
|
|--- requirements.txt # requirements for ALL python subprojects
|--- PythonProject1/
|--- PythonProject2/
|--- PythonProject3/
So there are 3 python projects under main/.
What I want is to have only one requirements.txt file so that the sub-projects don't have to manage their dependencies on their own.
What is the best way to accomplish this? Is having main/requirements.txt good enough, or is this not ideal?
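For illustration only (this snippet is not from the question; the relative path assumes the layout above), each sub-project could simply install from the shared file:
cd main/PythonProject1
pip install -r ../requirements.txt    # every sub-project points at the single shared file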
I would recommend creating Docker images for your projects if you have the relevant experience.
This way you can easily share your image with your co-workers without having to maintain a requirements.txt file.
Docker is quite easy to install. After installation, you can either (1) build an image from a Dockerfile or (2) pull down an official image from Docker Hub:
(1) docker build -t project1 .
(2) docker pull [Official_Image_name_from_Docker_Hub]
If you are new to Docker, run these commands from the CLI after starting Docker:
docker container ls -a    # lists your containers, including stopped ones
docker image ls           # lists all your images
docker run hello-world    # pulls a test image and runs it
Once you have created your base image, you can run it as a container with a volume mounted, for example:
docker run -it -v "%cd%":/data [yourImageName] bash
After completing your projects, you can commit these containers as new images and push them to Docker Hub, where your team members can pull them:
docker commit [containerID/Name] [newImageName]:[TAG]
docker push [newImageName]:[TAG]
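As a hedged sketch of option (1) for this particular layout (the image name, WORKDIR and CMD are assumptions, not from the answer), a Dockerfile placed in main/ could bake the shared requirements into one base image:
FROM python:3
WORKDIR /main
# install the single shared requirements file once, for all sub-projects
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy the sub-projects themselves
COPY . .
CMD ["bash"]
Built from main/ with docker build -t main-base ., every sub-project inside the image then runs against the same installed dependency set.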
I have added code to a repo to build a Cloud Run service. The structure is like this:
I want to run b.py in cr.
Is there any way I can deploy cr without just copying b.py into the cr directory? (I don't want to do that, since there are lots of other folders and files that b.py depends on.)
The problem is that the Dockerfile cannot see folders above its own directory.
Also, how would e.g. api.py import from b.py?
TIA you lovely people.
You have to build your container with the correct parameters, and therefore not use gcloud run deploy --source=. ..., which builds your container with default parameters.
With Docker, the Dockerfile is expected by default at PATH/Dockerfile, but you can override that behavior with the -f parameter to indicate the Dockerfile location.
For example, you can do this:
cd top
docker build -f ./a/cr/Dockerfile .
That way, you give the docker build runtime the build context (here top; the context is represented by the dot at the end, .).
And you also specify the full path of the Dockerfile inside that context.
Because of that, you have to update your Dockerfile: COPY . . will no longer copy just the cr directory, but the whole top directory.
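As a hedged sketch only (the exact location of b.py, the WORKDIR and the api.py entrypoint are assumptions based on the question, not confirmed), the COPY instructions could be adjusted to the new build context like this:
FROM python:3
WORKDIR /app
# paths are now relative to the top directory (the build context), not to a/cr/
COPY b.py .
COPY a/cr/ .
CMD ["python", "api.py"]
With both files copied into the same directory, api.py could then use a plain import b.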
EDIT 1
To validate my answer, I did exactly what you asked in your comment. I used gcloud builds submit, but:
I ran the command from the top directory
I created a cloudbuild.yaml file
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - -c
    - |
      docker build -f ./a/cr/Dockerfile -t <YOUR TAG> .
      docker push <YOUR TAG>
You can't perform a gcloud builds submit --tag <YOUR TAG> from the top directory if you don't have a Dockerfile in the root dir.
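For completeness, a sketch of how the build would then be submitted (assuming the cloudbuild.yaml above lives in the top directory):
cd top
gcloud builds submit --config=cloudbuild.yaml .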
First off, I might have formulated the question inaccurately, feel free to modify if necessary.
Although I am quite new to Docker and all its stuff, I somehow managed to create an image (v2) and a container (cont) on my Win 11 laptop. And I have a demo.py which requires an .mp4 file as an argument.
Now, if I want to run the demo.py file, 1) I go to the project's folder (where demo.py lives), 2) open cmd and 3) run: docker start -i cont. This starts the container as:
root@834b2342e24c:/project#
Then, 4) I should copy my_video.mp4 from the local project folder to the container's project/data folder (from another cmd) as follows:
docker cp data/my_video.mp4 cont:project/data/.
Then I run: 5) python demo.py data/my_video.mp4. After a while, it makes two files: my_video_demo.mp4 and my_video.json in the data folder in the container. Similarly, I should copy them back to my local project folder: 6)
docker cp cont:project/data/my_video_demo.mp4 data/
docker cp cont:project/data/my_video_demo.json data/
Only then can I go to my local project/data folder and inspect the files.
I want to be able to just run a particular command that does 4) - 6) all in one.
I have read about the -v option where, in my case, it would be(?) -v /data:project/data, but I don't know how to proceed.
Is it possible to do what I want? If everything is clear, I hope to get your support. Thank you.
You should indeed use Docker volumes. The following command should do it (note that the container-side path has to be absolute):
docker run -it -v /data:/project/data v2
Well, I think I've come up with a way of dealing with it.
I have learned that with -v one can create a location that is shared between the local host and the container. All you need to do is run docker and provide -v as follows:
docker run --gpus all -it -v C:/Workspace/project/data:/project/data v2:latest python demo.py data/my_video_demo.mp4
--gpus - GPU devices to add to the container ('all' to pass all GPUs);
-it - tells the docker that it should open an interactive container instance.
Note that every time you run this, a new container is created: docker run always creates a new container, and the -it flag just attaches an interactive terminal to it.
Partial credit to #bill27
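If you don't want stopped containers piling up, a small variant (a sketch, not part of the original answer) adds --rm so each container is removed automatically when it exits:
docker run --gpus all -it --rm -v C:/Workspace/project/data:/project/data v2:latest python demo.py data/my_video_demo.mp4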
I am new to docker, so bear with me on this.
I have an app.py file, which simply uses apscheduler to print a sentence on the console. I have followed the structure from the official guide for the Python file. When I run the file in my console, it runs as expected (prints the Tick statement every 10 seconds).
Now, I want to dockerize it and upload the image to Docker Hub. I followed the Docker documentation, and this is what my Dockerfile looks like:
FROM python:3
COPY requirements.txt .
COPY app.py .
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD [ "python", "app.py" ]
I have listed the module names in requirements.txt as below:
datetime
apscheduler
The folder is flat. app.py and requirements.txt are at the same level in the directory.
|
|- app.py
|- requirements.txt
I use the command below to build the Docker image:
docker build . -t app1:ver3
The docker image builds successfully and shows up when I do
docker images
Problem is, when I run the docker image with
docker run app1:ver3
the container does not show any output.
In fact the container shows up when I do docker ps, which is expected, but the run command should show me print statements on the console every 10 seconds.
There are two flags at play here.
You need to use docker run -it app1:ver3
-i: Interactive mode
-t: Enable TTY
I believe just -t alone may also do the job, since attaching a TTY makes Python's stdout line-buffered, so the print output shows up right away. See the link below for details:
https://docs.docker.com/engine/reference/run/
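Alternatively (a sketch, not part of the linked documentation or the answer above), you can disable Python's output buffering in the image itself, so the prints show up even without a TTY:
FROM python:3
# force unbuffered stdout/stderr so the scheduler's output appears immediately
ENV PYTHONUNBUFFERED=1
COPY requirements.txt .
COPY app.py .
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD [ "python", "app.py" ]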
I am trying Docker for TensorFlow on Windows 10 Education. I have installed Docker successfully and can run/pull/import images. I linked my Docker container using:
C:\User\xyz_folder> docker run -it tensorflow/tensorflow:latest-devel
root@23433215319e:~# cd /tensorflow
root@23433215319e:/tensorflow# git pull
From https://github.com/tensorflow/tensorflow
* [new tag] v1.11.0 -> v1.11.0
Already up-to-date.
Up to here it ran fine, without errors. The following is the problem:
root@23433215319e:/tensorflow# cd abc_folder
bash: cd: abc_folder: No such file or directory
The abc_folder is there in the linked folder, but it cannot be seen when I list the contents using 'ls':
root@23433215319e:/tensorflow# ls
ACKNOWLEDGMENTS CODEOWNERS LICENSE WORKSPACE bazel-out configure.py tools ADOPTERS.md CODE_OF_CONDUCT.md README.md arm_compiler.BUILD bazel-tensorflow models.BUILD AUTHORS CONTRIBUTING.md RELEASE.md bazel-bin bazel-testlogs tensorflow BUILD ISSUE_TEMPLATE.md SECURITY.md bazel-genfiles configure third_party
Please suggest how to link this properly so that I can see the shared folder's contents.
Thanks in advance.
To make a directory outside the container visible inside the container, you have to use the option -v or --volume, as stated here.
So, your command would have to be (note that the container-side path must be absolute):
docker run -v c:\local\directory:/container/directory -it tensorflow/tensorflow:latest-devel
With that, you should be able to see the directory inside the container.
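Applied to the question (the host path below is only a guess at where abc_folder actually lives on your machine), it would look something like:
docker run -v C:\User\xyz_folder\abc_folder:/tensorflow/abc_folder -it tensorflow/tensorflow:latest-devel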
I am trying to run my Python script on Docker. I tried different ways to do it but was not able to run it on Docker. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac. But I want to know how I can make an image and then push it to Docker Hub, and after that pull it and run my script on Docker itself.
Going by the question title: if one doesn't want to create a Docker image but just wants to run a script using a standard Python Docker image, it can be run using the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your dockerfile and script in there and change the current context to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the Docker engine does on a build is read the whole current context.
Next we'll take a look at your dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
There's a different architecture beneath Raspberry Pis (ARM instead of x86_64), which COULD HAVE BEEN the problem, but wasn't. If that had been the problem, switching the parent image to FROM armhf/python would have been enough.
Source
BUT! The error kept occurring.
So the cause of this problem was simply a missing shebang at the top of the Python script. The first line in the script needs to be #!/usr/bin/env python, and that should solve the problem.
Source
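A hedged sketch of the fix (capturing.py and its content are placeholders; if the copied file is not already executable, a RUN chmod +x /usr/local/bin/capturing.py line in the Dockerfile would also be needed):
#!/usr/bin/env python
# capturing.py -- the shebang tells the kernel which interpreter should run the file
print("capturing started")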
You need to create a Dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want to upload it to Docker Hub, you need to log in to Docker Hub with docker login, then upload the image with docker push.
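A hedged sketch of that last step (your-dockerhub-username is a placeholder, not something from the answer):
docker login
docker tag pulkit/scriptname:1.0 your-dockerhub-username/scriptname:1.0
docker push your-dockerhub-username/scriptname:1.0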
I followed #samprog's (most accepted) answer on my machine running Ubuntu VERSION="14.04.6"
and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
I fixed the error after changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports other modules, then you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
Hope this will be useful for others.
Another way to run a Python script on Docker can be:
Copy the local Python script into the container:
docker cp yourlocalscript.py container_id:/dst_path/
The container ID can be found using:
docker ps
Run the Python script inside the container:
docker exec -it container_id python /dst_path/yourlocalscript.py
It's very simple:
1 - Go to your Python script's directory and create a file with this name, without any extension:
Dockerfile
2 - Now open the Dockerfile and write your script name in place of sci.py.
(content of Dockerfile)
# I chose the slim variant; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script name
COPY sci.py .
# replace sci.py with your script name
CMD [ "python", "sci.py" ]
Save it. Now you can create an image from this Dockerfile and the script, and then run it.
3 - In the folder's address bar, type cmd and press the Enter key.
4 - When the cmd window opens, type into it:
docker build -t my-python-app .
(this creates an image in Docker named my-python-app)
5 - And finally run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; this dependency HELL between python2 and python3 got me. Here is the solution.
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker instance.
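A companion alias for pip2 can be added the same way, assuming (this is an assumption, not stated in the answer) that the frolvlad/alpine-python2 image ships pip2 on its PATH:
alias pip2='docker run -it --rm -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2 pip2'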