Docker run does not mount the local folder - python

I need to use a docker image which has compiled versions of certain programs that are very hard to compile from scratch.
I need to run a program in that environment.
I installed docker and pulled the image (john/custom:py2).
But how do I run a program (a Python program that works in the Docker environment) using the environment provided to me by Docker, passing in a local folder as input and getting the output back on my local system?
So far, all the resources and tutorials show me how to work inside Docker, not the problem I want to solve.
Thanks for your help.
The technical issue is:
docker run -it -v /tmp:/home/ubuntu/myfolder/ john/custom:py2
This drops me into a root shell, but I see none of the folders or files of myfolder there; i.e., the ls command gives empty results.
How can I run a program inside this Docker environment that reads input from the folder and writes output back to the same folder?

It sounds like you've reversed the order of the volume syntax. The first half is the host location (the volume source), while the second half is the target directory inside the container where the volume is mounted. Try:
docker run -it -v /home/ubuntu/myfolder/:/tmp john/custom:py2
to mount myfolder into the container's /tmp directory.
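Once the mount is in place you can also run the program non-interactively against the folder. A minimal sketch, assuming a hypothetical script process.py that lives in the folder itself and reads/writes files next to it (/data is an arbitrary mount point, not something the image requires):
docker run --rm -v /home/ubuntu/myfolder:/data john/custom:py2 \
    python /data/process.py
Anything the script writes under /data then appears in /home/ubuntu/myfolder on the host.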

I tried a small variation using the ubuntu image, and it works for me.
$ docker pull ubuntu
$ docker run -it -v /tmp:/home/ubuntu/myfolder ubuntu:latest
$ ls /home/ubuntu/myfolder   # run from the shell inside the container
Try this and see whether or not it works for you.
I would also try mounting other directories besides /tmp to a directory in the docker container. For example:
$ mkdir /home/john/foo
$ docker run -it -v /home/john/foo:/home/ubuntu/foo ubuntu:latest
/tmp is a little special, and I'm not sure mounting over it inside the container is a good idea.
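To verify that the mount round-trips, a quick check (the file name here is arbitrary):
$ echo hello > /home/john/foo/test.txt
$ docker run --rm -v /home/john/foo:/home/ubuntu/foo ubuntu:latest cat /home/ubuntu/foo/test.txt
If the file's contents print from inside the container, the bind mount is working, and files written inside the container will likewise be visible on the host.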

Related

Receiving OSError: [Errno 8] Exec format error in app running in Docker Container

I have a React/Flask app running within a Docker container. There is no issue with me building the project using docker-compose and running the app itself in the container. Where I run into issues is a particular API route that is supposed to fetch user profiles from the DB, encrypt the values in a text file, and return it to the frontend for download. The encryption script is written in C, though the API route is written in Python. When I try to encrypt through the app running in Docker, I get the following error message:
OSError: [Errno 8] Exec format error: './app/Crypto/encrypt.exe'
I know the following command works in the CLI if invoked outside of the Docker container (still invoked at the same directory level as it would be in the app):
./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./profile.txt -o ./profile.encr
I am using the following Python code to invoke the command in the API route which is where it fails:
proc = subprocess.Popen(f"./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./{profile.profile_name}.txt -o ./{profile.profile_name}.encr", shell=True)
The Dockerfile for my backend is pasted below:
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
I have tried to tackle the issue a few different ways:
By default my Docker container was built for the ARM64 architecture. I read that the OSError was caused by the architecture not being AMD64, so I rebuilt the container for AMD64, and it gave me the same error.
In case this was a permissions error, I ran chmod +rwx on encrypt.exe from the Dockerfile when building the container. I'm pretty sure it has nothing to do with permissions, especially as it still failed.
I added a shebang (#!/bin/bash) to the script as well as to the Dockerfile.
At the end of the day I know it is failing at the subprocess.Popen call, so I am positive I must be missing something when invoking the script from Python, or there is a configuration in my Docker container that is preventing this from working. My machine is a MacBook Pro, which the script runs fine on. The script has also been used successfully on a machine running Linux.
Any chance folks have seen similar issues arise with this error? Thanks in advance!
So thanks to David Maze's comment on this, I followed the lead that the executable I wanted to run needed to be built within the Dockerfile. I destroyed my original container, added a step to run the Makefile that generates the executable, and finally ran the program through the app running in the Docker container. This did the trick! The likely explanation for why the executable had to be compiled within the container: a binary compiled on the macOS host is a Mach-O executable (and possibly for a different CPU architecture), which Linux cannot execute, hence the Exec format error; building inside the container produces a Linux binary matching the container's platform.
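A minimal sketch of what the fixed Dockerfile might look like, assuming the Makefile lives in ./app/Crypto (the exact path is a guess; the full python:3 image already ships gcc and make):
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Build the C encryption tool inside the image so the binary matches
# the container's OS and CPU architecture.
RUN make -C ./app/Crypto
CMD ["python", "main.py"]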

How to run a Docker container from the python image?

I pulled a Python image from Docker Hub with docker pull python:3.9-alpine.
Then I tried to launch a container from this image with docker run -p 8888:8888 --name test_container d4d6be1b90ec.
The container is never up; with docker ps I can't find it.
Do you know why please?
Thank you,
Your container does not stay up because there is no long-running process (an nginx or Apache server, for example) to keep it alive; the image contains only Python and the necessary dependencies, and its default command exits immediately when run without a terminal.
In order to run that image you can try the following command:
docker run --name test_python -it [id_image]
If you then open another terminal and run docker ps, you will see that the container is up.
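For comparison, running the image the original way and checking with docker ps -a shows why it seemed to vanish (docker ps only lists running containers):
$ docker run -p 8888:8888 --name test_container python:3.9-alpine
$ docker ps -a    # lists stopped containers too; test_container shows as Exited
Without -i and -t, the Python interpreter sees end-of-file on stdin and quits at once, so the container stops immediately after starting.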

Share directory in docker container with parent scope docker container in gitlab-ci?

I am using a volume when running a Docker container with something like docker run --rm --network host --volume $PWD/some_dir:/tmp/some_dir .... Inside the container I run Python code which creates the directory /tmp/some_dir (recreating it if it already exists) and puts files into it. If I run the container on my local dev machine, the files are available in /tmp/some_dir on the machine after the container finishes.
However, if I run the same container as part of a gitlab-ci job ("Docker in Docker": the container is not the image used for the job itself), the files are not accessible in /tmp/some_dir, although the directory exists.
What could be a reason for the missing files?
The first half of the docker run -v option, if it's a directory path, is a directory specifically on the host machine. If a container has access to the Docker socket and launches another container, any directory mappings it provides are in the host filesystem's space, not the container filesystem's space. (If you're actually using Docker-in-Docker it is probably the filesystem space of the container running the nested Docker daemon, but it's still not the calling container's filesystem space.)
The most straightforward option is to docker cp files out of the inner container after it's stopped but before you docker rm it; it can copy a whole directory tree.
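A sketch of that approach (the image name and container name here are placeholders):
docker run --name ci_job my-image           # no --rm, so the stopped container sticks around
docker cp ci_job:/tmp/some_dir ./some_dir   # copies the whole directory tree out
docker rm ci_job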
Did you check the right directory on the right server? When you create $PWD/some_dir in a Docker-in-Docker context, the result should end up in a some_dir under the Docker user's home directory on the server running the GitLab CI container.

How can I access the project directories that are created in the Docker toolbox Linux containers on my Windows OS?

I have just installed Docker Toolbox on Windows 10 and created a Linux Python container. I have cloned a project from GitHub in that container. How can I access the project directories that are created in the Linux containers from my Windows OS? Are they saved in any drive or directory?
I have obtained the Docker image using the command
docker pull floydhub/dl-docker:cpu
and running it using the command
docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash
Also, I am running my project using the Docker Quickstart Terminal; is there any other (GUI-based) way to manage and run my projects?
More details about how your Linux Python container was launched, i.e. the build file/run command, would be helpful. That said, you can copy files out of a container using docker cp, as described here:
https://docs.docker.com/engine/reference/commandline/cp/
Also, if you use volumes you can view the cloned repo directly on your Windows machine, but volumes must be set up when the container is created and cannot be added to an already-running container. More details on volumes and how data persistence works in Docker:
https://docs.docker.com/engine/admin/volumes/volumes/
To answer your question about a GUI docker tool, I've used Kitematic (https://kitematic.com/) but personally found the CLI faster and easier to use.
Edit
According to your edits you do in fact have a volume that will persist data to your local Windows machine. Note that if you ever need more than one volume, each one needs its own -v flag (the Docker documentation is unclear about this, as it now suggests using --mount, but this SO post confirms it: Mounting multiple volumes on a docker container?), for example:
docker run -v /sharedfolder:/root/sharedfolder \
    -v /another/host/dir:/another/container/dir \
    floydhub/dl-docker:cpu
A side note: floydhub/dl-docker:cpu in your command is the image name, not a volume, so it does not take a -v flag of its own.
Any data stored in the container at /root/sharedfolder should then be available on the Docker host at /sharedfolder (see the caveat below about where that path actually lives with Docker Toolbox).
Edit 2
To match your original docker run command:
docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash
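One Docker Toolbox caveat: the docker commands run against a VirtualBox VM, so a host path like /sharedfolder lives inside that VM, not on Windows. By default only C:\Users is shared into the VM (as /c/Users), so a mount under that tree is the reliable way to see the files from Windows; the username below is a placeholder:
docker run -it -p 8888:8888 -p 6006:6006 \
    -v /c/Users/<username>/sharedfolder:/root/sharedfolder \
    floydhub/dl-docker:cpu bash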

How to run a Python script in Docker with the script being sent dynamically to the container?

How can I run a Python script in Docker with the script being sent dynamically to the Docker container?
Also, it should handle multiple simultaneous runs: if two runs are executed by two people at once, the file created by one person should not be overridden by the other's.
Normally, you mount a host file as a data volume or, in your case, a host directory.
See the python image:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python your-daemon-or-script.py
That way, if a file is created in that mounted folder, it is created on the host hard drive and won't be overridden by another user executing the same script in their own container.
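To keep simultaneous runs from colliding, one option is to give each invocation its own host directory (a sketch; the timestamp-plus-PID naming scheme is arbitrary):
RUN_DIR="$PWD/runs/$(date +%s)-$$"        # unique per invocation
mkdir -p "$RUN_DIR"
cp your-daemon-or-script.py "$RUN_DIR"/   # drop the dynamically received script in
docker run --rm -v "$RUN_DIR":/usr/src/myapp -w /usr/src/myapp python:3 python your-daemon-or-script.py
Each run then reads and writes only inside its own folder on the host.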
For an Ubuntu image, you need:
an initial copy of the Git repo, cloned as a bare repo (git clone --mirror).
an Apache server installed, listening for Git requests
When you fetch a PR, you can run a new container and push that PR branch to the container's Git repo. A post-receive hook on that container repo can then trigger a Python script, as in the sketch below.
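A minimal sketch of such a hook (the handler script path is hypothetical):
#!/bin/sh
# post-receive: git feeds one "<old-sha> <new-sha> <refname>" line per updated ref on stdin
while read oldrev newrev refname; do
    python /opt/hooks/on_push.py "$refname" "$newrev"
done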
