docker airflow configuration issues (puckel/docker-airflow) - python

After pulling the docker image from here, I realised after attaching a shell that the tutorial files are not in the DAG folder specified in airflow.cfg (dags_folder = /usr/local/airflow/dags; the dags folder does not exist).
The tutorial file is actually found here instead:
/usr/local/lib/python3.6/site-packages/airflow/example_dags/tutorial.py
In addition, running airflow list_dags raises warnings about Kubernetes not being installed, and I am missing the permissions to run apt-get to install applications like vim for editing .py files, or even to run ps to view processes.
As I am new to docker and Airflow, is there anything I need to change in the Dockerfile when building?
Note: I am using Docker for Windows to build the Linux image.

The warnings about Kubernetes come from the fact that the airflow[kubernetes] module is not installed by default by Puckel's Dockerfile, but it's not something to worry about unless you want to use Airflow's KubernetesPodOperator.
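If you do want the Kubernetes support, one option is to extend the image yourself. A minimal sketch, assuming the apache-airflow[kubernetes] extra matches the Airflow version baked into the image, and that the base image's default user is airflow (hence the temporary switch to root for the install):

FROM puckel/docker-airflow
USER root
RUN pip install 'apache-airflow[kubernetes]'
USER airflow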
It's also normal that you don't have permission to edit Python modules when you go inside the container: there you are logged in as the airflow user, not as root, and that user only has write access to the $AIRFLOW_HOME directory. In general, editing files from inside the container is hackish and you should try to avoid it.
If I guess correctly, what you want is to have your own DAGs loaded by docker-airflow. If that's the case, you can run something like the following:
docker run -d -p 8080:8080 -v <local_path_to_your_dags>:/usr/local/airflow/dags puckel/docker-airflow webserver
Here you're mounting a local folder from your machine onto /usr/local/airflow/dags in the container ($AIRFLOW_HOME/dags), which is the folder used to load DAGs.
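Since you are on Docker for Windows, the host side of the mount takes a Windows path. For example (the local path below is purely illustrative, and the exact syntax can depend on your Docker for Windows setup):

docker run -d -p 8080:8080 -v C:/Users/you/airflow/dags:/usr/local/airflow/dags puckel/docker-airflow webserver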

Related

Docker Container for Custom Python Setup to Run Tests

I'm new to using docker, and I was wondering how I could create a docker image for others to use to replicate the exact same setup I have for Python when testing code. Currently what I've done is create a docker image with the Dockerfile below:
FROM python:3.7
COPY requirements.txt ./
RUN pip install -r requirements.txt
where the requirements text file contains the libraries and versions I am using. I then ran this image on my local machine and things looked good, so I pushed the image to a public repository on my Docker Hub account so other people could pull it.

This makes sense so far, but I'm still a little confused about what to do next. If someone wants to test a Python script on their local machine using my configuration, what exactly would they do? I think they would pull my image from Docker Hub and then create a container from it on their local machine. However, I tried this on my machine, testing a Python file that runs and saves a file, and it's not really working how I anticipated. I can put a Python file in WORKDIR and have it run, but I think that's all it does.

I'd really like to be able to navigate the container to find the file the Python program created and then save it back to my local machine. Is there any way to do this, or am I going about this the wrong way?
You can execute bash in the container and check the files created inside it, or you can share a volume between the host and the container so that you can see the files from your host machine. Check this link, it might be helpful for you :-)
Volume
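For example, a quick sketch of both approaches (the container name, image name, and paths are illustrative):

# open a shell inside a running container to look around
docker exec -it my-container bash
# or mount a host folder, so files written to /app/results inside
# the container show up in ./results on your machine
docker run -v "$(pwd)/results:/app/results" my-image python my_script.py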

Running a Docker Image on Local Folder

I need to use a docker image which has compiled versions of certain programs that are very hard to compile from scratch.
I need to run a program in that environment.
I installed docker and pulled the image.
But how do I run a program (a Python program that works in the docker environment) using the environment provided by docker, taking my local folder as input and producing the output back on my local system?
So far, all the resources and tutorials show me how to work inside docker, not how to solve the problem I have.
Thanks for your help.
You should look at bind mounts. Here’s the Docker documentation of those.
Essentially, that will mount a folder in the host as a folder in the container.
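For example (the image name, mount point, and script name are illustrative):

# the current host directory appears as /data inside the container
docker run --rm -v "$(pwd)":/data the-image python /data/my_program.py

The program can then read its input from /data and write its output there, and the results remain in your local folder after the container exits.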

How to mount a docker container so that I can run python scripts that are stored inside the container

I am using a docker image (not mine) created through this Dockerfile.
ROS Kinetic, ROS2, and some important packages are already installed on this image.
When I run the docker image with docker run -it <image-hash-code>, ROS Kinetic works well and the packages, like gym, can be found by python3.
So, all in all, the docker image is a great starting point for my own project.
However, I would like to change the Python scripts that are stored in the docker image. The Python scripts use the installed packages and interact with ROS Kinetic as well as with ROS2.
I do not want to install all of these programs and packages, which are already on the docker image, on my Ubuntu system just to test my own Python scripts.
Is there a way to mount the docker image so that I can test my Python scripts?
Of course, I could use vim to edit the Python scripts, but I am thinking more of IntelliJ.
So, how can an IDE (e.g. IntelliJ) access and run a Python script that is stored in the docker image, with the same result as if I executed the script directly on the running container?
The method suggested by Lord Johar (mounting the container, editing the scripts with an IDE, saving the image, and then running it) works, but is not what I would like to achieve.
My goal is to use the docker container as a development environment that an IDE has access to, so the IDE can use the installed programs and packages.
In other words: I would like to use an IDE on my host system to test my Python scripts in the same way as if the IDE were installed on the docker image.
You can use docker commit.
Use this command: docker commit <your python container>.
Now type docker images to see the new image.
You should rename and tag the image with a command like docker tag <image ID> mypython:v1.
Then use the docker run command and enjoy your code.
It's better to mount a volume to your container to persist your code and data (see Docker volume).
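For example, a sketch of such a mount (the host path and the /workspace mount point are illustrative):

docker run -it -v ~/my_ros_scripts:/workspace <image-hash-code>

You can then edit the scripts under ~/my_ros_scripts with the IDE on your host, and the running container sees the changes immediately.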
However, I would like to change the Python scripts that are stored in the docker image. The Python scripts use the installed packages and interact with ROS Kinetic as well as with ROS2.
You must mount a volume into your docker container and edit your files there.
The better way is to make your own image:
Install docker on your Ubuntu machine, pull a Python image, and use a Dockerfile to create your own image. Every time you change your code, build a new image with a new tag, then run the image and enjoy the docker container.
For the second way:
Copy your Python app to /path/to/your/app (my main file is index.py).
Change your directory to /path/to/your/app.
Create a file with the name Dockerfile:
# small Python base image
FROM python:alpine3.7
# copy the application source into the image
COPY . /app
WORKDIR /app
# install the dependencies listed in requirements.txt
RUN pip install -r requirements.txt
# document the port the app listens on
EXPOSE 5000
CMD python ./index.py
Also note the RUN directive that invokes pip and points to the requirements.txt file. This file contains the list of dependencies that the application needs to run.
Build your image:
docker build --tag my-app .
Note: the dot at the end of the command is important; it tells docker to use the current directory as the build context. Also, you must run the command from /path/to/your/app, the directory that contains the Dockerfile.
Now you can run your container:
docker run --name python-app -p 5000:5000 my-app
What you are looking for is tooling that can communicate with a local or remote docker daemon.
I know that Eclipse can do that. The tooling for this is called Docker Tooling. It can explore docker images and containers on a machine running a docker daemon in your network. It can start and stop containers, commit containers to images, and create images.
What you require (as I understand it) is the ability to commit containers, since you are asking about changing scripts inside your container. If you want to persist your work on those docker containers, committing is indispensable.
Since I am not familiar with IntelliJ, I would suggest having a look at the Eclipse Docker Tooling wiki to see whether it is what you are looking for. Once you have an idea, look for analogies in your favourite IDE, like IntelliJ.
Another IDE that supports docker exploration is CLion.

Setting up docker container so that I can access python packages on ubuntu server

I'm new to using Docker, so I'm either looking for direct help or a link to a relevant guide. I need to train some deep learning models on my school's Linux server, but I can't manually install PyTorch and other Python packages since I don't have root access (sudo). Another student said that he uses docker and has everything ready to go in his container.
I'm wondering how to wrap up my code and the relevant packages into a container that I can push to the Linux server and then run.
To address your specific problem, the easiest way I have found to get code into a container is to use git:
Start the container in interactive mode, or ssh to it if it's attached to a network.
Run git clone <your awesome deep learning code>. Have a requirements.txt file in your git repo, change into your local clone of the repo, and run pip install -r requirements.txt.
Run whatever script you need to run your code. Note that you can easily put the pip install command in one of your run scripts; see the sketch after this list.
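A minimal session might look like this (the image name, repository URL, and script name are placeholders):

# mount a host folder so results survive the container
docker run -it -v "$(pwd)/output:/output" pytorch/pytorch bash
# inside the container:
git clone https://github.com/you/your-dl-project.git
cd your-dl-project
pip install -r requirements.txt
python train.py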
It's important to remember that docker containers are stateless/ephemeral. You should not expect a container or its contents to persist in any durable fashion. This specific issue is addressed by mapping a directory on the host system to a directory in the container.
Side note: I recommend starting with the docker tutorial first. You can easily skip the installation parts if you are working on a system that already has docker installed and where you have permission to build, start, and stop containers.
I don't have root access (sudo). Another student said that he uses docker
I would like to point out that docker normally requires root privileges (or membership in the docker group), so it may not get around your lack of sudo.
Instead, I think you should look at using something like Google Colab or JupyterLab. This gives you the added benefit of code that is backed up on a remote server.

Python requirements.txt use local git dependency

I have a small Python Flask app on a CentOS 7 VM that runs in docker, along with an nginx reverse proxy. The requirements.txt pulls in several external utilities over git+ssh, such as:
git+ssh://path-to-our-repo/some-utility.git
I had to make a change to the utility, so I cloned it locally, and I need the app to use my local version.
Say the cloned and modified utility is in a local directory:
/var/work/some-utility
In the requirements.txt I changed the entry to:
git+file:///var/work/some-utility
But when I try to run the app with
sudo docker-compose up
I get the error message
Invalid requirement: 'git+file:///var/work/some-utility'
it looks like a path. Does it exist ?
How can I get it to use my local copy of "some-utility" ?
I also tried:
git+file:///var/work/some-utility#egg=someutility
but that produced the same error.
I looked at PIP install from local git repository.
This is related to this question:
https://stackoverflow.com/questions/7225900/how-to-pip-install-packages-according-to-requirements-txt-from-a-local-directory?rq=1
I suppose most people would say: why not just check a development branch of some-utility into the corporate git repo? But in my case I do not have the privileges for that.
Or maybe my problem is related to docker, and I need to map the some-utility folder into the docker container and then use that path? I am a docker noob.
--- Edit ---
Thank you larsks for your answer. I tried to add the some-utility folder to the docker-compose.yml:
volumes:
- ./some-utility:/usr/local/some-utility
and then changed the requirements.txt to
git+file:///usr/local/some-utility
but our local git repo just went down for maintenance, so I will have to wait a bit for it to come back up to try this.
--- Edit 2 ---
After I made the above changes, I get the following error from docker-compose when it tries to build my endpoint app:
Cloning file:///usr/local/some-utility to /tmp/pip-yj9xxtae-build
fatal: '/usr/local/some-utility' does not appear to be a git repository
But the /usr/local/some-utility folder does contain the cloned some-utility repo, and I can go there and run git status.
If you're running pip install inside a container, then of course /var/work/some-utility needs to be available inside the container.
You can expose the directory inside your container using a host volume mount, like this:
docker run -v /var/work/some-utility:/var/work/some-utility ...
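Since you are using docker-compose, the equivalent mount goes under your service's volumes key (use whatever service name you already have), for example:

volumes:
  - /var/work/some-utility:/var/work/some-utility

One caveat to keep in mind: a volume is only visible while the container is running, so if the pip install happens during the image build (in a Dockerfile RUN step), the directory has to be copied into the build context instead of mounted.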
