Build a Docker image for a Python application using the PyQt4 library

I am trying to dockerise a small Python application. The code uses the PyQt4 library. The app has some unit tests which I run when I build the image, with a step like the following:
RUN [ "/bin/bash", "-c", "source activate conda_environment && python -m unittest tests/tests_html_consistency.py" ]
The PyQt4 library needs an X server to do its thing, but the Docker build environment does not have one, so unfortunately when I build the image I get the following error:
python -m unittest: cannot connect to X server
In other similar Stack Overflow questions I found that a possible solution would be to simply mount the host's X server socket as a Docker volume and tell Docker to use that instead:
docker run -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY TheImage
But how do I do this at image build time? The above command works only once the image is already built, at 'docker run' time. Moreover, if the host machine is an AWS instance (hence without an X server), would that even work? I don't think so...

Try using the --build-arg option, like so:
docker build -t yourContainer --build-arg DISPLAY=$DISPLAY .
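For the build argument to have any effect, the Dockerfile has to declare it. A minimal sketch of the plumbing (the ARG/ENV lines below are an assumption, not taken from the original Dockerfile):
# Hypothetical Dockerfile excerpt: accept DISPLAY at build time
ARG DISPLAY
ENV DISPLAY=$DISPLAY
RUN [ "/bin/bash", "-c", "source activate conda_environment && python -m unittest tests/tests_html_consistency.py" ]
Note that a classic docker build cannot bind-mount the host's X socket the way docker run can, so on a headless host (such as an AWS instance) a common alternative is to run the tests under a virtual framebuffer instead, e.g. installing xvfb and wrapping the test command in xvfb-run -a, which removes the need for a real X server altogether.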

Related

Receiving OSError: [Errno 8] Exec format error in app running in Docker Container

I have a React/Flask app running in a Docker container. I can build the project with docker-compose and run the app in the container without issue. Where I run into trouble is a particular API route that is supposed to fetch user profiles from the DB, encrypt the values in a text file, and return them to the frontend for download. The encryption script is written in C, though the API route is written in Python. When I try to encrypt through the app running in Docker, I get the following error message:
OSError: [Errno 8] Exec format error: './app/Crypto/encrypt.exe'
I know the following command works in the CLI if invoked outside of the Docker container (still invoked at the same directory level as it would be in the app):
./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./profile.txt -o ./profile.encr
I am using the following Python code to invoke the command in the API route which is where it fails:
proc = subprocess.Popen(f"./app/Crypto/encrypt.exe -k ./app/Crypto/secretkey -i ./{profile.profile_name}.txt -o ./{profile.profile_name}.encr", shell=True)
The Dockerfile for my backend is pasted below:
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
I have tried to tackle the issue a few different ways:
By default my Docker image was built with an ARM64 architecture. I read that the OSError was caused by the architecture not being AMD64, so I rebuilt the image for AMD64, but it gave me the same error.
In case this was a permissions error, I ran chmod +rwx on encrypt.exe through the Dockerfile when building the image. I'm pretty sure it has nothing to do with permissions, especially as it still failed.
I added a shebang (#!/bin/bash) to the script as well as to the Dockerfile.
At the end of the day I know the failure happens at subprocess.Popen, so I am positive I must be missing something when invoking the script from Python, or there is a configuration in my Docker container that is preventing this from working. My machine is a MacBook Pro, on which the script runs fine. The script has also been used successfully on a machine running Linux.
Any chances folks have seen similar issues arise with this error? Thanks in advance!
So, thanks to David Maze's comment on this, I followed the lead that the executable I wanted to run needed to be built within the Dockerfile. I destroyed my original image, added a step to run the Makefile that generates the executable, and ran the program through the app in the Docker container. This did the trick! Presumably the executable needed to be compiled within the Docker container because 'Exec format error' indicates an architecture mismatch: a binary built on the Mac host is not in a format the Linux container can execute, while running 'make' on the Makefile within the Dockerfile builds it for the container's own platform.
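A sketch of what that Dockerfile change might look like (the Makefile location in ./app/Crypto is an assumption; the full python:3 image already ships gcc and make):
FROM python:3
WORKDIR /app
ENV FLASK_APP=main.py
COPY ./requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Hypothetical extra step: compile the C tool inside the image so the
# binary matches the container's architecture and libc
RUN make -C ./app/Crypto
CMD ["python", "main.py"]
On newer Docker versions you can also force the build platform with docker build --platform linux/amd64, but compiling inside the image sidesteps the mismatch entirely.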

Running a python script inside an nginx docker container

I'm using nginx to serve some of my docs. I have a Python script that processes these docs for me. I don't want to pre-process the docs and add them in before the Docker image is built, since they can grow pretty big and keep increasing in number. What I want is to run my Python (and bash) scripts inside the nginx container and have nginx serve the resulting docs. Is there a way to do this without pre-processing the docs before building the image?
I've attempted to execute RUN python3 process_docs.py, but I keep seeing the following error:
/bin/sh: 1: python: not found
The command '/bin/sh -c python process_docs.py' returned a non-zero code: 127
Is there a way to get python3 onto the Nginx docker container? I was thinking of installing python3 using:
apt-get update -y
apt-get install python3.6 -y
but I'm not sure this would be good practice. Please let me know the best way to run my pre-processing script.
You can use a bind mount to inject data from your host system into the container. The mounted content updates automatically when the host data changes. If you're running this with Docker Compose, the syntax looks like:
version: '3.8'
services:
  nginx:
    image: nginx
    volumes:
      - ./html:/usr/share/nginx/html
      - ./data:/usr/share/nginx/html/data
    ports:
      - '8000:80' # access via http://localhost:8000
In this sample setup, the html directory holds your static assets (checked into source control) and the data directory holds the generated data. You can regenerate the data from the host, outside Docker, the same way you would if Docker weren't involved:
# on the host
. venv/bin/activate
./regenerate_data.py --output-directory ./data
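If you aren't using Compose, the same bind mounts with a plain docker run look roughly like this (the container name is illustrative):
# equivalent to the Compose file above
docker run -d --name docs-nginx \
  -v "$PWD/html:/usr/share/nginx/html" \
  -v "$PWD/data:/usr/share/nginx/html/data" \
  -p 8000:80 \
  nginx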
You should not need docker exec in normal operation, though it can be extremely useful as a debugging tool. Conceptually it might help to think of a container as identical to a process: if the question is "can I run this Python script inside the Nginx process?", the answer is no; you should generally run it somewhere else.
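For example, to spot-check what the container actually sees (using the hypothetical container name from the sketch above):
docker exec -it docs-nginx ls /usr/share/nginx/html/data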

Python Virtualenv HTTP Server With Docker

I'm trying to host a Python HTTP server, and the following works fine.
FROM python:latest
COPY index.html /
CMD python3 -m http.server
But when I try to use a Python virtualenv, I run into issues.
FROM python:3
COPY index.html .
RUN pip install virtualenv
RUN virtualenv --python="python3" .virtualenv
RUN .virtualenv/bin/pip install boto3
RUN python3 -m http.server &
CMD ["/bin/bash"]
Please help.
I just want to point out that using virtualenv within a Docker container might be redundant.
With docker, you are encapsulating your one specific application along with its dependencies (libraries, frameworks, boto3 in your case), as opposed to your local machine where you might have several applications being developed, each with different dependencies.
Thus, I would recommend reconsidering the necessity of virtualenv within Docker.
Second, running the command:
RUN python3 -m http.server &
is wrong on two counts: RUN executes at image build time, so the server would not even be running when the container starts, and backgrounding it with & is bad practice in Docker anyway. You want to launch the server with the CMD instruction, in the foreground, so it runs as the container's main process (PID 1), receives Docker's signals, and starts automatically when the container starts:
CMD ["virtualenv/bin/python3", "-m", "http.server"]

How can I access the project directories that are created in the Docker toolbox Linux containers on my Windows OS?

I have just installed Docker Toolbox on Windows 10 and created a Linux Python container. I have cloned a project from GitHub inside that container. How can I access the project directories created in the Linux container from my Windows OS? Are they saved on some drive or directory?
I have obtained the Docker image using the command
docker pull floydhub/dl-docker:cpu
and running it using the command
docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash
Also, I am running my project using the Docker Quickstart Terminal; is there any other (GUI-based) way to manage and run my projects?
More details about how your Linux Python container was launched, i.e. the build file / run command, would be helpful. That said, you can copy files out of a container using docker cp, as described here:
https://docs.docker.com/engine/reference/commandline/cp/
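For example (the container name and the path to the cloned repo are illustrative):
# copy a directory out of the running container onto the host
docker cp my-container:/root/sharedfolder/myproject ./myproject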
Also, if you use volumes you can view the cloned repo right on your Windows machine directly, but those volumes must be set up when the container is created and cannot be added to an already-running container. More details on volumes and how data persistence works in Docker:
https://docs.docker.com/engine/admin/volumes/volumes/
To answer your question about a GUI docker tool, I've used Kitematic (https://kitematic.com/) but personally found the CLI faster and easier to use.
Edit
According to your edits you do in fact have a volume that will persist data to your local Windows machine: the -v /sharedfolder:/root/sharedfolder flag bind-mounts the host's /sharedfolder into the container at /root/sharedfolder. Note that floydhub/dl-docker:cpu is the image name, not a volume, so it takes no -v flag and must come last. If you ever need several volumes, use a separate -v flag for each (the Docker documentation is moving toward --mount, but this SO post: Mounting multiple volumes on a docker container? confirms that repeating -v works):
docker run -v /sharedfolder:/root/sharedfolder \
  floydhub/dl-docker:cpu
Any data that is stored in the Docker container at /root/sharedfolder should be available in Windows at /sharedfolder.
Edit 2
To match your original docker run command:
docker run -it -p 8888:8888 -p 6006:6006 -v /sharedfolder:/root/sharedfolder floydhub/dl-docker:cpu bash

How to watch xvfb session that's inside a docker on remote server from my local browser?

I'm running a Docker image that I built myself, which runs E2E tests.
The browser is up and running, but I'd like one more nice-to-have feature: the ability to watch the session live.
My docker run command is:
docker run -p 4444:4444 --name ${DOCKER_TAG_NAME} \
  -e Some_ENVs \
  -v Volume:Volume \
  --privileged \
  -d "{docker-registry}" >> /dev/null 2>&1
I'm able to export screenshots, but in some cases that's not enough, and being able to watch the exact state of the test would be amazing.
I've tried a lot of options but came to a dead end; any help would be great.
My tests are in Python 2.7.
My Docker base image is ubuntu:14.04.
My environment is AWS (if that matters).
The containers run on Ubuntu servers.
I know this is a duplicate of this question, but no one answered it, so...
There is a recent tool called Selenoid. It launches browsers in Docker containers (i.e. headless, as you require) and has a standalone UI capable of showing the live session screen via VNC. So you can launch multiple sessions in parallel and then watch, and even intercept, actions happening in the target browser. All of this works perfectly in a cloud environment.
I have faced the same issue before with VNC. You need to find out which port your Xvfb/VNC server is listening on, then open that port in your AWS security group; once that's done you should be able to connect.
In my case I was starting the Selenium docker image (https://github.com/elgalu/docker-selenium) and used this command to start the container:
docker run -d --name=grid -p 4444:24444 -p 5900:25900 \
  -v /dev/shm:/dev/shm -e VNC_PASSWORD=hola \
  -e SCREEN_WIDTH=1920 -e SCREEN_HEIGHT=1480 \
  elgalu/selenium
The VNC port, as per the command, is 5900, so I opened that port in the instance's security group and connected using a VNC viewer on port 5900.
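If you're running your own image rather than docker-selenium, the same idea can be wired up by hand. A rough sketch, assuming your tests already run under Xvfb on display :99 (the display number, password, and port below are assumptions):
# inside the container: attach a VNC server to the existing Xvfb display
apt-get update && apt-get install -y x11vnc
x11vnc -display :99 -rfbport 5900 -passwd secret -forever -shared &
Then publish the port when starting the container (add -p 5900:5900 to your docker run), open TCP 5900 in the AWS security group, and point a VNC viewer at <instance-ip>:5900.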
