How to Write a Dockerfile to run Python3 + PyQt5

The story is: I have built a small piece of Python software using Python 3.8, PyQt5, and Postgres, and now I am trying to dockerize all this stuff. My plan is to create one Dockerfile for a Python + PyQt5 container, another container just for Postgres, and then use docker-compose to link everything.
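Roughly, the docker-compose layout I have in mind is something like this (service names, image tag, and the password are just placeholders):
version: "3"
services:
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example
  app:
    build: .
    depends_on:
      - db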
The problem is that when I try to create the container for Python and PyQt5, I hit this error:
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
And this is the Dockerfile I am talking about:
FROM python:3
COPY *.py /code/
COPY requirements.txt /code/
WORKDIR /code/
RUN apt-get update -y && apt-get upgrade -y && \
apt-get install xvfb -y
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python3", "main.py"]
This is the content of my requirements.txt
psycopg2
requests
PyQt5
I tried all the solutions I found on the web and on Docker Hub, but none of them gave me the expected result.
Could any good soul shed light on this problem?
Preferably with written code.

It’s not too difficult. Is the Qt software interactive?
If not, then you need to use another, "dummy" platform plugin such as offscreen or minimal. The xcb plugin is for use with X displays only.
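For example, a non-interactive run can select the offscreen plugin, which the error message above lists as available (QT_QPA_PLATFORM is the standard Qt plugin selector; the image name my-app is a placeholder):
docker run -e QT_QPA_PLATFORM=offscreen my-app
You can also bake this into the Dockerfile with ENV QT_QPA_PLATFORM=offscreen.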
If yes, then you have several choices:
a. Run an X server on your desktop. Set the DISPLAY env var in the container, and forward the relevant socket from the server into the container (see the sketch after this list). The SHM extension will not be available, so performance will be somewhat lower than running the application directly on the desktop.
b. Run a web browser client using Qt's WebGL streaming technology. The Qt software will run as a server in the container, and offer its GUI to a browser client.
c. Run the Qt application directly (natively) on the desktop, and run the necessary services in the container. Have the application communicate with the services in the container. You can either have some middleware that exposes an API, or you can talk directly to the database. That will somewhat depend on the needs of your project, and the eventual direction it’s heading.
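A minimal sketch of option (a) on a Linux host running X follows; the image name is a placeholder, and depending on your base image you may also need to apt-get install the X client libraries that the xcb plugin loads (package names vary by distribution):
xhost +local:docker   # on the host: let local containers talk to the X server (this loosens access control)
docker run --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    my-app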

Use Docker Hub:
https://hub.docker.com/r/fadawar/docker-pyqt5
The image you want seems to have already been created.

Related

How to mount a docker container so that I can run python scripts stored inside the container

I am using a docker image (not mine) created through this dockerfile.
ROS kinetic, ROS2 and some important packages are already installed on this image.
When I run the Docker image with docker run -it <image-hash-code>, ROS Kinetic works well and the packages, like gym, can be found by python3.
So, all in all the docker image is a great starting point for my own project.
However, I would like to change the python scripts, which are stored on the docker image. The python scripts are using the installed packages and are interacting with ROS kinetic as well as with ROS2.
I do not want to install all these programs and packages, which are already installed on the Docker image, on my Ubuntu system just to test my own Python scripts.
Is there a way to mount the docker image so that I can test my python scripts?
Of course, I can use vim to edit python scripts, but I am thinking more of IntelliJ.
So, how can an IDE (e.g. IntelliJ) access and run a Python script stored on the Docker image, with the same result as if I executed the script directly in the running container?
The method by Lord Johar (mount the container, edit the scripts with an IDE, save the image, and then run the image) works, but is not what I would like to achieve.
My goal is to use the docker container as a development environment on which an IDE has access to and can use the installed programs and packages.
In other words: I would like to use an IDE on my host system in order to test my python scripts in the same way as the IDE would be installed on the docker image.
You can use docker commit:
Run docker commit <your python container>.
Now run docker images to see the new image.
Rename and tag the image with docker tag <image ID> mypython:v1.
Then use docker run and enjoy your code.
It's better to mount a volume into your container to persist your code and data; see Docker volumes.
However, I would like to change the python scripts, which are stored on the docker image. The python scripts are using the installed packages and are interacting with ROS kinetic as well as with ROS2.
You must mount a volume into your container and edit your files there, as shown below.
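A minimal sketch, with a placeholder host path and the image from the question:
docker run -it \
    -v /home/you/scripts:/code \
    <image-hash-code> /bin/bash
Anything you edit under /home/you/scripts on the host is immediately visible at /code inside the container, so your IDE can edit files on the host while the container runs them.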
A better way is to build your own image:
Install Docker on your Ubuntu machine, pull the python image, and use a Dockerfile to create your image. Every time you change your code, build a new image with a new tag, then run the image and enjoy your container.
For the second way:
Copy your Python app to /path/to/your/app (my main file is index.py).
Change directory to /path/to/your/app.
Create a file named Dockerfile:
FROM python:alpine3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "./index.py"]
Also note the RUN directive that calls pip and points to the requirements.txt file. This file lists the dependencies the application needs to run.
Build Your Image.
docker build --tag my-app .
Note the dot at the end of the command; it is important. Also, you must run this from /path/to/your/app, the directory containing the Dockerfile.
Now you can run your container:
docker run --name python-app -p 5000:5000 my-app
What you are looking for is tooling that can communicate with a local or remote Docker daemon.
I know that Eclipse can do that. The tooling for this is called Docker Tooling. It can explore Docker images and containers on a machine running a Docker daemon in your network. It can start and stop containers, commit containers to images, and create images.
What you require (as I understand it) is the ability to commit containers, since you are asking about changing scripts inside your container. If you want to persist your work on those containers, committing is indispensable.
Since I am not familiar with IntelliJ, I would suggest having a look at the Eclipse Docker Tooling wiki to get a clue whether it is what you are looking for. Once you have an idea, look for analogies in your favorite IDE, like IntelliJ.
Another IDE that supports Docker exploration is CLion.

Setting up docker container so that I can access python packages on ubuntu server

I'm new to using Docker, so I'm either looking for direct help or a link to a relevant guide. I need to train some deep learning models on my school's linux server, but I can't manually install pytorch and other python packages since I don't have root access (sudo). Another student said that he uses docker and has everything ready to go in his container.
I'm wondering how to wrap up my code and relevant packages into a container that I can push to the linux server and then run.
To address your specific problem, the easiest way I found to get code into a container is to use git:
Start the container in interactive mode, or ssh to it if it is attached to a network.
Run git clone <your awesome deep learning code>. In your git repo, keep a requirements.txt file. Change into your local clone and run pip install -r requirements.txt.
Run whatever script you need to run your code. Note you can easily put the pip install command in one of your run scripts, as in the sketch below.
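For example, the whole loop might look like this (the image name, repo URL, and script name are hypothetical):
docker run -it my-dl-image /bin/bash
# then, inside the container:
git clone https://github.com/you/deep-learning-code.git
cd deep-learning-code
pip install -r requirements.txt
python train.py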
It's important to remember that Docker containers are stateless and ephemeral; you should not expect a container or its contents to persist in any durable fashion. This specific issue is addressed by mapping a directory on the host system to a directory in the container, for example:
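A short sketch (the host and container paths are illustrative):
docker run -it \
    -v /home/you/results:/workspace/results \
    my-dl-image
Anything your training script writes under /workspace/results survives after the container exits.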
Side note: I first recommend starting with the Docker tutorial. You can easily skip the installation parts if you are working on a system that already has Docker installed and where you have permission to build, start, and stop containers.
I don't have root access (sudo). Another student said that he uses docker
I would like to point out that Docker itself requires root privileges: you need either sudo or membership in the docker group.
Instead, I think you should look at using something like Google Colab or JupyterLab. This gives you the added benefit of code that is backed up on a remote server.

Dependency in requirements.txt not installed

I need to deploy a Flask app to Google App Engine.
I use Docker, and these lines are in my Dockerfile:
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
In requirements.txt file:
Flask==0.12
gunicorn==19.6.0
boto==2.46.1
gcs-oauth2-boto-plugin==1.8
ffmpeg-normalize
It is supposed to install all dependencies. But somehow "ffmpeg-normalize" is not installed on the Google App Engine instances.
Can anyone help me with that?
If there is another better way doing the package installation, I will be happy to go with as well. Thanks!!
This could be happening for a few reasons. Here are my guesses :)
How do you know the package isn't being installed? Can you share the docker build output that happens when you run gcloud app deploy?
Another thing to try here, just to be sure, is to run:
gcloud app instances list
Then...
gcloud beta app instances ssh [instance] \
  --service [svc] \
  --version [v] \
  --container gaeapp
From there, you can ls around in the container and see exactly what got installed.
I would guess that the pip package is getting installed, but maybe you just didn't install the native dependency you need for ffmpeg. Here's an example of how to do this with Docker + App Engine:
https://github.com/JustinBeckwith/next17/blob/master/videobooth/Dockerfile
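The idea is roughly this, as a sketch; installing the ffmpeg package assumes your base image's repositories provide it:
FROM python:3
# install the native ffmpeg binary that ffmpeg-normalize shells out to
RUN apt-get update && apt-get install -y ffmpeg
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt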
Since you're already using Docker, what happens when you build this container locally? Have you tried:
docker build -t myapp .
docker run -it -p 8080:8080 myapp
Hopefully one of these gives you a clue to figuring out what's happening. Hope this helps!

Deploying python site to production server

I have a Django website on a testing server and I am confused about how the deployment procedure should go.
Locally I have these folders:
code
virtualenv
static
static/app/bower_components
node_modules
Currently, only the code folder is in git.
My initial thought was to do this on the production server:
git clone repo
pip install
npm install
bower install
collectstatic
But I had this problem: sometimes some components in pip, npm, or bower fail to install, and then the production deployment fails.
I was thinking of putting everything (static, bower, npm, etc.) inside git so that I can fetch it all in production.
Is that the right way to do it? I want to know the right way to tackle this problem.
But I had this problem: sometimes some components in pip, npm, or bower fail to install, and then the production deployment fails.
There is no solution for this other than to find out why things are failing in production (or, as a workaround, to not install anything in production and just copy everything over).
I would caution against the second option because Python virtual environments are not designed to be portable. If you have components such as PIL/Pillow or database drivers, these need system libraries to be installed and compiled against at build time.
Here is what I would recommend, which is in line with the deployment section in the documentation:
Create an updated requirements file (pip freeze > requirements.txt)
Run collectstatic on your testing environment.
Move the static directory to your frontend/proxy machine, and map it to STATIC_URL. Confirm this works by browsing the static URL (for example: http://example.com/static/images/logo.png)
Clone/copy your codebase to the production server.
Create a blank virtual environment.
Install dependencies with pip install -r requirements.txt
Make sure you run through the deployment checklist, which includes security tips and settings you need to enable for production.
After this point, you can bring up your Django server using your favorite method; a condensed sketch of these steps follows.
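A condensed sketch of those steps (the repo URL is a placeholder):
pip freeze > requirements.txt    # on the testing machine
python manage.py collectstatic   # on the testing machine
# on the production server:
git clone https://example.com/your-repo.git
virtualenv env
. env/bin/activate
pip install -r requirements.txt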
There are many, many guides on deploying django and many are customized for particular environments (for example, AWS automation, Heroku deployment tips, Digital Ocean, etc.) You can browse those for ideas (I usually pick out any automation tips) but be careful adopting one strategy without making sure it works with your particular environment/requirements.
In addition, this might be helpful for some guidelines on deployment.

Openshift Python and Nodejs in the same app

I am building a Django project, and in OpenShift I have an app with the Python 2.7 and MySQL 5.5 cartridges. I also want to use bower to manage the client-side packages, but bower depends on npm and Node. In OpenShift I have npm installed, but I don't have Node, so I can't install bower.
How can I install Node.js in OpenShift?
Note: I don't have sudo permission in openshift.
Thanks.
The host environment provides access to npm and nodejs-0.6, even if you've selected the Python web service.
If you want to minimize your repo contents, and use OpenShift to run your builds remotely, I'd try using action_hooks to provide your own custom build steps.
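A sketch of such a hook, assuming the OpenShift v2 layout where .openshift/action_hooks/build runs on each push; the exact npm invocation that works in that environment is an assumption:
#!/bin/bash
# .openshift/action_hooks/build
npm install bower
node_modules/.bin/bower install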
You could also consider running your builds locally, and committing and shipping your build results, possibly via an alternate "release" or "build" branch.
