how many docker containers should a java web app w/ database have? - python

I'm trying to "dockerize" my java web application and finally run the docker image on EC2.
My application is a WAR file and connects to a database. There is also a Python script which the application calls via REST; the Python side uses the Tornado web server.
Question 1:
Should I have the following Docker containers?
Container for Application Server (Tomcat 7)
Container for HTTP Server (nginx or httpd)
Container for postgres db
Container for python script (this will have tornado web server and my python script).
Question 2:
What is the best way to build a Dockerfile? I will have to do trial and error to figure out which commands need to go into the Dockerfile for each container. Should I have an Ubuntu VM on which I do the trial and error, and once I nail down which commands I need, put them into the Dockerfile for that container?

That list looks about right.
The advantage of splitting up your stack into separate containers is that you can (in many cases) use off-the-shelf official images and only have to provide the right configuration to make them work together. In addition, you can upgrade the components (containers) separately.
Note that combining multiple services in a single container is not forbidden, but the overall best practice in Docker is to separate concerns and have a single container be responsible for a single task/service.
To get all containers started with the right configuration, docker-compose is a good choice; it lets you create a single file (docker-compose.yml, see https://docs.docker.com/compose/compose-file/) that describes your project: which image to build for each container, how the containers relate to each other, and which configuration to pass to them.
With docker-compose you can then start all containers by simply running
docker-compose up -d
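For example, a minimal docker-compose.yml sketch for the four containers above could look like the following; the service names, image tags, ports and the WAR path are assumptions, not details from the question:

web:
  image: nginx
  ports:
    - "80:80"          # only the HTTP server is published to the host
  links:
    - app
app:
  image: tomcat:7
  volumes:
    - ./myapp.war:/usr/local/tomcat/webapps/myapp.war   # hypothetical WAR name
  links:
    - db
    - pyapi
db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: example   # placeholder credential
pyapi:
  build: ./python-service        # Dockerfile for the Tornado script (assumed path)

nginx would still need a small proxy configuration that forwards application requests to app:8080; with links (or a user-defined network) the containers can reach each other by service name.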

You can use Docker Machine to create a Docker development environment on Mac or Windows. This is really good for trial and error. There is no need for an Ubuntu VM.
A Docker container should do one thing only, so your application would consist of multiple containers, one for each component. You've already clearly identified the different containers for your application. Here is what the workflow might look like:
Create a Dockerfile for each container: Tomcat, nginx, postgres, Tornado
Deploy the WAR to Tomcat in the Dockerfile or by mapping a volume (see the sketch after this list)
Build an image for each container
Optionally push these images to Docker Hub
If you plan to deploy these containers on multiple hosts, create an overlay network
Use Docker Compose to start these containers together. It would use the network created previously. Alternatively, you can use --x-networking with Docker Compose to create the network.
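For the second step, a minimal Dockerfile sketch for the Tomcat container could look like this; the WAR file name is an assumption:

# Dockerfile for the Tomcat container (sketch)
FROM tomcat:7
# copy the application WAR into Tomcat's webapps directory so it is deployed on startup
COPY myapp.war /usr/local/tomcat/webapps/myapp.war
EXPOSE 8080

Building it with docker build -t myapp-tomcat . then covers the image-building step for this container.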

Related

Communicate with Docker containers via API

I'm creating an API (FastAPI or Flask) that will be deployed in a docker container. Its main objective will be to serve as a 'controller' that launches 2-3 Python apps via specific endpoints.
Each python app is installed in a separate docker container. Each app does a specific job - extracts data, creates an image, etc.
What options do I have to allow the 'controller' to communicate with the apps living on separate docker containers?
While exploring, I identified the following routes:
Install the Docker CLI in the 'controller' container, with designated endpoints that run the 'docker start' command to launch each Python app.
Create a separate REST API for each of the app containers and allow the 'controller' to interact with each app via HTTPS (see the sketch after this question).
Is there anything I'm missing? Maybe there are better routes?
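A minimal sketch of the second route, assuming the containers share a Docker network so the controller can reach each app by its service name; the service names, ports and endpoint paths below are hypothetical:

import requests
from fastapi import FastAPI, HTTPException

app = FastAPI()

# hypothetical app containers, reachable by their compose service names
APPS = {
    "extractor": "http://extractor:8001/run",
    "imager": "http://imager:8002/run",
}

@app.post("/run/{app_name}")
def run_app(app_name: str):
    if app_name not in APPS:
        raise HTTPException(status_code=404, detail="unknown app")
    # trigger the job in the target container and relay its response
    response = requests.post(APPS[app_name], timeout=30)
    return {"app": app_name, "status": response.status_code, "result": response.json()}

For the first route, the Docker SDK for Python can start containers directly (docker.from_env().containers.get("name").start()) instead of shelling out to the CLI, provided the controller container is given access to the Docker socket.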

Consume a docker container inside Django docker container? Connecting two docker containers

I have a Django container and I want to consume another DL (deep learning) container inside it. For example, I have a Django app that predicts image classes, and I want to make the prediction using a docker container rather than a Python library. That Django app will be containerised as well, so in production I will have three docker containers: a Django container, a Postgres container and a YoloV5 container. How can I link Django with YoloV5 so that the prediction inside Django is done by the YoloV5 container?
I want to connect a deep learning container with the Django container so that predictions are made by the DL container and not by a Python package.
The easiest way to do this is to make a network call to the other container. You may find it simplest to wrap the YoloV5 code in a very thin web layer, e.g. using Flask, to create an API. Then call that in your Django container when you need it using requests.
As suggested by Nick and others, the solution is to call the YoloV5 docker container from the Django container using host.docker.internal: inside the Django container (views.py), I used host.docker.internal as the hostname when calling the YoloV5 container.
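A sketch of that setup; the port, endpoint path and payload format are assumptions, and run_yolov5() is a hypothetical stand-in for whatever function runs the model inside the YoloV5 container:

# --- inside the YoloV5 container: a thin Flask wrapper around the model ---
from flask import Flask, request, jsonify

api = Flask(__name__)

@api.route("/predict", methods=["POST"])
def predict():
    image_bytes = request.files["image"].read()
    result = run_yolov5(image_bytes)   # hypothetical: runs the model, returns a dict
    return jsonify(result)

if __name__ == "__main__":
    api.run(host="0.0.0.0", port=5000)

# --- inside the Django container (views.py): call the wrapper with requests ---
import requests

def classify(image_path):
    with open(image_path, "rb") as f:
        # host.docker.internal as in the accepted approach; with Docker Compose you
        # could use the YoloV5 service name instead, e.g. http://yolov5:5000/predict
        resp = requests.post("http://host.docker.internal:5000/predict",
                             files={"image": f}, timeout=60)
    return resp.json()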

How to use a single port for 3 containers (Python-Flask, PostgreSQL & Angular8) instead of 3 ports, so I can use Docker Run instead of Docker Compose?

I have created three containers: one for a Python Flask application, a second for a PostgreSQL db and a third for Angular8. I'm using Docker Compose to run this. My question: each container has its own port, so three ports in total. Is there a way I can use only one port to run this whole application, with Docker Run instead of Docker Compose? All I want is a single port through which the API can be called from anywhere.
If the only thing you want to be visible to the "outside" is the API, you can use the --link flag when calling docker run. Basically, start up the PG container, then start up the Flask container, linking to PG, then start up the Angular container, linking to Flask. However, the --link flag is a legacy feature, and may disappear sometime in the future.
Another option is to create a network with docker network create and make sure your three containers are all using that same network. They should all be able to communicate with each other in this way, and you just need to publish the API port so that other apps can use your API.
I'm not sure what your requirements are, but docker-compose is generally the cleaner way to do it, as it helps you achieve consistency in your automations.
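For reference, a sketch of the docker network approach from the previous answer; the network name, container names and images are assumptions:

# create a user-defined network so the containers can reach each other by name
docker network create mynet

# the database and the Angular frontend join the network but publish no ports
docker run -d --name db --network mynet postgres
docker run -d --name web --network mynet my-angular-image

# only the Flask API publishes a port, so a single port is visible from outside
docker run -d --name api --network mynet -p 8000:8000 my-flask-image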

Docker image with python & mongodb. Exporting from container to the host

I wrote a Python script that analyzes and filters stock data. I write the stock data into a mongodb. The result is a CSV file with the filtered values in it.
Is it now possible to create a docker container that contains Python & mongodb and copies the CSV from the container to the host?
I tried creating a Dockerfile with Python only, but when it comes to adding the mongodb service and exporting the file to the host I am a little out of my depth.
My goal is that at the end I'll have one Docker container that runs the python script & exports the file to the host.
Do you know any best practice? Or a good tutorial that covers my needs?
I would not recommend installing python and mongodb in the same docker container. Usually the db and the app should run in separate containers managed with docker-compose. Still, if you want them in the same container, you can do so by taking an Ubuntu image (or anything else you are comfortable with), installing mongodb and python on it, and then running your scripts. I found a git repo that contains one such Dockerfile.
Regarding copying CSVs from the container to the host machine, you can do so by using volumes, if you go with docker-compose (which I would totally recommend), or you can use the docker cp command to get the data out of the container manually.
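A minimal docker-compose sketch of the two-container layout, with a bind-mounted volume so the CSV lands on the host; the paths and image tag are assumptions:

app:
  build: .                  # Dockerfile with Python and the analysis script
  links:
    - mongo
  volumes:
    - ./output:/app/output  # the script writes its CSV here; it appears on the host
mongo:
  image: mongo

Without a volume, docker cp <container>:/app/output/result.csv . (using the same assumed path) copies the file out manually.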

How to serve static files from a Dockerized Python web app?

I have a Python web application that sits behind Nginx, and is served via Gunicorn.
I have configured it so that Nginx serves static files directly from disk and only passes requests on to Gunicorn when they are not for static assets such as images.
My questions:
Is it a good idea or a big "no" to dockerize the web app together with static assets?
If I want to deploy my container on 2 servers, which both need access to the same assets, how can I make the static assets as portable as the containerized app?
What I'd like to have if possible:
I'd like to put my app in a container and I would like to make it as portable as possible, without spending more funds on additional resources such as a separate server to keep the images (like a DB)
If you know your app will always-and-forever have the same static assets, then just containerize them with the app and be done with it.
But things change, so when you need it I would recommend a Docker Volume Container approach: put your static assets in a DVC and mount that DVC in the main container so it's all pretty much "just one app container". You could use Docker Compose something like this:
appdata:
  image: busybox
  volumes:
    - /path/to/app/static
  command: echo "I'm just a volume container"
app:
  build: .
  volumes_from:
    - appdata
  command: …
You can expand further by starting your container with a bootstrap script that copies initial static files into the destination path on startup. That way your app is guaranteed to always have a default set to get started, but you can add more static files as the app grows. For an example of this, pull the official Jenkins container and read /usr/local/bin/jenkins.sh.
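A minimal sketch of such a bootstrap script, assuming the same /path/to/app/static volume path as in the compose example above and a hypothetical /app/static-defaults directory baked into the image:

#!/bin/sh
# entrypoint.sh (sketch): seed the shared volume with default static files on
# first start, then hand off to the real server process
if [ -z "$(ls -A /path/to/app/static)" ]; then
    cp -r /app/static-defaults/. /path/to/app/static/
fi
exec "$@"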
I agree with kojiro: if things do not change much, containerize the static files with your app. Regarding your second question, it seems you find the Docker Volume Container approach still not flexible enough, since you will have multiple docker hosts. Maybe Flocker addresses your requirements? From the Flocker docs (https://docs.clusterhq.com/en/0.3.2/):
Flocker lets you move your Docker containers and their data together
between Linux hosts. This means that you can run your databases,
queues and key-value stores in Docker and move them around as easily
as the rest of your app. Even stateless apps depend on many stateful
services and currently running these services in Docker containers in
production is nearly impossible. Flocker aims to solve this problem by
providing an orchestration framework that allows you to port both your
stateful and stateless containers between environments.
