Communicate with Docker containers via API - python

I'm creating an API (FastAPI or Flask) that will be deployed in a Docker container. Its main objective will be to serve as a 'controller' that launches 2-3 Python apps via specific endpoints.
Each Python app is installed in a separate Docker container. Each app does a specific job: extracts data, creates an image, etc.
What options do I have to allow the 'controller' to communicate with the apps living in separate Docker containers?
While exploring, I identified the following routes:
Install the Docker CLI in the 'controller' container and designate endpoints that run the 'docker start' command to launch each Python app.
Create a separate REST API for each of the app containers and allow the 'controller' to interact with each app over HTTP(S). (Both routes are sketched below.)
Is there anything I'm missing? Maybe there are better routes?
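
For illustration only, here is a minimal Python sketch of both routes. The container name data-extractor and the /run endpoint are made up; route 1 uses the Docker SDK for Python (pip install docker) rather than shelling out to the Docker CLI, and it requires mounting the host's /var/run/docker.sock into the controller container.

# Hypothetical controller helpers; names and ports are placeholders.
import docker    # Docker SDK for Python
import requests

def start_app_container(name: str) -> None:
    # Route 1: talk to the Docker daemon directly (equivalent of 'docker start <name>').
    client = docker.from_env()
    client.containers.get(name).start()   # e.g. name = "data-extractor"

def trigger_extraction() -> dict:
    # Route 2: each app exposes its own REST API; the controller calls it over a
    # shared user-defined Docker network, using the service name as the hostname.
    resp = requests.post("http://data-extractor:8000/run", timeout=30)
    resp.raise_for_status()
    return resp.json()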

Related

Schedule task in Azure to request Container App

I have a Container App in Azure cloud which hosts an API.
My goal is to execute a task which sends some specific requests to this API and e-mails the results. I want to run it twice a day. Let's say it's a single Python script, task.py.
What would be the best option to achieve that?
So far I've seen the following options:
Azure Functions
Logic Apps
Container Registry tasks???
Please note that the container image used in the Container App is stored in Container Registry and contains the task.py script too.
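
Purely as an illustration of the first option (Azure Functions), not something from the question: a timer-triggered function in the Python v1 programming model could call the API on a twice-daily schedule and then send the e-mail. The URL, schedule and e-mail step below are placeholders.

# __init__.py of a timer-triggered Azure Function (illustrative sketch only)
import logging
import requests
import azure.functions as func

API_URL = "https://<your-container-app-url>/some-endpoint"  # placeholder

def main(mytimer: func.TimerRequest) -> None:
    # The twice-daily schedule lives in function.json, e.g. "0 0 8,20 * * *"
    # (NCRONTAB: 08:00 and 20:00 every day).
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    logging.info("API responded: %s", resp.text)
    # ...compose and send the e-mail with the results here...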

How to deploy a Python Flask application on another Windows 10 machine?

I have developed a Python Flask application (REST API). Now I want to deploy this application on a client system (Windows 10 Professional). My client doesn't have any internet service.
Previously, I did this in Java: I built a .war file and deployed it in Tomcat on the client's system, and he was able to access the REST API.
Now I want to know if there is a similar way to deploy the Python app on the client's system so that the REST API is accessible when his system starts.
Use PyInstaller:
pip install pyinstaller
Go to the project directory:
cd C:\Users\sandip\Desktop\MyPython
Then run:
pyinstaller --onefile HelloFlask.py
If you just want to make your REST APIs accessible to other users on the same network, you can do it without installing anything on the client side by replacing app.run() in your code with app.run(host='0.0.0.0'). By default a Flask app runs on localhost; changing it to the latter makes it listen on your machine's IP address, so it's accessible to all users on the same network. You can read more in Flask's documentation under the heading Externally Visible Server.
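
As a minimal sketch, reusing the HelloFlask.py name from the PyInstaller answer above (the route itself is made up):

# HelloFlask.py - tiny REST API listening on all interfaces (sketch only)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/hello")
def hello():
    return jsonify(message="Hello from the client machine")

if __name__ == "__main__":
    # host="0.0.0.0" binds to every interface, so other machines on the
    # same network can reach the API; the default only binds to localhost.
    app.run(host="0.0.0.0", port=5000)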
To deploy your app in production, you need a WSGI server; you can read about deployment of a Flask app here.
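
One possible production setup, not mentioned in the answer above and only a sketch: a pure-Python WSGI server such as waitress, which also runs on Windows, could serve the same app object.

# serve.py - run the Flask app under the waitress WSGI server (sketch only)
from waitress import serve
from HelloFlask import app  # the hypothetical app from the sketch above

serve(app, host="0.0.0.0", port=8080)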

Deploying python web app using docker image on azure

I am trying to follow this tutorial DEPLOY PYTHON WEB APP IN WEB APP FOR CONTAINERS
I have cloned the project and tested it manually, and it worked fine. The tutorial recommends pushing the Docker image to Docker Hub. Instead, I created a container registry on Azure itself and pushed the Docker image to Azure Container Registry. I haven't enabled the admin user in Azure Container Registry, so there's no need to worry about credentials, as it's not a private registry.
I then used the command mentioned in the tutorial and started the web app, but when I try to access the URL, it shows 'Service Unavailable'. I have no idea what I am doing wrong here.
Please help. Thanks
DOCKERFILE
# Base image that bundles Nginx and uWSGI for Flask applications
FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
# LISTEN_PORT tells this base image which port Nginx should listen on
ENV LISTEN_PORT=8000
EXPOSE 8000
# Copy the Flask application code into the image
COPY /app /app
For deploying a web application to Web App for Containers on Azure, the image is the key part.
When the image is in a public registry, you just need to follow the steps below:
Create a resource group in Azure.
Create a Service Plan.
Create the Web Application with the image.
For more details, see Deploy a Docker/Go web app in Web App for Containers.
When the image is in a private registry (take Azure Container Registry as an example here), you need one more step: set the container config for your web app like this:
az webapp config container set --name <app_name> --resource-group <resourceGroup_Name> --docker-custom-image-name <azure-container-registry-name>.azurecr.io/<image_name> --docker-registry-server-url https://<azure-container-registry-name>.azurecr.io --docker-registry-server-user <registry-username> --docker-registry-server-password <password>
For more details, see Use a Docker image from any private registry.

Am I using Python Flask's built-in server?

I am building a back end in Python via the Python Flask application from IBM Cloud/Bluemix. I have heard/read a lot of people complaining that Flask's built-in server isn't good for production. But how do I know if the application uses the Flask built-in server or if IBM sets up something else? Is there a simple way to see this in the code?
Deploying the Flask boilerplate app from the IBM cloud catalogue will indeed deploy a Flask application running on the Flask dev webserver.
You will need to alter the application if you want to run a production WSGI server.
I work for IBM and am in this stuff all day every day.
If you want to verify this, SSH into your application container on Cloud Foundry with the bash command
cf ssh <yourappnamehere>
You will need to have either the bluemix or cloud foundry CLIs installed and be logged in to the relevant endpoint before submitting this command.
It will open a bash shell in your application container, and you can cd around and open and/or download your project files for inspection.
This line:
app = Flask(__name__)
is a sure-fire way to know that you are running a Flask web server application.
If you are concerned with which WSGI server your application is running under, checking your procfile (you should see this when SSHing into your container) will show you which command starts your application. If the command is
python <yourapp>.py
then you are running the dev server. Otherwise, you would be running some other Python file, most likely via the server's command rather than the python command, which would import your application as a dependency.
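
For illustration (not something shipped with the boilerplate), such a file might look like the following, with myapp as a placeholder module name and gunicorn as one example of a production WSGI server:

# wsgi.py - entry point that a production WSGI server imports (sketch only)
from myapp import app  # "myapp" is a placeholder for the module defining Flask(__name__)

# In production the Procfile would contain something like:
#     web: gunicorn wsgi:app
# instead of:
#     web: python myapp.py
if __name__ == "__main__":
    app.run()  # only falls back to the dev server if run directly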
You can also take a look at whether or not any WSGI server libraries were downloaded during the compilation of your droplet, and what command was used to start your application with
cf logs <yourappname> --recent
after deploying it.
Or, you can just believe me that the boilerplate deploys a Flask app under a Flask dev server.
A tutorial on running Flask on a different WSGI server:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-14-04

how many docker containers should a java web app w/ database have?

I'm trying to "dockerize" my java web application and finally run the docker image on EC2.
My application is a WAR file and connects to a database. There is also a Python script which the application calls via REST. The Python side uses the Tornado web server.
Question 1:
Should I have the following Docker containers?
Container for Application Server (Tomcat 7)
Container for HTTP server (nginx or httpd)
Container for postgres db
Container for python script (this will have tornado web server and my python script).
Question 2:
What is the best way to build a Dockerfile? I will have to do trial and error to work out which commands need to go into the Dockerfile for each container. Should I have an Ubuntu VM on which I do the trial and error and, once I nail down which commands I need, put them into the Dockerfile for that container?
That list looks about right.
The advantage of splitting up your stack to separate containers is that you can (in many cases) use off-the-shelf official images, and only have to provide the right configuration to make them work together. In addition, you'd be able to upgrade the components (containers) separately.
Note that combining multiple services in a single container is not forbidden, but in Docker it's overall best practice to separate concerns and have a single container only be responsible for a single task/service.
To get all containers started with the right configuration, docker-compose is a good choice; it enables you to create a single file (docker-compose.yml, see https://docs.docker.com/compose/compose-file/) that describes your project: which images to build for each container, how the containers relate to each other, and which configuration to pass to them.
With docker-compose you can then start all containers by simply running
docker-compose up -d
You can use Docker Machine to create a Docker development environment on Mac or Windows. This is really good for trial and error. There is no need for an Ubuntu VM.
A Docker container does one thing only, so your application would consist of multiple containers, one for each component. You've also clearly identified the different containers for your application. Here is how the workflow might look:
Create a Dockerfile for each container: Tomcat, nginx, postgres, Tornado
Deploy the application to Tomcat in the Dockerfile or by mapping volumes
Build an image for each container
Optionally push these images to Docker Hub
If you plan to deploy these containers on multiple hosts, then create an overlay network
Use Docker Compose to start these containers together. It would use the network created previously. Alternatively, you can also use --x-networking with Docker Compose to create the network.
