I am new to Python and Docker. I have a LUIS endpoint on Azure, and I have containerized it with Docker on my localhost, listening on port 5000. Now I need to create a Python web API that will call this (LUIS container) Docker endpoint.
How to achieve this?
You can take a look at: https://docker-py.readthedocs.io/en/stable/
Looks like you can create containers with it: https://docker-py.readthedocs.io/en/stable/containers.html
You already have an endpoint hosted in a Docker container, and now you need a Python script that calls that endpoint? Then I think you need the requests library. It's best to put your API credentials in environment variables, which you can read with os.environ: https://docs.python.org/3/library/os.html#os.environ
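For example, a minimal sketch of such a call could look like the following, assuming the container is reachable on localhost:5000 and using the LUIS v3 prediction route; the app ID and key names below are placeholders read from environment variables, not values from your setup:

import os
import requests

# Assumptions: the LUIS container listens on localhost:5000, and the app ID
# and endpoint key are provided via environment variables you set yourself.
LUIS_URL = "http://localhost:5000"
APP_ID = os.environ["LUIS_APP_ID"]              # placeholder variable name
ENDPOINT_KEY = os.environ["LUIS_ENDPOINT_KEY"]  # placeholder variable name

def predict(query):
    # Query the containerized LUIS prediction endpoint and return the parsed JSON.
    url = f"{LUIS_URL}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
    response = requests.get(url, params={"query": query, "subscription-key": ENDPOINT_KEY})
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(predict("book a flight to Paris"))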
I'm creating an API (FastAPI or Flask) that will be deployed in a Docker container. Its main objective will be to serve as a 'controller' that launches 2-3 Python apps via specific endpoints.
Each Python app is installed in a separate Docker container. Each app does a specific job: extracts data, creates an image, etc.
What options do I have to allow the 'controller' to communicate with the apps living in separate Docker containers?
While exploring, I identified the following routes:
Install the Docker CLI in the 'controller' container, with designated endpoints that run a 'docker start' command to launch each Python app.
Create a separate REST API for each of the app containers and allow the 'controller' to interact with each app via HTTPS.
Is there anything I'm missing? Maybe there are better routes?
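For context, here is a rough sketch of how I imagine the first route could look using the Docker SDK for Python instead of shelling out to the Docker CLI; the container names and endpoint are placeholders, and the controller would need access to the Docker daemon (e.g. a mounted /var/run/docker.sock):

import docker
from fastapi import FastAPI

app = FastAPI()
client = docker.from_env()  # connects to the Docker daemon (e.g. via the mounted socket)

# Placeholder mapping of controller endpoints to app container names.
APP_CONTAINERS = {"extract": "data-extractor", "render": "image-renderer"}

@app.post("/run/{job}")
def run_job(job: str):
    # Start the (already created) container that performs the requested job.
    container = client.containers.get(APP_CONTAINERS[job])
    container.start()
    return {"started": container.name}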
I am trying to convert a docker-compose file to native Kubernetes manifests. In my docker-compose file I have depends_on for my nginx, which depends upon the backend MySQL and Python app.
In Kubernetes, how do I map the connection from my nginx (static site code inside /var/www/html/app) to the backend Python and MySQL using a k8s Service or ENV?
Do I need to specify a connection string in my nginx web app (static site code), or can it be managed using ENV? How does the connection work?
Thanks in Advance
As you can read here:
Kubernetes supports 2 primary modes of finding a Service - environment
variables and DNS. The former works out of the box while the latter
requires the CoreDNS cluster addon.
So you don't have to hardcode it. Instead, in your application code you can refer to the Kubernetes Service which exposes your backend Pods to other Pods inside your cluster.
If you decide to use environment variables, remember that your Service must be created prior to your frontend Pods so the corresponding environment variables are set. If they were already running before you exposed your backend Pods via ClusterIP Service, no problem, you can recreate such Pods e.g. by scaling your Deployment in and out:
kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2
If you've exposed your python-backend Deployment by running:
kubectl expose deployment python-backend
your Service will have the same name i.e. python-backend.
If you need to refer to it from any Pod in your cluster (e.g. frontend ones), you can do it in either of two ways:
Environment variables:
$PYTHON_BACKEND_SERVICE_HOST:80
or even:
$PYTHON_BACKEND_SERVICE_HOST:$PYTHON_BACKEND_SERVICE_PORT
if you think your backend application port may change for some reason.
DNS:
Simply use your Service name, i.e. python-backend, if both your frontend and backend Pods are running within the same namespace, or the fully qualified domain name (FQDN), which has the following form:
<service-name>.<namespace>.svc.cluster.local
which in your case will be:
python-backend.default.svc.cluster.local
So, keeping in mind that you add to your application code (or its configuration files) connection strings like:
either $PYTHON_BACKEND_SERVICE_HOST:80 or python-backend.default.svc.cluster.local:80
you only need to remember to name your Services accordingly.
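To make that concrete, here is a small Python sketch of how any Pod in the cluster could build the backend URL, falling back to the DNS name if the environment variables aren't present (e.g. because the Pod was started before the Service existed); the default namespace is assumed:

import os

# Environment-variable mode: these are injected by Kubernetes into Pods
# started after the python-backend Service was created.
host = os.environ.get("PYTHON_BACKEND_SERVICE_HOST")
port = os.environ.get("PYTHON_BACKEND_SERVICE_PORT", "80")

if host is None:
    # DNS mode: fall back to the Service's FQDN (assuming the default namespace).
    host = "python-backend.default.svc.cluster.local"

backend_url = f"http://{host}:{port}"
print(backend_url)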
I have created three containers: one for a Python Flask application, a second for a PostgreSQL db, and a third for Angular 8. I'm using Docker Compose to run this. My question: each container has its own port, so three ports in total. Is there a way I can use only one port to run the whole application, e.g. with docker run instead of Docker Compose? All I want is a single port where this API can be called from anywhere.
If the only thing you want to be visible to the "outside" is the API, you can use the --link flag when calling docker run. Basically, start up the PG container, then start up the Flask container, linking to PG, then start up the Angular container, linking to Flask. However, the --link flag is a legacy feature, and may disappear sometime in the future.
Another option is to create a network with docker network create and make sure your three containers are all using that same network. They should all be able to communicate with each other in this way, and you just need to publish the API port so that other apps can use your API.
I'm not sure what your requirements are, but docker-compose is generally the cleaner way to do it, as it helps you achieve consistency in your automations.
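Since you're working in Python anyway, here is a rough sketch of that docker network approach using the Docker SDK for Python; the image names, network name, credentials and published port are placeholders, not taken from your setup:

import docker

client = docker.from_env()

# User-defined bridge network: containers attached to it can reach each other by name.
client.networks.create("app-net", driver="bridge")

# PostgreSQL: no published ports, only reachable from inside the network.
client.containers.run(
    "postgres:13", name="db", network="app-net",
    environment={"POSTGRES_PASSWORD": "example"},  # placeholder credentials
    detach=True,
)

# Flask API: the only container with a published port (host 8080 -> container 5000).
client.containers.run(
    "my-flask-api", name="api", network="app-net",
    ports={"5000/tcp": 8080},
    detach=True,
)

# The Angular container can be attached to the same network in the same way,
# with or without a published port, depending on what should be reachable.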
I built my Flask application using Flask-RESTPlus. I tried it locally using runserver.py and I can see my Swagger API. I put it in a Docker container exposing port 80, run it with -p 4000:80, and it also works fine. But when I push the image to the container registry in Azure and use it in my web application, all the other endpoints work; only the root that points to the Swagger API page doesn't =( I get the message
No spec provided.
Are there other things that I need to expose, like another port? Or might it be Azure security? Maybe Flask-RESTPlus needs to get something from the internet, like a CDN, to get the colors or SVGs?
thanks guys.
I'm trying to "dockerize" my Java web application and finally run the Docker image on EC2.
My application is a WAR file and connects to a database. There is also a Python script which the application calls via REST. The Python side uses the Tornado web server.
Question 1:
Should I have the following Docker containers?
Container for Application Server (Tomcat 7)
Container for HTTP Server (nginx or httpd)
Container for postgres db
Container for python script (this will have tornado web server and my python script).
Question 2:
What is the best way to build the Dockerfiles? I will have to do trial and error to figure out what commands need to go into the Dockerfile for each container. Should I have an Ubuntu VM on which I do trial and error and, once I nail down which commands I need, put them into the Dockerfile for that container?
That list looks about right.
The advantage of splitting up your stack to separate containers is that you can (in many cases) use off-the-shelf official images, and only have to provide the right configuration to make them work together. In addition, you'd be able to upgrade the components (containers) separately.
Note that combining multiple services in a single container is not forbidden, but in Docker it's overall best practice to separate concerns and have a single container only be responsible for a single task/service.
To get all containers started with the right configuration, docker-compose is a good choice; it enables you to create a single file (docker-compose.yml, see https://docs.docker.com/compose/compose-file/) that describes your project: which images to build for each container, how the containers relate to each other, and what configuration to pass to them.
With docker-compose you can then start all containers by simply running
docker-compose up -d
You can use Docker Machine to create a Docker development environment on Mac or Windows. This is really good for trial and error. There is no need for an Ubuntu VM.
A Docker container does one thing only, so your application would consist of multiple containers, one for each component. You've also clearly identified the different containers for your application. Here is what the workflow might look like:
Create a Dockerfile for each container: Tomcat, nginx, postgres, Tornado
Deploy the application to Tomcat in the Dockerfile or by mapping volumes
Create an image for each container
Optionally push these images to Docker Hub
If you plan to deploy these containers on multiple hosts, create an overlay network
Use Docker Compose to start these containers together; it will use the network created previously. Alternatively, you can use --x-networking to have Docker Compose create the network.