I am trying to follow this tutorial: Deploy Python Web App in Web App for Containers.
I have cloned the project, tested it manually, and it worked fine. The tutorial recommended pushing the Docker image to Docker Hub. Instead, I created a container registry on Azure itself and pushed the Docker image to Azure Container Registry. I haven't enabled the admin user in Azure Container Registry, so there is no need to worry about credentials as it's not a private registry.
I then used the command mentioned in the tutorial and started the web app, but when I try to access the URL, it shows Service Unavailable. I have no idea what I am doing wrong here.
Please help. Thanks
Dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7
# The base image's Nginx listens on port 80 by default; override it to 8000.
ENV LISTEN_PORT=8000
# Document the port the container serves on.
EXPOSE 8000
# Copy the Flask application code into the image.
COPY /app /app
For deploying a web application on Azure Web App for Containers, the image is the key part.
When the image is in a public registry, you just need to follow the steps below:
Create a resource group in Azure.
Create a Service Plan.
Create the Web Application with the image.
For more details, see Deploy a Docker/Go web app in Web App for Containers.
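A minimal sketch of those three steps with the Azure CLI (the group, plan, and location names are placeholders; flags may differ slightly across CLI versions):
az group create --name myResourceGroup --location westeurope
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1 --is-linux
az webapp create --name <app_name> --resource-group myResourceGroup --plan myAppServicePlan --deployment-container-image-name <image_name>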
When the image is in a private registry (take Azure Container Registry as an example), you need one more step: set the container config for your web app like this:
az webapp config container set --name <app_name> --resource-group <resourceGroup_Name> \
  --docker-custom-image-name <azure-container-registry-name>.azurecr.io/<image_name> \
  --docker-registry-server-url https://<azure-container-registry-name>.azurecr.io \
  --docker-registry-server-user <registry-username> \
  --docker-registry-server-password <password>
For more details, see Use a Docker image from any private registry.
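As a side note, the username and password for that command come from the registry's admin account, which can be read (once the admin user is enabled) with:
az acr credential show --name <azure-container-registry-name>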
I'm creating an API (FastAPI or Flask) that will be deployed in a Docker container. Its main objective will be to serve as a 'controller' that launches 2-3 Python apps via specific endpoints.
Each python app is installed in a separate docker container. Each app does a specific job - extracts data, creates an image, etc.
What options do I have to allow the 'controller' to communicate with the apps living on separate docker containers?
While exploring, I identified the following routes:
Install the Docker CLI in the 'controller' container, with designated endpoints that run the 'docker start' command to launch each Python app (see the sketch after this list).
Create a separate REST API for each of the app containers and let the 'controller' interact with each app via HTTPS.
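A minimal sketch of that first route without the Docker CLI, using the Docker SDK for Python instead (this assumes the host's Docker socket is mounted into the 'controller' container; the container name is hypothetical):
import docker

# Talk to the host's Docker daemon through the mounted socket
# (e.g. the controller was run with -v /var/run/docker.sock:/var/run/docker.sock).
client = docker.from_env()

# Look up a stopped app container by name and start it.
client.containers.get("image-creator").start()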
Is there anything I'm missing? Maybe there are better routes?
I'm trying to write a Python script to auto-delete old prerelease Docker images that are taking up space in Google Container Registry. I'm stuck on how to authenticate to Google Cloud from a Python script and list the Docker images in GCR.
Google Container Registry (GCR) implements (the de facto standard) Docker Registry API HTTP V2.
This means that you can use docker, podman, and other tools that implement these APIs to interact with GCR, and it means that you should use a 3rd-party client library (often provided by Docker) to implement interaction with a Docker Registry such as GCR.
Google (!) documents this on the page Docker Registry API
There is no Google API to interact with GCR.
You can demonstrate this to yourself by running e.g. gcloud container images list with the --log-http flag to show that the underlying REST calls are of the form https://gcr.io/v2/{repo}/tags/list. This is documented on Docker's API page under GET tags.
I've not used it, but Docker provides a Docker SDK for Python, and this should enable you to interact with any registry that implements the Docker Registry API HTTP V2, such as GCR.
You must authenticate to GCR using a suitably permitted Google identity. This is documented in Authenticating using the Docker Registry API.
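As a sketch of what that can look like from Python, using requests, an access token from the gcloud CLI, and a placeholder project/repository name (GCR accepts the access token as the password for the oauth2accesstoken user):
import subprocess
import requests

# Get an OAuth2 access token for the active gcloud identity.
token = subprocess.check_output(
    ["gcloud", "auth", "print-access-token"], text=True
).strip()

# List tags for one repository via the Docker Registry API HTTP V2.
resp = requests.get(
    "https://gcr.io/v2/my-project/my-image/tags/list",
    auth=("oauth2accesstoken", token),
)
resp.raise_for_status()
print(resp.json()["tags"])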
You can use the Google Cloud Python clients in your Python script.
Then use a service account and download a JSON key for it from the GCP IAM page.
You have to give the needed permissions to this service account on the IAM page.
Before running the main part of your Python script, you can authenticate to GCP in your bash terminal:
export GOOGLE_APPLICATION_CREDENTIALS=your_path/your_service_account.json
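Once that variable is set, the Google Cloud Python clients pick the key up automatically; for example:
import google.auth

# Reads GOOGLE_APPLICATION_CREDENTIALS and returns the service account
# credentials together with the project they belong to.
credentials, project_id = google.auth.default()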
To list your Docker images, you can also use a shell script instead of a Python script. The following command lists all the images of the current project in GCR:
gcloud container images list
As explained before, in order to use gcloud commands, you have to be authenticated with your service account or another identity.
https://cloud.google.com/sdk/gcloud/reference/container/images/list
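If you prefer to drive that same listing from the Python script, one simple option is to shell out to gcloud; a sketch, assuming the SDK is installed and authenticated:
import json
import subprocess

# List all images of the current project in GCR, as JSON.
out = subprocess.check_output(
    ["gcloud", "container", "images", "list", "--format=json"], text=True
)
for image in json.loads(out):
    print(image["name"])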
I need to perform the following from a python program:
docker pull foo/bar:tag
docker tag foo/bar:tag gcr.io/project_id/mirror/foo/bar:tag
gcloud auth configure-docker --quiet
docker push gcr.io/project_id/mirror/foo/bar:tag
I want to accomplish this with the smallest possible footprint: no root, no privileged Docker installation, etc. The Google Cloud SDK is installed.
How can I programmatically mirror the image with a minimal app footprint?
The Google Cloud Build API can be used to perform all of your required steps in one command, or you can use a trigger.
gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/$IMAGE_NAME:v0.1 .
You can call the above command using the Cloud Build Python API:
https://googleapis.dev/python/cloudbuild/latest/gapic/v1/api.html
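A sketch of the mirroring steps as a single build submitted through that Python client (project and image names are placeholders); the pull, tag, and push all run on Cloud Build's workers, so no local root or Docker daemon is needed:
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()

# Pull the upstream image and retag it; any image listed under `images`
# is pushed to the registry when the build succeeds.
build = cloudbuild_v1.Build(
    steps=[
        cloudbuild_v1.BuildStep(
            name="gcr.io/cloud-builders/docker",
            args=["pull", "foo/bar:tag"],
        ),
        cloudbuild_v1.BuildStep(
            name="gcr.io/cloud-builders/docker",
            args=["tag", "foo/bar:tag", "gcr.io/project_id/mirror/foo/bar:tag"],
        ),
    ],
    images=["gcr.io/project_id/mirror/foo/bar:tag"],
)

operation = client.create_build(project_id="project_id", build=build)
operation.result()  # blocks until the build finishes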
I built my Flask application using flask-restplus. I tried it locally using runserver.py and I can see my Swagger API. I put it in a Docker container exposing port 80, ran it with -p 4000:80, and it also works fine. But when I push the image to the container registry in Azure and use it in my web application, all the other endpoints work; only the root that points to the Swagger API page doesn't =( I get the message
No spec provided.
Is there something else I need to expose, like another port? Or might it be Azure security? Maybe flask-restplus needs to fetch something from the internet, like a CDN to get the colors or SVGs?
Thanks guys.
I have an existing Django web application currently deployed on AWS. I want to deploy it on Microsoft Azure using Cloud Services. How do I create config files for deploying the web app on Azure? How do I access environment variables on Azure? I am not using Visual Studio; I am developing the web app in a Linux environment and using Git for code management. Please help.
It sounds like you want to know which is the best choice for deploying a Django app via Git for code management on Linux: Cloud Services or App Service on Azure.
Per my experience, I think deploying a pure web app into App Service on Azure via Git on Linux is the simplest way for you. You can refer to the official documents below to learn how to do it via the Azure CLI or Git alone.
Deploy your first Python web app to Azure in five minutes
Local Git Deployment to Azure App Service
And there is a code sample of Django on App Service as a reference, so you can see how to configure it to run on Azure.
However, if your app needs more powerful features & performance, using Cloud Services for your Django app is also a better way than using a VM directly. As references, please view the document Python web and worker roles with Python Tools for Visual Studio to learn how to make Azure support Python & Django on Cloud Services; you can create & deploy it via the Azure portal in the browser on Linux. Meanwhile, see the third-party GitHub sample Django WebRole for Cloud Service to learn how to create a Cloud Service project structure without PTVS for VS on Linux.
Hope it helps.
I read this post, decided the how-to guides Peter Pan posted looked good, and set off on my own. With my one business day's worth of experience: if you are looking to deploy your app to Azure, start with the Marketplace Django app and go from there. The reason is that the virtual environment comes with it, along with the activate script needed to run the virtual environment, and the web.config is set up for you. If you follow the start-from-scratch how-to guides, these are the hardest parts to get right. Once you create the app service from the template, do a git clone of the repo to your local machine. Make a small change and push it back up by running the command below in bash.
az webapp deployment source config-local-git --name <app name> --resource-group <group name> --query url --output tsv
Use the result of the command to add the git repo as a remote source.
git remote add azure https://<ftp_credential>@<app_name>.scm.azurewebsites.net/<app_name>.git
Finally, commit your changes and deploy:
git add -A
git commit -m "Test change"
git push azure master
A couple of side notes:
If you do not have your bash environment set up, you'll need to do so to use the az commands.
The marketplace app does not run error-free locally; I have not dug into this yet.
Good luck!