I am trying to set up an application running in a Python 3 App Engine flexible environment. I have an app.yaml file:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT application:app
runtime_config:
  python_version: 3
I have a requirements.txt listing some packages my app needs:
Flask==0.12
gunicorn==19.7.1
...
I also have a common functions package that is located in a GCP source repository (git). I don't want to host it publicly on PyPI. Is it possible to still include it as a requirement? Something like:
git+https://source.developers.google.com/p/app/r/common
Using the above asks for a username and password when I try it on my local machine, even though I have a credential helper set up:
git config credential.helper gcloud.sh
You can add the -i http://yourhost.com --trusted-host yourhost.com flags to the requirements.txt file.
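For illustration, a sketch of what the top of such a requirements.txt could look like (the host name and index path below are placeholders, not the asker's real index):
-i http://yourhost.com/simple/
--trusted-host yourhost.com
Flask==0.12
gunicorn==19.7.1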
I'm very new to DevOps, so this may be a very silly question. I'm trying to deploy a Python web-scraping script onto an Azure Web App using GitHub Actions. The script is meant to run for a long period of time, analyzing websites word by word for hours, and it logs the results to .log files.
I know a bit about how GitHub Actions works; I know that I can trigger jobs when I push to the repo, for instance. However, I'm a bit confused as to how one runs the app or a script on an Azure resource (like a VM or a Web App), for example. Does this process involve SSH-ing into the resource and then automatically running the CLI command "python main.py" or "docker-compose up", or is there something more sophisticated involved?
For better context, this is my script inside of my workflows folder:
on: [push]

env:
  AZURE_WEBAPP_NAME: emotional-news-service   # set this to your application's name
  WORKING_DIRECTORY: '.'                      # set this to the path of your working directory inside the GitHub repository; defaults to the repository root
  PYTHON_VERSION: '3.9'
  STARTUP_COMMAND: 'docker-compose up --build -d'   # set this to the startup command required to start the gunicorn server; it is empty by default

name: Build and deploy Python app

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment: dev
    steps:
      # checkout the repo
      - uses: actions/checkout@master
      # setup python
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      # setup docker compose
      - uses: KengoTODA/actions-setup-docker-compose@main
        with:
          version: '1.26.2'
      # install dependencies
      - name: python install
        working-directory: ${{ env.WORKING_DIRECTORY }}
        run: |
          sudo apt install python${{ env.PYTHON_VERSION }}-venv
          python -m venv --copies antenv
          source antenv/bin/activate
          pip install setuptools
          pip install -r requirements.txt
          python -m spacy download en_core_web_md
      # Azure login
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/appservice-settings@v1
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          mask-inputs: false
          general-settings-json: '{"linuxFxVersion": "PYTHON|${{ env.PYTHON_VERSION }}"}'   # general configuration settings as key-value pairs
      # deploy web app
      - uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          package: ${{ env.WORKING_DIRECTORY }}
          startup-command: ${{ env.STARTUP_COMMAND }}
      # Azure logout
      - name: logout
        run: |
          az logout
Most of the workflow above was taken from: https://github.com/Azure/actions-workflow-samples/blob/master/AppService/python-webapp-on-azure.yml.
Is env.STARTUP_COMMAND the "SSH and then run the command" part that I was thinking of, or is it something else entirely?
I also have another question: is there a better way to view logs from that Python script running inside the Azure resource? The only way I can think of is to SSH into it and then type in "cat 'whatever.log'".
Thanks in advance!
Instead of using STARTUP_COMMAND: 'docker-compose up --build -d', you can use the startup command or the startup file name:
startUpCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'
or
StartupCommand: 'startup.txt'
The StartupCommand parameter here tells App Service how to run the app, whose Flask object lives in startup.py. By default, Azure App Service looks for the Flask app object in a file named app.py or application.py; if your code doesn't follow this pattern, you need to customize the startup command. Django apps may not need any customization at all. For more information, see How to configure Python on Azure App Service - Customize startup command.
Also, because the python-vscode-flask-tutorial repository contains the same startup command in a file named startup.txt, you could specify that file in the StartupCommand parameter rather than the command, by using StartupCommand: 'startup.txt'.
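As a sketch, startup.txt would then contain just that single command (using the gunicorn example above):
gunicorn --bind=0.0.0.0 --workers=4 startup:app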
Refer here for more info.
My current organization is migrating to Datadog for Application Performance Monitoring. I am deploying a Python Flask web application using Docker to Azure Container Registry. After the deployment to Azure, the app should be listed/available on the Datadog portal.
Please note I just started learning about Docker containers. There is a high chance I am doing this completely wrong, so please bear with me.
Steps followed
Option 1: Create a Docker container on the local machine and push it to ACR
Added the dd-trace Python library to the Docker image
Added the ddtrace-run command to the Dockerfile
Built the image
Ran the container locally
Got OSError: [Errno 99] Cannot assign requested address
FROM python:3.7
ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ENV DD_API_KEY=apikeyfromdatadoghq
ENV DD_ENV=safhire-dev
ENV DD_LOGS_ENABLED=true
ENV DD_LOGS_INJECTION=true
ENV DD_SERVICE=dev-az1-pythonbusinessservice
ENV DD_TAGS=products:myprojects
ENV DD_TRACE_DEBUG=true
ENV DD_TRACE_ENABLED=true
ENV DOCKER_ENABLE_CI=true
COPY /app /app
COPY requirements.txt /
RUN pip install --no-cache-dir -U pip
RUN pip install --no-cache-dir -r /requirements.txt
CMD ddtrace-run python app/main.py runserver 127.0.0.1:3000
Option 2: Forward logs to Azure Blob Storage (but a heavy process)
Deploy the Python app as code on Linux
Forward the logs to an Azure Blob storage
Create a BlobTrigger Azure Function to forward the logs to the Datadog API
I believe with this approach we cannot capture APM logs, but we can capture application and console logs
Option 3: Using Serilog, but my organization does not want to use a third-party logging framework; we have our own logging framework
Any help is highly appreciated; I am looking for a solution using Option 1. I went through the Microsoft articles and the Datadog documentation, but no luck.
I set up app registrations, managed reader permissions on the subscription, and created a client ID and app secrets in the Azure portal. None of them helped.
Could you confirm whether there is a way to collect the APM logs in Datadog without installing an agent on Azure?
Thank you in advance.
After a few days of research and follow-up with the Datadog support team, I am able to get the APM logs on the Datadog portal.
Below is my docker-compose.yml configuration; I believe it will help someone in the future.
version: "3"
services:
web:
build: web
command: ddtrace-run python standalone_api.py
volumes:
- .:/usr/src/app
depends_on:
datadog-agent:
condition: service_healthy
image: pythonbusinessservice:ICDNew
ports:
- 5000:5000
environment:
- DATADOG_HOST=datadog-agent
- DD_TRACE_AGENT_PORT=8126
- DD_AGENT_HOST=datadog-agent
datadog-agent:
build: datadog
image: gcr.io/datadoghq/agent:latest
ports:
- 8126:8126
environment:
- DD_API_KEY=9e3rfg*****************adf3
- DD_SITE=datadoghq.com
- DD_HOSTNAME=pythonbusinessservice
- DD_TAGS=env:dev
- DD_APM_ENABLED=true
- DD_APM_NON_LOCAL_TRAFFIC=true
- DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
- DD_SERVICE=pythonbusinessservice
- DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true
- DD_CONTAINER_EXCLUDE="name:datadog-agent"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /proc/:/host/proc/:ro
- /opt/datadog-agent/run:/opt/datadog-agent/run:rw
- /sys/fs/cgroup:/host/sys/fs/cgroup:ro
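As a rough usage sketch (standard docker-compose commands, using the service names defined above), the stack can be brought up and checked with:
docker-compose up --build -d
docker-compose ps
docker-compose logs -f web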
The Dockerfile for my long-running Python application:
FROM python:3.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["ddtrace-run python", "/app/standalone_api.py"]
Please note that the requirements.txt file has the ddtrace package listed.
Super new to Python, and I have never used Docker before. I want to host my Python script on Google Cloud Run but need to package it into a Docker container to submit to Google.
What exactly needs to go in this Dockerfile to upload to Google?
Current info:
Python: v3.9.1
Flask: v1.1.2
Selenium Web Driver: v3.141.0
Firefox Geckodriver: v0.28.0
Beautifulsoup4: v4.9.3
Pandas: v1.2.0
Let me know if further information about the script is required.
I have found the following snippets of code to use as a starting point from here. I just don't know how to adjust them to fit my specifications, nor do I know what 'gunicorn' is used for.
# Use the official Python image.
# https://hub.docker.com/_/python
FROM python:3.7
# Install manually all the missing libraries
RUN apt-get update
RUN apt-get install -y gconf-service libasound2 libatk1.0-0 libcairo2 libcups2 libfontconfig1 libgdk-pixbuf2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libxss1 fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils
# Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
# Install Python dependencies.
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . .
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app
# requirements.txt
Flask==1.0.2
gunicorn==19.9.0
selenium==3.141.0
chromedriver-binary==77.0.3865.40.0
Gunicorn is an application server for running your Python application instance; it is a pure-Python HTTP server for WSGI applications. It allows you to serve a Python application concurrently by running multiple worker processes on a single machine.
Please have a look at the following tutorial, which explains gunicorn in detail.
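For example, a minimal sketch of a gunicorn invocation (assuming a main.py module exposing a Flask object named app, as in the Dockerfile above):
gunicorn --bind :8080 --workers 1 --threads 8 main:app
Here main is the module name and app is the WSGI application object inside it; in the Dockerfile the bind address uses $PORT because Cloud Run injects the port at runtime.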
Regarding Cloud Run: to deploy to Cloud Run, please follow the next steps or the Cloud Run official documentation:
1) Create a folder
2) In that folder, create a file named main.py and write your Flask code
Example of simple Flask code
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    name = os.environ.get("NAME", "World")
    return "Hello {}!".format(name)

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
3) Now your app is finished and ready to be containerized and uploaded to Container Registry
3.1) So to containerize your app, you need a Dockerfile in the same directory as the source files (main.py)
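For reference, a minimal Dockerfile sketch for this hello-world case (the python:3.9-slim base image and the inline pip install are assumptions; in practice you would copy and install your own requirements.txt instead):
# Use an official Python runtime as the base image
FROM python:3.9-slim
# Copy local code into the container image
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install the minimal dependencies for the sample app
RUN pip install --no-cache-dir Flask gunicorn
# Run the web service on container startup, binding to the port Cloud Run provides
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app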
3.2) Now build your container image using Cloud Build by running the following command from the directory containing the Dockerfile:
gcloud builds submit --tag gcr.io/PROJECT-ID/FOLDER_NAME
where PROJECT-ID is your GCP project ID. You can get it by running gcloud config get-value project
4) Finally you can deploy to Cloud Run by executing the following command:
gcloud run deploy --image gcr.io/PROJECT-ID/FOLDER_NAME --platform managed
You can also have a look into the Google Cloud Run Official GitHub Repository for a Cloud Run Hello World Sample.
I have a Python Serverless project that uses a private Git repo (on GitHub).
My requirements.txt file looks like this:
itsdangerous==0.24
boto3>=1.7
git+ssh://git@github.com/company/repo.git#egg=my_alias
The project configuration mainly looks like this:
plugins:
  - serverless-python-requirements
  - serverless-wsgi

custom:
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: true
    dockerSsh: true
When I deploy using this command:
sls deploy --aws-profile my_id --stage dev --region eu-west-1
I get this error:
Command "git clone -q ssh://git#github.com/company/repo.git /tmp/pip-install-a0_8bh5a/my_alias" failed with error code 128 in None
What am I doing wrong? I'm suspecting either the way I configured my SSH key for Github access or the configurations of the serverless package.
So the only way I managed to sort this issue was:
Configure the SSH key WITH NO PASSPHRASE, following the steps here (a minimal key-generation sketch is shown after this list).
In serverless.yml, I added the following:
custom:
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: true
    dockerSsh: true
    dockerSshSymlink: ~/.ssh
Notice I added dockerSshSymlink to point at the location of the SSH files on my local machine, ~/.ssh.
In requirements.txt, I added my private dependency like this:
git+ssh://git@github.com/my_comp/my_repo.git#egg=MyRepo
All works.
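For step 1, a minimal sketch of generating a key without a passphrase (the comment email and key path are placeholders):
ssh-keygen -t rsa -b 4096 -C "you@example.com" -N "" -f ~/.ssh/id_rsa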
Although not recommended, have you tried using sudo sls deploy --aws-profile my_id --stage dev --region eu-west-1?
This error can also be caused by using the wrong password or SSH key.
I need to deploy a Python application to AWS Elastic Beanstalk; however, this module requires dependencies from our private PyPI index. How can I configure pip (like what you do with ~/.pip/pip.conf) so that AWS can connect to our private index while deploying the application?
My last resort is to modify requirements.txt to add -i URL before the dependency prior to deployment, but there must be a cleaner way to achieve this goal.
In .ebextensions/files.config add something like this:
files:
  "/opt/python/run/venv/pip.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [global]
      find-links = <URL>
      trusted-host = <HOST>
      index-url = <URL>
Or whatever other configuration you'd like to set in your pip.conf. This will place the pip.conf file in the virtual environment of your application, which will be activated before pip install -r requirements.txt is executed. Hopefully this helps!
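Filled in for a hypothetical private index, the rendered pip.conf would look something like this (URL and host are placeholders):
[global]
find-links = https://pypi.internal.example.com/
trusted-host = pypi.internal.example.com
index-url = https://pypi.internal.example.com/simple/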