I've been trying to understand how to go about deploying my Python Function App to Azure using Bitbucket pipelines.
I've read some answers on the web, and it seems pretty simple once I have my python app zipped.
It can easily be done using this answer: Azure Function and BitBucket build pipelines
script:
  - pipe: microsoft/azure-functions-deploy:1.0.2
    variables:
      AZURE_APP_ID: $AZURE_APP_ID
      AZURE_PASSWORD: $AZURE_PASSWORD
      AZURE_TENANT_ID: $AZURE_TENANT_ID
      FUNCTION_APP_NAME: '<string>'
      ZIP_FILE: '<string>'
However, I can't, for the life of me, find the format Azure Functions is expecting the zip file to be in.
Where do the requirements go? Even better: what pipeline spec comes before this one that creates the sought-after ZIP_FILE?
Thanks!
I tried this solution and it works for me too, but there is a deprecation problem. The pipe microsoft/azure-functions-deploy uses a deprecated image for the Azure CLI, microsoft/azure-cli; you can read about this here.
You can use the Atlassian version of this pipe, but for Python it didn't work for me, because the command:
az functionapp deployment source config-zip...
does not specify --build-remote.
So my solution is not to use the pipe, but to write your own step with Azure CLI commands:
- step:
    name: Deploy on Azure
    image: mcr.microsoft.com/azure-cli:latest
    script:
      - az login --service-principal --username ${AZURE_APP_ID} --password ${AZURE_PASSWORD} --tenant ${AZURE_TENANT_ID}
      - az functionapp deployment source config-zip -g ${RESOURCE_GROUP_NAME} -n 'functionAppName' --src 'function.zip' --build-remote
This step works in my case; another solution could be to write a step that uses this.
I ended up finding the answer scattered across different places:
image: python:3.8

pipelines:
  branches:
    master:
      - step:
          name: Build function zip
          caches:
            - pip
          script:
            - apt-get update
            - apt-get install -y zip
            - pip install --target .python_packages/lib/site-packages -r requirements.txt
            - zip -r function.zip .
          artifacts:
            - function.zip
      - step:
          name: Deploy zip to Azure
          deployment: Production
          script:
            - pipe: microsoft/azure-functions-deploy:1.0.0
              variables:
                AZURE_APP_ID: $AZURE_APP_ID
                AZURE_PASSWORD: $AZURE_PASSWORD
                AZURE_TENANT_ID: $AZURE_TENANT_ID
                ZIP_FILE: 'function.zip'
                FUNCTION_APP_NAME: $FUNCTION_NAME
Related
I am trying to test a Lambda function locally. The function is created from the public Docker image from AWS; however, I want to install my own Python library from my GitHub. According to the AWS SAM build documentation, I have to add a variable to be picked up in the Dockerfile, like this:
Dockerfile
FROM public.ecr.aws/lambda/python:3.8
COPY lambda_preprocessor.py requirements.txt ./
RUN yum install -y git
RUN python3.8 -m pip install -r requirements.txt -t .
ARG GITHUB_TOKEN
RUN python3.8 -m pip install git+https://${GITHUB_TOKEN}@github.com/repository/library.git -t .
And to pass the GITHUB_TOKEN, I can create a .json file containing the variables for the Docker environment.
.json file named env.json
{
  "LambdaPreprocessor": {
    "GITHUB_TOKEN": "TOKEN_VALUE"
  }
}
And I simply pass the file path in the sam build: sam build --use-container --container-env-var-file env.json. Or I pass the value directly, without the .json, with the command: sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE
My problem is that I don't get the GITHUB_TOKEN variable, either with the .json file or by putting it directly in the command with --container-env-var GITHUB_TOKEN=TOKEN_VALUE.
Running sam build --use-container --container-env-var GLOBAL_ENV_VAR=TOKEN_VALUE --debug shows that it doesn't pick it up when creating the Lambda image.
The only way that has worked for me is to put the token directly in the Dockerfile, not as a build argument.
Prompt output:
Building image for LambdaPreprocessor function
Setting DockerBuildArgs: {} for LambdaPreprocessor function
Does anyone know why this is happening? Am I doing something wrong?
If you need to see the template.yaml this is the lambda definition.
template.yaml
LambdaPreprocessor:
  Type: AWS::Serverless::Function
  Properties:
    PackageType: Image
    Architectures:
      - x86_64
    Timeout: 180
  Metadata:
    Dockerfile: Dockerfile
    DockerContext: ./lambda_preprocessor
    DockerTag: python3.8-v1
I'm doing this with VS Code and WSL 2 (Ubuntu 20.04 LTS) on Windows 10.
I am having this issue too. What I have learned is that in the Metadata field there is a DockerBuildArgs key that you can also add. Example:
Metadata:
  DockerBuildArgs:
    MY_VAR: <some variable>
When I add this it does make it to the DockerBuildArgs dict.
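Applied to the template in the question, a sketch might look like the following (TOKEN_VALUE is only a placeholder; hard-coding a real token in the template is not recommended):
Metadata:
  Dockerfile: Dockerfile
  DockerContext: ./lambda_preprocessor
  DockerTag: python3.8-v1
  DockerBuildArgs:
    GITHUB_TOKEN: TOKEN_VALUE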
I have a Python app that takes the value of a certificate in a Dockerfile and updates it. However, I'm having difficulty working out how to get the app to work within GitLab.
When I push the app with the Dockerfile to be updated, I want the app to run in the GitLab pipeline and update the Dockerfile. I'm a little stuck on how to do this. I'm thinking that I would need to pull the repo, run the app, and then push back up.
I would like some advice on whether this is the right approach and, if so, how I would go about doing it.
This is just an example of the Dockerfile to be updated (I know this image wouldn't actually work, but the app would only update the ca-certificates entry present in the DF):
#syntax=docker/dockerfile:1
#init the base image
FROM alpine:3.15
#define present working directory
#WORKDIR /library
#run pip to install the dependencies of the flask app
RUN apk add -u \
    ca-certificates=20211220 \
    git=3.10
#copy all files in our current directory into the image
COPY . /library
EXPOSE 5000
#define command to start the container, need to make app visible externally by specifying host 0.0.0.0
CMD [ "python3", "-m", "flask", "run", "--host=0.0.0.0"]
gitlab-ci.yml:
stages:
  - build
  - test
  - update_certificate

variables:
  PYTHON_IMG: "python:3.10"

pytest_installation:
  image: $PYTHON_IMG
  stage: build
  script:
    - pip install pytest
    - pytest --version

python_requirements_installation:
  image: $PYTHON_IMG
  stage: build
  script:
    - pip install -r requirements.txt

unit_test:
  image: $PYTHON_IMG
  stage: test
  script:
    - pytest ./tests/test_automated_cert_checker.py

cert_updater:
  image: $PYTHON_IMG
  stage: update_certificate
  script:
    - pip install -r requirements.txt
    - python3 automated_cert_updater.py
I'm aware there's a lot of repetition with installing the requirements multiple times and that this is an area for improvement. It doesn't feel like it's necessary for the app to be built into an image, because it's only used for updating the DF.
requirements.txt installs pytest and BeautifulSoup4
Additional context: The pipeline that builds the Docker image already exists and builds successfully. I am looking for a way to run this app once a day, which will check whether the ca-certificate is still up to date. If it isn't, the app is run, the ca-certificate in the Dockerfile is updated, and then the updated Dockerfile is rebuilt automatically.
My thought is that I may need to set up the gitlab-ci.yml to pull the repo, run the app (which updates the ca-certificate) and then push it back, so that a new image is built based upon the update to the certificate.
The Dockerfile shown here is just a basic example showing what the actual DF in the repo looks like.
What you probably want to do is identify the appropriate version before you build the Dockerfile. Then, pass a --build-arg with the ca-certificates version. That way, if the arg changes, then the cached layer becomes invalid and will install the new version. But if the version is the same, the cached layer would be used.
FROM alpine:3.15
ARG CA_CERT_VERSION
RUN apk add -u \
    ca-certificates=$CA_CERT_VERSION \
    git=3.10
# ...
Then when you build your image, you should figure out the appropriate ca-certificates version and pass it as a build-arg.
Something like:
version="$(python3 ./get-cacertversion.py)" # you implement this
docker build --build-arg CA_CERT_VERSION=$version -t myimage .
Be sure to add appropriate bits to leverage docker caching in GitLab.
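For example, a GitLab CI job along these lines could do the build (a sketch under assumptions: get-cacertversion.py is the hypothetical helper above, and the image is pushed to the project's container registry using GitLab's predefined CI variables):
build_image:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    # the docker image is Alpine-based, so install python3 for the helper script
    - apk add --no-cache python3
    # determine the desired ca-certificates version (hypothetical helper)
    - export CA_CERT_VERSION="$(python3 ./get-cacertversion.py)"
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # pull the previous image so its layers can be reused as a build cache
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    # if CA_CERT_VERSION is unchanged the apk layer comes from cache; otherwise it is rebuilt
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" --build-arg CA_CERT_VERSION="$CA_CERT_VERSION" -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"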
I'm very new to DevOps, so this may be a very silly question. I'm trying to deploy a Python web scraping script onto an Azure Web App using GitHub Actions. This script is meant to run for a long period of time, as it analyzes websites word by word for hours, and it then logs the results to .log files.
I know a bit about how GitHub Actions work; I know that I can trigger jobs when I push to the repo, for instance. However, I'm a bit confused as to how one runs an app or a script on an Azure resource (like a VM or Web App), for example. Does this process involve SSHing into the resource and then automatically running the CLI command "python main.py" or "docker-compose up", or is there something more sophisticated involved?
For better context, this is my script inside of my workflows folder:
on:
  [push]

env:
  AZURE_WEBAPP_NAME: emotional-news-service   # set this to your application's name
  WORKING_DIRECTORY: '.'                      # set this to the path of your working directory inside the GitHub repository; defaults to the repository root
  PYTHON_VERSION: '3.9'
  STARTUP_COMMAND: 'docker-compose up --build -d'   # set this to the startup command required to start the gunicorn server; by default it is empty

name: Build and deploy Python app

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment: dev
    steps:
      # checkout the repo
      - uses: actions/checkout@master
      # setup python
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      # setup docker compose
      - uses: KengoTODA/actions-setup-docker-compose@main
        with:
          version: '1.26.2'
      # install dependencies
      - name: python install
        working-directory: ${{ env.WORKING_DIRECTORY }}
        run: |
          sudo apt install python${{ env.PYTHON_VERSION }}-venv
          python -m venv --copies antenv
          source antenv/bin/activate
          pip install setuptools
          pip install -r requirements.txt
          python -m spacy download en_core_web_md
      # Azure login
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/appservice-settings@v1
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          mask-inputs: false
          general-settings-json: '{"linuxFxVersion": "PYTHON|${{ env.PYTHON_VERSION }}"}'   # 'General configuration settings as Key Value pairs'
      # deploy web app
      - uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          package: ${{ env.WORKING_DIRECTORY }}
          startup-command: ${{ env.STARTUP_COMMAND }}
      # Azure logout
      - name: logout
        run: |
          az logout
Most of the script above was taken from https://github.com/Azure/actions-workflow-samples/blob/master/AppService/python-webapp-on-azure.yml.
Is env.STARTUP_COMMAND the "SSH and then run the command" part that I was thinking of, or is it something else entirely?
I also have another question: is there a better way to view the logs from that Python script running on the Azure resource? The only way I can think of is to SSH into it and then run "cat whatever.log".
Thanks in advance!
Instead of using STARTUP_COMMAND: 'docker-compose up --build -d', you can use a startup command or a startup file name.
startUpCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'
or
StartupCommand: 'startup.txt'
The StartupCommand parameter defines the app in the startup.py file. By default, Azure App Service looks for the Flask app object in a file named app.py or application.py. If your code doesn't follow this pattern, you need to customize the startup command. Django apps may not need customization at all. For more information, see How to configure Python on Azure App Service - Customize startup command.
Also, because the python-vscode-flask-tutorial repository contains the same startup command in a file named startup.txt, you could specify that file in the StartupCommand parameter rather than the command, by using StartupCommand: 'startup.txt'.
Refer here for more info.
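For example, applied to the workflow in the question, the deploy step might look like this (a sketch; it assumes a gunicorn entry point named startup:app, as in the example above):
# deploy web app with an explicit startup command instead of docker-compose
- uses: azure/webapps-deploy@v2
  with:
    app-name: ${{ env.AZURE_WEBAPP_NAME }}
    package: ${{ env.WORKING_DIRECTORY }}
    startup-command: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'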
Hi everyone, and sorry for the silly question, but it's my day 2 with YAML.
Problem statement:
I have a Python script which runs for 12 minutes (so I can't use a Cloud Function to automate it), hence I am using Cloud Build as a hack.
Steps done so far:
I have my code in a Google Cloud repository, and I used Cloud Build to build an image and created a Cloud Build trigger. Now I want to run the main.py Python code each time I trigger the build (which I will do using Cloud Scheduler, as described here).
Folder structure (screenshot omitted)
The cloudbuild.yaml which I managed to write so far:
steps:
- name: 'gcr.io/$PROJECT_ID/p2p-cloudbuild'
  entrypoint: '/bin/bash'
  args: ['-c', 'virtualenv /workspace/venv']
  # Create a Python virtualenv stored in /workspace/venv that will persist across container runs.
- name: 'gcr.io/$PROJECT_ID/p2p-cloudbuild'
  entrypoint: 'venv/bin/pip'
  args: ['install', '-V', '-r', 'requirements.txt']
  # Installs any dependencies listed in the project's requirements.txt.
Question: how do I add the step that calls/executes my_function inside the main.py file?
I appreciate your help.
steps:
- name: 'gcr.io/$PROJECT_ID/p2p-cloudbuild'
  entrypoint: '/bin/bash'
  args: ['-c', 'virtualenv /workspace/venv']
  # Create a Python virtualenv stored in /workspace/venv that will persist across container runs.
- name: 'gcr.io/$PROJECT_ID/p2p-cloudbuild'
  entrypoint: 'venv/bin/pip'
  args: ['install', '-V', '-r', 'requirements.txt']
  # Installs any dependencies listed in the project's requirements.txt.
Let's say I have a file main.py:
def foo():
    return "bar"
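A third build step can then invoke that function from the virtualenv (a sketch in the same style as the steps above; swap foo for my_function as needed):
- name: 'gcr.io/$PROJECT_ID/p2p-cloudbuild'
  entrypoint: 'venv/bin/python'
  args: ['-c', 'from main import foo; print(foo())']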
This actually can be simplified into:
- name: 'gcr.io/$PROJECT_ID/p2p-cloudbuild'
  entrypoint: '/bin/bash'
  args:
    - '-c'
    - |
      virtualenv /workspace/venv
      source /workspace/venv/bin/activate
      pip install -V -r requirements.txt
      python -c 'from main import foo; print(foo())'
I'm trying to upload a package to PyPI using a GitLab CI job, but I cannot make it work :/ Does anyone have a working example?
What I have tried so far in my .gitlab-ci.yml (all of these work from my local machine):
Twine with a .pypirc file
- echo "[distutils]" >> ~/.pypirc
- echo "index-servers =" >> ~/.pypirc
- echo " pypi" >> ~/.pypirc
- echo "" >> ~/.pypirc
- echo "[pypi]" >> ~/.pypirc
- 'echo "repository: https://upload.pypi.org/legacy/" >> ~/.pypirc'
- 'echo "username: ${PYPI_USER}" >> ~/.pypirc'
- 'echo "password: ${PYPI_PASSWORD}" >> ~/.pypirc'
- python3 setup.py check sdist bdist # This will fail if your creds are bad.
- cat ~/.pypirc
- twine upload dist/* --config-file ~/.pypirc
Same as before but with $VARIABLE
[...]
- 'echo "username: $PYPI_USER" >> ~/.pypirc'
- 'echo "password: $PYPI_PASSWORD" >> ~/.pypirc'
[...]
The two options above, but using python setup.py ... upload
twine upload dist/* -u $PYPI_USER -p $PYPI_PASSWORD
twine upload dist/* with the TWINE_USERNAME and TWINE_PASSWORD environment variables.
... and I always get a 403 Client Error: Invalid or non-existent authentication information. I'm running out of options...
I am simply using the TWINE_USERNAME and TWINE_PASSWORD variables; it worked out of the box.
This is the relevant part in my gitlab-ci.yml (replace the image with your desired one and of course change all the other stuff like stage, cache etc. to your needs):
pypi:
  image: docker.km3net.de/base/python:3
  stage: deploy
  cache: {}
  script:
    - pip install -U twine
    - python setup.py sdist
    - twine upload dist/*
  only:
    - tags
And add the environment variables in GitLab under Settings -> CI/CD -> Variables (https://your-gitlab-instance.org/GIT_NAMESPACE/GIT_PROJECT/settings/ci_cd).
I got this working, using a modified version of your code:
pypi:
  stage: upload
  script:
    - pip install twine
    - rm -rf dist
    - echo "[distutils]" >> ~/.pypirc
    - echo "index-servers =" >> ~/.pypirc
    - echo " nexus" >> ~/.pypirc
    - echo "" >> ~/.pypirc
    - echo "[nexus]" >> ~/.pypirc
    - echo "${PYPI_REPO}" >> ~/.pypirc
    - echo "${PYPI_USER}" >> ~/.pypirc
    - echo "${PYPI_PASSWORD}" >> ~/.pypirc
    - python3 setup.py check sdist bdist   # This will fail if your creds are bad.
    - python setup.py sdist bdist_wheel
    - twine upload -r nexus dist/*.tar.gz
The difference is that I didn't use the single quotes and got rid of the colons in the YAML; instead I put the keys into the values of the secrets themselves, e.g. the user variable is set to username: myuser.
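In other words, each CI/CD variable holds a complete .pypirc line; hypothetical example values (the repository URL is made up):
PYPI_REPO:      repository: https://nexus.example.com/repository/pypi-internal/
PYPI_USER:      username: myuser
PYPI_PASSWORD:  password: mypassword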
If problems with EOF appear, make sure to change Settings/Repository/Tags to be protected, so they will work again. I've posted a more complete description here.
Note that GitLab 12.10 (April 2020) offers, in its Premium edition or higher, a simpler way, using CI_JOB_TOKEN (see the second part of this answer below, with GitLab 13.4, Sept. 2020).
Build, publish, and share Python packages to the GitLab PyPI Repository
Python developers need a mechanism to create, share, and consume packages that contain compiled code and other content in projects that use these packages. PyPI, an open source project maintained by the Python Packaging Authority, is the standard for how to define, create, host, and consume Python packages.
In GitLab 12.10, we are proud to offer PyPI repositories built directly into GitLab! Developers now have an easier way to publish their projects’ Python packages. By integrating with PyPI, GitLab will provide a centralized location to store and view those packages in the same place as their source code and pipelines.
In March, we announced that the GitLab PyPI Repository and support for other package manager formats will be moved to open source.
You can follow along as we work to make these features more broadly available in the epic.
See Documentation and Issue.
And with GitLab 13.4 (September 2020)
Use CI_JOB_TOKEN to publish PyPI packages
You can use the GitLab PyPI Repository to build, publish, and share python packages, right alongside your source code and CI/CD Pipelines.
However, previously you couldn’t authenticate with the repository by using the pre-defined environment variable CI_JOB_TOKEN.
As a result, you were forced to use your personal credentials for making updates to the PyPI Repository, or you may have decided not to use the repository at all.
Now it is easier than ever to use GitLab CI/CD to publish and install PyPI packages by using the predefined CI_JOB_TOKEN environment variable.
See Documentation and Issue.
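Based on the GitLab documentation, publishing to the project's PyPI registry with CI_JOB_TOKEN can look roughly like this (a sketch; the image, stage and tag rule are assumptions):
publish:
  stage: deploy
  image: python:3.9
  variables:
    TWINE_USERNAME: gitlab-ci-token
    TWINE_PASSWORD: $CI_JOB_TOKEN
  script:
    - pip install -U build twine
    - python -m build
    # CI_API_V4_URL and CI_PROJECT_ID are predefined GitLab CI variables
    - twine upload --repository-url "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/pypi" dist/*
  only:
    - tags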
You can also upload a Python package to a private PyPI server in one line (I am using it with GitLab CI):
Set the environment variables PYPI_SERVER, PYPI_USER and PYPI_PASSWORD through the GitLab CI settings
Call
twine upload --repository-url ${PYPI_SERVER} --username $PYPI_USER --password $PYPI_PASSWORD $artifact
Note: I had to use twine from pip (pip3 install twine) and not from my Ubuntu package, as version 10 of twine seems to have a bug (zipfile.BadZipFile: File is not a zip file).
You can also look into using dpl. Here's how I'm doing it:
pip:
  stage: upload
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - python setup.py sdist
    - dpl --provider=pypi --user=$PIP_USERNAME --password=$PIP_PASSWORD --skip_existing=true
  only:
    - master
You can set $PIP_USERNAME and $PIP_PASSWORD in the variables section for your project: settings -> CI/CD -> Variables
I know this is an old question, but if you're using poetry (I'm testing with version 1.1.11) you can do it quite easily, like this:
poetry config repositories.my_private_repo [URL_TO_YOUR_PYPI_REPO]
poetry config http-basic.my_private_repo [USERNAME] [PASSWORD]
poetry build
poetry publish --repository my_private_repo
On develop branches, you can add the --dry-run argument to poetry publish so it won't actually get uploaded.
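Wrapped in a GitLab CI job, that could look something like this (a sketch; the image, stage, repository URL and credential variables are assumptions):
publish:
  stage: deploy
  image: python:3.9
  script:
    - pip install poetry
    # PRIVATE_PYPI_URL, PYPI_USER and PYPI_PASSWORD are assumed CI/CD variables
    - poetry config repositories.my_private_repo "$PRIVATE_PYPI_URL"
    - poetry config http-basic.my_private_repo "$PYPI_USER" "$PYPI_PASSWORD"
    - poetry build
    - poetry publish --repository my_private_repo
  only:
    - tags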