I'm trying to deploy a very simple Flask app to an Azure Linux Web App, but it seems like it cannot find any of the installed packages.
It works on localhost using a virtual environment.
Here is the app code, which lives in startup.py:
from flask import Flask
from flask_restful import Resource, Api

class Index(Resource):
    def get(self):
        return "Hellow World"

app = Flask(__name__)
api = Api(app)
api.add_resource(Index, "/home")
I'm deploying the app with Azure DevOps Pipelines; here is my azure-pipelines.yml file (I removed the variable values for this post):
variables:
  ConnectedServiceName: <name of my service connection>
  WebAppName: <name of my web app>

pool:
  name: Hosted Ubuntu 1604

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.6'
    architecture: 'x64'

- script: pip install -r requirements.txt
  displayName: 'Install requirements'

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    replaceExistingArchive: true
    #verbose: # (no value); this input is optional

- task: AzureRMWebAppDeployment@4
  displayName: Azure App Service Deploy
  inputs:
    appType: webAppLinux
    RuntimeStack: 'PYTHON|3.6'
    ConnectedServiceName: $(ConnectedServiceName)
    WebAppName: $(WebAppName)
    Package: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    StartupCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'
and this is my requirements.txt file
aniso8601==7.0.0
astroid==2.2.5
Click==7.0
colorama==0.4.1
Flask==1.1.1
Flask-JWT-Extended==3.22.0
Flask-RESTful==0.3.7
flask-swagger-ui==3.20.9
The pipeline runs all the steps without errors.
Deployment Image
But it seems like there are no packages in the wwwroot folder:
/home>cd site/wwwroot
/home/site/wwwroot>pip freeze
virtualenv==16.6.2
/home/site/wwwroot>
I also tried pip install -r requirements.txt from that folder; it collects the packages but never finishes installing them (it shows "Cleaning up..." forever).
In the application logs I can see:
ModuleNotFoundError: No module named 'flask_restful'
2019-09-04T14:55:19.215300670Z [2019-09-04 14:55:19 +0000] [39] [INFO] Worker exiting (pid: 39)
2019-09-04T14:55:19.234394151Z [2019-09-04 14:55:19 +0000] [38] [ERROR] Exception in worker process
How can I make sure that the web app uses the packages listed in requirements.txt?
Any ideas of what could be wrong?
I tried creating a sample pipeline -
pool:
  name: Azure Pipelines

steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6

- task: PythonScript@0
  displayName: 'Run a Python script'
  inputs:
    scriptSource: inline
    script: 'pip install -r req.txt'
  enabled: false

- script: 'pip install -r req.txt --target=$(Build.BinariesDirectory)'
  displayName: 'Command Line Script'

- task: CopyFiles@2
  displayName: 'Copy Files to: $(Build.BinariesDirectory)'
  inputs:
    TargetFolder: '$(Build.BinariesDirectory)'

- task: ArchiveFiles@2
  displayName: 'Archive $(Build.BinariesDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'

- task: AzureWebApp@1
  displayName: 'Azure Web App Deploy: '
  inputs:
    azureSubscription: ''
    appType: webAppLinux
    appName:
    package: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    runtimeStack: 'PYTHON|3.6'
PublishBuildArtifacts is not required here, but it can be used to inspect the actual build artifact that will be published to the web app. You can download it and verify whether the packages are present in the artifact or not.
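For reference, here is a minimal sketch of how the optional inputs above could be filled in so that the packages installed with --target end up inside the zip that gets deployed (req.txt, the service connection, and the app name are placeholders to replace with your own values):

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.6'

# Install the dependencies into the folder that will be archived
- script: pip install -r req.txt --target=$(Build.BinariesDirectory)
  displayName: 'Install requirements into the build output'

# Copy the application code next to the installed packages
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '**'
    TargetFolder: '$(Build.BinariesDirectory)'

# Zip code and site-packages together; this is the package that gets deployed
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    replaceExistingArchive: true

- task: AzureWebApp@1
  inputs:
    azureSubscription: '<service connection>'  # placeholder
    appType: webAppLinux
    appName: '<web app name>'                  # placeholder
    package: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    runtimeStack: 'PYTHON|3.6'

Because the packages then sit next to startup.py in wwwroot, gunicorn should be able to import flask_restful without relying on a virtual environment on the App Service side.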
Related
I need to publish my test results, but I can't. I have looked through this site and done some research, but I still have issues publishing them. Do I need to edit a certain file, similar to cypress.json, to resolve the publishing issue? Neither the unit test results nor the coverage are being published, and the files are not found. The code and the resulting error are shown below.
Code :
trigger:
  branches:
    include:
    - #infra-ML-unit-test

variables:
- group: aws_details # just for backup roles approach
- group: snowflake-variables
- group: unit-test-dev

jobs:
- job: 'Unit_Test'
  pool:
    vmImage: 'windows-latest' # other options: 'macOS-latest', 'windows-latest', 'ubuntu-latest'
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.8'
    displayName: 'Use Python 3.x'
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.8'
  - script: |
      pip install virtualenv
      virtualenv venv
      source venv/bin/activate
      pip install -r requirements.txt
      python -m pip install --upgrade pip setuptools sqlalchemy snowflake.sqlalchemy
      echo "$(System.DefaultWorkingDirectory)"
      echo "$(Build.StagingDirectory)"
      pip show pandas
      python -m pytest --log-file $SYSTEM_ARTIFACTSDIRECTORY/smoke-qa.log --junitxml=TEST-qa-smoke.xml -rP --excelreport=report.xls --html=pytest_report.html --self-contained-html
    #displayName: 'Install python tools'

- job: publish_result
  dependsOn: Unit_Test
  condition: succeededOrFailed()
  steps:
  - task: PublishTestResults@2
    displayName: 'Publish test result /010.xml'
    inputs:
      testResultsFormat: 'JUnit' # Options: JUnit, NUnit, VSTest, xUnit, cTest
      testResultsFiles: '**/TEST-*.xml'
      testRunTitle: 010
      mergeTestResults: true
Pytest result
Unit test and coverage
Is there any file in the repository that I need to update to get the coverage and test results published, since the warning says:
##[warning]No test result files matching **/TEST-*.xml were found.
You have two options:
Install the pytest-azurepipelines package before running the tests:
pip install pytest pytest-azurepipelines
Make the PublishTestResults@2 task start the search from $(Pipeline.Workspace):
- task: PublishTestResults@2
  displayName: 'Publish test result /010.xml'
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '$(Pipeline.Workspace)/junit/test-*.xml'
For more info see Python: Run tests
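To illustrate the first option, here is a minimal sketch of how the test step could look when the plugin publishes the results itself, placed in the same job as the tests (the extra reporting flags from the original command are omitted here):

- script: |
    pip install pytest pytest-azurepipelines
    # pytest-azurepipelines uploads the results to the pipeline run automatically,
    # so a separate PublishTestResults task is not required
    python -m pytest
  displayName: 'Run tests and publish results'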
My Python Flask Azure Linux Web App fails with the error below.
The error started after importing the kiteconnect library.
Since enum34 is a prerequisite of kiteconnect, it gets installed along with it. I have manually uninstalled it via the YAML script. I have also attached the YAML script file and a screenshot of the successful uninstallation.
Source Code :
from flask import Flask, request
from kiteconnect import KiteConnect

app = Flask(__name__)

@app.route('/')
def index():
    Zerodha_API_KEY = "XXXXXXXXXX"
    Zerodha_API_SECRET = "XXXXXXXXXXX"
    Zerodha_Access_Token = "XXXXXXXXXXX"
    kite = KiteConnect(api_key=Zerodha_API_KEY)
    kite.set_access_token(Zerodha_Access_Token)
    data = kite.orders()
    print(data)
    return str(data)
Error :
2021-07-15T11:12:09.280237068Z [ASCII art banner: A P P  S E R V I C E  O N  L I N U X]
2021-07-15T11:12:09.280361474Z Documentation: http://aka.ms/webapp-linux
2021-07-15T11:12:09.280368174Z Python 3.8.6
2021-07-15T11:12:09.280374875Z Note: Any data outside '/home' is not persisted
2021-07-15T11:12:09.654320473Z Starting OpenBSD Secure Shell server: sshd.
2021-07-15T11:12:09.739608672Z App Command Line not configured, will attempt auto-detect
2021-07-15T11:12:09.740706500Z Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite
2021-07-15T11:12:09.871149910Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2021-07-15T11:12:09.881601967Z Build Operation ID: |8ufRzjfGT4Q=.9903ac7b_
2021-07-15T11:12:09.890669690Z Oryx Version: 0.2.20210420.1, Commit: 85c6e9278aae3980b86cb1d520aaad532c814ed7, ReleaseTagName: 20210420.1
2021-07-15T11:12:09.891252705Z Output is compressed. Extracting it...
2021-07-15T11:12:09.895295404Z Extracting '/home/site/wwwroot/output.tar.gz' to directory '/tmp/8d94707cb81be1f'...
2021-07-15T11:12:14.577736859Z App path is set to '/tmp/8d94707cb81be1f'
2021-07-15T11:12:16.276763516Z Detected an app based on Flask
2021-07-15T11:12:16.278429174Z Generating `gunicorn` command for 'app:app'
2021-07-15T11:12:16.836063591Z Writing output script to '/opt/startup/startup.sh'
2021-07-15T11:12:17.568867477Z Using packages from virtual environment antenv located at /tmp/8d94707cb81be1f/antenv.
2021-07-15T11:12:17.569943514Z Updated PYTHONPATH to ':/tmp/8d94707cb81be1f/antenv/lib/python3.8/site-packages'
2021-07-15T11:12:17.674962252Z Traceback (most recent call last):
2021-07-15T11:12:17.675003253Z File "/opt/python/3.8.6/bin/gunicorn", line 3, in <module>
2021-07-15T11:12:17.675027254Z import re
2021-07-15T11:12:17.675036154Z File "/opt/python/3.8.6/lib/python3.8/re.py", line 145, in <module>
2021-07-15T11:12:17.675044155Z class RegexFlag(enum.IntFlag):
2021-07-15T11:12:17.675051755Z AttributeError: module 'enum' has no attribute 'IntFlag'
2021-07-15T11:12:25.447Z ERROR - Container rocketzerodha_0_7bea6f30 for site rocketzerodha has exited, failing site start
2021-07-15T11:12:25.461Z ERROR - Container rocketzerodha_0_7bea6f30 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
2021-07-15T11:12:25.475Z INFO - Stopping site rocketzerodha because it failed during startup.
As suggested by Grace, I uninstalled enum34 via the YAML script. This has not helped either, as the error remains the same.
YAML Script for uninstallation proof
# Python to Linux Web App on Azure
# Build your Python project and deploy it to Azure as a Linux Web App.
# Change python version to one that's appropriate for your application.
# https://learn.microsoft.com/azure/devops/pipelines/languages/python

trigger:
- master

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureServiceConnectionId: 'ea695c8e-016a-42de-9492-868b1f100d6b'

  # Web app name
  webAppName: 'RocketZerodha'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

  # Environment name
  environmentName: 'RocketZerodha'

  # Project root folder. Point to the folder containing manage.py file.
  projectRoot: $(System.DefaultWorkingDirectory)

  # Python version: 3.9
  pythonVersion: '3.9'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: BuildJob
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(pythonVersion)'
      displayName: 'Use Python $(pythonVersion)'

    - script: |
        python -m venv antenv
        source antenv/bin/activate
        pip freeze
        python -m pip install --upgrade pip setuptools wheel
        pip install setup
        pip install -r requirements.txt
        pip uninstall -y enum34
        pip freeze
      workingDirectory: $(projectRoot)
      displayName: "Install requirements"

    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(projectRoot)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true

    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      displayName: 'Upload package'
      artifact: drop

- stage: Deploy
  displayName: 'Deploy Web App'
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeploymentJob
    pool:
      vmImage: $(vmImageName)
    environment: $(environmentName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '$(pythonVersion)'
            displayName: 'Use Python version'

          - script: |
              pip freeze
              pip uninstall -y enum34
            workingDirectory: $(projectRoot)
            displayName: "Install requirements"

          - task: AzureWebApp@1
            displayName: 'Deploy Azure Web App : RocketZerodha'
            inputs:
              azureSubscription: $(azureServiceConnectionId)
              appName: $(webAppName)
              package: $(Pipeline.Workspace)/drop/$(Build.BuildId).zip
I see two different errors here.
For the module 'enum' error you might want to uninstall enum34 if you have it. It's known for causing package issues. See this similar post for more info. Use this command to uninstall: pip uninstall -y enum34
Based on the second error message, it looks like port 8000 is not exposed, which is why your container is not responding to HTTP pings. Try setting both PORT and WEBSITES_PORT to 8000 in the configuration. You can set the PORT variable in the Azure portal: App Service -> Configuration -> Application settings -> + PORT 8000.
(source)
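If you would rather set these values from the pipeline than from the portal, here is a minimal sketch using the Azure CLI task; it assumes the service connection and web app name variables from the pipeline above, and <my-resource-group> is a placeholder:

- task: AzureCLI@2
  displayName: 'Set PORT and WEBSITES_PORT app settings'
  inputs:
    azureSubscription: $(azureServiceConnectionId)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # <my-resource-group> is a placeholder for the app's resource group
      az webapp config appsettings set \
        --resource-group <my-resource-group> \
        --name $(webAppName) \
        --settings PORT=8000 WEBSITES_PORT=8000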
Why Python 3.6.1 throws AttributeError: module 'enum' has no attribute 'IntFlag'?
I'm having issues setting up a small pipeline for my Django app.
Here's the YAML configuration:
trigger:
- main

pool:
  vmImage: ubuntu-latest
strategy:
  matrix:
    Python39:
      PYTHON_VERSION: '3.9'
  maxParallel: 2

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(PYTHON_VERSION)'
    architecture: 'x64'

- task: DownloadSecureFile@1
  name: dotEnv
  inputs:
    secureFile: '.env'

- task: PythonScript@0
  displayName: 'Export project path'
  inputs:
    scriptSource: 'inline'
    script: |
      """Search all subdirectories for `manage.py`."""
      from glob import iglob
      from os import path
      # Python >= 3.5
      manage_py = next(iglob(path.join('**', 'manage.py'), recursive=True), None)
      if not manage_py:
          raise SystemExit('Could not find a Django project')
      project_location = path.dirname(path.abspath(manage_py))
      print('Found Django project in', project_location)
      print('##vso[task.setvariable variable=projectRoot]{}'.format(project_location))

- task: CopyFiles@2
  displayName: 'Add .env file'
  inputs:
    SourceFolder: '$(Agent.TempDirectory)'
    Contents: '.env'
    TargetFolder: '$(projectRoot)'

- script: |
    python -m pip install --upgrade pip setuptools wheel pipenv
    pipenv install unittest-xml-reporting
  displayName: 'Install prerequisites'

- script: python -m pipenv lock -r > requirements.txt
  displayName: 'Create requirements.txt from Pipfile'

- script: pipenv install
  displayName: 'Install requirements'

- script: |
    pushd '$(projectRoot)'
    pipenv run python manage.py test --testrunner xmlrunner.extra.djangotestrunner.XMLTestRunner --no-input
  displayName: 'Run tests'

- task: PublishTestResults@2
  inputs:
    testResultsFiles: "**/TEST-*.xml"
    testRunTitle: 'Python $(PYTHON_VERSION)'
  condition: succeededOrFailed()
The problem happens when the Run tests step starts: Django can't create a test database, and the error message is quite generic:
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
Should I be adding steps to create a PostgreSQL database?
Is there a lighter solution that doesn't require creating another settings.py module just for CI (which doesn't seem ideal)? My database settings are shown below, with a sketch of one possible approach after them.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", ""),
        "USER": os.environ.get("DB_USER", ""),
        "PASSWORD": os.environ.get("DB_PASS", ""),
        "HOST": "localhost",
        "PORT": "5432",
    }
}
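For reference, one way to make these settings resolve during the pipeline run is to start a throwaway PostgreSQL container on the agent before the Run tests step. A minimal sketch, assuming Docker is available on the ubuntu-latest agent and that DB_NAME, DB_USER, and DB_PASS are supplied as pipeline variables (placeholders here):

- script: |
    # Start a disposable PostgreSQL instance on localhost:5432 for the test run
    docker run -d --name ci-postgres -p 5432:5432 \
      -e POSTGRES_DB=$(DB_NAME) \
      -e POSTGRES_USER=$(DB_USER) \
      -e POSTGRES_PASSWORD=$(DB_PASS) \
      postgres:13
    # Give the database a few seconds to start accepting connections
    sleep 10
  displayName: 'Start PostgreSQL for tests'

An alternative is to declare the container as a service in the pipeline definition, which avoids the manual docker run step, but the sketch above keeps the existing settings.py unchanged.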
I have a simple python dockerized application whose structure is
/src
  - server.py
  - test_server.py
Dockerfile
requirements.txt
where the Docker base image is Linux-based and server.py exposes a FastAPI endpoint.
For completeness, server.py looks like this:
from fastapi import FastAPI
from pydantic import BaseModel

class Item(BaseModel):
    number: int

app = FastAPI(title="Sum one", description="Get a number, add one to it", version="0.1.0")

@app.post("/compute")
async def compute(input: Item):
    return {'result': input.number + 1}
Tests are meant to be done with pytest (following https://fastapi.tiangolo.com/tutorial/testing/) with a test_server.py:
from fastapi.testclient import TestClient
from server import app
import json

client = TestClient(app)

def test_endpoint():
    """test endpoint"""
    response = client.post("/compute", json={"number": 1})
    values = json.loads(response.text)
    assert values["result"] == 2
Dockerfile looks like this:
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY . /app
RUN pip install -r requirements.txt
WORKDIR /app/src
EXPOSE 8000
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
At the moment, if I want to run the tests on my local machine within the container, one way to do this is:
Build the Docker image.
Run the container and get its name via docker ps.
Run docker exec -it <mycontainer> bash and execute pytest to see the tests passing.
Now, I would like to run tests in Azure DevOps (Server) before pushing the image to my Docker registry and triggering a release pipeline. If this sounds an OK thing to do, what's the proper way to do it?
So far, I hoped that something along the lines of adding a "PyTest" step in the build pipeline would magically work:
I am currently using a Linux agent, and the step fails with
The failure is not surprising, as (I think) the container is not run after being built, and therefore pytest can't run within it either :(
Another way to solve this is to include pytest commands in the Dockerfile and deal with the tests in a release pipeline. However, I would like to decouple the testing from the container that is ultimately pushed to the registry and deployed.
Is there a standard way to run pytest within a Docker container in Azure DevOps, and get a graphical report?
Update your azure-pipelines.yml file as follows to run the tests in Azure Pipelines.
Method 1 (using Docker)
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Docker@2
  inputs:
    command: 'build'
    Dockerfile: '**/Dockerfile'
    arguments: '-t fast-api:$(Build.BuildId)'

- script: |
    docker run fast-api:$(Build.BuildId) python -m pytest
  displayName: 'Run PyTest'
Successful pipeline screenshot
Method 2 (without Docker)
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'
strategy:
  matrix:
    Python37:
      python.version: '3.7'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- script: |
    pip install pytest pytest-azurepipelines
    python -m pytest
  displayName: 'pytest'
BTW, I have a simple FastAPI project you can reference if you want.
Test your docker script using pytest-azurepipelines:
- script: |
    python -m pip install --upgrade pip
    pip install pytest pytest-azurepipelines
    pip install -r requirements.txt
    pip install -e .
  displayName: 'Install dependencies'

- script: |
    python -m pytest /src/test_server.py
  displayName: 'pytest'
Running pytest with the plugin pytest-azurepipelines will let you see your test results in the Azure Pipelines UI.
https://pypi.org/project/pytest-azurepipelines/
You can run your unit tests directly from within your Docker container using pytest-azurepipelines (which you need to install beforehand in the Docker image):
- script: |
    docker run --mount type=bind,source="$(pwd)",target=/results \
      --entrypoint /bin/bash my_docker_image \
      -c "cd results && pytest"
  displayName: 'tests'
  continueOnError: true
pytest will create an XML file containing the test results, which is made available to the Azure DevOps pipeline thanks to the --mount flag in the docker run command. pytest-azurepipelines then publishes the results directly to Azure DevOps.
I'm using the unaccent extension for Postgres and followed all the docs to get it to work (I installed the extension directly via CREATE EXTENSION and put django.contrib.postgres in INSTALLED_APPS in the Django settings).
In the local environment it works perfectly; however, after building and deploying the app to Heroku, it looks like django.contrib.postgres isn't being installed. So when I try to use the functionality of my app that queries using unaccent, I get the "Unsupported lookup 'unaccent' for CharField" error that happens when django.contrib.postgres is not in INSTALLED_APPS.
Printing settings.INSTALLED_APPS in a Python shell locally shows that django.contrib.postgres is there, but running the same on Heroku shows it's missing. Is it unsupported by the heroku/python buildpack, or am I missing some configuration?
I tried to pip install the django-contrib-postgres backport for earlier versions of Django (adding it to requirements.txt) to no avail. The Python version is 3.6.7 and Django is 2.1.2. Creating the extension via a migration with UnaccentExtension doesn't change anything either, and I'm sure it's not a Postgres problem, because querying the database directly with unaccent(columnname) works as expected.
Thanks in advance.
Edit: YAML definition for Azure DevOps Pipelines and requirements.txt
requirements.txt
Django==2.1.2
django-cors-middleware==1.3.1
django-heroku==0.3.1
django-oauth-toolkit==1.2.0
djangorestframework==3.9.0
djangorestframework-camel-case==0.2.0
django-contrib-postgres==0.0.1
facepy==1.0.9
factory_boy==2.11.1
flake8==3.5.0
gunicorn==19.8.1
psycopg2-binary==2.7.5
pylint==2.1.1
pytest==3.9.1
pytest-cov==2.6.0
pytest-django==3.4.3
python-dateutil==2.7.5
raven==6.9.0
freezegun==0.3.11
mailchimp3==3.0.4
Build
pool:
  name: Hosted VS2017

steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6

- script: 'pip install -r requirements.txt'
  workingDirectory: 'back-end'
  displayName: 'Restore dependencies'

- script: 'python manage.py collectstatic'
  workingDirectory: 'back-end'
  displayName: 'Export static files'

- script: 'flake8 .'
  workingDirectory: 'back-end'
  displayName: 'Style analysis'

- script: 'pytest --junitxml=junit.xml --cov --cov-report=xml --cov-report=html'
  workingDirectory: 'back-end'
  displayName: 'Run tests'

- task: PublishTestResults@2
  displayName: 'Publish test results'
  inputs:
    testResultsFiles: 'back-end/junit.xml'

- task: PublishCodeCoverageResults@1
  displayName: 'Publish test coverage'
  inputs:
    codeCoverageTool: Coverage
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/back-end/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/back-end/htmlcov/'

- task: PublishBuildArtifacts@1
  displayName: 'publish artifact'
  inputs:
    PathtoPublish: 'back-end'
    ArtifactName: BackendArtifact
Release (key and names are hidden with *)
steps:
- task: boostingmy.vsts-heroku-tasks.pushu-to-heroku.PushToHeroku@1
  displayName: 'Publish on Heroku'
  inputs:
    ApiKey: '***'
    AppName: '***'
    PushRoot: '$(System.DefaultWorkingDirectory)/****-Back-end-CI/BackendArtifact'