I'm using the unaccent extension for Postgres and followed all the docs to get it to work: I installed the extension directly via CREATE EXTENSION and put django.contrib.postgres in INSTALLED_APPS in the Django settings.
In the local environment it works perfectly, but after building and deploying the app to Heroku it looks like django.contrib.postgres isn't being installed. So when I use the part of my app that queries with unaccent, I get the "Unsupported lookup 'unaccent' for CharField" error that you get when django.contrib.postgres is not in INSTALLED_APPS.
Printing settings.INSTALLED_APPS in a Python shell locally shows that django.contrib.postgres is there, but running the same thing on Heroku shows it's missing. Is this unsupported by the heroku/python buildpack, or am I missing some configuration?
I tried pip installing the django-contrib-postgres backport for earlier Django versions (added it to requirements.txt), to no avail. The Python version is 3.6.7 and Django is 2.1.2. Creating the extension through a migration with UnaccentExtension doesn't change anything either, and I'm sure it's not a Postgres problem because querying the database directly with unaccent(columnname) works as expected.
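For reference, 'django.contrib.postgres' sits in INSTALLED_APPS just like the other contrib apps and the failing query is a plain __unaccent lookup; the migration I tried looks roughly like this (the app label and dependency here are simplified placeholders for this post):

from django.contrib.postgres.operations import UnaccentExtension
from django.db import migrations


class Migration(migrations.Migration):

    # placeholder dependency; the real file points at the app's previous migration
    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        UnaccentExtension(),  # creates the unaccent extension when migrations run
    ]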
Thanks in advance.
Edit: YAML definition for Azure DevOps Pipelines and requirements.txt
requirements.txt
Django==2.1.2
django-cors-middleware==1.3.1
django-heroku==0.3.1
django-oauth-toolkit==1.2.0
djangorestframework==3.9.0
djangorestframework-camel-case==0.2.0
django-contrib-postgres==0.0.1
facepy==1.0.9
factory_boy==2.11.1
flake8==3.5.0
gunicorn==19.8.1
psycopg2-binary==2.7.5
pylint==2.1.1
pytest==3.9.1
pytest-cov==2.6.0
pytest-django==3.4.3
python-dateutil==2.7.5
raven==6.9.0
freezegun==0.3.11
mailchimp3==3.0.4
Build
pool:
  name: Hosted VS2017
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6
- script: 'pip install -r requirements.txt'
  workingDirectory: 'back-end'
  displayName: 'Restore dependencies'
- script: 'python manage.py collectstatic'
  workingDirectory: 'back-end'
  displayName: 'Export static files'
- script: 'flake8 .'
  workingDirectory: 'back-end'
  displayName: 'Style analysis'
- script: 'pytest --junitxml=junit.xml --cov --cov-report=xml --cov-report=html'
  workingDirectory: 'back-end'
  displayName: 'Run tests'
- task: PublishTestResults@2
  displayName: 'Publish test results'
  inputs:
    testResultsFiles: 'back-end/junit.xml'
- task: PublishCodeCoverageResults@1
  displayName: 'Publish test coverage'
  inputs:
    codeCoverageTool: Coverage
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/back-end/coverage.xml'
    reportDirectory: '$(System.DefaultWorkingDirectory)/back-end/htmlcov/'
- task: PublishBuildArtifacts@1
  displayName: 'publish artifact'
  inputs:
    PathtoPublish: 'back-end'
    ArtifactName: BackendArtifact
Release (key and names are hidden with *)
steps:
- task: boostingmy.vsts-heroku-tasks.pushu-to-heroku.PushToHeroku@1
  displayName: 'Publish on Heroku'
  inputs:
    ApiKey: '***'
    AppName: '***'
    PushRoot: '$(System.DefaultWorkingDirectory)/****-Back-end-CI/BackendArtifact'
Related
I'm rather new to Azure and currently playing around with the pipelines. My goal is to run a Postgres Alpine Docker container in the background so that I can run tests from my Python backend against it.
This is my pipeline config:
trigger:
- main
pool:
  vmImage: ubuntu-latest
variables:
  POSTGRE_CONNECTION_STRING: postgresql+psycopg2://postgres:passw0rd@localhost/postgres
resources:
  containers:
  - container: postgres
    image: postgres:13.6-alpine
    trigger: true
    env:
      POSTGRES_PASSWORD: passw0rd
    ports:
    - 1433:1433
    options: --name postgres
stages:
- stage: QA
  jobs:
  - job: test
    services:
      postgres: postgres
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: $(PYTHON_VERSION)
    - task: Cache@2
      inputs:
        key: '"$(PYTHON_VERSION)" | "$(Agent.OS)" | requirements.txt'
        path: $(PYTHON_VENV)
        cacheHitVar: 'PYTHON_CACHE_RESTORED'
    - task: CmdLine@2
      displayName: Wait for db to start
      inputs:
        script: |
          sleep 5
    - script: |
        python -m venv .venv
      displayName: create virtual environment
      condition: eq(variables.PYTHON_CACHE_RESTORED, 'false')
    - script: |
        source .venv/bin/activate
        python -m pip install --upgrade pip
        pip install -r requirements.txt
      displayName: pip install
      condition: eq(variables.PYTHON_CACHE_RESTORED, 'false')
    - script: |
        source .venv/bin/activate
        python -m pytest --junitxml=test-results.xml --cov=app --cov-report=xml tests
      displayName: run pytest
    - task: PublishTestResults@2
      condition: succeededOrFailed()
      inputs:
        testResultsFormat: 'JUnit'
        testResultsFiles: 'test-results.xml'
        testRunTitle: 'Publish FastAPI test results'
    - task: PublishCodeCoverageResults@1
      inputs:
        codeCoverageTool: 'Cobertura'
        summaryFileLocation: 'coverage.xml'
But the pipeline always fails at the "Initialize Containers" step, giving this error:
Error response from daemon: Container <containerID> is not running
It reads as if the container just shut down because there was nothing to do, which seems plausible, but I don't know how to keep it running until my tests are done; the backend just runs pytest against the database. I also tried adding that resource as a container using the container property, but then the pipeline crashes at the same step, saying the container had been running for less than a second.
I'm thankful for any ideas!
I suspect your container is not stopping because "there is nothing to do": the postgres image is configured to act as a service. It is probably stopping because of an error.
One thing to improve: add the PGPORT environment variable to your container and set it to 1433, because that is not the default port of the postgres Docker image, so merely opening that port with ports, as you are doing, does not achieve much on its own.
Also, the trigger: true property means you expect updates to the official Docker Hub repository for postgres, so that a new image release triggers your pipeline. That probably does not make much sense here and you should remove it, although this is a marginal issue from the perspective of your question.
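If you want something sturdier than the fixed sleep 5 before the tests, you could replace it with a small readiness check that polls the database until it accepts connections. This is only a rough sketch, assuming psycopg2 is available in the job and the PGPORT/ports setup described above is in place (the file name is just an example):

# wait_for_db.py -- run this in the pipeline step instead of `sleep 5`
import sys
import time

import psycopg2

DSN = "dbname=postgres user=postgres password=passw0rd host=localhost port=1433"


def wait_for_db(dsn, timeout=60.0):
    """Poll the database until it accepts connections or the timeout expires."""
    deadline = time.time() + timeout
    while True:
        try:
            psycopg2.connect(dsn).close()
            print("database is ready")
            return
        except psycopg2.OperationalError as exc:
            if time.time() > deadline:
                print("database never became ready: %s" % exc, file=sys.stderr)
                sys.exit(1)
            time.sleep(1)


if __name__ == "__main__":
    wait_for_db(DSN)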
I need to publish the test results, but I could not; I have looked through this site and done some research, but I'm still having trouble publishing them. Do I need to edit a certain file (similar to cypress.json) to resolve the publishing issue? Neither the unit test results nor the coverage are being published, and the files are not found. The code and the resulting error are shown below.
Code:
trigger:
  branches:
    include:
    - #infra-ML-unit-test
variables:
- group: aws_details #just for backup roles approach
- group: snowflake-variables
- group: unit-test-dev
jobs:
- job: 'Unit_Test'
  pool:
    vmImage: 'windows-latest' # other options: 'macOS-latest', 'windows-latest', 'ubuntu-latest'
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.8'
    displayName: 'Use Python 3.x'
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.8'
  - script: |
      pip install virtualenv
      virtualenv venv
      source venv/bin/activate
      pip install -r requirements.txt
      python -m pip install --upgrade pip setuptools sqlalchemy snowflake.sqlalchemy
      echo "$(System.DefaultWorkingDirectory)"
      echo "$(Build.StagingDirectory)"
      pip show pandas
      python -m pytest --log-file $SYSTEM_ARTIFACTSDIRECTORY/smoke-qa.log --junitxml=TEST-qa-smoke.xml -rP --excelreport=report.xls --html=pytest_report.html --self-contained-html
    #displayName: 'Install python tools'
- job: publish_result
  dependsOn: Unit_Test
  condition: succeededOrFailed()
  steps:
  - task: PublishTestResults@2
    displayName: 'Publish test result /010.xml'
    inputs:
      testResultsFormat: 'JUnit' # Options: JUnit, NUnit, VSTest, xUnit, cTest
      testResultsFiles: '**/TEST-*.xml'
      testRunTitle: 010
      mergeTestResults: true
Pytest result
Unit test and coverage
Is there any file in the repository that I need to update to get the coverage and test results published, given that the warning says:
##[warning]No test result files matching **/TEST-*.xml were found.
You have two options:
Install the pytest-azurepipelines package before running the tests:
pip install pytest pytest-azurepipelines
Make the PublishTestResults@2 task start the search from $(Pipeline.Workspace):
- task: PublishTestResults@2
  displayName: 'Publish test result /010.xml'
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '$(Pipeline.Workspace)/junit/test-*.xml'
For more info see Python: Run tests
My Python Flask Azure Linux Web App fails with the error below.
The error appears after importing the kiteconnect library.
Since enum34 is a prerequisite of kiteconnect, it gets installed along with kiteconnect. I have manually uninstalled it via the YAML script. I have also attached a screenshot of the successful uninstallation along with the YAML script file.
Source Code:
from flask import Flask, request
from kiteconnect import KiteConnect

app = Flask(__name__)

@app.route('/')
def index():
    Zerodha_API_KEY = "XXXXXXXXXX"
    Zerodha_API_SECRET = "XXXXXXXXXXX"
    Zerodha_Access_Token = "XXXXXXXXXXX"
    kite = KiteConnect(api_key=Zerodha_API_KEY)
    kite.set_access_token(Zerodha_Access_Token)
    data = kite.orders()
    print(data)
    return str(data)
Error:
2021-07-15T11:12:09.280348073Z A P P   S E R V I C E   O N   L I N U X
2021-07-15T11:12:09.280361474Z Documentation: http://aka.ms/webapp-linux
2021-07-15T11:12:09.280368174Z Python 3.8.6
2021-07-15T11:12:09.280374875Z Note: Any data outside '/home' is not persisted
2021-07-15T11:12:09.654320473Z Starting OpenBSD Secure Shell server: sshd.
2021-07-15T11:12:09.739608672Z App Command Line not configured, will attempt auto-detect
2021-07-15T11:12:09.740706500Z Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite
2021-07-15T11:12:09.871149910Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2021-07-15T11:12:09.881601967Z Build Operation ID: |8ufRzjfGT4Q=.9903ac7b_
2021-07-15T11:12:09.890669690Z Oryx Version: 0.2.20210420.1, Commit: 85c6e9278aae3980b86cb1d520aaad532c814ed7, ReleaseTagName: 20210420.1
2021-07-15T11:12:09.891252705Z Output is compressed. Extracting it...
2021-07-15T11:12:09.895295404Z Extracting '/home/site/wwwroot/output.tar.gz' to directory '/tmp/8d94707cb81be1f'...
2021-07-15T11:12:14.577736859Z App path is set to '/tmp/8d94707cb81be1f'
2021-07-15T11:12:16.276763516Z Detected an app based on Flask
2021-07-15T11:12:16.278429174Z Generating `gunicorn` command for 'app:app'
2021-07-15T11:12:16.836063591Z Writing output script to '/opt/startup/startup.sh'
2021-07-15T11:12:17.568867477Z Using packages from virtual environment antenv located at /tmp/8d94707cb81be1f/antenv.
2021-07-15T11:12:17.569943514Z Updated PYTHONPATH to ':/tmp/8d94707cb81be1f/antenv/lib/python3.8/site-packages'
2021-07-15T11:12:17.674962252Z Traceback (most recent call last):
2021-07-15T11:12:17.675003253Z File "/opt/python/3.8.6/bin/gunicorn", line 3, in <module>
2021-07-15T11:12:17.675027254Z import re
2021-07-15T11:12:17.675036154Z File "/opt/python/3.8.6/lib/python3.8/re.py", line 145, in <module>
2021-07-15T11:12:17.675044155Z class RegexFlag(enum.IntFlag):
2021-07-15T11:12:17.675051755Z AttributeError: module 'enum' has no attribute 'IntFlag'
2021-07-15T11:12:25.447Z ERROR - Container rocketzerodha_0_7bea6f30 for site rocketzerodha has exited, failing site start
2021-07-15T11:12:25.461Z ERROR - Container rocketzerodha_0_7bea6f30 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
2021-07-15T11:12:25.475Z INFO - Stopping site rocketzerodha because it failed during startup.
As suggested by Grace, I uninstalled enum34 via the YAML script. This has not helped either; the error remains the same.
YAML script used for the uninstallation (as proof):
# Python to Linux Web App on Azure
# Build your Python project and deploy it to Azure as a Linux Web App.
# Change python version to one thats appropriate for your application.
# https://learn.microsoft.com/azure/devops/pipelines/languages/python
trigger:
- master
variables:
  # Azure Resource Manager connection created during pipeline creation
  azureServiceConnectionId: 'ea695c8e-016a-42de-9492-868b1f100d6b'
  # Web app name
  webAppName: 'RocketZerodha'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'
  # Environment name
  environmentName: 'RocketZerodha'
  # Project root folder. Point to the folder containing manage.py file.
  projectRoot: $(System.DefaultWorkingDirectory)
  # Python version: 3.9
  pythonVersion: '3.9'
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: BuildJob
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '$(pythonVersion)'
      displayName: 'Use Python $(pythonVersion)'
    - script: |
        python -m venv antenv
        source antenv/bin/activate
        pip freeze
        python -m pip install --upgrade pip setuptools wheel
        pip install setup
        pip install -r requirements.txt
        pip uninstall -y enum34
        pip freeze
      workingDirectory: $(projectRoot)
      displayName: "Install requirements"
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(projectRoot)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      displayName: 'Upload package'
      artifact: drop
- stage: Deploy
  displayName: 'Deploy Web App'
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: DeploymentJob
    pool:
      vmImage: $(vmImageName)
    environment: $(environmentName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '$(pythonVersion)'
            displayName: 'Use Python version'
          - script: |
              pip freeze
              pip uninstall -y enum34
            workingDirectory: $(projectRoot)
            displayName: "Install requirements"
          - task: AzureWebApp@1
            displayName: 'Deploy Azure Web App : RocketZerodha'
            inputs:
              azureSubscription: $(azureServiceConnectionId)
              appName: $(webAppName)
              package: $(Pipeline.Workspace)/drop/$(Build.BuildId).zip
I see two different errors here.
For the module 'enum' error, you might want to uninstall enum34 if you have it; it's known for causing package issues. See this similar post for more info. Use this command to uninstall it: pip uninstall -y enum34
Based on the second error message, it looks like port 8000 is not exposed, which is why your container is not responding to HTTP pings. Try setting both PORT and WEBSITES_PORT to 8000 in the configuration. You can set the PORT variable in the Azure portal: App Service -> Configuration -> Application settings -> + PORT 8000
(source)
Why Python 3.6.1 throws AttributeError: module 'enum' has no attribute 'IntFlag'?
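If you want to confirm which enum module the runtime actually picks up (and whether the enum34 backport is still shadowing the standard library), a quick diagnostic along these lines can be run in the App Service SSH console or inside the same virtual environment; it's only a sketch for checking, not part of the app:

import enum
import sys

print(sys.version)
# The standard library module lives under .../lib/python3.x/enum.py;
# if this points into site-packages, the enum34 backport is shadowing it.
print(enum.__file__)
# The backport has no IntFlag, which is exactly what makes `import re` fail.
print(hasattr(enum, "IntFlag"))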
I'm trying to deploy a very simple Flask app to a Linux Azure Web App, but it seems like it cannot find any of the installed packages.
It works on localhost using a virtual env.
Here is the app code, in the file startup.py:
from flask import Flask
from flask_restful import Resource, Api


class Index(Resource):
    def get(self):
        return "Hellow World"


app = Flask(__name__)
api = Api(app)
api.add_resource(Index, "/home")
I'm trying to deploy the app using Azure DevOps Pipelines. Here is my azure-pipeline.yml file (I removed the variable values for this post):
variables:
  ConnectedServiceName: <name of my service connection>
  WebAppName: <name of my web app>
pool:
  name: Hosted Ubuntu 1604
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.6'
    architecture: 'x64'
- script: pip install -r requirements.txt
  displayName: 'Install requirements'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.SourcesDirectory)'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    replaceExistingArchive: true
    #verbose: # (no value); this input is optional
- task: AzureRMWebAppDeployment@4
  displayName: Azure App Service Deploy
  inputs:
    appType: webAppLinux
    RuntimeStack: 'PYTHON|3.6'
    ConnectedServiceName: $(ConnectedServiceName)
    WebAppName: $(WebAppName)
    Package: '$(Build.ArtifactStagingDirectory)/Application$(Build.BuildId).zip'
    StartupCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'
And this is my requirements.txt file:
aniso8601==7.0.0
astroid==2.2.5
Click==7.0
colorama==0.4.1
Flask==1.1.1
Flask-JWT-Extended==3.22.0
Flask-RESTful==0.3.7
flask-swagger-ui==3.20.9
The pipeline runs all the steps without errors.
Deployment Image
But it seems like there are no packages in the wwwroot folder:
/home>cd site/wwwroot
/home/site/wwwroot>pip freeze
virtualenv==16.6.2
/home/site/wwwroot>
I also tried running pip install -r requirements.txt from that folder; it collects the packages but never finishes installing them (it shows "Cleaning up..." forever).
In the application logs I can see:
ModuleNotFoundError: No module named 'flask_restful'
2019-09-04T14:55:19.215300670Z [2019-09-04 14:55:19 +0000] [39] [INFO] Worker exiting (pid: 39)
2019-09-04T14:55:19.234394151Z [2019-09-04 14:55:19 +0000] [38] [ERROR] Exception in worker process
How can I make sure that the web app uses the packages listed in requirements.txt?
Any ideas of what could be wrong?
I tried to create a sample pipeline:
pool:
  name: Azure Pipelines
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6
- task: PythonScript@0
  displayName: 'Run a Python script'
  inputs:
    scriptSource: inline
    script: 'pip install -r req.txt'
  enabled: false
- script: 'pip install -r req.txt --target=$(Build.BinariesDirectory)'
  displayName: 'Command Line Script'
- task: CopyFiles@2
  displayName: 'Copy Files to: $(Build.BinariesDirectory)'
  inputs:
    TargetFolder: '$(Build.BinariesDirectory)'
- task: ArchiveFiles@2
  displayName: 'Archive $(Build.BinariesDirectory)'
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
- task: AzureWebApp@1
  displayName: 'Azure Web App Deploy: '
  inputs:
    azureSubscription: ''
    appType: webAppLinux
    appName:
    package: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    runtimeStack: 'PYTHON|3.6'
PublishBuildArtifacts is not required here, but it can be used to verify the actual build artifact that will be published to the web app. You can use it to check whether the packages are present in the artifact or not.
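As a complementary check on the App Service side, a small diagnostic run from the SSH/Kudu console (just a sketch, not part of the app) shows which interpreter the site uses and where imports are resolved from, which quickly tells you whether flask_restful ever made it into the deployed environment:

import sys

print(sys.executable)  # which Python the app is running under
print(sys.path)        # where imports are resolved from

try:
    import flask_restful
    print("flask_restful found at", flask_restful.__file__)
except ModuleNotFoundError as exc:
    print("missing module:", exc.name)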
I have the following YAML pipeline build file:
pr:
  branches:
    include:
    - master
jobs:
- job: 'Test'
  pool:
    vmImage: 'Ubuntu-16.04'
  strategy:
    matrix:
      Python36:
        python.version: '3.6'
    maxParallel: 4
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(python.version)'
      architecture: 'x64'
    env:
      POSTGRES: $(POSTGRES)
  - script: python -m pip install --upgrade pip && pip install -r requirements.txt
    displayName: 'Install dependencies'
  - script: |
      pip install pytest
      pytest tests -s --doctest-modules --junitxml=junit/test-results.xml
    displayName: 'pytest'
I set the variable POSTGRES in the pipeline settings as a secret variable. In the Python code all environment variables are read with a check like this:
if not os.getenv(var):
    raise ValueError(f'Environment variable \'{var}\' is not set')
When the build is executed, it throws exactly the above error for the POSTGRES variable. Are the environment variables not being set correctly?
To make the environment variable available in the Python script, you need to define it in the step where it's used:
- script: |
    pip install pytest
    pytest tests -s --doctest-modules --junitxml=junit/test-results.xml
  displayName: 'pytest'
  env:
    POSTGRES: $(POSTGRES)
I don't know if you still need this, but...
If you take a look at the documentation here, it says:
"Unlike a normal variable, they are not automatically decrypted into environment variables for scripts. You can explicitly map them in, though."
So it looks like you were doing it right. Maybe try using a different name for the mapped variable; it could be that the name of the initial encrypted variable is confounding the mapping (because it's already a variable, it won't be remapped).
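To see at a glance whether the mapping worked, you could also dump which of the expected variable names are present at the start of the test step. This is a minimal sketch; the list of names is just an example, and the values themselves are deliberately not printed since they are secrets:

import os

EXPECTED = ["POSTGRES"]  # extend with whatever names the code requires

present = [name for name in EXPECTED if os.getenv(name)]
missing = [name for name in EXPECTED if not os.getenv(name)]

print("present:", present or "none")
print("missing:", missing or "none")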