I have created a Python application and I would like to deploy it via GitLab. To achieve this, I created the following gitlab-ci.yml file:
# This file is a template, and might need editing before it works on your project.
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
image: "python:3.10"

# Commands to run in the Docker container before starting each job.
before_script:
  - python --version
  - pip install -r requirements.txt

# The different stages in the pipeline.
stages:
  - Static Analysis
  - Test
  - Deploy

# Defines the job in Static Analysis.
pylint:
  stage: Static Analysis
  script:
    - pylint -d C0301 src/*.py

# Tests the code.
pytest:
  stage: Test
  script:
    - cd test/; pytest -v

# Deploy.
deploy:
  stage: Deploy
  script:
    - echo "test ms deploy"
    - cd src/
    - pyinstaller -F gui.py --noconsole
  tags:
    - macos
It runs fine through the Static Analysis and Test phases, but in Deploy I get the following error:
OSError: Python library not found: .Python, libpython3.10.dylib, Python3, Python, libpython3.10m.dylib
This means your Python installation does not come with proper shared library files.
This usually happens due to missing development package, or unsuitable build parameters of the Python installation.
* On Debian/Ubuntu, you need to install Python development packages:
* apt-get install python3-dev
* apt-get install python-dev
* If you are building Python by yourself, rebuild with `--enable-shared` (or, `--enable-framework` on macOS).
As I am working on a MacBook, I tried the following addition: env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.10.5, but then pyenv reports that Python 3.10.5 already exists.
I tried some other things, but I am a bit stuck. Any advice or suggestions?
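One untested possibility, since pyenv refuses to install over an existing version: pass --force so pyenv rebuilds 3.10.5 in place with framework support, then make the runner pick up the rebuilt interpreter. A minimal sketch:

# Rebuild the already-installed 3.10.5 with shared-library/framework support.
# --force overwrites the existing pyenv version instead of erroring out.
env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install --force 3.10.5
# Select the rebuilt interpreter for this project so pyinstaller uses it.
pyenv local 3.10.5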
Related
I'm learning how to create conda packages, and I am having trouble with a package that works locally but, when downloaded from Anaconda onto a different server, returns an error:
/lib64/libm.so.6: version `GLIBC_2.29' not found
The meta.yaml looks like the following:
package:
  name: app
  version: 2.4

source:
  git_url: https://gitlab.com/user/repo.git
  git_tag: v2.4

requirements:
  build:
  host:
  run:

about:
  home: https://gitlab.com/user/repo
  license: GPL-3
  license_family: GPL
  summary: blabla.
The app is built with a simple build.sh script:
#!/bin/bash
set -x
echo $(pwd)
make
# Install the compiled binary into the conda environment's bin directory.
BIN=$PREFIX/bin
mkdir -p $BIN
cp app $BIN
I assumed that build: glibc >=2.29 under requirements would do the job, but that results in an error when running conda build ..
How can I include GLIBC in the package? Is that something meant to be done manually? From the package version I can download from Anaconda, I can see that other packages are pulled in as well (e.g. libgcc-ng) that I did not mention in the meta.yaml or anywhere else.
How can I include GLIBC in the package?
You can't, for reasons explained here.
Your best bet is to build on a system (or in a docker container) which has the lowest version of GLIBC you need to support (i.e. the version installed on the server(s) you will be running your package on).
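As an illustrative (untested) sketch of that approach, you could run conda-build inside a CentOS 7 container, whose glibc (2.17) predates the 2.29 symbol the error complains about:

# Build the recipe inside an old-GLIBC container so the resulting binary
# only references symbols available on older target systems.
docker run --rm -v "$(pwd)":/recipe -w /recipe centos:7 bash -c '
  yum install -y -q make gcc
  curl -sLO https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
  bash Miniconda3-latest-Linux-x86_64.sh -b -p /opt/conda
  /opt/conda/bin/conda install -y conda-build
  /opt/conda/bin/conda build .
'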
I'm trying to run Appium tests written in python 3 on AWS Device Farm.
As stated in the documentation,
Python 2.7 is supported in both the standard environment and using Custom Mode. It is the default in both when specifying Python.
Python 3 is only supported in Custom Mode. To choose Python 3 as your python version, change the test spec to set the PYTHON_VERSION to 3, as shown here:
phases:
  install:
    commands:
      # ...
      - export PYTHON_VERSION=3
      - export APPIUM_VERSION=1.14.2
      # Activate the Virtual Environment that Device Farm sets up for Python 3,
      # then use Pip to install required packages.
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - . bin/activate
      - pip install -r requirements.txt
      # ...
I did run tests successfully on Device Farm in the past, using a python 3 custom environment, with this spec file (I include the install phase only):
phases:
  install:
    commands:
      # Device Farm supports two major versions of Python, each with one minor
      # version: 2 (minor version 2.7) and 3 (minor version 3.7.4).
      # The default Python version is 2, but you can switch to 3 by setting the
      # following variable to 3:
      - export PYTHON_VERSION=3
      # This command will install your dependencies and verify that they're
      # using the proper versions that you specified in your requirements.txt
      # file. Because Device Farm preconfigures your environment within a
      # Python Virtual Environment in its setup phase, you can use pip to
      # install any Python packages just as you would locally.
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - . bin/activate
      - pip install -r requirements.txt
      # ...
Now, when I run the tests, I get this log and then the test crashes due to incompatible code.
[DEVICEFARM] ########### Entering phase test ###########
[DeviceFarm] echo "Navigate to test package directory"
Navigate to test package directory
[DeviceFarm] cd $DEVICEFARM_TEST_PACKAGE_PATH
[DeviceFarm] echo "Start Appium Python test"
Start Appium Python test
[DeviceFarm] py.test tests/ --junit-xml $DEVICEFARM_LOG_DIR/junitreport.xml
============================= test session starts ==============================
platform linux2 -- Python 2.7.6, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /tmp/scratchrPuGRa.scratch/test-packageM5E89M/tests, inifile:
collected 0 items / 3 errors
==================================== ERRORS ====================================
____________________ ERROR collecting test_home_buttons.py _____________________
/usr/local/lib/python2.7/dist-packages/_pytest/python.py:610: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=importmode)
Is python 3.x not supported anymore, or have there been undocumented changes?
Is there a new way to run tests in a python 3 environment?
The documented way is still correct.
I found that there was an error during the installation of the pytest package in the virtual environment. This made the py.test command fall back to the default (Python 2.7) environment.
In my case the issue was resolved by installing an older version of pytest and bundling it in requirements.txt with other packages.
pip install pytest==6.2.4
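In other words, the fix is to pin the version inside requirements.txt so it installs cleanly into Device Farm's virtual environment; a sketch (the second package is just a placeholder for whatever else your tests need):

# requirements.txt -- pin pytest so it installs into Device Farm's venv
pytest==6.2.4
Appium-Python-Client   # hypothetical: your other test dependencies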
I have the below Bitbucket pipeline:
image: node:11.13.0-alpine

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - apk add python py-pip python3
            - npm install -g serverless
            - serverless config credentials --stage dev --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
            - cd src/rsc_user
            - pip install -r requirements.txt
            - sls plugin install -n serverless-python-requirements
            - sls plugin install -n serverless-wsgi
            - npm i serverless-package-external --save-dev
            - npm install serverless-domain-manager --save-dev
            - serverless deploy --stage dev
It throws this error:
Error --------------------------------------------------
Error: python3.7 not found! Try the pythonBin option.
at pipAcceptsSystem (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/serverless-python-requirements/lib/pip.js:100:13)
at installRequirements (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/serverless-python-requirements/lib/pip.js:173:9)
at installRequirementsIfNeeded (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/serverless-python-requirements/lib/pip.js:556:3)
at ServerlessPythonRequirements.installAllRequirements (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/serverless-python-requirements/lib/pip.js:635:29)
at ServerlessPythonRequirements.tryCatcher (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/promise.js:547:31)
at Promise._settlePromise (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/promise.js:604:18)
at Promise._settlePromise0 (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/promise.js:649:10)
at Promise._settlePromises (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/promise.js:729:18)
at _drainQueueStep (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/async.js:93:12)
at _drainQueue (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/async.js:86:9)
at Async._drainQueues (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/async.js:102:5)
at Immediate.Async.drainQueues [as _onImmediate] (/opt/atlassian/pipelines/agent/build/src/rsc_user/node_modules/bluebird/js/release/async.js:15:14)
at processImmediate (internal/timers.js:443:21)
at process.topLevelDomainCallback (domain.js:136:23)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 11.13.0
Framework Version: 2.1.1
Plugin Version: 4.0.4
SDK Version: 2.3.2
Components Version: 3.1.4
I am not able to understand this error, as I am new to Python.
Any help is highly appreciated.
Thanks
This error basically means you do not have the right installation of Python: serverless-python-requirements is looking for python3.7, but apk add python3 does not pin a version, so the latest one (probably 3.8) gets installed.
This article deals with how to select a given Python version for an agent in a Bitbucket pipeline. It basically boils down to:
image: python:3.7

pipelines:
  default:
    - step:
        script:
          - python --version
Is there a reason you have to use Alpine? Otherwise I'd go for the pragmatic image above.
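If you do switch to the python:3.7 image, note that the serverless CLI still needs Node.js; a rough, untested sketch of what the step might become (the apt package names are assumptions):

image: python:3.7

pipelines:
  branches:
    master:
      - step:
          script:
            # The Debian-based python image has no Node.js, so install it
            # before the serverless CLI (package names are assumptions).
            - apt-get update && apt-get install -y nodejs npm
            - npm install -g serverless
            - python --version   # should now report 3.7.x
            # ...rest of the original script unchanged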
It was solved with:
pythonRequirements:
  pythonBin: python3
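For context, in serverless.yml this option sits under the plugin's custom section; a minimal sketch:

# serverless.yml (sketch): point serverless-python-requirements at the
# interpreter that is actually installed, instead of the default python3.7.
custom:
  pythonRequirements:
    pythonBin: python3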
Similar problem
For the first time I deployed a Python function app to Azure using a deployment pipeline:
https://learn.microsoft.com/bs-latn-ba/azure/azure-functions/functions-how-to-azure-devops
The package is deployed to Azure using Kudu Zip deploy.
My HTTP-triggered function runs wonderfully locally (on Windows), but I get a 500 internal server error on Azure because it does not find the module requests.
Exception: ModuleNotFoundError: No module named 'requests'
The imports in __init__.py:
import logging, requests, os
import azure.functions as func
If I remove the 'requests' dependency the function works on Azure (status 200).
The requests library is listed in requirements.txt and is copied to .venv36/lib/site-packages/requests by the build pipeline.
So I am wondering if the virtual environment .venv36 that is built in the package is used by the function deployed in Azure. There is no indication about how to activate virtual environments in Azure.
If you name your virtual env worker_venv as named in the documentation you linked, it should work (assuming you are using a Linux environment for your pipeline).
However, the Python Azure Functions documentation is to be updated very soon, and the recommended way would be to not deploy the entire virtual environment from your deployment pipeline.
Instead, you'd want to install your packages in .python_packages/lib/site-packages.
You could do --
pip3.6 install --target .python_packages/lib/site-packages -r requirements.txt
Instead of --
python3.6 -m venv worker_venv
source worker_venv/bin/activate
pip3.6 install setuptools
pip3.6 install -r requirements.txt
And it should work fine.
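For reference, the resulting deployment package would then look roughly like this (the function folder name is a made-up example):

<function-app-root>/
├── MyHttpTrigger/            # hypothetical function folder
│   ├── __init__.py
│   └── function.json
├── host.json
├── requirements.txt
└── .python_packages/
    └── lib/
        └── site-packages/    # requests etc. land here via pip --target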
We are also having the same issue using the newest version of the YAML pipeline template:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6 # Functions V2 supports Python 3.6 as of today
- bash: |
    python -m venv worker_venv
    source worker_venv/bin/activate
    pip install -r requirements.txt
  workingDirectory: $(workingDirectory)
  displayName: 'Install application dependencies'
Removing the virtual environment step, the Function App deployed and ran without any issues. This does not seem to be Python best practice; however, it was the only thing we could do to get this deployed correctly on Azure DevOps Pipelines.
Separately, before making this change, we were able to deploy using the Visual Studio code plugin, which indicated to us that this was an environment issue.
Updated docs from Microsoft (1/12/2020)
https://learn.microsoft.com/en-us/azure/azure-functions/functions-how-to-azure-devops?tabs=python
azure-pipelines.yml (our working version on Azure DevOps Pipelines)
trigger:
- master

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '<subscription-id>'
  # Function app name
  functionAppName: '<built-function-app-name>'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'
  # Working Directory
  workingDirectory: '$(System.DefaultWorkingDirectory)/__app__'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - bash: |
        if [ -f extensions.csproj ]
        then
          dotnet build extensions.csproj --runtime ubuntu.16.04-x64 --output ./bin
        fi
      workingDirectory: $(workingDirectory)
      displayName: 'Build extensions'
    - task: UsePythonVersion@0
      displayName: 'Use Python 3.7'
      inputs:
        versionSpec: 3.7 # Functions V2 supports Python 3.6 as of today
    - bash: |
        pip install --upgrade pip
        pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
      workingDirectory: $(workingDirectory)
      displayName: 'Install application dependencies'
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(workingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - publish: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploy
    displayName: Deploy
    environment: 'production'
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureFunctionApp@1
            displayName: 'Azure functions app deploy'
            inputs:
              azureSubscription: '$(azureSubscription)'
              appType: functionAppLinux
              appName: $(functionAppName)
              package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
It definitely needs to be more clearly pointed out that the proper directory for Python packages when deploying Azure Functions is .python_packages/lib/site-packages. I had to go digging through the Azure Functions Core Tools source code to see where they put Python packages.
I also had to dig around in the Function debug console to see where Oryx grabs packages from.
I guess there is a pointer in the version 3.7 YAML file here, but there is no callout of the directory's importance, and does it apply to Python 3.8 Functions?
If I'm not mistaken, this is a requirement to use DevOps to deploy Python Functions (unless you want to install Function Core Tools as part of your build pipeline!).
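For completeness, the Core Tools route hinted at above would look roughly like this in a build step (untested sketch; the app name is a placeholder):

# Install Azure Functions Core Tools on the agent and let it build and
# publish the app directly (an alternative to the pip --target approach).
npm install -g azure-functions-core-tools@3 --unsafe-perm true
func azure functionapp publish my-function-app --build remote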
You need to handle those two imports separately:
import azure.functions as func
import requests
Hopefully I am understanding your problem correctly.
When you install packages on your local machine, libs are installed where Python lives (or at least somewhere other than where your actual code is). This means that when you package your code, you aren't actually keeping the libs together with it.
To get around this, you can use a virtual env. Python provides a venv tool (there is also a standard Linux virtualenv tool), which you can run via:
# Create a virtual environment, activate it, and install dependencies into it.
python -m venv /path/to/my/dir
source /path/to/my/dir/bin/activate
pip install -r requirements.txt
deactivate
I know you mentioned Windows, so I would suggest using WSL and the Ubuntu image (generally a nice tool to have anyway). There is probably a way to get this working natively on Windows, though I don't know it.
Although it's old:
pip<python-version> install --target .python_packages/lib/site-packages -r requirements.txt
For example, if you are using 3.7:
pip3.7 install --target .python_packages/lib/site-packages -r requirements.txt
Works like a charm
I am having some problems using the Google App Engine Python SDK in Travis-CI. I always get this exception:
Failure: ImportError (No module named google.appengine.api) ... ERROR
I think the problem is in my Travis file or my Django settings file. Can I use the GAE SDK API on the Travis platform?
Here is my .travis.yml file:
language: python
python:
  - "2.7"
before_script:
  - wget https://storage.googleapis.com/appengine-sdks/featured/google_appengine_1.9.10.zip -nv
  - unzip -q google_appengine_1.9.10.zip
  - mysql -e 'create database DATABASE_NAME;'
  - echo "USE mysql;\nUPDATE user SET password=PASSWORD('A_PASSWORD') WHERE user='USER';\nFLUSH PRIVILEGES;\n" | mysql -u USER
  - python manage.py syncdb --noinput
install:
  - pip install -r requirements.txt
  - pip install mysql-python
script: python manage.py test --with-coverage
branches:
  only:
    - testing
Thank you
After trying a lot, I solved it by adding this to my .travis.yml file in the before_script section, after the unzip command:
- export PYTHONPATH=${PYTHONPATH}:google_appengine
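With that change, the relevant part of before_script reads (sketch recombining the file above):

before_script:
  - wget https://storage.googleapis.com/appengine-sdks/featured/google_appengine_1.9.10.zip -nv
  - unzip -q google_appengine_1.9.10.zip
  # Put the unzipped SDK on the import path so google.appengine.api resolves.
  - export PYTHONPATH=${PYTHONPATH}:google_appengine
  # ...remaining commands unchanged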