I have the following GitHub action, in which I'm specifying Python 3.10:
name: Unit Tests
runs-on: ubuntu-latest
defaults:
  run:
    shell: bash
    working-directory: app
steps:
  - uses: actions/checkout@v3
  - name: Install poetry
    run: pipx install poetry
  - uses: actions/setup-python@v3
    with:
      python-version: "3.10"
      cache: "poetry"
  - run: poetry install
  - name: Run tests
    run: |
      make mypy
      make test
The pyproject.toml specifies Python 3.10 as well:
[tool.poetry.dependencies]
python = ">=3.10,<3.11"
When the action runs, I get the following:
The currently activated Python version 3.8.10 is not supported by the project
(>=3.10,<3.11).
Trying to find and use a compatible version.
Using python3 (3.10.5)
It looks like it's using 3.10, but py.test is using 3.8.10:
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 --
/home/runner/.cache/pypoetry/virtualenvs/vital-background-pull-yluVa_Vi-py3.10/bin/python
For context, this GitHub action was running on 3.8 before. I've updated the Python version in both the test.yaml and the pyproject.toml, but it's still using 3.8. Is there anything else I should change to make it use 3.10?
Thank you
The root cause is the section
- uses: actions/setup-python@v3
  with:
    python-version: "3.10"
    cache: "poetry"
specifically the line that caches poetry. Since poetry was previously installed with a pip associated with Python 3.8, the packages are retrieved from the cache tied to that Python version. They need to be re-installed against the new Python version.
You can either remove the cache: "poetry" line for a single GitHub Actions run (so the cache gets rebuilt), or delete the cache manually, for example from the repository's Actions caches page. Either will fix your issue.
Pipx might install poetry using an unexpected version of python. You can specify the python version to use:
pipx install poetry --python $(which python)
# or
pipx install poetry --python python3.10
This is what I do after the actions/setup-python@v3 step.
You could also pass the explicit path to the expected Python version; the hosted-runner tool paths are listed in the GitHub docs, and setup-python also exports the location as the pythonLocation environment variable. This would allow you to keep the cache: poetry step in the order you have above.
I had a similar problem and solved it after reading up on how pipx decides which Python to use. I did not use pyenv in my case, since I'm specifying the version in my setup-python@v3 step.
You might also install poetry after the Python setup step, to be sure your version is available, supposing you pinned something like python-version: "3.10.12". Then what remains is caching, perhaps using the cache action separately from the setup-python step.
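A minimal sketch of that separate caching step, assuming poetry's default cache location on the Ubuntu runners (~/.cache/pypoetry) and a poetry.lock in the repository; adjust paths to your layout:

```yaml
- uses: actions/setup-python@v3
  id: setup-python
  with:
    python-version: "3.10"
# Cache poetry's package cache, keyed on the exact Python version so a
# version bump invalidates the cache instead of restoring 3.8 artifacts.
- uses: actions/cache@v3
  with:
    path: ~/.cache/pypoetry
    key: poetry-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}
- run: pipx install poetry --python "${pythonLocation}/bin/python"
- run: poetry install
```

Because the cache key includes the resolved Python version, switching from 3.8 to 3.10 starts a fresh cache rather than restoring the old one.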
In my case, this happens because my pyproject.toml is in a subdirectory of the repository.
The log for my actions/setup-python@v4 action looks like this:
/opt/pipx_bin/poetry env use /opt/hostedtoolcache/Python/3.9.14/x64/bin/python
Poetry could not find a pyproject.toml file in /home/runner/work/PLAT/PLAT or its parents
Warning:
Poetry could not find a pyproject.toml file in /home/runner/work/PLAT/PLAT or its parents
But the action completes successfully. Later, poetry doesn't know which Python to use because it was unable to write to its global envs.toml. Eventually I found that there's an open issue for this in actions/setup-python.
Fix
Cheat
You can do one of two things. The simplest is a cheat:
runs-on: ubuntu-22.04
The ubuntu-22.04 image has Python 3.10 baked in, so you can forget about switching Pythons, and that'll be fine for a while.
Actual Fix
The better fix is to add a step after setup-python but before poetry install:
- run: poetry env use ${pythonLocation}/bin/python
working-directory: wherever/your/pyproject.toml/is
Related
I'm learning how to create conda packages, and I am having trouble with a package that works locally but, when downloaded from Anaconda onto a different server, returns an error:
/lib64/libm.so.6: version `GLIBC_2.29' not found
The meta.yaml looks like the following:
package:
  name: app
  version: 2.4
source:
  git_url: https://gitlab.com/user/repo.git
  git_tag: v2.4
requirements:
  build:
  host:
  run:
about:
  home: https://gitlab.com/user/repo
  license: GPL-3
  license_family: GPL
  summary: blabla.
The app is built with a simple build.sh script:
#!/bin/bash
set -x
echo $(pwd)
make
BIN=$PREFIX/bin
mkdir -p $BIN
cp app $BIN
I assumed that build: glibc >=2.29 under requirements would do the job, but that results in an error when running conda build .
How can I include GLIBC in the package? Is that something meant to be done manually? From the package version I can download from Anaconda, I can see other packages are pulled in as well (e.g. libgcc-ng) that I did not mention in the meta.yaml or anywhere else.
How can I include GLIBC in the package?
You can't, for reasons explained here.
Your best bet is to build on a system (or in a docker container) which has the lowest version of GLIBC you need to support (i.e. the version installed on the server(s) you will be running your package on).
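To see which glibc a given build machine or container provides (and therefore the minimum your compiled binaries will end up requiring), a quick check from Python's standard library. This reports the C library the interpreter itself is linked against, so it's a Linux-only sanity check rather than part of the conda recipe:

```python
import platform

# On glibc-based Linux this returns e.g. ("glibc", "2.31");
# on other platforms it may return empty strings.
libc, version = platform.libc_ver()
print(libc, version)
```

Building inside a container whose base image ships an older glibc then guarantees the produced binaries reference no symbols newer than that version.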
I'm trying to run Appium tests written in python 3 on AWS Device Farm.
As stated in the documentation,
Python 2.7 is supported in both the standard environment and using Custom Mode. It is the default in both when specifying Python.
Python 3 is only supported in Custom Mode. To choose Python 3 as your python version, change the test spec to set the PYTHON_VERSION to 3, as shown here:
phases:
  install:
    commands:
      # ...
      - export PYTHON_VERSION=3
      - export APPIUM_VERSION=1.14.2
      # Activate the Virtual Environment that Device Farm sets up for Python 3, then use Pip to install required packages.
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - . bin/activate
      - pip install -r requirements.txt
      # ...
I did run tests successfully on Device Farm in the past, using a Python 3 custom environment, with this spec file (I include only the install phase):
phases:
  install:
    commands:
      # Device Farm supports two major versions of Python, each with one minor version: 2 (minor version 2.7) and 3 (minor version 3.7.4).
      # The default Python version is 2, but you can switch to 3 by setting the following variable to 3:
      - export PYTHON_VERSION=3
      # This command will install your dependencies and verify that they're using the proper versions that you specified in your requirements.txt file.
      # Because Device Farm preconfigures your environment within a Python Virtual Environment in its setup phase, you can use pip to install any Python packages just as you would locally.
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - . bin/activate
      - pip install -r requirements.txt
      # ...
Now, when I run the tests, I get this log and then the test crashes due to incompatible code.
[DEVICEFARM] ########### Entering phase test ###########
[DeviceFarm] echo "Navigate to test package directory"
Navigate to test package directory
[DeviceFarm] cd $DEVICEFARM_TEST_PACKAGE_PATH
[DeviceFarm] echo "Start Appium Python test"
Start Appium Python test
[DeviceFarm] py.test tests/ --junit-xml $DEVICEFARM_LOG_DIR/junitreport.xml
============================= test session starts ==============================
platform linux2 -- Python 2.7.6, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /tmp/scratchrPuGRa.scratch/test-packageM5E89M/tests, inifile:
collected 0 items / 3 errors
==================================== ERRORS ====================================
____________________ ERROR collecting test_home_buttons.py _____________________
/usr/local/lib/python2.7/dist-packages/_pytest/python.py:610: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=importmode)
Is python 3.x not supported anymore, or have there been undocumented changes?
Is there a new way to run tests in a python 3 environment?
The documented way is still correct.
I found there was an error during the installation of the pytest package in the virtual environment. This made the py.test command refer to the default environment.
In my case the issue was resolved by installing an older version of pytest and bundling it in requirements.txt with the other packages.
pip install pytest==6.2.4
I'm switching a python project over to poetry for dependency and packaging management, and am running into issues getting my github actions unit tests working. I believe the issue is that poetry is not actually installing my package. When I run poetry install locally, after it installs dependencies it shows that it installs the current project with the line:
Installing the current project: monaco (0.1.0)
However, when I run poetry install in GitHub Actions, it installs the dependencies but never shows the line where it installed the current project. Here's the GitHub test log for context; look under the "Install library" step. Then, when I try to run tests after that, they fail because they can't find the package:
ImportError while importing test module '/home/runner/work/monaco/monaco/tests/test_MCCase.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/opt/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_MCCase.py:4: in <module>
from monaco.MCCase import MCCase
E ModuleNotFoundError: No module named 'monaco'
This is my first time using poetry, so it's likely that I'm doing something silly somewhere. But I've spent the last couple hours trying to figure it out and have gotten nowhere. Any help would be much appreciated!
My unit_tests.yml file looks like this:
name: Unit Tests
on:
  push:
    branches:
      - master
      - develop
  pull_request:
    branches:
      - master
      - develop
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v2
      - name: Set up python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install Poetry
        uses: snok/install-poetry@v1
      - name: Install library
        run: poetry install
      - name: Run tests
        run: |
          poetry run python -m pytest
Turns out I'm an idiot: I had just renamed my project from "Monaco" to "monaco", but forgot to update the module directory name to lowercase. Fixing that fixed my issue.
I cloned a repository, installed pre-commit and was committing for the first time.
This is the time when pre-commit packages actually get installed and setup. I faced the following issue.
[INFO] Installing environment for https://github.com/asottile/seed-isort-config.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/home/roopak/.cache/pre-commit/repokb2ckm/py_env-python2.7/bin/python', u'/home/roopak/.cache/pre-commit/repokb2ckm/py_env-python2.7/bin/pip', 'install', '.')
return code: 1
expected return code: 0
stdout:
Processing /home/roopak/.cache/pre-commit/repokb2ckm
stderr:
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
ERROR: Package 'seed-isort-config' requires a different Python: 2.7.17 not in '>=3.6.1'
The issue was that I have both Python 2.7 and 3 installed, and my pre-commit installation was using Python 2.7 as its default.
Solution 1: remove pre-commit from Python 2.7 and add it to Python 3.
As per the creator of pre-commit - @anthony-sottile - it is better to use pre-commit with Python 3. To do that we have to uninstall pre-commit from Python 2.7 and install it via Python 3.
$ pip uninstall pre-commit # uninstall from Python2.7
$ pip3 install pre-commit # install with Python3
Solution 2: keeping pre-commit with Python2.7 (not recommended)
To solve this I used default_language_version from the pre-commit documentation.
Refer: https://pre-commit.com/#overriding-language-version
By setting default_language_version, all hooks will use that particular version. If any particular hook needs to override it, the language_version: property may be set on that hook.
E.g.:
default_language_version:
  # force all unspecified python hooks to run python3
  python: python3
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.5.0
    hooks:
      - id: trailing-whitespace
        name: trim trailing whitespace
        description: This hook trims trailing whitespace on files
        entry: trailing-whitespace-fixer
      - id: check-merge-conflict
        name: check for merge conflict
        description: Prevent accidentally committing files with merge conflicts.
        language_version: python2.7
This example .pre-commit-config.yaml sets the default Python version to Python 3; the check-merge-conflict hook alone will use Python 2.7.
For the first time I deployed a Python function app to Azure using a deployment pipeline:
https://learn.microsoft.com/bs-latn-ba/azure/azure-functions/functions-how-to-azure-devops
The package is deployed to Azure using Kudu Zip deploy.
My HTTP-triggered function runs wonderfully locally (on Windows), but I get a 500 internal error on Azure because it does not find the module requests.
Exception: ModuleNotFoundError: No module named 'requests'
imports of __init__.py:
import logging, requests, os
import azure.functions as func
If I remove the 'requests' dependency the function works on Azure (status 200).
The requests library is installed from requirements.txt and copied to .venv36/lib/site-packages/requests by the build pipeline.
So I am wondering whether the virtual environment .venv36 that is built into the package is used by the function deployed in Azure. There is no indication of how to activate virtual environments in Azure.
If you name your virtual env worker_venv as named in the documentation you linked, it should work (assuming you are using a Linux environment for your pipeline).
However, the Python Azure Functions documentation is to be updated very soon, and the recommended way would be to not deploy the entire virtual environment from your deployment pipeline.
Instead, you'd want to install your packages in .python_packages/lib/site-packages.
You could do --
pip3.6 install --target .python_packages/lib/site-packages -r requirements.txt
Instead of --
python3.6 -m venv worker_venv
source worker_venv/bin/activate
pip3.6 install setuptools
pip3.6 install -r requirements.txt
And it should work fine.
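For reference, with the packages installed that way, the deployment zip would look roughly like this (HttpTrigger is a hypothetical function name; host.json and the function folders stay at the root, with dependencies under .python_packages):

```
.
├── host.json
├── requirements.txt
├── HttpTrigger/
│   ├── __init__.py
│   └── function.json
└── .python_packages/
    └── lib/
        └── site-packages/
            └── requests/
```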
We are also having the same issue using the newest version of the YAML pipeline template:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6 # Functions V2 supports Python 3.6 as of today
- bash: |
    python -m venv worker_venv
    source worker_venv/bin/activate
    pip install -r requirements.txt
  workingDirectory: $(workingDirectory)
  displayName: 'Install application dependencies'
After removing the virtual environment step, the Function App deployed and ran without any issues. This does not seem like Python best practice; however, it was the only thing we could do to get this deployed correctly via Azure DevOps Pipelines.
Separately, before making this change, we were able to deploy using the Visual Studio code plugin, which indicated to us that this was an environment issue.
Updated docs from Microsoft (1/12/2020)
https://learn.microsoft.com/en-us/azure/azure-functions/functions-how-to-azure-devops?tabs=python
azure-pipelines.yml (our working version on Azure DevOps Pipelines)
- master
variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '<subscription-id>'
  # Function app name
  functionAppName: '<built-function-app-name>'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'
  # Working Directory
  workingDirectory: '$(System.DefaultWorkingDirectory)/__app__'

stages:
  - stage: Build
    displayName: Build stage
    jobs:
      - job: Build
        displayName: Build
        pool:
          vmImage: $(vmImageName)
        steps:
          - bash: |
              if [ -f extensions.csproj ]
              then
                dotnet build extensions.csproj --runtime ubuntu.16.04-x64 --output ./bin
              fi
            workingDirectory: $(workingDirectory)
            displayName: 'Build extensions'
          - task: UsePythonVersion@0
            displayName: 'Use Python 3.7'
            inputs:
              versionSpec: 3.7 # Functions V2 supports Python 3.6 as of today
          - bash: |
              pip install --upgrade pip
              pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
            workingDirectory: $(workingDirectory)
            displayName: 'Install application dependencies'
          - task: ArchiveFiles@2
            displayName: 'Archive files'
            inputs:
              rootFolderOrFile: '$(workingDirectory)'
              includeRootFolder: false
              archiveType: zip
              archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
              replaceExistingArchive: true
          - publish: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
            artifact: drop
  - stage: Deploy
    displayName: Deploy stage
    dependsOn: Build
    condition: succeeded()
    jobs:
      - deployment: Deploy
        displayName: Deploy
        environment: 'production'
        pool:
          vmImage: $(vmImageName)
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureFunctionApp@1
                  displayName: 'Azure functions app deploy'
                  inputs:
                    azureSubscription: '$(azureSubscription)'
                    appType: functionAppLinux
                    appName: $(functionAppName)
                    package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
It definitely needs to be pointed out more clearly that the proper directory for Python packages when deploying Azure Functions is .python_packages/lib/site-packages. I had to go digging through the Azure Functions Core Tools source code to see where they put Python packages.
I also had to dig around in the Function debug console to see where Oryx grabs packages from.
I guess there is a pointer in the version 3.7 YAML file here, but there is no callout of the directory's importance - and does it apply to Python 3.8 Functions?
If I'm not mistaken, this is a requirement for using DevOps to deploy Python Functions (unless you want to install Functions Core Tools as part of your build pipeline!).
You need to handle those 2 imports separately,
import azure.functions as func
import requests
Hopefully I am understanding your problem correctly.
When you install packages on your local machine, libs are installed where Python is (or at least somewhere other than where your actual code is). This means that when you package your code, the libs don't come along with it.
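You can see this for yourself: the interpreter will tell you where third-party packages land, which is next to the Python installation rather than next to your project (a quick stdlib check, not part of the deployment itself):

```python
import sysconfig

# "purelib" is the directory pip installs pure-Python packages into.
# Note it lives under the Python installation, not under your project.
site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)
```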
To get around this, you can use a virtual env. Python provides a venv module (there is also the standalone virtualenv tool), which you can run via:
python -m venv /path/to/my/dir
source /path/to/my/dir/bin/activate
pip install -r requirements.txt
deactivate
I know you mentioned Windows, so I would suggest using WSL with the Ubuntu image (generally a nice tool to have anyway). There is probably a way to get this working natively on Windows, though I don't know it.
Although this is old:
pip<python-version> install --target .python_packages/lib/site-packages -r requirements.txt
For example, if you are using 3.7:
pip3.7 install --target .python_packages/lib/site-packages -r requirements.txt
Works like a charm