I just committed a seemingly uninteresting change, updating the release notes and the PyPI setup. However, the Travis CI build now fails when running tox with py26, py33 and pypy:
https://travis-ci.org/Turbo87/aerofiles
1.13s$ tox -e $TOX_ENV -- --cov aerofiles --cov-report term-missing
py26 create: /home/travis/build/Turbo87/aerofiles/.tox/py26
ERROR: InterpreterNotFound: python2.6
I didn't change anything in the .travis.yml, and tox is pinned to version 1.7.2:
language: python
python: 2.7
sudo: false
env:
- TOX_ENV=py26
- TOX_ENV=py27
- TOX_ENV=py33
- TOX_ENV=py34
- TOX_ENV=pypy
install:
# Install tox and flake8 style checker
- pip install tox==1.7.2 flake8==2.1.0
script:
# Run the library through flake8
- flake8 --exclude=".git,docs" --ignore=E501 .
# Run the unit test suite
- tox -e $TOX_ENV -- --cov aerofiles --cov-report term-missing
It would be great if someone could help out. I am quite new to Travis CI (and tox), and it's quite a black box to me at the moment.
A few weeks ago I was forced to change all my .travis.yml files because of exactly this problem. See my commit. Instead of
env:
- TOXENV=py27
- TOXENV=py34
write
matrix:
include:
- python: "2.7"
env: TOXENV=py27
- python: "3.4"
env: TOXENV=py34
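Applied to the failing build above, the full matrix would look something like this sketch (note that tox's -e flag then reads $TOXENV rather than $TOX_ENV; the Python-to-tox-environment pairings are assumptions based on the env list in the question):

```yaml
language: python
sudo: false
matrix:
  include:
    - python: "2.6"
      env: TOXENV=py26
    - python: "2.7"
      env: TOXENV=py27
    - python: "3.3"
      env: TOXENV=py33
    - python: "3.4"
      env: TOXENV=py34
    - python: "pypy"
      env: TOXENV=pypy
install:
  - pip install tox==1.7.2 flake8==2.1.0
script:
  - flake8 --exclude=".git,docs" --ignore=E501 .
  - tox -e $TOXENV -- --cov aerofiles --cov-report term-missing
```

Each matrix entry then provisions the matching interpreter before tox runs, instead of every job running on the single 2.7 image.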
Related
I have created a Python application and I would like to deploy it via GitLab. To achieve this, I created the following gitlab-ci.yml file:
# This file is a template, and might need editing before it works on your project.
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
image: "python:3.10"
#commands to run in the Docker container before starting each job.
before_script:
- python --version
- pip install -r requirements.txt
# different stages in the pipeline
stages:
- Static Analysis
- Test
- Deploy
#defines the job in Static Analysis
pylint:
stage: Static Analysis
script:
- pylint -d C0301 src/*.py
#tests the code
pytest:
stage: Test
script:
- cd test/;pytest -v
#deploy
deploy:
stage: Deploy
script:
- echo "test ms deploy"
- cd src/
- pyinstaller -F gui.py --noconsole
tags:
- macos
It runs fine through the Static Analysis and Test phases, but in Deploy I get the following error:
OSError: Python library not found: .Python, libpython3.10.dylib, Python3, Python, libpython3.10m.dylib
This means your Python installation does not come with proper shared library files.
This usually happens due to missing development package, or unsuitable build parameters of the Python installation.
* On Debian/Ubuntu, you need to install Python development packages:
* apt-get install python3-dev
* apt-get install python-dev
* If you are building Python by yourself, rebuild with `--enable-shared` (or, `--enable-framework` on macOS).
As I am working on a MacBook, I tried rebuilding with env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.10.5, but then I get an error that Python 3.10.5 already exists.
I tried some other things, but I am a bit stuck. Any advice or suggestions?
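One way around the "already exists" error, as a sketch: pyenv refuses to overwrite an existing build unless forced, so the framework-enabled rebuild could look like this (the version pin is from the question):

```shell
# Rebuild 3.10.5 with framework support, replacing the existing non-framework build
env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install --force 3.10.5
pyenv shell 3.10.5

# Prints the framework name on a framework build, empty otherwise
python -c "import sysconfig; print(sysconfig.get_config_var('PYTHONFRAMEWORK'))"
```

If the runner's Python comes from the python:3.10 Docker image rather than pyenv, the equivalent would be a base image whose interpreter was built with --enable-shared.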
I have the following Github action, in which I'm specifying Python 3.10:
name: Unit Tests
runs-on: ubuntu-latest
defaults:
run:
shell: bash
working-directory: app
steps:
- uses: actions/checkout@v3
- name: Install poetry
run: pipx install poetry
- uses: actions/setup-python@v3
with:
python-version: "3.10"
cache: "poetry"
- run: poetry install
- name: Run tests
run: |
make mypy
make test
The pyproject.toml specifies Python 3.10 as well:
[tool.poetry.dependencies]
python = ">=3.10,<3.11"
When the action runs, I get the following:
The currently activated Python version 3.8.10 is not supported by the project
(>=3.10,<3.11).
Trying to find and use a compatible version.
Using python3 (3.10.5)
It looks like it's using 3.10, but py.test is using 3.8.10:
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 --
/home/runner/.cache/pypoetry/virtualenvs/vital-background-pull-yluVa_Vi-py3.10/bin/python
For context, this GitHub action was running on 3.8 before. I've updated the Python version in both test.yaml and pyproject.toml, but it's still using 3.8. Anything else I should change to make it use 3.10?
Thank you
The root cause is the section
- uses: actions/setup-python@v3
with:
python-version: "3.10"
cache: "poetry"
with the line caching poetry. Since poetry was previously installed with a pip associated with Python 3.8, the package will be retrieved from the cache associated with that Python version. It needs to be re-installed with the new Python version.
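The mismatch is visible in the pytest header itself: poetry derives the -py3.10 suffix of the virtualenv directory name from pyproject.toml, while the interpreter cached inside that environment is still 3.8.10. A small sketch of that diagnostic (the path is copied from the question's output):

```python
import sys

# The version the running interpreter actually is
actual = f"{sys.version_info.major}.{sys.version_info.minor}"

# The version the virtualenv's directory name claims (path taken from the question)
venv_path = "/home/runner/.cache/pypoetry/virtualenvs/vital-background-pull-yluVa_Vi-py3.10/bin/python"
claimed = venv_path.rsplit("-py", 1)[1].split("/")[0]

print(claimed)  # suffix poetry derived from pyproject.toml: "3.10"
print(actual)   # whatever interpreter is really running
```

Whenever the two disagree, the environment was created with a different interpreter than the one pyproject.toml asks for.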
You can either remove the cache: poetry from a single GH actions execution, or remove the cache manually. This will fix your issue.
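Concretely, disabling the cache for a single run could look like this sketch of the step from the question, with the cache line commented out:

```yaml
- uses: actions/setup-python@v3
  with:
    python-version: "3.10"
    # cache: "poetry"   # disabled for one run so poetry is reinstalled under 3.10
```

After that run has repopulated everything under 3.10, the cache line can be restored.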
Pipx might install poetry using an unexpected version of python. You can specify the python version to use:
pipx install poetry --python $(which python)
# or
pipx install poetry --python python3.10
This is what I do after the setup-python@v3 step.
You could also specify the path to the expected Python version; those paths are listed in the GitHub docs. This would allow you to keep the cache: poetry step in the order you have above.
I had a similar problem and solved it after reading up on how pipx decides which Python to use. I did not use pyenv in my case, since I'm specifying the version in my setup-python@v3 step.
You might also install poetry after the Python setup step to be sure your version is available, supposing you did python-version: "3.10.12" or something similar. Then what remains is caching, perhaps using the cache action separately from the setup-python step.
In my case, this happens because my pyproject.toml is in a subdirectory of the repository.
The log for my actions/setup-python@v4 action looks like this:
/opt/pipx_bin/poetry env use /opt/hostedtoolcache/Python/3.9.14/x64/bin/python
Poetry could not find a pyproject.toml file in /home/runner/work/PLAT/PLAT or its parents
Warning:
Poetry could not find a pyproject.toml file in /home/runner/work/PLAT/PLAT or its parents
But the action completes successfully. Later, poetry doesn't know which Python to use because it was unable to write to its global envs.toml. Eventually I found that there's an open issue for this in actions/setup-python.
Fix
Cheat
You can do one of two things. The simplest is a cheat:
runs-on: ubuntu-22.04
The ubuntu-22.04 image has Python 3.10 baked in, so you can just forget about switching pythons and that'll be ok for a while.
Actual Fix
The better fix is to add a step after setup-python but before poetry install:
- run: poetry env use ${pythonLocation}/bin/python
working-directory: wherever/your/pyproject.toml/is
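Putting the fix together with the earlier steps, the overall order might look like this sketch (working-directory: app is taken from the question; adjust it to wherever your pyproject.toml lives):

```yaml
steps:
  - uses: actions/checkout@v3
  - uses: actions/setup-python@v3
    with:
      python-version: "3.10"
  # Install poetry with the interpreter setup-python just provisioned
  - run: pipx install poetry --python ${pythonLocation}/bin/python
  # Point poetry at that same interpreter before creating the virtualenv
  - run: poetry env use ${pythonLocation}/bin/python
    working-directory: app
  - run: poetry install
    working-directory: app
```

${pythonLocation} is the environment variable setup-python exports with the install path of the selected interpreter.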
I'm trying to run Appium tests written in python 3 on AWS Device Farm.
As stated in the documentation,
Python 2.7 is supported in both the standard environment and using Custom Mode. It is the default in both when specifying Python.
Python 3 is only supported in Custom Mode. To choose Python 3 as your python version, change the test spec to set the PYTHON_VERSION to 3, as shown here:
phases:
install:
commands:
# ...
- export PYTHON_VERSION=3
- export APPIUM_VERSION=1.14.2
# Activate the Virtual Environment that Device Farm sets up for Python 3, then use Pip to install required packages.
- cd $DEVICEFARM_TEST_PACKAGE_PATH
- . bin/activate
- pip install -r requirements.txt
# ...
I did run tests successfully on Device Farm in the past, using a python 3 custom environment, with this spec file (I include the install phase only):
phases:
install:
commands:
# Device Farm supports two major versions of Python, each with one minor version: 2 (minor version 2.7) and 3 (minor version 3.7.4).
# The default Python version is 2, but you can switch to 3 by setting the following variable to 3:
- export PYTHON_VERSION=3
# This command will install your dependencies and verify that they're using the proper versions that you specified in your requirements.txt file. Because Device Farm preconfigures your environment within a
# Python Virtual Environment in its setup phase, you can use pip to install any Python packages just as you would locally.
- cd $DEVICEFARM_TEST_PACKAGE_PATH
- . bin/activate
- pip install -r requirements.txt
# ...
Now, when I run the tests, I get this log and then the test crashes due to incompatible code.
[DEVICEFARM] ########### Entering phase test ###########
[DeviceFarm] echo "Navigate to test package directory"
Navigate to test package directory
[DeviceFarm] cd $DEVICEFARM_TEST_PACKAGE_PATH
[DeviceFarm] echo "Start Appium Python test"
Start Appium Python test
[DeviceFarm] py.test tests/ --junit-xml $DEVICEFARM_LOG_DIR/junitreport.xml
============================= test session starts ==============================
platform linux2 -- Python 2.7.6, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /tmp/scratchrPuGRa.scratch/test-packageM5E89M/tests, inifile:
collected 0 items / 3 errors
==================================== ERRORS ====================================
____________________ ERROR collecting test_home_buttons.py _____________________
/usr/local/lib/python2.7/dist-packages/_pytest/python.py:610: in _importtestmodule
mod = self.fspath.pyimport(ensuresyspath=importmode)
Is Python 3.x no longer supported, or have there been undocumented changes?
Is there a new way to run tests in a python 3 environment?
The documented way is still correct.
I found there was an error during the installation of the pytest package in the virtual environment, which made the py.test command fall back to the default environment.
In my case the issue was resolved by installing an older version of pytest and pinning it in requirements.txt along with the other packages:
pip install pytest==6.2.4
I made this .gitlab-ci.yml file, but I can't find the HTML report in my repo at the end.
Can you tell me why?
image: python
services:
- selenium/standalone-chrome:latest
variables:
selenium_remote_url: "http://selenium__standalone-chrome:4444/wd/hub"
cucumber:
script:
- python --version
- pwd
- ls
- pip install pytest
- pip install pytest_bdd
- pip install selenium
- pip install chromedriver
- pip install pytest-html
- cd test_pytest
- ls
- python -m pytest step_defs/test_web_steps.py --html=report.html
Thanks,
Hadrien
You can actually generate test reports in GitLab. To do this, generate an XML report from pytest that will be stored in GitLab as an artifact. In your .gitlab-ci.yml file:
image: python:3.6
stages:
- test
testing:
stage: test
when: manual
script:
...
- pytest --junitxml=report.xml
artifacts:
when: always
reports:
junit: report.xml
Then you can download this report or visualize it under the Tests tab of your pipeline.
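Coming back to the original question about report.html: GitLab only keeps files that a job explicitly declares as artifacts; anything else is discarded when the runner finishes. A sketch combining the HTML and JUnit reports (paths assume the cd test_pytest from the question):

```yaml
cucumber:
  script:
    - cd test_pytest
    - python -m pytest step_defs/test_web_steps.py --html=report.html --junitxml=report.xml
  artifacts:
    when: always
    paths:
      - test_pytest/report.html
    reports:
      junit: test_pytest/report.xml
```

The HTML file is then downloadable from the job's artifacts, while the JUnit XML feeds the pipeline's Tests tab.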
To speed up a part of code in a Python package, I wrote a Fortran subroutine. It works ok on my local system. Here is the package structure.
p_name/
setup.py
p_name/
__init__.py
fortran_sc_folder/
rsp.f95
Here is the (part of) setup.py:
from numpy.distutils.core import setup, Extension
ext1 = Extension(name='p_name.fortran_sc_folder.rsp', sources=['p_name/fortran_sc_folder/rsp.f95'])
setup(
...
ext_modules = [ext1]
...
)
Inside the package, I use the following to access the module:
from .fortran_sc_folder import rsp
It installs and works locally without raising any error. However, when I push the changes, it fails in GitHub Actions. The error is:
E ModuleNotFoundError: No module named 'p_name.fortran_cs_folder.rsp'
Do you know how to fix this, and is there a better way to use Fortran code inside a Python package?
Update:
Here is the GitHub action workflow:
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Python package
on:
push:
branches: [ master, develop ]
pull_request:
branches: [ master, develop ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Setup conda
uses: s-weigand/setup-conda@v1
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
conda install -c conda-forge cartopy
# - name: Lint with flake8
# run: |
# # stop the build if there are Python syntax errors or undefined names
# flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
# flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest
After @RossMacArthur's comment/question I realized that the package checked out in the GitHub Actions run is the code base, not the compiled version. Since setup from numpy.distutils (from numpy.distutils.core import setup) compiles the Fortran code during installation, the package needs to be installed in the GitHub workflow itself. As a result, I added the following line at the end of the "Install dependencies" section:
pip install -e ../p_name
That resolved the issue.
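To reproduce this class of failure locally before pushing, the extension can be built on its own with f2py, or the package installed the same way the workflow now does (paths and module names are taken from the question; a Fortran compiler such as gfortran must be on PATH):

```shell
# Compile just the Fortran source into an importable extension module
python -m numpy.f2py -c p_name/fortran_sc_folder/rsp.f95 -m rsp

# Or, from the repository root, run an editable install of the whole package,
# which triggers the same Fortran compilation the CI job now performs
pip install -e .
```

If the f2py step succeeds but the CI import still fails, the problem is in packaging rather than in the Fortran code itself.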