I am trying to implement a Python linter using pylint. I am getting the score of each Python file and displaying the suggestions to improve it, but I also want to terminate the GitHub Actions job if my pylint score is below 6.0; currently it is not failing my job.
This is the workflow I have used:
name: Python Linting
on:
  push:
    branches:
      - 'test'
jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pylint
          pip install umsgpack
          pip install cryptography
          pip install pylint-fail-under
      - name: Analysing the code with pylint
        run: find . -name '*.py' -print -exec pylint-fail-under --fail-under=7.0 --enable=W {} \;
My goal is to show that pylint has failed for a file and then terminate the GitHub Actions job. I am not able to implement it this way; can someone help?
pylint-fail-under can be removed, since pylint has had this feature since 2.5.0, which was released a long time ago. You should also be able to use pylint . --recursive=y if your pylint version is 2.13.0 or above (it does the same thing as the find in your script).
Add --recursive option to allow recursive discovery of all modules and packages in subtree. Running pylint with --recursive=y option will check all discovered .py files and packages found inside subtree of directory provided as parameter to pylint.
https://pylint.pycqa.org/en/latest/whatsnew/2/2.13/full.html#what-s-new-in-pylint-2-13-0
The final command could be: pylint --fail-under=7.0 --recursive=y --enable=W .
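As a sketch, the whole lint step in the workflow above could then collapse to a single pylint invocation (assuming pylint >= 2.13.0 is installed in an earlier step):

```yaml
# Sketch: pylint exits non-zero on its own when the score is below --fail-under,
# so no pylint-fail-under wrapper and no find loop are needed.
- name: Analysing the code with pylint
  run: pylint --fail-under=7.0 --recursive=y --enable=W .
```

Because the step's exit code comes straight from pylint, a score below 7.0 fails the job with no extra plumbing.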
You have to make the command in your "Analysing the code with pylint" step return an exit code != 0.
You are using find (https://pubs.opengroup.org/onlinepubs/009695399/utilities/find.html), which completely ignores the exit code of the -exec part and will always return 0 unless there is an error iterating over the files.
You have to combine find with xargs instead; then your exit code will come from your pylint command instead of from find.
find + xargs will go through all the files and return a non-zero status if any of the executed commands returned a non-zero status.
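A minimal demonstration of the difference, with false standing in for a pylint run that fails its threshold (GNU find/xargs assumed):

```shell
# Create a dummy tree with one .py file.
mkdir -p /tmp/exitdemo && touch /tmp/exitdemo/a.py

# find's own exit status ignores what -exec ran; this prints "find exit: 0".
find /tmp/exitdemo -name '*.py' -exec false \;
echo "find exit: $?"

# xargs propagates failure: it exits 123 if any invocation exits 1-125.
find /tmp/exitdemo -name '*.py' -print0 | xargs -0 -n1 false
echo "xargs exit: $?"
```

That 123 (or whatever non-zero status the real command produces) is what GitHub Actions sees for the step, so the job fails as intended.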
If you would like to stop at the first file that does not pass the linting, I would recommend using set -e and writing the script differently:
set -e
shopt -s globstar   # bash: needed so that **/*.py matches recursively
for file in **/*.py; do pylint "$file"; done
I was finally able to fail the build when the pylint score is below 7.0.
This is the workflow I used:
name: Python Linting
on:
  push:
    branches:
      - 'test'
jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pylint
          pip install umsgpack
          pip install cryptography
          pip install pylint-fail-under
      # lists pylint suggestions to improve the score & the pylint score of the file
      - name: code review
        run: find . -name '*.py' -print -exec pylint {} \;
      # fails the build if any file has a pylint score below 7.0
      - name: Analyse code
        run: |
          for file in */*.py; do pylint "$file" --fail-under=7.0; done
Refer: Fail pylint using Github actions workflow if file score is less than 6.0
Related
Here's the GitHub Actions job that is used to build wheels for a Python module with C++ code (bound using the pybind11 module):
jobs:
  build_wheels:
    name: Build wheels on ${{ matrix.os }}
    # if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    steps:
      - uses: actions/checkout@v2
      - name: Install build dependencies
        run: |
          python -m pip install pybind11 cibuildwheel
      - name: Build wheels
        # uses: pypa/cibuildwheel@v2.8.1
        run: |
          cibuildwheel
Related configuration in pyproject.toml:
[tool.cibuildwheel]
# before-build = "pip install pybind11"
before-all = "pip install pybind11"   # <------
test-requires = "pytest"
test-command = "pytest"
And it failed with the error ModuleNotFoundError: No module named 'pybind11', even though pybind11 is set to be installed via the CIBW_BEFORE_ALL option. Can you help me figure out why? Thank you in advance.
I read the documentation on CIBW_BEFORE_ALL; it says the option will "Execute a shell command on the build system before any wheels are built", so I supposed that it should do the job.
I have included links to the job run's output, the full workflow file, and my setup.py file for reference. I am also including commands I use to build and run locally.
Any help would be greatly appreciated.
Link to the job run's output
Link to the full workflow file
Link to setup.py
Commands to build and run locally:
git clone https://github.com/easy-graph/Easy-Graph && cd Easy-Graph && git checkout pybind11
pip install pybind11
python3 setup.py install
This question is solved by Joe Rickerby here:
pybind11 is required by the build, so the right place to specify this requirement is in pyproject.toml, in build-system.requires.
Simply adding the following to pyproject.toml solved this issue:
[build-system]
requires = ["setuptools>=42", "wheel", "Cython", "pybind11"]
build-backend = "setuptools.build_meta"
I have the following GitHub Actions workflow, in which I'm specifying Python 3.10:
name: Unit Tests
runs-on: ubuntu-latest
defaults:
  run:
    shell: bash
    working-directory: app
steps:
  - uses: actions/checkout@v3
  - name: Install poetry
    run: pipx install poetry
  - uses: actions/setup-python@v3
    with:
      python-version: "3.10"
      cache: "poetry"
  - run: poetry install
  - name: Run tests
    run: |
      make mypy
      make test
The pyproject.toml specifies Python 3.10 as well:
[tool.poetry.dependencies]
python = ">=3.10,<3.11"
When the action runs, I get the following:
The currently activated Python version 3.8.10 is not supported by the project
(>=3.10,<3.11).
Trying to find and use a compatible version.
Using python3 (3.10.5)
It would look like it's using 3.10, but py.test is using 3.8.10:
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 --
/home/runner/.cache/pypoetry/virtualenvs/vital-background-pull-yluVa_Vi-py3.10/bin/python
For context, this GitHub Actions workflow was running on 3.8 before. I've updated the Python version in both test.yaml and pyproject.toml, but it's still using 3.8. Anything else I should change to make it use 3.10?
Thank you
The root cause is the section
- uses: actions/setup-python@v3
  with:
    python-version: "3.10"
    cache: "poetry"
with the line caching poetry. Since poetry was previously installed with a pip associated with Python 3.8, the package is retrieved from the cache associated with that Python version. It needs to be re-installed with the new Python version.
You can either remove cache: poetry for a single GitHub Actions run, or remove the cache manually. This will fix your issue.
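As a sketch (keys as in the step above), a one-off run with the cache line commented out forces a fresh install under the new interpreter:

```yaml
- uses: actions/setup-python@v3
  with:
    python-version: "3.10"
    # cache: "poetry"   # disabled for one run so the cache tied to Python 3.8 is not restored
```

After one clean run has repopulated the environment for 3.10, the cache line can be restored.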
Pipx might install poetry using an unexpected version of python. You can specify the python version to use:
pipx install poetry --python $(which python)
# or
pipx install poetry --python python3.10
This is what I do after the setup-python@v3 step.
You could also specify the path to the expected Python version; the paths are listed in the GitHub docs. This would allow you to keep the cache: poetry step in the order you have above.
I had a similar problem and solved it after reading up on how pipx knows which Python to use. I did not use pyenv in my case, since I'm specifying the version in my setup-python@v3 step.
You might also install poetry after the Python setup step to be sure your version is available, supposing you did python-version: "3.10.12" or something. Then what remains is caching, perhaps using the cache action separately from the setup-python step.
In my case, this happens because my pyproject.toml is in a subdirectory of the repository.
The log for my actions/setup-python@v4 action looks like this:
/opt/pipx_bin/poetry env use /opt/hostedtoolcache/Python/3.9.14/x64/bin/python
Poetry could not find a pyproject.toml file in /home/runner/work/PLAT/PLAT or its parents
Warning:
Poetry could not find a pyproject.toml file in /home/runner/work/PLAT/PLAT or its parents
But the action completes successfully. Later, poetry doesn't know what python to use because it was unable to write to its global envs.toml. Eventually I did find that there's an open issue for this in actions/setup-python.
Fix
Cheat
You can do one of two things. The simplest is a cheat:
runs-on: ubuntu-22.04
The ubuntu-22.04 image has Python 3.10 baked in, so you can just forget about switching pythons and that'll be ok for a while.
Actual Fix
The better fix is to add a step after setup-python but before poetry install:
- run: poetry env use ${pythonLocation}/bin/python
  working-directory: wherever/your/pyproject.toml/is
I have a GitHub Actions job with runs-on set to windows-latest and my mypy command:
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.head_ref }}
      - name: Set up Python 3.x
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install mypy
          pip install -r requirements.txt
      - name: Lint with mypy
        run: |
          Get-ChildItem . -Filter "*.py" -Recurse | foreach {mypy $_.FullName `
            --show-error-codes `
            --raise-exceptions
          }
I have errors in the GitHub console for the action run, but it doesn't cause the job to fail. How can I make the job fail on mypy errors?
The mypy documentation doesn't mention anything about specifying failure on errors or about error return codes.
If you want to fail the job or step, you need to return a non-zero exit code.
See here: https://docs.github.com/en/free-pro-team@latest/actions/creating-actions/setting-exit-codes-for-actions
I'm not familiar with what mypy is doing in your example, but if you want to fail the step based on some output, you should save the output to a variable, check it for whatever you consider a failure, and then exit 1; that exit code is returned to GitHub Actions, which will then fail the step.
To speed up part of the code in a Python package, I wrote a Fortran subroutine. It works fine on my local system. Here is the package structure:
p_name/
    setup.py
    p_name/
        __init__.py
        fortran_sc_folder/
            rsp.f95
Here is (part of) the setup.py:
from numpy.distutils.core import setup, Extension

ext1 = Extension(name='p_name.fortran_sc_folder.rsp',
                 sources=['p_name/fortran_sc_folder/rsp.f95'])
setup(
    ...
    ext_modules=[ext1],
    ...
)
Inside the program, I use the following to access the module:
from .fortran_sc_folder import rsp
It installs and works without raising any error. However, when I push the changes, it does not pass the GitHub Actions checks. The error is:
E ModuleNotFoundError: No module named 'p_name.fortran_cs_folder.rsp'
Do you know how to fix this, and is there a better way to use Fortran code inside a Python package?
Update:
Here is the GitHub action workflow:
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: Python package
on:
push:
branches: [ master, develop ]
pull_request:
branches: [ master, develop ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, 3.8]
steps:
- uses: actions/checkout#v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python#v2
with:
python-version: ${{ matrix.python-version }}
- name: Setup conda
uses: s-weigand/setup-conda#v1
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
conda install -c conda-forge cartopy
# - name: Lint with flake8
# run: |
# # stop the build if there are Python syntax errors or undefined names
# flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
# flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest
After @RossMacArthur's comment/question I realized that the package on the GitHub Actions runner is the code base, not the compiled version. Since setup from the NumPy library (from numpy.distutils.core import setup) compiles the Fortran code during the installation process, we need to install the package in the GitHub workflow. As a result, I added the following line at the end of the install-dependencies section:
pip install -e ../p_name
That resolved the issue.
I am trying to install the Python package "M2Crypto" via requirements.txt and I receive the following error message:
/usr/include/openssl/opensslconf.h:36: Error: CPP #error ""This openssl-devel package does not work your architecture?"". Use the -cpperraswarn option to continue swig processing.
error: command 'swig' failed with exit status 1
I tried passing
option_name: SWIG_FEATURES
value: "-cpperraswarn -includeall -I/usr/include/openssl"
But the error persists. Any idea?
The following config file (placed in .ebextensions) works for me:
packages:
  yum:
    swig: []
container_commands:
  01_m2crypto:
    command: 'SWIG_FEATURES="-cpperraswarn -includeall -D`uname -m` -I/usr/include/openssl" pip install M2Crypto==0.21.1'
Make sure you don't specify M2Crypto in your requirements.txt, though; Elastic Beanstalk will try to install all dependencies before running the container commands.
I have found a solution that gets M2Crypto installed on Beanstalk, but it is a bit of a hack, and it is your responsibility to make sure it is suitable for a production environment. I dropped M2Crypto from my project because this issue is ridiculous; try pycrypto if you can.
Based on (I only added python setup.py test):
#!/bin/bash
python -c "import M2Crypto" 2> /dev/null
if [ "$?" == 1 ]
then
    cd /tmp/
    pip install -d . --use-mirrors M2Crypto==0.21.1
    tar xvfz M2Crypto-0.21.1.tar.gz
    cd M2Crypto-0.21.1
    ./fedora_setup.sh build
    ./fedora_setup.sh install
    python setup.py test
fi
In the environment config file
commands:
  m2crypto:
    command: scripts/m2crypto.sh
    ignoreErrors: True
    test: echo '! python -c "import M2Crypto"' | bash
ignoreErrors is NOT a good idea, but I just used it to test whether the package actually gets installed, and it seems like it does.
Again, this might seem to get the package installed, but I am not sure, because removing ignoreErrors causes a failure. Therefore, I won't mark this as the accepted answer, but it was way too much for a comment.