Share the same workspace across different jobs - python

Can I share the same workspace used by one job with other jobs?
In particular, I want to share the software installed in one job with later jobs. According to the documentation,
When you run a pipeline on a self-hosted agent, by default, none of the sub-directories are cleaned in between two consecutive runs.
However, the pipeline below fails in job J2 because the sphinx installed in job J1 is not available in J2.
jobs:
- job: 'J1'
  pool:
    vmImage: 'Ubuntu-16.04'
  strategy:
    matrix:
      Python37:
        python.version: '3.7'
    maxParallel: 3
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(python.version)'
      architecture: 'x64'
  - script: python -m pip install --upgrade pip
    displayName: 'Install dependencies'
  - script: pip install --upgrade pip
    displayName: 'Update pip'
  - script: |
      echo "Publishing document for development version $(Build.BuildId)"
      pip install -U sphinx
    displayName: 'TEST J1'
  - script: |
      echo "TEST SPHINX"
      sphinx-build --help
    displayName: 'TEST SPHINX'
- job: 'J2'
  dependsOn: 'J1'
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.x'
      architecture: 'x64'
  - script: |
      echo "TEST SPHINX"
      sphinx-build --help
    displayName: 'TEST SPHINX'

This error is not related to the workspace.
Yes, the workspace can be shared across jobs, and in your pipeline sphinx does end up in the workspace. However, it is not installed into a location that is on PATH, so when you try to execute it later it fails because of the wrong PATH value.
On the Ubuntu agent, pip installs with --user by default, i.e., into the agent user's home directory. Unless you change that, the executables end up in ~/.local/bin, which is not on PATH.
To solve this, you need to make sure that the command you are using can be found on PATH. If the command is not on your PATH, either add its directory to PATH or call it by its absolute path.
So you should use export to set the PATH value manually:
export PATH="xxx"
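For example, a minimal sketch of a step for job J2, assuming pip placed the sphinx console scripts under ~/.local/bin (the default location for --user installs on the hosted agent):
  - script: |
      # prepend pip's --user bin directory so sphinx-build can be found
      export PATH="$HOME/.local/bin:$PATH"
      sphinx-build --help
    displayName: 'TEST SPHINX'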
You can check this blog for more details.

Related

Pyinstaller not working in Gitlab CI file

I have created a Python application and I would like to deploy it via GitLab. To achieve this, I created the following gitlab-ci.yml file:
# This file is a template, and might need editing before it works on your project.
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
image: "python:3.10"

# commands to run in the Docker container before starting each job.
before_script:
  - python --version
  - pip install -r requirements.txt

# different stages in the pipeline
stages:
  - Static Analysis
  - Test
  - Deploy

# defines the job in Static Analysis
pylint:
  stage: Static Analysis
  script:
    - pylint -d C0301 src/*.py

# tests the code
pytest:
  stage: Test
  script:
    - cd test/; pytest -v

# deploy
deploy:
  stage: Deploy
  script:
    - echo "test ms deploy"
    - cd src/
    - pyinstaller -F gui.py --noconsole
  tags:
    - macos
It runs fine through the Static Analysis and Test phases, but in Deploy I get the following error:
OSError: Python library not found: .Python, libpython3.10.dylib, Python3, Python, libpython3.10m.dylib
This means your Python installation does not come with proper shared library files.
This usually happens due to missing development package, or unsuitable build parameters of the Python installation.
* On Debian/Ubuntu, you need to install Python development packages:
* apt-get install python3-dev
* apt-get install python-dev
* If you are building Python by yourself, rebuild with `--enable-shared` (or, `--enable-framework` on macOS).
As I am working on a MacBook, I tried adding env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.10.5, but then I get an error that Python 3.10.5 already exists.
I tried some other things, but I am a bit stuck. Any advice or suggestions?
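For what it's worth, a minimal sketch of the rebuild the error message calls for, assuming pyenv manages the interpreter used by the GitLab runner on the Mac; the --force flag is an assumption here and tells pyenv to overwrite the existing 3.10.5 build instead of refusing:
env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install --force 3.10.5
pyenv local 3.10.5    # select the rebuilt interpreter for this project
pip install pyinstaller -r requirements.txt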

Pylint fail under does not fail my Github action job

I am trying to implement a Python linter using pylint. I get the score of each Python file along with suggestions to improve the score, but I also want to terminate the GitHub Actions job if the pylint score is below 6.0, and currently it is not failing my job.
This is the workflow I have used:
name: Python Linting
on:
  push:
    branches:
      - 'test'
jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pylint
          pip install umsgpack
          pip install cryptography
          pip install pylint-fail-under
      - name: Analysing the code with pylint
        run: find . -name '*.py' -print -exec pylint-fail-under --fail-under=7.0 --enable=W {} \;
My goal is to show that pylint has failed for a file and then terminate the GitHub Actions job. I am not able to implement this; can someone help?
pylint-fail-under can be removed, as pylint has had this feature since 2.5.0, which was released a long time ago. You should also be able to use pylint . --recursive=y if your pylint version is above 2.13.0 (it does the same thing as the find in your script).
Add --recursive option to allow recursive discovery of all modules and packages in subtree. Running pylint with --recursive=y option will check all discovered .py files and packages found inside subtree of directory provided as parameter to pylint.
https://pylint.pycqa.org/en/latest/whatsnew/2/2.13/full.html#what-s-new-in-pylint-2-13-0
The final command could be: pylint . --fail-under=7.0 --recursive=y --enable=W
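As a workflow step, a minimal sketch of that command (assuming a pylint version of at least 2.13 is installed by the earlier install step):
      - name: Analysing the code with pylint
        run: pylint . --recursive=y --fail-under=7.0 --enable=W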
You have to make the command in the "Analysing the code with pylint" step return an exit code != 0.
You are using find (https://pubs.opengroup.org/onlinepubs/009695399/utilities/find.html), which completely ignores the exit code of the -exec part and will always return 0, unless there is an error while iterating over files.
You have to combine find with xargs instead; then your exit code will come from the pylint command instead of from find.
find + xargs will go through all files and return a nonzero status if any of the executed commands returned a nonzero status.
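A minimal sketch of such a step; the -n1 flag is an assumption that runs pylint once per file, and xargs exits with a non-zero status if any invocation fails:
      - name: Analysing the code with pylint
        run: find . -name '*.py' -print0 | xargs -0 -n1 pylint --fail-under=7.0 --enable=W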
If you would like to stop on the first file that does not pass linting, I would recommend using set -e and writing the script differently:
set -e
shopt -s globstar   # needed in bash so ** matches files in subdirectories
for file in **/*.py; do pylint "$file"; done
I have finally been able to fail the build when the pylint score is below 7.0.
This is the workflow I have used:
name: Python Linting
on:
  push:
    branches:
      - 'test'
jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pylint
          pip install umsgpack
          pip install cryptography
          pip install pylint-fail-under
      # lists pylint suggestions to improve the score & the pylint score of each file
      - name: code review
        run: find . -name '*.py' -print -exec pylint {} \;
      # fails the build if any file has a pylint score below 7.0
      - name: Analyse code
        run: |
          for file in */*.py; do pylint "$file" --fail-under=7.0; done
Refer to: Fail pylint using GitHub Actions workflow if file score is less than 6.0

How to fail a Python CI GitHub Action job if mypy returns errors?

I have a GitHub Actions job with runs-on set to windows-latest and the following mypy command.
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.head_ref }}
      - name: Set up Python 3.x
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install mypy
          pip install -r requirements.txt
      - name: Lint with mypy
        run: |
          Get-ChildItem . -Filter "*.py" -Recurse | foreach {mypy $_.FullName `
            --show-error-codes `
            --raise-exceptions
          }
I have errors in the GitHub console for the action run, but it doesn't cause the job to fail. How can I make the job fail on mypy errors?
The mypy documentation doesn't mention anything about specifying failure on errors, or specifying error return codes.
If you want to fail the job or step, then you need to return a non-zero exit code.
See here: https://docs.github.com/en/free-pro-team@latest/actions/creating-actions/setting-exit-codes-for-actions
I'm not familiar with what mypy is doing in your example, but if you want it to fail the step based on some output, then you should probably save the output to a variable, check it for whatever you consider a 'failure', and then exit 1 so that GitHub Actions fails that step.
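For example, a minimal sketch of that idea, assuming mypy is run once over the repository root so its own exit code can be forwarded to the runner (the per-file foreach loop above swallows it):
      - name: Lint with mypy
        run: |
          mypy . --show-error-codes
          if ($LASTEXITCODE -ne 0) { exit $LASTEXITCODE }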

How to make a continuous delivery of a python function app deployed in Azure?

For the first time I deployed a Python function app to Azure using a deployment pipeline:
https://learn.microsoft.com/bs-latn-ba/azure/azure-functions/functions-how-to-azure-devops
The package is deployed to Azure using Kudu Zip deploy.
My HTTP-triggered function runs wonderfully locally (on Windows), but I get a 500 internal error on Azure because it does not find the module requests.
Exception: ModuleNotFoundError: No module named 'requests'
imports of __init__.py:
import logging, requests, os
import azure.functions as func
If I remove the 'requests' dependency the function works on Azure (status 200).
The requests library is listed in requirements.txt and is copied to .venv36/lib/site-packages/requests by the build pipeline.
So I am wondering if the virtual environment .venv36 that is built in the package is used by the function deployed in Azure. There is no indication about how to activate virtual environments in Azure.
If you name your virtual env worker_venv as named in the documentation you linked, it should work (assuming you are using a Linux environment for your pipeline).
However, the Python Azure Functions documentation is to be updated very soon, and the recommended way would be to not deploy the entire virtual environment from your deployment pipeline.
Instead, you'd want to install your packages in .python_packages/lib/site-packages.
You could do:
pip3.6 install --target .python_packages/lib/site-packages -r requirements.txt
Instead of:
python3.6 -m venv worker_venv
source worker_venv/bin/activate
pip3.6 install setuptools
pip3.6 install -r requirements.txt
And it should work fine.
We are also having the same issue using the newest version of the YAML pipeline template:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6 # Functions V2 supports Python 3.6 as of today
- bash: |
    python -m venv worker_venv
    source worker_venv/bin/activate
    pip install -r requirements.txt
  workingDirectory: $(workingDirectory)
  displayName: 'Install application dependencies'
After removing the virtual environment step, the Function App deployed and ran without any issues. This does not seem to be Python best practice; however, it was the only thing we could do to get this deployed correctly through Azure DevOps Pipelines.
Separately, before making this change, we were able to deploy using the Visual Studio Code plugin, which indicated to us that this was an environment issue.
Updated docs from Microsoft (1/12/2020)
https://learn.microsoft.com/en-us/azure/azure-functions/functions-how-to-azure-devops?tabs=python
azure-pipelines.yml (our working version on Azure DevOps Pipelines)
trigger:
- master

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '<subscription-id>'
  # Function app name
  functionAppName: '<built-function-app-name>'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'
  # Working Directory
  workingDirectory: '$(System.DefaultWorkingDirectory)/__app__'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - bash: |
        if [ -f extensions.csproj ]
        then
          dotnet build extensions.csproj --runtime ubuntu.16.04-x64 --output ./bin
        fi
      workingDirectory: $(workingDirectory)
      displayName: 'Build extensions'
    - task: UsePythonVersion@0
      displayName: 'Use Python 3.7'
      inputs:
        versionSpec: 3.7 # Functions V2 supports Python 3.6 as of today
    - bash: |
        pip install --upgrade pip
        pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt
      workingDirectory: $(workingDirectory)
      displayName: 'Install application dependencies'
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(workingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - publish: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploy
    displayName: Deploy
    environment: 'production'
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureFunctionApp@1
            displayName: 'Azure functions app deploy'
            inputs:
              azureSubscription: '$(azureSubscription)'
              appType: functionAppLinux
              appName: $(functionAppName)
              package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
It definitely needs to be more clearly pointed out that the proper directory for Python packages when deploying Azure Functions is .python_packages/lib/site-packages. I had to go digging through the Azure Function Core Tools source code to see where they put Python packages.
Also had to dig around in the Function debug console to see where Oryx grabs packages from.
I guess there is a pointer in the version 3.7 YAML file here, but there is no callout of the directory's importance, and it is not clear whether it applies to Python 3.8 Functions.
If I'm not mistaken, this is a requirement to use DevOps to deploy Python Functions (unless you want to install Function Core Tools as part of your build pipeline!).
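For illustration, a sketch of the folder layout the archived package ends up with when dependencies are installed with --target as above (the function folder name MyHttpTrigger is hypothetical):
host.json
requirements.txt
MyHttpTrigger/
    __init__.py
    function.json
.python_packages/
    lib/
        site-packages/
            requests/
            ...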
You need to handle those two imports separately:
import azure.functions as func
import requests
Hopefully I am understanding your problem correctly.
When you install on your local machine, libs are installed where Python is (or at least somewhere other than where your actual code is). This means that when you package your code, you aren't actually keeping the libs together with it.
To get around this, you can use a virtual env. Python provides a venv module (there is also a standard Linux virtualenv tool), which you can run via:
python -m venv /path/to/my/dir
source /path/to/my/dir/bin/activate
cd /path/to/my/dir
pip install -r requirements.txt
deactivate
I know you mentioned Windows, so I would suggest using WSL and the Ubuntu image (generally a nice tool to have anyway). There is probably a way to get this working on Windows otherwise, though I don't know it.
Although it's old:
pip<python-version> install --target .python_packages/lib/site-packages -r requirements.txt
For example, if you are using 3.7:
pip3.7 install --target .python_packages/lib/site-packages -r requirements.txt
Works like a charm

Using ansible core's pip and virtualenv on Centos or Redhat

I have created a playbook that is supposed to run a Django website for local developers. These are the organizational constraints:
Currently the VM is Centos - http://puppet-vagrant-boxes.puppetlabs.com/centos-64-x64-vbox4210.box
The machine is being provisioned with ansible via Vagrant.
The developer will need python2.7.
I attempted to follow the Software Collections (SCL) route by:
adding an SCL repo to the box
installing python27 via yum
using the shell module to enable python27
creating a virtualenv inside that shell
The newly created virtualenv and python binaries give an error after provisioning. Here is the pertinent part of my playbook:
main.yml
- hosts: app
  sudo: yes
  sudo_user: root
  gather_facts: true
  roles:
    # insert other roles
  tasks:
    - name: Add SCL Repos
      command: sh -c 'wget -qO- http://people.redhat.com/bkabrda/scl_python27.repo >> /etc/yum.repos.d/scl.repo'
    - name: Install python dependencies
      yum: pkg={{ item }} state=present
      with_items:
        - "python-devel"
        - "scl-utils"
        - "python27"
    - name: Manually create virtual .env and install requirements
      shell: "source /opt/rh/python27/enable && virtualenv /vagrant/.env && source /vagrant/.env/bin/activate && pip install -r /vagrant/requirements/local.txt"
Ansible - stdout
Here is the tail end of my ansible's stdout message.
pip can't proceed with requirement 'pytz (from -r /vagrant/requirements/base.txt (line 3))' due to a pre-existing build directory.
  location: /vagrant/.env/build/pytz
This is likely due to a previous installation that failed.
pip is being responsible and not assuming it can delete this.
Please delete it and try again.

Cleaning up...
Post Mortem Test via SSH
In an attempt to glean more information about the problem, I SSHed into the box to see what feedback I could get.
$ vagrant ssh
Last login: Fri Feb 12 22:17:03 2016 from 10.0.2.2
Welcome to your Vagrant-built virtual machine.
[vagrant@localhost ~]$ cd /vagrant/
[vagrant@localhost vagrant]$ source .env/bin/activate
(.env)[vagrant@localhost vagrant]$ pip install -r requirements/local.txt
/vagrant/.env/bin/python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
In general, the approach feels like a square peg in a round hole. I'd love to hear some feedback from the community about the appropriate way to run a CentOS box locally using a python27 virtualenv provisioned through Ansible.
You could always use Ansible's environment directive to manually set the appropriate variables so that the correct executables get called. Here's an example:
environment:
  PATH: "/opt/rh/rh-python34/root/usr/bin:{{ ansible_env.PATH }}"
  LD_LIBRARY_PATH: "/opt/rh/rh-python34/root/usr/lib64"
  MANPATH: "/opt/rh/rh-python34/root/usr/share/man"
  XDG_DATA_DIRS: "/opt/rh/rh-python34/root/usr/share"
  PKG_CONFIG_PATH: "/opt/rh/rh-python34/root/usr/lib64/pkgconfig"
pip: "virtualenv={{root_dir}}/{{venvs_dir}}/{{app_name}}_{{spec}} requirements={{root_dir}}/{{spec}}_sites/{{app_name}}/requirements.txt"
In the end, I had to rebuild Python from source to create a python2.7 virtual environment. I used an open-source playbook:
https://github.com/Ken24/ansible-role-python
main.yml
- hosts: app
  roles:
    - { role: Ken24.python }
  tasks:
    - name: Install virtualenv
      command: "/usr/local/bin/pip install virtualenv"
    - name: Create virtualenv and install requirements
      pip: requirements=/vagrant/requirements/local.txt virtualenv=/vagrant/cfgov-refresh virtualenv_command=/usr/local/bin/virtualenv
