I apologize in advance. I have a task to create a CI pipeline in GitLab for Python projects, with the results reported to SonarQube. I found this gitlab-ci.yml file:
image: image-registry/gitlab/python

before_script:
  - cd ..
  - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab/python-education/junior.git

stages:
  - PyLint

pylint:
  stage: PyLint
  only:
    - merge_requests
  script:
    - cp -R ${CI_PROJECT_NAME}/* junior/project
    - cd junior && python3 run.py --monorepo
Is it possible to add something to the script so that the results are sent to SonarQube?
Yes, third-party issues are supported by SonarQube. For Pylint, you can set sonar.python.pylint.reportPath in your sonar-project.properties file to the path of the Pylint report(s). You must run pylint with the --output-format=parseable argument.
When you run the sonar scanner, it will collect the report(s) and send them to SonarQube.
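A minimal sketch of the two pieces, assuming a generic layout (the project key, source directory, and report file name are placeholders, not from the question):

# sonar-project.properties
sonar.projectKey=my-project
sonar.sources=.
sonar.python.pylint.reportPath=pylint-report.txt

# added to the job's script section
script:
  - pylint --output-format=parseable src > pylint-report.txt || true
  - sonar-scanner

The || true keeps the job alive when Pylint finds issues (Pylint exits non-zero in that case), so the scanner step still runs and uploads the report.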
We're moving our CI from Jenkins to GitLab and I'm trying to set up a pipeline that runs on both Windows and Linux.
Running multiple Python versions on a Linux GitLab runner works fine by defining versions like this:
.versions:
  parallel:
    matrix:
      - PYTHON_VERSION: ['3.7', '3.8', '3.9']
        OPERATING_SYSTEM: ['linux', 'windows']
and then referencing them in each job that needs them:
build_wheel:
  parallel: !reference [.versions, parallel]
I'm now trying to add a Windows runner and have run into the snag that PowerShell syntax is different from bash. Most of the Python calls still work, but calling the activate script needs to be different. How do I switch scripts depending on the operating system?
It doesn't seem to be possible to add rules to a script, so I'm trying something like this
.activate_linux: &activate_linux
  rules:
    - if: $OPERATING_SYSTEM == 'linux'
  script:
    - source venv/bin/activate

.activate_windows: &activate_windows
  rules:
    - if: $OPERATING_SYSTEM == 'windows'
  script:
    - .\venv\Scripts\activate

.activate: &activate
  - *activate_linux
  - *activate_windows
before_script:
  - python -m venv venv
  - *activate
  - pip install --upgrade pip wheel "setuptools<60"
but it gives me the error: "before_script config should be a string or a nested array of strings up to 10 levels deep".
Is it possible to have one .gitlab-ci.yml file that works on both Windows and Linux? Surely someone has worked this out, but I can't find any solutions.
You can't easily do this in the YAML with that matrix.
Instead, you can do this:
.scripts:
  make_venv:
    - python -m venv venv
  activate_linux:
    - !reference [.scripts, make_venv]
    - source ./venv/bin/activate
  activate_windows:
    - !reference [.scripts, make_venv]
    - venv/Scripts/activate.ps1
.job_template:
  parallel:
    matrix:
      - PYTHON_VERSION: ['3.7', '3.8', '3.9']
  script:
    - pip install --upgrade pip wheel "setuptools<60"
    - # ...
build_linux:
  extends: .job_template
  variables:
    OPERATING_SYSTEM: 'linux'
  before_script:
    - !reference [.scripts, activate_linux]

build_windows:
  extends: .job_template
  variables:
    OPERATING_SYSTEM: 'windows'
  before_script:
    - !reference [.scripts, activate_windows]
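One thing this leaves implicit: each job still has to be scheduled on a runner of the matching OS, which is usually done with runner tags. A hedged sketch, assuming your runners are registered with tags like these, added to each job:

build_linux:
  tags:
    - linux    # assumed tag of a Linux runner

build_windows:
  tags:
    - windows  # assumed tag of a Windows runner with PowerShell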
I decided to build my pipeline on this plan:
Build stage: runs only if the branch is the main one or one of my build files has been modified. It uses docker:latest, builds a test-ready container (pytest, lint), and pushes it to the local registry.
Test stage: always runs, using the container from the previous stage (its own branch's image, or the latest one). All tests run in it.
Push to production: not important right now.
Problem in stage 2:
I run the ls -la command and I don't see my venv or node_modules folders. I thought GIT_CLEAN_FLAGS would solve the problem, but it didn't help.
How to reproduce the problem:
Building the image:
FROM python:3.7-slim
ARG CI_PROJECT_DIR
WORKDIR $CI_PROJECT_DIR
# the requirements file has to be copied into the image before pip can see it
COPY requirements.txt .
RUN pip install -r requirements.txt
build:
  stage: build
  tags:
    - build
  script:
    - docker build --build-arg CI_PROJECT_DIR=$CI_PROJECT_DIR .
Test
lint:
  variables:
    GIT_CLEAN_FLAGS: none
  stage: test
  tags:
    - test
  script:
    - pwd
    - ls -lah
You don't need to use CI_PROJECT_DIR. Copy your code into a fixed directory in the image instead, /my-app for example, and in your second stage use cd /my-app.
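A minimal Dockerfile sketch of that approach (the /my-app path is just an example; the base image matches the question):

FROM python:3.7-slim
WORKDIR /my-app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

Since the code and its installed dependencies are baked into the image at /my-app, whatever GitLab does to the checkout directory between jobs no longer matters.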
Code example of your second stage:
test:
  stage: test
  tags:
    - test
  before_script:
    - cd /my-app
  script:
    - pwd
    - ls -lah
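One assumption worth spelling out: this job must actually run in the image built by the build stage, otherwise /my-app won't exist. That means the build stage has to tag and push the image and the test job has to select it, for example (the tag is hypothetical; the build command shown above doesn't push one yet):

test:
  image: $CI_REGISTRY_IMAGE:latest  # hypothetical tag pushed by the build stage
  stage: test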
Whenever I open my Gitpod workspace I have to re-install the packages from my requirements.txt file. I was reading about the .gitpod.yml file and see that I have to add the install step there so the dependencies get installed during the prebuild.
I can't find any examples of this, so I just want to see if I understand it correctly.
Right now my .gitpod.yml file looks like this...
image:
  file: .gitpod.Dockerfile

# List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:
  - init: echo 'init script' # runs during prebuild
    command: echo 'start script'

# List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 3000
    onOpen: open-preview

vscode:
  extensions:
    - ms-python.python
    - ms-azuretools.vscode-docker
    - eamodio.gitlens
    - batisteo.vscode-django
    - formulahendry.auto-close-tag
    - esbenp.prettier-vscode
Do I just add these two new 'init' and 'command' lines under tasks?
image:
  file: .gitpod.Dockerfile

# List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:
  - init: echo 'init script' # runs during prebuild
    command: echo 'start script'
  - init: pip3 install -r requirements.txt
    command: python3 manage.py

# List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 3000
    onOpen: open-preview

vscode:
  extensions:
    - ms-python.python
    - ms-azuretools.vscode-docker
    - eamodio.gitlens
    - batisteo.vscode-django
    - formulahendry.auto-close-tag
    - esbenp.prettier-vscode
Thanks so much for your help. I'm still semi-new to all this and trying to figure my way around.
To install requirements in the prebuild, you have to install them in the Dockerfile. The exception is editable installs (pip install -e .).
For example, to install a package named <package-name>, add this line to .gitpod.Dockerfile:
RUN python3 -m pip install <package-name>
Installing from a requirements file is slightly trickier because the Dockerfile can't "see" the file when it's building. One workaround is to give the Dockerfile the URL of the requirements file in the repo.
RUN python3 -m pip install -r https://gitlab.com/<gitlab-username>/<repo-name>/-/raw/master/requirements.txt
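If you want prebuilds to stay reproducible when master moves, the same trick works with the raw URL pinned to a specific ref or commit (the <commit-sha> here is a placeholder):

RUN python3 -m pip install -r https://gitlab.com/<gitlab-username>/<repo-name>/-/raw/<commit-sha>/requirements.txt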
Edit: Witness my embarrassing struggle with the same issue today: https://github.com/gitpod-io/gitpod/issues/7306
I have the following yaml pipeline build file:
pr:
  branches:
    include:
      - master

jobs:
- job: 'Test'
  pool:
    vmImage: 'Ubuntu-16.04'
  strategy:
    matrix:
      Python36:
        python.version: '3.6'
    maxParallel: 4

  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(python.version)'
      architecture: 'x64'
    env:
      POSTGRES: $(POSTGRES)

  - script: python -m pip install --upgrade pip && pip install -r requirements.txt
    displayName: 'Install dependencies'

  - script: |
      pip install pytest
      pytest tests -s --doctest-modules --junitxml=junit/test-results.xml
    displayName: 'pytest'
I set the variable POSTGRES in the pipeline settings as a secret variable. In the Python code, all environment variables are read with a call like:
if not os.getenv(var):
    raise ValueError(f'Environment variable \'{var}\' is not set')
When the build is executed it will throw exactly the above error for the POSTGRES variable. Are the environment variables not set correctly?
To make the environment variable available in the Python script, you need to define it in the step where it's used:
- script: |
    pip install pytest
    pytest tests -s --doctest-modules --junitxml=junit/test-results.xml
  displayName: 'pytest'
  env:
    POSTGRES: $(POSTGRES)
I don't know if you still need this but...
If you take a look at the documentation here it says:
Unlike a normal variable, they are not automatically decrypted into environment variables for scripts. You can explicitly map them in, though.
So it looks like you were doing it right. Maybe try using a different name for the mapped variable; it could be that the name of the original secret variable is confounding the mapping (because POSTGRES is already a defined variable, it won't be remapped).
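For instance, something like this in the step (POSTGRES_URL is just an illustrative name, not anything the pipeline defines for you):

env:
  POSTGRES_URL: $(POSTGRES)  # map the secret to a differently named variable

and then read os.getenv('POSTGRES_URL') on the Python side.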
I'm unsure how to set the PYTHONPATH correctly in CircleCI 2.0 to allow the build to run. This is a Django project that was previously building on CircleCI 1.0 successfully, so I've started from the auto-generated config.yml file.
version: 2
jobs:
  build:
    working_directory: ~/mygithubname/myproject
    parallelism: 1
    shell: /bin/bash --login
    environment:
      CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
      CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
      DATABASE_URL: 'sqlite://:memory:'
      DJANGO_SETTINGS_MODULE: myproject.settings.test
      DEBUG: 0
      PYTHONPATH: ${HOME}/myproject/myproject
    docker:
      - image: circleci/build-image:ubuntu-14.04-XXL-upstart-1189-5614f37
        command: /sbin/init
    steps:
      - checkout
      - run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
      - restore_cache:
          keys:
            # This branch if available
            - v1-dep-{{ .Branch }}-
            # Default branch if not
            - v1-dep-master-
            # Any branch if there are none on the default branch - this should be unnecessary if you have your default branch configured correctly
            - v1-dep-
      - run: pip install -r requirements/testing.txt
      - save_cache:
          key: v1-dep-{{ .Branch }}-{{ epoch }}
          paths:
            # This is a broad list of cache paths to include many possible development environments
            # You can probably delete some of these entries
            - vendor/bundle
            - ~/virtualenvs
            - ~/.m2
            - ~/.ivy2
            - ~/.bundle
            - ~/.go_workspace
            - ~/.gradle
            - ~/.cache/bower
      - run: pytest
      - store_test_results:
          path: /tmp/circleci-test-results
      - store_artifacts:
          path: /tmp/circleci-artifacts
      - store_artifacts:
          path: /tmp/circleci-test-results
The run: pytest step fails in CircleCI with the error "pytest-django could not find a Django project (no manage.py file could be found). You must explicitly add your Django project to the Python path to have it picked up." I know what the error means, but I'm not sure how to fix it in version 2 (it works when building on version 1), and I'm struggling to find anything in the documentation.
In CircleCI, variables defined under the environment key are not expanded, so the ${HOME} in your PYTHONPATH is taken literally. You need to either use BASH_ENV (https://circleci.com/docs/2.0/env-vars/#using-bash_env-to-set-environment-variables):
- run: echo 'export PYTHONPATH="${PYTHONPATH}:${HOME}/myproject/folder_with_manage.py:${HOME}/myproject/folder_with_tests"' >> $BASH_ENV
or set the full paths manually: the project folder, the folder containing manage.py, and the folder containing the tests:
environment:
  PYTHONPATH: /root/myproject/:/root/myproject/folder_with_manage.py/:/root/myproject/folder_with_tests/
To check that it is working, you could run:
- run: echo $PYTHONPATH
or
- run: python -c "import sys; print(sys.path)"
If you are using an image whose shell is not bash, don't forget to source $BASH_ENV yourself before running commands that depend on it (https://circleci.com/docs/2.0/env-vars/#setting-an-environment-variable-in-a-shell-command):
- run: |
    source $BASH_ENV
    # run tests