Save a file from gitlab-ci to a gitlab repository - python

I made this .gitlab-ci.yml file, but I can't find the HTML report in my repo at the end.
Can you tell me why?
image: python
services:
  - selenium/standalone-chrome:latest
variables:
  selenium_remote_url: "http://selenium__standalone-chrome:4444/wd/hub"
cucumber:
  script:
    - python --version
    - pwd
    - ls
    - pip install pytest
    - pip install pytest_bdd
    - pip install selenium
    - pip install chromedriver
    - pip install pytest-html
    - cd test_pytest
    - ls
    - python -m pytest step_defs/test_web_steps.py --html=report.html
Thanks,
Hadrien

You can actually generate test reports in GitLab. To do this, have pytest generate a JUnit XML report and store it in GitLab as an artifact. In your .gitlab-ci.yml file:
image: python:3.6
stages:
  - test
testing:
  stage: test
  when: manual
  script:
    ...
    - pytest --junitxml=report.xml
  artifacts:
    when: always
    reports:
      junit: report.xml
Then you can download this report or visualize it under the Tests tab of your pipeline.
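If you specifically want the HTML report to survive the job, a minimal sketch (assuming the report is written to test_pytest/report.html as in your script) is to declare it as a regular artifact path. Note that artifacts are stored with the pipeline and downloadable from the job page; a CI job never commits files back into the repository unless you script a git push yourself, which is why the report does not appear in your repo:

```yaml
cucumber:
  script:
    - cd test_pytest
    - python -m pytest step_defs/test_web_steps.py --html=report.html
  artifacts:
    when: always
    paths:
      - test_pytest/report.html
```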

Related

Pyinstaller not working in Gitlab CI file

I have created a Python application and I would like to deploy it via GitLab. To achieve this, I created the following .gitlab-ci.yml file:
# This file is a template, and might need editing before it works on your project.
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
image: "python:3.10"

# Commands to run in the Docker container before starting each job.
before_script:
  - python --version
  - pip install -r requirements.txt

# Different stages in the pipeline
stages:
  - Static Analysis
  - Test
  - Deploy

# Defines the job in Static Analysis
pylint:
  stage: Static Analysis
  script:
    - pylint -d C0301 src/*.py

# Tests the code
pytest:
  stage: Test
  script:
    - cd test/; pytest -v

# Deploy
deploy:
  stage: Deploy
  script:
    - echo "test ms deploy"
    - cd src/
    - pyinstaller -F gui.py --noconsole
  tags:
    - macos
It runs fine through the Static Analysis and Test phases, but in Deploy I get the following error:
OSError: Python library not found: .Python, libpython3.10.dylib, Python3, Python, libpython3.10m.dylib
This means your Python installation does not come with proper shared library files.
This usually happens due to missing development package, or unsuitable build parameters of the Python installation.
* On Debian/Ubuntu, you need to install Python development packages:
* apt-get install python3-dev
* apt-get install python-dev
* If you are building Python by yourself, rebuild with `--enable-shared` (or, `--enable-framework` on macOS).
As I am working on a MacBook, I tried the following addition: env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install 3.10.5, but then I get an error that Python 3.10.5 already exists.
I tried some other things, but I am a bit stuck. Any advice or suggestions?
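One hedged possibility, assuming the runner's Python comes from pyenv: the 3.10.5 that already exists was built without framework support, so it needs to be rebuilt rather than installed fresh. pyenv's install command accepts --force to overwrite an existing version:

```shell
# Rebuild the existing pyenv 3.10.5 with shared/framework support (macOS).
# --force overwrites the already-installed version instead of erroring out.
env PYTHON_CONFIGURE_OPTS="--enable-framework" pyenv install --force 3.10.5
pyenv local 3.10.5   # make the rebuilt interpreter active for this project
```

This is a sketch of the CI runner setup, not something to put in .gitlab-ci.yml itself unless the runner image also uses pyenv.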

Share the same workspace across different jobs

Can I share the same workspace installed in one job among other jobs?
In particular, I want to share the software installed in one job with later jobs. According to the documentation,
When you run a pipeline on a self-hosted agent, by default, none of the sub-directories are cleaned in between two consecutive runs.
However, the pipeline below fails in job J2 because the sphinx installed in job J1 is lost in J2.
jobs:
  - job: 'J1'
    pool:
      vmImage: 'Ubuntu-16.04'
    strategy:
      matrix:
        Python37:
          python.version: '3.7'
      maxParallel: 3
    steps:
      - task: UsePythonVersion@0
        inputs:
          versionSpec: '$(python.version)'
          architecture: 'x64'
      - script: python -m pip install --upgrade pip
        displayName: 'Install dependencies'
      - script: pip install --upgrade pip
        displayName: 'Update pip'
      - script: |
          echo "Publishing document for development version $(Build.BuildId)"
          pip install -U sphinx
        displayName: 'TEST J1'
      - script: |
          echo "TEST SPHINX"
          sphinx-build --help
        displayName: 'TEST SPHINX'
  - job: 'J2'
    dependsOn: 'J1'
    steps:
      - task: UsePythonVersion@0
        inputs:
          versionSpec: '3.x'
          architecture: 'x64'
      - script: |
          echo "TEST SPHINX"
          sphinx-build --help
        displayName: 'TEST SPHINX'
This error is not related to the workspace.
Yes, the workspace can be shared across jobs, and in your pipeline sphinx does end up in the workspace. However, it is not installed into a directory on PATH, so when a later job tries to execute it, the command is not found because of the wrong PATH value.
On the Ubuntu agent, pip installs with --user by default, since the agent user does not have write access to the system site-packages on the VM image. Unless you change something, executables are therefore installed to ~/.local/bin, which is not on PATH by default.
To solve this, make sure the command you are using can be found on PATH: either add its directory to PATH or call it by its absolute path. You can use export to set the PATH value manually:
export PATH="$HOME/.local/bin:$PATH"
You can check this blog for more details.
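A sketch of how that could look as a step in J2 (assuming sphinx was pip-installed with --user in J1 and the agent is self-hosted, so the workspace survives between jobs):

```yaml
- job: 'J2'
  dependsOn: 'J1'
  steps:
    - script: |
        # ~/.local/bin is where pip --user puts console scripts on Linux
        export PATH="$HOME/.local/bin:$PATH"
        sphinx-build --help
      displayName: 'TEST SPHINX'
```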

Gitlab CI - Django functional tests - splinter

I want to run some automated tests on GitLab on a project of mine that uses the Django framework, so I am using a Django functional test. While executing the test on my local PC works fine, my pipeline always fails on those tests.
I assumed that chromedriver wasn't working correctly, and after some research on the internet I found out that I need to install Chrome as the browser, so I modified my requirements.txt for pip like this:
applescript==2018.11.19
astroid==2.1.0
autopep8==1.4.3
chromedriver==2.24.1
decorator==4.3.0
detect==2018.11.19
Django==2.1.3
flake8==3.6.0
google-chrome==2018.11.19
google-chrome-cli==2018.11.19
isort==4.3.4
lazy-object-proxy==1.3.1
mccabe==0.6.1
only==2018.11.20
psutil==5.4.8
public==2018.11.20
pycodestyle==2.4.0
pyflakes==2.0.0
pylint==2.2.1
pytz==2018.7
runcmd==2018.11.20
selenium==3.141.0
six==1.11.0
splinter==0.10.0
temp==2018.11.20
urllib3==1.24.1
wrapt==1.10.11
.gitlab-ci.yml
image: python:latest
before_script:
  - pip install virtualenv
  - virtualenv --python=python3 venv/
  - source venv/bin/activate
  - pip install -r requirements.txt
  - cd src/
  - python manage.py migrate
stages:
  - quality
  - tests
flake8:
  stage: quality
  script:
    - flake8 ./
test:
  stage: tests
  script:
    - python manage.py test
test_functional.py
def setUp(self):
    # LINUX x64
    executable_path = {'executable_path': settings.CHROMEDRIVER_PATH_LINUX64}
    # chrome
    self.browser_chrome = Browser('chrome', **executable_path)
    [..]
With this, a chrome browser has been installed, but now I get this error:
selenium.common.exceptions.WebDriverException:
Message: Service /builds/mitfahrzentrale/mitfahrzentrale/venv/chromedriver unexpectedly exited.
Status code was: 127
What do I need to modify in order to use chromedriver for gitlab?
I don't think the google-chrome package does what you think it does. Looking at its source code, it's a Python wrapper for a set of AppleScript commands around the Chrome browser on macOS; it will certainly not install the browser on Linux.
For reference, here is the (stripped) Gitlab CI pipeline we're using with Django and Selenium to run tests with Firefox and Chrome:
stages:
  - test

.test:
  coverage: '/TOTAL.*\s+(\d+%)$/'

test-linux_x86_64:
  extends: .test
  image: python:3.7.1-stretch
  stage: test
  tags:
    - linux_x86_64
  script:
    - apt -qq update
    - DEBIAN_FRONTEND=noninteractive apt -qq -y install xvfb firefox-esr chromium chromedriver
    # download geckodriver as no distro offers a package
    - apt install -qq -y jq  # I don't want to parse JSON with regexes
    - curl -s https://api.github.com/repos/mozilla/geckodriver/releases/latest | jq -r '.assets[].browser_download_url | select(contains("linux64"))' | xargs -n1 curl -sL | tar -xz -C /usr/local/bin
    - chmod +x /usr/local/bin/geckodriver
    # prepare Django installation
    - python -m venv /opt/testing
    # bundled pip and setuptools are outdated
    - /opt/testing/bin/pip install --quiet --upgrade pip setuptools
    - /opt/testing/bin/pip install --quiet -r requirements.txt
    - xvfb-run /opt/testing/bin/python manage.py test
Some notes:
- Taking a closer look at the job, all the steps besides the last two are preparation; moving them into a custom Docker image will reduce the test running time and the amount of boilerplate in your pipeline.
- Here, xvfb is used to run the browser in a virtual display; modern browsers can run in headless mode (add --headless to the browser options), making the virtual display unnecessary. If you don't need to support old browser versions, you can omit the xvfb installation and xvfb-run usage.
- The tests will run as root in the container; at first, we got the error
E selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
E (unknown error: DevToolsActivePort file doesn't exist)
E (The process started from chrome location /usr/bin/chromium is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
E (Driver info: chromedriver=2.41,platform=Linux 4.15.10-300.fc27.x86_64 x86_64)
If you face this, you need to pass the additional flag --no-sandbox to Chrome, because it refuses to run as root without it:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
ds = DesiredCapabilities.CHROME
ds['loggingPrefs'] = {'browser': 'ALL'}
driver = webdriver.Chrome(desired_capabilities=ds, options=chrome_options)

Travis-ci with tox failing on python2.6, python3.3 and pypy

I just committed a seemingly uninteresting commit, updating the release notes and setup for PyPI. The Travis CI build fails, however, when running tox with py26, py33 and pypy:
https://travis-ci.org/Turbo87/aerofiles
1.13s$ tox -e $TOX_ENV -- --cov aerofiles --cov-report term-missing
py26 create: /home/travis/build/Turbo87/aerofiles/.tox/py26
ERROR: InterpreterNotFound: python2.6
I didn't change anything in the .travis.yml, and tox has been pinned to version 1.7.2:
language: python
python: 2.7
sudo: false
env:
  - TOX_ENV=py26
  - TOX_ENV=py27
  - TOX_ENV=py33
  - TOX_ENV=py34
  - TOX_ENV=pypy
install:
  # Install tox and flake8 style checker
  - pip install tox==1.7.2 flake8==2.1.0
script:
  # Run the library through flake8
  - flake8 --exclude=".git,docs" --ignore=E501 .
  # Run the unit test suite
  - tox -e $TOX_ENV -- --cov aerofiles --cov-report term-missing
It would be great if someone could help out. I am quite new to Travis CI (and tox) and it's quite a black box at the moment.
A few weeks ago I was forced to change all my .travis.yml files exactly because of this problem. See my commit. Instead of
env:
  - TOXENV=py27
  - TOXENV=py34
write
matrix:
  include:
    - python: "2.7"
      env: TOXENV=py27
    - python: "3.4"
      env: TOXENV=py34
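Applied to the .travis.yml in the question, the conversion might look like the sketch below. The idea is that each tox environment must be paired with an interpreter Travis actually provides for it, instead of running every TOX_ENV on the default Python 2.7 image (the exact python version strings are assumptions following the same pattern):

```yaml
language: python
sudo: false
matrix:
  include:
    - python: "2.6"
      env: TOX_ENV=py26
    - python: "2.7"
      env: TOX_ENV=py27
    - python: "3.3"
      env: TOX_ENV=py33
    - python: "3.4"
      env: TOX_ENV=py34
    - python: "pypy"
      env: TOX_ENV=pypy
install:
  - pip install tox==1.7.2 flake8==2.1.0
script:
  - flake8 --exclude=".git,docs" --ignore=E501 .
  - tox -e $TOX_ENV -- --cov aerofiles --cov-report term-missing
```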

Travis CI with GAE and django

I am having some problems when using the Google Python SDK in Travis CI. I always get this exception:
Failure: ImportError (No module named google.appengine.api) ... ERROR
I think the problem is in my Travis file or Django settings file. Can I use the GAE SDK API on the Travis platform?
Here is my .travis.yml file:
language: python
python:
  - "2.7"
before_script:
  - wget https://storage.googleapis.com/appengine-sdks/featured/google_appengine_1.9.10.zip -nv
  - unzip -q google_appengine_1.9.10.zip
  - mysql -e 'create database DATABASE_NAME;'
  - echo "USE mysql;\nUPDATE user SET password=PASSWORD('A_PASSWORD') WHERE user='USER';\nFLUSH PRIVILEGES;\n" | mysql -u USER
  - python manage.py syncdb --noinput
install:
  - pip install -r requirements.txt
  - pip install mysql-python
script: python manage.py test --with-coverage
branches:
  only:
    - testing
Thank you
After trying a lot, I solved it by adding this to my .travis.yml file, in the before_script section after the unzip command:
- export PYTHONPATH=${PYTHONPATH}:google_appengine
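This works because Python prepends every PYTHONPATH entry to sys.path at interpreter start-up, so the unzipped google_appengine directory becomes importable. A minimal sketch of the mechanism, using a hypothetical package in a temp directory instead of the real SDK:

```python
import os
import subprocess
import sys
import tempfile

# Create a stand-in for the unzipped SDK: a directory containing a package.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'fake_sdk_pkg'))
open(os.path.join(root, 'fake_sdk_pkg', '__init__.py'), 'w').close()

# A fresh interpreter with PYTHONPATH pointing at that directory can import it.
env = dict(os.environ, PYTHONPATH=root)
result = subprocess.run(
    [sys.executable, '-c', 'import fake_sdk_pkg; print("importable")'],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # importable
```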
