GitHub Actions not picking up Django tests - python

I want to write a simple GitHub Action that runs my Django app's tests when I push to GitHub. GitHub runs the workflow on push, but for some reason, it doesn't pick up any of the tests, even though running python ./api/manage.py test locally works.
The Run tests section of the Job summary shows this:
Run python ./api/manage.py test
System check identified no issues (0 silenced).
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
For background, my local setup uses docker-compose, with a Dockerfile for each app; the Django app is the API. All I want to do is run the Django tests on push.
I've come across GitHub service containers, and I thought they might be necessary since django needs a postgres db connection to run its tests.
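For context, the service container only helps if the Django settings used in CI read the connection details from the environment, roughly something like this (a minimal sketch; names and defaults are illustrative, not taken from the project):

# api/settings.py (excerpt) -- hypothetical sketch, adjust to your project
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "postgres"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "password"),
        "HOST": os.environ.get("POSTGRES_HOST", "localhost"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}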
I'm new to GitHub Actions so any direction would be appreciated. My hunch is that it should be simpler than this, but below is my current .github/workflows/django.yml file:
name: Django CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  tests:
    runs-on: ubuntu-latest
    container: python:3

    services:
      # Label used to access the service container
      db:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: password
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      # Downloads a copy of the code in your repository before running CI tests
      - name: Check out repository code
        uses: actions/checkout@v2

      # Performs a clean installation of all dependencies
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r api/requirements.txt

      - name: Run Tests
        run: |
          python ./api/manage.py test
        env:
          # The hostname used to communicate with the PostgreSQL service container
          POSTGRES_HOST: postgres
          # The default PostgreSQL port
          POSTGRES_PORT: 5432

I don't know if you have solved it yourself; if so, please share your solution. What I ran into when doing the same thing as you is that I had to specify the app that I wanted to test for it to work.
For example, if your app is called "testapp" you need to do:
run: |
  python ./api/manage.py test testapp
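For reference, Django's default test runner discovers tests in files named test*.py under the labels you pass it (or under the current directory when you pass none), so it is also worth checking the file layout. A minimal module that discovery should pick up might look like this (app and test names are made up for illustration):

# api/testapp/tests.py -- illustrative only
from django.test import TestCase

class SmokeTest(TestCase):
    def test_smoke(self):
        self.assertTrue(True)

With something like that in place, python ./api/manage.py test testapp should report one test instead of "Ran 0 tests".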

Related

Keep postgres docker container in azure pipeline job running

I'm rather new to Azure and currently playing around with the pipelines. My goal is to run a postgres alpine docker container in the background, so I can perform tests through my python backend.
This is my pipeline config
trigger:
  - main

pool:
  vmImage: ubuntu-latest

variables:
  POSTGRE_CONNECTION_STRING: postgresql+psycopg2://postgres:passw0rd@localhost/postgres

resources:
  containers:
    - container: postgres
      image: postgres:13.6-alpine
      trigger: true
      env:
        POSTGRES_PASSWORD: passw0rd
      ports:
        - 1433:1433
      options: --name postgres

stages:
  - stage: QA
    jobs:
      - job: test
        services:
          postgres: postgres
        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: $(PYTHON_VERSION)
          - task: Cache@2
            inputs:
              key: '"$(PYTHON_VERSION)" | "$(Agent.OS)" | requirements.txt'
              path: $(PYTHON_VENV)
              cacheHitVar: 'PYTHON_CACHE_RESTORED'
          - task: CmdLine@2
            displayName: Wait for db to start
            inputs:
              script: |
                sleep 5
          - script: |
              python -m venv .venv
            displayName: create virtual environment
            condition: eq(variables.PYTHON_CACHE_RESTORED, 'false')
          - script: |
              source .venv/bin/activate
              python -m pip install --upgrade pip
              pip install -r requirements.txt
            displayName: pip install
            condition: eq(variables.PYTHON_CACHE_RESTORED, 'false')
          - script: |
              source .venv/bin/activate
              python -m pytest --junitxml=test-results.xml --cov=app --cov-report=xml tests
            displayName: run pytest
          - task: PublishTestResults@2
            condition: succeededOrFailed()
            inputs:
              testResultsFormat: 'JUnit'
              testResultsFiles: 'test-results.xml'
              testRunTitle: 'Publish FastAPI test results'
          - task: PublishCodeCoverageResults@1
            inputs:
              codeCoverageTool: 'Cobertura'
              summaryFileLocation: 'coverage.xml'
But the pipeline always fails at the step "Initialize Containers", giving this error:
Error response from daemon: Container <containerID> is not running, as if it had just shut down because there was nothing to do. That would seem plausible, but I don't know how to keep it running until my tests are done; the backend just runs pytest against the database. I also tried adding that resource as a container using the container property, but then the pipeline crashes at the same step, saying that the container had been running for less than a second.
I'm thankful for any ideas!
I doubt your container is stopping because "there is nothing to do"; the postgres image is configured to act as a service. Your container is probably stopping because of an error.
One thing to fix: you have to add the PGPORT env var to your container and set it to 1433, because that is not the default port for the postgres Docker image, so publishing that port with ports does not achieve much on its own in this case.
Also, your trigger: true property means that you expect the pipeline to run whenever a new image is released on the official Docker Hub repository for postgres. I don't think that makes much sense, so you should remove it, although this is a marginal problem from the perspective of your question.
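On top of that, the fixed "sleep 5" wait step in the pipeline is fragile; a small probe that polls the database until it accepts connections is more robust. A rough sketch using psycopg2 (file name and connection details are illustrative and must match your PGPORT/credentials):

# wait_for_db.py -- illustrative readiness probe, not part of the original pipeline
import sys
import time

import psycopg2

def wait_for_db(dsn: str, timeout: float = 60.0) -> None:
    """Poll the database until a connection succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            psycopg2.connect(dsn).close()
            return
        except psycopg2.OperationalError:
            if time.monotonic() > deadline:
                sys.exit("database did not become ready in time")
            time.sleep(2)

if __name__ == "__main__":
    # host/port/credentials must mirror the pipeline variables above
    wait_for_db("host=localhost port=1433 user=postgres password=passw0rd dbname=postgres")

Running python wait_for_db.py in place of the sleep 5 script step would then block until Postgres is actually reachable.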

How to run API endpoints tests with Docker-Compose in Gitlab CI/CD pipeline

I want to automate the testing process for my simple API with a GitLab CI/CD pipeline and docker-compose. I have tests that I want to run when the app container is built; the problem is that I cannot wait for the app service to be up before running the tests against http://app:80.
Project structure:
project:
-- app
-- tests
-- docker-compose.yml
-- .gitlab-ci.yml
What I have:
docker-compose:
version: "3.0"
services:
app:
build:
context: ./app
dockerfile: Dockerfile
ports:
- "81:80"
volumes:
- ./app:/app/app
tests:
build:
context: ./tests
dockerfile: Dockerfile
postgres:
image: postgres:12-alpine
ports:
- "5432"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASS}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- ./app/data:/var/lib/postgresql/data
tests/ dir with files:
import requests

# HOST is assumed to be configured elsewhere (e.g. read from an environment variable)

def test_select():
    url = f"{HOST}/select/"
    response = requests.get(url)
    status_code = response.status_code
    result_len = len(response.json().get("result"))
    assert status_code == 200
    assert result_len != 0
.gitlab-ci.yml:
stages:
  - build

build:
  stage: build
  script:
    - sudo mkdir -p /home/app/
    - sudo cp -r $PWD/* /home/app/
    - cd /home/app/
    - docker-compose up -d
The end goal is to run the tests as part of the docker-compose build/up, so that if some test fails, docker-compose fails and the pipeline does too.
Is this possible and if there is another way to resolve this I will be very grateful.
There are a few solutions for this with varying levels of sophistication:
Add a long enough wait to the start of your container
Add retry logic (ideally with backoff) to the code running inside the container (see the retry sketch after the example output below)
Depend on an intermediate container whose logic is responsible for ensuring the other dependency is fully available and functional
Though, I think your issue is that you're simply missing a depends_on declaration in your docker-compose. Also be sure your app image has proper EXPOSE declarations or add them in the compose file.
Also, since you're running your test inside the docker network, you don't need the port mapping. You can contact the service directly on its exposed port.
app:
  ports:
    - "80"
tests:
  depends_on: # IMPORTANT! Waits for app to be available
    - app     # makes sure you can talk to app on the network
  # ...
Then your tests should be able to reach http://app
As a complete example using public projects:
version: "3"
services:
app:
image: strm/helloworld-http
ports:
- "80" # not strictly needed since this image has EXPOSE 80
tests:
depends_on:
- app
image: curlimages/curl
command: "curl http://app"
If you ran docker-compose up you'd see the following output:
Creating testproj_app_1 ... done
Creating testproj_tests_1 ... done
Attaching to testproj_app_1, testproj_tests_1
app_1 | 172.18.0.3 - - [12/Nov/2021 03:03:46] "GET / HTTP/1.1" 200 -
tests_1 | % Total % Received % Xferd Average Speed Time Time Time Current
tests_1 | Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0<html><head><title>HTTP Hello World</title></head><body><h1>Hello from d8
f6894ccd1e</h1></body></html
100 102 100 102 0 0 4706 0 --:--:-- --:--:-- --:--:-- 4857
testproj_tests_1 exited with code 0
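One caveat: depends_on only controls start order, it does not wait for the server inside the app container to be ready, so the retry-with-backoff option from the list above is still worth having in the test code itself. A rough sketch (URL, attempt count and delays are placeholders):

# illustrative retry helper for the tests service; adjust URL and limits to taste
import time

import requests

def get_with_retries(url: str, attempts: int = 8, base_delay: float = 0.5) -> requests.Response:
    """GET a URL, retrying with exponential backoff while the connection is refused."""
    for attempt in range(attempts):
        try:
            return requests.get(url, timeout=5)
        except requests.ConnectionError:
            time.sleep(min(base_delay * 2 ** attempt, 5.0))
    raise RuntimeError(f"{url} never became reachable")

def test_select():
    response = get_with_retries("http://app/select/")
    assert response.status_code == 200
    assert len(response.json().get("result")) != 0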
You could also opt to use GitLab's services: keyword. However, if you already have a workflow that tests locally using docker-compose, this is less ideal, because you end up testing one way locally and another way in GitLab, and the method is not portable to other CI systems or to other developers' local environments.
Usually you do not start docker-compose.yml from within the pipeline. Your docker-compose.yml is useful for local development, but in the pipeline you have to use a different approach, using GitLab services: https://docs.gitlab.com/ee/ci/services/
But if you want to E2E or load test your API from a GitLab pipeline, you can use services to expose, for example, the postgres database:
test:e2e:
  image: ubuntu:20.04
  stage: test
  services:
    - name: postgres:12-alpine
      alias: postgress
  script:
    - curl http://postgress:5432 # should work!
The next step is to start your API in the background. For example:
script:
  - python my-app.py &
  - sleep 30
  # your app should be up now and exposed on, say, localhost:81 according to your specs.
  # You can safely run your API tests here.
Note that Python will not be available out of the box. For that you have to either install it in the pipeline or create a Docker image that you use in the pipeline. Personally, I always use a custom Docker image in GitLab pipelines to avoid Docker Hub rate limits. I have an example of a personal project that creates custom images and stores them in GitLab.

How does deploying and running a python script to an azure resource work?

I'm very new to DevOps, so this may be a very silly question. I'm trying to deploy a Python web-scraping script onto an Azure web app using GitHub Actions. The script is meant to run for a long period of time, as it analyzes websites word by word for hours, and it logs the results to .log files.
I know a bit about how GitHub Actions work; I know I can trigger jobs when I push to the repo, for instance. However, I'm confused as to how one runs the app or a script on an Azure resource (like a VM or web app). Does this process involve SSH-ing into the resource and then automatically running the CLI command "python main.py" or "docker-compose up", or is there something more sophisticated involved?
For better context, this is my script inside of my workflows folder:
on:
  [push]

env:
  AZURE_WEBAPP_NAME: emotional-news-service # set this to your application's name
  WORKING_DIRECTORY: '.' # set this to the path of your working directory inside the GitHub repository, defaults to the repository root
  PYTHON_VERSION: '3.9'
  STARTUP_COMMAND: 'docker-compose up --build -d' # set this to the startup command required to start the gunicorn server. default it is empty

name: Build and deploy Python app

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment: dev
    steps:
      # checkout the repo
      - uses: actions/checkout@master
      # setup python
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      # setup docker compose
      - uses: KengoTODA/actions-setup-docker-compose@main
        with:
          version: '1.26.2'
      # install dependencies
      - name: python install
        working-directory: ${{ env.WORKING_DIRECTORY }}
        run: |
          sudo apt install python${{ env.PYTHON_VERSION }}-venv
          python -m venv --copies antenv
          source antenv/bin/activate
          pip install setuptools
          pip install -r requirements.txt
          python -m spacy download en_core_web_md
      # Azure login
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/appservice-settings@v1
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          mask-inputs: false
          general-settings-json: '{"linuxFxVersion": "PYTHON|${{ env.PYTHON_VERSION }}"}' # 'General configuration settings as Key Value pairs'
      # deploy web app
      - uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          package: ${{ env.WORKING_DIRECTORY }}
          startup-command: ${{ env.STARTUP_COMMAND }}
      # Azure logout
      - name: logout
        run: |
          az logout
most of the script above was taken from: https://github.com/Azure/actions-workflow-samples/blob/master/AppService/python-webapp-on-azure.yml.
is env.STARTUP_COMMAND the "SSH and then run the command" part that I was thinking of, or is it something else entirely?
I also have another question: is there a better way to view logs from that python script running from within the azure resource? The only way I can think of is to ssh into it and then type in "cat 'whatever.log'".
Thanks in advance!
Instead of using STARTUP_COMMAND: 'docker-compose up --build -d' you can use the startup file name.
startUpCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'
or
StartupCommand: 'startup.txt'
Here the StartupCommand parameter is needed because the app object is defined in the startup.py file. By default, Azure App Service looks for the Flask app object in a file named app.py or application.py; if your code doesn't follow this pattern, you need to customize the startup command. Django apps may not need customization at all. For more information, see How to configure Python on Azure App Service - Customize startup command.
Also, because the python-vscode-flask-tutorial repository contains the same startup command in a file named startup.txt, you could specify that file in the StartupCommand parameter rather than the command, by using StartupCommand: 'startup.txt'.
Refer: here for more info
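For context, the startup:app part of that gunicorn command is just module:object, i.e. a file named startup.py that exposes a WSGI app object. A minimal sketch of what it points at (an illustrative Flask example, not taken from the question's repository):

# startup.py -- hypothetical module matching "gunicorn --bind=0.0.0.0 --workers=4 startup:app"
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "service is up"

App Service runs that startup command for you after deployment, which is why no manual SSH step is involved.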

Ignoring Test Files with Nose/unittest

I have two test files.
One is standard TestCases. The other has selenium tests in it. I have a docker-compose file setup to run the selenium tests with their associated services.
The tests run in Bitbucket Pipelines. First it runs the step with the standard TestCases. Then in the next step it uses docker-compose to run the tests that include selenium, since it needs Compose to set up the selenium services.
The issue I'm having is that it tries to run the selenium tests during the first step, when there aren't any selenium containers running.
I entered this in my Django settings file used for the first testing part:
NOSE_ARGS = [
    '--exclude=(__init__.py)',
    '--exclude=(../tests/selenium_tests.py)',
    '--exclude=(selenium_tests.py)',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]
and in my settings for the second:
NOSE_ARGS = [
    '--exclude=(__init__.py)',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]
As you can see, I want to exclude the selenium tests for the first part, and use them for the second. But even with my exclude in nose_args it still runs the selenium tests in the first part.
Here is my bitbucket-pipelines.yml:
prod-final:
  - step:
      name: Build image, run tests, and push to Docker Hub
      caches:
        - docker
      services:
        - docker
      script:
        # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
        - export DOCKER_HUB_USERNAME=X
        - export DOCKER_HUB_PASSWORD=XXX
        # build the Docker image (this will use the Dockerfile in the root of the repo)
        - docker build -t meleiyu/sawebsite-redesign -f Dockerfile.imagetest .
        # run tests
        - docker run meleiyu/sawebsite-redesign
        # tag Docker image
        - docker tag meleiyu/sawebsite-redesign meleiyu/sawebsite-redesign:latest
        # authenticate with the Docker Hub registry
        - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
        # push the new Docker image to the Docker registry
        - docker push meleiyu/sawebsite-redesign:latest
  - step:
      name: Run second level of testing with BrowserStack
      image: docker:stable
      trigger: manual
      caches:
        - docker
      services:
        - docker
      script:
        # Test production setup with compose
        - apk add --no-cache py-pip bash
        - pip install --no-cache docker-compose
        - docker-compose -f docker-compose.test.yml up -d --build --exit-code-from app
  - step:
      # set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION as environment variables
      name: Deploy to AWS
      deployment: staging # set to test, staging or production
      trigger: manual # uncomment to have a manual step
      image: atlassian/pipelines-awscli
      script:
        - aws deploy push --application-name sawebsite-redesign --s3-location s3://<S3 bucket>/<s3-key> --ignore-hidden-files
        - aws deploy create-deployment --application-name sawebsite-redesign --s3-location bucket=<s3-bucket>,key=<s3-key>,bundleType=zip --deployment-group-name <deployment-group>

definitions:
  services:
    docker:
      memory: 3072
Dockerfile.imagetest is for the first part, and has the settings to exclude selenium tests file.
Dockerfile.test and docker-compose.test.yml are for the second part and don't exclude the selenium tests file.
Does my config/setup make sense? Am I excluding the files correctly in NOSE_ARGS? Because, as I said, it's running the selenium tests in part one when it shouldn't be doing so.
Any input would be appreciated.
Thanks
EDIT:
My NOSE_ARGS now:
To not ignore any tests:
NOSE_ARGS = [
    '--exclude=(__init__.py)',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]
To exclude selenium tests:
NOSE_ARGS = [
    '--exclude=(__init__.py)',
    '--exclude=(../tests/selenium_tests.py)',
    '--exclude=(selenium_tests.py)',
    '--ignore=(selenium_tests.py)',
    '--ignore=(../tests/selenium_tests.py)',
    '--ignore-files=../tests/selenium_tests.py',
    '--ignore-files=selenium_tests.py',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]
Based on https://nose.readthedocs.io/en/latest/usage.html#cmdoption-e, the --exclude argument should be a regex.
Did you try something like --exclude=selenium?
EDIT
Say there's a test tests/integration/test_integration.py in the project which you want to ignore. Running
nosetests -v --exclude=integration
or
nosetests -v --ignore-files=test_integration.py
does the trick (i.e. all tests but those in test_integration.py are executed).
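Applied to the settings above, the first (non-selenium) step's NOSE_ARGS could then use a single pattern-based exclude; a sketch following that advice (regex kept deliberately broad, remaining coverage/xunit options unchanged from the question and partly omitted here):

NOSE_ARGS = [
    '--exclude=selenium',                       # regex match, skips selenium_tests.py
    # or, matching on the file name instead:
    # r'--ignore-files=selenium_tests\.py',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
]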

How to run Odoo tests unittest2?

I tried running Odoo tests using --test-enable, but it doesn't work. I have a couple of questions.
According to the documentation, tests can only be run during module installation; what happens when we add functionality and then want to run the tests?
Is it possible to run tests from an IDE like PyCharm?
This is useful for running Odoo test cases:
./odoo.py -i/-u module_being_tested -d being_used_to_test --test-enable
Common options:
  -i INIT, --init=INIT
      install one or more modules (comma-separated list, use "all" for all modules), requires -d
  -u UPDATE, --update=UPDATE
      update one or more modules (comma-separated list, use "all" for all modules). Requires -d.
Database related options:
  -d DB_NAME, --database=DB_NAME
      specify the database name
Testing Configuration:
  --test-enable: Enable YAML and unit tests.
@aftab You need to add the log level, please see below.
./odoo.py -d <dbname> --test-enable --log-level=test
And regarding your question: if you are making changes to installed modules and need to re-run all test cases, you simply need to restart your server with -u <module_name> (or -u all for all modules) together with the above command.
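Regarding the first question: in newer Odoo versions (12+) the default at_install tests run while the module is being installed or updated, but you can tag a test class post_install so it runs after the whole install/update pass instead; either way it still needs --test-enable plus -i/-u. A minimal sketch (module path, model and assertion are purely illustrative):

# your_module/tests/test_partner.py -- illustrative only
from odoo.tests.common import TransactionCase, tagged

@tagged('post_install', '-at_install')  # run after installation instead of during it
class TestPartner(TransactionCase):

    def test_partner_creation(self):
        partner = self.env['res.partner'].create({'name': 'Test Partner'})
        self.assertEqual(partner.name, 'Test Partner')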
Here is a REALLY nice plugin to run unit odoo tests directly with pytest:
https://github.com/camptocamp/pytest-odoo
Here's a result example:
I was able to run Odoo's tests using PyCharm; to achieve this I used Docker + pytest-odoo + PyCharm (using remote interpreters).
First you set up a Dockerfile like this:
FROM odoo:14
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-pip
RUN pip3 install pytest-odoo coverage pytest-html
USER odoo
And a docker-compose.yml like this:
version: '2'
services:
  web:
    container_name: plusteam-odoo-web
    build:
      context: .
      dockerfile: Dockerfile
    image: odoo:14
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
    command: --dev all
  db:
    container_name: plusteam-odoo-db
    image: postgres:13
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata

volumes:
  odoo-web-data:
  odoo-db-data:
So we extend an Odoo image with pytest-odoo and packages to generate coverage reports.
Once you have this, you can run docker-compose up -d to get your Odoo instance running; the Odoo container will have pytest-odoo installed. The next part is to tell PyCharm to use a remote interpreter with the modified Odoo image that includes the pytest-odoo package.
Now every time you run a script in PyCharm, it will launch a new container based on the image you provided.
After examining the containers launched by PyCharm, I realized they bind the project's directory to the /opt/project/ directory inside the container. This is useful because you will need to modify the odoo.conf file when you run your tests.
You can customize the database connection to point at a dedicated testing db (which you should do), and the important part is that you need to map the addons_path option to /opt/project/addons, or whatever the final path inside the containers launched by PyCharm is where your custom addons are available.
With this you can create a PyCharm script for pytest. Notice how we provide the path to the Odoo config with the modifications for testing; this way the Odoo instance in the container launched by PyCharm knows where your custom addons' code is located.
Now we can run the script, and even debug it, and everything works as expected.
I go further into this matter (my particular solution) in a Medium article, and I even wrote a repository with a working demo so you can try it out; hope this helps:
https://medium.com/plusteam/how-to-run-odoo-tests-with-pycharm-51e4823bdc59 https://github.com/JSilversun/odoo-testing-example
Be aware that when using remote interpreters you just need to make sure the Odoo binary can find the addons folder properly and you will be all set :) Besides, using a Dockerfile to extend an image helps speed up development.
