Ignoring Test Files with Nose/unittest - python

I have two test files.
One contains standard TestCases; the other contains Selenium tests. I have a docker-compose file set up to run the Selenium tests with their associated services.
The tests run in Bitbucket Pipelines. The first step runs the standard TestCases. The next step uses docker-compose to run the Selenium tests, since Compose is needed to set up the Selenium services.
The issue is that the Selenium tests are being run during the first step, when no Selenium containers are running.
I added this to the Django settings file used for the first testing step:
NOSE_ARGS = [
    '--exclude=(__init__.py)',
    '--exclude=(../tests/selenium_tests.py)',
    '--exclude=(selenium_tests.py)',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]
and in my settings for the second:
NOSE_ARGS = [
    '--exclude=(__init__.py)',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]
As you can see, I want to exclude the Selenium tests in the first step and run them in the second. But even with the exclude entries in NOSE_ARGS, the Selenium tests still run in the first step.
Here is my bitbucket-pipelines.yml:
prod-final:
  - step:
      name: Build image, run tests, and push to Docker Hub
      caches:
        - docker
      services:
        - docker
      script:
        # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
        - export DOCKER_HUB_USERNAME=X
        - export DOCKER_HUB_PASSWORD=XXX
        # build the Docker image (this will use the Dockerfile in the root of the repo)
        - docker build -t meleiyu/sawebsite-redesign -f Dockerfile.imagetest .
        # run tests
        - docker run meleiyu/sawebsite-redesign
        # tag Docker image
        - docker tag meleiyu/sawebsite-redesign meleiyu/sawebsite-redesign:latest
        # authenticate with the Docker Hub registry
        - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
        # push the new Docker image to the Docker registry
        - docker push meleiyu/sawebsite-redesign:latest
  - step:
      name: Run second level of testing with BrowserStack
      image: docker:stable
      trigger: manual
      caches:
        - docker
      services:
        - docker
      script:
        # Test production setup with compose
        - apk add --no-cache py-pip bash
        - pip install --no-cache docker-compose
        - docker-compose -f docker-compose.test.yml up -d --build --exit-code-from app
  - step:
      # set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION as environment variables
      name: Deploy to AWS
      deployment: staging # set to test, staging or production
      trigger: manual # uncomment to have a manual step
      image: atlassian/pipelines-awscli
      script:
        - aws deploy push --application-name sawebsite-redesign --s3-location s3://<S3 bucket>/<s3-key> --ignore-hidden-files
        - aws deploy create-deployment --application-name sawebsite-redesign --s3-location bucket=<s3-bucket>,key=<s3-key>,bundleType=zip --deployment-group-name <deployment-group>

definitions:
  services:
    docker:
      memory: 3072
Dockerfile.imagetest is for the first step and uses the settings that exclude the Selenium test file.
Dockerfile.test and docker-compose.test.yml are for the second step and don't exclude the Selenium test file.
Does my config/setup make sense? Am I excluding the files correctly in NOSE_ARGS? As I said, the Selenium tests run in the first step when they shouldn't.
Any input would be appreciated.
Thanks
EDIT:
My NOSE_ARGS now:
To not ignore any tests:
NOSE_ARGS = [
    '--exclude=(__init__.py)',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]
To exclude selenium tests:
NOSE_ARGS = [
    '--exclude=(__init__.py)',
    '--exclude=(../tests/selenium_tests.py)',
    '--exclude=(selenium_tests.py)',
    '--ignore=(selenium_tests.py)',
    '--ignore=(../tests/selenium_tests.py)',
    '--ignore-files=../tests/selenium_tests.py',
    '--ignore-files=selenium_tests.py',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]

Based on https://nose.readthedocs.io/en/latest/usage.html#cmdoption-e, the --exclude argument should be a regex.
Did you try something like --exclude=selenium?
EDIT
Say there's a test tests/integration/test_integration.py in the project which you want to ignore. Running
nosetests -v --exclude=integration
or
nosetests -v --ignore-files=test_integration.py
does the trick (i.e. all tests but those in test_integration.py are executed).
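For example, applied to the settings above, the exclude step's NOSE_ARGS could use a single regex-style pattern instead of the path-like patterns. This is only a sketch, not tested against your project layout:

NOSE_ARGS = [
    '--exclude=(__init__.py)',
    # match the Selenium test module by a regex fragment rather than a path
    '--exclude=selenium',
    # or skip the file entirely by name (also interpreted as a regex)
    '--ignore-files=selenium_tests\.py',
    '--with-coverage',
    '--cover-package=sasite',
    '--with-xunit',
    '--xunit-file=test-results/nose/noseresults.xml',
    '--verbosity=3',
    '--cover-xml',
    '--cover-xml-file=test-results/nose/noseresults.xml',
    '--id-file=test-results/nose/noseids'
]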

Related

Gitlab CI in docker. Disable cleaning directory before stage starts

I decided to build my pipeline on this plan:
Build stage: Run only if the branch is the main one or one of my build files has been modified. It inherits docker:latest, and builds a test-ready container (pytest, lint) and pushes it to the local registry.
Test stage: always runs, inherits the latest or own branch container from the previous stage. All tests are run in it.
Push to production: not relevant here.
Problems in stage 2:
I run ls -la and don't see my venv or node_modules folders. I thought GIT_CLEAN_FLAGS would solve the problem, but it didn't help.
How to reproduce the problem:
Building the image:
FROM python:3.7-slim
ARG CI_PROJECT_DIR
WORKDIR $CI_PROJECT_DIR
RUN pip install -r requirements.txt
build:
  stage: build
  tags:
    - build
  script:
    - docker build --build-arg CI_PROJECT_DIR=$CI_PROJECT_DIR .
Test
lint:
  variables:
    GIT_CLEAN_FLAGS: none
  stage: test
  tags:
    - test
  script:
    - pwd
    - ls -lah
You don't need to use CI_PROJECT_DIR. Copy your code into another directory instead, /my-app for example, and in your second stage cd into /my-app.
Code example for your second stage:
test:
  stage: test
  tags:
    - test
  before_script:
    - cd /my-app
  script:
    - pwd
    - ls -lah
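A minimal Dockerfile sketch for this approach, assuming requirements.txt and the project code live at the repository root (the /my-app path is just the example directory mentioned above):

FROM python:3.7-slim
# bake the project into a fixed path inside the image instead of relying on $CI_PROJECT_DIR
WORKDIR /my-app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

Because the code (and anything built on top of it, such as a venv or node_modules) lives inside the image rather than in the runner's checkout directory, the test stage no longer depends on what the runner cleans between jobs.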

GitHub Actions not picking up Django tests

I want to write a simple GitHub Action that runs my Django app's tests when I push to GitHub. GitHub runs the workflow on push, but for some reason, it doesn't pick up any of the tests, even though running python ./api/manage.py test locally works.
The Run tests section of the Job summary shows this:
Run python ./api/manage.py test
System check identified no issues (0 silenced).
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
For background, my local setup uses docker-compose, with a Dockerfile for each app; the Django app is the API. All I want to do is run the Django tests on push.
I've come across GitHub service containers, and I thought they might be necessary since Django needs a Postgres DB connection to run its tests.
I'm new to GitHub Actions so any direction would be appreciated. My hunch is that it should be simpler than this, but below is my current .github/workflows/django.yml file:
name: Django CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  tests:
    runs-on: ubuntu-latest
    container: python:3
    services:
      # Label used to access the service container
      db:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: password
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      # Downloads a copy of the code in your repository before running CI tests
      - name: Check out repository code
        uses: actions/checkout@v2
      # Performs a clean installation of all dependencies
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r api/requirements.txt
      - name: Run Tests
        run: |
          python ./api/manage.py test
        env:
          # The hostname used to communicate with the PostgreSQL service container
          POSTGRES_HOST: postgres
          # The default PostgreSQL port
          POSTGRES_PORT: 5432
I don't know if you have solved it yourself; if so, please share your solution. What I ran into when doing the same thing is that I had to specify the app that I wanted to test for it to work.
For example, if your app is called "testapp" you need to do:
run: |
  python ./api/manage.py test testapp
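As a side note, for the tests to actually use the Postgres service container, the Django settings need to read the connection details from the environment. A minimal sketch, assuming the env vars defined in the workflow above; the hostname depends on your setup (with container: python:3, the service is normally reachable by its label, db), so the defaults here are only illustrative:

# settings.py (sketch): read DB connection info from the environment
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("POSTGRES_DB", "postgres"),
        "USER": os.environ.get("POSTGRES_USER", "postgres"),
        "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "password"),
        "HOST": os.environ.get("POSTGRES_HOST", "db"),
        "PORT": os.environ.get("POSTGRES_PORT", "5432"),
    }
}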

View Docker Swarm CMD Line Output

I am trying to incorporate a Python container and a DynamoDB container into one stack file to experiment with Docker Swarm. I have done Docker Swarm tutorials where web apps run across multiple nodes, but I have never built anything independently. I can run docker-compose up with no issues, but I am struggling with Swarm.
My docker-compose.yml looks like
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    links:
      - "dynamodb:localhost"
Running docker stack deploy -c docker-compose.yml trial_stack brings up no errors; however, the 'hello world' printed by the first line of my Python code is not displayed in the terminal. I get the following command-line output:
Ignoring unsupported options: links
Creating network trial_stack_default
Creating service trial_stack_dynamodb
Creating service trial_stack_track-count
My questions are:
1) Why does the deploy ignore the links? I have noticed this is mentioned in the docs https://docs.docker.com/engine/reference/commandline/stack_deploy/ but I am unsure whether this will cause my stack to fail.
2) Assuming the links issue is fixed, where will any command line output be shown, to confirm the system is running? Currently I only have one node, my local machine, which is the manager.
For reference, my python image is being built by the following Dockerfile:
FROM python:3.8-slim-buster
RUN mkdir /app
WORKDIR /app
RUN pip install --upgrade pip
COPY ./requirements.txt ./
RUN pip install -r ./requirements.txt
COPY / /
COPY /resources/secrets.py /resources/secrets.py
CMD [ "python", "/main.py" ]
You can update docker-compose.yaml to enable tty for the services whose stdout you want to see on the console.
Updated docker-compose.yaml should look like this:
version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    tty: true
    links:
      - "dynamodb:localhost"
and then, once you have the stack deployed, you can check the service logs by running:
# get the service name
docker stack services <STACK_NAME>
# display the service logs, edited based on user's suggestion
docker service logs --follow --raw <SERVICE_NAME>
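On the links question: docker stack deploy ignores links, but in Swarm mode services attached to the same network can resolve each other by service name through the built-in DNS, so the dependency usually doesn't need links at all. A sketch, assuming the app's DynamoDB endpoint can be pointed at http://dynamodb:8000:

version: '3.3'
services:
  dynamodb:
    image: "amazon/dynamodb-local"
    ports:
      - "8000:8000"
  track-count:
    image: "my-app"
    tty: true
    # no links needed: the app can reach DynamoDB at http://dynamodb:8000,
    # since services on the stack's default network resolve each other by name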

Pass Environment Variables from Shippable to Docker

I am using Shippable for two reasons: to automate the build of my docker images and to pass encrypted environment variables. I am able to automate the builds but I can't pass the variables.
I start by entering the environment variable in the Shippable text box in the project settings:
SECRET_KEY=123456
I click the 'encrypt' button and Shippable returns:
- secure : hash123abc...
I put this hash into my shippable.yml file. It looks like:
language: python
python:
  - 2.7
build_image: myusername/myimagename
env:
  - secure: hash123abc...
build:
  post_ci:
    - docker login -u myusername -p mypassword
    - docker build -t myusername/myimagename:latest .
    - docker push myusername/myimagename:latest
integrations:
  hub:
    - integrationName: myintegrationname
      type: docker
      branches:
        only:
          - master
The automated build works! But if I try:
sudo docker run myusername/myimagename:latest echo $SECRET_KEY
I get nothing.
My Dockerfile, which sets the environment variable (in this case SECRET_KEY), looks like this:
FROM python:2.7.11
RUN apt-get update
RUN apt-get install -y git
RUN git clone https://github.com/myusername/myrepo.git
ENV SECRET_KEY=$SECRET_KEY
It might be helpful to explain my logic as I see it, because my thinking may be the issue if it's not in the code:
The Shippable project build is triggered (by a repo push or manually). In shippable.yml it does some things:
builds the initial image
sets the SECRET_KEY environment variable
builds the new image based on the Dockerfile
the Dockerfile:
-- sets the env variable SECRET_KEY to the SECRET_KEY set by the .yml two steps earlier
pushes the image
I'm thinking that now that I've set an environment variable in my image, I can access it. But I get nothing. What's the issue here?
Thanks @Alex Hall for working this out with me!
It turns out that passing environment variables with Docker in this setting must be done with a simple flag to start. So in my shippable.yml I changed:
- docker build -t myusername/myimagename:latest .
to
- docker build --build-arg SECRET_KEY=$SECRET_KEY -t myusername/myimagename:latest .
Then in my Dockerfile I added:
ARG SECRET_KEY
RUN echo $SECRET_KEY > env_file
Lo and behold the key was in env_file
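If the goal is to have SECRET_KEY available as an environment variable at container runtime (as the ENV line in the original Dockerfile intended), the build argument can also be promoted to an ENV. A minimal sketch, assuming the --build-arg flag shown above:

ARG SECRET_KEY
# make the build-time argument available to containers started from this image
ENV SECRET_KEY=$SECRET_KEY

One thing to keep in mind when verifying: in docker run myusername/myimagename:latest echo $SECRET_KEY, the $SECRET_KEY is expanded by the host shell before it reaches the container, so docker run myusername/myimagename:latest env is a more reliable check.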

How to run Odoo tests unittest2?

I tried running Odoo tests using --test-enable, but it doesn't work. I have a couple of questions.
According to the documentation, tests can only be run during module installation. What happens when we add functionality and then want to run tests?
Is it possible to run tests from an IDE like PyCharm?
This is useful for running an Odoo test case:
./odoo.py -i/-u module_being_tested -d being_used_to_test --test-enable
Common options:
  -i INIT, --init=INIT
        install one or more modules (comma-separated list, use "all" for all modules), requires -d
  -u UPDATE, --update=UPDATE
        update one or more modules (comma-separated list, use "all" for all modules), requires -d
Database related options:
  -d DB_NAME, --database=DB_NAME
        specify the database name
Testing Configuration:
  --test-enable: Enable YAML and unit tests.
@aftab You need to add the log level, please see below.
./odoo.py -d <dbname> --test-enable --log-level=test
And regarding your question: if you are making changes to installed modules and need to re-run all test cases, you simply need to restart your server with -u <module_name> (or -u all for all modules) together with the above command.
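For example, to re-run the tests for a module you just changed (the module and database names here are placeholders):

./odoo.py -d mydb -u my_module --test-enable --log-level=test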
Here is a REALLY nice plugin to run Odoo unit tests directly with pytest:
https://github.com/camptocamp/pytest-odoo
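A typical invocation looks roughly like this; it is only a sketch, with the database name, config path, and addon path as placeholders (the exact options are documented in the plugin's README):

pytest --odoo-database=test_db --odoo-config=/etc/odoo/odoo.conf addons/my_module/tests/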
Here's a result example:
I was able to run Odoo's tests using PyCharm. To achieve this I used Docker + pytest-odoo + PyCharm (using remote interpreters).
First you set up a Dockerfile like this:
FROM odoo:14
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-pip
RUN pip3 install pytest-odoo coverage pytest-html
USER odoo
And a docker-compose.yml like this:
version: '2'
services:
  web:
    container_name: plusteam-odoo-web
    build:
      context: .
      dockerfile: Dockerfile
    image: odoo:14
    depends_on:
      - db
    ports:
      - "8069:8069"
    volumes:
      - odoo-web-data:/var/lib/odoo
      - ./config:/etc/odoo
      - ./addons:/mnt/extra-addons
    command: --dev all
  db:
    container_name: plusteam-odoo-db
    image: postgres:13
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - odoo-db-data:/var/lib/postgresql/data/pgdata
volumes:
  odoo-web-data:
  odoo-db-data:
So we extend an Odoo image with pytest-odoo and packages to generate coverage reports.
Once you have this, you can run docker-compose up -d to get your Odoo instance running; the Odoo container will have pytest-odoo installed. The next part is to tell PyCharm to use a remote interpreter with the modified Odoo image that includes the pytest-odoo package:
Now every time you run a script in PyCharm, it will launch a new container based on the image you provided.
After examining the containers launched by PyCharm, I realized they bind the project's directory to /opt/project/ inside the container. This is useful because you will need to modify the odoo.conf file when you run your tests.
You can (and should) customize the database connection to use a dedicated testing DB. The important part is that you need to point the addons_path option to /opt/project/addons, or whatever final path inside the containers launched by PyCharm makes your custom addons available.
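A sketch of what those odoo.conf changes for testing might look like; the paths and credentials below are illustrative assumptions, not values from the original setup:

[options]
; make the custom addons mounted into the PyCharm-launched container visible to Odoo
addons_path = /opt/project/addons
; connect to a dedicated testing database
db_host = db
db_user = odoo
db_password = odoo
db_name = odoo_test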
With this you can create a PyCharm run configuration for pytest like this:
Notice how we provided the path to the Odoo config with modifications for testing; this way the Odoo instance in the container launched by PyCharm will know where your custom addons' code is located.
Now we can run the configuration, and even debug it, and everything works as expected.
I go into more detail on this (my particular solution) in a Medium article, and I even wrote a repository with a working demo so you can try it out. Hope this helps:
https://medium.com/plusteam/how-to-run-odoo-tests-with-pycharm-51e4823bdc59 https://github.com/JSilversun/odoo-testing-example
Be aware that when using remote interpreters you just need to make sure the Odoo binary can find the addons folder properly and you will be all set :) Besides, using a Dockerfile to extend an image helps to speed up development.
