py.test console output in Jenkins - python

Jenkins console output from jobs running py.test tests contains unexpected character sequences such as "[1m" and "[0m", like
[1m============== test session starts ==============[0m
These are leftover ANSI escape codes from py.test's output formatting ("test session starts" shows up as bold and colored in a terminal window). Is there a way to disable the output formatting? py.test's --color=no option is not enough.

In my case I'm running pytest inside a Docker container via a Jenkins declarative pipeline, so I needed to verify several things:
First, add ansiColor to the options:
pipeline {
    ...
    options {
        ansiColor('xterm')
        ...
    }
    ...
}
Second, verify that you added the -t flag to the docker run command:
-t : Allocate a pseudo-tty
for docker-compose it's tty: true
Third, you can force colorize by adding --color=yes to the pytest command
python -m pytest --color=yes ...
--color=color color terminal output (yes/no/auto).
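If, on the other hand, you really just want the sequences gone from captured text, they can be stripped after the fact. A minimal sketch (strip_ansi is a made-up helper name; the regex only covers the common color/style sequences like ESC[1m and ESC[0m):

```python
import re

# Matches ANSI color/style sequences such as ESC[1m (bold) and ESC[0m (reset)
ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*m')

def strip_ansi(text):
    """Remove ANSI color/style escape sequences from a string."""
    return ANSI_ESCAPE.sub('', text)

print(strip_ansi('\x1b[1m== test session starts ==\x1b[0m'))
# -> == test session starts ==
```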

Install the AnsiColor plugin in Jenkins. In its configuration panel there will be a new item "Ansi Color" with an xterm color map.
Your pipeline should contain something like:
stage('Pytest') {
    wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'xterm']) {
        sh """
        source ./$PYTHON3_ENV_NAME/bin/activate
        # Execute tests
        python3 -m pytest test_cases/${TEST_FILTER} --color=yes ....
        """
    }
}
During the run you should then see colors in the pytest stage.
If you see the raw color formatting codes but not the colors, refresh the browser page (F5)!

Related

Test output not visible in Jenkins build log

I want to see stdout coming from a python test inside the jenkins build logs. I'm running pytest (==5.3.1) from within my Jenkins pipeline inside an sh script:
stage('unit tests') {
    print "starting unit tests"
    sh script: """
    source env-test/bin/activate && \
    python -m pytest -x -s src/test/test*.py
    """, returnStdout: true, returnStatus: true
}
Note that I'm running my tests from within a virtual environment (env-test).
Unfortunately, the Jenkins logs do not display output that I send from within my tests:
def test_it(self):
    print('\nhello world')
    self.assertTrue(True)
But it only shows the initial call:
+ python -m pytest -x -s src/test/testModel.py
[Pipeline] }
[Pipeline] // stage
Whereas my local PyCharm IDE and Git Bash show all output:
============================= test session starts =============================
platform win32 -- Python 3.6.4, pytest-5.3.1, py-1.8.0, pluggy-0.13.1 -- C:\...\Anaconda3\python.exe
cachedir: .pytest_cache
rootdir: C:\...\src\test
collecting ... collected 1 item
testModel.py::TestModel::test_it
PASSED [100%]
hello world
============================== 1 passed in 0.57s ==============================
The pytest docs talk about capturing of the stdout/stderr output, so I tried the -s parameter to disable capturing, but without success.
The issue was the returnStdout parameter of the groovy sh script command:
returnStdout (optional) If checked, standard output from the task is
returned as the step value as a String, rather than being printed to
the build log. (Standard error, if any, will still be printed to the
log.) You will often want to call .trim() on the result to strip off a
trailing newline. Type: boolean
So I simply removed that option from the sh script command.
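With that change, the stage might look as follows (a sketch of the same step; returnStatus is kept only if you still want the exit code instead of a failed build):

```groovy
stage('unit tests') {
    print "starting unit tests"
    // No returnStdout: pytest's output now goes straight to the build log
    sh script: """
        source env-test/bin/activate && \
        python -m pytest -x -s src/test/test*.py
    """, returnStatus: true
}
```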

execute pytest using pipeline in Jenkins

Currently I execute my pytest tests with a command like "pytest mytest.py" copied into the "Execute Windows batch command" form field of the Jenkins job. I want to change my job to execute it with a pipeline. I have tried a lot of code from Stack Overflow, but none of it works. Do you have simple code to run regression tests with pytest, connected to Git?
I'd suggest you use the 'Pyenv Pipeline' plugin (https://plugins.jenkins.io/pyenv-pipeline):
stage("test PythonEnv") {
    withPythonEnv('python3') {
        sh 'pip install pytest'
        sh 'pytest mytest.py'
    }
}
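Since the question also asks about connecting to Git, here is a declarative sketch combining a checkout with that plugin (the repository URL and branch are placeholders; bat is used because the original job ran a Windows batch command):

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // Placeholder URL - point this at your own repository
                git url: 'https://example.com/your/repo.git', branch: 'main'
            }
        }
        stage('Regression tests') {
            steps {
                withPythonEnv('python3') {
                    bat 'pip install pytest'
                    bat 'pytest mytest.py'
                }
            }
        }
    }
}
```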
If you are just after running it as a simple Jenkins pipeline (say, scripted for now), you can run something like this:
node {
    stage('Run pytest') {
        bat "pytest mytest.py"
    }
}

Advanced scripting inside a Dockerfile

I am trying to create a Docker image/container that will run on Windows 10/Linux and test a REST API. Is it possible to embed the function (from my .bashrc file) inside the Dockerfile? The pytest function calls pylint before running the .py file. If the rating is not 10/10, it prompts the user to fix the code and exits. This works fine on Linux.
Basically, here is the pseudo-code inside the Dockerfile I am attempting to build an image from.
------------------------------------------
From: Ubuntu x.xx
install python
Install pytest
install pylint
copy test_file to the respective folder
Execute pytest test_file_name.py
if the rating is not 10/10:
    prompt the user to resolve the rating issue and exit
------------here is the partial code snippet from the func------------------------
function pytest () {
    argument1="$1"
    # Extract the path and file name for pylint when a method name is passed
    pathfilename=$(echo "${argument1}" | sed 's/::.*//')
    clear && printf '\e[3J'
    output=$(docker exec -t orch-$USER pylint -r n "${pathfilename}")
    if echo "${output}" | grep 'warning.*error' &>/dev/null ||
       echo "${output}" | egrep 'warning|convention' &>/dev/null
    then
        echo "${output}" | sed 's/\(warning\)/\o033[33m\1\o033[39m/;s/\(errors\|error\)/\o033[31m\1\o033[39m/'
        YEL='\033[0;1;33m'
        NC='\033[0m'
        echo -e "\n ${YEL}Fix module as per pylint/PEP8 messages to achieve 10/10 rating before pushing to github\n${NC}"
    fi
}
Another option I can think of:
Step 1] Build the image (using DockerFile) with all the required software
Step 2] In a .py file, add the call for execution of pytest with the logic from the function.
Your thoughts?
You can turn that function into a standalone shell script. (Pretty much by just removing the function wrapper, and taking out the docker exec part of the tool invocation.) Once you've done that, you can COPY the shell script into your image, and once you've done that, you can RUN it.
...
COPY pylint-enforcer.sh .
RUN chmod +x ./pylint-enforcer.sh \
&& ./pylint-enforcer.sh
...
It looks like pylint will produce a non-zero exit code if it emits any messages. For the purposes of a Dockerfile, it may be enough to just RUN pylint -r n .; if it prints anything, it will return a non-zero exit code, which docker build will interpret as "failure" and not proceed.
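Putting that answer together, a minimal Dockerfile sketch of such a lint gate (the base image, paths, and file names are assumptions) could look like:

```dockerfile
FROM python:3.9-slim
RUN pip install pytest pylint
WORKDIR /app
COPY . .
# docker build stops here if pylint emits any messages (non-zero exit code)
RUN pylint -r n *.py
# Only reached if the lint gate above passed
CMD ["pytest", "test_file_name.py"]
```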
You might consider whether you'll ever want the ability to build and push an image of code that isn't absolutely perfect (during a production-down event, perhaps), and whether you want to require root-level permissions to run simple code-validity tools (anyone who can run docker can edit arbitrary files on the host as root). I'd suggest running these tools out of a non-Docker virtual environment during your CI process, and neither place them in your Dockerfile nor depend on docker exec to run them.

How do I run sumo-gui on instant-veins-4.7.1-i1.ova

I downloaded instant veins 4.7.1-i1 (a virtual appliance for running Veins) from the link below:
https://veins.car2x.org/download/instant-veins-4.7.1-i1.ova
But in my simulation I need to run sumo-gui, while the connection script (sumo-launchd.py) in the VirtualBox appliance only runs sumo. I tried to modify that a bit, but I couldn't.
Can you help me?
You can start sumo-launchd with different parameters. The default configuration (the shortcut in the "Activities" menu) executes sumo-launchd.py with the -vv parameter:
/home/veins/src/veins/sumo-launchd.py -vv
To use sumo-gui, you can use the following command in a terminal:
python /home/veins/src/veins/sumo-launchd.py -vv -c sumo-gui

How to automatically run tests when there's any change in my project (Django)?

At the moment I run python manage.py test every once in a while after I make significant changes to my Django project. Is it possible to run those tests automatically whenever I change and save a file in my project? It would be useful to detect bugs earlier (I know Rails has something like this with RSpec). I am using nose and django-nose. Thanks in advance.
Use entr:
$ brew install entr
$ find . -name '*.py' | entr python ./manage.py test
Or, for extra credit, combine it with ack:
$ ack -f --python | entr python ./manage.py test
If you want it to even find new files as you add them:
$ until ack -f --python | entr -d python ./manage.py test; do sleep 1; done
py.test answer (which also works for nose):
pip install pytest-xdist
py.test -f  # --looponfail: watch all subfolders for changes, and rerun the tests
Since py.test understands nose, this works for nose too.
I'm a JavaScript developer, so I used the tools JS developers have built with Node.js to achieve the same goal in my projects. It is very simple, but you also need to install Node.js to get it working.
I created a file called gruntfile.js in my project root directory:
//gruntfile.js
module.exports = function(grunt) {
    grunt.initConfig({
        watch: {
            files: ['*.py'],
            tasks: ['shell']
        },
        shell: {
            test: {
                command: 'python main_test.py'
            }
        }
    });
    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-shell');
    grunt.registerTask('default', ['watch']);
};
What it's doing is basically watching any file in that directory that has a .py extension; if one changes, it executes a shell command, which in this case is my Python test (you might want to change it; my test file was main_test.py). To run this Grunt script you need to install Node.js, which puts npm on your global path. After that you need to install a few Node modules as well. All these modules except grunt-cli will be stored in your current folder, so make sure you are at the root of your project, or whatever folder you put that gruntfile.js in. Then run the following commands:
npm install grunt-cli -g
npm install grunt
npm install grunt-contrib-watch
npm install grunt-shell
Don't worry about the size; these are very small modules. Now that you have everything set up, you can simply run grunt and it will start watching your .py files and run your tests when you save them. It may not be the best way to run Python tests, but as I said, I'm a JavaScript developer and I think Grunt provides a very simple way of executing tests, even for other languages, so I use it.
I just tried nose-watch and it worked fantastically! Install the plugin and run the tests with the --with-watch option.
Update: :( it does not seem to work well when running tests from django-nose's manage.py helper.
Eventually I opted to use tdaemon, which supports django, although might require a bit of fiddling as well for full fledged projects.
For example here is how I ran it for my django project:
tdaemon -t django --custom-args=a_specific_app_to_test -d --ignore-dirs logs
The --custom-args option was to focus tests on a specific app (the same as you would do with python manage.py test a_specific_app_to_test).
The -d argument is to enable debug logging, which prints which file change triggered the run.
The --ignore-dirs was necessary because my tests wrote to the logs (which in itself is a problem!) and tdaemon went into an infinite loop.
Another JavaScript dev here. I've found nodemon (https://github.com/remy/nodemon) to work pretty well. By default it watches *.js files, but that's configurable with the --ext flag. To use it, do:
npm install -g nodemon
cd /your/project/dir/
nodemon --ext py --exec "python manage.py test"
Now, whenever a *.py file changes, it'll re-run your command. It even finds new files.
I used watchr, with something like a Watchr script.
I did this using gulp. Install gulp-shell:
npm install gulp-shell --save-dev
And in the gulpfile.js:
var gulp = require('gulp')
var shell = require('gulp-shell')

gulp.task('run-tests', shell.task([
    'python3 manage.py test'
]))

gulp.task('watch', function () {
    gulp.watch(['./your-project-name/**/*.html', './your-project-name/**/*.py'], ['run-tests']);
});

gulp.task('default', ['run-tests', 'watch']);
And it runs the tests anytime there are changes saved to any .py or .html files!
You can use Django Supervisor on top of Django. This will avoid the use of a CI tool (which may be useful in any case, this isn't invalidating the other response - maybe just complementary).
I would recommend setting up django-nose and sniffer. It's quite easy to set up and works great. Something along the lines of this scent.py was the only customization I needed. Then you can just run sniffer -x myapp.tests.
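For reference, the scent.py mentioned above is sniffer's configuration file; a minimal sketch of one (the decorators are sniffer's API, while the exact test command is an assumption) might look like:

```python
# scent.py - read by sniffer on startup
from subprocess import call

from sniffer.api import file_validator, runnable

@file_validator
def py_files(filename):
    # Only trigger a run when a .py file changes
    return filename.endswith('.py')

@runnable
def run_django_tests(*args):
    # sniffer treats a truthy return value as a passing run
    return call(['python', 'manage.py', 'test']) == 0
```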
Nose comes with some other goodies that make tests a bit nicer to work with as well.
If you use git for version control, another way is to use a git pre-commit hook.
If you hit an error like remote: fatal: Not a git repository: '.', check this post: https://stackoverflow.com/a/4100577/7007942
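A minimal sketch of such a hook (saved as .git/hooks/pre-commit and made executable with chmod +x; the test command is an assumption for a Django project):

```shell
#!/bin/sh
# .git/hooks/pre-commit - abort the commit when the test suite fails
python manage.py test
status=$?
if [ $status -ne 0 ]; then
    echo "Tests failed - commit aborted." >&2
fi
exit $status
```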
I've found the easiest way is to use gulp as recommended by this post. Note that gulp-shell (recommended by other answers) is actually blacklisted by gulp, but thankfully you don't even need a plugin. Try this instead:
// gulpfile.js
const { watch } = require('gulp')
var exec = require('child_process').exec

function test (cb) {
    exec('python manage.py test', function (err, stdout, stderr) {
        console.log(stdout)
        console.log(stderr)
        cb(err)
    })
}

exports.default = function () {
    watch('./**/*.py', test)
}
In the past, I've tried many of the options suggested in other answers. This one was comparatively painless. It's helpful if you have some knowledge of JavaScript.
I wrote a Gulp task to automatically run ./manage.py test whenever any specified Python files are changed or removed. You'll need Node for this.
First, install Gulp:
yarn add -D gulp#next
Then use the following gulpfile.js and make any necessary adjustments:
const { exec } = require('child_process');
const gulp = require('gulp');

const runDjangoTests = (done) => {
    const task = exec('./manage.py test --keepdb', {
        shell: '/bin/bash', // Accept SIGTERM signal. Doesn't work with /bin/sh
    });
    task.stdout.pipe(process.stdout);
    task.stderr.pipe(process.stderr);
    task.on('exit', () => {
        done();
    });
    return task;
};

gulp.task('test', runDjangoTests);

gulp.task('watch', (done) => {
    let activeTask;
    const watcher = gulp.watch('**/*.py', {
        // Ignore whatever directories you need to
        ignored: ['.venv/*', 'node_modules'],
    });

    const runTask = (message) => {
        if (activeTask) {
            activeTask.kill();
            console.log('\n');
        }
        console.log(message);
        activeTask = runDjangoTests(done);
    };

    watcher.on('change', (path) => {
        runTask(`File ${path} was changed. Running tests:`);
    });

    watcher.on('unlink', (path) => {
        runTask(`File ${path} was removed. Running tests:`);
    });
});
Then simply run node node_modules/gulp/bin/gulp.js watch to run the task :)
https://pycrunch.com/
One of its key features is running only the tests impacted by a file change. Available as a plugin for PyCharm.
On Ubuntu this script works, after installing inotifywait (sudo apt install inotify-tools):
inotifywait -qrm -e close_write * |
while read -r filename event; do
    python manage.py test accord
done
Similar to entr, you can use watchexec with:
brew install watchexec
watchexec -rc -e py -- python ./manage.py test
with options
-r             # restart
-c             # clear screen
-e extensions  # list of file extensions to filter by
You'll need a continuous integration server, something like Jenkins.
