Here is the story...
I'm trying to build a Jenkins pipeline to automate Python tests.
When I run 'pip install xxxx', the Jenkins build fails with a 'pip: command not found' error. I gather Jenkins needs Python available at runtime, and by default it doesn't come with Python.
Hence I'm trying to pull a Docker image in the pipeline.
I added the line "agent { docker { image 'python:3.7.2' } }".
However, a 'docker: not found' error occurred during the build process.
I have installed the Docker plugin on Jenkins, but the error still remains...
The question is: how can I make Docker work, or how can I run Python in a Jenkins pipeline? All I'm after is running Python tests in a Jenkins pipeline.
Here is the example code:
pipeline {
    agent { docker { image 'python:3.7.2' } }
    stages {
        stage('checkout') {
            steps {
                xxxxxx
            }
        }
        stage('setup env') {
            steps {
                sh 'pip install -r dependencies.txt'
            }
        }
        stage('run test') {
            steps {
                sh 'pytest'
            }
        }
    }
}
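For what it's worth, the Docker Pipeline plugin only wraps the docker command line; it does not install Docker itself, so a 'docker: not found' error usually means the docker binary is missing from the agent's PATH. A quick way to confirm what the agent can actually see is a sanity-check pipeline along these lines (a minimal sketch, not specific to this repo):

pipeline {
    agent any
    stages {
        stage('sanity check') {
            steps {
                // fails fast with a clear error if the docker CLI is not on this node's PATH
                sh 'docker --version'
                // likewise for a system python, if one is expected on the node
                sh 'python3 --version || true'
            }
        }
    }
}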
I am a new programmer. I have a simple calculator Python program which I want to deploy using a Jenkins pipeline.
There is a Python unittest file in my git repo named "unit-test". I tried to execute the unit tests in the stage('Test') shown below, but it is not working.
stage('Test') {
    steps {
        echo 'Testing..'
        sh 'python setup.py nosetests --with-xunit'
        sh 'nosetests unit-test.py'
    }
}
Can someone please help me fix the issue?
My github repo url: https://github.com/Tasfiq23/Devops_assessment_2.git/
I have re-created your pipeline in Jenkins and it succeeded.
Pipeline Script
pipeline {
    agent any
    stages {
        stage('GIT Checkout') {
            steps {
                git changelog: false, poll: false, url: 'https://github.com/Tasfiq23/Devops_assessment_2.git'
            }
        }
        stage('build') {
            steps {
                sh 'pip install -r requirements.txt'
            }
        }
        stage('Test') {
            steps {
                sh 'python unit-test.py'
            }
        }
    }
}
Output: all three stages run and the build succeeds.
I am a rookie at Jenkins trying to get a Python pytest test suite to run. In order to execute the test suite properly, I have to install several Python packages. I'm having trouble with this particular step because Jenkins is consistently unable to find virtualenv and pip:
pipeline {
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'BRANCH', type: 'PT_BRANCH', quickFilterEnabled: true
    }
    agent any
    stages {
        stage('Checkout source code') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '------', url: 'git@github.com:path-to-my-repo/my-test-repo.git']]])
            }
        }
        stage('Start Test Suite') {
            steps {
                sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
                echo "Checking out Test suite repo."
                sh script: 'virtualenv venv --distribute'
                sh label: 'install deps', script: '/Library/Frameworks/Python.framework/Versions/3.6/bin/pip install -r requirements.txt'
                sh label: 'execute test suite, exit upon first failure', script: 'pytest --verbose -x --junit-xml reports/results.xml'
            }
            post {
                always {
                    junit allowEmptyResults: true, testResults: 'reports/results.xml'
                }
            }
        }
    }
}
On the virtualenv venv --distribute step, Jenkins throws an error (I'm running this initially on a Jenkins server on my local instance, although in production it will be on an Amazon Linux 2 machine):
virtualenv venv --distribute
/Users/Shared/Jenkins/Home/workspace/my-project-name#tmp/durable-5045c283/script.sh: line 1: virtualenv: command not found
Why is this happening? In the step before, I make sure to prepend the directory where I know my virtualenv and pip are:
sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
For instance, when I type in
sudo su jenkins
which pip
which virtualenv
I get the following outputs as expected:
/Library/Frameworks/Python.framework/Versions/3.6/bin/pip
/Library/Frameworks/Python.framework/Versions/3.6/bin/virtualenv
Here are the things I do know:
Jenkins runs as a user called jenkins
best practice is to create a virtual environment, activate it, and then perform my pip installations inside it
Jenkins runs sh by default, not bash (but I'm not sure if this has anything to do with my problem)
Why is Jenkins unable to find my virtualenv? What's the best practice for installing Python libraries for a Jenkins build?
Edit: I played around some more and found a working solution:
I don't know if this is the proper way to do it, but I used the following syntax:
withEnv(['PATH+EXTRA=/Library/Frameworks/Python.framework/Versions/3.6/bin/']) {
sh script: "pip install virtualenv"
// do other setup stuff
}
However, I'm now stuck with a new issue: I've clearly hardcoded my Python path here. If I'm running on a remote Linux machine, am I going to have to install that specific version of Python (3.6)?
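One thing worth knowing here: each sh step runs in its own shell, so a PATH assignment made in one sh step is gone by the next. A more portable pattern (a sketch, assuming the agent has some python3 on its PATH; the venv module ships with Python 3.3+) keeps the whole environment inside a single sh step and avoids the hardcoded framework path entirely:

stage('Start Test Suite') {
    steps {
        // one sh step = one shell, so the venv activation persists across commands
        sh '''
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements.txt
            pytest --verbose -x --junit-xml reports/results.xml
        '''
    }
}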
Jenkins console output from jobs running py.test tests contains unexpected characters "[1m" "[0m" like
[1m============== test session starts ==============[0m
Apparently these characters are leftovers from py.test output formatting ("test session starts" shows up as bold and colored in a terminal window). Is there a way to disable the output formatting? py.test's "--color no" option is not enough.
In my case I'm running pytest inside Docker via a Jenkins declarative pipeline, so I needed to verify several things:
First, add ansiColor to the options
pipeline {
    ...
    options {
        ansiColor('xterm')
        ...
    }
    ...
}
Second, verify that you added the -t flag to the docker run command:
-t : Allocate a pseudo-tty
For docker-compose it's tty: true.
Third, you can force colorized output by adding --color=yes to the pytest command:
python -m pytest --color=yes ...
--color=color color terminal output (yes/no/auto).
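Putting those three pieces together in one declarative pipeline (a sketch; the image name and test path are placeholders, not from the question):

pipeline {
    agent {
        docker {
            image 'python:3.7'   // placeholder image
            args '-t'            // allocate a pseudo-tty so pytest detects a terminal
        }
    }
    options {
        ansiColor('xterm')       // requires the AnsiColor plugin
    }
    stages {
        stage('Test') {
            steps {
                sh 'python -m pytest --color=yes tests/'
            }
        }
    }
}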
Install the AnsiColor plugin in Jenkins. In its configuration panel there will be a new item "Ansi Color" with an xterm color map.
Your pipeline should contain something like:
stage('Pytest') {
    wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'xterm']) {
        sh """
            source ./$PYTHON3_ENV_NAME/bin/activate
            # Execute tests
            python3 -m pytest test_cases/${TEST_FILTER} --color=yes ....
        """
    }
}
During the run you should then see colors in the pytest stage.
If you see the raw escape codes but not the colors, refresh the browser (F5)!
At the moment I run python manage.py test every once in a while after I make significant changes in my Django project. Is it possible to run those tests automatically whenever I change and save a file in my project? It would be useful for detecting bugs earlier (I know Rails has something like this with RSpec). I am using nose and django-nose. Thanks in advance.
Use entr:
$ brew install entr
$ find . -name '*.py' | entr python ./manage.py test
Or, for extra credit, combine it with ack:
$ ack -f --python | entr python ./manage.py test
If you want it to even find new files as you add them:
$ until ack -f --python | entr -d python ./manage.py test; do sleep 1; done
py.test answer (which also works for nose):
pip install pytest-xdist
py.test -f # will watch all subfolders for changes, and rerun the tests
Since py.test understands nose, this works for nose too.
I'm a JavaScript developer, so I used the tools JS developers have built with Node.js to achieve the same goal in my projects. It is very simple, but you also need to install Node.js to get it working.
I created a file called gruntfile.js in my project root directory:
//gruntfile.js
module.exports = function(grunt) {
    grunt.initConfig({
        watch: {
            files: ['*.py'],
            tasks: ['shell']
        },
        shell: {
            test: {
                command: 'python main_test.py'
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-shell');

    grunt.registerTask('default', ['watch']);
};
What it does is basically watch every file in that directory with a .py extension; when one changes, it executes a shell command, which in this case is my Python test (you might want to change it; my test file was main_test.py). To run this Grunt script you need to install Node.js, which puts npm on your global path. After that you need to install a few Node modules as well. All these modules except grunt-cli will be stored in your current folder, so make sure you are at the root of your project, or whatever folder you put the gruntfile.js in. Then run the following commands:
npm install grunt-cli -g
npm install grunt
npm install grunt-contrib-watch
npm install grunt-shell
Don't worry about the size; these are very small modules. Now that you have everything set up, you can simply run grunt and it will start watching your .py files and run your tests whenever you save them. It may not be the best way to run Python tests, but as I said, I'm a JavaScript developer and I think Grunt provides a very simple way of executing tests, even for other languages, so I use it.
I just tried nose-watch and it worked fantastically! Install the plugin and run the tests with the --with-watch option.
Update: :( it does not seem to work well when running tests from django-nose's manage.py helper.
Eventually I opted to use tdaemon, which supports Django, although it might require a bit of fiddling for full-fledged projects.
For example here is how I ran it for my django project:
tdaemon -t django --custom-args=a_specific_app_to_test -d --ignore-dirs logs
The --custom-args option was to focus tests on a specific app (the same as running python manage.py test a_specific_app_to_test).
The -d argument is to enable debug logging, which prints which file change triggered the run.
The --ignore-dirs was necessary because my tests wrote to the logs (which in itself is a problem!) and tdaemon went into an infinite loop.
Another Javascript dev here, I've found nodemon (https://github.com/remy/nodemon) to work pretty well. By default it watches *.js files but that's configurable with the --ext flag. To use it, do:
npm install -g nodemon
cd /your/project/dir/
nodemon --ext py --exec "python manage.py test"
Now, whenever a *.py file changes, it'll re-run your command. It even finds new files.
I used watchr, a Ruby tool that works along the same lines.
I did this using gulp. Install gulp-shell:
npm install gulp-shell --save-dev
And in the gulpfile.js:
var gulp = require('gulp');
var shell = require('gulp-shell');

gulp.task('run-tests', shell.task([
    'python3 manage.py test']));

gulp.task('watch', function(){
    gulp.watch(['./your-project-name/**/*.html', './your-project-name/**/*.py'], ['run-tests']);
});

gulp.task('default', ['run-tests', 'watch']);
And it runs the tests anytime there are changes saved to any .py or .html files!
You can use Django Supervisor on top of Django. This avoids the need for a CI tool (which may still be useful in any case; this isn't invalidating the other response, just complementary).
I would recommend setting up django-nose and sniffer. They are quite easy to set up and work great. A short scent.py (the file sniffer uses to decide what to watch and what to run) was the only customization I needed. Then you can just run sniffer -x myapp.tests.
Nose comes with some other goodies that make tests a bit nicer to work with as well.
If you use git for version control, another way is to use a git pre-commit hook.
If you hit an error like remote: fatal: Not a git repository: '.', check this post: https://stackoverflow.com/a/4100577/7007942
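For example, a minimal pre-commit hook might look like this (a sketch: save it as .git/hooks/pre-commit; the test command is whatever your project uses):

#!/bin/sh
# .git/hooks/pre-commit -- abort the commit if the Django test suite fails
python manage.py test || {
    echo "Tests failed; commit aborted." >&2
    exit 1
}

Make it executable with chmod +x .git/hooks/pre-commit.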
I've found the easiest way is to use gulp as recommended by this post. Note that gulp-shell (recommended by other answers) is actually blacklisted by gulp, but thankfully you don't even need a plugin. Try this instead:
// gulpfile.js
const { watch } = require('gulp')
var exec = require('child_process').exec

function test (cb) {
    exec('python manage.py test', function (err, stdout, stderr) {
        console.log(stdout)
        console.log(stderr)
        cb(err)
    })
}

exports.default = function () {
    watch('./**/*.py', test)
}
In the past, I've tried many of the options suggested in other answers. This one was comparatively painless. It's helpful if you have some knowledge of JavaScript.
I wrote a Gulp task to automatically run ./manage.py test whenever any specified Python files are changed or removed. You'll need Node for this.
First, install Gulp:
yarn add -D gulp@next
Then use the following gulpfile.js and make any necessary adjustments:
const { exec } = require('child_process');
const gulp = require('gulp');

const runDjangoTests = (done) => {
    const task = exec('./manage.py test --keepdb', {
        shell: '/bin/bash', // Accept SIGTERM signal. Doesn't work with /bin/sh
    });
    task.stdout.pipe(process.stdout);
    task.stderr.pipe(process.stderr);
    task.on('exit', () => {
        done();
    });
    return task;
};

gulp.task('test', runDjangoTests);

gulp.task('watch', (done) => {
    let activeTask;
    const watcher = gulp.watch('**/*.py', {
        // Ignore whatever directories you need to
        ignored: ['.venv/*', 'node_modules'],
    });

    const runTask = (message) => {
        if (activeTask) {
            activeTask.kill();
            console.log('\n');
        }
        console.log(message);
        activeTask = runDjangoTests(done);
    };

    watcher.on('change', (path) => {
        runTask(`File ${path} was changed. Running tests:`);
    });

    watcher.on('unlink', (path) => {
        runTask(`File ${path} was removed. Running tests:`);
    });
});
Then simply run node node_modules/gulp/bin/gulp.js watch to run the task :)
https://pycrunch.com/
One of its key features is running only the tests impacted by file changes. It is available as a plugin for PyCharm.
On Ubuntu this script works, after installing inotifywait (sudo apt install inotify-tools):
inotifywait -qrm -e close_write * |
while read -r filename event; do
    python manage.py test accord
done
Similar to entr, you can use watchexec:
brew install watchexec
watchexec -rc -e py -- python ./manage.py test
with options
-r # restart
-c # clear screen
-e extensions # list of file extensions to filter by
You'll need a continuous integration server, something like Jenkins.
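For illustration, a minimal declarative pipeline that runs the Django test suite on every build (a sketch: the repository URL is a placeholder, and triggering on each push is configured separately, e.g. via a webhook):

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/your-user/your-django-project.git'  // placeholder URL
            }
        }
        stage('Test') {
            steps {
                // single sh step so the venv activation persists across commands
                sh '''
                    python3 -m venv venv
                    . venv/bin/activate
                    pip install -r requirements.txt
                    python manage.py test
                '''
            }
        }
    }
}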