Travis jobs reporting success, even though tests fail (using tox) - python

I'm looking specifically at the following build:
https://travis-ci.org/ababic/wagtailmenus/builds/267670218
All jobs seem to be reporting as successful, even though they all have a single, deliberately failing test, and this has been happening on different builds on the same project for at least the last 2 days.
The configuration in my .travis.yml hasn't changed significantly in a while, apart from switching to 'trusty' from 'precise' - and changing that back seems not to fix the issue.
My tox.ini hasn't been changed in a while either.
I tried forcing tox to an earlier version already, which didn't seem to help.
I know it's got to be something to do with tox or Travis, but that's where my knowledge ends. Any help at all would be greatly appreciated.

I had a look at the project and this has nothing to do with either tox or Travis. The problem is that the runtests.py used in tox always exits with code 0, whatever happens. Tox (and, by extension, Travis) needs an exit code != 0 to be able to know that something went wrong.
relevant code in runtests.py:
[...]
def runtests():
    [...]
    try:
        execute_from_command_line(argv)
    except:
        pass

if __name__ == '__main__':
    runtests()
I did not check what execute_from_command_line does exactly, but I would reckon that it returns an error code if something went wrong (or raises an exception if something went really wrong).
Therefore I would rewrite the code above like this:
import sys
[...]

def runtests():
    [...]
    return execute_from_command_line(argv)

if __name__ == '__main__':
    sys.exit(runtests())
This way you pass through whatever the function you run reports about the outcome of your tests, and the script exits with that as its exit code; if an exception is raised, the traceback is printed and the script also exits with a non-zero code.
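If you want to sanity-check the fix locally before pushing another build, a minimal sketch (assuming runtests.py sits in the current directory) is to run the script as a subprocess and look at its return code, which is all tox and Travis do:

# Sketch only: a deliberately failing suite should make runtests.py
# exit non-zero, which is the signal tox and Travis rely on.
import subprocess
import sys

returncode = subprocess.call([sys.executable, "runtests.py"])
print("runtests.py exited with code", returncode)
assert returncode != 0, "a failing test suite should exit with a non-zero code"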


Unit testing __main__.py

I have a Python package (Python 3.6, if it makes a difference) that I've designed to run as 'python -m package arguments' and I'd like to write unit tests for the __main__.py module. I specifically want to verify that it sets the exit code correctly. Is it possible to use runpy.run_module to execute my __main__.py and test the exit code? If so, how do I retrieve the exit code?
To be more clear, my __main__.py module is very simple. It just calls a function that has been extensively unit tested. But when I originally wrote __main__.py, I forgot to pass the result of that function to exit(), so I would like unit tests where the main function is mocked to make sure the exit code is set correctly. My unit test would look something like:
@patch('my_module.__main__.my_main', return_value=2)
def test_rc2(self, _):
    """Test that rc 2 is the exit code."""
    sys.argv = ['arg0', 'arg1', 'arg2', …]
    runpy.run_module('my_module')
    self.assertEqual(mod_rc, 2)
My question is, how would I get what I’ve written here as ‘mod_rc’?
Thanks.
Misko Hevery has said before (I believe it was in Clean Code Talks: Don't Look for Things but I may be wrong) that he doesn't know how to effectively unit test main methods, so his solution is to make them so simple that you can prove logically that they work if you assume the correctness of the (unit-tested) code that they call.
For example, if you have a discrete, tested unit for parsing command line arguments; a library that does the actual work; and a discrete, tested unit for rendering the completed work into output, then a main method that calls all three of those in sequence is assuredly going to work.
With that architecture, you can basically get by with just one big system test that is expected to produce something other than the "default" output and it'll either crash (because you wired it up improperly) or work (because it's wired up properly and all of the individual parts work).
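As a rough sketch of that architecture (parse_args, do_work and render are hypothetical stand-ins for your own separately tested units), main() becomes trivial enough that a logical argument, or a single system test, covers it:

import sys

# Hypothetical stand-ins for your separately unit-tested pieces.
def parse_args(argv):
    return list(argv)

def do_work(args):
    return "did work with %r" % (args,)

def render(result):
    print(result)

def main(argv=None):
    # main() only wires the tested units together in sequence.
    args = parse_args(sys.argv[1:] if argv is None else argv)
    result = do_work(args)
    render(result)
    return 0

if __name__ == "__main__":
    sys.exit(main())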
At this point, I'm dropping all pretense of knowing what I'm talking about. There is almost assuredly a better way to do this, but frankly you could just write a shell script:
python -m package args
test $? -eq [expected exit code]
That will exit with an error iff your program returns the wrong exit code, which TravisCI or similar will regard as a failing build.
__main__.py is still subject to normal __main__ global behavior — which is to say, you can implement your __main__.py like so
def main():
    # Your stuff

if __name__ == "__main__":
    main()
and then you can test your __main__ in whatever testing framework you like by using
from your_package.__main__ import main
As an aside, if you are using argparse, you will probably want:
def main(arg_strings=None):
    # …
    args = parser.parse_args(arg_strings)
    # …

if __name__ == "__main__":
    main()
and then you can override arg strings from a unit test simply with
from your_package.__main__ import main

def test_main():
    assert main(["x", "y", "z"]) == …
or a similar idiom in your testing framework.
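If your main() calls sys.exit() itself instead of returning a code, a sketch along these lines (hypothetical module and function names, assuming pytest and the standard-library mock) can still assert on the exit code by catching the SystemExit it raises:

import pytest
from unittest.mock import patch

from my_module.__main__ import main  # hypothetical package name

def test_exit_code_is_propagated():
    # my_main is the worker function whose return value main() should pass to sys.exit().
    with patch("my_module.__main__.my_main", return_value=2):
        with pytest.raises(SystemExit) as excinfo:
            main()
    assert excinfo.value.code == 2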
With pytest, I was able to do:
import mypkgname.__main__ as rtmain
where mypkgname is what you've named your app as a package/module. Then just running pytest as normal worked. I hope this helps some other poor soul.

PyCharm: Process finished with exit code 0

I am new to PyCharm and I have 'Process finished with exit code 0' instead of getting (683, 11) as a result (please see attachment), could you guys help me out please? Much appreciate it!
That is good news! It means that there is no error in your code: it ran right through and nothing went wrong. PyCharm shows exit code 0 when the script finished without errors (along with any output you print) and shows exit code 1, plus an error message, when it hits an error.
Editors and scripts do not behave like the interactive terminal: when you run a function, the result is not shown automatically. You need to tell the script to display it yourself.
Generally you just print the results.
If you use print(data.shape), it should print what you expect, followed by the success message Process finished with exit code 0.
Exit code 0 means your code ran with no error.
Let's give an example of an error exit code: in the code below, the variable lst is an empty list, but we try to get its fifth element (which does not exist), so the program raises an IndexError and exits with code 1, which means there is an error in the code.
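A minimal sketch of that example (the original screenshot is not included here):

# Accessing an element that does not exist raises IndexError; the
# uncaught exception makes the process finish with exit code 1.
lst = []
print(lst[5])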
You can also define your own exit codes for analysis, for example:
ERROR_USERNAME, ERROR_PASSWORD, RIGHT_CODE = 683, 11, 0
right_name, right_password = 'xy', 'xy'
name, password = 'xy', 'wrong_password'

if name != right_name:
    exit(ERROR_USERNAME)
if password != right_password:
    exit(ERROR_PASSWORD)
exit(RIGHT_CODE)
I would recommend you read up on exit codes.
Exit code 0 means no error.
Exit code 1 means there is some error in your code.
This is not PyCharm- or Python-specific; it is a very common convention across programming languages: exit code 0 means the program executed successfully, and a non-zero exit code indicates an error.
Almost every program (C++/Python/Java, ...) returns 0 if it runs successfully; that isn't specific to PyCharm or Python.
There is no need to invoke the exit function explicitly: when the program runs successfully it exits with 0 by default, and it exits with a non-zero code when it fails.
You can also invoke the exit function with different codes for your own analysis.
See https://en.wikipedia.org/wiki/Exit_(system_call) for more details.
What worked for me when this happened was to go to
Run --> Edit Configurations --> Execution --> and check the box Run with Python Console (which was unchecked).
This means that the run was successful (no errors). PyCharm and the command prompt (Windows) or terminal (Ubuntu) don't work the same way: PyCharm is an editor, and if you want to print something, you explicitly have to write the print statement:
print(whatever_you_want_to_print)
In your case,
print(data.shape)
I think there's no problem in your code and you could find your print results (and other outputs) in the tab 5: Debug rather than 4: Run.
I just ran into this, but couldn't even run a simple print('hello world') function.
Turns out Comodo's Firewall was stopping the script from printing. This is a pretty easy fix by deleting Python out of the Settings > Advanced > Script Analysis portion of Comodo.
Good Luck
I had the same problem as yours, and I finally solved it.
I see you are trying to run the file "Kaggle - BreastCancer.py",
but your PyCharm run configuration runs "Breast.py" instead of your code.
(I think Breast.py only contains functions, so PyCharm can run it without showing any result.)
Check in the [Run] tab which file you are actually running.
You're starting the program's run from a different file than the one you have open there. In Run (Alt+Shift+F10), set the Python file you would like to run or debug.

Nosetest initialisation error

calling nosetests gives me the following:
======================================================================
ERROR: Failure: TypeError (__init__() takes exactly 2 arguments (1 given))
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/sheena/WORK/CriticalID/workspace/flow_env2/local/lib/python2.7/site-packages/nose-1.3.4-py2.7.egg/nose/loader.py", line 519, in makeTest
    return self._makeTest(obj, parent)
  File "/home/sheena/WORK/CriticalID/workspace/flow_env2/local/lib/python2.7/site-packages/nose-1.3.4-py2.7.egg/nose/loader.py", line 578, in _makeTest
    return MethodTestCase(obj)
  File "/home/sheena/WORK/CriticalID/workspace/flow_env2/local/lib/python2.7/site-packages/nose-1.3.4-py2.7.egg/nose/case.py", line 345, in __init__
    self.inst = self.cls()
TypeError: __init__() takes exactly 2 arguments (1 given)
as well as some other stuff.
My directory structure looks like:
MyStuff
    ./__init__.py
    ./tests
        ./some_tests.py
        ./other_tests.py
        ./ ... lots more
        ./a_useful_group_of_tests
            ./more_tests.py
            ./tasty_tests.py
            ./ ... lots more
    ./other_files_and_directories
Now there are a lot of tests in a lot of files and this error gives me no indication of where in my code the error came from. Any ideas about how I can find it? The best I can come up with so far is to get rid of all the test files and then put them back one by one but that is not exactly ideal.
The usual reason for this is that one of those tests uses a class that is not imported in that file, or that itself has broken imports. Or you otherwise have an individually broken test or imported class that appears fine when initialised alongside the other tests in nose's global test-runner environment, for example because the missing import happens to be imported by another test.
But on initial loading of the test class, or when it is loaded individually, it would raise a not-found or other error. Nose doesn't make that very clear, although it is probably showing the initialisation error for that class.
So, using your sample code: in the test environment, something in 'goodies' is not being passed an argument at module initialisation time that is required for it to be initialised correctly, but it gets that argument if it is initialised later on in the tests.
Alternatively, you have a method accidentally called test_something in a class that isn't meant to be a test class (e.g. somewhere in goodies, or in a class used by it), and nose finds and initialises it without the required parameters.
You can try going through the imports one by one, putting them back at module level and then initialising the suspect class in the Python shell, to track it down.
The solution:
remove import statements from the top of the script.
Why:
After locating the test file giving me issues, I executed nosetests with the -vv option as per Evert's suggestion. It turned out that the error message wasn't coming from any specific test; i.e., the tests were running as expected, and those errors were just tacked onto the output. The output looked something like:
Failure: TypeError (__init__() takes exactly 2 arguments (1 given)) ... ERROR
Failure: TypeError (__init__() takes exactly 2 arguments (1 given)) ... ERROR
...
test_clear_instructions (the_calculator2.tests.model_tests.workflow_tests.Workflow_tests) ...
...all my tests follow
The only things not in test cases were import statements. So I just moved them to where they were used.
But why would this happen? Bonus points to anyone who knows; again, I don't feel like reading through reams of code to find the answer.
Illustrative code:
from my.stuff import goodies  # <---------- Error from this line

class My_tests(unittest.TestCase):
    def test_one(self):
        do stuff
    def test_two(self):
        do other stuff
No error in this code:
class My_tests(unittest.TestCase):
    def test_one(self):
        from my.stuff import goodies
        do stuff
    def test_two(self):
        from my.stuff import goodies
        do other stuff
So I spent way too much time on this issue and want to give back.
If you run nosetests with --with-xunit you'll notice that the failure is booked under the path nose.failure.Failure.runTest. If you try to run that test, it will say it doesn't exist. Something weird is going on here.
Interestingly, if you run it with --collect-only --verbose you'll get Failure: TypeError (__init__() takes exactly 2 arguments (1 given)) ... ok, so something is first being recognized as a test when it probably shouldn't be, and then it fails immediately. Evilly, the two packages adjacent to the failing test both pass, so even figuring out which file is failing is not straightforward.
As it turns out, there was a non-test file someone created that had test in its name. Nose tried to interpret it as a test, hence the error.
Sadly I had to manually run every package to figure this out. One way you can do this is to run find /path/to/package -name "*test*.py" and then grep for anything that doesn't live in a tests directory.
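If you would rather stay in Python, a rough equivalent (with /path/to/package as a placeholder for your own tree) is:

# Sketch: list files with "test" in their name that live outside any
# "tests" directory, since nose may wrongly collect them as test modules.
import fnmatch
import os

for dirpath, dirnames, filenames in os.walk("/path/to/package"):
    if "tests" in dirpath.split(os.sep):
        continue
    for name in fnmatch.filter(filenames, "*test*.py"):
        print(os.path.join(dirpath, name))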

Nosetest: Does it set the errorlevel to 1 on failure?

Windows environment, python 2.7, latest nosetest.
Looking at nosetest docs, and googling around, nowhere do I see that nosetest sets the cmd line errorlevel on test failure.
We need this so that our build system can detect test failure.
Questions are:
Does nosetest set the cmd-line errorlevel? (If so, where are the docs?)
If not, what is the appropriate way to handle this? (Must my build parse some log output, or ...?)
%errorlevel% on Windows is the return code of the application, typically the argument given to the exit(int) call (the exit code). Nose's return codes are the same as unittest's, but the documentation is not very explicit:
The testRunner argument can either be a test runner class or an already created instance of it. By default main calls sys.exit() with an exit code indicating success or failure of the tests run.
In the sentence above, "By default" is to be understood as "unless the exit argument is set to False":
main supports being used from the interactive interpreter by passing in the argument exit=False. This displays the result on standard output without calling sys.exit()
(New in 2.7 and 3.1. In older version, sys.exit is always called.)
I found no specific documentation about the return code, but looking at the source one can find that the exit code is 0 for success, 1 for failure (the same as plain unittest), and 2 if the usage help has to be printed (i.e. the arguments given when calling it as a standalone program are incorrect). Specific to nose: when the program is asked to display the version or list plugins, the exit code is also 0.
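To see that mechanism in isolation, here is a small sketch with plain unittest (which nose mirrors): with exit=False the result is returned instead of sys.exit() being called, so you can pick the exit code yourself:

import unittest

class DeliberateFailure(unittest.TestCase):
    def test_fails(self):
        self.assertTrue(False)

if __name__ == '__main__':
    # exit=False (new in 2.7/3.1) returns the TestProgram instead of
    # calling sys.exit(), so the result can be inspected directly.
    program = unittest.main(exit=False)
    raise SystemExit(0 if program.result.wasSuccessful() else 1)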

_shutdown AttributeError (ignored) when linting code that uses M2Crypto

I'm running lint as follows:
$ python -m pylint.lint m2test.py
with this code:
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
The lint output ends with:
Exception AttributeError: '_shutdown' in <module 'threading' from '/usr/lib/python2.7/site-packages/M2Crypto-0.21.1-py2.7-linux-x86_64.egg/M2Crypto/threading.pyc'> ignored
This code works fine when run (the above is actually a minimal test case; but the full version does work). The exception is ignored, but Bitten considers this a failure, so stops on this step.
I've tried adding 'M2Crypto.threading.init()'/'M2Crypto.threading.cleanup()' around the definition of the function, but that didn't fix the problem.
How can I prevent this problem from occurring?
I'm using M2Crypto 0.21.1, pylint 0.24 and Python 2.7 (also tried 2.7.2) on Debian Lenny x86_64.
The exception that you are seeing is caused by a bug in the astng package (presumably “Abstract Syntax Tree, Next Generation”?) which is a toolkit on which pylint depends, written by the same people. I should note in passing that I always encourage people to use pyflakes instead of pylint when possible, because it is quick, simple, fast, and predictable, whereas pylint tries to do several kinds of deep magic that are not only slow but that can get it into exactly this kind of trouble. :)
Here are the two packages on PyPI:
http://pypi.python.org/pypi/pylint
http://pypi.python.org/pypi/astng
And note that this problem had to be, necessarily, a bug in pylint and not in your code, because pylint does not run your code in order to produce its report — imagine the havoc that could be wreaked if it did (since code being linted might delete files, etcetera)! Since your code does not get run, no amount of caution, like protecting your call with threading init() or cleanup() functions, could possibly have prevented this error — unless the code snippets happened, for other reasons, to alter the behavior we are about to investigate.
So, on to your actual exception.
I had never actually heard of _shutdown before! A quick search of the Python standard library showed its definition in threading.py but not a call of the function from anywhere; only by searching the Python C source code did I discover where in pythonrun.c, during interpreter shutdown, the function is actually called:
static void
wait_for_thread_shutdown(void)
{
    ...
    PyObject *threading = PyMapping_GetItemString(tstate->interp->modules,
                                                  "threading");
    if (threading == NULL) {
        /* threading not imported */
        PyErr_Clear();
        return;
    }
    result = PyObject_CallMethod(threading, "_shutdown", "");
    if (result == NULL) {
        PyErr_WriteUnraisable(threading);
    }
    ...
}
Apparently it is some sort of cleanup function that the threading Standard Library module requires, and they have special-cased the Python interpreter itself to make sure that it gets called.
As you can see from the code above, Python quietly and without complaint handles the case where the threading module never gets imported during a program's run. But if threading does get imported, and still exists at shutdown time, then the interpreter looks inside for a _shutdown function and goes so far as to print an error message — and then return a non-zero exit status, the cause of your problems — if it cannot call it.
So we have to discover why the threading module exists but has no _shutdown method at the moment when pylint is done examining your program and Python is exiting. Some instrumentation is called for. Can we print out what the module looks like as pylint exits? We can! The pylint/lint.py module, in its last few lines, runs its “main program” by instantiating a Run class it has defined:
if __name__ == '__main__':
    Run(sys.argv[1:])
So I opened lint.py in my editor — one of the magnificent things about having each little project installed in a Python Virtual Environment is that I can jump in and edit third-party code for quick experiments — and added the following print statement down at the bottom of the Run class's __init__() method:
sys.path.pop(0)
print "*****", sys.modules['threading'].__file__   # added by me!
if exit:
    sys.exit(self.linter.msg_status)
I re-ran the command:
python -m pylint.lint m2test.py
And out came the __file__ string of the threading module:
***** /home/brandon/venv/lib/python2.7/site-packages/M2Crypto/threading.pyc
Well, look at that.
This is the problem!
According to this path, there actually exists an M2Crypto/threading.py module that, under all normal circumstances, should just be called M2Crypto.threading, and therefore sit in the sys.modules dictionary under the name:
sys.modules['M2Crypto.threading']
But somehow that file is also getting loaded as the main Python threading module, shadowing the official threading module that sits in the Standard Library. Because of this, the Python exit logic is quite correctly complaining that the Standard Library _shutdown() function is missing.
How could this happen? Top-level modules can only appear in paths that are listed explicitly in sys.path, not in sub-directories beneath them. This leads to a new question: is there any point during the pylint run that the …/M2Crypto/ directory itself is getting put on sys.path as though it contained top-level modules? Let's see!
We need more instrumentation: we need to have Python tell us the moment that a directory with M2Crypto in the name appears in sys.path. It will really slow things down, but let's add a trace function to pylint's __init__.py — because that is the first module that gets imported when you run -m pylint.lint — that will write an output file telling us, for every line of code executed, whether sys.path has any bad values in it:
def install_tracer():
    import sys
    output = open('mytracer.out', 'w')
    def mytracer(frame, event, arg):
        broken = any(p.endswith('M2Crypto') for p in sys.path)
        output.write('{} {}:{} {}\n'.format(
            broken, frame.f_code.co_filename, frame.f_lineno, event))
        return mytracer
    sys.settrace(mytracer)

install_tracer()
del install_tracer
Note how careful I am here: I define only one name in the module's namespace, and then carefully delete it to clean up after myself before I let pylint continue loading! And all of the resources that the trace function itself needs — namely, the sys module and the output open file — are available in the install_tracer() closure so that, from the outside, pylint looks exactly the same as always. Just in case anyone tries to introspect it, like pylint might!
This generates a file mytracer.out of about 800k lines, that each look something like this:
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
The False says that sys.path looks clean, the filename and line number are the line of code being executed, and call indicates what stage of execution the interpreter is in.
So does sys.path ever get poisoned? Let's look at just the first True or False on each line, and see how many successive lines start with each value:
$ awk '{print$1}' mytracer.out | uniq -c
607997 False
3173 True
4558 False
33217 True
4304 False
41699 True
2953 False
110503 True
52575 False
Wow! That's a problem! For runs of several thousand lines at a time, our test case is True, which means that the interpreter is running with …/M2Crypto/ — or some variant of a pathname with M2Crypto in it — on the path, where it should not be; only the directory that contains …/M2Crypto should ever be on the path. Looking for the first False to True transition in the file, I see this:
False /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:132 line
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
...
False /home/brandon/venv/lib/python2.7/posixpath.py:124 line
False /home/brandon/venv/lib/python2.7/posixpath.py:124 return
True /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:133 line
And looking at lines 132 and 133 in the builder.py file reveals our culprit:
130     # build astng representation
131     try:
132         sys.path.insert(0, dirname(path)) # XXX (syt) iirk
133         node = self.string_build(data, modname, path)
134     finally:
135         sys.path.pop(0)
Note the comment, which is part of the original code, not an addition of my own! Obviously, XXX (syt) iirk is an exclamation in this programmer's strange native language for the phrase, “put this module's parent directory on sys.path so that pylint will break mysteriously every time someone forces pylint to introspect a package with a threading sub-module.” It is, obviously, a very compact native language. :)
If you adjust the tracing module to watch sys.modules for the actual import of threading — an exercise I will leave to the reader — you will see that it happens when SocketServer, which is imported by some other Standard Library module during the analysis, in turn tries to innocently import threading.
So let us review what is happening:
pylint is dangerous magic.
As part of its magic, if it sees you import foo, then it runs off trying to find foo.py on disk, to parse it, and to predict whether you are loading valid or invalid names from its namespace.
[See my comment, below.] Because you call .split() on the return value of RSA.as_pem(), pylint tries to introspect the as_pem() method, which in turn uses the M2Crypto.BIO module, which in turn makes calls that induce pylint to import threading.
As part of loading any module foo.py, pylint throws the directory containing foo.py on sys.path, even if that directory is inside a package, and therefore gives modules in that directory the privilege of shadowing Standard Library modules of the same name during its analysis.
When Python exits, it is upset that the M2Crypto.threading library is sitting where threading belongs, because it wants to run the _shutdown() method of threading.
You should report this as a bug to the pylint / astng folks at logilab.org. Tell them I sent you.
If you decide to keep using pylint after it has done this to you, then there seem to be two solutions in this case: either don't inspect code that calls M2Crypto, or import threading during the pylint import process — by sticking import threading into the pylint/__init__.py, for example — so that the module gets the chance to grab the sys.modules['threading'] slot before pylint gets all excited and tries to let M2Crypto/threading.py grab the slot instead.
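A rough sketch of that second workaround, without editing pylint itself, is a tiny wrapper script; the lint.Run entry point is the same one shown in lint.py above, but treat the exact invocation as an assumption for your pylint version:

# run_pylint.py (hypothetical wrapper): usage: python run_pylint.py m2test.py
import threading  # claim sys.modules['threading'] before pylint starts analysing
import sys

from pylint import lint

if __name__ == '__main__':
    lint.Run(sys.argv[1:])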
In conclusion, I think the author of astng says it best: XXX (syt) iirk. Indeed.
Many thanks to Brandon Craig Rhodes for tracking this down and for such a detailed post.
I've removed the offending line from astng; the code is available from the hg repository until logilab-astng 0.23.0 is out. And I can confirm this fixes the OP's problem.
This looks more like a hack, but I think it works: copy the result of as_pem() and split the copy.
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None)[:].split("\n")
I'm using Python 2.6.7, M2Crypto 0.21.1, pylint 0.23
I was unable to reproduce this (pylint 0.24 and M2Crypto 0.21.1 on Ubuntu 11.04 64-bit), but here are two suggestions:
Explicitly initialize threading:
import M2Crypto

def f():
    M2Crypto.threading.init()
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
    M2Crypto.threading.cleanup()
Or recompile without threading:
m2crypto = Extension(name = 'M2Crypto.__m2crypto',
                     sources = ['SWIG/_m2crypto.i'],
                     extra_compile_args = ['-DTHREADING'],
                     #extra_link_args = ['-Wl,-search_paths_first'], # Uncomment to build Universal Mac binaries
                     )
