I'm setting up Python (CPython 3.4, 64-bit) on a new machine (Windows 10). I installed numpy & nose, and ran numpy.test() through the interpreter prompt to make sure everything's working as expected:
Running unit tests for numpy
NumPy version 1.9.0
NumPy is installed in C:\Python34\lib\site-packages\numpy
Python version 3.4.2 (v3.4.2:ab2c023a9432, Oct 6 2014, 22:16:31) [MSC v.1600 64 bit (AMD64)]
nose version 1.3.4
----------------------------------------------------------------------
Ran 5162 tests in 36.783s
OK (KNOWNFAIL=10, SKIP=20)
So far so good, but when I do the same thing through PTVS on VS2012 (my team uses TFS for source control), there are errors and test failures (below):
Running unit tests for numpy
NumPy version 1.9.0
NumPy is installed in C:\Python34\lib\site-packages\numpy
Python version 3.4.2 (v3.4.2:ab2c023a9432, Oct 6 2014, 22:16:31) [MSC v.1600 64 bit (AMD64)]
nose version 1.3.4
======================================================================
ERROR: test_basic (test_multiarray.TestResize)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_multiarray.py", line 2850, in test_basic
x.resize((5, 5))
ValueError: cannot resize an array that references or is referenced
by another array in this way. Use the resize function
======================================================================
ERROR: test_freeform_shape (test_multiarray.TestResize)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_multiarray.py", line 2880, in test_freeform_shape
x.resize(3, 2, 1)
ValueError: cannot resize an array that references or is referenced
by another array in this way. Use the resize function
======================================================================
ERROR: test_int_shape (test_multiarray.TestResize)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_multiarray.py", line 2862, in test_int_shape
x.resize(3)
ValueError: cannot resize an array that references or is referenced
by another array in this way. Use the resize function
======================================================================
ERROR: test_obj_obj (test_multiarray.TestResize)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_multiarray.py", line 2892, in test_obj_obj
a.resize(15,)
ValueError: cannot resize an array that references or is referenced
by another array in this way. Use the resize function
======================================================================
ERROR: test_zeros_appended (test_multiarray.TestResize)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_multiarray.py", line 2885, in test_zeros_appended
x.resize(2, 3, 3)
ValueError: cannot resize an array that references or is referenced
by another array in this way. Use the resize function
======================================================================
ERROR: Ticket #950
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_regression.py", line 1272, in test_blasdot_uninitialized_memory
x.resize((m, 0))
ValueError: cannot resize an array that references or is referenced
by another array in this way. Use the resize function
======================================================================
FAIL: test_blasdot.test_dot_3args
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\nose\case.py", line 198, in runTest
self.test(*self.arg)
File "C:\Python34\lib\site-packages\numpy\core\tests\test_blasdot.py", line 54, in test_dot_3args
assert_equal(sys.getrefcount(r), 2)
File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line 334, in assert_equal
raise AssertionError(msg)
AssertionError:
Items are not equal:
ACTUAL: 3
DESIRED: 2
======================================================================
FAIL: test_1d (test_indexing.TestMultiIndexingAutomated)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_indexing.py", line 940, in test_1d
self._check_single_index(a, index)
File "C:\Python34\lib\site-packages\numpy\core\tests\test_indexing.py", line 859, in _check_single_index
self._compare_index_result(arr, index, mimic_get, no_copy)
File "C:\Python34\lib\site-packages\numpy\core\tests\test_indexing.py", line 875, in _compare_index_result
assert_equal(sys.getrefcount(arr), 3)
File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line 334, in assert_equal
raise AssertionError(msg)
AssertionError:
Items are not equal:
ACTUAL: 4
DESIRED: 3
======================================================================
FAIL: test_multidim (test_indexing.TestMultiIndexingAutomated)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_indexing.py", line 922, in test_multidim
self._check_multi_index(self.a, index)
File "C:\Python34\lib\site-packages\numpy\core\tests\test_indexing.py", line 836, in _check_multi_index
self._compare_index_result(arr, index, mimic_get, no_copy)
File "C:\Python34\lib\site-packages\numpy\core\tests\test_indexing.py", line 875, in _compare_index_result
assert_equal(sys.getrefcount(arr), 3)
File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line 334, in assert_equal
raise AssertionError(msg)
AssertionError:
Items are not equal:
ACTUAL: 4
DESIRED: 3
======================================================================
FAIL: test_dot_3args (test_multiarray.TestDot)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\tests\test_multiarray.py", line 3285, in test_dot_3args
assert_equal(sys.getrefcount(r), 2)
File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line 334, in assert_equal
raise AssertionError(msg)
AssertionError:
Items are not equal:
ACTUAL: 3
DESIRED: 2
----------------------------------------------------------------------
Ran 5162 tests in 181.506s
FAILED (KNOWNFAIL=10, SKIP=20, errors=6, failures=4)
Press any key to continue . . .
I ran the script that I wrote in VS through the command line, and the results were the same as running the test through the interpreter directly, so I'm confident there's something amiss in my VS/Python setup rather than the script itself. What could the problem be?
There's nothing amiss in your setup. It's just that the numpy tests are fragile in the sense that they don't tolerate debuggers. I'm not sure exactly what's going on there, but the failing tests are the ones that involve sys.getrefcount, or semantics that depend on the reference count having a specific value (usually an array having just a single reference).
You can reproduce this when running the interpreter directly by registering your own trace function with sys.settrace, e.g.:
import sys

def trace_func(f, e, a):
    return trace_func

# Tracing stays active for the rest of the session, much like running under a debugger.
sys.settrace(trace_func)

import numpy
numpy.test()
Note that you can run the script from VS without debugging it via Debug -> Start Without Debugging. This will give you results identical to running the interpreter directly.
Related
I have installed the symbolic package. When I try to use it, some Python-related errors appear in the Octave command window. Below is the output:
> pkg load symbolic
>> syms a
Symbolic pkg v2.9.0: Traceback (most recent call last):
File "<stdin>", line 28, in <module>
AttributeError: '_PrintFunction' object has no attribute '__globals__'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 12, in octoutput_drv
File "<stdin>", line 54, in octoutput
File "<stdin>", line 55, in octoutput
AttributeError: module 'sympy' has no attribute 'compatibility'
Closing the Python communications link.
error: Python exception: AttributeError: '_PrintFunction' object has no attribute '__globals__'
occurred in python_header import block.
Try "sympref reset" and repeat your command?
(consider filing an issue at https://github.com/cbm755/octsympy/issues)
error: called from
pycall_sympy__ at line 191 column 5
valid_sym_assumptions at line 38 column 10
assumptions at line 82 column 7
syms at line 97 column 13
>> Opening in existing browser session.
Any idea how I can troubleshoot this?
Thanks.
The error is a known bug (https://github.com/cbm755/octsympy/issues/1035).
You need to use an older version of sympy, such as version 1.5.1.
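As a hedged sketch (assuming the symbolic package picks up the default python on your PATH; adjust if you have pointed Octave at a different interpreter), you can check which sympy version is visible and downgrade it:
# Check the sympy version visible to the Python that Octave's symbolic package uses.
import sympy
print(sympy.__version__)  # if this is newer than 1.5.1, downgrade, e.g. from a shell:
#   python -m pip install "sympy==1.5.1"
After the downgrade, restart Octave and try syms a again.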
I have a simple test in a site_tests.py file:
import unittest

class SiteTests(unittest.TestCase):
    def test(self):
        self.assertEqual('a', 'b')

if __name__ == '__main__':
    unittest.main()
When I run "Unittest in test_site.py" with default PyCharm configuration I'm getting:
Testing started at 23:45 ...
C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\python.exe "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.3\helpers\pycharm\_jb_unittest_runner.py" --path C:/testSiteDemoTests/site_tests.py
Launching unittests with arguments python -m unittest C:/testSiteDemoTests/site_tests.py in C:\testSiteDemoTests
b != a
Expected :a
Actual :b
<Click to see difference>
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.3\helpers\pycharm\teamcity\diff_tools.py", line 32, in _patched_equals
old(self, first, second, msg)
File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 829, in assertEqual
assertion_func(first, second, msg=msg)
File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 1202, in assertMultiLineEqual
self.fail(self._formatMessage(msg, standardMsg))
File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 670, in fail
raise self.failureException(msg)
AssertionError: 'a' != 'b'
- a
+ b
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 59, in testPartExecutor
yield
File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 605, in run
testMethod()
File "C:\testSiteDemoTests\site_tests.py", line 7, in test
self.assertEqual('a', 'b')
Ran 1 test in 0.000s
FAILED (failures=1)
Process finished with exit code 1
The last part is very interesting: when I run this file directly, without _jb_unittest_runner.py (i.e. C:\testSite>python lost_hat_tests.py), the output is fine:
F
======================================================================
FAIL: test (__main__.SiteTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "lost_hat_tests.py", line 7, in test
self.assertEqual('a', 'b')
AssertionError: 'a' != 'b'
- a
+ b
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (failures=1)
Is there a simple explanation for why that second message appears in the PyCharm runner?
I also ran the test in the console with _jb_unittest_runner.py; interestingly, it reports two test suites:
C:\testSiteDemoTests>python "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.3\helpers\pycharm\_jb_unittest_runner.py" --path C:/testSiteDemoTests/site_tests.py
##teamcity[enteredTheMatrix timestamp='2018-05-20T23:59:59.931']
Launching unittests with arguments python -m unittest C:/testSiteDemoTests/site_tests.py in C:\testSiteDemoTests
##teamcity[testCount timestamp='2018-05-20T23:59:59.946' count='1']
##teamcity[testSuiteStarted timestamp='2018-05-20T23:59:59.946' locationHint='python<C:\testSiteDemoTests>://site_tests' name='site_tests' nodeId='1' parentNodeId='0']
##teamcity[testSuiteStarted timestamp='2018-05-20T23:59:59.946' locationHint='python<C:\testSiteDemoTests>://site_tests.SiteTests' name='SiteTests' nodeId='2' parentNodeId='1']
##teamcity[testStarted timestamp='2018-05-20T23:59:59.962' captureStandardOutput='true' locationHint='python<C:\testSiteDemoTests>://site_tests.SiteTests.test' name='test' nodeId='3' parentNodeId='2']
##teamcity[testFailed timestamp='2018-05-20T23:59:59.977' actual='b' details='Traceback (most recent call last):|n File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.1.3\helpers\pycharm\teamcity\diff_tools.py", line 32, in _patched_equals|n old(
self, first, second, msg)|n File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 829, in assertEqual|n assertion_func(first, second, msg=msg)|n File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\
lib\unittest\case.py", line 1202, in assertMultiLineEqual|n self.fail(self._formatMessage(msg, standardMsg))|n File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 670, in fail|n raise self.failureException(msg)
|nAssertionError: |'a|' != |'b|'|n- a|n+ b|n|n|nDuring handling of the above exception, another exception occurred:|n|nTraceback (most recent call last):|n File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 59, in t
estPartExecutor|n yield|n File "C:\Users\testSite\AppData\Local\Programs\Python\Python36-32\lib\unittest\case.py", line 605, in run|n testMethod()|n File "C:\testSiteDemoTests\site_tests.py", line 7, in test|n self.assertEqual(|'a|', |'b|')|n' e
xpected='a' locationHint='python<C:\testSiteDemoTests>://site_tests.SiteTests.test' message='|nb != a|n' name='test' nodeId='3' parentNodeId='2' type='comparisonFailure']
##teamcity[testFinished timestamp='2018-05-20T23:59:59.977' duration='30' locationHint='python<C:\testSiteDemoTests>://site_tests.SiteTests.test' name='test' nodeId='3' parentNodeId='2']
Ran 1 test in 0.031s
FAILED (failures=1)
##teamcity[testSuiteFinished timestamp='2018-05-20T23:59:59.977' locationHint='python<C:\testSiteDemoTests>://site_tests.SiteTests' name='SiteTests' nodeId='2' parentNodeId='1']
##teamcity[testSuiteFinished timestamp='2018-05-20T23:59:59.977' locationHint='python<C:\testSiteDemoTests>://site_tests' name='site_tests' nodeId='1' parentNodeId='0']
This message shows up in an exception's traceback in Python 3 when another exception is raised in an exception handler or finally clause for the first one:
A similar mechanism works implicitly if an exception is raised inside an exception handler or a finally clause: the previous exception is then attached as the new exception’s __context__ attribute
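A minimal, standalone illustration of that implicit chaining (not PyCharm-specific):
# The AssertionError becomes the second exception's __context__, and Python
# prints both tracebacks joined by "During handling of the above exception,
# another exception occurred:".
try:
    raise AssertionError("'a' != 'b'")               # the original test failure
except AssertionError:
    raise RuntimeError("raised inside the except block")
That is exactly the shape of the output in the question: the original AssertionError plus a second exception raised while the first one was being handled.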
Setting a breakpoint on the test in PyCharm and stepping further into the machinery shows where this second exception is raised: _jb_unittest_runner patches the assert methods in unittest:
PyCharm Community Edition\helpers\pycharm\_jb_unittest_runner.py:
from teamcity import unittestpy
PyCharm Community Edition\helpers\pycharm\teamcity\unittestpy.py:
def run(self, test):
    <...>
    patch_unittest_diff(subtest_filter)
    <...>
PyCharm Community Edition\helpers\pycharm\teamcity\diff_tools.py:
def patch_unittest_diff(<...>):
    old = unittest.TestCase.assertEqual

    def _patched_equals(self, first, second, msg=None):
        try:
            old(self, first, second, msg)
            return
        except AssertionError as native_error:
            if not test_filter or test_filter(self):
                error = EqualsAssertionError(first, second, msg)
                if error.can_be_serialized():
                    raise error
            raise native_error

    unittest.TestCase.assertEqual = _patched_equals
I am using rpy2 and running the tests with $ python -m "rpy2.tests". I got the following errors. I am on Ubuntu 14.04 with Python 2.7 and am trying to call R from Python using this package.
rpy2 version: 2.6.0
- built against R version: 3-2.1--68531
- running linked to R version: R version 3.2.1 (2015-06-18)
....../tmp/tmpRWL4lU.py:17: UserWarning:
ri.baseenv['eval'](ri.parse(rcode))
..................................................................................................................................................................................x.........................................................................................................E
Stderr:
/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/robjects/methods.py:80: UserWarning: Error in as.environment(where) : no item called "None" on the search list
StrSexpVector((cls_packagename, )))
E...................................sssssssss................................E.s...s..s.....
======================================================================
ERROR: testRS4Auto_Type (robjects.tests.testMethods.MethodsTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/robjects/tests/testMethods.py", line 115, in testRS4Auto_Type
robjects.methods.RS4)):
File "/usr/lib/python2.7/dist-packages/six.py", line 617, in with_metaclass
return meta("NewBase", bases, {})
File "/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/robjects/methods.py", line 154, in __new__
cls_def = getclassdef(cls_rname, cls_rpackagename)
File "/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/robjects/methods.py", line 80, in getclassdef
StrSexpVector((cls_packagename, )))
RRuntimeError: Error in as.environment(where) : no item called "None" on the search list
Stderr:
/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/robjects/methods.py:80: UserWarning: Error in as.environment(where) : no item called "None" on the search list
StrSexpVector((cls_packagename, )))
======================================================================
ERROR: testRS4Auto_Type_nopackname (robjects.tests.testMethods.MethodsTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/robjects/tests/testMethods.py", line 122, in testRS4Auto_Type_nopackname
robjects.methods.RS4)):
File "/usr/lib/python2.7/dist-packages/six.py", line 617, in with_metaclass
return meta("NewBase", bases, {})
File "/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/robjects/methods.py", line 154, in __new__
cls_def = getclassdef(cls_rname, cls_rpackagename)
File "/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/robjects/methods.py", line 80, in getclassdef
StrSexpVector((cls_packagename, )))
RRuntimeError: Error in as.environment(where) : no item called "None" on the search list
======================================================================
ERROR: test_Rconverter (ipython.tests.test_rmagic.TestRmagic)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/rpy2-2.6.0-py2.7-linux-x86_64.egg/rpy2/ipython/tests/test_rmagic.py", line 127, in test_Rconverter
tuple(fromr_dataf_np.ix[col_i].values))
File "/usr/lib/python2.7/dist-packages/numpy/core/records.py", line 418, in __getattribute__
raise AttributeError("record array has no attribute %s" % attr)
AttributeError: record array has no attribute ix
----------------------------------------------------------------------
Ran 383 tests in 3.719s
FAILED (errors=3, skipped=12, expected failures=1)
Any ideas on how to fix these three errors?
Thanks,
Theano is failing its tests when I run:
python -c "import theano; theano.test();"
If these are known failures, shouldn't the suite still pass? That is, when I test other libraries, known failures sometimes trigger, but the overall run still finishes with "OK" (while still noting the known failures and skipped tests).
My guess is that this is fine and the tests really are "passing", but since I'm doing a fresh install following the deeplearning.net tutorials and I'm hitting this output, I assume others might have the same question, and searching Google and SO isn't really helpful.
Forgive the error dump; I'm sure no one needs to read all the way through it, but it's here for reference in case someone else has this question. Here are the errors at the end of the tests:
======================================================================
ERROR: test_none (theano.compile.tests.test_function_module.T_function)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/compile/tests/test_function_module.py", line 42, in test_none
raise KnownFailureTest('See #254: Using None as function output leads to [] return value')
KnownFailureTest: See #254: Using None as function output leads to [] return value
======================================================================
ERROR: test002_generator_one_scalar_output (theano.sandbox.scan_module.tests.test_scan.TestScan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/scan_module/tests/test_scan.py", line 474, in test002_generator_one_scalar_output
raise KnownFailureTest('Work-in-progress sandbox ScanOp is not fully '
KnownFailureTest: Work-in-progress sandbox ScanOp is not fully functional yet
======================================================================
ERROR: test003_one_sequence_one_output_and_weights (theano.sandbox.scan_module.tests.test_scan.TestScan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/scan_module/tests/test_scan.py", line 512, in test003_one_sequence_one_output_and_weights
raise KnownFailureTest('Work-in-progress sandbox ScanOp is not fully '
KnownFailureTest: Work-in-progress sandbox ScanOp is not fully functional yet
======================================================================
ERROR: test_alloc_inputs2 (theano.scan_module.tests.test_scan.T_Scan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/tests/test_scan.py", line 2844, in test_alloc_inputs2
"This tests depends on an optimization for scan "
KnownFailureTest: This tests depends on an optimization for scan that has not been implemented yet.
======================================================================
ERROR: test_infershape_seq_shorter_nsteps (theano.scan_module.tests.test_scan.T_Scan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/tests/test_scan.py", line 3040, in test_infershape_seq_shorter_nsteps
raise KnownFailureTest('This is a generic problem with infershape'
KnownFailureTest: This is a generic problem with infershape that has to be discussed and figured out
======================================================================
ERROR: test_outputs_info_not_typed (theano.scan_module.tests.test_scan.T_Scan)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: This test fails because not typed outputs_info are always gived the smallest dtype. There is no upcast of outputs_info in scan for now.
======================================================================
ERROR: test_arithmetic_cast (theano.tensor.tests.test_basic.test_arithmetic_cast)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/tensor/tests/test_basic.py", line 5583, in test_arithmetic_cast
raise KnownFailureTest('Known issue with '
KnownFailureTest: Known issue with numpy >= 1.6.x see #761
======================================================================
ERROR: test_abs_grad (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178
======================================================================
ERROR: test_complex_grads (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178
======================================================================
ERROR: test_mul_mixed (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178
======================================================================
ERROR: test_mul_mixed0 (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178
======================================================================
ERROR: test_mul_mixed1 (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178
======================================================================
ERROR: test_polar_grads (theano.tensor.tests.test_complex.TestRealImag)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178
======================================================================
ERROR: test_gradient (theano.tensor.tests.test_fourier.TestFourier)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/numpy/testing/decorators.py", line 213, in knownfailer
raise KnownFailureTest(msg)
KnownFailureTest: Complex grads not enabled, see #178
======================================================================
ERROR: theano.tensor.tests.test_opt.test_log_add
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/local/lib/python2.7/dist-packages/theano/tensor/tests/test_opt.py", line 1508, in test_log_add
raise KnownFailureTest(('log(add(exp)) is not stabilized when adding '
KnownFailureTest: log(add(exp)) is not stabilized when adding more than 2 elements, see #623
======================================================================
ERROR: Currently Theano enable the constant_folding optimization before stabilization optimization.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/local/lib/python2.7/dist-packages/theano/tensor/tests/test_opt.py", line 3068, in test_constant_get_stabilized
"Theano optimizes constant before stabilization. "
KnownFailureTest: Theano optimizes constant before stabilization. This breaks stabilization optimization in some cases. See #504.
======================================================================
ERROR: test_dot (theano.tests.test_rop.test_RopLop)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/tests/test_rop.py", line 277, in test_dot
self.check_rop_lop(tensor.dot(self.x, W), self.in_shape)
File "/usr/local/lib/python2.7/dist-packages/theano/tests/test_rop.py", line 191, in check_rop_lop
raise KnownFailureTest("Rop doesn't handle non-differentiable "
KnownFailureTest: Rop doesn't handle non-differentiable inputs correctly. Bug exposed by fixing Add.grad method.
======================================================================
ERROR: test_elemwise0 (theano.tests.test_rop.test_RopLop)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/tests/test_rop.py", line 280, in test_elemwise0
self.check_rop_lop((self.x + 1) ** 2, self.in_shape)
File "/usr/local/lib/python2.7/dist-packages/theano/tests/test_rop.py", line 191, in check_rop_lop
raise KnownFailureTest("Rop doesn't handle non-differentiable "
KnownFailureTest: Rop doesn't handle non-differentiable inputs correctly. Bug exposed by fixing Add.grad method.
----------------------------------------------------------------------
Ran 2441 tests in 807.791s
FAILED (errors=18)
Thanks!
KnownFailureTest is a valid outcome for nose tests. When Theano started, we were creating tests for features we still had to implement and raised KnownFailureTest in them until the feature was done. We don't do that anymore, because it ended up generating too many questions from people and caused too much distraction, but we never changed the old tests that still do it.
I just created an issue to change that: https://github.com/Theano/Theano/issues/2375
I do not know when it will be changed.
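For reference, here is a minimal sketch of the pattern described above, assuming an older numpy where numpy.testing.dec.knownfailureif is available (this matches the decorators.py frames in the tracebacks):
from numpy.testing import dec

@dec.knownfailureif(True, "feature X is not implemented yet")
def test_future_feature():
    # Never reached: the decorator raises KnownFailureTest before the body runs.
    assert False
Depending on whether the KnownFailure nose plugin is active for the test run, such a test is reported as a KNOWNFAIL entry or, as in the output above, as an ERROR.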
Regarding this thread: http://goo.gl/uEyFua
I am experiencing quite a similar issue with the following code, trying to import a large graph:
for (edge_id) in cursorSQL:
    L.add((edge_id[2], str(edge_id[1])))

g = igraph.Graph.TupleList(L)
I get the following errors:
Traceback (most recent call last):
File "C.py", line 707, in __getitem__ return self._ids[item]
KeyError: '184840900'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
g = igraph.Graph.TupleList(L)
File "C:\Python34\lib\site-packages\igraph\__init__.py", line 2473, in TupleList
edge_list.append((idgen[item[0]], idgen[item[1]]))
File "C:\Python34\lib\site-packages\igraph\datatypes.py", line 709, in __getitem__
self._ids[item] = next(self._generator)
MemoryError
Just to make it clear: this code works perfectly until the number of edges gets too large (~4 million).
Thanks.
I switched to 64-bit Python, and the problem is now solved.
The problem was the 2 GB-per-process limit for 32-bit Python on Windows 7.
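As a quick sanity check (a minimal sketch, not specific to igraph), you can confirm whether the interpreter you are running is a 32-bit or 64-bit build:
import platform
import struct

# Pointer size in bits: 32 for a 32-bit build, 64 for a 64-bit build.
print(struct.calcsize("P") * 8)
# Reported architecture of the interpreter binary, e.g. ('64bit', 'WindowsPE').
print(platform.architecture())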