Pandas is not working properly on my system, and I am trying to fix it. Below is the output I got from running nosetests pandas (all output available here); do you have any suggestions on how to fix this?
======================================================================
FAIL: test_fred_parts (pandas.io.tests.test_data.TestFred)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/Alberto/anaconda/lib/python2.7/site-packages/pandas/util/testing.py", line 1135, in wrapper
return t(*args, **kwargs)
File "/Users/Alberto/anaconda/lib/python2.7/site-packages/pandas/io/tests/test_data.py", line 424, in test_fred_parts
self.assertEqual(df.ix['2010-05-01'][0], 217.23)
AssertionError: 217.29900000000001 != 217.23
----------------------------------------------------------------------
Ran 4836 tests in 377.165s
FAILED (SKIP=88, failures=2)
This particular test is an "allowable failure" in the pandas network tests (it's decorated as a network test, and I think it may even have had to change recently in master to make it less sensitive). Data is grabbed from FRED, parsed, and tested against what pandas has queried previously...
Network tests sometimes fail intermittently: the API may have changed, the connection may be down, or the numbers may have changed slightly (which, from the assertion message, looks to be the case here).
This is nothing to worry about; as you can see, the rest of the 4836 tests pass! :)
The version of Python is 3.6.
I tried to execute my code, but there are still some errors, shown below:
Traceback (most recent call last):
File "C:\Users\tmdgu\Desktop\NLP-master1\NLP-master\Ontology_Construction.py", line 55, in
, binary=True)
File "E:\Program Files\Python\Python35-32\lib\site-packages\gensim\models\word2vec.py", line 1282, in load_word2vec_format
raise DeprecationWarning("Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.")
DeprecationWarning: Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.
How can I fix the code? Or is the path to the data wrong?
A DeprecationWarning is normally just a warning, not a fatal error, and the old code keeps working. In this case, though, gensim raises it as an exception, so the call stops; either way, the fix is the same.
"Deprecation" means a function's use has been marked by the authors as no longer encouraged.
The function typically still works, but may not for much longer – becoming unreliable or unavailable in some future library release. Often, there's a newer, more-preferred way to do the same thing, so you don't trigger the warning message.
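As a toy illustration (this is not gensim's code, just the general pattern), a library typically signals deprecation like this, so the old call keeps working while nudging callers toward the replacement:

import warnings

# Show DeprecationWarnings, which some Python versions hide by default.
warnings.simplefilter("default", DeprecationWarning)

def new_api():
    return "result"

def old_api():
    # The old entry point still works, but warns callers to migrate.
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return new_api()

print(old_api())  # prints "result" and emits a DeprecationWarning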
Your warning message points you at the now-preferred way to load word-vectors of that format: use KeyedVectors.load_word2vec_format() instead.
Did you try using that instead of whatever line of code (not shown in your question) you were using before seeing the warning?
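For instance, a minimal sketch of the preferred call (the vectors file name below is only a placeholder, since your code isn't shown):

from gensim.models import KeyedVectors

# Load pretrained vectors with the non-deprecated API.
word_vectors = KeyedVectors.load_word2vec_format(
    'your_vectors.bin',  # placeholder: path to your word2vec-format file
    binary=True,         # matches the binary=True visible in your traceback
)

print(word_vectors.most_similar('computer', topn=3))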
I am trying to understand a traceback error that I am receiving. See below.
Traceback (most recent call last):
File "test.py", line 291, in test_cache_in_function
self.assertTrue("sunset" in testfilestr,"Testing that the sunset request was cached")
AssertionError: Testing that the sunset request was cached
Does the above error mean that "sunset" should not be in the cached file?
A point about nomenclature: you are getting an AssertionError. The error is printed along with the traceback, which shows the sequence of calls that led to it.
In your particular case, it looks like the error occurs because the assertion made by self.assertTrue(...) came out False: you are asserting that the string "sunset" is in testfilestr, but it is not, most likely because the sunset request was never actually cached in that file.
The second argument to assertTrue is a message, which you see as the AssertionError's message. This argument is optional, and is usually used to clarify the error beyond the obvious default message, which would be something to the effect of "sunset" in testfilestr is False, expected True.
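For illustration, here is a minimal, self-contained test of the same shape (the contents of testfilestr are made up):

import unittest

class CacheTest(unittest.TestCase):
    def test_cache_in_function(self):
        testfilestr = "forecast sunrise tides"  # pretend cache-file contents
        # If the check fails, the second argument becomes the AssertionError message.
        self.assertTrue("sunset" in testfilestr,
                        "Testing that the sunset request was cached")

if __name__ == "__main__":
    unittest.main()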
I encountered a strange error, shown below, when compiling my Theano function. I am using Theano 0.7. I hope a quick workaround is available. The function dump is here.
<<!! BUG IN FGRAPH.REPLACE OR A LISTENER !!>> <type 'exceptions.AssertionError'> local_shape_to_shape_i
ERROR (theano.gof.opt): Optimization failure due to: local_shape_to_shape_i
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/gof/opt.py", line 1527, in process_node
fgraph.replace_all_validate(repl_pairs, reason=lopt)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/toolbox.py", line 259, in replace_all_validate
fgraph.replace(r, new_r, reason=reason, verbose=False)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/fg.py", line 502, in replace
self.change_input(node, i, new_r, reason=reason)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/fg.py", line 442, in change_input
self.__import_r__([new_r], reason=reason)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/fg.py", line 257, in __import_r__
self.__import__(apply_node, reason=reason)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/fg.py", line 365, in __import__
assert node not in self.apply_nodes
AssertionError
This error message appears when a bug in a Theano optimization causes an invalid graph modification.
If you ever see "Optimization failure due to: <something>", try the following:
Search the internet, and the theano-users mailing list in particular, for the message including the specific <something> (in this case <something> is "local_shape_to_shape_i"). You may find a message indicating that the bug has already been identified. If it's been reported to the Theano developers, it may already have been fixed, though you may need to update to the bleeding edge version of Theano directly from GitHub (i.e. pip install --upgrade alone may not be sufficient; an example command is given below).
Even if you can't find any mention online, try updating to the bleeding edge version if that's possible for you. It may have already been fixed.
If the latest bleeding edge version still exhibits the bug then report it on the theano-users mailing list.
Ignore it. Optimization failures do not cause invalid computations. The only side effect (at least in theory) is that the computation may not be as efficient as it might otherwise be.
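For the update path mentioned above, one way to install the bleeding edge version (assuming you have pip and access to GitHub; the URL is Theano's official repository) is:

pip install --upgrade git+https://github.com/Theano/Theano.git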
I read the howto documentation to install Trigger, but when I test it in the Python environment, I get the error below:
>>> from trigger.netdevices import NetDevices
>>> nd = NetDevices()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 913, in __init__
with_acls=with_acls)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 767, in __init__
production_only=production_only, with_acls=with_acls)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 83, in _populate
# device_data = _munge_source_data(data_source=data_source)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/__init__.py", line 73, in _munge_source_data
# return loader.load_metadata(path, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/trigger/netdevices/loader.py", line 163, in load_metadata
raise RuntimeError('No data loaders succeeded. Tried: %r' % tried)
RuntimeError: No data loaders succeeded. Tried: [<trigger.netdevices.loaders.filesystem.XMLLoader object at 0x7f550a1ed350>, <trigger.netdevices.loaders.filesystem.JSONLoader object at 0x7f550a1ed210>, <trigger.netdevices.loaders.filesystem.SQLiteLoader object at 0x7f550a1ed250>, <trigger.netdevices.loaders.filesystem.CSVLoader object at 0x7f550a1ed290>, <trigger.netdevices.loaders.filesystem.RancidLoader object at 0x7f550a1ed550>]
Does anyone have any idea how to fix it?
The NetDevices constructor is apparently trying to find a "metadata source" that isn't there.
First, you need to define the metadata. Second, your code should handle the exception raised when none is found; a sketch is given below.
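A minimal sketch of the second point, reusing the RuntimeError from your traceback (the exit message is only illustrative):

from trigger.netdevices import NetDevices

try:
    nd = NetDevices()
except RuntimeError as err:
    # No loader succeeded; tell the user where the metadata should come from.
    raise SystemExit("No NetDevices metadata source found (set NETDEVICES_SOURCE "
                     "or create /etc/trigger/netdevices.xml): %s" % err)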
I'm the lead developer of Trigger. Check out the doc Working with NetDevices; it is probably what you were missing. We've done some work recently to improve the quality of the setup/install docs, and I hope this is clearer now!
If you want to get started super quickly, you can feed Trigger a CSV-formatted NetDevices file, like so:
test1-abc.net.example.com,juniper
test2-abc.net.example.com,cisco
Just put that in a file, e.g. /tmp/netdevices.csv and then set the NETDEVICES_SOURCE environment variable:
export NETDEVICES_SOURCE=/tmp/netdevices.csv
And then fire up python and continue on with your examples and you should be good to go!
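A quick sanity check could then look like this (assuming the CSV above, and that the NetDevices.find lookup behaves as in the Trigger docs):

import os
# Make sure the variable is set for this process too; path from the example above.
os.environ.setdefault('NETDEVICES_SOURCE', '/tmp/netdevices.csv')

from trigger.netdevices import NetDevices

nd = NetDevices()
print(nd.find('test1-abc.net.example.com'))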
I found that the default location, /etc/trigger/netdevices.xml, wasn't listed in the setup instructions. They did indicate copying a file from the Trigger source folder:
cp conf/netdevices.json /etc/trigger/netdevices.json
However, the installation page didn't explain how to specify this file instead of the default NETDEVICES_SOURCE. As soon as there was a file in my /etc/trigger folder that NETDEVICES_SOURCE pointed to, it worked.
To get the "verifying functionality" examples working right away with minimal fuss, I recommend this:
cp conf/netdevices.xml /etc/trigger/netdevices.xml
Using Ubuntu 14.04 with Python 2.7.3
I am using LibSVM and I used grid.py for SVM. The problem is that I ran grid.py for more than an hour, but it's not giving any output. The error message it gives is as follows:
%%%%%%%%%%%%%%%%%%%%%%%
Traceback (most recent call last):
File "grid.py", line 266, in run
if rate is None: raise RuntimeError('get no rate')
RuntimeError: get no rate
worker local quit.
%%%%%%%%%%%%%%%
Can anybody tell me what the problem is? And what does "worker local quit" mean?
If anybody wants more information about the implementation or grid.py, please feel free to ask. My dataset has more than 9000 rows and 8 different columns.
Are you calling grid.py with either the parameter -log2c or -log2g?
It seems that the problem arises when LocalWorker.run_one returns None.
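For reference, a typical grid.py invocation from the LIBSVM tools spells out both ranges explicitly (heart_scale is the LIBSVM sample dataset; substitute your own scaled training file):

python grid.py -log2c -5,15,2 -log2g 3,-15,-2 -v 5 heart_scale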