Can I patch a static method in Python?

I have a class in Python that contains a static method. I want to mock.patch it in order to see whether it was called. When trying to do so I get an error:
AttributeError: path.to.A does not have the attribute 'foo'
My setup can be simplified to:
class A:
    @staticmethod
    def foo():
        ...  # bla bla
Now the test code that fails with the error:
def test():
    with mock.patch.object("A", "foo") as mock_helper:
        mock_helper.return_value = ""
        A.some_other_static_function_that_could_call_foo()
        assert mock_helper.call_count == 1

You can always use patch as a decorator, my preferred way of patching things:
from mock import patch

@patch('absolute.path.to.class.A.foo')
def test(mock_foo):
    mock_foo.return_value = ''
    # ... continue with test here
EDIT: Your error seems to hint that you have a problem elsewhere in your code. Possibly some signal or trigger that requires this method that is failing?
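For the setup in the question, a minimal runnable sketch might look like this (assuming class A lives in a module named mymodule.py; adjust the patch target to your real import path):
from unittest import mock

import mymodule  # hypothetical module containing class A from the question

@mock.patch('mymodule.A.foo')
def test(mock_foo):
    mock_foo.return_value = ''
    # The static method is looked up through the class, so the patched
    # attribute is what some_other_static_function_that_could_call_foo() calls.
    mymodule.A.some_other_static_function_that_could_call_foo()
    assert mock_foo.call_count == 1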

I was getting that same error message when trying to patch a method using the @patch decorator.
Here is the full error I got.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/tornado/testing.py", line 136, in __call__
    result = self.orig_method(*args, **kwargs)
  File "/usr/local/lib/python3.6/unittest/mock.py", line 1171, in patched
    arg = patching.__enter__()
  File "/usr/local/lib/python3.6/unittest/mock.py", line 1243, in __enter__
    original, local = self.get_original()
  File "/usr/local/lib/python3.6/unittest/mock.py", line 1217, in get_original
    "%s does not have the attribute %r" % (target, name)
AttributeError: <module 'py-repo.models.Device' from '/usr/share/projects/py-repo/models/Device.py'> does not have the attribute 'get_device_from_db'
What I ended up doing to fix this was changing the patch decorator I used
from
@patch('py-repo.models.Device.get_device_from_db')
to
@patch.object(DeviceModel, 'get_device_from_db')
I really wish I could explain further why that was the issue, but I'm still pretty new to Python myself. The patch documentation was especially helpful in figuring out what was available to work with. One important note: get_device_from_db uses the @staticmethod decorator, which may be changing things. Hope it helps, though.

What worked for me:
@patch.object(RedisXComBackend, '_handle_conn')
def test_xcoms(self, mock_method: MagicMock):
    mock_method.return_value = fakeredis.FakeStrictRedis()
'_handle_conn' (a static method) looks like this:
@staticmethod
def _handle_conn():
    redis_hook = RedisHook()
    conn: Redis = redis_hook.get_conn()
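Filled out with the imports it relies on, that test might look roughly like this (the import path for RedisXComBackend is illustrative, and fakeredis is assumed to be installed):
from unittest import TestCase
from unittest.mock import MagicMock, patch

import fakeredis

from backends.redis_xcom import RedisXComBackend  # hypothetical import path for the backend under test

class TestRedisXComBackend(TestCase):
    @patch.object(RedisXComBackend, '_handle_conn')
    def test_xcoms(self, mock_method: MagicMock):
        # The static method is replaced on the class, so any code under test that
        # calls RedisXComBackend._handle_conn() receives the in-memory fake client.
        mock_method.return_value = fakeredis.FakeStrictRedis()
        # ... exercise the backend and make assertions here ...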

Related

How do I mock a class's function's return value?

I have a method in Python that looks like this (in comicfile.py):
from zipfile import ZipFile
...

class ComicFile():
    ...
    def page_count(self):
        """Return the number of pages in the file."""
        if self.file == None:
            raise ComicFile.FileNoneError()
        if not os.path.isfile(self.file):
            raise ComicFile.FileNotFoundError()
        with ZipFile(self.file) as zip:
            members = zip.namelist()
            pruned = self.prune_dirs(members)
            length = len(pruned)
            return length
I'm trying to write a unit test for this (I've already tested prune_dirs), and so far this is what I have (test_comicfile.py):
import unittest
import unittest.mock
import comicfile
...

class TestPageCount(unittest.TestCase):
    def setUp(self):
        self.comic_file = comicfile.ComicFile()

    @unittest.mock.patch('comicfile.ZipFile')
    def test_page_count(self, mock_zip_file):
        # Store as tuples to use as dictionary keys.
        members_dict = {('dir/', 'dir/file1', 'dir/file2'): 2,
                        ('file1.jpg', 'file2.jpg', 'file3.jpg'): 3}
        # Make the file point to something to prevent FileNoneError.
        self.comic_file.file = __file__
        for file_tuple, count in members_dict.items():
            mock_zip_file.return_value.namelist = list(file_tuple)
            self.assertEqual(count, self.comic_file.page_count())
When I run this test, I get the following:
F..ss....
======================================================================
FAIL: test_page_count (test_comicfile.TestPageCount)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/Cellar/python3/3.5.1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/unittest/mock.py", line 1157, in patched
    return func(*args, **keywargs)
  File "/Users/chuck/Dropbox/Projects/chiv/chiv.cbstar/test_comicfile.py", line 86, in test_page_count
    self.assertEqual(count, self.comic_file.page_count())
AssertionError: 2 != 0
----------------------------------------------------------------------
Ran 9 tests in 0.010s
FAILED (failures=1, skipped=2)
OK, so self.comic_file.page_count() is returning 0. I tried placing the following line after members = zip.namelist() in page_count.
print('\nmembers -> ' + str(members))
During the test, I get this:
members -> <MagicMock name='ZipFile().__enter__().namelist()' id='4483358280'>
I'm quite new to unit testing and still hazy on unittest.mock, but my understanding is that mock_zip_file.return_value.namelist = list(file_tuple) should have made the namelist method of the mocked ZipFile return each of the file_tuple contents in turn. What it is actually doing, I have no idea.
I think what I'm trying to do here is clear, but I can't seem to figure out how to override the namelist method so that my unit test is only testing this one function instead of having to deal with ZipFile as well.
ZipFile is instantiated as a context manager. To mock it you have to refer to its __enter__ method.
mock_zip_file.return_value.__enter__.return_value.namelist.return_value = list(file_tuple)
What you're trying to do is very clear, but the context manager adds complexity to the mocking.
One trick: a mock records every call made to it, and in this example it reports a call at:
members -> <MagicMock name='ZipFile().__enter__().namelist()' id='4483358280'>
This can guide you in setting up your mocked object: replace each () with .return_value.
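Applied to the test in the question, the patched test method might look like this (a sketch assuming the surrounding TestPageCount class shown above and a prune_dirs that drops directory entries):
@unittest.mock.patch('comicfile.ZipFile')
def test_page_count(self, mock_zip_file):
    members_dict = {('dir/', 'dir/file1', 'dir/file2'): 2,
                    ('file1.jpg', 'file2.jpg', 'file3.jpg'): 3}
    self.comic_file.file = __file__
    for file_tuple, count in members_dict.items():
        # page_count() uses ZipFile as a context manager, so the namelist() it
        # sees is ZipFile().__enter__().namelist(): each () becomes .return_value.
        mock_zip_file.return_value.__enter__.return_value.namelist.return_value = list(file_tuple)
        self.assertEqual(count, self.comic_file.page_count())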

@mock.patch isn't raising an attribute error even after setting side_effect

I'm attempting to fix a bug in the Python package caniusepython3 which arises because distlib isn't parsing PyPI projects correctly. I've written this unit test:
@mock.patch('distlib.locators.locate')
def test_blocking_dependencies_locators_fails(self, distlib_mock):
    """
    Testing the work around for //bitbucket.org/pypa/distlib/issue/59/
    """
    py3 = {'py3_project': ''}
    breaking_project = 'test_project'
    distlib_mock.locators.locate.return_value = "foo"
    distlib_mock.locators.locate.side_effect = AttributeError()
    got = dependencies.blocking_dependencies([breaking_project], py3)
    # If you'd like to test that a message is logged we can use
    # testfixtures.LogCapture or stdout redirects.
So that when distlib fixes the error in its next release, the test case will still be valid.
The problem is that the MagicMock never raises an AttributeError as I expected, and instead returns a string representation of the magic mock object:
try:
    # sets dist to <MagicMock name='locate()' id='4447530792'>
    dist = distlib.locators.locate(project)
except AttributeError:
    # This is a work around //bitbucket.org/pypa/distlib/issue/59/
    log.warning('{0} found but had to be skipped.'.format(project))
    continue
And it causes this stack trace later on, because the object repr is what gets returned:
======================================================================
ERROR: Testing the work around for //bitbucket.org/pypa/distlib/issue/59/
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/Cellar/python3/3.4.2_1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/unittest/mock.py", line 1136, in patched
    return func(*args, **keywargs)
  File "/Users/alexlord/git/caniusepython3/caniusepython3/test/test_dependencies.py", line 81, in test_blocking_dependencies_locators_fails
    got = dependencies.blocking_dependencies([breaking_project], py3)
  File "/Users/alexlord/git/caniusepython3/caniusepython3/dependencies.py", line 119, in blocking_dependencies
    return reasons_to_paths(reasons)
  File "/Users/alexlord/git/caniusepython3/caniusepython3/dependencies.py", line 43, in reasons_to_paths
    parent = reasons[blocker]
  File "/Users/alexlord/git/caniusepython3/caniusepython3/dependencies.py", line 29, in __getitem__
    return super(LowerDict, self).__getitem__(key.lower())
nose.proxy.KeyError: <MagicMock name='locate().name.lower().lower()' id='4345929400'>
-------------------- >> begin captured logging << --------------------
ciu: INFO: Checking top-level project: test_project ...
ciu: INFO: Locating <MagicMock name='locate().name.lower()' id='4344734944'>
ciu: INFO: Dependencies of <MagicMock name='locate().name.lower()' id='4344734944'>: []
--------------------- >> end captured logging << ---------------------
Why is the MagicMock not raising an exception when distlib.locators.locate() is called?
Update: I was able to get this unit test to work when I switched to using
def test_blocking_dependencies_locators_fails(self):
    """
    Testing the work around for //bitbucket.org/pypa/distlib/issue/59/
    """
    with mock.patch.object(distlib.locators, 'locate') as locate_mock:
        py3 = {'py3_project': ''}
        breaking_project = 'test_project'
        locate_mock.side_effect = AttributeError()
        got = dependencies.blocking_dependencies([breaking_project], py3)
        # If you'd like to test that a message is logged we can use
        # testfixtures.LogCapture or stdout redirects.
But I'm still wondering what I did wrong with the decorator format.
When you use @mock.patch, it mocks what you tell it, and passes that mock object as a parameter. Thus, your distlib_mock parameter is the mock locate function. You're effectively setting attributes on distlib.locators.locate.locators.locate. Set the attributes directly on the provided mock, and things should work better.
@mock.patch('distlib.locators.locate')
def test_blocking_dependencies_locators_fails(self, locate_mock):
    # ...
    locate_mock.return_value = "foo"
    locate_mock.side_effect = AttributeError()
    # ...

How can I mock sqlite3.Cursor

I've been pulling my hair out trying to figure out how to mock the sqlite3.Cursor class, specifically the fetchall method.
Consider the following code sample
import sqlite3
from mock import Mock, patch
from nose.tools import assert_false

class Foo:
    def check_name(self, name):
        conn = sqlite3.connect('temp.db')
        c = conn.cursor()
        c.execute('SELECT * FROM foo where name = ?', (name,))
        if len(c.fetchall()) > 0:
            return True
        return False

@patch('sqlite3.Cursor.fetchall', Mock(return_value=['John', 'Bob']))
def test_foo():
    foo = Foo()
    assert_false(foo.check_name('Cane'))
Running nosetests results in a not-so-fun error:
E
======================================================================
ERROR: temp.test_foo
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/koddsson/.virtualenvs/temp/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/koddsson/.virtualenvs/temp/lib/python2.7/site-packages/mock.py", line 1214, in patched
    patching.__exit__(*exc_info)
  File "/home/koddsson/.virtualenvs/temp/lib/python2.7/site-packages/mock.py", line 1379, in __exit__
    setattr(self.target, self.attribute, self.temp_original)
TypeError: can't set attributes of built-in/extension type 'sqlite3.Cursor'
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (errors=1)
Should I not be able to mock the fetchall method or am I doing something horribly wrong?
I would take the approach of patching out sqlite3 imported in your module and then work from there.
Let's assume your module is named what.py.
I would patch out what.sqlite3 and then mock the return value of .connect().cursor().fetchall.
Here is a more complete example:
from mock import patch
from nose.tools import assert_true, assert_false

from what import Foo

def test_existing_name():
    with patch('what.sqlite3') as mocksql:
        mocksql.connect().cursor().fetchall.return_value = ['John', 'Bob']
        foo = Foo()
        assert_true(foo.check_name('John'))
I have found a way to mock sqlite3.Cursor in my tests:
from sqlite3 import Cursor
from mock import MagicMock

cursor = MagicMock(Cursor)
cursor.fetchall.return_value = [{'column1': 'hello', 'column2': 'world'}]
I am pretty new to Python, but this is how I would do it in Java.
You can't mock everything and databases are particularly tricky. I often find that the right thing to do (esp. with Sqlite since it's so easy) is to load up a test database with mock data and use that in the tests (i.e. fixtures). After all, what you really need to test is whether your code is accessing and querying the database correctly.
The question you are usually trying to answer in a test like this is "If there is X data in the DB and I execute query Y, does that query return Z like I expect", or at a higher level "If I pass parameter X to my method, does it return value Z (based on getting Y from the db)."
In your example, the real question is "Is SELECT * FROM foo where name = ? the right query in this method?" but you don't answer it if you mock the response.
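As a rough sketch of that fixture approach (assuming the module under test is named what.py as in the earlier answer, and that check_name opens temp.db as shown in the question):
import sqlite3
import unittest

from what import Foo  # the class from the question

class TestCheckName(unittest.TestCase):
    def setUp(self):
        # Build a small fixture database in the same file check_name() opens.
        self.conn = sqlite3.connect('temp.db')
        self.conn.execute('CREATE TABLE IF NOT EXISTS foo (name TEXT)')
        self.conn.execute("INSERT INTO foo (name) VALUES ('John')")
        self.conn.commit()

    def tearDown(self):
        self.conn.execute('DROP TABLE foo')
        self.conn.commit()
        self.conn.close()

    def test_existing_name(self):
        self.assertTrue(Foo().check_name('John'))

    def test_missing_name(self):
        self.assertFalse(Foo().check_name('Cane'))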

web.py sessions: AttributeError: 'ThreadedDict' object has no attribute 'count'

I just ran this simple code snippet provided by the wiki, because I couldn't get sessions working:
import web

web.config.debug = False

urls = (
    "/count", "count",
    "/reset", "reset"
)
app = web.application(urls, locals())
session = web.session.Session(app, web.session.DiskStore('sessions'), initializer={'count': 0})

class count:
    def GET(self):
        session.count += 1
        return str(session.count)

class reset:
    def GET(self):
        session.kill()
        return ""

if __name__ == "__main__":
    app.run()
But it results in this error:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/web/application.py", line 237, in process
    return self.handle()
  File "/usr/local/lib/python2.7/dist-packages/web/application.py", line 228, in handle
    return self._delegate(fn, self.fvars, args)
  File "/usr/local/lib/python2.7/dist-packages/web/application.py", line 411, in _delegate
    return handle_class(cls)
  File "/usr/local/lib/python2.7/dist-packages/web/application.py", line 387, in handle_class
    return tocall(*args)
  File "temp.py", line 12, in GET
    session.count += 1
  File "/usr/local/lib/python2.7/dist-packages/web/session.py", line 71, in __getattr__
    return getattr(self._data, name)
AttributeError: 'ThreadedDict' object has no attribute 'count'
Is web.py not compatible with Python 2.7.3? I'm running this on web.py's built-in webserver, on Ubuntu 12.04.
session.count += 1 is equivalent to session.count = session.count + 1, so session.count must already exist for this to work.
Add the following check to make it work:
if 'count' not in session:
    session.count = 0
session.count += 1
There is also another way, shown in the docs' very simple session example:
try:
    s.click += 1
except AttributeError:
    s.click = 1
As for the try...except trick, I'm not convinced it's the best way to do this either (it's not clean at all).
As said previously, the Session constructor offers a way to initialize the session's variables.
I'm not really sure we can rely on the "very simple session" example.
First, we hardly get any explanation of the purpose of the various variables. For instance, what is the purpose of the db_parameter dict?
Last but not least, it needs a serious update: the provided code simply doesn't work with the current framework.
There is simply no web.ctx.session.
By the way, I implemented a simple counter like the one in the example.
The error you got is due to a drastic change in the Session API.
You cannot just read "counter" from your session.
It is more like session.store.store_instance.get('counter'), where store_instance is either a shelf or a db.
Like I said, the official documentation needs a serious update.
That said, the docstrings are in better shape: to make progress I started IPython and looked at every possibility I had.
I know it's pure guessing, but the naming is good, so you can figure out what to do.
I will submit my example to the web.py team so they can update the official docs.
Until my pull request is accepted on GitHub, here is a snippet illustrating a simple incrementer:
import web
import shelve

urls = (
    '/add', 'counter',
    '/reset', 'reset'
)

shelf = shelve.open('session')
shelfStore = web.session.ShelfStore(shelf)

app = web.application(urls, globals())
s = web.session.Session(app, shelfStore)

class counter:
    def GET(self):
        numberToAdd = web.input().get('number')
        if not numberToAdd:
            numberToAdd = 1
        try:
            print numberToAdd
            s.store.shelf["count"] += int(numberToAdd)
        except Exception:
            s.store.shelf["count"] = 1
        return s.store.shelf.get("count")

class reset:
    def GET(self):
        s.store.shelf.clear()

if __name__ == "__main__":
    app.run()
The problem is the Python version. I had the same problem and solved it by running the script with Python 2.7: just do python2.7 code.py and the sessions work perfectly.
It's a pity that the web.py documentation is so poor.

PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal

This is the error I got today at filmaster.com:
PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal
What does that mean exactly? It doesn't seem to make a lot of sense...
It seems to be connected with Django caching. You can see the whole traceback here:
Traceback (most recent call last):
  File "/home/filmaster/django-trunk/django/core/handlers/base.py", line 92, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/home/filmaster/film20/film20/core/film_views.py", line 193, in show_film
    workflow.set_data_for_authenticated_user()
  File "/home/filmaster/film20/film20/core/film_views.py", line 518, in set_data_for_authenticated_user
    object_id = self.the_film.parent.id)
  File "/home/filmaster/film20/film20/core/film_helper.py", line 179, in get_others_ratings
    set_cache(CACHE_OTHERS_RATINGS, str(object_id) + "_" + str(user_id), userratings)
  File "/home/filmaster/film20/film20/utils/cache_helper.py", line 80, in set_cache
    return cache.set(CACHE_MIDDLEWARE_KEY_PREFIX + full_path, result, get_time(cache_string))
  File "/home/filmaster/django-trunk/django/core/cache/backends/memcached.py", line 37, in set
    self._cache.set(smart_str(key), value, timeout or self.default_timeout)
  File "/usr/lib/python2.5/site-packages/cmemcache.py", line 128, in set
    val, flags = self._convert(val)
  File "/usr/lib/python2.5/site-packages/cmemcache.py", line 112, in _convert
    val = pickle.dumps(val, 2)
PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal
And the source code for Filmaster can be downloaded from here: bitbucket.org/filmaster/filmaster-test
Any help will be greatly appreciated.
I got this error when running in a Jupyter notebook. I think the problem was that I was using %load_ext autoreload with %autoreload 2. Restarting my kernel and rerunning solved the problem.
One oddity of pickle is that the way you import a class before you pickle one of its instances can subtly change the pickled object. Pickle requires you to have imported the object identically both before you pickle it and before you unpickle it.
So for example:
from a.b import c
C = c()
pickler.dump(C)
will make a subtly different object (sometimes) to:
from a import b
C = b.c()
pickler.dump(C)
Try fiddling with your imports, it might correct the problem.
I will demonstrate the problem with simple Python classes in Python2.7:
In [13]: class A: pass
In [14]: class B: pass
In [15]: A
Out[15]: <class __main__.A at 0x7f4089235738>
In [16]: B
Out[16]: <class __main__.B at 0x7f408939eb48>
In [17]: A.__name__ = "B"
In [18]: pickle.dumps(A)
---------------------------------------------------------------------------
PicklingError: Can't pickle <class __main__.B at 0x7f4089235738>: it's not the same object as __main__.B
This error is shown because we are trying to dump A, but since we changed its name to refer to another object "B", pickle is confused about which object to dump: class A or class B. Apparently the pickle developers are smart and have already put a check on this behavior.
Solution:
Check whether the object you are trying to dump has a name that conflicts with another object.
I have demonstrated debugging for the case presented above with ipython and ipdb below:
PicklingError: Can't pickle <class __main__.B at 0x7f4089235738>: it's not the same object as __main__.B
In [19]: debug
> /<path to pickle dir>/pickle.py(789)save_global()
    787             raise PicklingError(
    788                 "Can't pickle %r: it's not the same object as %s.%s" %
--> 789                 (obj, module, name))
    790
    791         if self.proto >= 2:
ipdb> pp (obj, module, name)    # <------------- you are trying to dump obj, which is class A from the pickle.dumps(A) call
(<class __main__.B at 0x7f4089235738>, '__main__', 'B')
ipdb> getattr(sys.modules[module], name)    # <------------- this is the conflicting definition in the module (__main__ here) with the same name ('B' here)
<class __main__.B at 0x7f408939eb48>
I hope this saves some headaches! Adios!!
I can't explain why this is failing either, but my own solution to fix this was to change all my code from doing
from point import Point
to
import point
This one change and it worked. I'd love to know why... hth
There can be issues starting a process with multiprocessing when the target is __init__. Here's a demo:
import multiprocessing as mp

class SubProcClass:
    def __init__(self, pipe, startloop=False):
        self.pipe = pipe
        if startloop:
            self.do_loop()

    def do_loop(self):
        while True:
            req = self.pipe.recv()
            self.pipe.send(req * req)

class ProcessInitTest:
    def __init__(self, spawn=False):
        if spawn:
            mp.set_start_method('spawn')
        (self.msg_pipe_child, self.msg_pipe_parent) = mp.Pipe(duplex=True)

    def start_process(self):
        subproc = SubProcClass(self.msg_pipe_child)
        self.trig_proc = mp.Process(target=subproc.do_loop, args=())
        self.trig_proc.daemon = True
        self.trig_proc.start()

    def start_process_fail(self):
        self.trig_proc = mp.Process(target=SubProcClass.__init__, args=(self.msg_pipe_child,))
        self.trig_proc.daemon = True
        self.trig_proc.start()

    def do_square(self, num):
        # Note: this is a synchronous usage of mp,
        # which doesn't make sense. But this is just for demo
        self.msg_pipe_parent.send(num)
        msg = self.msg_pipe_parent.recv()
        print('{}^2 = {}'.format(num, msg))
Now, with the above code, if we run this:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=True)
    t.start_process_fail()
    for i in range(1000):
        t.do_square(i)
We get this error:
Traceback (most recent call last):
  File "start_class_process1.py", line 40, in <module>
    t.start_process_fail()
  File "start_class_process1.py", line 29, in start_process_fail
    self.trig_proc.start()
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/context.py", line 274, in _Popen
    return Popen(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_spawn_posix.py", line 33, in __init__
    super().__init__(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_fork.py", line 21, in __init__
    self._launch(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_spawn_posix.py", line 48, in _launch
    reduction.dump(process_obj, fp)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function SubProcClass.__init__ at 0x10073e510>: it's not the same object as __main__.__init__
And if we change it to use fork instead of spawn:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=False)
    t.start_process_fail()
    for i in range(1000):
        t.do_square(i)
We get this error:
Process Process-1:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 254, in _bootstrap
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
TypeError: __init__() missing 1 required positional argument: 'pipe'
But if we call the start_process method, which doesn't call __init__ in the mp.Process target, like this:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=False)
    t.start_process()
    for i in range(1000):
        t.do_square(i)
It works as expected (whether we use spawn or fork).
Did you somehow reload(decimal), or monkeypatch the decimal module to change the Decimal class? These are the two things most likely to produce such a problem.
The same happened to me; restarting the kernel worked for me.
Due to reputation restrictions I cannot comment, but the answer of Salim Fahedy, and following the debugging path it suggests, set me up to identify a cause for this error, even when using dill instead of pickle:
Under the hood, dill also accesses some of pickle's functions, and in pickle._Pickler.save_global() there is an import happening. To me this seems more of a "hack" than a real solution, as this method fails as soon as the class of the instance you are trying to pickle is not imported from the lowest level of the package the class is in. Sorry for the bad explanation; maybe examples are more suitable.
The following example would fail:
from oemof import solph
...
(some code here, giving you the object 'es')
...
model = solph.Model(es)
pickle.dump(model, open('file.pickle', 'wb'))
It fails because, while you can use solph.Model, the class actually is oemof.solph.models.Model, for example. save_global() (or some function before it that passes the object along) resolves that, but then imports Model from oemof.solph.models and throws an error, because it's not the same import as from oemof import solph (or something like that; I'm not 100% sure about the inner workings).
The following example would work:
from oemof.solph.models import Model
...
(some code here, giving you the object 'es')
...
model = Model(es)
pickle.dump(model, open('file.pickle', 'wb'))
It works because now the Model class is imported from the same place that pickle._Pickler.save_global() imports the comparison object (obj2) from.
Long story short: When pickling an object, make sure to import the class from the lowest possible level.
Addition: This also seems to apply to objects stored in the attributes of the class-instance you want to pickle. If for example model had an attribute es that itself is an object of the class oemof.solph.energysystems.EnergySystem, we would need to import it as:
from oemof.solph.energysystems import EnergySystem
es = EnergySystem()
My issue was that I had a function with the same name defined twice in a file. So I guess it was confused about which one it was trying to pickle.
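A minimal way to reproduce that situation (the helper name below is made up for illustration):
import pickle

def helper():
    return 1

_original = helper  # keep a reference to the first definition

def helper():  # a second definition re-binds the module-level name
    return 2

try:
    # Functions are pickled by reference, via their module-level name,
    # and that name now points at the second definition.
    pickle.dumps(_original)
except pickle.PicklingError as exc:
    print(exc)  # Can't pickle <function helper ...>: it's not the same object as __main__.helper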
I had the same problem while debugging in Spyder. Everything worked normally if I ran the program, but the PicklingError appeared as soon as I started debugging.
However, once I chose the option Execute in dedicated console in Run configuration per file (shortcut: Ctrl+F6), everything worked as expected. I don't know exactly why that helps.
Note: In my script I have many imports like
from PyQt5.QtWidgets import *
from PyQt5.Qt import *
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
import os, sys, re, math
My basic understanding was that, because of the star (*) imports, I was getting this PicklingError.
I had a problem that no one has mentioned yet. I have a package with an __init__ file that does, among other things:
from .mymodule import cls
Then my top-level code says:
import mypkg
obj = mypkg.cls()
The problem with this is that in my top-level code, the type appears to be mypkg.cls, but it's actually mypkg.mymodule.cls. Using the full path:
obj = mypkg.mymodule.cls()
avoids the error.
I had the same error in Spyder. It turned out to be simple in my case: I had defined a class named "Class" in a file also named "Class". I changed the name of the class in the definition to "Class_obj". pickle.dump(Class_obj, fileh) works, but pickle.dump(Class, fileh) does not when it is saved in a file named "Class".
This miraculous function solves the mentioned error, though for me it turned into another error, 'permission denied', which came out of the blue. However, I guess it might help someone find a solution, so I am still posting the function:
import tempfile
import time
from tensorflow.keras.models import save_model, load_model, Model

# Hotfix function
def make_keras_picklable():
    def __getstate__(self):
        model_str = ""
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            save_model(self, fd.name, overwrite=True)
            model_str = fd.read()
        d = {'model_str': model_str}
        return d

    def __setstate__(self, state):
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            fd.write(state['model_str'])
            fd.flush()
            model = load_model(fd.name)
        self.__dict__ = model.__dict__

    cls = Model
    cls.__getstate__ = __getstate__
    cls.__setstate__ = __setstate__

# Run the function
make_keras_picklable()

### create then save your model here ###
