Python jsonpickle error: 'OrderedDict' object has no attribute '_OrderedDict__root'

I'm hitting this exception with jsonpickle, when trying to pickle a rather complex object that unfortunately I'm not sure how to describe here. I know that makes it tough to say much, but for what it's worth:
>>> frozen = jsonpickle.encode(my_complex_object_instance)
>>> thawed = jsonpickle.decode(frozen)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/jsonpickle/__init__.py", line 152, in decode
    return unpickler.decode(string, backend=backend, keys=keys)
  :
  :
  File "/Library/Python/2.7/site-packages/jsonpickle/unpickler.py", line 336, in _restore_from_dict
    instance[k] = value
  File "/Library/Python/2.7/site-packages/botocore/vendored/requests/packages/urllib3/packages/ordered_dict.py", line 49, in __setitem__
    root = self.__root
AttributeError: 'OrderedDict' object has no attribute '_OrderedDict__root'
I don't find much assistance when googling the error. I do see what looks like the same issue being resolved some time in the past for simpler objects:
https://github.com/jsonpickle/jsonpickle/issues/33
The cited example in that report works for me:
>>> jsonpickle.decode(jsonpickle.encode(collections.OrderedDict()))
OrderedDict()
>>> jsonpickle.decode(jsonpickle.encode(collections.OrderedDict(a=1)))
OrderedDict([(u'a', 1)])
Has anyone ever run into this themselves and found a solution? I ask with the understanding that my case may be "differently idiosyncratic" than another known example.

For me, it seems to be the requests module that runs into problems when I .decode(). After looking at the jsonpickle code a bit, I decided to fork it and change the following lines to see what was going on (and I ended up keeping a private copy of jsonpickle with the changes so I can move forward).
In jsonpickle/unpickler.py (in my version it's line 368), search for the if statement section in the method _restore_from_dict():
if (util.is_noncomplex(instance) or
        util.is_dictionary_subclass(instance)):
    instance[k] = value
else:
    setattr(instance, k, value)
and change it to this (it will log an ERROR for the assignments that fail, and then you can either keep the patched code in place or fix the OrderedDicts whose version lacks __root):
if (util.is_noncomplex(instance) or
        util.is_dictionary_subclass(instance)):
    # Currently requests.adapters.HTTPAdapter is using a non-standard
    # version of OrderedDict which doesn't have a _OrderedDict__root
    # attribute
    try:
        instance[k] = value
    except AttributeError as e:
        import logging
        import pprint
        warnmsg = 'Unable to unpickle {}[{}]={}'.format(
            pprint.pformat(instance), pprint.pformat(k), pprint.pformat(value))
        logging.error(warnmsg)
else:
    setattr(instance, k, value)
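For what it's worth, another way around this class of problem, instead of patching jsonpickle, is to keep the offending object (here, whatever holds the requests session/adapter) out of the pickled state entirely. jsonpickle honors the pickle protocol's __getstate__/__setstate__ hooks. This is only a sketch under that assumption; MyComplexObject and its session attribute are hypothetical stand-ins for your own object:
import jsonpickle

class MyComplexObject(object):
    def __init__(self):
        self.data = {'a': 1}
        self.session = None  # imagine a requests.Session() living here

    def __getstate__(self):
        # Drop the attribute that jsonpickle cannot round-trip.
        state = self.__dict__.copy()
        state.pop('session', None)
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.session = None  # recreate lazily after thawing

frozen = jsonpickle.encode(MyComplexObject())
thawed = jsonpickle.decode(frozen)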

Related

Multiprocessing Manager failing on very simple example with pool.apply_async

I'm seeing some unexpected behavior in my code related to python multiprocessing, and the Manager class in particular. I wrote out a super simple example to try and better understand what's going on:
import multiprocessing as mp
from collections import defaultdict

def process(d):
    print('doing the process')
    d['a'] = []
    d['a'].append(1)
    d['a'].append(2)

def main():
    pool = mp.Pool(mp.cpu_count())
    with mp.Manager() as manager:
        d = manager.dict({'c': 2})
        result = pool.apply_async(process, args=(d))
        print(result.get())
        pool.close()
        pool.join()
        print(d)

if __name__ == '__main__':
    main()
This fails, and the stack trace printed from result.get() is as follows:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "<string>", line 2, in __iter__
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py", line 825, in _callmethod
    proxytype = self._manager._registry[token.typeid][-1]
AttributeError: 'NoneType' object has no attribute '_registry'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "mp_test.py", line 34, in <module>
    main()
  File "mp_test.py", line 25, in main
    print(result.get())
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 657, in get
    raise self._value
AttributeError: 'NoneType' object has no attribute '_registry'
I'm still unclear on what's happening here. This seems to me to be a very, very straightforward application of the Manager class. It's nearly a copy of the actual example used in the official Python documentation, with the only difference being that I'm using a pool and running the process with apply_async. I'm doing this because that's what I'm using in my actual project.
To clarify, I wouldn't get a stack trace if I didn't have the result = and print(result.get()) in there. I just see {'c': 2} printed when I run the script, which indicated to me that something was going wrong and wasn't being shown.
A couple things to start with: first, this isn't the code you ran. The code you posted has
result = pool.apply_async(process2, args=(d))
but there is no process2() defined. Assuming process was intended, the next thing is the
args=(d)
part. That's the same as typing
args=d
but that's not what's needed. You need to pass a sequence of the intended arguments. So you need to change that part to
args=(d,) # build a 1-tuple
or
args=[d] # build a list
Then the output changes, to
{'c': 2, 'a': []}
Why aren't 1 and 2 in the 'a' list? Because it's only the dict itself that lives on the manager server.
d['a'].append(1)
first gets the mapping for 'a' from the server, which is an empty list. But that empty list is not shared in any way - it's local to process(). You append 1 to it, and then it's thrown away - the server knows nothing about it. Same thing for 2.
To get what you want, you need to "do something" to tell the manager server about what you changed; e.g.,
d['a'] = L = []
L.append(1)
L.append(2)
d['a'] = L
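Putting both fixes together (the 1-tuple args and the explicit reassignment), a corrected version of the original script would look roughly like this sketch:
import multiprocessing as mp

def process(d):
    print('doing the process')
    L = []
    L.append(1)
    L.append(2)
    d['a'] = L  # reassign so the manager server sees the change

def main():
    pool = mp.Pool(mp.cpu_count())
    with mp.Manager() as manager:
        d = manager.dict({'c': 2})
        result = pool.apply_async(process, args=(d,))  # note the 1-tuple
        print(result.get())
        pool.close()
        pool.join()
        print(d)  # {'c': 2, 'a': [1, 2]}

if __name__ == '__main__':
    main()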

How to use TTreeReader in PyROOT

I'm trying to get up and running using the TTreeReader approach to reading TTrees in PyROOT. As a guide, I am using the ROOT 6 Analysis Workshop (http://root.cern.ch/drupal/content/7-using-ttreereader) and its associated ROOT file (http://root.cern.ch/root/files/tutorials/mockupx.root).
from ROOT import *
fileName = "mockupx.root"
file = TFile(fileName)
tree = file.Get("MyTree")
treeReader = TTreeReader("MyTree", file)
After this, I am a bit lost. I attempt to access variable information using the TTreeReader object and it doesn't quite work:
>>> rvMissingET = TTreeReaderValue(treeReader, "missingET")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/ROOT/v6-03-01/root/lib/ROOT.py", line 198, in __call__
    result = _root.MakeRootTemplateClass( *newargs )
SystemError: error return without exception set
Where am I going wrong here?
TTreeReaderValue is a templated class, as shown in the example on the TTreeReader documentation, so you need to specify the template type.
You can do this with
rvMissingET = ROOT.TTreeReaderValue(ROOT.Double)(treeReader, "missingET")
The Python built-ins can be used for int and float types, e.g.
rvInt = ROOT.TTreeReaderValue(int)(treeReader, "intBranch")
rvFloat = ROOT.TTreeReaderValue(float)(treeReader, "floatBranch")
Also note that using TTreeReader in PyROOT is not recommended. (If you're looking for faster ntuple branch access in Python, you might look into the Ntuple class I wrote.)
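For completeness, a minimal event loop with the reader would look roughly like this. This is only a sketch following the ROOT.Double instantiation from the answer above; Next() advances the reader to the next entry, and Get() dereferences the value (the equivalent of *rv in C++):
import ROOT

f = ROOT.TFile("mockupx.root")
treeReader = ROOT.TTreeReader("MyTree", f)
rvMissingET = ROOT.TTreeReaderValue(ROOT.Double)(treeReader, "missingET")

while treeReader.Next():
    met = rvMissingET.Get()  # dereference the reader value
    print(met)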

TypeError: cannot concatenate 'str' and 'NoneType' objects in Python

I have an issue where a function returns a number. When I then try to assemble a URL that includes that number, I am met with failure.
Specifically the error I get is
TypeError: cannot concatenate 'str' and 'NoneType' objects
Not sure where to go from here.
Here is the relevant piece of code:
# Get the raw ID number of the current configuration
configurationID = generate_configurationID()
# Update config name at in Cloud
updateConfigLog = open(logBase+'change_config_name_log.xml', 'w')
# Redirect stdout to file
sys.stdout = updateConfigLog
rest.rest(('put', baseURL+'configurations/'+configurationID+'?name=this_is_a_test_', user, token))
sys.stdout = sys.__stdout__
It works perfectly if I manually type the following into rest.rest()
rest.rest(('put', 'http://myurl.com/configurations/123456?name=this_is_a_test_', myusername, mypassword))
I have tried str(configurationID) and it spits back a number, but I no longer get the rest of the URL...
Ideas? Help?
OK... In an attempt to show my baseURL and my configurationID here is what I did.
print 'baseURL: '+baseURL
print 'configurationID: '+configurationID
and here is what I got back
it-tone:trunk USER$ ./skynet.py fresh
baseURL: https://myurl.com/
369596
Traceback (most recent call last):
  File "./skynet.py", line 173, in <module>
    main()
  File "./skynet.py", line 30, in main
    fresh()
  File "./skynet.py", line 162, in fresh
    updateConfiguration()
  File "./skynet.py", line 78, in updateConfiguration
    print 'configurationID: '+configurationID
TypeError: cannot concatenate 'str' and 'NoneType' objects
it-tone:trunk USER$
What is interesting to me is that the 369596 is the config ID, but like before it seems to clobber everything called up around it.
As kindall pointed out below, my generate_configurationID was not returning the value, but rather it was printing it.
# from generate_configurationID
def generate_configurationID():
    dom = parse(logBase+'provision_template_log.xml')
    name = dom.getElementsByTagName('id')
    p = name[0].firstChild.nodeValue
    print p
    return p  # this return statement was the fix
Your configurationID is None. This likely means that generate_configurationID() is not returning a value. There is no way in Python for a variable name to "lose" its value. The only way, in the code you posted, for configurationID to be None is for generate_configurationID() to return None which is what will happen if you don't explicitly return any value.
"But it prints the configurationID right on the screen!" you may object. Sure, but that's probably in generate_configurationID() where you are printing it to make sure it's right but forgetting to return it.
You may prove me wrong by posting generate_configurationID() in its entirety, and I will admit that your program is magic.
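The distinction in a nutshell (a trivial sketch, in Python 2 to match the question):
def prints_it():
    print 369596   # shows the number on screen, but implicitly returns None

def returns_it():
    return 369596

y = returns_it()
print 'configurationID: ' + str(y)   # works

x = prints_it()                      # prints 369596; x is None
print 'configurationID: ' + x        # TypeError: cannot concatenate 'str' and 'NoneType' objects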

AppEngine -> "AttributeError: 'unicode' object has no attribute 'has_key'" when using blobstore

There have been a number of other questions on AttributeErrors here, but I've read through them and am still not sure what's causing the type mismatch in my specific case.
Thanks in advance for any thoughts on this.
My model:
class Object(db.Model):
    notes = db.StringProperty(multiline=False)
    other_item = db.ReferenceProperty(Other)
    time = db.DateTimeProperty(auto_now_add=True)
    new_files = blobstore.BlobReferenceProperty(required=True)
    email = db.EmailProperty()
    is_purple = db.BooleanProperty()
My BlobstoreUploadHandler:
class FormUploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        try:
            note = self.request.get('notes')
            email_addr = self.request.get('email')
            o = self.request.get('other')
            upload_file = self.get_uploads()[0]
            # Save the object record
            new_object = Object(notes=note,
                                other=o,
                                email=email_addr,
                                is_purple=False,
                                new_files=upload_file.key())
            db.put(new_object)
            # Redirect to let user know everything's peachy.
            self.redirect('/upload_success.html')
        except:
            self.redirect('/upload_failure.html')
And every time I submit the form that uploads the file, it throws the following exception:
ERROR 2010-10-30 21:31:01,045 __init__.py:391] 'unicode' object has no attribute 'has_key'
Traceback (most recent call last):
  File "/home/user/Public/dir/google_appengine/google/appengine/ext/webapp/__init__.py", line 513, in __call__
    handler.post(*groups)
  File "/home/user/Public/dir/myapp/myapp.py", line 187, in post
    new_files=upload_file.key())
  File "/home/user/Public/dir/google_appengine/google/appengine/ext/db/__init__.py", line 813, in __init__
    prop.__set__(self, value)
  File "/home/user/Public/dir/google_appengine/google/appengine/ext/db/__init__.py", line 3216, in __set__
    value = self.validate(value)
  File "/home/user/Public/dir/google_appengine/google/appengine/ext/db/__init__.py", line 3246, in validate
    if value is not None and not value.has_key():
AttributeError: 'unicode' object has no attribute 'has_key'
What perplexes me most is that this code is nearly straight out of the documentation, and jibes with other examples of blob upload handlers I've found online in tutorials as well.
I've run --clear-datastore to ensure that any changes I've made to the DB schema aren't causing problems, and have tried casting upload_file as all sorts of things to see if it would appease Python - any ideas on what I've screwed up?
Edit: I've found a workaround, but it's suboptimal.
Altering the UploadHandler to this instead resolves the issue:
...
# Save the object record
new_object = Object()
new_object.notes = note
new_object.other = o
new_object.email = email_addr
new_object.is_purple = False
new_object.new_files = upload_file.key()
db.put(new_object)
...
I made this switch after noticing that commenting out the new_files line raised the same issue for the other line, and so on. This isn't an optimal solution, though, as I can't enforce validation this way (in the model, if I set anything as required, I can't declare an empty entity like above without throwing an exception).
Any thoughts on why I can't declare the entity and populate it at the same time?
You're passing in o as the value of other_item (in your sample code, you call it other, but I presume that's a typo). o is a string fetched from the request, though, and the model definition specifies that it's a ReferenceProperty, so it should either be an instance of the Other class, or a db.Key object.
If o is supposed to be a stringified key, pass in db.Key(o) instead, to deserialize it.
Object is a really terrible name for a datastore class (or any class, really), by the way - the Python base object is called object, and that's only one capitalized letter away - very easy to mistake.
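A minimal sketch of that fix, assuming the form posts a stringified key as 'other':
o = self.request.get('other')
new_object = Object(notes=note,
                    other_item=db.Key(o),  # deserialize the stringified key
                    email=email_addr,
                    is_purple=False,
                    new_files=upload_file.key())
db.put(new_object)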
The has_key error is due to the ReferenceProperty other_item. You are most likely passing in '' for other_item when App Engine's API expects a dict. In order to get around this, you need to convert other_item to a hash.
[caveat lector: I know zilch about "google_app_engine"]
The message indicates that it is expecting a dict (the only known object that has a has_key attribute) or a work-alike object, not the unicode object that you supplied. Perhaps you should be passing upload_file, not upload_file.key() ...

PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal

This is the error I got today at filmaster.com:
PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal
What does that exactly mean? It does not seem to be making a lot of sense...
It seems to be connected with django caching. You can see the whole traceback here:
Traceback (most recent call last):
  File "/home/filmaster/django-trunk/django/core/handlers/base.py", line 92, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/home/filmaster/film20/film20/core/film_views.py", line 193, in show_film
    workflow.set_data_for_authenticated_user()
  File "/home/filmaster/film20/film20/core/film_views.py", line 518, in set_data_for_authenticated_user
    object_id = self.the_film.parent.id)
  File "/home/filmaster/film20/film20/core/film_helper.py", line 179, in get_others_ratings
    set_cache(CACHE_OTHERS_RATINGS, str(object_id) + "_" + str(user_id), userratings)
  File "/home/filmaster/film20/film20/utils/cache_helper.py", line 80, in set_cache
    return cache.set(CACHE_MIDDLEWARE_KEY_PREFIX + full_path, result, get_time(cache_string))
  File "/home/filmaster/django-trunk/django/core/cache/backends/memcached.py", line 37, in set
    self._cache.set(smart_str(key), value, timeout or self.default_timeout)
  File "/usr/lib/python2.5/site-packages/cmemcache.py", line 128, in set
    val, flags = self._convert(val)
  File "/usr/lib/python2.5/site-packages/cmemcache.py", line 112, in _convert
    val = pickle.dumps(val, 2)
PicklingError: Can't pickle <class 'decimal.Decimal'>: it's not the same object as decimal.Decimal
And the source code for Filmaster can be downloaded from here: bitbucket.org/filmaster/filmaster-test
Any help will be greatly appreciated.
I got this error when running in a Jupyter notebook. I think the problem was that I was using %load_ext autoreload with %autoreload 2. Restarting my kernel and rerunning solved the problem.
One oddity of Pickle is that the way you import a class before you pickle one of its instances can subtly change the pickled object. Pickle requires you to have imported the object identically both before you pickle it and before you unpickle it.
So for example:
from a.b import c
C = c()
pickler.dump(C)
will make a subtly different object (sometimes) to:
from a import b
C = b.c()
pickler.dump(C)
Try fiddling with your imports, it might correct the problem.
I will demonstrate the problem with simple Python classes in Python 2.7:
In [13]: class A: pass
In [14]: class B: pass
In [15]: A
Out[15]: <class __main__.A at 0x7f4089235738>
In [16]: B
Out[16]: <class __main__.B at 0x7f408939eb48>
In [17]: A.__name__ = "B"
In [18]: pickle.dumps(A)
---------------------------------------------------------------------------
PicklingError: Can't pickle <class __main__.B at 0x7f4089235738>: it's not the same object as __main__.B
This error is shown because we are trying to dump A, but since we changed its name to refer to another object "B", pickle is actually confused about which object to dump - class A or B. Apparently, the pickle guys are very smart and they have already put a check on this behavior.
Solution:
Check if the object you are trying to dump has conflicting name with another object.
I have demonstrated debugging for the case presented above with ipython and ipdb below:
PicklingError: Can't pickle <class __main__.B at 0x7f4089235738>: it's not the same object as __main__.B
In [19]: debug
> /<path to pickle dir>/pickle.py(789)save_global()
    787             raise PicklingError(
    788                 "Can't pickle %r: it's not the same object as %s.%s" %
--> 789                 (obj, module, name))
    790
    791         if self.proto >= 2:

ipdb> pp (obj, module, name)   # <------------- you are trying to dump obj, which is class A from the pickle.dumps(A) call
(<class __main__.B at 0x7f4089235738>, '__main__', 'B')
ipdb> getattr(sys.modules[module], name)   # <------------- this is the conflicting definition in the module (__main__ here) with the same name ('B' here)
<class __main__.B at 0x7f408939eb48>
I hope this saves some headaches! Adios!!
I can't explain why this is failing either, but my own solution to fix this was to change all my code from doing
from point import Point
to
import point
With this one change, it worked. I'd love to know why... HTH
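In other words, something like this (a sketch with a hypothetical point module):
# point.py
class Point(object):
    pass

# main.py
import pickle
import point            # not: from point import Point

p = point.Point()
data = pickle.dumps(p)  # pickle looks the class up as point.Point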
There can be issues starting a process with multiprocessing by calling __init__. Here's a demo:
import multiprocessing as mp

class SubProcClass:
    def __init__(self, pipe, startloop=False):
        self.pipe = pipe
        if startloop:
            self.do_loop()

    def do_loop(self):
        while True:
            req = self.pipe.recv()
            self.pipe.send(req * req)

class ProcessInitTest:
    def __init__(self, spawn=False):
        if spawn:
            mp.set_start_method('spawn')
        (self.msg_pipe_child, self.msg_pipe_parent) = mp.Pipe(duplex=True)

    def start_process(self):
        subproc = SubProcClass(self.msg_pipe_child)
        self.trig_proc = mp.Process(target=subproc.do_loop, args=())
        self.trig_proc.daemon = True
        self.trig_proc.start()

    def start_process_fail(self):
        self.trig_proc = mp.Process(target=SubProcClass.__init__, args=(self.msg_pipe_child,))
        self.trig_proc.daemon = True
        self.trig_proc.start()

    def do_square(self, num):
        # Note: this is a synchronous usage of mp,
        # which doesn't make sense. But this is just for demo
        self.msg_pipe_parent.send(num)
        msg = self.msg_pipe_parent.recv()
        print('{}^2 = {}'.format(num, msg))
Now, with the above code, if we run this:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=True)
    t.start_process_fail()
    for i in range(1000):
        t.do_square(i)
We get this error:
Traceback (most recent call last):
  File "start_class_process1.py", line 40, in <module>
    t.start_process_fail()
  File "start_class_process1.py", line 29, in start_process_fail
    self.trig_proc.start()
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 105, in start
    self._popen = self._Popen(self)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/context.py", line 274, in _Popen
    return Popen(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_spawn_posix.py", line 33, in __init__
    super().__init__(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_fork.py", line 21, in __init__
    self._launch(process_obj)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/popen_spawn_posix.py", line 48, in _launch
    reduction.dump(process_obj, fp)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function SubProcClass.__init__ at 0x10073e510>: it's not the same object as __main__.__init__
And if we change it to use fork instead of spawn:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=False)
    t.start_process_fail()
    for i in range(1000):
        t.do_square(i)
We get this error:
Process Process-1:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 254, in _bootstrap
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
TypeError: __init__() missing 1 required positional argument: 'pipe'
But if we call the start_process method, which doesn't call __init__ in the mp.Process target, like this:
if __name__ == '__main__':
    t = ProcessInitTest(spawn=False)
    t.start_process()
    for i in range(1000):
        t.do_square(i)
It works as expected (whether we use spawn or fork).
Did you somehow reload(decimal), or monkeypatch the decimal module to change the Decimal class? These are the two things most likely to produce such a problem.
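For instance, reloading the module rebinds the name to a brand-new class object, so instances created before the reload still point at the stale class and fail pickle's identity check (a sketch, in Python 2 to match the traceback above):
import pickle
import decimal

d = decimal.Decimal('1.5')
reload(decimal)    # decimal.Decimal is now a different class object
pickle.dumps(d)    # PicklingError: it's not the same object as decimal.Decimal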
The same happened to me. Restarting the kernel worked for me.
Due to the restrictions based upon reputation I cannot comment, but Salim Fahedy's answer, and following the debugging path it sets out, helped me identify a cause for this error, even when using dill instead of pickle:
Under the hood, dill also accesses some of pickle's functions, and in pickle._Pickler.save_global() there is an import happening. To me it seems that this is more of a "hack" than a real solution, as this method fails as soon as the class of the instance you are trying to pickle is not imported from the lowest level of the package the class is in. Sorry for the bad explanation; maybe examples are more suitable:
The following example would fail:
from oemof import solph
...
(some code here, giving you the object 'es')
...
model = solph.Model(es)
pickle.dump(model, open('file.pickle', 'wb'))
It fails because, while you can use solph.Model, the class actually is oemof.solph.models.Model, for example. save_global() resolves that (or some function before it that passes things along to save_global()), but then imports Model from oemof.solph.models and throws an error, because it's not the same import as from oemof import solph (or something like that; I'm not 100% sure about the workings).
The following example would work:
from oemof.solph.models import Model
...
(some code here, giving you the object 'es')
...
model = Model(es)
pickle.dump(model, open('file.pickle', 'wb'))
It works, because now the Model object is imported from the same place, the pickle._Pickler.save_global() imports the comparison object (obj2) from.
Long story short: When pickling an object, make sure to import the class from the lowest possible level.
Addition: This also seems to apply to objects stored in the attributes of the class instance you want to pickle. If, for example, model had an attribute es that is itself an object of the class oemof.solph.energysystems.EnergySystem, we would need to import it as:
from oemof.solph.energysystems import EnergySystem
es = EnergySystem()
My issue was that I had a function with the same name defined twice in a file. So I guess it was confused about which one it was trying to pickle.
I had the same problem while debugging (Spyder). Everything worked normally if I ran the program, but if I started to debug I faced the pickling error.
However, once I chose the option Execute in dedicated console in Run configuration per file (shortcut: Ctrl+F6), everything worked normally as expected. I do not know exactly how it adapts.
Note: In my script I have many imports like
from PyQt5.QtWidgets import *
from PyQt5.Qt import *
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
import os, sys, re, math
My basic understanding was that because of the star (*) imports I was getting this pickling error.
I had a problem that no one has mentioned yet. I have a package with an __init__.py file that does, among other things:
from .mymodule import cls
Then my top-level code says:
import mypkg
obj = mypkg.cls()
The problem with this is that in my top-level code, the type appears to be mypkg.cls, but it's actually mypkg.mymodule.cls. Using the full path:
obj = mypkg.mymodule.cls()
avoids the error.
I had the same error in Spyder. It turned out to be simple in my case: I defined a class named "Class" in a file also named "Class". I changed the name of the class in the definition to "Class_obj". pickle.dump(Class_obj, fileh) works, but pickle.dump(Class, fileh) does not when it's saved in a file named "Class".
This miraculous function solves the mentioned error, but for me it turned out to lead to another error, 'permission denied', which came out of the blue. However, I guess it might help someone find a solution, so I am still posting the function:
import tempfile
import time
from tensorflow.keras.models import save_model, load_model, Model  # load_model added; the original snippet used it without importing it

# Hotfix function
def make_keras_picklable():
    def __getstate__(self):
        model_str = ""
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            save_model(self, fd.name, overwrite=True)
            model_str = fd.read()
        d = {'model_str': model_str}
        return d

    def __setstate__(self, state):
        with tempfile.NamedTemporaryFile(suffix='.hdf5', delete=True) as fd:
            fd.write(state['model_str'])
            fd.flush()
            model = load_model(fd.name)
        self.__dict__ = model.__dict__

    cls = Model
    cls.__getstate__ = __getstate__
    cls.__setstate__ = __setstate__

# Run the function
make_keras_picklable()

### create then save your model here ###
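Usage would then look roughly like this (a sketch; Sequential inherits the patched methods from Model):
import pickle
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

make_keras_picklable()  # apply the hotfix before pickling

model = Sequential([Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse')

with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)  # now goes through the patched __getstate__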
