Below is an example of my my_create method, and an example of that method in use.
from contextlib import contextmanager

@contextmanager
def my_create(**attributes):
    obj = MyObject(**attributes)
    yield obj
    obj.save()
with my_create(a=10) as new_obj:
    new_obj.b = 7

new_obj.a           # => 10
new_obj.b           # => 7
new_obj.is_saved()  # => True
To users of Ruby/Rails, this may look familiar. It's similar to the ActiveRecord::create method, with the code inside the with block acting as, well, a block.
However:
with my_create(a=10) as new_obj:
    pass

new_obj.a           # => 10
new_obj.is_saved()  # => True
In the above example, I've passed an empty "block" to my my_create function. Things work as expected (new_obj was initialized and saved), but the formatting looks a little wonky, and the with block seems unnecessary.
I would prefer to be able to call my_create directly, without having to set up a passing with block. Unfortunately, that's not possible with my current implementation of my_create.
my_obj = my_create(a=10)
my_obj  # => <contextlib.GeneratorContextManager at 0x107c21050>
I'd have to call both __enter__ and __exit__ on the GeneratorContextManager to get my desired result.
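To spell that out, the manual equivalent would look something like this (a sketch):

cm = my_create(a=10)           # a GeneratorContextManager, not a MyObject
new_obj = cm.__enter__()       # runs the body up to the yield
cm.__exit__(None, None, None)  # resumes after the yield, so obj.save() runs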
The question:
Is there a way to write my my_create function so that it can be called with a "block" as an optional "parameter"? I don't want to pass an optional function to my_create. I want my_create to optionally yield execution to a block of code.
The solution doesn't have to involve with or contextmanager. For instance, the same results as above can be achieved with a generator and a for loop, although the syntax becomes even more unclear.
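For illustration, a rough sketch of that generator/for-loop variant (my_create_gen is a made-up name):

def my_create_gen(**attributes):
    obj = MyObject(**attributes)
    yield obj
    obj.save()

# the loop body plays the role of the "block"; the loop runs exactly once,
# and the implicit second next() resumes the generator so obj.save() runs
for new_obj in my_create_gen(a=10):
    new_obj.b = 7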
At this point I'm afraid that a readable-enough-to-be-sensibly-usable solution doesn't exist, but I'm still interested to see what everyone comes up with.
Some clarification:
Another example would be:
@contextmanager
def header_file(path):
    touch(path)
    f = open(path, 'w')
    f.write('This is the header')
    yield f
    f.close()
with header_file('some/path') as f:
    f.write('some more stuff')

another_f = header_file('some/other/path')
I always want to do the __enter__ and __exit__ parts of the context manager. I don't always want to supply a block. I don't want to have to set up a passing with block if I don't have to.
This is possible and easy in Ruby. It would be cool if it were possible in Python too, since we're already so close (we just have to set up a passing with block). I understand that the language mechanics make it difficult (technically impossible?), but a close-enough solution is interesting to me.
Add a new method on MyObject which creates and saves.
class MyObject:
    @classmethod
    def create(cls, **attributes):
        obj = cls(**attributes)
        obj.save()
        return obj
This is an alternate initializer, a factory, and the design pattern has precedent in Python standard libraries and in many popular frameworks. Django models use this pattern: the alternate initializer Model.objects.create(**kwargs) creates and saves an instance in one step, offering behaviour that the plain Model(**kwargs) constructor does not (persisting to the database).
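Usage would then look like this (assuming save() and is_saved() behave as in the question):

obj = MyObject.create(a=10)
obj.is_saved()  # => True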
Is there a way to write my my_create function so that it can be called with a "block" as an optional "parameter"?
No.
I'd suggest using different functions to get a context manager that saves an object on __exit__ and to get an automatically saved object. There's no easy way to have one function do both things. (There are no "blocks" that you can pass around, other than functions, which you say you don't want.)
For instance, you could create a second function that just creates and immediately saves an object without running any extra code to run in between:
def create_and_save(**args):
    obj = MyObject(**args)
    obj.save()
    return obj
So you could make it work with two functions. But a more Pythonic approach would probably be to get rid of the context manager function and make the MyObject class serve as its own context manager. You can give it very simple __enter__ and __exit__ methods:
def __enter__(self):
    return self

def __exit__(self, exception_type, exception_value, traceback):
    if exception_type is None:
        self.save()
Your first example would become:
with MyObject(a=10) as new_obj:
    new_obj.b = 7
You could also turn the create_and_save function I showed above into a classmethod:
@classmethod
def create_and_save(cls, **args):
    obj = cls(**args)
    obj.save()
    return obj
Your second example would then be:
new_obj = MyObject.create_and_save(a=10)
Both of those methods could be written in a base class and simply inherited by other classes, so you wouldn't need to rewrite them for every class.
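A minimal sketch of what such a base class might look like (the name Persistable is made up):

class Persistable:
    def __enter__(self):
        return self

    def __exit__(self, exception_type, exception_value, traceback):
        if exception_type is None:
            self.save()

    @classmethod
    def create_and_save(cls, **args):
        obj = cls(**args)
        obj.save()
        return obj

class MyObject(Persistable):
    ...  # __init__, save(), is_saved() as before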
Ok, there seems to be some confusion so I've been forced to come up with an example solution. Here's the best I've been able to come up with so far.
class my_create(object):
    def __new__(cls, **attributes):
        with cls.block(**attributes) as obj:
            pass
        return obj

    @classmethod
    @contextmanager
    def block(cls, **attributes):
        obj = MyClass(**attributes)
        yield obj
        obj.save()
If we design my_create like above, we can use it normally without a block:
new_obj = my_create(a=10)
new_obj.a # => 10
new_obj.is_saved() # => True
And we can call it slightly differently with a block.
with my_create.block(a=10) as new_obj:
    new_obj.b = 7

new_obj.a      # => 10
new_obj.b      # => 7
new_obj.saved  # => True
Calling my_create.block is kind of similar to calling a Celery task's .s() method, and users who don't want to call my_create with a block just call it normally, so I'll allow it.
However, this implementation of my_create looks wonky, so we can create a wrapper to make it more like the contextmanager-decorated my_create from the question.
import types
from contextlib import contextmanager

# The abstract base class for a block-accepting "function"
class BlockAcceptor(object):
    def __new__(cls, *args, **kwargs):
        with cls.block(*args, **kwargs) as yielded_value:
            pass
        return yielded_value

    @classmethod
    @contextmanager
    def block(cls, *args, **kwargs):
        raise NotImplementedError

# The wrapper
def block_acceptor(f):
    # note: f.func_name is Python 2; in Python 3 use f.__name__
    block_accepting_f = type(f.func_name, (BlockAcceptor,), {})
    f.func_name = 'block'
    block_accepting_f.block = types.MethodType(contextmanager(f), block_accepting_f)
    return block_accepting_f
Then my_create becomes:
@block_acceptor
def my_create(cls, **attributes):
    obj = MyClass(**attributes)
    yield obj
    obj.save()
In use:
# creating with a block
with my_create.block(a=10) as new_obj:
    new_obj.b = 7

new_obj.a      # => 10
new_obj.b      # => 7
new_obj.saved  # => True

# creating without a block
new_obj = my_create(a=10)
new_obj.a      # => 10
new_obj.saved  # => True
Ideally the my_create function wouldn't need to accept a cls, and the block_acceptor wrapper would handle that, but I haven't got time to make those changes just now.
pythonic? no. useful? possibly?
I'm still interested to see what others come up with.
With a slight change, you can get really close to what you want, just not with an implementation based on contextlib.contextmanager:
creator = build_creator_obj()

# "with" context-manager interface
with creator as obj:
    obj.attr = 'value'

# "call" interface
obj = creator(attr='value')
Where creator is an object that implements __enter__ and __exit__ for the first usage and implements __call__ for the second usage.
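One possible shape for build_creator_obj and the creator object, assuming the MyObject/save() semantics from the question:

class Creator:
    def __enter__(self):
        self._obj = MyObject()
        return self._obj

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            self._obj.save()  # save on clean exit, like the contextmanager version

    def __call__(self, **attributes):
        obj = MyObject(**attributes)
        obj.save()
        return obj

def build_creator_obj():
    return Creator()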
You can also hide the construction of creator inside a property on some persistent object, e.g.:
class MyDatabase():
    @property
    def create(self):
        return build_creator_obj()
db = MyDatabase()

# so that you can do either/both:
with db.create as obj:
    obj.attr = 'value'

obj = db.create(attr='value')
I have a database handler that utilizes SQLAlchemy ORM to communicate with a database. As part of SQLAlchemy's recommended practices, I interact with the session by using it as a context manager. How can I test what a function that uses that context manager has done inside it?
EDIT: I realized the file structure mattered due to the complexity it introduced. I re-structured the code below to more closely mirror what the end file structure will be like, and what a common production repo in my environment would look like, with code being defined in one file and tests in a completely separate file.
For example:
Code File (delete_things_from_table.py):
from db_handler import delete, SomeTable

def delete_stuff(handler):
    stmt = delete(SomeTable)
    with handler.Session.begin() as session:
        session.execute(stmt)
        session.commit()
Test File:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

def test_delete_stuff():
    handler = Handler()
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    # Test the value of 'stmt'
    # Test that session.commit was called
I am not looking for a solution specific to SQLAlchemy; I am only utilizing this to highlight what I want to test within a context manager, and any strategies for testing context managers are welcome.
After sleeping on it, I came up with a solution. I'd love additional/less complex solutions if there are any available, but this works:
import pytest
import delete_things_from_table as dlt
from db_handler import Handler

class MockSession:
    def __init__(self):
        self.execute_params = []
        self.commit_called = False

    def execute(self, *args, **kwargs):
        self.execute_params.append(["call", args, kwargs])
        return self

    def commit(self):
        self.commit_called = True
        return self

    def begin(self):
        return self

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        pass

def test_delete_stuff(monkeypatch):
    handler = Handler()
    # Parens in 'MockSession()' below are important: pass an instance, not the class
    monkeypatch.setattr(handler, "Session", MockSession())
    dlt.delete_stuff(handler)
    # Test that session.execute was called
    assert len(handler.Session.execute_params)
    # Test the value of 'stmt'
    assert str(handler.Session.execute_params[0][1][0]) == "DELETE FROM some_table"
    # Test that session.commit was called
    assert handler.Session.commit_called
Some key things to note:
I created a static mock instead of a MagicMock as it's easier to control the methods/data flow with a custom mock class
Since the SQLAlchemy session context manager requires a begin() to start the context, my mock class needed a begin. Returning self in begin allows us to test the values later.
Context managers rely on the magic methods __enter__ and __exit__ with the argument signatures you see above.
The mocked class contains mocked methods which alter instance variables allowing us to test later
This relies on monkeypatch (there are other ways I'm sure), but what's important to note is that when you pass your mock class you want to patch in an instance of the class and not the class itself. The parentheses make a world of difference.
I don't think it's an elegant solution, but it's working. I'll happily take any suggestions for improvement.
I'm trying to create reusable code that is modifying a class variable, whose name is not known to the method that is doing the logic.
In Example.receive_button_press, I am trying to call a function on an object that I have passed into the method, whose variable(s) would also be provided as parameters.
Can this be done in python? The code below does not quite work, but illustrates what I am trying to achieve.
import tkinter as tk
from tkinter import filedialog

class SomeOtherClass():
    _locked_button = False

    def __init__(self):
        pass

    def do_button_press(self, button):
        if button:
            return
        button = True
        someVal = tk.filedialog.askopenfilename(initialdir='\\')
        button = False
        return someVal

class Example():
    def __init__(self):
        pass

    def receive_button_press(self, obj, func, var):
        return obj.func(var)

if __name__ == "__main__":
    root = tk.Tk()
    toyClass = Example()
    other = SomeOtherClass()
    myButton = tk.Button(text="Press",
                         command=toyClass.receive_button_press(obj=other, func=other.do_button_press,
                                                               var=SomeOtherClass._locked_button))
    myButton.pack()
    root.mainloop()
Callable Objects
You need to understand how callables work in python. Specifically, callable objects are regular objects. The only special thing about them is that you can apply the () operator to them. Functions, methods and lambdas are just three of many types of callables.
Here is a callable named x:
x
x might have been defined as
def x():
    return 3
Here is the result of calling x, also known as the "return value":
x()
Hopefully you can see the difference. You can assign x or x() to some other name, like a. If you do a = x, you can later do a() to actually call the object. If you do a = x(), a just refers to the number 3.
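Concretely:

def x():
    return 3

a = x    # 'a' names the callable itself
a()      # => 3, this actually calls it
b = x()  # 'b' names the return value
b        # => 3; b() would fail, ints aren't callable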
The command passed to tk.Button should be a callable object, not the result of calling a function. The following is the result of a function call, since you already applied () to the name receive_button_press:
toyClass.receive_button_press(obj=other, func=other.do_button_press,
                              var=SomeOtherClass._locked_button)
If this function call were to return another callable, you could use it as an argument to command. Otherwise, you will need to make a callable that performs the function call with no arguments:
lambda: toyClass.receive_button_press(obj=other, func=other.do_button_press,
                                      var=SomeOtherClass._locked_button)
As an aside, if you add a __call__ method to a class, all of its instances will be callable objects.
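For example, a minimal illustration:

class Adder:
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return self.n + x

add3 = Adder(3)
add3(4)  # => 7; the instance itself is the callable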
Bound Methods
The object other.do_button_press is called a bound method, which is a special type of callable object. When you use the dot operator (.) on an instance (other) to get a function that belongs to a class (do_button_press), the method gets "bound" to the instance. The result is a callable that does not require self to be passed in. In fact, it has a __self__ attribute that encodes other.
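A quick illustration:

class A:
    def f(self):
        return 42

a = A()
m = a.f          # a bound method
m.__self__ is a  # => True
m()              # => 42, no need to pass 'a' explicitly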
Notice that you call
other.do_button_press(button)
not
other.do_button_press(other, button)
That's why you should change Example to read
def receive_button_press(self, func, var):
    return func(var)
References to Attributes
You can access attributes in an object in a couple of different ways, but the simplest is by name. You can read a named attribute with the builtin getattr and set it with setattr (hasattr tests whether one exists). To do so, you would have to change do_button_press to accept an object and an attribute name:
def do_button_press(self, lock_obj, lock_var):
    if getattr(lock_obj, lock_var, False):
        return
    setattr(lock_obj, lock_var, True)
    someVal = tk.filedialog.askopenfilename(initialdir='\\')
    setattr(lock_obj, lock_var, False)
    return someVal
Interfaces and Mixins
If this seems like a terrible way to do it, you're right. A much better way would be to use a pre-determined interface. In python, you don't make an interface: you just document it. So you could have do_button_press expect an object with an attribute called "locked" as its lock argument, rather than just a reference to an immutable variable:
class SomeOtherClass():
    locked = False

    def do_button_press(self, lock):
        if lock.locked:
            return
        lock.locked = True
        someVal = tk.filedialog.askopenfilename(initialdir='\\')
        lock.locked = False
        return someVal
At the same time, you can make a "mixin class" to provide a reference implementation that users can just stick into their base class list. Something like this:
class LockMixin:
    locked = False

class SomeOtherClass(LockMixin):
    def do_button_press(self, lock):
        if lock.locked:
            return
        lock.locked = True
        someVal = tk.filedialog.askopenfilename(initialdir='\\')
        lock.locked = False
        return someVal
Locks
Notice that I started throwing the term "Lock" around a lot. That's because the idea you have implemented is called "locking" a segment of code. In python, you have access to thread locks via threading.Lock. Rather than manually setting locked, you use the acquire and release methods.
class SomeOtherClass:
    def do_button_press(self, lock):
        if lock.acquire(blocking=False):
            try:
                someVal = tk.filedialog.askopenfilename(initialdir='\\')
                return someVal
            finally:
                lock.release()
Alternatively, you can use the lock as a context manager. The only catch is that acquire will be called with blocking=True by default, which may not be what you want:
def do_button_press(self, lock):
    with lock:
        someVal = tk.filedialog.askopenfilename(initialdir='\\')
        return someVal
Decorators
Finally, there is one more tool that may apply here. Python lets you apply decorators to functions and classes. A decorator is a callable that accepts a function or class as an argument and returns a replacement. Examples of decorators include staticmethod, classmethod and property. Many decorators return the original function more-or-less untouched. You can write a decorator that acquires a lock and releases it when you're done:
from functools import wraps
from threading import Lock

def locking(func):
    lock = Lock()
    @wraps(func)
    def wrapper(*args, **kwargs):
        if lock.acquire(blocking=False):
            try:
                return func(*args, **kwargs)
            finally:
                lock.release()
    return wrapper
Notice that the wrapper function is itself decorated (to forward the name and other attributes of the original func to it). It passes through all the input arguments and return of the original, but inside a lock.
You would use this if you did not care where the lock came from, which is likely what you want here:
class SomeOtherClass:
#locking
def do_button_press(self):
return tk.filedialog.askopenfilename(initialdir='\\')
Conclusion
So putting all of this together, here is how I would rewrite your toy example:
import tkinter as tk
from tkinter import filedialog
from functools import wraps
from threading import Lock

def locking(func):
    lock = Lock()
    @wraps(func)
    def wrapper(*args, **kwargs):
        if lock.acquire(blocking=False):
            try:
                return func(*args, **kwargs)
            finally:
                lock.release()
    return wrapper

class SomeOtherClass():
    @locking
    def do_button_press(self):
        return tk.filedialog.askopenfilename(initialdir='\\')

if __name__ == "__main__":
    root = tk.Tk()
    toyClass = SomeOtherClass()
    myButton = tk.Button(text="Press", command=toyClass.do_button_press)
    myButton.pack()
    root.mainloop()
I've been on a tear of writing some decorators recently.
One of the ones I just wrote allows you to put the decorator just before a class definition, and it will cause every method of the class to print some logging info when it's run (more for debugging/initial super basic speed tests during a build).
import time
import inspect

def class_logit(cls):
    class NCls(object):
        def __init__(self, *args, **kwargs):
            self.instance = cls(*args, **kwargs)

        @staticmethod
        def _class_logit(original_function):
            def arg_catch(*args, **kwargs):
                start = time.time()
                result = original_function(*args, **kwargs)
                print('Called: {0} | From: {1} | Args: {2} | Kwargs: {3} | Run Time: {4}'
                      ''.format(original_function.__name__, str(inspect.getmodule(original_function)),
                                args, kwargs, time.time() - start))
                return result
            return arg_catch

        def __getattribute__(self, s):
            try:
                x = super(NCls, self).__getattribute__(s)
            except AttributeError:
                pass
            else:
                return x
            x = self.instance.__getattribute__(s)
            if type(x) == type(self.__init__):
                return self._class_logit(x)
            else:
                return x
    return NCls
This works great when applied to a very basic class I create.
Where I start to encounter issues is when I apply it to a class that inherits from another - for instance, using Qt:
@scld.class_logit
class TestWindow(QtGui.QDialog):
    def __init__(self):
        print self
        super(TestWindow, self).__init__()

a = TestWindow()
I'm getting the following error... and I'm not entirely sure what to do about it!
    self.instance = cls(*args, **kwargs)
  File "<string>", line 15, in __init__
TypeError: super(type, obj): obj must be an instance or subtype of type
Any help would be appreciated!
You are being a bit too intrusive with your decorator.
While profiling methods defined by the Qt framework itself does call for a somewhat aggressive approach, your decorator replaces the entire class with a proxy.
Qt bindings are somewhat complicated indeed, and it is hard to tell why it is erroring when being instantiated in this case.
So - first things first - if your intent were to apply the decorator to a class hierarchy defined by yourself, or at least one defined in pure Python, a good approach would be to use metaclasses: with a metaclass you could decorate each method when a class is created, and avoid meddling at runtime, when methods are retrieved from each class.
But Qt, like some other libraries, has its methods and classes defined in native code, and that will prevent you from wrapping existing methods in a new class. So wrapping the methods on attribute retrieval, in __getattribute__, can work.
Here is a simpler approach that, instead of using a proxy, just plugs in a foreign __getattribute__ that does the wrap-with-logger thing you want.
Your mileage may vary with it. In particular, it won't be triggered if one method of the class is called by another method in native code - as that won't go through Python's attribute retrieval mechanism (instead, it will use C++ method retrieval directly).
from PyQt5 import QtWidgets, QtGui

def log_dec(func):
    def wrapper(*args, **kwargs):
        print(func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

def decorate(cls):
    def __getattribute__(self, attr):
        attr = super(cls, self).__getattribute__(attr)
        if callable(attr):
            return log_dec(attr)
        return attr
    cls.__getattribute__ = __getattribute__
    return cls

@decorate
class Example(QtGui.QWindow):
    pass

app = QtWidgets.QApplication([])
w = Example()
w.show()
(Of course, just replace the basic logger by your fancy logger above)
I do not know why, but I get this strange error whenever I try to pass a shared custom class object to a method of another shared object. Python version: 3.6.3
Code:
from multiprocessing.managers import SyncManager

class MyManager(SyncManager): pass

class MyClass: pass

class Wrapper:
    def set(self, ent):
        self.ent = ent

MyManager.register('MyClass', MyClass)
MyManager.register('Wrapper', Wrapper)

if __name__ == '__main__':
    manager = MyManager()
    manager.start()
    try:
        obj = manager.MyClass()
        lst = manager.list([1, 2, 3])
        collection = manager.Wrapper()
        collection.set(lst)  # executed fine
        collection.set(obj)  # raises error
    except Exception as e:
        raise
Error:
---------------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\Program Files\Python363\lib\multiprocessing\managers.py", line 228, in serve_client
    request = recv()
  File "D:\Program Files\Python363\lib\multiprocessing\connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
  File "D:\Program Files\Python363\lib\multiprocessing\managers.py", line 881, in RebuildProxy
    return func(token, serializer, incref=incref, **kwds)
TypeError: AutoProxy() got an unexpected keyword argument 'manager_owned'
---------------------------------------------------------------------------
What's the problem here?
I ran into this too. As noted, this is a bug in Python multiprocessing (see issue #30256), and the pull request that corrects it has not yet been merged. That pull request has since been superseded by another PR that makes the same change but adds a test as well.
Apart from manually patching your local installation, you have three other options:
you could use the MakeProxyType() callable to specify your proxytype, without relying on the AutoProxy proxy generator,
you could define a custom proxy class,
you can patch the bug with a monkeypatch
I'll describe those options below, after explaining what AutoProxy does:
What's the point of the AutoProxy class
The multiprocessing Manager pattern gives access to shared values by putting the values all in the same, dedicated 'canonical values server' process. All other processes (clients) talk to the server through proxies that then pass messages back and forth with the server.
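A minimal illustration of that pattern (a made-up Counter type, not from the question):

from multiprocessing.managers import SyncManager

class Counter:
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1
        return self.n

SyncManager.register("Counter", Counter)

if __name__ == "__main__":
    manager = SyncManager()
    manager.start()
    c = manager.Counter()  # 'c' is a proxy; the real Counter lives in the server process
    print(c.bump())        # the method call is forwarded to the server; prints 1
    manager.shutdown()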
The server does need to know what methods are acceptable for the type of object, however, so clients can produce a proxy object with the same methods. This is what the AutoProxy object is for. Whenever a client needs a new instance of your registered class, the default proxy the client creates is an AutoProxy, which then asks the server to tell it what methods it can use.
Once it has the method names, it calls MakeProxyType to construct a new class and then creates an instance of that class to return.
All this is deferred until you actually need an instance of the proxied type, so in principle AutoProxy saves a little bit of memory if you are not using certain classes you have registered. It's very little memory, however, and the downside is that this process has to take place in each client process.
These proxy objects use reference counting to track when the server can remove the canonical value. It is that part that is broken in the AutoProxy callable; a new argument is passed to the proxy type to disable reference counting when the proxy object is being created in the server process rather than in a client but the AutoProxy type wasn't updated to support this.
So, how can you fix this? Here are those 3 options:
Use the MakeProxyType() callable
As mentioned, AutoProxy is really just a call (via the server) to get the public methods of the type, and a call to MakeProxyType(). You can just make these calls yourself, when registering.
So, instead of
from multiprocessing.managers import SyncManager
SyncManager.register("YourType", YourType)
use
from multiprocessing.managers import SyncManager, MakeProxyType, public_methods
# arguments: classname, sequence of method names
YourTypeProxy = MakeProxyType("YourType", public_methods(YourType))
SyncManager.register("YourType", YourType, YourTypeProxy)
Feel free to inline the MakeProxyType() call there.
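Inlined, that would read:

SyncManager.register(
    "YourType", YourType, MakeProxyType("YourType", public_methods(YourType))
)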
If you were using the exposed argument to SyncManager.register(), you should pass those names to MakeProxyType instead:
# SyncManager.register("YourType", YourType, exposed=("foo", "bar"))
# becomes
YourTypeProxy = MakeProxyType("YourType", ("foo", "bar"))
SyncManager.register("YourType", YourType, YourTypeProxy)
You'd have to do this for all the pre-registered types, too:
from multiprocessing.managers import SyncManager, AutoProxy, MakeProxyType, public_methods

registry = SyncManager._registry
for typeid, (callable, exposed, method_to_typeid, proxytype) in registry.items():
    if proxytype is not AutoProxy:
        continue
    create_method = hasattr(SyncManager, typeid)
    if exposed is None:
        exposed = public_methods(callable)
    SyncManager.register(
        typeid,
        callable=callable,
        exposed=exposed,
        method_to_typeid=method_to_typeid,
        proxytype=MakeProxyType(f"{typeid}Proxy", exposed),
        create_method=create_method,
    )
Create custom proxies
You don't have to rely on multiprocessing creating a proxy for you; you can write your own. The proxy is used in all processes except for the special 'managed values' server process, and the proxy should pass messages back and forth. This is not an option for the already-registered types, of course, but I'm mentioning it here because for your own types this offers opportunities for optimisations.
Note that you should have methods for all interactions that need to go back to the 'canonical' value instance, so you'd need to use properties to handle normal attributes or add __getattr__, __setattr__ and __delattr__ methods as needed.
The advantage is that you can have very fine-grained control over what methods actually need to exchange data with the server process; in my specific example, my proxy class caches information that is immutable (the values would never change once the object was created), but were used often. That includes a flag value that controls if other methods would do something, so the proxy could just check the flag value and not talk to the server process if not set. Something like this:
from multiprocessing.managers import BaseProxy, SyncManager

class FooProxy(BaseProxy):
    # what methods the proxy is allowed to access through calls
    _exposed_ = ("__getattribute__", "expensive_method", "spam")

    @property
    def flag(self):
        try:
            v = self._flag
        except AttributeError:
            # ask for the value from the server, "realvalue.flag"
            # use __getattribute__ because it's an attribute, not a property
            v = self._flag = self._callmethod("__getattribute__", ("flag",))
        return v

    def expensive_method(self, *args, **kwargs):
        if self.flag:  # cached locally!
            return self._callmethod("expensive_method", args, kwargs)

    def spam(self, *args, **kwargs):
        return self._callmethod("spam", args, kwargs)

SyncManager.register("Foo", Foo, FooProxy)
Because MakeProxyType() returns a BaseProxy subclass, you can combine that class with a custom subclass, saving yourself having to write any methods that just consist of return self._callmethod(...):
from multiprocessing.managers import MakeProxyType, SyncManager

# a base class with the methods generated for us. The second argument
# doubles as the 'permitted' names, stored as _exposed_
FooProxyBase = MakeProxyType(
    "FooProxyBase",
    ("__getattribute__", "expensive_method", "spam"),
)

class FooProxy(FooProxyBase):
    @property
    def flag(self):
        try:
            v = self._flag
        except AttributeError:
            # ask for the value from the server, "realvalue.flag"
            # use __getattribute__ because it's an attribute, not a property
            v = self._flag = self._callmethod("__getattribute__", ("flag",))
        return v

    def expensive_method(self, *args, **kwargs):
        if self.flag:  # cached locally!
            return self._callmethod("expensive_method", args, kwargs)

    def spam(self, *args, **kwargs):
        return self._callmethod("spam", args, kwargs)

SyncManager.register("Foo", Foo, FooProxy)
Again, this won't solve the issue with standard types nested inside other proxied values.
Apply a monkeypatch
I use this to fix the AutoProxy callable; it automatically skips patching when you are running a Python version where the fix has already been applied to the source code:
# Backport of https://github.com/python/cpython/pull/4819
# Improvements to the Manager / proxied shared values code
# broke handling of proxied objects without a custom proxy type,
# as the AutoProxy function was not updated.
#
# This code adds a wrapper to AutoProxy if it is missing the
# new argument.
import logging
from inspect import signature
from functools import wraps
from multiprocessing import managers

logger = logging.getLogger(__name__)

orig_AutoProxy = managers.AutoProxy

@wraps(managers.AutoProxy)
def AutoProxy(*args, incref=True, manager_owned=False, **kwargs):
    # Create the autoproxy without the manager_owned flag, then
    # update the flag on the generated instance. If the manager_owned flag
    # is set, `incref` is disabled, so set it to False here for the same
    # result.
    autoproxy_incref = False if manager_owned else incref
    proxy = orig_AutoProxy(*args, incref=autoproxy_incref, **kwargs)
    proxy._owned_by_manager = manager_owned
    return proxy

def apply():
    if "manager_owned" in signature(managers.AutoProxy).parameters:
        return

    logger.debug("Patching multiprocessing.managers.AutoProxy to add manager_owned")
    managers.AutoProxy = AutoProxy

    # re-register any types already registered to SyncManager without a custom
    # proxy type, as otherwise these would all be using the old unpatched AutoProxy
    SyncManager = managers.SyncManager
    registry = managers.SyncManager._registry
    for typeid, (callable, exposed, method_to_typeid, proxytype) in registry.items():
        if proxytype is not orig_AutoProxy:
            continue
        create_method = hasattr(managers.SyncManager, typeid)
        SyncManager.register(
            typeid,
            callable=callable,
            exposed=exposed,
            method_to_typeid=method_to_typeid,
            create_method=create_method,
        )
Import the above and call the apply() function to fix multiprocessing. Do so before you start the manager server!
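For example, assuming you saved the patch above as autoproxy_patch.py (a hypothetical module name):

import autoproxy_patch  # the module containing the patch above
autoproxy_patch.apply()

from multiprocessing.managers import SyncManager

manager = SyncManager()
manager.start()  # the manager now uses the patched AutoProxy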
Solution editing multiprocessing source code
The original answer by Sergey requires you to edit multiprocessing source code as follows:
Find your multiprocessing package (mine, installed via Anaconda, was in /anaconda3/lib/python3.6/multiprocessing).
Open managers.py
Add the keyword argument manager_owned=True to the AutoProxy function.
Original AutoProxy:
def AutoProxy(token, serializer, manager=None, authkey=None,
              exposed=None, incref=True):
    ...
Edited AutoProxy:
def AutoProxy(token, serializer, manager=None, authkey=None,
              exposed=None, incref=True, manager_owned=True):
    ...
Solution via code, at run time
I have managed to solve the unexpected keyword argument TypeError exception without directly editing the source code of multiprocessing, by instead adding these few lines of code where I use multiprocessing's Managers:
import multiprocessing.managers

# Back up the original AutoProxy function
backup_autoproxy = multiprocessing.managers.AutoProxy

# Define a new AutoProxy that handles the unwanted key argument 'manager_owned'
def redefined_autoproxy(token, serializer, manager=None, authkey=None,
                        exposed=None, incref=True, manager_owned=True):
    # Call the original AutoProxy without the unwanted key argument
    return backup_autoproxy(token, serializer, manager, authkey,
                            exposed, incref)

# Update the AutoProxy definition in the multiprocessing.managers package
multiprocessing.managers.AutoProxy = redefined_autoproxy
Found temporary solution here.
I've managed to fix it by adding the needed keyword to the initializer of AutoProxy in multiprocessing\managers.py. Though, I don't know if this kwarg is responsible for anything.
I want to force object instantiation via a class context manager, so as to make it impossible to instantiate the object directly.
I implemented this solution, but technically a user can still instantiate the object.
class HessioFile:
    """
    Represents a pyhessio file instance
    """
    def __init__(self, filename=None, from_context_manager=False):
        if not from_context_manager:
            raise HessioError('HessioFile can only be used with a context manager')
And context manager:
@contextmanager
def open(filename):
    """
    ...
    """
    hessfile = HessioFile(filename, from_context_manager=True)
Any better solution?
If you consider that your clients will follow basic python coding principles then you can guarantee that no method from your class will be called if you are not within the context.
Your client is not supposed to call __enter__ explicitly, therefore if __enter__ has been called you know your client used a with statement and is therefore inside context (__exit__ will be called).
You just need to have a boolean variable that helps you remember if you are inside or outside context.
class Obj:
    def __init__(self):
        self._inside_context = False

    def __enter__(self):
        self._inside_context = True
        print("Entering context.")
        return self

    def __exit__(self, *exc):
        print("Exiting context.")
        self._inside_context = False

    def some_stuff(self, name):
        if not self._inside_context:
            raise Exception("This method should be called from inside context.")
        print("Doing some stuff with", name)

    def some_other_stuff(self, name):
        if not self._inside_context:
            raise Exception("This method should be called from inside context.")
        print("Doing some other stuff with", name)


with Obj() as inst_a:
    inst_a.some_stuff("A")
    inst_a.some_other_stuff("A")

inst_b = Obj()
with inst_b:
    inst_b.some_stuff("B")
    inst_b.some_other_stuff("B")

inst_c = Obj()
try:
    inst_c.some_stuff("c")
except Exception:
    print("Instance C couldn't do stuff.")

try:
    inst_c.some_other_stuff("c")
except Exception:
    print("Instance C couldn't do some other stuff.")
This will print:
Entering context.
Doing some stuff with A
Doing some other stuff with A
Exiting context.
Entering context.
Doing some stuff with B
Doing some other stuff with B
Exiting context.
Instance C couldn't do stuff.
Instance C couldn't do some other stuff.
Since you'll probably have many methods that you want to "protect" from being called from outside context, then you can write a decorator to avoid repeating the same code to test for your boolean:
def raise_if_outside_context(method):
    def decorator(self, *args, **kwargs):
        if not self._inside_context:
            raise Exception("This method should be called from inside context.")
        return method(self, *args, **kwargs)
    return decorator
Then change your methods to:
@raise_if_outside_context
def some_other_stuff(self, name):
    print("Doing some other stuff with", name)
I suggest the following approach:
class MainClass:
    def __init__(self, *args, **kwargs):
        self._class = _MainClass(*args, **kwargs)

    def __enter__(self):
        print('entering...')
        return self._class

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Teardown code
        print('running exit code...')


# This class should not be instantiated directly!!
class _MainClass:
    def __init__(self, attribute1, attribute2):
        self.attribute1 = attribute1
        self.attribute2 = attribute2
        ...

    def method(self):
        # execute code
        if self.attribute1 == "error":
            raise Exception
        print(self.attribute1)
        print(self.attribute2)
with MainClass('attribute1', 'attribute2') as main_class:
    main_class.method()

print('---')

with MainClass('error', 'attribute2') as main_class:
    main_class.method()
This will output:
entering...
attribute1
attribute2
running exit code...
---
entering...
running exit code...
Traceback (most recent call last):
  File "scratch_6.py", line 34, in <module>
    main_class.method()
  File "scratch_6.py", line 25, in method
    raise Exception
Exception
None that I am aware of. Generally, if it exists in python, you can find a way to call it. A context manager is, in essence, a resource management scheme... if there is no use-case for your class outside of the manager, perhaps the context management could be integrated into the methods of the class? I would suggest checking out the atexit module from the standard library. It allows you to register cleanup functions much in the same way that a context manager handles cleanup, but you can bundle it into your class, such that each instantiation has a registered cleanup function. Might help.
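A rough sketch of that atexit approach (all names here are made up):

import atexit

class Resource:
    def __init__(self):
        self._open = True
        # register this instance's cleanup, mirroring what __exit__ would do
        atexit.register(self.close)

    def close(self):
        if self._open:
            self._open = False
            print("cleaned up")

r = Resource()  # close() runs automatically at interpreter exit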
It is worth noting that no amount of effort will prevent people from doing stupid things with your code. Your best bet is generally to make it as easy as possible for people to do smart things with your code.
You can think of hacky ways to try and enforce this (like inspecting the call stack to forbid direct calls to your object, or a boolean attribute that is set upon __enter__ and checked before allowing other actions on the instance), but that will eventually become a mess to understand and explain to others.
Regardless, you should also be certain that people will always find ways to bypass it if they want to. Python doesn't really tie your hands down; if you want to do something silly, it lets you do it - responsible adults, right?
If you need enforcement, you'd be better off supplying it as a documentation notice. That way, if users opt to instantiate directly and trigger unwanted behavior, it's their fault for not following the guidelines for your code.