Context managers define setup/cleanup functions __enter__ and __exit__. Awesome. I want to keep one around as a member variable. When my class object goes out of scope, I want this cleanup performed. This is basically the behavior that I understand happens automatically with C++ constructors/destructors.
class Animal(object):
    def __init__(self):
        self.datafile = open("file.txt")  # This has a cleanup function
        # I wish I could say something like...
        with open("file.txt") as self.datafile:  # uh...
            ...

    def makeSound(self):
        sound = self.datafile  # I'll be using it later

# Usage...
if True:
    animal = Animal()
# file should be cleaned up and closed at this point.
I give classes a close function if it makes sense and then use the closing context manager:
class MyClass(object):
    def __init__(self):
        self.resource = acquire_resource()

    def close(self):
        release_resource(self.resource)
And then use it like:
from contextlib import closing

with closing(MyClass()) as my_object:
    # use my_object
    ...
Python doesn't do C++-style RAII ("Resource Acquisition Is Initialization", meaning anything you acquire in the constructor, you release in the destructor). In fact, almost no languages besides C++ do C++-style RAII. Python's context managers and with statements are a different way to achieve the same thing that C++ does with RAII, and that most other languages do with finally statements, guard statements, etc. (Python also has finally, of course.)
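To make the correspondence concrete, here is a small sketch (the file name is just an illustration); both snippets guarantee the file is closed even if an exception is raised:

# cleanup with an explicit try/finally
f = open("file.txt")
try:
    data = f.read()
finally:
    f.close()

# the same cleanup via the file's context manager
with open("file.txt") as f:
    data = f.read()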
What exactly do you mean by "When my class object goes out of scope"?
Objects don't go out of scope; references (or variables, or names, whatever you prefer) do. Some time after the last reference goes out of scope (for CPython, this is immediately, unless it's involved in a reference cycle; for other implementations, it's usually not), the object will be garbage-collected.
If you want to do some cleanup when your objects are garbage-collected, you use the __del__ method for that. But that's rarely what you actually want. (In fact, some classes have a __del__ method just to warn users that they forgot to clean up, rather than to silently clean up.)
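As a rough sketch of that "warn, don't silently clean up" pattern (the Resource class and the warning message are purely illustrative):

import warnings

class Resource(object):
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __del__(self):
        # not a reliable cleanup hook; just nag the user who forgot to call close()
        if not self.closed:
            warnings.warn("Resource was never closed", ResourceWarning)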
A better solution is to make Animal itself a context manager, so it can manage other context managers—or just manage things explicitly. Then you can write:
if True:
    with Animal() as animal:
        # do stuff
        ...
# now the file is closed
Here's an example:
class Animal(object):
    def __init__(self):
        self.datafile = open("file.txt")

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.datafile.close()

    def makeSound(self):
        sound = self.datafile  # I'll be using it later
(Just falling off the end of __exit__ like that means that, after calling self.datafile.close(), we successfully do nothing if there was no exception, or re-raise the same exception if there was one. So, you don't have to write anything explicit to make that happen.)
But usually, if you're going to make a class into a context manager, you also want to add an explicit close. Just like files do. And once you do that, you really don't need to make Animal into a context manager; you can just use closing.
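A sketch of that combination, building on the Animal example above: an explicit close(), with __enter__/__exit__ simply delegating to it, just like file objects do.

class Animal(object):
    def __init__(self):
        self.datafile = open("file.txt")

    def close(self):
        self.datafile.close()

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.close()

    def makeSound(self):
        sound = self.datafile  # I'll be using it later

And once close() exists, with closing(Animal()) as animal: works as well, so the __enter__/__exit__ pair is a convenience rather than a necessity.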
Related
I know the question has been asked before, but I find myself bumping into situations where a staticmethod is most appropriate, but there is also a need to reference an instance variable inside this class. As an example, let's say I have the following class:
class ExampleClass(object):
    def __init__(self, filename='defaultFilename'):
        self.file_name = filename

    @staticmethod
    def doSomethingWithFiles(file_2, file_1=None):
        # if the user didn't supply a file, use the instance variable
        if file_1 is None:
            # no idea how to handle the uninitialized class case to create
            # self.file_name.
            file_1 = __class__.__init__().__dict__['file_name']  # <--- this seems sketchy
        else:
            file_1 = file_1
        with open(file_1, 'r') as f1, open(file_2, 'w') as f2:
            ...  # you get the idea...

    def moreMethodsThatUseSelf(self):
        pass
Now suppose I had a few instances of the ExampleClass (E1, E2, E3) with different filenames passed into __init__, but I want to retain the ability to use either an uninitialized class ExampleClass.doSomethingWithFiles(file_2 = E1.file_name, file_1 = E2.file_name) or E1.doSomethingWithFiles(file_2 = E2.file_name, file_1 = 'some_other_file') as the situation requires.
Is there any reason for me to keep trying to find a way to do what I am thinking of, or am I making a mess?
UPDATE
I think the comments are helpful and I also think it's an issue I'm encountering due to bad design.
The issue started out as a way to prevent concurrent access to HDF5 files by giving each class instance an rlock that I could use as a context manager to prevent any other attempts to access the file while it was in use. Each class instance had its own rlock that it acquired and released when done with whatever it needed to do. I was also using @staticmethod to perform a routine that generated a file, which was then passed into its own __init__ and was unique to each class instance. At the time it seemed clever, but I regret it. I am also entirely unsure of when @staticmethod is ever appropriate, and maybe I was confusing it with @classmethod, but a class variable would no longer make the rlocks and files that are unique to my class instances possible. I think I should probably just think more about design rather than trying to justify using a class definition I do not really understand in a manner it was designed to protect against.
If you think you keep bumping into situations where a staticmethod is most appropriate, you're probably wrong—good uses for them are very rare. And if your staticmethod needs to access instance variables, you're definitely wrong.
A staticmethod cannot access instance variables directly. There could be no instances of the class, or thousands of them; which one would you access the variables from?
What you're trying to do is to create a new instance, just to access its instance variables. This can occasionally be useful—although it's more often a good sign you didn't need a class in the first place. (And, when it is useful, it's unusual enough to be usually worth signaling, by having the caller write ExampleClass().doSomethingWithFiles instead of ExampleClass.doSomethingWithFiles.)
That's legal, but you do it by just calling the class, not by calling its __init__ method. That __init__ never returns anything; it receives an already-created self and modifies it. If you really want to, you can call its __new__ method, but that effectively just means the same thing as calling the class. (In the minor ways in which they're different, it's calling the class that you want.)
Also, once you've got an instance, you can just use it normally; you don't need to look at its __dict__. (Even if you only had the attribute name as a string variable, getattr(obj, name) is almost always what you want there, not obj.__dict__[name].)
So:
file_1 = __class__().file_name
So, what should you do instead?
Well, look at your design. The only thing an ExampleClass instance does is hold a filename, which has a default value. You don't need an object for that, just a plain old string variable that you pass in, or store as a global. (You may have heard that global variables are bad—but global variables in disguise are just as bad, and have the additional problem that they're in disguise. And that's basically what you've designed. And sometimes, global variables are the right answer.)
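A hedged sketch of that simpler design, with the default filename kept as an ordinary module-level constant and the work done by a plain function (the names are illustrative):

DEFAULT_FILE_NAME = 'defaultFilename'

def do_something_with_files(file_2, file_1=None):
    if file_1 is None:
        file_1 = DEFAULT_FILE_NAME
    with open(file_1, 'r') as f1, open(file_2, 'w') as f2:
        ...  # you get the idea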
Why not pass the instance as a parameter to the static method? I hope this code will be helpful.
class ClassA:
    def __init__(self, fname):
        self.fname = fname

    def print(self):
        print('fname=', self.fname)

    @staticmethod
    def check(f):
        if type(f) == ClassA:
            print('f exists.')
            f.print()
            print('f.fname=', f.fname)
        else:
            print('f does not exist: creating a new ClassA')
            newa = ClassA(f)
            return newa

a = ClassA('temp')
b = ClassA('test')
ClassA.check(a)
ClassA.check(b)
newa = ClassA.check('hello')
newa.print()
You cannot refer to an instance attribute from a static method. Suppose multiple instances exist, which one would you pick the attribute from?
What you seem to need is to have a class attribute and a class method. You can define one by using the classmethod decorator.
class ExampleClass(object):
    file_name = 'foo'

    @classmethod
    def doSomethingWithFiles(cls, file_2, file_1=None):
        if file_1 is None:
            file_1 = cls.file_name
        # Do stuff
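Hypothetical calls would then look like this:

ExampleClass.doSomethingWithFiles('output.txt')               # falls back to ExampleClass.file_name
ExampleClass.doSomethingWithFiles('output.txt', 'input.txt')  # explicit file_1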
Maybe I'm misunderstanding what your intentions are but I think you're misusing the default parameter.
It appears you're trying to use 'defaultFilename' as the default parameter value. Why not just skip the awkward
if file_1 is None:
    # no idea how to handle the uninitialized class case to create
    # self.file_name.
    file_1 = __class__.__init__().__dict__['file_name']  # <--- this seems sketchy
and change the function as follows,
def doSomethingWithFiles(file_2, file_1='defaultFilename'):
If hardcoding that value makes you uncomfortable, maybe try:
class ExampleClass(object):
    DEFAULT_FILE_NAME = 'defaultFilename'

    def __init__(self, filename=DEFAULT_FILE_NAME):
        self.file_name = filename

    @staticmethod
    def doSomethingWithFiles(file_2, file_1=DEFAULT_FILE_NAME):
        with open(file_1, 'r') as f1, open(file_2, 'w') as f2:
            ...  # do magic in here

    def moreMethodsThatUseSelf(self):
        pass
In general, though, you're probably modeling your problem wrong if you want to access an instance variable inside a static method.
As is common knowledge, the Python __del__ method should not be used to clean up important things, as it is not guaranteed that this method gets called. The alternative is the use of a context manager, as described in several threads.
But I do not quite understand how to rewrite a class to use a context manager. To elaborate, I have a simple (non-working) example in which a wrapper class opens and closes a device, and which should close the device whenever the instance of the class goes out of scope (exception, etc.).
The first file mydevice.py is a standard wrapper class to open and close a device:
class MyWrapper(object):
    def __init__(self, device):
        self.device = device

    def open(self):
        self.device.open()

    def close(self):
        self.device.close()

    def __del__(self):
        self.close()
This class is used by another class, in myclass.py:
import mydevice

class MyClass(object):
    def __init__(self, device):
        # calls open in mydevice
        self.mydevice = mydevice.MyWrapper(device)
        self.mydevice.open()

    def processing(self, value):
        if not value:
            self.mydevice.close()
        else:
            something_else()
My question: When I implement the context manager in mydevice.py with __enter__ and __exit__ methods, how can this class be handled in myclass.py? I need to do something like
def __init__(self, device):
    with mydevice.MyWrapper(device):
        ???
but how to handle it then? Maybe I overlooked something important? Or can I use a context manager only within a function and not as a variable inside a class scope?
I suggest using the contextlib.contextmanager decorator instead of writing a class that implements __enter__ and __exit__. Here's how it would work:
class MyWrapper(object):
    def __init__(self, device):
        self.device = device

    def open(self):
        self.device.open()

    def close(self):
        self.device.close()

    # I assume your device has a blink command
    def blink(self):
        # do something useful with self.device
        self.device.send_command(CMD_BLINK, 100)

    # there is no __del__ method, as long as you conscientiously use the wrapper

import contextlib

@contextlib.contextmanager
def open_device(device):
    wrapper_object = MyWrapper(device)
    wrapper_object.open()
    try:
        yield wrapper_object
    finally:
        wrapper_object.close()
    return

with open_device(device) as wrapper_object:
    # do something useful with wrapper_object
    wrapper_object.blink()
The line that starts with an at sign is called a decorator. It modifies the function declaration on the next line.
When the with statement is encountered, the open_device() function will execute up to the yield statement. The value in the yield statement is returned in the variable that's the target of the optional as clause, in this case, wrapper_object. You can use that value like a normal Python object thereafter. When control exits from the block by any path – including throwing exceptions – the remaining body of the open_device function will execute.
I'm not sure if (a) your wrapper class is adding functionality to a lower-level API, or (b) if it's only something you're including so you can have a context manager. If (b), then you can probably dispense with it entirely, since contextlib takes care of that for you. Here's what your code might look like then:
import contextlib

@contextlib.contextmanager
def open_device(device):
    device.open()
    try:
        yield device
    finally:
        device.close()
    return

with open_device(device) as device:
    # do something useful with device
    device.send_command(CMD_BLINK, 100)
99% of context manager uses can be done with contextlib.contextmanager. It is an extremely useful part of the API (and the way it's implemented is also a creative use of lower-level Python plumbing, if you care about such things).
The issue is not that you're using it in a class, it's that you want to leave the device in an "open-ended" way: you open it and then just leave it open. A context manager provides a way to open some resource and use it in a relatively short, contained way, making sure it is closed at the end. Your existing code is already unsafe, because if some crash occurs, you can't guarantee that your __del__ will be called, so the device may be left open.
Without knowing exactly what the device is and how it works, it's hard to say more, but the basic idea is that, if possible, it's better to only open the device right when you need to use it, and then close it immediately afterwards. So your processing is what might need to change, to something more like:
def processing(self, value):
    with self.device:
        if value:
            something_else()
If self.device is an appropriately-written context manager, it should open the device in __enter__ and close it in __exit__. This ensures that the device will be closed at the end of the with block.
Of course, for some sorts of resources, it's not possible to do this (e.g., because opening and closing the device loses important state, or is a slow operation). If that is your case, you are stuck with using __del__ and living with its pitfalls. The basic problem is that there is no foolproof way to leave the device "open-ended" but still guarantee it will be closed even in the event of some unusual program failure.
I'm not quite sure what you're asking. A context manager instance can be a class member - you can re-use it in as many with clauses as you like and the __enter__() and __exit__() methods will be called each time.
So, once you'd added those methods to MyWrapper, you can construct it in MyClass just as you are above. And then you'd do something like:
def my_method(self):
    with self.mydevice:
        # Do stuff here
        ...
That will call the __enter__() and __exit__() methods on the instance you created in the constructor.
However, a with block can only span code inside a single function; if you use the with clause in the constructor, then __exit__() will be called before the constructor returns. If you want the device to stay open beyond that, the only way is to use __del__(), which has its own problems, as you've already mentioned. You could open and close the device only when you need it using with, as in the sketch below, but I don't know if this fulfils your requirements.
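To make that concrete, here is a hedged sketch of MyWrapper rewritten as a context manager and stored as a member, with the device opened only for the duration of each operation (the method names follow the question's code):

class MyWrapper(object):
    def __init__(self, device):
        self.device = device  # nothing is opened yet

    def __enter__(self):
        self.device.open()
        return self

    def __exit__(self, type, value, traceback):
        self.device.close()

class MyClass(object):
    def __init__(self, device):
        self.mydevice = MyWrapper(device)

    def processing(self, value):
        with self.mydevice:  # opened here, closed when the block ends, even on exceptions
            if value:
                something_else()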
Is there any way to use monitor-style thread synchronization, like Java's synchronized methods, in a Python class to ensure thread safety and avoid race conditions?
I want a monitor-like synchronization mechanism that allows only one method call at a time in my class or object.
You might want to have a look at Python's threading interface. For simple mutual exclusion functionality you can use a Lock object. You can easily do this using the with statement:
...
lock = Lock()
...
with lock:
    # This code will only be executed by one single thread at a time;
    # the lock is released when the thread exits the 'with' block
    ...
See also here for an overview of the different thread synchronization mechanisms in Python.
There is no Python language construct for Java's synchronized (but I guess it could be built using decorators).
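For instance, a minimal sketch of such a decorator; the Account class and its _lock attribute are purely illustrative, not a standard library feature:

import threading
from functools import wraps

def synchronized(method):
    """Serialize calls to the decorated method through the instance's _lock."""
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        with self._lock:
            return method(self, *args, **kwargs)
    return wrapper

class Account(object):
    def __init__(self, balance=0):
        self._lock = threading.RLock()  # RLock lets synchronized methods call each other
        self.balance = balance

    @synchronized
    def deposit(self, amount):
        self.balance += amount

    @synchronized
    def withdraw(self, amount):
        self.deposit(-amount)  # safe: the RLock is re-entrant within the same thread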
I built a simple prototype for it; here's a link to the GitHub repository with all the details: https://github.com/m-a-rahal/monitor-sync-python
I used inheritance instead of decorators, but maybe I'll include that option later.
Here's what the 'Monitor' super class looks like:
import threading

class Monitor(object):
    def __init__(self, lock=None):
        '''Initializes the _lock; threading.Lock() is used by default.'''
        self._lock = lock if lock is not None else threading.Lock()

    def Condition(self):
        '''Returns a condition bound to this monitor's lock.'''
        return threading.Condition(self._lock)

    init_lock = __init__
Now all you need to do to define your own monitor is to inherit from this class:
class My_Monitor_Class(Monitor):
    def __init__(self):
        self.init_lock()  # just don't forget this line, it creates the monitor's _lock
        self.cond1 = self.Condition()
        self.cond2 = self.Condition()
        # you can see I defined some 'Condition' objects as well, very simple syntax
        # these conditions are bound to the lock of the monitor
You can also pass your own lock instead:
class My_Monitor_Class(Monitor):
    def __init__(self, lock):
        self.init_lock(lock)
Check out the threading.Condition() documentation.
Also, you need to protect all the 'public' methods with the monitor's lock, like this:
class My_Monitor_Class(Monitor):
    def method(self):
        with self._lock:
            # your code here
            ...
If you want to use 'private' methods (called only from inside the monitor), you can either NOT protect them with the _lock (or else the threads will deadlock), or use an RLock for the monitor instead; a small sketch of the RLock variant follows.
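This sketch reuses the Monitor base class from above; Counter and its methods are just illustrative names:

import threading

class Counter(Monitor):
    def __init__(self):
        self.init_lock(threading.RLock())  # re-entrant lock instead of the default Lock
        self.value = 0

    def increment(self):  # 'public' method, protected by the monitor's lock
        with self._lock:
            self._bump(1)

    def _bump(self, n):  # 'private' method, also protected; fine because the RLock is re-entrant
        with self._lock:
            self.value += n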
EXTRA TIP
Sometimes a monitor consists of 'entrance' and 'exit' protocols:
monitor.enter_protocol()
<critical section>
monitor.exit_protocol()
In this case, you can exploit Python's cool with statement :3
Just define the __enter__ and __exit__ methods like this:
class monitor(Monitor):
    def __enter__(self):
        with self._lock:
            # enter_protocol code here
            ...

    def __exit__(self, type, value, traceback):
        with self._lock:
            # exit_protocol code here
            ...
Now all you need to do is put the monitor in a with statement:

with monitor:
    <critical section>
If an object relies on a module that is not included with Python (like win32api, gstreamer, gui toolkits, etc.), and a class/function/method from that module may fail, what should the object do?
Here's an example:
import guimodule  # Just an example; could be anything

class RandomWindow(object):
    def __init__(self):
        try:
            self.dialog = guimodule.Dialog()  # I might fail
        except guimodule.DialogError:
            self.dialog = None  # This can't be right

    def update(self):
        self.dialog.prepare()
        self.dialog.paint()
        self.dialog.update()

    # ~30 more methods
This class would only be a tiny (and unnecessary, but useful) part of a bigger program.
Let's assume we have an imaginary module called guimodule, with a class called Dialog, that may fail to instantiate. If our RandomWindow class has say, 30 methods that manipulate this window, checking if self.dialog is not None will be a pain, and will slow down the program when implemented in constantly used methods (like the update method in the example above). Calling .paint() on a NoneType (when the Dialog fails to load) will raise an error, and making a dummy Dialog class with all of the original's methods and attributes would be absurd.
How can I modify my class to handle a failed creation of the Dialog class?
Rather than creating an invalid object, you should let the exception raised in __init__ propagate out so the error can be handled in an appropriate manner. Or you could raise a different exception.
See also Python: is it bad form to raise exceptions within __init__?
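A minimal sketch of that approach, reusing the imaginary guimodule from the question; the caller decides what a missing dialog means for the application:

class RandomWindow(object):
    def __init__(self):
        # no try/except here: if Dialog() fails, constructing RandomWindow fails too
        self.dialog = guimodule.Dialog()

try:
    window = RandomWindow()
except guimodule.DialogError:
    window = None  # or log a warning, fall back to a console UI, etc.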
You may find it useful to have two subclasses of it; one that uses that module and one that does not. A "factory" method could determine which subclass was appropriate, and return an instance of that subclass.
By subclassing, you allow them to share code that is independent of whether that module is available.
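A hedged sketch of that factory idea, with hypothetical subclass names:

class BaseWindow(object):
    @classmethod
    def create(cls):
        """Factory: return a dialog-backed window if possible, else a plain one."""
        try:
            return DialogWindow()
        except guimodule.DialogError:
            return PlainWindow()

    # code shared by both subclasses, independent of guimodule, goes here

class DialogWindow(BaseWindow):
    def __init__(self):
        self.dialog = guimodule.Dialog()

    def update(self):
        self.dialog.prepare()
        self.dialog.paint()
        self.dialog.update()

class PlainWindow(BaseWindow):
    def update(self):
        pass  # nothing to draw without the gui module

window = BaseWindow.create()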
I agree that "checking if self.dialog is not None will be a pain", but I don't agree that it will slow things down: if self.dialog actually existed, the real dialog calls would be far slower than the None check. So forget about slowness for the time being. One way to handle it is to create a dummy dialog object that does nothing on method calls, e.g.:
class RandomWindow(object):
    def __init__(self):
        try:
            self.dialog = guimodule.Dialog()  # I might fail
        except guimodule.DialogError:
            self.dialog = DummyDialog()  # create a placeholder

class DummyDialog(object):
    # either list all methods or override __getattr__ to create a mock object
    ...
Making a dummy Dialog class is not as absurd as you might think if you consider using Python's __getattr__ feature. The following dummy implementation would completely fit your needs:
class DummyDialog:
    def __getattr__(self, name):
        def fct(*args, **kwargs):
            pass
        return fct
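As a quick illustration of why this works (the method names are hypothetical):

dialog = DummyDialog()
dialog.prepare()           # __getattr__ returns a no-op function, so this silently does nothing
dialog.paint()
dialog.update(force=True)  # arbitrary arguments are accepted and ignored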