I have a rather large and involved decorator to debug PyQt signals that I want to dynamically add to a class. Is there a way to add a decorator to a class dynamically?
I might be approaching this problem from the wrong angle, so here is what I want to accomplish.
Goal
I have a decorator that will discover/attach to all pyqt signals in a class and print debug when those signals are emitted.
This decorator is great for debugging a single class' signals. However, there might be a time when I would like to attach to ALL my signals in an application. This could be used to see if I'm emitting signals at unexpected times, etc.
I'd like to dynamically attach this decorator to all my classes that have signals.
Possible solutions/ideas
I've thought through a few possible solutions so far:
Inheritance: This would be easy if all my classes had the same base class (other than Python's built-in object and PyQt's built-in QtCore.QObject). I suppose I could just attach this decorator to my base class and everything would work out as expected. However, this is not the case in this particular application, and I don't want to change all my classes to share a base class either.
Monkey-patch Python object or QtCore.QObject: I don't know how this would work practically. However, in theory could I change one of these base classes' __init__ to be the new_init I define in my decorator? This seems really dangerous and hackish but maybe it's a good way?
Metaclasses: I don't think metaclasses will work in this scenario because I'd have to dynamically add the __metaclass__ attribute to the classes I want to inject the decorator into. I think this is impossible because to insert this attribute the class must have already been constructed. Thus, whatever metaclass I define won't be called. Is this true?
I tried a few variants of metaclass magic but nothing seemed to work. I feel like using metaclasses might be a way to accomplish what I want, but I can't seem to get it working.
Again, I might be going about this all wrong. Essentially I want to attach the behavior in my decorator referenced above to all classes in my application (maybe even a list of select classes). Also, I could refactor my decorator if necessary. I don't really care if I attach this behavior with a decorator or another mechanism. I just assumed this decorator already accomplishes what I want for a single class so maybe it was easy to extend.
Decorators are nothing more than callables that are applied automatically. To apply it manually, replace the class with the return value of the decorator:
import somemodule
somemodule.someclass = debug_signals(somemodule.someclass)
This replaces the somemodule.someclass name with the return value of debug_signals, which we passed the original somemodule.someclass class.
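As a minimal sketch of that idea, here the real signal-hooking decorator is replaced with a stand-in (debug_signals and Widget are illustrative names) just to show that rebinding the name to the decorator's return value works the same whether it happens at definition time or later:

```python
# Stand-in for the real signal-debugging decorator: it just tags
# the class so we can see that the replacement took effect.
def debug_signals(cls):
    cls._signals_debugged = True  # the real decorator would hook signals here
    return cls

class Widget:
    pass

# Equivalent of: somemodule.someclass = debug_signals(somemodule.someclass)
Widget = debug_signals(Widget)
print(Widget._signals_debugged)  # True
```

To cover a whole module, you could loop over `vars(somemodule)` and rebind every class you find the same way.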
Related
I have a class which calls a lot of its methods in __init__. Since a lot is going on in these methods, I want to test them. Testing classes and class methods requires instantiating the class and then calling its methods. But if I instantiate the class, the methods will already have been called before I can test them.
I have some ideas for possible solutions, but I am unsure if they are possible or a good way to go:
I could introduce a kwarg into the class like init=True and test for it within __init__. That way the default would still do all the magic stuff on object creation, and I could deactivate it to instantiate the class and call the methods separately.
I could define the methods outside the class in other classes or functions, if that works, and test them separately. The test of the bigger class would become something like an integration test.
It depends on what you would like to test
If you want to check if all the calls are happening correctly you could mock underlying functionality inside the __init__ method.
Then do asserts on the mocks. (pytest-mock provides a spy, which does not modify the original behavior but can still be inspected like a mock for call count, arguments, etc. You could replicate that with unittest.mock as well.)
So you could mock everything that is necessary by the beginning and then create an instance.
If you want to check how it was assembled you could do this after initialization.
Generally modifying your source code just for the purpose of the test case is not a good idea.
Run your code through a debugger, check what you are looking for as a tester and automate it.
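A minimal sketch of the mocking approach described above, using only the standard library's unittest.mock (the Loader class and its method names are illustrative, not from the question):

```python
from unittest import mock

# Hypothetical class that does real work in __init__.
class Loader:
    def __init__(self, path):
        self.path = path
        self.data = self._read()  # heavy work happens at construction
        self._validate()

    def _read(self):
        return "real data"

    def _validate(self):
        pass

# Mock the underlying methods *before* creating the instance,
# then assert on the calls -- no change to the source is needed.
with mock.patch.object(Loader, "_read", return_value="fake") as read, \
     mock.patch.object(Loader, "_validate") as validate:
    obj = Loader("/tmp/x")

read.assert_called_once_with()
validate.assert_called_once_with()
print(obj.data)  # fake
```

Because the patches are applied to the class before instantiation, every call made inside __init__ is recorded and can be asserted on afterwards.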
Simple question. What would be the most pythonic way to make a class unable to be instantiated.
I know this could be done by overriding __new__ and raising an exception there. Is there a better way?
I am kinda looking in the abstract class direction. However, I will NOT use this class as a base class to any other classes. And will NOT have any abstract methods defined in the class.
Without those, the abstract class does not raise any exceptions upon instantiation.
I have looked at this answer. It is not what I am looking for.
Is it possible to make abstract classes in Python?
Edit:
I know this might not be the best approach, but it is how I solved something I needed. I think it is a simple question that can have a "yes, with an explanation" or "no" answer. Answers/comments that basically amount to "you should not be asking this question" make very little sense.
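For reference, the __new__ approach mentioned in the question can be sketched like this (NoInstances is an illustrative name): the class acts as a pure namespace, and any attempt to instantiate it raises.

```python
class NoInstances:
    """A namespace-only class; instantiation raises."""
    def __new__(cls, *args, **kwargs):
        raise TypeError(f"{cls.__name__} cannot be instantiated")

    @staticmethod
    def double(x):
        return x * 2

print(NoInstances.double(21))  # 42
try:
    NoInstances()
except TypeError as exc:
    print(exc)  # NoInstances cannot be instantiated
```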
What you're after is being able to just call something like math.sin(x), right?
You make it a static method, and you simply don't make any instances.
@staticmethod
def awesome_method(x):
    return x
If you're wondering why you can't actually do a = math(): it's because math isn't a class; it's a separate module with all those functions in it.
So you may want to make a separate module with all your static things in it. ;)
If you want everything in the same script, then they're just local names. And if you're like me and would rather have them organised properly with classes, it looks like you'd be best off just making a normal class and creating a single instance of it.
I just used a metaclass for the first time.
The purpose was to get control of the help() output for a class or instance.
Specifying attributes in the __dir__() function of the metaclass allowed me to control the help content.
However, I observed that for intellisense/code_completion, within Jupyter, it was the __dir__() function of the class itself that matters.
It's enough for me to understand the fact.
However, I would like to know the reason for that.
Thanks for a clarification.
As I stated in the other answer on this topic, Python does not define how the contents of help() are picked. Using the metaclass __dir__ is something that happens to work due to the nature of the language, but third-party modules certainly won't expect it to return custom results different from the class's dir(), as that is not a common thing.
So what gives is that you are trying to customize features that are not customizable. You will either have to create a new class that proxies all the needed user methods, and can therefore hide from help() and dir() what you don't want to show, or create an entire application that does exactly what you need instead of relying on Jupyter notebooks to show your specific instructions and only those instructions.
Overall, if you are using a metaclass for other reasons, fine; but if you are reaching for a metaclass just to override help() output, I'd say that is already an incorrect use.
Another option is to stick a manually callable "help2" method on your class that prints only your desired output. Then you document in the main class docstring "please use classname.help2() to learn the usage", and otherwise leave __dir__ alone.
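A small sketch of the two __dir__ hooks at play (Meta and Api are illustrative names): the metaclass's __dir__ controls dir() on the class, while the class's own __dir__ controls dir() on instances, which is what completion engines typically query.

```python
class Meta(type):
    def __dir__(cls):
        return ["useful_method"]  # seen by dir(Api)

class Api(metaclass=Meta):
    def __dir__(self):
        return ["useful_method"]  # seen by dir(Api())

    def useful_method(self):
        return "ok"

    def _internal_detail(self):
        pass

print(dir(Api))    # ['useful_method']
print(dir(Api()))  # ['useful_method']
```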
Related: Python object conversion
I recently learned that Python allows you to change an instance's class like so:
class Robe:
    pass

class Dress:
    pass

r = Robe()
r.__class__ = Dress
I'm trying to figure out whether there is a case where 'transmuting' an object like this can be useful. I've messed around with this in IDLE, and one thing I've noticed is that assigning a different class doesn't call the new class's __init__ method, though this can be done explicitly if needed.
Virtually every use case I can think of would be better served by composition, but I'm a coding newb so what do I know. ;)
There is rarely a good reason to do this for unrelated classes, like Robe and Dress in your example. Without a bit of work, it's hard to ensure that the object you get in the end is in a sane state.
However, it can be useful when inheriting from a base class, if you want to use a non-standard factory function or constructor to build the base object. Here's an example:
class Base(object):
    pass

def base_factory():
    return Base()  # in real code, this would probably be something opaque

class Derived(Base):
    def __new__(cls):
        self = base_factory()     # get an instance of Base
        self.__class__ = Derived  # and turn it into an instance of Derived
        return self
In this example, the Derived class's __new__ method wants to construct its object using the base_factory function, which returns an instance of the Base class. Often this sort of factory lives in a library somewhere, and you can't know for certain how it's making the object (you can't just call Base() or super(Derived, cls).__new__(cls) yourself to get the same result).
The instance's __class__ attribute is rewritten so that the result of calling Derived.__new__ will be an instance of the Derived class, which ensures that it will have the Derived.__init__ method called on it (if such a method exists).
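A runnable variant of that sketch, demonstrating that __init__ does run on the transmuted object returned from __new__ (the tag attribute is added here only so the effect is observable):

```python
class Base:
    pass

def base_factory():
    return Base()  # stands in for an opaque library factory

class Derived(Base):
    def __new__(cls):
        self = base_factory()  # get an instance of Base
        self.__class__ = cls   # turn it into an instance of Derived
        return self

    def __init__(self):
        self.tag = "initialized"

d = Derived()
print(type(d).__name__, d.tag)  # Derived initialized
```

Because __new__ returns an instance of Derived, Python proceeds to call Derived.__init__ on it, exactly as the explanation above describes.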
I remember using this technique ages ago to “upgrade” existing objects after recognizing what kind of data they hold. It was a part of an experimental XMPP client. XMPP uses many short XML messages (“stanzas”) for communication.
When the application received a stanza, it was parsed into a DOM tree. Then the application needed to recognize what kind of stanza it is (a presence stanza, message, automated query etc.). If, for example, it was recognized as a message stanza, the DOM object was “upgraded” to a subclass that provided methods like “get_author”, “get_body” etc.
I could of course have just made a new class to represent a parsed message, made a new object of that class, and copied the relevant data from the original XML DOM object. There were two benefits of changing the object's class in place, though. Firstly, XMPP is a very extensible standard, and it was useful to still have easy access to the original DOM object in case some other part of the code found something useful there, or while debugging. Secondly, profiling told me that creating a new object and explicitly copying data was much slower than just reusing the object that would otherwise be quickly destroyed anyway; the difference was enough to matter in XMPP, which uses many short messages.
I don't think any of these reasons justifies the use of this technique in production code, unless maybe you really need the (not that big) speedup in CPython. It's just a hack which I found useful to make code a bit shorter and faster in the experimental application. Note also that this technique will easily break JIT engines in non-CPython implementations, making the code much slower!
I have some Python code that creates a Calendar object based on parsed VEvent objects from an iCalendar file.
The calendar object just has a method that adds events as they get parsed.
Now I want to create a factory function that creates a calendar from a file object, path, or URL.
I've been using the iCalendar Python module, which implements a factory function as a class method directly on the class it returns an instance of:
cal = icalendar.Calendar.from_string(data)
From what little I know about Java, this is a common pattern in Java code, though I seem to find more references to a factory method being on a different class than the class you actually want to instantiate instances from.
The question is: is this also considered Pythonic? Or is it more Pythonic to create a module-level function as the factory?
[Note. Be very cautious about separating "Calendar" a collection of events, and "Event" - a single event on a calendar. In your question, it seems like there could be some confusion.]
There are many variations on the Factory design pattern.
A stand-alone convenience function (e.g., calendarMaker(data))
A separate class (e.g., CalendarParser) which builds your target class (Calendar).
A class-level method (e.g., Calendar.from_string).
These have different purposes. All are Pythonic, the questions are "what do you mean?" and "what's likely to change?" Meaning is everything; change is important.
Convenience functions are Pythonic. Languages like Java can't have free-floating functions; you must wrap a lonely function in a class. Python allows you to have a lonely function without the overhead of a class. A function is relevant when your constructor has no state changes or alternate strategies or any memory of previous actions.
Sometimes folks will define a class and then provide a convenience function that makes an instance of the class, sets the usual parameters for state and strategy and any other configuration, and then calls the single relevant method of the class. This gives you both the statefulness of class plus the flexibility of a stand-alone function.
The class-level method pattern is used, but it has limitations. One, it's forced to rely on class-level variables. Since these can be confusing, a complex constructor as a static method runs into problems when you need to add features (like statefulness or alternative strategies.) Be sure you're never going to expand the static method.
Two, it's more-or-less irrelevant to the rest of the class methods and attributes. This kind of from_string is just one of many alternative encodings for your Calendar objects. You might have a from_xml, from_JSON, from_YAML and on and on. None of this has the least relevance to what a Calendar IS or what it DOES. These methods are all about how a Calendar is encoded for transmission.
What you'll see in the mature Python libraries is that factories are separate from the things they create. Encoding (as strings, XML, JSON, YAML) is subject to a great deal of more-or-less random change. The essential thing, however, rarely changes.
Separate the two concerns. Keep encoding and representation as far away from state and behavior as you can.
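A brief sketch of that separation, assuming a simple line-based text format (Calendar, calendar_from_string, and the "EVENT:" prefix are all illustrative): the class holds state and behavior, while a module-level factory owns the encoding.

```python
class Calendar:
    def __init__(self):
        self.events = []

    def add_event(self, event):
        self.events.append(event)

def calendar_from_string(data):
    """Parsing/encoding concerns live here, not on Calendar."""
    cal = Calendar()
    for line in data.splitlines():
        if line.startswith("EVENT:"):
            cal.add_event(line[len("EVENT:"):])
    return cal

cal = calendar_from_string("EVENT:standup\nEVENT:review")
print(cal.events)  # ['standup', 'review']
```

Adding a from_xml or from_json later means adding another module-level function, with no change to Calendar itself.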
It's Pythonic not to agonize over esoteric differences in some pattern you read somewhere and now want to use everywhere, like the factory pattern.
Most of the time when you would reach for a @staticmethod as a solution, it's probably better to use a module function; the exception is when you put multiple classes in one module and each has a different implementation of the same interface, in which case a @staticmethod is the better fit.
Ultimately, whether you create your instances with a @staticmethod or a module function makes little difference.
I'd probably use the initializer (__init__) of a class, because one of the more accepted "patterns" in Python is that the factory for a class is the class's own initialization.
IMHO a module-level method is a cleaner solution. It hides behind the Python module system that gives it a unique namespace prefix, something the "factory pattern" is commonly used for.
The factory pattern has its own strengths and weaknesses. However, choosing one way to create instances usually has little pragmatic effect on your code.
A staticmethod rarely has value, but a classmethod may be useful. It depends on what you want the class and the factory function to actually do.
A factory function in a module would always make an instance of the 'right' type (where 'right' in your case is always the Calendar class, but you might also make it dependent on the contents of what it is creating the instance from).
Use a classmethod if you wish to make it dependent not on the data, but on the class you call it on. A classmethod is like a staticmethod in that you can call it on the class, without an instance, but it receives the class it was called on as its first argument. This allows you to actually create an instance of that class, which may be a subclass of the original class. An example of a classmethod is dict.fromkeys(), which creates a dict from a list of keys and a single value (defaulting to None). Because it's a classmethod, when you subclass dict you get the fromkeys method entirely for free. Here's an example of how one could write dict.fromkeys() oneself:
class dict_with_fromkeys(dict):
    @classmethod
    def fromkeys(cls, keys, value=None):
        self = cls()
        for key in keys:
            self[key] = value
        return self
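A standalone demo of why the classmethod matters here (FlexDict, ChildDict, and from_keys_with are illustrative names chosen to avoid shadowing the real dict.fromkeys): the subclass inherits the factory, and the factory builds an instance of whichever class it is called on.

```python
class FlexDict(dict):
    @classmethod
    def from_keys_with(cls, keys, value=None):
        self = cls()  # builds an instance of the *calling* class
        for key in keys:
            self[key] = value
        return self

class ChildDict(FlexDict):
    pass

# Called on the subclass, so we get a ChildDict back, not a FlexDict.
d = ChildDict.from_keys_with(["a", "b"], 0)
print(type(d) is ChildDict, dict(d))  # True {'a': 0, 'b': 0}
```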