Simple question: what would be the most Pythonic way to make a class impossible to instantiate?
I know this could be done by overriding __new__ and raising an exception there. Is there a better way?
I am sort of looking in the abstract-class direction. However, I will NOT use this class as a base class for any other classes, and it will NOT have any abstract methods defined in it.
Without those, an abstract class does not raise any exception upon instantiation.
I have looked at this answer. It is not what I am looking for.
Is it possible to make abstract classes in Python?
Edit:
I know this might not be the best approach, but it is how I solved something I needed. I think it is a simple question that can have either a yes + explanation, or a no. Answers/comments that basically amount to "you should not be asking this question" make very little sense.
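For reference, the __new__ approach mentioned in the question could look something like this (a minimal sketch; the Settings class and its attribute are hypothetical):

```python
class Settings:
    """Holds class-level data; never meant to be instantiated."""
    TIMEOUT = 30  # class-level data remains accessible

    def __new__(cls, *args, **kwargs):
        raise TypeError(f"{cls.__name__} cannot be instantiated")
```

Settings.TIMEOUT still works as usual, but Settings() raises TypeError.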
What you're after is being able to just call something like math.sin(x), right?
You make it a static method, and you simply don't make any instances:
@staticmethod
def awesome_method(x): return x
If you're wondering why you can't actually do a = math(): because math isn't a class, it's a separate module with all those functions in it.
So you may want to make a separate module with all your static things in it. ;)
If you want everything in the same script, then they're just local variables. And if you're like me and would rather have them organised properly with classes, it looks like you'd probably be best off just making a normal class and creating a single instance of it.
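A minimal sketch of the static-method idea (the class and method names here are hypothetical), where the class works purely as a namespace and is never instantiated:

```python
class MathUtils:
    """Used purely as a namespace; no instances are ever created."""

    @staticmethod
    def awesome_method(x):
        # Trivial example body
        return x * 2

print(MathUtils.awesome_method(21))  # 42
```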
Related
Suppose I have a project in ~/app/, containing at least files myclass.py, myobject.py, and app.py.
In myclass.py I have something like
class myclass:
    # class attributes and methods...
In myobject.py, I have something like
from app.myclass import myclass
attribute1 = 1
attribute2 = 2
myobject = myclass(attribute1, attribute2)
Finally, app.py looks something like
from app.myobject import myobject
# do stuff with myobject
In practice, I'm using myobject.py to gather a common instance of myclass and make it easily importable, so I don't have to define all the attributes separately. My question is about the convention of myobject.py. Is this okay, or is there something that would better achieve that purpose? The concern I thought of is that all these other variables (in this case attribute1 and attribute2) are just... there... in the myobject module. It feels a little weird because these aren't things that would ever be accessed individually, yet they are accessible. I feel like there's some other conventional way to do this. Is this perfectly fine, or am I right to have concerns (and if so, how do I fix it)?
Edit: To make it more clear, here is an example: I have a Camera class which stores the properties of the lens and CCD and such (like in myclass.py). So users are able to define different cameras and use them in the application. However, I want to allow them to have some preset cameras, thus I define objects of the Camera class that are specific to certain cameras I know are common to use for this application (like in myobject.py). So when they run the application, they can just import these preset cameras (as Camera objects) (like in app.py). How should these preset objects be written, if how it's written in myobject.py is not the best way?
This method fails when you need to call a function inside the class, as in the first case. I think you can do it by making an Attribute class and getting the variables from it:
class Attribute:
    def __init__(self, a1, a2):
        self.a1 = a1
        self.a2 = a2
att = Attribute(1, 2)
print(att.a1)
It looks like you stumbled upon the singleton pattern. Essentially, your class should only ever have one instance at any time, most likely to store global configuration or for some similar purpose. In Java, you'd implement this pattern by making the constructor private and having a static method (e.g. getInstance()) that returns a private static instance.
For Python, it's actually quite tricky to implement singletons. You can see some discussion about that subject here. To me how you're doing it is probably the simplest way to do it, and although it doesn't strictly enforce the singleton constraint, it's probably better for a small project than adding a ton of complexity like metaclasses to make 'true' singletons.
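One common lightweight alternative in Python is to let the module itself play the singleton role: modules are initialised once per process, so module-level instances are shared by every importer. A sketch under that assumption (the Camera fields and preset names are hypothetical):

```python
# cameras.py (hypothetical module) -- module-level preset instances.
# Modules are initialised once per process, so these behave as
# de-facto singletons without any metaclass machinery.

class Camera:
    def __init__(self, name, focal_length_mm, sensor_width_mm):
        self.name = name
        self.focal_length_mm = focal_length_mm
        self.sensor_width_mm = sensor_width_mm

# Preset cameras, created once at import time:
WIDE_ANGLE = Camera("wide", focal_length_mm=24, sensor_width_mm=36)
TELEPHOTO = Camera("tele", focal_length_mm=200, sensor_width_mm=36)
```

Users would then write something like `from cameras import WIDE_ANGLE`, and every importer shares the same instance.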
This is the normal/right way to do it in a Django project:
in models.py
class Reservation(models.Model):
    def cancel_reservation(self):
        # ...

    @classmethod
    def get_client_reservations(cls):
        # ...
The alternative way that I found in a company codebase:
in models.py
class Reservation(models.Model):
    # There is no method here except __unicode__
and in manage_reservations.py
def cancel_reservation(reservation):
# ...
def get_client_reservations():
# ...
I'd like to have an exhaustive list of the consequences of choosing the first way instead of the second one.
It's a coding style. An "object" in OOP is data and methods, together: the object has everything you need to hold the data and manipulate it. There is no "right" answer here; it's more a matter of opinion and style.
So you can write:
r = Reservation.objects.get(pk=1)
r.get_client_reservations()
Rather than:
from . import get_client_reservations
get_client_reservations(r)
But the truth is that Python modules are a very good solution to keep things together, and it's easier to debug than a complex inheritance chain.
In Django, OOP is essential because the framework lets you easily subclass components and customise only what you need; this is hard to do without objects.
If you need a specific form, with specific fields, then you can write it as a simple module with functions. But if you need a generic "Form" that everybody can customise (or a model, authentication backend etc), you need OOP.
So bottom line (IMHO): if Reservation is at the bottom of the pyramid, the end line of data and code, no big difference, more personal preference. If it's in the top and you are going to need ReservationThis and ReservationThat, OOP is better.
This isn't a technical answer, but try doing a git blame on that code to see who wrote the methods, and ask them why they chose to do it like that. In general it's better to keep the methods on the class (for multiple reasons), for example being able to do dir(r) (where r is a reservation) and see all the methods on r. There may be a reason, though, that we can't know without seeing the code.
You should put a method inside a class if it's related to the class, for example if it needs some class variable or if it logically belongs with the class.
I've recently started working at a company doing work in Python, and in their code they have a class which defines a handful of functions that do nothing, and return nothing. Code that is pretty much exactly
...
...
def foo(self):
return
I'm really confused as to why anyone would do that, and my manager is not around for me to ask. Would one do this for the sake of abstraction for child classes? A signal that the function will be overridden in the future? The class I'm looking at in particular inherits from a base class that does not contain any of the functions that are returning nothing, so I know that at least this class isn't doing some kind of weird function overriding.
Sometimes, if a class is meant to be used interchangeably with another class in the API, it can make sense to provide functions that don't do much (or anything). Depending on the API though, I would typically expect these functions to return something like NotImplemented.
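A sketch of that convention (all names here are hypothetical, not from the codebase in question): a stub that returns NotImplemented signals "not supported here" without breaking callers that treat the classes interchangeably.

```python
class BaseHandler:
    """Interface-style class used interchangeably with its subclasses."""

    def handle(self, event):
        # Stub: signals "not supported here" without raising
        return NotImplemented

class ClickHandler(BaseHandler):
    def handle(self, event):
        return f"clicked: {event}"

print(BaseHandler().handle("x"))   # NotImplemented
print(ClickHandler().handle("x"))  # clicked: x
```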
Or, maybe somebody didn't get enough sleep the night before and forgot what they were typing ... or got called away to a meeting without finishing what they were working on ...
Ultimately, nobody can know the actual reason without having a good knowledge of the code you're working with. Basically -- I'd wait for your boss or a co-worker to come around and ask.
If the functions have meaningful names, then it could be a skeleton for future intended functionality.
I have a rather large and involved decorator to debug PyQt signals that I want to dynamically add to a class. Is there a way to add a decorator to a class dynamically?
I might be approaching this problem from the wrong angle, so here is what I want to accomplish.
Goal
I have a decorator that will discover/attach to all pyqt signals in a class and print debug when those signals are emitted.
This decorator is great for debugging a single class' signals. However, there might be a time when I would like to attach to ALL my signals in an application. This could be used to see if I'm emitting signals at unexpected times, etc.
I'd like to dynamically attach this decorator to all my classes that have signals.
Possible solutions/ideas
I've thought through a few possible solutions so far:
Inheritance: This would be easy if all my classes had the same base class (other than Python's built-in object and PyQt's built-in QtCore.QObject). I suppose I could just attach this decorator to my base class and everything would workout as expected. However, this is not the case in this particular application. I don't want to change all my classes to have the same base class either.
Monkey-patch Python object or QtCore.QObject: I don't know how this would work practically. However, in theory could I change one of these base classes' __init__ to be the new_init I define in my decorator? This seems really dangerous and hackish but maybe it's a good way?
Metaclasses: I don't think metaclasses will work in this scenario because I'd have to dynamically add the __metaclass__ attribute to the classes I want to inject the decorator into. I think this is impossible because to insert this attribute the class must have already been constructed. Thus, whatever metaclass I define won't be called. Is this true?
I tried a few variants of metaclass magic but nothing seemed to work. I feel like using metaclasses might be a way to accomplish what I want, but I can't seem to get it working.
Again, I might be going about this all wrong. Essentially I want to attach the behavior in my decorator referenced above to all classes in my application (maybe even a list of select classes). Also, I could refactor my decorator if necessary. I don't really care if I attach this behavior with a decorator or another mechanism. I just assumed this decorator already accomplishes what I want for a single class so maybe it was easy to extend.
Decorators are nothing more than callables that are applied automatically. To apply it manually, replace the class with the return value of the decorator:
import somemodule
somemodule.someclass = debug_signals(somemodule.someclass)
This replaces the somemodule.someclass name with the return value of debug_signals, which we passed the original somemodule.someclass class.
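The same idea sketched end to end, with a hypothetical stand-in for the real debug_signals decorator (here it merely wraps __init__ to report instance creation, since the actual signal-discovery logic isn't shown in the question):

```python
def debug_signals(cls):
    """Hypothetical stand-in for the real decorator: wraps __init__
    so every instantiation is reported."""
    original_init = cls.__init__

    def new_init(self, *args, **kwargs):
        print(f"creating {cls.__name__}")
        original_init(self, *args, **kwargs)

    cls.__init__ = new_init
    return cls

class Widget:
    def __init__(self, name):
        self.name = name

# Apply the decorator after the fact, by rebinding the name:
Widget = debug_signals(Widget)
w = Widget("button")  # prints "creating Widget"
```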
In his book Java Design, Peter Coad says that one of the five criteria a subclass should meet is that the subclass "does not subclass what is merely a utility class (useful functionality you'd like to reuse)." For an example in Java, he says that making one of your domain classes a subclass of the Observable class is a violation of his rule: "Observable is a utility class, a collection of useful methods--nothing more."
In that context, here are some example test classes patterned after actual tests I've written:
class BaseDataGeneratorTestCase(unittest.TestCase):
    def _test_generate_data(self, generator, expected_value):
        # Imagine there's a lot more code here, making it
        # worthwhile to factor this method out.
        assert generator.generate_data() == expected_value

class DataGeneratorTests(BaseDataGeneratorTestCase):
    def test_generate_data(self):
        self._test_generate_data(DataGenerator(), "data")

class VariantDataGeneratorTests(BaseDataGeneratorTestCase):
    def test_generator_data(self):
        self._test_generate_data(VariantDataGenerator(), "different data")
Though this example is trivial, consider that the real tests and their surrounding system are, of course, much more complex. I think this example is usable as a vehicle to try and clear up some of my confusion about the proper use of inheritance.
Is subclassing BaseDataGeneratorTestCase a bad idea? Does it qualify as just "useful functionality [I'd] like to reuse"? Should _test_generate_data just be a function, not in any class?
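For comparison, the plain-function alternative the question hints at might look like this (a sketch; the DataGenerator stub is added here only so the example is self-contained):

```python
import unittest

class DataGenerator:
    # Stub generator, for illustration only
    def generate_data(self):
        return "data"

def check_generate_data(generator, expected_value):
    """The shared helper as a plain function: no base test class needed."""
    assert generator.generate_data() == expected_value

class DataGeneratorTests(unittest.TestCase):
    def test_generate_data(self):
        check_generate_data(DataGenerator(), "data")
```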
Basically, you can solve the problem any way you like. OOP is just a paradigm that's supposed to help you design things in a clearer and more maintainable way.
So the question you should ask yourself any time you want to use inheritance is: do the classes have the same role, and you just want to add functionality? Or do you just want to use the nice methods in the "base" class?
It's a matter of concept, and in your little example it could go either way because they share the same concept; but basically, you're just using the base class as a utility.
If you want the base class to actually be a base class, you should design it so every inherited class behaves similarly.
I do agree that conceptually it would make sense to have a unit-test base class for all the tests that work similarly.