I use validators to filter user input. Normally I have my validators working like this:
my_reg_ex = QRegExp(r"[1-9]\d{0,5}")
my_validator = QRegExpValidator(my_reg_ex, self.ui.lineEdit_test)
self.ui.lineEdit_test.setValidator(my_validator)
I wrote this after looking at some examples online. But I just noticed that if I remove the last part on the second line:
, self.ui.lineEdit_test
The code works exactly the same. I have a couple of these validators all around. I was wondering if it's okay to just use it without the part I mentioned. For example:
my_reg_ex = QRegExp(r"[1-9]\d{0,5}")
my_validator = QRegExpValidator(my_reg_ex)
self.ui.lineEdit_test.setValidator(my_validator)
Is there any difference between these? If there is please explain and tell me which one is the better way to go.
The QRegExpValidator class inherits QObject, so its constructor has an argument that takes a parent QObject. One general reason for setting a parent is to ensure the object doesn't get garbage-collected when it goes out of scope. This can easily happen if you don't keep any other reference to the object, and it is a very common cause of the problems seen in newbie questions on SO.
However, this is actually not a problem in your specific example. This is because the line-edit takes ownership of the validator when you pass it to setValidator(), so you don't need to explicitly keep a reference to it yourself. As far as the code in your question is concerned, it really makes no difference whether you set a parent or not.
Having said that, there is at least one scenario where not setting a parent may be advantageous. This could arise if you needed to regularly reset the validator at runtime. If you gave each new validator a parent, the old one would not be deleted when setValidator() is called (because the parent would still hold a reference to it). So for the purposes of simplifying object cleanup, not setting a parent in this situation might be preferable.
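For example, a minimal sketch of that reset scenario (PyQt5-style imports; the reset_validator helper is just illustrative, and lineEdit_test is the widget from the question):

from PyQt5.QtCore import QRegExp
from PyQt5.QtGui import QRegExpValidator

def reset_validator(line_edit, pattern):
    # No parent is set, so once setValidator() replaces the old
    # validator, nothing else keeps it alive and it can be cleaned up.
    line_edit.setValidator(QRegExpValidator(QRegExp(pattern)))

# e.g. reset_validator(self.ui.lineEdit_test, r"[1-9]\d{0,5}")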
I just used a metaclass for the first time.
The purpose was to get control of the help() output for a class or instance.
Specifying attributes in the __dir__() function of the metaclass allowed me to control the help content.
However, I observed that for intellisense/code-completion within Jupyter, it is the __dir__() method of the class itself that matters.
Knowing this fact is enough for me to work with, but I would like to understand the reason for it.
Thanks for any clarification.
As I stated in the other answer on this topic, Python does not define how the contents of help() are picked. Using the metaclass __dir__ is something that will work due to the nature of the language, but third-party modules certainly won't expect it to return custom results that differ from the class's dir(), as it is not a common thing.
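For illustration, a minimal demo of which __dir__ dir() itself consults (help()'s use of dir() is an implementation detail, not a language guarantee; the class names here are made up):

class Meta(type):
    def __dir__(cls):
        # Consulted by dir(Thing), since Thing is an instance of Meta.
        return ["curated_method"]

class Thing(metaclass=Meta):
    def curated_method(self): ...
    def internal_detail(self): ...

print(dir(Thing))    # ['curated_method'] -- the metaclass __dir__
print(dir(Thing()))  # includes internal_detail -- the class's own __dir__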
So, what gives is that you are trying to customize features that are not customizable. You will either have to create a new class that proxies just the needed user methods, and can therefore hide from help() and dir() what you don't want to show, or create an entire application that does exactly what you need, instead of relying on Jupyter notebooks to show your specific instructions and only those instructions.
Overall, if you are using a metaclass for other reasons, fine; but if you are using a metaclass just to try to override help() output, I'd say that is already an incorrect use.
Another option is to stick a manually callable "help2" method in your class that prints only your desired output. Then you document in the main class docstring "please use classname.help2() to learn the usage", and otherwise leave __dir__ alone.
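A rough sketch of that last suggestion (class and method names are just illustrative):

class Calculator:
    """Please use Calculator.help2() to learn the usage."""

    @classmethod
    def help2(cls):
        # Print only the curated documentation; help() and dir() stay untouched.
        print("Calculator usage:")
        print("  Calculator().add(a, b)  -> sum of a and b")

    def add(self, a, b):
        return a + b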
I am new to OOP and am writing a small tool in Python that checks Bitcoin prices. It uses a JSON load from the web in a Bitcoin() class, monitors the prices with Monitor(), notifies the user when thresholds are met with Notify(), and for now uses a console interface, Interface(), to do so.
I have created a Bitcoin() class that can read the prices and volumes from the JSON load. The __init__ method connects to the web using a socket. Since every instance of this class would open a new socket, I only need/want one instance of this class running.
Is a class still the best way to approach this?
What is the best way to get other classes and instances to interact with my Bitcoin() instance?
Should I global a Bitcoin() instance? Pass the instance as an argument to every class that needs it?
The first thing which concerns me is the SRP violation, your Bitcoin class probably shouldn't be responsible for:
opening socket,
parsing results,
rendering output.
I don't know the details, but from my point of view you should split that functionality into smaller classes/functions (or plain modules), one of which would be responsible for retrieving data from the web. Please also keep in mind that global state is evil (singletons, in some contexts, can be described as global state).
Another thing which is a smell from my point of view is opening a socket inside the constructor. This isn't testable; of course you could mock/stub the socket, but it's better when a class receives all its dependencies as constructor parameters. Doing it that way also helps you notice classes with too wide a responsibility (if your constructor requires more than 3-4 parameters, it definitely could be simplified).
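A rough sketch of what that constructor injection could look like (the class name, the fetch() interface, and the JSON shape are all illustrative assumptions, not from the original code):

import json

class PriceFeed:
    def __init__(self, connection):
        # The connection is created elsewhere and passed in, so a test
        # can hand in a fake object exposing the same fetch() interface.
        self._connection = connection

    def latest_price(self):
        raw = self._connection.fetch()    # assumed interface
        return json.loads(raw)["price"]   # assumed JSON shape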
http://www.youtube.com/watch?v=o9pEzgHorH0
I'm not sure how relevant this video is for your project (there's no code to actually read), but maybe you'll pick up the answer to your question. At least you'll learn something new, and that's what we're here for.
If I were you my code would be something like:
(a class for every set of jobs, which is not what you are doing)
class Interface:
    ''' Handle UI '''
    ...

class Connect:
    ''' Handle web interface '''
    ...

class Bitcoin:
    ''' Handle the calculations '''
    ...

class Notify:
    ''' Notifier '''
    ...
In short, split your classes into smaller simpler classes.
Now for your question:
Yes, because you have a complex-ish problem at hand and you're using Python, so it's definitely easier to create an OOP version than a non-OOP one. Unless you have a good reason not to, stick to OOP.
In your case, it might as well be passing the instance as an argument.
This is a good idea. It eliminates the problems caused by scopes if you don't have a very good understanding of them.
But remember that you pass the reference, not the value, so manipulating the instance can and will affect the other classes the instance is passed to.
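A quick illustration of that reference behavior (names are illustrative):

class Config:
    def __init__(self, threshold):
        self.threshold = threshold

class Monitor:
    def __init__(self, config):
        self.config = config   # stores a reference, not a copy

cfg = Config(threshold=10)
monitor = Monitor(cfg)
cfg.threshold = 99
print(monitor.config.threshold)  # 99 -- both names refer to the same object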
Note: opening a socket in the constructor of the class is not a good idea; it might be better to open it in a dedicated method.
The answer is maybe; it depends on your whole architecture.
You should look at the singleton pattern, because your description yells Singleton all over.
http://de.wikipedia.org/wiki/Singleton_%28Entwurfsmuster%29
If you don't find any good reason against creating a class in your given architecture, then just go for it.
OOP is a tool, not a goal, you can make a decision whether to use it or not. If you use a Python module, you can achieve encapsulation without ever writing "class".
Sure, you can use Python classes for this purpose. You can use module-level instances as well (no global keyword or explicit argument-passing needed). It is a matter of taste, IMHO.
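For instance, a small sketch of the module-level-instance idea (module and attribute names are made up):

# feed.py
class _BitcoinFeed:
    def __init__(self):
        self.last_price = None

feed = _BitcoinFeed()   # created once, when the module is first imported

# any other module:
#   from feed import feed
#   feed.last_price = 123.45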
Basically you're asking about Singleton pattern python-specific implementation, it has been answered here:
Python and the Singleton Pattern
Description of pattern itself can be found here: http://en.wikipedia.org/wiki/Singleton_pattern
I am about to refactor the code of a Python project built on top of Twisted. So far I have been using a simple settings.py module to store constants and dictionaries, like:
# settings.py
MY_CONSTANT = 'whatever'
A_SLIGHTLY_COMPLEX_CONF = {'param_a': 'a', 'param_b': 'b'}
A great many modules import settings to do their stuff.
The reason why I want to refactor the project is that I need to change/add configuration parameters on the fly. The approach I am about to take is to gather all configuration in a singleton and to access its instance whenever I need to:
from settings import MyBloatedConfig

def first_interesting_function():
    cfg = MyBloatedConfig.get_instance()
    a_much_needed_param = cfg["a_respectable_key"]
    # do stuff

# several thousand functions later

def gazillionth_function_in_module():
    tired_cfg = MyBloatedConfig.get_instance()
    a_frustrated_value = tired_cfg["another_respectable_key"]
    # do other stuff
This approach works but feels unpythonic and bloated. An alternative would be to externalize the cfg object in the module, like this:
CONFIG = MyBloatedConfig.get_instance()

def a_suspiciously_slimmer_function():
    suspicious_value = CONFIG["a_shady_parameter_key"]
Unfortunately, this does not work if I am changing the MyBloatedConfig instance entries in another module. Since I am using the reactor pattern, storing stuff in a thread-local is out of the question, as is using a queue.
For completeness, here is the implementation I am using for the singleton pattern:
from functools import wraps

instances = {}

def singleton(cls):
    """ Use class as singleton. """
    @wraps(cls)
    def get_instance(*args, **kwargs):
        if cls not in instances:
            instances[cls] = cls(*args, **kwargs)
        return instances[cls]
    return get_instance

@singleton
class MyBloatedConfig(dict):
    ...
Is there some other more pythonic way to broadcast configuration changes across different modules?
The big, global (often singleton) config object is an anti-pattern.
Whether you have settings.py, a singleton in the style of MyBloatedConfig.get_instance(), or any of the other approaches you've outlined here, you're basically using the same anti-pattern. The exact spelling doesn't matter; these are all just ways to have a true global (as distinct from a Python module-level global) shared by all of the code in your entire project.
This is an anti-pattern for a number of reasons:
It makes your code difficult to unit test. Any code that changes its behavior based on this global is going to require some kind of hacking - often monkey-patching - in order to let you unit test its behavior under different configurations. Compare this to code which is instead written to accept arguments (as in, function arguments) and alters its behavior based on the values passed to it.
It makes your code less re-usable. Since the configuration is global, you'll have to jump through hoops if you ever want to use any of the code that relies on that configuration object under two different configurations. Your singleton can only represent one configuration. So instead you'll have to swap global state back and forth to get the different behavior you want.
It makes your code harder to understand. If you look at a piece of code that uses the global configuration and you want to know how it works, you'll have to go look at the configuration. Much worse than this, though, is if you want to change your configuration you'll have to look through your entire codebase to find any code that this might affect. This leads to the configuration growing over time, as you add new items to it and only infrequently remove or modify old ones, for fear of breaking something (or for lack of time to properly track down all users of the old item).
The above problems should hint to you what the solution is. If you have a function that needs to know the value of some constant, make it accept that value as an argument. If you have a function that needs a lot of values, then create a class that can wrap up those values in a convenient container and pass an instance of that class to the function.
The part of this solution that often bothers people is the part where they don't want to spend the time typing out all of this argument passing. Whereas before you had functions that might have taken one or two (or even zero) arguments, now you'll have functions that might need to take three or four arguments. And if you're converting an application written in the style of settings.py, then you may find that some of your functions used half a dozen or more items from your global configuration, and these functions suddenly have a really long signature.
I won't dispute that this is a potential issue, but it should be looked upon mostly as an issue with the structure and organization of the existing code. The functions that end up with grossly long signatures depended on all of that data before; the fact was just obscured from you. And as with most programming patterns that hide aspects of your program from you, this is a bad thing. Once you are passing all of these values around explicitly, you'll see where your abstractions need work. Maybe that 10-parameter function is doing too much and would work better as three different functions. Or maybe you'll notice that half of those parameters are actually related and always belong together as part of a container object. Perhaps you can even put some logic related to manipulating those parameters onto that container object.
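A minimal sketch of the argument-passing style being advocated here (the names are illustrative):

import socket

class DatabaseConfig:
    """Groups related settings into one explicit, passable object."""
    def __init__(self, host, port):
        self.host = host
        self.port = port

def connect(cfg):
    # Behavior depends only on the argument, so a unit test can pass
    # in any DatabaseConfig it likes -- no global state to monkey-patch.
    return socket.create_connection((cfg.host, cfg.port))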
I'm refactoring some code and found this (simplified of course, but general idea):
class Variable:
    def __init__(self):
        self.__constraints = []

    def addConstraint(self, c):
        self.__constraints.append(c)

class Constraint:
    def __init__(self, variables):
        for v in variables:
            v.addConstraint(self)
The fact that the constructor of Constraint modifies other object's states instead of its own smells a little funky to me. What do other people think - is this OK, or is it a prime candidate for refactoring?
Edit: My concern is not the parent/child relationship, but that it is linked up inside the constructor rather than in a separate method.
I see it as a self-registration pattern. "Hello, I'm new here, please allow me to join."
I might prefer a differently-named method so that the purpose is clearer, but I do actually quite like the approach.
I entirely concur with @djna's answer that the specific use case is perfectly legit -- here, it's an example of an object needing to register itself with a specified set of registries "at birth".
A very sharp and extremely common subcase of that would be an observer object that exists strictly for the purpose of observing a given observable -- perfectly fine to pass the observable to the observer's initializer, and exactly the right way to ensure the class invariant "instances of this observer class are at all times connected to exactly one observable", which would not be established "at birth" if the registration was carried out only after the completion of initialization.
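A sketch of that observer case (illustrative names):

class Observable:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for obs in self._observers:
            obs.update(event)

class Observer:
    def __init__(self, observable):
        # The invariant "always attached to exactly one observable"
        # is established right here, at birth.
        observable.attach(self)

    def update(self, event):
        print("observed:", event)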
Other similar cases include, for example, a widget object that must at all times exist within a container window: it would be somewhat weird to implement it otherwise than by having the widget take the parent as an initializer argument and tell the parent "hi, I'm your new child!".
At least in those 1-many cases, you could imagine forcing the parent or observable to have a method that both creates and registers the new object. In a many-many case like this one, the somewhat inside-out nature of that approach gets revealed -- since the constraint must be registered with multiple variables, it would be "against the grain" to ask any specific one of them to create the constraint. The code you supply, on the other hand, is perfectly natural.
Only for cases that cannot reasonably be framed as the new object "registering itself" would I feel some doubt (there are a few other legit ones, such as objects creating and registering other auxiliary ones at birth, but they're nowhere near as common).
I agree with you. That's backwards. There may be some good reason for it, but it's unclear programming and it's likely to bite someone in the foot sooner or later.
This is common usage when you have two objects that are closely related (i.e. where one of them alone doesn't make sense). The most common case is parent-child relations: when you add a child to a parent (i.e. parent.children.append(child)), you often also update the child.parent pointer.
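For example (an illustrative sketch, not the questioner's code):

class Node:
    def __init__(self):
        self.parent = None
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        child.parent = self   # keep both sides of the relation in sync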
I personally am not necessarily opposed to this, but...
I would choose one usage pattern, and stick with it. In your case, since Variable already has a clean addConstraint method, my preference would be to use it.
Otherwise, you'll need to add good checking to prevent the user from constructing a Constraint and then also adding it to the Variable (thereby adding it twice).
That being said, with something like a Constraint, I would probably not do this. A Constraint seems like a conceptually independent entity from a Variable, and I see no logical reason the same constraint couldn't be added to two separate variables. I would just make it so you construct your constraint and then add it manually, specifically for this reason.
I have a block of code that basically initializes several classes, but they are placed in a sequential order, as later ones reference earlier ones.
For some reason the last one initializes before the first one...it seems to me there is some sort of threading going on. What I need to know is how can I stop it from doing this?
Is there some way to make a class init do something similar to sending a return value?
Or maybe I could use the class in an if statement of some sort to check if the class has already been initialized?
I'm a bit new to Python and am migrating from C, so I'm still getting used to the little differences like naming conventions.
Python has a global interpreter lock, and more to the point it doesn't start threads behind your back, so unless you explicitly create threads, everything runs in a single thread, in sequence.
My guess is that some side effect initializes the last class from a different place than you expect. Throw an exception in __init__ of that last class to see where it gets called.
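A lightweight variant of that trick, using traceback instead of an exception (LastClass stands in for whatever class is misbehaving):

import traceback

class LastClass:
    def __init__(self):
        traceback.print_stack()   # prints exactly where __init__ was invoked
        # ... the real initialization ...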
Spaces vs. Tabs issue...ugh. >.>
Well, at least it works now. I admit that I kind of miss the braces from C instead of the forced indentation. It's quite handy as a prototyping language, though. Maybe I'll grow to love it more when I get a better grasp of it.