Closed 3 years ago.
I would like to convert all instance fields of an object into properties (getter only) in order to make them read-only. The fields might be defined by a subclass.
How can I achieve this?
class SomeClass(object):
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar
        convert_all_instance_fields_into_properties(self)  # implementation?
It is quite easy to achieve read-only fields using Python built-ins:
class X:
    def __init__(self, val):
        self.foo = val

    def __setattr__(self, key, value):
        if not hasattr(self, key):  # only for the first set
            super(X, self).__setattr__(key, value)
        else:
            raise ValueError
def main():
    x = X('bar')
    y = X('baz')
    assert x.foo == 'bar'
    assert y.foo == 'baz'
    # raises ValueError
    x.foo = 'Raise an error!'
If you want to specify which fields are read-only:
class X:
    readonly = ['foo']

    def __init__(self, val):
        self.foo = val

    def __setattr__(self, key, value):
        # allow the set if the field is not read-only, or on its first assignment
        if key not in self.readonly or not hasattr(self, key):
            super(X, self).__setattr__(key, value)
        else:
            raise ValueError
There is no such thing as "private" in Python. If you want to and try hard enough, you can extract anything.
There are two levels of semi-private:
a single-underscore prefix for internal use — normally accessible from outside, but most IDEs will underline it and warn about a discouraged practice
a double-underscore prefix for pseudo-private — the name is mangled, so it is still accessible if someone really wants it, but it is ugly
Most of the time you only want one underscore — people will access the attribute only when they think they really need it.
In the code I work with, I have seen double underscores used only as a way of encapsulating lazily loaded properties (the single-underscore name was a "getter" that checked whether the value had already been loaded).
Remember that even with a "getter" you will be returning the object itself unless you return a copy. Rethink your design; think in Python rather than in terms of "private" and "getters". ;)
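A small runnable sketch of the two underscore levels described above (the class and attribute names are illustrative):

```python
class Account:
    def __init__(self):
        self._balance = 0      # one underscore: internal use, still plainly accessible
        self.__token = "abc"   # two underscores: name-mangled to _Account__token

a = Account()
print(a._balance)         # accessible, just discouraged by convention
print(a._Account__token)  # pseudo-private is still reachable, but ugly
# note: a.__token from outside the class raises AttributeError,
# because mangling only happens inside the class body
```

The mangled name `_Account__token` is exactly what "accessible if someone really wants it but ugly" means in practice.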
Edit:
Seems like I misunderstood the question.
Methods marked with the @property decorator can be accessed without parentheses (spam.eggs instead of spam.eggs()) and are "read-only" (you can do bacon = spam.eggs but not spam.eggs = bacon).
BUT the rest of my comment still stands:
everything is accessible if one is determined enough
use underscores to ask people nicely not to access the value
if your getter property returns a mutable object, its contents can still be changed (only the reference stays the same): eggs.list = [] will not work, but eggs.list.append("spam") will!
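A minimal sketch of that behaviour, borrowing the spam/eggs names from above:

```python
class Spam:
    def __init__(self):
        self._eggs = ["spam"]

    @property
    def eggs(self):
        # no setter defined, so rebinding spam.eggs raises AttributeError
        return self._eggs

spam = Spam()
bacon = spam.eggs           # works: no parentheses needed
spam.eggs.append("more")    # the returned list is still mutable!
try:
    spam.eggs = []          # rebinding is blocked
except AttributeError:
    pass
```

Note that only the rebinding is blocked; the mutable object behind the property is returned as-is unless you return a copy.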
Closed 1 year ago.
I want to write a function that sets an instance variable to None, but it feels very wrong to write a separate function for each variable.
I want to do something like:
class MyClass():
    def __init__(self):
        self.duck = None
        self.dog = None

    def makeduck(self):
        self.duck = "Duck"

    def makedog(self):
        self.dog = "Dog"

    def deleteVar(self, var):
        self.var = None  # or del self.var
I want to do this because the variables tend to be very large and I don't want to overload my RAM, so I have to delete some unneeded variables depending on the context.
It is indeed possible.
Although Python keeps a clear separation between program structure (variables) and data (text inside strings), it does allow one to retrieve and operate on variables or attributes given their name as a string.
In this case you want the setattr and delattr built-ins: both take an instance and an attribute name given as textual data (a string), and behave like the corresponding assignment (self.var = xxx) and deletion (del self.var) statements — but, as you intend, with the attribute name held in a variable.
def deleteVar(self, var):
    # setattr(self, var, None)  # <- sets the attribute to None instead
    delattr(self, var)  # <- deletes the attribute
(For completeness: there is also the getattr call, which retrieves an attribute in the same way.)
That said: the memory usage of hard-coded variables, even if you have tens of them, will likely be negligible in a Python process.
Having tens of different "deleter" methods, however, would indeed be clumsy, and there are situations where your code is more elegant if you pass attribute names as data, as you intend.
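Putting the answer together with the question's own class, a minimal runnable sketch:

```python
class MyClass:
    def __init__(self):
        self.duck = None
        self.dog = None

    def makeduck(self):
        self.duck = "Duck"

    def deleteVar(self, var):
        # setattr(self, var, None)  # alternative: keep the name, drop the data
        delattr(self, var)          # removes the attribute entirely

obj = MyClass()
obj.makeduck()
obj.deleteVar("duck")
print(hasattr(obj, "duck"))  # False
```

After delattr, any later access to obj.duck raises AttributeError until something assigns it again; setattr with None keeps the name bound but releases the large object for garbage collection.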
Closed 5 years ago.
In this answer it is shown how to automatically set the args/kwargs passed to a class's __init__ as attributes.
For example, one could do:
class Employee(object):
    def __init__(self, *initial_data, **kwargs):
        # EDIT
        self._allowed_attrs = ['name', 'surname', 'salary', 'address', 'phone', 'mail']
        for dictionary in initial_data:
            for key in dictionary:
                if key in self._allowed_attrs:  # EDIT
                    setattr(self, key, dictionary[key])
        for key in kwargs:
            if key in self._allowed_attrs:  # EDIT
                setattr(self, key, kwargs[key])
In my case I already know in advance the arguments I am going to pass, so I am considering this solution only to get less repetitive, shorter code.
Is this considered good practice?
What are the pros/cons of this solution compared to initialising each attribute manually? Is there any other preferable approach?
EDIT: As the first comments/answers (rightly) focus on sanitizing or whitelisting arguments, I think this can be solved quite easily within this framework.
Question: ... less repetitive and shorter code
Your example code needs 9 lines and 28 words.
class Employee(object):
    def __init__(self, name, surname, salary, address, phone, mail):
        self.name = name
        self.surname = surname
        self.salary = salary
        self.address = address
        self.phone = phone
        self.mail = mail
This plain version needs 6 lines and 19 words.
In summary, your example needs more, not "shorter", code.
I can't see any "repetitive ... code" in the plain version either; every assignment is done exactly once.
Compare these two lines, which do the same job of controlling which arguments can be passed:
self._allowed_attrs = ['name', 'surname', 'salary', 'address', 'phone', 'mail']
with
def __init__(self, name, surname, salary, address, phone, mail):
The second one needs less effort and does it all in one place.
No if key in self._allowed_attrs: check is necessary, as Python does it for you.
In a real project, I would use something like this:
class Employee(object):
    def __init__(self, person, salary=None):
        self.id = unique_id()
        self.person = person
        self.salary = salary
All person-related data is collected in the person object.
Conclusion:
For your given Employee example class I would never use (*args, **kwargs).
(*args, **kwargs) arguments are only useful when you cannot predict which arguments will be passed.
Prior discussion: Python decorator to automatically define __init__ variables, Python: Is it a good idea to dynamically create variables?
Pros:
Reduces code duplication
Cons:
Less readable
Makes it harder to locate member references (thus may also upset static analyzers like pylint)
The amount of boilerplate code needed to handle all scenarios (e.g. a positional argument passed as a keyword argument; checking that all required arguments are present) makes the code reduction nonexistent and duplicates the interpreter's stock argument-handling
Requiring a lot of arguments in a constructor that are to be simply set as members is an anti-pattern in and of itself
It makes it impossible to figure out what kind of arguments the class expects.
If someone (or you in a few months' time) wants to create an Employee in their code, they look at the arguments of the constructor to see what they should pass (maybe by hand, or maybe an IDE automatically shows them). Your code achieves little except hiding this.
It is perfectly OK to use the language's introspection capabilities to reduce the amount of repetition and typing you have to do.
Of course, it is better if you take care to handle the attributes correctly, and even sanitize their contents — so the best approach is to have either a decorator for the __init__ method, or its equivalent in the base-class __init__, that does everything needed: check that the passed parameters are valid for the specific class, then use setattr to set their values on the instance.
I think the less magic way is to have a convention in your class hierarchy to declare the wanted parameters as class attributes.
In that way, you could use those class attributes to document the expected parameters and their type, and keep the __init__ signature as *args, **kwargs and have your base-class init handle them all.
SQLAlchemy Base models do that - you specify the class attributes as special "instrumented attributes" and they are automatically assigned when called in __init__.
A simpler way would be:
_sentinel = object()

class Base(object):
    def __init__(self, *args, **kwargs):
        for attr_name, class_attr in self.__class__.__dict__.items():
            if isinstance(class_attr, type) and kwargs.get(attr_name, _sentinel) is not _sentinel:
                attr_value = kwargs[attr_name]
                if not isinstance(attr_value, class_attr):
                    raise TypeError("Parameter {} is expected to be of type {}".format(attr_name, class_attr))
                setattr(self, attr_name, attr_value)
class Person(Base):
    name = str
    age = int
    phonenumber = Phone
    ...
This requires all parameters to be passed as named parameters — but all of them are automatically assigned to instance attributes; it works, is documentable, and is safe. If you want to do even better, define a fancy descriptor class to use as the class-attribute value.
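For completeness, and as an aside not in the original answers: modern Python (3.7+) ships the standard-library dataclasses module, which solves the same repetitive-__init__ problem declaratively, with type annotations documenting the expected parameters:

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    surname: str
    salary: float
    address: str
    phone: str
    mail: str

e = Employee("John", "Doe", 50000.0, "Main St 1", "555-0100", "john@example.com")
print(e.name)  # John
```

Note that, like annotations in general, dataclasses do not enforce the types at runtime; they generate __init__, __repr__, and __eq__ for you while keeping the signature fully explicit.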
Closed 6 years ago.
I have a class in which I collect data into a list with the help of a wide range of methods (say 23). Every method uses the list and may modify it. My question is: how can I call all the methods of the class, from within the class, in a generally accepted way?
class Example(object):
    def __init__(self):
        self.lst = []

    def multiply(self):
        for i in xrange(10):
            self.lst.append(i**2)

    def get_list(self):
        return self.lst

# Calling:
ex = Example()
ex.multiply()
print ex.get_list()

# What I want is to call the multiply method inside the class and just do this:
print ex.get_list()
The Example class illustrates my idea. I know that it is possible to solve my problem by iterating over Example.__dict__.values(), by calling all the methods from one method of the class, or with the inspect module, but I am not sure there isn't a more Pythonic way.
UPDATE:
All I want is to collect configuration data for the yapf formatter.
The main problem is how to call all the methods in the class — I don't want to implement all the configuration analysis of the input file in one method. OOP and patterns are my guide.
UPDATE 2:
Reply to Jared Goguen: I want to create a class that collects data into a dictionary and sends it to the CreateStyleFromConfig method.
And when that is done, I just want to call the get_style method on the class without calling all the methods inside it:
config = ConfData()  # class which collects all configurations from a file
config.get_style()
The ConfData class contains methods named after the specific data they handle. For example:
def align_closing_bracket_with_visual_indent(self):
    # Do some work...
    pass
So, I guess there are two potential solutions to this, but I don't really like either of them. I think you might be approaching the problem the wrong way.
You could use an external decorator track and a class variable tracker to keep track of which methods you want to call.
def track(tracker):
    def wrapper(func):
        tracker.append(func)
        return func
    return wrapper

class Example:
    tracker = []

    @track(tracker)
    def method_a(self):
        return [('key_a1', 'val_a1'), ('key_a2', 'val_a2')]

    @track(tracker)
    def method_b(self):
        return [('key_b1', 'val_b1'), ('key_b2', 'val_b2')]

    def collect_data(self):
        return dict(tup for method in self.tracker for tup in method(self))

print Example().collect_data()
# {'key_b1': 'val_b1', 'key_b2': 'val_b2', 'key_a1': 'val_a1', 'key_a2': 'val_a2'}
With this approach, you can have utility methods in your class that you don't want to call.
Another approach would be to inspect the directory of your class and logically determine which methods you want to call.
from inspect import ismethod

class Example:
    def method_a(self):
        return [('key_a1', 'val_a1'), ('key_a2', 'val_a2')]

    def method_b(self):
        return [('key_b1', 'val_b1'), ('key_b2', 'val_b2')]

    def collect_data(self):
        data = {}
        for attr in dir(self):
            if not attr.startswith('_') and attr != 'collect_data':
                possible_method = getattr(self, attr)
                if ismethod(possible_method):
                    data.update(possible_method())
        return data
This approach is similar to the one mentioned in your post (i.e. iterating over __dict__) and is weak because any instance method that you don't want called must start with '_'. You can adapt this approach to use some other naming convention, but it might not be readable to anyone else.
Either approach could move the collect_data portion into a super-class, allowing you to create minimal sub-classes. This doesn't really help much with the first approach, though.
class MethodTracker(object):
    def collect_data(self):
        return dict(tup for method in self.tracker for tup in method(self))

class Example(MethodTracker):
    tracker = []

    @track(tracker)
    def method_a(self):
        return [('key_a1', 'val_a1'), ('key_a2', 'val_a2')]

    @track(tracker)
    def method_b(self):
        return [('key_b1', 'val_b1'), ('key_b2', 'val_b2')]
With the second approach, the resulting sub-class is minimal. Also, you can do a little reflection to allow the super-class to have utility methods that don't start with '_'.
from inspect import ismethod

class MethodTracker(object):
    def collect_data(self):
        data = {}
        for attr in dir(self):
            if not attr.startswith('_') and not hasattr(MethodTracker, attr):
                possible_method = getattr(self, attr)
                if ismethod(possible_method):
                    data.update(possible_method())
        return data

    def decoy_method(self):
        return 'This is not added to data.'

class Example(MethodTracker):
    def method_a(self):
        return [('key_a1', 'val_a1'), ('key_a2', 'val_a2')]

    def method_b(self):
        return [('key_b1', 'val_b1'), ('key_b2', 'val_b2')]
Closed 8 years ago.
I have the following piece of Python:
class CollectorGUI(Gtk.Window):
    def __init__(self, prefill, flags, data_to_return):
        """prefill should be an instance of the Prefill class"""
        self.prefill = prefill
        self.flags = flags
        self.data_to_return = data_to_return
        ......
My questions are: (1) how can I get rid of the documentation string? I want my code to be self-documenting; (2) how can I get rid of these three lines:
self.prefill = prefill
self.flags = flags
self.data_to_return = data_to_return
Is there an abbreviation?
The Prefill requirement can be documented in the method signature using function annotations:
class CollectorGUI(Gtk.Window):
    def __init__(self, prefill: Prefill, flags, data_to_return):
Annotations are discoverable at runtime, just like the docstring is. Annotations are not enforced (they are meant as a more generic stepping stone for different use cases) but are immediately obvious in the signature.
You can then optionally enforce it explicitly by asserting the type:
assert isinstance(prefill, Prefill), 'prefill must be an instance of Prefill'
As for auto-setting your attributes from the function arguments, that's answered elsewhere: What is the best way to do automatic attribute assignment in Python, and is it a good idea?
While you could use inspect to automatically create attributes from the arguments in the method's signature, it would obfuscate the perfectly readable code you have now.
One look at the constructor tells me that the class at least has the attributes prefill, flags, and data_to_return.
Making explicit code implicit is often not a good idea.
But if you insist:
import inspect

class C(object):
    def __init__(self, a, b, c):
        spec = inspect.getargspec(getattr(C, "__init__"))
        for arg in spec.args[1:]:
            setattr(self, arg, locals()[arg])

c = C(1, 2, 3)
print c.a
print c.b
print c.c
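Note, for modern readers, that inspect.getargspec was deprecated and eventually removed in Python 3.11; an equivalent Python 3 sketch (my adaptation, not part of the original answer) uses inspect.signature:

```python
import inspect

class C(object):
    def __init__(self, a, b, c):
        args = locals()  # snapshot of the argument values
        for name in inspect.signature(C.__init__).parameters:
            if name != "self":
                setattr(self, name, args[name])

c = C(1, 2, 3)
print(c.a, c.b, c.c)  # 1 2 3
```

The same caveat from the answer applies: this obfuscates otherwise perfectly readable assignments.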
In Python (I'm talking about 2 here, but would be interested to know about 3 too), is there a way to declare in advance a list of all the instance variables (member fields) you want available, i.e. make it an error to use one you haven't declared somewhere?
Something like
class MyClass(object):
    var somefield  # (hypothetical declaration syntax)

    def __init__(self):
        self.somefield = 4
        self.banana = 25  # error!
A bit like you do in Java, C++, PHP, etc
Edit:
The reason I wanted this kind of thing was to spot early on using variables that hadn't been setup initially. It seems that a linter will actually pick these errors up without any extra plumbing so perhaps my question is moot...
Why yes, you can.
class MyClass(object):
    __slots__ = ['somefield']

    def __init__(self):
        self.somefield = 4
        self.banana = 25  # error!
But mind the caveats.
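A quick runnable demonstration of the behaviour (the class name is illustrative, and the failing assignment is moved out of __init__ so construction succeeds):

```python
class Slotted(object):
    __slots__ = ('somefield',)  # only this attribute may ever be set

    def __init__(self):
        self.somefield = 4

obj = Slotted()
print(obj.somefield)  # 4
try:
    obj.banana = 25   # not in __slots__
except AttributeError:
    print("cannot add new attributes")
```

The main caveats: instances lose their __dict__ (so no dynamic attributes at all, and slightly different pickling behaviour), and every class in the inheritance chain must define __slots__ for the restriction to hold.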
You can use the answer posted above, but for a more "Pythonic" approach, try the method listed at (link to code.activestate.com).
For future reference, and until I can figure out how to link to the website, here's the code:
def frozen(set):
    """Raise an error when trying to set an undeclared name, or when calling
    from a method other than Frozen.__init__ or the __init__ method of
    a class derived from Frozen"""
    def set_attr(self, name, value):
        import sys
        if hasattr(self, name):  # if the attribute already exists, simply set it
            set(self, name, value)
            return
        elif sys._getframe(1).f_code.co_name == '__init__':  # allow __setattr__ calls in __init__ of proper object types
            for k, v in sys._getframe(1).f_locals.items():
                if k == "self" and isinstance(v, self.__class__):
                    set(self, name, value)
                    return
        raise AttributeError("You cannot add attributes to %s" % self)
    return set_attr
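The recipe is typically wired up by replacing __setattr__ on a base class. The Frozen base class below is my reconstruction (the original link is elided), and the frozen function is condensed from the code above so the example is self-contained:

```python
import sys

def frozen(set):
    # Condensed from the recipe above: allow sets of existing attributes,
    # and new attributes only from within an __init__ running on self.
    def set_attr(self, name, value):
        if hasattr(self, name):
            set(self, name, value)
            return
        if sys._getframe(1).f_code.co_name == '__init__':
            for k, v in sys._getframe(1).f_locals.items():
                if k == "self" and isinstance(v, self.__class__):
                    set(self, name, value)
                    return
        raise AttributeError("You cannot add attributes to %s" % self)
    return set_attr

class Frozen(object):
    __setattr__ = frozen(object.__setattr__)

class Point(Frozen):
    def __init__(self, x):
        self.x = x   # allowed: we are inside __init__ on self

p = Point(1)
p.x = 2              # allowed: the attribute already exists
try:
    p.y = 3          # blocked: new attribute outside __init__
except AttributeError:
    pass
```

Relying on sys._getframe makes this a CPython-specific trick; for most real code the __slots__ answer above, or simply a linter, is the more robust choice.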