I'm trying to understand inheritance in Python. I have four different kinds of logs that I want to process: CPU, RAM, net and disk usage.
I decided to implement this with classes, as they're formally the same except for the log-file reading and the data type of the data. I have the following code (log is an instance of a custom logging class):
class LogFile():
    def __init__(self, log_file):
        self._log_file = log_file
        self.validate_log()

    def validate_log(self):
        try:
            with open(self._log_file) as dummy_log_file:
                pass
        except IOError as e:
            log.log_error(str(e[0]) + ' ' + e[1] + ' for log file ' + self._log_file)
class Data(LogFile):
    def __init__(self, log_file):
        LogFile.__init__(self, log_file)
        self._data = ''

    def get_data(self):
        return self._data

    def set_data(self, data):
        self._data = data

    def validate_data(self):
        if self._data == '':
            log.log_debug("Empty data list")
class DataCPU(Data):
    def read_log(self):
        self.validate_log()
        # ... read the log and fill LIST ...
        return LIST

class DataRAM(Data):
    def read_log(self):
        self.validate_log()
        # ... read the log and fill LIST ...
        return LIST
class DataNET(Data):
    ...

Now I want my DataNET class to be a Data object with some more attributes, in particular a dictionary for each of the interfaces. How can I override the __init__() method so that it does the same as Data.__init__() but also adds self.dict = {}, without copying the Data initializer? That is, without explicitly specifying that DataNET objects have a ._data attribute, but inheriting it from Data.
Just call the Data.__init__() method from DataNET.__init__(), then set self._data = {}:
class DataNET(Data):
    def __init__(self, logfile):
        Data.__init__(self, logfile)
        self._data = {}
Now whatever Data.__init__() does to self happens first, leaving your DataNET initializer to add new attributes or override attributes set by the parent initializer.
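As a quick check (a hypothetical log file name, assuming the path exists so validate_log() passes quietly):

net = DataNET('eth0.log')  # hypothetical file name
print(net.get_data())      # {} -- DataNET's override replaced the '' set by Data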
In Python 3 classes are already new-style, but if this is Python 2, I'd add object as a base class to LogFile() to make it new-style too:
class LogFile(object):
after which you can use super() to automatically look up the parent __init__ method to call; this has the advantage that in a more complex cooperative inheritance scheme the right methods are invoked in the right order:
class Data(LogFile):
    def __init__(self, log_file):
        super(Data, self).__init__(log_file)
        self._data = ''

class DataNET(Data):
    def __init__(self, logfile):
        super(DataNET, self).__init__(logfile)
        self._data = {}
super() provides you with bound methods, so you don't need to pass in self as an argument to __init__ in that case. In Python 3, you can omit the arguments to super() altogether:
class Data(LogFile):
    def __init__(self, log_file):
        super().__init__(log_file)
        self._data = ''

class DataNET(Data):
    def __init__(self, logfile):
        super().__init__(logfile)
        self._data = {}
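To make the cooperative-inheritance point concrete, here is a small standalone sketch (toy class names, not from the question) of a diamond hierarchy; with super(), each __init__ runs exactly once, in method resolution order:

class Base:
    def __init__(self):
        print('Base')

class Left(Base):
    def __init__(self):
        super().__init__()
        print('Left')

class Right(Base):
    def __init__(self):
        super().__init__()
        print('Right')

class Bottom(Left, Right):
    def __init__(self):
        super().__init__()
        print('Bottom')

Bottom()  # prints Base, Right, Left, Bottom -- every initializer ran once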
Use new-style classes (inherit from object): change the definition of LogFile to:

class LogFile(object):

and the __init__ method of Data to:
def __init__(self, log_file):
    super(Data, self).__init__(log_file)
    self._data = ''
Then you can define DataNET as:
class DataNET(Data):
    def __init__(self, log_file):
        super(DataNET, self).__init__(log_file)
        self.dict = {}
I have two classes with similar methods (read, write, insert), but because of the file types they produce, their methods must be implemented differently. Ideally, I would like the user to initialize a base type and have the appropriate subclass returned based on keywords passed during instantiation:
c = SomeThing()           # returns subclass of type 1 (the default)
c = SomeThing(flag=True)  # returns the other subclass
Initially I tried putting a return statement in the __init__ of the base class, but apparently __init__ must return None, so I'm not sure where to set this; should I just create a base-class factory method that returns the appropriate type?
class SomeThing:
    def __init__(self, flag=False):
        self.build(flag)

    def build(self, flag):
        if not flag:
            return SubclassOne()
        return SubclassTwo()
Or is there a better way to dynamically bind the appropriate methods based on keywords passed at instantiation? I wouldn't think this would be ideal:
class SomeThing:
    def __init__(self, flag=False):
        if not flag:
            setattr(self, 'write', self.write_one)
        else:
            setattr(self, 'write', self.write_two)

    def write_one(self):
        pass  # stuff

    def write_two(self):
        pass  # stuff
Because the user of the interface could simply access the other methods, and I wouldn't want to define each method outside the classes, because then the user could do from something import write_one, which would be inappropriate behavior.
I'd recommend you go with a factory of sorts:
class Base(object):
    pass  # ...

class SomeThing(Base):
    pass  # ...

class OtherThing(Base):
    pass  # ...

def create_thing(flag=False):
    if flag:
        return SomeThing()
    else:
        return OtherThing()
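A rough usage sketch; callers only deal with the factory function and the Base interface:

thing = create_thing()           # flag is False, so this is an OtherThing
other = create_thing(flag=True)  # and this is a SomeThing
assert isinstance(thing, Base) and isinstance(other, Base)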
Here is my code, base_file.py:
class modify_file(object):
    def modify_file_delete_obj(self):
        print "modify file here"

    def modify_file_add_attributes(self):
        print "modify file here"
        return ["data"]

class task_list(object):
    modify_file_instance = modify_file()  # problem part when accessing from project1.py

    def check_topology(self):
        data = self.modify_file_instance.modify_file_add_attributes()
        # use this data further in this method

    def check_particles(self):
        print "check for particles"
And here is my project1.py file:
import base_file as base_file

class project1(base_file.modify_file, base_file.task_list):
    # overriding method of modify_file class
    def modify_file_add_attributes(self):
        print "different attributes to modify"
        return ["different data"]
The idea is to run base_file.py for most projects and the project-specific versions when required. But when I run check_topology() from project1.py, the modify_file instance it uses is the one built in base_file.py, not the overriding class in project1.py, so the output is still ["data"], not ["different data"].
If you want to correctly use inheritance, define a base class Pet which provides a method to be overridden by a specific kind of pet.
class Pet(object):
    def talk(self):
        pass

class Cat(Pet):
    def talk(self):
        return "meow"

class Dog(Pet):
    def talk(self):
        return "woof"

pets = [Cat(), Dog(), Cat()]
for p in pets:
    print(p.talk())

# Outputs
# meow
# woof
# meow
(I leave the issue of what Pet.talk should do, if anything, as a topic for another question.)
You are mixing up object composition with multiple inheritance.
The task_list class uses object composition when it creates an internal instance of the modify_file class. But there is a problem here in that you are creating it as a class attribute, which means it will be shared by all instances of task_list. It should instead be an instance attribute that is created in an __init__ method:
class task_list(object):
    def __init__(self):
        super(task_list, self).__init__()
        self.modify_file_instance = modify_file()

    def check_topology(self):
        data = self.modify_file_instance.modify_file_add_attributes()
The project1 class uses multiple inheritance, when in fact it should use single inheritance. It is a kind of task_list, so it makes no sense for it to inherit from modify_file as well. Instead, it should create its own internal sub-class of modify_file - i.e. use object composition, just like the task_list class does:
# custom modify_file sub-class to override methods
class project1_modify_file(base_file.modify_file):
    def modify_file_add_attributes(self):
        print "different attributes to modify"
        return ["different data"]

class project1(base_file.task_list):
    def __init__(self):
        super(project1, self).__init__()
        self.modify_file_instance = project1_modify_file()
Now you have a consistent interface. So when project1.check_topology() is called, it will in turn call task_list.check_topology() (by inheritance), which then accesses self.modify_file_instance (by composition):
>>> p = project1()
>>> p.check_topology()
different attributes to modify
In your dog class you're re-constructing an instance of cat; this instance (and the cat type) does not know they are inherited elsewhere by pets.
So you can naturally try:
class cat(object):
    def meow(self):
        self.sound = "meow"
        return self.sound

class dog(object):
    def woof(self):
        return self.meow()

class pets(cat, dog):
    def meow(self):
        self.sound = "meow meow"
        return self.sound

print(pets().woof())
This still makes no sense with those actual names, but you said they are stand-in names, so that may be OK.
How do you access an instance inside one object and pass it to another 'main' object? I'm working with a parser for a file that parses different tags: INDI (individual), BIRT (event), FAMS (spouse), FAMC (children). Basically there are three classes: Person, Event, Family.
class Person():
    def __init__(self, ref):
        self._id = ref
        self._birth = None

    def addBirth(self, event):
        self._birth = event

class Event():
    def __init__(self, ref):
        self._id = ref
        self._event = None

    def addEvent(self, event):
        self._event = event
        # event = ['12 Jul 1997', 'Seattle, WA'] (generated by a function outside the class)
I want to pass self._event from the Event class into the addBirth method, to add it to my Person class. I have little knowledge of how classes and class inheritance work. Please help!
If I understand your question, you want to pass an Event object (for example) to an instance of Person?
Honestly, I don't understand the intent of your code, but you probably just need to pass self from one class instance to the other class instance.
self references the current instance.
class Person:
    def __init__(self):
        self._events = []

    def add_event(self, event):
        self._events.append(event)

class Event:
    def add_to_person(self, person):
        person.add_event(self)
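A minimal usage sketch of that wiring:

person = Person()
event = Event()
event.add_to_person(person)  # the Event hands itself to the Person
print(person._events)        # [<__main__.Event object at 0x...>]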
The most proper way to handle situations like this is to use getter and setter methods; data encapsulation is important in OO programming. I don't always see this done in Python where I think it should be, compared to other languages. It simply means adding methods to your classes whose sole purpose is to return args to a caller, or to modify args from a caller. For example, say you have classes A and B, and class B (the caller) wants to use a variable x from class A. Then class A should provide a getter interface to handle such situations. Setters work the same way:
class class_A(object):
    def __init__(self, init_args):
        self.x = 0

    def someMethod(self):
        doStuff()  # placeholder

    def getX(self):
        return self.x

    def setX(self, val):
        self.x = val

class class_B(object):
    def __init__(self):
        init_args = stuff  # placeholder
        self.A = class_A(init_args)
        x = self.A.getX()

    def someOtherMethod(self):
        doStuff()  # placeholder
So if class B wants the x attribute of an instance object A of class class_A, B just needs to call the getter method.
As far as passing instances of objects themselves, say if you wanted A to pass an already-created instance object of itself to a method in class B, then indeed, you simply would pass self.
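A short usage sketch of the getter/setter interface above (placeholder bodies aside; the init_args value here is just a stand-in):

a = class_A(init_args=None)  # stand-in constructor argument
a.setX(42)
print(a.getX())  # 42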
I want to create some kind of descriptor on a class that returns a proxy object. The proxy object, when indexed, retrieves members of the object and applies the index to them, then returns the sum.
E.g.,
import numpy as np

class NDArrayProxy:
    def __array__(self, dtype=None):
        retval = self[:]
        if dtype is not None:
            return retval.astype(dtype, copy=False)
        return retval

class ArraySumProxy(NDArrayProxy):
    def __init__(self, arrays):
        self.arrays = arrays

    @property
    def shape(self):
        return self.arrays[0].shape

    def __getitem__(self, indices):
        return np.sum([a[indices]
                       for a in self.arrays],
                      axis=0)
This solution worked fine while I had actual arrays as member variables:
class CompartmentCluster(Cluster):
    """
    Base class for cluster that manages evidence.
    """
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.variable_evidence = ArraySumProxy([])

class BasicEvidenceTargetCluster(CompartmentCluster):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.basic_in = np.zeros(self.size)
        self.variable_evidence.arrays.append(self.basic_in)

class ExplanationTargetCluster(CompartmentCluster):
    """
    These clusters accept explanation evidence.
    """
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.explanation_in = np.zeros(self.size)
        self.variable_evidence.arrays.append(self.explanation_in)

class X(BasicEvidenceTargetCluster, ExplanationTargetCluster):
    pass
Now I've changed my arrays into Python descriptors (cluster_signal implements the descriptor protocol and returns a numpy array):
class CompartmentCluster(Cluster):
    """
    Base class for cluster that manages evidence.
    """
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.variable_evidence = ArraySumProxy([])

class BasicEvidenceTargetCluster(CompartmentCluster):
    # This class variable creates a Python object named basic_in on the
    # class, which implements the descriptor protocol.
    basic_in = cluster_signal(text="Basic (in)",
                              color='bright orange')

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.variable_evidence.arrays.append(self.basic_in)

class ExplanationTargetCluster(CompartmentCluster):
    """
    These clusters accept explanation evidence.
    """
    explanation_in = cluster_signal(text="Explanation (in)",
                                    color='bright yellow')

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.variable_evidence.arrays.append(self.explanation_in)

class X(BasicEvidenceTargetCluster, ExplanationTargetCluster):
    pass
This doesn't work, because the append statements append the result of the descriptor call. What I need is to append a bound method or a similar proxy. What's the nicest way to modify my solution? In short: the variables basic_in and explanation_in were numpy arrays; they're now descriptors. I would like to develop some version of ArraySumProxy that works with descriptors rather than requiring actual arrays.
When you access a descriptor, it is evaluated and you only get the value. Since your descriptor does not always return the same object (I guess you cannot avoid that?), you don't want to access the descriptor when you are initializing your proxy.
The simplest way to avoid accessing it is to just remember its name, so instead of:
self.variable_evidence.arrays.append(self.basic_in)
you do:
self.variable_evidence.arrays.append((self, 'basic_in'))
Then, of course, variable_evidence has to be aware of that and do getattr(obj, name) to access it.
Another option is to make the descriptor return a proxy object which is evaluated later. I don't know what you are doing, but that might be too many proxies for good taste...
EDIT
Or... you can store the getter:
self.variable_evidence.arrays.append(lambda: self.basic_in)
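For that last variant to work end to end, the proxy has to call the stored callables instead of indexing stored arrays. A minimal sketch of such an ArraySumProxy (the attribute keeps the name arrays so the append line above still reads the same, but note that it now holds zero-argument callables; this adaptation is my assumption, not code from the question):

class ArraySumProxy(NDArrayProxy):
    def __init__(self, getters):
        self.arrays = getters  # zero-argument callables, e.g. lambda: self.basic_in

    @property
    def shape(self):
        return self.arrays[0]().shape

    def __getitem__(self, indices):
        # evaluate each getter now, so the descriptors are read lazily
        return np.sum([getter()[indices] for getter in self.arrays],
                      axis=0)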
Following the SO questions "What is a clean, pythonic way to have multiple constructors in Python?" and "Can you list the keyword arguments a Python function receives?", I want to create a base class that has a from_yaml classmethod and also removes unneeded keyword args, as shown below. But I think I need to reference the derived class's __init__ method from the base class. How do I do this in Python?
def get_valid_kwargs(func, args_dict):
    valid_args = func.func_code.co_varnames[:func.func_code.co_argcount]
    kwargs_len = len(func.func_defaults)  # number of keyword arguments
    valid_kwargs = valid_args[-kwargs_len:]  # because kwargs are last
    return dict((key, value) for key, value in args_dict.iteritems()
                if key in valid_kwargs)
import yaml

class YamlConstructableClass(object):
    @classmethod
    def from_yaml(cls, yaml_filename):
        file_handle = open(yaml_filename, "r")
        config_dict = yaml.load(file_handle)
        valid_kwargs = get_valid_kwargs(AnyDerivedClass.__init__, config_dict)  # I don't know the right way to do this
        return cls(**valid_kwargs)
class MyDerivedClass(YamlConstructableClass):
    def __init__(self, some_arg, other_arg):
        do_stuff(some_arg)
        self.other_arg = other_arg

derived_class = MyDerivedClass.from_yaml("my_yaml_file.yaml")
You already have a reference to the correct class: cls:
valid_kwargs = get_valid_kwargs(cls.__init__, config_dict)
The class method is bound to the class object it is being called on. For MyDerivedClass.from_yaml(), cls is not bound to the parent class but to MyDerivedClass itself.
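A quick way to convince yourself of the binding (toy names):

class Base(object):
    @classmethod
    def which(cls):
        return cls.__name__

class Derived(Base):
    pass

print(Base.which())     # Base
print(Derived.which())  # Derived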