I'm trying to use Python introspection in a somewhat unusual way.
For example, I have a class LoggerManager which encapsulates a pool of specific logger classes used for statistics. I know this isn't the standard way to do it, but I can't use mixins because my SpecificLogger objects are serialized to JSON elsewhere in the code, and I really don't want to pass self.data through the init parameters (that would mean rewriting many lines of code and would pollute the serialization with garbage again).
class LoggerManager(object):
    def __init__(self):
        self.data
        self.b = SpecificLogger()
        self.c = SpecificLogger2()
        ...

class SpecificLogger(LoggerGeneral):
    def get_data(self):
        global data  # here I want to get the object from the LoggerManager instance's namespace, not a real global
        do_something_with_data(data)
I want behavior like the following code, which uses mixins:
import json

class A(object):
    def __init__(self):
        self.data = 'something'
    def create_pool_of_specific_objects(self):
        self.obj = B()

class B(A):
    def __init__(self):
        A.__init__(self)
    def do_something_with_data(self):
        print(self.data)
        self.data = 'new_data'
        print(self.data)
    def save(self):
        return json.dumps(self.__dict__, ensure_ascii=False)
    def hack_save(self):
        data_to_dumped = {x: y for x, y in self.__dict__.iteritems() if x != 'obj'}
        return json.dumps(data_to_dumped, ensure_ascii=False)
b = B()
b.create_pool_of_specific_objects()
b.do_something_with_data()
b.save()       # raises an exception because the nested object in self.obj is not JSON serializable
b.hack_save()  # works, but it is ugly and fragile if the object is modified at runtime
For now I have written code that uses the gc module, but calling gc.get_objects() adds some overhead, doesn't it?
import json
import gc

class A(object):
    def __init__(self):
        self.data = 'something'
        self.obj = B()

class B(object):
    def __init__(self):
        pass
    def do_something_with_data(self):
        for obj in gc.get_objects():  # or even: for obj in globals().values()
            if isinstance(obj, A):
                print(obj.data)
                obj.data = 'new_data'
                print(obj.data)
    def save(self):
        return json.dumps(self.__dict__, ensure_ascii=False)
a = A()
b = B()
b.do_something_with_data()
b.save()  # works
Any suggestions for doing this with introspection, without inheritance and without the GC overhead? Does Python keep a reference to the enclosing namespace somewhere, so that I can get the A instance in a friendlier way?
OK, it turns out a correctly asked question contains its own answer.
All I need is:
import inspect

def my_method(self):
    parent_self = inspect.currentframe().f_back.f_locals['self']
    do_something_with_parent_self(parent_self)
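Here is a minimal runnable sketch of that trick (class and attribute names are illustrative, mirroring the example above, not the real code); it only works when the manager calls the logger's method directly, so that f_back is the manager's frame:

import inspect

class SpecificLogger(object):
    def get_data(self):
        # grab 'self' from the caller's frame, i.e. the LoggerManager instance
        parent_self = inspect.currentframe().f_back.f_locals['self']
        return parent_self.data

class LoggerManager(object):
    def __init__(self):
        self.data = 'something'
        self.b = SpecificLogger()

    def run(self):
        return self.b.get_data()

print(LoggerManager().run())  # prints 'something'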
I have 2 classes (defined in two different packages). An A object has a "set" of B objects that all refer back to that A object. Here is what it looks like.
a.py:
from b import B

class A():
    def __init__(self, data):
        self.data = data
        self.Bs = {}

    def add_B(self, id, data_B):
        self.Bs[id] = B(data_B, self)
b.py:
class B():
    def __init__(self, data, a_instance):
        self.data = data
        self.a = a_instance
So everything works pretty well, but I'd like to hint to Python that a_instance is indeed an instance of class A, so that I get autocompletion in Visual Studio Code.
At first I tried adding from a import A and changing the signature to def __init__(self, data, a_instance: A): in b.py, but that obviously gave me a circular import error.
So I've been trying to use the typing package, and added these lines to a.py:
from typing import NewType
A_type = NewType('A_type', A)
But I'm still getting a circular import error.
Can anyone explain what I'm doing wrong?
Thanks for the help.
PS: My classes actually have some complex methods and are defined in _a.py (resp. _b.py); the __init__.py just imports class A and declares A_type (resp. just imports class B).
Use typing.TYPE_CHECKING, a constant that is never true at runtime, together with the string form of a type annotation, which lets you refer to a name that is not in scope at runtime:
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from a import A

class B:
    def __init__(self, data, a_instance: "A"):
        ...
However, if you can restructure your code in a way that avoids circular imports altogether, all the better.
You could try an abstract base class that declares the attributes of A and B, then implement each class accordingly.
from abc import abstractmethod, ABCMeta

class ABInterface(metaclass=ABCMeta):
    @property
    @abstractmethod
    def data(self):
        pass

    @data.setter
    @abstractmethod
    def data(self, value):
        pass

    @property
    @abstractmethod
    def Bs(self):
        pass

    @Bs.setter
    @abstractmethod
    def Bs(self, value):
        pass

    @property
    @abstractmethod
    def a(self):
        pass

    @a.setter
    @abstractmethod
    def a(self, value):
        pass

    @abstractmethod
    def add_B(self, id, data_B):
        pass
Then create each class by extending the interface.
class B(ABInterface):
    def __init__(self, data, a_instance):
        self.data = data
        self.a = a_instance

class A(ABInterface):
    def __init__(self, data):
        self.data = data
        self.Bs = {}

    def add_B(self, id, data_B):
        self.Bs[id] = B(data_B, self)
I wrote a class whose methods override some methods of a parent class only when I want them to; other times I call super() on that method so that only the code in the parent class method executes. I observe that this works when I store data but not when I retrieve that data. A simplified example that shows the exact problem:
# parent class
class A(object):
    def __init__(self):
        self.var = {}
    def assigner_method(self, value):
        self.var = value
    def returning_method(self):
        return self.var

# child class
class B(A):
    def returning_method(self):
        # Do nothing
        super(B, self).returning_method()

# What obviously works
class C(object):
    def some_method(self):
        self.obj = A()
        self.obj.assigner_method("ABCD")
        resp = self.obj.returning_method()
        print resp

# What doesn't work:
class D(object):
    def some_method(self):
        self.obj2 = B()
        self.obj2.assigner_method("ABCD")
        resp = self.obj2.returning_method()
        print resp
Now, this works:
print C().some_method()
ABCD
And this fails:
print D().some_method()
None
Putting some prints here and there, I see that setting the data on self.var via self.obj2 works. Also, when fetching data through self.obj2, the parent class returning_method prints that it is returning "ABCD", but at the caller the received value is None. I think I did something fundamentally wrong here. Any help appreciated.
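For what it's worth, the snippet above already shows the likely cause: B.returning_method calls the parent method but never returns its result, so the caller always gets None. A minimal sketch of the fix (my reading of the code, not part of the original post):

class B(A):
    def returning_method(self):
        # forward the parent's return value instead of discarding it
        return super(B, self).returning_method()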
Let's say I have this:
from PySide2 import QtWidgets

class MyClass(object):
    def __init__(self, parent=None):
        self.class_variable = QtWidgets.QWidget()

class_instance = MyClass()
variable = class_instance.class_variable
class_instance_returned = mystery_method(variable)  # How to make this return class_instance?
How should I define mystery_method so that it returns class_instance?
The real-world case is that I'm passing a QWidget, which I use as the base instance for .ui file loading, into a function. Inside that function I need to figure out which class instance the widget belongs to.
Python 2.7
class MyClass(object):
    def foo(self):
        return 'bar'

instance = MyClass()

def mystery_method(method):
    return method.im_self.__class__

print mystery_method(instance.foo)
Python 3
class MyClass(object):
    def foo(self):
        return 'bar'

instance = MyClass()

def mystery_method(method):
    return method.__self__.__class__

print(mystery_method(instance.foo))
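As a side note (not part of the original answer): if you want the instance itself rather than its class, the bound method's __self__ attribute already is that instance. Continuing the Python 3 snippet above:

def owning_instance(method):
    # a bound method keeps a reference to the object it is bound to
    return method.__self__

print(owning_instance(instance.foo) is instance)  # True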
EDIT
After the OP was edited:
class ParentClass():
    def foo(self):
        return 'bar'

class MyClass(object):
    def __init__(self, parent=None):
        self.instance_attribute = ParentClass()

def mystery_method(method):
    return method.__class__

class_instance = MyClass()
print mystery_method(class_instance.instance_attribute)
One way would be to define foo as a custom property that returns both its value and the related instance when its value is fetched:
from collections import namedtuple

class Prop(object):
    def __init__(self, val):
        self.val = val
    def __get__(self, instance, type):
        return namedtuple('Prop', ('value', 'instance'))(self.val, instance)
    def __set__(self, instance, val):
        self.val = val

class MyClass(object):
    foo = Prop('bar')
Now in your program you can explicitly use its value and the related instance using foo's value and instance attributes respectively.
Demo:
>>> instance = MyClass()
>>> instance.foo
Prop(value='bar', instance=<__main__.MyClass object at 0x10effbcd0>)
>>> instance.foo.value
'bar'
>>> instance.foo.instance
<__main__.MyClass object at 0x10effbcd0>
In general you cannot (at least not without a lot of searching through all the objects in the system) but if all you want is to find which instances of a class match a particular value then it's fairly easy.
You can create a set of all instances and iterate over them to find what you need.
from weakref import WeakSet

class MyClass(object):
    _instances = WeakSet()

    def __init__(self, foo):
        self._instances.add(self)
        self.foo = foo

    @classmethod
    def findFoo(cls, foo):
        return [instance for instance in cls._instances if instance.foo == foo]
>>> instance1 = MyClass('bar')
>>> instance2 = MyClass('baz')
>>> MyClass.findFoo('baz')
[<__main__.MyClass object at 0x7f6723308f50>]
>>> MyClass.findFoo('bar')
[<__main__.MyClass object at 0x7f6723308c50>]
Note that deleting the object won't remove it from the set immediately; it may not go until it is garbage collected:
>>> del instance1
>>> MyClass.findFoo('bar')
[<__main__.MyClass object at 0x7f6723308c50>]
>>> import gc
>>> gc.collect()
16
>>> MyClass.findFoo('bar')
[]
However, in general you would do better to keep a reference to the original object around and just use that.
Also, note that you cannot reliably tell which instance holds 'bar' if it is stored in more than one object: they could be the same 'bar' or they could be different ones, and whether they are the same or different is an implementation detail.
I have a big class with a lot of methods and attributes. The instances are created from data in a remote database, and the process of creating each instance is long and heavy. For performance reasons I've created a Bunch class from this heavy class, so accessing the attributes is easy and works great. The problem is how to use the methods from that class. For example:
class clsA():
    def __init__(self, obj):
        self.attrA = obj.attrA

    def someFunc(self):
        print self

class bunchClsA(bunch):
    def __getattr__(self, attr):
        # this is the problem:
        try:
            # try and return a func
            func = clsA.attr
            return func
        except:
            # return a simple attribute
            return self.attr
Clearly this doesn't work. Is there a way I could access the instance functions statically and override the "self" variable?
I found a nice solution to the problem:
from bunch import Bunch
import types

# Original class:
class A():
    y = 6
    def __init__(self, num):
        self.x = num
    def funcA(self):
        print self.x

# Class that wraps A using Bunch (that's what I needed; you can use another):
class B(Bunch):
    def __init__(self, data, cls):
        self._cls = cls  # notice: not an instance, just the class itself
        super(B, self).__init__(data)

    def __getattr__(self, attr):
        # handles normal Bunch/dict attributes
        if attr in self.keys():
            return self[attr]
        else:
            res = getattr(self._cls, attr)
            if isinstance(res, types.MethodType):
                # returns the class func with self overridden
                return types.MethodType(res.im_func, self, type(self))
            else:
                # returns class attributes like y
                return res

data = {'x': 3}
ins_b = B(data, A)
print ins_b.funcA()  # prints 3
print ins_b.y        # prints 6
And this solves my issue. It's a hack; if you have the option, redesign the code instead.
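For later readers: the snippet above is Python 2 (im_func, unbound methods). A rough Python 3 sketch of the same idea, using a plain dict subclass instead of the Bunch package (class names and structure are my own, not from the original answer):

import types

class A:
    y = 6
    def __init__(self, num):
        self.x = num
    def funcA(self):
        print(self.x)

class B(dict):
    def __init__(self, data, cls):
        super().__init__(data)
        self._cls = cls  # the class itself, not an instance

    def __getattr__(self, attr):
        if attr in self:
            return self[attr]
        res = getattr(self._cls, attr)
        if callable(res):
            # rebind the plain function so 'self' becomes this wrapper
            return types.MethodType(res, self)
        return res

ins_b = B({'x': 3}, A)
ins_b.funcA()   # prints 3
print(ins_b.y)  # prints 6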
I need to protect an attribute of a class. But what should I do if the class supports save and load operations?
import numpy as np
import pickle

class data(object):
    def __init__(self):
        self.__a = range(100)

    @property
    def a(self):
        return self.__a

    def save(self, path):
        pickle.dump(self, open(path, 'wb'), protocol=2)

    def load(self, path):
        obj = pickle.load(open(path, 'rb'))
        self.__a = obj.a
This is simple, but the __a attribute is no longer protected: calling instance.a returns the exact instance.__a list, so it can be changed from the outside, which is dangerous in my case.
Is there any way around this?
To protect lists from being changed, you can have your property return a copy of the list:
@property
def a(self):
    return list(self.__a)
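For example (a small check of my own, assuming the data class above with this copying property): mutating the returned list no longer touches the internal one.

obj = data()
lst = obj.a
lst.append(999)     # mutates only the copy
print(len(obj.a))   # still 100: the internal list is untouched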
Instead of non-standard save/load methods, stick with the standard pythonic way of pickling objects, i.e. using pickle.dump and pickle.load directly.
The data members will be as protected after loading as they were before dumping, i.e. your object behaves the same.
import pickle

class data(object):
    def __init__(self):
        self.__a = range(100)

    @property
    def a(self):
        return self.__a

obj = data()

# when you want to save, do:
x = pickle.dumps(obj)

# and for loading, do:
obj = pickle.loads(x)
obj.__dict__
=> {'_data__a': [0,
1,
2,
3,
...
]}
This approach has many advantages, e.g. you can safely pickle objects which reference instances of your class data.