I'm going to throw out some pseudocode and then explain what I want, because I'm not sure how else to describe it.
File_A

class Panel_A(wx.Panel):
    def __init__(self):
        button_a = wx.Button(parent=self)

    def onButton(self, event):
        # pass the search results to the list view
        pass

File_B

class Panel_B(wx.Panel):
    def __init__(self):
        listview_a = wx.ListView(parent=self)

File_C

import File_A
import File_B

panel_a = Panel_A()
panel_b = Panel_B()
OK, I have a panel in one module that searches a database when I push button_a. The second module has a listview in it. Both modules are imported into a third module. I need to be able to pass the information from the search to listview_a in the other module. I am not sure how to do this, since all the objects are created in File_C but I need to use them in File_A.
Use the delegate design pattern:
(Pass in panel_b as an argument when instantiating Panel_A objects):
# File_A
class Panel_A(wx.Panel):
    def __init__(self, panel_b):
        self.panel_b = panel_b
        button_a = wx.Button(parent=self)

    def onButton(self, event):
        # pass the search results to self.panel_b.listview_a
        pass

# File_B
class Panel_B(wx.Panel):
    def __init__(self):
        self.listview_a = wx.ListView(parent=self)

# File_C
import File_A
import File_B

panel_b = Panel_B()
panel_a = Panel_A(panel_b)
You may want to pass in just the ListView, instead of the whole panel. I don't know enough about your situation to know which one would be best.
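If you do pass in just the ListView, the button handler might look roughly like this (a minimal sketch; the Bind call, the button label, the search_database helper, and the parent argument are assumptions not shown in the question, and older "classic" wxPython spells InsertItem as InsertStringItem):

# File_A (sketch) - passing just the ListView
class Panel_A(wx.Panel):
    def __init__(self, parent, listview):
        wx.Panel.__init__(self, parent)
        self.listview = listview
        self.button_a = wx.Button(parent=self, label="Search")
        self.button_a.Bind(wx.EVT_BUTTON, self.onButton)

    def onButton(self, event):
        results = self.search_database()  # hypothetical: returns a list of strings
        self.listview.DeleteAllItems()
        for row, text in enumerate(results):
            self.listview.InsertItem(row, text)

# File_C (sketch) - parent stands in for whatever containing frame/window you use
panel_b = File_B.Panel_B()
panel_a = File_A.Panel_A(parent, panel_b.listview_a)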
You can use a simplified version of the Observer pattern: the Panel_A class has a listener field whose fillView method receives the list, and Panel_B implements such a method.
After constructing both Panel_A and Panel_B, assign the Panel_B object to the Panel_A object's listener field, and call self.listener.fillView(list) from inside the onButton method.
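A minimal sketch of that wiring (the parent arguments, the search_database helper, and the fillView body are assumptions based on the description above; frame stands in for whatever parent window you use):

# File_A
class Panel_A(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.listener = None  # set after construction, see File_C below

    def onButton(self, event):
        results = self.search_database()  # hypothetical search helper
        if self.listener is not None:
            self.listener.fillView(results)

# File_B
class Panel_B(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.listview_a = wx.ListView(parent=self)

    def fillView(self, results):
        self.listview_a.DeleteAllItems()
        for row, text in enumerate(results):
            self.listview_a.InsertItem(row, text)

# File_C
panel_a = Panel_A(parent=frame)
panel_b = Panel_B(parent=frame)
panel_a.listener = panel_b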
I have a factory as shown in the following code:
class ClassFactory:
    registry = {}

    @classmethod
    def register(cls, name):
        def inner_wrapper(wrapped_class):
            if name in cls.registry:
                print(f'Class {name} already exists. Will replace it')
            cls.registry[name] = wrapped_class
            return wrapped_class
        return inner_wrapper

    @classmethod
    def create_type(cls, name):
        exec_class = cls.registry[name]
        instance = exec_class()
        return instance


@ClassFactory.register('Class 1')
class M1:
    def __init__(self):
        print("Starting Class 1")


@ClassFactory.register('Class 2')
class M2:
    def __init__(self):
        print("Starting Class 2")
This works fine and when I do
if __name__ == '__main__':
    print(ClassFactory.registry.keys())
    foo = ClassFactory.create_type("Class 2")

I get the expected result:

dict_keys(['Class 1', 'Class 2'])
Starting Class 2
Now the problem is that I want to isolate classes M1 and M2 to their own files m1.py and m2.py, and in the future add other classes using their own files in a plugin manner.
However, simply placing them in their own files
# m2.py
from test_ import ClassFactory

@ClassFactory.register('Class 2')
class M2:
    def __init__(self):
        print("Starting Class 2")
gives the result dict_keys(['Class 1']) since it never gets to register the class.
So my question is: How can I ensure that the class is registered when placed in a file different from the factory, without making changes to the factory file whenever I want to add a new class? How to self register in this way? Also, is this decorator way a good way to do this kind of thing, or are there better practices?
Thanks
How can I ensure that the class is registered when placed in a file different from the factory, without making changes to the factory file whenever I want to add a new class?
I'm playing around with a similar problem, and I've found a possible solution. It seems too much of a 'hack' though, so set your critical thinking levels to 'high' when reading my suggestion below :)
As you've mentioned in one of your comments above, the trick is to force the loading of the individual *.py files that contain individual class definitions.
Applying this to your example, this would involve:
Keeping all class implementations in a specific folder, e.g., structuring the files as follows:
.
├─ factory.py        # file with the ClassFactory class
└─ classes/
   ├─ __init__.py
   ├─ m1.py          # file with the M1 class
   └─ m2.py          # file with the M2 class
Adding the following statement to the end of your factory.py file, which will take care of loading and registering each individual class:
from classes import *
Adding a piece of code like the snippet below to your __init__.py within the classes/ folder, so that all classes are loaded and registered dynamically [1]:
from inspect import isclass
from pkgutil import iter_modules
from pathlib import Path
from importlib import import_module

# iterate through the modules in the current package
package_dir = Path(__file__).resolve().parent
for (_, module_name, _) in iter_modules([package_dir]):
    # import the module and iterate through its attributes
    module = import_module(f"{__name__}.{module_name}")
    for attribute_name in dir(module):
        attribute = getattr(module, attribute_name)
        if isclass(attribute):
            # add the class to this package's variables
            globals()[attribute_name] = attribute
If I then run your test code, I get the desired result:
# test.py
from factory import ClassFactory

if __name__ == "__main__":
    print(ClassFactory.registry.keys())
    foo = ClassFactory.create_type("Class 2")

$ python test.py
dict_keys(['Class 1', 'Class 2'])
Starting Class 2
Also, is this decorator way a good way to do this kind of thing, or are there better practices?
Unfortunately, I'm not experienced enough to answer this question. However, when searching for answers to this problem, I came across the following sources that may be helpful to you:
[2] : this presents a method for registering class existence based on Python Metaclasses. As far as I understand, it relies on the registering of subclasses, so I don't know how well it applies to your case. I did not follow this approach, as I've noticed that the new edition of the book suggests the use of another technique (see bullet below).
[3], item 49 : this is the 'current' suggestion for subclass registering, which relies on the definition of the __init_subclass__() function in a base class.
If I had to apply the __init_subclass__() approach to your case, I'd do the following:
Add a Registrable base class to your factory.py (and slightly re-factor ClassFactory), like this:
class Registrable:
    def __init_subclass__(cls, name: str):
        ClassFactory.register(name, cls)


class ClassFactory:
    registry = {}

    @classmethod
    def register(cls, name: str, sub_class: Registrable):
        if name in cls.registry:
            print(f'Class {name} already exists. Will replace it')
        cls.registry[name] = sub_class

    @classmethod
    def create_type(cls, name):
        exec_class = cls.registry[name]
        instance = exec_class()
        return instance


from classes import *
Slightly modify your concrete classes to inherit from the Registrable base class, e.g.:
from factory import Registrable

class M2(Registrable, name='Class 2'):
    def __init__(self):
        print("Starting Class 2")
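With that in place, the original test script should give the same output as before (a sketch; it assumes m1.py has been adapted the same way and that both files live in the classes/ package shown above):

# test.py
from factory import ClassFactory

if __name__ == "__main__":
    print(ClassFactory.registry.keys())        # dict_keys(['Class 1', 'Class 2'])
    foo = ClassFactory.create_type("Class 2")  # prints "Starting Class 2"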
Given a class that exports .csv files to a database:
import luigi
import csv

class CsvToDatabase(luigi.Task):
    # (...)
    def run(self):
        ## (...)
        with open(self.input().some_attribute, 'r', encoding='utf-8') as some_dataframe:
            y = csv.reader(some_dataframe, delimiter=';')
            ### (...) <several lines of code>
    # (...)
I'm having problems trying to export a file with ISO-8859-1 encoding.
When I exclude the encoding argument from the open() function, everything works fine, but I cannot make permanent changes to the class definition (the firm's other sectors use it). So I thought about the possibility of using polymorphism to solve it, like:
from script_of_interest import CsvToDatabase

class LatinCsvToDatabase(CsvToDatabase):
    # code that uses everything in `run()` except the `some_dataframe` definition in the with statement
Does this possibility actually exist? How could I handle it without repeating the "several lines of code" inside the with statement?
Thank you @martineau and @buran for the comments. Based on them, I will request a change to the base class definition that doesn't affect the other sectors' work. It would look like this:
import luigi
import csv

class CsvToDatabase(luigi.Task):
    # (...)
    encoding_param = luigi.Parameter(default='utf-8')  # as a class attribute
    # (...)
    def run(self):
        ## (...)
        with open(self.input().some_attribute, 'r', encoding=self.encoding_param) as some_dataframe:
            y = csv.reader(some_dataframe, delimiter=';')
            ### (...) <several lines of code>
    # (...)
And finally, in my script, something like:
from script_of_interest import CsvToDatabase

class LatinCsvToDatabase(CsvToDatabase):
    pass

LatinCsvToDatabase.encoding_param = None
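Since encoding_param is a luigi.Parameter, it can also be overridden per invocation instead of (or in addition to) subclassing; a sketch, assuming the task needs no other required parameters:

# set at instantiation time
task = CsvToDatabase(encoding_param='ISO-8859-1')

# or on the command line when the task is run by name, e.g.
#   luigi --module script_of_interest CsvToDatabase --encoding-param ISO-8859-1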
You might consider, as an alternative, modifying the original class to add a new method get_csv_encoding, which is used by the run method:
class CsvToDatabase(luigi.Task):
    ...
    def get_csv_encoding(self):
        # default:
        return 'utf-8'

    def run(self):
        ## (...)
        with open(self.input().some_attribute, 'r', encoding=self.get_csv_encoding()) as some_dataframe:
            y = csv.reader(some_dataframe, delimiter=';')
            ...
And then subclass this as follows:
class MyCsvToDatabase(CsvToDatabase):
    def get_csv_encoding(self):
        return 'ISO-8859-1'  # or None
And use an instance of the subclass. I just think this is neater and you can have multiple subclass instances "running" concurrently.
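For example, a minimal sketch (the class names are made up, and it assumes the tasks need no extra constructor parameters):

import luigi

class LatinCsvToDatabase(CsvToDatabase):
    def get_csv_encoding(self):
        return 'ISO-8859-1'

class DefaultCsvToDatabase(CsvToDatabase):
    pass  # keeps the inherited 'utf-8' default

# both encodings can be scheduled side by side in the same pipeline
luigi.build([LatinCsvToDatabase(), DefaultCsvToDatabase()], local_scheduler=True)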
After you posted an answer, I think it's better to post some alternatives.
So, here is your current class as a minimal example:
class A:
    def run(self):
        # some code
        with open('some_file.csv', encoding='utf-8'):
            pass
        # more code
My idea, suggested in the comments: change class A and add an encoding parameter to A.run() with a default value of 'utf-8'. This way the change will not affect others (their existing code that uses class A).
class A:
    def run(self, encoding='utf-8'):
        # some code
        with open('some_file.csv', encoding=encoding):
            pass
        # more code
Then in your code
a = A()  # create an instance of A
a.run(encoding=None)
Now, your idea to add a class attribute. This is how you decided to change class A:
class A:
    encoding = 'utf-8'

    def run(self):
        # some code
        with open('some_file.csv', encoding=self.encoding):
            pass
        # more code
I don't think you need to subclass. With this new class A you can do
a = A()            # create an instance of the new class A
a.encoding = None  # run() uses self.encoding, so this works; A.encoding is still 'utf-8' and others are not affected
And if you insist on subclassing the new class A:
class B(A):
    encoding = None
Then in your code you can use an instance of class B.
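For instance (a small sketch, just showing the subclass in use):

b = B()   # picks up encoding=None from class B
b.run()   # run() reads self.encoding, so the file is opened with the default locale encoding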
I have a scenario where I am passing a file name and checking whether a class in it has a start argument in its constructor; if it does, then I have to create an instance of that class.
Consider the example where I have a file named test.py which has three classes, namely A, B, and C. Only class A has just the start parameter; the others have different or extra parameters.
# test.py
class A:
    def __init__(self, start=""):
        pass

class B:
    def __init__(self, randomKeyword, start=""):
        pass

class C:
    def __init__(self):
        pass
Now I want to write a script which takes test.py as an argument and creates an instance of A. This is my progress so far:
import importlib.util

spec = importlib.util.spec_from_file_location('test', '/path/to/test.py')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
Basically, I need to write a program which finds the __init__ arguments of every class in the file and creates an instance of any class whose only __init__ argument is start.
As mentioned by @deceze, it's not a good idea to instantiate a class on the basis of its __init__ parameters, as we're not sure what is in there. But it is possible to do, so I am posting this answer just so that you know how it can be done.
# test.py
class A:
    def __init__(self, start=""):
        pass

class B:
    def __init__(self, randomKeyword, start=""):
        pass

class C:
    def __init__(self):
        pass
One possibility is:
# init.py
import importlib.util
from inspect import getmembers, isclass, signature

spec = importlib.util.spec_from_file_location('test', '/path/to/test.py')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

for name, data in getmembers(module, isclass):
    cls = getattr(module, name)
    parameter = signature(cls.__init__).parameters.keys()
    # only (self, start) as parameters
    if len(parameter) == 2 and 'start' in parameter:
        obj = cls(start="Whatever you want")
Of course, it's not the best approach, so more answers are welcome, and if you are in this scenario, consider @deceze's comment and define a builder.
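A builder in that sense can be as simple as an agreed-upon entry-point function in each plugin file, so nothing has to be guessed from signatures; a sketch of the idea (the build name is made up, not taken from the comment):

# test.py (sketch)
class A:
    def __init__(self, start=""):
        self.start = start

def build(start=""):
    # explicit entry point; callers use this instead of inspecting signatures
    return A(start=start)

The loading script would then just call getattr(module, 'build')(start="Whatever you want") after exec_module.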
I created the following class:
import loader
import pandas as pd

class SavTool(pd.DataFrame):
    def __init__(self, path):
        pd.DataFrame.__init__(self, data=loader.Loader(path).data)

    @property
    def path(self):
        return path

    @property
    def meta_dict(self):
        return loader.Loader(path).dict
If the class is instantiated, the instance becomes a pandas DataFrame, which I wanted to extend with other attributes like the path to the file and a dictionary containing meta information (called 'meta_dict').
What I want is the following: the dictionary 'meta_dict' shall be mutable. Namely, the following should work:
df = SavTool("somepath")
df.meta_dict["new_key"] = "new_value"
print df.meta_dict["new_key"]
But what happens is that every time I use the syntax 'df.meta_dict' the method 'meta_dict' is called and the original 'meta_dict' from loader.Loader is returned such that 'df.meta_dict' cannot be changed. Therefore, the syntax leads to "KeyError: 'new_key'". 'meta_dict' shall be called only once and then never again if it is used/called a second/third... time. The second/third... time 'meta_dict' should just be an attribute, in this case a dictionary.
How can I fix this? Maybe the whole design of the class is bad and should be changed (I'm new to using classes)? Thanks for your answers!
When you call loader.Loader you create a new instance of the dictionary each time. The @property doesn't cache anything for you; it just provides a convenient way to wrap complicated getters behind a clean interface for the caller.
Something like this should work. I also updated the path variable so it's bound correctly on the class and returned in the path property correctly.
import loader
import pandas as pd

class SavTool(pd.DataFrame):
    def __init__(self, path):
        pd.DataFrame.__init__(self, data=loader.Loader(path).data)
        self._path = path
        self._meta_dict = loader.Loader(path).dict

    @property
    def path(self):
        return self._path

    @property
    def meta_dict(self):
        return self._meta_dict

    def update_meta_dict(self, **kwargs):
        self._meta_dict.update(kwargs)
Another way to just cache the variable is by using hasattr:
@property
def meta_dict(self):
    if not hasattr(self, "_meta_dict"):
        self._meta_dict = loader.Loader(self._path).dict
    return self._meta_dict
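On Python 3.8+ the same lazy caching can be done with functools.cached_property, which computes the value on first access and then stores it on the instance (a sketch; it assumes self._path is set in __init__ as above):

from functools import cached_property

class SavTool(pd.DataFrame):
    # __init__ as above, setting self._path

    @cached_property
    def meta_dict(self):
        # evaluated once on first access, then cached on the instance
        return loader.Loader(self._path).dict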
As I work and update a class, I want a class instance that is already created to be updated. How do I go about doing that?
class MyClass:
    """ """
    def __init__(self):
        pass

    def myMethod(self, case):
        print 'hello'

classInstance = MyClass()
I run Python inside of Maya, and on software start the instance is created. When I call classInstance.myMethod() it always prints 'hello', even if I change the method.
Thank you,
/Christian
More complete example:
class MayaCore:
    '''
    Super class and foundational Maya utility library
    '''
    def __init__(self):
        """ MayaCore.__init__(): set initial parameters """
        # maya info
        self.mayaVer = self.getMayaVersion()

    def convertToPyNode(self, node):
        """
        SYNOPSIS: checks and converts to PyNode
        INPUTS:   (string?/PyNode?) node: node name
        RETURNS:  (PyNode) node
        """
        if not re.search('pymel', str(node.__class__)):
            if not node.__class__ == str and re.search('Meta', str(node)): return node  # pass Meta objects too
            return PyNode(node)
        else: return node

    def verifyMeshSelection(self, all=0):
        """
        SYNOPSIS: Verifies the selection to be mesh transform
        INPUTS:   all = 0 - acts only on the first selected item
                  all = 1 - acts on all selected items
        RETURNS:  0 if not mesh transform or nothing is selected
                  1 if all/first selected is mesh transform
        """
        self.all = all
        allSelected = []
        error = 0
        iSel = ls(sl=1)
        if iSel != '':
            if self.all: allSelected = ls(sl=1)
            else:
                allSelected.append(ls(sl=1)[0])
            if allSelected:
                for each in allSelected:
                    if nodeType(each) == 'transform' and nodeType(each.getShape()) == 'mesh':
                        pass
                    else: error = 1
            else: error = 1
        else: error = 1
        if error: return 0
        else: return 1
mCore = MayaCore()
The last line is inside the module file (mCore = MayaCore()).
There are tons of methods inside the class so I have removed them to shorten the scrolling :-)
Also there are import statements above the class but they screw up the formatting for some reason. Here they are:
from pymel.all import *
import re
from maya import OpenMaya as om
from our_libs.configobj import ConfigObj

if getMelGlobal('float', "mVersion") >= 2011:
    from PyQt4 import QtGui, QtCore, uic
    import sip
    from maya import OpenMayaUI as omui
Inside Maya, we import this and subclasses of this class upon program start:
from our_maya.mayaCore import *
In other tools we write, we then call mCore.method() on a need basis.
The caveat I am running into is that when I go back and modify a MayaCore method while instances are already in play, I have to restart Maya for all the instances to pick up the method change (otherwise they still use the unmodified method).
Alright, trying again, but with a new understanding of the question:
class Foo(object):
    def method(self):
        print "Before"

f = Foo()
f.method()

def new_method(self):
    print "After"

Foo.method = new_method
f.method()
will print
Before
After
This will work with old style classes too. The key is modifying the class, not overriding the class's name.
You'll have to provide more details about what you are doing, but Python instances don't store methods, they always get them from their class. So if you change the method on the class, existing instances will see the new method.
My other answer answers your original question, so I'm leaving it there, but I think what you really want is the reload function.
import our_maya.mayaCore
reload(our_maya.mayaCore)
from our_maya.mayaCore import *
Do that after you change the class definition. Your new method ought to show up and be used by all the existing instances of your class.