I need to define a class that extends Python's standard math module without instantiating it (there's no need to, since all methods in the class are static):
import math

class more_math(math):
    @staticmethod
    def add_func(x):
        return math.sqrt(x) + 1
The code above doesn't run properly (script exits), with the error:
TypeError: Error when calling the metaclass bases
module.__init__() takes at most 2 arguments (3 given)
When the class declaration above is set to class more_math:, more_math.add_func(x) is called without error. However, more_math.sqrt(x) [sqrt is a method of math] can't be called, as more_math doesn't have math as its base class.
Any ideas on how this could be set up properly?
Think long and hard about whether you actually need to provide the functions math implements. You almost certainly don't; you probably just need to provide your extras. That said, if you do need to provide a more_math module that implements all the standard math functions too, the easiest way to do this is to do a from math import *. This will bring every function the math module defines into your module. This is considered very bad practice, because it pollutes the namespace of your module and can make it difficult to tell what is actually being used. However, in this case, pollution of your module's namespace is exactly what you want.
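If all you need is the extras, a separate helper module is enough. A minimal sketch, where the module and function names are just placeholders:

# extra_math.py -- hypothetical module holding only the additions
import math

def add_func(x):
    return math.sqrt(x) + 1

Callers then use import math for the standard functions and import extra_math for the additions; nothing needs to inherit from anything.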
As @user2357112 commented, math is a module, not a class. You can create a more_math module simply by creating a more_math.py file with:
from math import *

def add_func(x):
    return sqrt(x) + 1
This module can be imported with import more_math or from more_math import add_func.
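For example, assuming the more_math.py above is on the import path:

from more_math import add_func, sqrt  # sqrt was pulled in by the star import

print(sqrt(4.0))      # 2.0
print(add_func(4.0))  # 3.0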
math isn't a class, it's an instance of class types.ModuleType. You can verify this with isinstance(math, types.ModuleType) which will return True. Normally you can't define a subclass that inherits from an instance of another class. However, it is possible with a bit of hackery.
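A quick check in the interpreter (nothing here is specific to math; every module is an instance of types.ModuleType):

import math
import types

print(isinstance(math, types.ModuleType))  # True
print(type(math))                          # <class 'module'>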
(I got the idea from a recipe for inheriting from an instance on the ActiveState website.)
Since it is a hack, one might not want to use it in production code. However, I thought you (and other readers) might find it at least interesting, if not useful.
Script more_math.py:
from copy import deepcopy
import math
import sys

def class_from_instance(instance):
    # Build a class derived from the instance's class whose instances
    # start out with a copy of the instance's attributes.
    copy = deepcopy(instance.__dict__)

    def __init__(self, *args, **kwargs):
        super(InstanceFactory, self).__init__(*args, **kwargs)
        self.__dict__.update(copy)

    InstanceFactory = type('InstanceFactory',
                           (instance.__class__,),
                           {'__init__': __init__})
    return InstanceFactory

class MoreMathModule(class_from_instance(math)):
    @staticmethod
    def added_func(x):
        return math.sqrt(x) + 1

# Replace this module with an instance of the class above.
ref, sys.modules[__name__] = sys.modules[__name__], MoreMathModule('more_math')
if __name__ == '__main__':
    import more_math
    x = 42
    print('more_math.sqrt({}) -> {:.6f}'.format(x, more_math.sqrt(x)))
    print('more_math.added_func({}) -> {:.6f}'.format(x, more_math.added_func(x)))
Output:
more_math.sqrt(42) -> 6.480741
more_math.added_func(42) -> 7.480741
Related
I am reading this piece of code by Alex Martelli mentioned in this question. I understand that sys.modules[__name__] refers to the module currently being executed, but the line at the end of his const.py really confuses me. What is the meaning and the point of having such a statement, which replaces the current module, at the end of the file?
# Put in const.py...:
class _const:
    class ConstError(TypeError): pass

    def __setattr__(self, name, value):
        if name in self.__dict__:
            raise self.ConstError("Can't rebind const(%s)" % name)
        self.__dict__[name] = value

import sys
sys.modules[__name__] = _const()  # this I don't understand

# that's all -- now any client-code can
import const
Basically, my question is that in my opinion this line of code does not do anything; am I understanding it wrong?
Since in Python you don't have to put class definitions in separate files, I argue that I don't really need two modules unless I want to reuse the class _const. In that case sys.modules[__name__] = _const() is not necessary either... Am I understanding it correctly?
I believe it is binding an instance to the module. So when you do import const, you actually get an instance of the class _const.
This allows methods to be called on it, for example __setattr__, which in this case checks that you only bind each constant once.
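For example, assuming the const.py above is on the import path (the attribute name magic is arbitrary):

import const

const.magic = 23   # first binding works
const.magic = 42   # raises const.ConstError: Can't rebind const(magic)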
This is described by Guido van Rossum as a hack that is sometimes recommended:
https://mail.python.org/pipermail/python-ideas/2012-May/014969.html
# foo.py
import sys

class Foo:
    def funct1(self, <args>): <code>
    def funct2(self, <args>): <code>

sys.modules[__name__] = Foo()
This works because the import machinery is actively enabling this
hack, and as its final step pulls the actual module out of
sys.modules, after loading it. (This is no accident. The hack was
proposed long ago and we decided we liked enough to support it in the
import machinery.)
You can easily override __getattr__ / __getattribute__ / __setattr__
this way. It also makes "subclassing" the module a little easier
(although accessing the class to be used as a base class is a little
tricky: you'd have to use foo.__class__). But of course the kind of
API that Greg was griping about would never be implemented this way,
so that's fairly useless. And if you were designing a module as an
inheritable class right from the start you're much better off just
using a class instead of the above hack.
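To make the sketch concrete, here is a runnable version with made-up method bodies:

# foo.py -- runnable version of the hack above (funct1/funct2 bodies are invented)
import sys

class Foo:
    def funct1(self, x):
        return x + 1

    def funct2(self, x):
        return x * 2

sys.modules[__name__] = Foo()

After import foo, calls like foo.funct1(1) and foo.funct2(3) go to the Foo instance that replaced the module.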
Python's docs say that there's an alternative to local imports to prevent loading the module on startup:
https://docs.python.org/3/library/unittest.mock-examples.html#mocking-imports-with-patch-dict
...to prevent “up front costs” by delaying the import.
This can also be solved in better ways than an unconditional local
import (store the module as a class or module attribute and only do
the import on first use).
I don't understand the explanation in brackets. How do I do this? However I think about it, I seem to end up with local imports anyway.
The documentation likely refers to the use of importlib.import_module, which exposes Python's import functionality:
import importlib

class Example:
    TO_IMPORT = "os"  # the name of the module to lazily import
    __module = None

    def __init__(self):
        if self.__module is None:
            self.__module = importlib.import_module(self.TO_IMPORT)
Note that this way the import only happens when the class is first instantiated (later calls simply get the module back from Python's import cache), and the module is not added to the global namespace.
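The "module attribute" variant mentioned in the docs can be sketched like this (the helper name and the choice of os are arbitrary):

# lazy_os.py -- hypothetical module that delays importing os until first use
import importlib

_os = None  # the imported module is cached here, as a module attribute

def get_os():
    global _os
    if _os is None:
        _os = importlib.import_module("os")  # only runs on the first call
    return _os

Code elsewhere then calls lazy_os.get_os() instead of importing os at the top of the file.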
Further, it allows you to change which module is imported, which could be useful e.g. in cases where the same class is used as an interface to different backends:
import importlib

class Example:
    def __init__(self, backend="some_module"):
        self.module = importlib.import_module(backend)
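Hypothetical usage, picking the standard json module as the backend purely for illustration:

ex = Example(backend="json")
print(ex.module.dumps({"answer": 42}))  # {"answer": 42}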
I would like to programmatically convert a singleton object into a Python module, so that I can use the methods of this singleton object directly by importing them from the module instead of accessing them as object attributes. By "programmatically" I mean that I do not want to copy-paste the class methods explicitly into a module file. I need some sort of workaround that allows me to import the object's methods into the global scope of another module.
I would really appreciate if someone could help me on this one.
Here is a basic example that should illustrate my problem:
mymodule.py
class MyClass:
    """This is my custom class"""

    def my_method(self):
        return "myValue"

singleton = MyClass()
main_as_is.py
from mymodule import MyClass
myobject = MyClass()
print(myobject.my_method())
main_to_be.py
from mymodule import my_method # or from mymodule.singleton import my_method
print(my_method())
You can use the same strategy that the standard random module uses. All the functions in that module are actually methods of a "private" instance of the Random class. That's convenient for most common uses of the module, although sometimes it's useful to create your own instances of Random so that you can have multiple independent random streams.
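You can see the pattern directly in the standard library:

import random

print(random.random())   # module-level function, delegates to a hidden shared Random instance
rng = random.Random(42)  # your own instance: an independent, reproducible stream
print(rng.random())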
I've adapted your code to illustrate that technique. I named the class and its instance with a single leading underscore, since that's the usual convention in Python to signify a private name, but bear in mind it's simply a convention; Python doesn't do anything to enforce this privacy.
mymodule.py
class _MyClass:
    """ This is my custom class """

    def my_method(self):
        return "myValue"

_myclass = _MyClass()
my_method = _myclass.my_method
main_to_be.py
from mymodule import my_method
print(my_method())
output
myValue
BTW, the from mymodule import method1, method2 syntax is ok if you only import a small number of names, or it's clear from the name which module it's from (like math module functions and constants), and you don't import from many modules. Otherwise it's better to use this sort of syntax
import mymodule as mm
# Call a method from the module
mm.method1()
That way it's obvious which names are local, and which ones are imported and where they're imported from. Sure, it's a little more typing, but it makes the code a whole lot more readable. And it eliminates the possibility of name collisions.
FWIW, here's a way to automate adding all of the _myclass methods without explicitly listing them (but remember "explicit is better than implicit"). At the end of "mymodule.py", in place of my_method = _myclass.my_method, add this:
globals().update({k: getattr(_myclass, k) for k in _MyClass.__dict__
                  if not k.startswith('__')})
I'm not comfortable with recommending this, since it directly injects items into the globals() dict. Note that that code will add all class attributes, not just methods.
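Assuming mymodule.py is set up as above, the calling code keeps working unchanged:

import mymodule

print(mymodule.my_method())  # myValue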
In your question you talk about singleton objects. We don't normally use singletons in Python, and many programmers in various OOP languages consider them to be an anti-pattern. See https://stackoverflow.com/questions/12755539/why-is-singleton-considered-an-anti-pattern for details. For this application there is absolutely no need at all to use a singleton. If you only want a single instance of _MyClass then simply don't create another instance of it, just use the instance that mymodule creates for you. But if your boss insists that you must use a singleton, please see the example code here.
Using the OOP 'State' pattern in Python led me to this dependency problem:
StateA, StateB and StateC are three states implementing the same method event1.
StateB inherits its behavior from StateA.
file a.py:
# from b import StateB
from c import StateC

class StateA(object):
    def event1(self):
        return StateC()

print(type(StateA().event1()))
file b.py:
import a

class StateB(a.StateA):
    def event1(self):
        return self
file c.py:
class StateC(object):
    def event1(self):
        return self
As long as I don't need StateB in a.py, this works. But what if I want to use type StateB in StateA?
Importing StateB (see the first commented line in a.py) leads to this cyclic dependency error:
ImportError: cannot import name StateB
Circular dependencies are a code-design problem; in practice you will usually only run into them in badly organized code. Try to refactor your code to resolve the issue.
Another approach (which I don't recommend, just FYI) is to move the import out of module level and into the function that needs it, as in the sketch below, but you should generally avoid it.
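A minimal sketch of that workaround using the files from the question (event2 is a hypothetical method that needs StateB; again, not recommended):

# a.py -- deferring the import of b breaks the cycle at module-load time
from c import StateC

class StateA(object):
    def event1(self):
        return StateC()

    def event2(self):
        from b import StateB  # imported only when the method actually runs
        return StateB()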
Since your classes are tightly coupled, I would just put them into the same module:
file my_state.py:
class StateA(object):
    def event1(self):
        return StateC()

class StateB(StateA):
    def event1(self):
        return self

class StateC(object):
    def event1(self):
        return self

print(type(StateA().event1()))
I have a large Python codebase with many modules and classes. I have a special class whose single instance is needed everywhere throughout the code (it's a threaded application, and that instance also holds thread-local storage, locks, etc.). It's a bit uncomfortable to always "populate" that instance in every imported module. I know using globals is not best practice, but anyway: is there any "import hook" in Python that I can use to make my instance available in every module without extra work? It should work for normal imports, for "from mod import ..." and for other import constructs too. If this is not possible, can you suggest a better solution? It's certainly no fun to pass that instance to the constructors of every class, etc. Inheritance doesn't help either, since I have modules without classes, and I also need a single instance, not the class itself.
class master():
    def import_module(self, name):
        mod = __import__(name)
        mod.m = self
        return mod

[...]

m = master()
Currently I am thinking of something like the above: I would have to use m.import_module() to import modules, and the imported modules would then have the master instance available under the name "m", so they could call m.import_module() too, and so on. But then I have to give up "normal" import statements and write this:
example_mod = m.import_module("example_mod")
instead of just this:
import example_mod
(but of course I could live with this too, and just assign "m" to example_mod.m afterwards)
It's certainly no fun to pass that instance to the constructors of every class
You don't have to do this. Set up your global class in a module like config and import it
# /myapp/enviroment/__init__.py

class ThatSingleInstanceClass: pass

# create the singleton object directly, or have a function init the module
singleton = ThatSingleInstanceClass()

# /myapp/somewhere.py
# all you need to use the object is importing it
from myapp.enviroment import singleton

class SomeClass:
    def __init__(self):  # no need to pass that object
        print("Always the same object:", singleton)
What's wrong with having each module import the needed object? Explicit is better than implicit.