I am a bit unsure how to use self outside of a class. A lot of built-in methods in Python take self as a parameter, and there is no need for you to declare the class. For example, you can use the string.upper() method to capitalize each letter without needing to tell Python which class to use. In case I'm not explaining myself well, I have included what my code looks like below.
def ispalendrome(self): return self == self[::-1]

largestProd = 999**2
largest5Palendromes = []
while len(largest5Palendromes) <= 5:
    if str(largestProd).ispalendrome(): largest5Palendromes.append(largestProd)
    largestProd -= 1
print largest5Palendromes
Note: I understand there are other ways of accomplishing this task, but I would like to know if this is possible. TYVM.
Using https://github.com/clarete/forbiddenfruit:
from forbiddenfruit import curse
def ispalendrome(self):  # note that self is really just a variable name ... it doesn't have to be named self
return self == self[::-1]
curse(str, "ispalendrome", ispalendrome)
"hello".ispalendrome()
Note that just because you can does not mean it's a good idea.
Alternatively, it is much better to just do:
def ispalendrome(a_string):
return a_string == a_string[::-1]
ispalendrome("hello")
It feels like you want to monkey patch a method onto this. If so, then welcome to the dark side, young one. Let us begin the cursed ritual. In essence you want to monkey patch. All we need is a little monkey blood. Just kidding. We need types.MethodType. But note that you cannot monkey patch stdlib types:
>>> from types import MethodType
>>> def palindrome(self): return self == self[::-1]
...
>>> str.palindrome = MethodType(palindrome, str)
Traceback (most recent call last):
File "<input>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'str'
But that won't stop you from causing havoc in other classes:
>>> class String(object):
...     def __init__(self, done):
...         self.done = done
...
>>> s = String("stanley yelnats")
>>> def palindrome(self): return self.done[::-1]
...
>>> s.palindrome = MethodType(palindrome, s)
>>> s.palindrome()
'stanley yelnats'
You see how easy that was? But we're just getting started. This is just a mere instance; let's kill a class now, shall we? The next part will get you laughing maniacally:
>>> from types import DynamicClassAttribute
>>> class String(object):
...     def __init__(self, done):
...         self.done = done
...
>>> s = String("cheese")
>>> def palindrome(self): return self.done[::-1]
...
>>> String.palindrome = DynamicClassAttribute(palindrome)
>>> s.palindrome
'eseehc'
After this, if you do not feel evil, then you must come over to my evil lair, where I shall show you more evil tricks and share cookies.
Self has no special meaning - it is just a variable name. Its use in classes is merely conventional (so it may be confusing to use it elsewhere).
You can, however, set the properties of a class after the fact, and these could be class methods or instance methods (the latter taking "self" by convention). This will NOT work with built-in classes like str, though [edit: so you'd have to "curse" or subclass, see the other answers].
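For example, a minimal sketch of the subclassing route (Palindromic is a made-up name):

class Palindromic(str):
    def is_palindrome(self):
        return self == self[::-1]

print(Palindromic("racecar").is_palindrome())  # True
print(Palindromic("hello").is_palindrome())    # False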
In
def ispalendrome(self)
there's no need to name the parameter self (indeed, it's a bit misleading), as this isn't an instance method. I would call it s (for string):
def is_palindrome(s):
    return s == s[::-1]
What you may be referring to is the use of bound methods on the class, where:
an_instance = TheClass()
an_instance.instance_method() # self is passed implicitly
is equivalent to:
an_instance = TheClass()
TheClass.instance_method(an_instance) # self is passed explicitly
In this particular case, for example:
>>> "foo".upper()
'FOO'
>>> str.upper("foo")
'FOO'
In Python the first argument to an instance method is the object instance itself; by convention it is called self. You should avoid using self for other purposes.
To explain it in more detail:
If you have a class
class A(object):
    def __init__(self):
        self.b = 1
and you make an instance of it:
a = A()
this calls the __init__ method, and the parameter self is filled with a fresh object. Then self.b = 1 runs and adds the attribute b to that new object. This object is then bound to the name a.
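To make the mechanism visible, here is a rough sketch of what a = A() does under the hood (simplified; it ignores metaclass details):

a = A.__new__(A)   # a fresh, empty instance is created first
A.__init__(a)      # __init__ receives it as self and sets a.b = 1
print(a.b)         # 1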
"self" is the name of the first parameter to a function - like any parameter, it has no meaning outside of that function. What it corresponds to is the object on which that function is called.
Related
I have a class A which can be 'initialized' in two different ways. So, I provide a 'factory-like' interface for it based on the second answer in this post.
class A(object):
    @staticmethod
    def from_method_1(<method_1_parameters>):
        a = A()
        # set parameters of 'a' using <method_1_parameters>
        return a

    @staticmethod
    def from_method_2(<method_2_parameters>):
        a = A()
        # set parameters of 'a' using <method_2_parameters>
        return a
The two methods are different enough that I can't just plug their parameters into the class's __init__. So, class A should be initialized using:
a = A.from_method_1(<method_1_parameters>)
or
a = A.from_method_2(<method_2_parameters>)
However, it is still possible to call the 'default init' for A:
a = A() # just an empty 'A' object
Is there any way to prevent this? I can't just raise NotImplementedError from __init__ because the two 'factory methods' use it too.
Or do I need to use a completely different approach altogether?
It has been a very long time since this question was asked, but I think it's interesting enough to be revived.
When I first saw your problem, the private constructor concept just popped into my mind. It's an important concept in other OOP languages, but since Python doesn't enforce privacy I hadn't really thought about it since Python became my main language.
Therefore, I became curious and I found this "Private Constructor in Python" question. It covers pretty much all about this topic and I think the second answer can be helpful in here.
Basically it uses name mangling to declare a pseudo-private class attribute (there is no such thing as a private variable in Python) and assigns a plain object() to it as a sentinel. Therefore you'll have an as-private-as-Python-allows variable you can use to check whether the initialization was made from a class method or from an outside call. I made the following example based on this mechanism:
class A(object):
    __obj = object()

    def __init__(self, obj=None):
        assert(obj == A.__obj), \
            'A object must be created using A.from_method_1 or A.from_method_2'

    @classmethod
    def from_method_1(cls):
        a = A(cls.__obj)
        print('Created from method 1!')
        return a

    @classmethod
    def from_method_2(cls):
        a = A(cls.__obj)
        print('Created from method 2!')
        return a
Tests:
>>> A()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "t.py", line 6, in __init__
'A object must be created using A.from_method_1 or A.from_method_2'
AssertionError: A object must be created using A.from_method_1 or A.from_method_2
>>> A.from_method_1()
Created from method 1!
<t.A object at 0x7f3f7f2ca450>
>>> A.from_method_2()
Created from method 2!
<t.A object at 0x7f3f7f2ca350>
However, as this solution is a workaround with name mangling, it does have one flaw if you know how to look for it:
>>> A(A._A__obj)
<t.A object at 0x7f3f7f2ca450>
I am trying to learn about classes; can someone explain to me why this code is not working? I thought that when calling a function from a class, "self" is automatically omitted, but the interpreter tells me that argument "a" is missing (it thinks self = 10).
#! coding=utf-8
class test:
    def __init__(self):
        "do something here"

    def do(self, a):
        return a**2

d = test.do
print(d(10))
Instantiate the class first:
d = test().do
print(d(10)) # prints 100
test.do is an unbound method, test().do is bound. The difference is explained in this thread: Class method differences in Python: bound, unbound and static.
You have to instantiate the class first:
d = test()
then you can call a method:
print(d.do(10))
If you want to use the method statically, you have to declare it as a static method in Python:
#! coding=utf-8
class test:
    def __init__(self):
        "do something here"

    @staticmethod
    def do(a):
        return a**2

d = test.do
print(d(10))  # and that works
Since you haven't instantiated the class (a fancy term for created an object from it), you can't assign its methods to any random variable. As already said, you must create the object first, while making sure the method you call is actually part of the class you instantiated (or connected to it in some way, such as through another class that communicates with it). So you should first type d = test() and then call d.do(10).
Also, remember that in your declaration of the method you created a parameter, so when you call do you should put within the brackets the number whose square you want, e.g. d.do(10); the instance is passed automatically as self and the 10 is bound to a.
One more thing: although it isn't a huge deal, it helps if all of your class names begin with a capital letter, as this is the usual 'Pythonic' way to do things, and it also makes your code much easier to read, because when you first call the class nobody will mistake it for an ordinary function.
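For example, a minimal sketch of the suggestion above (the class is renamed Test with a capital letter purely to illustrate the convention):

class Test:
    def __init__(self):
        "do something here"

    def do(self, a):
        return a ** 2

d = Test()        # instantiate the class first
print(d.do(10))   # 100 -- the instance is passed as self automatically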
class test:
    def __init__(self):
        "do something here"

    def do(self, a):
        return a**2

    def __call__(self, a):
        return self.do(a)

a = test
test.do(a, 10)  # works here only because do() never touches self
# or
a = test().do
a(10)
# or
a = test()
test.do(a, 10)
# or
a = test()
print(a(10))
I'm writing a decorator, and for various annoying reasons[0] it would be expedient to check if the function it is wrapping is being defined stand-alone or as part of a class (and further which classes that new class is subclassing).
For example:
def my_decorator(f):
    defined_in_class = ??
    print "%r: %s" %(f, defined_in_class)

@my_decorator
def foo(): pass

class Bar(object):
    @my_decorator
    def bar(self): pass
Should print:
<function foo …>: False
<function bar …>: True
Also, please note:
At the point decorators are applied, the function will still be a function, not an unbound method, so testing for an instance/unbound method (using type() or inspect) will not work.
Please only offer suggestions that solve this problem - I'm aware that there are many similar ways to accomplish this end (e.g., using a class decorator), but I would like them to happen at decoration time, not later.
[0]: specifically, I'm writing a decorator that will make it easy to do parameterized testing with nose. However, nose will not run test generators on subclasses of unittest.TestCase, so I would like my decorator to be able to determine if it's being used inside a subclass of TestCase and fail with an appropriate error. The obvious solution - using isinstance(self, TestCase) before calling the wrapped function - doesn't work, because the wrapped function needs to be a generator, which doesn't get executed at all.
Take a look at the output of inspect.stack() when you wrap a method. When your decorator's execution is underway, the current stack frame is the function call to your decorator; the next stack frame down is the @ wrapping action that is being applied to the new method; and the third frame will be the class definition itself, which merits a separate stack frame because the class definition is its own namespace (that is wrapped up to create a class when it is done executing).
I suggest, therefore:
import inspect

frames = inspect.stack()
defined_in_class = (len(frames) > 2 and
                    frames[2][4][0].strip().startswith('class '))
If all of those crazy indexes look unmaintainable, then you can be more explicit by taking the frame apart piece by piece, like this:
import inspect

frames = inspect.stack()
defined_in_class = False
if len(frames) > 2:
    maybe_class_frame = frames[2]
    statement_list = maybe_class_frame[4]
    first_statement = statement_list[0]
    if first_statement.strip().startswith('class '):
        defined_in_class = True
Note that I do not see any way to ask Python about the class name or inheritance hierarchy at the moment your wrapper runs; that point is "too early" in the processing steps, since the class creation is not yet finished. Either parse the line that begins with class yourself and then look in that frame's globals to find the superclass, or else poke around the frames[1] code object to see what you can learn — it appears that the class name winds up being frames[1][0].f_code.co_name in the above code, but I cannot find any way to learn what superclasses will be attached when the class creation finishes up.
A little late to the party here, but this has proven to be a reliable means of determining if a decorator is being used on a function defined in a class:
import inspect

frames = inspect.stack()
className = None
for frame in frames[1:]:
    if frame[3] == "<module>":
        # At module level, go no further
        break
    elif '__module__' in frame[0].f_code.co_names:
        className = frame[0].f_code.co_name
        break
The advantage of this method over the accepted answer is that it works with e.g. py2exe.
Some hacky solution that I've got:
import inspect

def my_decorator(f):
    args = inspect.getargspec(f).args
    defined_in_class = bool(args and args[0] == 'self')
    print "%r: %s" %(f, defined_in_class)
But it relies on the presence of a self argument in the function.
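If it helps, here is a sketch of the same idea for newer Python versions, where inspect.getargspec has been removed, using inspect.signature instead (same assumption: the first parameter is literally named self):

import inspect

def my_decorator(f):
    params = list(inspect.signature(f).parameters)
    defined_in_class = bool(params and params[0] == 'self')
    print("%r: %s" % (f, defined_in_class))
    return f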
You can use the package wrapt to check for:
- instance/class methods
- classes
- freestanding functions/static methods
See the project page of wrapt: https://pypi.org/project/wrapt/
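A minimal sketch, roughly following the dispatch pattern described in the wrapt documentation (note that the distinction happens when the wrapped object is called, not at decoration time):

import inspect
import wrapt

@wrapt.decorator
def universal(wrapped, instance, args, kwargs):
    if instance is None:
        if inspect.isclass(wrapped):
            kind = "class"
        else:
            kind = "function or staticmethod"
    elif inspect.isclass(instance):
        kind = "classmethod"
    else:
        kind = "instance method"
    print("%r called as a %s" % (wrapped, kind))
    return wrapped(*args, **kwargs)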
You could check if the decorator itself is being called at the module level or nested within something else.
defined_in_class = inspect.currentframe().f_back.f_code.co_name != "<module>"
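Plugged into the question's decorator, a rough sketch looks like this (caveat: it also reports True for functions defined inside other functions, since it only checks whether the enclosing frame is the module level):

import inspect

def my_decorator(f):
    defined_in_class = inspect.currentframe().f_back.f_code.co_name != "<module>"
    print("%r: %s" % (f, defined_in_class))
    return f

@my_decorator
def foo(): pass          # <function foo ...>: False

class Bar(object):
    @my_decorator
    def bar(self): pass  # <function bar ...>: True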
I think the functions in the inspect module will do what you want, particularly isfunction and ismethod:
>>> import inspect
>>> def foo(): pass
...
>>> inspect.isfunction(foo)
True
>>> inspect.ismethod(foo)
False
>>> class C(object):
... def foo(self):
... pass
...
>>> inspect.isfunction(C.foo)
False
>>> inspect.ismethod(C.foo)
True
>>> inspect.isfunction(C().foo)
False
>>> inspect.ismethod(C().foo)
True
You can then follow the Types and Members table to access the function inside the bound or unbound method:
>>> C.foo.im_func
<function foo at 0x1062dfaa0>
>>> inspect.isfunction(C.foo.im_func)
True
>>> inspect.ismethod(C.foo.im_func)
False
For specific debugging purposes I'd like to wrap the __del__ method of an arbitrary object to perform extra tasks, like writing the last value of the object to a file.
Ideally I want to write
monkey(x)
and it should mean that the final value of x is printed when x is deleted
Now I figured that __del__ is a class method. So the following is a start:
class Test:
    def __str__(self):
        return "Test"

def p(self):
    print(str(self))

def monkey(x):
    x.__class__.__del__ = p

a = Test()
monkey(a)
del a
However, if I want to monkey patch specific objects only, I suppose I need to dynamically rewrite their class to a new one?! Moreover, I need to do this anyway, since I cannot access __del__ of built-in types?
Does anyone know how to implement that?
While special 'double underscore' methods like __del__, __str__, __repr__, etc. can be monkey-patched at the instance level, they'll just be ignored unless they are called directly (e.g., if you take Omnifarious's answer: del a won't print a thing, but a.__del__() would).
If you still want to monkey patch a single instance a of class A at runtime, the solution is to dynamically create a class A1 which is derived from A, and then change a's class to the newly-created A1. Yes, this is possible, and a will behave as if nothing has changed - except that now it includes your monkey patched method.
Here's a solution based on a generic function I wrote for another question:
Python method resolution mystery
def override(p, methods):
    oldType = type(p)
    newType = type(oldType.__name__ + "_Override", (oldType,), methods)
    p.__class__ = newType

class Test(object):
    def __str__(self):
        return "Test"

def p(self):
    print(str(self))

def monkey(x):
    override(x, {"__del__": p})

a = Test()
b = Test()
monkey(a)
print("Deleting a:")
del a
print("Deleting b:")
del b
del a deletes the name 'a' from the namespace, but not the object referenced by that name. See this:
>>> x = 7
>>> y = x
>>> del x
>>> print y
7
Also, some_object.__del__ is not guaranteed to be called at all.
Also, I already answered your question here (in German).
You can also inherit from some base class and override the __del__ method (then the only thing you would need to do is override the class when constructing the object).
Or you can use the super() built-in.
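A minimal sketch of that suggestion, assuming you can construct the object from a tracing subclass in the first place (TracedTest is a made-up name):

class Test(object):
    def __str__(self):
        return "Test"

class TracedTest(Test):
    def __del__(self):
        # log the final value before the object goes away
        print("Deleting: " + str(self))
        # chain to the parent's __del__ only if it defines one
        parent_del = getattr(super(TracedTest, self), "__del__", None)
        if parent_del is not None:
            parent_del()

a = TracedTest()
del a  # prints "Deleting: Test" when the last reference disappears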
Edit: This won't actually work, and I'm leaving it here largely as a warning to others.
You can monkey patch an individual object. self will not get passed to functions that you monkey patch in this way, but that's easily remedied with functools.partial.
Example:
import functools

def monkey_class(x):
    x.__class__.__del__ = p

def monkey_object(x):
    x.__del__ = functools.partial(p, x)
I'd like to do something like this:
class SillyWalk(object):
    @staticmethod
    def is_silly_enough(walk):
        return (False, "It's never silly enough")

    def walk(self, appraisal_method=is_silly_enough):
        self.do_stuff()
        (was_good_enough, reason) = appraisal_method(self)
        if not was_good_enough:
            self.execute_self_modifying_code(reason)
        return appraisal_method

    def do_stuff(self):
        pass

    def execute_self_modifying_code(self, problem):
        from __future__ import deepjuju
        deepjuju.kiss_booboo_better(self, problem)
with the idea being that someone can do
>>> silly_walk = SillyWalk()
>>> appraise = silly_walk.walk()
>>> is_good_walk = appraise(silly_walk)
and also get some magical machine learning happening; this last bit is not of particular interest to me, it was just the first thing that occurred to me as a way to exemplify the use of the static method both in an in-function context and from the caller's perspective.
Anyway, this doesn't work, because is_silly_enough is not actually a function: it is an object whose __get__ method will return the original is_silly_enough function. This means that it only works in the "normal" way when it's referenced as an object attribute. The object in question is created by the staticmethod() function that the decorator puts in between SillyWalk's is_silly_enough attribute and the function that's originally defined with that name.
This means that in order to use the default value of appraisal_method from within either SillyWalk.walk or its caller, we have to either
call appraisal_method.__get__(instance, owner)(...) instead of just calling appraisal_method(...)
or assign it as the attribute of some object, then reference that object property as a method that we call as we would call appraisal_method.
Given that neither of these solutions seem particularly Pythonic™, I'm wondering if there is perhaps a better way to get this sort of functionality. I essentially want a way to specify that a method should, by default, use a particular class or static method defined within the scope of the same class to carry out some portion of its daily routine.
I'd prefer not to use None, because I'd like to allow None to convey the message that that particular function should not be called. I guess I could use some other value, like False or NotImplemented, but it seems a) hackety b) annoying to have to write an extra couple of lines of code, as well as otherwise-redundant documentation, for something that seems like it could be expressed quite succinctly as a default parameter.
What's the best way to do this?
Maybe all you need is to use the function (and not the method) in the first place?
class SillyWalk(object):
    def is_silly_enough(walk):
        return (False, "It's never silly enough")

    def walk(self, appraisal_function=is_silly_enough):
        self.do_stuff()
        (was_good_enough, reason) = appraisal_function(self)
        if not was_good_enough:
            self.execute_self_modifying_code(reason)
        return appraisal_function

    def do_stuff(self):
        pass

    def execute_self_modifying_code(self, problem):
        deepjuju.kiss_booboo_better(self, problem)
Note that the default for appraisal_function will now be a function and not a method, even though is_silly_enough will become a method of the class once the class is created (at the end of the class body).
This means that
>>> SillyWalk.is_silly_enough
<unbound method SillyWalk.is_silly_enough>
but
>>> SillyWalk.walk.im_func.func_defaults[0] # the default argument to .walk
<function is_silly_enough at 0x0000000002212048>
And you can call is_silly_enough with a walk argument, or call .is_silly_enough() on a SillyWalk instance.
If you really wanted is_silly_enough to be a static method, you could always add
is_silly_enough = staticmethod(is_silly_enough)
anywhere after the definition of walk.
I ended up writing an (un)wrapper function, to be used within function definition headers, eg
def walk(self, appraisal_method=unstaticmethod(is_silly_enough)):
This actually seems to work; at least it makes the doctests that break without it pass.
Here it is:
def unstaticmethod(static):
    """Retrieve the original function from a `staticmethod` object.

    This is intended for use in binding class method default values
    to static methods of the same class.

    For example:

    >>> class C(object):
    ...     @staticmethod
    ...     def s(*args, **kwargs):
    ...         return (args, kwargs)
    ...     def m(self, args=[], kwargs={}, f=unstaticmethod(s)):
    ...         return f(*args, **kwargs)
    >>> o = C()
    >>> o.s(1, 2, 3)
    ((1, 2, 3), {})
    >>> o.m((1, 2, 3))
    ((1, 2, 3), {})
    """
    # TODO: Technically we should be passing the actual class of the owner
    # instead of `object`, but I don't know if there's a way to get that
    # info dynamically, since the class is not actually declared
    # when this function is called during class method definition.
    # I need to figure out if passing `object` instead is going to be an issue.
    return static.__get__(None, object)
update:
I wrote doctests for the unstaticmethod function itself; they pass too. I'm still not totally sure that this is actually a smart thing to do, but it does seem to work.
Not sure if I get exactly what you're after, but would it be cleaner to use getattr?
>>> class SillyWalk(object):
...     @staticmethod
...     def ise(walk):
...         return (False, "boo")
...     def walk(self, am="ise"):
...         wge, r = getattr(self, am)(self)
...         print wge, r
...
>>> sw = SillyWalk()
>>> sw.walk("ise")
False boo