Given how dynamic Python is, I'll be shocked if this isn't somehow possible:
I would like to change the implementation of sys.stdout.write.
I got the idea from this answer to another question of mine: https://stackoverflow.com/a/24492990/901641
I tried to simply write this:
original_stdoutWrite = sys.stdout.write

def new_stdoutWrite(*a, **kw):
    original_stdoutWrite("The new one was called! ")
    original_stdoutWrite(*a, **kw)

sys.stdout.write = new_stdoutWrite
But it tells me AttributeError: 'file' object attribute 'write' is read-only.
This is a nice attempt to keep me from doing something potentially (probably) stupid, but I'd really like to go ahead and do it anyway. I suspect the interpreter has some kind of lookup table it's using that I can modify, but I couldn't find anything like that on Google. __setattr__ didn't work either - it returned the exact same error about the attribute being read-only.
I'm specifically looking for a Python 2.7 solution, if that's important, although there's no reason to resist throwing in answers that work for other versions since I suspect other people in the future will look here with similar questions regarding other versions.
Despite its dynamism, Python does not allow monkey-patching built-in types such as file. It even prevents you from doing so by modifying the __dict__ of such a type — the __dict__ property returns the dict wrapped in a read-only proxy, so both assignment to file.write and to file.__dict__['write'] fail. And for at least two good reasons:
the C code expects the file built-in type to correspond to the PyFile type structure, and file.write to the PyFile_Write() function used internally.
Python implements caching of attribute access on types to speed up method lookup and instance method creation. This cache would be broken if it were allowed to directly assign to type dicts.
Monkey-patching is of course allowed for classes implemented in Python which can handle dynamic modifications just fine.
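For contrast, the same kind of patch goes through without complaint on a class implemented in Python (a throwaway sketch; the class and names are just illustrative):

```python
class Greeter(object):
    def greet(self):
        return "hello"

# keep a reference to the original, then replace it on the class
original_greet = Greeter.greet

def noisy_greet(self):
    # wrap the original method with extra behaviour
    return "patched: " + original_greet(self)

Greeter.greet = noisy_greet  # no error: Greeter is implemented in Python

print(Greeter().greet())  # patched: hello
```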
However... if you really know what you are doing, you can use the low-level APIs such as ctypes to hook into the implementation and get to the type dict. For example:
# WARNING: do NOT attempt this in production code!
import ctypes

def magic_get_dict(o):
    # find address of dict whose offset is stored in the type
    dict_addr = id(o) + type(o).__dictoffset__
    # retrieve the dict object itself
    dict_ptr = ctypes.cast(dict_addr, ctypes.POINTER(ctypes.py_object))
    return dict_ptr.contents.value

def magic_flush_mro_cache():
    ctypes.PyDLL(None).PyType_Modified(ctypes.py_object(object))

# monkey-patch file.write
dct = magic_get_dict(file)
dct['write'] = lambda f, s, orig_write=file.write: orig_write(f, '42')

# flush the method cache for the monkey-patch to take effect
magic_flush_mro_cache()

# magic!
import sys
sys.stdout.write('hello world\n')
Despite Python mostly being a dynamic language, there are native object types like str, file (including stdout), dict, and list that are actually implemented in low-level C and are completely static:
>>> a = []
>>> a.append = 'something else'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'list' object attribute 'append' is read-only
>>> a.hello = 3
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute 'hello'
>>> a.__dict__  # normal python classes would have this
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute '__dict__'
If your object is implemented in native C code, your only hope is to use an actual regular class. For your case, as already mentioned, you could do something like:
class NewOut(type(sys.stdout)):
    def write(self, *args, **kwargs):
        super(NewOut, self).write('The new one was called! ')
        super(NewOut, self).write(*args, **kwargs)

sys.stdout = NewOut()
or, to do something similar to your original code:
original_stdoutWrite = sys.stdout.write

class MyClass(object):
    pass

sys.stdout = MyClass()

def new_stdoutWrite(*a, **kw):
    original_stdoutWrite("The new one was called! ")
    original_stdoutWrite(*a, **kw)

sys.stdout.write = new_stdoutWrite
I have a (bad?) habit of displaying classes in Python like structures in Matlab, where each attribute is printed along with its value in a nice clean layout. This is done by implementing the __repr__ method in the class.
When working with objects inside of dictionaries or lists, this display style can be a bit distracting. In this case I'd like to do a more basic display.
Here's the envisioned pseudocode:
def __repr__(self):
    if direct_call():
        return do_complicated_printing(self)
    else:
        # something simple that isn't a ton of lines/characters
        return type(self)
In this code direct_call() means that this isn't being called as part of another display call. Perhaps this might entail looking for repr in the stack? How would I implement direct call detection?
So I might have something like:
>>> data
<class my_class> with properties:
    a: 1
    cheese: 2
    test: 'no testing'
But in a list I'd want a display like:
>>> data2 = [data, data, data, data]
>>> data2
[<class 'my_class'>, <class 'my_class'>, <class 'my_class'>, <class 'my_class'>]
I know it is possible for me to force this type of display by calling some function that does this, but I want my_class to be able to control this behavior, without extra work from the user in asking for it.
In other words, this is not a solution:
>>> print_like_I_want(data2)
This is a strange thing to want to do, and generally a function or method ought to do the same thing whoever is calling it. But in this case, __repr__ is only meant for the programmer's convenience, so convenience seems like a good enough reason to make it work the way you're asking for.
However, unfortunately what you want isn't actually possible, because for whatever reason, the list.__repr__ method isn't visible on the stack. I tested in Python 3.5.2 and Python 3.8.1:
>>> class ReprRaises:
...     def __repr__(self):
...         raise Exception()
...
>>> r = ReprRaises()
>>> r
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __repr__
Exception
>>> [r]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __repr__
Exception
As you can see, the stack is the same whether or not the object being repr'd is in a list. (The __repr__ frame on the stack belongs to the ReprRaises class, not list.)
I also tested using inspect.stack:
>>> import inspect
>>> class ReprPrints:
...     def __repr__(self):
...         print(*inspect.stack(), sep='\n')
...         return 'foo'
...
>>> r = ReprPrints()
>>> r
FrameInfo(frame=<frame object at 0x7fcbe4a38588>, filename='<stdin>', lineno=3, function='__repr__', code_context=None, index=None)
FrameInfo(frame=<frame object at 0x7fcbe44fb388>, filename='<stdin>', lineno=1, function='<module>', code_context=None, index=None)
foo
>>> [r]
FrameInfo(frame=<frame object at 0x7fcbe4a38588>, filename='<stdin>', lineno=3, function='__repr__', code_context=None, index=None)
FrameInfo(frame=<frame object at 0x7fcbe44fb388>, filename='<stdin>', lineno=1, function='<module>', code_context=None, index=None)
[foo]
Again, there's no visible difference in the call stack between the object itself vs. the object in a list; so there's nothing for your __repr__ to check for.
So, the closest you can get is some kind of print_like_I_want function. This can at least be written in a way that lets each class define its own behaviour:
def pp(obj):
    try:
        _pp = obj._pp
    except AttributeError:
        print(repr(obj))
    else:
        print(_pp())
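A quick sketch of how a class could opt in to that helper (the _pp method name is just the convention from the snippet; the helper is repeated here so the example stands alone):

```python
def pp(obj):
    # the pp() helper: prefer a _pp() method, fall back to repr()
    try:
        _pp = obj._pp
    except AttributeError:
        print(repr(obj))
    else:
        print(_pp())

class Data:
    def __repr__(self):
        return '<my_class>'
    def _pp(self):
        # verbose, structure-like display used only by pp()
        return '<my_class> with properties:\n    a: 1'

pp(Data())    # prints the verbose form via _pp
pp([Data()])  # a list has no _pp, so its plain repr is printed
```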
The only way I can think of to do it with fewer keypresses is by overloading a unary operator, like the usually-useless unary plus:
>>> class OverloadUnaryPlus:
...     def __repr__(self):
...         return 'foo'
...     def __pos__(self):
...         print('bar')
...
>>> obj = OverloadUnaryPlus()
>>> obj
foo
>>> +obj
bar
__repr__ is intended to provide a short, often programmatic display of an object. It's the method of choice for all built-in containers when displaying their elements. You should therefore override __repr__ to provide your short output.
__str__ is the function intended for the full fancy display of an object. It's what normally shows up when you print an object. You can also trigger it by calling str. You should put your long fancy output in __str__, not __repr__.
The only modification you will have to make is to explicitly call print(obj) or str(obj) rather than repr(obj) or just obj in the REPL. As #kaya3's excellent answer shows, stack inspection won't help you much, and in my opinion would not be a clean solution even if it did.
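A minimal sketch of that division of labour (the class and attribute names are just illustrative):

```python
class MyClass:
    def __init__(self):
        self.a = 1
        self.cheese = 2

    def __repr__(self):
        # short form: containers use this for their elements
        return "<class 'my_class'>"

    def __str__(self):
        # long form: print() and str() use this
        return "<class my_class> with properties:\n    a: {}\n    cheese: {}".format(
            self.a, self.cheese)

data = MyClass()
print(data)           # long, fancy display via __str__
print([data, data])   # containers use __repr__ for elements
```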
Let's say this is my class:
class A:
    def __init__(self):
        self.good_attr = None
        self.really_good_attr = None
        self.another_good_attr = None
Then a caller can set the values on those variables:
a = A()
a.good_attr = 'a value'
a.really_good_attr = 'better value'
a.another_good_attr = 'a good value'
But they can also add new attributes:
a.goood_value = 'evil'
This is not desirable for my use case. My object is being used to pass a number of values into a set of methods. (So essentially, this object replaces a long list of shared parameters on a few methods, to avoid duplication and to clearly distinguish what's shared from what's different.) If a caller typos an attribute name, the attribute would just be ignored, resulting in unexpected, confusing, and potentially hard-to-diagnose behavior. It would be better to fail fast, notifying the caller that they used an attribute name that will be ignored. So something similar to the following is the behavior I would like when an attribute name doesn't already exist on the object:
>>> a.goood_value = 'evil'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: A instance has no attribute 'goood_value'
How can I achieve this?
I would also like to note that I'm fully aware that a caller can create a new class and do whatever they want, bypassing this entirely. That would be unsupported behavior, though. The object I do provide simply adds a fail-fast sanity check, saving time against typos for those who do leverage it (myself included), rather than leaving them scratching their heads wondering why things are behaving in unexpected ways.
You can hook into attribute setting with the __setattr__ method. This method is called for all attribute setting, so take into account it'll be called for your 'correct' attributes too:
class A(object):
    good_attr = None
    really_good_attr = None
    another_good_attr = None

    def __setattr__(self, name, value):
        if not hasattr(self, name):
            raise AttributeError(
                '{} instance has no attribute {!r}'.format(
                    type(self).__name__, name))
        super(A, self).__setattr__(name, value)
Because good_attr, etc. are defined on the class the hasattr() call returns True for those attributes, and no exception is raised. You can set those same attributes in __init__ too, but the attributes have to be defined on the class for hasattr() to work.
The alternative would be to create a whitelist you could test against.
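A whitelist variant might look like this (a sketch; the _allowed name is arbitrary, and the class is renamed B only to keep it distinct from the example above):

```python
class B(object):
    # explicit whitelist instead of class-level default attributes
    _allowed = frozenset(['good_attr', 'really_good_attr', 'another_good_attr'])

    def __setattr__(self, name, value):
        if name not in self._allowed:
            raise AttributeError(
                '{} instance has no attribute {!r}'.format(
                    type(self).__name__, name))
        super(B, self).__setattr__(name, value)

b = B()
b.good_attr = 'foo'   # accepted: name is on the whitelist
# b.bad_attr = 'foo'  # would raise AttributeError
```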
Demo:
>>> a = A()
>>> a.good_attr = 'foo'
>>> a.bad_attr = 'foo'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 10, in __setattr__
AttributeError: A instance has no attribute 'bad_attr'
A determined developer can still add attributes to your instance by adding keys to the a.__dict__ instance dictionary, of course.
Another option is to use a side effect of __slots__. Slots exist to save memory: values are stored directly in the C structure Python creates for each instance, which takes a little less space than a dictionary (no keys or dynamic table are needed). The side effect is that there is no place for extra attributes on such a class instance:
class A(object):
    __slots__ = ('good_attr', 'really_good_attr', 'another_good_attr')

    def __init__(self):
        self.good_attr = None
        self.really_good_attr = None
        self.another_good_attr = None
The error message then looks like:
>>> a = A()
>>> a.good_attr = 'foo'
>>> a.bad_attr = 'foo'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'bad_attr'
but do read the caveats listed in the documentation for using __slots__.
Because there is no __dict__ instance attribute when using __slots__, this option really closes the door on setting arbitrary attributes on the instances.
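The missing __dict__ is easy to observe directly (a small sketch):

```python
class A(object):
    __slots__ = ('good_attr',)

a = A()
a.good_attr = 'foo'            # slotted attribute: fine
print(hasattr(a, '__dict__'))  # False: no per-instance dict to sneak attributes into
```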
A more idiomatic option is to use a named tuple.
Python 3.6 and higher
In Python 3.6 and higher, you can use typing.NamedTuple to achieve this very easily:
from typing import NamedTuple, Any
class A(NamedTuple):
    good_attr: Any = None
    really_good_attr: Any = None
    another_good_attr: Any = None
More specific type constraints can be used if desired, but the annotations must be included for NamedTuple to pick up on the attributes.
This blocks not only the addition of new attributes, but also the setting of existing attributes:
>>> a = A()
>>> a.goood_value = 'evil'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'goood_value'
>>> a.good_attr = 'a value'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
This forces you to specify all the values at construction time instead:
a = A(
    good_attr='a value',
    really_good_attr='better value',
    another_good_attr='a good value',
)
Doing so is typically not a problem, and when it is, it can be worked around with the judicious use of local variables.
Python 3.5 and lower (including 2.x)
These versions of Python either do not have the typing module or typing.NamedTuple does not work as used above. In these versions, you can use collections.namedtuple to achieve mostly the same effect.
Defining the class is simple:
from collections import namedtuple
A = namedtuple('A', ['good_attr', 'really_good_attr', 'another_good_attr'])
And then construction works as above:
a = A(
    good_attr='a value',
    really_good_attr='better value',
    another_good_attr='a good value',
)
However, this does not allow for the omission of some values from calling the constructor. You can either include None values explicitly when constructing the object:
a = A(
    good_attr='a value',
    really_good_attr=None,
    another_good_attr='a good value',
)
Or you can use one of several techniques to give the argument a default value:
A.__new__.__defaults__ = (None,) * 3  # spelled func_defaults before Python 2.6

a = A(
    good_attr='a value',
    another_good_attr='a good value',
)
Make the attribute private by prefixing it with two underscores, e.g. self.__good_attr; that way it can't be set from outside the class. Then write a method that sets the __good_attr variable, and have that method throw an exception when given an invalid value.
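A sketch of that suggestion (the string-only check is just an example of a validation rule):

```python
class A(object):
    def __init__(self):
        self.__good_attr = None  # name-mangled to _A__good_attr

    def set_good_attr(self, value):
        # validate before storing; reject anything that is not a string
        if not isinstance(value, str):
            raise TypeError('good_attr must be a string')
        self.__good_attr = value

    def get_good_attr(self):
        return self.__good_attr

a = A()
a.set_good_attr('a value')
print(a.get_good_attr())  # a value
```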
An example from the book Core Python Programming on the topic Delegation doesn't seem to be working.. Or may be I didn't understand the topic clearly..
Below is the code, in which the class CapOpen wraps a file object and defines a modified behaviour of file when opened in write mode. It should write all strings in UPPERCASE only.
However when I try to open the file for reading, and iterate over it to print each line, I get the following exception:
Traceback (most recent call last):
  File "D:/_Python Practice/Core Python Programming/chapter_13_Classes/WrappingFileObject.py", line 29, in <module>
    for each_line in f:
TypeError: 'CapOpen' object is not iterable
This is strange, because although I haven't explicitly defined iterator methods, I'd expect the calls to be delegated via __getattr__ to the underlying file object. Here's the code. Have I missed anything?
class CapOpen(object):
    def __init__(self, filename, mode='r', buf=-1):
        self.file = open(filename, mode, buf)

    def __str__(self):
        return str(self.file)

    def __repr__(self):
        return `self.file`

    def write(self, line):
        self.file.write(line.upper())

    def __getattr__(self, attr):
        return getattr(self.file, attr)

f = CapOpen('wrappingfile.txt', 'w')
f.write('delegation example\n')
f.write('faye is good\n')
f.write('at delegating\n')
f.close()

f = CapOpen('wrappingfile.txt', 'r')
for each_line in f:  # I am getting Exception Here..
    print each_line,
I am using Python 2.7.
This is a non-intuitive consequence of a Python implementation decision for new-style classes:
In addition to bypassing any instance attributes in the interest of correctness, implicit special method lookup generally also bypasses the __getattribute__() method even of the object’s metaclass...
Bypassing the __getattribute__() machinery in this fashion provides significant scope for speed optimisations within the interpreter, at the cost of some flexibility in the handling of special methods (the special method must be set on the class object itself in order to be consistently invoked by the interpreter).
This is also explicitly pointed out in the documentation for __getattr__/__getattribute__:
Note
This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup for new-style classes.
In other words, you can't rely on __getattr__ to always intercept your method lookups when your attributes are undefined. This is not intuitive, because it is reasonable to expect these implicit lookups to follow the same path as all other clients that access your object. If you call f.__iter__ directly from other code, it will resolve as expected. However, that isn't the case when called directly from the language.
The book you quote is pretty old, so the original example probably used old-style classes. If you remove the inheritance from object, your code will work as intended. That said, you should avoid writing old-style classes, since they no longer exist in Python 3. If you want to, you can still maintain the delegation style here by implementing __iter__ yourself and immediately delegating to the underlying self.file.__iter__.
Alternatively, inherit from the file object directly and __iter__ will be available by normal lookup, so that will also work.
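A sketch of the delegation fix, written in Python 3 syntax and using a temporary file so the example is self-contained (the original wrappingfile.txt name is replaced with a generated path):

```python
import os
import tempfile

class CapOpen(object):
    def __init__(self, filename, mode='r', buf=-1):
        self.file = open(filename, mode, buf)

    def write(self, line):
        self.file.write(line.upper())

    def __iter__(self):
        # special methods are looked up on the class, never via
        # __getattr__, so the delegation must be explicit
        return iter(self.file)

    def __getattr__(self, attr):
        return getattr(self.file, attr)

path = os.path.join(tempfile.mkdtemp(), 'wrappingfile.txt')

f = CapOpen(path, 'w')
f.write('delegation example\n')
f.close()                    # close() still reaches the file via __getattr__

f = CapOpen(path, 'r')
for each_line in f:          # iteration now works
    print(each_line, end='')
f.close()
```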
For an object to be iterable, its class has to have __iter__ or __getitem__ defined.
__getattr__ is only called when something is being retrieved from the instance, but because there are several ways that iteration is supported, Python is looking first to see if the appropriate methods even exist.
Try this:
class Fake(object):
    def __getattr__(self, name):
        print "Nope, no %s here!" % name
        raise AttributeError

f = Fake()
for not_here in f:
    print not_here
As you can see, the same error is raised: TypeError: 'Fake' object is not iterable.
If you then do this:
print '__getattr__' in Fake.__dict__
print '__iter__' in Fake.__dict__
print '__getitem__' in Fake.__dict__
You can see what Python is seeing: that neither __iter__ nor __getitem__ exist, so Python does not know how to iterate over it. While Python could just try and then catch the exception, I suspect the reason why it does not is that catching exceptions is quite a bit slower.
See my answer here for the many ways to make an iterator.
I am not able to understand why I am getting a TypeError for the following statement:
log.debug('vec : %s blasted : %s\n' %(str(vec), str(bitBlasted)))
type(vec) is unicode
bitBlasted is a list
I am getting the following error
TypeError: 'str' object is not callable
Shadowing the built-in
First, as Collin said, you could be shadowing the built-in str:
>>> str = some_variable_or_string #this is wrong
>>> str(123.0) #Or this will happen
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable
One solution would be to change the variable name to str_ or something. A better solution would be to avoid this kind of Hungarian naming system -- this isn't Java, use Python's polymorphism to its fullest and use a more descriptive name instead.
Not defining a proper method
Another possibility is that the object may not have a proper __str__ method or even one at all.
The way Python resolves the string representation is:
the __str__ method of the class
the __str__ method of its parent class
the __repr__ method of the class
the __repr__ method of its parent class
and the final fallback: a string in form of <module>.<classname> instance at <address> where <module> is self.__class__.__module__, <classname> is self.__class__.__name__ and <address> is id(self)
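The first fallback in that chain is easy to observe: a class defining only __repr__ still gets a useful str() (a minimal sketch):

```python
class OnlyRepr(object):
    def __repr__(self):
        return 'OnlyRepr()'

# no __str__ defined, so str() falls back to __repr__
print(str(OnlyRepr()))  # OnlyRepr()
```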
Even better than __str__ would be to use the new __unicode__ method (in Python 3.x, the pair is __bytes__ and __str__). You could then implement __str__ as a stub method:
class foo:
    ...
    def __str__(self):
        return unicode(self).encode('utf-8')
See this question for more details.
As mouad said, you've used the name str somewhere higher in the file. That shadows the existing built-in str, and causes the error. For example:
>>> mynum = 123
>>> print str(mynum)
123
>>> str = 'abc'
>>> print str(mynum)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable
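If this happens at an interactive prompt, you don't have to restart: deleting the shadowing name makes lookups fall through to the built-in again.

```python
str = 'abc'      # accidentally shadows the built-in
del str          # remove the module-level binding...
print(str(123))  # ...and the built-in is visible again: 123
```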
I have a problem with casting in Python.
I have a method in file module_A.py:
import Common.Models.Pax as Pax
def verify_passangers_data(self, paxes):
    for i in range(len(paxes)):
        pax = paxes[i]
Here is my Pax.py
class Pax:
    """"""
    #----------------------------------------------------------------------
    def __init__(self):
        """Constructor"""

#----------------------------------------------------------------------
class Adult(Pax):
    def __init__(self, last_day_of_travel, first_name, last_name, nationality, address=None):
        self.birth_day = datetime.today() - timedelta(days=random.randrange(6563, 20793 - (date.today() - last_day_of_travel).days))
        self.first_name = first_name
        self.last_name = last_name
        self.nationality = nationality
        self.address = address
This is how I create collection in another module(module_C.py):
paxes=[]
paxes.append(Pax.Adult(last_day_of_travel,'FirstName','LastName',Nationality.Poland,DataRepository.addresses['Default']))
but, look at my output from debug probe (in wing ide)
>>> type(pax)
<class 'Common.Models.Pax.Adult'>
>>> pax is Common.Models.Pax.Adult
Traceback (most recent call last):
  File "<string>", line 1, in <fragment>
builtins.NameError: name 'Common' is not defined
How can I check if pax is an instance of Adult?
How can I check if pax is an instance of Adult?
Use the isinstance function:
isinstance(pax, Common.Models.Pax.Adult)
Make sure you have imported the class, though (e.g., import Common.Models.Pax).
(Although purists would argue that there's rarely a need to check the type of a Python object. Python is dynamically typed, so you should generally check to see if an object responds to a particular method call, rather than checking its type. But you may have a good reason for needing to check the type, too.)
You can use isinstance:
isinstance(pax, Common.Models.Pax.Adult)
Or the builtin type function:
type(pax) == Common.Models.Pax.Adult
Of course, you will have to import the module so that Common.Models.Pax.Adult is defined. That's why you're getting that error at the end.
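The difference between the two checks matters for subclasses: isinstance accepts instances of subclasses, while a type comparison does not. Illustrated with stand-in classes, since Common.Models.Pax isn't importable here:

```python
class Pax(object):
    pass

class Adult(Pax):
    pass

pax = Adult()
print(isinstance(pax, Adult))  # True
print(isinstance(pax, Pax))    # True: isinstance accepts base classes too
print(type(pax) == Pax)        # False: the type comparison is exact
```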
You need to have imported the type in order to reference it:
>>> x is socket._fileobject
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'socket' is not defined
>>> import socket
>>> x is socket._fileobject
False
Presumably, you obtained the instance pointed to by pax from some other call, so you haven't actually imported the class into your namespace.
Also, is tests object identity (are these the same object?), not type. You want isinstance(pax, Common.Models.Pax.Adult).
You have two errors: the first is using is instead of the isinstance function; the second is trying to refer to the module by its absolute name when you've imported it under an alias.
Thus what you should do is:
isinstance(pax, Pax.Adult)