I have an object:
c = Character(...)
I convert it to a string by using:
p = "{0}".format(c)
print(p)
>>> <Character.Character object at 0x000002267ED6DA50>
How do I get the object back so I can run this code?
p.get_name()
You absolutely can if you are using CPython (where the id is the memory address). Different implementations may not work the same way.
>>> import ctypes
>>> class A:
...     pass
...
>>> a = A()
>>> id(a)
140669136944864
>>> b = ctypes.cast(id(a), ctypes.py_object).value
>>> b
<__main__.A object at 0x7ff015f03ee0>
>>> a is b
True
So we've de-referenced the id of a back into a py_object and snagged its value.
If your main goal is to serialize and deserialize objects (i.e. turn objects into strings and back while preserving their data), you can use pickle. pickle.dumps converts any picklable object into a byte string and pickle.loads converts it back into an object (see the pickle docs).
>>> import pickle
>>> class Student:
...     def __init__(self, name, age):
...         self.name = name
...         self.age = age
...
>>> a = Student("name", 20)
>>> pickle.dumps(a)
b'\x80\x04\x951\x00\x00\x00\x00\x00\x00\x00\x8c\x08__main__\x94\x8c\x07Student\x94\x93\x94)\x81\x94}\x94(\x8c\x04name\x94h\x05\x8c\x03age\x94K\x14ub.'
>>> s = pickle.dumps(a)
>>> b = pickle.loads(s)
>>> b
<__main__.Student object at 0x7f04a856c910>
>>> b.name == a.name and b.age == a.age
True
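One wrinkle: pickle.dumps returns bytes, not a str. If you specifically need a text string (to print or embed somewhere), one common approach (just a sketch, reusing the Student class from above) is to base64-encode the pickled bytes:

import base64
import pickle

class Student:
    def __init__(self, name, age):
        self.name = name
        self.age = age

a = Student("name", 20)

# pickle.dumps returns bytes; base64-encode them to get a plain text string
text = base64.b64encode(pickle.dumps(a)).decode("ascii")

# later: reverse both steps to get an equivalent object back
b = pickle.loads(base64.b64decode(text))
assert b.name == a.name and b.age == a.age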
It is not possible in the general case to use the ID embedded in the default __str__ implementation to retrieve an object.
Borrowing from g.d.d.c's answer, let's define a function to do this:
import ctypes
def get_object_by_id(obj_id):
    return ctypes.cast(obj_id, ctypes.py_object).value

def get_object_by_repr(obj_repr):
    return get_object_by_id(int(obj_repr[-19:-1], 16))
It works for any object that is still in scope, provided that it's using the default __repr__/__str__ implementation that includes the hex-encoded id at the end of the string:
>>> class A:
...     pass
...
>>> a = A()
>>> r = str(a)
>>> r
'<__main__.A object at 0x000001C584E8BC10>'
>>> get_object_by_repr(r)
<__main__.A object at 0x000001C584E8BC10>
But what if our original A has gone out of scope?
>>> def get_a_repr():
...     a = A()
...     return str(a)
...
>>> r = get_a_repr()
>>> get_object_by_repr(r)
(crash)
(and I don't mean an uncaught exception, I mean Python itself crashes)
You don't necessarily need to define a in a function for this to happen; it can also occur if you just rebind a in the local scope (note: garbage collection isn't guaranteed to run as soon as you rebind the variable, so this may not behave 100% deterministically, but the below is a real example):
>>> a = A()
>>> r = str(a)
>>> get_object_by_repr(r)
<__main__.A object at 0x000001C73C73BBE0>
>>> a = A()
>>> get_object_by_repr(r)
<__main__.A object at 0x000001C73C73BBE0>
>>> a
<__main__.A object at 0x000001C73C73B9A0>
>>> get_object_by_repr(r)
(crash)
I'd expect this to also happen if you passed this string between processes, stored it in a file for later use by the same script, or any of the other things you'd normally be doing with a serialized object.
The reason this happens is that unlike C, Python garbage-collects objects that have gone out of scope and which do not have any references -- and the id value itself (which is just an int), or the string representation of the object that has the id embedded in it, is not recognized by the interpreter as a live reference! And because ctypes lets you reach right into the guts of the interpreter (usually a bad idea if you don't know exactly what you're doing), you're telling it to dereference a pointer to freed memory, and it crashes.
In other situations, you might actually get the far more insidious bug of getting a different object because that memory address has since been repurposed to hold something else (I'm not sure how likely this is).
To actually solve the problem of turning a str() representation into the original object, the object must be serialized, i.e. turned into a string that contains all the data needed to reconstruct an exact copy of the object, even if the original object no longer exists. How to do this depends entirely on the actual content of the object, but a pretty standard (language-agnostic) solution is to make the class JSON-serializable; check out How to make a class JSON serializable.
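For illustration, here is a minimal sketch of what that might look like. The asker's real Character class isn't shown, so the fields here (name, level) and the method names to_json/from_json are made up:

import json

class Character:
    # hypothetical fields; the real Character class isn't shown in the question
    def __init__(self, name, level):
        self.name = name
        self.level = level

    def get_name(self):
        return self.name

    def to_json(self):
        # serialize plain data, not the live object
        return json.dumps({"name": self.name, "level": self.level})

    @classmethod
    def from_json(cls, s):
        # rebuild an equivalent object from that data
        return cls(**json.loads(s))

c = Character("Alice", 3)
p = c.to_json()                    # a plain string: safe to print, store, or send
restored = Character.from_json(p)
print(restored.get_name())         # Alice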
From what I read while googling, it seems that isinstance() is always better than type().
What are some situations when using type() is better than isinstance() in Python?
I am using Python 3.7.
They do two different things; you can't really compare them directly. What you've probably read is that you should prefer isinstance when checking the type of an object at runtime. But that isn't the only use case for type (that is the use case for isinstance, as its name implies).
What may not be obvious is that type is a class. You can think of "type" and "class" as synonymous. Indeed, it is the class of class objects, a metaclass. But it is a class just like int, float, list, dict, etc., or just like a user-defined class, class Foo: pass.
In its single argument form, it returns the class of whatever object you pass in. This is the form that can be used for type-checking. It is essentially equivalent to some_object.__class__.
>>> "a string".__class__
<class 'str'>
>>> type("a string")
<class 'str'>
Note:
>>> type(type) is type
True
You might also find this form useful if you ever wanted access to the type of an object itself for other reasons.
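For example (a tiny, hypothetical helper), the single-argument form lets you construct a new object of whatever class you were handed without hard-coding the class:

def empty_like(container):
    # build a new, empty container of the same class as the argument
    return type(container)()

print(empty_like([1, 2, 3]))     # []
print(empty_like({"a": 1}))      # {}
print(repr(empty_like("abc")))   # ''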
In its three-argument form, type(name, bases, namespace) it returns a new type object, a new class. Just like any other type constructor, just like list() returns a new list.
So instead of:
class Foo:
    bar = 42

    def __init__(self, val):
        self.val = val
You could write:
def _foo_init(self, val):
    self.val = val

Foo = type('Foo', (object,), {'bar': 42, '__init__': _foo_init})
isinstance is a function which checks if... an object is an instance of some type. It is a function used for introspection.
When you want to introspect the type of an object, you will usually use isinstance(some_object, SomeType), but you might also use type(some_object) is SomeType. The key difference is that isinstance will return True if some_object.__class__ is precisely SomeType or any subclass of SomeType (i.e. if SomeType appears in the method resolution order of the object's class, type(some_object).mro()).
So isinstance(some_object, SomeType) is essentially equivalent to some_object.__class__ is SomeType or SomeType in some_object.__class__.mro().
Whereas if you use type(some_object) is SomeType, you are only asking whether some_object.__class__ is precisely SomeType.
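A quick interactive check of that difference (the class names here are just illustrative):

>>> class Base:
...     pass
...
>>> class Derived(Base):
...     pass
...
>>> d = Derived()
>>> isinstance(d, Base)
True
>>> type(d) is Base
False
>>> Base in type(d).mro()   # the relationship isinstance actually checks
True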
Here's a practical example of when you might want to use type instead of isinstance: suppose you wanted to distinguish between int and bool objects. In Python, bool inherits from int, so:
>>> issubclass(bool, int)
True
So that means:
>>> some_boolean = True
>>> isinstance(some_boolean, int)
True
but
>>> type(some_boolean) is int
False
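So if, say, you were writing a formatting helper that must treat booleans differently from other integers, an exact-type check is one way to do it (a sketch; checking isinstance(value, bool) first would work too):

def render(value):
    if type(value) is bool:        # exact match: only True/False take this branch
        return "yes" if value else "no"
    if isinstance(value, int):     # any other integer
        return str(value)
    raise TypeError(f"unsupported type: {type(value).__name__}")

print(render(True))   # yes
print(render(3))      # 3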
type() tells you the type of a variable:

a = 10
type(a)

It will give its type as <class 'int'>.

isinstance() tells you whether a variable is an instance of the specified type (or a subclass of it):

class b:
    def __init__(self):
        print('Hi')

c = b()
m = isinstance(c, b)

Here m is True because the object c is an instance of class b; otherwise it would be False.
In the following, setattr succeeds in the first invocation, but fails in the second, with:
AttributeError: 'method' object has no attribute 'i'
Why is this, and is there a way of setting an attribute on a method such that it will only exist on one instance, not for each instance of the class?
class c:
    def m(self):
        print(type(c.m))
        setattr(c.m, 'i', 0)
        print(type(self.m))
        setattr(self.m, 'i', 0)
Python 3.2.2
The short answer: There is no way of adding custom attributes to bound methods.
The long answer follows.
In Python, there are function objects and method objects. When you define a class, the def statement creates a function object that lives within the class' namespace:
>>> class c:
...     def m(self):
...         pass
...
>>> c.m
<function m at 0x025FAE88>
Function objects have a special __dict__ attribute that can hold user-defined attributes:
>>> c.m.i = 0
>>> c.m.__dict__
{'i': 0}
Method objects are different beasts. They are tiny objects just holding a reference to the corresponding function object (__func__) and one to its host object (__self__):
>>> c().m
<bound method c.m of <__main__.c object at 0x025206D0>>
>>> c().m.__self__
<__main__.c object at 0x02625070>
>>> c().m.__func__
<function m at 0x025FAE88>
>>> c().m.__func__ is c.m
True
Method objects provide a special __getattr__ that forwards attribute access to the function object:
>>> c().m.i
0
This is also true for the __dict__ property:
>>> c().m.__dict__['a'] = 42
>>> c.m.a
42
>>> c().m.__dict__ is c.m.__dict__
True
Setting attributes follows the default rules, though: attribute assignment is not forwarded to the function object, and since method objects don't carry a __dict__ slot of their own, there is no way to set arbitrary attributes on them directly.
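You can see both halves of that in a session (continuing with the same class c as above; traceback abbreviated, and the exact error message may vary slightly between Python versions):

>>> c().m.i = 1
Traceback (most recent call last):
  ...
AttributeError: 'method' object has no attribute 'i'
>>> c().m.__func__.i = 1   # equivalent to setting it on c.m directly
>>> c.m.i
1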
This is similar to user-defined classes that define __slots__ and no __dict__ slot, where trying to set a non-existent slot raises an AttributeError (see the docs on __slots__ for more information):
>>> class c:
...     __slots__ = ('a', 'b')
...
>>> x = c()
>>> x.a = 1
>>> x.b = 2
>>> x.c = 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'c' object has no attribute 'c'
Q: "Is there a way of setting an attribute on a method such that it will only exist on one instance, not for each instance of the class?"
A: Yes:
class c:
    def m(self):
        print(type(c.m))
        setattr(c.m, 'i', 0)
        print(type(self))
        setattr(self, 'i', 0)
The static variable on functions in the post you link to is not useful for methods. It sets an attribute on the function so that this attribute is available the next time the function is called, so you can make a counter or whatnot.
But methods have an object instance associated with them (self). Hence you have no need to set attributes on the method, as you can simply set them on the instance instead. That is in fact exactly what the instance is for.
The post you link to shows how to make a function with a static variable. I would say that in Python doing so would be misguided. Instead look at this answer: What is the Python equivalent of static variables inside a function?
That is the way to do it in Python in a way that is clear and easily understandable. You use a class and make it callable. Setting attributes on functions is possible and there are probably cases where it's a good idea, but in general it will just end up confusing people.
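For reference, a minimal sketch of that callable-class pattern (the class name and behaviour are just illustrative):

class Counter:
    """A callable object that keeps its own state between calls."""
    def __init__(self):
        self.count = 0

    def __call__(self):
        self.count += 1
        return self.count

tick = Counter()
print(tick())   # 1
print(tick())   # 2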
Is there a simple way to determine if a variable is a list, dictionary, or something else?
There are two built-in functions that help you identify the type of an object. You can use type() if you need the exact type of an object, and isinstance() to check an object's type against something. Usually you want isinstance() most of the time, since it is very robust and also supports type inheritance.
To get the actual type of an object, you use the built-in type() function. Passing an object as the only parameter will return the type object of that object:
>>> type([]) is list
True
>>> type({}) is dict
True
>>> type('') is str
True
>>> type(0) is int
True
This of course also works for custom types:
>>> class Test1(object):
...     pass
...
>>> class Test2(Test1):
...     pass
...
>>> a = Test1()
>>> b = Test2()
>>> type(a) is Test1
True
>>> type(b) is Test2
True
Note that type() will only return the immediate type of the object, but won’t be able to tell you about type inheritance.
>>> type(b) is Test1
False
To cover that, you should use the isinstance function. This of course also works for built-in types:
>>> isinstance(b, Test1)
True
>>> isinstance(b, Test2)
True
>>> isinstance(a, Test1)
True
>>> isinstance(a, Test2)
False
>>> isinstance([], list)
True
>>> isinstance({}, dict)
True
isinstance() is usually the preferred way to ensure the type of an object because it will also accept derived types. So unless you actually need the type object (for whatever reason), using isinstance() is preferred over type().
The second parameter of isinstance() also accepts a tuple of types, so it's possible to check for multiple types at once. isinstance() will then return True if the object is of any of those types:
>>> isinstance([], (tuple, list, set))
True
Use type():
>>> a = []
>>> type(a)
<type 'list'>
>>> f = ()
>>> type(f)
<type 'tuple'>
It might be more Pythonic to use a try...except block. That way, if you have a class which quacks like a list, or quacks like a dict, it will behave properly regardless of what its type really is.
To clarify, the preferred method of "telling the difference" between variable types is with something called duck typing: as long as the methods (and return types) that a variable responds to are what your subroutine expects, treat it like what you expect it to be. For example, if you have a class that overloads the bracket operators with __getitem__ and __setitem__ but uses some funny internal scheme, it would be appropriate for it to behave as a dictionary if that's what it's trying to emulate.
The other problem with the type(A) is type(B) checking is that if A is a subclass of B, it evaluates to False when, programmatically, you would hope it would be True. If an object is an instance of a subclass of list, it should work like a list: checking the type as presented in the other answer will prevent this. (isinstance will work, however.)
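As a small sketch of that try...except style of duck typing (the function and names are made up):

def first_items(obj, n=3):
    """Return up to n pairs, whether obj quacks like a dict or like a list."""
    try:
        pairs = obj.items()      # dict-like: ask for key/value pairs
    except AttributeError:
        pairs = enumerate(obj)   # list-like: fall back to index/value pairs
    return list(pairs)[:n]

print(first_items({"a": 1, "b": 2}))   # [('a', 1), ('b', 2)]
print(first_items(["x", "y", "z"]))    # [(0, 'x'), (1, 'y'), (2, 'z')]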
On instances of object you also have the __class__ attribute. Here is a sample taken from a Python 3.3 console:
>>> str = "str"
>>> str.__class__
<class 'str'>
>>> i = 2
>>> i.__class__
<class 'int'>
>>> class Test():
...     pass
...
>>> a = Test()
>>> a.__class__
<class '__main__.Test'>
Beware that in Python 3.x, and for new-style classes (optionally available since Python 2.2), class and type have been merged, and this can sometimes lead to unexpected results. Mainly for this reason, my favorite way of testing types/classes is the isinstance built-in function.
Determine the type of a Python object
Determine the type of an object with type
>>> obj = object()
>>> type(obj)
<class 'object'>
Although it works, avoid double underscore attributes like __class__ - they're not semantically public, and, while perhaps not in this case, the builtin functions usually have better behavior.
>>> obj.__class__ # avoid this!
<class 'object'>
type checking
Is there a simple way to determine if a variable is a list, dictionary, or something else? I am getting an object back that may be either type and I need to be able to tell the difference.
Well that's a different question, don't use type - use isinstance:
def foo(obj):
    """given a string with items separated by spaces,
    or a list or tuple,
    do something sensible
    """
    if isinstance(obj, str):
        obj = obj.split()
    return _foo_handles_only_lists_or_tuples(obj)
This covers the case where your user might be doing something clever or sensible by subclassing str - according to the principle of Liskov Substitution, you want to be able to use subclass instances without breaking your code - and isinstance supports this.
Use Abstractions
Even better, you might look for a specific Abstract Base Class from collections.abc or numbers:

from collections.abc import Iterable
from numbers import Number

def bar(obj):
    """does something sensible with an iterable of numbers,
    or just one number
    """
    if isinstance(obj, Number):  # make it a 1-tuple
        obj = (obj,)
    if not isinstance(obj, Iterable):
        raise TypeError('obj must be either a number or an iterable of numbers')
    return _bar_sensible_with_iterable(obj)
Or Just Don't explicitly Type-check
Or, perhaps best of all, use duck-typing, and don't explicitly type-check your code. Duck-typing supports Liskov Substitution with more elegance and less verbosity.
def baz(obj):
    """given an obj, a dict (or anything with an .items method),
    do something sensible with each key-value pair
    """
    for key, value in obj.items():
        _baz_something_sensible(key, value)
Conclusion
Use type to actually get an instance's class.
Use isinstance to explicitly check for actual subclasses or registered abstractions.
And just avoid type-checking where it makes sense.
You can use type() or isinstance().
>>> type([]) is list
True
Be warned that you can clobber list or any other type by assigning a variable in the current scope of the same name.
>>> the_d = {}
>>> t = lambda x: "aight" if type(x) is dict else "NOPE"
>>> t(the_d)
'aight'
>>> dict = "dude."
>>> t(the_d)
'NOPE'
Above we see that dict gets reassigned to a string, therefore the test:
type({}) is dict
...fails.
To get around this and use type() more cautiously:
>>> import builtins
>>> the_d = {}
>>> type({}) is dict
True
>>> dict = ""
>>> type({}) is dict
False
>>> type({}) is builtins.dict
True
Be careful using isinstance:

>>> isinstance(True, bool)
True
>>> isinstance(True, int)
True

but type:

>>> type(True) == bool
True
>>> type(True) == int
False
While the question is pretty old, I stumbled across it while working out a proper way myself, and I think it still needs clarifying, at least for Python 2.x (I did not check Python 3, but since the issue arises with classic classes, which are gone in that version, it probably doesn't matter).
Here I'm trying to answer the title's question: how can I determine the type of an arbitrary object? Other suggestions about using or not using isinstance are fine in many comments and answers, but I'm not addressing those concerns.
The main issue with the type() approach is that it doesn't work properly with old-style instances:
class One:
    pass

class Two:
    pass

o = One()
t = Two()

o_type = type(o)
t_type = type(t)

print "Are o and t instances of the same class?", o_type is t_type
Executing this snippet would yield:
Are o and t instances of the same class? True
Which, I argue, is not what most people would expect.
The __class__ approach is the closest to correct, but it won't work in one crucial case: when the passed-in object is an old-style class (not an instance!), since those objects lack that attribute.
This is the smallest snippet of code I could think of that satisfies such legitimate question in a consistent fashion:
#!/usr/bin/env python
from types import ClassType

# we adopt the null object pattern in the (unlikely) case
# that __class__ is None for some strange reason
_NO_CLASS = object()

def get_object_type(obj):
    obj_type = getattr(obj, "__class__", _NO_CLASS)
    if obj_type is not _NO_CLASS:
        return obj_type
    # AFAIK the only situation where this happens is an old-style class
    obj_type = type(obj)
    if obj_type is not ClassType:
        raise ValueError("Could not determine object '{}' type.".format(obj_type))
    return obj_type
Using type():

x = 'hello this is a string'
print(type(x))

Output:

<class 'str'>

To extract only the name str, use this:

x = 'this is a string'
print(type(x).__name__)  # you can use __name__ to get just the class name

Output:

str

If you use type(variable).__name__, the result is a plain string that's easy for us to read.
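For example (just an illustrative helper), the bare class name is handy in error or log messages:

def describe(value):
    # a readable message built from the bare class name
    return f"{value!r} is a {type(value).__name__}"

print(describe([1, 2]))   # [1, 2] is a list
print(describe("hi"))     # 'hi' is a str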
In many practical cases, instead of using type or isinstance you can also use @functools.singledispatch, which is used to define generic functions (a function composed of multiple functions implementing the same operation for different types).
In other words, you would want to use it when you have a code like the following:
def do_something(arg):
    if isinstance(arg, int):
        ...  # some code specific to processing integers
    if isinstance(arg, str):
        ...  # some code specific to processing strings
    if isinstance(arg, list):
        ...  # some code specific to processing lists
    ...  # etc
Here is a small example of how it works:
from functools import singledispatch

@singledispatch
def say_type(arg):
    raise NotImplementedError(f"I don't work with {type(arg)}")

@say_type.register
def _(arg: int):
    print(f"{arg} is an integer")

@say_type.register
def _(arg: bool):
    print(f"{arg} is a boolean")
>>> say_type(0)
0 is an integer
>>> say_type(False)
False is a boolean
>>> say_type(dict())
# long error traceback ending with:
NotImplementedError: I don't work with <class 'dict'>
Additionally, we can use abstract classes to cover several types at once:
from collections.abc import Sequence

@say_type.register
def _(arg: Sequence):
    print(f"{arg} is a sequence!")
>>> say_type([0, 1, 2])
[0, 1, 2] is a sequence!
>>> say_type((1, 2, 3))
(1, 2, 3) is a sequence!
As an aside to the previous answers, it's worth mentioning the existence of collections.abc which contains several abstract base classes (ABCs) that complement duck-typing.
For example, instead of explicitly checking if something is a list with:
isinstance(my_obj, list)
you could, if you're only interested in seeing if the object you have allows getting items, use collections.abc.Sequence:
from collections.abc import Sequence
isinstance(my_obj, Sequence)
If you're strictly interested in objects that allow getting, setting and deleting items (i.e. mutable sequences), you'd opt for collections.abc.MutableSequence.
Many other ABCs are defined there, Mapping for objects that can be used as maps, Iterable, Callable, et cetera. A full list of all these can be seen in the documentation for collections.abc.
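A few quick checks along those lines (assuming Python 3.3+, where these live in collections.abc):

from collections.abc import Mapping, MutableSequence, Sequence

print(isinstance({}, Mapping))           # True  - dicts can be used as maps
print(isinstance([], MutableSequence))   # True  - lists support item assignment
print(isinstance((), MutableSequence))   # False - tuples are immutable
print(isinstance("abc", Sequence))       # True  - note that strings count as sequences too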
value = 12
print(type(value))  # will print <class 'int'> (meaning integer)

Or you can do something like this:

value = 12
print(type(value) == int)  # will print True
type() is a better solution than isinstance(), particularly for booleans:

True and False are just keywords in Python, and bool is a subclass of int (True behaves like 1 and False like 0). Thus,

isinstance(True, int)

and

isinstance(False, int)

both return True: both booleans are instances of an integer type. type(), however, is more clever:

type(True) == int

returns False.
In general you can extract the class name from an object as a string,

str_class = object.__class__.__name__

and use it for comparison:

if str_class == 'dict':
    ...  # blablabla..
elif str_class == 'customclass':
    ...  # blebleble..
For the sake of completeness, isinstance will not work for type-checking a subtype when what you have is the class itself rather than an instance of it. While that makes perfect sense, none of the answers (including the accepted one) covers it. Use issubclass for that.
>>> class a(list):
...     pass
...
>>> isinstance(a, list)
False
>>> issubclass(a, list)
True
What is the difference between type(obj) and obj.__class__? Is there ever a possibility of type(obj) is not obj.__class__?
I want to write a function that works generically on the supplied objects, using a default value of 1 in the same type as another parameter. Which variation, #1 or #2 below, is going to do the right thing?
def f(a, b=None):
    if b is None:
        b = type(a)(1)       # #1
        b = a.__class__(1)   # #2
This is an old question, but none of the answers seems to mention it: in the general case, it IS possible for a new-style class to have different values for type(instance) and instance.__class__:
class ClassA(object):
    def display(self):
        print("ClassA")

class ClassB(object):
    __class__ = ClassA

    def display(self):
        print("ClassB")

instance = ClassB()
print(type(instance))
print(instance.__class__)
instance.display()
Output:
<class '__main__.ClassB'>
<class '__main__.ClassA'>
ClassB
The reason is that ClassB overrides the __class__ attribute, while the internal type field of the object is left unchanged. type(instance) reads that internal type field directly, so it returns the real class (ClassB), whereas instance.__class__ goes through normal attribute lookup, finds the override instead of the default descriptor that would read the internal type field, and so returns the hard-coded ClassA.
Old-style classes are the problem, sigh:
>>> class old: pass
...
>>> x=old()
>>> type(x)
<type 'instance'>
>>> x.__class__
<class __main__.old at 0x6a150>
>>>
Not a problem in Python 3 since all classes are new-style now;-).
In Python 2, a class is new-style only if it inherits from another new-style class (including object and the various built-in types such as dict, list, set, ...) or implicitly or explicitly sets __metaclass__ to type.
type(obj) and obj.__class__ do not behave the same for old-style classes:
>>> class a(object):
...     pass
...
>>> class b(a):
...     pass
...
>>> class c:
...     pass
...
>>> ai=a()
>>> bi=b()
>>> ci=c()
>>> type(ai) is ai.__class__
True
>>> type(bi) is bi.__class__
True
>>> type(ci) is ci.__class__
False
There's an interesting edge case with proxy objects (that use weak references):
>>> import weakref
>>> class MyClass:
...     x = 42
...
>>> obj = MyClass()
>>> obj_proxy = weakref.proxy(obj)
>>> obj_proxy.x # proxies attribute lookup to the referenced object
42
>>> type(obj_proxy) # returns type of the proxy
weakproxy
>>> obj_proxy.__class__ # returns type of the referenced object
__main__.MyClass
>>> del obj # breaks the proxy's weak reference
>>> type(obj_proxy) # still works
weakproxy
>>> obj_proxy.__class__ # fails
ReferenceError: weakly-referenced object no longer exists
FYI - Django does this.
>>> from django.core.files.storage import default_storage
>>> type(default_storage)
django.core.files.storage.DefaultStorage
>>> default_storage.__class__
django.core.files.storage.FileSystemStorage
As someone with finite cognitive capacity who's just trying to figure out what's going on in order to get work done... it's frustrating.