I can't understand why the isinstance function requires a tuple as its second parameter instead of accepting any iterable.
isinstance(some_object, (some_class1, some_class2))
works fine, but
isinstance(some_object, [some_class1, some_class2])
raises a TypeError.
The reason seems to be "allowing only tuples is enough, it's simpler, it avoids the danger of some corner cases, and it seemed neater to the BDFL" (i.e. Guido). (Kudos to @Caleb for posting the key link in the comments.)
Here is an excerpt from this email conversation with Guido van Rossum that specifically addresses the case of other iterables for the isinstance function. (Click on the link for the complete conversation.)
On Thu, Jan 2, 2014 at 1:37 PM, James Powell wrote:
This is driven by a real-world example wherein a large number of
prefixes stored in a set, necessitating:
any('spam'.startswith(c) for c in prefixes)
# or
'spam'.startswith(tuple(prefixes))
Neither of these strikes me as bad. Also, depending on whether the set
of prefixes itself changes dynamically, it may be best to lift the
tuple() call out of the startswith() call.
...
However, .startswith doesn't seem to be the only example of this, and
the other examples are free of the string/iterable ambiguity:
isinstance(x, {int, float})
But this is even less likely to have a dynamically generated argument.
And there could still be another ambiguity here: a metaclass could
conceivably make its instances (i.e. classes) iterable.
It behaves exactly as it should, according to the docs: https://docs.python.org/3/library/functions.html#isinstance
If classinfo is a tuple of type objects (or recursively, other such tuples), return true if object is an instance of any of the types. If classinfo is not a type or tuple of types and such tuples, a TypeError exception is raised.
Because a string is also "some iterable". So you could write:
isinstance(some_object, 'foobar')
and it would check if some_object is an instance of f, o, b, a or r.
This wouldn't work obviously, so isinstance would need to check if the second argument is not a string. Since isinstance needs to do a type check, it might as well make sure the second argument is always a tuple.
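So if your collection of classes is built dynamically, the usual workaround is to convert it to a tuple at the call site. A minimal sketch (the variable names are just for illustration):
allowed_types = {int, float, complex}  # any iterable of classes, e.g. a set
x = 3.14
print(isinstance(x, tuple(allowed_types)))  # True: convert to a tuple first
# isinstance(x, allowed_types) would raise TypeError, since a set is not accepted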
Because this is the way the language was designed...
When you write code that can accept more than one type, it is easier to test against fixed types directly than to use duck typing. For example, since strings are iterable, when you want to accept either a string or a sequence of strings, you must first test for the string type.
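For example, a common pattern looks roughly like this sketch (the function name is just illustrative):
def as_list_of_strings(value):
    # a single string must be special-cased, because it is itself iterable
    if isinstance(value, str):
        return [value]
    return list(value)

print(as_list_of_strings("spam"))            # ['spam']
print(as_list_of_strings(["spam", "eggs"]))  # ['spam', 'eggs']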
Here I can imagine no strong reason for limiting to the tuple type, but no strong reason either to extend it to any sequence. You could try to propose it on the python-ideas list.
At a high level, you need a container type for isinstance checks, and the built-in containers are tuples, lists, sets, and dicts. Most likely, they decided on tuple over a set because the expected use case for isinstance is a small number of types, and for a small number of elements a tuple is faster to check for containment than a set.
Mutability really isn't a consideration. If they really needed immutability, they could have just re-wrapped the iterable into a tuple before processing.
Related
I'm trying to define a function that can take containers of containers of objects, and I don't care if the containers are Tuples or Lists.
(I know I could go through the hassle of implementing this, but I don't want to, and I'm still not sure if that would solve my problem.)
So, I have the following code:
from typing import Any, Union, List, Tuple
# (list or tuple) of Any
AnyBucket = Union[List[Any], Tuple[Any, ...]]
# should be (list or tuple) of (lists or tuples) of Any
AnyBuckets = Union[List[AnyBucket], Tuple[AnyBucket, ...]]
def takes_any_buckets(inpt: AnyBuckets) -> None:
    print(inpt[0][0])
input_value: List[List[Any]] = [[1], [2]]
# Argument 1 to "takes_any_buckets" has incompatible type "List[List[Any]]";
# expected "Union[List[Union[List[Any], Tuple[Any, ...]]], Tuple[Union[List[Any], Tuple[Any, ...]], ...]]"
takes_any_buckets(input_value)
I have two questions:
Why is Mypy throwing an error for this?
What can I do to get the functionality (not getting a Mypy error) that I want?
(I don't want to just disable the error with # type: ignore, I want to define a type that will work.)
My guess for (1) is that it has something to do with Lists being invariant, but with this complex of an example, I can't figure out how.
I have no ideas for (2) other than possibly implementing the previously mentioned hassle, which would allow me to get rid of the Unions, which I suspect may be part of the problem.
It does indeed have to do with lists being invariant. The current version of Mypy will tell you this; if you run your code in the current version, you get the following notes, along with the error:
main.py:14: note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance
main.py:14: note: Consider using "Sequence" instead, which is covariant
What it means for lists to be invariant is that a list of things of type A is a sub-type of a list of things of type B if and only if A is equal to B. This means that even when A is a subtype of B, List[A] is not a subtype of List[B]. (See here, both for an explanation of invariance and why mypy treats lists as invariant: https://mypy.readthedocs.io/en/stable/generics.html#variance-of-generic-types).
So List[List[Any]] is not a subtype of List[AnyBucket], even though List[Any] is a subtype of AnyBucket, so input_value does not have a type compatible with AnyBuckets. Hence the error.
As Mypy and ShadowRanger both recommend above, you should probably use Sequence, a covariant type that covers iterables that can be indexed into (to a first approximation -- there are some other things that need to be true of a collection for it to be a sequence). You'll need to import it from collections.abc; see here: https://docs.python.org/3/library/collections.abc.html#collections.abc.Sequence
I say "probably" because it depends on exactly what the function you want to type does with the collection. If, for instance, all you do is iterate through the collection (once), then you want to use Iterable, a more generic type.
An advantage of using something like Sequence is that if there are any other types of objects floating around out there -- perhaps user-defined ones -- that can do the job you want, they will be valid inputs to your function (as long as they have been registered as Sequences). In general, it's nice if you can use structural typing, which is the static-typing equivalent of duck typing, which is the Python Way.
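For the code in the question, a sketch of a Sequence-based version (assuming Python 3.9+, where collections.abc.Sequence supports subscripting) might look like this; note that it is intentionally more permissive, since a str, for instance, is also a Sequence:
from collections.abc import Sequence
from typing import Any

# any sequence (list, tuple, ...) of Any
AnyBucket = Sequence[Any]
# any sequence of such sequences
AnyBuckets = Sequence[AnyBucket]

def takes_any_buckets(inpt: AnyBuckets) -> None:
    print(inpt[0][0])

input_value: list[list[Any]] = [[1], [2]]
takes_any_buckets(input_value)  # accepted: Sequence is covariant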
How do I check if an object is of a given type, or if it inherits from a given type?
How do I check if the object o is of type str?
Beginners often wrongly expect the string to already be "a number" - either expecting Python 3.x input to convert type, or expecting that a string like '1' is also simultaneously an integer. This is the wrong canonical for those questions. Please carefully read the question and then use How do I check if a string represents a number (float or int)?, How can I read inputs as numbers? and/or Asking the user for input until they give a valid response as appropriate.
Use isinstance to check if o is an instance of str or any subclass of str:
if isinstance(o, str):
To check if the type of o is exactly str, excluding subclasses of str:
if type(o) is str:
See Built-in Functions in the Python Library Reference for relevant information.
Checking for strings in Python 2
For Python 2, this is a better way to check if o is a string:
if isinstance(o, basestring):
because this will also catch Unicode strings. unicode is not a subclass of str; both str and unicode are subclasses of basestring. In Python 3, basestring no longer exists since there's a strict separation of strings (str) and binary data (bytes).
Alternatively, isinstance accepts a tuple of classes. This will return True if o is an instance of any subclass of any of (str, unicode):
if isinstance(o, (str, unicode)):
The most Pythonic way to check the type of an object is... not to check it.
Since Python encourages Duck Typing, you should just try...except to use the object's methods the way you want to use them. So if your function is looking for a writable file object, don't check that it's a subclass of file, just try to use its .write() method!
Of course, sometimes these nice abstractions break down and isinstance(obj, cls) is what you need. But use sparingly.
isinstance(o, str) will return True if o is an str or is of a type that inherits from str.
type(o) is str will return True if and only if o is a str. It will return False if o is of a type that inherits from str.
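A quick illustration of the difference, using a hypothetical subclass:
class MyStr(str):
    pass

s = MyStr("hello")
print(isinstance(s, str))  # True: MyStr inherits from str
print(type(s) is str)      # False: the exact type is MyStr, not str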
After the question was asked and answered, type hints were added to Python. Type hints in Python allow types to be checked, but in a very different way from statically typed languages. Type hints associate the expected types of arguments with a function, as runtime-accessible data attached to that function, and this allows types to be checked. Example of type hint syntax:
def foo(i: int):
    return i
foo(5)
foo('oops')
In this case we want an error to be triggered for foo('oops') since the annotated type of the argument is int. The added type hint does not cause an error to occur when the script is run normally. However, it adds attributes to the function describing the expected types that other programs can query and use to check for type errors.
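For example, the annotations from the foo definition above are stored on the function object itself and can be inspected at runtime:
def foo(i: int):
    return i

print(foo.__annotations__)  # {'i': <class 'int'>}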
One of these other programs that can be used to find the type error is mypy:
mypy script.py
script.py:12: error: Argument 1 to "foo" has incompatible type "str"; expected "int"
(You might need to install mypy from your package manager. I don't think it comes with CPython but seems to have some level of "officialness".)
Type checking this way is different from type checking in statically typed compiled languages. Because types are dynamic in Python, type checking must be done at runtime, which imposes a cost -- even on correct programs -- if we insist that it happen at every chance. Explicit type checks may also be more restrictive than needed and cause unnecessary errors (e.g. does the argument really need to be of exactly list type or is anything iterable sufficient?).
The upside of explicit type checking is that it can catch errors earlier and give clearer error messages than duck typing. The exact requirements of a duck type can only be expressed with external documentation (hopefully it's thorough and accurate) and errors from incompatible types can occur far from where they originate.
Python's type hints are meant to offer a compromise where types can be specified and checked but there is no additional cost during usual code execution.
The typing package offers generic types that can be used in type hints to express needed behaviors without requiring particular concrete types. For example, it includes Iterable and Callable for hints that specify the need for any type with those behaviors.
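A small sketch of what such a hint might look like (the function itself is made up for illustration):
from typing import Callable, Iterable, List

def apply_all(funcs: Iterable[Callable[[int], int]], value: int) -> List[int]:
    # any iterable of int -> int callables is acceptable: a list, a tuple, a generator, ...
    return [f(value) for f in funcs]

def double(n: int) -> int:
    return n * 2

def square(n: int) -> int:
    return n * n

print(apply_all([double, square], 3))  # [6, 9]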
While type hints are the most Pythonic way to check types, it's often even more Pythonic to not check types at all and rely on duck typing. Type hints are relatively new and the jury is still out on when they're the most Pythonic solution. A relatively uncontroversial but very general comparison: Type hints provide a form of documentation that can be enforced, allow code to generate earlier and easier to understand errors, can catch errors that duck typing can't, and can be checked statically (in an unusual sense but it's still outside of runtime). On the other hand, duck typing has been the Pythonic way for a long time, doesn't impose the cognitive overhead of static typing, is less verbose, and will accept all viable types and then some.
In Python 3.10, you can use | in isinstance:
>>> isinstance('1223', int | str)
True
>>> isinstance('abcd', int | str)
True
isinstance(o, str)
Link to docs
You can check the type of a variable using the __name__ attribute of its type.
Ex:
>>> a = [1,2,3,4]
>>> b = 1
>>> type(a).__name__
'list'
>>> type(a).__name__ == 'list'
True
>>> type(b).__name__ == 'list'
False
>>> type(b).__name__
'int'
For more complex type validations I like typeguard's approach of validating based on python type hint annotations:
from typeguard import check_type
from typing import List
try:
check_type('mylist', [1, 2], List[int])
except TypeError as e:
print(e)
You can perform very complex validations in very clean and readable fashion.
check_type('foo', [1, 3.14], List[Union[int, float]])
# vs
isinstance(foo, list) and all(isinstance(a, (int, float)) for a in foo)
I think the cool thing about using a dynamic language like Python is you really shouldn't have to check something like that.
I would just call the required methods on your object and catch an AttributeError. Later on this will allow you to call your methods with other (seemingly unrelated) objects to accomplish different tasks, such as mocking an object for testing.
I've used this a lot when getting data off the web with urllib2.urlopen(), which returns a file-like object. This can in turn be passed to almost any method that reads from a file, because it implements the same read() method as a real file.
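A minimal sketch of that style (the function name is made up for illustration):
import io

def count_lines(source):
    # just use the read() method we need; anything file-like works,
    # whether it is a real file, an HTTP response, or an io.StringIO
    try:
        text = source.read()
    except AttributeError:
        raise TypeError("expected a file-like object with a read() method")
    return text.count("\n")

print(count_lines(io.StringIO("a\nb\n")))  # 2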
But I'm sure there is a time and place for using isinstance(), otherwise it probably wouldn't be there :)
The accepted answer answers the question in that it provides the answers to the asked questions.
Q: What is the best way to check whether a given object is of a given type? How about checking whether the object inherits from a given type?
A: Use isinstance, issubclass, type to check based on types.
As other answers and comments are quick to point out however, there's a lot more to the idea of "type-checking" than that in python. Since the addition of Python 3 and type hints, much has changed as well. Below, I go over some of the difficulties with type checking, duck typing, and exception handling. For those that think type checking isn't what is needed (it usually isn't, but we're here), I also point out how type hints can be used instead.
Type Checking
Type checking is not always an appropriate thing to do in python. Consider the following example:
def sum(nums):
    """Expect an iterable of integers and return the sum."""
    result = 0
    for n in nums:
        result += n
    return result
To check if the input is an iterable of integers, we run into a major issue. The only way to check that every element is an integer would be to loop through and check each element. But if we loop through the entire iterator, then there will be nothing left for the intended code. We have two options in this kind of situation (both are sketched in code below).
Check as we loop.
Check beforehand but store everything as we check.
Option 1 has the downside of complicating our code, especially if we need to perform similar checks in many places. It forces us to move type checking from the top of the function to everywhere we use the iterable in our code.
Option 2 has the obvious downside that it destroys the entire purpose of iterators. The entire point is to not store the data because we shouldn't need to.
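To make the trade-offs concrete, here is a minimal sketch of both options (the helper names are made up for illustration):
def sum_checking_as_we_loop(nums):
    # Option 1: validate each element as we consume it
    result = 0
    for n in nums:
        if not isinstance(n, int):
            raise TypeError("all elements must be integers")
        result += n
    return result

def sum_checking_up_front(nums):
    # Option 2: materialize the iterable so every element can be checked first
    items = list(nums)  # a generator would otherwise be exhausted by the check
    if not all(isinstance(n, int) for n in items):
        raise TypeError("all elements must be integers")
    return sum(items)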
One might also think that, if checking all of the elements is too much, perhaps we can just check whether the input itself is of an iterable type, but there isn't any single base class that all iterables inherit from. Any type implementing __iter__ is iterable.
Exception Handling and Duck Typing
An alternative approach would be to forgo type checking altogether and focus on exception handling and duck typing instead. That is to say, wrap your code in a try-except block and catch any errors that occur. Alternatively, don't do anything and let exceptions rise naturally from your code.
Here's one way to go about catching an exception.
def sum(nums):
    """Try to catch exceptions?"""
    try:
        result = 0
        for n in nums:
            result += n
        return result
    except TypeError as e:
        print(e)
Compared to the options before, this is certainly better. We're checking as we run the code. If there's a TypeError anywhere, we'll know. We don't have to place a check everywhere that we loop through the input. And we don't have to store the input as we iterate over it.
Furthermore, this approach enables duck typing. Rather than checking for specific types, we have moved to checking for specific behaviors and look for when the input fails to behave as expected (in this case, looping through nums and being able to add n).
However, the exact reasons which make exception handling nice can also be their downfall.
A float isn't an int, but it satisfies the behavioral requirements to work.
It is also bad practice to wrap the entire code with a try-except block.
At first these may not seem like issues, but here's some reasons that may change your mind.
A user can no longer expect our function to return an int as intended. This may break code elsewhere.
Since exceptions can come from a wide variety of sources, using the try-except on the whole code block may end up catching exceptions you didn't intend to. We only wanted to check if nums was iterable and had integer elements.
Ideally we'd like to catch the exceptions our code generates and raise, in their place, more informative exceptions. It's not fun when an exception is raised from someone else's code with no explanation other than a line you didn't write and the fact that some TypeError occurred.
In order to fix the exception handling in response to the above points, our code would then become this... abomination.
def sum(nums):
    """
    Try to catch all of our exceptions only.
    Re-raise them with more specific details.
    """
    result = 0

    try:
        iter(nums)
    except TypeError as e:
        raise TypeError("nums must be iterable")

    for n in nums:
        try:
            result += int(n)
        except TypeError as e:
            raise TypeError("stopped mid iteration since a non-integer was found")

    return result
You can kinda see where this is going. The more we try to "properly" check things, the worse our code is looking. Compared to the original code, this isn't readable at all.
We could argue perhaps this is a bit extreme. But on the other hand, this is only a very simple example. In practice, your code is probably much more complicated than this.
Type Hints
We've seen what happens when we try to modify our small example to "enable type checking". Rather than focusing on trying to force specific types, type hinting allows for a way to make types clear to users.
from typing import Iterable

def sum(nums: Iterable[int]) -> int:
    result = 0
    for n in nums:
        result += n
    return result
Here are some advantages to using type-hints.
The code actually looks good now!
Static type analysis may be performed by your editor if you use type hints!
They are stored on the function/class, making them dynamically usable e.g. typeguard and dataclasses.
They show up for functions when using help(...).
No need to sanity check if your input type is right based on a description or worse lack thereof.
You can "type" hint based on structure e.g. "does it have this attribute?" without requiring subclassing by the user.
The downside to type hinting?
On their own, type hints are nothing more than syntax and special text; they are not the same as type checking.
In other words, it doesn't actually answer the question because it doesn't provide type checking. Regardless, however, if you are here for type checking, then you should be type hinting as well. Of course, if you've come to the conclusion that type checking isn't actually necessary but you want some semblance of typing, then type hints are for you.
To Hugo:
You probably mean list rather than array, but that points to the whole problem with type checking - you don't want to know if the object in question is a list, you want to know if it's some kind of sequence or if it's a single object. So try to use it like a sequence.
Say you want to add the object to an existing sequence, or if it's a sequence of objects, add them all
try:
    my_sequence.extend(o)
except TypeError:
    my_sequence.append(o)
One tricky case with this is when you are working with strings and/or sequences of strings: a string is often thought of as a single object, but it's also a sequence of characters. Worse than that, it's really a sequence of single-length strings.
I usually choose to design my API so that it only accepts either a single value or a sequence - it makes things easier. It's not hard to put a [ ] around your single value when you pass it in if need be.
(Though this can cause errors with strings, as they do look like (are) sequences.)
If you have to check for the type of str or int, please use isinstance. As already mentioned by others, the reason is to also include subclasses. One important example of subclasses, from my perspective, is enums with a data type, like IntEnum or StrEnum, which are a pretty nice way to define related constants. However, it is kind of annoying if libraries do not accept those as such types.
Example:
import enum

class MyEnum(str, enum.Enum):
    A = "a"
    B = "b"

print(f"is string: {isinstance(MyEnum.A, str)}")     # True
print(f"is string: {type(MyEnum.A) == str}")         # False!!!
print(f"is string: {type(MyEnum.A.value) == str}")   # True
In Python, you can use the built-in isinstance() function to check if an object is of a given type, or if it inherits from a given type.
To check if the object o is of type str, you would use the following code:
if isinstance(o, str):
    ...  # o is of type str
You can also use the type() function to check the object's exact type:
if type(o) == str:
    ...  # o is of type str (exactly str, not a subclass)
You can also check whether the object's type is str or a subclass of str using the issubclass() function:
if issubclass(type(o), str):
    ...  # type(o) is str or a subclass of str
A simple way to check type is to compare it with something whose type you know.
>>> a = 1
>>> type(a) == type(1)
True
>>> b = 'abc'
>>> type(b) == type('')
True
I think the best approach is to type your variables well. You can do this by using the typing library.
Example:
from typing import NewType
UserId = NewType('UserId', int)
some_id = UserId(524313)
See https://docs.python.org/3/library/typing.html.
I've been playing for a bit with startswith() and I've discovered something interesting:
>>> tup = ('1', '2', '3')
>>> lis = ['1', '2', '3', '4']
>>> '1'.startswith(tup)
True
>>> '1'.startswith(lis)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: startswith first arg must be str or a tuple of str, not list
Now, the error is obvious and casting the list into a tuple will work just fine as it did in the first place:
>>> '1'.startswith(tuple(lis))
True
Now, my question is: why the first argument must be str or a tuple of str prefixes, but not a list of str prefixes?
AFAIK, the Python code for startswith() might look like this:
def startswith(src, prefix):
    return src[:len(prefix)] == prefix
But that just confuses me more, because even with it in mind, it still shouldn't make any difference whether it is a list or a tuple. What am I missing?
There is technically no reason to accept other sequence types, no. The source code roughly does this:
if isinstance(prefix, tuple):
    for substring in prefix:
        if not isinstance(substring, str):
            raise TypeError(...)
    return tailmatch(...)
elif not isinstance(prefix, str):
    raise TypeError(...)
return tailmatch(...)
(where tailmatch(...) does the actual matching work).
So yes, any iterable would do for that for loop. But, all the other string test APIs (as well as isinstance() and issubclass()) that take multiple values also only accept tuples, and this tells you as a user of the API that it is safe to assume that the value won't be mutated. You can't mutate a tuple but the method could in theory mutate the list.
Also note that you usually test for a fixed number of prefixes or suffixes or classes (in the case of isinstance() and issubclass()); the implementation is not suited for a large number of elements. A tuple implies that you have a limited number of elements, while lists can be arbitrarily large.
Next, if any iterable or sequence type would be acceptable, then that would include strings; a single string is also a sequence. Should then a single string argument be treated as separate characters, or as a single prefix?
So in other words, it's a limitation to self-document that the sequence won't be mutated, is consistent with other APIs, it carries an implication of a limited number of items to test against, and removes ambiguity as to how a single string argument should be treated.
Note that this was brought up before on the Python Ideas list; see this thread; Guido van Rossum's main argument there is that you either special case for single strings or for only accepting a tuple. He picked the latter and doesn't see a need to change this.
This has already been suggested on Python-ideas a couple of years back; see str.startswith taking any iterator instead of just tuple, where GvR had this to say:
The current behavior is intentional, and the ambiguity of strings
themselves being iterables is the main reason. Since startswith() is
almost always called with a literal or tuple of literals anyway, I see
little need to extend the semantics.
In addition to that, there seemed to be no real motivation as to why to do this.
The current approach keeps things simple and fast: unicode_startswith (and endswith) check for a tuple argument and then for a string one. They then call tailmatch in the appropriate direction. This is, arguably, very easy to understand in its current state, even for strangers to C code.
Adding other cases will only lead to more bloated and complex code for little benefit while also requiring similar changes to any other parts of the unicode object.
On a similar note, here is an excerpt from a talk by core developer, Raymond Hettinger discussing API design choices regarding certain string methods, including recent changes to the str.startswith signature. While he briefly mentions this fact that str.startswith accepts a string or tuple of strings and does not expound, the talk is informative on the decisions and pain points both core developers and contributors have dealt with leading up to the present API.
USAGE CONTEXT ADDED AT END
I often want to operate on an abstract object like a list. e.g.
def list_ish(thing):
    for i in xrange(0, len(thing)):
        print thing[i]
Now this is appropriate if thing is a list, but it will fail if thing is a dict, for example. What is the pythonic way to ask "do you behave like a list?"
NOTE:
hasattr(thing, '__getitem__') and not hasattr(thing, 'keys')
this will work for all cases I can think of, but I don't like defining a duck type negatively, as I expect there could be cases that it does not catch.
Really what I want is to ask:
"hey, do you operate on integer indices in the way I expect a list to do?" e.g.
thing[i], thing[4:7] = [...], etc.
NOTE: I do not want to simply execute my operations inside of a large try/except, since they are destructive. it is not cool to try and fail here....
USAGE CONTEXT
-- A "point-lists" is a list-like-thing that contains dict-like-things as its elements.
-- A "matrix" is a list-like-thing that contains list-like-things
-- I have a library of functions that operate on point-lists and also in an analogous way on matrix like things.
-- for example, From the users point of view destructive operations like the "spreadsheet-like" operations "column-slice" can operate on both matrix objects and also on point-list objects in an analogous way -- the resulting thing is like the original one, but only has the specified columns.
-- since this particular operation is destructive it would not be cool to proceed as if an object were a matrix, only to find out part way thru the operation, it was really a point-list or none-of-the-above.
-- I want my 'is_matrix' and 'is_point_list' tests to be performant, since they sometimes occur inside inner loops. So I would be satisfied with a test which only investigated element zero for example.
-- I would prefer tests that do not involve construction of temporary objects, just to determine an object's type, but maybe that is not the python way.
in general I find the whole duck typing thing to be kinda messy, and fraught with bugs and slowness, but maybe I don't yet think like a true Pythonista
happy to drink more kool-aid...
One thing you can do, that should work quickly on a normal list and fail on a normal dict, is taking a zero-length slice from the front:
try:
    thing[:0]
except TypeError:
    # probably not list-like
    pass
else:
    # probably list-like
    pass
The slice fails on dicts because slices are not hashable.
However, str and unicode also pass this test, and you mention that you are doing destructive edits. That means you probably also want to check for __delitem__ and __setitem__:
def supports_slices_and_editing(thing):
    if hasattr(thing, '__setitem__') and hasattr(thing, '__delitem__'):
        try:
            thing[:0]
            return True
        except TypeError:
            pass
    return False
I suggest you organize the requirements you have for your input, and the range of possible inputs you want your function to handle, more explicitly than you have so far in your question. If you really just wanted to handle lists and dicts, you'd be using isinstance, right? Maybe what your method does could only ever delete items, or only ever replace items, so you don't need to check for the other capability. Document these requirements for future reference.
When dealing with built-in types, you can use the Abstract Base Classes. In your case, you may want to test against collections.abc.Sequence or collections.abc.MutableSequence (in Python 2, these classes live directly in the collections module):
if isinstance(your_thing, collections.abc.Sequence):
    ...  # access your_thing as a list
This has been supported in all Python versions since 2.6.
If you are using your own classes to build your_thing, I'd recommend that you inherit from these abstract base classes as well (directly or indirectly). This way, you can ensure that the sequence interface is implemented correctly, and avoid all the typing mess.
And for third-party libraries, there's no simple way to check for a sequence interface, if the third-party classes didn't inherit from the built-in types or abstract classes. In this case you'll have to check for every interface that you're going to use, and only those you use. For example, your list_ish function used __len__ and __getitem__, so only check whether these two methods exist. A wrong behavior of __getitem__ (e.g. a dict) should raise an exception.
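A minimal sketch of such an interface check, based only on the methods list_ish actually uses (the helper name is made up):
def looks_list_ish(thing):
    # check only the two methods that list_ish() relies on
    return hasattr(thing, '__len__') and hasattr(thing, '__getitem__')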
Perhaps there is no ideal pythonic answer here, so I am proposing a 'hack' solution, but I don't know enough about the class structure of Python to know if I am getting this right:
def is_list_like(thing):
    return hasattr(thing, '__setslice__')

def is_dict_like(thing):
    return hasattr(thing, 'keys')
My reduced goals here are simply to have performant tests that will:
(1) never call a dict-thing, nor a string-like-thing, a list
(2) return the right answer for built-in Python types
(3) return the right answer if someone implements a "full" set of core methods for a list/dict
(4) be fast (ideally not allocating objects during the test)
EDIT: Incorporated ideas from @DanGetz
I'm looking for a way to test if an object is not of a "list-ish" type, that is, not only that the object is not iterable (e.g. you can also run iter on a string, or on a simple object that implements __iter__) but that the object is not in the list family. I define the "list" family as list/tuple/set/frozenset, or anything that inherits from those; however, as there might be something that I'm missing, I would like to find a more general way than running isinstance against all of those types.
I thought of two possible ways to do it, but both seem somewhat awkward as they very much test against every possible list type, and I'm looking for a more general solution.
First option:
return not isinstance( value, (frozenset, list, set, tuple,) )
Second option:
return not hasattr(value, '__iter__')
Is testing for the __iter__ attribute enough? Is there a better way for finding whether an object is not a list-type?
Thanks in advance.
Edit:
(Quoted from comment to #Rosh Oxymoron's Solution):
Thinking about the definition better now, I believe it would be more right to say that I need to find everything that is not array-type in definition, but it can still be a string/other simple object...Checking against collections.Iterable will still give me True for objects which implement the __iter__ method.
There is no term 'list-ish' and there is no magic built-in check_if_value_is_an_instance_of_some_i_dont_know_what_set_of_types.
Your solution with not isinstance( value, (frozenset, list, set, tuple,) ) is pretty good - it is clear and explicit.
There is no such family – it's not well-defined, and naturally there's no way to check for it. The closest thing possible is an iterable that is not a string. You can test the object for iterability and then explicitly check if it is a string:
if isinstance(ob, collections.Iterable) and not isinstance(ob, types.StringTypes):
    print "An iterable container"
A better approach would be to always ask for an iterable object, and when you need to pass a single string S, pass [S] instead. The ability to pass a string is a feature, e.g.:
alphabet = set('abcdefgijklmopqrstuvwxyz')
If you special-case string, you will:
Break the ability to use your function with the most natural way to pass a collection of characters.
Create inconsistency for user-defined string types and/or other containers that are string-ish (e.g. array.array can represent a chunk of data, just like string).
A string can be used to represent both a single piece of text and a collection of characters, and because of the second it is also list-ish.