Python Typing: List of specific values

I want to type a variable to be list of finite set of valid values.
So basically, I would like to have the typing equivalent of the following minimal example:
valid_parameters = ["value", "other value"]

def check_type(parameters_list):
    for parameter in parameters_list:
        if parameter not in valid_parameters:
            raise ValueError("invalid parameter")

valid_list = ["value"]
check_type(valid_list)
# works

invalid_list = ["different_value"]
check_type(invalid_list)
# raises ValueError
I checked typing already, but I didn't manage to find a solution. I tried to create a list of Literal, but it didn't seem to work. Is there such a solution? Can it be created?

Using Literal won't work at runtime; type hints are meant to be used by type checkers, IDEs, linters, etc.
Hence the following will not fail at runtime:
>>> from typing import Literal
>>> VALID = Literal["value", "other value"]
>>> def foo(my_param: VALID) -> None:
...     print(my_param)
...
>>> foo("value")
value
>>> foo("bar")
bar
But you can use an Enum to achieve what you want:
>>> from enum import Enum
>>> class Parameter(Enum):
...     VALUE = "value"
...     OTHER_VALUE = "other value"
...
>>> def foo(my_param: Parameter) -> None:
...     print(my_param)
...
>>> foo(Parameter("value"))
Parameter.VALUE
>>> foo(Parameter("wrong"))
ValueError: 'wrong' is not a valid Parameter
(This was tested on Python 3.8.0)
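If you still want the runtime check from the question driven by the same set of values as the static type, a sketch (assuming Python 3.8+, where typing.get_args is available) is to derive the runtime list from the Literal alias:

```python
from typing import Literal, get_args

Valid = Literal["value", "other value"]
# get_args pulls the allowed values back out of the Literal alias
VALID_VALUES = get_args(Valid)  # ('value', 'other value')

def check_type(parameters_list):
    # runtime check driven by the same Literal used for static typing
    for parameter in parameters_list:
        if parameter not in VALID_VALUES:
            raise ValueError(f"invalid parameter: {parameter!r}")

check_type(["value"])              # passes
# check_type(["different_value"])  # would raise ValueError
```

This keeps a single source of truth: the type checker validates literals statically, and check_type enforces the same set at runtime.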

Related

How to type-hint a method that retrieves dynamic enum value?

I have a Python module that has a number of simple enums that are defined as follows:
class WordType(Enum):
    ADJ = "Adjective"
    ADV = "Adverb"

class Number(Enum):
    S = "Singular"
    P = "Plural"
Because there are a lot of these enums and I only decide at runtime which enums to query for any given input, I wanted a function that can retrieve the value given the enum-type and the enum-value as strings. I succeeded in doing that as follows:
names = inspect.getmembers(sys.modules[__name__], inspect.isclass)

def get_enum_type(name: str):
    enum_class = [x[1] for x in names if x[0] == name]
    return enum_class[0]

def get_enum_value(object_name: str, value_name: str):
    return get_enum_type(object_name)[value_name]
This works well, but now I'm adding type hinting and I'm struggling with how to define the return types for these methods: I've tried slice and Literal[], both suggested by mypy, but neither checks out (maybe because I don't understand what type parameter I can give to Literal[]).
I am willing to modify the enum definitions, but I'd prefer to keep the dynamic querying as-is. Worst case scenario, I can do # type: ignore or just return -> Any, but I hope there's something better.
As you don't want to type-check against just any Enum, I suggest introducing a base type (say GrammaticalEnum) to mark all your Enums, and grouping them in their own module:
# module grammar_enums
import sys
import inspect
from enum import Enum

class GrammaticalEnum(Enum):
    """use as a base to mark all grammatical enums"""
    pass

class WordType(GrammaticalEnum):
    ADJ = "Adjective"
    ADV = "Adverb"

class Number(GrammaticalEnum):
    S = "Singular"
    P = "Plural"

# keep this statement at the end, as all enums must be known first
grammatical_enums = dict(
    m for m in inspect.getmembers(sys.modules[__name__], inspect.isclass)
    if issubclass(m[1], GrammaticalEnum))

# you might prefer the shorter alternative:
# grammatical_enums = {k: v for (k, v) in globals().items()
#                      if inspect.isclass(v) and issubclass(v, GrammaticalEnum)}
Regarding typing, yakir0 already suggested the right types, but with the common base you can narrow them. If you like, you can even get rid of your functions altogether:
from typing import Type
from grammar_enums import grammatical_enums as g_enums
from grammar_enums import GrammaticalEnum

# just use g_enums instead of get_enum_value like this
WordType_ADJ: GrammaticalEnum = g_enums['WordType']['ADJ']

# ...or use your old functions:
# as your grammatical enums are collected in a dict now,
# you don't need this function any more:
def get_enum_type(name: str) -> Type[GrammaticalEnum]:
    return g_enums[name]

def get_enum_value(enum_name: str, value_name: str) -> GrammaticalEnum:
    # return get_enum_type(enum_name)[value_name]
    return g_enums[enum_name][value_name]
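A self-contained usage sketch of the approach above (the registry dict is built by hand here instead of via inspect, purely for illustration):

```python
from enum import Enum
from typing import Type

class GrammaticalEnum(Enum):
    """base marker for all grammatical enums"""

class WordType(GrammaticalEnum):
    ADJ = "Adjective"
    ADV = "Adverb"

# stands in for the grammatical_enums dict collected via inspect above
g_enums = {"WordType": WordType}

def get_enum_type(name: str) -> Type[GrammaticalEnum]:
    return g_enums[name]

def get_enum_value(enum_name: str, value_name: str) -> GrammaticalEnum:
    # Enum classes support lookup by member name via subscription
    return g_enums[enum_name][value_name]

print(get_enum_value("WordType", "ADJ"))  # WordType.ADJ
```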
You can always run your functions and print the result to get a sense of what the return type should be. Note that you can use Enum in type hints like any other class.
For example:
>>> result = get_enum_type('WordType')
>>> print(result)
<enum 'WordType'>
>>> print(type(result))
<class 'enum.EnumMeta'>
So you can actually use
get_enum_type(name: str) -> EnumMeta
But you can make it prettier by using Type from typing since EnumMeta is the type of a general Enum.
get_enum_type(name: str) -> Type[Enum]
For a similar process with get_enum_value you get
>>> type(get_enum_value('WordType', 'ADJ'))
<enum 'WordType'>
Obviously you won't always return the type WordType so you can use Enum to generalize the return type.
To sum it all up:
get_enum_type(name: str) -> Type[Enum]
get_enum_value(object_name: str, value_name: str) -> Enum
As I said in a comment, I don't think it's possible to have your dynamic code and have MyPy predict the outputs. For example, I don't think it would be possible to have MyPy know that get_enum_type("WordType") should be a WordType whereas get_enum_type("Number") should be a Number.
As others have said, you could clarify that they'll be Enums. You could add a base type and say that they'll specifically be subclasses of it. Part of the problem is that, although you could promise it, mypy wouldn't be able to confirm it: it can't know that inspect.getmembers(sys.modules[__name__], inspect.isclass) will yield only Enums or GrammaticalEnums in position [1].
If you're willing to change the implementation of your lookup, then I'd suggest you could make profitable use of __init_subclass__. Something like:
GRAMMATICAL_ENUM_LOOKUP: "Dict[str, Type[GrammaticalEnum]]" = {}

class GrammaticalEnum(Enum):
    def __init_subclass__(cls, **kwargs):
        GRAMMATICAL_ENUM_LOOKUP[cls.__name__] = cls
        super().__init_subclass__(**kwargs)

def get_enum_type(name: str) -> Type[GrammaticalEnum]:
    return GRAMMATICAL_ENUM_LOOKUP[name]
This at least has the advantage that the MyPy can see what's going on, and should be broadly happy with it. It knows that everything will in fact be a valid GrammaticalEnum because that's all that GRAMMATICAL_ENUM_LOOKUP gets populated with.
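A runnable sketch of this registry idea (imports added; the lookup values are the enum classes themselves, hence Type[GrammaticalEnum]):

```python
from enum import Enum
from typing import Dict, Type

GRAMMATICAL_ENUM_LOOKUP: Dict[str, Type["GrammaticalEnum"]] = {}

class GrammaticalEnum(Enum):
    def __init_subclass__(cls, **kwargs):
        # every subclass registers itself by name as it is defined
        GRAMMATICAL_ENUM_LOOKUP[cls.__name__] = cls
        super().__init_subclass__(**kwargs)

class Number(GrammaticalEnum):
    S = "Singular"
    P = "Plural"

def get_enum_type(name: str) -> Type[GrammaticalEnum]:
    return GRAMMATICAL_ENUM_LOOKUP[name]

print(get_enum_type("Number")["P"])  # Number.P
```

Because the registry only ever stores subclasses of GrammaticalEnum, mypy can trust the annotated return type without any casts.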

Getting the literal out of a python Literal type, at runtime?

How can I get the literal value out of a Literal[] from typing?
from typing import Literal, Union

Add = Literal['add']
Multiply = Literal['mul']
Action = Union[Add, Multiply]

def do(a: Action):
    if a == Add:
        print("Adding!")
    elif a == Multiply:
        print("Multiplying!")
    else:
        raise ValueError

do('add')
The code above type checks since 'add' is of type Literal['add'], but at runtime, it raises a ValueError since the string 'add' is not the same as typing.Literal['add'].
How can I, at runtime, reuse the literals that I defined at type level?
The typing module provides a function get_args which retrieves the arguments with which your Literal was initialized.
>>> from typing import Literal, get_args
>>> l = Literal['add', 'mul']
>>> get_args(l)
('add', 'mul')
However, I don't think you gain anything by using a Literal for what you propose. What would make more sense to me is to use the strings themselves, and then maybe define a Literal for the very strict purpose of validating that arguments belong to this set of strings.
>>> def my_multiply(*args):
...     print("Multiplying {0}!".format(args))
...
>>> def my_add(*args):
...     print("Adding {0}!".format(args))
...
>>> op = {'mul': my_multiply, 'add': my_add}
>>> def do(action: Literal[list(op.keys())]):
...     return op[action]
Remember, a type annotation is essentially a specialized type definition, not a value. It restricts which values are allowed to pass through, but by itself it merely implements a constraint: a filter that rejects values you don't want to allow. Its argument is the set of allowed values, so the constraint alone only specifies which values it will accept; an actual value appears only when you concretely use the annotation to validate one.
I guess the desire to get the value from the type is to avoid code duplication and enable broader refactors. But let's think about it for a second...
Let's consider code duplication. We don't want to have to write the same literal value twice. But here's the thing: we're going to have to write down something twice, either the type or the literal, so why not the literal?
Let's consider enabling refactors. Here we're worried that if we change the literal value of the type, code using the existing value will no longer work; it would be nice if we could change them all at once. Notice that the problem solved by the type checker is adjacent to this one: when you change that value, it will warn you everywhere the old value is no longer valid.
In this case you can opt to use an Enum to put the literal value inside the Literal type:
from typing import Literal, overload
from enum import Enum

class E(Enum):
    opt1 = 'opt1'
    opt2 = 'opt2'

@overload
def f(x: Literal[E.opt1]) -> str: ...
@overload
def f(x: Literal[E.opt2]) -> int: ...
def f(x: E):
    if x == E.opt1:
        return 'got 0'
    elif x == E.opt2:
        return 123
    raise ValueError(x)

a = f(E.opt1)
b = f(E.opt2)
reveal_type(a)
reveal_type(b)
# > mypy .\tmp.py
# tmp.py:28: note: Revealed type is "builtins.str"
# tmp.py:29: note: Revealed type is "builtins.int"
# Success: no issues found in 1 source file
Now when I want to change the "value" of E.opt1 no one else even cares, and when I want to change the "name" of E.opt1 to E.opt11 a refactoring tool will do it everywhere for me.
The "main problem" with this is that it will require users to use the Enum, when the whole point was trying to provide a convenient, value-based but type-safe, interface, right? Consider the following, enum-less code:
from typing import Literal, overload, get_args

TOpt1 = Literal['opt1']

@overload
def f(x: TOpt1) -> str: ...
@overload
def f(x: Literal['opt2']) -> int: ...
def f(x):
    # get_args returns a tuple of the allowed values, so use `in`
    if x in get_args(TOpt1):
        return 'got 0'
    elif x == 'opt2':
        return 123
    raise ValueError(x)

a = f('opt1')
b = f('opt2')
reveal_type(a)
reveal_type(b)
# > mypy .\tmp.py
# tmp.py:24: note: Revealed type is "builtins.str"
# tmp.py:25: note: Revealed type is "builtins.int"
I put both styles of checking the argument's value in there: def f(x: TOpt1) with if x in get_args(TOpt1), versus def f(x: Literal['opt2']) with elif x == 'opt2'. While the first style is "better" in some abstract sense, I would not write it that way unless TOpt1 appeared in multiple places (multiple overloads, or different functions). If it's only used in the one function for the one overload, I would just use the values directly and not bother with get_args and type aliases, because in the actual definition of f I would much rather look at a value than puzzle over a type argument.
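A middle ground (a sketch, assuming Python 3.8+ for get_args) is a tiny validator that turns an arbitrary string into a checked literal value at the boundary, so the rest of the code can trust it:

```python
from typing import Literal, cast, get_args

Action = Literal["add", "mul"]

def as_action(s: str) -> Action:
    # runtime gate: beyond this point the type checker can trust the value
    if s in get_args(Action):
        return cast(Action, s)
    raise ValueError(f"not a valid Action: {s!r}")

print(as_action("add"))  # add
```

The cast is safe precisely because the membership test just validated the value against the same Literal.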

Lexical cast from string to type

Recently, I was trying to store and read information from files in Python, and came across a slight problem: I wanted to read type information from text files. Casting from string to int or to float is easy enough, but casting from string to type seems to be another matter. Naturally, I tried something like this:
var_type = type('int')
However, type isn't used as a cast but as a mechanism to find the type of the variable, which is actually str here.
I found a way to do it with:
var_type = eval('int')
But I generally try to avoid functions/statements like eval or exec where I can. So my question is the following: Is there another pythonic (and more specific) way to cast a string to a type?
I like using locate, which works on built-in types:
>>> from pydoc import locate
>>> locate('int')
<type 'int'>
>>> t = locate('int')
>>> t('1')
1
...as well as anything it can find in the path:
>>> locate('datetime.date')
<type 'datetime.date'>
>>> d = locate('datetime.date')
>>> d(2015, 4, 23)
datetime.date(2015, 4, 23)
...including your custom types:
>>> locate('mypackage.model.base.BaseModel')
<class 'mypackage.model.base.BaseModel'>
>>> m = locate('mypackage.model.base.BaseModel')
>>> m()
<mypackage.model.base.BaseModel object at 0x1099f6c10>
You're a bit confused about what you're trying to do. Types, also known as classes, are objects, like everything else in Python. When you write int in your programs, you're referencing a global variable called int which happens to be a class. What you're trying to do is not "cast string to type", it's accessing builtin variables by name.
Once you understand that, the solution is easy to see:
def get_builtin(name):
    return getattr(__builtins__, name)
If you really wanted to turn a type name into a type object, here's how you'd do it. I use deque to do a breadth-first tree traversal without recursion.
def gettype(name):
    from collections import deque
    # q is short for "queue", here
    q = deque([object])
    while q:
        t = q.popleft()
        if t.__name__ == name:
            return t
        else:
            print('not', t)
        try:
            # Keep looking!
            q.extend(t.__subclasses__())
        except TypeError:
            # type.__subclasses__ needs an argument, for whatever reason.
            if t is type:
                continue
            else:
                raise
    else:
        raise ValueError('No such type: %r' % name)
Why not just use a look-up table?
known_types = {
    'int': int,
    'float': float,
    'str': str,
    # etc
}
var_type = known_types['int']
Perhaps this is what you want, it looks into builtin types only:
def gettype(name):
    t = getattr(__builtins__, name)
    if isinstance(t, type):
        return t
    raise ValueError(name)

Determine the type of an object? [duplicate]

This question already has answers here:
What's the canonical way to check for type in Python?
Is there a simple way to determine if a variable is a list, dictionary, or something else?
There are two built-in functions that help you identify the type of an object. You can use type() if you need the exact type of an object, and isinstance() to check an object's type against something. Usually you want isinstance() most of the time, since it is very robust and also supports type inheritance.
To get the actual type of an object, you use the built-in type() function. Passing an object as the only parameter will return the type object of that object:
>>> type([]) is list
True
>>> type({}) is dict
True
>>> type('') is str
True
>>> type(0) is int
True
This of course also works for custom types:
>>> class Test1(object):
...     pass
...
>>> class Test2(Test1):
...     pass
...
>>> a = Test1()
>>> b = Test2()
>>> type(a) is Test1
True
>>> type(b) is Test2
True
Note that type() will only return the immediate type of the object, but won’t be able to tell you about type inheritance.
>>> type(b) is Test1
False
To cover that, you should use the isinstance function. This of course also works for built-in types:
>>> isinstance(b, Test1)
True
>>> isinstance(b, Test2)
True
>>> isinstance(a, Test1)
True
>>> isinstance(a, Test2)
False
>>> isinstance([], list)
True
>>> isinstance({}, dict)
True
isinstance() is usually the preferred way to ensure the type of an object because it will also accept derived types. So unless you actually need the type object (for whatever reason), using isinstance() is preferred over type().
The second parameter of isinstance() also accepts a tuple of types, so it’s possible to check for multiple types at once. isinstance will then return true, if the object is of any of those types:
>>> isinstance([], (tuple, list, set))
True
Use type():
>>> a = []
>>> type(a)
<type 'list'>
>>> f = ()
>>> type(f)
<type 'tuple'>
It might be more Pythonic to use a try...except block. That way, if you have a class which quacks like a list, or quacks like a dict, it will behave properly regardless of what its type really is.
To clarify, the preferred method of "telling the difference" between variable types is something called duck typing: as long as the methods (and return types) that a variable responds to are what your subroutine expects, treat it like what you expect it to be. For example, if you have a class that overloads the bracket operators with __getitem__ and __setitem__ but uses some funny internal scheme, it would be appropriate for it to behave as a dictionary if that's what it's trying to emulate.
The other problem with the type(A) is type(B) checking is that if A is a subclass of B, it evaluates to false when, programmatically, you would hope it would be true. If an object is a subclass of a list, it should work like a list: checking the type as presented in the other answer will prevent this. (isinstance will work, however).
On instances of object you also have the:
__class__
attribute. Here is a sample taken from Python 3.3 console
>>> str = "str"
>>> str.__class__
<class 'str'>
>>> i = 2
>>> i.__class__
<class 'int'>
>>> class Test():
...     pass
...
>>> a = Test()
>>> a.__class__
<class '__main__.Test'>
Beware that in Python 3.x and in new-style classes (available from Python 2.2), class and type have been merged, and this can sometimes lead to unexpected results. Mainly for this reason, my favorite way of testing types/classes is the isinstance built-in function.
Determine the type of a Python object
Determine the type of an object with type
>>> obj = object()
>>> type(obj)
<class 'object'>
Although it works, avoid double underscore attributes like __class__ - they're not semantically public, and, while perhaps not in this case, the builtin functions usually have better behavior.
>>> obj.__class__ # avoid this!
<class 'object'>
type checking
Is there a simple way to determine if a variable is a list, dictionary, or something else? I am getting an object back that may be either type and I need to be able to tell the difference.
Well that's a different question, don't use type - use isinstance:
def foo(obj):
    """given a string with items separated by spaces,
    or a list or tuple,
    do something sensible
    """
    if isinstance(obj, str):
        obj = obj.split()
    return _foo_handles_only_lists_or_tuples(obj)
This covers the case where your user might be doing something clever or sensible by subclassing str - according to the principle of Liskov Substitution, you want to be able to use subclass instances without breaking your code - and isinstance supports this.
Use Abstractions
Even better, you might look for a specific Abstract Base Class from collections or numbers:
from collections.abc import Iterable
from numbers import Number

def bar(obj):
    """does something sensible with an iterable of numbers,
    or just one number
    """
    if isinstance(obj, Number):  # make it a 1-tuple
        obj = (obj,)
    if not isinstance(obj, Iterable):
        raise TypeError('obj must be either a number or iterable of numbers')
    return _bar_sensible_with_iterable(obj)
Or Just Don't explicitly Type-check
Or, perhaps best of all, use duck-typing, and don't explicitly type-check your code. Duck-typing supports Liskov Substitution with more elegance and less verbosity.
def baz(obj):
    """given an obj, a dict (or anything with an .items method)
    do something sensible with each key-value pair
    """
    for key, value in obj.items():
        _baz_something_sensible(key, value)
Conclusion
Use type to actually get an instance's class.
Use isinstance to explicitly check for actual subclasses or registered abstractions.
And just avoid type-checking where it makes sense.
You can use type() or isinstance().
>>> type([]) is list
True
Be warned that you can clobber list or any other type by assigning a variable in the current scope of the same name.
>>> the_d = {}
>>> t = lambda x: "aight" if type(x) is dict else "NOPE"
>>> t(the_d)
'aight'
>>> dict = "dude."
>>> t(the_d)
'NOPE'
Above we see that dict gets reassigned to a string, therefore the test:
type({}) is dict
...fails.
To get around this and use type() more cautiously:
>>> import __builtin__
>>> the_d = {}
>>> type({}) is dict
True
>>> dict =""
>>> type({}) is dict
False
>>> type({}) is __builtin__.dict
True
Be careful using isinstance:
>>> isinstance(True, bool)
True
>>> isinstance(True, int)
True
but type:
>>> type(True) == bool
True
>>> type(True) == int
False
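If bool really must be excluded, the same idea can be wrapped in a small helper (is_strict_int is a hypothetical name, not a stdlib function):

```python
def is_strict_int(x):
    # type() ignores inheritance, so bool (a subclass of int) is rejected
    return type(x) is int

print(is_strict_int(5))     # True
print(is_strict_int(True))  # False
```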
While the question is pretty old, I stumbled across it while working out a proper approach myself, and I think it still needs clarifying, at least for Python 2.x (I did not check Python 3, but since the issue arises with classic classes, which are gone in that version, it probably doesn't matter).
Here I'm trying to answer the title's question: how can I determine the type of an arbitrary object? Other suggestions about whether or not to use isinstance are fine in many comments and answers, but I'm not addressing those concerns.
The main issue with the type() approach is that it doesn't work properly with old-style instances:
class One:
    pass

class Two:
    pass

o = One()
t = Two()
o_type = type(o)
t_type = type(t)
print "Are o and t instances of the same class?", o_type is t_type
Executing this snippet would yield:
Are o and t instances of the same class? True
Which, I argue, is not what most people would expect.
The __class__ approach is the closest to correct, but it won't work in one crucial case: when the passed-in object is an old-style class (not an instance!), since those objects lack that attribute.
This is the smallest snippet of code I could think of that satisfies such legitimate question in a consistent fashion:
#!/usr/bin/env python
from types import ClassType

# we adopt the null object pattern in the (unlikely) case
# that __class__ is None for some strange reason
_NO_CLASS = object()

def get_object_type(obj):
    obj_type = getattr(obj, "__class__", _NO_CLASS)
    if obj_type is not _NO_CLASS:
        return obj_type
    # AFAIK the only situation where this happens is an old-style class
    obj_type = type(obj)
    if obj_type is not ClassType:
        raise ValueError("Could not determine object '{}' type.".format(obj_type))
    return obj_type
Using type():
x = 'hello this is a string'
print(type(x))
output:
<class 'str'>
To extract only the str, use this:
x = 'this is a string'
print(type(x).__name__)  # you can use __name__ to get the class name
output:
str
If you use type(variable).__name__, the result is human-readable.
In many practical cases, instead of using type or isinstance you can also use functools.singledispatch, which is used to define generic functions (a function composed of multiple functions implementing the same operation for different types).
In other words, you would want to use it when you have a code like the following:
def do_something(arg):
    if isinstance(arg, int):
        ...  # some code specific to processing integers
    if isinstance(arg, str):
        ...  # some code specific to processing strings
    if isinstance(arg, list):
        ...  # some code specific to processing lists
    ...  # etc
Here is a small example of how it works:
from functools import singledispatch

@singledispatch
def say_type(arg):
    raise NotImplementedError(f"I don't work with {type(arg)}")

@say_type.register
def _(arg: int):
    print(f"{arg} is an integer")

@say_type.register
def _(arg: bool):
    print(f"{arg} is a boolean")

>>> say_type(0)
0 is an integer
>>> say_type(False)
False is a boolean
>>> say_type(dict())
# long error traceback ending with:
NotImplementedError: I don't work with <class 'dict'>
Additionally, we can use abstract classes to cover several types at once:
from collections.abc import Sequence

@say_type.register
def _(arg: Sequence):
    print(f"{arg} is a sequence!")
>>> say_type([0, 1, 2])
[0, 1, 2] is a sequence!
>>> say_type((1, 2, 3))
(1, 2, 3) is a sequence!
As an aside to the previous answers, it's worth mentioning the existence of collections.abc which contains several abstract base classes (ABCs) that complement duck-typing.
For example, instead of explicitly checking if something is a list with:
isinstance(my_obj, list)
you could, if you're only interested in seeing if the object you have allows getting items, use collections.abc.Sequence:
from collections.abc import Sequence
isinstance(my_obj, Sequence)
if you're strictly interested in objects that allow getting, setting and deleting items (i.e mutable sequences), you'd opt for collections.abc.MutableSequence.
Many other ABCs are defined there, Mapping for objects that can be used as maps, Iterable, Callable, et cetera. A full list of all these can be seen in the documentation for collections.abc.
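For instance, a function that accepts "anything dict-like" can check against Mapping and still work with other mapping implementations such as collections.OrderedDict (a small sketch; the describe helper is hypothetical):

```python
from collections.abc import Mapping
from collections import OrderedDict

def describe(obj):
    # accepts dict, OrderedDict, and any other Mapping implementation
    if isinstance(obj, Mapping):
        return f"mapping with {len(obj)} key(s)"
    return "not a mapping"

print(describe({"a": 1}))               # mapping with 1 key(s)
print(describe(OrderedDict(a=1, b=2)))  # mapping with 2 key(s)
print(describe([1, 2]))                 # not a mapping
```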
value = 12
print(type(value))  # will print <class 'int'> (means integer)
or you can do something like this:
value = 12
print(type(value) == int)  # will print True
type() is a better solution than isinstance(), particularly for booleans:
True and False are just keywords that mean 1 and 0 in python. Thus,
isinstance(True, int)
and
isinstance(False, int)
both return True. Both booleans are an instance of an integer. type(), however, is more clever:
type(True) == int
returns False.
In general you can extract a string with the class name from an object,
str_class = object.__class__.__name__
and use it for comparison,
if str_class == 'dict':
    # blablabla..
elif str_class == 'customclass':
    # blebleble..
For the sake of completeness, isinstance will not work for type checking of a subtype that is not an instance. While that makes perfect sense, none of the answers (including the accepted one) covers it. Use issubclass for that.
>>> class a(list):
... pass
...
>>> isinstance(a, list)
False
>>> issubclass(a, list)
True

Help--Function Pointers in Python

My idea of program:
I have a dictionary:
options = { 'string' : select_fun(function pointer),
            'float'  : select_fun(function pointer),
            'double' : select_fun(function pointer) }
Whatever type comes in, the single function select_fun(function pointer) gets called.
Inside select_fun(function pointer), I will have different functions for float, double, and so on.
Depending on the function pointer, the specified function will get called.
I don't know whether my programming knowledge is good or bad, but still I need help.
Could you be more specific on what you're trying to do? You don't have to do anything special to get function pointers in Python -- you can pass around functions like regular objects:
def plus_1(x):
    return x + 1

def minus_1(x):
    return x - 1

func_map = {'+': plus_1, '-': minus_1}

func_map['+'](3)  # returns plus_1(3) ==> 4
func_map['-'](3)  # returns minus_1(3) ==> 2
You can use the type() built-in function to detect the type of a value.
Say, if you want to check if a certain name hold a string data, you could do this:
if type(this_is_string) == type('some random string'):
    # this_is_string is indeed a string
So in your case, you could do it like this:
options = { 'some string'    : string_function,
            (float)(123.456) : float_function,
            (int)(123)       : int_function }

def call_option(arg):
    # loop through the dictionary
    for (k, v) in options.iteritems():
        # if found matching type...
        if type(k) == type(arg):
            # call the matching function
            func = options[k]
            func(arg)
Then you can use it like this:
call_option('123') # string_function gets called
call_option(123.456) # float_function gets called
call_option(123) # int_function gets called
I don't have a python interpreter nearby and I don't program in Python much so there may be some errors, but you should get the idea.
EDIT: As per #Adam's suggestion, there are built-in type constants that you can check against directly, so a better approach would be:
import types

options = { types.StringType : string_function,
            types.FloatType  : float_function,
            types.IntType    : int_function,
            types.LongType   : long_function }
def call_option(arg):
    for (k, v) in options.iteritems():
        # check if arg is of type k
        if type(arg) == k:
            # call the matching function
            func = options[k]
            func(arg)
And since the key itself is comparable to the value of the type() function, you can just do this:
def call_option(arg):
    func = options[type(arg)]
    func(arg)
Which is more elegant :-) save for some error-checking.
EDIT: And for ctypes support, after some fiddling around, I've found that ctypes.[type_name_here] is actually implemented as classes. So this method still works, you just need to use the ctypes.c_xxx type classes.
options = { ctypes.c_long    : c_long_processor,
            ctypes.c_ulong   : c_unsigned_long_processor,
            types.StringType : python_string_processor }

call_option = lambda x: options[type(x)](x)
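On Python 3 the same idea needs no types module at all: the classes themselves (str, float, int) serve as dictionary keys. A sketch with hypothetical handler names:

```python
def process_string(x):
    return f"string: {x}"

def process_float(x):
    return f"float: {x}"

def process_int(x):
    return f"int: {x}"

options = {str: process_string, float: process_float, int: process_int}

def call_option(arg):
    # type(arg) is itself a hashable class object, usable as a dict key;
    # note type(True) is bool, so a bool would raise KeyError here
    return options[type(arg)](arg)

print(call_option("123"))  # string: 123
print(call_option(1.5))    # float: 1.5
```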
Looking at your example, it seems to me like some C procedure directly translated to Python.
For this reason, I think there could be a design issue, because usually, in Python, you do not care about the type of an object, but only about the messages you can send to it.
Of course, there are plenty of exceptions to this approach, but still in this case I would try encapsulating in some polymorphism; eg.
class StringSomething(object):
    data = None
    def data_function(self):
        string_function_pointer(self.data)

class FloatSomething(object):
    data = None
    def data_function(self):
        float_function_pointer(self.data)

etc.
Again, all of this under the assumption you are translating from a procedural language to python; if it is not the case, then discard my answer :-)
Functions are first-class objects in Python, therefore you can pass them as arguments to other functions just as you would any other object, such as a string or an integer.
There is no single-precision floating point type in Python. Python's float corresponds to C's double.
def process(anobject):
    if isinstance(anobject, basestring):
        # anobject is a string
        fun = process_string
    elif isinstance(anobject, (float, int, long, complex)):
        # anobject is a number
        fun = process_number
    else:
        raise TypeError("expected string or number but received: '%s'" % (
            type(anobject),))
    return fun(anobject)
There is functools.singledispatch, which allows you to create a generic function:

from functools import singledispatch
from numbers import Number

@singledispatch
def process(anobject):  # default implementation
    raise TypeError("'%s' type is not supported" % type(anobject))

@process.register(str)
def _(anobject):
    # handle strings here
    return process_string(anobject)

process.register(Number)(process_number)  # use existing function for numbers
On Python 2, similar functionality is available as pkgutil.simplegeneric().
Here are a couple of code examples of using generic functions:
Remove whitespaces and newlines from JSON file
Make my_average(a, b) work with any a and b for which f_add and d_div are defined, as well as builtins
Maybe you want to call the same select_fun() every time, with a different argument. If that is what you mean, you need a different dictionary:
>>> options = {'string' : str, 'float' : float, 'double' : float }
>>> options
{'double': <type 'float'>, 'float': <type 'float'>, 'string': <type 'str'>}
>>> def call_option(val, func):
... return func(val)
...
>>> call_option('555',options['float'])
555.0
>>>
