My idea of the program:
I have a dictionary:
options = { 'string' : select_fun(function pointer),
            'float'  : select_fun(function pointer),
            'double' : select_fun(function pointer)
          }
Whatever type comes in, the single function select_fun(function pointer) gets called.
Inside select_fun(function pointer), I will have different functions for float, double, and so on.
Depending on the function pointer, the specified function will get called.
I don't know whether my programming knowledge is good or bad, but I still need help.
Could you be more specific on what you're trying to do? You don't have to do anything special to get function pointers in Python -- you can pass around functions like regular objects:
def plus_1(x):
    return x + 1

def minus_1(x):
    return x - 1

func_map = {'+': plus_1, '-': minus_1}

func_map['+'](3)  # returns plus_1(3) ==> 4
func_map['-'](3)  # returns minus_1(3) ==> 2
You can use the type() built-in function to detect the type of a value.
Say, if you want to check whether a certain name holds string data, you could do this:
if type(this_is_string) == type('some random string'):
    # this_is_string is indeed a string
So in your case, you could do it like this:
options = { 'some string' : string_function,
            123.456       : float_function,
            123           : int_function
          }
def call_option(arg):
    # loop through the dictionary
    for (k, v) in options.iteritems():
        # if we found a matching type...
        if type(k) == type(arg):
            # ...call the matching function (v is the function itself)
            return v(arg)
Then you can use it like this:
call_option('123') # string_function gets called
call_option(123.456) # float_function gets called
call_option(123) # int_function gets called
I don't have a Python interpreter nearby and I don't program in Python much, so there may be some errors, but you should get the idea.
EDIT: As per Adam's suggestion, there are built-in type constants that you can check against directly, so a better approach would be:
import types
options = { types.StringType : string_function,
            types.FloatType  : float_function,
            types.IntType    : int_function,
            types.LongType   : long_function
          }
def call_option(arg):
    for (k, v) in options.iteritems():
        # check if arg is of type k
        if type(arg) == k:
            # call the matching function
            func = options[k]
            func(arg)
And since each key is exactly what type() returns for the matching argument, you can just do this:
def call_option(arg):
    func = options[type(arg)]
    func(arg)
Which is more elegant :-) save for some error-checking.
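For instance, a minimal sketch of that error-checking (the error message wording is my own):
def call_option(arg):
    func = options.get(type(arg))
    if func is None:
        raise TypeError("no function registered for %s" % type(arg))
    return func(arg)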
EDIT: And for ctypes support, after some fiddling around, I've found that the ctypes.[type_name_here] types are actually implemented as classes. So this method still works, you just need to use the ctypes.c_xxx type classes.
options = { ctypes.c_long    : c_long_processor,
            ctypes.c_ulong   : c_unsigned_long_processor,
            types.StringType : python_string_processor
          }
call_option = lambda x: options[type(x)](x)
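As a quick sanity check (a sketch; the processor functions above are stand-ins), the type of a ctypes instance is the class itself, so the dictionary lookup works:
import ctypes
n = ctypes.c_long(42)
print type(n) is ctypes.c_long  # True, so options[type(n)] finds c_long_processor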
Looking at your example, it seems to me like a C procedure directly translated into Python.
For this reason, I think there could be a design issue, because usually, in Python, you do not care about the type of an object, but only about the messages you can send to it.
Of course, there are plenty of exceptions to this approach, but in this case I would still try encapsulating it in some polymorphism; e.g.
class StringSomething(object):
    data = None
    def data_function(self):
        string_function_pointer(self.data)

class FloatSomething(object):
    data = None
    def data_function(self):
        float_function_pointer(self.data)
etc.
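Usage would then be along these lines (a sketch, assuming the function pointers above are defined):
s = StringSomething()
s.data = 'hello'
s.data_function()  # ends up calling string_function_pointer('hello')

f = FloatSomething()
f.data = 1.5
f.data_function()  # ends up calling float_function_pointer(1.5)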
Again, all of this under the assumption you are translating from a procedural language to python; if it is not the case, then discard my answer :-)
Functions are first-class objects in Python, so you can pass them as arguments to other functions just as you would any other object, such as a string or an integer.
There is no single-precision floating point type in Python. Python's float corresponds to C's double.
def process(anobject):
    if isinstance(anobject, basestring):
        # anobject is a string
        fun = process_string
    elif isinstance(anobject, (float, int, long, complex)):
        # anobject is a number
        fun = process_number
    else:
        raise TypeError("expected string or number but received: '%s'" % (
            type(anobject),))
    return fun(anobject)
There is functools.singledispatch, which allows you to create a generic function:
from functools import singledispatch
from numbers import Number

@singledispatch
def process(anobject):  # default implementation
    raise TypeError("'%s' type is not supported" % type(anobject))

@process.register(str)
def _(anobject):
    # handle strings here
    return process_string(anobject)

process.register(Number)(process_number)  # use existing function for numbers
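As a self-contained, runnable sketch (the two handlers here are stand-ins of my own):
from functools import singledispatch
from numbers import Number

@singledispatch
def process(anobject):
    raise TypeError("'%s' type is not supported" % type(anobject))

@process.register(str)
def _(anobject):
    return "processed string: " + anobject

@process.register(Number)
def _(anobject):
    return anobject * 2

print(process("abc"))  # processed string: abc
print(process(21))     # 42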
On Python 2, similar functionality is available as pkgutil.simplegeneric().
Here are a couple of code examples of using generic functions:
Remove whitespaces and newlines from a JSON file
Make my_average(a, b) work with any a and b for which f_add and d_div are defined, as well as builtins
Maybe you want to call the same select_fun() every time, with a different argument. If that is what you mean, you need a different dictionary:
>>> options = {'string' : str, 'float' : float, 'double' : float }
>>> options
{'double': <type 'float'>, 'float': <type 'float'>, 'string': <type 'str'>}
>>> def call_option(val, func):
...     return func(val)
...
>>> call_option('555',options['float'])
555.0
>>>
How can I get the literal value out of a Literal[] from typing?
from typing import Literal, Union

Add = Literal['add']
Multiply = Literal['mul']
Action = Union[Add, Multiply]

def do(a: Action):
    if a == Add:
        print("Adding!")
    elif a == Multiply:
        print("Multiplying!")
    else:
        raise ValueError

do('add')
The code above type checks since 'add' is of type Literal['add'], but at runtime, it raises a ValueError since the string 'add' is not the same as typing.Literal['add'].
How can I, at runtime, reuse the literals that I defined at type level?
The typing module provides a function get_args which retrieves the arguments with which your Literal was initialized.
>>> from typing import Literal, get_args
>>> l = Literal['add', 'mul']
>>> get_args(l)
('add', 'mul')
However, I don't think you gain anything by using a Literal for what you propose. What would make more sense to me is to use the strings themselves, and then maybe define a Literal for the very strict purpose of validating that arguments belong to this set of strings.
>>> def my_multiply(*args):
...     print("Multiplying {0}!".format(args))
...
>>> def my_add(*args):
...     print("Adding {0}!".format(args))
...
>>> op = {'mul': my_multiply, 'add': my_add}
>>> def do(action: Literal[list(op.keys())]):
...     return op[action]
Remember, a type annotation is essentially a specialized type definition, not a value. It restricts which values are allowed to pass through, but by itself it merely implements a constraint: a filter which rejects values you don't want to allow. As illustrated above, its argument is the set of allowed values, so the constraint alone merely specifies which values it will accept; an actual value only appears when you concretely use the type to validate one.
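If you do want that constraint enforced at runtime, one small sketch combines the get_args trick shown earlier with a plain membership test (the error message is my own):
from typing import Literal, get_args

Action = Literal['add', 'mul']

def do(a: str) -> None:
    if a not in get_args(Action):
        raise ValueError("expected one of %s" % (get_args(Action),))
    print("Adding!" if a == 'add' else "Multiplying!")

do('add')  # prints: Adding!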
I guess that the desire to get the value from the type is to avoid code duplication, and enable broader refactors. But let's think about it a second...
Let's consider code duplication. We don't want to have to write the same literal value twice. But here's the thing, we're going to have to write down something twice, either the type or the literal, so why not the literal?
Let's consider enabling refactors. In this case we're worried that if we change the literal value of the type then code using the existing value will no longer work, it would be nice if we could change them all at once. Notice that the problem solved by the type-checker is adjacent to this one: when you change that value it will warn you everywhere that that value is no longer valid.
In this case you can opt to use an Enum to put the literal value inside the Literal type:
from typing import Literal, overload
from enum import Enum

class E(Enum):
    opt1 = 'opt1'
    opt2 = 'opt2'

@overload
def f(x: Literal[E.opt1]) -> str:
    ...

@overload
def f(x: Literal[E.opt2]) -> int:
    ...

def f(x: E):
    if x == E.opt1:
        return 'got 0'
    elif x == E.opt2:
        return 123
    raise ValueError(x)

a = f(E.opt1)
b = f(E.opt2)
reveal_type(a)
reveal_type(b)

# > mypy .\tmp.py
# tmp.py:28: note: Revealed type is "builtins.str"
# tmp.py:29: note: Revealed type is "builtins.int"
# Success: no issues found in 1 source file
Now when I want to change the "value" of E.opt1 no one else even cares, and when I want to change the "name" of E.opt1 to E.opt11 a refactoring tool will do it everywhere for me.
The "main problem" with this is that it will require users to use the Enum, when the whole point was trying to provide a convenient, value-based but type-safe, interface, right? Consider the following, enum-less code:
from typing import Literal, overload, get_args

TOpt1 = Literal['opt1']

@overload
def f(x: TOpt1) -> str:
    ...

@overload
def f(x: Literal['opt2']) -> int:
    ...

def f(x):
    if x in get_args(TOpt1):
        return 'got 0'
    elif x == 'opt2':
        return 123
    raise ValueError(x)

a = f('opt1')
b = f('opt2')
reveal_type(a)
reveal_type(b)

# > mypy .\tmp.py
# tmp.py:24: note: Revealed type is "builtins.str"
# tmp.py:25: note: Revealed type is "builtins.int"
I put both styles of checking the value of the argument in there: def f(x: TOpt1) with if x in get_args(TOpt1), versus def f(x: Literal['opt2']) with elif x == 'opt2'. While the first style is "better" in some abstract sense, I would not write it that way unless TOpt1 appeared in multiple places (multiple overloads, or different functions). If it's only used in one function for one overload, I would just use the values directly and not bother with get_args and type aliases, because in the actual definition of f I would much rather look at a value than puzzle over a type argument.
In Python I can do something like this:
def wrap(f):
    def wrapper(*args, **kwargs):
        print "args: ", args, kwargs
        res = f(*args, **kwargs)
        print "result: ", res
        return res
    return wrapper
This lets me wrap any function regardless of the arguments they take. For instance:
In [8]: def f(thing):
            print "in f:", thing
            return 3

In [9]: wrapped_f = wrap(f)

In [10]: wrapped_f(2)
args: (2,) {}
in f: 2
result: 3
Out[10]: 3
Is there a way to do something similar (write a wrapper that can be applied to any function regardless of its input/output types) in Scala?
You could certainly do this with macros. You can convert a method call to a function with partial application:
object Foo {
  def bar(i: Int): Int = i + 1
}

val fn = Foo.bar _

// REPL output:
// defined object Foo
// fn: Int => Int = <function1>
Now you have an object, in this case of type Function1[Int, Int], which you can pass to a Scala macro, which would be something like this (not tested):
object DecoratorMacros {
  import reflect.macros.blackbox
  import language.experimental.macros

  def decorate[A](fn: A): A = macro decorate_impl[A]

  def decorate_impl[A: c.WeakTypeTag](c: blackbox.Context) = {
    import c.universe._
    val tpe = weakTypeOf[A]
    ...
  }
}
In the body of the macro, you can inspect the whole type signature of fn: A, which will include the arguments. You can then write code to do your desired side effects, and return a function which you can then invoke. Something like this:
DecoratorMacros.decorate(Foo.bar _)(42)
Macros are fairly involved, but I can elaborate if you think this is a path you'd like to go down.
There is a fundamental issue here: in Scala you have to know what arguments the function expects and actually pass them, so that the compiler can be sure the types match.
Say there is def f(a: List[Int], b: String) = ... and def g(args: Any*) = f(args). This won't compile! (Any* means any number of objects of any type.) The problem is that Any* is still only a single argument, which is actually translated into a kind of Array.
To make this clearer, consider an example situation: you have called wrap(f) with some function f(a: String, b: String). Then you take the resulting wrapper, which would somehow accept any number of arguments of any kind, and make the call wrapper_f(List(1), "a"). The wrapper_f(...) call itself would be well-formed, but inside the wrapper the wrapped function has a completely different parameter list, which cannot accept a List[Int] and a String. You would get a type error at runtime, which should (in general) be impossible in a statically typed language like Scala.
Is it possible to dynamically unwrap a list/tuple/map items as arguments to a function in Scala? I am looking for a Scala equivalent of Python's args/kwargs.
For instance, in Python if a function is defined as def foo(bar1, bar2, bar3=None, bar4=1) then given a list x=[1,7] and a dictionary y={'bar3':True, 'bar4':9} you can call foo as foo(*x, **y).
Just to be clear, the following is valid Python code:
def foo(bar1, bar2, bar3=None, bar4=1):
    print("bar1="+str(bar1)+" bar2="+str(bar2)+" bar3="+str(bar3)+" bar4="+str(bar4))

x = [1, 7]
y = {'bar3': True, 'bar4': 9}
foo(*x, **y)
However, there is no analogous Scala syntax. There are some similar things, but the main reason this is never going to be possible is that it would violate the compile-time type checking that Scala requires. Let's look more closely.
The reasons
First, think about the varargs portion. Here you want to be able to pass in an arbitrary-length list of arguments and have it fill in the relevant function parameters. This will never work in Scala because the type checker requires that the parameters passed into a function be valid. In your scenario, foo() can accept a parameter list x of length two, but no less. But since any Seq can have an arbitrary number of elements, how would the type checker know that the x being passed in is valid at compile time?
Second, think about the keyword arguments. Here you are asking for the function to accept an arbitrary Map of argument names and values. But you get the same problem: how can the compile-time type checker know that you are passing in all of the necessary arguments? Or, further, that they are the right types? After all, the example you give is a Map containing both a Boolean and an Int, which would have the type Map[String, Any], so how would the type checker know that this would match your parameter types?
Some solutions
Scala's varargs
You can do some similar things, but not this exactly. For example, if you defined your function to explicitly use varargs, you can pass in a Seq:
def foo(bar1: Int*) = println(f"bar1=$bar1")
val x = Seq(1, 2)
foo(x:_*)
This works because Scala knows that it only needs a sequence of zero or more arguments, and a Seq will always contain zero or more items, so it matches. Further, it only works if the types match as well; here it's expecting a sequence of Ints, and gets it.
tupled
The other thing you can do is to pass in a tuple of arguments:
def foo(bar1: Int, bar2: Int, bar3: Boolean = false, bar4: Int = 1) = println(f"bar1=$bar1 bar2=$bar2 bar3=$bar3 bar4=$bar4")
val x = (1, 2, true, 9)
(foo _).tupled(x)
Again, this works because Scala's type checker can verify that the arguments are valid. The function requires four arguments, of types Int, Int, Boolean, and Int, and since a tuple in Scala has a fixed length and known (and possibly different) types for each position, the type-checker can verify that the arguments match the expected parameters.
Sort of an edge case for the original question, but if you want to pass a Map of arguments for a case class, this seems to work:
scala> case class myCC(foo: String = "bar", negInt: Int = -1)

scala> val row = myCC()

scala> println(row)
myCC(bar,-1)

scala> val overrides = Map("foo" -> "baz")

scala> row.getClass.getDeclaredFields foreach { f =>
         f.setAccessible(true)
         overrides.foreach { case (k, v) => if (k == f.getName) f.set(row, v) }
       }

scala> println(row)
myCC(baz,-1)
(borrowed from Scala: How to access a class property dynamically by name?)
The original answers don't mention handling a Map as a list of pairs - it can easily be converted to a map (the -> operator is just shorthand for a pair):
def parse(options: (String, String)*) = println (options.toMap)
You can use varargs syntax:
def printAll(strings: String*) {
  strings.map(println)
}
Now you can use this function like so:
printAll("foo")
or so:
printAll("foo", "bar")
or so:
printAll("foo", "bar", "baz")
Recently, I was trying to store and read information from files in Python, and came across a slight problem: I wanted to read type information from text files. Converting a string to an int or a float is easy enough, but converting a string to a type seems to be another problem entirely. Naturally, I tried something like this:
var_type = type('int')
However, type isn't used as a cast but as a mechanism to find the type of the variable, which is actually str here.
I found a way to do it with:
var_type = eval('int')
But I generally try to avoid functions/statements like eval or exec where I can. So my question is the following: Is there another pythonic (and more specific) way to cast a string to a type?
I like using locate, which works on built-in types:
>>> from pydoc import locate
>>> locate('int')
<type 'int'>
>>> t = locate('int')
>>> t('1')
1
...as well as anything it can find in the path:
>>> locate('datetime.date')
<type 'datetime.date'>
>>> d = locate('datetime.date')
>>> d(2015, 4, 23)
datetime.date(2015, 4, 23)
...including your custom types:
>>> locate('mypackage.model.base.BaseModel')
<class 'mypackage.model.base.BaseModel'>
>>> m = locate('mypackage.model.base.BaseModel')
>>> m()
<mypackage.model.base.BaseModel object at 0x1099f6c10>
You're a bit confused about what you're trying to do. Types, also known as classes, are objects, like everything else in Python. When you write int in your programs, you're referencing a global variable called int which happens to be a class. What you're trying to do is not "cast string to type"; it's accessing builtin variables by name.
Once you understand that, the solution is easy to see:
def get_builtin(name):
    return getattr(__builtins__, name)
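One caveat: __builtins__ is a module in the __main__ script but a plain dict inside imported modules, so a more robust sketch imports the builtins module explicitly:
import builtins  # named __builtin__ on Python 2

def get_builtin(name):
    return getattr(builtins, name)

print(get_builtin('int')('42'))  # 42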
If you really wanted to turn a type name into a type object, here's how you'd do it. I use deque to do a breadth-first tree traversal without recursion.
def gettype(name):
    from collections import deque
    # q is short for "queue", here
    q = deque([object])
    while q:
        t = q.popleft()
        if t.__name__ == name:
            return t
        else:
            print 'not', t
        try:
            # Keep looking!
            q.extend(t.__subclasses__())
        except TypeError:
            # type.__subclasses__ needs an argument, for whatever reason.
            if t is type:
                continue
            else:
                raise
    else:
        raise ValueError('No such type: %r' % name)
Why not just use a look-up table?
known_types = {
    'int': int,
    'float': float,
    'str': str,
    # etc
}

var_type = known_types['int']
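Tying this back to the original file-reading use case, a quick sketch (the line format here is made up):
line = "float,3.14"  # e.g. one line read from your text file
type_name, raw_value = line.split(",", 1)
value = known_types[type_name](raw_value)
print(value)  # 3.14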
Perhaps this is what you want, it looks into builtin types only:
def gettype(name):
    t = getattr(__builtins__, name)
    if isinstance(t, type):
        return t
    raise ValueError(name)
Basically I want to do this:
obj = 'str'
type ( obj ) == string
I tried:
type ( obj ) == type ( string )
and it didn't work.
Also, what about the other types? For example, I couldn't replicate NoneType.
isinstance()
In your case, isinstance("this is a string", str) will return True.
You may also want to read this: http://www.canonical.org/~kragen/isinstance/
First, avoid all type comparisons. They're very, very rarely necessary. Sometimes, they help to check parameter types in a function -- even that's rare. Wrong type data will raise an exception, and that's all you'll ever need.
All of the basic conversion functions are themselves the types that type() returns:
type(9) is int
type(2.5) is float
type('x') is str
type(u'x') is unicode
type(2+3j) is complex
There are a few other cases.
isinstance( 'x', basestring )
isinstance( u'u', basestring )
isinstance( 9, int )
isinstance( 2.5, float )
isinstance( (2+3j), complex )
None, BTW, never needs any of this kind of type checking. None is the only instance of NoneType; the None object is a singleton. Just check for None:
variable is None
BTW, do not use the above in general. Use ordinary exceptions and Python's own natural polymorphism.
isinstance works:
if isinstance(obj, MyClass): do_foo(obj)
but, keep in mind: if it looks like a duck, and if it sounds like a duck, it is a duck.
EDIT: For the None type, you can simply do:
if obj is None: obj = MyClass()
For other types, check out the types module:
>>> import types
>>> x = "mystring"
>>> isinstance(x, types.StringType)
True
>>> x = 5
>>> isinstance(x, types.IntType)
True
>>> x = None
>>> isinstance(x, types.NoneType)
True
P.S. Typechecking is a bad idea.
You can always use the type(x) == type(y) trick, where y is something with known type.
# check if x is a regular string
type(x) == type('')
# check if x is an integer
type(x) == type(1)
# check if x is a NoneType
type(x) == type(None)
Often there are better ways of doing that, particularly with any recent python. But if you only want to remember one thing, you can remember that.
In this case, the better ways would be:
# check if x is a regular string
type(x) == str
# check if x is either a regular string or a unicode string
type(x) in [str, unicode]
# alternatively:
isinstance(x, basestring)
# check if x is an integer
type(x) == int
# check if x is a NoneType
x is None
Note the last case: there is only one instance of NoneType in python, and that is None. You'll see NoneType a lot in exceptions (TypeError: 'NoneType' object is unsubscriptable -- happens to me all the time..) but you'll hardly ever need to refer to it in code.
Finally, as fengshaun points out, type checking in python is not always a good idea. It's more pythonic to just use the value as though it is the type you expect, and catch (or allow to propagate) exceptions that result from it.
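For example, a minimal sketch of that EAFP style:
def to_int(s):
    # just attempt the conversion and deal with failure,
    # instead of checking type(s) up front
    try:
        return int(s)
    except (TypeError, ValueError):
        return None  # or re-raise, depending on your needs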
Use isinstance(object, type). As above this is easy to use if you know the correct type, e.g.,
isinstance('dog', str) ## gives bool True
But for more esoteric objects, this can be difficult to use.
For example:
import numpy as np
a = np.array([1,2,3])
isinstance(a,np.array) ## breaks
but you can do this trick:
y = type(np.array([1]))
isinstance(a,y) ## gives bool True
So I recommend instantiating a variable (y in this case) with a type of the object you want to check (e.g., type(np.array())), then using isinstance.
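That said, for this particular case the array class is importable directly as np.ndarray (if I remember the numpy API correctly), so the trick isn't needed:
import numpy as np
a = np.array([1, 2, 3])
isinstance(a, np.ndarray)  # True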
You're very close! string is a module, not a type. You probably want to compare the type of obj against the type object for strings, namely str:
type(obj) == str # this works because str is already a type
Alternatively:
type(obj) == type('')
Note, in Python 2, if obj is a unicode type, then neither of the above will work. Nor will isinstance(). See John's comments to this post for how to get around this... I've been trying to remember it for about 10 minutes now, but was having a memory block!
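If memory serves, the usual Python 2 workaround is the common base class basestring:
# Python 2 only: basestring is the shared ancestor of str and unicode
isinstance(obj, basestring)  # True for both 'abc' and u'abc'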
Use str instead of string
type ( obj ) == str
Explanation
>>> a = "Hello"
>>> type(a)==str
True
>>> type(a)
<type 'str'>
>>>
That's because you have to write:
s = "hello"
type(s) == type("")
type() accepts an instance and returns its type. In this case you have to compare two instances' types.
If you need to do preemptive checking, it is better to check for a supported interface than for the type.
The type does not really tell you much, apart from the fact that your code wants an instance of a specific type, regardless of the fact that you could have another instance of a completely different type which would be perfectly fine because it implements the same interface.
For example, suppose you have this code
def firstElement(parameter):
    return parameter[0]
Now, suppose you say: I want this code to accept only a tuple.
import types

def firstElement(parameter):
    if type(parameter) != types.TupleType:
        raise TypeError("function accepts only a tuple")
    return parameter[0]
This reduces the reusability of this routine: it won't work if you pass a list, or a string, or a numpy.array. Something better would be
def firstElement(parameter):
    if not (hasattr(parameter, "__getitem__") and callable(getattr(parameter, "__getitem__"))):
        raise TypeError("interface violation")
    return parameter[0]
but there's no point in doing it: parameter[0] will raise an exception if the protocol is not satisfied anyway... unless, of course, you want to prevent side effects, or avoid having to recover from calls that you invoked before failing. A (stupid) example, just to make the point:
import os

def firstElement(parameter):
    if not (hasattr(parameter, "__getitem__") and callable(getattr(parameter, "__getitem__"))):
        raise TypeError("interface violation")
    os.system("rm file")
    return parameter[0]
in this case, your code will raise an exception before running the system() call. Without interface checks, you would have removed the file, and then raised the exception.
I use type(x) == type(y)
For instance, if I want to check something is an array:
type( x ) == type( [] )
string check:
type( x ) == type( '' ) or type( x ) == type( u'' )
If you want to check against None, use is
x is None
I think this should do it:
if isinstance(obj, str):
type() doesn't work on certain classes. If you're not sure of the object's type, use the __class__ attribute, like so:
>>> obj = 'a string'
>>> obj.__class__ == str
True
Also see this article - http://www.siafoo.net/article/56
To get the type, use the __class__ member, as in unknown_thing.__class__
Talk of duck-typing is useless here because it doesn't answer a perfectly good question. In my application code I never need to know the type of something, but it's still useful to have a way to learn an object's type. Sometimes I need to get the actual class to validate a unit test. Duck typing gets in the way there because all possible objects have the same API, but only one is correct. Also, sometimes I'm maintaining somebody else's code, and I have no idea what kind of object I've been passed. This is my biggest problem with dynamically typed languages like Python. Version 1 is very easy and quick to develop. Version 2 is a pain in the buns, especially if you didn't write version 1. So sometimes, when I'm working with a function I didn't write, I need to know the type of a parameter, just so I know what methods I can call on it.
That's where the __class__ attribute comes in handy. That (as far as I can tell) is the best way (maybe the only way) to get an object's type.
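As a concrete sketch of the unit-test scenario mentioned above (Widget and make_widget are hypothetical names):
import unittest

class TestFactory(unittest.TestCase):
    def test_returns_exact_class(self):
        obj = make_widget()  # hypothetical factory under test
        # assertIs checks identity, so even a subclass of Widget would fail here
        self.assertIs(obj.__class__, Widget)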
You can also compare classes to check inheritance relationships.
#!/usr/bin/env python
#coding:utf8

class A(object):
    def t(self):
        print 'A'
    def r(self):
        print 'rA',
        self.t()

class B(A):
    def t(self):
        print 'B'

class C(A):
    def t(self):
        print 'C'

class D(B, C):
    def t(self):
        print 'D',
        super(D, self).t()

class E(C, B):
    pass

d = D()
d.t()
d.r()
e = E()
e.t()
e.r()

print isinstance(e, D)  # False
print isinstance(e, E)  # True
print isinstance(e, C)  # True
print isinstance(e, B)  # True
print isinstance(e, (A,))  # True
print e.__class__ >= A,  # False
print e.__class__ <= C,  # False
print e.__class__ < E,   # False
print e.__class__ <= E   # True
Because type() returns an object, you can access the name of the type via its __name__ attribute.
Example:
years = 5
user = {'name': 'Smith', 'age': 20}

print(type(years).__name__)
# output: int
print(type(user).__name__)
# output: dict