How would I run a function given its name? (Python)

I have a large number of blending functions:
mix(a, b)
add(a, b)
sub(a, b)
xor(a, b)
...
These functions all take the same inputs and provide different outputs, all of the same type.
However, I do not know which function must be run until runtime.
How would I go about implementing this behavior?
Example code:
def add(a, b):
    return a + b

def mix(a, b):
    return a * b

# Required blend -> decided by other code
blend_name = "add"

a = input("Some input")
b = input("Some other input")
result = run(blend_name, a, b)  # I need a run function that resolves the name
I have looked online, but most searches lead to either running functions from the console, or how to define a function.

I'm not a big fan of using a dictionary in this case, so here is my approach using getattr. Technically it is almost the same thing, and the principle is almost the same, but the code looks cleaner, at least to me.
class operators:
    def add(self, a, b):
        return a + b

    def mix(self, a, b):
        return a * b

# Required blend -> decided by other code
blend_name = "add"

a = input("Some input")
b = input("Some other input")
method = getattr(operators, blend_name)
result = method(operators, a, b)  # passing the class itself as self works here, since the methods use no instance state
print(result)  # prints "12" for inputs 1 and 2, because input() returns strings and "1" + "2" concatenates
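If you prefer working with an instance instead of passing the class in as self, a small variation on the same idea (a sketch, not part of the original answer) uses a bound method:
oper = operators()
method = getattr(oper, blend_name)  # bound method; no explicit self argument needed
result = method(a, b)
print(result)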
EDIT
Here is an edited version without getattr, and it looks cleaner still. You can turn this class into a module and import it as needed, and adding new operators is easy, without having to register each operator in two places (as you would when using a dictionary to store functions as key/value pairs).
class operators:
    def add(self, a, b):
        return a + b

    def mix(self, a, b):
        return a * b

    def calculate(self, blend_name, a, b):
        return operators.__dict__[blend_name](self, a, b)

# Required blend -> decided by other code
oper = operators()
blend_name = "add"

a = input("Some input")
b = input("Some other input")
result = oper.calculate(blend_name, a, b)
print(result)
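If you want to keep calculate but avoid reaching into __dict__ directly, one possible variation (my own sketch, not from the answer) is to look the method up on self with getattr, which also respects inheritance:
def calculate(self, blend_name, a, b):
    # getattr on self returns a bound method, so no explicit self argument is needed
    return getattr(self, blend_name)(a, b)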

You can create a dictionary that maps the function names to their function objects and use that to call them. For example:
functions = {"add": add, "sub": sub} # and so on
func = functions[blend_name]
result = func(a, b)
Or, a little more compact, but perhaps less readable:
result = functions[blend_name](a, b)
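As a small optional extension (not part of the original answer), dict.get makes it easy to fail with a clear error when blend_name is unknown:
func = functions.get(blend_name)
if func is None:
    raise ValueError(f"Unknown blend function: {blend_name!r}")
result = func(a, b)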

You could use the globals() dictionary for the module.
result = globals()[blend_name](a, b)
It would be prudent to add some validation for the values of blend_name.
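For example, one possible sketch of such validation (the whitelist below is an assumption based on the function names in the question):
ALLOWED_BLENDS = {"mix", "add", "sub", "xor"}  # assumed set of valid blend names

if blend_name not in ALLOWED_BLENDS:
    raise ValueError(f"Unsupported blend: {blend_name!r}")
result = globals()[blend_name](a, b)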


Add a function using another function's parameter declaration

I am trying to add some customized logic on top of an existing function. Here is an example:
# existing function that I cannot change
def sum(a, b, c, d):
    return a + b + c + d

# the function I want to build
def sumMultiply(a, b, c, d, multiplier):
    return multiplier * sum(a, b, c, d)
This is a stupid example, but essentially I want to build a new function that takes all the parameters of the existing function and adds a few new arguments.
The above solution is problematic when the existing function changes its definition. For example:
# in some updates the original function dropped one parameter
def sum(a, b, c):
    return a + b + c

# the new function will give an error since there is no parameter "d"
def sumMultiply(a, b, c, d, multiplier):
    return multiplier * sum(a, b, c, d)  # error
How can I specify the new function so that I do not need to worry about changing the new function definition when the existing function definition changes?
One way would be to use arbitrary positional or keyword arguments:
def sumMultiply(multiplier, *numbers):
    return multiplier * sum(*numbers)

def sumMultiply(multiplier, *args, **kwargs):
    return multiplier * sum(*args, **kwargs)
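Assuming the original four-argument sum from the question, either variant would then be called like this (the numbers are just for illustration):
print(sumMultiply(2, 1, 2, 3, 4))  # 2 * sum(1, 2, 3, 4) == 20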
However, if you see yourself passing the same set of data around, consider making a parameter object. In your case, it can simply be a list:
def sum(numbers):
    ...

def sumMultiply(multiplier, numbers):
    return multiplier * sum(numbers)
There are some additional downsides to using arbitrary arguments:
the arguments are implicit: you might need to dig through several layers to see what you actually need to provide
they don't play well with type annotations and other static analysers (e.g. PyCharm's refactorings)
I would create a decorator-style wrapper function:
def create_fun_multiplier(fun, multiplier=1):
    def multiplier_fun(*args):
        return multiplier * fun(*args)
    return multiplier_fun

def my_sum(a, b, c):
    return a + b + c

sumMultiply = create_fun_multiplier(my_sum, multiplier=2)
print(sumMultiply(3, 4, 7))
I would look at using keyword args for this problem.
e.g.
def sum(a, b, c):
    return a + b + c

def sumMultiply(*args, multiplier=1):
    return multiplier * sum(*args)
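A quick usage sketch of that keyword-only version (the values are illustrative):
print(sumMultiply(1, 2, 3, multiplier=2))  # 2 * sum(1, 2, 3) == 12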

Function as an argument of another function

I'm still learning this language, hence I'm new to Python. The code is:
def add(a, b):
    return a + b

def double_add(x, a, b):
    return x(x(a, b), x(a, b))

a = 4
b = 5
print(double_add(add, a, b))
The add function is simple: it adds two numbers. The double_add function has three arguments. I understand what is happening (with some doubts). The result is 18, but I can't understand how double_add uses add.
The question is, what is the connection between these two functions?
It would be helpful if tell me some examples of using a function as an argument of another function.
Thanks in advance.
In Python, functions (and methods) are first-class objects, meaning they can be passed around and handled like any other value.
So you can simply pass a function as an argument.
Your call will return add(add(4, 5), add(4, 5)), which is add(9, 9), which equals 18.
A function is an object just like any other in Python. So you can pass it as an argument, assign attributes to it, and, perhaps most importantly, call it. We can look at a simpler example to understand how passing a function works:
def add(a, b):
    return a + b

def sub(a, b):
    return a - b

def operate(func, a, b):
    return func(a, b)

a = 4
b = 5
print(operate(add, a, b))
print(operate(sub, a, b))
operate(print, a, b)
And this prints out:
9
-1
4 5
That is because in each case func is bound to the function object passed as an argument, and func(a, b) then calls that function on the given arguments.
So what happens with your line:
return x(x(a, b), x(a, b))
is first both x(a, b) are evaluated as add(4, 5) which gives 9. And then the outer x(...) is evaluated as add(9, 9) which gives 18.
If you would add print(x) in the double_add function you would see that it would print <function add at 0x10dd12290>.
Therefore, the code of double_add is basically the same as if you would do following:
print(add(add(a,b), add(a,b))) # returns 18 in your case
Functions are objects in Python, just like anything else such as lists or strings, and you can pass them around the same way you pass variables.
The function object add is passed as an argument to double_add, where it is locally referred to as x. x is then called twice on a and b, and once more on the two return values from those calls.
def double_add(x, a, b):
    return x(x(a, b), x(a, b))
Let's write it differently so it's easier to explain:
def double_add(x, a, b):
    result1 = x(a, b)
    result2 = x(a, b)
    return x(result1, result2)
This means, take the function x, and apply it to the parameters a and b. x could be whatever function here.
print(double_add(add, a, b))
Then this means: call the double_add function, giving it add as the first parameter. So double_add would do:
result1 = add(a, b)
result2 = add(a, b)
return add(result1, result2)
This is a very simple example of what is called "dependency injection". What it means is that you are not explicitly defining an interaction between the two functions; instead, you are defining that double_add should use some function, but it only knows which one when the code is actually run. (At runtime you are injecting the dependency on a specific function, instead of hardcoding it in the function itself.)
Try, for example, the following:
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

def double_add(x, a, b):
    return x(x(a, b), x(a, b))

a = 4
b = 5
print(double_add(add, a, b))
print(double_add(subtract, a, b))
In other words, double_add has become a generic function that will apply whatever function you give it and return the result (18 for add, 0 for subtract with these inputs).

Best way to handle functions and sub functions

What is the 'Pythonic' way to handle functions and sub-functions in a scenario where they are used in a particular order?
As one of the ideas seems to be that functions should do one thing, I find myself splitting up functions even though they have a fixed order of execution.
When the functions are really a kind of 'do step 1', then 'with the outcome of step 1, do step 2', I currently end up wrapping the step functions in another function while defining them all at the same level. However, I'm wondering if this is indeed the way I should be doing it.
Example code:
def step_1(data):
    # do stuff on data
    return a

def step_2(data, a):
    # do stuff on data with a
    return b

def part_1(data):
    a = step_1(data)
    b = step_2(data, a)
    return a, b

def part_2(data_set_2, a, b):
    # do stuff on data_set_2 with a and b as input
    return c
I'd be calling this from another file/script (or Jupyter notebook) as part_1 and then part_2
Seems to be working just fine for my purposes right now, but as I said I'm wondering at this (early) stage if I should be using a different approach for this.
I guess you could use a class here; otherwise your code can be made shorter as follows:
def step_1(data):
    a = ...  # do stuff on data to produce a
    return step_2(data, a)

def step_2(data, a):
    b = ...  # do stuff on data with a to produce b
    return a, b

def part_2(data_set_2, a, b):
    c = ...  # do stuff on data_set_2 with a and b as input
    return c
As a rule of thumb, if several functions use the same arguments, it is a good idea to group them together into a class. But you can also define a main() or run() function that makes use of your functions in a sequential fashion. Since the example you have made is not too complex, I would avoid using classes and go for something like:
def step_1(data):
    a = ...  # do stuff on data to produce a
    return step_2(data, a)

def step_2(data, a):
    b = ...  # do stuff on data with a to produce b
    return a, b

def part_2(data_set_2, a, b):
    c = ...  # do stuff on data_set_2 with a and b as input
    return c

def run(data, data_set_2):
    a, b = step_1(data)
    return part_2(data_set_2, a, b)

result = run(data, data_set_2)
If the code grows in complexity, using classes is advised. In the end, it's your choice.
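For completeness, here is one hypothetical class-based sketch of the same pipeline (names such as Pipeline are mine, not from the question), grouping the shared data and the steps together:
class Pipeline:
    def __init__(self, data, data_set_2):
        self.data = data
        self.data_set_2 = data_set_2

    def step_1(self):
        a = ...  # do stuff on self.data
        return a

    def step_2(self, a):
        b = ...  # do stuff on self.data with a
        return b

    def part_2(self, a, b):
        c = ...  # do stuff on self.data_set_2 with a and b
        return c

    def run(self):
        a = self.step_1()
        b = self.step_2(a)
        return self.part_2(a, b)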

How can you take a function as a parameter and call it?

My task is the following: "Write a function named operate that takes as parameters 2 integers named a, b and a function named func that takes 2 integers as parameters. Also write the functions add, sub, mul, and div that take 2 integer parameters and perform the operation corresponding to their name and print the result. Calling operate(a, b, func) should result in a call to func(a, b)". I've done the first four parts, but I'm stuck on how to implement operate. Here is my code so far:
# this adds two numbers given
def add(a, b):
    print(a + b)

# this subtracts two numbers given
def sub(a, b):
    print(b - a)

# this multiplies two numbers given
def mul(a, b):
    print(a * b)

# this divides two numbers given
def div(a, b):
    print(a / b)
To achieve this you need to return something from your functions, not just print something. This lets you use the result later. To do this just use the return statement with some expression:
def add(a, b):
    return a + b

def sub(a, b):
    return a - b

def mul(a, b):
    return a * b

def div(a, b):
    return a / b
I've changed the order of your sub operation to be more in line with how subtraction is generally defined.
To now write an operate function is actually pretty easy. You've been given two parts already: the signature should be operate(a, b, func) and you should call func(a, b). This is actually almost all of what it will end up as - all you need to do is again return it (you could also print it here if you wanted):
def operate(a, b, func):
    return func(a, b)
You can now do something like this:
print(operate(3, 2, add))
print(operate(3, 2, sub))
print(operate(3, 2, mul))
print(operate(3, 2, div))
Which will result in the output:
5
1
6
1.5
In a comment I asked about the standard library: all of these are already implemented by Python. You can replace the first four function definitions with this:
from operator import add, sub, mul, truediv as div
Leaving you to only define operate and do some testing.
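As a quick sanity check that the operator versions drop in cleanly (these calls mirror the examples above):
from operator import add, sub, mul, truediv as div

def operate(a, b, func):
    return func(a, b)

print(operate(3, 2, add))  # 5
print(operate(3, 2, div))  # 1.5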

From Haskell to functional Python

I want to translate some Haskell code into Python.
The Haskell classes/instances look like:
{-# LANGUAGE MultiParamTypeClasses #-}
module MyModule where

class Example a b where
    doSomething :: a -> b -> Bool
    doSomethingElse :: a -> b -> Int

instance Example Int Int where
    doSomething a b = (a + b * 2) > 5
    doSomethingElse a b = a - b * 4
Is there a way in Python to approximate the Haskell class/instance construct?
What is the least offensive way to translate this into Python?
This doesn't really have an analogue in Python, but you can fake it:
def int_int_doSomething(a, b):
    return (a + b * 2) > 5

def int_int_doSomethingElse(a, b):
    return a - b * 4

Example = {}
Example[(int, int)] = (int_int_doSomething, int_int_doSomethingElse)

def doSomething(a, b):
    types = type(a), type(b)
    return Example[types][0](a, b)

def doSomethingElse(a, b):
    types = type(a), type(b)
    return Example[types][1](a, b)
All you have to do is add new values to Example for each type combination you want to have. You could even throw in some extra error handling in doSomething and doSomethingElse, or some other methods to make it easier. Another way would be to make an object that keeps track of all of these and lets you add new types to the map in a more managed way, but it's just more bookkeeping on top of what I've already shown.
Keep in mind that this is essentially how Haskell does it, too, except the checks are performed at compile time. Typeclasses are really nothing more than a dictionary lookup on the type to pick the appropriate functions to insert into the computation. Haskell just does this automatically for you at compile time instead of you having to manage it yourself like you do in Python.
To add that bookkeeping, you could do something like the following, keeping it in its own module and then it'll only (by default) export the symbols in __all__. This keeps things looking more like the Haskell version:
class _Example(object):
    def __init__(self, doSomething, doSomethingElse):
        self.doSomething = doSomething
        self.doSomethingElse = doSomethingElse

ExampleStore = {}

def register(type1, type2, instance):
    ExampleStore[(type1, type2)] = instance

def doSomething(a, b):
    types = type(a), type(b)
    return ExampleStore[types].doSomething(a, b)

def doSomethingElse(a, b):
    types = type(a), type(b)
    return ExampleStore[types].doSomethingElse(a, b)

def Example(type1, type2, doSomething, doSomethingElse):
    register(type1, type2, _Example(doSomething, doSomethingElse))

__all__ = [
    'doSomething',
    'doSomethingElse',
    'Example'
]
Then you can make instances like
Example(int, int,
        doSomething=lambda a, b: (a + b * 2) > 5,
        doSomethingElse=lambda a, b: a - b * 4
)
Which looks almost like Haskell.
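A brief usage sketch, assuming the module above has been imported and the int/int instance registered as shown:
print(doSomething(1, 2))       # (1 + 2 * 2) > 5 -> False
print(doSomethingElse(10, 1))  # 10 - 1 * 4 -> 6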
You don't have parametric types in Python, as it's dynamically typed. Also the distinction between classes and instances is clear in Python, but as classes are themselves "live objects", the distinction of usage might be a little bit blurred sometimes...
For your case, a classical implementation might go as:
# you don't really need this base class, it's just for documenting purposes
class Example:
    def doSomething(self, a, b):
        raise NotImplementedError

    def doSomethingElse(self, a, b):
        raise NotImplementedError

class ConcreteClass(Example):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def doSomething(self, a, b):
        return (a + b * self.x) > self.y

    def doSomethingElse(self, a, b):
        return a - b * self.z
instance = ConcreteClass(2, 5, 4)
but I personally dislike that convoluted style, so you might just go with something more lightweight, like:
from collections import namedtuple

Example = namedtuple('Example', 'doSomething doSomethingElse')

instance = Example((lambda a, b: (a + b * 2) > 5),
                   (lambda a, b: a - b * 4))
And of course, rely on duck typing and usually "let it crash". The lack of type safety should be made up for with extensive unit testing.
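A short usage sketch of the namedtuple version above (values are illustrative):
print(instance.doSomething(2, 3))       # (2 + 3 * 2) > 5 -> True
print(instance.doSomethingElse(10, 2))  # 10 - 2 * 4 -> 2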
