Here's my situation:
    import foo, bar, etc

    frequency = ["hours", "days", "weeks"]

    class geoProcessClass():
        def __init__(self, geoTaskHandler, startDate, frequency, frequencyMultiple=1, *args):
            self.interval = self.__determineTimeDelta(frequency, frequencyMultiple)

        def __determineTimeDelta(self, frequency, frequencyMultiple):
            if frequency in frequency:
                interval = datetime.timedelta(print eval(frequency + "=" + str(frequencyMultiple)))
                return interval
            else:
                interval = datetime.timedelta("days=1")
                return interval
I want to dynamically define a time interval with timedelta, but this does not seem to work.
Is there any specific way to make this work? I'm getting invalid syntax here.
Are there any better ways to do it?
You can call a function with dynamic arguments using syntax like func(**kwargs), where kwargs is a dictionary of name/value mappings for the named arguments.
I also renamed the global frequency list to frequencies since the line if frequency in frequency didn't make a whole lot of sense.
    class geoProcessClass():
        def __init__(self, geoTaskHandler, startDate, frequency, frequencyMultiple=1, *args):
            self.interval = self.determineTimeDelta(frequency, frequencyMultiple)

        def determineTimeDelta(self, frequency, frequencyMultiple):
            frequencies = ["hours", "days", "weeks"]
            if frequency in frequencies:
                kwargs = {frequency: frequencyMultiple}
            else:
                kwargs = {"days": 1}
            return datetime.timedelta(**kwargs)
For what it's worth, stylistically it's usually frowned upon to silently correct errors a caller makes. If the caller calls you with invalid arguments you should probably fail immediately and loudly rather than try to keep chugging. I'd recommend against that if statement.
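Following that advice, a fail-loud variant might look like this (a standalone sketch, not the class from the question; the function name is my own):

```python
import datetime

def determine_time_delta(frequency, frequency_multiple=1):
    """Build a timedelta from a unit name, failing loudly on bad input."""
    if frequency not in ("hours", "days", "weeks"):
        raise ValueError("unknown frequency: %r" % frequency)
    # Build the keyword arguments dynamically and unpack them with **
    return datetime.timedelta(**{frequency: frequency_multiple})
```

Callers passing a bad unit get an immediate ValueError instead of a silently substituted default.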
For more information on variable-length and keyword argument lists, see:
The Official Python Tutorial
PEP 3102: Keyword-Only Arguments
Your use of print eval(...) looks a bit over-complicated (and wrong, as you mention).
If you want to pass a keyword argument to a function, just do it:
    interval = datetime.timedelta(frequency=str(frequencyMultiple))
I don't see a keyword argument called frequency though, so that might be a separate problem.
While doing programming exercises on codewars.com, I encountered an exercise on currying and partial functions.
Being a novice in programming and new to the topic, I searched on the internet for information on the topic and got quite far into solving the exercise. However I have now stumbled upon an obstacle I can't seem to overcome and am here looking for a nudge in the right direction.
The exercise is rather simple: write a function that can curry and/or partial any input function and evaluates the input function once enough input parameters are supplied. The input function can accept any number of input parameters. Also the curry/partial function should be very flexible in how it is called, being able to handle many, many different ways of calling the function. Also, the curry/partial function is allowed to be called with more inputs than required by the input function, in that case all the excess inputs need to be ignored.
Following the exercise link, all the test cases can be found that the function needs to be able to handle.
The code I came up with is the following:
    from functools import partial
    from inspect import signature

    def curry_partial(func, *initial_args):
        """ Generates a 'curried' version of a function. """

        # Process any initial arguments that were given. If the number of
        # arguments given exceeds minArgs (the number of input arguments that
        # func needs), func is evaluated.
        minArgs = len(signature(func).parameters)
        if initial_args:
            if len(initial_args) >= minArgs:
                return func(*initial_args[:minArgs])
            func = partial(func, *initial_args)
            minArgs = len(signature(func).parameters)

        # Do the currying
        def g(*myArgs):
            nonlocal minArgs
            # Evaluate the function if we have the necessary number of input arguments
            if minArgs is not None and minArgs <= len(myArgs):
                return func(*myArgs[:minArgs])
            def f(*args):
                nonlocal minArgs
                newArgs = myArgs + args if args else myArgs
                if minArgs is not None and minArgs <= len(newArgs):
                    return func(*newArgs[:minArgs])
                else:
                    return g(*newArgs)
            return f
        return g
Now this code fails when the following test is executed:
test.assert_equals(curry_partial(curry_partial(curry_partial(add, a), b), c), sum)
where add = a + b + c (properly defined function), a = 1, b = 2, c = 3, and sum = 6.
The reason this fails is because curry_partial(add, a) returns a function handle to the function g. In the second call, curry_partial(<function_handle to g>, b), the calculation minArgs = len(signature(func).parameters) doesn't work like I want it to, because it will now calculate how many input arguments function g requires (which is 1: i.e. *myArgs), and not how many the original func still requires. So the question is, how can I write my code such that I can keep track of how many input arguments my original func still needs (reducing that number each time I am partialling the function with any given initial arguments).
I still have much to learn about programming and currying/partial, so most likely I have not chosen the most convenient approach. But I'd like to learn. The difficulty in this exercise for me is the combination of partial and curry, i.e. doing a curry loop while partialling any initial arguments that are encountered.
Try this out.
    from inspect import signature

    # Here `is_set` acts like a flip-flop
    is_set = False
    params = 0

    def curry_partial(func, *partial_args):
        """
        Required argument: func
        Optional argument: partial_args
        Return:
        1) The result of `func` if `partial_args`
           contains the required number of items.
        2) The function `wrapper` if `partial_args`
           contains fewer than the required number of items.
        """
        global is_set, params
        if not is_set:
            is_set = True
            # If func is already a value, we should return it
            try:
                params = len(signature(func).parameters)
            except:
                return func
        try:
            is_set = False
            return func(*partial_args[:params])
        except:
            is_set = True

        def wrapper(*extra_args):
            """
            Optional argument: extra_args
            Return:
            1) The result of `func` if `args`
               contains the required number of items.
            2) The result of `curry_partial` if `args`
               contains fewer than the required number of items.
            """
            global is_set  # without this, the flip-flop is never reset
            args = partial_args + extra_args
            try:
                is_set = False
                return func(*args[:params])
            except:
                is_set = True
                return curry_partial(func, *args)
        return wrapper
This indeed isn't very good by design. Instead you should use a class to handle the internal state, such as the flip-flop (don't worry, we don't need any flip-flop there ;-)).
Whenever there's a function that takes arbitrary arguments, you can always instantiate that class passing the function. This time, however, I'll leave that to you.
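For illustration, one possible class-based shape (a sketch with names of my own choosing; the unwrap step is what keeps counting against the original function's signature rather than the wrapper's):

```python
from inspect import signature

class Curried:
    """Holds the original function plus the arguments collected so far."""
    def __init__(self, func, args=()):
        self.func = func
        self.args = args

    def __call__(self, *more):
        # Delegate back so argument counting happens in one place
        return curry_partial(self.func, *(self.args + more))

def curry_partial(func, *args):
    """Evaluate func once enough arguments have arrived, else keep collecting."""
    if isinstance(func, Curried):
        # Unwrap, so we count against the original function's parameters,
        # not against the wrapper's (*more) signature
        return curry_partial(func.func, *(func.args + args))
    if not callable(func):
        return func  # already a plain value
    required = len(signature(func).parameters)
    if len(args) >= required:
        return func(*args[:required])  # excess arguments are ignored
    return Curried(func, args)
```

With this, curry_partial(curry_partial(curry_partial(add, a), b), c) evaluates as soon as the original three parameters are available.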
I am not sure about currying, but if you need a simple partial function generator, you could try something like this:
    from functools import partial
    from inspect import signature

    def execute_or_partial(f, *args):
        max_args = len(signature(f).parameters)
        if len(args) >= max_args:
            return f(*args[:max_args])
        else:
            return partial(f, *args)

    s = lambda x, y, z: x + y + z

    t = execute_or_partial(s, 1)
    u = execute_or_partial(t, 2)
    v = execute_or_partial(u, 3)
    print(v)
or
    print(execute_or_partial(execute_or_partial(execute_or_partial(s, 1), 2), 3))
Even if it doesn't solve your original problem, see if you can use the above code to reduce code repetition (I am not sure, but I think there is some code repetition in the inner function?); that will make the subsequent problems easier to solve.
There could be functions in the standard library that already solve this problem. Many pure functional languages like Haskell have this feature built into the language.
My question is about how to deal with a piece of code where I am using the Caesar cipher.
The functions decrypt and encrypt have to deal with the limits of the alphabet (A-Z and a-z). I tried to write the two possible cycles for both alphabets in one cycle function named cycleencrypt.
But the function takes about 6 arguments, and I have read somewhere that having more than 3 arguments in one function makes it less readable and understandable, so my question is:
Should I reduce the number of arguments by splitting into two functions and make the piece of code longer (but maybe more understandable)?
Thanks for any answer, I appreciate it.
EDIT: Docstrings around the functions were deleted to make the main purpose of my question visible.
    def offsetctrl(offset):
        while offset < 0:
            offset += 26
        return offset

    def cycleencrypt(string, offset, index, listing, first, last):
        offset = offsetctrl(offset)
        if string >= ord(first) and string <= ord(last):
            string += offset
            while string > ord(last):
                string = ord(first) + (string - ord(last) - 1)
            listing[index] = chr(string)

Cycle for encrypting, with lots of arguments and control of negative offsets:

    def encrypt(retezec, offset):
        listing = list(retezec)
        for index in range(0, len(retezec)):
            string = ord(retezec[index])
            cycleencrypt(string, offset, index, listing, 'A', 'Z')
            cycleencrypt(string, offset, index, listing, 'a', 'z')
        print(''.join(listing))

The main encryption part, taking many arguments, with printing:

    def decrypt(retezec, offset):
        return encrypt(retezec, -offset)

    if __name__ == "__main__":
        encrypt("hey fellow how is it going", 5)
        decrypt("mjd kjqqtb mtb nx ny ltnsl", 5)
In this kind of situation, it's often better to write your code as a class. Your class's constructor could take just the minimum number of arguments that are required (which may be none at all!), and then optional arguments could be set as properties of the class or by using other methods.
When designing a class like this, I find it's most useful to start by writing the client code first -- that is, write the code that will use the class first, and then work backwards from there to design the class.
For example, I might want the code to look something like this:
    cypher = Cypher()
    cypher.offset = 17
    cypher.set_alphabet('A', 'Z')
    result = cypher.encrypt('hey fellow how is it going')
Hopefully it should be clear how to work from here to the design of the Cypher class, but if not, please ask a question on Stack Overflow about that!
If you want to provide encrypt and decrypt convenience methods, it's still easy to do. For example, you can write a function like:
    def encrypt(text, offset):
        cypher = Cypher()
        cypher.offset = offset
        return cypher.encrypt(text)
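To make the idea concrete, the Cypher class gestured at above might minimally look like this (a sketch; set_alphabet is omitted for brevity, and wrap-around uses modular arithmetic instead of the question's while loop):

```python
class Cypher:
    """Caesar cipher with a configurable offset property."""

    def __init__(self):
        self.offset = 0

    def _shift(self, ch, first, last):
        # Shift ch within [first, last], wrapping around with modular
        # arithmetic; characters outside the range pass through unchanged.
        if first <= ch <= last:
            base = ord(first)
            span = ord(last) - base + 1
            return chr(base + (ord(ch) - base + self.offset) % span)
        return ch

    def encrypt(self, text):
        # Apply the uppercase shift, then the lowercase shift; each
        # leaves characters outside its own range untouched.
        return ''.join(self._shift(self._shift(c, 'A', 'Z'), 'a', 'z')
                       for c in text)

    def decrypt(self, text):
        # Decryption is encryption with the negated offset
        inverse = Cypher()
        inverse.offset = -self.offset
        return inverse.encrypt(text)
```

Because Python's % handles negative operands, negative offsets need no special-casing here.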
Here is the docstring of datetime.datetime:
    class datetime(date):
        """datetime(year, month, day[, hour[, minute[, second[, microsecond[, tzinfo]]]]])
        ...
        """

And the signature of its constructor:

    def __new__(cls, year, month=None, day=None, hour=0,
                minute=0, second=0, microsecond=0, tzinfo=None):
What we could learn from it:
Add exactly as many arguments as it makes sense to add.
Use default parameter values to give arguments sensible defaults.
Side thought: do you think users of your library should use cycleencrypt()? You could mark it private (with an underscore), so everybody will see it's not a public API and that they should use encrypt() and decrypt() instead.
The number of arguments doesn't really matter as long as there aren't a dozen of them (maybe someone can link to what you mention about having more than 3 arguments; I may be wrong).
To be more readable at the definition of a function, write docstrings following the docstring conventions.
To be more readable at the call site, give default values in the definition wherever possible, choosing the most useful values (for example, offset could default to 1, and index to 0).
Either way, for a long line, follow the PEP 8 guidelines, which describe how to break lines correctly (lines must not exceed 79 characters, according to PEP 8).
    def cycleencrypt(string, listing, first, last,
                     offset=1, index=0):
        """Description

        :param string: description
        :param listing: description
        :param first: description
        :param last: description
        :param offset: description
        :param index: description
        :return: description
        """
        offset = offsetctrl(offset)
        if string >= ord(first) and string <= ord(last):
            string += offset
            while string > ord(last):
                string = ord(first) + (string - ord(last) - 1)
            listing[index] = chr(string)

(Note that parameters with default values must come after those without, so offset and index move to the end of the parameter list.)
According to Clean Code, a function should take zero arguments when at all possible.
If we take a trivial example in Python with a function that does nothing more than add 2 to whatever is passed to it:
    def add_two(x):
        return x + 2

    >>> add_two(1)
    3
The only way I can see that such a function can have zero arguments passed to it is by incorporating it into a class:
    class Number(object):
        def __init__(self, x):
            self.x = x

        def add_two(self):
            return self.x + 2

    >>> n = Number(1)
    >>> n.add_two()
    3
It hardly seems worth the effort. Is there another way of achieving the no-argument function in this example?
If we take a trivial example in Python with a function that does nothing more than add 2 to whatever is passed to it:
...
Is there another way of achieving the no-argument function in this example?
In a word, no.
In several words: by definition, your function takes an argument. There's no way to take an argument without taking an argument.
In General
That is truly bad general advice. No-argument methods are great for objects that modify themselves in a way that requires no external data, or for functions that only interact with globals (you should really have minimal need to do this). In your case, you need to modify a given object with external data, and both objects are of a builtin type (int).
In this case, just use a function with two arguments.
Too Many Arguments
Now, here is where the advice gets good. Say I need to do a complicated task, with numerous arguments.
    def complicatedfunction(arg1, arg2, arg3, arg4, arg5, arg6):
        '''This requires 6 pieces of info'''
        return ((arg1 + arg2) / (arg3 * arg4)) ** (arg5 + arg6)
In this case, you are making unreadable code, and you should reduce the number of arguments to 5 or less.
Reducing Argument Counts
In this case, you could pass a namedtuple:
    from collections import namedtuple

    Numbers = namedtuple("Numbers", "numerator denominator exponent")
    mynumber = Numbers(1 + 2, 3 + 4, 5 + 6)

    def complicatedfunction(number):
        return (number.numerator / number.denominator) ** (number.exponent)
This has the added benefit of then making your code easy to modify in the future. Say I need to add another argument: I can just add that as a value into the namedtuple.
    from collections import namedtuple

    Numbers = namedtuple("Numbers", "numerator denominator exponent add")
    mynumber = Numbers(1 + 2, 3 + 4, 5 + 6, 2)

    def complicatedfunction(number):
        value = (number.numerator / number.denominator) ** (number.exponent)
        return value + number.add
Object-Oriented Design
Or, if I want to use that specific namedtuple for a lot of different tasks, I can subclass it and then get a 0-argument method for a specific goal. I can add as many methods as I want, which allows me to use the object in a versatile manner, as follows:
    from collections import namedtuple

    class Numbers(namedtuple("Numbers", "numerator denominator exponent add")):
        def process(self):
            value = (self.numerator / self.denominator) ** (self.exponent)
            return value + self.add

    mynumber = Numbers(1 + 2, 3 + 4, 5 + 6, 2)
Why don't you use a default argument?

    def add_two(x=0):
        return x + 2

    >>> add_two(1)
    3
    >>> add_two()
    2
What is the best data structure to cache (save/store/memoize) so many function results in a database?
Suppose a function calc_regress with the following definition in Python:

    import scipy.stats as st

    def calc_regress(ind_key, dep_key, count=30):
        independent_list = sql_select_recent_values(count, ind_key)
        dependant_list = sql_select_recent_values(count, dep_key)
        return st.linregress(independent_list, dependant_list)
I see answers to "What kind of table structure should be used to store memoized function parameters and results in a relational database?", but it seems to address only a single function, while I have about 500 functions.
Option A
You could use the structure in the linked answer, un-normalized with the number of columns = max number of arguments among the 500 functions. Also need to add a column for the function name.
Then you could do a SELECT * FROM expensive_func_results WHERE func_name = 'calc_regress' AND arg1 = ind_key AND arg2 = dep_key and arg3 = count, etc.
Of course, that's not a very good design to use. For the same function called with fewer parameters, columns with null values/non-matches need to be ignored; otherwise you'll get multiple result rows.
Option B
Create the table/structure as func_name, arguments, result where 'arguments' is always a kwargs dictionary or positional args but not mixed per entry. Even with the kwargs dict stored as a string, order of keys->values in it is not predictable/consistent even if it's the same args. So you'll need to order it before converting to a string and storing it. When you want to query, you'll use SELECT * FROM expensive_func_results WHERE func_name = 'calc_regress' AND arguments = 'str(kwargs_dict)', where str(kwargs_dict) is something you'll set programmatically. It could also be set to the result of inspect.getargspec, (or inspect.getcallargs) though you'll have to check for consistency.
You won't be able to do queries on the argument combos unless you provide all the arguments to the query or partial match with LIKE.
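One way to get the consistent ordering Option B needs is to sort the dictionary during serialisation (a sketch using json; the function name is my own, and it assumes all argument values are JSON-serialisable):

```python
import json

def make_args_key(*args, **kwargs):
    """Canonical, repeatable string representing one call's arguments."""
    # sort_keys=True sorts every dictionary's keys during serialisation,
    # so the result does not depend on keyword-argument order
    return json.dumps({"args": list(args), "kwargs": kwargs}, sort_keys=True)
```

Two calls with the same keyword arguments in different order then produce the same string for the arguments column.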
Option C
Normalised all the way: One table func_calls as func_name, args_combo_id, arg_name_idx, arg_value. Each row of the table will store one arg for one combo of that function's calling args. Another table func_results as func_name, args_combo_id, result. You could also normalise further for func_name to be mapped to a func_id.
In this one, the order of keyword args doesn't matter since you'll be doing an inner join to select each parameter. This query will have to be built programmatically or done via a stored procedure, since the number of joins required to fetch all the parameters is determined by the number of parameters. Your function above has 3 params but you may have another with 10. arg_name_idx is 'argument name or index', so it also works for mixed kwargs + args. Some duplication may occur in cases like calc_regress(ind_key=1, dep_key=2, count=30) and calc_regress(1, 2, 30) (as well as calc_regress(1, 2) with a default value for count; such cases should be avoided, and the table entry should have all args), since the args_combo_id will be different for both but the result will obviously be the same. Again, the inspect module may help in this area.
[Edit] PS: Additionally, for the func_name, you may need to use a fully qualified name to avoid conflicts across modules in your package. And decorators may interfere with that as well; without a deco.__name__ = func.__name__, etc.
PPS: If objects are being passed to functions being memoized in the db, make sure that their __str__ is something useful & repeatable/consistent to store as arg values.
This particular case doesn't require you to re-create objects from the arg values in the db, otherwise, you'd need to make __str__ or __repr__ like the way __repr__ was intended to be (but isn't generally done):
this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment).
I'd use a key-value storage here, where the key could be a concatenation of the id of the function object (to guarantee the key's uniqueness) and its arguments, while the value would be the function's returned value.
So calc_regress(1, 5, 30) call would produce an example key 139694472779248_1_5_30 where the first part is id(calc_regress). An example key producing function:
>>> def produce_cache_key(fun, *args, **kwargs):
... args_key = '_'.join(str(a) for a in args)
... kwargs_key = '_'.join('%s%s' % (k, v) for k, v in kwargs.items())
... return '%s_%s_%s' % (id(fun), args_key, kwargs_key)
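Demonstrating the key function (a self-contained copy; note the trailing separator left by the empty kwargs part, and that the id() prefix differs between runs):

```python
def produce_cache_key(fun, *args, **kwargs):
    args_key = '_'.join(str(a) for a in args)
    kwargs_key = '_'.join('%s%s' % (k, v) for k, v in kwargs.items())
    return '%s_%s_%s' % (id(fun), args_key, kwargs_key)

def calc_regress(ind_key, dep_key, count=30):
    pass  # stand-in for the real function

key = produce_cache_key(calc_regress, 1, 5, 30)
# key looks like '139694472779248_1_5_30_'
```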
You could keep your results in memory using a dictionary and a decorator:
    >>> def cache_result(cache):
    ...     def decorator(fun):
    ...         def wrapper(*args, **kwargs):
    ...             key = produce_cache_key(fun, *args, **kwargs)
    ...             if key not in cache:
    ...                 cache[key] = fun(*args, **kwargs)
    ...             return cache[key]
    ...         return wrapper
    ...     return decorator
    ...
    >>> cache = {}
    >>> @cache_result(cache)
    ... def fx(x, y, z=0):
    ...     print 'Doing some expensive job...'
    ...
    >>> fx(1, 2, z=1)
    Doing some expensive job...
    >>> fx(1, 2, z=1)
    >>>
Why have a function as my_fun(an_arg, *arg), or even a_func(dict=None, **args), when you could just say my_func(*args)? Aren't we just repeating ourselves by using the former?
There's a difference between my_fun(an_arg, *arg) and my_func(*args).

    my_fun(an_arg, *arg)

Pass at least 1 argument, possibly more.

    my_func(*args)

Pass any number of arguments, even 0.
Demo:
    >>> def my_fun(an_arg, *arg):
    ...     pass
    ...
    >>> def my_fun1(*arg):
    ...     pass
    ...
    >>> my_fun()
    Traceback (most recent call last):
      ...
    TypeError: my_fun() missing 1 required positional argument: 'an_arg'
    >>> my_fun1(1)  # works fine
It's to help give your function a bit more meaning. Let's say that I'm trying to take in a function that increments a list of numbers by some parameter. Here's a silly example to illustrate:
    def increase(increment, *nums):
        return [num + increment for num in nums]

In this case, it's very clear what the first argument does, and what it's used for. In contrast, if we did this:

    def increase(*args):
        return [num + args[0] for num in args[1:]]
...then it's less clear what we're doing, and what all the arguments do.
In addition, it's also useful if we want to take in data, transform it, and pass in the rest of my arguments to another function.
Here's another contrived example:
    def log(message, func, *args):
        print message
        func(*args)
Once again, if we just used only *args, our meaning is less clear:
    def log(*args):
        print args[0]
        args[1](*args[2:])
This would be much more error-prone, and hard to modify. It would also cause the function to fail if there weren't enough arguments -- by doing it the first way, you essentially make the first two elements mandatory and the rest optional.
They aren't equivalent forms. The other "repeat" forms bind arguments to discrete parameters and, more importantly, indicate that some parameters are required. Try to call def my_fun(an_arg, *arg): pass with 0 arguments.
While discrete (known) parameters work in many cases, *args allows for "sequence-variadic" functions and **kwargs allows for "parameter-variadic" functions.
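A quick illustration of both variadic forms (function names are my own):

```python
def sequence_variadic(*args):
    """Accepts any number of positional arguments."""
    return sum(args)

def parameter_variadic(**kwargs):
    """Accepts any number of keyword arguments; returns sorted names."""
    return sorted(kwargs)

print(sequence_variadic(1, 2, 3))    # 6
print(parameter_variadic(b=2, a=1))  # ['a', 'b']
```

Both can be called with zero arguments, which is exactly what a leading named parameter forbids.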