Very often I process single elements of tuples like this:
size, duration, name = some_external_function()
size = int(size)
duration = float(duration)
name = name.strip().lower()
If some_external_function returned a uniformly typed tuple, I could use map to get a (more functional) closed expression:
size, duration, name = map(magic, some_external_function())
Is there something like an element wise map? Something I could run like this:
size, duration, name = map2((int, float, strip), some_external_function())
Update: I know I can use a comprehension together with zip, e.g.
size, duration, name = [f(v) for f, v in zip(
    (int, float, str.strip), some_external_function())]
-- I'm looking for a 'pythonic' (best: built-in) solution!
To the Python developers:
What about
(size)int, (duration)float, (name)str.strip = some_external_function()
?
If I see this in any upcoming Python version, I'll send you a beer :)
Quite simply: use a function and args unpacking...
def transform(size, duration, name):
    return int(size), float(duration), name.strip().lower()

# if you don't know what the `*` does then follow the link above...
size, duration, name = transform(*some_external_function())
Dead simple, perfectly readable and testable.
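Since testability is part of the appeal, a quick test might look like this (hypothetical values, assuming a pytest-style assert):

def test_transform():
    assert transform('1', '2.5', '  Some Name ') == (1, 2.5, 'some name')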
Map does not really apply here. It comes in handy when you want to apply a simple function over all elements of a list, such as map(float, list_ints).
There isn't an explicit built-in function for this. However, you can simplify your approach and avoid n separate calls by defining an iterable of the functions, applying them to the (non-unpacked) returned tuple in a generator expression, and then unpacking the result:
funcs = int, float, lambda x: x.strip().lower()
t = 1., 2, 'Some String ' # example returned tuple
size, duration, name = (f(i) for f,i in zip(funcs, t))
Or perhaps a little cleaner:
def transform(t, funcs):
    return (f(i) for f, i in zip(funcs, t))
size, duration, name = transform(t, funcs)
size
# 1
duration
# 2.0
name
# 'some string'
class SomeExternalData:
    def __init__(self, size: int, duration: float, name: str):
        self.size = size
        self.duration = duration
        self.name = name.strip().lower()

    @classmethod
    def from_strings(cls, size, duration, name):
        return cls(int(size), float(duration), name)

data = SomeExternalData.from_strings(*some_external_function())
It's far from a one-liner, but it's the most declarative, readable, reusable and maintainable approach to this problem IMO. Model your data explicitly instead of treating individual values ad hoc.
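A more recent spin on the same idea: the dataclasses module (Python 3.7+) can generate the __init__ boilerplate for you. A sketch, with the normalization moved into the classmethod:

from dataclasses import dataclass

@dataclass
class SomeExternalData:
    size: int
    duration: float
    name: str

    @classmethod
    def from_strings(cls, size, duration, name):
        # parse/normalize the raw strings before constructing the instance
        return cls(int(size), float(duration), name.strip().lower())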
AFAIK there is no built-in solution, so we can write a generic function ourselves and reuse it afterwards:
def map2(functions, arguments):  # or some other name
    # we could also return a `tuple` here, for example
    return (function(argument) for function, argument in zip(functions, arguments))
A possible problem is that the number of arguments can be less than the number of functions or vice versa (zip silently stops at the shorter iterable), but in your case it shouldn't be an issue; a stricter variant is sketched below.
After that
size, duration, name = map2((int, float, str.strip), some_external_function())
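If a length mismatch should fail loudly instead of being silently truncated, zip's strict flag (Python 3.10+) handles that; a minimal sketch:

def map2_strict(functions, arguments):
    # strict=True raises ValueError if the iterables differ in length;
    # returning a tuple forces evaluation so the error surfaces immediately
    return tuple(f(a) for f, a in zip(functions, arguments, strict=True))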
We can go further with functools.partial and give a name to our "transformer" like
from functools import partial
...
transform = partial(map2, (int, float, str.strip))
and reuse it in other places as well.
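Usage then reads (assuming, as above, that some_external_function returns three strings):

size, duration, name = transform(some_external_function())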
Based on Bruno's transform, which I think is the best answer to the problem, I wanted to see if I could make a generic transform function that did not need a hardcoded set of formatters, but could take any number of elements, given a matching number of formatters.
(This is really overkill, unless you need a large number of such magic mappers or if you need to generate them dynamically.)
Here I am using Python 3.6's guaranteed keyword-argument order (PEP 468) to "unpack" the formatters in their declared order and separate them from the variadic inputs.
def transform(*inputs, **transformers):
    return [f(val) for val, f in zip(inputs, transformers.values())]

size, duration, name = transform(*some_external_function(), f1=int, f2=float, f3=str.lower)
And to make the process even more generic and allow predefined transform functions, you can use functools.partial.
from functools import partial

def prep(f_transformer, *format_funcs):
    formatters = {"f%d" % ix: func for ix, func in enumerate(format_funcs)}
    return partial(f_transformer, **formatters)

transform2 = prep(transform, int, float, str.lower)
which you can use as:
size, duration, name = transform2(*some_external_function())
I'll second bruno's answer as my preferred choice. How much value there is in refactoring such a hindrance will depend on how often you call the function. If you are going to call the external function multiple times, you could also consider decorating it:
from functools import wraps

def type_wrangler(func):
    @wraps(func)
    def wrangler():
        n, s, d = func()
        return str(n), int(s), float(d)
    return wrangler

def external_func():
    return 'a_name', '10', '5.6'

f = type_wrangler(external_func)
print(f())
Related
In Python we can assign a function to a variable. For example, the math.sin function:

sin = math.sin
rad = math.radians
print(sin(rad(my_number_in_degrees)))
Is there any easy way of assigning multiple functions (ie, a function of a function) to a variable? For example:
sin = math.sin(math.radians)  # I cannot use this with brackets
print(sin(my_number_in_degrees))
Just create a wrapper function:
def sin_rad(degrees):
    return math.sin(math.radians(degrees))

Call your wrapper function as normal:

print(sin_rad(my_number_in_degrees))
I think what the author wants is some form of function chaining. In general this is difficult, but it may be possible for functions that
take a single argument,
return a single value,
and whose return value has the same type as the input type of the next function in the list.
Let us say that there is a list of functions that we need to chain, each of which takes a single argument and returns a single value, with consistent types. Something like this ...
functions = [np.sin, np.cos, np.abs]
Would it be possible to write a general function that chains all of these together? Well, we can use reduce, although Guido doesn't particularly like the map/reduce style and was about to take them out ...
Something like this ...
>>> reduce(lambda m, n: n(m), functions, 3)
0.99005908575986534
Now how do we create a function that does this? Well, just create a function that takes the list of functions and returns a function:
import numpy as np
from functools import reduce

def chainFunctions(functions):
    def innerFunction(y):
        return reduce(lambda m, n: n(m), functions, y)
    return innerFunction

if __name__ == '__main__':
    functions = [np.sin, np.cos, np.abs]
    ch = chainFunctions(functions)
    print(ch(3))
You could write a helper function to perform the function composition for you and use it to create the kind of variable you want. Some nice features are that it can combine a variable number of functions together that each accept a variable number of arguments.
import math

try:
    reduce
except NameError:  # Python 3
    from functools import reduce

def compose(*funcs):
    """Compose a group of functions (f(g(h(...)))) into a single composite function."""
    return reduce(lambda f, g: lambda *args, **kwargs: f(g(*args, **kwargs)), funcs)

sindeg = compose(math.sin, math.radians)
print(sindeg(90))  # -> 1.0
How can I dynamically get the names and values of all arguments to a class method? (For debugging).
The following code works, but it would need to be repeated a few dozen times (one for each method). Is there a simpler, more Pythonic way to do this?
import inspect

class Foo:
    def foo(self, a, b):
        # look up this method via the name of the current frame's function
        myself = getattr(self, inspect.stack()[0][3])
        argnames = inspect.getfullargspec(myself).args[1:]
        d = {}
        for argname in argnames:
            d[argname] = locals()[argname]
        log.debug(d)
That's six lines of code for something that should be a lot simpler.
Sure, I can hardcode the debugging code separately for each method, but it seems easier to use copy/paste. Besides, it's way too easy to leave out an argument or two when hardcoding, which could make the debugging more confusing.
I would also prefer to assign local variables instead of accessing the values using a kwargs dict, because the rest of the code (not shown) could get clunky real fast, and is partially copied/pasted.
What is the simplest way to do this?
An alternative:
from collections import OrderedDict

class Foo:
    def foo(self, *args):
        argnames = 'a b'.split()
        kwargs = OrderedDict(zip(argnames, args))
        log.debug(kwargs)
        for argname, argval in kwargs.items():
            # note: writing to locals() is not guaranteed to create
            # real local variables in CPython
            locals()[argname] = argval
This saves one line per method, but at the expense of IDE autocomplete/IntelliSense when calling the method.
As wpercy wrote, you can reduce the last three lines to a single line using a dict comprehension. The caveat is that it only works in some versions of Python.
In Python 3, however, a dict comprehension has its own namespace, so locals wouldn't work inside it. A workaround is to put the locals call after the in:
import inspect
from itertools import repeat

class Foo:
    def foo(self, a, b):
        myname = inspect.stack()[0][3]
        argnames = inspect.getfullargspec(getattr(self, myname)).args[1:]
        args = [(x, parent[x]) for x, parent in zip(argnames, repeat(locals()))]
        log.debug('{}: {!s}'.format(myname, args))
This saves two lines per method.
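If even that feels like too much ceremony, a small helper based on frame introspection can shrink each method to one debug line. A sketch (the helper name is hypothetical; assumes the same module-level log object):

import inspect

def named_args(frame):
    # collect the calling function's named arguments from its frame
    info = inspect.getargvalues(frame)
    return {name: info.locals[name] for name in info.args if name != 'self'}

class Foo:
    def foo(self, a, b):
        log.debug(named_args(inspect.currentframe()))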
Assume there are some useful transformation functions, for example random_spelling_error, that we would like to apply n times.
My temporary solution looks like this:
def reapply(n, fn, arg):
    for i in range(n):
        arg = fn(arg)
    return arg
reapply(3, random_spelling_error, "This is not a test!")
Is there a built-in or otherwise better way to do this?
It need not handle variable-length args or keyword args, but it could. The function will be called at scale, but the values of n will be low, and the size of the argument and the return value will be small.
We could call this reduce, but that name was of course taken for a function that can do this and much more, and it was moved out of the built-ins in Python 3. Here is Guido's argument:
So in my mind, the applicability of reduce() is pretty much limited to
associative operators, and in all other cases it's better to write out
the accumulation loop explicitly.
reduce is still available in Python 3 via the functools module. I don't really know that it's any more Pythonic, but here's how you could achieve it in one line:
from functools import reduce

def reapply(n, fn, arg):
    # the lambda ignores the loop index and just applies fn again
    return reduce(lambda x, _: fn(x), range(n), arg)
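A quick sanity check with a toy function:

reapply(3, lambda x: x * 2, 1)  # 1 -> 2 -> 4 -> 8, returns 8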
Get rid of the custom function completely; you're trying to compress two readable lines into one confusing function call. Which do you think is easier to read and understand, your way:
foo = reapply(3, random_spelling_error, foo)
Or a simple for loop that's one more line:
for _ in range(3):
    foo = random_spelling_error(foo)
Update: According to your comment
Let's assume that there are many transformation functions I may want to apply.
Why not try something like this:
modifiers = (random_spelling_error, another_function, apply_this_too)

for modifier in modifiers:
    for _ in range(3):
        foo = modifier(foo)
Or if you need a different number of repeats for different functions, try creating a list of tuples:
modifiers = [
    (random_spelling_error, 5),
    (another_function, 3),
    ...
]

for modifier, count in modifiers:
    for _ in range(count):
        foo = modifier(foo)
Some like recursion; it's not always obviously 'better':
def reapply(n, fn, arg):
    if n:
        arg = reapply(n - 1, fn, fn(arg))
    return arg
reapply(1, lambda x: x**2, 2)
Out[161]: 4
reapply(2, lambda x: x**2, 2)
Out[162]: 16
I have two similar codes that need to be parsed and I'm not sure of the most pythonic way to accomplish this.
Suppose I have two similar "codes"
secret_code_1 = 'asdf|qwer-sdfg-wert$$otherthing'
secret_code_2 = 'qwersdfg-qw|er$$otherthing'
Both codes end with $$otherthing and contain a number of values separated by -.
At first I thought of using functools.wraps to separate some of the common logic from the logic specific to each type of code, something like this:
from functools import wraps

def parse_secret(f):
    @wraps(f)
    def wrapper(code, *args):
        _code = code.split('$$')[0]
        return f(code, *_code.split('-'))
    return wrapper

@parse_secret
def parse_code_1b(code, a, b, c):
    a = a.split('|')[0]
    return (a, b, c)

@parse_secret
def parse_code_2b(code, a, b):
    b = b.split('|')[1]
    return (a, b)
However, doing it this way makes it confusing which arguments you should actually pass to the parse_code_* functions, i.e.
parse_code_1b(secret_code_1)
parse_code_2b(secret_code_2)
So to keep the formal parameters of the function easier to reason about I changed the logic to something like this:
def _parse_secret(parse_func, code):
    _code = code.split('$$')[0]
    return parse_func(code, *_code.split('-'))

def _parse_code_1(code, a, b, c):
    """
    a, b, and c are descriptive parameters that explain
    the different components in the secret code

    returns a tuple of the decoded parts
    """
    a = a.split('|')[0]
    return (a, b, c)

def _parse_code_2(code, a, b):
    """
    a and b are descriptive parameters that explain
    the different components in the secret code

    returns a tuple of the decoded parts
    """
    b = b.split('|')[1]
    return (a, b)

def parse_code_1(code):
    return _parse_secret(_parse_code_1, code)

def parse_code_2(code):
    return _parse_secret(_parse_code_2, code)
Now it's easier to reason about what you pass to the functions:
parse_code_1(secret_code_1)
parse_code_2(secret_code_2)
However this code is significantly more verbose.
Is there a better way to do this? Would an object-oriented approach with classes make more sense here?
repl.it example
Functional approaches are more concise and make more sense.
We can start by expressing the concepts as pure functions, the form that is easiest to compose.
Strip $$otherthing and split values:
parse_secret = lambda code: code.split('$$')[0].split('-')
Take one of inner values:
take = lambda value, index: value.split('|')[index]
Replace one of the values with its inner value:
parse_code = lambda values, p, q: \
    [take(v, q) if p == i else v for (i, v) in enumerate(values)]
These 2 types of codes have 3 differences:
Number of values
Position to parse "inner" values
Position of "inner" values to take
And we can compose parse functions by describing these differences. Split values are kept packed so that things are easier to compose.
compose = lambda length, p, q: \
    lambda code: parse_code(parse_secret(code)[:length], p, q)
parse_code_1 = compose(3, 0, 0)
parse_code_2 = compose(2, 1, 1)
And use the composed functions:

secret_code_1 = 'asdf|qwer-sdfg-wert$$otherthing'
secret_code_2 = 'qwersdfg-qw|er$$otherthing'
results = [parse_code_1(secret_code_1), parse_code_2(secret_code_2)]
print(results)
# [['asdf', 'sdfg', 'wert'], ['qwersdfg', 'er']]
I believe something like this could work:
secret_codes = ['asdf|qwer-sdfg-wert$$otherthing', 'qwersdfg-qw|er$$otherthing']

def parse_code(code):
    _code = code.split('$$')
    if '-' in _code[0]:
        return _parse_secrets(_code[1], *_code[0].split('-'))
    return _parse_secrets(_code[0], *_code[1].split('-'))

def _parse_secrets(code, a, b, c=None):
    """
    a, b, and c are descriptive parameters that explain
    the different components in the secret code

    returns a tuple of the decoded parts
    """
    if c is not None:
        return a.split('|')[0], b, c
    return a, b.split('|')[1]

for secret_code in secret_codes:
    print(parse_code(secret_code))
Output:
('asdf', 'sdfg', 'wert')
('qwersdfg', 'er')
I'm not sure about your secret data structure, but if you use the position index for the elements that contain | and supply an appropriate number of secrets, you could also do something like this and support a practically unlimited number of secrets:
def _parse_secrets(code, *data):
    """
    data holds the descriptive parameters that explain
    the different components in the secret code

    returns a tuple of the decoded parts
    """
    i = 0
    decoded_secrets = []
    for secret in data:
        if '|' in secret:
            decoded_secrets.append(secret.split('|')[i])
        else:
            decoded_secrets.append(secret)
        i += 1
    return tuple(decoded_secrets)
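For example, called with the already-split parts of the first code (illustrative):

_parse_secrets('otherthing', 'asdf|qwer', 'sdfg', 'wert')
# ('asdf', 'sdfg', 'wert')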
I'm really not sure exactly what you mean, but I came up with an idea that might be what you are looking for.
What about using a simple function like this:

def split_secret_code(code):
    return [code] + code[:code.find("$$")].split("-")

And then just use:

parse_code_1(*split_secret_code(secret_code_1))
I'm not sure exactly what constraints you're working with, but it looks like:
There are different types of codes with different rules
The number of dash separated args can vary
Which arg has a pipe can vary
Straightforward Example
This is not too hard to solve, and you don't need fancy wrappers, so I would just drop them because they add reading complexity.
def pre_parse(code):
    dash_code, otherthing = code.split('$$')
    return dash_code.split('-')

def parse_type_1(code):
    dash_args = pre_parse(code)
    dash_args[0], toss = dash_args[0].split('|')
    return dash_args

def parse_type_2(code):
    dash_args = pre_parse(code)
    toss, dash_args[1] = dash_args[1].split('|')
    return dash_args
# Example call
parse_type_1(secret_code_1)
Trying to answer the question as stated
You can supply arguments this way by using Python's native decorator pattern combined with *, which rolls/unrolls positional arguments into a tuple, so you don't need to know exactly how many there are.
def dash_args(code):
    dash_code, otherthing = code.split('$$')
    return dash_code.split('-')

def pre_parse(f):
    def wrapper(code):
        # HERE is where the outer function, the wrapper,
        # supplies arguments to the inner function.
        return f(code, *dash_args(code))
    return wrapper

@pre_parse
def parse_type_1(code, *args):
    new_args = list(args)
    new_args[0], toss = args[0].split('|')
    return new_args

@pre_parse
def parse_type_2(code, *args):
    new_args = list(args)
    toss, new_args[1] = args[1].split('|')
    return new_args
# Example call:
parse_type_1(secret_code_1)
More Extendable Example
If for some reason you needed to support many variations on this kind of parsing, you could use a simple OOP setup, like
class BaseParser(object):
    def get_dash_args(self, code):
        dash_code, otherthing = code.split('$$')
        return dash_code.split('-')

class PipeParser(BaseParser):
    def __init__(self, arg_index, split_index):
        self.arg_index = arg_index
        self.split_index = split_index

    def parse(self, code):
        args = self.get_dash_args(code)
        pipe_arg = args[self.arg_index]
        args[self.arg_index] = pipe_arg.split('|')[self.split_index]
        return args

# Example call
pipe_parser_1 = PipeParser(0, 0)
pipe_parser_1.parse(secret_code_1)
pipe_parser_2 = PipeParser(1, 1)
pipe_parser_2.parse(secret_code_2)
My suggestion attempts the following:
to be non-verbose enough
to separate common and specific logic in a clear way
to be sufficiently extensible
Basically, it separates common and specific logic into different functions (you could do the same using OOP). The key is a mapper variable that contains the logic to select a specific parser according to each code's content. Here it goes:
def parse_common(code):
    """
    Provides common parsing logic.
    """
    encoded_components = code.split('$$')[0].split('-')
    return encoded_components

def parse_code_1(code, components):
    """
    Specific parsing for type-1 codes.
    """
    components[0] = components[0].split('|')[0]  # decoding some type-1 component
    return tuple([c for c in components])

def parse_code_2(code, components):
    """
    Specific parsing for type-2 codes.
    """
    components[1] = components[1].split('|')[1]  # decoding some type-2 component
    return tuple([c for c in components])

def parse_code_3(code, components):
    """
    Specific parsing for type-3 codes.
    """
    components[2] = components[2].split('||')[0]  # decoding some type-3 component
    return tuple([c for c in components])

# ... and so on, if more codes need to be added ...

# Maps specific parser, according to the number of components
CODE_PARSER_SELECTOR = [
    (3, parse_code_1),
    (2, parse_code_2),
    (4, parse_code_3)
]

def parse_code(code):
    # executes common parsing
    components = parse_common(code)
    # selects specific parser
    parser_info = [s for s in CODE_PARSER_SELECTOR if len(components) == s[0]]
    if parser_info:
        parse_func = parser_info[0][1]
        return parse_func(code, components)
    else:
        raise RuntimeError('No parser found for code: %s' % code)

secret_codes = [
    'asdf|qwer-sdfg-wert$$otherthing',           # type 1
    'qwersdfg-qw|er$$otherthing',                # type 2
    'qwersdfg-hjkl-yui||poiuy-rtyu$$otherthing'  # type 3
]

print([parse_code(c) for c in secret_codes])
Are you married to the string parsing? If you are passing values around and have no need for variable names, you can "pack" them into an integer.
If you are working with cryptography, you can build up a long hexadecimal number from the characters and then pass it as an int with "stop" bytes (0000, for example, since "0" is actually 48; try chr(48)). If you are married to a string, I would suggest a low byte value as the separator, for example 1 (try chr(1)), so you can scan the integer and bit-shift it by 8 at a time, extracting bytes with an 8-bit mask; this would look like (secret_code >> 8) & 0xff.
Hashing works in a similar manner: a variable with some name and some value can be encoded as an integer, joined with the stop marker, and retrieved when needed.
Let me give you an example of hashing:
# let's say
a = 1
# a hash of sorts would be
hash = ord('a') + (0b00 << 8) + (1 << 16)
# where the hashed `a` would be 65633 as an integer value on a 64-bit computer
# and then you just need to find the 0b00 separator

If you want to use only the values (the names don't matter), then you only need to hash the variable value, so the parsed value is a lot smaller: there is no name part and no need for the separator (0b00). You can also use separators cleverly to divide the data one fold (0b00), two folds (0b00, 0b00 << 8), etc.

a = 1
hash = a << 8  # if you want to shift it by 1 byte
But if you want to hide the data and you need a cryptography example, you can apply the above methods and then scramble, shift (a -> b), or convert the value to another type afterwards. You just need to figure out the order of operations, since a-STOP-b-PASS-c is not equal to a-PASS-b-STOP-c.
You can find the bitwise operators here: binary operators.
But keep in mind that 65 is a number and 65 is a character as well; it only matters where those bytes are sent. If they are sent to the graphics card they are pixels, if they are sent to the audio card they are sounds, and if they are sent to mathematical processing they are numbers; as programmers, that is our playground.
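To make the packing idea concrete, here is a minimal sketch (hypothetical helper names; assumes each value fits in one byte):

def pack_bytes(values):
    # pack small integers (0-255) into a single int, lowest byte first
    packed = 0
    for i, v in enumerate(values):
        packed |= (v & 0xff) << (8 * i)
    return packed

def unpack_bytes(packed, count):
    # recover the original byte-sized values
    return [(packed >> (8 * i)) & 0xff for i in range(count)]

packed = pack_bytes([1, 5, 30])
print(unpack_bytes(packed, 3))  # [1, 5, 30]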
But if this is not answering your problem, you can always use map.
def mapProcces(proccesList, listToMap):
    currentProcces = proccesList.pop(0)
    listToMap = map(currentProcces, listToMap)
    if proccesList != []:
        return mapProcces(proccesList, listToMap)
    else:
        return list(listToMap)
then you could map it:
mapProcces([str.lower,str.upper,str.title],"stackowerflow")
or you can simply replace every separator with a space and then split on spaces.
secret_code_1 = 'asdf|qwer-sdfg-wert$$otherthing'
separ = "|,-,$".split(",")  # the separators: ['|', '-', '$']
secret_code_1 = [x if x not in separ else " " for x in secret_code_1]  # replace separators with spaces
secret_code_1 = "".join(secret_code_1)  # convert the list of chars back to a string
secret_code_1 = secret_code_1.split(" ")  # split it into a list
secret_code_1 = filter(None, secret_code_1)  # filter out empty strings ''
first, second, third, fourth, other = secret_code_1

And there you have it: your secret_code_1 is split and assigned to a definite number of variables. Of course " " is used as the delimiter here; you can use whatever you want. You could replace every separator with "someseparator" and then split on "someseparator". You can also use the str.replace function to make it clearer.
I hope this helps
I think you need to provide more information about exactly what you're trying to achieve and what the constraints are. For instance, how many times can $$ occur? Will there always be a | divider? That kind of thing.
To answer your question broadly, an elegant Pythonic way to do this is to use Python's unpacking feature combined with split. For example:
secret_code_1 = 'asdf|qwer-sdfg-wert$$otherthing'
first_part, last_part = secret_code_1.split('$$')
By using this technique, in addition to simple if blocks, you should be able to write an elegant parser; one possible shape is sketched below.
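A minimal sketch of that idea (assuming every code contains exactly one '$$' and the pipe position follows the two example types):

def parse(code):
    payload, _ = code.split('$$')   # drop the trailing part
    parts = payload.split('-')
    if len(parts) == 3:             # type 1: pipe in the first part
        parts[0] = parts[0].split('|')[0]
    else:                           # type 2: pipe in the second part
        parts[1] = parts[1].split('|')[1]
    return tuple(parts)

parse('asdf|qwer-sdfg-wert$$otherthing')  # ('asdf', 'sdfg', 'wert')
parse('qwersdfg-qw|er$$otherthing')       # ('qwersdfg', 'er')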
If I understand it correctly, you want to be able to define your functions as if the parsed arguments are passed, but want to pass the unparsed code to the functions instead.
You can do that very similarly to the first solution you presented.
from functools import wraps

def parse_secret(f):
    @wraps(f)
    def wrapper(code):
        args = code.split('$$')[0].split('-')
        return f(*args)
    return wrapper

@parse_secret
def parse_code_1(a, b, c):
    a = a.split('|')[0]
    return (a, b, c)

@parse_secret
def parse_code_2(a, b):
    b = b.split('|')[1]
    return (a, b)
For the secret codes mentioned in the examples:

secret_code_1 = 'asdf|qwer-sdfg-wert$$otherthing'
print(parse_code_1(secret_code_1))
>> ('asdf', 'sdfg', 'wert')

secret_code_2 = 'qwersdfg-qw|er$$otherthing'
print(parse_code_2(secret_code_2))
>> ('qwersdfg', 'er')
I haven't fully understood your question or your code, but maybe a simple way to do it is with a regular expression?
import re

secret_code_1 = 'asdf|qwer-sdfg-wert$$otherthing'
secret_code_2 = 'qwersdfg-qw|er$$otherthing'

def parse_code(code):
    regex = re.search(r'([\w-]+)\|([\w-]+)\$\$(\w+)', code)  # regular expression
    return regex.group(3), regex.group(1).split("-"), regex.group(2).split("-")

otherthing, first_group, second_group = parse_code(secret_code_2)
print(otherthing)     # otherthing, string
print(first_group)    # first group, list
print(second_group)   # second group, list
The output:
otherthing
['qwersdfg', 'qw']
['er']
What is the best data structure to cache (save/store/memoize) the results of many functions in a database?
Suppose a function calc_regress with the following definition in Python:
def calc_regress(ind_key, dep_key, count=30):
    independent_list = sql_select_recent_values(count, ind_key)
    dependant_list = sql_select_recent_values(count, dep_key)

    import scipy.stats as st
    return st.linregress(independent_list, dependant_list)
I see the answers to What kind of table structure should be used to store memoized function parameters and results in a relational database?, but that seems to address just one function, while I have about 500 functions.
Option A
You could use the structure in the linked answer, un-normalized, with the number of columns equal to the max number of arguments among the 500 functions. You'd also need to add a column for the function name.
Then you could do a SELECT * FROM expensive_func_results WHERE func_name = 'calc_regress' AND arg1 = ind_key AND arg2 = dep_key AND arg3 = count, etc.
Of course, that's not a very good design to use. For the same function called with fewer parameters, columns with null values/non-matches need to be ignored; otherwise you'll get multiple result rows.
Option B
Create the table/structure as func_name, arguments, result, where arguments is always a kwargs dictionary or positional args but never mixed per entry. Even with the kwargs dict stored as a string, the order of keys->values in it is not predictable/consistent even for the same args, so you'll need to sort it before converting to a string and storing it. When you want to query, you'll use SELECT * FROM expensive_func_results WHERE func_name = 'calc_regress' AND arguments = 'str(kwargs_dict)', where str(kwargs_dict) is something you'll build programmatically. It could also be derived from inspect.getargspec (or inspect.getcallargs), though you'll have to check for consistency.
You won't be able to do queries on the argument combos unless you provide all the arguments to the query or partial match with LIKE.
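For instance, one way to build such a stable argument string is to bind the call to parameter names and serialize with sorted keys. A sketch using inspect.signature (a newer alternative to getargspec), assuming all argument values have meaningful string forms:

import inspect
import json

def canonical_args(func, *args, **kwargs):
    # bind positional and keyword args to parameter names, fill in defaults,
    # then serialize with sorted keys for a stable, order-independent key
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()
    return json.dumps(bound.arguments, sort_keys=True, default=str)

# calc_regress(1, 2, 30) and calc_regress(dep_key=2, ind_key=1) both yield
# '{"count": 30, "dep_key": 2, "ind_key": 1}'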
Option C
Normalised all the way: one table func_calls as func_name, args_combo_id, arg_name_idx, arg_value. Each row of the table stores one arg for one combination of that function's calling args. Another table func_results as func_name, args_combo_id, result. You could also normalise further by mapping func_name to a func_id.
In this design, the order of keyword args doesn't matter, since you'll do an inner join to select each parameter. This query will have to be built programmatically or done via a stored procedure, since the number of joins required to fetch all the parameters is determined by the number of parameters. Your function above has 3 params, but you may have another with 10. arg_name_idx is 'argument name or index', so it also works for mixed kwargs + args. Some duplication may occur in cases like calc_regress(ind_key=1, dep_key=2, count=30) and calc_regress(1, 2, 30) (as well as calc_regress(1, 2) with the default value for count; these cases should be avoided, and the table entry should contain all args), since the args_combo_id will differ for each while the result is obviously the same. Again, the inspect module may help in this area.
[Edit] PS: Additionally, for func_name you may need to use a fully qualified name to avoid conflicts across modules in your package. Decorators may interfere with that as well, unless you set deco.__name__ = func.__name__, etc.
PPS: If objects are passed to the functions being memoized in the db, make sure their __str__ is something useful and repeatable/consistent to store as arg values.
This particular case doesn't require you to re-create objects from the arg values in the db; otherwise, you'd need to make __str__ or __repr__ work the way __repr__ was intended to (but generally isn't):
this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment).
I'd use a key-value store here, where the key could be a concatenation of the id of the function object (to guarantee key uniqueness) and its arguments, while the value would be the function's return value.
So a calc_regress(1, 5, 30) call would produce an example key 139694472779248_1_5_30, where the first part is id(calc_regress). An example key-producing function:
>>> def produce_cache_key(fun, *args, **kwargs):
...     args_key = '_'.join(str(a) for a in args)
...     kwargs_key = '_'.join('%s%s' % (k, v) for k, v in kwargs.items())
...     return '%s_%s_%s' % (id(fun), args_key, kwargs_key)
You could keep your results in memory using a dictionary and a decorator:
>>> def cache_result(cache):
...     def decorator(fun):
...         def wrapper(*args, **kwargs):
...             key = produce_cache_key(fun, *args, **kwargs)
...             if key not in cache:
...                 cache[key] = fun(*args, **kwargs)
...             return cache[key]
...         return wrapper
...     return decorator
...
>>> cache = {}
>>> @cache_result(cache)
... def fx(x, y, z=0):
...     print('Doing some expensive job...')
...
>>> fx(1, 2, z=1)
Doing some expensive job...
>>> fx(1, 2, z=1)
>>>