I have two functions that are very similar:
def hier_group(self):
    if self.sku:
        return {f"{self.hierarchic}": f"${self.hierarchic}", "id": "$id", "ix": "$ix"}
    else:
        return {f"{self.hierarchic}": f"${self.hierarchic}", "ix": "$ix"}

def hier_group_merge(self):
    if self.sku:
        return {f"{self.hierarchic}": f"${self.hierarchic}", "id": "$id"}
    else:
        return {f"{self.hierarchic}": f"${self.hierarchic}"}
I am trying to reduce them to one function with only a single if/else.
The only difference between the two functions is the "ix": "$ix" entry.
What I am trying to do is the following:
def hier_group(self, ix=True):
    if self.sku:
        return {f"{self.hierarchic}": f"${self.hierarchic}", "id": "$id" f'{',"ix": "$ix"' if ix == True else ""}'}
    else:
        return {f"{self.hierarchic}": f"${self.hierarchic}" f'{',"ix": "$ix"' if ix == True else ""}'}
But it's getting tricky to conditionally include , "ix": "$ix".
Build a base dictionary, then add keys as appropriate.
def hier_group(self, ix=True):
    d = {f'{self.hierarchic}': f'${self.hierarchic}'}
    if self.sku:
        d['id'] = '$id'
    if ix:
        d['ix'] = '$ix'
    return d
However, many would argue that two functions are preferable to one function that behaves like two different functions depending on a Boolean argument.
def hier_group(self):
    d = {f'{self.hierarchic}': f'${self.hierarchic}'}
    if self.sku:
        d['id'] = '$id'
    return d

def hier_group_with_ix(self):
    d = self.hier_group()
    d['ix'] = '$ix'
    return d
You might also use a private method that takes an arbitrary list of attribute names.
# No longer needs self, so make it a static method
@staticmethod
def _build_group(attributes):
    return {f'{x}': f'${x}' for x in attributes}

def build_group(self, ix=True):
    attributes = [self.hierarchic]
    if ix:
        attributes.append('ix')
    if self.sku:
        attributes.append('id')
    return self._build_group(attributes)
You will probably ask: why is using a Boolean argument here OK? My justification is that you aren't really altering the control flow of build_group with such an argument; you are using it to build up a list of explicit arguments for the private method. (The dataclass decorator in the standard library takes a similar approach: a number of Boolean-valued arguments indicate whether various methods should be generated automatically.)
You can avoid repeating common parts:
def hier_group(self, ix=True):
    out = {f"{self.hierarchic}": f"${self.hierarchic}"}
    if self.sku:
        out["id"] = "$id"
    if ix:
        out["ix"] = "$ix"
    return out
Related
I have a dictionary with some function expressions as values. Each of the values are very similar, except the part in the middle. In the following example, only earn_yld, free_cash_flow_yield and eps_growth are different in the long formula.
factor_bql = {
    "ltm_earnings_yield": bq.func.dropna(bq.data.earn_yld(as_of_date=bq.func.RANGE(params['start'], params['end']))),
    "ltm_fcf_yield": bq.func.dropna(bq.data.free_cash_flow_yield(as_of_date=bq.func.RANGE(params['start'], params['end']))),
    'ltm_eps_growth': bq.func.dropna(bq.data.eps_growth(as_of_date=bq.func.RANGE(params['start'], params['end'])))
}
Is there any way to write a function or variable to simplify the values of the dictionary to something like
def simple_formula(xyz):
    ... ...

factor_bql = {
    "ltm_earnings_yield": simple_formula('earn_yld'),
    "ltm_fcf_yield": simple_formula('free_cash_flow_yield'),
    'ltm_eps_growth': simple_formula('eps_growth')
}
I'd do this in the following way:
def simple_formula(fn):
    return bq.func.dropna(fn(as_of_date=bq.func.RANGE(params['start'], params['end'])))

factor_bql = {
    "ltm_earnings_yield": simple_formula(bq.data.earn_yld),
    "ltm_fcf_yield": simple_formula(bq.data.free_cash_flow_yield),
    'ltm_eps_growth': simple_formula(bq.data.eps_growth)
}
So, functions themselves (not their names) are parameters of simple_formula.
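Since bq is only available inside that environment, here is a self-contained sketch of the same idea with hypothetical stand-in functions, just to show that function objects can be passed around like any other value:

```python
def double(x):
    return x * 2

def square(x):
    return x * x

def apply_formula(fn):
    # fn is a function object, not a string naming one
    return fn(10)

factors = {
    "doubled": apply_formula(double),
    "squared": apply_formula(square),
}
print(factors)  # {'doubled': 20, 'squared': 100}
```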
You can use the globals function to call a function in the current module by the string representation of its name.
def func1(bar):
    return "func1" + str(bar)

def func2(bar):
    return "func2" + str(bar)

def simple_formula(func_name):
    return globals()[func_name](bar="baz")

factor_bql = {
    "key1": simple_formula("func1"),
    "key2": simple_formula("func2"),
}
print(factor_bql["key2"]) # prints "func2baz"
Assuming bq.data is some object:
def simple_formula(xyz):
    method = getattr(bq.data, xyz)  # get a method by its name
    return bq.func.dropna(method(as_of_date=bq.func.RANGE(params['start'], params['end'])))
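Since bq isn't available here, the same getattr pattern can be shown with a standard-library object:

```python
import math

def call_by_name(obj, name, *args):
    # look up an attribute (here, a function) by its string name
    method = getattr(obj, name)
    return method(*args)

print(call_by_name(math, "sqrt", 16.0))  # 4.0
```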
I have a utility function that takes a case argument and returns a value accordingly.
helper.py
def get_sport_associated_value(dictionary, category, case):
    if case == 'type':
        return "soccer"
    else:
        return 1  # if case == 'id'
I have a main function that uses the above function:
crud_operations.py
def get_data(category):
    dictionary = {.....}
    id = get_sport_associated_value(dictionary, category, 'id')
    .....
    .....
    type = get_sport_associated_value(dictionary, category, 'type')
    ....
    return "successful"
Now I am unit testing get_data() using unittest.mock. I am having trouble passing the values to id and type.
@mock.patch('helper.get_sport_associated_value')
def test_get_data(self, mock_sport):
    with app.app_context():
        mock_sport.side_effect = self.side_effect
        mock_sport.get_sport_associated_value("id")
        mock_sport.get_sport_associated_value("type")
        result = get_queries("Soccer")
        self.assertEqual(result, "successful")

def side_effect(*args, **kwargs):
    if args[0] == "type":
        print("Soccer")
        return "Soccer"
    elif args[0] == "id":
        print("1")
        return 1
I tried this using a side_effect function, but I am having trouble mocking get_sport_associated_value() according to the different values of the input argument.
Question 2: Which is the better choice in this scenario, mock.Mock or mock.MagicMock?
Any help with the unit testing is appreciated.
Thanks
You're incorrectly testing args[0] as the case. The side_effect callback should take the same arguments as the function you want to mock:

def side_effect(dictionary, category, case):
    if case == "type":
        return "Soccer"
    elif case == "id":
        return 1
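A minimal runnable sketch of that correction, using unittest.mock.Mock directly (no patching, so nothing here depends on the asker's module layout):

```python
from unittest.mock import Mock

def side_effect(dictionary, category, case):
    # same signature as the real get_sport_associated_value
    if case == "type":
        return "Soccer"
    elif case == "id":
        return 1

# the mock returns whatever the side_effect callback returns
mock_sport = Mock(side_effect=side_effect)
print(mock_sport({}, "Soccer", "id"))    # 1
print(mock_sport({}, "Soccer", "type"))  # Soccer
```

As for Question 2: a plain Mock is sufficient here; MagicMock only adds preconfigured magic methods (__len__, __iter__, and so on), and mock.patch already uses MagicMock by default.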
I am trying to use a dictionary as a switch case in Python; however, the parameter does not seem to be passed to the called function. Please help:
def switchcase(num,cc):
    def fa(num):
        out= num*1.1;
    def fb(num):
        out= num*2.2;
    def fc(num):
        out= num*3.3;
    def fd(num):
        out= num*4.4;
    options = {
        "a":fa(num),
        "b":fb(num),
        "c":fc(num),
        "d":fd(num)
    }
    out=0
    options[cc];
    return out

print switchcase(10,"a")
The output is 0, and I could not figure out the problem.
The problem is:
    out=0
    options[cc];
    return out
Basically -- no matter what options[cc] gives you, you're going to return 0 because that's the value of out. Note that setting out in the various fa, fb, ... functions does not change the value of out in the caller.
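The scoping issue can be seen in isolation:

```python
def caller():
    out = 0
    def inner():
        out = 99  # binds a new local name inside inner; caller's out is untouched
    inner()
    return out

print(caller())  # 0, not 99
```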
You probably want:
def switchcase(num, cc):
    def fa(num):
        return num*1.1
    def fb(num):
        return num*2.2
    def fc(num):
        return num*3.3
    def fd(num):
        return num*4.4
    options = {
        "a": fa(num),
        "b": fb(num),
        "c": fc(num),
        "d": fd(num)
    }
    return options[cc]
Also note that this will be horribly inefficient in practice. You're creating 4 functions (and calling each) every time you call switchcase.
I'm guessing that you actually want to create a pre-made map of functions. Then you can pick up the function that you actually want from the map and call it with the given number:
def fa(num):
    return num*1.1

def fb(num):
    return num*2.2

def fc(num):
    return num*3.3

def fd(num):
    return num*4.4

OPTIONS = {
    "a": fa,
    "b": fb,
    "c": fc,
    "d": fd
}

def switchcase(num, cc):
    return OPTIONS[cc](num)
Here is an alternative take: keep the methods outside the switcher, and pass optional arguments when you need them:
def fa(num):
    return num*1.1

def fb(num):
    return num*2.2

def fc(num):
    return num*3.3

def fd(num, option=1):
    return num*4.4*option

def f_default(num):
    return num

def switchcase(cc):
    return {
        "a": fa,
        "b": fb,
        "c": fc,
        "d": fd,
    }.get(cc, f_default)

print switchcase("a")(10)     # for Python 3 --> print(switchcase("a")(10))
print switchcase("d")(10, 3)  # for Python 3 --> print(switchcase("d")(10, 3))

print(switchcase("a")(10))
11.0
print(switchcase("d")(10, 3))
132.0
print(switchcase("ddd")(10))
10
Another shorter version would be:
def switchcase(num, cc):
    return {
        "a": lambda: num * 1.1,
        "b": lambda: num * 2.2,
        "c": lambda: num * 3.3,
        "d": lambda: num * 4.4,
    }.get(cc, lambda: None)()

print(switchcase(10, "a"))
d[x], where d is a dict, invokes d.__getitem__(x). Is there a way to create a class F so that y = F(x); d[y] would invoke some method of F instead: y.someMethod(d)?
Background: I'm trying to make a dict with "aliased" keys, so that if I have d[a]=42, then d[alias_of_a] would return 42 as well. This is pretty straightforward with the custom __getitem__, for example:
class oneOf(object):
    def __init__(self, *keys):
        self.keys = keys

class myDict(dict):
    def __getitem__(self, item):
        if isinstance(item, oneOf):
            for k in item.keys:
                if k in self:
                    return self[k]
        return dict.__getitem__(self, item)

a = myDict({
    'Alpha': 1,
    'B': 2,
})

print a[oneOf('A', 'Alpha')]
print a[oneOf('B', 'Bravo')]
However, I'm wondering if it could be possible without overriding dict:
a = {
    'Alpha': 1,
    'B': 2,
}

print a[???('A', 'Alpha')]
print a[???('B', 'Bravo')]
If this is not possible, how can I make it work the other way round:
a = {
    ???('A', 'Alpha'): 1,
    ???('B', 'Bravo'): 2,
}

print a['A']
print a['Bravo']
What is important to me is that I'd like to avoid extending dict.
This use-case is impossible:
a = {
    'Alpha': 1,
    'B': 2,
}

a[???('A', 'Alpha')]
a[???('B', 'Bravo')]
This is because the dict will first hash the object. In order to force a collision, which will allow overriding equality to take hold, the hashes need to match. But ???('A', 'Alpha') can only hash to one of 'A' or 'Alpha', and if it makes the wrong choice it has failed.
The other use-case has a similar deduction applied to it:
a = {
    ???('A', 'Alpha'): 1,
    ???('B', 'Bravo'): 2,
}

a['A']
a['Bravo']
a['A'] will look up with a different hash to a['Alpha'], so again ???('A', 'Alpha') needs to have both hashes, which is impossible.
You need cooperation from both the keys and the values in order for this to work.
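The hashing argument is easy to demonstrate with a hypothetical Alias class: when the committed hash misses, __eq__ is never even consulted:

```python
class Alias:
    """Compares equal to any of its keys, but can carry only one hash."""
    def __init__(self, *keys):
        self.keys = keys
    def __eq__(self, other):
        return other in self.keys
    def __hash__(self):
        # forced to commit to a single key's hash up front
        return hash(self.keys[0])

d = {'Alpha': 1}
print(Alias('Alpha', 'A') in d)  # True: hash('Alpha') finds the entry
print(Alias('A', 'Alpha') in d)  # False: hash('A') misses; __eq__ never runs
```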
You could in theory use inspect.getouterframes in the __hash__ method to check the values of the dictionary, but this would only work if dictionaries had Python frames. If your intent is to monkey patch a function that sort-of does what you want but not quite, this might (just about) work(ish, sort of).
import inspect
class VeryHackyAnyOfHack:
    def __init__(self, variable_name_hack, *args):
        self.variable_name_hack = variable_name_hack
        self.equal_to = args

    def __eq__(self, other):
        return other in self.equal_to

    def __hash__(self):
        outer_frame = inspect.getouterframes(inspect.currentframe())[1]
        assumed_target_dict = outer_frame[0].f_locals[self.variable_name_hack]
        for item in self.equal_to:
            if item in assumed_target_dict:
                return hash(item)
        # Failure
        return hash(self.equal_to[0])
This is used like so:
import random

def check_thing_against_dict(item):
    if random.choice([True, False]):
        internal_dict = {"red": "password123"}
    else:
        internal_dict = {"blue": "password123"}
    return internal_dict[item]

myhack = VeryHackyAnyOfHack('internal_dict', "red", "blue")
check_thing_against_dict(myhack)
#>>> 'password123'
Again, the very fact that you have to do this means that in practice it's not possible. It's also a language extension, so this isn't portable.
The built-in dict provides very simple lookup semantics: given a hashable object x, return the object y that x was mapped to previously. If you want multiple keys that map to the same object, you'll need to set that up explicitly:
# First, initialize the dictionary with one key per equivalence class
a = {'a': 1, 'b': 2}

# Then, set up any aliases.
a['Alpha'] = a['a']
a['Bravo'] = a['b']
The TransformDict class being considered for inclusion in Python 3.5 would simplify this somewhat by allowing you to replace step 2 with a "secondary" lookup function that would map the given key to its canonical representation prior to the primary lookup. Something like
def key_transform(key):
    if key in {'Alpha', 'Aleph'}:
        return 'a'
    elif key in {'Bravo', 'Beta', 'Beth'}:
        return 'b'
    return key  # pass other keys through unchanged

a = TransformDict(key_transform, a=1, b=2)
assert a['Alpha'] is a['a']
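TransformDict never made it into the standard library (PEP 455 was ultimately rejected), but the key-transform idea is easy to sketch with a small dict subclass (a minimal version; deletion, containment, and so on omitted):

```python
class TransformDict(dict):
    """Minimal sketch: every key is passed through `transform`
    before lookup/storage, so aliases collapse to a canonical key."""
    def __init__(self, transform, **kwargs):
        super().__init__()
        self._transform = transform
        for k, v in kwargs.items():
            self[k] = v

    def __setitem__(self, key, value):
        super().__setitem__(self._transform(key), value)

    def __getitem__(self, key):
        return super().__getitem__(self._transform(key))

def key_transform(key):
    if key in {'Alpha', 'Aleph', 'a'}:
        return 'a'
    if key in {'Bravo', 'Beta', 'Beth', 'b'}:
        return 'b'
    return key  # pass unknown keys through unchanged

a = TransformDict(key_transform, a=1, b=2)
print(a['Alpha'], a['Bravo'])  # 1 2
```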
I am trying to override the behavior of dict for json.dumps. For instance, I want to order the keys. Thus, I create a class which inherits from dict and override some of its methods.
import json

class A(dict):
    def __iter__(self):
        for i in range(10):
            yield i

    def __getitem__(self, name):
        return None

print json.dumps(A())
But it does not call any of my methods and only gives me {}.
There is a way to get the right behavior:
import json

class A(dict):
    def __init__(self):
        dict.__init__(self, {None: None})

    def __iter__(self):
        for i in range(10):
            yield i

    def __getitem__(self, name):
        return None

print json.dumps(A())
Which finally gives {"0": null, "1": null, "2": null, "3": null, "4": null, "5": null, "6": null, "7": null, "8": null, "9": null}
Thus, it is clear that the C implementation of json.dumps somehow tests whether the dict is empty. Unfortunately, I cannot figure out which method is called. First, __getattribute__ does not work, and second, I've overridden just about every method dict defines or could define, without success.
So, could someone explain to me how the C implementation of json.dumps checks whether the dict is empty, and is there a way to override it (I find my __init__ pretty ugly)?
Thank you.
Edit:
I finally found where this happens in the C code, and it does not look customizable.
_json.c line 2083:
if (open_dict == NULL || close_dict == NULL || empty_dict == NULL) {
    open_dict = PyString_InternFromString("{");
    close_dict = PyString_InternFromString("}");
    empty_dict = PyString_InternFromString("{}");
    if (open_dict == NULL || close_dict == NULL || empty_dict == NULL)
        return -1;
}
if (Py_SIZE(dct) == 0)
    return PyList_Append(rval, empty_dict);
So it looks like Py_SIZE is used to check whether the dict is empty. But this is a macro (not a function), which only reads a field of the Python object.
object.h line 114:
#define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt)
#define Py_TYPE(ob) (((PyObject*)(ob))->ob_type)
#define Py_SIZE(ob) (((PyVarObject*)(ob))->ob_size)
Since it's not a function, it cannot be overridden, and thus its behavior cannot be customized.
Finally, the "non-empty dict trick" is necessary if one wants to customize json.dumps by inheriting from dict (of course, other ways to achieve this are possible).
Would it be easier to modify the behaviour of the encoder rather than creating a new dict sub class?
class OrderedDictJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if hasattr(obj, 'keys'):
            return {}  # replace your unordered dict with an OrderedDict from collections
        else:
            return super(OrderedDictJSONEncoder, self).default(obj)
And use it like so:
json.dumps(my_dict_to_encode, cls=OrderedDictJSONEncoder)
This seems like the right place to turn an unordered Python dict into an ordered JSON object.
I don't know exactly what the encoder does, but it's not written in C, the Python source for the json package is here: http://hg.python.org/cpython/file/2a872126f4a1/Lib/json
Also if you just want to order the items, there's
json.dumps(A(), sort_keys=True)
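A quick self-contained demonstration with a plain dict:

```python
import json

# sort_keys orders the keys of every JSON object in the output
print(json.dumps({"b": 1, "a": 2}, sort_keys=True))  # {"a": 2, "b": 1}
```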
Also see this question ("How to perfectly override a dict?") and its first answer, which explains that you should subclass collections.MutableMapping in most cases.
Or just give a subclassed encoder, as aychedee mentioned.