I want to mock a function that calls an external function with parameters.
I know how to mock a function, but I can't make the mock depend on the parameters it is called with. I tried @patch and side_effect, but with no success.
def functionToTest(self, ip):
    var1 = self.config.get(self.section, 'externalValue1')
    var2 = self.config.get(self.section, 'externalValue2')
    var3 = self.config.get(self.section, 'externalValue3')
    if var1 == "xxx":
        return False
    if var2 == "yyy":
        return False
    [...]
In my test I can do this:
def test_functionToTest(self):
    [...]
    c.config = Mock()
    c.config.get.return_value = 'xxx'
So var1, var2 and var3 all get the same value, "xxx", but I don't know how to mock each call separately and give var1, var2 and var3 the values I want.
(python version 2.7.3)
Use side_effect to queue up a series of return values.
c.config = Mock()
c.config.get.side_effect = ['xxx', 'yyy', 'zzz']
The first time c.config.get is called, it will return 'xxx'; the second time, 'yyy'; and the third time, 'zzz'. (If it is called a fourth time, it will raise a StopIteration error.)
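For the test in the question, the whole thing might look roughly like this (a minimal sketch; the class name SomeClassUnderTest, the IP argument and the expected result are my assumptions, and on Python 2.7 Mock comes from the external mock package):

from mock import Mock  # pip install mock on Python 2.7

def test_functionToTest(self):
    c = SomeClassUnderTest()  # hypothetical: build `c` however your test already does
    c.config = Mock()
    # each call to c.config.get() consumes the next value in the list
    c.config.get.side_effect = ['xxx', 'yyy', 'zzz']
    # var1 becomes 'xxx', so functionToTest returns False on the first check
    self.assertFalse(c.functionToTest('10.0.0.1'))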
Is there a way to record the variables and arguments of a program in Python, without manually decorating the functions in it?
For example, given the following code:
def get_b(a):
    # do something with a
    # ...
    b = 3
    return b

def get_a():
    a = 2
    return a

def foo():
    a = get_a()
    b = get_b(a)
    return a, b

if __name__ == '__main__':
    a, b = foo()
I'd like to know what were the values of the arguments/variables in that particular run. Maybe something like this:
function get_a:
    variables: "a" = 2
function get_b:
    parameters: "a" = 2
    variables: "b" = 3
Is there a way to "record" all of this information?
Add the following code at your module's global scope; it will give you what you asked for. It makes use of sys.setprofile, so be sure to consult the documentation: setprofile is not meant for everyday use, it can degrade performance, and my sample code will not work in multi-threaded programs.
What happens is that our profile function is invoked on every function call and return. Inside it we have access to the event type ('call' or 'return') and a frame object. From the frame object we can read all the variables in the namespace of the called or returning function, and from those locals we filter out the ones that were passed as parameters to the respective function. That's all there is to it.
import inspect
import sys

def profile(frame, event, arg):
    # only report when a real function returns (skip module-level code)
    if (fn_name := frame.f_code.co_name) != '<module>' and event == 'return':
        args_info = inspect.getargvalues(frame)
        args_locals = args_info.locals
        # pull the declared parameters out of the frame's locals...
        params = [f'{arg}={args_locals.pop(arg)}' for arg in args_info.args]
        if not params:
            params = ''
        else:
            params = ', '.join(params)
        # ...whatever remains in args_locals are the plain local variables
        if args_locals:
            args_locals = ', '.join([f'{k}={v}' for (k, v) in args_locals.items()])
        else:
            args_locals = ''
        print(f'Function {fn_name}:')
        print(f'    params: {params}')
        print(f'    locals: {args_locals}')

sys.setprofile(profile)
Output:
Function get_a:
    params:
    locals: a=2
Function get_b:
    params: a=2
    locals: b=3
Function foo:
    params:
    locals: a=2, b=3
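If you only want the report for one specific run rather than for everything executed after the sys.setprofile(profile) call, you can switch the profiler off again by passing None (a small sketch reusing profile and foo from above):

import sys

sys.setprofile(profile)
try:
    a, b = foo()           # only this call tree gets reported
finally:
    sys.setprofile(None)   # stop invoking the profile function afterwards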
Suggested reading: What cool hacks can be done using sys.settrace?
I have three functions, for example:
from cachetools import cached, TTLCache
import pandas as pd

cache = TTLCache(10, 1000)

@cached(cache)
def function1():
    df = pd.DataFrame({'one': range(5), 'two': range(5, 10)})  # just a little data, doesn't matter what
    return df

@cached(cache)
def function2(df):
    var1 = df['one']
    var2 = df['two']
    return var1, var2

def function3():
    df = function1()
    var1, var2 = function2(df)  # pass df to function2 for some work
    print('this is var1[0]: ' + str(var1[0]))
    print('this is var2[0]: ' + str(var2[0]))

function3()
I want there to be a cached version of df, var1, and var2. Basically, I want to reassign df inside function3 only if it is not cached, and then do the same for var1 and var2, which depend on df. Is there a way to do this? When I remove @cached(cache) from function2, the code works.
This is the error I get
TypeError: 'DataFrame' objects are mutable, thus they cannot be hashed
Try the cacheout library; it worked for me:
import pandas as pd
from cacheout import Cache

cache = Cache()

@cache.memoize()
def function1():
    df = pd.DataFrame({'one': range(5), 'two': range(5, 10)})
    return df

@cache.memoize()
def function2(df):
    var1 = df['one']
    var2 = df['two']
    return var1, var2

def function3():
    df = function1()
    var1, var2 = function2(df)
    print('this is var1[0]: ' + str(var1[0]))
    print('this is var2[0]: ' + str(var2[0]))

function3()
Output:
this is var1[0]: 0
this is var2[0]: 5
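For context, my understanding of why cachetools fails here: @cached builds a cache key by hashing the function's arguments, and pandas DataFrames refuse to be hashed because they are mutable, which is exactly the TypeError from the question. A minimal reproduction:

import pandas as pd

df = pd.DataFrame({'one': range(5), 'two': range(5, 10)})
hash(df)  # TypeError: 'DataFrame' objects are mutable, thus they cannot be hashed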
As the accepted answer mentioned, the issue seems to be with cachetools. If you absolutely must use cachetools, you can convert the df to a string and back, but that computational expense may be prohibitive.
from io import StringIO

import pandas as pd
from cachetools import cached, TTLCache

cache = TTLCache(10, 1000)

@cached(cache)
def function1():
    df = pd.DataFrame({'one': range(5), 'two': range(5, 10)})  # just a little data, doesn't matter what
    print('iran')
    return df.to_csv(index=False)  # return df as a string

@cached(cache)
def function2(df):
    df = pd.read_csv(StringIO(df))  # turn the string back into a normal pandas df
    var1 = df['one']
    var2 = df['two']
    print('iran2')
    return var1, var2

def function3():
    df = function1()
    var1, var2 = function2(df)
    print('this is var1[0]: ' + str(var1[0]))
    print('this is var2[0]: ' + str(var2[0]))

function3()
I have a file (func.py), the structure of which looks like this:
database = 'VENUS'

def first_function():
    print("do some thing")

def second_function():
    print("call third function")
    third_function()

def third_function(db=database):
    print("do some other thing")
I need to import this module and use the functions defined inside it, but I want to use a different value for database. Basically, I want to override database = 'VENUS' and use database = 'MARS' when second_function calls third_function. Is there any way to do this?
Just provide the database name as an argument:
first_function("MARS")
second_function("MARS")
So the problem here, if I understood correctly, is that the default argument for func.third_function is defined at import time. It doesn't matter if you later modify the func.database variable, since the change will not be reflected in the default argument of func.third_function.
One (admittedly hacky) solution is to inject a variable using a closure over the imported function. Example:
file.py:
x = 1

def print_x(xvalue=x):
    print(xvalue)
Python console:
>>> import file
>>> file.print_x()
1
>>> file.x = 10
>>> file.print_x() # does not work (as you're probably aware)
1
>>> def inject_var(func_to_inject, var):
...     def f(*args, **kwargs):
...         return func_to_inject(var, *args, **kwargs)
...     return f
>>> file.print_x = inject_var(file.print_x, 10)
>>> file.print_x() # works
10
So using the inject_var as written above, you could probably do:
func.third_function = inject_var(func.third_function, "MARS")
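If I'm reading the question right, that rebinding should also affect second_function, because it looks up third_function in func's module globals at call time. A quick sketch of the expected usage (untested against your actual module):

import func

func.third_function = inject_var(func.third_function, "MARS")

func.third_function()   # now runs with db="MARS"
func.second_function()  # also picks up the replaced third_function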
I'm trying to use a simple try statement and a for loop statement to initialize a list of variables that were not defined previously.
Here's the code I wrote:
for i in ['var1', 'var2', 'var3']:
    try:
        i
    except NameError:
        i = []
It doesn't work as I expect it to. After running it, I want to have var1 = [], var2=[] and var3=[] if these variables haven't been defined previously.
Here's a little more detail on what I'm trying to accomplish. A scheduled task is supposed to run every 60 seconds, and I want to keep track of progress:
def run_scheduled():
    for i in ['var1', 'var2', 'var3']:
        try:
            i
        except NameError:
            i = []
    var1.append(random.randint(0, 100))
    var2.append(random.randint(0, 100))
    var3.append(random.randint(0, 100))

schedule.every(60).seconds.do(run_scheduled)

while True:
    schedule.run_pending()
    time.sleep(30)
One solution is to use a defaultdict:
from collections import defaultdict
my_dict = defaultdict(lambda: [])
my_dict['var1'].append(1)
print(my_dict['var1']) # prints '[1]'
This would not allow you to simply do print(var1), however, because var1 would still be undefined as a name in your local (or global) namespace; it would only exist as a key in the defaultdict instance.
Another option would be to use a class:
class TaskRunner:
    def __init__(self, var1=None, var2=None, var3=None):
        self.var1 = var1 or []
        self.var2 = var2 or []
        self.var3 = var3 or []

    def run_scheduled(self):
        for i in [self.var1, self.var2, self.var3]:
            i.append(random.randrange(1, 10000000))

runner = TaskRunner()
schedule.every(60).seconds.do(runner.run_scheduled)
You can also use pickle to save instances to load later (i.e., in subsequent runs of your job).
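A rough sketch of the pickling idea (the file name and the save/load points are my own choices):

import os
import pickle

STATE_FILE = 'task_state.pkl'  # hypothetical location for the saved runner

# load the previous runner if a saved one exists, otherwise start fresh
if os.path.exists(STATE_FILE):
    with open(STATE_FILE, 'rb') as f:
        runner = pickle.load(f)
else:
    runner = TaskRunner()

# ... schedule and run the job as above ...

# persist the runner (and its var1/var2/var3 lists) for the next run
with open(STATE_FILE, 'wb') as f:
    pickle.dump(runner, f)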
Try globals:
In [82]: for i in ['var1', 'var2', 'var3']:
    ...:     if i in globals():
    ...:         print(f'{i} already present in the global namespace')
    ...:     else:
    ...:         globals()[i] = []
    ...:
In [83]: var1
Out[83]: []
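Applied to the scheduled job from the question, that could look roughly like this (the body is just the question's code with the globals() check swapped in for the try/except):

import random

def run_scheduled():
    # create the module-level lists only on the first run
    for name in ['var1', 'var2', 'var3']:
        if name not in globals():
            globals()[name] = []
    var1.append(random.randint(0, 100))
    var2.append(random.randint(0, 100))
    var3.append(random.randint(0, 100))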
I am new to Python and trying to update a variable, say x, in an imported module, and then use the updated x in another variable, say y, but y uses the old value of x instead of the new one. Please give me some pointers on how to make this work!
My intention is to have a .py file that lists all the global variables, which I can then use in other .py files. I can update a global variable and use it, but I am not sure how to have other variables pick up the updated value.
Sample code:
a.py:
var1 = 0
var2 = var1 + 1
b.py:
import a

def update_var():
    a.var1 = 10
    print("Updated var1 is {}".format(a.var1))
    print("var2 is {}".format(a.var2))

if __name__ == "__main__":
    update_var()
Output:
Updated var1 is 10
var2 is 1
Expected Output:
Since I am updating var1 to 10, I am expecting the updated value to be used in var2:
Updated var1 is 10
var2 is 11
Python doesn't work that way. When you import a module, the code in the module is executed. In your case, that means two variables are defined: a.var1 with value 0 and a.var2 with value 1. If you then modify a.var1, you won't affect a.var2; its value was computed when you imported the module, and it won't change unless you explicitly alter it.
This is due to var2 being initialized only once, while the module is imported.
The way around this is to write a getter or an update function.
A possible getter function would be:
a.py
var1 = 0
var2 = var1 + 1

def getVar2():
    return var1 + 1
b.py:
import a

def update_var():
    a.var1 = 10
    print("Updated var1 is {}".format(a.var1))
    print("var2 is {}".format(a.getVar2()))

if __name__ == "__main__":
    update_var()
A possible update function would be:
a.py
var1 = 0
var2 = var1 + 1

def updateVar2():
    global var2  # without this, the assignment would only create a local variable
    var2 = var1 + 1
b.py:
import a

def update_var():
    a.var1 = 10
    a.updateVar2()
    print("Updated var1 is {}".format(a.var1))
    print("var2 is {}".format(a.var2))

if __name__ == "__main__":
    update_var()
Based on the input from @GPhilo and my personal experience, I came up with the working solutions below; I guess Solution 2 is more Pythonic.
Solution 1:
a.py:
class Globals:
    def __init__(self, value):
        self.var1 = value
        self.var2 = self.var1 + 1
b.py:
from a import Globals

def update_var():
    globals_instance = Globals(10)
    print("Updated var1 is {}".format(globals_instance.var1))
    print("var2 is {}".format(globals_instance.var2))

if __name__ == "__main__":
    update_var()
Output:
Updated var1 is 10
var2 is 11
Solution 2:
Change the implementation of a.py as below:
a.py:
class Globals:
    def __init__(self, value):
        self._var1 = value
        self.var2 = self._var1 + 1

    @property
    def var1(self):
        return self._var1

    @var1.setter
    def var1(self, value):
        self._var1 = value
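Note that, as written, Solution 2 still computes var2 once in __init__, so changing var1 on an existing instance won't refresh var2. If that matters, var2 can be exposed as a property too; this variant is my suggestion rather than part of the original solution:

class Globals:
    def __init__(self, value):
        self._var1 = value

    @property
    def var1(self):
        return self._var1

    @var1.setter
    def var1(self, value):
        self._var1 = value

    @property
    def var2(self):
        # recomputed on every access, so it always tracks the current var1
        return self._var1 + 1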