Django: How to locate slow tests? - python

How do I locate slow Django tests? How can I find the tests on which the test runner gets 'stuck'? Do you know any good custom Django test runners that provide more detailed information on test performance?

You can get Django to print the tests it's running with:
./manage.py test -v 3
This will print the name of each test, run it, and then print "ok", so you can figure out which test is slow.

You could try nose. Plenty of tutorials are available on installing it alongside Django. To get a high-level overview of testing time, look into the pinocchio nose extensions, specifically the stopwatch one.

To identify slow tests, you would measure how long each test takes to run and define a threshold for what "slow" means to you.
By default, Django's test runner does not show detailed timing information. You could use an alternative test runner, some of which show timing data for test runs.
However, you can easily cook up your own, as Django builds on Python's unittest facilities, which are well documented and can be extended.
The following code can be found in one file here:
https://gist.github.com/cessor/44446799736bbe801dc5565b28bfe58b
To run tests, Django uses a DiscoverRunner (which wraps Python's unittest TextTestRunner). The runner discovers test cases from your project structure by convention. Cases are collected in a TestSuite and executed. For each test method, a TestResult is created, which is then used to communicate the result to the developer.
To measure how long it takes for a test to run, you may exchange the TestResult for a custom class:
import unittest

from django.test.runner import DiscoverRunner


class StopwatchTestResult(unittest.TextTestResult):
    ...


class StopwatchTestRunner(DiscoverRunner):
    def get_resultclass(self):
        return StopwatchTestResult
In this example, the code above is placed in a Django app called example in a module called testrunner.py. To execute it, invoke Django's test command like this:
$ python manage.py test --testrunner=example.testrunner.StopwatchTestRunner -v 2
The StopwatchTestResult can be made to register timing data:
import time

...

class StopwatchTestResult(unittest.TextTestResult):
    """
    Times test runs and formats the result
    """
    # Collection shared between all result instances to calculate statistics
    timings = {}

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.start = 0
        self.stop = 0
        self.elapsed = 0

    def startTest(self, test):
        self.start = time.time()
        super().startTest(test)

    def stopTest(self, test):
        super().stopTest(test)
        self.stop = time.time()
        self.elapsed = self.stop - self.start
        self.timings[test] = self.elapsed

    def getDescription(self, test):
        """
        Format the test result with timing info,
        e.g. `test_add (module) [0.1s]`
        """
        description = super().getDescription(test)
        return f'{description} [{self.elapsed:0.4f}s]'
For each test method, TestResult.startTest is called and registers a time stamp. The duration of each test run is calculated in TestResult.stopTest.
When the verbosity is increased, the description is printed, which allows you to format the timing data:
test_student_can_retract_their_request_commit (person.tests.test_view_new.test_view_person_new.NewViewTest) [0.0080s] ... ok
test_student_can_see_their_request_status (person.tests.test_view_new.test_view_person_new.NewViewTest) [0.0121s] ... ok
test_student_can_submit_request (person.tests.test_view_new.test_view_person_new.NewViewTest) [0.0101s] ... ok
To identify slow tests, you can either eyeball the output or use a systematic approach:
class StopwatchTestRunner(DiscoverRunner):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._stats = kwargs['stats']

    @classmethod
    def add_arguments(cls, parser):
        DiscoverRunner.add_arguments(parser)
        parser.add_argument(
            "--stats",
            action="store_true",
            help="Print timing statistics",
        )

    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        # keep the return value so `manage.py test` still exits non-zero on failures
        failures = super().run_tests(test_labels, extra_tests, **kwargs)
        if self._stats:
            StopwatchTestResult.print_stats()
        return failures

    def get_resultclass(self):
        ...
This test runner adds a new option to the test command. The command prints additional statistics when the --stats option is set:
$ python manage.py test --testrunner=example.testrunner.StopwatchTestRunner -v 2 --stats
The actual calculation and output are performed in StopwatchTestResult.print_stats(). It is implemented as a class method because it refers to data shared by all TestResult instances:
import statistics

class StopwatchTestResult(unittest.TextTestResult):
    ...

    @classmethod
    def print_stats(cls):
        """
        Calculate and print timings.

        These data are likely skewed, as is normal for reaction time data,
        therefore mean and standard deviation are difficult to interpret.
        Thus, the IQR is used to identify outliers.
        """
        timings = StopwatchTestResult.timings.values()
        count = len(timings)
        mean = statistics.mean(timings)
        stdev = statistics.stdev(timings)
        slowest = max(timings)
        q1, median, q3 = statistics.quantiles(timings)
        fastest = min(timings)
        total = sum(timings)

        print()
        print("Statistics")
        print("==========")
        print("")
        print(f"count: {count:.0f}")
        print(f" mean: {mean:.4f}s")
        print(f"  std: {stdev:.4f}s")
        print(f"  min: {fastest:.4f}s")
        print(f"  25%: {q1:.4f}s")
        print(f"  50%: {median:.4f}s")
        print(f"  75%: {q3:.4f}s")
        print(f"  max: {slowest:.4f}s")
        print(f"total: {total:.4f}s")

        # https://en.wikipedia.org/wiki/Interquartile_range
        iqr = q3 - q1
        fast = q1 - 1.5 * iqr
        slow_threshold = q3 + 1.5 * iqr
        slow_tests = [
            (test, elapsed)
            for test, elapsed
            in StopwatchTestResult.timings.items()
            if elapsed >= slow_threshold
        ]
        if not slow_tests:
            return

        print()
        print("Outliers")
        print("========")
        print("These were particularly slow:")
        print()
        for test, elapsed in slow_tests:
            print(' ', test, f"[{elapsed:0.4f}s]")
To identify slow tests, you can analyze the collected timing data for outliers. Usually, one would calculate the mean and standard deviation and define outliers as all values beyond a certain threshold, such as 1.5 standard deviations above the mean.
However, timing data tend to be skewed: tests can't take less than 0 seconds, and slow outliers drag the distribution's mean to the right into a long tail. Under these circumstances, mean and standard deviation are difficult to interpret, so the code uses a different approach to identify outliers.
It uses statistics.quantiles from Python's statistics module (requires Python 3.8 or later), which yields the 25%, 50% and 75% boundaries of the distribution. To identify outliers, the threshold is defined as:
slow_threshold = q3 + 1.5 * iqr
You may want to fine-tune this formula for your purposes. The IQR (interquartile range, q3 - q1) is a measure of the dispersion of the distribution. Instead of starting from q3, you might calculate the threshold from the median, and you could reduce or increase the constant factor to make the outlier detection more or less sensitive.
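If you want to experiment, here is a minimal sketch of such a tweak, using the q1, q3 and median values already computed in print_stats above (the factor 3.0 is an arbitrary illustrative choice, not a recommendation):
# inside print_stats(), as an alternative outlier rule:
iqr = q3 - q1
# start from the median instead of q3 and use a larger factor;
# a larger factor flags fewer tests, a smaller one flags more
slow_threshold = median + 3.0 * iqr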
All together, this produces the following output:
...
test_bad_request (content.tests.test_image_upload_view.ImageUploadTests) [0.1159s] ... ok
test_view_editor_can_access_create_page (edv.knowledgebase.tests.test_view_create_page.CreatePageTests) [3.5587s] ... ok
test_admins_are_redirected_to_where_they_were_before (web.tests.test_view_login.LoginOfficialUser) [0.4595s] ... ok
test_ldap_users_are_redirected_to_their_profile_pages (web.tests.test_view_login.LoginOfficialUser) [0.4522s] ... ok
...
Statistics
==========
count: 708
mean: 0.0222s
std: 0.2177s
min: 0.0000s
25%: 0.0000s
50%: 0.0000s
75%: 0.0156s
max: 4.5008s
total: 15.6883s
Outliers
========
These were particularly slow:
test_account_view (account.tests.test_view_account.AccountViewTest) [0.0690s]
test_headline_en (arbeitseinheit.tests.test_arbeitseinheit_page.ArbeitseinheitTest) [0.0846s]
test_admins_see_admin_menu (arbeitseinheit.tests.test_arbeitseinheit_superuser.ArbeitseinheitTest) [0.1628s]
test_editors_can_post_in_their_arbeitseinheit (arbeitseinheit.tests.test_view_create_news.CreateArticleViewTests) [0.0625s]
test_bad_request (content.tests.test_image_upload_view.ImageUploadTests) [0.1159s]
...
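As a side note, beyond the custom runner shown above: newer Django versions (5.0+, when running on Python 3.12+) ship a built-in --durations option for the test command that prints the slowest tests, which may be the quickest route if it is available in your setup (please verify against the docs of your Django version):
$ python manage.py test --durations 10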

Related

How can I postpone test verification?

So, the problem is this:
I have a class with tests.
Example of this class:
class TestClass:
    def test1(self):
        step 1 (some method to create test data)
        ...
        step n (some method to create test data)
        expected = result of step 1 ... step n execution
        actual = method_a()  # get actual results from some resource
                             # (2 min is needed for this data to be created on this resource)
        verification_method(expected, actual)

    def test2(self):
        step 1 (some method to create test data)
        ...
        step n (some method to create test data)
        expected = result of step 1 ... step n execution
        actual = method_a()  # get actual results from some resource
                             # (2 min is needed for this data to be created on this resource)
        verification_method(expected, actual)
So, in that case, execution will take at least 2 min * the number of tests.
How can I move actual = method_a() (getting the actual results from the resource, which takes about 2 minutes) and the subsequent verification_method(expected, actual) into "something" that is executed after all tests?
For now, I have a global variable test_name_and_expected_result which contains the expected values for all tests, and I moved the method_a() call and the verification_method(expected, actual) call into the class fixture teardown. But as a result, I don't get a proper per-test status; I can only see a general failure, not which specific test failed.
This would also work for me: if there is a possibility to forcibly overwrite the pytest result of a specific test (as I have the names of the tests in test_name_and_expected_result).
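For reference, here is a minimal, runnable sketch of the workaround described above; method_a, verification_method and the per-test expected values are stand-ins mirroring the question, and the class-scoped autouse fixture is one way to run the check once after all tests, with exactly the reporting limitation mentioned:
import pytest

test_name_and_expected_result = {}  # expected values collected by the tests


def method_a():
    # stand-in for the slow resource call from the question (~2 min in reality)
    return 'expected'


def verification_method(expected, actual):
    assert expected == actual


class TestClass:
    @pytest.fixture(scope='class', autouse=True)
    def verify_after_all_tests(self):
        yield  # all tests of the class run first
        actual = method_a()  # the slow call happens only once, after the tests
        for test_name, expected in test_name_and_expected_result.items():
            # a failure here is reported against the teardown, not against
            # the individual test, which is the limitation described above
            verification_method(expected, actual)

    def test1(self):
        test_name_and_expected_result['test1'] = 'expected'

    def test2(self):
        test_name_and_expected_result['test2'] = 'expected'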

Optimize Variable From A Function In Python

I'm used to using Excel for this kind of problem but I'm trying my hand at Python for now.
Basically I have two sets of arrays, one constant, and the other's values come from a user-defined function.
This is the function, simple enough.
import scipy.stats as sp

def calculate_probability(spread, std_dev):
    return sp.norm.sf(0.5, spread, std_dev)
I have two arrays of data, one with entries that run through the calculate_probability function (these are the spreads), and the other a set of constants called expected_probabilities.
spreads = [10.5, 9.5, 10, 8.5]
expected_probabilities = [0.8091, 0.7785, 0.7708, 0.7692]
The below function is what I am seeking to optimise.
import numpy as np

def calculate_mse(std_dev):
    spread_inputs = np.array(spreads)
    model_probabilities = calculate_probability(spread_inputs, std_dev)
    subtracted_vector = np.subtract(model_probabilities, expected_probabilities)
    vector_powered = np.power(subtracted_vector, 2)
    mse_sum = np.sum(vector_powered)
    return mse_sum / len(spreads)
I would like to find a value of std_dev such that function calculate_mse returns as close to zero as possible. This is very easy in Excel using solver but I am not sure how to do it in Python. What is the best way?
EDIT: I've changed my calculate_mse function so that it only takes a standard deviation as the parameter to be optimised. I've tried to expose Andrew's answer as an API using Flask, but I've run into some issues:
class Minimize(Resource):
    std_dev_guess = 12.0  # might have a better guess than zeros
    result = minimize(calculate_mse, std_dev_guess)

    def get(self):
        return {'data': result}, 200

api.add_resource(Minimize, '/minimize')
This is the error:
NameError: name 'result' is not defined
I guess something is wrong with the input?
I'd suggest using scipy's optimization library. From there you have a couple of options; the easiest from your current setup would be to just use the minimize function. minimize itself offers a large number of methods, from the Nelder-Mead simplex to BFGS and COBYLA.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
from scipy.optimize import minimize

n_params = 4  # based off your code so far
spreads_guess = np.zeros(n_params)  # might have a better guess than zeros
result = minimize(calculate_mse, spreads_guess)
Give it a shot and if you have extra questions I can edit the answer and elaborate as needed.
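For the edited, single-parameter version (optimising only std_dev), a minimal sketch of the same idea; the initial guess of 12.0 is taken from the question, and reading the fitted value from result.x is standard scipy usage:
from scipy.optimize import minimize

std_dev_guess = 12.0
result = minimize(calculate_mse, std_dev_guess)

print(result.x[0])  # fitted standard deviation
print(result.fun)   # MSE at the optimum (close to zero if the model fits)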
Here are just a couple of suggestions to clean up your code.
import numpy as np
import scipy.stats as sp
from scipy.optimize import minimize
from flask_restful import Resource

class Minimize(Resource):
    def _calculate_probability(self, spread, std_dev):
        return sp.norm.sf(0.5, spread, scale=std_dev)

    def _calculate_mse(self, std_dev):
        spread_inputs = np.array(self.spreads)
        model_probabilities = self._calculate_probability(spread_inputs, std_dev)
        mse = np.sum((model_probabilities - self.expected_probabilities)**2) / len(spread_inputs)
        print(mse)
        return mse

    def __init__(self, expected_probabilities, spreads, std_dev_guess):
        self.std_dev_guess = std_dev_guess
        self.spreads = spreads
        self.expected_probabilities = expected_probabilities
        self.result = None

    def solve(self):
        self.result = minimize(self._calculate_mse, self.std_dev_guess, method='BFGS')

    def get(self):
        return {'data': self.result}, 200

# run something like
spreads = [10.5, 9.5, 10, 8.5]
expected_probabilities = [0.8091, 0.7785, 0.7708, 0.7692]
minimizer = Minimize(expected_probabilities, spreads, 10.)
print(minimizer.get())  # returns None since it hasn't been run yet, up to you how to handle this
minimizer.solve()
print(minimizer.get())

PyTest skipping test based on target code version

What I am trying to do is to skip tests that are not supported by the code I am testing. My PyTest is running tests against an embedded system that could have different versions of code running. What I want to do mark my test such that they only run if they are supported by the target.
I have added a pytest_addoption method:
def pytest_addoption(parser):
    parser.addoption(
        '--target-version',
        action='store', default='28',
        help='Version of firmware running in target')
And created a fixture to decide whether the test should be run:
@pytest.fixture(autouse=True)
def version_check(request, min_version: int = 0, max_version: int = 10000000):
    version_option = int(request.config.getoption('--target-version'))
    if min_version and version_option < min_version:
        pytest.skip('Version number is lower than the version required to run this test '
                    f'({min_version} vs {version_option})')
    if max_version and version_option > max_version:
        pytest.skip('Version number is higher than the version required to run this test '
                    f'({max_version} vs {version_option})')
Marking the tests like this:
@pytest.mark.version_check(min_version=24)
def test_this_with_v24_or_greater():
    print('Test passed')

@pytest.mark.version_check(max_version=27)
def test_not_supported_after_v27():
    print('Test passed')

@pytest.mark.version_check(min_version=13, max_version=25)
def test_works_for_range_of_versions():
    print('Test passed')
In the arguments for running the tests, I just want to add --target-version 22 and have only the right tests run. I haven't been able to figure out how to pass the arguments from @pytest.mark.version_check(max_version=27) to version_check.
Is there a way to do this or am I completely off track and should be looking at something else to accomplish this?
You are not far from a solution, but you're mixing up markers with fixtures; they are not the same, even if you give them the same name. You can, however, read the markers of each test function in your version_check fixture and skip the test depending on what the version_check marker provides, if it is set. Example:
import pytest

@pytest.fixture(autouse=True)
def version_check(request):
    version_option = int(request.config.getoption('--target-version'))
    # request.node is the current test item;
    # query the marker "version_check" of the current test item
    version_marker = request.node.get_closest_marker('version_check')
    # if the test item was not marked, there's no version restriction
    if version_marker is None:
        return
    # arguments of @pytest.mark.version_check(min_version=10) are in marker.kwargs;
    # arguments of @pytest.mark.version_check(0, 1, 2) would be in marker.args
    min_version = version_marker.kwargs.get('min_version', 0)
    max_version = version_marker.kwargs.get('max_version', 10000000)
    # the rest is your logic unchanged
    if version_option < min_version:
        pytest.skip('Version number is lower than the version required to run this test '
                    f'({min_version} vs {version_option})')
    if version_option > max_version:
        pytest.skip('Version number is higher than the version required to run this test '
                    f'({max_version} vs {version_option})')
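As a side note (not part of the original answer): to silence PytestUnknownMarkWarning for the custom marker, it can be registered in conftest.py, for example:
# conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "version_check(min_version, max_version): "
        "restrict a test to a range of target firmware versions",
    )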

Pygmo2: migration between islands in an archipelago during evolution

I'm trying to use the Python library Pygmo2 (https://esa.github.io/pagmo2/index.html) to parallelize an optimization problem.
To my understanding, parallelization can be achieved with an archipelago of islands (in this case, mp_island).
As a minimal working example, one of the tutorials from the official site can serve: https://esa.github.io/pagmo2/docs/python/tutorials/using_archipelago.html
I extracted the code:
class toy_problem:
    def __init__(self, dim):
        self.dim = dim

    def fitness(self, x):
        return [sum(x), 1 - sum(x*x), - sum(x)]

    def gradient(self, x):
        return pg.estimate_gradient(lambda x: self.fitness(x), x)

    def get_nec(self):
        return 1

    def get_nic(self):
        return 1

    def get_bounds(self):
        return ([-1] * self.dim, [1] * self.dim)

    def get_name(self):
        return "A toy problem"

    def get_extra_info(self):
        return "\tDimensions: " + str(self.dim)
import pygmo as pg
a_cstrs_sa = pg.algorithm(pg.cstrs_self_adaptive(iters=1000))
p_toy = pg.problem(toy_problem(50))
p_toy.c_tol = [1e-4, 1e-4]
archi = pg.archipelago(n=32, algo=a_cstrs_sa, prob=p_toy, pop_size=70)
print(archi)
archi.evolve()
print(archi)
Looking at the documentation of the old version of the library (http://esa.github.io/pygmo/documentation/migration.html), migration between islands seems to be an essential feature of the island parallelization model.
Also, to my understanding, optimization algorithms like evolutionary algorithms could not work without it.
However, nowhere in the Pygmo2 documentation can I find how to perform migration.
Is it happening automatically in an archipelago?
Does it depend on the selected algorithm?
Is it not yet implemented in Pygmo2?
Is the documentation on this yet missing or did I just not find it?
Can somebody enlighten me?
pagmo2 implements migration since v2.11; the PR has been completed and merged into master. Almost all capabilities present in pagmo 1.x are restored. We will still add more topologies in the future, but they can already be implemented manually. Refer to the docs here: https://esa.github.io/pagmo2/docs/cpp/cpp_docs.html
Tutorials and examples are missing and will be added in the near future (help is welcome).
The migration framework has not been fully ported from pagmo1 to pagmo2 yet. There is a long-standing PR open here:
https://github.com/esa/pagmo2/pull/102
We will complete the implementation of the migration framework in the next few months, hopefully by the beginning of the summer.
IMHO, the PyGMO2/pagmo documentation confirms that the migration feature is present:
The archipelago class is the main parallelization engine of pygmo. It essentially is a container of island able to initiate evolution (optimization tasks) in each island asynchronously while keeping track of the results and of the information exchange (migration) between the tasks ...
With the exception of thread_islands (where some automated inference may take place and enforce them for thread-safe UDIs), all other island types ({ mp_island | ipyparallel_island }) create a GIL-independent form of parallelism, yet the computing is performed via an asynchronously operated .evolve() method.
In the original PyGMO, the archipelago class was auto-initialised with the attribute topology = unconnected() unless specified explicitly, as documented in PyGMO for the tuple of call interfaces of the archipelago.__init__() method (showing just the matching one):
__init__( <PyGMO.algorithm> algo,
<PyGMO.problem> prob,
<int> n_isl,
<int> n_ind [, topology = unconnected(),
distribution_type = point_to_point,
migration_direction = destination
]
)
But, adding that, one may redefine the default, so as to meet one's PyGMO evolutionary process preferences:
topo = topology.erdos_renyi( nodes = 100,
p = 0.03
) # Erdos-Renyi ( random ) topology
or
set a Clustered Barabási-Albert, with ageing vertices graph topology:
topo = topology.clustered_ba( m0 = 3,
m = 3,
p = 0.5,
a = 1000,
nodes = 0
) # clustered Barabasi-Albert,
# # with Ageing vertices topology
or:
topo = topology.watts_strogatz( nodes = 100,
p = 0.1
) # Watts-Strogatz ( circle
# + links ) topology
and finally, set it by assignment into the class-instance attribute:
archi = pg.archipelago( n = 32,
algo = a_cstrs_sa,
prob = p_toy,
pop_size = 70
) # constructs an archipelago
archi.topology = topo # sets the topology to the
# # above selected, pre-defined <topo>

Linking container class properties to contained class properties

I'm working on a simulation of a cluster of solar panels (a system/container). The properties of this cluster are linked almost one-to-one to the properties of its elements -- the panels (subsystem/contained) -- via the number of elements per cluster. E.g. the energy production of the cluster is simply the number of panels in the cluster times the production of a single panel. The same goes for the cost, weight, etc. My question is how to link the container class to the contained class.
Let me illustrate with a naive example approach:
class PanelA(BasePanel):
    ...  # _x, _y, _area, _production, etc.

    @property
    def production(self):
        # J/panel/s
        return self._area * self._efficiency

    # ... and 20 similar properties

    @property
    def _technical_detail(self):
        ...

class PanelB(BasePanel):
    ...  # similar

class PanelCluster():
    ...
    self.panel = PanelA()
    self.density = 100  # panels/ha

    @property
    def production(self):
        # J/ha/h
        uc = 60*60  # unit conversion
        rho = self.density
        production_single_panel = self.panel.production
        return uc*rho*production_single_panel

    # ... and e.g. 20 similar constructions
Note that in this naive approach one would write some 20 such methods, which does not seem in line with the DRY principle.
What would be a better alternative? (Ab)use __getattr__?
For example:
class Panel():
    unit = {'production': 'J/panel/s'}

class PanelCluster():
    panel = Panel()

    def __getattr__(self, key):
        if self.panel.__hasattr__(key):
            panel_unit = self.panel.unit[key]
            if '/panel/s' in panel_unit:
                uc = 60*60  # unit conversion
                rho = self.density
                value_per_panel = getattr(self.panel, key)
                return uc*rho*value_per_panel
            else:
                return getattr(self.panel, key)
This already seems more 'programmatic' but might be naive -- again. So I wonder what are the options and the pros/cons thereof?
There are a number of Python issues with your code, e.g.:
yield means something specific in Python, probably not a good identifier
and it's spelled:
hasattr(self.panel.unit, key)
getattr(self.panel, key)
That aside, you're probably looking for a solution involving inheritance. Perhaps both Panel and PanelCluster need to inherit from a PanelFunctions class?
class PanelFunctions(object):
    @property
    def Yield(...):
        ...

class Panel(PanelFunctions):
    ...

class PanelCluster(PanelFunctions):
    ...
I would leave the properties as separate definitions since you'll need to write unit tests for all of them (and it will be much easier to determine coverage that way).
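For completeness, a minimal, self-contained sketch of the __getattr__ delegation idea from the question with the spelling fixed; the unit table and the 100 panels/ha density come from the question, while the per-panel production value is made up purely for illustration:
class Panel:
    unit = {'production': 'J/panel/s'}

    @property
    def production(self):
        # J/panel/s -- illustrative constant instead of area * efficiency
        return 0.2


class PanelCluster:
    def __init__(self):
        self.panel = Panel()
        self.density = 100  # panels/ha

    def __getattr__(self, key):
        # only called for attributes not found on PanelCluster itself
        if not hasattr(self.panel, key):
            raise AttributeError(key)
        value = getattr(self.panel, key)
        if self.panel.unit.get(key, '').endswith('/panel/s'):
            # convert J/panel/s -> J/ha/h
            return 60 * 60 * self.density * value
        return value


cluster = PanelCluster()
print(cluster.production)  # 0.2 * 100 * 3600 = 72000.0 J/ha/h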
