Is it possible to retrieve or reformat the falsifying example after a test failure? The point is to show the example data in a different format - data generated by the strategy is easy to work with in the code but not really user friendly, so I'm looking at how to display it in a different form. Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something?
You can call note to record additional information during a test, such as your own custom-formatted copy of the generated inputs.
When Hypothesis finds a falsifying example, it will also print out the notes that were recorded during the execution of that particular example.
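For instance, a minimal sketch (the property and the custom formatting here are invented purely for illustration):

import hypothesis.strategies as st
from hypothesis import given, note

@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    # note() output is only shown for the falsifying example, so it is a good
    # place for a more user-friendly rendering of the generated input.
    note("input as CSV: " + ",".join(map(str, xs)))
    assert sorted(sorted(xs)) == sorted(xs)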
Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something?
The example database uses a private format and only records the choices a strategy made to generate the falsifying example, so there's no way to extract the data of the example short of re-running the test.
Stuart's recommendation of hypothesis.note(...) is a good one.
Related
During my current project, I have been receiving data from a set of long-range sensors, which are sending data as a series of bytes. Generally, due to having multiple types of sensors, the bytes structures and data contained are different, hence the need to make the functionality more dynamic as to avoid having to hard-code every single setup in the future (which is not practical).
The server will be using Django, which I believe is irrelevant to the issue at hand but I have mentioned just in case it might have something that can be used.
The bytes data I am receiving looks like this:
b'B\x10Vu\x87%\x00x\r\x0f\x04\x01\x00\x00\x00\x00\x1e\x00\x00\x00\x00ad;l'
And my current process looks like this:
Take the first six bytes to get the deviceID (deviceID = val[0:6].hex()).
Look up the format to be used in struct.unpack() (here: >BBHBBhBHhHL, after removing the first six bytes for the ID).
Now, the issue is the next step. Many of the values have different forms of pre-processing that need to be done. For example, some values need to be run through a join (e.g. ".".join(str(values[2]))), others need a simple mathematical change (-113 + 2 * values[4]), and others need a simple logic check (values[7] == 0x80) to return a boolean value.
My question is: what's the best way to code those methods? I would really like to avoid hardcoding them, but it almost seems like the best idea. Another idea I saw was to store the functionality as a string and execute it, as seen here, but I've been reading that it's a very bad idea and that it also slows down execution. The last idea I had was to hardcode some general functions only and use something similar to here, but this doesn't solve the issue of having to hardcode every new sensor type, which is not realistic in a live installation. Are there any better methods to achieve the same thing?
I have also looked here, with the idea that some functionality could be optimized into an equation, but I didn't see that as a possibility for every occurrence, especially where any string manipulation is needed.
Additionally, is there a possibility of using some maths to apply some basic string manipulation? I can hard-code one string manipulation maybe, but to be honest this whole thing has been bugging me...
Finally, if I do go with storing functions as strings and executing them, is there a way to add some "security" to avoid malicious exploitation? Such a method is... awfully insecure, to say the least.
However, after almost a week total of searching I am so far unable to find a better solution than storing functions as a string and running eval on them, despite not liking that option. If anyone finds a better option before then, I would be extremely grateful to any tips or ideas.
Addendum: minimal code that can be used to showcase and test different methods:
import struct

def decode(input):
    val = bytearray(input)
    deviceID = val[0:6].hex()
    del val[0:6]
    print(deviceID)
    values = list(struct.unpack('>BBHBBhBHhHL', val))
    print(values)
    # Now what?

decode(b'B\x10Vu\x87%\x00x\r\x0f\x04\x01\x00\x00\x00\x00\x1e\x00\x00\x00\x00ad;l')
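For what it's worth, one way to avoid eval entirely is to keep the per-field transforms as ordinary callables in a registry keyed by sensor type. The sketch below only reuses the three example transforms from the question; every name in it (the field names, the "default" key) is hypothetical:

import struct

# Hypothetical registry: for each sensor type, the unpack format plus a list of
# (field name, transform) pairs. The transforms simply mirror the three examples
# from the question; real field names and logic would differ.
SENSOR_SPECS = {
    "default": {
        "format": ">BBHBBhBHhHL",
        "fields": [
            ("dotted_value", lambda v: ".".join(str(v[2]))),
            ("rssi_dbm", lambda v: -113 + 2 * v[4]),
            ("flag_set", lambda v: v[7] == 0x80),
        ],
    },
}

def decode(payload, sensor_type="default"):
    spec = SENSOR_SPECS[sensor_type]
    raw = bytearray(payload)
    device_id = raw[0:6].hex()
    values = struct.unpack(spec["format"], raw[6:])
    return device_id, {name: fn(values) for name, fn in spec["fields"]}

print(decode(b'B\x10Vu\x87%\x00x\r\x0f\x04\x01\x00\x00\x00\x00\x1e\x00\x00\x00\x00ad;l'))

New sensor types then become new registry entries rather than strings of code, although the transforms themselves still live in source rather than in the database.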
I'm a total amateur/hobbyist developer trying to learn more about testing the software I write. While I understand the core concept of testing, as the functions get more complicated, it feels like a rabbit hole of variations, outcomes, conditions, etc. For example...
The function below reads files from a directory into a Pandas DataFrame. A few column adjustments are made before the data is passed to a different function that ultimately imports the data to our database.
I've already coded a test for the convert_date_string function. But what about this entire function as a whole - how do I write a test for it? In my mind, much of the Pandas library is already tested - so making sure core functionality there works with my setup seems like a waste. But maybe it isn't. Or maybe this is a refactoring question and I should break this down into smaller parts?
Anyway, here is the code... any insight would be appreciated!
import glob
import pandas as pd

# config, convert_date_string and insert_data_into_database come from the
# surrounding project.

def process_file(import_id=None):
    all_files = glob.glob(config.IMPORT_DIRECTORY + "*.txt")
    if len(all_files) == 0:
        return []
    import_data = (pd.read_csv(f, sep='~', encoding='latin-1',
                               warn_bad_lines=True, error_bad_lines=False,
                               low_memory=False) for f in all_files)
    data = pd.concat(import_data, ignore_index=True, sort=False)
    data.columns = [col.lower() for col in data.columns]
    data = data.where((pd.notnull(data)), None)
    data['import_id'] = import_id
    data['date'] = data['date'].apply(lambda x: convert_date_string(x))
    insert_data_into_database(data=data, table='sales')
    return all_files
There are mainly two kind of tests - proper unit tests, and integration tests.
Unit tests, as the name implies, test "units" of your program (functions, classes...) in isolation (without considering how they interact with other units). This of course requires that those units can be tested in isolation. For example, a pure function (one that computes a result from its inputs, where the result depends only on the inputs, is always the same for the same inputs, and has no side effects) is very easy to test, while a function that reads data from a hardcoded path on your filesystem, makes HTTP requests to a hardcoded URL, and updates a database (whose connection details are also hardcoded) is almost impossible to test in isolation (and actually almost impossible to test at all).
So the first point is to write your code with testability in mind: favour small, focused units with a single clear responsibility and as few dependencies as possible (preferably taking their dependencies as arguments so you can pass a mock instead). This is of course a bit of a platonic ideal, but it's a worthy goal still. As a last resort, when you cannot get rid of dependencies or parameterize them, you can use a package like mock that will replace your dependencies with bogus objects exposing a similar interface.
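As a small illustration of "taking dependencies as arguments" combined with mock (the function, its arguments and the table name are all made up for the example):

from unittest import mock

def store_rows(rows, insert_fn):
    # The database writer is injected, so a test can hand in a fake one.
    cleaned = [row.strip().lower() for row in rows]
    insert_fn(cleaned, table="reports")
    return cleaned

def test_store_rows_inserts_cleaned_values():
    fake_insert = mock.Mock()
    assert store_rows(["  Foo", "BAR "], insert_fn=fake_insert) == ["foo", "bar"]
    fake_insert.assert_called_once_with(["foo", "bar"], table="reports")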
Integration testing is about testing whole subsystems from a much higher level - for example, for a website project you may want to test that submitting the "contact" form sends an email to a given address and also stores the data in the database. You obviously want to do so with a disposable test database and a disposable test mailbox.
The function you posted is possibly doing a bit too much - it reads files, builds a pandas DataFrame, applies some processing, and stores things in a database. You may want to try and factor it into more functions - one to get the file list, one to collect data from the files, one to process the data, etc. (you already have the one storing the data in the database) - and rewrite your process_file (which is actually doing more than processing) to call those functions. This will make it easier to test each part in isolation. Once done with this, you can use mock to test the process_file function and check that it calls the other functions with the expected arguments, or run it against a test directory and a test database and check the results in the database.
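A rough sketch of that split, assuming the same imports and project helpers (config, convert_date_string, insert_data_into_database) as the original - the helper names are invented:

def list_import_files(directory):
    return glob.glob(directory + "*.txt")

def load_files(paths):
    frames = (pd.read_csv(f, sep='~', encoding='latin-1', low_memory=False)
              for f in paths)
    return pd.concat(frames, ignore_index=True, sort=False)

def clean_data(data, import_id):
    data.columns = [col.lower() for col in data.columns]
    data = data.where(pd.notnull(data), None)
    data['import_id'] = import_id
    data['date'] = data['date'].apply(convert_date_string)
    return data

def process_file(import_id=None):
    # Thin orchestration layer: little logic of its own, easy to test with mocks.
    all_files = list_import_files(config.IMPORT_DIRECTORY)
    if not all_files:
        return []
    data = clean_data(load_files(all_files), import_id)
    insert_data_into_database(data=data, table='sales')
    return all_files

Each helper can then be unit-tested on its own (clean_data against a small hand-built DataFrame, for instance), and process_file itself only needs a test that checks the wiring.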
In general, I wouldn't go down the road of testing pandas or any other dependency. The way I see it, if I make sure that a package I use is well developed and well supported, then writing tests for it would be redundant. Pandas is a very well supported package.
As to your question about this specific function and your interest in testing in general, I highly recommend checking out the Hypothesis package (you're in luck - it's currently Python-only). It generates test data for you, including edge cases, for testing purposes.
An example from their docs:
from hypothesis import given
from hypothesis.strategies import text

@given(text())
def test_decode_inverts_encode(s):
    assert decode(encode(s)) == s
Here you tell it that the function needs to receive text as input, and the package will run the test multiple times with different values that fit that criterion. It will also try all kinds of edge cases. It can do much more once you start using it.
I'm writing unit tests that have a database dependency (so technically they're functional tests). Often these tests not only rely on the database to be live and functional, but they can also rely on certain data to be available.
For example, in one test I might query the database to retrieve sample data that I am going to use to test the update or delete functionality. If data doesn't already exist, then this isn't exactly a failure in this context. I'm only concerned about the pass/fail status of the update or delete, and in this situation we didn't even get far enough to test it. So I don't want to give a false positive or false negative.
Is there an elegant way to have the unit test return a 3rd possible result? Such as a warning?
In general I think the advice by Paul Becotte is best for most cases:
This is a failure though - your tests failed to set up the system in the way that your test required. Saying "the data wasn't there, so it is okay for this test to fail" is saying that you don't really care about whether the functionality you are testing works. Make your test reliably insert the data immediately before retrieving it, which will give you better insight into the state of your system.
However, in my particular case, I am writing a functional test that relies on data generated and manipulated from several processes. Generating it quickly at the beginning of the test just isn't practical (at least yet).
What I ultimately found works the way I need is to use skipTest, as mentioned here:
Skip unittest if some-condition in SetUpClass fails
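A minimal sketch of that pattern with unittest - fetch_sample_rows stands in for however the prerequisite data is really looked up:

import unittest

def fetch_sample_rows():
    # Placeholder for the real lookup of pre-existing data.
    return []

class UpdateTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.rows = fetch_sample_rows()
        if not cls.rows:
            # Raising SkipTest here marks every test in the class as skipped,
            # which the runner reports separately from pass/fail.
            raise unittest.SkipTest("prerequisite data not available")

    def test_update_existing_row(self):
        self.assertTrue(self.rows)

if __name__ == "__main__":
    unittest.main()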
I'm writing an ORM-like library for Mongo. I've written some models, and want to make sure that they and the machinery that supports them are correct, so I'd like to write unit tests for them. I figured the best way to do that would be to simply write out some test data as JSON, then pass them to my models and see if the valid data is considered valid and the invalid data as invalid.
My question is where to put that data: it seems like a lot of non-TestCase stuff to put in test_models.py, but adding a separate test_data_for_models.json contributes its own headaches (versioning, etc.)
Which is the more recommended/idiomatic? Bear in mind I don't really need to generate any data--I'll likely just adapt data that's already in the db--I'm just unsure where to put it.
I'd like to write a set of "fuzzy" unit tests in python. So far I've been using testtools, but switching to a different framework would be fine.
My test suite is aiming to test the performance of image-processing algorithms. I'd like to be able to have tests report fuzzy pass states. In other words, the results are "good enough" but it might be useful to investigate.
I have something like this:
suite = unittest.TestLoader().loadTestsFromTestCase(TestMyAlgorithm)
result = testtools.TestResult()
result.startTestRun()
try:
    suite.run(result)
finally:
    result.stopTestRun()
I'd like to use information in the result object to generate a report, but it looks like all of the information associated with passed tests has been tossed.
I'm wondering if I'm abusing the notion of a unit test to fit this sort of investigation.
Is there a standard way to perform this sort of testing in python?
Assuming your goal here is really reporting, get a tool that can generate a detailed report in XML format (e.g. nosetests; py.test likely has similar support), and process the report however you want in a second step.
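For example, pytest can write a JUnit-style XML report via pytest --junitxml=report.xml; a small post-processing script (a sketch, assuming the standard JUnit attribute names) can then pull out whatever per-test detail you care about, including for tests that passed:

import xml.etree.ElementTree as ET

root = ET.parse("report.xml").getroot()
for case in root.iter("testcase"):
    # Each <testcase> records its name and wall-clock time; failures and skips
    # show up as child elements, so their absence means the test passed.
    status = "passed"
    if case.find("failure") is not None:
        status = "failed"
    elif case.find("skipped") is not None:
        status = "skipped"
    print(case.get("classname"), case.get("name"), case.get("time"), status)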