We are successfully using pytest (Python 3) to run a test suite testing some hardware devices (electronics).
For a subset of these tests, we need the tester to change the hardware arrangement, and afterwards change it back.
My approach was to use a module-level fixture attached to the tests in question (which are all in a separate module), with two input calls:
@pytest.fixture(scope="module")
def disconnect_component():
    input('Disconnect component, then press enter')
    yield  # At this point all the tests with this fixture are run
    input('Connect component again, then press enter')
When running this, I get OSError: reading from stdin while output is captured. I can avoid this by calling pytest with --capture=no, and have confirmed that my approach works, meaning I get the first query before the test subset in question, and the second one after they have run.
The big drawback is that this deactivates capturing stdout/stderr for the whole test suite, which some of the other tests rely on.
I also tried to use capsys.disabled (docs) like this
@pytest.fixture(scope="module")
def disconnect_component(capsys):
    with capsys.disabled():
        input('Disconnect component, then press enter')
        yield  # At this point all the tests with this fixture are run
        input('Connect component again, then press enter')
but when running this I get ScopeMismatch: You tried to access the 'function' scoped fixture 'capsys' with a 'module' scoped request object, involved factories.
Can I make pytest wait for user action in some other way than input? If not, can I disable capturing just for the tests using above fixture?
So, I found a hint by a pytest dev, based on which I basically do what the capsys.disabled() function does:
@pytest.fixture(scope="module")
def disconnect_component(pytestconfig):
    capmanager = pytestconfig.pluginmanager.getplugin('capturemanager')
    capmanager.suspend_global_capture(in_=True)
    input('Disconnect component, then press enter')
    capmanager.resume_global_capture()
    yield  # At this point all the tests with this fixture are run
    capmanager.suspend_global_capture(in_=True)
    input('Connect component again, then press enter')
    capmanager.resume_global_capture()
This works flawlessly as far as I can see. Don't forget the in_=True bit.
Edit: From pytest 3.3.0 (I think), capmanager.suspendcapture and capmanager.resumecapture were renamed to capmanager.suspend_global_capture and capmanager.resume_global_capture, respectively.
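If a suite has to run under both older and newer pytest versions, a small compatibility shim is possible; this is only a sketch, assuming nothing but the method names changed:

import pytest

@pytest.fixture(scope="module")
def disconnect_component(pytestconfig):
    capmanager = pytestconfig.pluginmanager.getplugin('capturemanager')
    # Fall back to the pre-3.3 method names if the newer ones are missing.
    suspend = getattr(capmanager, 'suspend_global_capture',
                      getattr(capmanager, 'suspendcapture', None))
    resume = getattr(capmanager, 'resume_global_capture',
                     getattr(capmanager, 'resumecapture', None))

    suspend(in_=True)
    input('Disconnect component, then press enter')
    resume()
    yield  # tests using this fixture run here
    suspend(in_=True)
    input('Connect component again, then press enter')
    resume()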
As of pytest 5, as a fixture, you can use this:
@pytest.fixture
def suspend_capture(pytestconfig):
    class suspend_guard:
        def __init__(self):
            self.capmanager = pytestconfig.pluginmanager.getplugin('capturemanager')

        def __enter__(self):
            self.capmanager.suspend_global_capture(in_=True)

        def __exit__(self, _1, _2, _3):
            self.capmanager.resume_global_capture()

    yield suspend_guard()
Example usage:
def test_input(suspend_capture):
    with suspend_capture:
        input("hello")
Maybe it's worth noting that the above solution doesn't have to be in a fixture. I've made a helper function for that:
import logging

import pytest

def ask_user_input(msg=''):
    """Ask the user to check something manually and answer a question."""
    notification = "\n\n???\tANSWER NEEDED\t???\n\n{}".format(msg)
    # Suspend input capture by pytest so user input can be recorded here
    # (pre-3.3 method names and the global pytest.config object are used here).
    capture_manager = pytest.config.pluginmanager.getplugin('capturemanager')
    capture_manager.suspendcapture(in_=True)
    answer = raw_input(notification)  # use input() on Python 3
    # Resume capture after the question has been asked
    capture_manager.resumecapture()
    logging.debug("Answer: {}".format(answer))
    return answer
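For example (the prompt and the assertion are made up), it can then be called from any test:

def test_led_indicator():
    answer = ask_user_input('Did the LED turn on? [y/n]')
    assert answer.strip().lower() == 'y'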
For future reference, if you need to use input with pytest, you can do this in any part of your test code: setup_class, test_..., teardown_method, etc. This is for pytest > 3.3.x:
import pytest
capture_manager = pytest.config.pluginmanager.getplugin('capturemanager')
capture_manager.suspend_global_capture(in_=True)
answer = input('My reference text here')
capture_manager.resume_global_capture()
Solutions that use the global pytest.config object no longer work. For my use case, using --capture=sys together with a custom input() that uses stdin and stdout directly works well.
import os

def fd_input(prompt):
    # Write the prompt to the real stdout (fd 1), which --capture=sys leaves
    # untouched, then read the reply from fd 2, which is normally still
    # attached to the same terminal.
    with os.fdopen(os.dup(1), "w") as stdout:
        stdout.write("\n{}? ".format(prompt))
    with os.fdopen(os.dup(2), "r") as stdin:
        return stdin.readline()
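A usage sketch (the prompt text and assertion are made up; run the suite with --capture=sys):

def test_manual_check():
    answer = fd_input("Is the component connected")
    assert answer.strip().lower() in ("y", "yes")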
In my pytest suite, I run plenty of iOS UI tests, which come with plenty of frustrating issues. I'm experimenting with hook-based retry logic. Essentially, I have a pytest_runtest_call hook where I collect the outcome of a test via a yield and do extra processing with that data. Based on the failed output of the test, I would like to re-trigger the test, including its setup and teardown. Is this possible? I am trying to do this without adding any extra packages (reinventing the wheel here, but in an effort to better understand pytest's logic). Here's where I'm currently at with my hook:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    output = yield
    if output.excinfo:
        # retry_logic_handler.should_retry returns a bool: whether or not to retry the test
        if retry_logic_handler.should_retry(item):
            # Re-run this test, including runtest_setup & runtest_teardown
            ...
    _validate_expected_failures(output)
I understand I will get a lot of "just write better tests", but in my case UI testing is rather unpredictable, and the data I am collecting helps clarify why a retry is required.
So the ultimate question is, how can I add retry logic to my pytest hooks?
It turns out this will do the trick: a different hook, but the same objective.
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    report = yield
    result = report.get_result()
    if result.outcome == 'failed':
        try:
            # Re-run the test body; note this repeats only the call phase,
            # not the fixture setup and teardown.
            attempt = item.runtest()
        except Exception:
            attempt = 'failed'
        if not attempt:
            result.outcome = 'passed'
I have a Python function foo with a while True loop inside.
For background: it is expected to stream info from the web, do some writing, and run indefinitely. The asserts test whether the writing was done correctly.
Clearly I need it to stop sometime, in order to test.
What I did was to run via multirpocessing and introduce a timeout there, however when I see the test coverage, the function which ran through the multiprocessing, are not marked as covered.
Question 1: Why does pytest now work this way?
Question 2: How can I make this work?
I was thinking it's probably because I technically exit the loop, so maybe pytest does not mark this as tested....
import time
import multiprocessing

def test_a_while_loop():
    # Start through multiprocessing in order to have a timeout.
    p = multiprocessing.Process(
        target=foo,
        name="Foo",
    )
    try:
        p.start()
        # my timeout
        time.sleep(10)
        p.terminate()
    finally:
        # Cleanup.
        p.join()
    # Asserts below
    ...
More info
I looked into adding a decorator such as @pytest.mark.timeout(5), but that did not work: it stops the whole function, so I never get to the asserts (as suggested here).
If I don't find a way, I will just test the parts, but ideally I would like to find a way to test by breaking the loop.
I know I can re-write my code in order to make it have a timeout, but that would mean changing the code to make it testable, which I don't think is a good design.
Mocks I have not tried (as suggested here), because I don't believe I can mock what I do, since it writes info from the web. I need to actually see the "original" working.
Break out the functionality you want to test into a helper method. Test the helper method.
def scrape_web_info(url):
    data = get_it(url)
    return data


# In production:
while True:
    scrape_web_info(...)


# During test:
def test_web_info():
    assert scrape_web_info(...) == ...
Yes, it is possible, and the code above shows one way to do it (run through multiprocessing with a timeout).
Since the asserts were running fine, I found out that the issue was not pytest, but the coverage report not accounting for multiprocessing properly.
I describe how I fixed this (now separate) issue here.
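For reference, the usual fix is to tell coverage.py to also measure code running in child processes; a minimal sketch of the relevant .coveragerc settings (assuming coverage.py or pytest-cov produces the report) is:

# .coveragerc
[run]
concurrency = multiprocessing
parallel = True
# On newer coverage.py versions, sigterm = True also lets data be saved
# when the child is stopped with terminate().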
Actually, I had the same problem with an endless task to test and coverage. However, in my code there is a .run_forever() method which runs a .run_once() method inside an infinite loop, so I can write a unit test for the .run_once() method to test its functionality. Nevertheless, if you want to test your forever function despite the halting problem, to get more code coverage, I propose the following approach using a timeout, regardless of the tools you've mentioned (multiprocessing and @pytest.mark.timeout(5) didn't work for me either):
First, install the interruptingcow PyPI package to have a nice timeout for raising an optional exception: pip install interruptingcow
Then:
import pytest
import asyncio
from interruptingcow import timeout
from <path-to-loop-the-module> import EventLoop


class TestCase:
    @pytest.mark.parametrize("test_case", ['none'])
    def test_events(self, test_case: list):
        assert EventLoop().run_once()  # It's usual

    @pytest.mark.parametrize("test_case", ['none'])
    def test_events2(self, test_case: list):
        try:
            with timeout(10, exception=asyncio.CancelledError):
                EventLoop().run_forever()
            assert False
        except asyncio.CancelledError:
            assert True
Is it possible in Python to store functions that would have been executed?
More specifically, would it be possible to do that using a context manager? I once saw a context manager that would retry all the wrapped code inside it, but I could not figure out how to do that.
For instance, let's say I have
def my_print():
    print("printed")
What I want to achieve is something like this:
with StopFunctionsExecution() as functions_executed:
    my_print()
    my_print()
    my_print()
# So far, nothing in console

functions_executed.run_all()  # or on __exit__
# All 3 printed in console
You cannot "schedule" the execution of regular code instead of executing it right away because that would require changes to how that code is interpreted -- i.e. changes to the interpreter.
But you can change when the interpreter gets to execute that code:
def batch(*args):
    for fn, *args_ in args:
        fn(*args_)

def foo():
    print("blah")
    print("blar")
    print("xyzzy")

batch_list = ((print, "foo"), (print, "bar"), (print, "baz"), (foo,))

# <somewhere later>
batch(*batch_list)
This is how calls are scheduled for execution elsewhere, e.g. in threading and multiprocessing: by passing the function and its arguments separately.
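Building on that, a context manager along the lines of the question can be sketched by recording the function and its arguments instead of calling them; the class name and the defer()/run_all() API below simply mirror the question and are made up, and note that a plain my_print() call inside the with block would still run immediately:

import functools

class StopFunctionsExecution:
    """Collects deferred calls and replays them on demand."""
    def __init__(self):
        self._calls = []

    def defer(self, fn, *args, **kwargs):
        # Store the call instead of executing it right away.
        self._calls.append(functools.partial(fn, *args, **kwargs))

    def run_all(self):
        for call in self._calls:
            call()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False  # do not suppress exceptions

with StopFunctionsExecution() as functions_executed:
    functions_executed.defer(my_print)
    functions_executed.defer(my_print)
    functions_executed.defer(my_print)
# Nothing printed yet
functions_executed.run_all()  # all three calls run here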
I am looking for a way to prevent tests from being executed when, for example, a required test server cannot be contacted.
It is essential to be able to detect this before starting to execute the tests, failing as fast as possible.
The tests are run using py.test or using tox which calls py.test.
I do have a piece of code that detects whether the test server is up, but I don't know the right place to put it.
Initially I assumed this would be a global fixture, but that's not quite right: it would run for each test, and what I want is not to run them at all.
I'm using skipif for skipping tests.
I have a function, which returns True or False based on the availability of the program I need:
import subprocess

def have_gpg():
    try:
        subprocess.call(['gpg', '--version'])
    except OSError:
        # gpg binary not available
        return False
    return True
Then for tests I need to skip if GPG is not present, I use skipif:
@pytest.mark.skipif("not have_gpg()")
def test_gpg_decode(tmpdir):
    ...
    assert out == 'test'
See more documentation on skipif.
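The same pattern fits the original test-server scenario; here is a sketch in which the host, port, and helper name are made up:

import socket
import pytest

def server_reachable(host='test-server.example', port=443, timeout=2):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Evaluated when the module is imported during collection, so the tests are
# skipped up front instead of failing one by one.
@pytest.mark.skipif(not server_reachable(), reason="test server unreachable")
def test_against_server():
    ...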
Background:
I am currently writing a process monitoring tool (Windows and Linux) in Python and implementing unit test coverage. The process monitor hooks into the Windows API function EnumProcesses on Windows and monitors the /proc directory on Linux to find current processes. The process names and process IDs are then written to a log which is accessible to the unit tests.
Question:
When I unit test the monitoring behavior I need a process to start and terminate. I would love it if there were a (cross-platform?) way to start and terminate a fake system process that I could uniquely name (and track its creation in a unit test).
Initial ideas:
I could use subprocess.Popen() to open any system process but this runs into some issues. The unit tests could falsely pass if the process I'm using to test is run by the system as well. Also, the unit tests are run from the command line and any Linux process I can think of suspends the terminal (nano, etc.).
I could start a process and track it by its process ID but I'm not exactly sure how to do this without suspending the terminal.
These are just thoughts and observations from initial testing and I would love it if someone could prove me wrong on either of these points.
I am using Python 2.6.6.
Edit:
Get all Linux process IDs:
try:
    processDirectories = os.listdir(self.PROCESS_DIRECTORY)
except IOError:
    return []
return [pid for pid in processDirectories if pid.isdigit()]
Get all Windows process IDs:
import ctypes, ctypes.wintypes

Psapi = ctypes.WinDLL('Psapi.dll')
EnumProcesses = self.Psapi.EnumProcesses
EnumProcesses.restype = ctypes.wintypes.BOOL

count = 50

while True:
    # Build arguments to EnumProcesses
    processIds = (ctypes.wintypes.DWORD * count)()
    size = ctypes.sizeof(processIds)
    bytes_returned = ctypes.wintypes.DWORD()
    # Call EnumProcesses to find all processes
    if self.EnumProcesses(ctypes.byref(processIds), size, ctypes.byref(bytes_returned)):
        if bytes_returned.value < size:
            return processIds
        else:
            # We weren't able to get all the processes so double our size and try again
            count *= 2
    else:
        print "EnumProcesses failed"
        sys.exit()
Windows code is from here
edit: this answer is getting long :), but some of my original answer still applies, so I leave it in :)
Your code is not so different from my original answer. Some of my ideas still apply.
When you are writing Unit Test, you want to only test your logic. When you use code that interacts with the operating system, you usually want to mock that part out. The reason being that you don't have much control over the output of those libraries, as you found out. So it's easier to mock those calls.
In this case, there are two libraries interacting with the system: os.listdir and EnumProcesses. Since you didn't write them, we can easily fake them to return what we need, which in this case is a list.
But wait, in your comment you mentioned:
"The issue I'm having with it however is that it really doesn't test
that my code is seeing new processes on the system but rather that the
code is correctly monitoring new items in a list."
The thing is, we don't need to test the code that actually monitors the processes on the system, because it's third-party code. What we need to test is that your code logic handles the returned processes, because that's the code you wrote. The reason we are testing over a list is that that's what your logic is doing: os.listdir and EnumProcesses return a list of pids (numeric strings and integers, respectively), and your code acts on that list.
I'm assuming your code is inside a Class (you are using self in your code). I'm also assuming that they are isolated inside their own methods (you are using return). So this will be sort of what I suggested originally, except with actual code :) Idk if they are in the same class or different classes, but it doesn't really matter.
Linux method
Now, testing your Linux process function is not that difficult. You can patch os.listdir to return a list of pids.
def getLinuxProcess(self):
    try:
        processDirectories = os.listdir(self.PROCESS_DIRECTORY)
    except IOError:
        return []
    return [pid for pid in processDirectories if pid.isdigit()]
Now for the test.
import unittest
import os

from fudge import patched_context

import LinuxProcessClass  # class that contains getLinuxProcess method


def _raise_ioerror(*args):
    raise IOError


def test_LinuxProcess(self):
    """Test the logic of our getLinuxProcess.

    We patch os.listdir and return our own list, because os.listdir
    returns a list. We do this so that we can control the output
    (we test *our* logic, not a built-in library's functionality).
    """
    # Test we can parse our pids
    fakeProcessIds = ['1', '2', '3']
    with patched_context(os, 'listdir', lambda x: fakeProcessIds):
        myClass = LinuxProcessClass()
        ...
        result = myClass.getLinuxProcess()
        expected = ['1', '2', '3']
        self.assertEqual(result, expected)

    # Test we can handle IOError
    with patched_context(os, 'listdir', _raise_ioerror):
        myClass = LinuxProcessClass()
        ...
        result = myClass.getLinuxProcess()
        expected = []
        self.assertEqual(result, expected)

    # Test we only get pids
    fakeProcessIds = ['1', '2', '3', 'do', 'not', 'parse']
    ...
Windows method
Testing your Window's method is a little trickier. What I would do is the following:
def prepareWindowsObjects(self):
    """Create and set up objects needed to get the Windows processes."""
    ...
    Psapi = ctypes.WinDLL('Psapi.dll')
    EnumProcesses = Psapi.EnumProcesses
    EnumProcesses.restype = ctypes.wintypes.BOOL
    self.EnumProcesses = EnumProcesses
    ...

def getWindowsProcess(self):
    count = 50
    while True:
        ...  # Build arguments to EnumProcesses and call EnumProcesses
        if self.EnumProcesses(ctypes.byref(processIds), ...):
            ...
        else:
            return []
I separated the code into two methods to make it easier to read (I believe you are already doing this). Here is the tricky part: EnumProcesses is using pointers, and they are not easy to play with. Another thing is that I don't know how to work with pointers in Python, so I couldn't tell you of an easy way to mock that out =P
What I can tell you is to simply not test it. Your logic there is very minimal. Besides increasing the size of count, everything else in that function is creating the space EnumProcesses pointers will use. Maybe you can add a limit to the count size but other than that, this method is short and sweet. It returns the windows processes and nothing more. Just what I was asking for in my original comment :)
So leave that method alone. Don't test it. Make sure, though, that anything that uses getWindowsProcess and getLinuxProcess gets mocked out as per my original suggestion.
Hopefully this makes more sense :) If it doesn't let me know and maybe we can have a chat session or do a video call or something.
original answer
I'm not exactly sure how to do what you are asking, but whenever I need to test code that depends on some outside force (external libraries, popen or in this case processes) I mock out those parts.
Now, I don't know how your code is structured, but maybe you can do something like this:
def getWindowsProcesses(self, ...):
    '''Call Windows API function EnumProcesses and
    return the list of processes
    '''
    # ... call EnumProcesses ...
    return listOfProcesses

def getLinuxProcesses(self, ...):
    '''Look in /proc dir and return list of processes'''
    # ... look in /proc ...
    return listOfProcesses
These two methods only do one thing, get the list of processes. For Windows, it might just be a call to that API and for Linux just reading the /proc dir. That's all, nothing more. The logic for handling the processes will go somewhere else. This makes these methods extremely easy to mock out since their implementations are just API calls that return a list.
Your code can then easy call them:
def getProcesses(...):
    '''Get the processes running.'''
    isLinux = ...  # logic for determining OS
    if isLinux:
        processes = getLinuxProcesses(...)
    else:
        processes = getWindowsProcesses(...)
    # ... do something with processes, write to log file, etc ...
In your test, you can then use a mocking library such as Fudge. You mock out these two methods to return what you expect them to return.
This way you'll be testing your logic since you can control what the result will be.
from fudge import patched_context
...

def test_getProcesses(self, ...):
    monitor = MonitorTool(..)

    # Patch the method that gets the processes. Whenever it gets called, return
    # our predetermined list.
    originalProcesses = [....pids...]
    with patched_context(monitor, "getLinuxProcesses", lambda x: originalProcesses):
        monitor.getProcesses()
        # ... assert logic is right ...

    # Let's "add" some new processes and test that our logic realizes new
    # processes were added.
    newProcesses = [...]
    updatedProcesses = originalProcesses + newProcesses
    with patched_context(monitor, "getLinuxProcesses", lambda x: updatedProcesses):
        monitor.getProcesses()
        # ... assert logic caught new processes ...

    # Let's "kill" our new processes and test that our logic can handle it
    with patched_context(monitor, "getLinuxProcesses", lambda x: originalProcesses):
        monitor.getProcesses()
        # ... assert logic caught processes were 'killed' ...
Keep in mind that if you test your code this way, you won't get 100% code coverage (since your mocked methods won't be run), but this is fine. You're testing your code and not third party's, which is what matters.
Hopefully this might be able to help you. I know it doesn't answer your question, but maybe you can use this to figure out the best way to test your code.
Your original idea of using subprocess is a good one. Just create your own executable and name it something that identifies it as a testing thing. Maybe make it do something like sleep for a while.
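Here is a sketch of that idea (the sleep duration is arbitrary): starting a child Python interpreter that only sleeps does not tie up the terminal and gives you a pid to look for:

import subprocess
import sys

# Child process that just sleeps; identifiable by its pid (and its command line).
proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(30)'])
try:
    pass  # ... assert the monitor's log contains proc.pid ...
finally:
    proc.terminate()
    proc.wait()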
Alternately, you could actually use the multiprocessing module. I've not used Python on Windows much, but you should be able to get process-identifying data out of the Process object you create:
p = multiprocessing.Process(target=time.sleep, args=(30,))
p.start()
pid = p.pid  # Process objects expose the child's pid as an attribute
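A sketch of how that might look in a test, where read_monitor_log() is a made-up stand-in for however the monitor's log is read:

import time
import multiprocessing

def test_monitor_sees_new_process():
    p = multiprocessing.Process(target=time.sleep, args=(30,), name='monitor-test-child')
    p.start()
    try:
        time.sleep(1)  # give the monitor a moment to notice the new process
        assert str(p.pid) in read_monitor_log()  # hypothetical helper
    finally:
        p.terminate()
        p.join()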