When I run pytest on my system, it gives the error "no tests ran in 0.01 seconds". Can someone tell me whether this is because of an error in my code or for some other reason?
The output is:
============================= test session starts ==============================
platform darwin -- Python 2.7.11, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Users/Desktop, inifile:
collected 0 items
========================= no tests ran in 0.01 seconds =========================
Process finished with exit code 0
I also had the same issue; the fix is to change the rootdir.
Go to Settings → Tools → External Tools and, for your pytest entry, set the parameters to "$FileName$" and the working directory to "$FileDir$".
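For completeness: "collected 0 items" usually means pytest's default discovery rules found nothing to run. By default, pytest only collects files named test_*.py or *_test.py that contain test_-prefixed functions. A minimal sketch of a file it will pick up (the file and function names here are illustrative):

```python
# File: test_example.py -- the test_ prefix is what pytest's
# default discovery rules look for in both file and function names.

def test_addition():
    assert 1 + 1 == 2
```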
I can't use allure's 'open' command to auto-open the report
The output shows that everything is OK
F:\python_venv\py310_selenium\Scripts\python.exe F:\PyCharm_project\temp_test_allure\main.py
============================= test session starts =============================
platform win32 -- Python 3.10.4, pytest-7.2.1, pluggy-1.0.0
rootdir: F:\PyCharm_project\temp_test_allure, configfile: pytest.ini
plugins: allure-pytest-2.12.0, rerunfailures-11.0
collected 1 item
test_aa.py . [100%]
============================== 1 passed in 0.07s ==============================
Report successfully generated to report_dir
Starting web server...
2023-02-02 21:16:04.127:INFO::main: Logging initialized #549ms to org.eclipse.jetty.util.log.StdErrLog
Server started at <http://198.18.0.1:26299/>. Press <Ctrl+C> to exit
After this output, the browser opens but returns code 502:
Here is my code
# File: main.py
import pytest
import os

if __name__ == '__main__':
    pytest.main()  # start the pytest framework
    os.system('allure generate temps_dir -o report_dir --clean')  # generate the report
    os.system('allure open report_dir -p 0')  # auto-open the report
Here is the pytest.ini
[pytest]
addopts =
    --alluredir=./temps_dir
    --clean-alluredir
I have configured the environment and installed the latest packages. When I open index.html manually in PyCharm, the report displays normally. I don't know whether it's a problem with allure or with another plugin.
How can I get allure to open the report normally?
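One thing worth trying (an assumption on my part, not something the thread confirms): pin the report server to localhost with allure open's host/port options, so it doesn't bind to the 198.18.0.1 interface shown in the log. The port number here is arbitrary:

```shell
# Serve the generated report on localhost with a fixed port
allure open report_dir -h 127.0.0.1 -p 8883
```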
I've been working on a git hook program in which I use subprocess to run pytest, and it produces some strange indentation in stdout. This is running under Git Bash on Windows, so it could well be a newline problem, but note that adding the universal_newlines=True kwarg didn't help either.
My calling code:
import subprocess

process_result = subprocess.run(['pytest', './test'], text=True)
My output:
$ git commit -m "t"
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /mnt/c/Users/[usr]/Documents/barb
collected 1 item
test/test_cli.py . [100%]
============================== 1 passed in 0.56s ===============================
Any ideas on how to fix this issue?
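One workaround sketch (my assumption, not a confirmed fix for this thread): capture the child's output and re-emit it through Python's own stdout, so that text mode normalizes line endings before they reach the Git Bash console. The run_and_reprint helper and the stand-in command are mine:

```python
import subprocess
import sys

def run_and_reprint(cmd):
    """Run cmd, capture its output as text (normalizing \\r\\n to \\n),
    and re-emit it on this process's stdout."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout, end="")
    return result.returncode

# In the hook this would be: run_and_reprint(["pytest", "./test"])
# Demonstrated here with a stand-in child process:
run_and_reprint([sys.executable, "-c", "print('child output')"])
```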
Two different Terminal windows are open, both set to the same directory. The second one was created by doing "New Tab" from the first one.
In the first one:
me $ pytest test_MakeInfo.py
================================================================================ test session starts =================================================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.10.0, pluggy-0.13.0
rootdir: /Users/me/Documents/workspace-vsc/Pipeline/src/python
plugins: arraydiff-0.3, remotedata-0.3.2, doctestplus-0.4.0, openfiles-0.4.0
collected 12 items
test_MakeInfo.py ............ [100%]
================================================================================= 12 passed in 0.87s =================================================================================
me $ which pytest
/Users/me/opt/anaconda3/bin/pytest
In the second one:
me $ pytest test_MakeInfo.py
================================================================================ test session starts =================================================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.10.0, pluggy-0.13.0
rootdir: /Users/me/Documents/workspace-vsc/Pipeline/src/python
plugins: arraydiff-0.3, remotedata-0.3.2, doctestplus-0.4.0, openfiles-0.4.0
collected 0 items / 1 error
======================================================================================= ERRORS =======================================================================================
_________________________________________________________________________ ERROR collecting test_MakeInfo.py __________________________________________________________________________
ImportError while importing test module '/Users/me/Documents/workspace-vsc/Pipeline/src/python/test_MakeInfo.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/Users/me/opt/anaconda3/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
test_MakeInfo.py:6: in <module>
from MakeInfo import main, makeInfo, makeTumorInfo, _getNormalTumorInfo
E ModuleNotFoundError: No module named 'MakeInfo'
============================================================================== short test summary info ===============================================================================
ERROR test_MakeInfo.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================================================================== 1 error in 0.17s ==================================================================================
me $ which pytest
/Users/me/opt/anaconda3/bin/pytest
What environment variables should I be looking at for differences? As far as I can tell, everything's the same between the two.
You want to check your PYTHONPATH and PATH environment variables.
Depending on the shell you use, they may not be set the same way when you open a new tab.
For example, in bash, you could append the required directory to your path in ~/.bash_profile or ~/.bashrc.
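A quick way to compare the two tabs is to print the relevant values from Python itself and diff the output between terminals; this small diagnostic is my own addition:

```python
import os
import sys

# Print the variables and the resulting import search path so the
# two terminal tabs can be compared line by line.
print("PATH       =", os.environ.get("PATH", "<unset>"))
print("PYTHONPATH =", os.environ.get("PYTHONPATH", "<unset>"))
for entry in sys.path:
    print("sys.path   :", entry)
```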
I'm new to testing with Pytest, and I've run into a minor but annoying hangup.
In the command line test session results, I see my tests passing, but the percentage shown is not 100%, for some tests.
With some aggressive logging, I was able to confirm that my tests are passing as expected.
My question is: what's the meaning of the percentage that is displayed for the collected tests?
Example:
platform win32 -- Python 3.7.0a4, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: C:\api-check, inifile:
collected 5 items
test_Me.py ... [ 60%]
test_env.py .. [100%]
========================== 5 passed in 6.04 seconds ===========================
This is a makeshift progress bar.
It displays the percentage of work done so far: the number of completed tests divided by the total number of tests collected at the start of the run.
If your tests ran for longer, you would see the number on the line change as pytest works through each file.
It's one of the features included in pytest since version 3.3 (2017).
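The numbers in the example above can be reproduced with simple arithmetic; a sketch of the presumed calculation, using the counts from that session:

```python
# 5 tests collected: 3 in test_Me.py, 2 in test_env.py.
# The percentage shown after each file is cumulative tests completed
# divided by the total collected.
collected = 5
completed = 0
for module, num_tests in [("test_Me.py", 3), ("test_env.py", 2)]:
    completed += num_tests
    percent = completed * 100 // collected
    print(f"{module} [{percent:3d}%]")
# test_Me.py [ 60%]
# test_env.py [100%]
```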
As @ivan_pozdeev mentioned, it's indeed a progress indicator.
Here's an example, where you have 4 tests collected:
$ pytest test.py -v
================================ test session starts =============================
platform linux -- Python 3.6.7, pytest-4.4.0, py-1.8.0, pluggy-0.9.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/ivanleoncz/git/pysd
collected 4 items
test.py::test_active_services PASSED [ 25%]
test.py::test_enabled_services PASSED [ 50%]
test.py::test_is_enabled PASSED [ 75%]
test.py::test_is_active PASSED [100%]
============================== 4 passed in 0.55 seconds ==========================
100% divided by the 4 collected tests gives a progression of 25% from one test to the next.
I have a file test_pytest.py
def test_dummy():
    assert 1 + 1 == 2
I can run pytest from within a Python console in an IDE such as Spyder (my use case is Blender, but the behavior is general for an embedded Python console):
pytest.main(['-v','test_pytest.py'])
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /home/elfnor/anaconda3/bin/python
cachedir: Documents/python/.cache
rootdir: /home/elfnor/Documents/python, inifile:
collecting ... collected 1 items
Documents/python/test_pytest.py::test_dummy PASSED
=========================== 1 passed in 0.01 seconds ===========================
I then add another test to test_pytest.py, and save the file.
def test_dummy():
    assert 1 + 1 == 2

def test_another():
    assert 2 + 2 == 4
Then I rerun pytest.main(['-v','test_pytest.py']), but pytest doesn't find the new test.
I've tried reloading the pytest module, but no luck. If I kill the console (in Spyder), re-import pytest, and rerun, the new test is found.
I haven't found a way to kill the console in Blender (without killing Blender). Is there a different way to force pytest to find new tests when it is used from a console?
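One workaround to try (my suggestion, not from the thread): the embedded interpreter keeps the previously imported test module cached in sys.modules, so explicitly dropping it before the next pytest.main() call may force re-collection from the saved file. The forget_module helper name is mine:

```python
import importlib
import sys

def forget_module(module_name):
    """Remove a cached module so the next import (e.g. during pytest's
    collection) re-reads the file as currently saved on disk."""
    sys.modules.pop(module_name, None)
    importlib.invalidate_caches()

# Before re-running in the embedded console:
# forget_module('test_pytest')
# pytest.main(['-v', 'test_pytest.py'])
```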