ModuleNotFoundError when running unittest with subprocess - python

I'm writing tests with Python's unittest and decided to use the parameterized module to handle test parameterization. Now all's fine and dandy when I run the tests directly through unittest's CLI: simply running python -m unittest in the root directory launches all the tests as expected. However, I decided to give my script its own command flag for running the tests, so that running, say, python ./main.py -t [additional arguments for unittest] makes the script itself run python -m unittest [additional arguments for unittest]. For that I'm using subprocess.run. And this also works... well, to some extent. The problem is the following: when I use python -m unittest, no errors (except for the ones being tested) are raised, but running the tests through my script raises ModuleNotFoundError: No module named 'parameterized', along with similar errors for a few other dependencies my code uses. I have no idea why that happens.
To be honest I'm not that familiar with unittest, so maybe my approach is the problem and I should handle this in a completely different way. Any feedback would be much appreciated.
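For reference, a minimal sketch of the setup described above (main.py and the -t flag come from the question; using sys.executable instead of a bare "python" is my assumption, not the original code, but it guarantees the child process runs under the same interpreter and environment, and a mismatch there is a common cause of exactly this ModuleNotFoundError):

import subprocess
import sys

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "-t":
        # Forward everything after -t to unittest in a child process.
        # sys.executable is the interpreter running this script, so the child
        # sees the same virtualenv and its installed packages (parameterized
        # among them); a bare "python" may resolve to a different interpreter.
        subprocess.run([sys.executable, "-m", "unittest"] + sys.argv[2:])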

Related

PyCharm pytest GUI runs tests, terminal doesn't

I'm trying to run pytest from within a script, so I want it to run all test files by simply invoking pytest with no arguments. However, pytest doesn't collect the test files.
The hierarchy is as follows:
project
-somedirectory
--somecode.py
-tests
--test.py
test.py looks as follows:
def test_one():
    assert True
Running pytest via the PyCharm GUI works as expected, test.py is collected, run and passed.
Running pytest tests/test.py works as expected, test.py is collected, run and passed.
Running pytest without arguments does not work as expected, it doesn't collect test.py.
Running pwd returns /path/to/project.
The behaviour is the same for Windows command prompt and the PyCharm terminal.
By default pytest looks for files matching the pattern "test_*.py"; your file "test.py" doesn't match it, so it is likely being ignored. Try renaming it, "test_foo.py" for example.
This works. I did not realise the naming convention applied to files as well as functions.
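As an aside, if renaming weren't an option, the collection pattern itself can be widened instead; a sketch assuming a pytest.ini at the project root (python_files is a documented pytest option; the extra pattern here is just for illustration):

[pytest]
python_files = test_*.py test.py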

Is it possible to run pdb and unittest python modules at the same time?

For example:
python3 -m unittest -m pdb test/test_string.py
or
python3 -m unittest pdb test/test_string.py
So ideally I'd like Python to start in debug mode on exceptions, but without adding any extra code to the files.
If you install Twisted and use Twisted Trial as your runner, you can do it like this:
trial --debug test/test_string.py
Tangentially, it's better if you put your tests in an importable package and name them by module name rather than filename (e.g. test.test_string instead of test/test_string.py).
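Note, as an addition to the original answer: since Python 3.7, pdb itself accepts an -m option, so the two modules can be chained on the command line:

python3 -m pdb -m unittest test.test_string

Keep in mind that unittest catches exceptions raised inside test methods, so failing tests still report normally; pdb's post-mortem only kicks in for exceptions that escape the framework.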

Run pytest in PDB with pipenv

I saw this question: "Can I debug with python debugger when using py.test somehow?", but it doesn't really help, because I need to debug hooks, some of them not written by me, where modifying the code of the hook is really cumbersome.
Also, pytest runs through pipenv run, and it's already difficult to make those two work together. So far I haven't found a combination of pdb, pipenv, and pytest that would launch one another.
Another way I could do it is by calling pytest.main() from my code; however, this means that other people who want to run my tests will have to use this "trampoline" to run them. I can live with that, but it still feels like it shouldn't be necessary.
I guess this is what you need: invoke pdb as early as possible:
`pipenv --py` -c 'import pdb, pytest; pdb.set_trace(); pytest.main()'
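The backticks here are shell command substitution: pipenv --py prints the path of the project's virtualenv interpreter, so the line expands to something like /path/to/virtualenv/bin/python -c '...'. A roughly equivalent form that avoids the substitution (pipenv run is a documented pipenv subcommand):

pipenv run python -c 'import pdb, pytest; pdb.set_trace(); pytest.main()'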

(i)PDB post mortem when using 'unittest discover'

Most of the time when I run my tests, I do it via python -m unittest discover (I'm lazy).
Let's say that one of the tests raises an exception. Is there a way to make the unittest framework run a post-mortem on it (preferably ipdb.pm()), without modifying the code of the tests?
I know I could add it directly to the code, but since I also use automatic runners / GitlabCI, I don't want those to hang on the pdb shell.
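One way to get this without touching the tests themselves is a small standalone runner script; a sketch under assumptions (the name run_tests.py and the PM environment-variable guard are illustrative, not an established convention):

# run_tests.py - discover and run tests; drop into (i)pdb post-mortem on
# errors/failures only when PM=1, so CI runners never hang on a debugger shell.
import os
import sys
import unittest

class PostMortemResult(unittest.TextTestResult):
    def addError(self, test, err):
        super().addError(test, err)
        self._post_mortem(err)

    def addFailure(self, test, err):
        super().addFailure(test, err)
        self._post_mortem(err)

    def _post_mortem(self, err):
        if os.environ.get("PM") != "1":
            return
        try:
            import ipdb as debugger  # prefer ipdb when installed
        except ImportError:
            import pdb as debugger
        debugger.post_mortem(err[2])  # err is a (type, value, traceback) tuple

if __name__ == "__main__":
    suite = unittest.TestLoader().discover(".")
    result = unittest.TextTestRunner(resultclass=PostMortemResult).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)

Run it as PM=1 python run_tests.py locally; in CI, leave PM unset and the run stays non-interactive.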

Python pdb on python script run as package

I have a python program that I usually run as a part of a package:
python -m mymod.client
in order to deal with relative imports inside "mymod/client.py". How do I run this with pdb, the Python debugger? The following does not work:
python -m pdb mymod.client
It yields the error:
Error: mymod.client does not exist
EDIT #1 (to address the suggested duplicate)
My question isn't really about running two modules simultaneously in Python; rather, it is about how to use pdb on a Python script that has relative imports inside it and that one usually runs with "python -m."
Restated, my question is: how do I use pdb on such a script without having to change the script itself just to run it under pdb (i.e. preserving the relative imports inside the script as much as possible)? Shouldn't this be possible, or am I forced to refactor in some way if I want to use pdb? If so, what would be the minimal changes to the structure of the script that would let me leverage pdb?
In summary, I don't care how I run the script, as long as I can get it working with pdb without changing its internal structure (relative imports, etc.) too much.
I think I have a solution.
Run it like this:
python -m pdb path/mymod/client.py arg1 arg2
That will run it as a script, but will not treat it as a package.
At the top of client.py, the first line should be:
import mymod
That will get the package itself loaded.
I am still playing with this, but it seems to work so far.
This is not possible with that exact invocation. Though unstated in the documentation, Python will not chain two modules through a single -m command line option this way.
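Note, though, that since Python 3.7 pdb accepts its own -m option, so the module can be debugged directly as a package, keeping its relative imports intact:

python -m pdb -m mymod.client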
