How to test whether the dependencies are installed? - python

I've been learning PyTorch for deep learning recently.
Using Anaconda, I ran into some problems when running the program.
For example, I encountered the following import error,
"no module named kiwisolver",
when my program imported matplotlib. I fixed it, but errors like this are very frustrating, because the program runs for a long time before hitting them.
Is there any way to check whether all the required dependencies are installed?

Depending on how your program is structured...
Many Python programs use the if __name__ == "__main__": idiom so that code doesn't execute just because the file is imported. This lets you import the code without actually running it.
For example, if you have my_py_torch.py, then if you run python to launch the Python interpreter in interactive mode, you can import your code:
import my_py_torch
Importing your code will process any imports, execute any top-level code, and define any functions and classes, but, as long as you use the if __name__ == "__main__": idiom, it won't actually run the (long-running) code. That's typically enough to let you know if you have major issues like syntax errors, bad imports, or missing dependencies.
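A minimal sketch of that layout (the file name my_py_torch.py is from the question; the train function and the stdlib import standing in for heavy third-party imports are made up for illustration):

```python
# my_py_torch.py (sketch; the real file would import torch, matplotlib, etc.)
import json  # stand-in for heavy third-party imports; fails at import time if missing


def train():
    # The long-running work lives inside a function.
    return "training finished"


if __name__ == "__main__":
    # Runs only when executed directly (python my_py_torch.py),
    # not when loaded with "import my_py_torch".
    print(train())
```

With this layout, running `import my_py_torch` in an interactive interpreter exercises all the top-level imports and definitions without kicking off the long training run.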
Code can still circumvent this: you may have functions or methods that only import modules locally (when they're actually run), or code may wrap imports in try / except blocks to handle missing dependencies and only raise an error later, when the dependency is actually used. So it's not foolproof, but it can be a useful test.
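The try / except pattern mentioned above might look like this (some_optional_backend is a hypothetical dependency, used here so the ImportError is guaranteed):

```python
# Optional-dependency pattern: a missing import is swallowed at import time,
# so importing this module succeeds even without the dependency installed...
try:
    import some_optional_backend  # hypothetical optional dependency
except ImportError:
    some_optional_backend = None


def render(data):
    # ...and the error only surfaces when the feature is actually used.
    if some_optional_backend is None:
        raise RuntimeError("some_optional_backend is not installed")
    return some_optional_backend.draw(data)
```

A quick `import` check won't catch a missing dependency hidden behind this pattern; only calling `render` would.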

Related

VS code not recognise a python module

I am testing VS Code and I like it very much, but I have an issue with a Python module.
The module is xspec (https://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/python/html/index.html). The issue is that VS Code does not recognize the module, underlining all of its functions in red.
This is probably because xspec needs to be initialized before use. To initialize it, an entire software suite has to be initialized first (https://heasarc.gsfc.nasa.gov/docs/software/heasoft/). Basically, every time I need the xspec module I have to initialize the HEASoft software before running Python; otherwise the module is not recognized.
Is there a way to solve this? Or is there a way to add exceptions to VS Code's error highlighting?

Importing Python module prints docstring of different, unrelated script?

I've encountered this issue with two separate modules now, one that I attempted to download myself (Quartz; could probably be the way I installed it, but let's ignore this scenario for now) and another that I installed using pip install (Pandas; let's focus on this one).
I wrote a two-line script that includes just import pandas and print('test'), for testing purposes. When I execute this in the terminal, instead of printing test to confirm the script runs correctly, it prints the docstring for another completely unrelated script:
[hidden]~/Python/$ python3 test.py
Usage: python emailResponse.py [situation] - copy situation response
The second line is a docstring I wrote for a simple fetch script for responding to emails, which is unrelated. What's worse, if I just invoke Python 3 in the terminal and try import pandas, it prints that same docstring and drops me out of Python 3 and back into the terminal shell / bash (sorry if this is not the right verbiage; still learning). The same thing happens with import Quartz, but no other modules are affected (at least, that I'm aware of).
I'm at a complete loss as to why this might happen. It was easy enough to avoid using Quartz, but I need Pandas for work purposes and this issue is starting to directly affect my work.
Any idea why this might be the case?

Python spyder debug freezes with circular importing

I have a problem with the debugger when some modules in my code import each other.
Practical example:
A file dog.py contains the following code:
import cat
print("Dog")
The file cat.py is the following:
import dog
print("Cat")
When I run dog.py (or cat.py) I don't have any problem and the program runs smoothly.
However, when I try to debug it, the whole spyder freezes and I have to kill the program.
Do you know how I can fix this? I would like to use circular imports, since the modules use functions that are defined in the other modules.
Thank you!
When I run dog.py (or cat.py) I don't have any problem and the program runs smoothly.
AFAICT that's mostly because a script is imported under the special name "__main__", while a module is imported under its own name (here "dog" or "cat"). NB: the only real difference between a script and a module is how the file is loaded: either passed as an argument to the Python runtime (python dog.py) or imported from a script or another module with an import statement.
(Actually circular imports issues are a bit more complicated than what I describe above, but I'll leave this to someone more knowledgeable.)
To make a long story short: except for this particular use case (which is actually more of a side effect), Python does not support circular imports. If you have functions (classes, whatever) shared by other scripts or modules, put these functions in a separate module. Or, if you find that two modules really depend on each other, you may just want to merge them into a single module (or group the parts that depend on each other in the same module and everything else in one or more other modules).
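The suggested refactoring, applied to the dog/cat example (the module name pet_utils is made up):

```python
# pet_utils.py - shared functions live in their own module, so dog.py
# and cat.py both import pet_utils instead of importing each other.
def describe(name):
    return f"{name} says hello"

# dog.py would then contain:
#   import pet_utils
#   print(pet_utils.describe("Dog"))
#
# cat.py would contain:
#   import pet_utils
#   print(pet_utils.describe("Cat"))
```

Neither dog.py nor cat.py imports the other anymore, so the import cycle disappears.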
Also: unless it's a trivial one-shot utility or something that only depends on the stdlib, your script's content is often better reduced to a main function that parses command-line arguments (or reads config files, or whatever), imports the required modules, and starts the actual work.
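A sketch of that script layout (the --epochs option and the run function are made up for illustration):

```python
import argparse


def run(epochs):
    # The actual work; imports of heavy modules could also go here.
    return f"ran {epochs} epoch(s)"


def main(argv=None):
    # Parse command-line arguments, then hand off to the real work.
    parser = argparse.ArgumentParser(description="run the training job")
    parser.add_argument("--epochs", type=int, default=1)
    args = parser.parse_args(argv)
    return run(args.epochs)


if __name__ == "__main__":
    print(main())
```

Keeping the top level this thin means importing the script (e.g. from a test) does nothing but define functions.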

Idiom for modules in packages that are not directly executed

Is there an idiom or suggested style for modules that are never useful when directly executed, but simply serve as components of a larger package — e.g. those containing definitions, etc.?
For example, is it customary to omit #!/usr/bin/env python, add a comment, report a message to the user, or execute other code (e.g. using a check of whether __name__ is '__main__'), or simply do nothing special?
Most of the Python code I write is modules that generally don't get called directly as scripts. Sometimes, when I'm working on a small project and don't want to set up a more elaborate testing system, I'll put my tests for the module below if __name__ == '__main__':, so that I can quickly test the module just by running python modulename.py (this sometimes doesn't play nicely with relative imports and such, but it works well enough for small projects). Whether or not I do this, I drop the shebang and don't give the file execute permission, because I don't like making modules executable unless they're meant to be run as scripts.
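That pattern might look like this (modulename.py and the add function are placeholders):

```python
# modulename.py
def add(a, b):
    return a + b


if __name__ == '__main__':
    # Quick ad-hoc tests: "python modulename.py" runs them;
    # importing the module skips them entirely.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    print("all tests passed")
```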

Importing module-under-test per test rather than module level imports?

I've recently come across some unit testing code that imports the module under test inside each test function, rather than at module level.
Then, after Googling it, I found that Pylons/Pyramid best practices reason that "import failures...should never prevent those tests from being run."
Should that be standard practice?
I find it a bit ugly, plus, their class example looks like slight over-engineering.
If you import all the modules under test at the top of the file, instead of inside the unittest functions, then an import error will prevent any of your tests from running. I have two views on this; it depends on how you're running your unit tests.
If you're running tests on the commandline, or from Hudson or Jenkins, then you'll notice the import error and correct it immediately. In that case I don't think it's a problem to import everything at the module level. It's certainly more convenient, and requires less duplication.
If there's some chance that an import error will cause a silent failure (say, if your unittest framework can't even tell you that it failed unless it can import your test module), then it might be safer to import modules within your test functions.
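A minimal sketch of the per-test import style (here a stdlib module stands in for the module under test):

```python
import unittest


class TestMyModule(unittest.TestCase):
    def test_sqrt(self):
        # Import inside the test: if the import fails, only this test
        # errors out and the rest of the suite still runs and reports.
        import math  # stand-in for the module under test
        self.assertEqual(math.sqrt(16), 4.0)
```

The trade-off is exactly the one described above: more duplication per test, in exchange for import failures showing up as individual test errors rather than a suite that never runs.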
