serverless functions, pytest & Python imports

We have a number of Python components that are deployed independently of each other (a serverless-type environment). I'm struggling to get Python imports working in a way that's compatible with this while also running pytest across all of the components.
With the following file/directory structure:
pipelines
- components
  - componentA
      __init__.py
      main.py
      otherfile.py
      test_main.py
  - componentB
      __init__.py
      main.py
The resulting file structure on the cloud platform for componentA:
main.py
otherfile.py
(__init__.py appears to be removed by the platform)
In order to import code from otherfile.py in componentA, we write
from otherfile import some_func
which works on the cloud platform and when running main.py directly, but then fails with pytest ("ModuleNotFoundError"). And vice-versa, using
from .otherfile import some_func
or
from componentA.otherfile import some_func
works great for pytest but not for production ("ImportError: attempted relative import with no known parent package").
I realise we could try to fudge something on the server so the function is nested in another folder, but that would be very kludgy to achieve.
Is there a way we can make these two play nicely together?
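One way the two can sometimes be made to coexist is a root-level conftest.py that prepends every component directory to sys.path before pytest collects the tests, so each component sees the same flat layout it gets on the platform. A rough sketch, assuming pytest is run from the pipelines directory (the paths are taken from the layout above):
# pipelines/conftest.py - hypothetical sketch, assuming pytest runs from pipelines/
# Put every component directory on sys.path before collection, so that
# "from otherfile import some_func" resolves the same way it does on the platform.
import os
import sys

COMPONENTS_DIR = os.path.join(os.path.dirname(__file__), "components")

for name in sorted(os.listdir(COMPONENTS_DIR)):
    component_dir = os.path.join(COMPONENTS_DIR, name)
    if os.path.isdir(component_dir):
        sys.path.insert(0, component_dir)
The obvious trade-off is that modules with the same name in different components (for example two main.py files) can shadow each other during a single pytest run.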

Related

python import packages that know nothing about my system configuration

I have a project I am working on, let's call it Project, which lives in the directory Project somewhere wholly unknown to me (really it lives both on my local system and on a couple of Docker build systems). In that project, I have some source files, source/module1.py and source/module2.py. I also have some example files, some test files, and an __init__.py. So my directory looks something like this:
Project
  __init__.py
  /source
    module1.py
    module2.py
  /test
    testRunner.py
  /examples
    awesomeExample.py
However, module1 needs some stuff from module2. My naive self thought this could be done by putting an import statement in module1:
import module2
# Do some other interesting stuff
And this works, but only when I am running or importing module1 from the source directory. If I am, for example, running some unit tests in another directory (test/testRunner.py), either from the test directory or from the main Project directory, the import will fail. The same happens when running an example from the examples directory.
So here is my problem: in general, I don't know where the calling script lives. It might be in the examples directory, it might be in the test directory, or it might be in the main Project directory (for example when trying to import things via __init__.py). How do I ensure that module1 can always import module2 in each of these scenarios?
I am not looking for a solution like "add all those directories to your python path". Initially I just added Project to my Python path on my local machine and did all my imports relative to that (import Project.source.module2), but this (predictably) caused my builds to fail on the Docker instances. I don't just want this to work on my local machine, but also on the Docker instances I'm using to build and test this software, and on any user's machine that subsequently installs it (i.e. by doing pip install Project). What is the most robust way to make sure this dependency is satisfied? How can I make sure module1 can import module2 regardless of where module1 itself is imported from? Any Python 3.x solution is welcome.
I figured out a way to do it (credit here) - it's a little inelegant, but extremely robust. It works on my local machine regardless of whether I use an import statement or run a script directly, as well as on my build servers, which use GitHub Actions and Travis CI.
Basically, I added a file called context.py in the source directory, with the following contents:
import os
import sys

# Absolute path of the directory containing this file (the source directory)
fileLocation = os.path.dirname(os.path.abspath(__file__))
# Resolve the source directory relative to this file and put it first on the path
sourceLocation = os.path.abspath(os.path.join(fileLocation, '..', 'source/'))
sys.path.insert(0, sourceLocation)
This determines the location of the file being executed and uses it to add the source directory to the Python path. Then, at the top of my module1.py file, I have:
import context
import module2
Now, whenever module1 is imported, it successfully imports module2. More elegant answers or comments on why this works and in which cases it might fail are appreciated.
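For what it's worth, the test side can use the same trick. A small sketch of what test/testRunner.py might look like under the layout above (the test body itself is just a placeholder):
# test/testRunner.py - illustrative sketch; assumes the layout shown above
import os
import sys
import unittest

# Make the source directory importable from the test directory as well
sys.path.insert(0, os.path.abspath(
    os.path.join(os.path.dirname(__file__), '..', 'source')))

import module1  # module1 imports context, which in turn lets it import module2


class SmokeTest(unittest.TestCase):
    def test_module1_loaded(self):
        self.assertTrue(hasattr(module1, '__file__'))


if __name__ == '__main__':
    unittest.main()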

(Python Unittest) Module being tested cannot import its dependencies: ModuleNotFoundError

I am working on developing unit tests for a project that has already been completed; however, I am having a hard time running my unit tests without modifying the original code. The module I am trying to test has other dependencies in the same folder that will not import when the unit tests are run. Here is what my directory looks like:
root
|--main_folder
|  |--module1.py
|  |--module2.py
|--tests
|  |--test_module1.py
The original code in module1.py successfully imports module2.py on its own like this: from module2 import Practices where Practices is a function from module2.
The issue I am running into is that in order to run test_module1.py (which I am doing by calling python3 -m unittest from the root directory), I have to modify module1.py itself such that it says: from main_folder.module2 import Practices.
If I run the test file without modifying module1.py, I get the error ModuleNotFoundError: No module named 'module2'.
Ideally I would not modify the code in this way; I am trying to find a way to make my tests work without touching the application itself. How should I go about this? module1.py runs normally when I run the application without modifying the file, but modifying it so that the tests work breaks the main application. What can I do to make my tests independent of the code for the main app?
(For some more background, the test_module1.py file works by calling from main_folder.module1 import fun1 where fun1 is the function I am trying to test)
Try running your tests using one of the following commands (replacing the actual paths):
if your tests import the modules "from main_folder import ..."
env PYTHONPATH=/root python3 -m unittest
or if your tests import directly "import module1":
env PYTHONPATH=/root/main_folder python3 -m unittest
As a side note, you might need an existing
main_folder/__init__.py
file to get main_folder recognized as a package, depending on the Python version you're using. If you don't currently have such a file, try creating it (empty, no need to put code inside it) and check whether the issue persists.
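If setting PYTHONPATH on the command line is inconvenient (for example inside an IDE), roughly the same effect can be had from within the test file itself. A sketch of that alternative, assuming the layout above and that fun1 lives in module1:
# tests/test_module1.py - sketch of an alternative to the PYTHONPATH commands above
import os
import sys
import unittest

# Put main_folder itself on sys.path so module1's own
# "from module2 import Practices" resolves without modification
sys.path.insert(0, os.path.abspath(
    os.path.join(os.path.dirname(__file__), '..', 'main_folder')))

from module1 import fun1  # fun1 is the function under test, as in the question


class TestFun1(unittest.TestCase):
    def test_fun1_is_callable(self):
        self.assertTrue(callable(fun1))


if __name__ == '__main__':
    unittest.main()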

pytest cannot locate dynamically created module

I have a directory structure like what follows:
package_root/lib/package_name/foo.py
In foo.py I have a function that creates a file (bar.py) that contains a function (f). foo.py also has a function that then imports bar and runs bar.f().
When I write "import bar" within foo.py, it works: I have access to bar.f and it runs just fine.
However, this is not the case when running pytest. We run pytest from package_root, and it cannot find the module bar when it attempts to import it. This is because (I believe) when running pytest, bar.py is created in /package_root, which contains no __init__.py file. Since our tests run automatically in our CI/CD pipeline, I need the import to work properly when running pytest from package_root. Any suggestions?
As far as I understand from your question, you are facing an import issue in your pipeline (correct me if I'm wrong). With pytest, your framework should normally contain a pytest.ini/tox.ini alongside all the test scripts; please refer to http://doc.pytest.org/en/latest/customize.html.
Create a pytest.ini/tox.ini file in the directory from which you run your code (in your case, the package_root/ directory).
#pytest.ini
[pytest]
python_paths = . lib/<package-name>/
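Note that python_paths is provided by the pytest-pythonpath plugin, so that plugin has to be installed for the option to take effect. If I remember correctly, pytest 7.0 and later ship an equivalent built-in option, so with a recent pytest the same idea looks like this (the package name is assumed from the layout above):
#pytest.ini (pytest >= 7, built-in option, no plugin needed)
[pytest]
pythonpath = . lib/package_name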

Python import system mechanics for split test and app directories

I am struggling to successfully import my project into the test suite, as well as being able to run the program from the command line. I've been able to run my test suite for some time, under the impression that if the tests work, so does the command-line stuff; evidently this isn't the case. I do not yet intend to use my program as a library. api.py acts as the entry point for the program.
I have a project with the following structure (the same directory hierarchy as requests):
myapp/
  myapp/
    __init__.py
    api.py            # depends on commands.py
    commands.py       # depends on utils.py
    utils.py
  tests/
    context.py
    test_api.py       # depends on api.py
    test_commands.py  # depends on commands.py, utils.py
In the file context.py I have a path modification adding myapp to the PYTHONPATH, so I can successfully run the tests on my code. Here are the contents of that file:
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
import myapp
I've tried every import combination I can think of. Far too many to list! I have also perused the Python reference's import system page, and this tutorial.
How should I import my dependencies?
Turns out this was the correct layout; I mistook the error for something else. For future reference, relative imports in Python 3 must be explicit: when inside the myapp package you can't say import commands, you must instead write from . import commands. This was defined in PEP 328; also see this SO post on the topic. Run your package with python -m myapp.api, not python ./myapp/api.py, as the latter won't give the interpreter the package context for the current path.
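To make that concrete, here is a minimal sketch of what the top of api.py ends up looking like with the explicit relative import (commands.run() is just a stand-in for whatever commands.py actually exposes):
# myapp/api.py - minimal sketch
from . import commands  # explicit relative import, per PEP 328


def main():
    commands.run()  # hypothetical; call whatever commands.py really provides


if __name__ == '__main__':
    main()
It is then invoked from the outer myapp/ directory as python -m myapp.api.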

Python - organising code and test suite

I am very new to Python, coming from a PHP background, and can't figure out the best way to organise my code.
Currently I am working through the Project Euler exercises to learn Python. I would like to have a directory for my solution to the problem and a directory that mirrors this for tests.
So ideally:
Problem
  App
    main.py
  Tests
    maintTest.py
Using PHP this is very easy, as I can just require_once the correct file or amend the include_path.
How can this be achieved in Python? Obviously this is a very simplistic example, so some advice on how this is approached on a larger scale would also be greatly appreciated.
This depends on which test runner you want to use.
pytest
I recently learned to like pytest.
It has a section in its documentation about how to organize the code.
If you cannot import your main module in your tests, then you can use the tricks below.
unittest
When I use unittest I do it like this:
with import main
Problem
  App
    main.py
  Tests
    test_main.py
test_main.py
import sys
import os
import unittest

# Add the App directory (a sibling of Tests) to the path so that main can be imported
sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'App'))
import main

# do the tests

if __name__ == '__main__':
    unittest.main()
or with import App.main
Problem
  App
    __init__.py
    main.py
  Tests
    test.py
    test_main.py
test.py
import sys
import os
import unittest

# Add the Problem directory (the parent of Tests) to the path so that App can be imported
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
test_main.py
from test import *  # brings in sys, os and unittest from test.py above
import App.main

# do the tests

if __name__ == '__main__':
    unittest.main()
I have always loved nosetests, so here is my solution:
Problem
  App
    __init__.py
    main.py
  Tests
    __init__.py
    tests.py
Then open the command prompt, cd to /path/to/Problem and type:
nosetests Tests
It will automatically recognize and run the tests. However, read this:
Any python source file, directory or package that matches the testMatch regular expression (by default: (?:^|[\b_\.-])[Tt]est) will be collected as a test (or source for collection of tests).
[...]
Within a test directory or package, any python source file matching testMatch will be examined for test cases. Within a test module, functions and classes whose names match testMatch and TestCase subclasses with any name will be loaded and executed as tests.
This basically means that your tests (both your files and your functions/methods/classes) have to begin with the word "test" or "Test".
More on Nosetests usage here: Basic Usage.
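For completeness, a minimal Tests/tests.py that nose would pick up under the layout above might look like this (the assertion is only a placeholder):
# Tests/tests.py - illustrative sketch; names start with "test" so nose collects them
import os
import sys

# Usually unnecessary when both App and Tests have __init__.py, but harmless:
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

from App import main  # App/__init__.py makes this a package import


def test_main_imports():
    # placeholder check; replace with real assertions about main's behaviour
    assert main is not None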
