I'm experimenting with Azure Functions in Python and running into issues getting a proper project directory structure. My goal is to have a library directory that holds all the business logic, reference it from the function entry points, and also have a test directory that can test both the functions and the library code directly. I currently have the following structure:
__app__
| - MyFirstFunction
| | - __init__.py
| | - function.json
| - library
| | - __init__.py
| | - services
| | | - __init__.py
| | | - sample_service.py
| - host.json
| - requirements.txt
| - __init__.py
| - tests
| | - __init__.py
| | - test_first_function.py
I ended up with this structure and am able to reference sample_service from the Azure Function using from __app__.library.services import sample_service, and things seem to work, but I am unable to get the unit test to properly reference the Azure Function. I have tried from ..HttpTrigger import main and from __app__.HttpTrigger import main in the test, but Visual Studio Code is unable to discover the test when either of those import statements is in place. I get the following errors, respectively, when I execute the test_first_function.py file directly: ImportError: attempted relative import with no known parent package and ModuleNotFoundError: No module named '__app__'
I'm far from an expert when it comes to Python modules and packages, so any insight into how to fix these issues would be helpful. It would also be good to hear how others structure their Python Azure Functions projects.
If you move your tests directory one level up, outside the __app__ directory, you should then be able to run the tests and import the function using from __app__.MyFirstFunction import myfunction
There is a discussion going on for improving the testing experience and the proper guidance (discussion) (work-item).
Meanwhile, the above suggestion should work fine. You could use this project (azure-functions branch) as a reference (linked by Brett Cannon in the discussion mentioned above).
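For example, with the tests directory moved next to (not inside) __app__, a unit test might look like the sketch below. It assumes the entry point in MyFirstFunction/__init__.py is called main and that it returns a 200 response; adjust the names to match your function:

# tests/test_first_function.py  (tests/ sits one level above __app__)
import unittest

import azure.functions as func

from __app__.MyFirstFunction import main  # the function's entry point


class TestMyFirstFunction(unittest.TestCase):
    def test_http_trigger(self):
        # Build a fake HTTP request; the URL and params are illustrative only.
        req = func.HttpRequest(
            method='GET',
            url='/api/MyFirstFunction',
            params={'name': 'Test'},
            body=None,
        )
        resp = main(req)
        # Assumes the function returns a successful response for this input.
        self.assertEqual(resp.status_code, 200)


if __name__ == '__main__':
    unittest.main()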
Related
I have several DAGs of similar structure and I wanted to follow the advice described in the Airflow docs under Modules Management:
This is an example structure that you might have in your dags folder:
<DIRECTORY ON PYTHONPATH>
| .airflowignore -- only needed in ``dags`` folder, see below
| -- my_company
| __init__.py
| common_package
| | __init__.py
| | common_module.py
| | subpackage
| | __init__.py
| | subpackaged_util_module.py
|
| my_custom_dags
| __init__.py
| my_dag1.py
| my_dag2.py
| base_dag.py
In the case above, these are the ways you could import the Python files:
from my_company.common_package.common_module import SomeClass
from my_company.common_package.subpackage.subpackaged_util_module import AnotherClass
from my_company.my_custom_dags.base_dag import BaseDag
That works fine in Airflow.
However, I used to validate my DAGs locally by running (as also advised by the documentation, in the DAG Loader Test section):
python my_company/my_custom_dags/my_dag1.py
When using the imports, it complains:
Traceback (most recent call last):
File "/[...]/my_company/my_custom_dags/my_dag1.py", line 1, in <module>
from my_company.common_package.common_module import SomeClass
ModuleNotFoundError: No module named 'my_company'
How should I run it so that it understands the context and recognizes the package?
It works when run this way:
PYTHONPATH=. python my_company/my_custom_dags/my_dag1.py
It seems that when the entry point is my_dag1.py, which sits inside my_company/my_custom_dags, Python treats that directory as its "working directory" and only looks for modules within that path. When I add . to PYTHONPATH, it can also see the entire directory structure and recognize the module my_company and everything below it.
(I'm not an expert in Python, so the above explanation might be somewhat inaccurate, but this is my understanding. I'm also not sure whether that's indeed the cleanest way to make it work.)
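As an alternative to prefixing the command with PYTHONPATH=., a small sys.path tweak at the top of the DAG file achieves the same effect when the file is executed directly. This is only a sketch; the three dirname calls assume the layout shown above (my_company/my_custom_dags/my_dag1.py under the repository root):

# my_company/my_custom_dags/my_dag1.py
import os
import sys

# Walk up from this file to the repository root so that "my_company"
# is importable as a package when the file is run directly.
_REPO_ROOT = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
if _REPO_ROOT not in sys.path:
    sys.path.insert(0, _REPO_ROOT)

from my_company.common_package.common_module import SomeClass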
I've got a PyDev project (I'm using Eclipse as an IDE) and my project has a number of packages; the structure is as follows:
+top_level_app
|
-+app_package
 |
 |-----+database_pkg
 |     |
 |     - __init__.py
 |     - insert_db.py
 |     - read_db.py
 |
 |-----+parser_pkg
 |     |
 |     - __init__.py
 |     - my_parser_1.py
 |     - my_parser_2.py
 |
 |-----+resources_pkg
 |     |
 |     - my_database.sqlite
 |     - my_logtile.test
 |     - my_icon.ico
 |
 |-main.py
 |-setup.py
Now it runs fine from the IDE, but I need to create a Windows executable (I am using py2exe); the setup.py is meant to be used to create a fully self-contained executable, Python interpreter and all bundled in.
My application makes use of a number of libraries such as jinja2, matplotlib, and bokeh.
I also have an SQLite database and some icons as resources, i.e. when the application runs it connects to the SQLite database to do its work.
The path to all these resources is quite important as they must all exist in the release folder created by py2exe.
So far when I try to import app_package in my setup.py I get
"Module Import error app_package not found"
This is weird, as it runs fine from Eclipse. Also, how do I make sure the resources end up in the right location when the release folder is created?
Essentially, with this "nice" project layout, how do I keep all the pieces together when I create an executable that will run on someone else's machine without any external dependencies?
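For reference, a py2exe setup.py typically needs two things: the package listed in the py2exe packages option (with setup.py run from a directory where app_package is importable), and the resources declared via data_files so they are copied into the release folder. The sketch below only shows the general shape; the script name, paths, and option values are assumptions, not taken from your project:

# setup.py -- minimal py2exe sketch, not a drop-in solution
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(
    console=['main.py'],  # use windows=['main.py'] instead for a GUI app
    options={
        'py2exe': {
            # Make sure the whole package (and heavy third-party libs) get bundled.
            'packages': ['app_package', 'jinja2'],
        }
    },
    # Copy non-Python resources next to the executable in the dist/ folder;
    # the source paths here are illustrative and must match your layout.
    data_files=[
        ('resources', ['app_package/resources_pkg/my_database.sqlite',
                       'app_package/resources_pkg/my_icon.ico']),
    ],
)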
My Ansible directory structure looks something like this.
Ansible-Repo
|
+-- playbooks
| |
| +-- run_custom_module1
|
+-- library
| |
| +-- custom_module1
| +-- custom_module2
|
+-- bin
| |
| +-- usefulpythonfunctions.py
I want to be able to import usefulpythonfunctions.py from bin inside my Ansible module. I have import usefulpythonfunctions at the top of my module, but I receive the following error when I run the playbook.
\r\nImportError: No module named usefulpythonfunctions\r\n", "msg": "MODULE FAILURE", "parsed": false}
Luckily, the 2.3 release has introduced this feature.
Now you can place your shared code in a module_utils folder where your playbook lives, and it will be automatically exported by Ansible. You might also be interested in checking the directory structure documentation.
You can access your shared code in module_utils as you would access a standard util:
from ansible.module_utils.my_shared_code import MySharedCodeClient
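For illustration, a custom module that uses shared code from module_utils might look like the sketch below; the file name my_shared_code.py and the class MySharedCodeClient are placeholders, not part of your repository:

# library/custom_module1.py -- hypothetical module using shared code from module_utils/
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.my_shared_code import MySharedCodeClient  # lives in module_utils/my_shared_code.py


def main():
    module = AnsibleModule(argument_spec=dict(
        name=dict(type='str', required=True),
    ))
    client = MySharedCodeClient()          # placeholder helper class
    result = client.describe(module.params['name'])  # placeholder method
    module.exit_json(changed=False, result=result)


if __name__ == '__main__':
    main()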
UPDATE
Now you can configure the path where your module utils live using the ANSIBLE_MODULE_UTILS environment variable. For more details, please check the documentation.
I've been looking into ways to do this and it doesn't appear this is possible in the current version of Ansible.
Honestly, the pull request mentioned by #slackhacker seems to be the cleanest option, but it looks like that may not be available until Ansible 2.1...
Until then...
Deploy your dependencies using Ansible
The library you want has to be copied to each system you run on, so let's just have Ansible take care of that for us:
---
tasks:
  - name: Copy mylibrary
    # Only requirement is that the destination path exists in the default system Python path
    copy: src=path-to-library dest=/usr/local/lib/python2.7/dist-packages

  - name: Run the module that needs the shared library
    mymodule:
      arg1: foo
      arg2: bar
Download dependencies in your module code
If your module code is accessible to the server you are running on (via HTTP, SSH, etc.), you could code the module to go out and grab the dependencies directly in Python instead of doing it as a separate task.
This has the added value of not requiring extra steps from the end user, but will probably run slower due to the extra copies having to be made.
Roll your own package
You could also roll your own package using pip or any other packaging toolchain. This is probably not for the faint of heart.
According to my understanding, this is not possible in Ansible, but there is a pending pull request that offers some hope.
Try creating a folder called usefulpythonfunctions, add an __init__.py, and place the functions you want in there.
Additionally, you should be able to create bin/foo/bar.py and use from foo import bar to call functions or classes out of bar, as sketched below.
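A minimal sketch of the first suggestion (the helper function below is just an example, not part of your code):

# bin/usefulpythonfunctions/__init__.py -- the package suggested above

def useful_helper(value):
    """Example shared helper; replace with your own functions."""
    return str(value).upper()

# Usage elsewhere, with bin/ on the import path:
#   import usefulpythonfunctions
#   usefulpythonfunctions.useful_helper("hello")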
I'm looking for the best way to access the root directory of a sub-package in order to retrieve common configuration files (which in my case are JSON files).
As an example, the directory structure may look as follows:
| README
| setup.py
| proj_name
| __init__.py
| pack_01
| __init__.py
| file_01.py
| file_02.py
| pack_02
| __init__.py
| file_03.py
| file_04.py
| conf
| conf_01.json
| conf_02.json
| conf_03.json
In this example, the package config files are in the proj_name/conf directory.
When importing, say, file_01 using a statement such as import proj_name.pack_01.file_01, the attribute __file__ points to proj_name/pack_01/file_01.py, so a line such as root_dir = dirname(dirname(__file__)) is required; this, however, implies knowledge of the directory structure when writing the sub-package.
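As a concrete sketch of the approach just described (assuming the layout shown above, with the JSON files under proj_name/conf):

# proj_name/pack_01/file_01.py
import json
import os

# Two dirname() calls walk up from this file (pack_01/file_01.py) to proj_name/.
_ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
_CONF_DIR = os.path.join(_ROOT_DIR, 'conf')


def load_conf(name):
    """Load one of the shared JSON config files, e.g. load_conf('conf_01.json')."""
    with open(os.path.join(_CONF_DIR, name)) as fh:
        return json.load(fh)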
Is there a way to access the root package (proj_name in this case) and its directory through a variable such as __root_package__ or something similar? What is the tidiest way to achieve this in Python? And would the method still work when building an egg using a setup.py utility?
Thank you in advance!
I have the following setup of a library in python:
library
| - __init__.py
| - lib1.py
| - ...
| - tools
| - __init__.py
| - testlib1.py
So, in other words, the directory library contains Python modules, and the directory tools, which is a subdirectory of library, contains e.g. one file testlib1.py to test the library module lib1.py.
testlib1.py therefore needs to import lib1.py from the directory above in order to run its tests, simply by calling python testlib1.py from somewhere on the computer, assuming the file is in the search path.
In addition, I only want ONE PYTHONPATH to be specified.
But we all know the following idea for testlib1.py does not work, because the relative import fails when the file is run directly:
from .. import lib1
...
do something with lib1
I accept two kinds of answers:
An answer that describes how it is still possible to call testlib1.py directly as the executing Python script.
An answer that explains a better conceptual setup of the modules etc., with the premise that everything has to be in the project directory and the tools have to be in a different directory than the actual libraries.
If something is not clear, just ask and I will update the question.
Try adding a __init__.py to the tools directory. The relative import should work.
You can't. If you plan to use relative imports then you can't run the module by itself.
Either drop the relative imports or drop the idea of running testlib1.py directly.
Also, I believe a test file should never use relative imports. Tests should check whether the library works, so the test code should be as similar as possible to the code that users would actually write. And users usually do not add files to the library to use relative imports; they use absolute imports.
By the way, I think your file organization is too "Java-like", mixing source code and tests. If you want to do tests, then why not have something like:
project/
|
+-- src/
| |
| +--library/
| | |
| | +- lib1.py
| | |
| | #...
| +--library2/ #etc.
|
+-- tests/
|
+--testlibrary/
| |
| +- testlib1.py
#etc
To run the tests just use a tool like nosetests which automatically looks for this kind of folder structure and provide tons of options to change the search/test settings.
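For example, with that layout a test file uses absolute imports, just like a user of the library would (a sketch; the assertion is a placeholder, and it assumes src/ is on the import path, e.g. via the test runner's configuration):

# tests/testlibrary/testlib1.py
import unittest

from library import lib1  # absolute import, the same way a user would import it


class TestLib1(unittest.TestCase):
    def test_import(self):
        # Placeholder check; replace with real assertions against lib1's API.
        self.assertTrue(hasattr(lib1, '__name__'))


if __name__ == '__main__':
    unittest.main()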
Actually, I found a solution which:
works with the current implementation
does not require any changes to PYTHONPATH
does not need the top-level directory to be hardcoded
In testlib1.py, code along the following lines will do the trick:
import os
import sys

# Add the directory one level above this file to the module search path,
# so that lib1 (which lives next to the tools/ directory) can be imported
# when testlib1.py is executed directly.
dirparts = os.path.dirname(os.path.abspath(__file__)).split(os.sep)
sys.path.append(os.sep.join(dirparts[:-1]))

import lib1
I'm not exactly sure this is a very clean or straightforward solution, but it allows me to import any module from the directory one level up, which is what I need (as the test code or extra code or whatever is always located one level below the actual module files).