I am looking for a way to call Fabric from a script inside one of my packages, essentially turning it into an alias for fab -f /path/to/my/installed/package/scripts/fabfile.py.
Is there a standard way to do that, or should I just call it from subprocess?
I don't have a complete solution for your problem, but you would need to start off by using the pkg_resources package to reliably locate your fabfile inside the other project.
In the following example I've created a small test project called hellofabric containing a file called testfab.py (please ignore the fabfile.py in the root folder of the project; it comes from my Python bootstrap script). Here is the file structure.
.
├── fabfile.py
├── hellofabric
│   ├── __init__.py
│   ├── testfab.py
│   └── version.txt
├── hellofabric.egg-info
│   ├── dependency_links.txt
│   ├── entry_points.txt
│   ├── not-zip-safe
│   ├── PKG-INFO
│   ├── SOURCES.txt
│   └── top_level.txt
├── MANIFEST.in
├── README.rst
└── setup.py
testfab.py contains the following code.
import fabric.api as fab

@fab.task
def hellofabric():
    fab.local("echo Hello from fabric")
The next step is to create a distribution of this project (python setup.py sdist) and install that distribution file into your destination project. Once I did that, I was able to run the following, which executed the Fabric task.
>>> from hellofabric import testfab
>>> testfab.hellofabric()
[localhost] local: echo Hello from fabric
Hello from fabric
>>>
Hope this is what you are looking for.
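As for turning this into an alias for fab -f: a minimal sketch, assuming the hellofabric/testfab.py layout from the example above, that locates the installed fabfile with pkg_resources and hands everything off to the real fab executable via subprocess (the function names here are illustrative, not part of either project):

```python
import pkg_resources
import subprocess


def build_fab_command(fabfile, extra_args=()):
    # Assemble the command `fab -f <fabfile> <args...>`.
    return ["fab", "-f", fabfile] + list(extra_args)


def run_package_fabfile(argv):
    # Locate testfab.py inside the installed hellofabric package and
    # forward any command-line arguments straight to fab.
    fabfile = pkg_resources.resource_filename("hellofabric", "testfab.py")
    return subprocess.call(build_fab_command(fabfile, argv))
```

Exposed as a console_scripts entry point in setup.py, this gives you the alias behaviour you describe.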
I'm experimenting with packaging some Python projects and have followed this guide. The anonymized file tree can be seen below. The toml file is a barebones one from the tutorial, modified appropriately. Building and uploading work well. So far so good.
.
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── mymodule
│       ├── __init__.py
│       └── main.py
└── tests
My next intended step is to package an older, smaller, well-behaved project which includes a test suite written with unittest. Simplified structure below.
.
├── mymodule
│   ├── submoduleA
│   │   ├── __init__.py
│   │   └── foo.py
│   ├── submoduleB
│   │   ├── __init__.py
│   │   └── bar.py
│   ├── baz.py
│   └── __init__.py
└── tests
    ├── test_submoduleA.py
    └── test_submoduleB.py
This is where my progress grinds to a halt.
There are many different ways to skin a cat, but as far as I can tell none of them directly involves unittest. I have opted to go ahead by using tox to call unittest.
Similarly, when I look at different Python project repos, the structure under tests seems to differ a bit.
End intent/wish: convert said older project to a packageable one, editing the tests as little as possible, and use the tests both for testing while developing and for basic tests on the target device later.
Questions:
What is the purpose of the tests folder? E.g., is it to run tests while developing files in src, to test the built package, and/or to verify that a package works once installed?
Is it possible to use the pyproject.toml file with unittest?
I have a project
testci/
├── __init__.py
├── README.md
├── requirements.txt
├── src
│   ├── __init__.py
│   └── mylib.py
└── test
    ├── __init__.py
    └── pow_test.py
When I run python3.6 test/pow_test.py I see an error:
File "test/pow_test.py", line 3, in <module>
    import testci.src.mylib as mylib
ModuleNotFoundError: No module named 'testci'
pow_test.py
from testci.src.mylib import get_abs
def test_abs():
    assert get_abs(-10) == 10
How can I fix this error?
System details: Ubuntu 16.04 LTS, Python 3.6.10
Try this:
from .src import mylib
from mylib import get_abs
If that doesn't work, import the modules one by one. But don't import the root folder: since the file you are importing into is in the same folder you are trying to import from, it will always raise an error.
Run Python with the -m argument from the directory containing the base testci package, so the test module executes as a submodule.
I made a similar mock folder structure:
├───abc_blah
│       abc_blah.py
│       __init__.py
│
└───def
        def.py
        __init__.py
abc_blah.py
print('abc')
def.py
import abc_blah.abc_blah
Execute like such:
python -m def.def
Correctly prints out 'abc' as expected here.
Simply add __package__ = "testci". It is also good practice to add a try/except block.
Your final code should look something like this:
try:
    from testci.src.mylib import get_abs
except ModuleNotFoundError:
    from ..testci.src.mylib import get_abs
To run it, type python -m test.pow_test.
I think your issue is how the package is installed. The import looks fine to me. Since it says CI, I'm guessing you have the package installed remotely with only the test folder somehow.
Try adding a setup.py file in which you declare that both the test and the src packages are part of your testci package.
There are many ways to organize a project. Keep a few things in mind: the structure should be simple and scalable, and it should be easy to tell the parts of the codebase apart.
One good possible way to structure a project is below:
project/
├── app.py
├── dockerfile
├── pipfile
├── Readme.md
├── requirements.txt
├── src_code
│   ├── code
│   │   ├── __init__.py
│   │   └── mylib.py
│   └── test
│       ├── __init__.py
│       └── test_func.py
└── travisfile
Here app.py is the main file, responsible for running the entire project.
I'm trying to build a package that uses both python and cython modules. The problem I'm having deals with imports after building and installing where I'm not sure how to make files import from the .so file generated by the build process.
Before building my folder structure looks like this
root/
├── c_integrate.c
├── c_integrate.pyx
├── cython_builder.py
├── __init__.py
├── integrator_class.py
├── integrator_modules
│   ├── cython_integrator.py
│   ├── __init__.py
│   ├── integrator.py
│   ├── numba_integrator.py
│   ├── numpy_integrator.py
│   ├── quadratic_error.png
│   ├── report3.txt
│   ├── report4.txt
│   └── report5.txt
├── report6.txt
├── setup.py
└── test
    ├── __init__.py
    └── test_integrator.py
Building with python3.5 setup.py build gives this new folder in root
root/build/
├── lib.linux-x86_64-3.5
│   ├── c_integrate.cpython-35m-x86_64-linux-gnu.so
│   ├── integrator_modules
│   │   ├── cython_integrator.py
│   │   ├── __init__.py
│   │   ├── integrator.py
│   │   ├── numba_integrator.py
│   │   └── numpy_integrator.py
│   └── test
│       ├── __init__.py
│       └── test_integrator.py
The setup.py file looks like this
from setuptools import setup, Extension, find_packages
import numpy

setup(
    name="integrator_package",
    author="foo",
    packages=find_packages(),
    ext_modules=[Extension("c_integrate", ["c_integrate.c"])],
    include_dirs=[numpy.get_include()],
)
My question is then: how do I write import statements for the functions from the .so file in integrator_class.py in root, and in cython_integrator and test_integrator located in the build directory? Appending to sys.path seems like a quick and dirty solution that I don't much like.
EDIT:
As pointed out in the comments I haven't installed the package. This is because I don't know what to write to import from the .so file
In no specific order:
The file setup.py is typically located below the root of a project. Example:
library_name/
    __init__.py
    file1.py
setup.py
README
Then, the build directory appears alongside the project's source and not in the project source.
To import the file c_integrate.cpython-35m-x86_64-linux-gnu.so in Python, just import "c_integrate". The rest of the naming is taken care of automatically as it is just the platform information. See PEP 3149
A valid module is one of
a directory with a modulename/__init__.py file
a file named modulename.py
a file named modulename.PLATFORMINFO.so
of course located in the Python path. So there is no need for a __init__.py file for a compiled Cython module.
For your situation, move the Cython code into the project directory and either do a relative import (from . import c_integrate) or a full from integrator_modules import c_integrate, where the latter only works when your package is installed.
Some of this information can be found in my blog post on Cython modules: http://pdebuyl.be/blog/2017/cython-module.html
I believe that this should let you build a proper package, comment below if not.
EDIT: to complete the configuration (see comments below), the poster also
Fixed the module path in the setup.py file so that it is the full module name starting from the PYTHONPATH: Extension("integrator_package.integrator_modules.c_integrator", ["integrator_package/integrator_modules/c_integrator.c"]) instead of Extension("c_integrate", ["c_integrate.c"])
Cythonized the module, built it, and used it with the same Python interpreter.
Further comment: the setup.py file can cythonize the file as well. Include the .pyx file instead of the .c file as the source.
cythonize(Extension('integrator_package.integrator_modules.c_integrator',
                    ["integrator_package/integrator_modules/c_integrator.pyx"],
                    include_dirs=[numpy.get_include()]))
This is my project structure:
└── myfolder
    └── myproject
        ├── __init__.py
        ├── tester.py
        ├── learners
        │   ├── __init__.py
        │   ├── bag_learner.py
        │   ├── dqn_learner.py
        │   ├── q_learner.py
        │   ├── q_learner.pyc
        │   ├── stock_dqn_learner.py
        │   ├── stock_q_base_learner.py
        │   └── stock_q_learner.py
        └── utility
            ├── __init__.py
            ├── analysis.py
            └── util.py
I usually run the program with python tester.py from the myproject directory.
Now I'm trying to run this program via a GCP command. I moved to the myfolder directory and ran gcloud ml-engine local train --module-name=myproject.tester --package-path=myproject, but it raised an error:
File "myproject/learners/q_learner.py", line 6, in <module>
    from utility import *
ImportError: No module named utility
I thought the program couldn't recognize the myproject directory as part of the PYTHONPATH, so I changed directory to myproject and ran gcloud ml-engine local train --module-name=tester --package-path=./, but it also raised an error:
/Users/Chois/.pyenv/versions/2.7.13/bin/python2: No module named tester
How can I deal with it?
Is it possible for your import to be something like:
import myproject.utility as utility
Then proceed along the path you were on, executing the gcloud commands with myfolder as the working directory.
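To see why the absolute-import style works, here is a runnable sketch; the package names mirror the question, but the files are created in a throwaway temporary directory purely for illustration. Once the directory containing myproject is on sys.path (which running from myfolder, or gcloud's --package-path, effectively arranges), myproject.utility resolves from any module:

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway myfolder/myproject/utility layout.
myfolder = tempfile.mkdtemp()
utility_dir = os.path.join(myfolder, "myproject", "utility")
os.makedirs(utility_dir)
for d in (os.path.dirname(utility_dir), utility_dir):
    open(os.path.join(d, "__init__.py"), "w").close()
with open(os.path.join(utility_dir, "util.py"), "w") as f:
    f.write("ANSWER = 42\n")  # stand-in for the real util.py

# With myfolder on sys.path, the absolute import resolves.
sys.path.insert(0, myfolder)
util = importlib.import_module("myproject.utility.util")
```

The same resolution happens for import myproject.utility as utility written inside learners/q_learner.py.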
This is my first time trying to set up a Vagrant environment or a Python virtualenv, so forgive me if I am missing something basic.
Right now, I ssh into my vagrant box and in the home directory I have placed my venv folder. I have run
source venv/bin/activate
From my home directory I move to /vagrant, and within here I have my project files laid out something like this:
project
├── LICENSE
├── project
│   ├── exceptions.py
│   ├── __init__.py
│   ├── resources
│   │   ├── base.py
│   │   └── __init__.py
│   └── target
│       ├── __init__.py
│       └── test.py
└── README.md
My problem is I am unable to import my modules in different directories. For example, if I am in /vagrant/project/project/target/test.py and I attempt:
import project.exceptions
I will get the error
ImportError: No module named project.exceptions
If I am in the /vagrant/project/project directory and I run
import exceptions
that works fine.
I have read up on similar problems people have experienced on StackOverflow.
Based on this question: Can't import package from virtualenv, I have checked that my sys.executable path is the same both in my Python interpreter and when I run a script (/home/vagrant/venv/bin/python).
Based on this question: Import error with virtualenv. I have run ~/venv/bin/python directly and attempted to import, but the import still fails.
Let me know if there is more information I can provide. Thank you.
You have two options:
You can install your project into the virtual environment by writing a setup.py file and calling python setup.py install. See the Python Packaging User Guide.
You can set the PYTHONPATH environment variable to point to your project, like this:
$ export PYTHONPATH=$PYTHONPATH:/vagrant/project
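The same effect can also be had from inside a running interpreter, which is handy for quick experiments (a sketch; the path matches the layout above):

```python
import sys


def add_project_to_path(root="/vagrant/project"):
    # PYTHONPATH entries end up on sys.path at interpreter start-up;
    # appending here mimics the export for the current process only,
    # after which `import project.exceptions` works from any directory.
    if root not in sys.path:
        sys.path.append(root)
    return sys.path
```

For anything permanent, the setup.py route in option 1 is the cleaner choice.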