I'm writing a lookup plugin for Ansible and would like to publish it to PyPI, so users would be able to install it with pip install and use it in their playbooks.
However, I do not understand how Ansible does its plugin discovery. It apparently checks the ./lookup_plugins path in the playbook's folder, as well as a couple of fixed paths (one in ~/.ansible, and another in /usr/share/). What I want, however, is to install the plugin package into a virtualenv.
Is it even possible? If so, how?
AFAIK this is not possible (at least as of Ansible 2.3).
But I'm not a Python expert; maybe some workarounds are possible.
Ansible searches for lookup plugins in the following locations:
- the lookup_plugins directory next to your playbook file
- the lookup_plugins directory inside any role applied in the playbook
- the configured lookup plugin directories:
  - default location: ~/.ansible/plugins/lookup:/usr/share/ansible/plugins/lookup
  - can be overridden with the lookup_plugins configuration option or
    with the ANSIBLE_LOOKUP_PLUGINS environment variable
- the ansible/plugins/lookup directory inside the installed ansible Python package
So for a plugin mylookup to be found by Ansible, there must be a file mylookup.py in one of the above locations.
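For illustration, a minimal single-file lookup plugin might look like this (a sketch against the Ansible 2.x plugin API; the actual lookup logic is just a placeholder):

# ~/.ansible/plugins/lookup/mylookup.py
from ansible.plugins.lookup import LookupBase

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        # a lookup plugin must return a list of results
        return [str(term).upper() for term in terms]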
If your plugin is too complex to be distributed as a single .py file, you can wrap it in a package plus a separate tiny helper file, so users will have to:
1. pip install my_super_lookup
2. Create ~/.ansible/plugins/lookup/easy_name.py:
   import my_super_lookup
   LookupModule = my_super_lookup.LookupModule
3. Use with_easy_name: ... or lookup('easy_name', ...) in playbooks
Related
I'm developing a router and need a python module snmp_passpersist to be pre-installed.
The original source is written for Python 2, so I modified it to work with Python 3, and I need to pre-install it into the product image.
I know how to install a Python module onto a running live environment with pip and the setup.py that comes with the original source, but now I'm in the buildroot environment of OpenWRT.
I read through OpenWRT's overview of customizing packages, but it covers C code and binary executables.
It looks like a Python module/package needs a few more steps than a plain cp command, e.g. compiling *.py files into *.pyc, building an egg file with package metadata, and so on.
Maybe simply copying the egg file into the target lib folder would work, but I worry that there would then be no version information in the pip environment.
I want to know the correct/formal way.
Thanks!
You should follow an official Python package from OpenWrt as a reference.
Add the include makefiles for Python in your package Makefile:
include ../pypi.mk
include $(INCLUDE_DIR)/package.mk
include ../python3-package.mk
There are built-in helpers for the Makefile, e.g. $(eval $(call Py3Package,python3-curl)).
Pre-build the Python package and you can then include it in a custom image.
Example: https://github.com/openwrt/packages/blob/openwrt-21.02/lang/python/python-curl/Makefile
I am seeking to deploy (on Linux) a Conan package on a system and there is no deploy() method specified in the conanfile.py. So (I believe) this means the package will be installed in the current directory.
Instead I'd like to specify a default directory. I have tried using conan install -if /some/directory but get:
conan install: error: unrecognized arguments: -if /some/directory
(It seems I need to couple -g with -if? But I'm not trying to build the package, just deploy it.)
Is there a way to do this? Have I understood the default behaviour correctly?
Update: writing the question helped clarify my thoughts, so I tried conan install -g deploy -if /some/directory PACKAGE. While I no longer get the -if-related error message, it still doesn't work: it merely leaves a conanbuildinfo.txt file in the specified directory.
The short answer appears to be that I can't.
As per conan documentation: conan install installs the requirements specified in a recipe (conanfile.py or conanfile.txt). It can also be used to install a concrete package specifying a reference.
It normally installs packages into the cache.
Depending on the conanfile.py and the installation documentation provided, if it doesn't implement the deploy() method, this probably means the artifacts are only copied into the package itself. You can figure that out from the package() method in conanfile.py.
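For reference, if the recipe author had implemented deploy(), a minimal sketch (Conan 1.x API; the package name and file patterns are placeholders) would look like this, and conan install would then copy the artifacts into the folder it is run from:

# conanfile.py (recipe side) - hedged sketch of a deploy() method
from conans import ConanFile

class MyPkgConan(ConanFile):
    name = "mypkg"
    version = "1.0.0"

    def deploy(self):
        # copy files from this package into the current (install) folder
        self.copy("*", dst="bin", src="bin")
        # also copy shared libraries coming from dependencies
        self.copy_deps("*.so*", dst="lib", src="lib")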
One workaround to get the artifacts into your custom directory is to copy them manually from the installed package in the cache. You can normally find the package under ~/.conan/data/<Package_Reference>/package/<Binary_Package_ID>, the path shown in the terminal while running conan install.
If the package has source code that can be built to generate the binary artifacts, then you can use the following commands to generate them in a local folder:
$ conan source . --source-folder src
$ conan install . --install-folder build
$ conan build . --build-folder build --source-folder src
You will find the generated artifacts in the build folder.
Searching online turns up answers to the same issue in the project's repository:
https://github.com/conan-io/conan/issues/2250
I see references to the environment variables CONAN_USER_HOME and CONAN_USER_HOME_SHORT, and to the command conan config set storage.path=<your new path here>.
I should not forget that separating installations across multiple Conan sessions on the same machine needs to consider three areas of potential racing:
Filesystem-wide root.
User-specific root.
Session-specific root.
The "short Windows path" workaround to an older OS limit of path lengths (which extends into the new OSes until opting-in and relying on Unicode API) avoids collision between users. Because the workaround appends only the user's hash to the filesystem root, it will be vulnerable to collisions in sessions running by the same user, even with different checkouts of code consuming the libraries (as in the case of a build server).
Conan's own instruction on non-conflicting installs into different CONAN_USER_HOMEs will need CONAN_USER_HOME_SHORT=None and tools using Unicode filesystem API on Windows.
I came across an article on serverlesscode.com about building Python 3 apps for AWS Lambda that recommends using pip (or pip3) to install dependencies in a /vendored subdirectory. I like this idea as it keeps the file structure clean, but I'm having some issues achieving it.
I'm using the Serverless Framework, and my modules are imported in my code in the normal way, e.g. from pynamodb.models import Model
I've used the command pip install -t vendored/ -r requirements.txt to install my various dependencies (per requirements.txt) in the subdirectory, which seems to work as expected - I can see all modules installed in the subdirectory.
When the function is called, however, I get the error Unable to import module 'handler': No module named 'pynamodb' (where pynamodb is one of the installed modules).
I can resolve this error by changing my pip installation to the project root, i.e. not in the /vendored folder (pip install -t ./ -r requirements.txt). This installs exactly the same files.
There must be a configuration that I'm missing that points to the subfolder, but Googling hasn't revealed whether I need to import my modules in a different way, or if there is some other global config I need to change.
To summarise: how can I use Pip to install my dependencies in a subfolder within my project?
Edit: noting tkwargs' good suggestion on the use of the serverless plugin for packaging, it would still be good to understand how this might be done without venv, for example. The primary purpose is not specifically to make packaging easier (it's pretty easy as-is with pip), but to keep my file structure cleaner by avoiding additional folders in the root.
I've seen some people use the sys module in their lambda function's code to add the subdirectory (vendored in this case) to their Python path. I'm not a fan of that as a solution because it would have to be done for every single lambda function and adds extra boilerplate code. The solution I ended up using is to modify the PYTHONPATH runtime environment variable to include my subdirectories. For example, in my serverless.yml I have:
provider:
  environment:
    PYTHONPATH: '/var/task/vendored:/var/runtime'
Setting the environment variable at this level applies it to every lambda function you deploy in your serverless.yml; you could also specify it per function if for some reason you didn't want it applied to all of them.
I wasn't sure how to reference the existing value of PYTHONPATH so as not to overwrite it incorrectly while adding my custom path "/var/task/vendored"; I'd love to know if anyone else has figured that out.
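For comparison, the per-function boilerplate that setting PYTHONPATH avoids would look roughly like this (a hypothetical handler; pynamodb is the module from the question):

# handler.py - sys.path patching repeated in every function, which the
# PYTHONPATH environment variable above makes unnecessary
import os
import sys

# make the vendored/ directory importable before any third-party imports
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendored"))

from pynamodb.models import Model  # now resolves from vendored/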
I have a Python app with a setup.py that installs just fine through setuptools. I then package it up as DEB and PKGNG using the excellent Effing package management (FPM). I've also run some quick tests with setuptools-pkg and that seems to work too.
Now I have a need to distribute the packages including init scripts to start/stop/manage the service. I have my init scripts in the source repo and, according to what seems to be best practice, I'm not doing anything with them in setuptools and I'm handling them in the os-specific packaging: for debian-based systems I use the --deb-init, --deb-upstart and --deb-systemd FPM options as needed.
How can I build a FreeBSD package that includes the correct rc.d script, using FPM or through any other means?
All the examples I've seen add the rc.d script when building a package through the ports collection, but this is an internal app and is not going to be published to the Ports tree or on PyPI. I want to be able to check out the repository on a FreeBSD system, launch a command that gives me a package, distribute it to other FreeBSD systems, install it using pkg, and have my init script correctly deployed to /usr/local/etc/rc.d/<myappname>. There's no need to keep using FPM for that; anything works as long as it gives me a well-formed package.
I would highly suggest creating your package as if it were any other port, whether it is going to be published or not.
One advantage of doing this is that you can also include all your tests and automate the deployment, giving you out of the box the basis for a continuous integration/delivery setup.
Check out poudriere. You could indeed maintain a set of custom ports with your very own settings and distribute them across your environments without any hassle:
pkg install -r your-poudriere yourpkg
If this turns out to be too much, or doesn't adapt well to your use case, you can always fall back to Ansible, where you could ship a custom rc.d script as a template in an Ansible role.
If you just want to build and deploy something, say a microservice, then pkg is probably not the best tool; maybe you just need a supervisor that works on all your platforms (sysutils/immortal), so that you can simply distribute your code and have a single recipe for starting/stopping the service.
nbari's answer is probably the Right Way™ to do this and I'd probably create my own "port" and use that to build the package on a central host.
At the time of my original question I had taken a different approach that I'm reporting here for the sake of completeness.
I am still building the application's package (i.e. myapp-1.0.0.txz) with fpm -s python -t freebsd, which basically uses Python's setuptools infrastructure to get the necessary information, and I don't include any rc.d file in it.
I also build a second package, which I will call myapp-init-1.0.0.txz, with the dir source type (i.e. fpm -s dir -t freebsd), and I include only the init script in that package.
Both packages get distributed to hosts and installed, thus solving my distribution issue.
I have a simple Python command-line script (no GUI) that uses a couple of dependencies (requests and BeautifulSoup4).
I would like to share this simple script across multiple computers. Each computer already has Python installed, and they all run Linux.
At the moment, in my development environment, the application runs inside a virtualenv with all its dependencies.
Is there any way to share this application with all its dependencies without needing to install them with pip?
I would like to just run python myapp.py to run it.
You will need to either create a single-file executable using something like bbfreeze or pyinstaller, or bundle your dependencies (assuming they're pure Python) into a .zip file and put it on your PYTHONPATH (e.g. PYTHONPATH=deps.zip python myapp.py).
The much better solution would be to create a setup.py file and use pip. Your setup.py can declare dependency links to files or repos if you don't want those machines to have access to the outside world. See this related issue.
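A minimal setup.py sketch for that approach (the project name, version and module name are placeholders; the dependencies are the ones from the question):

# setup.py - hedged sketch; adjust names and versions to your project
from setuptools import setup

setup(
    name="myapp",              # placeholder project name
    version="0.1.0",           # placeholder version
    py_modules=["myapp"],      # the single script, myapp.py
    install_requires=[
        "requests",
        "beautifulsoup4",
    ],
    # dependency_links=[...] can point at internal files or repos if the
    # target machines cannot reach PyPI (note: deprecated in newer pip releases)
)

The target machines can then install the built sdist with pip, and pip resolves the dependencies (from PyPI or from the dependency links).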
As long as you make the virtualenv relocatable (use the --relocatable option on it in its original place), you can literally just copy the whole virtualenv over. If you create it with --copy-only (you'll need to patch the bug in virtualenv), then you shouldn't even need to have python installed elsewhere on the target machines.
Alternatively, look at http://guide.python-distribute.org/ and learn how to create an egg or wheel. An egg can then be run directly by python.
I haven't tested your particular case, but you can find the source code (either mirrored or original) on a site like GitHub.
For example, for BeautifulSoup, you can find the code here.
You can put the code in the same folder (a rename is probably a good idea, so it doesn't clash with an already-installed package of the same name). Just note that you won't get any updates.
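A sketch of what that layout might look like (a hypothetical project; the vendored trees keep their usual import names here, but you can rename them as suggested above to avoid clashing with system-wide copies):

# Project layout (hypothetical):
#   myapp.py
#   bs4/        <- copied BeautifulSoup4 source tree
#   requests/   <- copied requests source tree (it has dependencies of its
#                  own, e.g. urllib3 and idna, that would need copying too)
#
# myapp.py - the script's directory is first on sys.path, so the vendored
# copies are importable without installing anything with pip
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com").text
print(BeautifulSoup(html, "html.parser").title)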