Pip has a configuration file, typically found at ~/.pip/pip.conf on Linux, %APPDATA%\pip\pip.ini on Windows, and possibly other locations inside virtual environments.
I could write code to locate Pip's config file and then parse it with an ini-file parser (included with Python), but it occurs to me that this code must already exist within Pip: Pip surely has a mechanism to locate and parse its own configuration file.
I'd like to be able to access that configuration via Pip's API. In particular, I'm trying to get hold of the index URL that Pip is using (along with any credentials that may be embedded in it). That will allow my service to guarantee that it hits the same repository that Pip installed from.
Is there an easy way to access this information?
The objective here is to access Pip's configuration information without having to re-implement the code which searches for Pip's config file.
There's no good way to do this, but this gets you somewhat directly to the user's config file. This won't work for site-configs.
import os
import pip.appdirs
import pip.locations

# Build the expected path of the user's pip.conf / pip.ini.
user_config = os.path.join(pip.appdirs.user_config_dir("pip"),
                           pip.locations.config_basename)
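Once you have that path, you could read the index URL from it with the standard library's ini parser (a sketch: it assumes an index-url is set under the [global] section, and on Python 2 the module is named ConfigParser):

import configparser  # on Python 2: import ConfigParser

parser = configparser.ConfigParser()
parser.read(user_config)
index_url = parser.get('global', 'index-url')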
And a way that gets all the config file locations:
>>> from pip.baseparser import ConfigOptionParser
>>> ConfigOptionParser(name="foo").get_config_files()
['C:\\ProgramData\\pip\\pip.ini', 'C:\\Users\\salimfadhley\\pip\\pip.ini', 'C:\\Users\\salimfadhley\\AppData\\Roaming\\pip\\pip.ini']
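Note that both snippets rely on pip's internal modules, which moved under pip._internal in pip 10. A roughly equivalent sketch for newer pip versions (still not a public API, so it may change between releases):

from pip._internal.configuration import Configuration

# Load every configuration file pip itself would consult
# (site-wide, user, and environment overrides).
conf = Configuration(isolated=False)
conf.load()
for key, value in conf.items():
    print(key, value)  # keys look like 'global.index-url'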
Is it possible to use modules installed via python pip in gcloud deployment manager templates (python templates, not jinja)?
I have only been able to find references to importing .py files through a deployment manager schema file, e.g.
app.py.schema
info:
  title: app
  author: me
  description: this is a description

imports:
- path: helper.py
That is, I can only import a single .py file at a time, which is not useful for importing pip modules.
This link explains that to use libraries that are not explicitly supported, we need to import the full library source. It does not mention, however, whether that full library source can be a pip module, or whether it only means single .py files.
The module I'm trying to use inside my Python templates is netaddr, for manipulating IP addresses and subnets.
Any help is appreciated.
What you are looking for is not possible: you cannot install a module with pip while interacting with the API. The workaround is to import the whole netaddr module as source code in your *.yaml config file (by adding a path entry for every file belonging to the module) and then importing the functions you need in your *.py file. As Google mentions in the documentation, some libraries are supported, but even with that, some sys and network calls will be rejected. You may also want to look at template_module.
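For illustration, a hypothetical config importing a library file by file might look like this (every path and name here is a placeholder, not a tested layout):

imports:
- path: netaddr/__init__.py
- path: netaddr/compat.py
# ...one entry for each file in the library

resources:
- name: my-network
  type: my_template.py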
Original Answer:
Yes, you can check the link here for importing multiple Python files and using multiple templates.
My context is appengine_config.py, but this is really a general Python question.
Given that we've cloned a repo of an app that has an empty directory lib in it, and that we populate lib with packages by using the command pip install -r requirements.txt --target lib, then:
import os

dirname = 'lib'
dirpath = os.path.join(os.path.dirname(__file__), dirname)
For importing purposes, we can add such a filesystem path to the beginning of the Python path in the following way (we use index 1 because the first position should remain '.', the current directory):
sys.path.insert(1, dirpath)
However, that won't work if any of the packages in that directory are namespace packages.
To support namespace packages we can instead use:
site.addsitedir(dirpath)
But that appends the new directory to the end of the path, which we don't want in case we need to override a platform-supplied package (such as WebOb) with a newer version.
The solution I have so far is this bit of code which I'd really like to simplify:
# Trim sys.path to its first entry, let addsitedir append the new
# dir, then restore the remainder so the site dir lands at index 1.
sys.path, remainder = sys.path[:1], sys.path[1:]
site.addsitedir(dirpath)
sys.path.extend(remainder)
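For what it's worth, the same trick can be wrapped in a small helper (just a sketch; insert_sitedir is a made-up name, not a standard function):

def insert_sitedir(dirpath, index=1):
    # Snapshot sys.path, let addsitedir append its entries to the
    # end, then splice the new entries in at the requested position.
    original = list(sys.path)
    site.addsitedir(dirpath)
    added = [p for p in sys.path if p not in original]
    sys.path[:] = original[:index] + added + original[index:]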
Is there a cleaner or more Pythonic way of accomplishing this?
For this answer I assume you know how to use setuptools and setup.py.
Assuming you would like to use the standard setuptools workflow for development, I recommend using this code snippet in your appengine_config.py:
import os
import sys

if os.environ.get('CURRENT_VERSION_ID') == 'testbed-version':
    # If we are unittesting, fake the non-existence of appengine_config.
    # The error message of the import error is handled by GAE and must
    # exactly match the proper string.
    raise ImportError('No module named appengine_config')

# Imports are done relative because Google App Engine prohibits
# absolute imports.
lib_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'libs')

# Add every library to sys.path.
if os.path.isdir(lib_dir):
    for lib in os.listdir(lib_dir):
        if lib.endswith('.egg'):
            lib = os.path.join(lib_dir, lib)
            # Insert to override default libraries such as webob 1.1.1.
            sys.path.insert(0, lib)
And this piece of code in setup.cfg:
[develop]
install-dir = libs
always-copy = true
If you type python setup.py develop, the libraries are downloaded as eggs into the libs directory, and appengine_config.py inserts them into your path.
We use this at work to include webob==1.3.1 and internal packages which are all namespaced using our company namespace.
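For completeness, a minimal setup.py to pair with that setup.cfg might look like this (the name, version, and dependency list are placeholders, not our actual project):

from setuptools import setup, find_packages

setup(
    name='myapp',
    version='0.1',
    packages=find_packages(),
    # Pinned so `python setup.py develop` pulls the newer WebOb egg
    # into libs/, overriding the SDK-provided version.
    install_requires=['webob==1.3.1'],
)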
You may want to have a look at the answers in the Stack Overflow thread, "How do I manage third-party Python libraries with Google App Engine? (virtualenv? pip?)," but for your particular predicament with namespace packages, you're running up against a long-standing issue I filed against site.addsitedir's behavior of appending to sys.path instead of inserting after the first element. Please feel free to add to that discussion with a link to this use case.
I do want to address something else that you said that I think is misleading:
My context is appengine_config.py, but this is really a general Python
question.
The question actually arises from the limitations of Google App Engine and the inability to install third-party packages there, and hence from the search for a workaround. In general Python development, if your code manually adjusts sys.path or uses site.addsitedir, you're Doing It Wrong.
The Python Packaging Authority (PyPA) describes the best practices to put third party libraries on your path, which I outline below:
Create a virtualenv
Mark out your dependencies in your setup.py and/or requirements files (see PyPA's "Concepts and Analyses")
Install your dependencies into the virtualenv with pip
Install your project, itself, into the virtualenv with pip and the -e/--editable flag.
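In shell form, that workflow looks roughly like this (the paths are placeholders):

virtualenv venv                    # create the virtualenv
source venv/bin/activate           # activate it
pip install -r requirements.txt    # install the dependencies
pip install -e .                   # install your project itself, editable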
Unfortunately, Google App Engine is incompatible with virtualenv and with pip. GAE chose to block this toolset in an attempt to sandbox the environment. Hence, one must use hacks to work around the limitations of GAE to use additional or newer third-party libraries.
If you dislike this limitation and want to use standard Python tooling for managing third-party package dependencies, other Platform as a Service providers out there eagerly await your business.
I'm writing a simple IronWorker in Python to do some work with the AWS API.
To do so I want to use the boto library which is distributed via PyPi repository. The boto library is not installed by default in the IronWorker runtime environment.
How can I bundle the boto library dependency with my IronWorker code?
Ideally I'm hoping I can use something like the gem dependency bundling available for Ruby IronWorkers, i.e. in myRuby.worker specify:
gemfile '../Gemfile', 'common', 'worker' # merges gems from common and worker groups
In the Python Loggly sample, I see that the hoover library is used:
# Here we have to include the hoover library with the worker.
hoover_dir = os.path.dirname(hoover.__file__)
shutil.copytree(hoover_dir, worker_dir + '/loggly')  # copy it to the worker directory
However, I can't see where/how you specify which hoover library version you want, or where to download it from.
What is the official/correct way to use 3rd party libraries in Python IronWorkers?
Newer iron_worker versions have native support for the pip command.
So, you need:
runtime "python"
exec "something.py"
pip "boto"
pip "someotherpip"
full_remote_build true
[edit]We've worked on our toolset a bit since this answer was written and accepted. The answer from my colleague below is the recommended course moving forward.[/edit]
I wrote the Python client library for IronWorker. I'm also employed by Iron.io.
If you're using the Python client library, the easiest (and recommended) way to do this is to just copy over the library's installed folder, and include it when uploading the package. That's what the Python Loggly sample is doing above. As you said, that doesn't specify a version or where to download the library from, because it doesn't care. It just takes the one installed on your system and uses it. Whatever you get when you enter "import boto" on your local machine is what would be uploaded.
The other option is using our CLI to upload your worker, with a .worker file.
To do this, here's what you'd need to do:
Create a botoworker.worker file:
runtime "binary"
build 'pip install --install-option="--prefix=`pwd`/pips" boto'
file 'botoworker.py'
exec "botoworker.sh"
That second line is the pip command that will be run to install the dependency. You can modify it like you would any pip command run from the command line. It's going to execute that command on the worker during the "build" phase, so it's only executed once instead of every time you run a task.
The third line should be changed to the Python file you want to run--it's your Python worker file. Here's the one we used to test this:
import boto
If you save that as botoworker.py, the above should work without any modification. :)
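For a slightly more realistic worker body, something like this sketch would exercise the library (it assumes AWS credentials are available from the environment or a boto config file, and uses boto 2's S3 API):

import boto

# Connect to S3 and list the buckets visible to these credentials.
conn = boto.connect_s3()
for bucket in conn.get_all_buckets():
    print(bucket.name)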
The fourth line is a shell script that's going to actually run your worker. I've included the one we used below. Just save it as botoworker.sh, and you won't have to worry about modifying the .worker file above.
PYTHONPATH="$HOME/pips/lib/python2.7/site-packages:$PYTHONPATH" python botoworker.py "$@"
You'll notice it refers to your Python file--if you don't name your Python file botoworker.py, remember to change it here, too. All this does is set your PYTHONPATH to include the installed library, and then runs your Python file.
To upload this, just make sure you have the CLI installed (gem install iron_worker_ng, making sure your Ruby version is 1.9.3 or higher) and then run "iron_worker upload botoworker" in your shell, from the same directory your botoworker.worker file is in.
Hope this helps!
I'm using suds (brilliant library, btw), and I'd like to make it portable (so that everyone who uses the code that relies on it, can just checkout the files and run it).
I have tracked down 'suds-0.4-py2.6.egg' (in python/lib/site-packages), and put it in with my files, and I've tried:
import path.to.egg.file.suds
from path.to.egg.file.suds import *
import path.to.egg.file.suds-0.4-py2.6
The first two complain that suds doesn't exist, and the last one has invalid syntax.
In the __init__.py file, I have:
__all__ = ["FileOne",
           "FileTwo",
           "suds-0.4-py2.6"]
and have previously tried
__all__ = ["FileOne",
           "FileTwo",
           "suds"]
but neither work.
Is this the right way of going about it? If so, how can I get my imports to work. If not, how else can I achieve the same result?
Thanks
You must add your egg file to sys.path, like this:
import sys
# insert at 0 instead of appending to end to take precedence
# over system-installed suds (if there is one).
sys.path.insert(0, "suds-0.4-py2.6.egg")
import suds
.egg files are zipped archives, so you cannot import them directly, as you have discovered.
The easy way is to simply unzip the archive and then copy the suds directory into your application's source code directory. Since Python stops at the first matching module it discovers, your local copy of suds will be used even if suds is not installed globally.
One step up from that is to add the egg to your path by appending it to sys.path.
However, the proper way would be to package your application for distribution, or to provide a requirements file that lets other people know what external packages your program depends on.
Usually I distribute my program with a requirements.txt file that contains all dependencies and their versions.
The users can then install these libraries with:
pip install -r requirements.txt
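In this case the file might contain nothing more than a single pinned line:

suds==0.4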
I don't think including eggs with your code is a good idea; what if the user uses Python 2.7 instead of Python 2.6?
More info about requirements files: http://www.pip-installer.org/en/latest/requirements.html
I am writing a utility in Python that needs to check for (and if necessary, install and even upgrade) various other modules within a target project/virtualenv, based on user-supplied flags and/or input. I am currently trying to use pip directly/programmatically (because of its existing support for the various repo types I will need to access), but I am having difficulty finding examples or documentation on using it this way.
This seemed like the direction to go:
import pip
vcs = pip.vcs.VersionControl(url="http://path/to/repo/")
...but it gives no joy.
I need help with some of the basics, apparently, like how I can use pip to pull/export a copy of an svn repo into a given local directory. Ultimately, I will also need to use it for git and mercurial checkouts as well as standard PyPI installs. Any links, docs or pointers would be much appreciated.
Pip uses a particular format for VCS URLs. The format is
vcsname+url@rev
@rev is optional; you can use it to reference a specific commit or tag.
To use pip to retrieve a repository from a generic vcs to a local directory you may do this
from pip.vcs import VcsSupport

req_url = 'git+git://url/repo'
dest_path = '/this/is/the/destination'

vcs = VcsSupport()
vc_type, url = req_url.split('+', 1)
backend = vcs.get_backend(vc_type)
if backend:
    vcs_backend = backend(req_url)
    vcs_backend.obtain(dest_path)
else:
    print('Not a repository')
Check https://pip.pypa.io/en/stable/reference/pip_install/#id8 to see which VCSs are supported.
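Be aware that pip.vcs is an internal module (it moved to pip._internal.vcs in pip 10), so the snippet above can break between pip releases. A more stable sketch is to shell out to pip itself, which accepts the same VCS URL format (the URL and path are the placeholders from above):

import subprocess

# 'pip download' understands the same vcsname+url@rev format;
# --no-deps skips dependency resolution, -d sets the target directory.
subprocess.check_call([
    'pip', 'download', '--no-deps',
    '-d', '/this/is/the/destination',
    'git+git://url/repo',
])

Note that pip download builds a distribution from the checkout rather than leaving a raw working tree, so for a literal checkout the internal API, or the VCS tool itself, may still be needed.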