How to distribute / access data files in Python egg? - python

I'm writing a Django application that is using pip & virtualenv to manage its development environment.
One of the dependencies, pkgme, comes with many data files which are its "backends" and are configured in its setup.py with data_files=$FOO (rather than package_data).
When pkgme looks for its backends, it looks in os.path.join(sys.prefix, "share", "pkgme", "backends"). This works great when pkgme has been installed normally, and it seems to match the documentation, but it does not work when pkgme is installed as an egg.
There, the data files are installed under $VIRTUAL_ENV/lib/python2.7/site-packages/pkgme-0.1-py2.7.egg/share rather than the expected $VIRTUAL_ENV/share.
Which leaves me with two questions:
Should I be using something other than the os.path.join above to find the data files regardless of whether we are using an egg installation or a traditional system installation? If so, what?
Should I be distributing my data files differently so as to make them more readily available in an egg?
Note that I know about pkgutil.get_data, but would rather not use it. I'm not interested in the contents of these data files; I want to know their location so I can execute them.
My current plan is to do this:
Use package_data instead of data_files
Change pkgme to look for backends relative to pkgme.__file__ rather than sys.prefix

Your current plan is essentially correct, or is at any rate a workable option.
When setuptools creates an egg, it checks whether code in the egg makes use of __file__, and if so, it marks the egg as not being installable in compressed form. In this way, when the egg is installed by easy_install, it'll get extracted to an .egg/ directory instead of being left in an .egg file.
If you want to support compressed/drop-in installation (i.e., just dumping the egg in a directory without "installing" it), then you should use the pkg_resources.resource_filename() API instead of __file__, but then your package will depend on setuptools or distribute in order to have that API available.
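For example, a minimal sketch of how pkgme might locate its backends this way (the resource path "backends" is an assumption for illustration, not necessarily pkgme's actual layout):

import pkg_resources

# Resolves to a real filesystem path; if the package is inside a zipped
# egg, the resource is extracted to a cache directory first.
backends_dir = pkg_resources.resource_filename('pkgme', 'backends')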

I ended up doing the following:
Changed pkgme to use pkg_resources.resource_filename() to find its own included backends
Added an entry point that any backend written in Python can use to publish the location of its own backend scripts (a sketch follows below)
Kept the sys.prefix-based check for any backends that don't want to use Python
The diff can be found here: http://bazaar.launchpad.net/~pkgme-committers/pkgme/trunk/revision/86
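For illustration, a hedged sketch of the entry point approach (the group name pkgme.backends and the helper names are assumptions, not necessarily what the actual diff uses):

# In a backend package's setup.py:
from setuptools import setup

setup(
    name='my-backend',
    packages=['my_backend'],
    entry_points={
        'pkgme.backends': [
            'my_backend = my_backend:get_backend_dir',
        ],
    },
)

# In pkgme, discovering the published locations at runtime:
import pkg_resources

for entry_point in pkg_resources.iter_entry_points('pkgme.backends'):
    backend_dir = entry_point.load()()  # call the published callable to get a path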

Related

unable to load sublime text 3 package

I'm trying to write a plugin for Sublime Text 3.
I have to use several third party packages in my code. I have managed to get the code working by manually copying the packages into /home/user/.config/sublime-text-3/Packages/User/, then I used relative imports to get to the needed code. How would I distribute the plugin to the end users? Telling them to copy the needed dependencies to the appropriate location is certainly not the way to go. How are 3rd party modules supposed to be used properly with Sublime Text plugins? I can't find any documentation online; all I see is the recommendation to put the modules in the folder.
Sublime uses its own embedded Python interpreter (currently Python 3.3.6, although the next version will also support Python 3.8), and as such it will completely ignore any version of Python that you may or may not have installed on your system, as well as any libraries that are installed for that version.
For that reason, if you want to use external modules (hereafter dependencies) you need to do extra work. There are a variety of ways to accomplish this, each with their own set of pros and cons.
The following lists the various ways that you can achieve this; all of them require a bit of an understanding of how modules work in Python in order to understand what's going on. By and large, except for the paths involved, there's nothing too "Sublime Text" about the mechanisms at play.
NOTE: The below is accurate as of the time of this answer. However, there are forthcoming plans for Package Control to change how it works with dependencies, which may change some aspects of this.
This is related to the upcoming version of Sublime supporting multiple versions of Python (and the manner in which it supports them) which the current Package Control mechanism does not support.
It's unclear at the moment if the change will bring a new way to specify dependencies or if only the inner workings of how the dependencies are installed will change. The existing mechanism may remain in place regardless just for backwards compatibility, however.
All roads to accessing a Python dependency from a Sublime plugin involve putting the code for it in a place where the Python interpreter is going to look for it. This is similar to how standard Python would do things, except that locations that are checked are contained within the area that Sublime uses to store your configuration (referred to as the Data directory) and instead of a standalone Python interpreter, Python is running in the plugin host.
Populate the library into the Lib folder
Since version 3.0 (build 3143), Sublime will create a folder named Lib in the data directory and inside of it a directory based on the name of the Python version. If you use Preferences > Browse Packages and go up one folder level, you'll see Lib, and inside of it a folder named e.g. python3.3 (or if you're using a newer build, python33 and python38).
Those directories are directly on the Python sys.path by default, so anything placed inside of them will be immediately available to any plugin just as a normal Python library (or any of those built in) would be. You could consider these folders to be something akin to the site-packages folder in standard Python.
So, any method by which you could install a standard Python library can be used so long as the result is files ending up in this folder. You could for example install a library via pip and then manually copy the files to that location from site-packages, manually install from sources, etc.
Lib/python3.3/
|-- librarya
|   `-- file1.py
|-- libraryb
|   `-- file2.py
`-- singlefile.py
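Given that layout, a plugin could then import these like any other library (names taken from the example tree above):

import singlefile
from librarya import file1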
Version restrictions apply here; the dependency that you want to use must support the version of Python that Sublime is using, or it won't work. This is particularly important for Python libraries with a native component (e.g. a .dll, .so or .dylib), which may require hand compiling the code.
This method is not automatic; you would need to do it to use your package locally, and anyone who wants to use your package would need to do it as well. Since Sublime is currently using an older version of Python, it can also be problematic to obtain a compatible version of some libraries.
In the future, Package Control will install dependencies in this location (Will added the folder specifically for this purpose during the run up to version 3.0), but as of the time I'm writing this answer that is not currently the case.
Vendor your dependencies directly inside of your own package
The Packages folder is on the sys.path by default as well; this is how Sublime finds and loads packages. This is true of both the physical Packages folder, as well as the "virtual" packages folder that contains the contents of sublime-package files.
For example, one can access the class that provides the exec command via:
from Default.exec import ExecCommand
This will work even though the exec.py file is actually stored in Default.sublime-package in the Sublime Text install folder and not physically present in the Packages folder.
As a result of this, you can vendor any dependencies that you require directly inside of your own package. Here this could be the User package or any other package that you're creating.
It's important to note that Sublime will treat any Python file in the top level of a package as a plugin and try to load it as one. Hence it's important that if you go this route you create a sub-folder in your package and put the library in there.
MyPackage/
|-- alibrary
|   `-- code.py
`-- my_plugin.py
With this structure, you can access the module directly:
import MyPackage.alibrary
from MyPackage.alibrary import someSymbol
Not all Python modules lend themselves to this method directly without modification; some code changes in the dependency may be required to allow different parts of the library to see other parts of itself, for example if it doesn't use relative imports to get at sibling files. License restrictions may also get in the way, depending on the library that you're using.
On the other hand, this directly locks the version of the library that you're using to exactly the version that you tested with, which ensures that you won't be in for any undue surprises further on down the line.
Using this method, anything you do to distribute your package will automatically also distribute the vendored library that's contained inside. So if you're distributing by Package Control, you don't need to do anything special and it will Just Work™.
Modify the sys.path to point to a custom location
The Python that's embedded into Sublime is still standard Python, so if desired you can manually manipulate the sys.path that describes what folders to look for packages in so that it will look in a place of your choosing in addition to the standard locations that Sublime sets up automatically.
This is generally not a good idea since if done incorrectly things can go pear shaped quickly. It also still requires you to manually install libraries somewhere yourself first, and in that case you're better off using the Lib folder as outlined above, which is already on the sys.path.
I would consider this method an advanced solution and one you might use for testing purposes during development but otherwise not something that would be user facing. If you plan to distribute your package via Package Control, the review of your package would likely kick back a manipulation of the sys.path with a request to use another method.
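For completeness, a minimal sketch of what such a manipulation looks like when placed in a plugin file (the extra folder is illustrative):

import os
import sys

# Add a custom folder ahead of the standard locations; this runs inside
# the plugin host when the plugin is loaded.
_lib_path = os.path.expanduser('~/my_st_libs')
if _lib_path not in sys.path:
    sys.path.insert(0, _lib_path)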
Use Package Control's Dependency system (and the dependency exists)
Package Control contains a dependency mechanism that uses a combination of the two prior methods to provide a way to install a dependency automatically. There is a list of available dependencies as well, though the list may not be complete.
If the dependency that you're interested in using is already available, you're good to go. There are two different ways to go about declaring that you need one or more dependencies for your package.
NOTE: Package Control doesn't currently support dependencies of dependencies; if a dependency requires that another library also be installed, you need to explicitly mention them both yourself.
The first involves adding a dependencies key to the entry for your package in the package control channel file. This is a step that you'd take at the point where you're adding your package to Package Control, which is something that's outside the scope of this answer.
While you're developing your package (or if you decide that you don't want to distribute your package via Package Control when you're done), then you can instead add a dependencies.json file into the root of your package (an example dependencies.json file is available to illustrate this).
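For illustration, a minimal dependencies.json might look like the following (the dependency name requests is just an example; the outer keys select the package source and platform, with * matching everything):

{
    "*": {
        "*": [
            "requests"
        ]
    }
}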
Once you do that, you can choose Package Control: Satisfy Dependencies from the Command Palette to have Package Control download and install the dependency for you (if needed).
This step is automatic if your package is being distributed and installed by Package Control; otherwise you need to tell your users to take this step once they install the package.
Use Package Control's Dependency system (but the dependency does not exist)
The method that Package Control uses to install dependencies is, as outlined in the note at the top of this answer, subject to change at some point in the (possibly near) future. This may affect the instructions here. The overall mechanism may remain the same as far as setup is concerned, with only the locations of the installation changing, but that remains to be seen.
Package Control installs dependencies via a special combination of vendoring and also manipulation of the sys.path to allow things to be found. In order to do so, it requires that you lay out your dependency in a particular way and provide some extra metadata as well.
The layout for the package that contains the dependency when you're building it would have a structure similar to the following:
Packages/my_dependency/
├── .sublime-dependency
└── prefix
    └── my_dependency
        └── file.py
Package Control installs a dependency as a Package, and since Sublime treats every Python file in the root of a package as a plugin, the code for the dependency is not kept in the top level of the package. As seen above, the actual content of the dependency is stored inside of the folder labeled as prefix above (more on that in a second).
When the dependency is installed, Package Control adds an entry to its special 0_package_control_loader package that causes the prefix folder to be added to the sys.path, which makes everything inside of it available to import statements as normal. This is why there's an inherent duplication of the name of the library (my_dependency in this example).
Regarding the prefix folder, this is not actually named that and instead has a special name that determines what combination of Sublime Text version, platform and architecture the dependency is available on (important for libraries that contain binaries, for example).
The name of the prefix folder actually follows the form {st_version}_{os}_{arch}, {st_version}_{os}, {st_version} or all. {st_version} can be st2 or st3, {os} can be windows, linux or osx and {arch} can be x32 or x64.
Thus you could say that your dependency supports only st3, st3_linux, st3_windows_x64 or any combination thereof. For something with native code you may specify several different versions by having multiple folders, though commonly all is used when the dependency contains pure Python code that will work regardless of the Sublime version, OS or architecture.
In this example, if we assume that the prefix folder is named all because my_dependency is pure Python, then the result of installing this dependency would be that Packages/my_dependency/all would be added to the sys.path, meaning that if you import my_dependency you're getting the code from inside of that folder.
During development (or if you don't want to distribute your dependency via Package Control), you create a .sublime-dependency file in the root of the package as shown above. This should be a text file with a single line that contains a two-digit number (e.g. 01 or 50). This controls in what order each installed dependency will get added to the sys.path. You'd typically pick a lower number if your dependency has no other dependencies and a higher value if it does (so that it gets injected after those).
Once you have the initial dependency laid out in the correct format in the Packages folder, you would use the command Package Control: Install Local Dependency from the Command Palette, and then select the name of your dependency.
This causes Package Control to "install" the dependency (i.e. update the 0_package_control_loader package) to make the dependency active. This step would normally be taken by Package Control automatically when it installs a dependency for the first time, so if you are also manually distributing your dependency you need to provide instructions to take this step.

how to locate source code path from module

I have a Python package built from source code in the /Document/pythonpackage directory:
/Document/pythonpackage/> python setup.py install
This creates a folder in Python's site-packages directory.
import pythonpackage
print(pythonpackage.__file__)
>/anaconda3/lib/python3.7/site-packages/pythonpackage-x86_64.egg/pythonpackage/__init__.py
I am running a script on multiple environments, so the only path I know I will have is pythonpackage.__file__. However, /Document/pythonpackage has some data that is not in site-packages. Is there a way to automatically find the path to /Document/pythonpackage given that you only have access to the module in Python?
Working like that is discouraged. It's generally assumed that after installing a package the user can remove the installation directory (as most automated package managers would do). Instead, you'd make sure your setup.py copied any data files over into the relevant places, and then your code would pick them up from there.
Assuming you're using standard setuptools, you can see the docs on Including Data Files, which say at the bottom:
In summary, the three options allow you to:
include_package_data
Accept all data files and directories matched by MANIFEST.in.
package_data
Specify additional patterns to match files that may or may not be matched by MANIFEST.in or found in source control.
exclude_package_data
Specify patterns for data files and directories that should not be included when a package is installed, even if they would otherwise have been included due to the use of the preceding options.
and then says:
Typically, existing programs manipulate a package’s __file__ attribute in order to find the location of data files. However, this manipulation isn’t compatible with PEP 302-based import hooks, including importing from zip files and Python Eggs. It is strongly recommended that, if you are using data files, you should use the ResourceManager API of pkg_resources to access them
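Putting those pieces together, a hedged sketch of both halves (the package and file names are illustrative):

# setup.py
from setuptools import setup, find_packages

setup(
    name='pythonpackage',
    packages=find_packages(),
    include_package_data=True,  # pick up files matched by MANIFEST.in
    package_data={'pythonpackage': ['data/*.txt']},  # extra patterns
)

# At runtime, resolve a data file regardless of how the package was installed:
import pkg_resources

data_path = pkg_resources.resource_filename('pythonpackage', 'data/params.txt')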
Not sure, but you could create a repository for your module and use pip to install it. The egg folder would then have a file called PKG-INFO which would contain the URL of the repository you imported your module from.

Load text file in python module after installation using pip/other installer

My goal is to make a program I've written easily accessible to potential employers/etc. in order to... showcase my skills... or whatever. I am not a computer scientist, and I've never written a Python module meant for installation before, so I'm new to this aspect.
I've written a machine learning algorithm, and fit parameters to data that I have locally. I would like to distribute the algorithm with "default" parameters, so that the downloader can use it "out of the box" for classification without having a training set. I've written methods which save the parameters to/load the parameters from text files, which I've confirmed work on my platform. I could simply ask users to download the files I've mentioned separately and use the loadParameters method I've created to manually load the parameters, but I would like to make the installation process as easy as possible for people who may be evaluating me.
What I'm not sure is how to package the text files in such a way that they can automatically be loaded in the __init__ method of the object I have.
I have put the algorithm and files on github here, and written a setup.py script so that it can be downloaded from github using pip like this:
pip install --upgrade https://github.com/NathanWycoff/SySE/tarball/master
However, this doesn't seem to install the text files containing the data I need, only the __init__.py Python file containing my code.
So I guess the question boils down to: How do I force pip to download additional files aside from just the module in __init__.py? Or, is there a better way to load default parameters?
Yes, there is a better way to distribute data files with a Python package.
First of all, read something about proper Python package structure. For instance, it's not recommended to put code into __init__ files. They're just marking that a directory is a Python package, plus you can do some import statements there. So it's better if you put your SySE class in (for instance) a file syse.py in that directory; in __init__.py you can then do from .syse import SySE.
On to the data files. By default, setuptools will distribute only *.py and several other special files (README, LICENCE and so on). However, you can tell setuptools that you want to distribute some other files with the package. Use setup's kwarg package_data; more about that here. Also don't forget to include all your data files in MANIFEST.in; more on that here.
If you do all the above correctly, then you can use the package pkg_resources to discover your data files at runtime. pkg_resources handles all possible situations - your package can be distributed in several ways: installed from a pip server, from a wheel, as an egg, and so on; more on that here.
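For example, a hedged sketch of loading a bundled parameter file at runtime (the file name defaultParams.txt is illustrative):

import pkg_resources

# Works whether the package is installed as a directory, an egg, or a
# wheel; returns the raw bytes of the bundled resource.
raw = pkg_resources.resource_string('syse', 'defaultParams.txt')
params_text = raw.decode('utf-8')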
Lastly, if your package is public, I can only recommend uploading it to PyPI (in case it is not public, you can run your own pip server). Register there and upload your package. You could then do just pip install syse to install it from anywhere. It's quite likely the best way to distribute your package.
It's quite a lot of work and reading, but I'm pretty sure you will benefit from it.
Hope this helps.

What is the pythonic way to share common files in multiple projects?

Let's say I have projects x and y in sibling directories: projects/x and projects/y.
There are some utility funcs common to both projects in myutils.py and some db stuff in mydbstuff.py, etc.
Those are minor common goodies, so I don't want to create a single package for them.
Questions arise about the whereabouts of such files, possible changes to PYTHONPATH, proper way to import, etc.
What is the 'pythonic way' to use such files?
The pythonic way is to create a single extra package for them.
Why don't you want to create a package? You can distribute this package with both projects, and the effect would be the same.
You'll never get it right for all installation scenarios and platforms if you do it by mangling PYTHONPATH and custom imports.
Just create another package and be done in no time.
You can add the path to the shared files to sys.path, either directly via sys.path.append(pathToShared) or by defining .pth files and adding them with site.addsitedir. Path files (.pth) are simple text files with a path on each line.
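A minimal sketch of both variants (the shared path is illustrative):

import site
import sys

shared = '/home/me/projects/shared'
sys.path.append(shared)   # direct manipulation of the module search path
site.addsitedir(shared)   # also processes any .pth files found there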
You can also create a .pth file, which will store the directory(ies) that you want added to your PYTHONPATH. .pth files are copied to the Python/lib/site-packages directory, and any directory in that file will be added to your PYTHONPATH at runtime.
http://docs.python.org/library/site.html
StackOverflow question (see accepted solution)
I agree with 'create a package'.
If you cannot do that, how about using symbolic links/junctions (ln -s on Linux, linkd on Windows)?
I'd advise using setuptools for this. It allows you to set dependencies so you can make sure all of these packages/individual modules are on the sys.path before installing a package. If you want to install something that's just a single source file, it has support for automagically generating a simple setup.py for it. This may be useful if you decide not to go the package route.
If you plan on deploying this on multiple computers, I will usually set up a webserver with all the dependencies I plan on using so it can install them for you automatically.
I've also heard good things about paver, but haven't used it myself.

How can I make a Python extension module packaged as an egg loadable without installing it?

I'm in the middle of reworking our build scripts to be based upon the wonderful Waf tool (I did use SCons for ages, but it's just way too slow).
Anyway, I've hit the following situation and I cannot find a resolution to it:
I have a product that depends on a number of previously built egg files.
I'm trying to package the product using PyInstaller as part of the build process.
I build the dependencies first.
Next I want to run PyInstaller to package the product that depends on the eggs I built. I need PyInstaller to be able to load those egg files as part of its packaging process.
This sounds easy: you work out what PYTHONPATH should be, construct a copy of os.environ with the variable set up correctly, and then invoke the PyInstaller script using subprocess.Popen, passing the previously configured environment as the env argument.
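A hedged sketch of that approach (the egg paths and spec file name are illustrative):

import os
import subprocess

env = os.environ.copy()
env['PYTHONPATH'] = os.pathsep.join([
    '/path/to/built/first.egg',
    '/path/to/built/second.egg',
])
# Run PyInstaller with the configured environment and wait for it.
subprocess.Popen(['pyinstaller', 'product.spec'], env=env).wait()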
The problem is that setting PYTHONPATH alone does not seem to be enough if the eggs you are adding are extension modules that are packaged as zipsafe. In this case, it turns out that the embedded libraries are not able to be imported.
If I unzip the eggs (renaming the directories to .egg), I can import them with no further settings but this is not what I want in this case.
I can also get the eggs to import from a subshell by doing the following:
Setting PYTHONPATH to the directory that contains the egg you want to import (not the path of the egg itself)
Loading a Python shell and using pkg_resources.require to locate the egg.
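That is, roughly (the project name is illustrative):

import pkg_resources

# PYTHONPATH already points at the directory containing the egg;
# require() locates the egg and activates it on sys.path.
pkg_resources.require('my_extension')
import my_extension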
Once this has been done, the egg loads as normal. Again, this is not practical because I need to be able to run my python shell in a manner where it is ready to import these eggs from the off.
The dirty option would be to output a wrapper script that took the above actions before calling the real target script but this seems like the wrong thing to do: there must be a better way to do this.
Heh, I think this was my bad. The issue appears to have been that the zip_safe flag in setup.py for the extension package was set to False, which appears to prevent treating the egg as zip-safe at all.
Now that I've set it to True, I can import the egg files simply by adding each one to the PYTHONPATH.
I hope someone else finds this answer useful one day!
Although you have a solution, you could always try "virtualenv", which creates a virtual Python environment where you can install and test Python packages without messing with the core system Python:
http://pypi.python.org/pypi/virtualenv
