I have been developing a private Python package (my first) and want to change its name while retaining all my git commits.
I formatted it in a similar way to Cookiecutter Data Science, where all the code lives in the src directory. This has been fine while building, but when I upload it to a server or another computer I don't want to have to call it like...
from src.data import *
I have tried just renaming it using git mv src/ newname/, but when I push this change to GitHub the file history appears lost (I know it is still there, but I would prefer to easily see all my past changes). I shared the package in the form of a .whl file.
So do I just have to rename it and deal with losing the history? Or is there a different git command to use? Or is there some configuration I can do in the setup.py file?
Here is my setup.py for reference.
from setuptools import find_packages, setup

setup(
    name='newname',
    packages=find_packages(),
    version='0.1.0',
    description='...',
    author='...',
    license='MIT',
)
Thank you!
setup(
    …
    package_dir={'': 'src'},
    packages=find_packages("src"),
    …
)
See https://setuptools.readthedocs.io/en/latest/setuptools.html#using-find-packages
Then change your import to
from data import *
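For reference, a minimal sketch of how the whole setup.py might look with that src layout (metadata values copied from the question; untested):

from setuptools import find_packages, setup

setup(
    name='newname',
    version='0.1.0',
    description='...',
    author='...',
    license='MIT',
    package_dir={'': 'src'},        # tell setuptools the code lives under src/
    packages=find_packages('src'),  # and look for packages there
)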
I have created a custom python package following this guide, so I have the following structure:
mypackage/  <-- VCS root
    mypackage/
        submodule1/
        submodule2/
    setup.py
And setup.py contains exactly the same information as in the guide:
from setuptools import setup, find_packages

setup(name='mypackage',
      version='0.1',
      description='desc',
      url='vcs_url',
      author='Hodossy, Szabolcs',
      author_email='myemail@example.com',
      license='MIT',
      packages=find_packages(),
      install_requires=[
          # deps
      ],
      zip_safe=False)
I have noticed that if I go into the folder where setup.py is and call python setup.py install in a virtual environment, the following structure is installed in site-packages:
.../site-packages/mypackage-0.1-py3.6.egg/mypackage/
    submodule1/
    submodule2/
but if I call it from one folder up like python mypackage/setup.py install, then the structure is the following:
.../site-packages/mypackage-0.1-py3.6.egg/mypackage/
    mypackage/
        submodule1/
        submodule2/
This latter one ruins all imports from my module, as the path to the submodules is different.
Could you explain what is happening here and how to prevent that kind of behaviour?
This is experienced with Python 3.6 on both Windows and Linux.
Your setup.py does not contain any paths, but seems to find the files only via find_packages. So of course it depends on where you run it. The setup.py isn't strictly tied to its location. Of course you could do things like chdir to the directory containing the setup file (taken from sys.argv[0]), but that's rather ugly.
The question is, WHY do you want to build it that way? It looks more like you would want a structure like
mypackage-source
    mypackage
        submodule1
        submodule2
    setup.py
And then execute setup.py from the working directory. If you want to be able to run it from anywhere, the better workaround would be to put a shell script next to it, like

#!/bin/sh
cd "$(dirname "$0")"
python setup.py "$@"

which separates the task of changing to the right directory (here I assume the script sits next to setup.py) from running setup.py.
I have a file structure like this:
project_folder/
    notebooks/
        notebook01.ipynb
        notebook02.ipynb
        ...
        notebookXY.ipynb
    module01.py
    module02.py
    module03.py
In the .ipynb files inside the notebooks/ folder I want to import classes and functions from module01.py, module02.py and module03.py.
I have found an answer in this question saying it is possible using the following lines of code inside every notebook, run as the first cell every time:
import os
import sys

module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)
Is there a better way to do this? What if I have A LOT of .ipynb files inside the notebooks/ folder? Do I have to paste those lines of code at the beginning of every single one? Is there a better, more minimalist or cleaner way?
Try adding the project_folder to your PYTHONPATH environment variable. This will allow you to tell python to search that directory for imports.
You would do this in your user profile settings, or in your startup script - not in python. It's something that has to be set before python ever gets run.
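As a quick illustration (a sketch using the module names from the question), once PYTHONPATH includes project_folder and Jupyter has been restarted, the notebooks need no boilerplate at all:

# No sys.path manipulation needed: because project_folder is on PYTHONPATH,
# its top-level modules are importable from any notebook.
import module01
import module02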
Another solution is to move all your Python modules (.py files) into a folder and make them an installable package. If you pip install it into your current environment, you can then import the package into any notebook in that environment, regardless of folder structure.
So in your situation you could have:
project_folder/
    notebooks/
        notebook01.ipynb
        notebook02.ipynb
        ...
        notebookXY.ipynb
    my_package/
        __init__.py
        module01.py
        module02.py
        module03.py
    setup.py
__init__.py can just be an empty file, and tells Python "everything in this folder is part of a package"
For an explanation of what goes in setup.py see here.
A basic setup.py can be as simple as this:
import setuptools

setuptools.setup(
    name="my_package",
    version="0.0.1",
    description="A small example package",
    packages=setuptools.find_packages(),
    python_requires='>=3.7',
)
Install it:
cd project_folder
pip install [-e] .
Including the optional -e flag will install my_package in "editable" mode, meaning that instead of copying the files into your virtual environment, a link is created pointing to the files where they are, so changes to the source are picked up without reinstalling.
Now in any notebook you can do:
import my_package
Or
from my_package.module01 import <some object>
I'm pip-installing my module like so:
cd my_working_dir
pip install -e .
When I later import the module from within Python, can I somehow detect if the module is installed in this editable mode?
Right now, I'm just checking if there's a .git folder in os.path.dirname(mymodule.__file__), which, well, only works if there's actually a .git folder there. Is there a more reliable way?
Another workaround:
Place an "not to install" file into your package. This can be a README.md, or a not_to_install.txt file. Use any non-pythonic extension, to prevent that file installation. Then check if that file exists in your package.
The suggested source structure:
my_repo
|-- setup.py
`-- awesome_package
    |-- __init__.py
    |-- not_to_install.txt
    `-- awesome_module.py
setup.py:
# setup.py
from setuptools import setup, find_packages

setup(
    name='awesome_package',
    version='1.0.0',
    # Without package_data/include_package_data, non-Python files inside
    # the package are not copied into a regular (non-editable) install.
    packages=find_packages(),
)
The __init__.py or the awesome_module.py:
import os

# The directory containing this file
__here__ = os.path.dirname(os.path.realpath(__file__))

# Place the following function into __init__.py or into awesome_module.py
def check_editable_installation():
    '''
    Returns true if the package was installed with the editable flag.
    '''
    not_to_install_exists = os.path.isfile(os.path.join(__here__, 'not_to_install.txt'))
    return not_to_install_exists
Run pip list, and there is a column called "Editable project location". If that column has a value (specifically, the directory from which you installed it), then the package is installed in editable mode.
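If you want that check in code, one option is to parse pip list --format=json (a sketch, assuming a pip recent enough to emit the editable_project_location field in its JSON output):

import json
import subprocess
import sys

def is_editable(package_name: str) -> bool:
    # Recent pip versions add an "editable_project_location" entry
    # for packages installed with -e.
    out = subprocess.check_output(
        [sys.executable, "-m", "pip", "list", "--format=json"]
    )
    for pkg in json.loads(out):
        if pkg["name"].lower() == package_name.lower():
            return "editable_project_location" in pkg
    return False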
I don't know of a way to detect this directly (e.g. ask setuptools).
You could try to detect that your package cannot be reached through the paths in sys.path. But that's tedious. It's also not bulletproof: what if it can be reached through sys.path but it's also installed as an editable package?
The best option is to look at the artifacts an editable install leaves in your site-packages folder. There's a file called my_package.egg-link there.
from pathlib import Path
import sysconfig

# one way to locate the site-packages folder
site_packages_folder = sysconfig.get_paths()["purelib"]

# assuming this current file is located in the root of your package
current_package_root = str(Path(__file__).parent.parent)

installed_as_editable = False
egg_link_file = Path(site_packages_folder) / "my_package.egg-link"
try:
    linked_folder = egg_link_file.read_text()
    installed_as_editable = current_package_root in linked_folder
except FileNotFoundError:
    installed_as_editable = False
Note: to make this a bit more bullet-proof, read only the first line of the egg-link file and parse it using Path() as well to account for proper slashes etc.
Recently I had to test whether various packages were installed in editable mode across different machines. Running pip show <package name> reveals not only the version, but other information, which includes the location of the source code. If the package was not installed in editable mode, this location will point to site-packages, so for my case it was sufficient to check the output of such a command:
import subprocess

def check_if_editable(name_of_the_package: str) -> bool:
    out = subprocess.check_output(["pip", "show", name_of_the_package]).decode()
    # A regular (non-editable) install is located in site-packages; an editable one is not.
    return "site-packages" not in out
Haven't found a definite source for this, but if the output of
$ pip show my-package -f
is something like this:
..
..
Files:
__editable__.my-package-0.0.5.pth
my-package-0.0.5.dist-info/INSTALLER
my-package-0.0.5.dist-info/METADATA
my-package-0.0.5.dist-info/RECORD
my-package-0.0.5.dist-info/REQUESTED
my-package-0.0.5.dist-info/WHEEL
my-package-0.0.5.dist-info/direct_url.json
my-package-0.0.5.dist-info/top_level.txt
then it's probably editable.
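A related check that avoids shelling out to pip reads the PEP 610 metadata (direct_url.json) via importlib.metadata (Python 3.8+). This is a sketch, and it only applies to installs performed by a pip new enough to write that file:

import json
from importlib.metadata import PackageNotFoundError, distribution

def installed_editable(dist_name: str) -> bool:
    # direct_url.json records how the distribution was installed;
    # editable installs carry {"dir_info": {"editable": true}}.
    try:
        dist = distribution(dist_name)
    except PackageNotFoundError:
        return False
    direct_url = dist.read_text("direct_url.json")
    if direct_url is None:
        return False
    return json.loads(direct_url).get("dir_info", {}).get("editable", False)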
How do I make setup.py include a file that isn't part of the code? (Specifically, it's a license file, but it could be any other thing.)
I want to be able to control the location of the file. In the original source folder, the file is in the root of the package. (i.e. on the same level as the topmost __init__.py.) I want it to stay exactly there when the package is installed, regardless of operating system. How do I do that?
Probably the best way to do this is to use the setuptools package_data directive. This does mean using setuptools (or distribute) instead of distutils, but this is a very seamless "upgrade".
Here's a full (but untested) example:
from setuptools import setup, find_packages

setup(
    name='your_project_name',
    version='0.1',
    description='A description.',
    packages=find_packages(exclude=['ez_setup', 'tests', 'tests.*']),
    package_data={'': ['license.txt']},
    include_package_data=True,
    install_requires=[],
)
Note the specific lines that are critical here:
package_data={'': ['license.txt']},
include_package_data=True,
package_data is a dict of package names (empty = all packages) to a list of patterns (can include globs). For example, if you want to only specify files within your package, you can do that too:
package_data={'yourpackage': ['*.txt', 'path/to/resources/*.txt']}
The solution here is definitely not to rename your non-py files with a .py extension.
See Ian Bicking's presentation for more info.
UPDATE: Another [Better] Approach
Another approach that works well if you just want to control the contents of the source distribution (sdist) and have files outside of the package (e.g. top-level directory) is to add a MANIFEST.in file. See the Python documentation for the format of this file.
Since writing this response, I have found that using MANIFEST.in is typically a less frustrating approach to just make sure your source distribution (tar.gz) has the files you need.
For example, to include requirements.txt from the top level and to recursively include the top-level "data" directory:
include requirements.txt
recursive-include data *
Nevertheless, in order for these files to be copied at install time to the package’s folder inside site-packages, you’ll need to supply include_package_data=True to the setup() function. See Adding Non-Code Files for more information.
To accomplish what you're describing will take two steps...
The file needs to be added to the source tarball
setup.py needs to be modified to install the data file to the source path
Step 1: To add the file to the source tarball, include it in the MANIFEST
Create a MANIFEST template in the folder that contains setup.py
The MANIFEST is basically a text file with a list of all the files that will be included in the source tarball.
Here's what the MANIFEST for my project looks like:
CHANGELOG.txt
INSTALL.txt
LICENSE.txt
pypreprocessor.py
README.txt
setup.py
test.py
TODO.txt
Note: While sdist does add some files automatically, I prefer to explicitly specify them to be sure instead of predicting what it does and doesn't.
Step 2: To install the data file to the source folder, modify setup.py
Since you're looking to add a data file (LICENSE.txt) to the source install folder you need to modify the data install path to match the source install path. This is necessary because, by default, data files are installed to a different location than source files.
To modify the data install dir to match the source install dir...
Pull the install dir info from distutils with:
from distutils.command.install import INSTALL_SCHEMES
Modify the data install dir to match the source install dir:
for scheme in INSTALL_SCHEMES.values():
    scheme['data'] = scheme['purelib']
And, add the data file and location to setup():
data_files=[('', ['LICENSE.txt'])]
Note: The steps above should accomplish exactly what you described in a standard manner without requiring any extension libraries.
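Putting those fragments together, a rough, untested sketch of the resulting setup.py (project name, version and module list are placeholders):

from distutils.command.install import INSTALL_SCHEMES
from setuptools import setup

# Install data files into the same directory as the source files.
for scheme in INSTALL_SCHEMES.values():
    scheme['data'] = scheme['purelib']

setup(
    name='pypreprocessor',                # placeholder
    version='0.1.0',                      # placeholder
    py_modules=['pypreprocessor'],        # single-module project from the MANIFEST above
    data_files=[('', ['LICENSE.txt'])],
)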
It is 2019, and here is what is working:
Despite advice here and there, what I found on the internet (and only half documented) is to use setuptools_scm, passed as an option to setuptools.setup. This will include any data files that are versioned in your VCS, be it git or any other, in the wheel package, and will make pip install from the git repository bring those files along.
So, I just added these two lines to the setup call in "setup.py". No extra installs or imports required:
setup_requires=['setuptools_scm'],
include_package_data=True,
No need to manually list package_data or maintain a MANIFEST.in file: if it is versioned, it is included in the package. The docs on setuptools_scm put emphasis on creating a version number from the commit position, and disregard the really important part of adding the data files. (I couldn't care less whether my intermediate wheel file is named "*0.2.2.dev45+g3495a1f" or uses the hardcoded version number "0.3.0dev0" I've typed in, but leaving behind files that are crucial for the program to work is somewhat important.)
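To make it concrete, a minimal sketch of such a setup.py (package name and version are placeholders; untested):

from setuptools import setup, find_packages

setup(
    name='mypackage',                   # placeholder
    version='0.3.0dev0',                # a hardcoded version is fine too
    packages=find_packages(),
    setup_requires=['setuptools_scm'],  # lets setuptools see VCS-tracked files
    include_package_data=True,          # and include them in the package
)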
Create MANIFEST.in in the project root, using recursive-include for the required directory or include with the file name.
include LICENSE
include README.rst
recursive-include package/static *
recursive-include package/templates *
documentation can be found here
Step 1: create a MANIFEST.in file in the same folder with setup.py
Step 2: include the relative path to the files you want to add in MANIFEST.in
include README.rst
include docs/*.txt
include funniest/data.json
Step 3: set include_package_data=True in the setup() function to copy these files to site-packages
Reference is here.
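A minimal setup.py fragment for step 3 might look like this (the name is chosen to match the example paths above; a sketch, not a complete file):

from setuptools import setup, find_packages

setup(
    name='funniest',             # matches the example MANIFEST.in above
    packages=find_packages(),
    include_package_data=True,   # step 3: copy the MANIFEST.in files into site-packages
)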
I wanted to post a comment on one of the questions but I don't have enough reputation to do that >.>
Here's what worked for me (came up with it after referring the docs):
package_data={
    'mypkg': ['../*.txt']
},
include_package_data=False,
The last line was, strangely enough, also crucial for me (you can also omit this keyword argument - it works the same).
What this does is it copies all text files in your top-level or root directory (one level up from the package mypkg you want to distribute).
None of the above really worked for me. What saved me was this answer.
Apparently, in order for these data files to be extracted during installation, I had to do a couple of things:
Like already mentioned - Add a MANIFEST.in to the project and specify the folder/files you want to be included. In my case: recursive-include folder_with_extra_stuff *
Again, like already mentioned - Add include_package_data=True to your setup.py. This is crucial, because without it only the files that match *.py will be included.
This is what was missing! - Add an empty __init__.py to your data folder. For me I had to add this file to my folder_with_extra_stuff.
Extra - Not sure if this is a requirement, but with my own python modules I saw that they're zipped inside the .egg file in site-packages. So I had to add zip_safe=False to my setup.py file.
Final Directory Structure
my-app/
├─ app/
│  ├─ __init__.py
│  ├─ __main__.py
├─ folder_with_extra_stuff/
│  ├─ __init__.py
│  ├─ data_file.json
├─ setup.py
├─ MANIFEST.in
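Tying those steps together, a sketch of the corresponding setup.py (metadata values are placeholders; untested):

from setuptools import setup, find_packages

setup(
    name='my-app',                 # placeholder metadata
    version='0.1.0',
    packages=find_packages(),      # picks up app/ and folder_with_extra_stuff/
                                   # because both contain an __init__.py
    include_package_data=True,     # step 2: pull in the MANIFEST.in files
    zip_safe=False,                # extra step: keep the install unzipped
)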
This works in 2020!
As others said, create a "MANIFEST.in" file where your setup.py is located.
Next, in the manifest, include/exclude all the necessary things. Be careful here regarding the syntax.
Ex: let's say we have a template folder to be included in the source package.
In the manifest file do this:
recursive-include template *
Make sure you leave a space between the directory name and the pattern for files/dirs, like above.
Don't do it the way you would in .gitignore:
recursive-include template/* [this won't work]
The other option is to use include. There are a bunch of options; look at their docs for MANIFEST.in.
And the final important step: include this param in your setup.py and you are good to go!
setup(
    ...
    include_package_data=True,
    ...
)
Hope that helps! Happy Coding!
In setup.py, under setup():
setup(
    name='foo library',
    ...
    package_data={
        'foolibrary.folderA': ['*'],      # All files from folder A
        'foolibrary.folderB': ['*.txt'],  # All text files from folder B
    },
)
Here is a simpler answer that worked for me.
First, per a Python Dev's comment above, setuptools is not required:
package_data is also available to pure distutils setup scripts since 2.3. – Éric Araujo
That's great because putting a setuptools requirement on your package means you will have to install it also. In short:
from distutils.core import setup

setup(
    # ...snip...
    packages=['pkgname'],
    package_data={'pkgname': ['license.txt']},
)
I just wanted to follow up on something I found while working with Python 2.7 on CentOS 6. Adding the package_data or data_files as mentioned above did not work for me. I added a MANIFEST.in with the files I wanted, which put the non-Python files into the tarball, but did not install them on the target machine via RPM.
In the end, I was able to get the files into my solution using the "options" argument to setup/setuptools. The options let you modify various sections of the spec file from setup.py, as follows.
from setuptools import setup

setup(
    name='theProjectName',
    version='1',
    packages=['thePackage'],
    url='',
    license='',
    author='me',
    author_email='me@email.com',
    description='',
    options={'bdist_rpm': {'install_script': 'filewithinstallcommands'}},
)
file - MANIFEST.in:
include license.txt
file - filewithinstallcommands:
mkdir -p $RPM_BUILD_ROOT/pathtoinstall/
#this line installs your python files
python setup.py install -O1 --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES
#install license.txt into /pathtoinstall folder
install -m 700 license.txt $RPM_BUILD_ROOT/pathtoinstall/
echo /pathtoinstall/license.txt >> INSTALLED_FILES
None of the answers worked for me because my files were at the top level, outside the package. I used a custom build command instead.
import os
import setuptools
from setuptools.command.build_py import build_py
from shutil import copyfile

HERE = os.path.abspath(os.path.dirname(__file__))
NAME = "thepackage"

class BuildCommand(build_py):
    def run(self):
        build_py.run(self)
        if not self.dry_run:
            target_dir = os.path.join(self.build_lib, NAME)
            for fn in ["VERSION", "LICENSE.txt"]:
                copyfile(os.path.join(HERE, fn), os.path.join(target_dir, fn))

setuptools.setup(
    name=NAME,
    cmdclass={"build_py": BuildCommand},
    description=DESCRIPTION,
    ...
)
For non-python files to be included in an installation, they must be within one of the installed package directories. If you specify non-python files outside of your package directories in MANIFEST.in, they will be included in your distribution, but they will not be installed. The "documented" ways of installing arbitrary files outside of your package directories do not work reliably (as everyone has noticed by now).
The above answer from Julian Mann copies the files to your package directory in the build directory, so it does work, but not if you are installing in editable/develop mode (pip install -e or python setup.py develop). Based on this answer to a related question (and Julian's answer), below is an example that copies files to your installed package location either way after all the other install/develop tasks are done. The assumption here is that files file1 and file2 in your root-level data directory will be copied to your installed package directory (my_package), and that they will be accessible from python modules in your package using os.path.join(os.path.dirname(__file__), 'file1'), etc.
Remember to also do the MANIFEST.in stuff described above so that these files are also included in your distribution. Why setuptools would include files in your distribution and then silently never install them, is beyond my ken. Though installing them outside of your package directory is probably even more dubious.
import os
from setuptools import setup
from setuptools.command.develop import develop
from setuptools.command.install import install
from shutil import copyfile

HERE = os.path.abspath(os.path.dirname(__file__))
NAME = 'my_package'

def copy_files(target_path):
    source_path = os.path.join(HERE, 'data')
    for fn in ["file1", "file2"]:
        copyfile(os.path.join(source_path, fn), os.path.join(target_path, fn))

class PostDevelopCommand(develop):
    """Post-installation for development mode."""
    def run(self):
        develop.run(self)
        copy_files(os.path.abspath(NAME))

class PostInstallCommand(install):
    """Post-installation for installation mode."""
    def run(self):
        install.run(self)
        copy_files(os.path.abspath(os.path.join(self.install_lib, NAME)))

setup(
    name=NAME,
    cmdclass={
        'develop': PostDevelopCommand,
        'install': PostInstallCommand,
    },
    version='0.1.0',
    packages=[NAME],
    include_package_data=True,
    setup_requires=['setuptools_scm'],
)
Figured out a workaround: I renamed my lgpl2.1_license.txt to lgpl2.1_license.txt.py, and put some triple quotes around the text. Now I don't need to use the data_files option nor to specify any absolute paths. Making it a Python module is ugly, I know, but I consider it less ugly than specifying absolute paths.